Civil War at OpenAI: Ousted CEO Altman Returns Amidst Battle for Future

After a tumultuous week in the tech industry following the abrupt firing of Sam Altman from OpenAI, the company has announced that he will be reinstated as CEO. Beyond the business lessons to be drawn from this debacle, the events speak to a broader debate about the future of AI and how to control its impact on humanity.

In 2015, a group of prominent tech figures, including Elon Musk, Sam Altman, and Greg Brockman, founded what was at the time a non-profit called “OpenAI.” Its mission was to develop artificial intelligence that would benefit humanity in a safe manner. The company has since released multiple successful AI products, including Dactyl, DALL-E, and most notably, ChatGPT.

OpenAI’s valuation has exploded to around $90 billion since the release of its groundbreaking ChatGPT product in 2022, which quickly became the fastest-growing consumer internet app of all time. Against the backdrop of this success, Microsoft pledged a $10 billion investment in OpenAI.

However, the company’s adherence to its mission statement has been hotly contested in light of some of its actions in the years since its founding. In 2019, OpenAI decided to become a for-profit company, still controlled by a non-profit board, in order to boost recruitment efforts.

Musk, who had resigned from the board in 2018 along with other members due to conflicts of interest with their other ventures, spoke out in an interview with CNBC, criticizing the move as a departure from the firm’s founding purpose.

The reason OpenAI exists at all is because I used to be really close friends with Larry Page and stay at his house in Palo Alto. We would talk late in the night about AI safety, and my impression was that Larry wasn’t taking AI safety seriously enough. He really seemed just focused on achieving digital super intelligence — essentially a Digital God, if you will, and as soon as possible. This is not good.

So I thought, ‘What is the furthest thing from Google?’ which would be a fully open non-profit. So the ‘open’ in OpenAI stands for open source and transparency so people know what is going on. I’m normally in favor of for-profit companies, but the idea was not to be a profit-maximizing demon from hell that never stops.

So that is why OpenAI was founded. Very unfortunately, they decided to become a for-profit company.
— Elon Musk

OpenAI CEO, Sam Altman, Testifying Before Congress

This criticism was shared by certain board members, as the events of the past few days demonstrated. By 2023, the company’s leadership had split into two conflicting “ideological camps.” One side, led by Altman, felt that growing the business and generating revenue should be the company’s top priority. The other, led by co-founder and Chief Scientist Ilya Sutskever, wanted the firm to focus on AI safety.

The ideological divide culminated in Sutskever taking drastic measures to remove his chief opposition: CEO Sam Altman. On Friday, Sutskever voted Greg Brockman, OpenAI President and a close friend of Altman, off the board, bringing the total number of board members down to five. With this smaller board, he then held a vote to fire Altman and had enough support to succeed. Sutskever did all of this without warning Altman or Brockman and without notifying Microsoft, OpenAI’s minority owner.

In the days that followed, Brockman resigned as President; Microsoft announced that it would hire Altman and Brockman to lead its “new advanced AI research team”; OpenAI’s largest investors called for Altman’s rehiring; and 700 of OpenAI’s 770 employees threatened to “jump ship” to Microsoft unless Altman was reinstated as CEO and the entire board resigned. In response to the harsh blowback, Sutskever posted an apology on X, saying he “never intended to harm OpenAI.”

The captivating, Succession-esque story ended with OpenAI announcing that Sam Altman would return as CEO and that a new board would be formed, with Quora founder Adam D’Angelo the only remaining member from the old one.

The OpenAI story contains valuable business lessons and raises important questions we must consider:

  • When a board is too small and ideologically homogenous, chaos ensues. Most companies have boards of seven or nine members with diverse viewpoints; these setups are popular because they are neither too large nor too small, and an odd number of seats helps prevent deadlocked votes. At OpenAI, many founding board members resigned over conflicts of interest and were never replaced, leaving behind a small and ineffective board. Why did Altman not replace the board when he had the chance?

  • Despite being one of its founders, Sam Altman held zero equity in OpenAI. As Valuetainment CEO Patrick Bet-David put it, “Who the hell in their right mind would think it’s a good idea to have the Founder and the CEO of the company not have any equity?” When the company turned for-profit in 2019, Altman declined any equity in the firm because he wanted to stay consistent with the company’s “philanthropic mission.” However, a CEO without equity faces a flawed incentive structure: in typical companies, the CEO is driven by financial incentives to make decisions in the firm’s best interests. Was Altman’s lack of equity a factor in his willingness to disagree with the board’s vision?

  • Even though Microsoft made a sizable investment in OpenAI, it offered to hire OpenAI’s leaders and 700 of its employees to start a competing AI effort of its own. Was Microsoft really willing to take a $10 billion loss on its investment in OpenAI for the chance to become the AI industry leader? The answer is likely yes. Microsoft is worth over $2 trillion, so the loss would not significantly impact it, and the opportunity to become the dominant player in AI is potentially worth trillions more than the investment it would be forfeiting.

  • CEOs across Silicon Valley should take a lesson from Altman’s outstanding leadership, as demonstrated by his employees’ loyalty to him. Even at a company immersed in AI, human relationships ended up deciding who would lead the way.

The debate over how to handle the rapid advancement of artificial intelligence and its impact on humanity is far from over. In an October speech to NYU Stern undergraduate and MBA students, Microsoft Vice Chair and President Brad Smith stressed the need for tech companies and governments to cooperate on a plan that balances the innovative and beneficial aspects of AI against its potentially harmful effects on humanity.