OpenAI's unusual nonprofit structure led to dramatic ouster of sought-after CEO

The OpenAI logo appears on a mobile phone in front of a screen showing a portion of the company website in this photo taken on Tuesday, Nov. 21, 2023 in New York. (AP Photo/Peter Morgan)

SAN FRANCISCO – Unlike Google, Facebook and other tech giants, the company behind ChatGPT was not created to be a business. It was set up as a nonprofit by founders who hoped that it wouldn’t be beholden to commercial interests.

But the arrangement got complicated.

While OpenAI later transitioned to a for-profit model, it remains controlled by the nonprofit OpenAI Inc. and its board of directors. This unique structure made it possible for four OpenAI board members — the company's chief scientist, two outside tech entrepreneurs and an academic — to oust CEO Sam Altman on Friday.

The abrupt removal of one of the world's most sought-after AI experts led to an employee revolt that has put the entire organization’s future in jeopardy and underscored the unusual arrangement that sets OpenAI apart from other tech enterprises.

It's exceedingly rare for major tech companies to have such a structure.

Facebook parent Meta, as well as Google and others, are essentially set up the opposite way — giving founders ultimate control over the company and the board of directors through a special class of voting shares not available to the masses. The idea comes from Berkshire Hathaway, which has two classes of stock so the company and its leaders are not beholden to investors seeking short-term profit.

OpenAI’s stated mission is to safely build artificial intelligence that is “generally smarter than humans.” Debates have swirled around that goal and whether it conflicts with the company’s increasing commercial success.

“What was revealed with this board structure is they just idealistically thought, well, we’re aligned, and we all want the same thing. And it won’t become a problem because we’re going to stay aligned,” said Sarah Kreps, director of Cornell University’s Tech Policy Institute.

As AI technology accelerated over the past year with new investment pouring in, Kreps said, “I think that’s where these issues erupted.”

The board has refused to give specific reasons for firing Altman, who was quickly hired Monday by Microsoft Corp., which has invested billions in OpenAI. Microsoft also hired OpenAI President Greg Brockman, who resigned in protest after Altman was fired, along with at least three others.

In addition, Microsoft has extended job offers to all of OpenAI's 770 employees. If enough employees accept Microsoft’s offer or join rivals now openly recruiting them, OpenAI could all but disappear without a workforce. Much of its existing technology will remain with Microsoft, which holds an exclusive license to use it.

When OpenAI announced that Altman had been removed, it released a vague statement saying a review found that he was “not consistently candid in his communications” with the board, which had lost confidence in his ability to lead the company.

The statement did not give details or examples of Altman's alleged lack of candor. The company said his behavior hindered the board’s ability to exercise its responsibilities.

Kreps said the board, which “seems to be associated with the safer, more cautious approach" to AI, did itself a disservice with Altman's firing. It alienated the bulk of the company's workforce and acted in such a way that "there is no company left to implement a pro-safety philosophy.”

After a dramatic weekend that saw one interim CEO replaced by a second interim CEO, OpenAI board member Ilya Sutskever, a key driver of the shakeup, expressed regret for his participation in the ouster.

“I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company,” he posted Monday on X, formerly known as Twitter.

Until Friday, OpenAI had six board members. Now the board consists of Sutskever, OpenAI co-founder and chief scientist; Adam D’Angelo, CEO of the question-and-answer site Quora; tech entrepreneur Tasha McCauley; and Helen Toner of the Georgetown Center for Security and Emerging Technology.

The board was larger earlier this year.

Those who departed the board were LinkedIn founder and investor Reid Hoffman, who co-founded another AI company last year; former Republican U.S. Rep. Will Hurd of Texas, who was briefly a 2024 presidential candidate; Neuralink executive Shivon Zilis; and Brockman, who left in the wake of Altman's dismissal.

When it was founded, OpenAI’s original board co-chairs were Altman and Tesla CEO Elon Musk.

The board might not have found itself straddling the tensions between its nonprofit structure and the company’s for-profit arm if not for a pivotal falling out in 2018 involving Altman and Musk.

Musk abruptly bolted from OpenAI, ostensibly because of a potential conflict of interest between the fledgling startup and Tesla, the electric automaker responsible for most of his personal fortune, now valued at more than $240 billion.

Earlier this year, Musk tweeted his concern that Microsoft was leading OpenAI astray in a quest for ever higher profits. Musk recently launched his own AI startup, xAI, to compete with OpenAI, Microsoft and Google, among others.

OpenAI's board members have not responded to requests for comment. Of the four who remain, one of the better-known members is D’Angelo, an early Facebook employee who co-founded Quora in 2009 and remains its CEO.

D’Angelo first joined the OpenAI board in 2018, tweeting at the time: “I continue to think that work toward general AI (with safety in mind) is both important and underappreciated, and I’m happy to contribute to it.”

He has publicly weighed in on the possibility of AI that surpasses humans as recently as Nov. 6, when he questioned the conclusions of a Google research paper presenting evidence that current AI systems cannot generalize beyond their training data, suggesting their abilities are more limited than some scientists thought.

D’Angelo posted a few months earlier that artificial general intelligence “will probably be the most important event in the history of the world, and it will happen in our lifetimes.”

___

Associated Press Technology Writers Matt O'Brien in Providence, Rhode Island, and Michael Liedtke in San Francisco contributed to this story.

