"Lies" and "psychological abuse": former OpenAI board members' reasons behind Sam Altman's firing and return

midian182

Recap: The tech world was shaken last November following OpenAI CEO Sam Altman's unceremonious firing. The official reasons for the move were vague, and Altman was reinstated as head of the firm a few days later. Now, former board members have given more details on what happened and why, including accusations that Altman cultivated "a toxic culture of lying" and engaged in "behavior [that] can be characterized as psychological abuse."

OpenAI said that Altman was removed as CEO last year following a review by the board of directors, which concluded it no longer had confidence in his ability to lead the company because he was not "consistently candid in his communications with the board."

Days after the firing, Microsoft CEO Satya Nadella said Altman was joining the Windows maker to lead a new AI team. Two days after that announcement, Altman was back at OpenAI under a new board.

Helen Toner and Tasha McCauley, two of the board members who helped oust Altman and later left the board, described a toxic environment at OpenAI, accusing Altman of "psychological abuse" and "lying and being manipulative in different situations."

In an interview on The TED AI Show podcast, Toner said the board felt they needed to go behind Altman's back once they decided a new CEO was required.

"It was very clear to all of us that as soon as Sam had any inkling that we might do something that went against him, he would pull out all the stops, do everything in his power to undermine the board, to prevent us from even getting to the point of being able to fire him," Toner said.

Toner said that the board lost its trust in Altman after he failed to tell them that he owned the OpenAI Startup Fund. He also provided inaccurate information about the company's safety processes "on multiple occasions." Toner added that she was singled out by Altman because she published a research paper he did not like. "Sam started lying to other board members in order to try and push me off the board," she said.

Toner also gave an insight into how the OpenAI board was often left in the dark about what was happening at the company. "When ChatGPT came out in November 2022, the board was not informed in advance. We learned about ChatGPT on Twitter."

The outcry over Altman's ousting led investors to threaten to sue the company. Some employees, including one of the board members who fired him, called for his return. Toner believes some workers feared for OpenAI's future without Altman, while others feared his retaliation.

Toner said Altman has a history of problematic behavior, including at his previous job at Y Combinator, which fired him, and at his startup Loopt, where the management team reportedly asked the board to fire him twice over his deceptive and chaotic conduct.

Both Toner and McCauley say that the pressure to keep increasing profits meant OpenAI's self-regulation was unenforceable. They called for government regulation of the AI industry.

Jan Leike, a senior OpenAI safety researcher, resigned from the company this month, claiming that "safety culture and processes have taken a backseat to shiny products."

OpenAI board chair Bret Taylor responded to the podcast with the following statement:

We are disappointed that Ms. Toner continues to revisit these issues. An independent committee of the board worked with the law firm WilmerHale to conduct an extensive review of the events of November. The review concluded that the prior board's decision was not based on concerns regarding product safety or security, the pace of development, OpenAI's finances, or its statements to investors, customers, or business partners. Additionally, over 95 percent of employees, including senior leadership, asked for Sam's reinstatement as CEO and the resignation of the prior board. Our focus remains on moving forward and pursuing OpenAI's mission to ensure AGI benefits all of humanity.


 
While we’ll probably never get the full truth, it’s hard to take as gospel the opinions of someone so obviously biased against Altman.
 
I'm confused. According to Musk and a few others, OpenAI is (or was) a not-for-profit organization. Shareholders? Microsoft? Sounds like Elon was right: it went from non-profit to profit at all costs, with enough cheerleaders to let Altman do as he pleases.
 

It was once a non-profit organisation with funding from the public, but it has been a "capped-profit" company since about 2019.
 