A former senior employee at OpenAI has said the company behind ChatGPT is prioritizing “shiny products” over safety, revealing that he quit after a disagreement over key aims reached a “breaking point,” The Guardian reported.
Jan Leike was a key safety figure at OpenAI, serving as its co-head of superalignment and helping to ensure that powerful artificial intelligence systems adhered to human values and aims. His intervention comes before a global artificial intelligence summit in Seoul next week, where politicians, experts and tech executives will discuss oversight of the technology.
Leike resigned days after the San Francisco-based company launched its latest AI model, GPT-4o. His departure means two senior safety figures at OpenAI have left this week, following the resignation of Ilya Sutskever, OpenAI’s co-founder and fellow co-head of superalignment.
Leike detailed the reasons for his departure in a thread on X posted on Friday, in which he said safety culture had become a lower priority.
“Over the past years, safety culture and processes have taken a backseat to shiny products,” he wrote.
OpenAI was founded with the goal of ensuring that artificial general intelligence, which it describes as “AI systems that are generally smarter than humans”, benefits all of humanity. In his X posts, Leike said he had been disagreeing with OpenAI’s leadership about the company’s priorities for some time, but that the standoff had “finally reached a breaking point.”
Engadget reported that in the summer of 2023, OpenAI created a “Superalignment” team whose goal was to steer and control future AI systems that could be so powerful they could lead to human extinction. Less than a year later, that team is dead.
OpenAI told Bloomberg that the company was “integrating the group more deeply across its research efforts to help the company achieve its safety goals.” But a series of tweets by Jan Leike, one of the team’s leaders who recently quit, revealed internal tensions between the safety team and the larger company.
In a statement posted on X on Friday, Leike said that the Superalignment team had been fighting for resources to get research done. “Building smarter-than-human machines is an inherently dangerous endeavor,” Leike wrote. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.”
Leike’s departure earlier this week came hours after OpenAI chief scientist Ilya Sutskever announced that he was leaving the company. Sutskever was not only one of the leads on the Superalignment team, but also helped co-found the company.
CNBC reported that OpenAI has disbanded its team focused on the long-term risks of artificial intelligence just one year after the company announced the group, a person familiar with the situation confirmed to CNBC on Friday.
OpenAI did not provide a comment and instead directed CNBC to co-founder and CEO Sam Altman’s recent post on X, where he shared that he was sad to see Leike leave and that the company had more work to do.
In my opinion, it sounds like some of those who worked for OpenAI are dissatisfied with how things are going. This appears to be why some have left. One cannot run a company simply by focusing on “shiny products.”