Ilya Sutskever, one of OpenAI’s co-founders, has launched a new company, Safe Superintelligence Inc. (SSI), just one month after formally leaving OpenAI, TechCrunch reported.
Sutskever, who was OpenAI’s longtime chief scientist, founded SSI with former Y Combinator partner Daniel Gross and ex-OpenAI engineer Daniel Levy.
At OpenAI, Sutskever was integral to the company’s efforts to improve AI safety amid the rise of “superintelligent” AI systems, an area he worked on alongside Jan Leike, who co-led OpenAI’s Superalignment team. But both Sutskever and Leike left the company dramatically in May after falling out with leadership at OpenAI over how to approach AI safety. Leike now heads a team at rival AI shop Anthropic.
Sutskever has been shining a light on the thornier aspects of AI safety for a long time now. In a blog post published in 2023, Sutskever, writing with Leike, predicted that AI with intelligence exceeding that of humans could arrive within the decade — and that when it does, it won’t necessarily be benevolent, necessitating research into ways to control and restrict it.
Iyla Sutskiever, Daniel Gross, and Daniel Levy posted the following on June 19, 2024:
Safe Superintelligence Inc.
Building safe superintelligence (SSI) is the most important technical problem of our time.
We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.
It is called Safe Superintelligence Inc.
SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI.
We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.
This way, we can scale in peace.
Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.
We are an American company with offices in Palo Alto and Tel Aviv, where we have deep roots and the ability to recruit top technical talent.
We are assembling a lean, cracked team of the world’s best engineers and researchers dedicated to focusing on SSI and nothing else.
If that’s you, we offer an opportunity to do your life’s work and help solve the most important technical challenge of our age.
Now is the time. Join us.
The Verge reported: Last year, Sutskever led the push to oust OpenAI CEO Sam Altman. Sutskever left OpenAI in May and hinted at the start of a new project. Shortly after Sutskever’s departure, AI researcher Jan Leike announced his resignation from OpenAI, citing safety processes that had “taken a backseat to shiny products.” Gretchen Krueger, a policy researcher at OpenAI, also mentioned safety concerns when announcing her departure.
In my opinion, it sounds like some of the people who were working at OpenAI have decided to create an entirely new company. Whether that decision was made out of frustration, or because they wanted to branch out on their own, it appears they are looking for other people to join them.