OpenAI is forming a new safety team, and it’s led by CEO Sam Altman, along with board members Adam D’Angelo and Nicole Seligman. The committee will make recommendations on “critical safety and security decisions for OpenAI projects and operations” — a concern several key AI researchers shared when leaving the company this month, The Verge reported.
For its first task, the new team will “evaluate and further develop OpenAI’s processes and safeguards.” It will then present its findings to OpenAI’s board, which all three of the safety team’s leaders have a seat on. The board will then decide how to implement the safety team’s recommendations.
Its formation follows the departure of OpenAI co-founder and chief scientist Ilya Sutskever, who supported the board’s attempted coup to dethrone Altman last year. He also co-led OpenAI’s Superalignment team, which was created to “steer and control AI systems much smarter than us.”
The OpenAI Board posted the following:
Today, the OpenAI Board formed a Safety and Security Committee led by directors Bret Taylor (Chair), Adam D’Angelo, Nicole Seligman, and Sam Altman (CEO). This committee will be responsible for making recommendations to the full Board on critical safety and security decisions for OpenAI projects and operations.
OpenAI has recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI. While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment.
A first task of the Safety and Security Committee will be to evaluate and further develop OpenAI’s processes and safeguards over the next 90 days. At the conclusion of the 90 days, the Safety and Security Committee will share their recommendations with the full Board. Following the Board’s review, OpenAI will publicly share an update on adopted recommendations in a manner that is consistent with safety and security….
OpenAI says it’s training the next frontier model, according to a press release on Tuesday, and anticipates it will bring the startup one step closer to artificial intelligence systems that are generally smarter than humans. The company also announced a new Safety and Security Committee to guide critical safety and security decisions, led by CEO Sam Altman and other OpenAI board members, Gizmodo reported.
“While we are proud to build and release models that are industry-leading on both capabilities and safety,” OpenAI said in a press release, “we welcome a robust debate at this important moment.”
The announcement follows a tumultuous month for OpenAI, in which a group led by Ilya Sutskever and Jan Leike that researched AI risks existential to humanity was disbanded. Former OpenAI board members Helen Toner and Tasha McCauley wrote in The Economist on Sunday that these developments and others following Altman’s return “bode ill for the OpenAI experiment in self-governance.”
In my opinion, companies like OpenAI appear to be pushing boundaries to see what their AI can do. This, unfortunately, includes the company allegedly using a voice resembling Scarlett Johansson’s without her permission.