OpenAI's 'Superalignment' team, which researched how to control and ensure the safety of superintelligent AI, has been disbanded, with a former team lead saying 'flashy products are taking priority over safety'



It has been reported that the 'Superalignment' team, a department dedicated to AI safety research at OpenAI, has been effectively disbanded. The deciding factor was the departure of its two leaders, Ilya Sutskever and Jan Leike, with Leike posting on X (formerly Twitter) that safety is being neglected at OpenAI.

OpenAI researcher resigns, claiming safety has taken 'a backseat to shiny products' - The Verge
https://www.theverge.com/2024/5/17/24159095/openai-jan-leike-superalignment-sam-altman-ai-safety

What happened to OpenAI's long-term AI risk team? | Ars Technica
https://arstechnica.com/ai/2024/05/what-happened-to-openais-long-term-ai-risk-team/

Wired reported that OpenAI has confirmed that, with the departure of Sutskever and Leike, the Superalignment team will be disbanded and its research absorbed into other research activities at OpenAI.

The Superalignment team was established in July 2023 and had been researching how to control AI in preparation for the development of a 'superintelligence' that would surpass human intelligence. For details on the specifics of that research, see the following article.

A research team preparing for the birth of OpenAI's 'superintelligence' details how a weak AI model like GPT-2 can control a strong AI like GPT-4 - GIGAZINE



On May 15, 2024, OpenAI's chief scientist and co-lead of the Superalignment team, Ilya Sutskever, announced on X that he would be leaving OpenAI. On the same day, Jan Leike, who co-led the Superalignment team with Sutskever, also announced his resignation.

OpenAI's chief scientist, Ilya Sutskever, who led the dismissal of CEO Sam Altman, resigns - GIGAZINE



'I am confident that under the leadership of CEO Sam Altman, President Greg Brockman, CTO Mira Murati, and the excellent research leadership of new chief scientist Jakub Pachocki, OpenAI will build AGI (artificial general intelligence) that is both safe and beneficial,' Sutskever wrote in a post on X.



Meanwhile, Leike said in a post on X that he left because 'for quite some time I had been disagreeing with OpenAI's leadership about the company's core priorities, until we finally reached a breaking point.'



'Over the past years, safety culture and processes have taken a backseat to shiny products,' Leike wrote, adding that 'OpenAI must become a safety-first AGI company.'



In response to Leike's posts, CEO Altman said, 'I am incredibly grateful for Leike's contributions to OpenAI's alignment research and culture of safety, and I am very sad to see him go. He is right: we have much more work to do, and we are committed to getting it done.'



'In light of the questions raised by Leike's departure, we would like to provide a brief overview of our overall strategy,' Brockman said in a joint statement with Altman. 'We will continue to conduct safety research across a range of timescales, and we will continue to collaborate with governments and many other stakeholders on safety.'



Sutskever and Leike are not the only ones who have left OpenAI. According to the IT news site The Information, OpenAI fired Leopold Aschenbrenner and Pavel Izmailov, researchers on the Superalignment team, in April 2024 for allegedly leaking trade secrets. William Saunders, another member of the Superalignment team, posted on an Internet forum that he had left OpenAI. In addition, Cullen O'Keefe and Daniel Kokotajlo, who researched AI policy and governance at OpenAI, have also left the company, with Kokotajlo commenting that he left because he was 'no longer confident that OpenAI would act responsibly in the age of AGI.'

Although the Superalignment team has been disbanded, OpenAI still has a 'Preparedness' team that addresses catastrophic risks posed by AI, such as privacy violations, emotional manipulation, and cybersecurity threats.

OpenAI, the developer of ChatGPT, forms a special team to analyze 'catastrophic risks of AI' and protect humanity - GIGAZINE
