OpenAI's 'Superalignment' team, which researched how to control and ensure the safety of superintelligence, has been disbanded, with a departing team lead saying safety has taken 'a backseat to shiny products'

It has been reported that the 'Superalignment' team, a department dedicated to AI safety research at OpenAI, has effectively been disbanded. The deciding factor was the departure of the team's two leads, Ilya Sutskever and Jan Leike, with Leike posting on X (formerly Twitter) that safety is being neglected at OpenAI.
OpenAI researcher resigns, claiming safety has taken 'a backseat to shiny products' - The Verge
https://www.theverge.com/2024/5/17/24159095/openai-jan-leike-superalignment-sam-altman-ai-safety
What happened to OpenAI's long-term AI risk team? | Ars Technica
https://arstechnica.com/ai/2024/05/what-happened-to-openais-long-term-ai-risk-team/
Wired reported that OpenAI has confirmed that, with the departure of Sutskever and Leike, the Superalignment team will be disbanded and its research absorbed into other OpenAI research activities.
The Superalignment team was established in July 2023 and had been researching how to control AI in preparation for the development of a 'superintelligence' that would surpass human intelligence. For details on that research, please read the following article.
A research team preparing for the birth of OpenAI's 'super intelligence' details how a weak AI model like GPT-2 can control a strong AI like GPT-4 - GIGAZINE

On May 15, 2024, OpenAI chief scientist and Superalignment team co-lead Ilya Sutskever announced on X that he would be leaving OpenAI. On the same day, Jan Leike, who co-led the Superalignment team with Sutskever, also announced his departure.
OpenAI's chief scientist, Ilya Sutskever, who led the dismissal of CEO Sam Altman, resigns - GIGAZINE

'I am confident that OpenAI will build AGI that is both safe and beneficial under the leadership of CEO Sam Altman, President Greg Brockman, CTO Mira Murati, and the excellent research leadership of new chief scientist Jakub Pachocki,' Sutskever wrote in a post on X.
After almost a decade, I have made the decision to leave OpenAI. The company's trajectory has been nothing short of miraculous, and I'm confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama , @gdb , @miramurati and now, under the…
— Ilya Sutskever (@ilyasut) May 14, 2024
Meanwhile, Leike posted on X that he left because he had 'been disagreeing with OpenAI leadership about the company's core priorities for quite some time' and had finally reached a breaking point.
I joined because I thought OpenAI would be the best place in the world to do this research.
— Jan Leike (@janleike) May 17, 2024
However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point.
'Over the past years, safety culture and processes have taken a backseat to shiny products,' Leike wrote, adding that 'OpenAI must become a safety-first AGI company.'
But over the past years, safety culture and processes have taken a backseat to shiny products.
— Jan Leike (@janleike) May 17, 2024
In response to Leike's departure, CEO Altman said, 'I am incredibly grateful for Leike's contributions to OpenAI's alignment research and safety culture, and I am very sad to see him go. He is right: we have much more work to do, and we are committed to getting it done.'
I'm super appreciative of @janleike 's contributions to OpenAI's alignment research and safety culture, and very sad to see him leave. He's right, we have a lot more to do; we are committed to doing it. I'll have a longer post in the next couple of days.
— Sam Altman (@sama) May 17, 2024
'In light of the questions raised by Leike's departure, I would like to provide a brief overview of our overall strategy,' Brockman said in a joint statement with Altman. 'We will continue to pursue safety research across a range of timescales, and we will continue to collaborate with governments and many other stakeholders on safety.'
We're really grateful to Jan for everything he's done for OpenAI, and we know he'll continue to contribute to the mission from outside. In light of the questions his departure has raised, we wanted to explain a bit about how we think about our overall strategy.
— Greg Brockman (@gdb) May 18, 2024
Sutskever and Leike are not the only ones to have left OpenAI. According to the IT news site The Information, OpenAI fired Leopold Aschenbrenner and Pavel Izmailov, researchers on the Superalignment team, in April 2024 for allegedly leaking trade secrets. William Saunders, another member of the Superalignment team, posted on an internet forum that he had left OpenAI. In addition, Cullen O'Keefe and Daniel Kokotajlo, who researched AI policy and governance at OpenAI, have also left the company, with Kokotajlo commenting that he left because he was 'no longer confident that OpenAI would act responsibly in the age of AGI.'
Although the Superalignment team has been disbanded, OpenAI still has a 'Preparedness' team that addresses catastrophic risks posed by AI, including ethical issues such as privacy violations and emotional manipulation, as well as cybersecurity risks.
OpenAI, the developer of ChatGPT, forms a special team to analyze 'catastrophic risks of AI' and protect humanity - GIGAZINE
