In the summer of 2023, OpenAI created a “Superalignment” team whose goal was to steer and control future artificial intelligence systems so powerful that they could cause human extinction. Less than a year later, that team is dead.

OpenAI told Bloomberg that the company is “integrating the group more deeply into its research efforts to help the company achieve its safety goals.” But a series of tweets from Jan Leike, one of the team’s leaders who recently departed, revealed internal tensions between the safety team and the larger company.

In a statement published on X on Friday, Leike said the Superalignment team had struggled for the resources it needed to do its research. “Building smarter-than-human machines is an inherently dangerous endeavor,” Leike wrote. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.” OpenAI did not immediately respond to a request for comment from Engadget.

Leike’s departure earlier this week came hours after OpenAI Chief Scientist Ilya Sutskever announced he was leaving the company. Sutskever was not only one of the leaders of the Superalignment team but also a co-founder of OpenAI. His move came six months after he was involved in the decision to fire CEO Sam Altman over concerns that Altman had not been “consistently candid” with the board. Altman’s all-too-brief ouster sparked an internal revolt at the company, with nearly 800 employees signing a letter threatening to quit if Altman was not reinstated. Five days later, Altman was back as OpenAI’s CEO, and Sutskever had signed a letter saying he regretted his actions.

When it announced the creation of the Superalignment team, OpenAI said it would dedicate 20 percent of its computing power over the next four years to solving the problem of controlling the powerful AI systems of the future. “[Getting] this right is critical to achieving our mission,” the company wrote at the time. On X, Leike wrote that the Superalignment team had been “struggling for compute and it was getting harder and harder” to do crucial research on AI safety. “Over the past few months my team has been sailing against the wind,” he wrote, adding that he had reached a “breaking point” with OpenAI’s leadership over disagreements about the company’s core priorities.

There have been more departures from the Superalignment team over the past few months. In April, OpenAI reportedly fired two researchers, Leopold Aschenbrenner and Pavel Izmailov, for allegedly leaking information.

OpenAI told Bloomberg that its future safety efforts will be led by John Schulman, another co-founder, whose research focuses on large language models. Jakub Pachocki, a director who led the development of GPT-4 – one of OpenAI’s flagship large language models – will replace Sutskever as chief scientist.

Superalignment wasn’t the only team at OpenAI focused on AI safety. In October, the company started a new “preparedness” team to stem potential “catastrophic risks” from AI systems, including cybersecurity issues and chemical, nuclear and biological threats.

Update, May 17, 2024, 3:28 PM ET: In response to a request for comment on Leike’s claims, an OpenAI spokesperson pointed Engadget to a tweet from Sam Altman saying he would say something in the next couple of days.

This article contains affiliate links; if you click on such a link and make a purchase, we may earn a commission.