OpenAI has effectively disbanded a team focused on ensuring the safety of possible future ultra-capable artificial intelligence systems, following the departure of the group’s two leaders, including OpenAI co-founder and chief scientist Ilya Sutskever.

Instead of maintaining the so-called Superalignment team as a standalone entity, OpenAI is now integrating the group more deeply into its research efforts to help the company meet its safety goals, the company told Bloomberg News. The team was formed less than a year ago under the leadership of Sutskever and Jan Leike, another OpenAI veteran.

The decision to rethink the team comes as a series of recent departures from OpenAI has revived questions about the company’s approach to balancing speed versus safety in the development of its AI products. Sutskever, a widely respected researcher, announced on Tuesday that he was leaving OpenAI after previously clashing with CEO Sam Altman over how quickly to develop artificial intelligence.

Leike announced his own departure shortly afterward with a terse post on social media. “I resigned,” he wrote. For Leike, Sutskever’s departure was the last straw after a falling-out with the company, according to a person familiar with the situation who spoke on condition of anonymity to discuss private conversations.

In a statement Friday, Leike said the Superalignment team had been struggling for resources. “For the past few months, my team has been sailing against the wind,” Leike wrote on X. “Sometimes we were struggling for compute, and it was getting harder and harder to get this crucial research done.”

Hours later, Altman responded to Leike’s post. “He’s right, we still have a lot of work to do,” Altman wrote on X. “We’re committed to doing it.”

Other members of the Superalignment team have also left the company in recent months. Leopold Aschenbrenner and Pavel Izmailov were dismissed from OpenAI; The Information previously reported their departures. Izmailov had been moved off the team before his departure, according to a person familiar with the matter. Aschenbrenner and Izmailov did not respond to requests for comment.

John Schulman, an OpenAI co-founder whose research centers on large language models, will be the scientific lead for OpenAI’s future alignment work, the company said. Separately, OpenAI said in a blog post that it has appointed research director Jakub Pachocki to take over Sutskever’s role as chief scientist.

“I am very confident that he will lead us to make rapid and safe progress toward our mission of ensuring that AGI benefits everyone,” Altman said in a statement Tuesday about Pachocki’s appointment. AGI, or artificial general intelligence, refers to AI that can perform as well as or better than humans at most tasks. AGI doesn’t exist yet, but its creation is part of the company’s mission.

OpenAI also has employees working on AI safety across the company, as well as separate safety-focused teams. One, a preparedness team, was launched last October and focuses on analyzing and trying to prevent potential “catastrophic risks” from AI systems.

The Superalignment team aimed to head off the longest-term threats. OpenAI announced the formation of the Superalignment team last July, saying it would focus on how to control and ensure the safety of future AI software that is smarter than humans, something the company has long stated as a technology goal. In the announcement, OpenAI said it would commit 20% of its computing power at the time to the team’s work.

In November, Sutskever was one of several OpenAI board members who decided to fire Altman, a decision that caused a whirlwind at the company for five days. OpenAI president Greg Brockman quit in protest, investors revolted, and within days nearly all of the startup’s roughly 770 employees signed a letter threatening to quit unless Altman was brought back. In a remarkable twist, Sutskever also signed the letter and said he regretted his role in Altman’s removal. Altman was reinstated soon after.

In the months since Altman’s ouster and return, Sutskever has largely disappeared from the public eye, prompting speculation about his continued role at the company. Sutskever also stopped working from OpenAI’s San Francisco office, according to a person familiar with the matter.

In his statement, Leike said his departure came after a series of disagreements with OpenAI over the company’s “core priorities,” which he said were not focused enough on safety measures related to creating AI that could be more capable than humans.

In a post earlier this week announcing his departure, Sutskever said he was “confident” that OpenAI would develop AGI “that is both safe and useful” under its current leadership, including Altman.

© 2024 Bloomberg LP

