OpenAI has disbanded its team focused on the long-term risks of artificial intelligence, just a year after the company announced the group, a person familiar with the situation confirmed to CNBC on Friday.

The person, who spoke on condition of anonymity, said some of the team members have been reassigned to multiple other teams within the company.

The news comes days after both of the team's leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the Microsoft-backed startup. Leike wrote on Friday that “OpenAI’s safety culture and processes have taken a backseat to shiny products.”

The OpenAI Superalignment Team, announced last year, focused on “scientific and technical breakthroughs for managing and controlling AI systems far smarter than us.” At the time, OpenAI said it would dedicate 20 percent of its computing power to the initiative over four years.

OpenAI did not provide comment and instead referred CNBC to co-founder and CEO Sam Altman’s recent post on X, in which he said he was sad to see Leike leave and that the company still has work to do.

News of the team’s disbandment was first reported by Wired.

Sutskever and Leike announced their departures hours apart on Tuesday on the social media platform X, but on Friday Leike shared more details about why he left the startup.

“I joined because I thought OpenAI would be the best place in the world to do this research,” Leike wrote on X. “However, I have disagreed with OpenAI’s leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

Leike wrote that he believes much more of the company’s bandwidth should be focused on security, monitoring, preparedness, safety and societal impact.

“These problems are quite difficult to fix, and I am concerned that we are not on a trajectory to get there,” he wrote. “For the past few months, my team has been sailing against the wind. Sometimes we fought for [computing resources] and it became harder and harder to do this all-important research.”

Leike added that OpenAI needs to become a “safety-first AGI company.”

“Building smarter-than-human machines is an inherently dangerous endeavor,” he wrote. “OpenAI assumes a huge responsibility on behalf of all humanity. But in recent years, safety culture and processes have taken a back seat to shiny products.”

Leike did not immediately respond to a request for comment.

The high-profile departures come months after OpenAI went through a leadership crisis involving Altman.

In November, OpenAI’s board ousted Altman, saying in a statement that Altman had not been “consistently forthright in his communications with the board.”

The matter seemed to grow more complicated by the day, with The Wall Street Journal and other media outlets reporting that Sutskever had focused on ensuring that AI would not harm humans, while others, including Altman, were more eager to push ahead with delivering new technology.

Altman’s ouster prompted resignations or threats of resignations, including an open letter signed by nearly all OpenAI employees, and an uproar from investors, including Microsoft. Within a week, Altman was back at the company, and board members Helen Toner, Tasha McCauley and Ilya Sutskever, who had voted to remove Altman, left the board. Sutskever remained on staff at the time, but no longer as a board member. Adam D’Angelo, who had also voted to remove Altman, remained on the board.

When Altman was asked about Sutskever’s status on a Zoom call with reporters in March, he said he had no updates to share. “I love Ilya … I hope we work together for the rest of our careers, my career, whatever,” Altman said. “Nothing to announce today.”

On Tuesday, Altman shared his thoughts on Sutskever’s departure.

“It is very sad for me; Ilya is one of the greatest minds of our generation, a guiding light in our field, and a dear friend,” Altman wrote on X. “His brilliance and vision are well known; his warmth and compassion are less well known, but no less important.” Altman said research director Jakub Pachocki, who has been with OpenAI since 2017, will replace Sutskever as chief scientist.

The news of Sutskever and Leike’s departure and the disbandment of the superalignment team comes days after OpenAI released a new AI model and desktop version of ChatGPT, along with an updated user interface, the company’s latest effort to expand the use of its popular chatbot.

The update makes the GPT-4 model available to everyone, including OpenAI’s free users, CTO Mira Murati said Monday during a livestreamed event. She added that the new GPT-4o model is “much faster,” with improved text, video and audio capabilities.

OpenAI said it plans to eventually allow users to video chat with ChatGPT. “This is the first time we’ve really taken a huge step forward when it comes to ease of use,” Murati said.



https://www.cnbc.com/2024/05/17/openai-superalignment-sutskever-leike.html