OpenAI disbands Superalignment AI safety team

OpenAI has disbanded its team focused on the long-term risks of artificial intelligence just a year after the company announced the group, a person familiar with the situation confirmed to CNBC on Friday.

The person, who spoke on condition of anonymity, said some of the team members are being reassigned to other teams within the company.

The news comes days after both team leaders, OpenAI co-founders Ilya Sutskever and Jan Leike, announced their departure from the Microsoft-backed startup. Leike wrote Friday that OpenAI's “safety culture and processes have taken a back seat to shiny products.”

The OpenAI Superalignment team, announced last year, focused on “scientific and engineering breakthroughs to steer and control AI systems that are much smarter than us.” At the time, OpenAI said it would commit 20% of its computing power to the initiative over four years.

OpenAI didn’t comment, instead referring CNBC to co-founder and CEO Sam Altman's latest post on X, where he shared that he was sad to see Leike go and that the company still had more work to do. On Saturday, OpenAI co-founder Greg Brockman posted a statement on X attributed to himself and Altman.

News of the team's dissolution was first reported by Wired.

Sutskever and Leike announced their departures hours apart on Tuesday on the social media platform X, but on Friday Leike shared more details about why he left the startup.

“I joined because I thought OpenAI would be the best place in the world to conduct this research,” Leike wrote on X. “However, I had been disagreeing with OpenAI’s leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

Leike wrote that he believes a much larger portion of the company's bandwidth should be focused on security, monitoring, preparedness, safety and societal impact.

“These problems are quite difficult to solve, and I worry that we are not on track to get there,” he wrote. “In the last few months my team has been sailing against the wind. Sometimes we struggled for [computing resources] and it became increasingly difficult to conduct this important research.”

Leike added that OpenAI must become a “safety-first AGI company.”

“Building machines more intelligent than humans is an inherently dangerous endeavor,” he wrote. “OpenAI assumes an enormous responsibility on behalf of all humanity. But in recent years, safety culture and processes have taken a back seat to shiny products.”

Leike didn’t immediately reply to a request for comment.

The high-profile departures come months after OpenAI endured a leadership crisis involving Altman.

In November, OpenAI's board ousted Altman, saying in a press release that Altman “has not been consistently open in his communications with the board.”

The issue seemed to grow more complex by the day. The Wall Street Journal and other media outlets reported that Sutskever focused his attention on ensuring that artificial intelligence doesn’t harm people, while others, including Altman, were instead more focused on driving the deployment of new technologies.

Altman's ouster led to resignations or threats of resignation, including an open letter signed by virtually all OpenAI employees, and an uproar from investors, including Microsoft. Within a week, Altman was back at the company, and board members Helen Toner, Tasha McCauley and Ilya Sutskever, who had voted to remove Altman, resigned. Sutskever remained on staff at the time, but no longer in his role as a board member. Adam D'Angelo, who also voted to remove Altman, remained on the board.

When asked about Sutskever's status on a Zoom call with reporters in March, Altman said there were no updates he could provide. “I love Ilya… I hope we work together for the rest of our careers, my career, whatever,” Altman said. “There is nothing to announce today.”

On Tuesday, Altman shared his thoughts on Sutskever's departure.

“That makes me very sad; Ilya is certainly one of the greatest minds of our generation, a role model in our field and a dear friend,” Altman wrote on X. “His brilliance and vision are well known; his warmth and compassion are less well known, but no less important.” Altman said research director Jakub Pachocki, who has worked at OpenAI since 2017, will replace Sutskever as chief scientist.

The news of Sutskever and Leike's departures and the dissolution of the Superalignment team comes just days after OpenAI launched a new AI model and a desktop version of ChatGPT, as well as an updated user interface. This is the company's latest attempt to expand the use of its popular chatbot.

The update brings the GPT-4 model to everyone, including free OpenAI users, chief technology officer Mira Murati said in a livestream event on Monday. She added that the new GPT-4o model is “much faster” and offers improved text, video and audio capabilities.

OpenAI said it eventually plans to allow users to video chat with ChatGPT. “This is the first time we're really taking a huge step forward when it comes to ease of use,” Murati said.

Image credit: www.cnbc.com