Ex-OpenAI Chief Scientist Ilya Sutskever Launches Safe Superintelligence Inc. to Focus on AI Safety

Jun 20, 2024 at 02:44

Ilya Sutskever, co-founder and former chief scientist of OpenAI, has teamed up with ex-OpenAI engineer Daniel Levy and investor Daniel Gross to establish Safe Superintelligence Inc. (SSI). The new company’s mission is reflected in its name: developing AI capabilities and safety in tandem.

SSI, based in the United States with offices in Palo Alto and Tel Aviv, aims to advance artificial intelligence while ensuring its safety. In a June 19 announcement, the founders highlighted their dedication to AI safety, stating, “Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”

Background on Sutskever and AI Safety Concerns

Sutskever departed OpenAI on May 14, months after his involvement in the controversial November 2023 ouster of CEO Sam Altman. Following Altman’s reinstatement, Sutskever’s role at the company remained unclear, ultimately leading to his exit. Daniel Levy left OpenAI shortly after Sutskever.

Previously, Sutskever and Jan Leike co-led OpenAI’s Superalignment team, created in July 2023 to work on controlling AI systems potentially smarter than humans, often referred to as artificial general intelligence (AGI). OpenAI committed 20% of its computing power to the team. However, after Sutskever and Leike left in May, OpenAI dissolved the Superalignment team. Leike now leads a team at Anthropic, the Amazon-backed AI startup.

Industry Concerns and Actions

The departure of these researchers underscores broader concern among scientists about the direction of AI development. Ethereum co-founder Vitalik Buterin has called AGI “risky,” though he noted it poses less immediate danger than corporate or military misuse of AI. Additionally, more than 2,600 tech leaders and researchers, including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, have signed an open letter calling for a six-month pause on training advanced AI systems while the significant risks involved are assessed.

SSI’s announcement also mentioned that the company is actively hiring engineers and researchers to further its mission.
