
Ex-OpenAI Chief Scientist Ilya Sutskever Launches Safe Superintelligence Inc. to Focus on AI Safety

Published by blyzniukova

Ilya Sutskever, co-founder and former chief scientist of OpenAI, has teamed up with ex-OpenAI engineer Daniel Levy and investor Daniel Gross to establish Safe Superintelligence Inc. (SSI). The new company’s mission is reflected in its name: developing AI capabilities and safety in tandem.

SSI, an American company with offices in Palo Alto and Tel Aviv, aims to advance artificial intelligence while ensuring its safety. In a June 19 announcement, the founders highlighted their dedication to AI safety, stating, “Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”

Background on Sutskever and AI Safety Concerns

Sutskever departed OpenAI on May 14, after his involvement in the board’s controversial ouster of CEO Sam Altman. Following Altman’s return, Sutskever’s role at the company remained unclear, ultimately leading to his exit. Daniel Levy left OpenAI shortly after Sutskever’s departure.

Previously, Sutskever and Jan Leike co-led OpenAI’s Superalignment team, which was created in July 2023 to address the control of AI systems potentially smarter than humans, known as artificial general intelligence (AGI). OpenAI allocated 20% of its computing power to this team. However, after Sutskever and Leike left in May, OpenAI dissolved the Superalignment team. Leike now heads a team at the AI startup Anthropic, backed by Amazon.


Industry Concerns and Actions

The departure of these researchers underscores broader concern among scientists about the direction of AI development. Ethereum co-founder Vitalik Buterin has called AGI “risky,” though he noted it poses less immediate danger than corporate or military misuse of AI. Separately, more than 2,600 tech leaders, including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, have called for a six-month pause in training advanced AI systems to weigh the significant risks involved.

SSI’s announcement also mentioned that the company is actively hiring engineers and researchers to further its mission.
