Ilya Sutskever has launched Safe Superintelligence Inc. (SSI) to develop artificial superintelligence. Sutskever, a co-founder of OpenAI and a pivotal figure in the AI community, announced the initiative in a tweet, emphasizing a singular focus on creating safe superintelligence. According to the company's website, SSI aims to tackle this technical challenge by advancing capabilities while ensuring safety remains paramount.
The "superintelligence" in SSI's name refers to Artificial Superintelligence (ASI), a step beyond the more commonly discussed Artificial General Intelligence (AGI). AGI denotes human-level general intelligence, while ASI refers to a hypothetical future AI system superior to human intelligence across all cognitive domains. AGI is considered a stepping stone toward the potential development of ASI, but the latter remains a speculative concept.
Based in Palo Alto and Tel Aviv, SSI positions itself as the first straight-shot SSI lab solely dedicated to building safe superintelligence. The company emphasizes a streamlined approach without distractions from management overhead or product cycles. SSI aims to attract top technical talent, focusing on revolutionary engineering and scientific breakthroughs to achieve its goal.
Ilya Sutskever and OpenAI
Sutskever's journey in AI began at the University of Toronto under Geoffrey Hinton, where he contributed to significant advances in deep learning, including co-creating AlexNet. His work at OpenAI further cemented his reputation: he played a critical role in developing GPT-3 and DALL-E.
Over time, Sutskever's focus shifted toward the potential risks of advanced AI systems, particularly superintelligence, an intelligence that surpasses human capabilities. In 2023, he co-authored a paper titled "Introducing Superalignment," which warned of the potential dangers of superintelligence and called for research into controlling and aligning such systems with human values. His new venture reflects that mission, seeking to address what he regards as one of the most critical technical challenges of our time.
Sutskever's concerns about AI safety and alignment led to tensions within OpenAI, culminating in a leadership crisis in November 2023. He was part of the group that briefly ousted Sam Altman, the CEO of OpenAI, over disagreements about the pace of commercialization and the necessary safety measures. While Altman was reinstated within a week, Sutskever resigned from the board and later announced his departure from OpenAI in May 2024.
Throughout the past year, the AI community has discussed Sutskever's stance on AI safety and the potential risks of superintelligence. In interviews and podcasts, he emphasized the importance of controlling superintelligence and "imprinting onto them a strong desire to be nice and kind to people."
Sutskever's departure from OpenAI was seen as a significant loss for the company. Altman acknowledged his immense contributions and described him as "easily one of the greatest minds of our generation." His next endeavor, however, remained a mystery until the recent announcement of SSI.
SSI co-founders
According to the website's single-page letter, Sutskever's SSI co-founders appear to be Daniel Levy and Daniel Gross.
Daniel Levy is a former OpenAI engineer. He holds a bachelor's degree from École Polytechnique in France and a Ph.D. in computer science from Stanford University. Before joining OpenAI, he interned at companies like Microsoft, Meta, and Google.
Daniel Gross, born in Jerusalem in 1991, is an American entrepreneur and investor. He co-founded the startup Cue, later acquired by Apple, where he led the company's artificial intelligence efforts. Gross also served as a partner at the startup accelerator Y Combinator and has invested in companies including Uber, Instacart, Figma, GitHub, Airtable, and Perplexity.ai.
The trio's focus with SSI is solely on developing safe superintelligence, which they believe is "the most important technical problem of our time." They aim to advance AI capabilities while ensuring safety remains ahead, insulated from short-term commercial pressures.
Read more on cryptoslate.com