Ilya Sutskever, who stepped down as OpenAI’s chief scientist in May, has announced his next venture: a startup called Safe Superintelligence Inc. (SSI), co-founded with former OpenAI colleague Daniel Levy and Daniel Gross, a co-founder of Cue and formerly Apple’s AI lead. SSI’s stated goal is to develop safe superintelligence.
On SSI’s currently minimalist website, the founders describe building safe superintelligence as “the most important technical problem of our time.” They say they treat safety and capabilities as intertwined challenges requiring groundbreaking engineering and scientific innovation, and that they plan to advance capabilities as fast as possible while making sure safety always stays ahead.
Superintelligence refers to a hypothetical system whose intelligence far exceeds that of the smartest humans. Sutskever’s new project continues his work at OpenAI, where he co-led a team focused on steering and controlling powerful AI systems. After his departure, that team was disbanded, a decision sharply criticized by its former co-lead, Jan Leike.
Sutskever, a co-founder of OpenAI, was also a central figure in the brief removal of chief executive Sam Altman in November 2023, a move he later said he regretted.
SSI aims to pursue safe superintelligence with a single-minded focus. The founders say they are eager to get started and are looking for dedicated individuals to join a small, highly trusted team and achieve extraordinary results.