Ilya Sutskever, a name synonymous with OpenAI since its inception, is taking his expertise and passion for artificial intelligence (AI) safety to a new venture. This week, Sutskever unveiled Safe Superintelligence Inc. (SSI), a company with a singular mission: building a safe superintelligence.
SSI promises a distinct approach to AI development: advancing safety and capabilities in tandem, so that progress never outpaces safeguards. Sutskever points to the challenges facing AI teams inside large corporations like OpenAI, Google, and Microsoft, where external pressures often hinder their ability to prioritize safety. SSI, by contrast, boasts a "singular focus" free of distraction from management overhead or product cycles.
“Our business model prioritizes safety, security, and progress, independent of short-term commercial pressures,” the company’s announcement declares. “This way, we can scale in peace.”
Sutskever isn’t venturing out alone. Joining him at SSI are Daniel Gross, who previously led AI efforts at Apple, and Daniel Levy, an alumnus of OpenAI’s technical staff.
Last year, Sutskever spearheaded a movement to remove OpenAI CEO Sam Altman. He subsequently left OpenAI in May, hinting at his next project. These developments were followed by the departures of AI researcher Jan Leike and policy researcher Gretchen Krueger, both citing concerns about safety processes taking a backseat to commercial goals at OpenAI.
With OpenAI forging partnerships with tech giants like Apple and Microsoft, SSI appears to be charting a different course. In a recent interview with Bloomberg, Sutskever made it clear that SSI’s sole focus, for now, is on creating safe superintelligence. “With one goal and one product,” he stated, “a safe superintelligence.”
