Wednesday, July 24, 2024

OpenAI’s former chief scientist launches a new AI company

Ilya Sutskever, co-founder of OpenAI, launches Safe Superintelligence Inc. to develop robust and safe AI systems, avoiding commercial pressures.


Ilya Sutskever, co-founder and former chief scientist of OpenAI, is venturing into a new project dedicated to creating safe AI. On Wednesday, Sutskever announced the launch of Safe Superintelligence Inc. (SSI), a startup with a clear mission: developing a powerful AI system with safety as its primary focus.

SSI's unique approach to AI

SSI aims to advance safety and capabilities in tandem, pushing its AI system forward rapidly while keeping safety the top priority. The company seeks to avoid the external pressures that often burden AI teams at big tech firms like OpenAI and Google. By maintaining a “singular focus,” SSI hopes to avoid distractions caused by management overhead or product cycles.

“Our business model means safety, security, and progress are all insulated from short-term commercial pressures,” the announcement reads. “This way, we can scale in peace.” This approach allows SSI to focus solely on its goal without being sidetracked by the typical commercial demands.

SSI's co-founders are notable figures in the AI field. Alongside Sutskever, the team includes Daniel Gross, a former AI lead at Apple, and Daniel Levy, previously a member of the technical staff at OpenAI. The trio brings a wealth of experience and expertise to the new venture.

Background and motivation

Last year, Sutskever was at the forefront of a movement to remove OpenAI CEO Sam Altman. After leaving OpenAI in May, Sutskever hinted at starting a new project. Shortly after his departure, AI researcher Jan Leike also left, expressing concerns that safety processes were taking a backseat to the development of flashy products. Additionally, Gretchen Krueger, a policy researcher at OpenAI, cited safety concerns when she resigned.

SSI's mission reflects Sutskever's ongoing commitment to addressing these safety issues in AI development. By prioritising safety from the start, and by keeping its work insulated from short-term commercial pressures, SSI aims to build AI systems that are not only advanced but also secure and reliable.

In conclusion, SSI represents a new chapter in AI development, emphasising safety and a clear mission. With the combined expertise of Sutskever, Gross, and Levy, SSI is well-positioned to make significant strides in artificial intelligence.




Emma Job
Emma is a freelance news editor at Tech Edition. With a decade's experience in content writing, she revels in both crafting and immersing herself in narratives. From tracking down viral trends to delving into the most recent news stories, her goal is to deliver insightful and timely content to her readers.
