
OpenAI Co-Founder’s Startup Safe Superintelligence Raises $1 Billion to Advance Safe AI Development


Safe Superintelligence (SSI), the new AI startup co-founded by former OpenAI chief scientist Ilya Sutskever, has secured $1 billion in funding to accelerate the development of AI systems that safely surpass human intelligence.

With a vision of building AI that transforms industries while upholding safety and ethical standards, SSI is drawing significant attention from top investors even as the wider AI funding landscape has cooled. The round, backed by leading venture capital firms including Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel, underscores investors’ confidence in Sutskever and his team to shape the future of AI responsibly.

SSI, which operates with a compact team of 10 engineers and researchers split between Palo Alto and Tel Aviv, plans to use the funds to acquire computing power and make strategic hires. Its aim? To assemble a small, elite group dedicated to pioneering AI advancements that benefit society without compromising safety.


While SSI has yet to disclose an official valuation, sources close to the company suggest it is already valued at roughly $5 billion. Investors are making bold bets on the transformative potential of AI despite the inherent challenges and long-term risks of foundational AI research.

Notable backers also include NFDG, the investment partnership run by Nat Friedman and SSI CEO Daniel Gross, further solidifying SSI’s credibility in the AI ecosystem.

As the global race to develop superintelligent AI heats up, SSI stands out for its commitment not just to advancing the technology but to ensuring its safe integration into society.