Technology

Alphabet, Nvidia back OpenAI Co-founder Ilya Sutskever’s SSI

Our takeaway: amid the ongoing backlash against how LLM developers have handled copyright violations, data theft, privacy threats, and security concerns around model training, a countercurrent is slowly emerging in the industry to try to do better. But is it overhyped?

Alphabet and Nvidia have invested in Safe Superintelligence (SSI), a startup co-founded by OpenAI's former chief scientist Ilya Sutskever, according to Reuters. SSI, based in Palo Alto, California, and Tel Aviv, was launched in June 2024 and has become one of the highest-valued AI startups, reaching a staggering $32 billion valuation in its latest funding round.

In a strategic shift, Alphabet’s cloud division has inked a deal to supply Tensor Processing Units (TPUs) to SSI—marking a departure from Google’s prior policy of keeping its custom AI chips in-house.

SSI is reportedly leaning toward using TPUs over Nvidia's dominant GPUs, which hold more than 80% of the AI chip market. The move may hint at a broader shift, as Google now offers both chip options through its cloud to support evolving AI demands.

The latest funding round was led by venture capital firm Greenoaks, cementing SSI's position as one of the most prominent startups in AI research. The company has garnered significant attention not only for its work on foundational AI models but also for the involvement of Sutskever, one of OpenAI's co-founders.

Known for his pivotal role in advancing deep learning and his ability to foresee transformative trends in AI development, Sutskever brings a high level of credibility and anticipation to SSI’s work, positioning the company at the forefront of the next wave of breakthroughs in artificial intelligence.

Further, SSI raised $1 billion in September last year, three months after its inception, in a funding round led by prominent investors including Andreessen Horowitz and Sequoia Capital. The investment valued the company at $5 billion. 

This substantial investment came at a time when concerns about the potential risks of advanced AI technologies were growing. SSI’s mission to create safe AI systems aligns with increasing public and regulatory scrutiny of AI’s impact on society.

SSI founded amid leadership tensions at OpenAI

Sutskever, who exited OpenAI last year after internal conflict over its leadership, launched SSI alongside former Apple AI lead Daniel Gross and Daniel Levy, another ex-OpenAI researcher.

"We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence," said a blog post by the founders.

SSI is making waves as the world's first "straight-shot superintelligence lab" with a bold mission: build safe superintelligence, and nothing else. Unlike many AI labs juggling research with product launches and profit targets, SSI is focused on advancing capabilities and safety in lockstep, an approach its founders describe as the ability to "scale in peace."

What sets SSI apart isn't just its ambition but its discipline. The company is deliberately structured to avoid the usual tech-industry distractions: no short-term revenue hustle, no bloated management layers, and no pivoting to the trend of the week. Everything is optimised for one goal: creating superintelligence safely and sustainably.

The timing of SSI’s debut last year was no coincidence. It came amid internal turbulence at OpenAI, marked by high-profile exits and rising concerns about governance and direction. With AI’s future at a tipping point, SSI is positioning itself as the focused, principled alternative ready to lead the next chapter.
