
Former OpenAI scientist raises $1B for AI safety venture

Ilya Sutskever's new company is focused on providing safe artificial general intelligence. While some see a need for it, others find it distracting.

The new AI company founded by OpenAI's former chief scientist has raised $1 billion.

Ilya Sutskever's Safe Superintelligence (SSI), founded in June, raised the funds to build safe AI systems that surpass human capabilities -- a concept akin to artificial general intelligence, Reuters first reported.

Artificial general intelligence is a type of AI that aims to match, and even exceed, the capabilities of the human brain.

Investors including Andreessen Horowitz, Sequoia Capital, DST Global and SV Angel contributed to the $1 billion funding round. The company is now valued at about $5 billion, according to media reports. Safe Superintelligence has not confirmed this valuation.

AI safety

Safe Superintelligence's rapid fundraising shows how actively investors continue to seek opportunities in AI startups and in foundational AI technology.

"Investors are interested in this idea of safe artificial intelligence," said Nick Patience, an analyst at Futurum Group.

The focus on safety differentiates SSI from other startups that are popping up in the market.

That Sutskever came from OpenAI -- the leading independent generative AI vendor, which started with the goal of developing open and safe AI technology but has been criticized for appearing not to fulfill that promise -- also underscores his AI safety credentials.

Sutskever was one of the OpenAI board members who fired CEO Sam Altman last November for not being candid with the board. Altman was soon reinstated, and Sutskever left the company months later.

In founding the company, Sutskever and his co-founders wrote that it would advance AI capabilities as fast as possible while ensuring that safety remains paramount.


One way Safe Superintelligence could prove its value is by serving as a checkpoint for AI systems.

"If people view them as the Underwriters Labs for AI safety, then they have a role to play in this ecosystem," Constellation Research CEO R "Ray" Wang said, referring to the applied science safety organization now known as UL Solutions.

Like UL, Safe Superintelligence could test AI products before they go to market, acting as a safety check, he added.

While Safe Superintelligence might not end up making a lot of money, it could strike a balance between profitability and serving as an AI safety checkpoint. The venture capitalists backing the startup see that same potential, which is why they invested, Wang said.

Too much focus on safety

However, putting so much focus on AI safety might prove a distraction, Patience said.

"I don't think it's possible for us to understand what's safe and what isn't safe because we don't know where the technology is going," he said. "You don't know exactly where the technology is, [so] how do we know whether it's safe or not?"

An outsize focus on safety could therefore give venture capitalists looking to invest in AI vendors pause, he said. Sutskever's company, founded in June 2024, hasn't proven itself yet, while other startups are already providing products and services that have moved beyond the conceptual phase.

Moreover, SSI's goal of creating an AI system that can do everything a human can do and more might be too lofty, he continued.

"I don't think it's necessarily possible, or necessarily that desirable," Patience said. "We can get very far without doing something like that."

"It's quite a risk to plow that much money into something like that," he continued, adding that it's also risky because Sutskever's company has yet to release products.

Anthropic's Claude Enterprise

OpenAI rival Anthropic is another generative AI vendor that has shown interest in AI safety and has released widely used products.

On Wednesday, Anthropic launched the Claude Enterprise plan, which rivals OpenAI's ChatGPT Enterprise. Claude Enterprise includes key features such as a 500,000-token context window and enables enterprises to turn ideas into polished work with collaboration tools, according to the vendor.

Anthropic's release exemplifies how vendors in the AI market continue to try to find areas of differentiation, such as offering larger context windows and prompt caching.

While the Claude Enterprise plan's context window is not the largest, the addition of GitHub integration for pulling in code, along with admin controls and single sign-on, makes it stand out, Constellation Research analyst Andy Thurai said.

"This should allow enterprise users to consider using these models to unleash their enterprise data, where data privacy and security are major concerns," Thurai said in a statement to media outlets.

Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems.
