
Former OpenAI scientist raises $1B for AI safety venture

Sutskever’s new company is focused on providing safe artificial general intelligence. While some see a need for it, others find it distracting.

The new AI company founded by OpenAI's former chief scientist has raised $1 billion.

Ilya Sutskever's newly founded Safe Superintelligence (SSI) raised the funds to build safe AI systems that surpass human capabilities -- a concept akin to artificial general intelligence, Reuters first reported.

Artificial general intelligence is a type of AI that tries to mimic the human brain and even exceed its capabilities.

Investors including Andreessen Horowitz, Sequoia Capital, DST Global and SV Angel contributed to the $1 billion funding round.

Ilya Sutskever founded Safe Superintelligence in June to develop safe AI systems.

The company is now valued at about $5 billion, according to media reports. Safe Superintelligence has not confirmed this valuation.

AI safety

Safe Superintelligence's rapid fundraising shows how eagerly investors continue to seek opportunities in AI startups and in foundational AI technology.

"Investors are interested in this idea of safe artificial intelligence," said Nick Patience, an analyst with Futurum Group.

The focus on safety differentiates SSI from other startups that are popping up in the market.

Sutskever's background also burnishes his AI safety credentials: he came from OpenAI, the leading independent generative AI vendor, which started with a goal of developing open and safe AI technology but has been criticized for appearing not to fulfill that promise.

Sutskever is one of several former OpenAI board members who fired CEO Sam Altman last November for not being candid with the board.

While Altman was soon reinstated, Sutskever left the company months later.

In founding the company, Sutskever and his co-founders wrote that it would advance AI capabilities as fast as possible while ensuring that safety remains paramount.

One way Safe Superintelligence could prove its value is by serving as a checkpoint for AI systems.

"If people view them as the Underwriter Labs for AI safety, then they have a role to play in this ecosystem," Constellation Research CEO R "Ray Wang said, referring to the applied science safety organization now known as UL Solutions.

Like Underwriters Laboratories, Safe Superintelligence could test AI products before they go to market and serve as a safety check, he added.

While Safe Superintelligence might not end up making a lot of money, it may be able to strike a balance between profit and serving as an AI safety checkpoint, Wang said.

The venture capitalists who invested in the startup also see SSI as a potential AI safety checkpoint, which is why they are backing the company, Wang continued.

Too much focus on safety

However, putting so much focus into AI safety might be too distracting, Patience said.

"I don't think it's possible for us to understand what's safe and what isn't safe because we don't know where the technology is going," he said. "You don't know exactly where the technology is, how do we know whether it's safe or not."

Therefore, too heavy a focus on safety might give pause to other venture capitalists looking to invest in AI vendors, he added. Sutskever's firm, founded in June 2024, has yet to prove itself. Meanwhile, other startups are already providing products and services and have moved beyond the conceptual phase.

Moreover, SSI's focus on creating an AI system that can do everything a human can do and more might be too lofty a goal, he continued.

"I don't think it's necessarily possible, or necessarily that desirable," Patience said. "We can get very far without doing something like that."

"I's quite a risk to plow that much money into something like that," he continued.

It's also risky because Sutskever’s firm has yet to release products, he added.

Anthropic Claude Enterprise

Another generative AI vendor that has shown interest in AI safety -- and has released widely used products -- is OpenAI rival Anthropic.

On Wednesday, Anthropic launched the Claude Enterprise plan, which competes with OpenAI’s ChatGPT Enterprise.

Claude Enterprise includes key features such as a 500,000-token context window and enables enterprises to turn ideas into polished work with collaboration tools, according to the vendor.

Anthropic's release exemplifies how vendors in the AI market continue to try to find areas of differentiation such as offering larger context windows and prompt caching.

While the Claude Enterprise plan's context window is not the largest, its addition of GitHub integration to pull in code, admin controls and single sign-on makes it stand out, Constellation Research analyst Andy Thurai said.

"This should allow enterprise users to consider using these models to unleash their enterprise data where data privacy and security are major concerns," Thurai said in a statement to media outlets.

Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems.
