AI inference startup raises $110 million
Investors are betting on d-Matrix to be a long-lasting vendor in the AI hardware and compute market despite Nvidia's dominance. The startup offers a low-cost option for inference.
AI startup d-Matrix said on Wednesday it raised $110 million in a Series B funding round.
As a vendor specializing in generative AI compute for data centers, d-Matrix plans to use the funding to commercialize Corsair, its chiplet-based inference platform built on digital in-memory compute, the company said.
The new funding comes after d-Matrix introduced Jayhawk II, the latest version of its silicon, last month.
Competing with Nvidia
Although d-Matrix is a startup in the AI computing and inference market, it competes with AI hardware and software giant Nvidia, according to Cambrian AI analyst Karl Freund.
While Nvidia sits atop the market for AI chips, d-Matrix offers an in-memory compute platform that Freund said should be fast and much lower in cost because it forgoes high-bandwidth memory (HBM) in favor of static random access memory (SRAM).
"There's only a couple of companies that really have a shot at competing with Nvidia," Freund said. "D-Matrix is one of them. They're using a different technology, a different architecture that seems to be producing better results potentially."
Investor outlook
It is this conviction that d-Matrix will outlast other startups, such as Cerebras, that attracted early-stage investment firm Playground Global to the vendor.
The firm started as an advisor to d-Matrix in 2019 and became an investor during the vendor's first funding round.
Among the numerous startups in the market, d-Matrix stood out because of its team, its technology and the growing AI market it was trying to address, according to Playground Global venture partner Sasha Ostojic.
"What we saw in D-Matrix is that they made an early bet on generative AI and LLM," Ostojic said. "They made the bet that that would be the dominant workload moving forward, and they were right. The rest of the world basically caught up with that insight."
Low-cost inferencing
Freund said the market is likely to move toward d-Matrix's low-cost strategy for inference.
"Inferencing is just too expensive right now," he said.
"That takes eight H100 or A100 GPUs just to do an inferencing process," he continued, referring to Nvidia AI chips. "Every time you answer enter a query, it's going to take eight GPUs to answer it. That's not sustainable."
This is where companies like d-Matrix aim to lower costs with technologies such as in-memory computing.
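To make the economics concrete, here is a minimal back-of-envelope sketch of per-query serving cost for an eight-GPU setup like the one Freund describes; the hourly GPU price and throughput are assumptions for illustration only, not figures from the article.

    # Back-of-envelope per-query cost for an 8-GPU inference replica.
    # GPU price and throughput are assumed values, not from the article.

    GPUS_PER_REPLICA = 8         # Freund's figure for one inference setup
    GPU_COST_PER_HOUR = 2.50     # assumed cloud rate, dollars per GPU-hour
    QUERIES_PER_SECOND = 5       # assumed sustained throughput per replica

    replica_cost_per_hour = GPUS_PER_REPLICA * GPU_COST_PER_HOUR  # $20.00
    queries_per_hour = QUERIES_PER_SECOND * 3600                  # 18,000
    cost_per_query = replica_cost_per_hour / queries_per_hour     # ~$0.0011

    print(f"Cost per query: ${cost_per_query:.4f}")

Even at roughly a tenth of a cent per query, costs scale linearly with traffic, so serving millions of queries a day becomes a significant expense. Lower-cost inference hardware attacks that multiplier directly.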
Even Nvidia is likely to release inference products at lower price points soon, Freund said.
Freund said the challenge for d-Matrix going forward is getting generative AI models up and running on its Jayhawk II development platform.
"Customers want a faster lower-cost solution, but they don't want to go through pain and anguish importing and optimizing code and models," he said.
For Playground Global, execution is the next step for d-Matrix to tackle, Ostojic said.
"What's left now is to deliver the production chip to market next year with an enterprise-class, frictionless software stack to be deployed in data centers and hyperscalers," he said.
D-Matrix's chiplets are expected to become available in 2024, according to the vendor.
The Series B round was led by investment firm Temasek and included Microsoft's venture capital fund M12, as well as Playground Global.
Esther Ajao is a news writer covering artificial intelligence software and systems.