AMD trims delivery time for MI350

AMD plans to send samples of the rack-scale AI chip to key customers in the current quarter.

AMD has shortened the delivery time of its Instinct MI350 series to midyear, marking the debut of the company's first rack-scale AI accelerator for the data center.

On Tuesday, the chipmaker provided the update on its next-generation AI GPU and reported a 24% year-over-year increase in fourth-quarter revenue, to $7.7 billion. Slightly more than half of that revenue, $3.9 billion, came from sales of its Instinct GPUs and EPYC CPUs. Revenue from data center hardware rose 69% in the quarter, which ended in December.

Lisa Su, CEO, AMD

"We were very pleased with how we finished 2024, in terms of the data center GPU business," AMD CEO Lisa Su said during an earnings call with Wall Street analysts. "Going into 2025 ... we're actually very happy with the progress that we're making on both the hardware roadmaps and the software roadmaps."

AMD had scheduled production shipments of the Instinct MI350 for later in the year. However, testing the chip for design specifications and functions went faster than anticipated, Su said. AMD will send chip samples to key customers in the current quarter.

The MI350 features AMD's CDNA 4 microarchitecture, designed for high-performance computing and AI workloads in the data centers of cloud providers and large enterprises, such as pharmaceutical companies, financial institutions and retail giants. AMD claims the GPU has 35 times the performance of Instinct chips built on the CDNA 3 architecture.

AMD is on track to deliver the MI400, the successor to the MI350, as planned in 2026, Su said. The MI400 is a rack-scale product whose CDNA microarchitecture tightly integrates networking, CPU and GPU capabilities at the silicon level.

AMD's software platform for optimizing workloads on its Instinct AI accelerators is called Radeon Open Compute, or ROCm. It consists of programming models, tools, compilers, libraries and runtimes.

ROCm lacks the maturity of rival Nvidia's software stack, which is led by its Compute Unified Device Architecture, or CUDA, a parallel computing platform and programming model.

Nvidia dominates the AI and accelerated computing data center market. In the last fiscal quarter, the company reported $30.8 billion in data center revenue.

Cloud providers and hyperscalers use Nvidia's high-powered, expensive GPUs to train foundation models. These models have been growing as developers work to make them capable of reasoning instead of just creating content such as text, images or videos.

Last month, the Chinese company DeepSeek released an open source reasoning model trained on low-end Nvidia GPUs. The company's research paper described the techniques for creating a high-performing model on less expensive hardware.

If DeepSeek's claims are verified, the AI market for AMD chips could widen, according to David Nicholson, chief research officer at The Futurum Group. AMD's current chips compete with Nvidia's H100 GPUs, which are a generation behind Nvidia's latest architecture, Blackwell.

"This is a wake-up call," Nicholson said of DeepSeek. "It will give people the air cover to make decisions [on AMD] that were considered too risky a month ago."

Su said DeepSeek's achievement was positive for AMD because models that are less expensive to train and fine-tune would make the technology available to more enterprises, not just the largest ones.

"We think innovation on the models and the algorithms is good for AI adoption," she said.

For the current quarter, AMD forecasts revenue of $7.1 billion, plus or minus $300 million, roughly a 30% increase from a year earlier. The company expects strong growth in its data center and client businesses. Its client business includes its Ryzen mobile and desktop CPUs, the primary drivers of a 58% increase in client segment revenue, to $2.3 billion, in the fourth quarter.

Antone Gonsalves is an editor at large for Informa TechTarget, reporting on industry trends critical to enterprise tech buyers. He has worked in tech journalism for 25 years and is based in San Francisco. Have a news tip? Please drop him an email.
