Intel, Nvidia vie for dominance with agentic AI blueprints

Rival sets of tools meant to better support AI apps on developer platforms have hit the market as an even bigger wave of change looms in agentic AI.

SALT LAKE CITY -- Enterprise platform engineers must quickly get up to speed in LLMOps -- the next phase of generative AI depends on it. Vendors and the open source community are responding, but a primordial soup of new projects has yet to coalesce into a stable standard.

Discussions at KubeCon + CloudNativeCon North America this week centered on adjustments that platform engineers and internal developer platform products have made as cost and data privacy concerns drive generative AI workloads out of public clouds. The absorption of generative AI services into existing platforms will be critical to support what is widely considered the next big trend in tech: agentic AI, where sets of AI microservices operate autonomously.

Meanwhile, for IT pros in the trenches, this rapid adaptation has proven challenging, said Kasper Borg Nissen, a staff platform engineer at digital bank Lunar in Denmark, during a keynote presentation this week.

"According to recent surveys done by Gartner, McKinsey and more, 65% of organizations are now regularly using generative AI across multiple functions, and furthermore, 96% of companies expect AI to become a key enabler of business growth and operational improvements," Nissen said. "However, significant challenges remain -- 49% of companies struggle to estimate and demonstrate AI's business value, and only 9% of organizations were considered AI mature."

To fill this gap, vendors across the IT infrastructure landscape are polishing their wares to support large language model operations, or LLMOps. In the GPU space, two longtime rival chipmakers opened a new frontier of competition this year in the form of AI microservices orchestration projects: Intel's Open Platform for Enterprise AI and Nvidia Blueprints. Both offer a catalog of components platform engineers can assemble to quickly spin up services that support common AI agent workloads, including conversions between text, images, audio and video.


It's still very early for both projects, but their success will likely be influenced by broader market battles, according to Andy Thurai, an analyst at Constellation Research. Nvidia has already replaced Intel on the Dow Jones Industrial Average, and sales of Intel's Gaudi 3 AI chip have not met expectations. Part of that is due to Nvidia's strength in software-based automation, he said.

"Here's the bottom line: The reason why Nvidia has become so popular in GPUs is because of their software stack and because of their firmware," Thurai said in an interview this week. "They made it so easy for people to use any of the AI/ML models, whereas Intel let them adopt them any way [users] wanted. I think Nvidia will nail them."

AI blueprint death match

GPU titan Nvidia and archrival Intel used conference keynotes to showcase the AI microservices blueprint consortiums each launched this year. Intel, along with AMD, launched the Open Platform for Enterprise AI (OPEA) project with more than 14 partners in April; the project now has 45. These partners, which include ByteDance, Canonical, Hugging Face, Neo4j, Red Hat, SAP and VMware, have created dozens of open source blueprints. These composable frameworks support several common agentic AI services, from building agents and retrieval-augmented generation to data preparation and establishing platform security guardrails.
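To make the "composable" idea concrete, the sketch below chains three independently deployed microservices -- an embedder, a retriever and an LLM -- into a minimal retrieval-augmented generation flow. The service URLs and JSON payload shapes here are hypothetical stand-ins, not OPEA's actual interfaces; real blueprints define their own wiring and manifests.

```python
import requests

# Hypothetical endpoints for three separately deployed microservices;
# actual blueprints supply their own service names and schemas.
EMBEDDING_SVC = "http://embedding-svc:6000/v1/embeddings"
RETRIEVER_SVC = "http://retriever-svc:7000/v1/retrieval"
LLM_SVC = "http://llm-svc:9000/v1/chat/completions"

def rag_answer(question: str) -> str:
    """Chain embed -> retrieve -> generate, one HTTP hop per microservice."""
    # 1. Embed the user's question.
    emb = requests.post(EMBEDDING_SVC, json={"input": question}).json()
    vector = emb["data"][0]["embedding"]

    # 2. Fetch the top-matching documents from the retriever service.
    docs = requests.post(
        RETRIEVER_SVC, json={"embedding": vector, "top_k": 3}
    ).json()["documents"]

    # 3. Ask the LLM service for an answer grounded in the retrieved context.
    prompt = f"Context:\n{chr(10).join(docs)}\n\nQuestion: {question}"
    resp = requests.post(
        LLM_SVC,
        json={"model": "llm", "messages": [{"role": "user", "content": prompt}]},
    ).json()
    return resp["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(rag_answer("What did our Q3 postmortem conclude?"))
```

Because each stage sits behind its own HTTP interface, a platform team can swap the vector store or the model server without touching the rest of the pipeline -- the property both blueprint catalogs are built around.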

Intel has already demonstrated success with customers migrating from managed to self-hosted AI services using OPEA blueprints, said Arun Gupta, vice president and general manager of developer programs at Intel, in an interview this week with TechTarget Editorial.

"We have migrated customers who are using Azure Open AI to OPEA [apps] running on just straight-up Azure compute," Intel's Gupta said, although he did not identify the customers or specify how many had undergone such a migration. "The total cost of ownership [for these customers] has gone down significantly, and the customer is in control of the data."

Nvidia's consortium, meanwhile, has rolled out a catalog of six AI microservices blueprints since August, based on Nvidia's proprietary NIM microservices for its GPU chips. At launch, Nvidia Blueprints enlisted big systems integrators such as Accenture, Deloitte and World Wide Technology, along with IT vendors such as Cisco, Dell and Hewlett Packard Enterprise.

Chris Lamb, vice president of computing platforms software at Nvidia, appears on stage at KubeCon 2024 with a 'digital human,' which is among the agentic AI blueprints Nvidia has rolled out with partners.

Lamb also touted Nvidia's open source bona fides during Wednesday's KubeCon keynote, noting that the company contributes to several Cloud Native Computing Foundation projects used with AI apps, including Kubernetes, KubeVirt and Kubeflow.

Lamb called for attendees to join these efforts and coalesce around projects such as Nvidia's GPU Operator as industry standards.

"Agentic AI ... is going to put a focus on the infrastructure to really take into account the way that you optimally carve up these complicated machines efficiently, and make sure things like autoscaling, resilient failover, [and the] interface between the application and the infrastructure is well described so that people don't have to develop this many different ways," Lamb said.

Beth Pariseau, senior news writer for TechTarget Editorial, is an award-winning veteran of IT journalism covering DevOps. Have a tip? Email her or reach out @PariseauTT.
