
Nvidia releases new AI software, hardware

Among the AI news from its virtual conference, Nvidia introduced a new DGX SuperPod, an Arm-based CPU, and the general availability of Jarvis, a conversational AI framework.

Nvidia released a flurry of new AI-powered hardware and software on April 12 at its virtual conference, GTC21.

Among the news from the AI hardware, software and video game chip vendor was the general availability of Jarvis, a framework for building conversational AI models.

Conversational AI framework

The framework enables developers to use pretrained deep learning models and software tools to create conversational AI services, such as advanced translation or transcription models, specialized chatbots and digital assistants, according to Nvidia.

"Jarvis provides an end-to-end framework for conversational AI," said Kashyap Kompella, CEO of RPA2AI Research.

"Organizations can start with pretrained AI models from Nvidia, plug in their own training data sets and customize the models for their industry applications using Nvidia's transfer learning toolkit," Kompella explained. "Nvidia Jarvis' strengths are low-latency response and deployment on the edge."
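
As a rough illustration of the workflow Kompella describes -- start from a pretrained model, adapt it with an organization's own data, then export it for low-latency serving -- the Python sketch below walks through those three steps. The class and method names (PretrainedASRModel, fine_tune, export_for_edge) are hypothetical placeholders for this sketch, not the actual Jarvis or transfer learning toolkit APIs.

"""Conceptual sketch of the pretrained-model -> transfer-learning -> edge-export
workflow described above. All names are hypothetical placeholders; the real
Jarvis and transfer learning toolkit APIs differ."""

from dataclasses import dataclass, field
from typing import List


@dataclass
class PretrainedASRModel:
    """Stand-in for a pretrained speech recognition checkpoint."""
    name: str
    vocabulary: List[str] = field(default_factory=list)

    def fine_tune(self, transcripts: List[str]) -> "PretrainedASRModel":
        # Transfer-learning step: adapt the pretrained model to domain data,
        # simulated here by simply extending the recognition vocabulary.
        domain_terms = {word for line in transcripts for word in line.split()}
        return PretrainedASRModel(
            name=f"{self.name}-custom",
            vocabulary=sorted(set(self.vocabulary) | domain_terms),
        )

    def export_for_edge(self, path: str) -> None:
        # A real pipeline would emit an optimized, low-latency inference
        # engine; this sketch only reports where it would be written.
        print(f"Exported {self.name} ({len(self.vocabulary)} terms) to {path}")


if __name__ == "__main__":
    base = PretrainedASRModel(name="asr-en-us")
    # "Plug in" an organization's own data set -- e.g. medical call transcripts.
    custom = base.fine_tune(["patient reports acute myocardial infarction"])
    custom.export_for_edge("models/asr-en-us-custom.engine")

In a real deployment, the fine-tuning step would retrain the model's weights on the new transcripts rather than just extend a vocabulary, and the export step would produce a GPU-optimized inference engine for edge serving.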

Many industries can find applications for conversational AI, including general-purpose digital assistants on phones and smart speakers, as well as web and mobile chatbots, he noted.

Jarvis puts Nvidia in competition with other major technology vendors, including Google, Microsoft, Amazon and IBM, which also offer conversational AI frameworks.

Jensen Huang, Nvidia founder and CEO, said during his conference keynote on April 12 that Jarvis now supports five languages -- English, Japanese, Spanish, German and Russian.

Enterprises can customize Jarvis for domain-specific jargon, and users can control the expression and emotion of its synthesized speech. Jarvis can be deployed in the cloud or at the edge, Huang said.
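
To make those controls concrete, the sketch below shows the kind of information a request to such a deployed service might carry: the target language, extra domain jargon and expression settings for the synthesized voice. The ConversationalRequest structure, synthesize helper and endpoint URL are illustrative assumptions for this sketch, not Jarvis's actual API.

"""Illustrative request shape for a deployed conversational AI service.
These names are assumptions for the sake of the example, not the real API."""

from dataclasses import dataclass, field
from typing import Dict, List

# The five languages cited in the keynote, written as locale codes
# (an assumption made for this sketch).
SUPPORTED_LANGUAGES = {"en-US", "ja-JP", "es-ES", "de-DE", "ru-RU"}


@dataclass
class ConversationalRequest:
    text: str
    language: str = "en-US"
    domain_terms: List[str] = field(default_factory=list)       # jargon to boost
    expression: Dict[str, float] = field(default_factory=dict)  # e.g. pitch, pace

    def validate(self) -> None:
        if self.language not in SUPPORTED_LANGUAGES:
            raise ValueError(f"Unsupported language: {self.language}")


def synthesize(request: ConversationalRequest, endpoint: str) -> None:
    """Pretend to send the request to a cloud- or edge-hosted service."""
    request.validate()
    print(f"POST {endpoint}: {request.text!r} "
          f"[{request.language}, expression={request.expression}]")


if __name__ == "__main__":
    req = ConversationalRequest(
        text="Ihr Antrag wurde genehmigt.",      # "Your claim has been approved."
        language="de-DE",
        domain_terms=["Selbstbeteiligung"],      # insurance jargon ("deductible")
        expression={"pitch": 0.2, "pace": 0.9},  # calmer, slightly slower voice
    )
    synthesize(req, endpoint="https://edge-node.example.com/speak")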

Screenshot: Nvidia Jarvis, a conversational AI framework

AI supercomputing

Meanwhile, Nvidia introduced a new version of DGX SuperPod, Nvidia's supercomputing system that the vendor bills as made for AI workloads.

The new cloud-native DGX SuperPod uses Nvidia's BlueField-2 DPUs, unveiled last year, to give users a more secure connection to their data. Like Nvidia's other DGX SuperPods, the new supercomputer will contain at least 20 Nvidia DGX A100 systems and Nvidia InfiniBand HDR networking.

"Supercomputers can perform a gazillion operations in a second and crunch humongous amounts of data," Kompella said.


This speed and power make the machines useful for analyzing huge amounts of data to feed AI prediction models.

To help enterprises speed up their AI and high-performance computing workloads, Nvidia also released its first data center CPU, an Arm-based processor called Nvidia Grace CPU.

According to Nvidia, when tightly coupled with Nvidia GPUs, the Grace CPU can deliver 10 times the performance of Nvidia's current DGX-based systems.

"The Grace processor will position Nvidia well in the high-performance computing (HPC) segment," Kompella said.

AI chipmakers can support three key AI workloads: data management, training and inferencing, Forrester Research analyst Mike Gualtieri said.

Nvidia, with its dominance in GPU chips, systems and software optimized for deep learning, leads in training deep learning models, he said.

"But, for Nvidia to stay dominant, it must also be good at data management and inference," he noted.

Looking at the competition

The release of Grace, as well as the company's recent acquisitions, makes it clear that Nvidia understands that need and is heading in the right direction, Gualtieri continued.

Nvidia, after years of developing high-powered GPUs for gaming, has targeted the AI market in recent years.

GPUs, according to Kompella, are well-suited for AI training, and "Nvidia is the 800-pound gorilla in this segment, far ahead of rivals AMD and Intel."

While AMD and Intel make impressive AI chips, neither vendor has created GPUs to match those of Nvidia, Gualtieri said.

Still, "Intel is clearly in the game with CPUs and other technology for data management and inferencing," he said. "Intel must be competitive in training workloads to face Nvidia head-on."

Intel's acquisition of Habana Labs put the tech vendor on the path to becoming a player in AI training as well, he said.

The tech giants -- Google, Microsoft, Amazon, Apple and Facebook -- have created chips made for AI, but these are intended for their own specific applications, Kompella said.

The GTC21 conference is being held online April 12-16.
