
Nvidia shows updated AI, omniverse tools for enterprises

Software is the engine that powers many of the AI hardware and software giant's systems, including a new computing system for the omniverse, according to CEO Jensen Huang.

In a flurry of product introductions spanning technologies as disparate as a powerful new supercomputer, faster AI chips, data storage and the omniverse, AI hardware and software vendor Nvidia unveiled a series of updated and new enterprise AI systems.

Nvidia CEO Jensen Huang, the billionaire who co-founded and built the company into a tech powerhouse from its roots as a gaming chipmaker, shared dozens of new developments in a keynote speech at the vendor's GTC 2022 spring conference.

Amid news about Nvidia's powerful new Eos supercomputer, a new GPU chip based on the vendor's "Hopper" architecture and digital twin technology, Huang put the spotlight on Nvidia AI software as the engine at the core of all its technology innovations.

"AI has fundamentally changed what software can make and how you make software," Huang said.

AI Enterprise 2.1

On July 25, Nvidia released AI Enterprise 2.1 -- the vendor's cloud-native suite that enables enterprises to run AI systems and tools on the VMware vSphere platform.

The latest version of the vendor's AI and data analytics software suite adds support for Red Hat OpenShift running in the public cloud, as well as for the new Microsoft Azure NVads A10 v5 virtual machine series.

The NVads series, powered by Nvidia A10 Tensor Core GPUs, enables enterprises to run graphics, video and AI workloads, including deep learning inferencing.

Nvidia also updated its TAO Toolkit low-code model training system with new features, including TensorBoard integration and new pretrained models.

The update follows the introduction of Nvidia AI Enterprise 2.0 in March and an update in January that featured integration with the VMware Tanzu suite of tools for managing Kubernetes clusters in public and private clouds.

Enterprise 2.0 supports every major data center and cloud platform, including bare-metal servers, virtualized infrastructure and CPU-only systems, according to Nvidia. The suite also now supports Red Hat OpenShift.

With the main Nvidia AI software platform, enterprises can now use containerized machine learning tools to build, scale and share their models on different systems with VMware vSphere.

"The problem continues to be for many IT leaders or enterprise IT leaders, is they don't often have the multi-skill set to deliver GPUs in their context," said Chirag Dekate, an analyst at Gartner.

Because a GPU stack is fundamentally different from the CPU-only cluster architecture that most IT professionals are familiar with, any time new GPU products come into play, they increase the complexity of the overall AI stack, Dekate said.

To help IT professionals work with its advanced AI GPUs, Nvidia has taken the approach of partnering with key infrastructure providers such as VMware, Red Hat and Domino Data Lab, the enterprise machine learning operations vendor, Dekate noted.

This approach enables enterprises that use VMware to take advantage of their existing virtual machine ecosystem skill sets to use GPUs effectively and efficiently.

"It's about enabling the IT teams to leverage their existing skill sets and apply and leverage a new technology domain, like the GPUs," he said.

Compared with those of competitors such as AMD, Nvidia's integration plan is well laid out, Dekate said. Its hardware and software ecosystems work together.


"Nvidia is not just delivering the infrastructure capabilities, not just delivering to the data scientists, and stakeholders, they're also leaning into the enterprise and enabling them to create platforms, whether it's on premises or any cloud, even hybrid," Dekate said. "They have a really comprehensive strategy that others don't have."

Riva and Merlin

Beyond Enterprise 2.0, other AI products Nvidia introduced this year include updated versions of its Riva and Merlin systems.

Nvidia Riva 2.0 is now generally available. The speech AI software development kit includes pretrained models that enable developers to customize speech AI applications such as conversational AI services.

According to Huang, Riva is being used by enterprise AI app vendors including Snap, RingCentral and Kore.ai.

Riva 2.0 includes speech recognition in seven languages and neural text-to-speech with male and female voices.
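For developers evaluating Riva, the sketch below shows roughly what an offline speech recognition request against a Riva server looks like from Python. It is a minimal illustration, not part of Nvidia's announcement: it assumes the nvidia-riva-client package and a Riva server at localhost:50051, and the audio file name and configuration values are placeholders.

```python
# Minimal sketch: offline speech recognition against a Riva server.
# Assumes the nvidia-riva-client package and a Riva ASR server at localhost:50051.
import riva.client

auth = riva.client.Auth(uri="localhost:50051")          # server address is an assumption
asr_service = riva.client.ASRService(auth)

config = riva.client.RecognitionConfig(
    encoding=riva.client.AudioEncoding.LINEAR_PCM,      # 16-bit PCM WAV input
    sample_rate_hertz=16000,                             # placeholder sample rate
    language_code="en-US",                               # one of the supported languages
    max_alternatives=1,
    enable_automatic_punctuation=True,
)

with open("sample.wav", "rb") as f:                      # hypothetical audio file
    audio_bytes = f.read()

response = asr_service.offline_recognize(audio_bytes, config)
print(response.results[0].alternatives[0].transcript)    # best transcript for the clip
```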

The vendor also unveiled Merlin 1.0, an update of its AI recommender framework that includes Merlin Models and Merlin Systems. With these two systems, data scientists and machine learning engineers can determine which features and models will fit their applications.

With both these capabilities, Nvidia is developing higher-level vertical integration and going beyond just being a provider of AI hardware infrastructure, middleware and an AI development stack, Dekate said.

"They're kind of going full stack," he said.

Riva and Merlin are both available on Nvidia Launchpad, the vendor's enterprise AI development platform. Enterprises with an Nvidia GPU ecosystem can use the Launchpad platform to access many of the vendor's AI tools.

"Launchpad basically acts as the incubator that enables access to these vertically integrated [capabilities] like Riva for speech and Merlin for recommender systems," Dekate said.

Launchpad is available in nine global regions.

With its AI software upgrades, Nvidia is showcasing large-scale applications for its hardware and software tools "that are going to have an impact and reflect a leap forward in applying AI at scale," said Dan Miller, an analyst at Opus Research. "Nvidia is starting to create a platform and an ecosystem strategy that becomes really hard to compete with," he said.

[Screenshot] Riva 2.0 is now generally available and includes speech recognition in seven languages.

Omniverse OVX

Nvidia also unveiled a preview of Omniverse OVX, a computing system that will enable designers, engineers and planners to build digital twins and create simulated environments for the virtual and augmented reality worlds of the omniverse, with uses including industrial and construction testing.

OVX will be available later this year, Nvidia said.

Facebook, Microsoft and other major tech players are also developing technologies for the omniverse.

The bigger picture

But with all the new AI software and hardware tools unveiled today, other observers also said they see Nvidia distinguishing itself from its competitors.

"Nvidia has set itself apart by positioning itself as a software company that builds hardware that in turn supports their existing and future products," said Dan Newman, an analyst at Futurum Research. "Fundamentally, that will be their advantage going forward over chip makers that root themselves only in hardware. "

Other updates showcased at the conference include the latest release of Nvidia Triton, which now contains a model navigator for accelerated deployment of optimized models, and the latest version of NeMo Megatron, a framework for training large language models that now adds support for training in the cloud.
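As a rough illustration of where Triton fits once a model is deployed, the sketch below sends an inference request to a running Triton server using the tritonclient Python package. The server address, model name and tensor names are hypothetical and would depend on the model repository actually in use.

```python
# Minimal sketch: querying a Triton Inference Server over HTTP.
# Assumes tritonclient[http] is installed and a server is running at localhost:8000.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# "my_model", "INPUT0" and "OUTPUT0" are placeholders for whatever model
# and tensor names the deployed model repository defines.
input_tensor = httpclient.InferInput("INPUT0", [1, 16], "FP32")
input_tensor.set_data_from_numpy(np.random.rand(1, 16).astype(np.float32))

result = client.infer(
    model_name="my_model",
    inputs=[input_tensor],
    outputs=[httpclient.InferRequestedOutput("OUTPUT0")],
)
print(result.as_numpy("OUTPUT0"))  # model output as a NumPy array
```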

Editor at Large Ed Scannell contributed to this story.
