Nvidia targets metaverse with new Omniverse Avatar Engine
The vendor's new cloud engine will help organizations easily build avatars and digital assistants. Nvidia also revealed plans to evolve USD with partners such as Pixar.
Nvidia has unveiled a set of tools and applications intended to make it easier for organizations to create content within the metaverse.
On the second day of the computer graphics conference Siggraph (Special Interest Group on Computer Graphics and Interactive Techniques) 2022, Nvidia CEO Jensen Huang shared details about a cloud engine for avatars, updates to the vendor's Omniverse platform, and plans to expand Universal Scene Description (USD) -- a framework used in visual effects, architecture, design and robotics.
Nvidia's push toward the metaverse comes as a new report by market research firm Technavio predicts the metaverse market will be worth more than $50 billion within the next four years. It also comes one day after the chip vendor's shares dipped amid news that it missed Q2 projections.
"Given the hype, it is only a matter of time before Nvidia wants to cash in," said Andy Thurai, an analyst at Constellation Research.
Hype or not, with the release of these tools, Nvidia is making it easier for enterprises to generate content in the metaverse, said Tuong Nguyen, an analyst at Gartner.
"They want to make content and tools more widely available so more people can start building toward the vision of the metaverse," Nguyen said.
Avatar Cloud Engine
One product Nvidia unveiled is Omniverse Avatar Cloud Engine (ACE). ACE is a suite of cloud-based AI models and services that enables users to easily build avatars that understand multiple languages, interact with their environment and respond to different prompts.
"The metaverse without representations of real humans inside and without human-like representations of our artificial intelligence inside it will be a very dull and sad place," said Rev Lebaredian, vice president of Omniverse and simulation technology at Nvidia.
The cloud engine is meant to make avatar creation easier for enterprises so that development is not a barrier to adoption, Nguyen said.
ACE is built on Nvidia's Unified Compute Framework with tools such as Nvidia Riva for speech AI applications, Nvidia Metropolis for computer vision, Nvidia NeMo Megatron for large language models and Nvidia Omniverse.
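Nvidia has not exposed ACE itself as code, but one of its building blocks, Riva, ships a Python client. Below is a minimal, hedged sketch of that speech layer, not ACE's own interface; it assumes a Riva server is already running at localhost:50051 and that the stock "English-US.Female-1" voice is deployed.

# Hedged sketch of one ACE building block: Riva speech synthesis via the
# nvidia-riva-client Python package. Assumes a Riva server is already
# running at localhost:50051; the voice name is illustrative.
import riva.client

auth = riva.client.Auth(uri="localhost:50051")
tts = riva.client.SpeechSynthesisService(auth)

# Synthesize one line of avatar dialogue to linear PCM audio.
response = tts.synthesize(
    "Hello, how can I help you today?",
    voice_name="English-US.Female-1",
    sample_rate_hz=44100,
)

# response.audio holds raw PCM bytes (no WAV header).
with open("reply.pcm", "wb") as f:
    f.write(response.audio)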
Enterprise uses
ACE's digital human capability is like the next generation of visual chatbots, where human-like avatars can interact with actual humans, said Kevin Krewell, an analyst at TIRIAS Research.
This will benefit enterprises that use support systems like chatbots.
"I think the combination of audio, plus a physical video avatar, will make [chatbots] easier to interact with for many people," Krewell said.
It will also benefit enterprises struggling with challenges such as the workforce shortage and the Great Resignation, which make it hard to hire and train enough employees to meet demand, Thurai said.
Industries such as transportation, entertainment, retail and customer care are leaning toward digital humans and virtual assistants as a result.

ACE could be useful in those industries, but adoption might be difficult, Thurai said.
"People hate dealing with another bot, which is essentially what this is," he said. "If they can act more like a human, with empathy, instead of being like a bot, it can gain widespread adoption."
Universal Scene Description
Another way Nvidia is trying to make it easier to generate content in the metaverse is through expanded support for USD, an open source framework for describing and exchanging animated 3D scenes.
Working with organizations such as Pixar, which developed USD and open-sourced it in 2016, as well as Adobe, Autodesk and Siemens, Nvidia intends to make USD "the core pillar and foundation of the metaverse so that it will be analogous to the metaverse, just like HTML is to the web," Nvidia's Lebaredian said.
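To make the HTML analogy concrete, here is a minimal sketch of authoring a USD scene with Pixar's open source pxr Python bindings (published on PyPI as usd-core). It mirrors Pixar's own "Hello World" USD tutorial and is not Nvidia-specific.

# Minimal USD authoring sketch using Pixar's open source pxr bindings
# (PyPI package: usd-core); it follows Pixar's "Hello World" tutorial.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("hello_world.usda")  # .usda is human-readable text

# Prims form a hierarchy, loosely the way elements nest in an HTML document.
UsdGeom.Xform.Define(stage, "/hello")
sphere = UsdGeom.Sphere.Define(stage, "/hello/world")
sphere.GetRadiusAttr().Set(2.0)  # attributes carry the actual scene data

stage.GetRootLayer().Save()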
Nvidia's expansion of USD includes international character support for users from different countries and languages.
USD would also make way for a multivendor metaverse, Krewell said, letting enterprises and users carry their avatars and 3D scenes across different platforms.
"USD has a lot of potential here to be the language for the future of the metaverse," he added.
New tools and updates to Omniverse
Nvidia also revealed new developer tools and applications for its Omniverse platform, a multi-GPU computing platform for creating virtual worlds. The tools include Omniverse Audio2Face and Omniverse DeepSearch.
Audio2Face is an AI tool that enables enterprises to create facial animation from audio files. It alleviates a major complaint about digital humans and virtual avatars, Thurai said.

"When communicating with a customer, they are not expressive," he said. Audio2Face addresses this by giving avatars facial expressions that are tied to the audio.
DeepSearch, now available to enterprise customers, enables enterprises to use AI to search untagged 3D databases of visuals with natural language. The feature targets one of the areas where the metaverse economy struggles: searching unstructured, untagged multimedia content, Thurai said.
DeepSearch can help with such searches and improve the reusability of untagged assets within organizations, he said.
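Nvidia has not published DeepSearch's interface, but the general technique -- ranking untagged visual assets against a free-text query in a joint text-image embedding space -- can be sketched with open source tools. Everything below (the model choice, file paths and query) is illustrative, not DeepSearch itself.

# Illustrative sketch of natural-language search over untagged visual
# assets; this shows the general embedding technique, not DeepSearch's API.
# Uses the open source sentence-transformers CLIP wrapper.
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")  # joint text-image embedding model

# Embed preview renders of untagged 3D assets (paths are hypothetical).
asset_paths = ["assets/chair_01.png", "assets/lamp_07.png", "assets/sofa_03.png"]
asset_embeddings = model.encode([Image.open(p) for p in asset_paths])

# Embed a free-text query and rank assets by cosine similarity.
query = model.encode("a mid-century leather armchair")
scores = util.cos_sim(query, asset_embeddings)[0]
for path, score in sorted(zip(asset_paths, scores), key=lambda s: -float(s[1])):
    print(f"{float(score):.3f}  {path}")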
Other Omniverse updates include NeuralVDB, the next generation of OpenVDB (an open source library for storing volumetric data). NeuralVDB combines AI and GPU optimization to shrink sparse volumetric data sets.
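NeuralVDB itself is not scriptable here, but the OpenVDB library it extends is. A minimal sketch, assuming the pyopenvdb bindings are installed, shows the sparse storage model that NeuralVDB's AI compression builds on.

# Minimal OpenVDB sketch, assuming the pyopenvdb bindings are installed;
# NeuralVDB layers AI and GPU optimization on top of this storage model.
import pyopenvdb as vdb

grid = vdb.FloatGrid()  # sparse 3D grid: empty space costs almost nothing
grid.name = "density"

# Activate a handful of voxels inside an otherwise empty volume.
accessor = grid.getAccessor()
accessor.setValueOn((100, 200, 300), 1.0)
accessor.setValueOn((101, 200, 300), 0.5)

print(grid.activeVoxelCount())  # only the voxels we touched are stored
vdb.write("density.vdb", grids=[grid])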