
Nvidia targets metaverse with OpenUSD NIM microservices

The vendor introduces new generative AI models that will be available as microservices. The new models show that the metaverse remains part of Nvidia's strategy.

AI hardware and software vendor Nvidia has introduced new NIM microservices for the Universal Scene Description, or OpenUSD, standard used in metaverse visual applications.

On July 29, during the SIGGRAPH computer graphics conference, Nvidia revealed that generative AI models for OpenUSD development will be available as Nvidia NIM microservices. They are now in preview.

The move comes after Nvidia introduced the microservices at its GTC developer conference earlier this year.

NIM microservices for USD

NIM microservices enable enterprises to create and deploy custom applications on their platforms.

The new OpenUSD NIM microservices will let developers incorporate generative AI copilots and agents into USD workflows. They include USD Code NIM, USD Search NIM and USD Validate NIM.
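As a rough illustration of how a deployed NIM is typically called, here is a minimal sketch, not an official example: it assumes a NIM container is already running locally and serving the OpenAI-compatible chat completions API on port 8000, and the model identifier used below is hypothetical.

import requests

# Hedged sketch: assumes a locally deployed NIM container exposing the
# OpenAI-compatible chat completions endpoint; "usd-code-nim" is a
# hypothetical model identifier, not a confirmed product name.
NIM_URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "usd-code-nim",
    "messages": [
        {"role": "user",
         "content": "Write OpenUSD Python that defines a sphere prim "
                    "named /hello/world on a new stage."},
    ],
    "temperature": 0.2,
}

response = requests.post(NIM_URL, json=payload, timeout=60)
response.raise_for_status()

# The generated OpenUSD Python code arrives as ordinary chat text.
print(response.json()["choices"][0]["message"]["content"])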

The USD Code NIM microservice answers general USD questions and generates OpenUSD Python code from text prompts. The USD Search NIM microservice lets developers search massive libraries of OpenUSD and image data using natural language or image inputs. And the USD Validate NIM microservice checks whether uploaded files are compatible with USD release versions.
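To give a sense of what such generated code looks like, below is a short example modeled on the well-known OpenUSD "Hello World" tutorial, using Pixar's pxr Python module. The file and prim names are illustrative, not actual output from the service.

# Illustrative OpenUSD Python, modeled on Pixar's "Hello World"
# tutorial; the kind of snippet a copilot such as USD Code NIM
# is meant to generate.
from pxr import Usd, UsdGeom

# Create a new USD stage backed by a .usda file on disk.
stage = Usd.Stage.CreateNew("HelloWorld.usda")

# Define a transform prim and a sphere prim beneath it.
UsdGeom.Xform.Define(stage, "/hello")
UsdGeom.Sphere.Define(stage, "/hello/world")

# Write the composed layer out to HelloWorld.usda.
stage.GetRootLayer().Save()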

Other microservices such as USD Layout NIM, USD SmartMaterial NIM and fVDB Mesh Generation NIM will be available soon.

Targeting the metaverse

Unlike the generative AI boom, the metaverse failed to gain immediate, widespread popularity. It remains largely confined to virtual and augmented reality headsets and some industrial applications such as digital twins.

In that context, the expansion of NIM microservices shows both Nvidia's commitment to generative AI and its ambitions to bridge the physical and digital worlds, according to Forrester Research analyst Charlie Dai.


"For the metaverse, Nvidia's Omniverse platform continues to be a cornerstone of their strategy to enable the creation and connection of 3D virtual worlds," Dai said. "These microservices are one of the steppingstones on this journey."

One challenge for the metaverse is the lack of standardization needed to bring together the elastic, scalable infrastructure, compute power, storage and data of a virtual environment. That gap has made it difficult for USD to serve as an interchange format for 3D and metaverse data, according to Constellation Research analyst Andy Thurai.

So, with its NIM microservices, "Nvidia hopes to bring generative AI capabilities to the robotics, metaverse, industrial design and digital twin [markets]," Thurai said.

With the visualization and simulation of environments through the USD Code NIM microservice, Nvidia can help users revisit parts of the metaverse that were previously too difficult to develop, such as virtual and augmented reality worlds, he added.

However, adoption will be the biggest challenge for the AI vendor.

"The industrial areas they are taking on are too many and are very distributed both in the technology and in standards," Thurai said. "It is going to be extremely difficult to convince [customers] to adopt this."

Meanwhile, the Alliance for OpenUSD was created to help industrial companies adopt advanced technologies like the metaverse, he added.

Beyond supporting the industrial metaverse, Nvidia is also looking ahead, Thurai continued. Generative AI adoption appears to be slowing, with enterprises no longer taking up the technology at the pace at which they experimented with it.

"If the market slows down, it could hit Nvidia hard," he said. "They are staying ahead of the curve by thinking and innovating this and being a market maker again."

Partner news

In another development, Nvidia partner Getty Images revealed on July 29 that it has updated its generative AI image model.

The updated model was built on the Nvidia Edify model architecture. Edify is part of Nvidia Picasso, a platform for building and deploying generative AI models for visual design.

Generative AI by Getty Images and Generative AI by iStock, also from Getty Images, are now updated with generation times of about six seconds per image, enhanced detail in generated images, support for longer prompts, and more control over output using shot type and depth of field. Users can also modify both AI-generated images and existing preshot images.

In addition, Nvidia introduced fVDB, a deep learning framework for generating AI-ready virtual representations of the real world.

The AI vendor also revealed that Hugging Face will offer developers Inference-as-a-Service powered by Nvidia NIM.

Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems.
