What Microsoft, Nvidia AI supercomputer partnership means
The public cloud supercomputer will incorporate A100 and H100 GPUs and the chipmaker's AI enterprise software suite. It will enable enterprises to build large AI models at scale.
Nvidia and Microsoft now have a multiyear partnership to build a cloud AI supercomputer that will enable enterprises to build large language models and complex recommender systems.
The vendors revealed on Nov. 16 that the supercomputer will be powered by Microsoft Azure's infrastructure and Nvidia's GPUs, networking and AI software stack.
Foundation models and generative AI
Nvidia will also use Azure's VM instances for research and development in generative AI and foundation models. Generative AI and foundation models such as GPT-3 enable enterprises and organizations to create text, images or code. Meanwhile, Microsoft customers will have access to Nvidia's full stack of AI workflows and software development kits.
Azure's AI supercomputer will be the first public cloud supercomputer to incorporate Nvidia's AI stack with A100 and H100 GPUs, the Nvidia Quantum-2 400 Gbps InfiniBand networking system and the Nvidia AI Enterprise software suite, the vendors said.
The collaboration comes as generative AI and foundation models continue to grow in popularity among organizations, driven by the release of GPT-3, image generation tools such as DALL-E and Stable Diffusion, and Microsoft's GitHub Copilot.
Microsoft's search for alternatives
The partnership reflects increased demand from Microsoft's customers for access to Nvidia GPUs, specifically H100s, said Karl Freund, founder and analyst at Cambrian AI Research.
"It's a bit surprising that they came out at supercomputing with this big announcement, but not terribly so," Freund said. "You want to be big at supercomputing, so with this announcement they were able to say, 'We're doing something big here.'"
While Microsoft has had a relationship with Nvidia in the past, it has also shown interest in alternatives to the chip vendor. For example, in 2019, Microsoft formed a partnership with Graphcore. The chip startup had a deal to provide Microsoft with processors for its cloud computing platform. However, that partnership has since ended.
"Nobody can come anywhere close to Nvidia's software," Freund said, adding that while competitors such as Intel or Graphcore could compete on performance, they can't compete effectively in the software arena. "If you are Microsoft ... there's really only one choice, and that's Nvidia."
If Nvidia can deliver H100s in volume, the partnership should go well, he said. That would mean that by the first quarter of next year, enterprises should see thousands of Nvidia GPUs available on Azure.
The alliance also provides insight into what infrastructure Microsoft and OpenAI -- in which Microsoft has invested $1 billion -- could be using to develop GPT-4, the next generation of the powerful large language model, he continued.
"My assumption is it's Nvidia A100," Freund said. "If ... it's not Nvidia A100, then Microsoft probably will not be in position to announce this big intention to continue to invest in Nvidia hardware."
Meanwhile, Nvidia and Microsoft are both positioning themselves to support enterprises as they increasingly train generative AI and foundation models.
"These giant foundation models as well as generative AI models, they're not just trained on small-scale infrastructure," said Paresh Kharya, senior director of product management and marketing for accelerated computing at Nvidia. "It's the full stack that's really important to be able to offer really compelling and efficient [products] to address enterprise demand."