
Meta infrastructure focus key to AI and metaverse ambitions

The company sees its new AI chip, data center and supercomputer as ways to develop a strong foundation for what it calls an immersive metaverse experience.

Social media and technology giant Meta is intent on building what it sees as a solid hardware and software infrastructure to succeed in the long term as it pushes further into AI and the metaverse.

Over the past two weeks, Meta and its Meta AI division revealed a host of plans for its products and internal infrastructure, including its AI supercomputer, data center and AI coding assistant platform.

The Facebook parent company also revealed for the first time that it created an AI chip: the Meta Training and Inference Accelerator (MTIA).

A pivot

Meta's focus on its infrastructure is a good sign for the company's growth and longevity, said Ray "R" Wang, founder and analyst at Constellation Research.

"Meta is in the right place now," he said.

The investment enables the company to pivot more toward AI and away from its intense focus of recent years on the metaverse world of virtual and augmented reality applications, Wang said.

This change of direction comes after Meta spent roughly $36 billion building the metaverse, pumping that money into its Reality Labs division. But despite the large sum it spent on the division, Meta saw few returns on the outlay.

However, Meta's rededication to AI technology is not an entirely new course. It has used AI recommenders and other AI systems for almost two decades.

For example, Facebook's news feed, long powered by AI, launched in 2006. In 2016, Meta also open sourced PyTorch, a machine learning framework for deep neural networks and deep learning research that underlies all of Facebook's AI workloads. Last December, Meta announced PyTorch 2.0.

"This is really an evolution for us," Meta vice president of engineering Aparna Ramani said during a streamed panel discussion at Meta's At Scale conference on May 18. "What's shifting now is the pace of innovation is really rapidly increasing."

Meta's current path of using automation and AI to create efficiencies is "smart for their future," Wang said.

Even Meta's recent layoffs were the right move for the company's future, Wang said, adding that the company was somewhat bloated with talent and now can focus more on attracting the right talent.

CEO Mark Zuckerberg said in March that the company planned to cut about 10,000 jobs by May. Some of those positions were eliminated in April. Other layoffs will come next week.

"Now they have to prioritize what to do with their network," Wang said.

While it focuses on building a solid infrastructure that can serve both AI and metaverse initiatives, Meta can keep working quietly on the metaverse away from the public eye.

"They can do both at once as AI is foundational to the metaverse," Wang said. "They have to beef up infrastructure for the metaverse."

Meta's custom AI chip, the Meta Training and Inference Accelerator, which the company says will help it create better user experiences on its social media platforms.

A custom AI chip

First up in building that foundation is creating a silicon chip.

MTIA is Meta's in-house custom accelerator chip. Working alongside GPUs, the chip will help the tech giant deliver better performance and efficiency for each workload.

With MTIA, Meta aims to improve the user experience in its Facebook, Instagram and WhatsApp applications.

The accelerator will provide more accurate and engaging predictions, increase watch time and improve click-through rates, according to Meta.
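Meta has not published the internals of these prediction models, but the kind of click-through-rate scoring such recommendation workloads perform can be sketched with a toy logistic model. All names, features and weights below are illustrative, not Meta's actual system:

```python
import math

def predict_ctr(weights, features, bias=0.0):
    """Toy logistic model: combine a (user, ad) feature vector into a
    single score, then squash it into a 0-1 click probability."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative features for one user-ad pair:
# past engagement with similar ads, topical match to the user's interests
features = [0.8, 0.3]
weights = [1.2, 2.0]  # in a real system these are learned from training data

ctr = predict_ctr(weights, features)  # probability between 0 and 1
```

In production, models like this run across billions of candidate items, which is why inference throughput per watt, the problem MTIA targets, matters more than any single prediction.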

MTIA fulfills a need for developer workloads that neither CPUs nor GPUs can, Meta said. In addition, its software stack is integrated with PyTorch.

MTIA is a way for Meta to move into its next era of specialization, Gartner analyst Chirag Dekate said.

While GPUs are flexible, ever more computing power is needed to run the latest generative AI techniques and large language models. As a result, tech giants such as Meta and Google, with its TPU, have begun designing purpose-built chips to handle these much larger models.

"They are taking some of these neural networks, identifying commonalities across their mix of workloads and creating purpose-specific cases," Dekate added.

Meta's new AI silicon chip is also about being more AI-native, he said.

"It's not yesterday's technology," Dekate continued. "It's about innovating tomorrow's model platforms, model products and model ecosystem."

For example, Meta's metaverse strategy involves a highly immersive experience and ecosystem. This will likely involve not only VR/AR headsets but also virtual worlds populated by avatars with broader language support and more realistic movements. However, it will be hard for Meta to add advertising platforms to a metaverse ecosystem with its current infrastructure.

Therefore, Meta will likely evolve its hardware strategy to develop different families of chips that enable training and inference acceleration of generative AI models and multimodal AI that will help Meta create a better metaverse experience, Dekate said.

"These experiences require stitching together vision models, speech models and NLP [natural language processing] techniques," he said.

"It's not just about solving generative AI techniques," Dekate added. "It's about using a lot of these techniques as building blocks and building larger AI-native ecosystems that Meta specializes in, especially as it advances its vision of the metaverse."

Aiming for the future

However, building custom chips is an expensive endeavor that only the likes of Meta, Google and AWS can take on, given their financial resources.

"The scale of AI in their organization is so large, and more importantly, they have a precise understanding of the problems that they need to address not just today but in their AI-first future," Dekate said.


Those problems include research on how to optimize Meta's language models and platforms -- including Facebook, Instagram and WhatsApp -- with targeted ads. As a technology company with such a large social reach, Meta must ensure its language models extend across numerous world languages, using video, audio and images to deliver the right ads to the right populations.

Meta is using what it is learning from these platforms to create future immersive platforms at scale, including ones for the metaverse, Dekate said.

Part of this strategy is the next-generation data center. The new data center will be an AI-optimized design that supports liquid-cooled AI hardware and a high-performance AI network, according to Meta.

Meta also revealed it had completed the second-phase buildout of its AI supercomputer, the Research SuperCluster. The buildout has allowed the company to train large AI models, such as its large language model, LLaMA, on the supercomputer.

Earlier this year, Meta made LLaMA available as an open source model, heading in a direction that Microsoft, Google, and ChatGPT creator OpenAI have shied away from, citing the risks associated with people misusing the models.

"By open sourcing LLaMA, Meta hopes to accelerate innovation," Cambrian AI analyst Karl Freund said.

Despite criticism of its decision to make the technology open source, Meta's choice with LLaMA shows how it hopes to rise to the top of the AI industry.

"Meta wants to use AI in all its products and be a leader in creating new LLMs," Freund said, adding that beyond its numerous products for internal use, Meta plans to develop massive AI models and release them as open source to enable wide adoption of Meta technology across the industry.

"We've been building advanced infrastructure for AI for years now, and this work reflects long-term efforts that will enable even more advances and better use of this technology across everything we do," Zuckerberg said in a statement provided to the media.

Esther Ajao is a news writer covering artificial intelligence software and systems.
