HPE offers Cray supercomputer cloud service for AI models

HPE unveils a cloud AI service, GreenLake for Large Language Models, powered by its Cray supercomputers working in tandem with its machine learning software.

HPE's new cloud service delivers supercomputing infrastructure and software to enterprise IT pros, providing access to the technologies they need to develop AI models.

The new GreenLake for Large Language Models (LLMs) is a multi-tenant supercomputing cloud service that provides access to Cray supercomputers and HPE's own machine learning software. The service will allow a wide range of IT pros to train, tune and deploy large-scale AI applications and services, delivered in partnership with Germany-based AI startup Aleph Alpha.

Leveraging the exascale capabilities of a supercomputer, IT shops can develop more competitive AI offerings faster and more cost-effectively, according to HPE. The company did not provide pricing for the new service.

To date, developing AI software using supercomputers has been restricted to a select group of scientific researchers and government agencies, HPE officials said.

"This announcement opens up possibilities for users that don't have access to supercomputers to develop large language models using a multi-tenant instance of a supercomputing platform," said Justin Hotard, executive vice president and general manager at HPE's high-performance computing (HPC) and AI businesses, in a briefing with reporters. "For instance, this [LLM] service can be used in applications requiring tax and image processing, or where analysis is involved to make predictions."

Users can upload their own data to train and tune a customized model solely for their own use, he added. HPE expects to deliver additional services later this year and next aimed at a number of commercial markets, including climate modeling, drug discovery, financial services, manufacturing and transportation.
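For readers wondering what training and tuning a model on proprietary data typically involves, the following is a minimal sketch in Python using the open source Hugging Face Transformers and Datasets libraries. It is an illustrative assumption, not HPE's or Aleph Alpha's actual interface; the base model name and data file are placeholders.

    # Minimal fine-tuning sketch with Hugging Face Transformers.
    # Model name and data path are placeholders, not HPE specifics.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                              TrainingArguments, DataCollatorForLanguageModeling)

    model_name = "gpt2"                                # placeholder base model
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token          # GPT-2 has no pad token by default
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # "Upload your own data": a plain-text file of domain documents (placeholder path).
    dataset = load_dataset("text", data_files={"train": "my_company_docs.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="tuned-model",
                               per_device_train_batch_size=4,
                               num_train_epochs=1),
        train_dataset=tokenized,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()                    # tune the base model on the private data
    trainer.save_model("tuned-model")  # the customized model stays with the user

The point of the sketch is the workflow, not the tooling: a pretrained base model is adapted to a customer's own documents, and the resulting weights remain private to that customer.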

Some analysts see the supercomputer-based service as beneficial both for HPE's competitive position in that market and for serving the priorities of companies that don't want, or can't afford, a multimillion-dollar supercomputer on premises.

"This will help democratize users' access to supercomputers on a consumption-like basis, and they are using GreenLake to do it, which helps [HPE] out," said Dan Newman, chief analyst at Futurum Research and CEO of The Futurum Group. "This should also help HPE's billion-dollar green market that's becoming an important part of their overall strategy."

Another analyst believes the new service is well timed because users are finding it difficult to build HPC clusters in their own data centers.

"HPC is becoming a bigger workload for commercial enterprises, and they don't necessarily want to build their own HPC clusters in their own data centers," said Peter Rutten, research vice president for infrastructure systems and platforms at IDC. "The talk of the town is generative AI now, but this approach could be a practical way of having it both ways and for HPE to make its way into this world."

While HPE's service might offer advantages to a range of users, competing supercomputer providers, including Dell with its Apex services, Lenovo, Huawei, Fujitsu and NEC, are expected to offer similar services.

"HPE will absolutely have competition for this," said Jack Gold, principal analyst at J.Gold Associates.

While this is HPE's first foray into the cloud supercomputing AI services market, the company has delivered offerings such as GreenLake for Machine Learning, the HPE Machine Learning Development Environment and the HPE Machine Learning Data Management software over the past several years.

GreenLake for LLMs will simultaneously support thousands of CPUs and GPUs from leading chipmakers including Intel and AMD, allowing users to train and create more accurate AI models and deliver them to market faster, HPE said.
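To illustrate what spreading training across many GPUs looks like at the software level, the sketch below uses PyTorch's DistributedDataParallel for generic data-parallel training. It is a simplified stand-in, not the mechanism HPE's service or machine learning software actually exposes; the model, data and launch command are assumptions for illustration only.

    # Generic data-parallel training sketch with PyTorch DistributedDataParallel.
    # Launch with one process per GPU, e.g.: torchrun --nproc_per_node=8 train.py
    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP
    from torch.utils.data import DataLoader, TensorDataset, DistributedSampler

    def main():
        dist.init_process_group("nccl")               # one process per GPU
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        model = torch.nn.Linear(1024, 1024).cuda(local_rank)  # placeholder model
        model = DDP(model, device_ids=[local_rank])   # synchronizes gradients across GPUs

        data = TensorDataset(torch.randn(4096, 1024), torch.randn(4096, 1024))
        sampler = DistributedSampler(data)            # each rank trains on a different shard
        loader = DataLoader(data, batch_size=32, sampler=sampler)

        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
        loss_fn = torch.nn.MSELoss()

        for epoch in range(2):
            sampler.set_epoch(epoch)                  # reshuffle shards each epoch
            for x, y in loader:
                x, y = x.cuda(local_rank), y.cuda(local_rank)
                optimizer.zero_grad()
                loss = loss_fn(model(x), y)
                loss.backward()                       # gradient all-reduce happens here
                optimizer.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

The same pattern, sharding the data and averaging gradients across workers, is what lets a training job scale from a handful of accelerators to the thousands of CPUs and GPUs a supercomputing service can bring to bear.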

The service will be hosted on the HPE Cray XD2000 and XD6500 supercomputers. HPE will offer a range of configuration options for the underlying technologies, including CPUs, accelerators, storage and interconnects, Hotard said.

The AI services delivered by GreenLake for LLMs will be available to users on demand. Users will have full access to the HPE Cray programming environment that contains an integrated set of tools for optimizing HPC and AI applications, as well as for porting and debugging them.

Users can opt to deploy the HPE Cray XD2000 and XD6500 supercomputers in cost-conscious remote colocation facilities, such as those of QScale, which run supercomputers on nearly 100% renewable energy. And because HPE has used liquid cooling in other on-premises installations hosting its supercomputers, Hotard said, the company has made the process of recycling wastewater for reuse more efficient.

The service will be available by the end of this year in North America, with availability in Europe scheduled for sometime in 2024.

As Editor at Large in TechTarget Editorial's News Group, Ed Scannell is responsible for writing and reporting breaking news, news analysis and features focused on technology issues and trends affecting corporate IT professionals.
