Accelerated processing units advance data center performance

As data center workflows grow more complex, it's worth exploring options for accelerated processing, such as GPUs or TPUs.

Demanding applications, such as artificial intelligence, machine learning, video encoding and genomic research, are now out of the research laboratory and are dramatically altering the server microprocessor market. With accelerated processing units, organizations can increase processing speeds for data center workflows.

"Companies need microprocessors with more horsepower than traditional CPUs to handle new compute-intensive workloads," said Karl Freund, consulting lead for high-performance computing and deep learning at Moor Insights & Strategy.

Early advancements included graphics processing units (GPUs) and tensor processing units (TPUs). Now, newer processing units are emerging to support these applications, leaving the server microprocessor market in a state of flux.

Historically, GPUs were mainly used for processing video data in areas like the gaming industry, but in the early 2000s, Nvidia Corp. began selling them to customers with new, high-performance workloads.

"GPUs have thousands of cores, allowing them to be much better at performing parallelized workloads, such as deep learning, than CPUs," said Peter Rutten, research manager of server solutions at IDC. "What's more, the dominant vendor, Nvidia, has done a very good job at building a complete software stack around its GPUs."

The result is rapid growth in the microprocessor accelerator market. The worldwide market for accelerated compute -- servers with acceleration technology -- will grow from $15 billion in 2019 to $19 billion in 2020, according to IDC, making it a fast-growing segment of the server market.

A growing field

Many vendors are looking to take advantage of this microprocessor market growth. In February 2018, Google revealed its TPU cloud service, which it claims performs up to 30 times faster than a GPU for machine learning applications.
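For readers curious what consuming such a service looks like, here is a minimal sketch -- assuming TensorFlow 2.x and an already-provisioned Cloud TPU, with "my-tpu" as a placeholder name not taken from this article -- of attaching a Keras model to a TPU.

# A minimal sketch, assuming TensorFlow 2.x and a provisioned Cloud TPU named "my-tpu".
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# TPUStrategy replicates the model across the TPU's cores.
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# A subsequent model.fit(...) call runs its training steps on the TPU rather than the host CPU.

The point of the sketch is that the accelerator is consumed as a cloud service: the company never installs or maintains the chip itself.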

Industry behemoths, like Amazon Web Services, Apple, Facebook, IBM and Microsoft, are all investing billions of dollars to build out their data centers to handle new public cloud workloads. Their researchers are looking for ways to lower operating costs, and these companies have the expertise to design new microprocessors that improve server performance.

For instance, Microsoft now uses field-programmable gate arrays (FPGAs) for its speech recognition applications. Freund expects that Microsoft will eventually offer cloud services powered by these customizable chips.

Here come the startups

Microprocessor suppliers like Intel have been investing in this market, and a number of startups have emerged, including Cambricon Technologies, Cerebras Systems, CDSM Interactive Solutions, Horizon Robotics, Tenstorrent and Wave Computing.

As of July 2018, the supplier market is at an embryonic stage of development. In addition to developing new accelerated processing units, vendors need to create a software ecosystem -- a series of programming and management tools -- so third parties can build applications that take advantage of the high-performance chips.

Within this initial stage, it's unclear which microprocessors will gain traction.

"The accelerator market is now in a nascent state where customers are figuring out what these systems can do and how the vendors' various solutions perform with their applications," said Alan Priestley, research director at Gartner.

Bringing accelerated processing units to the data center

These hardware changes will affect the data center in a few ways. In some cases, companies will add accelerators to their servers and build their own high-performance applications.

"It makes sense for a large financial services company to build custom intelligent applications because so much money is at stake," Freund said.

But developing high-performance applications to pair with accelerators is difficult.

"The ability needed to develop these high-performance applications is beyond most firms' skill sets," Priestley said. Few programmers have experience with these systems, which can make application development expensive.

Vendors offer companies the ability to tap into high-volume processing services via the cloud. This option will see more adoption because service agreements are simpler to deploy than the DIY route, require a low investment and shift the ongoing maintenance burden from the company to the cloud vendor.

Regardless of how enterprises approach accelerated processing units, these changes will dramatically affect server design, with new configurations for switching and workload prioritization.

Since the 1970s, businesses have relied on Intel microprocessors to support their workloads, but, moving forward, they will deploy a wider variety of server microprocessors.
