
NVMe, Optane, GPUs exert big impact on enterprise storage

Enterprises in need of high-performance storage for AI, analytics and virtualization may find help in new NVMe, Optane and processor technologies, a Flash Memory Summit panel says.

Technologies that are having the greatest impact on enterprise storage, and that are poised to become even more important in the future, include high-speed NVMe, innovative new processors, and Optane and other emerging memory products.

That was the conclusion of a panel of experts convened at this week's virtual Flash Memory Summit.

Demanding workloads that are driving the industry to produce faster and more power-efficient enterprise storage and memory technologies include AI, big data analytics, virtualization and high-performance computing.

"In order to manage AI, we have to rethink the whole compute architecture. And that's both hardware and software," said Dave Eggleston, principal at Intuitive Cognition Consulting. 

Feeding the 'AI beast'

Technologies that play a role in feeding "the AI beast" with data include CPUs, GPUs, new data processing units (DPUs), field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs), Eggleston said. Latency-lowering NVMe and NVMe over Fabrics (NVMe-oF) help to tie the compute elements together, he added.

"How do you then reduce the problem of the CPU being a traffic cop for data moving between these different processing elements?" Eggleston said. "One of the key things is going to be how do we move some of that work out to other places, maybe farther out on the network or into the storage controller itself."

Eggleston said processor-equipped computational storage devices are gaining traction to unburden the server CPU and handle tasks such as encryption, compression and database search at the edge. He said DPUs can also help by moving intelligence out to the network card to handle some of the work. 
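
Standard programming interfaces for computational storage are still emerging, so any concrete example is necessarily hypothetical. The sketch below contrasts the conventional host-side compression path with an imagined csd_compress_in_place() offload call; only the zlib path reflects a real API, and the offload function is a stub standing in for whatever vendor or SNIA-defined interface a real device would expose.

```c
#include <stdio.h>
#include <string.h>
#include <zlib.h>   /* host-side compression, for contrast; link with -lz */

/* HYPOTHETICAL offload call: a real computational storage drive would
 * accept something like this as a vendor-specific command and compress
 * the blocks with its onboard processor, so the raw data never crosses
 * the bus and the host CPU stays free. Stubbed for illustration only. */
static int csd_compress_in_place(int dev_fd, long start_lba, size_t nblocks)
{
    (void)dev_fd; (void)start_lba; (void)nblocks;
    return 0;   /* pretend the drive did the work */
}

/* Conventional path: raw data crosses the bus into host memory, the
 * CPU compresses it, and the result crosses the bus again going back. */
static int host_compress(const unsigned char *in, size_t in_len,
                         unsigned char *out, uLongf *out_len)
{
    return compress(out, out_len, in, in_len) == Z_OK ? 0 : -1;
}

int main(void)
{
    unsigned char raw[4096];
    unsigned char packed[8192];          /* comfortably above compressBound(4096) */
    uLongf packed_len = sizeof(packed);

    memset(raw, 'x', sizeof(raw));
    if (host_compress(raw, sizeof(raw), packed, &packed_len) == 0)
        printf("host CPU path: 4096 -> %lu bytes\n", (unsigned long)packed_len);

    if (csd_compress_in_place(-1, 0, 8) == 0)
        printf("offload path: drive compressed 8 blocks in place\n");
    return 0;
}
```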

CPU vs. GPU vs. DPU

Rob Davis, vice president of storage technology at Nvidia, said the DPU is designed to accelerate the secure movement of data around the data center between CPUs and GPUs, which focus on general-purpose computing and accelerated computing, respectively.

"For many years, CPUs were really the only programmable element in computers, and more recently, GPUs have come to the forefront," Davis explained. "Originally, they were used to deliver real-time graphics, but their parallel processing capabilities made them ideal for accelerating computing tasks like artificial intelligence, deep learning and big data analytics."

Davis said Nvidia's new GPUDirect Storage can bypass the host CPU completely and move data directly into the GPU's memory from the network using remote direct memory access (RDMA) technology for "an order of magnitude increase in performance." Nvidia is working with major players and many startups in prepping GPUDirect Storage, which is currently in beta, for general release, Davis said.
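
The direct path Davis describes is exposed to applications through Nvidia's cuFile library. Below is a minimal sketch of a GPUDirect Storage read based on the publicly documented cuFile calls; error handling is omitted, and the file path and transfer size are placeholders.

```c
#define _GNU_SOURCE           /* for O_DIRECT */
#include <fcntl.h>
#include <unistd.h>
#include <cuda_runtime.h>
#include <cufile.h>           /* Nvidia GPUDirect Storage API */

int main(void)
{
    const size_t len = 1 << 20;   /* 1 MiB transfer, placeholder size */
    void *gpu_buf = NULL;

    cuFileDriverOpen();                  /* bring up the GDS driver    */
    cudaMalloc(&gpu_buf, len);           /* destination is GPU memory  */
    cuFileBufRegister(gpu_buf, len, 0);  /* pin the buffer for DMA     */

    /* O_DIRECT keeps the page cache -- and a host-memory bounce copy --
     * out of the data path. */
    int fd = open("/data/sample.bin", O_RDONLY | O_DIRECT);

    CUfileDescr_t descr = { 0 };
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
    CUfileHandle_t fh;
    cuFileHandleRegister(&fh, &descr);

    /* DMA straight from the drive (or the NIC, via RDMA for NVMe-oF)
     * into GPU memory; the host CPU never touches the payload. */
    cuFileRead(fh, gpu_buf, len, 0 /* file offset */, 0 /* buf offset */);

    cuFileHandleDeregister(fh);
    close(fd);
    cuFileBufDeregister(gpu_buf);
    cudaFree(gpu_buf);
    cuFileDriverClose();
    return 0;
}
```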

Applications accelerating through Optane

One of the technologies that Intel is focusing on to accelerate especially demanding applications is the non-volatile 3D XPoint memory it co-developed with Micron to fill the performance gap between DRAM and NAND flash. Intel uses the Optane brand name for its 3D XPoint-based solid-state drives (SSDs) that began shipping in 2017 and dual in-line memory modules (DIMMs) that hit the market in 2019.

[Photo: Intel's Optane persistent memory modules offer higher capacity options at a lower cost than DRAM.]

Jim Handy, director and chief analyst at Objective Analysis, said Oracle databases in particular have done well by harnessing Optane SSDs, and they could achieve even greater performance gains using Optane DIMMs tightly coupled with Intel's latest Xeon processors. Handy said adoption has been slower than expected, but the DIMM business is still on track to generate more than $10 billion in revenue within a few years.

"Intel's plowed a ton of money into this," Handy said, noting that he expects Optane/3D XPoint to pull ahead of other emerging memory technologies. "They want this badly enough that they've invested to the tune of a couple of billion dollars a year for the past three years." 

Most promising emerging memory

Tom Coughlin, president of Coughlin Associates, said other emerging memory technologies that could also play a bigger role in the future through discrete and embedded products include magnetoresistive random access memory (MRAM), resistive RAM (ReRAM or RRAM), and ferroelectric RAM (FeRAM or FRAM). Coughlin and Handy co-authored a recent report that predicts a $36 billion market by 2030 for emerging memory technologies, led by 3D XPoint and MRAM. The largest company making standalone MRAM products, Everspin Technologies, claims to have shipped more than 120 million chips, Coughlin said.

"There's getting to be a whole zoo, if you will, of different types of memories. We're expanding the whole memory market," Coughlin said. "And what's intriguing about this is with NOR flash [and] possibly SRAM reaching their limits in terms of scaling, we may be moving from volatile memories into non-volatile memory architectures. And these could have an enormous impact on the design of future devices, both for embedded and large systems as well."

Eggleston said Optane persistent memory (PMem) DIMMs could add capacity at a lower cost than DRAM to accelerate in-memory databases such as SAP HANA and enable more virtual machines per server in virtualization environments. Another key advantage of persistent memory over DRAM is reduced power consumption, Eggleston added.
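
Applications typically reach persistent memory by mapping it into their address space rather than issuing block I/O. Here is a minimal sketch using PMDK's libpmem, assuming a DAX filesystem mounted on the Optane DIMMs; the path and size are placeholders.

```c
#include <string.h>
#include <libpmem.h>   /* PMDK: link with -lpmem */

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Map a file on a DAX filesystem backed by the PMem DIMMs. Loads
     * and stores then hit the modules directly -- no page cache, no
     * block layer, byte-addressable like DRAM. */
    char *base = pmem_map_file("/mnt/pmem/records", 4096,
                               PMEM_FILE_CREATE, 0600,
                               &mapped_len, &is_pmem);
    if (base == NULL)
        return 1;

    /* An ordinary store... */
    strcpy(base, "committed record");

    /* ...made durable by flushing CPU caches to the media. This is the
     * persistence step that volatile DRAM cannot offer. */
    if (is_pmem)
        pmem_persist(base, mapped_len);
    else
        pmem_msync(base, mapped_len);   /* fallback on non-PMem media */

    pmem_unmap(base, mapped_len);
    return 0;
}
```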

On the software side, one recent development that caught Eggleston's attention was the open-source Distributed Asynchronous Object Storage (DAOS) that Intel developed to work with the Optane PMem DIMMs and handle "small and misaligned I/O." DAOS serves as "an example of how a mix of using persistent memory along with storage can be really beneficial to HPC and AI workloads," Eggleston said.
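
The "small and misaligned I/O" problem DAOS targets is easy to see in miniature: on a block device, changing a few bytes costs a full-sector read-modify-write, while on PMem it is a store plus a cache flush of exactly the bytes that changed. The contrast below illustrates that underlying property, not the DAOS API itself; paths and offsets are placeholders.

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <libpmem.h>   /* link with -lpmem */

/* Block path: a 13-byte change still drags a whole sector through a
 * read-modify-write cycle at the device. */
static void block_update(int fd, const char *patch, size_t n)
{
    char sector[4096] = { 0 };
    pread(fd, sector, sizeof(sector), 0);    /* read the full sector  */
    memcpy(sector + 100, patch, n);          /* touch a few bytes     */
    pwrite(fd, sector, sizeof(sector), 0);   /* write the full sector */
}

/* PMem path: store only the bytes that changed and persist only that
 * range -- the byte granularity DAOS builds on. */
static void pmem_update(char *base, const char *patch, size_t n)
{
    memcpy(base + 100, patch, n);
    pmem_persist(base + 100, n);
}

int main(void)
{
    size_t mapped_len;
    int is_pmem;
    int fd = open("/tmp/blockfile", O_RDWR | O_CREAT, 0600);
    char *base = pmem_map_file("/mnt/pmem/objfile", 4096,
                               PMEM_FILE_CREATE, 0600,
                               &mapped_len, &is_pmem);

    if (fd >= 0)
        block_update(fd, "misaligned write", 13);
    if (base != NULL)
        pmem_update(base, "misaligned write", 13);

    if (fd >= 0) close(fd);
    if (base != NULL) pmem_unmap(base, mapped_len);
    return 0;
}
```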

Many analysts expect that Optane DIMMs will have a greater impact than Optane SSDs over the long haul and that prices for both will decline as manufacturers produce more chips. Eric Herzog, CMO and vice president of worldwide storage channels at IBM, said Optane SSDs are currently relegated to "niche" status in enterprise storage because of their limited capacity and high cost relative to flash.

"Storage class memory will get there, but right now, it's really for the speed demon crew," Herzog said. "We do sell some of it, but compared to flash, it's miniscule at this time."

Areas where IBM has seen significant growth are enterprise storage systems with NVMe SSDs and all-flash arrays that support NVMe-oF. Herzog said IBM started shipping NVMe SSDs in flash arrays in the summer of 2018, at a 30% price reduction over the prior generation, and introduced NVMe over Fibre Channel about six months later.

"One of the myths of NVMe is that it costs more money," Herzog said. "I'm here to say that's bull hockey."
