Move to NVMe technology will depend on need and speed

As a replacement for the SCSI storage protocol, NVMe can provide more responsive storage and lower latency. But transitioning to NVMe in the data center requires thorough planning.

Nonvolatile memory express is a storage protocol designed specifically for memory-based storage, enabling technologies like flash to reach their full potential. Because NVMe is a change to infrastructure, it will likely take years for a data center to take full advantage of it. But with each incremental step forward, organizations will see performance improvements.

NVMe is an industry standard that replaces the SCSI storage protocol. Where SCSI works through a single command queue, NVMe supports up to 64K queues, each up to 64K commands deep. The primary benefit of NVMe technology is more responsive storage thanks to lower latency. It also uses Peripheral Component Interconnect Express (PCIe) as the primary storage interconnect.
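
On a Linux host, the queuing difference can be read straight out of sysfs. The following is a minimal Python sketch, assuming a SCSI-attached drive at sda and an NVMe drive at nvme0n1; both device names are placeholders.

    from pathlib import Path

    def read_attr(path: str) -> str:
        """Return a sysfs attribute's value, or 'n/a' if it is absent."""
        p = Path(path)
        return p.read_text().strip() if p.exists() else "n/a"

    # A SCSI device exposes a single command queue; an NVMe device exposes
    # multiple hardware queues, each far deeper than a SCSI command queue.
    scsi_depth = read_attr("/sys/block/sda/device/queue_depth")
    nvme_depth = read_attr("/sys/block/nvme0n1/queue/nr_requests")
    nvme_queues = len(list(Path("/sys/block/nvme0n1/mq").glob("[0-9]*")))

    print(f"SCSI (sda) queue depth           : {scsi_depth}")
    print(f"NVMe (nvme0n1) requests per queue: {nvme_depth}")
    print(f"NVMe (nvme0n1) hardware queues   : {nvme_queues}")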

NVMe is available as a drive interconnect and as a network protocol, thanks to NVMe over Fabrics (NVMe-oF). The network version enables Ethernet and Fibre Channel (FC) networks to transport data at speeds similar to locally attached storage.
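
As a rough illustration of how a host joins an NVMe-oF network, the sketch below shells out to the standard nvme-cli tool from Python; the transport, address and NVMe Qualified Name (NQN) are hypothetical placeholders for a real fabric's values.

    import subprocess

    TRANSPORT = "rdma"                      # RDMA-capable Ethernet; FC fabrics differ
    TRADDR = "192.168.10.50"                # hypothetical target address
    NQN = "nqn.2014-08.org.example:array1"  # hypothetical subsystem NQN

    # nvme-cli establishes the fabric association; the target's namespaces
    # then show up on the host as ordinary /dev/nvmeXnY block devices.
    result = subprocess.run(
        ["nvme", "connect", "-t", TRANSPORT, "-a", TRADDR, "-s", "4420", "-n", NQN],
        capture_output=True, text=True,
    )
    print(result.stdout or result.stderr)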

The data center journey to NVMe will be multistep, with each step driven by practical need, as well as the speed at which NVMe technology can mature.

NVMe-based all-flash arrays

The first step to transitioning to NVMe in the data center is using all-flash arrays in which the drives inside the system are NVMe-attached. Externally, however, there are no significant changes to the system. Connectivity to the storage network and the rest of the environment is still the same: traditional Ethernet and/or FC. Even hyper-converged systems that use server-based NVMe drives will still interconnect their nodes via traditional Ethernet. The good news is that inserting an NVMe-based all-flash array into the storage infrastructure is seamless. The bad news is that it doesn't take full advantage of everything NVMe technology has to offer.

NVMe and IOPS performance

What is the payoff of NVMe in the all-flash array if data has to use traditional protocols entering and leaving the storage system? The reality is that the storage system becomes a choke point for performance, especially in a shared storage environment. The internals of an all-flash array are an ecosystem unto themselves. Storage software uses the storage system's CPUs to receive, act on and store data. The flash drives have to manipulate data to ensure it is stored correctly and efficiently. The software then has to use the CPUs once again to find and send data back to the requesting user or application.
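
One way to see that responsiveness from the host side is to time individual reads with the page cache bypassed, so the measurement reflects the device and the path to it. The sketch below is illustrative only: the device path is a placeholder, and it requires root privileges on a Linux host.

    import mmap, os, statistics, time

    DEVICE = "/dev/nvme0n1"   # placeholder; any readable block device works
    BLOCK = 4096              # O_DIRECT needs aligned, block-sized buffers
    SAMPLES = 1000

    fd = os.open(DEVICE, os.O_RDONLY | os.O_DIRECT)
    buf = mmap.mmap(-1, BLOCK)            # anonymous mmap is page-aligned
    latencies = []
    for i in range(SAMPLES):
        start = time.perf_counter_ns()
        os.preadv(fd, [buf], i * BLOCK)   # one synchronous 4 KB read
        latencies.append(time.perf_counter_ns() - start)
    os.close(fd)

    print(f"median read latency: {statistics.median(latencies) / 1000:.1f} µs")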

NVMe technology ecosystem

A network inside the array interconnects the software, CPU and storage. Since all data is sent to and from this system, the quicker data can traverse the system's internals, the better overall performance will be.

Until NVMe burst onto the scene, the internal network in most all-flash arrays was serial-attached SCSI (SAS). Now, it is quickly becoming NVMe. SAS is a slower connection and is saddled with the SCSI protocol's inefficiencies. NVMe provides faster connectivity and can operate on more data at any given point in time.
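
The raw link rates bear this out. The back-of-the-envelope arithmetic below compares a single SAS-3 lane with the PCIe 3.0 x4 connection a typical NVMe drive uses; both figures come straight from the interface specifications.

    # SAS-3 runs at 12 Gbit/s per lane with 8b/10b encoding (80% efficient).
    sas_mbs = 12 * (8 / 10) * 1000 / 8           # ~1,200 MB/s per lane

    # PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding (~98.5% efficient).
    pcie_lane_mbs = 8 * (128 / 130) * 1000 / 8   # ~985 MB/s per lane
    nvme_x4_mbs = pcie_lane_mbs * 4              # ~3,940 MB/s for a x4 drive

    print(f"SAS-3 lane      : ~{sas_mbs:,.0f} MB/s")
    print(f"PCIe 3.0 x4 NVMe: ~{nvme_x4_mbs:,.0f} MB/s")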

NVMe-based scale-out storage

NVMe-oF is still in its early days. It works, but it is fragile, and interoperability leaves a lot to be desired. That instability is why the first step to an NVMe data center is all-flash arrays with NVMe storage inside and traditional networking outside. NVMe-oF can be made to work today; vendors just have to keep tight control over the components to eliminate problems. That need for a controlled network deployment makes NVMe-based scale-out storage an ideal second step in the NVMe transition.

Scale-out storage creates a cluster of servers (nodes), each with its own internal storage, and aggregates the storage from all of the nodes into a single virtual pool. The network that interconnects these nodes is typically traditional Ethernet using IP. The internode communication is intensive, especially as more nodes are added. NVMe-oF is a better fit for this interconnect thanks to its high performance and low latency, and it should enable scaling to more nodes with better overall performance.
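
A back-of-the-envelope model shows why that internode fabric matters. With N-way replication, every host write is copied to N - 1 peer nodes, so the fabric carries a multiple of the ingest rate; the figures below are hypothetical.

    # Toy model of internode traffic in a replicated scale-out cluster.
    host_write_mbs = 2000    # hypothetical aggregate host write rate, MB/s
    replicas = 3             # hypothetical protection level

    # Each write lands on one node and is copied to (replicas - 1) peers.
    internode_mbs = host_write_mbs * (replicas - 1)
    print(f"host ingest      : {host_write_mbs:,} MB/s")
    print(f"internode copies : {internode_mbs:,} MB/s")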

End-to-end NVMe

The next step in the transition to NVMe technology is end-to-end NVMe, where the storage system and the servers attached to it all communicate via NVMe. This design should bring in-server storage performance to shared storage. Major switch vendors Cisco and Brocade have added NVMe support to their switches, and there are several NVMe host bus adapter (HBA) cards available.
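
On the host side, joining such a fabric typically starts with a query to a discovery controller, which lists the subsystems a host may connect to. A minimal sketch, again wrapping nvme-cli from Python, with a placeholder transport and address:

    import subprocess

    # Ask the discovery controller which NVMe subsystems it exports.
    result = subprocess.run(
        ["nvme", "discover", "-t", "rdma", "-a", "192.168.10.50", "-s", "4420"],
        capture_output=True, text=True,
    )
    print(result.stdout or result.stderr)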

Unfortunately, there are several roadblocks to end-to-end NVMe:

  • Interoperability issues. Until any network card can work with any other network card and switch, deployments will go slowly.
  • Infrastructure upgrades slowly. Switches and HBAs that don't support NVMe need to reach end of life, which could take years.
  • Lack of broad need. Most data centers are at a point where the network (without NVMe) and the storage (with or without it) are already faster than what they need.

How to get started

For the few data centers taxing their current all-flash arrays and network, moving to NVMe right away might help. But they should also consider a network upgrade at the same time. There are several NVMe all-flash specialists that can either deliver a turnkey end-to-end product or have all the partnerships in place to do so.

Most data centers, however, can't take immediate advantage of NVMe, so this is not a situation where an organization needs to throw out its all-flash array and replace it with an NVMe-based one. As storage system refreshes occur, it makes sense to look for flash arrays that are at least partly NVMe-based. It also makes sense to ensure that any investment in network infrastructure -- be it switches or HBAs -- has NVMe compatibility built in.
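
A quick inventory of what a host already has can feed that planning. The sketch below reads standard Linux sysfs attributes exposed by the nvme driver; 'pcie' marks a locally attached drive, while 'rdma', 'fc' or 'tcp' marks a fabric-attached controller.

    from pathlib import Path

    # List every NVMe controller on the host and how it attaches.
    for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
        transport = (ctrl / "transport").read_text().strip()
        model = (ctrl / "model").read_text().strip()
        print(f"{ctrl.name}: {model} via {transport}")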

NVMe is not a case of hurry up and wait; it is more a case of hurry up and plan. Now is the time to make sure future investments -- be it in storage or in infrastructure -- are ready for NVMe technology.
