NVMe/TCP and edge storage benefits and drawbacks
NVMe/TCP is finally entering the storage mainstream and can be particularly useful for data storage at the edge. Find out the pros and cons of this approach.
A little over a year ago, NVM Express ratified the NVMe over TCP transport binding specification and made it publicly available. Joining existing NVMe transports -- PCIe, remote direct memory access and Fibre Channel -- NVMe/TCP defines the mapping of NVMe queues, NVMe-oF capsules and data delivery over TCP. As NVMe/TCP works its way into the storage mainstream, it's a good time to take a close look at the specification, particularly as it relates to edge storage.
Storage is an essential building block of edge cloud infrastructure, given the volume of data generated by billions of IoT devices. Choosing the best storage for the edge cloud means meeting edge-specific requirements.
"NVMe/TCP storage solutions are designed for edge cloud infrastructure and optimally serve the needs of the edge and enable organizations to fully benefit from the potential of the edge infrastructure," said Muli Ben-Yehuda, co-founder and chief scientist at Lightbits Labs Inc. Lightbits uses NVMe/TCP as a base for transforming hyperscale cloud computing infrastructures from relying on a collection of direct-attached SSDs to a remote low-latency pool of NVMe SSDs.
Edge storage requires consistent, low-latency operations that are well-served by NVMe SSDs. Such systems are typically deployed in hyper-converged environments where SSDs are locally attached. "The projected growth of the edge will create a strain on these systems, because it will require not only more and more high-performance storage, but also more compute resources," said Paul von-Stamwitz, senior storage architect at Fujitsu Solutions Lab.
The network between the edge and the private or public cloud can have difficulty handling the new massive data flow. As a result, it's necessary to increase edge compute capabilities to preprocess the data. NVMe-oF makes it possible to disaggregate and independently scale storage and compute resources with minimal impact on latency. NVMe/TCP adds further flexibility in network components, without the need for specialized NICs and switches.
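To illustrate how little specialized gear is involved, the following is a minimal sketch, assuming a Linux host with the nvme-cli package and the nvme-tcp kernel module; the target address and subsystem NQN are placeholders, not any particular product's endpoints. It discovers and connects to a remote NVMe/TCP target over an ordinary Ethernet NIC.

```python
# Minimal sketch: attach a remote NVMe/TCP namespace from a Linux host.
# Assumes nvme-cli and the nvme-tcp kernel module are installed; the target
# address and NQN below are placeholders for illustration only.
import subprocess

TARGET_ADDR = "192.168.0.10"  # placeholder: IP of the NVMe/TCP target
TARGET_PORT = "4420"          # default NVMe/TCP service port
SUBSYS_NQN = "nqn.2019-01.example.com:edge-pool-1"  # placeholder NQN

def run(cmd):
    """Echo and run a command, raising if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Load the NVMe/TCP initiator driver (no special NIC or switch required).
run(["modprobe", "nvme-tcp"])

# Ask the target's discovery controller which subsystems it exports.
run(["nvme", "discover", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT])

# Connect to one subsystem; its namespaces appear as local /dev/nvmeXnY devices.
run(["nvme", "connect", "-t", "tcp", "-n", SUBSYS_NQN,
     "-a", TARGET_ADDR, "-s", TARGET_PORT])

# Confirm the remote namespace is now visible as a local NVMe device.
run(["nvme", "list"])
```

Because everything rides on standard TCP/IP, the same commands work whether the target sits in the same rack or across an edge data center's existing Ethernet fabric.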
The pros
Disaggregation and composability are important benefits associated with using NVMe/TCP for edge storage. "The ability to scale storage and compute resources separately allows for maximum efficiency of resources," von-Stamwitz noted. "The edge is also fairly dynamic, so the composability of NVMe/TCP allows edge data centers the ability to reconfigure and repurpose hardware resources as needed."
Physical disaggregation also makes it possible to use higher capacity NVMe media. "By physically disaggregating, you can pool NVMe and utilize it remotely, purchasing only what you need and potentially adding more drives if you need more space," said Josh Goldenhar, vice president of products at Excelero Inc., which supplies distributed block storage with low latency for web-scale applications. This approach enables compact and dense application servers, requiring only a network interface instead of U.2 drive slots. It also can improve operational efficiency, helping ease the power, cooling and space restrictions common in edge data centers.
Another NVMe/TCP benefit for edge storage is eliminating the need for a dedicated storage network, particularly an unfamiliar one. "As edge data centers run on highly tuned TCP/IP networks, not having to add an unfamiliar protocol or, worse, a separate fabric, improves and eases the deployment of storage at the edge," Goldenhar said.
The cons
Along with its strengths, NVMe/TCP also possesses some negative attributes. "There's a small increase in latency, but for most applications, the consistency of latency is more important than [achieving] the absolute minimum," von-Stamwitz said.
A bigger concern is NVMe/TCP's inherent complexity. "The beauty of [hyperconverged infrastructure] is its simplicity," von-Stamwitz observed. "NVMe/TCP requires storage provisioning that needs to be automated and integrated with orchestrators, such as Kubernetes." Additionally, as with any network technology, NVMe/TCP creates security concerns that must be addressed.
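To make that provisioning requirement concrete, here is a minimal sketch using the official Kubernetes Python client; it assumes a CSI driver for the NVMe/TCP storage is already installed and exposed through a StorageClass, and the class name "nvme-tcp" and volume size are purely illustrative.

```python
# Minimal sketch: automate NVMe/TCP volume provisioning through Kubernetes.
# Assumes an NVMe/TCP-backed CSI driver is installed and exposed via a
# StorageClass; the class name "nvme-tcp" below is hypothetical.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="edge-app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="nvme-tcp",        # hypothetical NVMe/TCP class
        resources=client.V1ResourceRequirements(
            requests={"storage": "100Gi"}     # illustrative size
        ),
    ),
)

# The CSI driver behind the StorageClass handles the NVMe/TCP connect and
# presents the remote namespace to the pod as an ordinary block volume.
v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```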
Another potential sticking point is that, to work at scale, NVMe/TCP deployments of any type demand high performance density, because power, space and cooling all come at a premium at the edge.
"High-availability (HA) support is also critical for the same reasons, because power, cooling and space limitations prevent more common cloud data protection schemes, such as triple replication, from being viable, as they need three-times the gear," Goldenhar said. "HA allows for redundancy in a smaller footprint."
It's also important to remember that edge resources, including edge storage, often reside in a single rack, a single failure domain that "violates the basic architectural precepts of high-availability settings," Goldenhar noted.
Getting started
NVMe/TCP is inherently performant, scalable and flexible, so new adopters generally don't need to worry about those fundamentals. They should, however, consider aspects such as high availability, automation and security.
Goldenhar recommended experimenting with the open source NVMe-oF initiators and targets that are available at no cost on the web. Trying-before-buying gives newcomers a safe and easy way to gain experience with NVMe/TCP's performance characteristics and potential impact on an existing network.
"Doing so will not provide any logical volumes, redundancy or centralized management, but can allow you to become familiar with [NVMe/TCP's] concepts," he noted.
What's next?
Emerging NVMe All-Flash Array (NAFA) technology promises to be a great fit for edge storage and computing, Goldenhar observed. "Look for NAFAs that support HA requirements in addition to providing high bandwidth, high throughput and low latency."