Time for a flash storage system refresh: What's next?
The second wave of enterprise flash storage systems must optimize flash performance and density, not just deliver more IOPS than the hard disk systems they replace.
Most data centers have already made the decision to implement flash in the primary storage tier, with many now 100% flash for production data. This year will be one of the first in which organizations start to go through a flash storage system refresh.
For these enterprises, the shock and awe of their first flash purchase has long worn off. So they will be looking at more than just IOPS as they refresh their original flash purchase.
Why so soon?
Considering most data centers purchased their first all-flash, or mostly flash, system less than three years ago, it may seem premature for a flash storage system refresh. But technology changes fast, and in the case of flash, even faster than normal.
The first big change in flash has been in the density of the drives. Three years ago, 128 GB or 256 GB flash drives were the norm. Today, most vendors are shipping 16 TB drives, and several are preparing to ship 50 TB-plus drives this year. The problem is that most flash systems, especially early ones, couldn't mix flash capacities or flash drive types.
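To see why that matters, consider that in a typical RAID group every member contributes only as much capacity as the smallest drive. Here is a minimal sketch of the arithmetic; the drive sizes and RAID-6 layout are illustrative assumptions, not figures from any particular array:

```python
# Illustrative sketch: usable capacity of a RAID group with mixed drive sizes.
# Assumption: each member contributes only as much as the smallest drive,
# which is how many traditional RAID implementations behave.

def raid_usable_tb(drive_sizes_tb, parity_drives=2):
    """Usable capacity in TB for a single RAID group (RAID-6 by default)."""
    smallest = min(drive_sizes_tb)
    data_drives = len(drive_sizes_tb) - parity_drives
    return smallest * data_drives

# A group of older 1 TB drives with two new 16 TB drives added:
mixed = [1, 1, 1, 1, 1, 1, 16, 16]
print(raid_usable_tb(mixed))     # 6 TB -- the 16 TB drives are mostly wasted
print(raid_usable_tb([16] * 8))  # 96 TB from a uniform group
```

An array that can't mix capacities forces a forklift upgrade instead of letting you grow drive by drive as denser media ships.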
The second big change is in both internal and external network connectivity. Internally, most flash arrays in the data center use SAS as the drive-to-motherboard interconnect. While 12 Gbps SAS was fast enough in the early days of flash, the SAS bus can get overwhelmed as organizations add more and more workloads to their flash arrays. Also, SAS communicates via the SCSI protocol, which was designed for rotational hard drives, not solid-state flash, and that protocol overhead adds significant latency.
Similarly, external connectivity to the host is evolving. Initially, most flash systems came with either 8 or 16 Gbps Fibre Channel or 10 Gigabit Ethernet, which, again, was fine for the early use cases. But as more hosts are added to the infrastructure, and more virtual machines or containers are added to those hosts, the aggregate I/O to and from the array multiplies. All of this connectivity also uses SCSI.
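Some back-of-the-envelope arithmetic shows how quickly that growth saturates a front-end port. The host counts, per-VM I/O rates and I/O size below are illustrative assumptions:

```python
# Back-of-the-envelope sketch: aggregate host I/O vs. one front-end link.
# All workload numbers here are illustrative assumptions.

hosts = 20
vms_per_host = 30
iops_per_vm = 500          # modest per-VM load
io_size_bytes = 16 * 1024  # 16 KB average I/O

total_iops = hosts * vms_per_host * iops_per_vm
throughput_gbps = total_iops * io_size_bytes * 8 / 1e9

fc16_usable_gbps = 13.6    # ~16 Gbps FC after encoding overhead

print(f"Aggregate demand: {total_iops:,} IOPS, {throughput_gbps:.1f} Gbps")
print(f"16 Gbps FC ports needed: {throughput_gbps / fc16_usable_gbps:.1f}")
```

Even this modest configuration demands roughly three 16 Gbps ports' worth of bandwidth, and per-VM load only grows.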
Enter nonvolatile memory express (NVMe), the new standard for flash connectivity. NVMe was designed for solid-state media and sheds the protocol latency SCSI carries over from the disk era. Vendors claim at least a 50% performance increase from switching to NVMe, which means better scale and better use of your flash investment.
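Little's law puts that claim in perspective: per-queue throughput is roughly queue depth divided by latency, so halving latency doubles throughput. The latency figures in this sketch are illustrative assumptions, not vendor measurements:

```python
# Sketch of the vendors' ~50% claim using Little's law:
# IOPS per queue ~= queue_depth / latency.
# Latency figures are illustrative assumptions.

def iops(queue_depth, latency_us):
    return queue_depth / (latency_us / 1e6)

scsi_latency_us = 100   # flash media plus SAS/SCSI stack overhead
nvme_latency_us = 50    # same media with the leaner NVMe stack

qd = 32
print(f"SCSI path: {iops(qd, scsi_latency_us):,.0f} IOPS per queue")
print(f"NVMe path: {iops(qd, nvme_latency_us):,.0f} IOPS per queue")
# Halving latency doubles per-queue IOPS; NVMe also supports far more
# and deeper queues than legacy SCSI stacks, compounding the gain.
```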
Finally, most organizations bought their first-generation flash storage system either to solve a specific performance problem or to extend the life of an existing hard disk-based system. Point flash systems typically lacked the scalability or reliability required to support the entire enterprise, while flash extension systems didn't fully deliver on flash's performance potential.
What to look for
The first step in making your next flash storage system purchase is to decide whether it will be the storage system for the entire enterprise, support a specific set of environments or be paired with a second storage system that stores less-active data.
It's also important to realize most organizations won't become all-flash data centers. While an all-flash primary storage tier is practical, an all-flash data center is another thing entirely. The reality is 80% or more of the data in most enterprises hasn't been accessed in more than a year, and storing it on flash media is a waste of IT budget. Most data centers will be better off with a two-tier strategy: a primary tier of all-flash and a secondary tier of a high-capacity, hard disk-based system like an object store.
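A rough cost model makes the two-tier case concrete. The $/TB figures and the 80/20 active/inactive split below are illustrative assumptions:

```python
# Sketch: all-flash vs. two-tier media cost for 1 PB of data.
# Prices and the 80/20 active/inactive split are illustrative assumptions.

total_tb = 1000
inactive_fraction = 0.80       # data untouched for a year or more
flash_cost_per_tb = 500.0      # assumed $/TB for the flash tier
disk_cost_per_tb = 50.0        # assumed $/TB for an HDD object store

all_flash = total_tb * flash_cost_per_tb
two_tier = (total_tb * (1 - inactive_fraction) * flash_cost_per_tb
            + total_tb * inactive_fraction * disk_cost_per_tb)

print(f"All-flash: ${all_flash:,.0f}")
print(f"Two-tier:  ${two_tier:,.0f}")  # media spend drops by roughly 70%
```

Whatever the actual prices, the shape of the result holds: paying flash rates for data nobody reads dominates the bill.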
The alternative is to go with a consolidated approach that integrates RAM, flash and hard disks into a single system that handles both primary and secondary storage. A system like this will need high performance and massive scalability and must be extremely reliable.
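At the heart of such a consolidated system is a placement policy that demotes data as it cools. Here is a minimal sketch of one; the tier names and idle-time thresholds are illustrative assumptions, not any vendor's implementation:

```python
# Minimal sketch of an access-recency placement policy for a consolidated
# RAM/flash/HDD system. Tiers and thresholds are illustrative assumptions.

import time

DAY = 86400  # seconds

def choose_tier(last_access_epoch, now=None):
    """Place data on RAM, flash or disk by how recently it was accessed."""
    now = now or time.time()
    idle = now - last_access_epoch
    if idle < 1 * DAY:
        return "ram"    # hot: serve from cache
    if idle < 365 * DAY:
        return "flash"  # warm: primary flash tier
    return "disk"       # cold: high-capacity HDD/object tier

print(choose_tier(time.time() - 3600))       # ram
print(choose_tier(time.time() - 30 * DAY))   # flash
print(choose_tier(time.time() - 400 * DAY))  # disk
```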
Beyond deciding what type of system to get, IT planners must learn from the past and build flexibility into their systems. As SSD density continues to increase, the ability to buy flash arrays that aren't fully populated and mix drive sizes -- something that many systems can't do -- becomes critical.
Finally, even if network performance isn't an issue, planners should look for a flash storage system with internal NVMe capabilities, at least for some of the flash tier. While external NVMe connectivity may not be necessary until the next flash refresh, they should understand their I/O profile and the NVMe plans of both their storage and network vendors.
For many organizations, the move to flash is complete, and it's time for the first flash storage refresh. This next wave of systems must do more than outpace its hard disk predecessors; it must optimize both flash performance and density. IT should focus on media flexibility and network connectivity, as well as on integrating flash with a hard disk-based system that stores inactive data.