Enterprise storage market poised for more disruption in 2017

CTOs share enterprise storage predictions for 2017. Cloud, server-based storage, HCI, and growing use of containers and analytics will spur further disruption.

The enterprise storage market is heading for more disruption in 2017 -- and beyond.

We tracked down CTOs and other leading technologists at the top enterprise storage vendors for their personal predictions on how the industry will shape up in the coming year and in the future. They envision shifts to cloud, server-based storage, hyper-converged and converged infrastructure, and the growing use of real-time applications, containers and data analytics. These changes will force legacy vendors to transform their strategies to better compete in the wildly changing enterprise storage market.

'Churning' in the enterprise storage market

Hu Yoshida, CTO at Hitachi Data Systems (HDS): There [are] going to be a lot of acquisitions, divestitures and things churning in the storage industry. The ones who succeed are going to have to focus more on application enablement, the cloud and the internet of things (IoT). If it's just about storage, they're going to be out of business. You have seen this already happen in the disk business. Not everything is in the cloud, but you could see the writing on the wall that it's going to happen, and [it] could happen very quickly.

This is a turning point year. Storage companies are trying to position for that. The focus is not on building boxes; it's on enabling these boxes. We enable [them] by integrating [them] into converged solutions that can interface with public and private clouds.

Daniel Cobb, fellow and vice president of global technology strategy at Dell EMC: Storage goes real time, all the time. Net-new architectures that combine new ratios of compute, network, memory and storage will drive the rapid adoption of real-time infrastructure. And that infrastructure will be fundamentally different from the traditional transaction-processing infrastructure we're used to. There will be a sea change to get data closer and closer to compute and, quite frankly, compute closer and closer to data. We've only just begun to see real-time workloads, driven by the integration of multiple real-time data feeds, on very low-latency flash storage.

It's also going to be driven by [the] rapid advancement of the networking stack -- particularly the deployment of 25/50/100 Gigabit Ethernet and RDMA [Remote Direct Memory Access] over Converged Ethernet fabrics -- and then, finally, the host systems that can take advantage of multicore and large memory footprint. For some customers, those things will be three different ecosystems that they bring together for their own solution. For others, it will be a single, integrated architecture -- maybe a variant of a hyper-converged architecture -- that advances the workload conversation about real-time customer interactions and real-time systems of insight.

Software developers will design real-time applications more around an in-memory paradigm, and, suddenly, memory will need to take on a lot of the values and principles that we grew up with in storage platforms. It will increasingly be a requirement of memory platforms to guarantee persistence, protection, robustness, resilience, manageability and security -- the kinds of things that you used to have to bounce off of an external storage array. You might, in the future, bounce that off of an external memory array. Or, maybe, those functions become a part of [your] software-defined storage stack that lives in the server.

Christos Karamanolis, fellow and CTO of the storage and availability business unit at VMware: In 2017, storage is going to become an integral part of a more holistic and simplified IT operations model. Storage will no [longer] be a discrete part of the IT infrastructure. It's going to become part of an end-to-end IT model, where generalist IT professionals manage the entire IT infrastructure. That infrastructure may span both private data centers as well as public clouds that offer infrastructure as a service. The goal is to substantially simplify the operations of enterprise customers' IT to get them as close as possible to the operational efficiencies of public cloud.

J Metz, research and development engineer of storage networking and solutions for the office of the CTO at Cisco: Storage solutions will see a high degree of blending with other technologies. Elimination of bottlenecks in storage means that networks and high-performance storage will start to move closer together. Storage will become a prime motivator for higher-bandwidth networking solutions.

This is not the same as hyper-convergence, where the relationship between compute, network and storage is constant, just modified into use cases that are beneficial for certain types of deployments. The interconnections between storage, networking and compute will become a hindrance to overcome. We'll see a bundling of technologies like networking and storage, and those bundles will be treated in the system as independent components.

Editor's note: Metz said his predictions are personal and do not represent the opinions of his employer.

Server-based storage, hyper-converged infrastructure

Rajiv Mirani, senior vice president of engineering at Nutanix: Hyper-converged vendors will start addressing the full infrastructure stack -- not just storage and compute, but also networking and security. Once in place, IT will then be able to set up policies defining which applications talk to which other applications, how they are authorized, how they are authenticated, the best network path between them and how to connect these applications to other instances running in the cloud. Delivering this level of application-centric management and control will require a complete and integrated infrastructure stack.

Karamanolis, VMware: Server-based storage and hyper-converged or converged technologies are currently a niche market. Two factors will accelerate their broader adoption, starting in 2017. First is the rapid decline of the cost of flash storage. NVMe [nonvolatile memory express] devices coming into the market are designed with server-storage architecture in mind. Given how fast those devices are, it makes much more sense to have your data close to the CPUs where the applications run, as opposed to going over an interconnect that adds substantial latency.

The other factor is the introduction of the new generation of [processors] from Intel, called Skylake. They are going to become available in mid-2017, and this will result in a major server refresh in the data centers of enterprise customers. This server refresh is going to bring an opportunity for many enterprise customers to reconsider the architecture of their data centers.

Mark Bregman, CTO at NetApp: Hyper-converged infrastructure (HCI) is not the future of the data center. It's going to be clear a year from now that it satisfies a subset of the needs people have in data centers, but it's not the universal solution that some people would have you believe. HCI came in and promised a lot of simplicity, which it delivered. But, now, people are realizing that simplicity comes at a cost. We're beginning to see some customers realize that when they use HCI building blocks in the general-purpose data center, they're paying for things they don't use. If they've got workloads that are storage-intensive, they're paying for compute capacity they're not using. Or, if they need a lot of compute capacity, they have a bunch of storage they're not using.

A year from now, we'll have something that goes beyond today's HCI and delivers the simplicity -- mostly driven by software -- plus the flexibility and modularity to pick and choose, for a given service, how much compute and storage is needed, and to upgrade them separately as technologies evolve.

Container-ready storage

Milan Shetti, CTO of the data center infrastructure group at Hewlett Packard Enterprise (HPE): We'll see more container-ready storage in '17. There will be storage for containers and storage for virtualized applications, because the properties of containers are so different [from] the properties of virtualized [applications]. This is going to be similar to the SAN and the NAS world. There was a SAN world, then a NAS world, and then a unified world. If I use the same parallel, hyper-converged would be like the SAN world. Containers would be more like the NAS world. And, over time, they will get unified. But, initially, they will be discrete, because they can advance faster in feature functionality.

Mirani, Nutanix: Containers will force hyper-converged vendors to address both virtual workloads and containerized workloads equivalently. How you manage them in a unified manner is something that's still missing. The applications themselves may be built out of virtual machines (VMs) and containers, but that should all be transparent to the user.

Much of the time that IT currently devotes to infrastructure management will go away. The focus will shift toward enabling a more consumer-like experience when managing applications, including having an enterprise app store from which you can deploy applications and automate lifecycle management and monitoring. When two applications need to scale out, for example, operations will be automated on a per-application basis, not for individual VMs or containers. We will begin to witness this fundamental change over the next two or three years.

Data analytics in storage

Martin Skagen, CTO at Brocade Communications Systems: We'll see analytics in the storage space more than we have before. So far, it's been nichey, hard to obtain or very expensive -- 2017 will be a year when customers see something that's much easier to obtain both commercially and physically.

Matt Kixmoeller, vice president of products at Pure Storage: The big data world that was most noticeably characterized by the era of Hadoop is going to transition very aggressively to [Apache] Spark. Spark is a new analytics package that opens up much more real-time stream analysis and interactive queries. There's a huge opportunity to marry that with flash to get into the world of real-time analytics for IoT.

Vincent Hsu, fellow, vice president and CTO for IBM Storage: Spark provides a very interesting model to allow you to do in-memory analytics and operate on your data in a much more agile fashion. We will see optimizations to allow Spark to work with all kinds of diverse storage. For example, we are allowing Spark to work on object storage with great performance. We just plug in the Spark operating models to allow us to harvest data insight.

In the past, when you talked about big data, people assumed you were talking about Hadoop. You make a replica of the data, load it into Hadoop clusters and run MapReduce operations. But these operations become difficult when you start talking about large numbers of petabytes. You have to ingest the data into the particular file system and then crunch the numbers on that. Spark doesn't require you to do that. Spark will allow you to run analytics where the data is, in your object storage or file storage.
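
To make Hsu's point concrete, here is a minimal sketch, not taken from the article, of the pattern he describes: pointing Spark directly at data sitting in an S3-compatible object store instead of first copying it into a Hadoop file system. The endpoint, credentials, bucket and column names are hypothetical placeholders.

```python
from pyspark.sql import SparkSession

# Configure Spark's s3a connector so the object store looks like a filesystem.
# Endpoint and credential values below are illustrative, not real settings.
spark = (
    SparkSession.builder
    .appName("analytics-in-place")
    .config("spark.hadoop.fs.s3a.endpoint", "https://objectstore.example.com")
    .config("spark.hadoop.fs.s3a.access.key", "ACCESS_KEY")
    .config("spark.hadoop.fs.s3a.secret.key", "SECRET_KEY")
    .getOrCreate()
)

# Query the data where it lives -- no bulk ingest into a separate Hadoop cluster.
events = spark.read.parquet("s3a://sensor-data/2017/")
events.groupBy("device_id").avg("temperature").show()

spark.stop()
```

In practice this also requires the hadoop-aws connector jars on the Spark classpath, but the point stands: the analytics engine is brought to the object store rather than replicating petabytes into a dedicated cluster first.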

Object storage to finally catch on?

Yoshida, HDS: As I talk to more and more CIOs, the biggest problem they have is data silos. Many of them are moving to more of a centralized data hub, so they can get better control of their data. This makes it easier to run analytics, provide security and share a consistent view of data across the lines of business. Object stores are going to be the base for common, centralized repositories. Object storage has rich metadata capabilities, which enable us to search it better and provide policies to govern it better.
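
As a small illustration of the metadata capability Yoshida refers to, the sketch below, assuming an S3-compatible object store and the boto3 SDK, attaches business metadata to an object at write time so searches and governance policies can key off it later. The endpoint, bucket, key and metadata fields are hypothetical.

```python
import boto3

# Point the client at an S3-compatible object store (endpoint is illustrative).
s3 = boto3.client("s3", endpoint_url="https://objectstore.example.com")

# Store the object together with custom metadata describing its business context.
s3.put_object(
    Bucket="central-data-hub",
    Key="claims/2017/claim-0001.json",
    Body=b'{"status": "open"}',
    Metadata={"line-of-business": "insurance", "retention": "7y"},
)

# Read the metadata back; retention or access policies can be driven by these values.
head = s3.head_object(Bucket="central-data-hub", Key="claims/2017/claim-0001.json")
print(head["Metadata"])  # {'line-of-business': 'insurance', 'retention': '7y'}
```

Dedicated object stores layer richer search and policy engines on top of this, but the principle is the same: descriptive metadata travels with the data itself rather than living in a separate silo.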

Shetti, HPE: Edge is going to be the new center, and more storage products are going to reach out to the edge. Data center applications are getting consumed through devices, whether iPads, Android apps or, even in a lot of cases, wireless at the edge. And more data is getting created at the edge. Today, the edge infrastructure and the core infrastructure are very discrete in most data centers. And there is a whole bunch of middleware to do translations from one layer to another. That's going to get simplified. Object storage has a big play in this area. This might be the year where it breaks out.

Storage to play bigger role in security

Shetti, HPE: The entire security paradigm will start to get redefined in '17. Until now, security in the data center has been seen as network intrusion detection or prevention. But if hackers are after data and data is on storage, why isn't security more of a storage concern? Storage and compute will play a much bigger role in defining security, and enforcement technologies will be showing up in both ... embedded right on the hardware above and beyond encryption. The biggest problem in the security industry has been add-on software packages. They're cumbersome and, in some cases, they become the problem because there could be malware introduced in them.

Skagen, Brocade: We might see in-memory databases and in-memory applications take advantage of NVMe and change how we deal with security and flash management. Today, when we talk about security, a lot of that stuff is not analyzed or predicted [in] real time. There's some delay. It could be hours. It could be days. But with the advances in memories, there is a possibility that the security space would commingle more with the storage space because of the much lower latency provided by NVMe.
