Making a storage capacity plan that doesn't bust your budget

Storage capacity demand is growing rapidly, but by using the latest innovations in storage technology, it is possible to meet that demand without growing your storage budget apace.

Many storage experts expect a rapid increase in the amount of stored data over the next few years. A recent Red Hat survey suggests that, averaged across all respondents, capacity demand will increase by 54% within three to five years. In addition, a whopping 94% of those surveyed are frustrated with their current implementations, and most fear they can't cope with the growth in capacity and workloads.

Storage's share of the IT budget is expected to move only slightly, from 13% to 14.5%. This means IT will have to draft a much better storage capacity plan if older gear is to be replaced at the same time as capacity is increased.
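
A back-of-the-envelope calculation shows how tight that squeeze is. Here is a minimal sketch using the survey figures above, with one labeled assumption of our own:

```python
# Rough arithmetic: how far cost per terabyte must fall to stay on budget.
# Assumption (not from the survey): the overall IT budget stays flat.
budget_share_now = 0.13      # storage's share of the IT budget today
budget_share_future = 0.145  # expected future share
capacity_growth = 1.54       # 54% capacity growth in three to five years

spend_growth = budget_share_future / budget_share_now  # ~1.115 (+11.5%)
cost_per_tb_ratio = spend_growth / capacity_growth     # ~0.72

print(f"Spend can grow {spend_growth - 1:.1%}; capacity grows {capacity_growth - 1:.0%}")
print(f"Cost per TB must drop roughly {1 - cost_per_tb_ratio:.0%}")
```

In other words, storage must get nearly 30% cheaper per terabyte just to keep the plan on budget.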

Pulling that rabbit out of the hat requires an open-minded, multi-faceted approach. For many, the hybrid cloud is the apparent panacea for the capacity issue. While the cloud's OpEx-only model is very attractive, there are still unresolved issues with moving data between public and private cloud storage due to the low speed of WAN links, which will limit the benefits of the hybrid cloud approach in the near future.
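
To get a feel for the WAN bottleneck, consider a rough transfer-time estimate; the link speed, utilization and dataset size below are illustrative assumptions, not measurements:

```python
# Rough time to migrate a dataset between private and public cloud storage.
# Hypothetical figures: 100 TB dataset, 1 Gb/s WAN link, 80% sustained use.
dataset_tb = 100
link_gbps = 1.0
efficiency = 0.8  # protocol overhead, congestion, shared use

tb_per_hour = link_gbps * efficiency * 3600 / 8 / 1000  # Gb/s -> TB/h
hours = dataset_tb / tb_per_hour
print(f"~{hours:.0f} hours, or about {hours / 24:.0f} days")  # ~12 days
```

At rates like that, shuttling active data back and forth between private and public tiers simply isn't practical.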

Techs that make the most of your storage dollars

We need to look at the bigger picture to create a storage capacity plan that covers data management and placement, and takes advantage of newer technologies such as all-flash arrays and data services software. These topics strike at the core question: how much growth is actually needed to cope with capacity, performance and feature demands? These new technologies fundamentally change the storage model and open up new ways for users to get more bang for their buck.

Let's start with solid-state drives. Ultra-fast NVM Express products are still considerably more expensive than enterprise-level hard disk drives, but because data is increasingly distributed across appliances rather than concentrated in RAID arrays, SATA SSDs are fit for most server workloads. The performance boost these devices provide reduces the number of servers needed for a given workload, so the total configuration may actually be cheaper.
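
A minimal sketch of that server-count arithmetic; every price and IOPS figure here is a hypothetical assumption, chosen only to show the shape of the trade-off:

```python
import math

# Hypothetical figures: per-server random IOPS and cost for HDD- vs.
# SATA SSD-backed servers, sized against a fixed workload target.
TARGET_IOPS = 200_000
SERVERS = {
    "HDD-backed": {"iops": 2_000, "cost": 8_000},
    "SSD-backed": {"iops": 50_000, "cost": 12_000},
}

for name, spec in SERVERS.items():
    count = math.ceil(TARGET_IOPS / spec["iops"])
    print(f"{name}: {count} servers, ${count * spec['cost']:,} total")
```

Even at a higher unit price, the SSD fleet can win on total cost whenever the workload is IOPS-bound rather than capacity-bound.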

As an alternative for SAN-oriented shops, all-flash arrays provide an enormous boost in IOPS. All-flash arrays can serve as a performance tier and allow you to relegate older disk arrays as a secondary cold tier. Again, required server counts may be reduced due to the additional storage performance, resulting in a more affordable storage capacity plan.
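
The placement logic behind such a two-tier setup can be as simple as an access-recency rule. The sketch below assumes a 30-day threshold and is not any vendor's actual algorithm:

```python
import time

COLD_AFTER_DAYS = 30  # assumed threshold; tune to real access patterns

def choose_tier(last_access_epoch):
    """Keep recently touched data on the all-flash performance tier;
    relegate everything else to the older disk arrays."""
    idle_days = (time.time() - last_access_epoch) / 86_400
    return "all-flash" if idle_days < COLD_AFTER_DAYS else "cold-disk"

# Example: data untouched for 90 days lands on the old arrays.
print(choose_tier(time.time() - 90 * 86_400))  # -> cold-disk
```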

The majority of all-flash arrays include internal data compression software that runs in the background. Their multimillion-IOPS headroom allows them to provide a journaled write mechanism in which the journaled data is deduplicated and/or compressed. This data reduction means much less raw capacity is needed in the all-flash array, while the old secondary storage can hold much more data without adding any hardware.
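
To make the deduplication half of that pipeline concrete, here is a toy content-addressed chunk store; real arrays do this inline with far more sophistication, so treat it purely as an illustration of the principle:

```python
import hashlib

class DedupStore:
    """Toy inline deduplication: identical chunks are stored once;
    repeat writes only bump a reference count."""
    def __init__(self):
        self.chunks = {}  # SHA-256 digest -> chunk bytes
        self.refs = {}    # SHA-256 digest -> reference count

    def write(self, data):
        key = hashlib.sha256(data).hexdigest()
        if key not in self.chunks:  # only unique data consumes capacity
            self.chunks[key] = data
        self.refs[key] = self.refs.get(key, 0) + 1
        return key                  # logical handle for later reads

    def read(self, key):
        return self.chunks[key]

store = DedupStore()
k1 = store.write(b"same 4K block")
k2 = store.write(b"same 4K block")  # deduplicated: no new storage used
assert k1 == k2 and len(store.chunks) == 1
```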

All-flash products now support Linux, Windows and Oracle, and similar acceleration is available in the NAS world. There's something of a price war, so good products can be obtained at affordable prices.

With affordability improving in both SSDs and all-flash arrays, one might ask whether this is enough to keep users within their storage capacity plan budget. Reality is often a bit more complex. The whole server/storage complex is being rethought as hyper-converged systems and mashups with the cloud arrive. Moreover, RAID is falling out of favor as the data integrity and availability mechanism of choice, and Fibre Channel SANs are under siege from Ethernet, with or without RDMA. As a result, short-term goals for data center storage get conflated with long-term objectives, causing some paralysis in the planning process.

Dealing with unstructured data growth

Users should look at options that carry over from today's tactical plan to their longer-term strategy. That means expanding RAID farms or hard-drive filers is likely off the table. Beyond that, take a hard look at what your data will look like in a few years' time. For many IT shops, much of the expected data growth is unstructured, which implies that object storage has to be a part of the storage capacity plan.

Apart from their software, object storage appliances and hyper-converged systems can be identical. Thanks to open source code such as Ceph, object storage appliances can be inexpensive commercial off-the-shelf white boxes, bringing the cost per terabyte of these offerings way down compared with traditional data storage gear. Object storage software is also evolving to offer filer and block access to data, as "universal" storage. This will make object appliances very attractive for any future storage capacity growth.
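
For a feel of how applications consume such a store, here is a sketch using the boto3 library against a Ceph RADOS Gateway, which exposes an S3-compatible API; the endpoint URL, bucket name and credentials are placeholders, not real values:

```python
import boto3

# Ceph's RADOS Gateway speaks the S3 API, so standard S3 clients work.
# Endpoint and credentials below are hypothetical, for illustration only.
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:7480",  # placeholder RGW endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="capacity-plan-demo")
s3.put_object(Bucket="capacity-plan-demo", Key="logs/archive-01.gz",
              Body=b"unstructured data lands here")
obj = s3.get_object(Bucket="capacity-plan-demo", Key="logs/archive-01.gz")
print(obj["Body"].read())
```

Because the interface is commodity S3, applications written against it aren't locked to any one appliance vendor.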

Another alternative is to look at storage as a service as a way to avoid capital expenses for in-house storage. Companies such as Zadara Storage offer an on-premises rental approach for enterprise-class storage that allows for rapid configuration changes and experimenting with state-of-the-art gear such as all-flash arrays, flash caches, SATA SSDs and cache-fronted high-capacity hard drives, without the risk of committing capital budget.

Achieving much more storage for much less money means throwing out the old RAID playbook, looking at new-age vendors and considering new approaches to the storage problem.

Next Steps

Navigating the data storage capacity planning waters

Combination of SSDs and object storage almost ready for center stage

Data storage performance and acceleration still worry storage admins
