How to effectively compare storage system performance
Your power as a buyer is stronger before the sale closes: Ask for what's required to effectively evaluate storage vendors' system performance.
Storage vendors love to talk about their performance specifications, such as IOPS, throughput and latencies. The problem is that there's no standardization of those specifications among vendors.
There is little guidance on how vendors arrive at those storage system performance numbers or how the numbers correlate to a specific workload or application, and each vendor measures performance differently. Add in different storage types -- block, file and object -- and it gets even more complicated.
You, the IT pro, are essentially left to your own devices to figure it out. So, how should you compare performance?
Specific performance metric misinterpretations
Storage system performance metrics generally measure three things: latency, the delay of reads and writes; IOPS; and throughput, the number of bytes moved per second.
The problem is how they're measured: it's difficult to tell exactly what a given storage vendor actually measures.
Organizations should consider multiple factors to standardize vendors' performance claims.
The type of storage matters, too. Block storage generally focuses on latency and IOPS. File and object storage tend to focus more on throughput.
Storage vendors generally measure with a process that delivers the best results for the given vendor. It does not necessarily correlate to the real world or specific workloads.
Benchmarks can be an accurate storage system performance metric. However, vendors can manipulate results through the configuration they test, which is often one you would rarely or never buy. That doesn't mean the benchmark is inaccurate; it means it's not accurate for the configuration the vendor actually proposes.
Regardless of their performance assertions, it's crucial to know how vendors measured that performance and how that performance applies to the proposed storage configuration.
Proof-of-concept partiality
The goal of most storage salespeople is to get their storage system installed and tested at the customer's office in a proof of concept (POC). They know that, once their product is on-site and applications are running against it, the organization is likely to buy the storage system.
The problem with POCs for the busy IT administrator is the resources, effort and time they take.
There is another approach, but it does require some work.
A methodology to compare storage system performance
First, determine all the workloads that you will place on the proposed storage system. Next, figure out the peak load on the storage system or systems you plan to replace. Then, estimate both the capacity and peak performance growth rate within the next three years.
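For example, here is a minimal sizing sketch in Python. All of the current peaks and growth rates are hypothetical placeholders; substitute the figures you actually measure on the systems being replaced.

```python
# Sizing sketch: project measured peaks forward by an estimated annual
# growth rate. All figures below are hypothetical placeholders.

def project(current: float, annual_growth_pct: float, years: int = 3) -> float:
    """Compound a measured peak forward by an estimated annual growth rate."""
    return current * (1 + annual_growth_pct / 100) ** years

peak_iops = 250_000          # assumed: peak IOPS across all workloads today
peak_throughput_gbps = 12.0  # assumed: peak throughput today, GBps
capacity_tb = 400            # assumed: usable capacity consumed today, TB

print(f"3-year peak IOPS target:       {project(peak_iops, 25):,.0f}")
print(f"3-year peak throughput target: {project(peak_throughput_gbps, 20):,.1f} GBps")
print(f"3-year capacity target:        {project(capacity_tb, 30):,.0f} TB")
```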
Now, for the fun part: Ask the vendor to simulate your peak workloads in its labs on the proposed configuration. Insist on oversight of its testing, and have the vendor provide the results in writing. This shifts the risk to the storage vendor or reseller and off of you.
What if this methodology is simply impractical for you? Then, go to the following secondary methodology.
Normalize storage system performance metrics
The second option requires more work. Gather each storage system's performance metrics that are relevant to the workloads that you plan to run. Make sure each vendor or reseller provides a written explanation of what it measured and how, including all configurations. This is critical for several reasons, including cost and performance.
If, for example, the claim is 171 GBps throughput but it requires 352 server nodes at two rack units (RUs) each, there are a lot of other costs to consider, such as power, cooling, uninterruptible power supply, networking, cables, transceivers and rack space. Data center fixed overhead costs are allocated based on RUs consumed. These costs are generally not included in a proposal that covers storage system price, software subscriptions, implementation and maintenance. Nor are they included when the storage vendor charges on a cloud-style, on-demand model.
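A quick back-of-the-envelope sketch makes those overhead costs concrete. The per-RU allocation, per-node power draw and electricity rate below are assumptions for illustration only; plug in your own data center's figures.

```python
# Footprint math for the hypothetical 352-node configuration above.
# Overhead, power and electricity figures are assumptions, not vendor data.

nodes = 352
ru_per_node = 2
total_ru = nodes * ru_per_node   # rack units of space consumed

ru_overhead_per_month = 40.0     # assumed: allocated $/RU/month (space, cooling, UPS)
watts_per_node = 600.0           # assumed: average draw per node
kwh_rate = 0.12                  # assumed: $/kWh

monthly_overhead = total_ru * ru_overhead_per_month
monthly_kwh = nodes * watts_per_node * 24 * 30 / 1000
monthly_power_cost = monthly_kwh * kwh_rate

print(f"Rack units consumed:      {total_ru}")
print(f"Allocated overhead/month: ${monthly_overhead:,.0f}")
print(f"Power/month:              {monthly_kwh:,.0f} kWh (~${monthly_power_cost:,.0f})")
```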
Next, compare the applicable metrics from the specific storage systems and find common denominators. For example, if one storage vendor reports 4 KB sequential read IOPS and another reports 4 KB random read IOPS, ask both vendors to provide the same metric. Odds are a vendor has the numbers even when it doesn't publish them.
The best metrics to compare are the ones most applicable to the applications and workloads you will run. If the application is an Oracle database, the performance metric most applicable is 8 KB mixed read/write random IOPS. When the vendor only provides the 4 KB variation, there is a way to roughly estimate the 8 KB results -- simply divide the 4 KB results in half. If the vendor objects, ask for actual 8 KB test results. Use this same simple math for other I/O sizes.
Throughput is somewhat more difficult to standardize, especially if vendors don't supply it. You can roughly calculate it by multiplying the sequential read IOPS by the size of the I/O.
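The two rough conversions above -- rescaling IOPS across I/O sizes and deriving throughput from IOPS -- can be captured in a short sketch. The vendor figures are hypothetical, and the halving rule is only an approximation to use until the vendor supplies real test results.

```python
# Normalization sketch: put vendor IOPS claims on a common I/O size and
# roughly derive throughput. Vendor figures below are hypothetical.

def scale_iops(iops: float, from_kb: int, to_kb: int) -> float:
    """Roughly rescale IOPS: doubling the I/O size roughly halves IOPS."""
    return iops * from_kb / to_kb

def throughput_gbps(iops: float, io_size_kb: int) -> float:
    """Rough throughput estimate: IOPS multiplied by the I/O size."""
    return iops * io_size_kb / 1_000_000  # KB/s to GBps (decimal units)

vendor_a_4k_iops = 1_200_000  # assumed: published 4 KB random read IOPS
vendor_b_8k_iops = 550_000    # assumed: published 8 KB random read IOPS

# Put both vendors on an 8 KB basis to match an Oracle-style workload.
print(f"Vendor A, est. 8 KB IOPS: {scale_iops(vendor_a_4k_iops, 4, 8):,.0f}")
print(f"Vendor B, 8 KB IOPS:      {vendor_b_8k_iops:,.0f}")

# Rough throughput from an assumed 64 KB sequential read IOPS figure.
print(f"Vendor A, est. throughput: {throughput_gbps(180_000, 64):,.1f} GBps")
```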
Latency is the most difficult to standardize, especially when vendors measure it differently. There are many factors that affect application latency, such as storage system load, storage capacity utilization, storage media, storage DRAM caching, storage network congestion, application server load, application server utilization and application server contention.
The most important questions to ask are how the vendor measured latency, under what loads and from where. Measuring to and from the interconnect on the storage controller is the best point of normalization, because it takes nonstorage variables out of the equation.
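To see why the measurement point matters, here is a simple breakdown with assumed component latencies; the figures are illustrative only.

```python
# Illustration of measurement point: the latency an application sees includes
# several nonstorage components on top of what the controller delivers.
# All component figures are assumptions.

controller_ms = 0.25  # assumed: measured at the storage controller interconnect
network_ms = 0.15     # assumed: switch hops and congestion
host_ms = 0.35        # assumed: host I/O stack, server load and queuing

end_to_end_ms = controller_ms + network_ms + host_ms
print(f"Latency at the controller: {controller_ms:.2f} ms")
print(f"Latency the app observes:  {end_to_end_ms:.2f} ms "
      f"({end_to_end_ms / controller_ms:.1f}x the controller figure)")
```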
Once you standardize the performance of each potential storage system to a common denominator, calculate the total cost of ownership (TCO) for each storage system over three to five years. Then, calculate TCO per unit of performance.
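As a final step, a sketch along these lines divides each system's TCO by its normalized performance; the dollar and IOPS figures are hypothetical placeholders.

```python
# TCO-per-performance sketch. All figures are hypothetical placeholders.

systems = {
    # name: (5-year TCO in dollars, normalized 8 KB mixed random IOPS)
    "Vendor A": (2_400_000, 600_000),
    "Vendor B": (1_900_000, 550_000),
}

for name, (tco, iops) in systems.items():
    print(f"{name}: ${tco / iops:.2f} of 5-year TCO per normalized IOPS")
```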