Understanding storage performance metrics
Learn how to accurately measure storage performance metrics in your data storage system, and discover why your results may differ from vendor benchmark numbers.
Data storage vendors often release benchmark numbers for their systems to give customers an expectation of storage performance levels. But smart customers know not to trust these benchmarks completely because they don't always match real-world performance levels. We'll look at the factors benchmarks measure and why you might get substantially different results than those claimed by the vendor.
Disk throughput, latency and seek time
The most basic storage benchmark is megabytes per second (MBps), which measures disk throughput. As data is moved, performance is gated by the read speed of the source, the write speed of the destination, and the buffers and buses in between.
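As a rough illustration of how MBps is measured, here is a minimal Python sketch that times a sequential write and reports throughput. The file name and size are arbitrary assumptions, and a purpose-built benchmark tool such as fio controls caching, queue depth and I/O patterns far more rigorously.

```python
import os
import time

FILE_PATH = "throughput_test.bin"  # hypothetical scratch file name
SIZE_MB = 256                      # illustrative test size
CHUNK = 1024 * 1024                # write in 1 MB chunks

buf = os.urandom(CHUNK)  # incompressible data so compression can't cheat

start = time.perf_counter()
with open(FILE_PATH, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())  # push data to the device, not just the OS cache
elapsed = time.perf_counter() - start

print(f"Sequential write: {SIZE_MB / elapsed:.1f} MBps")
os.remove(FILE_PATH)
```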
Manufacturers often report a single throughput figure for their products, but real-world results frequently differ. The physical geometry of spinning disks causes throughput to fall as the heads move from the outer tracks to the innermost tracks, and while a personal computer used by one person often has uncontested access to the disk, servers must contend with congestion from competing application demands. Treat a manufacturer's claimed throughput as a realistic maximum, not a daily expectation.

Because hard disk drives are mechanical devices, their read/write heads take time to move into position as data is requested. This introduces seek time and rotational latency, which can slow disk performance significantly, especially in busy multitasking environments. Manufacturers generally report an average seek time in milliseconds for each drive model, and rotational latency can be calculated from spindle speed. Although not especially useful on their own, these metrics become important when trying to predict overall storage system performance.
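The arithmetic is simple: average rotational latency is the time for half a revolution, and adding the average seek time yields a rough ceiling on random operations per second for a single spindle. The drive figures in this sketch are illustrative assumptions, not specifications for any particular product.

```python
def avg_rotational_latency_ms(rpm: float) -> float:
    """Half a revolution, on average, converted to milliseconds."""
    return (60_000 / rpm) / 2

def est_random_iops(avg_seek_ms: float, rpm: float) -> float:
    """Ignore transfer time; service time ~ seek + rotational latency."""
    service_ms = avg_seek_ms + avg_rotational_latency_ms(rpm)
    return 1000 / service_ms

# Illustrative drive classes: a 7,200 rpm and a 15,000 rpm disk
for rpm, seek in [(7200, 8.5), (15000, 3.5)]:
    print(f"{rpm} rpm: rotational latency "
          f"{avg_rotational_latency_ms(rpm):.2f} ms, "
          f"~{est_random_iops(seek, rpm):.0f} random IOPS per spindle")
```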
Solid-state drive (SSD) media can dramatically outperform mechanical hard disk drives. Data can be read from any portion of a flash memory drive with virtually no seek time or latency. Read throughput is high as well, but writes are a different story. Because data is stored in large blocks that must be entirely rewritten when even a single bit has changed, throughput and latency of write requests can be unpredictable.
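A bit of illustrative arithmetic shows why small flash writes can be so costly: if a small logical update forces the drive to rewrite an entire erase block, the worst-case write amplification is simply the ratio of the two sizes. The block and write sizes below are assumptions for illustration only.

```python
ERASE_BLOCK_KB = 512   # hypothetical flash erase-block size
LOGICAL_WRITE_KB = 4   # a single small application update

worst_case_amplification = ERASE_BLOCK_KB / LOGICAL_WRITE_KB
print(f"Worst-case write amplification: {worst_case_amplification:.0f}x")
# Real controllers mitigate this with wear leveling and garbage
# collection, which is exactly why write latency becomes unpredictable.
```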
Measuring IOPS
Enterprise storage systems incorporate technologies designed to overcome the limitations inherent in hard disk drives, and many familiar storage benchmarks are designed with certain assumptions about these systems. You're likely to encounter two types of benchmarks: synthetic measurements of theoretical performance and real-world application runs.
The grandfather of all storage performance metrics is a simple statement of the number of input/output operations per second (IOPS) that a subsystem can sustain. IOPS can be calculated or measured, and the figure varies significantly with the size and direction of the operations involved: performance can differ dramatically depending on the size of each I/O, and most systems are optimized to handle reads and writes of a few kilobytes.
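To see what an IOPS measurement looks like in practice, here is a minimal sketch that issues random 4 KB reads against a pre-created test file on a POSIX system. The path and operation count are illustrative assumptions, and real benchmarks also control for OS caching and queue depth.

```python
import os
import random
import time

PATH = "iops_test.bin"  # hypothetical pre-created test file
IO_SIZE = 4096          # most systems are tuned for I/O of a few KB
OPS = 10_000

max_block = os.path.getsize(PATH) // IO_SIZE - 1

fd = os.open(PATH, os.O_RDONLY)
start = time.perf_counter()
for _ in range(OPS):
    # Read 4 KB from a random aligned offset in the file
    os.pread(fd, IO_SIZE, random.randint(0, max_block) * IO_SIZE)
elapsed = time.perf_counter() - start
os.close(fd)

print(f"~{OPS / elapsed:.0f} random-read IOPS (OS cache effects included)")
```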
Writing data to a RAID set or flash drive is often substantially slower than reading that data back. Fortunately, most applications read more than they write. Therefore, most benchmarks include a mix of read and write operations, often skewed toward reads.
Theoretical models exist to predict both the IOPS an application requires and the IOPS a RAID set can sustain. The theoretical performance of a given RAID level is often expressed as a write penalty, since parity RAID tends to slow writes more than reads, as the sketch below illustrates.
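The write penalty lends itself to a back-of-envelope calculation: each host write fans out into multiple disk I/Os, so effective IOPS equal raw IOPS divided by a weighted read/write mix. The spindle count and mix below are illustrative assumptions, using the commonly cited penalties of 2 for RAID 10, 4 for RAID 5 and 6 for RAID 6.

```python
def effective_iops(raw_iops: float, read_pct: float, penalty: int) -> float:
    """Host-visible IOPS once each write costs 'penalty' disk I/Os."""
    write_pct = 1 - read_pct
    return raw_iops / (read_pct + write_pct * penalty)

raw = 8 * 180  # e.g., eight spindles at ~180 IOPS each (illustrative)
for name, penalty in [("RAID 10", 2), ("RAID 5", 4), ("RAID 6", 6)]:
    print(f"{name}: ~{effective_iops(raw, 0.7, penalty):.0f} host IOPS "
          f"at a 70/30 read/write mix")
```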
Keep in mind that most enterprise systems include cache that accelerates I/O. This tends to upset assumptions based on theoretical IOPS and RAID levels because cache equalizes and accelerates performance.
Standard Performance Evaluation Corporation and synthetic app benchmarks
Because theoretical performance metrics often don't hold up when it comes to real-world systems, many people rely on application workloads for benchmarking. There are two main benefits to using application-based benchmarks: they allow apples-to-apples comparisons of systems with vastly different capabilities, and they suggest real-world storage performance expectations.
For example, a benchmark demonstrating Microsoft Exchange server performance would suggest that customers deploying a similar configuration should experience similar results. Synthetic metrics like IOPS force customers to model their systems and draw conclusions, introducing a greater potential for error.
Yet applications and expectations change frequently, and it's useful to compare the performance of systems separated by years or decades. This has led to the creation of synthetic application benchmarks, with the Standard Performance Evaluation Corporation (SPEC) leading the way. Rather than using actual applications, SPEC benchmarks rely on a mix of I/O intended to simulate a generic application. This balances the need for real-world performance and consistency over time.
Overall, if you seek to understand storage performance metrics, it's important to consider the implications of the questions you ask. Are you looking for real-world application benchmarks, synthetic comparisons over time, or basic measures of throughput and I/O performance? Each can be valid, and can assist in the selection of appropriate technology.
BIO: Stephen Foskett is an active participant in the world of enterprise information technology, currently focusing on enterprise storage and cloud computing. He is responsible for Gestalt IT, a community of independent IT thought leaders, and organizes their Tech Field Day events. A long-time voice in the storage industry, Foskett has authored numerous articles for industry publications and is a frequent presenter at industry events, including Storage Decisions. His contributions to the enterprise IT community have earned him recognition as both a Microsoft MVP and VMware vExpert. He can be found online at GestaltIT.com, FoskettS.net and on Twitter at @SFoskett.