How do I prevent an I/O bottleneck in hybrid cloud?
In a hybrid cloud deployment, what are some ways to eliminate I/O bottlenecks that hurt application performance?
When building a hybrid cloud, IT teams should carefully consider their performance requirements to avoid an I/O bottleneck. These requirements are use-case dependent.
At one extreme, databases pound on storage and can never get enough I/O operations per second (IOPS); at the other, web servers can run on relatively little I/O. A high-end server that only runs database instances would likely need several thousand IOPS to remove the storage bottleneck completely, while a server hosting 2,000 web server containers would likely need only about 1,000 IOPS in aggregate, or roughly 0.5 IOPS per container.
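As a rough illustration, you can estimate aggregate IOPS demand from per-instance figures. The Python sketch below uses hypothetical per-workload numbers drawn from the examples above; real requirements vary by application and should be measured.

    # Rough IOPS sizing sketch; per-workload figures are illustrative
    # assumptions drawn from the examples above, not measured values.
    WORKLOAD_IOPS = {
        "database_instance": 5000,  # databases pound on storage
        "web_container": 0.5,       # 2,000 containers ~ 1,000 IOPS total
    }

    def required_iops(workloads):
        """Sum per-instance IOPS demand across a mix of workloads."""
        return sum(WORKLOAD_IOPS[kind] * count
                   for kind, count in workloads.items())

    # One database instance plus 2,000 web containers on the same storage:
    print(required_iops({"database_instance": 1, "web_container": 2000}))
    # -> 6000.0 IOPS, dominated by the database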
Local instance stores can ease the load on networked I/O, but storage performance must still increase to keep pace with the cloud's compute power and agility.
Consider flash and solid-state drive (SSD) storage to increase IOPS and decrease the likelihood of an I/O bottleneck. All-flash arrays can reach one million or more IOPS, while new storage appliances with, for example, 12 inexpensive Serial Advanced Technology Attachment (SATA) SSDs can reach 500,000 IOPS. But fast networked storage puts enormous pressure on networks. Consider dedicated 10 Gigabit Ethernet (GbE) storage local area networks, which will begin migrating to 25 GbE this year.
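To see why fast storage pressures the network, convert an IOPS figure into line-rate bandwidth. The sketch below assumes a 4 KB average I/O size purely for illustration; real workloads mix block sizes, so treat the result as an order-of-magnitude estimate.

    # Convert an IOPS target into approximate network bandwidth.
    # The 4 KB I/O size is an assumed average, for illustration only.
    def required_gbps(iops, io_size_bytes=4096):
        """Approximate line rate, in gigabits per second, for an IOPS load."""
        return iops * io_size_bytes * 8 / 1e9

    for iops in (500_000, 1_000_000):
        print(f"{iops:>9,} IOPS at 4 KB ~ {required_gbps(iops):.1f} Gbps")
    # 500,000 IOPS at 4 KB ~ 16.4 Gbps, already beyond a single 10 GbE link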
In a hybrid cloud, fast SSD and flash technology can increase storage performance for the local private cloud, but the bridge to the public cloud requires further planning, especially if you use cloud bursting. The problem is that wide area network (WAN) transfers remain slow.
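The scale of the WAN problem is easy to quantify: moving even a modest data set across a typical WAN link takes hours. The link speeds below are illustrative assumptions.

    # Estimate WAN transfer times; link speeds are illustrative assumptions.
    def transfer_hours(data_gb, link_mbps):
        """Hours to move data_gb gigabytes over a link_mbps megabit/s link."""
        return data_gb * 8000 / link_mbps / 3600

    for mbps in (100, 1000):
        print(f"1 TB over {mbps} Mbps ~ {transfer_hours(1000, mbps):.1f} hours")
    # 1 TB over 100 Mbps ~ 22.2 hours; even at 1 Gbps, roughly 2.2 hours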
To further reduce the chances of an I/O bottleneck, form a data management strategy that positions data as close as possible to where it will be used. This requires duplicating data sets in each cloud environment. This model works for most types of data, including application code, tool sets and operating systems, as well as computer-aided design (CAD) libraries and customer history files.
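One simple way to keep those duplicate copies in place is a scheduled one-way sync from the private environment into public cloud object storage. A minimal sketch, assuming the AWS CLI is installed; the bucket and path names are placeholders, and other providers offer equivalent tools.

    # Preposition read-mostly data sets in the public cloud with a
    # periodic one-way sync. Bucket and paths are hypothetical examples.
    import subprocess

    DATA_SETS = {
        "/data/cad-libraries":    "s3://example-burst-bucket/cad-libraries",
        "/data/customer-history": "s3://example-burst-bucket/customer-history",
    }

    def preposition():
        for local_path, remote_uri in DATA_SETS.items():
            # 'aws s3 sync' copies only new or changed files.
            subprocess.run(["aws", "s3", "sync", local_path, remote_uri],
                           check=True)

    if __name__ == "__main__":
        preposition()  # run from cron or a scheduler ahead of any burst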
For critical data, such as inventory levels, a single authoritative copy is essential for data consistency. Often, these are database records, and a cloud bursting model can use sharding to distribute processing and the associated data. If IT teams plan this in advance, they can preposition a snapshot of a portion of the database in the public cloud, as sketched below. Then, during a burst, that section of the database syncs with any changes made to the current, private version. The public cloud offers many instance and storage options, so sandbox the application on candidate configurations to determine which performs best.
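A minimal sketch of that sharding approach, assuming a hash-based shard key and a timestamped change log; the shard count, key format and log structure are illustrative, not tied to any particular database.

    # Illustrative shard routing and delta sync for cloud bursting.
    # Shard count, key format and change-log shape are assumptions.
    import hashlib

    NUM_SHARDS = 8
    PUBLIC_CLOUD_SHARDS = {6, 7}  # shards prepositioned as a snapshot

    def shard_for(key):
        """Stable hash of the shard key, e.g. a customer ID."""
        return int(hashlib.sha256(key.encode()).hexdigest(), 16) % NUM_SHARDS

    def changes_to_ship(change_log, snapshot_time):
        """Select private-side changes made after the snapshot that belong
        to shards already living in the public cloud."""
        return [c for c in change_log
                if c["ts"] > snapshot_time
                and shard_for(c["key"]) in PUBLIC_CLOUD_SHARDS]

    log = [{"key": "cust-1001", "ts": 105, "op": "update"},
           {"key": "cust-2002", "ts": 120, "op": "insert"}]
    print(changes_to_ship(log, snapshot_time=100))  # ship only burst shards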