Reduce hybrid cloud latency with these five tips
With the right workload design and infrastructure changes, organizations can minimize latency and boost hybrid cloud performance.
Hybrid clouds are a popular way to extend local data centers, giving businesses more flexibility for workload balancing and management. Hybrid clouds allow businesses to burst or migrate workloads to the public cloud when additional computing resources are needed, and to prepare for disasters through redundant workload architectures. But it takes a network to connect private and public clouds, and networks experience unavoidable latency that can cause hybrid cloud performance to suffer.
Here are five ways your business can reduce hybrid cloud latency and maintain top-notch performance.
1. Location, location, location
It takes more time to cover larger distances -- even at the speed of light. So, to reduce hybrid cloud latency, organizations should connect to a public cloud facility that is geographically close to their private cloud data centers.
Large public cloud service providers typically allow customers to select from a variety of facilities in different regions and countries. For example, Amazon Web Services (AWS) maintains three public regions in the U.S. -- one in the east (northern Virginia) and two in the west (northern California and Oregon). A business in Georgia could minimize hybrid cloud latency by connecting to the U.S. East Region rather than a U.S. West Region.
Some companies opt for a distant public cloud for disaster recovery (DR) or business continuity (BC) reasons. However, it's important for businesses to balance performance with location vulnerability.
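One practical way to pick the nearest region is simply to measure round-trip time to each candidate endpoint and choose the fastest. Below is a minimal Python sketch of that idea; the region names and endpoint hostnames are illustrative assumptions, so substitute your own provider's endpoints.

```python
import socket
import time

def connect_latency_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Measure the TCP connect time to host:port, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

def closest_region(latencies: dict) -> str:
    """Return the region name with the lowest measured latency."""
    return min(latencies, key=latencies.get)

# Hypothetical per-region endpoints -- replace with your provider's hostnames.
endpoints = {
    "us-east-1": "ec2.us-east-1.amazonaws.com",
    "us-west-1": "ec2.us-west-1.amazonaws.com",
    "us-west-2": "ec2.us-west-2.amazonaws.com",
}

# In a live environment you would measure each endpoint, e.g.:
#   measured = {r: connect_latency_ms(h) for r, h in endpoints.items()}
# Here, sample numbers stand in for real measurements:
measured = {"us-east-1": 18.0, "us-west-1": 74.0, "us-west-2": 81.0}
print("Closest region:", closest_region(measured))
```

Repeating the measurement at different times of day gives a more honest picture, since congestion varies.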
2. Consider dedicated connections
The public Internet is a single network shared among countless users. This means public Internet connections can experience traffic congestion, causing bottlenecks and increased latency between private and public clouds. To avoid this and reduce hybrid cloud latency, organizations can establish dedicated, or private, connections between the public and private clouds.
A growing number of public cloud providers offer direct connection services, including Azure ExpressRoute, AWS Direct Connect and Google Direct Peering. Customers can choose the direct connection bandwidth that is most appropriate for their cloud traffic needs.
Direct connections, however, come with caveats. Businesses must pay for connectivity with a telecom provider, and will also have additional cloud service provider costs.
3. Use cloud caches
Hybrid cloud latency is most noticeable when moving massive amounts of data -- such as big data sets -- between private and public sites. For example, a redundant workload may draw from a central data store in the private cloud to back that data up in the public cloud. However, there is no need to move or remotely access data that is already on hand. Caching can help avoid hybrid cloud latency by allowing workloads to reuse recently accessed content.
For example, if a public cloud workload calls for a 10 MB file, moving that file for the first time could cause latency. But once that file is moved and retained in a cloud cache, that same data can be accessed again from the cache, rather than having to move it from the private cloud again.
One example of a cloud cache is Amazon ElastiCache, a managed in-memory caching service that can sit in front of data stores and APIs in AWS. There are also intelligent caching appliances, such as Blue Coat's CacheFlow, that can cache data and web content, including video.
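The cache-hit behavior described above can be sketched in a few lines of Python. The `fetch_from_private_cloud` function below is a hypothetical stand-in for a real WAN transfer; the point is that only the first call pays the transfer cost.

```python
import functools

def fetch_from_private_cloud(path: str) -> bytes:
    """Hypothetical stand-in for pulling a file across the WAN."""
    print(f"transferring {path} from the private cloud...")
    return b"x" * (10 * 1024 * 1024)  # stand-in for a 10 MB file

@functools.lru_cache(maxsize=128)
def fetch_cached(path: str) -> bytes:
    """First call pays the transfer cost; repeat calls hit the local cache."""
    return fetch_from_private_cloud(path)

data1 = fetch_cached("datasets/report.bin")  # triggers the transfer
data2 = fetch_cached("datasets/report.bin")  # served from cache, no transfer
assert data1 is data2  # same cached object, nothing moved twice
```

Production caches such as ElastiCache add eviction policies, expiry and shared access across instances, but the principle is the same: keep recently used data close to the workload that needs it.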
4. Optimize traffic
Even with caching, there are instances when data has to be moved within a hybrid cloud, and this can cause an increase in network traffic. Latency will occur each time a packet moves across the Internet, but businesses can reduce the cumulative latency of those packets by reducing the number of packets they send. Traffic or WAN optimization devices can help achieve this.
These devices use technologies like protocol acceleration, data compression and quality of service, depending on the data type. Data compression, as an example, packs more data into each packet, lowering the total number of packets needed to move a file.
There are numerous traffic optimization devices, including Riverbed SteelHead appliances, Avere's FXT edge filers and Blue Coat's MACH5 appliances.
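To see how compression cuts packet counts, consider this small Python sketch using the standard `zlib` library. The 1,400-byte payload-per-packet figure is a rough assumption for illustration, and the repetitive CSV-style payload stands in for the kind of data (logs, exports) that compresses well.

```python
import zlib

MTU_PAYLOAD = 1400  # rough bytes of payload per packet (assumption for illustration)

# A compressible payload: repetitive text, as logs and CSV exports often are.
payload = b"timestamp,host,status\n" + b"2016-01-01T00:00:00,web-01,200\n" * 5000

compressed = zlib.compress(payload, level=9)

packets_raw = -(-len(payload) // MTU_PAYLOAD)         # ceiling division
packets_compressed = -(-len(compressed) // MTU_PAYLOAD)

print(f"raw: {len(payload)} bytes -> {packets_raw} packets")
print(f"compressed: {len(compressed)} bytes -> {packets_compressed} packets")

assert zlib.decompress(compressed) == payload  # lossless round trip
```

Real WAN optimizers combine compression with deduplication and protocol acceleration, and the savings depend heavily on how compressible the data is -- already-compressed formats like video or encrypted data gain little.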
5. Optimize workloads
Developers shouldn't overlook the importance of workload design and its influence on network latency. Workloads designed to use massive data sets, or deal with critical, time-sensitive data, can be extremely sensitive to network latency. Redesigning or re-architecting these workloads to better accommodate hybrid cloud environments can alleviate some latency issues.
For example, rather than designing a workload so that it demands a massive data set to perform an action, design it to call only a subset of that data for a particular task. There is no single, "right" way to optimize an application for hybrid cloud, but developers should take the time to assess possible design changes that could help.
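The subset-over-full-set idea above can be sketched as a filter pushdown: apply the predicate at the data source, so only the rows a task needs cross the network. The `fetch_rows` function and the sample table below are hypothetical illustrations, not a specific library API.

```python
def fetch_rows(table, predicate):
    """Filter at the data source, so only matching rows cross the network."""
    return [row for row in table if predicate(row)]

# Sample table standing in for a large central data store.
orders = [
    {"id": 1, "region": "east", "total": 120.0},
    {"id": 2, "region": "west", "total": 75.5},
    {"id": 3, "region": "east", "total": 42.0},
]

# Instead of transferring the whole table and filtering locally,
# push the filter down so only the needed subset moves.
east_orders = fetch_rows(orders, lambda r: r["region"] == "east")
print(len(east_orders))  # 2
```

In practice this pushdown might be a SQL `WHERE` clause, an object-store byte-range request or a query parameter on an internal API -- the design goal is the same: move less data per task.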