
Right-size a cloud VM for better performance, lower costs

Most companies don't pay enough attention to VM selection during cloud app deployment -- which can drive up costs. Follow these steps to right-size a VM and stay within budget.

Even if most IT and business execs don't want to hear it, chances are their enterprise overpays for VMs in the cloud.

One of the primary drivers behind wasted cloud spend is a lack of VM right-sizing. The good news, though, is that there are concrete steps an organization can follow to properly size a cloud VM.

Public cloud providers price their resources based on reservation model and size. A customer pays more upfront for reserved instances than for on-demand instances, and more for larger VMs -- those with more compute, memory and storage resources -- than for smaller ones. Larger, more expensive instance types deliver higher performance, but the goal is to find the best balance between performance and cost.
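To make that tradeoff concrete, here is a minimal Python sketch that compares the monthly cost of running on demand against a reservation at different utilization levels. Both prices are hypothetical placeholders, not real rates -- substitute your provider's published pricing for the instance type you're evaluating.

# Rough break-even comparison between on-demand and reserved pricing.
# The prices below are hypothetical placeholders -- replace them with your
# provider's published rates for the instance type under consideration.

HOURS_PER_MONTH = 730

on_demand_hourly = 0.096           # hypothetical on-demand $/hour
reserved_monthly_effective = 45.0  # hypothetical effective monthly cost of a 1-year reservation

def monthly_on_demand_cost(utilization):
    """Cost of running on demand for a given fraction of the month (0.0-1.0)."""
    return on_demand_hourly * HOURS_PER_MONTH * utilization

for utilization in (0.25, 0.5, 0.75, 1.0):
    od = monthly_on_demand_cost(utilization)
    cheaper = "reserved" if reserved_monthly_effective < od else "on-demand"
    print(f"{utilization:>4.0%} utilization: on-demand ${od:6.2f} vs reserved ${reserved_monthly_effective:.2f} -> {cheaper}")

The pattern is typical: a VM that runs only part of the month tends to favor on-demand pricing, while one that runs around the clock tends to favor a reservation.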

Steps to right-size

The first step to right-size a cloud VM is to right-size machine images. Companies tend to take the easy way out when they compose machine images for deployment and adopt a single, standard set of OS and middleware elements. That often results in images that are as much as 40% larger than they need to be. To start the optimization process, define which middleware elements or OS features a given application needs to run, then pull anything else out. That leaves space for buffers -- or otherwise unused VM space that holds I/O data for improved performance -- and might let you reduce VM size.
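As a rough illustration of that audit, the following Python sketch diffs the packages baked into a standard image against the list an application actually requires. The file names are assumptions; on a Debian-style image, the installed list could be generated with something like dpkg-query -W -f='${Package}\n' > installed.txt.

# Minimal sketch: compare what a standard machine image installs against
# what one application actually needs. File names and package lists are
# hypothetical -- generate them from your own image and your app's
# documented requirements.

def load_package_list(path):
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

installed = load_package_list("installed.txt")   # everything in the standard image
required = load_package_list("required.txt")     # what this application's team says it needs

removal_candidates = sorted(installed - required)
print(f"{len(removal_candidates)} packages are candidates for removal:")
for pkg in removal_candidates:
    print(" ", pkg)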

Start this process in the development phase. Software teams should define explicit tool requirements for each application, rather than just propagate a convenient, standard configuration. During application changes and enhancements, revisit the middleware commitments each time to ensure image bloat doesn't creep in.

The second step is to tune configuration parameters and VM memory size based on testing. Linux tends to use whatever memory is available, often to buffer for improved performance. That's one reason users often overallocate memory resources. In most cases, an increase in available memory will improve performance in a nonlinear way; at some point, the rate of improvement will slow or stop. There's no easy way to calculate that point, so test applications on different cloud VM sizes to determine the cost-performance curve for each.
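Once you have test results for several sizes, the comparison itself is simple. The sketch below assumes you've already load-tested the application on a few instance sizes; the size names, hourly prices and throughput numbers are hypothetical placeholders for your own measurements.

# Sketch of a cost-performance comparison across tested VM sizes. Replace
# the hypothetical sizes, prices and measured throughput with your
# provider's instance types and your own load-test results.

results = [
    # (instance size, $/hour, requests/sec measured under production-like load)
    ("small",  0.023, 310),
    ("medium", 0.046, 580),
    ("large",  0.092, 640),   # improvement is already flattening here
    ("xlarge", 0.184, 655),
]

previous_throughput = None
for size, price, throughput in results:
    per_dollar = throughput / price
    gain = "" if previous_throughput is None else f", +{throughput - previous_throughput} req/s over previous size"
    print(f"{size:>6}: {throughput} req/s, {per_dollar:,.0f} req/s per $/hour{gain}")
    previous_throughput = throughput

The size where the marginal gain collapses -- "large" in this made-up data -- marks the knee of the cost-performance curve.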

These kinds of tests require an automated testing tool that's capable of production volume testing, such as load or performance testing. The best tool depends on the nature of an application; some will require distributed, web-driven tests and others more specific transaction tests. Check the free memory usage of your VM -- via the Linux free command, for example -- and pay attention to the swap memory. If you see high swap usage, then your application has too little memory.
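A small script run on the test VM itself can capture those numbers during a load test. This sketch parses the output of free -m; the 10% swap threshold is an assumption, not a hard rule.

# Quick check of memory and swap usage on a running test VM, based on the
# Linux "free" command mentioned above. Run it on the VM while the
# application is under load.

import subprocess

def memory_snapshot():
    out = subprocess.run(["free", "-m"], capture_output=True, text=True, check=True).stdout
    lines = out.splitlines()
    mem = lines[1].split()   # "Mem:" row: total, used, free, shared, buff/cache, available
    swap = lines[2].split()  # "Swap:" row: total, used, free
    return int(mem[1]), int(mem[2]), int(swap[1]), int(swap[2])

mem_total, mem_used, swap_total, swap_used = memory_snapshot()
print(f"Memory: {mem_used}/{mem_total} MiB used, swap: {swap_used}/{swap_total} MiB used")
if swap_total and swap_used / swap_total > 0.10:   # threshold is an assumption
    print("High swap usage -- this VM size likely has too little memory for the workload.")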

Run the application on your presumptive cloud VM size, and retrieve the memory usage figures from the free command. Use them to resize your VM so that it's as close to 1.2 times the used memory as possible -- a safe margin, in most cases. Then, test performance with a VM size that's one level smaller and one level larger.
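Here is a minimal sketch of that 1.2-times rule, assuming a hypothetical set of instance memory tiers; plug in the used-memory figure observed during your own load tests.

# Sketch of the 1.2x sizing rule described above. The used-memory figure
# would come from "free" output gathered under load; the candidate memory
# sizes are hypothetical stand-ins for a provider's instance family.

used_memory_gib = 5.4                    # observed under production-like load
target_gib = used_memory_gib * 1.2       # ~20% headroom, per the rule of thumb above

candidate_sizes_gib = [2, 4, 8, 16, 32]  # hypothetical instance memory tiers
fit = min((s for s in candidate_sizes_gib if s >= target_gib), default=None)

print(f"Target memory: {target_gib:.1f} GiB -> smallest fitting size: {fit} GiB")
# Then load-test the sizes one tier below and one tier above to confirm the choice.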

Next, factor in how your cloud VM will handle multiple applications. If you plan to deploy only a single application into a VM -- rather than pool your VM resources -- then skip this step. But if you do expect to pool resources, make a spreadsheet with the VM size of all your applications, and determine the number of applications by VM size. If you have only a few outliers in terms of typical VM size, settle on the most used size rather than establish a second resource pool. This improves overall utilization, since larger pools are more efficient than smaller ones.
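The spreadsheet exercise can be as simple as a tally like the one below; the application names and sizes are hypothetical.

# Count applications by the VM size each one needs, then standardize on the
# most common size if only a few applications are outliers.

from collections import Counter

app_vm_sizes = {
    "billing":   "medium",
    "reporting": "medium",
    "web-front": "medium",
    "analytics": "large",    # the lone outlier
}

size_counts = Counter(app_vm_sizes.values())
most_common_size, count = size_counts.most_common(1)[0]
outliers = [app for app, size in app_vm_sizes.items() if size != most_common_size]

print(f"Most used size: {most_common_size} ({count} of {len(app_vm_sizes)} apps)")
print(f"Outliers to fold into the {most_common_size} pool or handle separately: {outliers}")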

Reserved vs. on-demand vs. pre-emptive instances

Another factor to consider to right-size cloud VMs is whether to choose a reserved, on-demand or pre-emptive instance type.

Reserved instance types -- such as AWS Reserved Instances, Azure Reserved VM Instances and VMs available via Google's committed use discounts -- are always available, which means users face no delay in spinning one up. This is especially useful for applications with a 24-hour operations requirement. The use of VM pools for multiple applications, as described above, is another justification for reserved instances. In fact, if you plan to build a VM resource pool, you likely will have to use reserved instances, since one of your applications will always need a VM.

Most applications are probably a fit for on-demand VMs, which are less costly -- upfront, at least -- and reduce the risk of being locked into a long-term reserved contract. That said, while on-demand and pre-emptive instances cost less than reserved instances, they pose a greater risk of delayed image loads or, in the case of pre-emptive instances -- such as Google preemptible VMs or AWS Spot Instances -- of having one reclaimed by the provider. Be wary of on-demand, and especially pre-emptive, instances for mission-critical workloads.

The only way to guarantee you make the right decision is through testing.
