
Eight tips to roll a service or app into an AWS deployment


Optimize workload performance with Auto Scaling


Traditional enterprise applications often rely on single instances in a local data center, which can limit a workload's ability to handle increases in traffic. The availability of cloud resources enhances workload scalability in an AWS deployment; you can increase resources for periods of high traffic demand and reduce them as demand wanes.

When you move a new workload, or integrate a new service, into your cloud deployment, use AWS Auto Scaling to monitor the health of Amazon Elastic Compute Cloud (EC2) instances, replace those that fail and maintain availability.
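As a minimal sketch of that setup, the boto3 snippet below creates an Auto Scaling group that spans two Availability Zones and relies on load balancer health checks to replace failed instances. The group name, launch template name, subnet IDs and target group ARN are placeholders, not real resources.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Create an Auto Scaling group that keeps at least two healthy EC2 instances
# running across two subnets (one per Availability Zone). All names and ARNs
# below are hypothetical placeholders.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-app-asg",
    LaunchTemplate={
        "LaunchTemplateName": "web-app-launch-template",  # hypothetical template
        "Version": "$Latest",
    },
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-0aaa1111,subnet-0bbb2222",  # one subnet per AZ
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "targetgroup/web-app/abcdef123456"
    ],
    HealthCheckType="ELB",        # use the load balancer's health checks
    HealthCheckGracePeriod=300,   # seconds to wait before checking new instances
)
```

With ELB health checks enabled, the group terminates instances the load balancer reports as unhealthy and launches replacements to restore the desired capacity.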

Auto Scaling can also spin up EC2 instances across Availability Zones (AZs), balance traffic and ensure adequate compute power. Admins can adjust the number of EC2 instances manually, or let scaling policies add and remove them in response to Amazon CloudWatch metrics to maintain workload performance. Load balancers can also distribute traffic across instances, even as the number of EC2 instances changes over time.
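To tie scaling to CloudWatch metrics rather than manual adjustments, attach a target tracking policy to the group. The sketch below, which assumes the hypothetical group name used above, keeps average CPU utilization near 50 percent by adding or removing instances automatically.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Target tracking policy: Auto Scaling adds or removes instances so the
# group's average CPU utilization (a CloudWatch metric) stays near 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-app-asg",   # hypothetical group from the earlier sketch
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```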

In addition to EC2 instances and Spot Fleet, Auto Scaling works with Amazon EMR (Elastic MapReduce) clusters for big data processing, AppStream 2.0 fleets and DynamoDB read and write throughput capacity.
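Scaling for resources other than EC2 goes through the Application Auto Scaling API. As an illustrative sketch, the snippet below registers a hypothetical DynamoDB table's read capacity as a scalable target and attaches a target tracking policy that keeps consumed reads near 70 percent of provisioned capacity; the table name and policy name are assumptions for the example.

```python
import boto3

app_autoscaling = boto3.client("application-autoscaling", region_name="us-east-1")

# Register the read capacity of a (hypothetical) DynamoDB table as a scalable target.
app_autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/Orders",                            # placeholder table name
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=100,
)

# Scale read capacity so consumed reads stay near 70% of provisioned capacity.
app_autoscaling.put_scaling_policy(
    PolicyName="orders-read-scaling",
    ServiceNamespace="dynamodb",
    ResourceId="table/Orders",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
        "TargetValue": 70.0,
    },
)
```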

Sysops need detailed technical knowledge of these services to assemble and scale the right mix for a workload's architecture. Refine and optimize scaling settings over time to strike the desired balance between performance and cost.
