Control AWS traffic with routing policies
AWS offers four routing policies in Amazon Route 53 Traffic Flow. While each option helps manage heavy traffic loads, admins must consider each policy's specific uses.
Amazon Route 53 Traffic Flow is a cost-effective service that gives IT pros full control over end-user traffic flow to and from hosted applications and websites. The service translates domain names into IP addresses, enabling administrators to dictate how to route AWS traffic. Within Route 53, admins must choose one of four routing options: weighted, failover, geolocation and latency.
Weighted routing. The weighted rule distributes traffic among multiple resource record entries in proportion to weights that the admin assigns. This option provides load balancing at the Route 53 level, controlling how much traffic is routed toward each application endpoint. The rule is useful when a subdomain is served by multiple applications or websites that host the same content in different locations.
A weighted policy requires multiple record sets to resolve a subdomain, each assigned a numeric value from 0 to 255, known as the weight. Route 53 returns each record set a fraction of the time equal to its weight divided by the sum of all weights. For example, if there are three resource record sets for a domain with weights of 1, 1 and 3, respectively, the total is 5 (1+1+3). Based on these weights, Route 53 selects each of the first two resource record sets one-fifth of the time; the third resource record set is returned three-fifths of the time.
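The arithmetic above can be sketched in a few lines of Python. This is an illustration of the selection math only, not Route 53's internal implementation:

```python
# Illustrative sketch of how Route 53 weights responses: each record set
# is returned weight / (sum of all weights) of the time.
def weighted_fractions(weights):
    """Return the share of DNS responses each record set receives."""
    total = sum(weights)
    return [w / total for w in weights]

# Three record sets with weights 1, 1 and 3 (total 5):
print(weighted_fractions([1, 1, 3]))  # [0.2, 0.2, 0.6]
```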
A weighted routing policy is commonly used to migrate an application into a new environment. Consider a scenario in which a company migrates from on premises to an AWS hosted service. Migration to a new environment requires extensive planning and testing to meet service-level agreements. In this scenario, a weighted routing policy is likely the best option.
In one example, at the initial stage of migration, 90% of traffic flow points toward the existing environment and the other 10% points toward the new environment. After several days or weeks, an admin should gradually decrease the weight value of the existing environment and increase the weight value of the new environment. This is also a good way to test whether the new versions of the infrastructure and application can handle the load, easing the transition from one environment to another.
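The 90/10 split described above can be expressed as a Route 53 change batch. The sketch below is hypothetical -- the domain, IP addresses and hosted zone ID are placeholders -- and it only builds the request structure; applying it with boto3 is shown in a comment:

```python
# Hypothetical sketch: two weighted A records for a migration, starting
# at 90% on premises and 10% AWS. All names and IPs are placeholders.
def weighted_change_batch(domain, on_prem_ip, aws_ip, on_prem_weight, aws_weight):
    """Build a Route 53 ChangeBatch that UPSERTs two weighted A records."""
    def record(set_id, ip, weight):
        return {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": domain,
                "Type": "A",
                "SetIdentifier": set_id,   # distinguishes records with the same name
                "Weight": weight,
                "TTL": 60,
                "ResourceRecords": [{"Value": ip}],
            },
        }
    return {"Changes": [
        record("on-prem", on_prem_ip, on_prem_weight),
        record("aws", aws_ip, aws_weight),
    ]}

batch = weighted_change_batch("app.example.com", "203.0.113.10",
                              "198.51.100.20", 90, 10)
# To apply (requires credentials and a real hosted zone):
# boto3.client("route53").change_resource_record_sets(
#     HostedZoneId="Z0000000EXAMPLE", ChangeBatch=batch)
```

To shift traffic over time, an admin would rerun the same UPSERT with adjusted weights, such as 50/50 and then 10/90.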
Latency-based routing. The latency-based rule directs AWS traffic based on DNS query latency measurements. When a subdomain has multiple resource record entries, the rule routes traffic toward the record with the shortest delay. The most common scenario for using latency-based routing is when an application is hosted in more than one AWS region or in a virtual private cloud. When a request comes in to resolve a domain name, the latency-based router checks two factors -- the edge location closest to the IP address during the lookup and the AWS region specified in the record set.
Admins commonly use latency-based routing policies when it's vital to provide low-latency DNS resolution to customers, which in turn affects the performance of the hosted application for a global audience. For example, if a global airline company operates the majority of its services through online and mobile channels, high availability is especially important. This routing policy can help ensure responsive performance for online users.
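Latency-based record sets key on the `Region` field. The sketch below is hypothetical (domain, regions and IPs are placeholders) and builds the record structures Route 53 would compare at resolution time:

```python
# Hypothetical sketch: latency-based A records for the same domain in two
# AWS regions. Route 53 answers with the record whose Region has the
# lowest measured latency to the resolver. Names and IPs are placeholders.
def latency_record(domain, region, ip):
    """One latency-routed record set for a given AWS region."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": domain,
            "Type": "A",
            "SetIdentifier": f"app-{region}",
            "Region": region,          # latency routing keys on this field
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    }

changes = [
    latency_record("app.example.com", "us-east-1", "203.0.113.10"),
    latency_record("app.example.com", "eu-west-1", "198.51.100.20"),
]
```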
Failover routing. The failover rule, also known as active-passive failover, directs traffic based on the availability of resource record entries. That includes a health check for all available record sets in a subdomain to guarantee high availability. The health check function is best used to evaluate either the health of an Elastic Load Balancing endpoint or the health of a particular record set in a hosted zone.
The most common scenario for using the failover policy is with an AWS blue-green deployment or active-passive failover. If a company uses two different AWS regions -- one as active and one as backup -- it provides a failsafe for services during critical periods, such as an outage. When the active region fails, Route 53 switches AWS traffic to the secondary, or backup, region.
Admins create primary and secondary resource record sets for the domain name. The routing policy depends on a health check of the resource record sets. If the health check fails for the primary record set, then the DNS query checks the health of the secondary record set. If a backup is available, traffic routes to the secondary record set. This approach of providing only one active address for underlying resources at a time saves maintenance costs and provides high availability of applications.
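The primary/secondary arrangement above maps to two record sets marked `PRIMARY` and `SECONDARY`, with a health check attached to the primary. The sketch is hypothetical -- the domain, IPs and health check ID are placeholders:

```python
# Hypothetical sketch: active-passive failover records. The primary record
# carries a health check; when it fails, Route 53 answers with the
# secondary. All identifiers below are placeholders.
def failover_record(domain, role, ip, health_check_id=None):
    """One failover record set; role is 'PRIMARY' or 'SECONDARY'."""
    record_set = {
        "Name": domain,
        "Type": "A",
        "SetIdentifier": f"{role.lower()}-endpoint",
        "Failover": role,
        "TTL": 60,
        "ResourceRecords": [{"Value": ip}],
    }
    if health_check_id:
        record_set["HealthCheckId"] = health_check_id
    return {"Action": "UPSERT", "ResourceRecordSet": record_set}

changes = [
    failover_record("app.example.com", "PRIMARY", "203.0.113.10",
                    health_check_id="hc-1234-example"),
    failover_record("app.example.com", "SECONDARY", "198.51.100.20"),
]
```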
Geolocation routing. This rule applies when an enterprise must route traffic based on end-user location, such as when a web server is hosted in two different geographical locations. If an incoming traffic request originates from Canada, the record entry that is set to North America serves the request.
Admins can localize content with a geolocation routing policy, creating sites that serve different languages based on the source of the end-user request. The policy can also restrict content distribution to the regions where the company is permitted to distribute it.
For example, if a company has language preferences set for different locations in which it operates or has customers, end users that access from Spain will see the page in Spanish and users from Germany will see German content. All AWS traffic from a particular geolocation will redirect to one consistent endpoint based on the address record.
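The Spain/Germany example above maps to record sets keyed on a `GeoLocation` country code, plus a default record (country code `*`) for everyone else. The sketch is hypothetical; the domain and IPs are placeholders:

```python
# Hypothetical sketch: geolocation A records serving Spanish and German
# endpoints, with a default ("*") record for all other locations.
# Domain and IPs are placeholders.
def geo_record(domain, country, ip):
    """One geolocation record set keyed on a two-letter country code."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": domain,
            "Type": "A",
            "SetIdentifier": f"geo-{country}",
            "GeoLocation": {"CountryCode": country},
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    }

changes = [
    geo_record("www.example.com", "ES", "203.0.113.10"),   # Spanish site
    geo_record("www.example.com", "DE", "198.51.100.20"),  # German site
    geo_record("www.example.com", "*", "192.0.2.30"),      # default
]
```

Including a default record matters in practice: without it, queries from locations that match no record go unanswered.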