How AWS Fargate reduces container management burdens
Enterprises face a number of management challenges when they adopt containers. While AWS Fargate eliminates some of these hurdles, be sure to mind your bill.
Increasingly, enterprise development and IT ops teams run containers on pools of cloud VMs, called clusters, rather than on individual compute instances, such as those in EC2. While this approach offers greater flexibility, it also creates more work for users, who must configure and manage two environments: the EC2 instances and the container infrastructure that runs on top of them.
Packaged orchestration services, such as Amazon Elastic Container Service (ECS), provide cluster management and the container control plane as a managed service, which helps eliminate some of this overhead. However, users still have to configure and manage the underlying instances that form the cluster.
In an attempt to change this, Amazon released AWS Fargate, which complements ECS and Elastic Container Service for Kubernetes (EKS) and addresses the second piece of the container deployment puzzle: instance management.
Spin up some containers
Similar to Azure Container Instances, AWS Fargate turns individual containers into consumable cloud resources. With Fargate, a developer only needs to specify the parameters for a container, and AWS then deploys it on hardware that AWS manages.
The fundamental elements of a Fargate deployment are compute, networking, storage, access permissions, logging and debugging configuration, and the runtime image -- which are encapsulated in task definitions.
These immutable, versioned documents are nearly identical to ECS task definitions, and developers can write them in JSON or YAML syntax. A task describes one or more containers and runs on a container cluster managed by ECS, with Kubernetes support via EKS coming later in 2018.
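For a sense of the format, here is a minimal sketch of a task definition, written as the Python structure that the AWS SDK for Python (boto3) accepts, which mirrors the JSON document one for one. The family name, image URL and sizes are illustrative, not prescriptive:

    # Minimal Fargate task definition: a single small web container.
    minimal_task = {
        "family": "web-frontend",        # unique family name; AWS assigns the revision number
        "requiresCompatibilities": ["FARGATE"],
        "networkMode": "awsvpc",         # required for Fargate tasks
        "cpu": "256",                    # CPU units; 1,024 units = 1 vCPU, so 256 = 0.25 vCPU
        "memory": "1024",                # MiB; 1 GB, a valid pairing with 0.25 vCPU
        "containerDefinitions": [{
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest",
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }],
    }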
Tasks include the following elements, several of which appear in the sketch after this list:
- A unique family name and version number for the task definition;
- The required CPU resources, which are shared among all containers defined in the task, available in five virtual CPU (vCPU) sizes ranging from 0.25 to 4;
- The total memory for the task, with the available options dictated by the vCPU allocation. For example, 0.25-vCPU tasks can select 0.5, 1 or 2 GB, while 4-vCPU tasks can select any value between 8 and 30 GB in 1 GB increments;
- A list of up to 10 container definitions that set the properties of each container allocated to the same host; each definition includes a name and an image URL that points to either Amazon Elastic Container Registry (ECR) or the public Docker Hub registry;
- An optional CPU and memory sharing parameter that reserves a given fraction of the task's total vCPU and memory for a particular container. For example, if you have a composite microservices-based application that uses separate containers for each service, you might choose to guarantee the front-end web servers a particular amount of vCPU;
- The networking mode, which gives each task a private elastic network interface to an Amazon Virtual Private Cloud, a primary private IP address and an internal Domain Name System hostname. You can also opt to assign public IPs to a task to allow external traffic;
- The task-level load-balancing configuration, with support for Network Load Balancer or Application Load Balancer, the latter of which requires at least two subnets in different availability zones;
- Port mappings for each container in the task definition, which enable container instances to send and receive traffic over particular network ports;
- Storage configuration that supports two types of task storage: ephemeral Docker layer storage, a pool of up to 10 GB per task whose capacity is shared across all of the task's containers, and volume storage. Layer storage keeps each container's data private, while volume storage lets containers in the same task share data, but only for the life of the task -- the data disappears when the task stops. Developers can use up to 4 GB of an Elastic Block Store volume per task definition and configure the associated mount points and volume source path;
- Three tiers of access permissions: cluster permissions, which control the ability to launch or describe tasks; application permissions, which define container access to external AWS resources; and housekeeping permissions, which grant access for various administrative tasks;
- CloudWatch log configuration to define parameters for the awslogs driver, which sends application logs to CloudWatch;
- Custom health check commands, along with parameters such as the check interval, timeout value and retry limit, used to detect failed or unresponsive containers.
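To show how several of these elements fit together, the following sketch registers a task definition through boto3, whose keyword arguments mirror the JSON document. It wires up the two task-level roles, a shared volume, an awslogs log configuration and a container health check; the account ID, role names, image URL and log group are placeholder values:

    import boto3

    ecs = boto3.client("ecs")

    # Each call to register_task_definition creates a new revision of the family.
    ecs.register_task_definition(
        family="web-frontend",
        requiresCompatibilities=["FARGATE"],
        networkMode="awsvpc",                 # the only mode Fargate supports
        cpu="512",                            # CPU units; 1,024 units = 1 vCPU
        memory="2048",                        # MiB; 2 GB, a valid pairing with 0.5 vCPU
        taskRoleArn="arn:aws:iam::123456789012:role/app-role",        # application permissions
        executionRoleArn="arn:aws:iam::123456789012:role/exec-role",  # housekeeping permissions
        volumes=[{"name": "shared-data"}],    # task-scoped volume, mountable by any container
        containerDefinitions=[{
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest",
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            "mountPoints": [{"sourceVolume": "shared-data", "containerPath": "/data"}],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/web-frontend",
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "web",
                },
            },
            "healthCheck": {
                "command": ["CMD-SHELL", "curl -f http://localhost/ || exit 1"],
                "interval": 30,   # seconds between checks
                "timeout": 5,     # seconds before a check counts as a failure
                "retries": 3,     # consecutive failures before the container is unhealthy
            },
        }],
    )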
These nested task and container settings provide everything Fargate needs to deploy multiple containers on a host. Additionally, developers can use ECS to manage Fargate workloads on a virtual cluster when multiple tasks are deployed together.
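Once a task definition is registered, launching it on Fargate is a single API call. Here is a minimal sketch, again using boto3; the cluster name and subnet ID are placeholders:

    import boto3

    ecs = boto3.client("ecs")

    # Launch one copy of the task; Fargate provisions the underlying capacity.
    ecs.run_task(
        cluster="default",
        launchType="FARGATE",
        taskDefinition="web-frontend",   # resolves to the latest revision of the family
        count=1,
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],
                "assignPublicIp": "ENABLED",   # give the task a public IP for external traffic
            }
        },
    )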
AWS Fargate vs. Google Kubernetes Engine pricing
It's difficult to make direct pricing comparisons between Fargate and Google Kubernetes Engine (GKE), as the node sizes are not directly comparable. That said, you can sketch out a rough estimate of how much each service will cost.
Fargate's pricing model is based on a combination of resource usage, namely vCPU and memory allocation, and execution time, billed in one-second increments with a one-minute minimum. Billing starts once a container image begins to download from a repository and stops when the task terminates. So, an application with 10 tasks, each using 0.25 vCPU and 1 GB of memory, that runs for an hour a day would cost $7.61 per month.
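To check that figure with back-of-the-envelope arithmetic, here is a quick sketch, assuming Fargate's launch pricing in US East of $0.0506 per vCPU-hour and $0.0127 per GB-hour (confirm current rates on the AWS pricing page):

    # Assumed launch-era Fargate rates (us-east-1); verify against current pricing.
    VCPU_RATE_PER_HOUR = 0.0506   # dollars per vCPU-hour
    MEM_RATE_PER_HOUR = 0.0127    # dollars per GB-hour

    tasks = 10
    vcpu_per_task = 0.25
    gb_per_task = 1.0
    hours_per_month = 30          # one hour a day

    hourly_cost = tasks * (vcpu_per_task * VCPU_RATE_PER_HOUR + gb_per_task * MEM_RATE_PER_HOUR)
    monthly_cost = hourly_cost * hours_per_month
    print(f"${monthly_cost:.2f} per month")   # about $7.61, matching the estimate above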
With GKE, if you use a separate node for each of the 10 tasks above and select the smallest shared instance that GKE provides -- the f1-micro -- Google's cost calculator estimates that hour-per-day usage would cost $2.31 per month; that's about 30% of Fargate's cost. However, GKE requires thorough knowledge of Kubernetes management and involves quite a bit more setup than Fargate.
The future of Fargate
Fargate platform version 1.1 adds several enhancements to the service, including support for task metadata endpoints, container health checks and ECS service discovery. Metadata endpoints enable third-party monitoring tools, such as Datadog, to collect Fargate metrics. Health checks detect failed or hung tasks and automatically restart them. And ECS service discovery facilitates connections between containerized services that run in different tasks.
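As an illustration of the first of those features, a process running inside a Fargate task on platform version 1.1 can read its own task metadata from the version 2 endpoint at the fixed link-local address 169.254.170.2. A minimal sketch using only the standard library; the printed fields are a small subset of the response:

    import json
    import urllib.request

    # Version 2 task metadata endpoint; reachable only from inside a running task.
    METADATA_URL = "http://169.254.170.2/v2/metadata"

    with urllib.request.urlopen(METADATA_URL) as resp:
        metadata = json.load(resp)

    # A few of the fields a monitoring agent might collect.
    print(metadata["TaskARN"], metadata["Family"], metadata["Revision"])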
AWS Fargate can be a boon for organizations that view container deployments as a hassle, as the service removes the need to manage EC2 instances. As IT teams gain experience running containerized applications, they can extend Fargate deployments to ECS -- and eventually EKS -- for cluster management, with the comfort of knowing they can reuse the same task configurations while gaining more resource options and control.