Follow this AWS Fargate tutorial to deploy a containerized app
Enterprises increasingly build their software on containers, which offer both flexibility and reliability across multiple compute environments. AWS has several services to enable container use in its public cloud, including AWS Fargate.
Fargate is a serverless compute engine for Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS) customers who don't want to manage container infrastructure. This service is a good place to start for organizations still getting familiar with containers on AWS or those that don't want the hassle of instance management.
In this AWS Fargate tutorial for beginners, we deploy a container with Amazon ECS. First, we configure a container definition using a preconfigured sample app running HTTPD and map TCP port 80 for the container. Then, we set our task definition resources and create a task execution role that enables logs to be published to Amazon CloudWatch. Finally, we deploy a service in a newly created cluster.
The ECS service ensures that at least one task is always running. The cluster can host multiple services and enable communication between them, and it can also combine Fargate-managed containers with EC2-based containers. With our configuration complete, we identify the public IP address of our task and view the sample webpage.
The deployment includes an associated Amazon Virtual Private Cloud (VPC) with security groups that can be used to create firewall rules. The container and the application are configured with dedicated resources and memory limits. In the video, we also cover the difference between hard and soft memory limits, which affect the scalability and cost of a container-based application.
In this video, we'll demo how to use Amazon's managed container service -- Fargate. We'll open up the ECS service in the Management Console, and then click Get started.
We'll start by configuring our container definition. Our container definition will identify what image our container will use and its resources. Let's go ahead and click on the sample-app, then edit. Here, we can provide the name of our container and also the image name.
In this demo, we're using the built-in HTTPD image, but you could also use an image from any container repository. Next, we'll set the memory limits for our container definition. You can set a soft limit, a hard limit or both. A soft limit will allocate memory resources for your container. A hard limit, if reached, will kill your container. We'll leave the soft limit set to 512. Finally, we'll configure our port mappings. This is the port that our container will be listening on, and because we're doing HTTPD, we'll use the standard HTTP port of 80. Click Update.
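If you'd rather define this container in code instead of the console, the snippet below is a minimal sketch of the same container definition as the dictionary the AWS SDK for Python (boto3) expects. The container name and the 1024 MB hard limit are illustrative assumptions; only the HTTPD image, the 512 soft limit and port 80 come from the demo.

```python
# A minimal sketch of this step's container definition, in the shape boto3 expects.
# The name and the 1024 MiB hard limit are assumptions; the image, 512 MiB soft
# limit and port 80 match what we set in the console.
container_definition = {
    "name": "sample-app",                       # assumed container name
    "image": "httpd:2.4",                       # the built-in HTTPD sample image
    "memoryReservation": 512,                   # soft limit: memory reserved for the container (MiB)
    "memory": 1024,                             # hard limit: container is killed if it exceeds this (MiB)
    "portMappings": [
        {"containerPort": 80, "protocol": "tcp"}  # listen on the standard HTTP port
    ],
    "essential": True,
}
```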
With our container definition complete, let's go ahead and configure our task definition. Our task definition logically groups together containers for each application. So scroll down to task definition and click Edit.
Here, we'll first set our task definition name; let's call it ourWebApp. We're using Fargate, so our network mode will always be awsvpc. This means we can use security groups and other VPC network tools to interact with our container.
Next, we'll identify the IAM role our task execution will run under. We'll go ahead and create a new role. Finally, we'll need to set the task memory and task CPU values. Remember, a task can be an entire application and may contain many containers, so make sure you size it appropriately. We'll leave our memory at half a gigabyte and our CPU at a quarter of a vCPU. We'll click Next.
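For readers who script their infrastructure, here's a rough boto3 equivalent of registering this task definition. The execution role ARN is a placeholder for a role you'd create separately (for example, one with the AmazonECSTaskExecutionRolePolicy attached so logs can flow to CloudWatch), and the log group name and region are assumptions.

```python
import boto3

ecs = boto3.client("ecs")

# Register a Fargate task definition roughly matching the console settings above.
# The execution role ARN, log group and region are placeholders/assumptions.
response = ecs.register_task_definition(
    family="ourWebApp",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",                  # required for Fargate; gives each task its own ENI and security groups
    cpu="256",                             # 0.25 vCPU
    memory="512",                          # 0.5 GB of task memory
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # placeholder ARN
    containerDefinitions=[
        {
            "name": "sample-app",
            "image": "httpd:2.4",
            "memoryReservation": 512,
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            "essential": True,
            "logConfiguration": {          # publish container logs to CloudWatch
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/ourWebApp",   # assumed log group
                    "awslogs-region": "us-east-1",       # assumed region
                    "awslogs-stream-prefix": "ecs",
                },
            },
        }
    ],
)
print(response["taskDefinition"]["taskDefinitionArn"])
```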
Next, we'll edit our service settings. Here, we can specify a desired number of tasks, and the service will ensure it has at least that many running. Fargate will create a security group for us that we can use to apply firewall rules. And optionally, we can add an Application Load Balancer if we need high availability. Let's click Next.
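A quick note on those firewall rules: they are ordinary security group rules. If you ever need to open port 80 on that security group yourself, a minimal boto3 sketch looks like the following; the group ID is a placeholder for whatever the wizard creates.

```python
import boto3

ec2 = boto3.client("ec2")

# Allow inbound HTTP traffic to the tasks. The group ID is a placeholder for the
# security group created by the wizard (or one you created yourself).
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTP from anywhere"}],
        }
    ],
)
```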
Finally, we'll define the cluster that our service runs on top of. We can call this our demo cluster. This will create a VPC and subnets for us, and we could put additional services in this same cluster. Then, we'll review our settings and click Create.
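The scripted equivalent of these last two steps is to create the cluster and then run the service on top of it. The sketch below assumes the task definition registered earlier, plus placeholder subnet and security group IDs; the console wizard creates that networking for you.

```python
import boto3

ecs = boto3.client("ecs")

# Create the cluster, then run one copy of the task as a Fargate service on it.
# Subnet and security group IDs are placeholders for what the wizard creates.
ecs.create_cluster(clusterName="demo-cluster")

ecs.create_service(
    cluster="demo-cluster",
    serviceName="ourWebApp-service",            # assumed service name
    taskDefinition="ourWebApp",                 # task definition family registered earlier
    desiredCount=1,                             # keep at least one task running
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],     # placeholder subnet
            "securityGroups": ["sg-0123456789abcdef0"],  # placeholder security group
            "assignPublicIp": "ENABLED",                 # so we can reach the sample page directly
        }
    },
)
```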
After all your creation steps are completed, click View service. Here, we can find all the details about our service. If we want to validate that it's working, we can click on Tasks, see that we have one task running, and click on its ID. Then, if we scroll down to the bottom, we can see the public IP address, which we can paste into our browser. Great. We have an application and a container up and running in AWS Fargate. Thanks for watching.
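If you'd rather look up that public IP with a script instead of clicking through the console, one way to do it with boto3 is to find the service's running task, pull the network interface attached to it, and then describe that interface. The cluster and service names below match the assumptions in the earlier sketches.

```python
import boto3

ecs = boto3.client("ecs")
ec2 = boto3.client("ec2")

# Find the service's running task and the elastic network interface attached to it.
task_arns = ecs.list_tasks(cluster="demo-cluster", serviceName="ourWebApp-service")["taskArns"]
task = ecs.describe_tasks(cluster="demo-cluster", tasks=task_arns)["tasks"][0]

eni_id = next(
    detail["value"]
    for attachment in task["attachments"]
    for detail in attachment["details"]
    if detail["name"] == "networkInterfaceId"
)

# The public IP lives on the network interface, not on the task itself.
eni = ec2.describe_network_interfaces(NetworkInterfaceIds=[eni_id])["NetworkInterfaces"][0]
print(eni["Association"]["PublicIp"])
```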