Amazon Fargate is a compute engine for Amazon ECS and Amazon EKS that allows you to run containers without having to manage servers or clusters. With Amazon Fargate, you no longer have to provision, configure, and scale clusters of virtual machines to run containers, choose server types, decide when to scale your clusters, or optimize cluster packing. Fargate removes the need to interact with or think about servers or clusters, letting you focus on designing and building your applications instead of managing the infrastructure that runs them.
Amazon ECS has two launch types: Fargate and EC2. With the Fargate launch type, all you have to do is package your application in containers, specify the CPU and memory requirements, define networking and IAM policies, and launch the application. The EC2 launch type gives you server-level, more granular control over the infrastructure that runs your container applications. With the EC2 launch type, you use Amazon ECS to manage a cluster of servers and schedule placement of containers on them. Amazon ECS keeps track of the CPU, memory, and other resources in your cluster, and finds the best server for a container to run on based on your specified resource requirements. You are responsible for provisioning, patching, and scaling your clusters of servers. You decide which type of server to use, which applications and how many containers to run in a cluster to optimize utilization, and when to add or remove servers from a cluster. The EC2 launch type gives you more control of your server clusters and a broader range of customization options, which might be required to support specific applications or compliance and governance requirements.
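To make the difference concrete, here is a minimal sketch (in Python, using boto3-style request parameters) of how the same task can target either launch type simply by changing the launchType field of a RunTask request. The cluster, task definition, and subnet names are placeholders, not real resources.

```python
# Sketch: building ECS RunTask parameters for either launch type.
# All resource names below are hypothetical placeholders.

def build_run_task_request(launch_type: str) -> dict:
    """Build the parameter dict for an ECS RunTask call."""
    params = {
        "cluster": "demo-cluster",        # placeholder cluster name
        "taskDefinition": "web-app:1",    # placeholder task definition
        "launchType": launch_type,        # "FARGATE" or "EC2"
        "count": 1,
    }
    if launch_type == "FARGATE":
        # Fargate tasks use awsvpc networking, so subnets (and
        # optionally public IP assignment) must be specified.
        params["networkConfiguration"] = {
            "awsvpcConfiguration": {
                "subnets": ["subnet-placeholder"],
                "assignPublicIp": "ENABLED",
            }
        }
    return params

fargate_request = build_run_task_request("FARGATE")
ec2_request = build_run_task_request("EC2")
# With AWS credentials configured, either dict could be passed to
# boto3.client("ecs").run_task(**params).
```

With the Fargate launch type the request carries network configuration but no server choices; with the EC2 launch type, placement depends on the instances you provision in the cluster.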
Amazon Elastic Kubernetes Service (EKS) also has two compute modes: Fargate and EC2. When using Amazon Fargate with Amazon EKS, you don't have to provision, configure, or scale groups of virtual machines on your own to run containers. You also don't need to choose server types, decide when to scale your node groups, or optimize cluster packing. You can control which pods start on Fargate and how they run with Fargate profiles. Fargate profiles are defined as part of your Amazon EKS cluster.
No clusters to manage
With Amazon Fargate, you only have to think about the containers, so you can focus on building and operating your application. Amazon Fargate eliminates the need to manage a cluster of Amazon EC2 instances: you no longer have to pick instance types, manage cluster scheduling, or optimize cluster utilization. All of this goes away with Fargate.
Amazon Fargate makes it easy to scale your applications. You no longer have to worry about provisioning enough compute resources for your container applications. After you define your application requirements (such as CPU and memory), Amazon Fargate manages all the scaling and infrastructure needed to run your containers in a highly available manner. You no longer have to decide when to scale your clusters or pack them for optimal utilization. With Amazon Fargate, you can launch from tens to tens of thousands of containers in seconds and easily scale to run your most mission-critical applications.
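One common way to express those application requirements is through Application Auto Scaling: you register a service's desired task count as a scalable target, then attach a target-tracking policy. The sketch below shows the shape of those two requests; the cluster and service names are placeholders, and the 50 percent CPU target is just an illustrative choice.

```python
# Sketch: scaling an ECS/Fargate service with Application Auto Scaling.
# Resource names are hypothetical placeholders.

# Register the service's DesiredCount as a scalable target.
scalable_target = {
    "ServiceNamespace": "ecs",
    "ResourceId": "service/demo-cluster/web-app",
    "ScalableDimension": "ecs:service:DesiredCount",
    "MinCapacity": 2,
    "MaxCapacity": 100,
}

# Attach a target-tracking policy that keeps average CPU near 50%.
scaling_policy = {
    "PolicyName": "cpu-target-tracking",
    "ServiceNamespace": "ecs",
    "ResourceId": scalable_target["ResourceId"],
    "ScalableDimension": scalable_target["ScalableDimension"],
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 50.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
}
# With AWS credentials configured, these could be passed to
# boto3.client("application-autoscaling").register_scalable_target(...)
# and .put_scaling_policy(...).
```

Because Fargate provisions the compute behind each task, scaling the desired count is all that is needed; there is no cluster capacity to manage alongside it.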
Integrated with Amazon ECS
Amazon Fargate seamlessly integrates with Amazon ECS. You just define your application as you do for Amazon ECS. You package your application into task definitions, specify the CPU and memory needed, define the networking and IAM policies that each container needs, and upload everything to Amazon ECS. After everything is set up, Amazon Fargate launches and manages your containers for you.
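A minimal sketch of such a task definition, expressed as boto3-style parameters, looks like the following. The family name, image, and role ARN are placeholders; note that Fargate requires awsvpc network mode and task-level CPU/memory values drawn from its supported combinations (e.g. 256 CPU units with 512 MiB of memory).

```python
# Sketch: a Fargate-compatible ECS task definition.
# Image, role ARN, and family name are hypothetical placeholders.

task_definition = {
    "family": "web-app",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",   # required for Fargate tasks
    "cpu": "256",              # task-level CPU units
    "memory": "512",           # task-level memory in MiB
    "executionRoleArn": "arn:aws:iam::123456789012:role/placeholder",
    "containerDefinitions": [
        {
            "name": "web",
            "image": "public.ecr.aws/placeholder/web:latest",
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            "essential": True,
        }
    ],
}
# With AWS credentials configured:
# boto3.client("ecs").register_task_definition(**task_definition)
```

Once registered, the same task definition can be launched by a Fargate service or a one-off RunTask call, with Fargate handling placement and the underlying compute.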
Integrated with Amazon EKS
You can use Amazon Fargate with Amazon EKS to run your Kubernetes applications using serverless compute. Simply create a Fargate profile in your EKS cluster to determine which Kubernetes applications should run using Fargate, and deploy them as usual. Using Amazon EKS with Fargate allows you to automatically scale, load balance, and optimize application availability through managed scheduling and compute provisioning, providing an easier way to build and operate Kubernetes applications.
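The profile itself is a small piece of configuration: pods are matched to Fargate by namespace and optional labels via selectors. The sketch below models that shape in Python, with a small helper illustrating the matching rule; the cluster name, role ARN, subnet ID, and selector values are all hypothetical.

```python
# Sketch: an EKS Fargate profile and its pod-matching rule.
# Cluster, role ARN, subnet, and selector values are placeholders.

fargate_profile = {
    "fargateProfileName": "default-profile",
    "clusterName": "demo-eks-cluster",
    "podExecutionRoleArn": "arn:aws:iam::123456789012:role/placeholder",
    "subnets": ["subnet-placeholder"],
    "selectors": [
        # Pods in the "serverless" namespace carrying this label
        # are scheduled onto Fargate instead of EC2 nodes.
        {"namespace": "serverless", "labels": {"runtime": "fargate"}}
    ],
}

def matches(profile: dict, namespace: str, labels: dict) -> bool:
    """Illustrative check: would a pod be picked up by this profile?"""
    for selector in profile["selectors"]:
        if selector["namespace"] != namespace:
            continue
        wanted = selector.get("labels", {})
        if all(labels.get(k) == v for k, v in wanted.items()):
            return True
    return False
```

Pods that match a selector run on Fargate automatically; everything else in the cluster is scheduled onto your EC2 node groups as usual.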