Amazon EC2 presents a true virtual computing environment, allowing you to use web service interfaces to launch instances with a variety of operating systems, load them with your custom application environment, manage your network’s access permissions, and run your image using as many or as few systems as you desire.

To use Amazon EC2, you simply:

  • Select a pre-configured, templated Amazon Machine Image (AMI) to get up and running immediately. Or create an AMI containing your applications, libraries, data, and associated configuration settings.
  • Configure security and network access on your Amazon EC2 instance.
  • Choose which instance type(s) you want, then start, terminate, and monitor as many instances of your AMI as needed, using the web service APIs or the variety of management tools provided (a short API sketch follows this list).
  • Determine whether you want to utilize static IP endpoints or attach persistent block storage to your instances.
  • Pay only for the resources that you actually consume, like instance-hours or data transfer.
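
As an illustrative sketch of the launch step above, the same workflow can be driven from the AWS SDK for Python (boto3); the AMI ID, key pair, and security group below are placeholders, not values from this page:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

    # Launch one instance from a chosen AMI (placeholder IDs shown).
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",            # placeholder AMI ID
        InstanceType="m4.large",
        MinCount=1,
        MaxCount=1,
        KeyName="my-key-pair",                      # placeholder key pair
        SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder security group
    )
    instance_id = response["Instances"][0]["InstanceId"]

    # Terminate the instance when it is no longer needed.
    ec2.terminate_instances(InstanceIds=[instance_id])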

Amazon EC2 provides a number of powerful features for building scalable, failure-resilient, enterprise-class applications.

Amazon Elastic Block Store (EBS) offers persistent storage for Amazon EC2 instances. Amazon EBS volumes are network-attached, and persist independently from the life of an instance. Amazon EBS volumes are highly available, highly reliable volumes that can be used as an Amazon EC2 instance’s boot partition or attached to a running Amazon EC2 instance as a standard block device. When an Amazon EBS volume is used as a boot partition, the Amazon EC2 instance can be stopped and subsequently restarted, enabling you to pay only for the storage resources used while maintaining your instance’s state. Amazon EBS volumes offer greatly improved durability over local Amazon EC2 instance stores, as Amazon EBS volumes are automatically replicated on the backend (in a single Availability Zone).

For those wanting even more durability, Amazon EBS provides the ability to create point-in-time consistent snapshots of your volumes that are then stored in Amazon S3 and automatically replicated across multiple Availability Zones. These snapshots can be used as the starting point for new Amazon EBS volumes and can protect your data for long-term durability. You can also easily share these snapshots with co-workers and other AWS developers.

Amazon EBS provides two volume types: Standard volumes and Provisioned IOPS volumes. Standard volumes are designed for applications with moderate I/O requirements; they are also well suited for use as boot volumes or for applications where I/O can be bursty. Provisioned IOPS volumes offer storage with consistent, low-latency performance and are designed for applications with I/O-intensive workloads such as databases. See Amazon Elastic Block Store for more details.
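
A brief sketch, assuming the AWS SDK for Python (boto3), of creating a Provisioned IOPS volume, attaching it to a running instance, and snapshotting it; the Availability Zone, instance ID, and device name are placeholders:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

    # Create a 100 GiB Provisioned IOPS volume in the instance's Availability Zone.
    volume = ec2.create_volume(
        AvailabilityZone="us-east-1a",       # placeholder Availability Zone
        Size=100,
        VolumeType="io1",
        Iops=1000,
    )

    # Attach it to a running instance as a standard block device.
    ec2.attach_volume(
        VolumeId=volume["VolumeId"],
        InstanceId="i-0123456789abcdef0",    # placeholder instance ID
        Device="/dev/sdf",
    )

    # Take a point-in-time snapshot, which is stored in Amazon S3.
    ec2.create_snapshot(VolumeId=volume["VolumeId"], Description="nightly backup")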

For an additional low, hourly fee, customers can launch selected Amazon EC2 instance types as EBS-optimized instances. EBS-optimized instances enable EC2 instances to fully use the IOPS provisioned on an EBS volume. EBS-optimized instances deliver dedicated throughput between Amazon EC2 and Amazon EBS, with options between 500 and 2,000 Megabits per second (Mbps) depending on the instance type used. The dedicated throughput minimizes contention between Amazon EBS I/O and other traffic from your EC2 instance, providing the best performance for your EBS volumes. EBS-optimized instances are designed for use with both Standard and Provisioned IOPS Amazon EBS volumes. When attached to EBS-optimized instances, Provisioned IOPS volumes can achieve single digit millisecond latencies and are designed to deliver within 10% of the provisioned IOPS performance 99.9% of the time.
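
EBS optimization is selected when an instance is launched. A minimal boto3 sketch; the AMI ID is a placeholder and the instance type is assumed to be one that supports EBS optimization:

    import boto3

    ec2 = boto3.client("ec2")

    # Launch an instance with dedicated throughput to Amazon EBS.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
        InstanceType="m4.xlarge",         # assumed EBS-optimization-capable type
        MinCount=1,
        MaxCount=1,
        EbsOptimized=True,
    )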

The Optimize CPUs feature gives you greater control of your Amazon EC2 instances on two fronts. First, you can specify a custom number of vCPUs when launching new instances to save on vCPU-based licensing costs. Second, you can disable Intel Hyper-Threading Technology (Intel HT Technology) for workloads that perform well with single-threaded CPUs, such as certain high-performance computing (HPC) applications. To learn more about how Optimize CPUs can help you, visit the Optimize CPUs documentation.
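
A boto3 sketch of launching with a custom vCPU count and Intel HT Technology disabled via the CpuOptions parameter; the AMI ID is a placeholder and the instance type is assumed to support the Optimize CPUs feature:

    import boto3

    ec2 = boto3.client("ec2")

    # Request 4 cores with one thread per core, i.e. Hyper-Threading disabled.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
        InstanceType="c4.4xlarge",         # placeholder instance type
        MinCount=1,
        MaxCount=1,
        CpuOptions={"CoreCount": 4, "ThreadsPerCore": 1},
    )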

Elastic IP addresses are static IP addresses designed for dynamic cloud computing. An Elastic IP address is associated with your account, not a particular instance, and you control that address until you choose to explicitly release it. Unlike traditional static IP addresses, however, Elastic IP addresses allow you to mask instance or Availability Zone failures by programmatically remapping your public IP addresses to any instance in your account. Rather than waiting on a data center technician to reconfigure or replace your host, or waiting for DNS to propagate to all of your customers, Amazon EC2 enables you to engineer around problems with your instance or software by quickly remapping your Elastic IP address to a replacement instance. In addition, you can optionally configure the reverse DNS record of any of your Elastic IP addresses by filling out the reverse DNS request form.
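
A sketch of allocating an Elastic IP address and remapping it from one instance to a replacement, assuming boto3 and a VPC-based account; both instance IDs are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # Allocate an Elastic IP address; it stays with your account until released.
    allocation = ec2.allocate_address(Domain="vpc")

    # Map it to the primary instance.
    ec2.associate_address(
        AllocationId=allocation["AllocationId"],
        InstanceId="i-0123456789abcdef0",       # placeholder: primary instance
    )

    # Later, remap the same address to a healthy replacement instance.
    ec2.associate_address(
        AllocationId=allocation["AllocationId"],
        InstanceId="i-0fedcba9876543210",       # placeholder: replacement instance
        AllowReassociation=True,
    )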

Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the Amazon Web Services (AWS) Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. See Amazon Virtual Private Cloud for more details.
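
A condensed boto3 sketch of defining such a virtual network; the CIDR ranges are arbitrary examples:

    import boto3

    ec2 = boto3.client("ec2")

    # Create a VPC with an IP address range that you select.
    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
    vpc_id = vpc["Vpc"]["VpcId"]

    # Carve out a subnet and set up routing through an Internet gateway.
    ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
    igw = ec2.create_internet_gateway()
    igw_id = igw["InternetGateway"]["InternetGatewayId"]
    ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

    route_table = ec2.create_route_table(VpcId=vpc_id)
    ec2.create_route(
        RouteTableId=route_table["RouteTable"]["RouteTableId"],
        DestinationCidrBlock="0.0.0.0/0",
        GatewayId=igw_id,
    )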

Amazon CloudWatch is a web service that provides monitoring for AWS cloud resources and applications, starting with Amazon EC2. It provides you with visibility into resource utilization, operational performance, and overall demand patterns—including metrics such as CPU utilization, disk reads and writes, and network traffic. You can get statistics, view graphs, and set alarms for your metric data. To use Amazon CloudWatch, simply select the Amazon EC2 instances that you’d like to monitor. You can also supply your own business or application metric data. Amazon CloudWatch will begin aggregating and storing monitoring data that can be accessed using web service APIs or Command Line Tools. See Amazon CloudWatch for more details.
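
A boto3 sketch of reading an EC2 metric and publishing a custom application metric; the instance ID, namespace, and metric name are placeholders:

    from datetime import datetime, timedelta

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Retrieve CPU utilization statistics for one instance over the last hour.
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Average"],
    )

    # Publish your own business or application metric.
    cloudwatch.put_metric_data(
        Namespace="MyApp",  # placeholder namespace
        MetricData=[{"MetricName": "CheckoutLatency", "Value": 142.0, "Unit": "Milliseconds"}],
    )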

Auto Scaling allows you to automatically scale your Amazon EC2 capacity up or down according to conditions you define. With Auto Scaling, you can ensure that the number of Amazon EC2 instances you’re using scales up seamlessly during demand spikes to maintain performance, and scales down automatically during demand lulls to minimize costs. Auto Scaling is particularly well suited for applications that experience hourly, daily, or weekly variability in usage. Auto Scaling is enabled by Amazon CloudWatch and available at no additional charge beyond Amazon CloudWatch fees. See Auto Scaling for more details.
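
A sketch of a basic Auto Scaling setup with boto3, assuming a launch configuration; the names, AMI ID, and Availability Zone are placeholders:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Define how new instances should be launched.
    autoscaling.create_launch_configuration(
        LaunchConfigurationName="web-lc",      # placeholder name
        ImageId="ami-0123456789abcdef0",       # placeholder AMI ID
        InstanceType="m4.large",
    )

    # Keep between 2 and 10 instances built from that launch configuration.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-asg",        # placeholder name
        LaunchConfigurationName="web-lc",
        MinSize=2,
        MaxSize=10,
        AvailabilityZones=["us-east-1a"],      # placeholder Availability Zone
    )

    # Add one instance each time this policy is triggered (e.g. by a CloudWatch alarm).
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",
        PolicyName="scale-up-by-one",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=1,
    )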

Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances. It enables you to achieve even greater fault tolerance in your applications, seamlessly providing the amount of load balancing capacity needed in response to incoming application traffic. Elastic Load Balancing detects unhealthy instances within a pool and automatically reroutes traffic to healthy instances until the unhealthy instances have been restored. You can enable Elastic Load Balancing within a single Availability Zone or across multiple zones for even more consistent application performance. Amazon CloudWatch can be used to capture a specific Elastic Load Balancer’s operational metrics, such as request count and request latency, at no additional cost beyond Elastic Load Balancing fees. See Elastic Load Balancing for more details.
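
A boto3 sketch of creating an Elastic Load Balancer and registering an instance behind it; the load balancer name, Availability Zones, and instance ID are placeholders:

    import boto3

    elb = boto3.client("elb")

    # Create a load balancer listening on HTTP port 80 across two Availability Zones.
    elb.create_load_balancer(
        LoadBalancerName="web-elb",                      # placeholder name
        Listeners=[{
            "Protocol": "HTTP",
            "LoadBalancerPort": 80,
            "InstanceProtocol": "HTTP",
            "InstancePort": 80,
        }],
        AvailabilityZones=["us-east-1a", "us-east-1b"],  # placeholder Availability Zones
    )

    # Register an instance with the load balancer.
    elb.register_instances_with_load_balancer(
        LoadBalancerName="web-elb",
        Instances=[{"InstanceId": "i-0123456789abcdef0"}],  # placeholder instance ID
    )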

Customers with complex computational workloads such as tightly coupled parallel processes, or with applications sensitive to network performance, can achieve the same high compute and network performance provided by custom-built infrastructure while benefiting from the elasticity, flexibility, and cost advantages of Amazon EC2. C3 instances have been specifically engineered to provide high-performance network capability and can be programmatically launched into clusters, allowing applications to get the low-latency network performance required for tightly coupled, node-to-node communication. Cluster instances also provide significantly increased throughput, making them well suited for customer applications that need to perform network-intensive operations.
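
One way to launch such a cluster programmatically is through a cluster placement group; a boto3 sketch, with the group name and AMI ID as placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # Create a cluster placement group for low-latency, node-to-node networking.
    ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")  # placeholder name

    # Launch tightly coupled nodes into the same placement group.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",     # placeholder AMI ID
        InstanceType="c3.8xlarge",
        MinCount=8,
        MaxCount=8,
        Placement={"GroupName": "hpc-cluster"},
    )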

VM Import/Export enables you to easily import virtual machine images from your existing environment to Amazon EC2 instances and export them back at any time. By importing virtual machines as ready-to-use EC2 instances, you can leverage your existing investments in virtual machines that meet your IT security, configuration management, and compliance requirements. You can export your previously imported EC2 instances back to your on-premises environment at any time. This offering is available at no additional charge beyond standard usage charges for Amazon EC2 and Amazon S3.
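
As a hedged boto3 sketch, an import can be started with the ImportImage API once the disk image has been uploaded to Amazon S3; the bucket, key, and format below are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # Import a VMDK that was previously uploaded to S3.
    task = ec2.import_image(
        Description="imported web server",
        DiskContainers=[{
            "Description": "boot disk",
            "Format": "vmdk",
            "UserBucket": {"S3Bucket": "my-import-bucket", "S3Key": "webserver.vmdk"},  # placeholders
        }],
    )

    # Poll the import task until the image is ready to launch.
    ec2.describe_import_image_tasks(ImportTaskIds=[task["ImportTaskId"]])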

Enhanced Networking enables you to get significantly higher packets per second (PPS) performance, lower network jitter, and lower latencies. This feature uses a new network virtualization stack that provides higher I/O performance and lower CPU utilization compared to traditional implementations. In order to take advantage of Enhanced Networking, you should launch an HVM AMI in a VPC and install the appropriate driver. Enhanced Networking is currently supported on C3 and I2 instances. For instructions on how to enable Enhanced Networking on EC2 instances, see the Enhanced Networking on Linux and Enhanced Networking on Windows tutorials. To learn more about this feature, check out the Enhanced Networking FAQ section.
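
On these SR-IOV-based instance types, enhanced networking is reflected in the instance’s sriovNetSupport attribute; a boto3 sketch, assuming the instance is stopped and the appropriate driver is already present in the AMI (the instance ID is a placeholder):

    import boto3

    ec2 = boto3.client("ec2")

    instance_id = "i-0123456789abcdef0"   # placeholder instance ID

    # Enable the SR-IOV network virtualization stack on a stopped instance.
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        SriovNetSupport={"Value": "simple"},
    )

    # Confirm that the attribute took effect.
    attr = ec2.describe_instance_attribute(InstanceId=instance_id, Attribute="sriovNetSupport")
    print(attr["SriovNetSupport"])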

Auto Recovery is an Amazon EC2 feature that is designed to increase instance availability. It lets you automatically recover supported instances when a system impairment is detected. You can enable Auto Recovery for an instance by creating an Amazon CloudWatch alarm. Auto Recovery keeps your existing instance running and automatically recovers your instance on new underlying hardware, if needed, so you do not generally need to migrate to a new instance. Instance recovery is subject to underlying limitations, as reflected in the Instance Recovery Troubleshooting documentation. To learn more or to get started, please visit the Auto Recovery documentation.
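
A boto3 sketch of the CloudWatch alarm that triggers recovery; the alarm name, instance ID, and the region in the recover action ARN are placeholders:

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # region is an assumption

    # Recover the instance if the system status check fails for two consecutive minutes.
    cloudwatch.put_metric_alarm(
        AlarmName="auto-recover-web-server",   # placeholder name
        Namespace="AWS/EC2",
        MetricName="StatusCheckFailed_System",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
        Statistic="Maximum",
        Period=60,
        EvaluationPeriods=2,
        Threshold=0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],  # region must match the instance
    )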


Amazon EC2 allows you to set up and configure everything about your instances, from your operating system up to your applications. An Amazon Machine Image (AMI) is simply a packaged-up environment that includes all the necessary bits to set up and boot your instance. Your AMIs are your unit of deployment. You might have just one AMI, or you might compose your system out of several building-block AMIs (e.g., web servers, app servers, and databases). Amazon EC2 provides a number of tools to make creating an AMI easy, including the AWS Management Console.
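
One way to create an AMI from a configured, EBS-backed instance is the CreateImage API; a boto3 sketch with the instance ID and image name as placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # Package a configured instance as a reusable AMI.
    image = ec2.create_image(
        InstanceId="i-0123456789abcdef0",   # placeholder instance ID
        Name="appserver-v1",                # placeholder image name
        Description="Application server with libraries and configuration baked in",
    )
    print(image["ImageId"])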

You can also choose from a library of globally available AMIs that provide useful instances. For example, if you just want a simple Linux server, you can choose one of the standard Linux distribution AMIs. Once you have set up your account and uploaded your AMIs, you are ready to boot your instance. You can start your AMI on any number and any type of instance by calling the RunInstances API.

If you wish to run more than 20 On-Demand or Reserved Instances, create more than 5,000 EBS volumes, need more than 5 Elastic IP addresses or 5 Elastic Load Balancers, or need to send large quantities of email from your EC2 account, please complete the Amazon EC2 instance request form, the Amazon EBS volume request form, the Elastic IP request form, the Elastic Load Balancer request form, or the Email request form, respectively, and your request will be considered.


M4 instances are based on custom Intel Broadwell or Haswell processors. M4 instances deliver fixed performance and provide customers with a set of resources for a high level of consistent processing performance on a low-cost platform. Instances in this family are ideal for applications that require balanced CPU and memory performance. Examples of applications that will benefit from the performance of General Purpose instances include encoding, high-traffic content management systems, and other enterprise applications.

  • m4.large: 8 GiB of memory, 2 vCPUs, EBS-only, 64-bit platform *
  • m4.xlarge: 16 GiB of memory, 4 vCPUs, EBS-only, 64-bit platform *
  • m4.2xlarge: 32 GiB of memory, 8 vCPUs, EBS-only, 64-bit platform *
  • m4.4xlarge: 64 GiB of memory, 16 vCPUs, EBS-only, 64-bit platform *
  • m4.10xlarge: 160 GiB of memory, 40 vCPUs, EBS-only, 64-bit platform **
  • m4.16xlarge: 256 GiB of memory, 64 vCPUs, EBS-only, 64-bit platform ***

* These instances may launch on an Intel Xeon E5-2686 v4 (Broadwell) or E5-2676 v3 (Haswell) processor

** This instance will launch on an Intel Xeon E5-2676 v3 (Haswell) processor

*** This instance will launch on an Intel Xeon E5-2686 v4 (Broadwell) processor

  • m3.medium: 3.75 GiB of memory, 1 vCPU, 4 GB of SSD-based local instance storage, 64-bit platform
  • m3.large: 7.5 GiB of memory, 2 vCPUs, 32 GB of SSD-based local instance storage, 64-bit platform
  • m3.xlarge: 15 GiB of memory, 4 vCPUs, 80 GB of SSD-based local instance storage, 64-bit platform
  • m3.2xlarge: 30 GiB of memory, 8 vCPUs, 160 GB of SSD-based local instance storage, 64-bit platform
  • m1.small: 1.7 GiB of memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit), 160 GB of local instance storage, 32-bit or 64-bit platform

Instances of this family have proportionally more CPU resources than memory (RAM) and are well suited for compute-intensive applications.

C4 instances are based on the Intel Xeon E5-2666 v3 ("Haswell") processor, and are designed to deliver the highest level of compute performance on Amazon EC2.

  • c4.large: 3.75 GiB of memory, 2 vCPUs, 64-bit platform
  • c4.xlarge: 7.5 GiB of memory, 4 vCPUs, 64-bit platform
  • c4.2xlarge: 15 GiB of memory, 8 vCPUs, 64-bit platform
  • c4.4xlarge: 30 GiB of memory, 16 vCPUs, 64-bit platform
  • c4.8xlarge: 60 GiB of memory, 36 vCPUs, 64-bit platform
  • c3.large: 3.75 GiB of memory, 2 vCPUs, 32 GB of SSD-based local instance storage, 64-bit platform
  • c3.xlarge: 7.5 GiB of memory, 4 vCPUs, 80 GB of SSD-based local instance storage, 64-bit platform
  • c3.2xlarge: 15 GiB of memory, 8 vCPUs, 160 GB of SSD-based local instance storage, 64-bit platform
  • c3.4xlarge: 30 GiB of memory, 16 vCPUs, 320 GB of SSD-based local instance storage, 64-bit platform
  • c3.8xlarge: 60 GiB of memory, 32 vCPUs, 640 GB of SSD-based local instance storage, 64-bit platform

X1 Instances are optimized for large-scale, enterprise-class, in-memory applications and have the lowest price per GiB of RAM among Amazon EC2 instance types.

  • x1.16xlarge: 976 GiB of memory, 64 vCPUs, 1 x 1,920 GB of SSD-based instance storage, 64-bit platform, 10 Gigabit Ethernet
  • x1.32xlarge: 1,952 GiB of memory, 128 vCPUs, 2 x 1,920 GB of SSD-based instance storage, 64-bit platform, 20 Gigabit Ethernet

R4 instances are ideal for in-memory processing for applications like Business Intelligence (BI), data mining and analysis, in-memory databases, distributed web-scale in-memory caching, and applications performing real-time processing of unstructured big data.

  • r4.large: 15.25 GiB of memory, 2 vCPUs, 64-bit platform
  • r4.xlarge: 30.5 GiB of memory, 4 vCPUs, 64-bit platform
  • r4.2xlarge: 61 GiB of memory, 8 vCPUs, 64-bit platform
  • r4.4xlarge: 122 GiB of memory, 16 vCPUs, 64-bit platform
  • r4.8xlarge: 244 GiB of memory, 32 vCPUs, 64-bit platform, 10 Gigabit Ethernet
  • r4.16xlarge: 488 GiB of memory, 64 vCPUs, 64-bit platform, 20 Gigabit Ethernet
  • r3.large: 15.25 GiB of memory, 2 vCPUs, 1 x 32 GB of SSD-based instance storage, 64-bit platform
  • r3.xlarge: 30.5 GiB of memory, 4 vCPUs, 1 x 80 GB of SSD-based instance storage, 64-bit platform
  • r3.2xlarge: 61 GiB of memory, 8 vCPUs, 1 x 160 GB of SSD-based instance storage, 64-bit platform
  • r3.4xlarge: 122 GiB of memory, 16 vCPUs, 1 x 320 GB of SSD-based instance storage, 64-bit platform
  • r3.8xlarge: 244 GiB of memory, 32 vCPUs, 2 x 320 GB of SSD-based instance storage, 64-bit platform, 10 Gigabit Ethernet  

Instances of this family provide very high disk I/O performance or proportionally higher storage density per instance, and are ideally suited for applications that benefit from high sequential I/O performance across very large data sets. Storage-optimized instances also provide high levels of CPU, memory and network performance.

  • i2.xlarge: 30.5 GiB of memory, 4 vCPUs, 1 x 800 GB of SSD-based instance storage, 64-bit platform
  • i2.2xlarge: 61 GiB of memory, 8 vCPUs, 2 x 800 GB of SSD-based instance storage, 64-bit platform
  • i2.4xlarge: 122 GiB of memory, 16 vCPUs, 4 x 800 GB of SSD-based instance storage, 64-bit platform
  • i2.8xlarge: 244 GiB of memory, 32 vCPUs, 8 x 800 GB of SSD-based instance storage, 64-bit platform, 10 Gigabit Ethernet

Instances of this family provide low cost storage and very high disk throughput and are ideally suited for applications that benefit from high sequential I/O performance across very large datasets on local storage.

  • d2.xlarge: 30.5 GiB of memory, 4 vCPUs, 3 x 2000 GB of HDD-based instance storage, 64-bit platform
  • d2.2xlarge: 61 GiB of memory, 8 vCPUs, 6 x 2000 GB of HDD-based instance storage, 64-bit platform
  • d2.4xlarge: 122 GiB of memory, 16 vCPUs, 12 x 2000 GB of HDD-based instance storage, 64-bit platform
  • d2.8xlarge: 244 GiB of memory, 36 vCPUs, 24 x 2000 GB of HDD-based instance storage, 64-bit platform, 10 Gigabit Ethernet

Instances of this family provide access to workload accelerators such as GPUs. They are ideal for applications such as machine learning, computational fluid dynamics, computational finance, seismic analysis, molecular modeling, genomics, and other high-performance computing workloads.

  • p2.xlarge: 1 GPU, 4 vCPUs, 61 GiB of memory, high network performance
  • p2.8xlarge: 8 GPUs, 32 vCPUs, 488 GiB of memory, 10 Gigabit network performance
  • p2.16xlarge: 16 GPUs, 64 vCPUs, 732 GiB of memory, 20 Gigabit network performance

Backed by NVIDIA Tesla M60 GPUs, G3 instances are ideal for graphics workloads such as 3D rendering, 3D visualizations, graphics-intensive remote workstations, video encoding, and virtual reality applications.

  • g3.4xlarge: 1 GPU, 16 vCPUs, 122 GiB of memory, up to 10 Gigabit network performance
  • g3.8xlarge: 2 GPUs, 32 vCPUs, 244 GiB of memory, 10 Gigabit network performance
  • g3.16xlarge: 4 GPUs, 64 vCPUs, 488 GiB of memory, 20 Gigabit network performance

Micro instances (t1.micro) provide a small amount of consistent CPU resources and allow you to increase CPU capacity in short bursts when additional cycles are available. They are well suited for lower throughput applications and web sites that require additional compute cycles periodically. You can learn more about how you can use Micro instances and appropriate applications in the Amazon EC2 documentation.

  • t1.micro: (Default) 613 MiB of memory, up to 2 ECUs (for short periodic bursts), EBS storage only, 32-bit or 64-bit platform

EC2 Compute Unit (ECU): One EC2 Compute Unit (ECU) provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor.


You will be charged at the end of each month for your EC2 resources actually consumed.


The best way to understand Amazon EC2 is to work through the Getting Started Guide, part of our Technical Documentation. Within a few minutes, you will be able to log into your own instance and start playing!


Your use of this service is subject to the AWS Customer Agreement.