General Purpose Instances

Current Generation

Amazon EC2 M5 instances are the latest generation of General Purpose Instances, powered by Intel® Xeon® Platinum 8175 processors with an all-core turbo frequency of up to 3.1 GHz. With M5d instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the instance.

  • m5.large, 8 GiB of Memory, 2 vCPUs, EBS only, 64-bit platform
  • m5.xlarge, 16 GiB of Memory, 4 vCPUs, EBS only, 64-bit platform
  • m5.2xlarge, 32 GiB of Memory, 8 vCPUs, EBS only, 64-bit platform
  • m5.4xlarge, 64 GiB of Memory, 16 vCPUs, EBS only, 64-bit platform
  • m5.8xlarge, 128 GiB of Memory, 32 vCPUs, EBS only, 64-bit platform
  • m5.12xlarge, 192 GiB of Memory, 48 vCPUs, EBS only, 64-bit platform
  • m5.16xlarge, 256 GiB of Memory, 64 vCPUs, EBS only, 64-bit platform
  • m5.24xlarge, 384 GiB of Memory, 96 vCPUs, EBS only, 64-bit platform
  • m5.metal, 384 GiB of Memory, 96 vCPUs, EBS only, 64-bit platform
  • m5d.metal, 384 GiB of Memory, 96 vCPUs, 4 x 900 GB NVMe SSD, 64-bit platform


  • m5a.large, 8 GiB of Memory, 2 vCPUs, EBS only, 64-bit platform
  • m5a.xlarge, 16 GiB of Memory, 4 vCPUs, EBS only, 64-bit platform
  • m5a.2xlarge, 32 GiB of Memory, 8 vCPUs, EBS only, 64-bit platform
  • m5a.4xlarge, 64 GiB of Memory, 16 vCPUs, EBS only, 64-bit platform
  • m5a.8xlarge, 128 GiB of Memory, 32 vCPUs, EBS only, 64-bit platform
  • m5a.12xlarge, 192 GiB of Memory, 48 vCPUs, EBS only, 64-bit platform
  • m5a.16xlarge, 256 GiB of Memory, 64 vCPUs, EBS only, 64-bit platform
  • m5a.24xlarge, 384 GiB of Memory, 96 vCPUs, EBS only, 64-bit platform


  • m5d.large, 8 GiB of Memory, 2 vCPUs, 1 x 75 GB NVMe SSD, 64-bit platform
  • m5d.xlarge, 16 GiB of Memory, 4 vCPUs, 1 x 150 GB NVMe SSD, 64-bit platform
  • m5d.2xlarge, 32 GiB of Memory, 8 vCPUs, 1 x 300 GB NVMe SSD, 64-bit platform
  • m5d.4xlarge, 64 GiB of Memory, 16 vCPUs, 2 x 300 GB NVMe SSD, 64-bit platform
  • m5d.8xlarge, 128 GiB of Memory, 32 vCPUs, 2 x 600 GB NVMe SSD, 64-bit platform
  • m5d.12xlarge, 192 GiB of Memory, 48 vCPUs, 2 x 900 GB NVMe SSD, 64-bit platform
  • m5d.16xlarge, 256 GiB of Memory, 64 vCPUs, 4 x 600 GB NVMe SSD, 64-bit platform
  • m5d.24xlarge, 384 GiB of Memory, 96 vCPUs, 4 x 900 GB NVMe SSD, 64-bit platform
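The M5 size progression above doubles vCPUs and memory at each step, so picking a size programmatically is straightforward. A minimal sketch (the helper name and the hard-coded spec table are illustrative, copied from the list above; this is not an AWS API):

```python
# Illustrative helper (not an AWS API): pick the smallest listed M5 size
# that satisfies a workload's vCPU and memory requirements. The spec table
# is copied from the list above.
M5_SPECS = {  # name -> (vCPUs, memory in GiB)
    "m5.large": (2, 8),
    "m5.xlarge": (4, 16),
    "m5.2xlarge": (8, 32),
    "m5.4xlarge": (16, 64),
    "m5.8xlarge": (32, 128),
    "m5.12xlarge": (48, 192),
    "m5.16xlarge": (64, 256),
    "m5.24xlarge": (96, 384),
}

def smallest_m5(vcpus_needed, mem_needed_gib):
    """Return the smallest M5 size meeting both requirements, or None."""
    for name, (vcpus, mem) in sorted(M5_SPECS.items(), key=lambda kv: kv[1]):
        if vcpus >= vcpus_needed and mem >= mem_needed_gib:
            return name
    return None  # nothing in the family is large enough

print(smallest_m5(6, 20))  # -> m5.2xlarge (8 vCPUs, 32 GiB)
```

The same table shape works for the M5a and M5d variants, since their vCPU/memory ladder is identical.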
Previous Generations
  • m4.large: 8 GiB of memory, 2 vCPU, EBS-only, 64-bit platform
  • m4.xlarge: 16 GiB of memory, 4 vCPUs, EBS-only, 64-bit platform
  • m4.2xlarge: 32 GiB of memory, 8 vCPUs, EBS-only, 64-bit platform
  • m4.4xlarge: 64 GiB of memory, 16 vCPUs, EBS-only, 64-bit platform
  • m4.10xlarge: 160 GiB of memory, 40 vCPUs, EBS-only, 64-bit platform
  • m4.16xlarge: 256 GiB of memory, 64 vCPUs, EBS-only, 64-bit platform
  • m3.medium: 3.75 GiB of memory, 1 vCPU, 4 GB of SSD-based local instance storage, 64-bit platform
  • m3.large: 7.5 GiB of memory, 2 vCPUs, 32 GB of SSD-based local instance storage, 64-bit platform
  • m3.xlarge: 15 GiB of memory, 4 vCPUs, 80 GB of SSD-based local instance storage, 64-bit platform
  • m3.2xlarge: 30 GiB of memory, 8 vCPUs, 160 GB of SSD-based local instance storage, 64-bit platform
  • m1.small: 1.7 GiB of memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit), 160 GB of local instance storage, 32-bit or 64-bit platform

T3 instances are the next-generation low-cost burstable general-purpose instance type, providing a baseline level of CPU performance with the ability to burst CPU usage at any time for as long as required. T3 instances are designed for applications with moderate CPU usage that experience temporary spikes in use.

  • t3.nano, 0.5 GiB of Memory, 2 vCPUs, EBS only, 64-bit platform
  • t3.micro, 1 GiB of Memory, 2 vCPUs, EBS only, 64-bit platform
  • t3.small, 2 GiB of Memory, 2 vCPUs, EBS only, 64-bit platform
  • t3.medium, 4 GiB of Memory, 2 vCPUs, EBS only, 64-bit platform
  • t3.large, 8 GiB of Memory, 2 vCPUs, EBS only, 64-bit platform
  • t3.xlarge, 16 GiB of Memory, 4 vCPUs, EBS only, 64-bit platform
  • t3.2xlarge, 32 GiB of Memory, 8 vCPUs, EBS only, 64-bit platform
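The burst behavior described above can be sketched with a simplified credit model. The 10% baseline and the one-credit-per-vCPU-minute bookkeeping below are illustrative assumptions (real T3 baselines and accrual rates vary by instance size); the sketch only demonstrates the earn-below-baseline / spend-above-baseline mechanic:

```python
# Simplified sketch of T3 CPU-credit accounting. Assumptions (not stated on
# this page): a 10% baseline per vCPU and one credit = one vCPU-minute at
# 100% utilization. Real T3 rates differ per instance size.
def simulate(utilization_per_minute, vcpus=2, baseline=0.10, start=0.0):
    """Return the credit balance after a series of per-minute utilizations.

    Each minute the instance earns credits at the baseline rate and spends
    them at its actual utilization; the balance never goes negative (a real
    instance would instead be throttled back to baseline).
    """
    balance = start
    for util in utilization_per_minute:
        balance += vcpus * (baseline - util)  # earn minus spend
        balance = max(balance, 0.0)
    return balance

# 30 idle minutes bank 6 credits; a full-CPU burst then drains them.
print(simulate([0.0] * 30 + [1.0] * 5))  # -> 0.0 (the burst outlasts the bank)
```

This is why T3 suits workloads that are mostly idle with occasional spikes: credits banked during quiet periods fund the bursts.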


  • t3a.nano, 0.5 GiB of Memory, 2 vCPUs, EBS only, 64-bit platform
  • t3a.micro, 1 GiB of Memory, 2 vCPUs, EBS only, 64-bit platform
  • t3a.small, 2 GiB of Memory, 2 vCPUs, EBS only, 64-bit platform
  • t3a.medium, 4 GiB of Memory, 2 vCPUs, EBS only, 64-bit platform
  • t3a.large, 8 GiB of Memory, 2 vCPUs, EBS only, 64-bit platform
  • t3a.xlarge, 16 GiB of Memory, 4 vCPUs, EBS only, 64-bit platform
  • t3a.2xlarge, 32 GiB of Memory, 8 vCPUs, EBS only, 64-bit platform


Previous Generations
  • t2.nano, 0.5 GiB of Memory, 1 vCPU, EBS only, 64-bit platform
  • t2.micro, 1 GiB of Memory, 1 vCPU, EBS only, 64-bit platform
  • t2.small, 2 GiB of Memory, 1 vCPU, EBS only, 64-bit platform
  • t2.medium, 4 GiB of Memory, 2 vCPUs, EBS only, 64-bit platform
  • t2.large, 8 GiB of Memory, 2 vCPUs, EBS only, 64-bit platform
  • t2.xlarge, 16 GiB of Memory, 4 vCPUs, EBS only, 64-bit platform
  • t2.2xlarge, 32 GiB of Memory, 8 vCPUs, EBS only, 64-bit platform

Compute Optimized Instances

C5 instances are optimized for compute-intensive workloads and deliver high performance at a low price-to-compute ratio. With C5d instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the instance.

  • c5.large: 4 GiB of memory, 2 vCPUs, 64-bit platform
  • c5.xlarge: 8 GiB of memory, 4 vCPUs, 64-bit platform
  • c5.2xlarge: 16 GiB of memory, 8 vCPUs, 64-bit platform
  • c5.4xlarge: 32 GiB of memory, 16 vCPUs, 64-bit platform
  • c5.9xlarge: 72 GiB of memory, 36 vCPUs, 64-bit platform
  • c5.18xlarge: 144 GiB of memory, 72 vCPUs, 64-bit platform
  • c5.metal: 192 GiB of memory, 96 vCPUs, EBS only, 64-bit platform
  • c5d.metal: 192 GiB of memory, 96 vCPUs, 4 x 900 GB NVMe SSD, 64-bit platform


  • c5d.large: 4 GiB of memory, 2 vCPUs, 1 x 50 GB NVMe SSD, 64-bit platform
  • c5d.xlarge: 8 GiB of memory, 4 vCPUs, 1 x 100 GB NVMe SSD, 64-bit platform
  • c5d.2xlarge: 16 GiB of memory, 8 vCPUs, 1 x 200 GB NVMe SSD, 64-bit platform
  • c5d.4xlarge: 32 GiB of memory, 16 vCPUs, 1 x 400 GB NVMe SSD, 64-bit platform
  • c5d.9xlarge: 72 GiB of memory, 36 vCPUs, 1 x 900 GB NVMe SSD, 64-bit platform
  • c5d.18xlarge: 144 GiB of memory, 72 vCPUs, 2 x 900 GB NVMe SSD, 64-bit platform
Previous Generations
  • c4.large: 3.75 GiB of memory, 2 vCPUs, 64-bit platform
  • c4.xlarge: 7.5 GiB of memory, 4 vCPUs, 64-bit platform
  • c4.2xlarge: 15 GiB of memory, 8 vCPUs, 64-bit platform
  • c4.4xlarge: 30 GiB of memory, 16 vCPUs, 64-bit platform
  • c4.8xlarge: 60 GiB of memory, 36 vCPUs, 64-bit platform
  • c3.large: 3.75 GiB of memory, 2 vCPUs, 32 GB of SSD-based local instance storage, 64-bit platform
  • c3.xlarge: 7.5 GiB of memory, 4 vCPUs, 80 GB of SSD-based local instance storage, 64-bit platform
  • c3.2xlarge: 15 GiB of memory, 8 vCPUs, 160 GB of SSD-based local instance storage, 64-bit platform
  • c3.4xlarge: 30 GiB of memory, 16 vCPUs, 320 GB of SSD-based local instance storage, 64-bit platform
  • c3.8xlarge: 60 GiB of memory, 32 vCPUs, 640 GB of SSD-based local instance storage, 64-bit platform

Memory Optimized Instances

X1 Instances are optimized for large-scale, enterprise-class, in-memory applications and have the lowest price per GiB of RAM among Amazon EC2 instance types.

  • x1.16xlarge: 976 GiB of memory, 64 vCPUs, 1 x 1,920 GB of SSD-based instance storage, 64-bit platform, 10 Gigabit Ethernet
  • x1.32xlarge: 1,952 GiB of memory, 128 vCPUs, 2 x 1,920 GB of SSD-based instance storage, 64-bit platform, 20 Gigabit Ethernet

R5 instances deliver 5% more memory per vCPU than R4, and the largest size provides 768 GiB of memory. In addition, R5 instances deliver a 10% price-per-GiB improvement and approximately 20% higher CPU performance than R4. With R5d instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the instance.

  • r5.large: 16 GiB of memory, 2 vCPUs, 64-bit platform
  • r5.xlarge: 32 GiB of memory, 4 vCPUs, 64-bit platform
  • r5.2xlarge: 64 GiB of memory, 8 vCPUs, 64-bit platform
  • r5.4xlarge: 128 GiB of memory, 16 vCPUs, 64-bit platform
  • r5.8xlarge: 256 GiB of memory, 32 vCPUs, 64-bit platform
  • r5.12xlarge: 384 GiB of memory, 48 vCPUs, 64-bit platform
  • r5.16xlarge: 512 GiB of memory, 64 vCPUs, 64-bit platform
  • r5.24xlarge: 768 GiB of memory, 96 vCPUs, 64-bit platform
  • r5.metal: 768 GiB of memory, 96 vCPUs, 64-bit platform
  • r5d.metal: 768 GiB of memory, 96 vCPUs, 4 x 900 GB NVMe SSD, 64-bit platform


  • r5a.large, 16 GiB of Memory, 2 vCPUs, EBS-Only, 64-bit platform
  • r5a.xlarge, 32 GiB of Memory, 4 vCPUs, EBS-Only, 64-bit platform
  • r5a.2xlarge, 64 GiB of Memory, 8 vCPUs, EBS-Only, 64-bit platform
  • r5a.4xlarge, 128 GiB of Memory, 16 vCPUs, EBS-Only, 64-bit platform
  • r5a.8xlarge, 256 GiB of Memory, 32 vCPUs, EBS-Only, 64-bit platform
  • r5a.12xlarge, 384 GiB of Memory, 48 vCPUs, EBS-Only, 64-bit platform
  • r5a.16xlarge, 512 GiB of Memory, 64 vCPUs, EBS-Only, 64-bit platform
  • r5a.24xlarge, 768 GiB of Memory, 96 vCPUs, EBS-Only, 64-bit platform


  • r5d.large: 16 GiB of memory, 2 vCPUs, 1 x 75 GB NVMe SSD, 64-bit platform
  • r5d.xlarge: 32 GiB of memory, 4 vCPUs, 1 x 150 GB NVMe SSD, 64-bit platform
  • r5d.2xlarge: 64 GiB of memory, 8 vCPUs, 1 x 300 GB NVMe SSD, 64-bit platform
  • r5d.4xlarge: 128 GiB of memory, 16 vCPUs, 2 x 300 GB NVMe SSD, 64-bit platform
  • r5d.8xlarge: 256 GiB of memory, 32 vCPUs, 2 x 600 GB NVMe SSD, 64-bit platform
  • r5d.12xlarge: 384 GiB of memory, 48 vCPUs, 2 x 900 GB NVMe SSD, 64-bit platform
  • r5d.16xlarge: 512 GiB of memory, 64 vCPUs, 4 x 600 GB NVMe SSD, 64-bit platform
  • r5d.24xlarge: 768 GiB of memory, 96 vCPUs, 4 x 900 GB NVMe SSD, 64-bit platform
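The "5% more memory per vCPU than R4" claim can be checked directly against sizes listed on this page:

```python
# Checking the stated memory-per-vCPU advantage using sizes from this page:
# r5.24xlarge (768 GiB / 96 vCPUs) vs. r4.16xlarge (488 GiB / 64 vCPUs).
r5_gib_per_vcpu = 768 / 96   # 8.0 GiB per vCPU
r4_gib_per_vcpu = 488 / 64   # 7.625 GiB per vCPU
increase = r5_gib_per_vcpu / r4_gib_per_vcpu - 1
print(f"{increase:.1%}")     # -> 4.9%, i.e. roughly the stated 5%
```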

Amazon EC2 z1d instances offer both high compute capacity and a high memory footprint. High frequency z1d instances deliver a sustained all core frequency of up to 4.0 GHz, the fastest of any cloud instance. With z1d instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the z1d instance.

  • z1d.large: 16 GiB of Memory, 2 vCPUs, 1 x 75 GB NVMe SSD, 64-bit platform
  • z1d.xlarge: 32 GiB of Memory, 4 vCPUs, 1 x 150 GB NVMe SSD, 64-bit platform
  • z1d.2xlarge: 64 GiB of Memory, 8 vCPUs, 1 x 300 GB NVMe SSD, 64-bit platform
  • z1d.3xlarge: 96 GiB of Memory, 12 vCPUs, 1 x 450 GB NVMe SSD, 64-bit platform
  • z1d.6xlarge: 192 GiB of Memory, 24 vCPUs, 1 x 900 GB NVMe SSD, 64-bit platform
  • z1d.12xlarge: 384 GiB of Memory, 48 vCPUs, 2 x 900 GB NVMe SSD, 64-bit platform
  • z1d.metal: 384 GiB of Memory, 48 vCPUs, 2 x 900 GB NVMe SSD, 64-bit platform
Previous Generations
  • r4.large: 15.25 GiB of memory, 2 vCPUs, 64-bit platform
  • r4.xlarge: 30.5 GiB of memory, 4 vCPUs, 64-bit platform
  • r4.2xlarge: 61 GiB of memory, 8 vCPUs, 64-bit platform
  • r4.4xlarge: 122 GiB of memory, 16 vCPUs, 64-bit platform
  • r4.8xlarge: 244 GiB of memory, 32 vCPUs, 64-bit platform
  • r4.16xlarge: 488 GiB of memory, 64 vCPUs, 64-bit platform
  • r3.large: 15.25 GiB of memory, 2 vCPUs, 1 x 32 GB of SSD-based instance storage, 64-bit platform
  • r3.xlarge: 30.5 GiB of memory, 4 vCPUs, 1 x 80 GB of SSD-based instance storage, 64-bit platform
  • r3.2xlarge: 61 GiB of memory, 8 vCPUs, 1 x 160 GB of SSD-based instance storage, 64-bit platform
  • r3.4xlarge: 122 GiB of memory, 16 vCPUs, 1 x 320 GB of SSD-based instance storage, 64-bit platform
  • r3.8xlarge: 244 GiB of memory, 32 vCPUs, 2 x 320 GB of SSD-based instance storage, 64-bit platform, 10 Gigabit Ethernet

Storage Optimized Instances

Instances of this family provide very high disk I/O performance or proportionally higher storage density per instance, and are ideally suited for applications that benefit from high sequential I/O performance across very large data sets. Storage-optimized instances also provide high levels of CPU, memory and network performance.

  • i3.large: 15.25 GiB of memory, 2 vCPUs, 1 x 0.475 TB NVMe SSD, 64-bit platform
  • i3.xlarge: 30.5 GiB of memory, 4 vCPUs, 1 x 0.95 TB NVMe SSD, 64-bit platform
  • i3.2xlarge: 61 GiB of memory, 8 vCPUs, 1 x 1.9 TB NVMe SSD, 64-bit platform
  • i3.4xlarge: 122 GiB of memory, 16 vCPUs, 2 x 1.9 TB NVMe SSD, 64-bit platform
  • i3.8xlarge: 244 GiB of memory, 32 vCPUs, 4 x 1.9 TB NVMe SSD, 64-bit platform
  • i3.16xlarge: 488 GiB of memory, 64 vCPUs, 8 x 1.9 TB NVMe SSD, 64-bit platform
Previous Generations
  • i2.xlarge: 30.5 GiB of memory, 4 vCPUs, 800 GB of SSD-based instance storage, 64-bit platform
  • i2.2xlarge: 61 GiB of memory, 8 vCPUs, 2 x 800 GB of SSD-based instance storage, 64-bit platform
  • i2.4xlarge: 122 GiB of memory, 16 vCPUs, 4 x 800 GB of SSD-based instance storage, 64-bit platform
  • i2.8xlarge: 244 GiB of memory, 32 vCPUs, 8 x 800 GB of SSD-based instance storage, 64-bit platform, 10 Gigabit Ethernet

Instances of this family provide low cost storage and very high disk throughput and are ideally suited for applications that benefit from high sequential I/O performance across very large datasets on local storage.

  • d2.xlarge: 30.5 GiB of memory, 4 vCPUs, 3 x 2000 GB of HDD-based instance storage, 64-bit platform
  • d2.2xlarge: 61 GiB of memory, 8 vCPUs, 6 x 2000 GB of HDD-based instance storage, 64-bit platform
  • d2.4xlarge: 122 GiB of memory, 16 vCPUs, 12 x 2000 GB of HDD-based instance storage, 64-bit platform
  • d2.8xlarge: 244 GiB of memory, 36 vCPUs, 24 x 2000 GB of HDD-based instance storage, 64-bit platform, 10 Gigabit Ethernet

Accelerated Computing Instances

Instances of this family provide access to workload accelerators such as GPU. They are ideal for applications such as machine learning, computational fluid dynamics, computational finance, seismic analysis, molecular modeling, genomics, and other high-performance computing workloads.

Amazon EC2 P3 instances deliver high performance compute in the cloud with up to 8 NVIDIA® V100 Tensor Core GPUs and up to 100 Gbps of networking throughput for machine learning and HPC applications. These instances deliver up to one petaflop of mixed-precision performance per instance to significantly accelerate machine learning and high performance computing applications. Amazon EC2 P3 instances have been proven to reduce machine learning training times from days to minutes, as well as increase the number of simulations completed for high performance computing by 3-4x.  

  • p3.2xlarge: 1 GPU, 8 vCPUs, 61 GiB of memory, up to 10 Gbps network performance
  • p3.8xlarge: 4 GPUs, 32 vCPUs, 244 GiB of memory, 10 Gbps network performance
  • p3.16xlarge: 8 GPUs, 64 vCPUs, 488 GiB of memory, 25 Gbps network performance
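The "one petaflop per instance" figure follows from the per-GPU mixed-precision peak. The 125 TFLOPS figure per V100 used below is NVIDIA's published Tensor Core peak, assumed here rather than stated on this page:

```python
# Back-of-the-envelope check of the per-instance petaflop figure. The
# 125 TFLOPS mixed-precision (Tensor Core) peak per V100 is NVIDIA's
# vendor figure, an assumption not taken from this page.
V100_MIXED_PRECISION_TFLOPS = 125
P3_16XLARGE_GPUS = 8
total_pflops = P3_16XLARGE_GPUS * V100_MIXED_PRECISION_TFLOPS / 1000
print(total_pflops)  # -> 1.0
```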

Amazon EC2 G4 instances deliver a cost-effective GPU platform for deploying machine learning models in production and for graphics-intensive applications. G4 instances provide the latest generation NVIDIA T4 GPUs, AWS custom Intel Cascade Lake CPUs, up to 100 Gbps of networking throughput, and up to 1.8 TB of local NVMe storage. These instances deliver up to 65 TFLOPS of FP16 performance to accelerate machine learning inference, and include ray-tracing cores to accelerate graphics workloads such as graphics workstations, video transcoding, and game streaming in the cloud.

  • g4dn.xlarge, 1 GPU, 4 vCPUs, 16 GiB of memory, 1 x 125 GB NVMe SSD, up to 25 Gbps network performance
  • g4dn.2xlarge, 1 GPU, 8 vCPUs, 32 GiB of memory, 1 x 225 GB NVMe SSD, up to 25 Gbps network performance
  • g4dn.4xlarge, 1 GPU, 16 vCPUs, 64 GiB of memory, 1 x 225 GB NVMe SSD, up to 25 Gbps network performance
  • g4dn.8xlarge, 1 GPU, 32 vCPUs, 128 GiB of memory, 1 x 900 GB NVMe SSD, 50 Gbps network performance
  • g4dn.12xlarge, 4 GPUs, 48 vCPUs, 192 GiB of memory, 1 x 900 GB NVMe SSD, 50 Gbps network performance
  • g4dn.16xlarge, 1 GPU, 64 vCPUs, 256 GiB of memory, 1 x 900 GB NVMe SSD, 50 Gbps network performance
Previous Generations

Backed by NVIDIA Tesla M60 GPUs, G3 instances are ideal for graphics workloads such as 3D rendering, 3D visualizations, graphics-intensive remote workstations, video encoding, and virtual reality applications.

  • g3.4xlarge: 1 GPU, 16 vCPUs, 122 GiB of memory, up to 10 Gbps network performance
  • g3.8xlarge: 2 GPUs, 32 vCPUs, 244 GiB of memory, 10 Gbps network performance
  • g3.16xlarge: 4 GPUs, 64 vCPUs, 488 GiB of memory, 20 Gbps network performance
  • p2.xlarge: 1 GPU, 4 vCPUs, 61 GiB of memory, high network performance
  • p2.8xlarge: 8 GPUs, 32 vCPUs, 488 GiB of memory, 10 Gbps network performance
  • p2.16xlarge: 16 GPUs, 64 vCPUs, 732 GiB of memory, 20 Gbps network performance

Micro Instances

Micro instances (t1.micro) provide a small amount of consistent CPU resources and allow you to increase CPU capacity in short bursts when additional cycles are available. They are well suited for lower throughput applications and web sites that require additional compute cycles periodically. You can learn more about how you can use Micro instances and appropriate applications in the Amazon EC2 documentation.

  • t1.micro: (Default) 613 MiB of memory, up to 2 ECUs (for short periodic bursts), EBS storage only, 32-bit or 64-bit platform

EC2 Compute Unit (ECU) – One EC2 Compute Unit (ECU) provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor.
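Given that definition, an ECU count can be translated into a rough 2007-era GHz-equivalent range. The helper below is purely illustrative, not an AWS tool:

```python
# Hypothetical convenience helper: translate an ECU count into the rough
# 2007-era GHz-equivalent range implied by the definition above
# (1 ECU ~ a 1.0-1.2 GHz 2007 Opteron or Xeon core).
def ecu_to_ghz_range(ecus):
    return (ecus * 1.0, ecus * 1.2)

low, high = ecu_to_ghz_range(2)  # t1.micro's short-burst ceiling of 2 ECUs
print(low, high)                 # -> 2.0 2.4
```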
