Posted On: Oct 25, 2017

We are excited to announce the availability of Amazon EC2 P3 instances, the next generation of EC2 general-purpose GPU computing instances. P3 instances are powered by up to 8 of the latest-generation NVIDIA Tesla V100 GPUs and are ideal for computationally advanced workloads such as machine learning (ML), high performance computing (HPC), scientific computing and simulations, financial analytics, image and video processing, and data compression.

P3 instances provide a powerful platform for ML and HPC by offering up to eight Tesla V100 GPUs, 64 vCPUs using custom Intel Xeon E5 processors, 488 GB of RAM, and up to 25 Gbps of aggregate network bandwidth leveraging Elastic Network Adapter technology.

Based on NVIDIA’s latest Volta architecture, each Tesla V100 GPU contains 5,120 CUDA Cores and 640 Tensor Cores, providing 125 TFLOPS of mixed-precision performance, 15.7 TFLOPS of single-precision (FP32) performance, and 7.8 TFLOPS of double-precision (FP64) performance. A 300 GB/s NVLink hyper-mesh interconnect allows GPU-to-GPU communication at high speed and low latency.
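
As a rough sanity check on the 125 TFLOPS figure, each Tensor Core performs a 4x4x4 matrix fused multiply-add per clock, or 128 floating-point operations; assuming a boost clock of roughly 1.53 GHz (a published V100 specification, not stated in this announcement), the arithmetic works out as follows:

```latex
% Peak mixed-precision throughput, assuming a ~1.53 GHz boost clock
640 \;\text{Tensor Cores} \times 128 \;\tfrac{\text{FLOP}}{\text{clock}}
  \times 1.53 \times 10^{9} \;\tfrac{\text{clocks}}{\text{s}}
  \approx 1.25 \times 10^{14} \;\text{FLOPS} = 125 \;\text{TFLOPS}
```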

For ML applications, P3 instances offer up to a 14x performance improvement over P2 instances, allowing developers to train their machine learning models in hours instead of days and bring their innovations to market faster.

P3 instances are available in three sizes: p3.2xlarge with 1 GPU, p3.8xlarge with 4 GPUs, and p3.16xlarge with 8 GPUs. Customers can launch P3 instances using the Amazon Web Services console, the Amazon EC2 command line interface, the Amazon Web Services SDKs, and third-party libraries. They are available in the Amazon Web Services China (Beijing) Region, operated by Sinnet, as On-Demand Instances and Reserved Instances. Visit the Amazon Web Services China (Beijing) Region EC2 pricing page for China pricing. To learn more about P3 and other Amazon EC2 instances, visit the Amazon EC2 Instance Types page.
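
As an illustrative sketch of launching the smallest size with the AWS SDK for Python, the snippet below starts a p3.2xlarge On-Demand instance in the China (Beijing) Region; the AMI ID and key pair name are hypothetical placeholders, not values from this announcement.

```python
# Minimal sketch: launch a p3.2xlarge On-Demand instance in the
# China (Beijing) Region (cn-north-1) with boto3.
# The ImageId and KeyName values are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="cn-north-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI ID; replace with your own
    InstanceType="p3.2xlarge",         # 1 Tesla V100 GPU
    KeyName="my-key-pair",             # hypothetical key pair name
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```

The same call with `InstanceType="p3.8xlarge"` or `"p3.16xlarge"` would request the 4-GPU and 8-GPU sizes, subject to account limits in the Region.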