Posted On: Apr 22, 2021
Amazon EC2 Inf1 instances are now available in the Amazon Web Services China (Beijing) region, operated by Sinnet, and the Amazon Web Services China (Ningxia) region, operated by NWCD. Inf1 instances are powered by Amazon Inferentia chips, custom-designed by Amazon Web Services to deliver high-performance, low-cost machine learning inference in the cloud.
These instances deliver up to 30% higher throughput and up to 45% lower cost per inference than the lowest-cost GPU-based instances, and are ideal for applications such as image recognition, natural language processing, personalization, and anomaly detection.
Developers can manage their own machine learning application development platforms by either launching Inf1 instances with Amazon Deep Learning AMIs, which include the Neuron SDK, or using Inf1 instances via Amazon Elastic Kubernetes Service (EKS) or Amazon Elastic Container Service (ECS) for containerized ML applications.
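As one illustration of the Deep Learning AMI path described above, an Inf1 instance can be launched with the standard AWS CLI `ec2 run-instances` command. This is a minimal sketch, not part of the announcement: the AMI ID, key-pair name, and security group below are placeholders you would replace with a current Deep Learning AMI (which bundles the Neuron SDK) and your own resources in the target region.

```shell
# Hypothetical example: launch a single inf1.xlarge instance in the
# China (Beijing) region (cn-north-1) using the AWS CLI.
# ami-0123456789abcdef0, my-key-pair, and my-sg are placeholders --
# substitute a Deep Learning AMI ID and resources from your own account.
aws ec2 run-instances \
    --region cn-north-1 \
    --instance-type inf1.xlarge \
    --image-id ami-0123456789abcdef0 \
    --key-name my-key-pair \
    --security-group-ids my-sg \
    --count 1
```

After the instance is running, SSH in and activate the Neuron-enabled environment shipped with the Deep Learning AMI to compile and serve models on the Inferentia chips.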
Amazon EC2 Inf1 instances are available in four sizes, providing up to 16 Inferentia chips, 96 vCPUs, 192 GB of memory, 100 Gbps of networking bandwidth, and 19 Gbps of Amazon Elastic Block Store (EBS) bandwidth. These instances are purchasable On-Demand, as Reserved Instances, or as Spot Instances.