Quickly build deep learning applications

The AWS Deep Learning AMIs provide machine learning practitioners and researchers with the infrastructure and tools to accelerate deep learning in the cloud, at any scale. You can quickly launch Amazon EC2 instances pre-installed with popular deep learning frameworks such as Apache MXNet and Gluon, TensorFlow, Microsoft Cognitive Toolkit, Caffe, Caffe2, Theano, Torch, PyTorch, and Keras to train sophisticated, custom AI models, experiment with new algorithms, or learn new skills and techniques.

Whether you need Amazon EC2 GPU or CPU instances, there is no additional charge for the Deep Learning AMIs – you only pay for the AWS resources needed to store and run your applications.

Even for experienced machine learning practitioners, getting started with deep learning can be time consuming and cumbersome. We offer three types of AMIs to support the varied needs of developers. To help you get started, see the AMI selection guide and other deep learning resources.

Conda AMI

For developers who want pre-installed pip packages of deep learning frameworks in separate virtual environments, the Deep Learning Conda-based AMI is available in Ubuntu and Amazon Linux versions.
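On a Conda-based AMI, each framework lives in its own conda environment that you activate before use. A minimal sketch, assuming a Conda-based Deep Learning AMI; the environment name `tensorflow_p36` is illustrative, so list the installed environments first to confirm the exact names:

```shell
# Sketch: switching between per-framework conda environments on a
# Deep Learning Conda AMI. Guarded so it degrades gracefully on other machines.
if command -v conda >/dev/null 2>&1; then
    conda env list                                  # show the pre-installed environments
    # Activate one framework's environment (name is illustrative):
    source activate tensorflow_p36 2>/dev/null \
        || echo "environment not found - check 'conda env list' for the exact name"
else
    echo "conda not found - run this on a Deep Learning Conda AMI"
fi
```

Because each framework is isolated in its own environment, you can switch between framework versions without dependency conflicts.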

Base AMI

For developers who want a clean slate to set up private deep learning engine repositories or custom builds of deep learning engines, the Deep Learning Base AMI is available in Ubuntu and Amazon Linux versions.

AMI with source code

For developers who want pre-installed deep learning frameworks and their source code in a shared Python environment, this Deep Learning AMI is available for P3 instances in CUDA 9 Ubuntu and Amazon Linux versions as well as for P2 instances in CUDA 8 Ubuntu and Amazon Linux versions.

The AWS Deep Learning AMIs support all the popular deep learning frameworks, allowing you to define models and then train them at scale. Built for Amazon Linux and Ubuntu, the AMIs come pre-configured with Apache MXNet and Gluon, TensorFlow, Microsoft Cognitive Toolkit, Caffe, Caffe2, Theano, Torch, PyTorch, and Keras, enabling you to quickly deploy and run any of these frameworks at scale.

To expedite your development and model training, the AWS Deep Learning AMIs include the latest NVIDIA GPU acceleration through pre-configured CUDA and cuDNN drivers, as well as the Intel Math Kernel Library (MKL), along with popular Python packages and the Anaconda Platform.

  • The AWS Deep Learning AMIs run on Amazon EC2 P2 instances, as well as P3 instances that take advantage of NVIDIA's Volta architecture. The AMIs are pre-installed with NVIDIA CUDA and cuDNN drivers to substantially reduce the time needed to complete your computations.
  • The AWS Deep Learning AMIs run on Amazon EC2 Intel-based C5 instances designed for inference.
  • The AMIs come installed with Jupyter notebooks loaded with Python 2.7 and Python 3.5 kernels, along with popular Python packages, including the AWS SDK for Python.
  • To simplify package management and deployment, the AWS Deep Learning AMIs install the Anaconda2 and Anaconda3 Data Science Platforms for large-scale data processing, predictive analytics, and scientific computing.

We have three types of AWS Deep Learning AMIs available to support the various needs of machine learning practitioners. Visit our AMI selection guide, simple tutorials, and more deep learning resources to get started today.

You can find the Deep Learning AMI of your choice in the Quick Start section of Step 1: Choose an Amazon Machine Image (AMI) in the EC2 instance launch wizard.
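If you prefer the command line to the console wizard, the same AMIs can be located with the AWS CLI. A sketch, assuming the CLI is installed and credentials are configured; the name filter is illustrative and may need adjusting:

```shell
# Sketch: list Amazon-owned Deep Learning AMIs from the AWS CLI instead of
# the console launch wizard. Guarded so it degrades gracefully when the CLI
# or credentials are unavailable.
if command -v aws >/dev/null 2>&1; then
    aws ec2 describe-images \
        --owners amazon \
        --filters "Name=name,Values=Deep Learning AMI*" \
        --query "Images[].[ImageId,Name]" \
        --output table \
        || echo "describe-images failed - check your AWS credentials and region"
else
    echo "aws CLI not installed"
fi
```

The returned ImageId can then be passed to `aws ec2 run-instances` to launch the instance type of your choice.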

(Screenshot: choosing a Deep Learning AMI in the EC2 launch wizard)