Posted On: Aug 6, 2018

The Amazon Deep Learning AMIs for Ubuntu and Amazon Linux now come with an optimized build of TensorFlow 1.9 custom-built from source and tuned for high-performance training, Horovod for multi-GPU TensorFlow scaling, the latest Apache MXNet 1.2 performance and usability improvements, the new Keras 2-MXNet backend with high-performance multi-GPU training support, the new MXBoard tool for improved debugging and visualization of MXNet model training, an upgraded NVIDIA stack, and support for Open Neural Network Exchange (ONNX) for framework interoperability.

Faster training with optimized TensorFlow 1.9

The Deep Learning AMIs now come with a compute-optimized build of TensorFlow 1.9 custom-built from source to accelerate training performance on the Intel Xeon Platinum processors powering Amazon EC2 C5 instances. They also offer a GPU-optimized build of TensorFlow 1.9 configured with NVIDIA CUDA 9 and cuDNN 7 to take advantage of mixed-precision training on the Volta V100 GPUs powering Amazon EC2 P3 instances. The Deep Learning AMIs automatically deploy the high-performance build of TensorFlow optimized for the EC2 instance type of your choice when you activate the TensorFlow virtual environment for the first time.
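As a quick, optional check (not part of the AMI documentation), after activating the TensorFlow environment you can print the TensorFlow version and the devices it can see; on a P3 instance you would expect GPU devices to appear, while on C5 you would see the CPU build:

```python
# Minimal sketch: verify the TensorFlow build that the AMI activated.
# The exact device list depends on the EC2 instance type you launched.
import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.__version__)  # expect 1.9.x on this release of the AMI
print([d.name for d in device_lib.list_local_devices()])
```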

In addition, for developers looking to scale their TensorFlow training from a single GPU to multiple GPUs, the AMIs come fully configured with Horovod, a popular open-source distributed training framework. This pre-built version of Horovod includes several performance improvements and configuration tweaks that speed up distributed training across clusters of Amazon EC2 P3 instances.
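The sketch below shows the general shape of Horovod's TensorFlow API for multi-GPU training; the model, data, and hyperparameters are placeholders, and the script is assumed to be launched with one process per GPU (for example via mpirun):

```python
# Minimal sketch of multi-GPU training with Horovod and TensorFlow 1.x.
import tensorflow as tf
import horovod.tensorflow as hvd

hvd.init()  # initialize Horovod (one process per GPU)

# Pin each process to a single GPU based on its local rank.
config = tf.ConfigProto()
config.gpu_options.visible_device_list = str(hvd.local_rank())

# Toy model; replace with your own network and input pipeline.
x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.int64, [None])
logits = tf.layers.dense(x, 10)
loss = tf.losses.sparse_softmax_cross_entropy(labels=y, logits=logits)

# Scale the learning rate by the number of workers and wrap the optimizer
# so gradients are averaged across GPUs.
opt = tf.train.GradientDescentOptimizer(0.01 * hvd.size())
opt = hvd.DistributedOptimizer(opt)
train_op = opt.minimize(loss)

# Broadcast initial variables from rank 0 so all workers start in sync.
hooks = [hvd.BroadcastGlobalVariablesHook(0)]

with tf.train.MonitoredTrainingSession(hooks=hooks, config=config) as sess:
    pass  # feed batches and run train_op here
```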

Apache MXNet 1.2 Improvements

The Deep Learning AMIs support the latest release of Apache MXNet 1.2, offering improved usability and faster performance. MXNet 1.2 includes a new Scala-based, thread-safe, high-level inference API that makes it easier to perform predictions using deep learning models trained with MXNet. MXNet 1.2 also offers a new Intel MKL-DNN integration that accelerates neural network operators such as convolution, deconvolution, and pooling on compute-optimized C5 instances, and enhanced FP16 support that accelerates mixed-precision training on the Tensor Cores of the NVIDIA Volta V100 GPUs powering Amazon EC2 P3 instances.
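To illustrate the FP16 path (this is a generic sketch, not code from the MXNet release notes), a Gluon network can be cast to half precision and trained with a multi-precision optimizer; the network and data here are toy placeholders:

```python
# Minimal sketch of float16 mixed-precision training with MXNet Gluon
# on a Volta GPU (e.g. an EC2 P3 instance).
import mxnet as mx
from mxnet import gluon, autograd, nd

ctx = mx.gpu(0)

net = gluon.nn.HybridSequential()
net.add(gluon.nn.Dense(128, activation='relu'))
net.add(gluon.nn.Dense(10))
net.initialize(mx.init.Xavier(), ctx=ctx)
net.cast('float16')  # run the network in half precision on Tensor Cores

# multi_precision keeps a float32 master copy of the weights for stability.
trainer = gluon.Trainer(net.collect_params(), 'sgd',
                        {'learning_rate': 0.1, 'multi_precision': True})
loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()

# Toy batch of random data cast to float16.
data = nd.random.uniform(shape=(64, 784), ctx=ctx).astype('float16')
label = nd.array([i % 10 for i in range(64)], ctx=ctx).astype('float16')

with autograd.record():
    out = net(data)
    loss = loss_fn(out, label)
loss.backward()
trainer.step(batch_size=64)
```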

High performance multi-GPU training with MXNet backend for Keras 2

The Deep Learning AMIs come pre-installed with the new Keras-MXNet deep learning backend. Keras is a high-level Python neural network API that is popular for quick and easy prototyping of convolutional neural networks (CNNs) and recurrent neural networks (RNNs). Keras developers can now use MXNet as their backend deep learning engine for distributed training of CNNs and RNNs, and get higher performance. Developers can design in Keras, train with Keras-MXNet, and run inference with MXNet in large-scale production environments.
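As a rough sketch of what multi-GPU training looks like with the MXNet backend (assuming the backend is selected, e.g. "backend": "mxnet" in ~/.keras/keras.json, and four GPUs are available; the model, data, and hyperparameters below are illustrative only):

```python
# Minimal sketch of multi-GPU training with Keras on the MXNet backend.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import multi_gpu_model

model = Sequential([
    Dense(128, activation='relu', input_shape=(784,)),
    Dense(10, activation='softmax'),
])

# Replicate the model across GPUs; gradients are aggregated automatically.
parallel_model = multi_gpu_model(model, gpus=4)
parallel_model.compile(optimizer='sgd',
                       loss='categorical_crossentropy',
                       metrics=['accuracy'])

# Toy random data in place of a real dataset.
x = np.random.rand(1024, 784)
y = np.eye(10)[np.random.randint(0, 10, size=1024)]
parallel_model.fit(x, y, batch_size=256, epochs=1)
```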

Improved debugging support with MXBoard

With MXBoard, a Python package that provides APIs for logging MXNet data for visualization in TensorBoard, developers can easily debug and visualize their MXNet model training. MXBoard supports a range of visualizations including histograms, convolutional filters, embeddings, and more.
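The following sketch shows the general logging pattern; the values logged are random placeholders, and TensorBoard is started separately (for example, tensorboard --logdir ./logs) to view them:

```python
# Minimal sketch of logging MXNet training data with MXBoard.
import mxnet as mx
from mxboard import SummaryWriter

with SummaryWriter(logdir='./logs') as sw:
    for step in range(10):
        # Log a scalar such as the training loss at each step.
        sw.add_scalar(tag='train_loss', value=1.0 / (step + 1), global_step=step)
        # Log a histogram of values, e.g. a layer's weights or gradients.
        weights = mx.nd.random.normal(shape=(256,))
        sw.add_histogram(tag='fc_weights', values=weights, bins=50, global_step=step)
```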

NVIDIA stack upgrade

The NVIDIA stack in the AMIs now includes the latest NVIDIA GPU driver 396.37; CUDA 8.0, 9.0, and 9.2; cuDNN 7.1.4; and NCCL 2.2.13, so developers don't have to install these components on their own.

Framework interoperability with ONNX

The Deep Learning AMIs come pre-installed with Open Neural Network Exchange (ONNX), an open-source format for representing neural network computational graphs that is supported by a growing list of frameworks, including Apache MXNet, TensorFlow, PyTorch, Chainer, and Cognitive Toolkit (CNTK). ONNX gives developers the flexibility to migrate between frameworks. For example, developers can use PyTorch for prototyping, building, and training their models, and then use ONNX to migrate their models to MXNet to leverage its scalability for inference.
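A rough sketch of that PyTorch-to-MXNet path follows (the model is a toy network, the file name is arbitrary, and the onnx Python package is assumed to be installed, as it is on the AMIs):

```python
# Minimal sketch: export a PyTorch model to ONNX and load it into MXNet.
import torch
import torch.nn as nn
import mxnet as mx
from mxnet.contrib import onnx as onnx_mxnet

# Define a simple PyTorch model (training loop omitted).
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
model.eval()

# Export to the ONNX format, using a dummy input to trace the graph.
dummy_input = torch.randn(1, 784)
torch.onnx.export(model, dummy_input, 'model.onnx')

# Import the ONNX model into MXNet as a symbol plus parameters, which can
# then be bound into an MXNet module for inference.
sym, arg_params, aux_params = onnx_mxnet.import_model('model.onnx')
print(sym.list_arguments()[:3])  # inspect the imported graph's inputs
```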

Getting started with the Deep Learning AMIs

You can quickly get started with the Amazon Deep Learning AMIs by using the tutorials in the developer guide. You can find the Deep Learning AMI of your choice in the Quick Start section of Step 1: Choose an Amazon Machine Image (AMI) in the EC2 instance launch wizard.