In this module, you will build the container image for your monolithic Node.js application and push it to Amazon Elastic Container Registry.

Deployment to Amazon ECR

Containers allow you to easily package an application's code, configurations, and dependencies into easy-to-use building blocks that deliver environmental consistency, operational efficiency, developer productivity, and version control. Containers can help ensure that applications deploy quickly, reliably, and consistently regardless of deployment environment.

Architecture Overview

Launching a container with a new release of code can be done without significant deployment overhead. Operational speed is improved, because code built in a container on a developer’s local machine can be easily moved to a test server by simply moving the container. At build time, this container can be linked to other containers required to run the application stack.

Dependency Control & Improved Pipeline
A Docker container image is a point in time capture of an application's code and dependencies. This allows an engineering organization to create a standard pipeline for the application life cycle. For example:

  1. Developers build and run the container locally.
  2. Continuous integration server runs the same container and executes integration tests against it to make sure it passes expectations.
  3. The same container is shipped to a staging environment where its runtime behavior can be checked using load tests or manual QA.
  4. The same container is shipped to production.

Being able to build, test, ship, and run the exact same container through all stages of the integration and deployment pipeline makes delivering a high quality, reliable application considerably easier.
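The four pipeline stages above can be sketched as a short script. This is a generic illustration, not part of this tutorial's project: the image name `myapp`, the test command, and the registry address are all hypothetical.

```shell
# Hypothetical pipeline sketch: one image, built once, promoted unchanged.
IMAGE=myapp
TAG=$(date +%Y%m%d%H%M%S)   # one immutable tag per build

# 1. Build locally (or on the CI server):
#      docker build -t ${IMAGE}:${TAG} .
# 2. Run integration tests against that exact image:
#      docker run --rm ${IMAGE}:${TAG} npm test
# 3-4. Promote the identical image to staging and then production by
#      pushing it to a shared registry -- nothing is ever rebuilt:
#      docker push registry.example.com/${IMAGE}:${TAG}

echo "built and promoted ${IMAGE}:${TAG}"
```

Because every stage pulls the same immutable tag, the artifact that passed the tests is byte-for-byte the artifact that runs in production.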

Density & Resource Efficiency
Containers facilitate enhanced resource efficiency by allowing multiple heterogeneous processes to run on a single system. Resource efficiency is a natural result of the isolation and allocation techniques that containers use. Containers can be restricted to consume certain amounts of a host's CPU and memory. By understanding what resources a container needs and what resources are available from the underlying host server, you can right-size the compute resources you use with smaller hosts or increase the density of processes running on a single large host, increasing availability and optimizing resource consumption.
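For example, `docker run` exposes standard flags (`--cpus` and `--memory`) for capping a container's share of the host. The sketch below is a dry run that only prints the command you would run; the image name `api:latest` is illustrative.

```shell
# Sketch: cap a container at half a CPU core and 256 MiB of memory.
# --cpus and --memory are standard `docker run` flags.
LIMITS="--cpus 0.5 --memory 256m"

# Print the command rather than run it (no Docker daemon needed here):
echo "docker run -d ${LIMITS} api:latest"
```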

The flexibility of Docker containers is based on their portability, ease of deployment, and small size. In contrast to the installation and configuration required on a VM, packaging services inside of containers allows them to be easily moved between hosts, isolated from failure of other adjacent services, and protected from errant patches or software upgrades on the host system. 

Time to Complete: 20 minutes

Services Used: Amazon Elastic Container Registry (Amazon ECR)

For the first part of this tutorial, you will build the Docker container image for your monolithic Node.js application and push it to Amazon Elastic Container Registry.

  • Step 1. Get Set Up

    In the next few steps, you are going to be using Docker, GitHub, Amazon ECS, and Amazon ECR to deploy code into containers. To make sure you can complete these steps, you will need to ensure you have the right tools.

    1. Have an Amazon Web Services Account: If you don't already have an account with Amazon Web Services, you can sign up here.
      ⚐ NOTE: some of the services you will be using may require your account to be active for more than 12 hours. If you experience difficulty with any services and have a newly created account, please wait a few hours and try again.
    2. Install Docker: You will be using Docker to build the image files that you will run in your containers. Docker is an open source project and you can download it here for Mac or for Windows.
      Once Docker is installed, you can check it is working by running docker --version in the terminal. You should see something like this: Docker version 17.03.0-ce, build 60ccb22.
    3. Install the AWS CLI
      • You will use the AWS Command Line Interface (AWS CLI) to push the images to Amazon Elastic Container Registry. You can learn about the AWS CLI here.
      • Once the AWS CLI is installed, you can check it is working by running aws --version in the terminal. You should see something like this: aws-cli/1.11.63 Python/2.7.10 Darwin/16.5.0 botocore/1.5.26.
      • If you already have the AWS CLI installed, run the following command in the terminal to ensure it is updated to the latest version: pip install awscli --upgrade --user
    4. Have a Text Editor: If you don't already have a text editor for coding, you should install one to your local environment. Atom is a simple, open-source text editor from GitHub that is popular with developers.
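    A quick sanity check for the tools above: this small sketch reports whether each required command is on your PATH (it does not verify versions).

    ```shell
    # Check that the required command-line tools are installed.
    for tool in docker aws git; do
      if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: found"
      else
        echo "$tool: NOT FOUND -- install it before continuing"
      fi
    done
    ```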
  • Step 2. Download & Open the Project

    Download Code from GitHub: Navigate to the amazon-ecs-nodejs-microservices repository on GitHub and select 'Clone or Download' to download it to your local environment. You can also use GitHub Desktop or Git to clone the repository.

    Open the Project Files: Start Atom, select 'Add Project Folder', and select the folder where you saved the repository 'amazon-ecs-nodejs-microservices'. This will add the entire project into Atom so you can easily work with it.

    In your project folder, you should see folders for infrastructure and services. The infrastructure folder holds the AWS CloudFormation infrastructure configuration code you will use in the next step. The services folder contains the code that forms the Node.js application.

    Take a few minutes to click through the files and familiarize yourself with the different aspects of the application, including the database db.json, the server server.js, package.json, and the application's Dockerfile.

  • Step 3. Provision a Repository

    Create the Repository:

    Record Repository Information:

    • After you hit next, you should see a confirmation message that includes your new repository's address.
    • The repository address follows a simple format: [account-id].dkr.ecr.[region].amazonaws.com/[repo-name].


    ⚐ NOTE: You will need this address, including your account ID and the region you are using, in the next steps.
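    If you prefer the command line, the repository can also be created with the AWS CLI's aws ecr create-repository command (shown commented out, since it requires configured credentials), and the address can be assembled from its parts. The account ID, region, and repository name below are placeholders.

    ```shell
    # Alternative to the console (requires configured AWS credentials):
    #   aws ecr create-repository --repository-name api --region us-west-2

    # The repository address is assembled from your account ID, region,
    # and repository name (placeholder values shown):
    ACCOUNT_ID=123456789012
    REGION=us-west-2
    REPO_NAME=api
    echo "${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO_NAME}"
    ```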

  • Step 4. Build & Push the Docker Image

    Open your terminal and navigate to the 2-containerized/services/api directory of the project you cloned or downloaded: ~/amazon-ecs-nodejs-microservices/2-containerized/services/api.

    Authenticate Docker Login with Amazon Web Services:

    1. Run aws ecr get-login --no-include-email --region [region]. For example: aws ecr get-login --no-include-email --region us-west-2. If you have never used the AWS CLI before, you may need to configure your credentials.
    2. You're going to get a large output starting with docker login -u AWS -p ... Copy the entire output, then paste and run it in the terminal.
    3. You should see Login Succeeded.

    ⚐ NOTE: If this login does not succeed, it may be because you have a newer version of Docker that has deprecated the -e none flag. To correct this, paste the output into your text editor, remove -e none from the end of the output, and run the updated output in the terminal.
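    Instead of editing the output by hand, you can strip the flag with sed. The login string below is a shortened, made-up sample; your real output will contain a very long token.

    ```shell
    # Sample (fake) output from `aws ecr get-login` with the deprecated flag:
    LOGIN_CMD='docker login -u AWS -p <long-token> -e none https://123456789012.dkr.ecr.us-west-2.amazonaws.com'

    # Remove the deprecated `-e none` flag before running the command:
    FIXED=$(printf '%s' "$LOGIN_CMD" | sed 's/ -e none//')
    echo "$FIXED"
    ```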

    • Build the Image: In the terminal, run docker build -t api . (note: the trailing . is important; it tells Docker to build from the Dockerfile in the current directory).
    • Tag the Image: After the build completes, tag the image so you can push it to the repository: docker tag api:latest [account-id].dkr.ecr.[region].amazonaws.com/api:v1

    ⚐ Pro tip: the :v1 represents the image build version. Every time you build the image, you should increment this version number. If you were using a script, you could use an automated number, such as a time stamp to tag the image. This is a best practice that allows you to easily revert to a previous container image build in the future.

    • Push the image to ECR: Run docker push to push your image to ECR: docker push [account-id].dkr.ecr.[region].amazonaws.com/api:v1
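    Putting the steps together, this dry-run sketch prints the exact build/tag/push sequence with placeholder values filled in; substitute your own account ID and region, and remove the echos to actually run the commands.

    ```shell
    # Placeholder values -- replace with your own account ID and region.
    ACCOUNT_ID=123456789012
    REGION=us-west-2
    ECR_URI="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/api"

    # Print the build/tag/push sequence (remove the echos to run it):
    echo "docker build -t api ."
    echo "docker tag api:latest ${ECR_URI}:v1"
    echo "docker push ${ECR_URI}:v1"
    ```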

    If you navigate to your ECR repository, you should see your image tagged v1.