
Gino Osahon


Getting Started with AWS Containers Part Two


Overview

Welcome to part two of my blog post on getting started with AWS Containers. In this part, you will learn how to set up AWS for containers, build and package containerized applications, set up Docker on AWS, and deploy and manage Docker containers on EC2. Not to worry if these terms are new to you: you'll learn about them all here!

You will learn about the following topics in this second part of the blog post.

  • Setting Up AWS for Containers
    • Creating an AWS account
    • Configuring AWS Identity and Access Management (IAM) roles and policies
    • Creating a VPC with Public and Private Subnets for Your Clusters
  • Building and Packaging Containerized Applications
    • Choosing a containerization technology (Docker, containerd, etc.)
    • Introduction to Docker
    • Setting Up Docker on AWS
    • How to deploy and manage Docker containers on AWS EC2

Prerequisites

To follow this tutorial, you are strongly advised to read part one of the series first.

1. Setting Up AWS for Containers

Creating an AWS Account

Watch this YouTube video created by Cloud Tech on how to create an AWS account:

Configuring AWS Identity and Access Management (IAM) Roles and Policies

Watch this YouTube video created by Inquirinity on how to configure IAM for an AWS account:

You can also visit the Amazon Web Services documentation to learn all about IAM on AWS.
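
To make IAM concrete: a policy is just a JSON document listing allowed actions. The sketch below creates a hypothetical customer-managed policy that permits a few ECS read and run actions; the policy name and action list are illustrative, and you should always grant the least privilege your workflow actually needs.

```shell
# Sketch: create a customer-managed IAM policy allowing a few ECS actions.
# EcsStarterPolicy and the action list are illustrative, not prescriptive.
create_ecs_policy() {
  cat > ecs-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ecs:ListClusters", "ecs:DescribeTasks", "ecs:RunTask"],
      "Resource": "*"
    }
  ]
}
EOF
  # register the policy document with IAM
  aws iam create-policy --policy-name EcsStarterPolicy \
    --policy-document file://ecs-policy.json
}

# Usage (with the AWS CLI installed and configured): create_ecs_policy
```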

Creating a VPC with Public and Private Subnets for Your Clusters

Container instances in your clusters need external network access to communicate with the Amazon ECS service endpoint. However, you might have tasks and services that you would like to run in private subnets. Creating a VPC with both public and private subnets provides you the flexibility to launch tasks and services in either a public or private subnet. Tasks and services in the private subnets can access the internet through a NAT gateway. Services in both the public and private subnets can be configured to use a load balancer so that they can still be reached from the public internet.

This tutorial guides you through creating a VPC with two public subnets and two private subnets, which are provided with internet access through a NAT gateway.

Step 1: Create an Elastic IP Address for Your NAT Gateway

A NAT gateway requires an Elastic IP address in your public subnet, but the VPC wizard does not create one for you. Create the Elastic IP address before running the VPC wizard.

To create an Elastic IP address, open the Amazon VPC console at https://console.aws.amazon.com/vpc/ and sign in, then follow these steps:

  • In the left navigation pane, choose Elastic IPs.

  • Choose Allocate new address, Allocate, Close.

Note the Allocation ID for your newly created Elastic IP address; you enter this later in the VPC wizard.
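
If you prefer the command line, the same allocation can be done with the AWS CLI. A small sketch, wrapped in a function, assuming the CLI is installed and configured:

```shell
# Sketch: allocate an Elastic IP for the NAT gateway and print its
# Allocation ID (the value you later paste into the VPC wizard).
allocate_eip() {
  aws ec2 allocate-address --domain vpc \
    --query 'AllocationId' --output text
}

# Usage: ALLOC_ID=$(allocate_eip)
```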

Step 2: Run the VPC Wizard

The VPC wizard automatically creates and configures most of your VPC resources for you.

  • To run the VPC wizard, in the left navigation pane, choose VPC Dashboard.
  • Choose Launch VPC Wizard, VPC with Public and Private Subnets, Select.
  • For VPC name, give your VPC a unique name.
  • For Elastic IP Allocation ID, choose the ID of the Elastic IP address that you created earlier.
  • Choose Create VPC.

When the wizard is finished, choose OK. Note the Availability Zone in which your VPC subnets were created. Your additional subnets should be created in a different Availability Zone.

Non-default subnets, such as those created by the VPC wizard, are not auto-assigned public IPv4 addresses. Instances launched in the public subnet must be assigned a public IPv4 address to communicate with the Amazon ECS service endpoint.

To modify your public subnet's IPv4 addressing behavior

  • In the left navigation pane, choose Subnets.

  • Select the public subnet for your VPC. By default, the name created by the VPC wizard is Public subnet.

  • Choose Actions, Modify auto-assign IP settings.

  • Select the Enable auto-assign public IPv4 address check box, and then choose Save.

Step 3: Create Additional Subnets

The wizard creates a VPC with a single public and a single private subnet in a single Availability Zone. For greater availability, you should create at least one more of each subnet type in a different Availability Zone so that your VPC has both public and private subnets across two Availability Zones.

To create an additional private subnet

  • In the left navigation pane, choose Subnets.

  • Choose Create Subnet.

  • For Name tag, enter a name for your subnet, such as Private subnet.

  • For VPC, choose the VPC that you created earlier.

  • For Availability Zone, choose a different Availability Zone than your original subnets in the VPC.

  • For IPv4 CIDR block, enter a valid CIDR block. For example, the wizard creates subnets with the CIDR blocks 10.0.0.0/24 and 10.0.1.0/24 by default. You could use 10.0.3.0/24 for your second private subnet.

  • Choose Yes, Create.

To create an additional public subnet

  • In the left navigation pane, choose Subnets and then Create Subnet.

  • For Name tag, enter a name for your subnet, such as Public subnet.

  • For VPC, choose the VPC that you created earlier.

  • For Availability Zone, choose the same Availability Zone as the additional private subnet that you created in the previous procedure.

  • For IPv4 CIDR block, enter a valid CIDR block. For example, the wizard creates subnets with the CIDR blocks 10.0.0.0/24 and 10.0.1.0/24 by default. You could use 10.0.2.0/24 for your second public subnet.

  • Choose Yes, Create.

  • Select the public subnet that you just created and choose Route Table, Edit.

  • By default, the main route table is selected. Choose the other available route table so that the 0.0.0.0/0 destination is routed to the internet gateway (igw-xxxxxxxx) and choose Save.

  • With your second public subnet still selected, choose Subnet Actions, Modify auto-assign IP settings.

  • Select Enable auto-assign public IPv4 address and choose Save, Close.
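
The console steps above can also be scripted with the AWS CLI. Here is a minimal sketch for the public-subnet case; the VPC ID, Availability Zone, and CIDR block are placeholders you would substitute, and you would still associate the new subnet with the public route table as described above:

```shell
# Sketch: create a public subnet in a second Availability Zone and
# enable auto-assignment of public IPv4 addresses on it.
create_public_subnet() {
  local vpc_id=$1 az=$2 cidr=$3 subnet_id
  # create the subnet and capture its ID
  subnet_id=$(aws ec2 create-subnet --vpc-id "$vpc_id" \
    --availability-zone "$az" --cidr-block "$cidr" \
    --query 'Subnet.SubnetId' --output text)
  # turn on auto-assign public IPv4 for instances launched here
  aws ec2 modify-subnet-attribute --subnet-id "$subnet_id" \
    --map-public-ip-on-launch
  echo "$subnet_id"
}

# Usage: create_public_subnet vpc-0abc123 us-east-1b 10.0.2.0/24
```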

2. Building and Packaging Containerized Applications

Choosing a Containerization Technology

Containerization has become a popular approach for packaging and deploying applications. Several containerization technologies are available, including Docker, containerd, and others. In this tutorial, you will focus on Docker as the containerization technology of choice.

Introduction to Docker

Docker has revolutionized the way developers package and deploy applications, offering a lightweight and portable solution for containerization. With Docker, developers can create self-contained units called containers that encapsulate their applications and all their dependencies, including libraries, frameworks, and runtime environments. This approach ensures that applications run consistently across different environments, from development to production, regardless of the underlying infrastructure.

One of the key benefits of Docker is improved application isolation. By leveraging containerization, applications are encapsulated within their own runtime environment, isolated from the host system and other applications. This isolation prevents conflicts between dependencies and ensures that applications operate reliably and predictably.

Furthermore, Docker brings consistency to application deployment. With Docker containers, developers can create a standardized environment, ensuring that applications behave consistently across different development machines, testing environments, and production servers. This eliminates the "works on my machine" problem and facilitates smoother collaboration between teams.

Simplified deployment is another advantage provided by Docker. With Docker containers, developers can package their applications and dependencies into a single, self-contained unit that can be easily distributed and deployed on any system supporting Docker. This portability enables seamless deployment across different cloud platforms, on-premises servers, or even developer laptops, reducing deployment complexities and enabling efficient scaling of applications.

Additionally, Docker promotes scalability and resource efficiency. Its lightweight nature allows for quick provisioning and deployment of containers, enabling rapid scaling of applications based on demand. Containers consume fewer resources compared to traditional virtual machines, making more efficient use of system resources and allowing for higher density of application instances on a single host.

Setting Up Docker on AWS

Docker is a popular containerization technology that enables developers to package and deploy applications in lightweight and portable containers. Amazon Web Services (AWS) provides a robust infrastructure for running Docker containers. This guide will walk you through the steps to set up Docker on AWS.

Step 1: Create an AWS Account

If you don't have an AWS account already, go to the AWS website (https://aws.amazon.com/) and click on "Create an AWS Account." Follow the instructions to set up your account. You will need to provide billing information and create a new IAM user with appropriate permissions.

Step 2: Launch an EC2 Instance

Once you have an AWS account, launch an EC2 instance to host your Docker containers. Follow these steps:

  • Open the AWS Management Console and navigate to the EC2 service.
  • Click on "Launch Instance" to start the instance creation process.
  • Choose an Amazon Machine Image (AMI) that supports Docker, such as Amazon Linux 2 or Amazon ECS-optimized AMI.
  • Select an instance type based on your application's requirements.
  • Configure the instance details, such as the number of instances, VPC settings, and security groups.
  • Add storage and configure any additional settings as needed.
  • Review your configuration and launch the instance.

Step 3: Connect to the EC2 Instance

After the EC2 instance is launched, you need to connect to it using SSH. Follow these steps:

  • Locate the public IP address or DNS name of your EC2 instance in the AWS Management Console.
  • Open a terminal on your local machine and run the following command, replacing your-instance-ip with the IP address or DNS name:

ssh -i your-key-pair.pem ec2-user@your-instance-ip

Note: You may need to adjust the SSH command based on the operating system and key pair you used during the EC2 instance setup.

Step 4: Install Docker on the EC2 Instance

Once connected to the EC2 instance, install Docker by executing the following commands:

  • Update the package index: sudo yum update -y
  • Install Docker: sudo amazon-linux-extras install docker -y
  • Start the Docker service: sudo service docker start
  • Add the ec2-user to the Docker group to run Docker commands without using sudo: sudo usermod -a -G docker ec2-user
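
The four commands above can be combined into one script for Amazon Linux 2; a sketch, noting that the new group membership only takes effect after you log out and back in:

```shell
# Sketch: install and start Docker on an Amazon Linux 2 EC2 instance.
install_docker() {
  sudo yum update -y
  sudo amazon-linux-extras install docker -y
  sudo service docker start
  # let ec2-user run docker without sudo (requires re-login)
  sudo usermod -a -G docker ec2-user
}

# Usage (on the instance): install_docker
```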

Step 5: Verify the Docker Installation

To confirm that Docker is installed and running correctly on the EC2 instance, run the following command:

docker info

If Docker is installed properly, you will see information about the Docker version and configuration.

Deploying and Managing Docker Containers on AWS EC2

Once you have Docker set up on your AWS EC2 instance, you can begin deploying and managing Docker containers. This guide will walk you through the steps to deploy and manage Docker containers on your AWS EC2 instance.

Step 1: Prepare your Docker Image

Before deploying a container, ensure you have a Docker image ready for your application. You can either build your own Docker image using a Dockerfile or pull an existing image from a registry like Docker Hub.

Step 2: Pull or Load the Docker Image

If you are using an existing Docker image from a registry, use the docker pull command to fetch the image onto your EC2 instance. For example:

docker pull your-image:tag

If you have a locally-built Docker image or a saved image file, use the docker load command to load the image into Docker on your EC2 instance. For example:

docker load -i your-image.tar

Step 3: Run the Docker Container

To deploy a Docker container based on your image, use the docker run command. Customize the command based on your application's requirements. Here's an example:

docker run -d --name your-container -p host-port:container-port your-image:tag

Replace your-container with a desired name for your container, host-port with the port number on your EC2 instance you want to map to the container's port, and container-port with the port your application listens on inside the container.

Step 4: Manage and Monitor Docker Containers

You can manage and monitor your Docker containers on your EC2 instance using various Docker commands. Here are some commonly used commands:

  • docker ps: Lists the running containers on the EC2 instance.
  • docker stop container-id: Stops a running container. Replace container-id with the actual container ID or name.
  • docker start container-id: Starts a stopped container.
  • docker logs container-id: Displays the logs of a specific container.
  • docker exec -it container-id /bin/bash: Accesses the terminal inside a running container for executing commands or debugging.

Step 5: Clean Up Unused Containers

To keep your EC2 instance clean and efficient, it's essential to remove unused containers. Use the docker rm command followed by the container ID or name to remove a stopped container. For example:

docker rm container-id

Be cautious when removing containers, as this permanently deletes the container and its associated data.
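
Beyond removing containers one at a time, Docker can clean up every stopped container in a single command. A small sketch, wrapped in a function; the -f flag skips the confirmation prompt, so use it carefully:

```shell
# Remove all stopped containers on the instance in one go.
cleanup_stopped() {
  docker container prune -f
}
```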

Step 6: Automate Container Deployment and Management

To streamline container deployment and management, consider using container orchestration tools like Amazon Elastic Container Service (ECS) or Kubernetes on AWS. These tools provide more advanced features, such as automatic scaling, load balancing, and improved container management capabilities.
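
As a first taste of ECS, its unit of deployment is a task definition: a JSON description of your container. The sketch below registers a minimal hypothetical task definition via the AWS CLI; the family name and nginx image are illustrative placeholders:

```shell
# Sketch: register a minimal ECS task definition for an nginx container.
# The "web" family and nginx image are illustrative, not from this tutorial.
register_taskdef() {
  cat > taskdef.json <<'EOF'
{
  "family": "web",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:latest",
      "memory": 256,
      "portMappings": [{ "containerPort": 80 }]
    }
  ]
}
EOF
  aws ecs register-task-definition --cli-input-json file://taskdef.json
}
```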

Feel free to visit the official Amazon container documentation website to explore further resources from Amazon.
