Zeeshan Haider Shaheen
Everything You Need to Know About Docker

In today’s fast-paced development world, containerization has revolutionized how we build, ship, and run applications. Docker is the leading platform that makes containerization accessible, efficient, and reliable. In this post, we’ll cover every detail—from the underlying concepts and architecture to building a full-stack application with Docker and deploying it on another machine. Whether you’re new to Docker or an experienced developer looking to refine your skills, this guide will provide a comprehensive overview.


Table of Contents

  1. What is Docker?
  2. Key Docker Concepts and Terminology
  3. Why Use Docker?
  4. Installing Docker
  5. Docker Architecture
  6. Understanding and Writing a Dockerfile
  7. Docker Compose for Multi-Container Applications
  8. Containerizing a Full-Stack Application
  9. Running Your Dockerized Application on Another Machine
  10. Conclusion

What is Docker?
Docker is an open-source platform that automates the deployment, scaling, and management of applications using containerization. In simple terms, Docker allows you to package an application along with its dependencies, libraries, and configuration files into a single container. This container can then run reliably on any machine—whether it’s a developer’s laptop, a testing environment, or a production server—without any “it works on my machine” issues.

Key Takeaway
Consistency: Docker ensures that the environment is consistent regardless of where the application is deployed.


Key Docker Concepts and Terminology
Before diving deeper, let’s review some essential Docker terms:

  • Image:
    A Docker image is a lightweight, standalone, and executable package that contains everything needed to run a piece of software. Think of it as a snapshot or template of your application and its environment.

  • Container:
    A container is a runtime instance of an image. When you run a Docker image, it becomes one or more containers. Containers are isolated from one another and from the host system, which helps prevent conflicts.

  • Dockerfile:
    A Dockerfile is a text file that contains a list of instructions on how to build a Docker image. It automates the image creation process, making it reproducible.

  • Docker Hub:
    Docker Hub is a cloud-based repository where you can find and share container images. Think of it as GitHub, but for container images instead of code.

  • Volumes:
    Volumes are used for persisting data outside of containers. They are essential when you need to store data (like database files) that shouldn’t be removed when a container is deleted.

  • Networks:
    Docker networks allow containers to communicate with each other and with external systems securely. They can be configured to control which containers can talk to each other.
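Each of these concepts maps onto an everyday CLI command. A quick sketch (the `nginx:alpine` image and the names `web`, `appdata`, and `appnet` are placeholders chosen for illustration):

```shell
# Image: pull one from Docker Hub, then list local images
docker pull nginx:alpine
docker images

# Container: start a runtime instance of that image, then list it
docker run -d --name web nginx:alpine
docker ps

# Volume and network: create named resources for persistence and communication
docker volume create appdata
docker network create appnet

# Clean up
docker rm -f web
docker volume rm appdata
docker network rm appnet
```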


Why Use Docker?
Using Docker offers several significant benefits:

  1. Portability:
    Containers can run on any system that supports Docker. This means the environment you develop in can be identical to your production environment.

  2. Isolation:
    Each container is isolated from other containers and the host system. This isolation prevents conflicts and makes it easier to manage dependencies.

  3. Scalability:
    Docker integrates with orchestration tools like Kubernetes, allowing you to scale applications horizontally and manage complex deployments efficiently.

  4. Resource Efficiency:
    Unlike full-blown virtual machines, Docker containers share the host’s OS kernel, which makes them much lighter and faster to start up.

  5. Simplified Dependency Management:
    With Docker, you don’t have to worry about setting up the correct versions of libraries and tools on every machine because they are bundled with your application.


Installing Docker

For macOS and Windows

  1. Download Docker Desktop:
    Visit the Docker Desktop download page on docker.com and download the installer for your operating system.

  2. Installation:
    Follow the installation instructions. Docker Desktop provides a user-friendly graphical interface to manage your containers, images, and settings.

For Linux
On Linux (e.g., Ubuntu/Debian), follow these steps:

sudo apt-get update
sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

# Add Docker’s official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

# Set up the stable repository
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io


Post-installation tip:
Add your user to the docker group to run Docker commands without sudo:

sudo usermod -aG docker $USER


After a system logout/login, you’ll be able to run Docker commands without administrative privileges.
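Once installed, a couple of quick commands will confirm that the daemon is reachable:

```shell
# Print client and server versions; this fails if the daemon is not running
docker version

# Run a throwaway container to verify end-to-end functionality
docker run --rm hello-world
```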


Docker Architecture

Understanding the architecture behind Docker helps explain its power and flexibility:

  • Docker Daemon (dockerd):
    The Docker daemon is a background service responsible for building, running, and managing containers. It listens for Docker API requests and handles container lifecycle events.

  • Docker Client:
    The Docker client is a command-line tool (docker) that allows users to interact with the Docker daemon. You type commands like docker run or docker build, and the client passes these commands to the daemon.

  • REST API:
    Docker exposes a REST API which allows programmatic control over Docker. This API is used by the Docker client and can be integrated into other tools.

  • Containers and Images:
    Images are the blueprints for containers. When the daemon creates a container from an image, it uses isolation technologies (like namespaces and control groups) provided by the host OS.


Understanding and Writing a Dockerfile

A Dockerfile is a script that contains instructions to assemble a Docker image. Let’s break down a sample Dockerfile for a Node.js application:

# Use an official Node.js runtime as a parent image
FROM node:14

# Set the working directory inside the container
WORKDIR /app

# Copy package.json and package-lock.json to install dependencies
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy the rest of the application code into the container
COPY . .

# Expose the port on which the app will run
EXPOSE 3000

# Define the command to run the application
CMD ["node", "app.js"]


Key Concepts in the Dockerfile:

  • FROM:
    Specifies the base image. In this case, we’re using the official Node.js image tagged with version 14.

  • WORKDIR:
    Sets the working directory inside the container. All subsequent commands are executed in this directory.

  • COPY:
    Copies files from your host machine to the container. We copy the dependency files first to leverage Docker’s caching mechanism.

  • RUN:
    Executes a command in the container. Here, it installs the dependencies.

  • EXPOSE:
    Informs Docker that the container listens on the specified network port at runtime.

  • CMD:
    Specifies the default command to run when the container starts.
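One detail worth adding alongside this Dockerfile: a .dockerignore file keeps node_modules and other local artifacts out of the build context, which speeds up COPY . . and avoids clobbering the freshly installed dependencies. A minimal example:

```
node_modules
npm-debug.log
.git
.env
```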

Building and Running the Image:

# Build the Docker image with a specific tag
docker build -t my-node-app .

# Run a container from the built image, mapping port 3000 on the host to port 3000 in the container
docker run -p 3000:3000 my-node-app

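Once the container is up, a few commands cover day-to-day management (substitute the container ID or name reported by docker ps):

```shell
# List running containers and inspect the app's output
docker ps
docker logs <container-id>

# Open a shell inside the running container (the image must include sh)
docker exec -it <container-id> sh

# Stop and remove the container
docker stop <container-id>
docker rm <container-id>
```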


Docker Compose for Multi-Container Applications
Real-world applications often consist of multiple services (e.g., frontend, backend, database). Docker Compose allows you to define and manage multi-container applications using a simple YAML file.

Example: Full-Stack Application
Consider an application with a frontend (React), a backend (Node.js), and a PostgreSQL database. Below is a sample docker-compose.yml:

version: '3.8'

services:
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    depends_on:
      - backend

  backend:
    build: ./backend
    ports:
      - "5000:5000"
    depends_on:
      - db
    environment:
      - DB_HOST=db
      - DB_USER=yourusername
      - DB_PASSWORD=yourpassword

  db:
    image: postgres:13
    environment:
      POSTGRES_DB: yourdatabase
      POSTGRES_USER: yourusername
      POSTGRES_PASSWORD: yourpassword
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:


Explanation:

  • services:
    Each service (frontend, backend, db) is defined separately.

  • build:
    For services you build yourself, the build directive points to the directory containing the Dockerfile.

  • ports:
    Maps ports on the host to ports in the container.

  • depends_on:
    Specifies service dependencies. For example, the backend depends on the database.

  • environment:
    Sets environment variables for the container.

  • volumes:
    Creates persistent storage (in this case, for the database) that persists even when containers are restarted.
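One caveat worth knowing: depends_on only controls start order; it does not wait for the database to be ready to accept connections. If the backend needs PostgreSQL to be up before it starts, a common pattern (a sketch, not part of the file above; modern Docker Compose supports the condition form) is a healthcheck plus a readiness condition:

```yaml
services:
  db:
    image: postgres:13
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U yourusername"]
      interval: 5s
      retries: 5

  backend:
    depends_on:
      db:
        condition: service_healthy
```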

Running the full-stack application:

docker-compose up --build


This command builds and starts all defined services, ensuring they can communicate using Docker’s networking.


Containerizing a Full-Stack Application: Step by Step
Let’s break down the process of containerizing a full-stack application into clear steps.

1. Backend Service

  • Dockerfile in /backend:
FROM node:14
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 5000
CMD ["node", "server.js"]

  • Key Points:

    • Set the working directory to ensure all files are placed correctly.
    • Install dependencies before copying the rest of the code to leverage Docker caching.
    • Expose the port where your API server listens.

2. Frontend Service

  • Dockerfile in /frontend:

For a React application, you might use a multi-stage build:

# Build stage
FROM node:14 AS build
WORKDIR /app
COPY package.json ./
RUN npm install
COPY . .
RUN npm run build

# Production stage
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

  • Key Points:

    • The multi-stage build reduces the final image size by using Node.js only for building and Nginx for serving static files.
    • Ensure the build output is copied to the correct directory that Nginx expects.

3. Database Service

  • Usage:
    • Instead of building your own image, leverage an official PostgreSQL image.
    • Use environment variables to configure the database and a volume for data persistence.

4. Networking and Communication

  • Docker Compose:

    • Docker Compose automatically creates a network where each service can be reached by its service name (e.g., the backend can refer to the database as db).
  • Environment Variables:

    • Use environment variables to share configuration settings (like database credentials) between services.
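You can verify both points yourself once the stack is running (assuming the backend image includes a shell, which the Node.js base image does):

```shell
# Resolve the db service by name from inside the backend container
docker-compose exec backend getent hosts db

# Environment variables set in docker-compose.yml are visible to the process
docker-compose exec backend printenv DB_HOST
```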

Running Your Dockerized Application on Another Machine
One of Docker’s key advantages is portability. Here’s how to ensure your application runs flawlessly on another machine:

1. Install Docker on the Target Machine:
Ensure that Docker (or Docker Desktop) is installed and running on the new machine.

2. Share Your Code and Configuration:
Provide the full source code, including all Dockerfiles, the docker-compose.yml, and any necessary .env files. This ensures that every dependency and configuration is present.

3. Build or Pull Images:

  • Option 1: Build locally. On the target machine, run:
docker-compose up --build


This command rebuilds your images based on the provided Dockerfiles.

  • Option 2: Use a container registry. Push your images to a registry (such as Docker Hub) from your development machine:
docker tag my-node-app yourusername/my-node-app
docker push yourusername/my-node-app


Then, update your docker-compose.yml to use the image from the registry. On the target machine, simply run:

docker-compose pull
docker-compose up

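For Option 2, the relevant change in docker-compose.yml is replacing build: with image: pointing at the registry (the repository name here is illustrative):

```yaml
services:
  backend:
    # Pulled from the registry instead of built locally
    image: yourusername/my-node-app:latest
    ports:
      - "5000:5000"
```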

Conclusion

Docker has fundamentally changed how we develop, ship, and run applications. By using Docker, you can ensure that your application runs consistently across all environments—be it local development, testing, or production. This guide has walked you through every detail: from core concepts and Docker architecture to writing Dockerfiles, using Docker Compose for multi-container applications, containerizing a full-stack application, and finally running your Dockerized app on another machine.

Armed with these insights and best practices, you’re now well-equipped to leverage Docker for your projects, ensuring portability, scalability, and reliability. Happy Dockering!
