Yasir Rehman

Docker Mastery: A Comprehensive Guide for Beginners and Pros

Docker is a powerful platform that simplifies the creation, deployment, and management of applications within lightweight, portable containers. It allows developers to package applications and their dependencies into a standardized unit for seamless development and deployment. Docker enhances efficiency, scalability, and collaboration across different environments, making it an essential tool for modern software development and DevOps practices.
We'll delve into every aspect of Docker, from installation and configuration to mastering images, storage, networking, and security.

Installation and configuration

Basic guides for installing Docker Community Edition (CE) on CentOS and Ubuntu are given below.
Installing Docker CE on CentOS

  • Install the required packages:
    sudo yum install -y yum-utils device-mapper-persistent-data lvm2

  • Add Docker CE yum repository:
    sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

  • Install the Docker CE packages:
    sudo yum install -y docker-ce-18.09.5 docker-ce-cli-18.09.5 containerd.io

  • Start and enable Docker Service:
    sudo systemctl start docker
    sudo systemctl enable docker

    Add the user to the docker group to grant them permission to run Docker commands; the change takes effect at the user's next login (replace <user> with the account name).
    sudo usermod -a -G docker <user>

Installing Docker CE on Ubuntu

  • Install the required packages:
    sudo apt-get update
    sudo apt-get -y install apt-transport-https ca-certificates curl gnupg-agent software-properties-common

  • Add the Docker repo's GNU Privacy Guard (GPG) key:
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

  • Add the Docker Ubuntu repository:
    sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

  • Install packages:
    sudo apt-get install -y docker-ce=5:18.09.5~3-0~ubuntu-bionic docker-ce-cli=5:18.09.5~3-0~ubuntu-bionic containerd.io
    Add the user to the docker group to grant them permission to run Docker commands; the change takes effect at the user's next login (replace <user> with the account name).
    sudo usermod -a -G docker <user>
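
    To confirm the installation, a quick smoke test (hello-world is Docker's standard test image; this assumes your user is already in the docker group, or prefix the commands with sudo):
    docker version
    docker run --rm hello-world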

Selecting a storage driver
A storage driver is a pluggable driver that handles internal storage for containers. The default driver for CentOS and Ubuntu systems is overlay2.
To determine the current storage driver:
docker info | grep "Storage"
One way to select a different storage driver is to pass the --storage-driver flag to the Docker daemon (dockerd). The recommended method, however, is to set it in the daemon config file.

  • Create or edit the Daemon config file:
    sudo vi /etc/docker/daemon.json

  • Add the storage driver value:
    "storage-driver": "overlay2"
    Remember to restart Docker after any changes, and then check the status.
    sudo systemctl restart docker
    sudo systemctl status docker

Running a container
docker run IMAGE[:TAG] [COMMAND] [ARGS]
IMAGE: Specifies the image to run as a container.
TAG: Specifies the image tag or version.
COMMAND and ARGS: Run a command inside the container.
-d: Runs the container in detached mode.
--name NAME: Gives the container a specified name instead of the usual randomly assigned name.
--restart RESTART: Specifies when Docker should automatically restart the container.

  • no (default): Never restart the container.
  • on-failure: Only if the container fails (exits with a non-zero exit code).
  • always: Always restart the container whether it succeeds or fails.
  • unless-stopped: Always restart the container (including on daemon startup), unless it was manually stopped.

-p HOST_PORT:CONTAINER_PORT: Publish a container's port. HOST_PORT is the port that listens on the host machine, and traffic to that port is mapped to CONTAINER_PORT in the container.
--memory MEMORY: Set a hard limit on memory usage.
--memory-reservation MEMORY: Set a soft limit on memory usage.

docker run -d --name nginx --restart unless-stopped -p 8080:80 --memory 500M --memory-reservation 256M nginx:latest

Some of the commands for managing running containers are:
docker ps: List running containers.
docker ps -a: List all containers, including stopped containers.
docker container stop CONTAINER [alias: docker stop]: Stop a running container.
docker container start CONTAINER [alias: docker start]: Start a stopped container.
docker container rm CONTAINER [alias: docker rm]: Delete a container (it must be stopped first).
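
A minimal lifecycle walk-through using these commands, assuming the nginx image from the earlier example:
docker run -d --name web nginx:latest   # create and start a container
docker ps                               # confirm it is running
docker stop web                         # stop it
docker ps -a                            # still listed, now with Exited status
docker start web                        # start it again
docker stop web && docker rm web        # a container must be stopped before removal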

Upgrading the Docker Engine
Stop the Docker service:
sudo systemctl stop docker
Install the required version of docker-ce and docker-ce-cli:
sudo apt-get install -y docker-ce=<new version> docker-ce-cli=<new version>
Verify the current version:
docker version

Image creation, management, and registry

An image is an executable package containing all the software needed to run a container.
Run a container using an image with:
docker run IMAGE
Download an image with:
docker pull IMAGE
docker image pull IMAGE

Images and containers use a layered file system. Each layer contains only the differences from the previous layer.
View file system layers in an image with:
docker image history IMAGE
A Dockerfile is a file that defines a series of directives and is used to build an image.

# Use the official Nginx base image
FROM nginx:latest

# Set an environment variable
ENV MY_VAR=my_value

# Copy custom configuration file to container
COPY nginx.conf /etc/nginx/nginx.conf

# Run some commands during the build process
RUN apt-get update && apt-get install -y curl

# Expose port 80 for incoming traffic
EXPOSE 80

# Start Nginx server when the container starts
CMD ["nginx", "-g", "daemon off;"]

Build an image:
docker build -t NAME[:TAG] PATH
PATH is the build context (usually the directory containing the Dockerfile).
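
For example, with the Dockerfile shown above saved in the current directory (the my-nginx name and tag are illustrative):
docker build -t my-nginx:1.0 .
docker run -d --name my-nginx -p 8080:80 my-nginx:1.0
curl http://localhost:8080   # should return the Nginx welcome page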
Dockerfile directives:
FROM: Specifies the base image to use for the Docker image being built. It defines the starting point for the image and can be any valid image available on Docker Hub or a private registry.
ENV: Sets environment variables within the image. These variables are accessible during the build process and when the container is running.
COPY or ADD: Copies files and directories from the build context (the directory where the Dockerfile is located) into the image. COPY is generally preferred for simple file copying, while ADD supports additional features such as unpacking archives.
RUN: Executes commands during the build process. You can use RUN to install dependencies, run scripts, or perform any other necessary tasks.
EXPOSE: Informs Docker that the container will listen on the specified network ports at runtime. It does not publish the ports to the host machine or make the container accessible from outside.
CMD or ENTRYPOINT: Specifies the command to run when a container is started from the image. CMD provides default arguments that can be overridden from the command line, while ENTRYPOINT sets the executable itself and can only be overridden with the --entrypoint flag.
WORKDIR: Sets the working directory for any subsequent RUN, CMD, ENTRYPOINT, COPY, or ADD instructions.
STOPSIGNAL: Sets a custom signal that will be used to stop the container process.
HEALTHCHECK: Sets a command that the Docker daemon uses to check whether the container is healthy.
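
A health check can also be supplied (or overridden) at run time with docker run flags; a minimal sketch, assuming curl is available inside the image:
docker run -d --name web \
  --health-cmd "curl -f http://localhost/ || exit 1" \
  --health-interval 30s \
  --health-timeout 5s \
  --health-retries 3 \
  nginx
docker inspect --format '{{.State.Health.Status}}' web   # starting, healthy, or unhealthy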

A multi-stage build in a Dockerfile is a technique used to create more efficient and smaller Docker images. It involves defining multiple stages within the Dockerfile, each with its own set of instructions and dependencies.
An example Dockerfile containing a multi-stage build definition is:

# Build stage
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /app

# Copy and restore project dependencies
COPY *.csproj .
RUN dotnet restore

# Copy the entire project and build
COPY . .
RUN dotnet build -c Release --no-restore

# Publish the application
RUN dotnet publish -c Release -o /app/publish --no-restore

# Runtime stage
FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS runtime
WORKDIR /app
COPY --from=build /app/publish .

# Expose the required port
EXPOSE 80

# Set the entry point for the application
ENTRYPOINT ["dotnet", "YourApplication.dll"]
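
Building it is the same as for a single-stage image; only the final stage ends up in the tagged result. You can also stop at a named stage with --target (the image names here are illustrative):
docker build -t yourapplication:latest .
docker build --target build -t yourapplication:build .   # build stage only, e.g. to run tests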

Managing images
Some key commands for image management are:
List images on the system:
docker image ls
List images on the system including intermediate images:
docker image ls -a
Get detailed information about an image:
docker image inspect <IMAGE>
Delete an image:
docker rmi <IMAGE>
docker image rm <IMAGE>
docker image rm -f <IMAGE>

An image can only be deleted if no containers or other image tags reference it. Find and delete dangling or unused images:
docker image prune

Docker registries
Docker Registry serves as a centralized repository for storing and sharing Docker images. Docker Hub is the default, publicly available registry managed by Docker. By utilizing the registry image, we can set up and manage our own private registry at no cost.
Run a simple registry:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Upload an image to a registry:
docker push <IMAGE>:<TAG>
Download an image from a registry:
docker pull <IMAGE>:<TAG>
Login to a registry:
docker login REGISTRY_URL
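
Putting these together against the local registry started above (the my-nginx name and tag are illustrative):
docker pull nginx:latest
docker tag nginx:latest localhost:5000/my-nginx:1.0   # retag the image for the private registry
docker push localhost:5000/my-nginx:1.0
docker pull localhost:5000/my-nginx:1.0
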
There are two authentication methods for connecting to a private registry with an untrusted or self-signed certificate:
Secure: This involves adding the registry's public certificate to a subdirectory of /etc/docker/certs.d/ named after the registry.
Insecure: This method entails adding the registry to the insecure-registries list in the daemon.json file or passing it to dockerd using the --insecure-registry flag.
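
As a sketch of the insecure method (registry.example.com:5000 is a placeholder), write the setting to daemon.json and restart Docker; note that tee overwrites the file, so merge with any existing settings instead:
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "insecure-registries": ["registry.example.com:5000"]
}
EOF
sudo systemctl restart docker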

Storage and volumes
The storage driver controls how images and containers are stored and managed on your Docker host. Docker supports several storage drivers, using a pluggable architecture.
overlay2: Preferred for all Linux distributions
fuse-overlayfs: Preferred only for running Rootless Docker on hosts that lack kernel support for rootless overlay2 (not needed on Ubuntu or Debian 10)
vfs: Intended for testing purposes, and for situations where no copy-on-write filesystem can be used.

Storage models
Filesystem storage:

  • Data is stored in the form of regular files on the host disk
  • Efficient use of memory
  • Inefficient with write-heavy workloads
  • Used by overlay2

Block Storage:

  • Stores data in blocks using special block storage devices
  • Efficient with write-heavy workloads
  • Used by btrfs and zfs

Object Storage:

  • Stores data in an external object-based store
  • Applications must be designed to use object-based storage.
  • Flexible and scalable.

Configuring the overlay2 storage driver
Stop Docker service:
sudo systemctl stop docker
Create or edit the Daemon config file:
sudo vi /etc/docker/daemon.json
Add/edit the storage driver value:
"storage-driver": "overlay2"
Remember to restart Docker after any changes, and then check the status.
sudo systemctl restart docker
sudo systemctl status docker

Docker Volumes
There are two different types of data mounts on Docker:
Bind Mount: Mounts a specific directory on the host into the container. It is useful for sharing configuration files and other data between the host and the container.
Named Volume: Mounts a directory to the container, but Docker controls the location of the volume on disk dynamically.
There are different syntaxes for adding bind mounts or volumes to containers:
-v syntax
Bind mount: The source begins with a forward slash "/" which makes this a bind mount.
docker run -v /opt/data:/tmp nginx
Named volume: The source is just a string, which means this is a volume. It will be automatically created if no volume exists with the provided name.
docker run -v my-vol:/tmp nginx
--mount syntax

Bind mount:
docker run --mount type=bind,source=/opt/data,destination=/tmp nginx
Named volume:
docker run --mount source=my-vol,destination=/tmp nginx
We can mount the same volume to multiple containers, allowing them to share data. We can also create and manage volumes by ourselves without running a container.

Some common and useful commands:
docker volume create VOLUME: Creates a volume.
docker volume ls: Lists volumes.
docker volume inspect VOLUME: Inspects a volume.
docker volume rm VOLUME: Deletes a volume.
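
A short sketch of two containers sharing one named volume (the container and volume names are illustrative):
docker volume create shared-data
docker run -d --name writer -v shared-data:/data nginx
docker run -d --name reader -v shared-data:/data nginx
docker volume inspect shared-data   # shows the volume's Mountpoint on the host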

Image Cleanup
Check Docker's disk usage:
docker system df
docker system df -v
Delete unused or dangling images:
docker image prune
docker image prune -a

Docker networking

The Docker Container Network Model (CNM) is a conceptual model that describes the components and concepts of Docker networking.
The CNM defines the following components:
Sandbox: An isolated unit containing all networking components associated with a single container.
Endpoint: Connects one sandbox to one network.
Network: A collection of endpoints that can communicate with each other.
Network Driver: A pluggable driver that provides a specific implementation of the CNM.
IPAM Driver: Provides IP address management; allocates and assigns IP addresses.

Built-In Network Drivers
Host: This driver connects the container directly to the host's networking stack. It provides no isolation between containers or between containers and the host.
docker run --net host nginx
Bridge: This driver uses virtual bridge interfaces to establish connections between containers running on the same host.
docker network create --driver bridge my-bridge-net
docker run -d --network my-bridge-net nginx

Overlay: This driver uses a routing mesh to connect containers across multiple Docker hosts, usually in a Docker swarm.
docker network create --driver overlay my-overlay-net
docker service create --network my-overlay-net nginx

MACVLAN: This driver connects containers directly to the host's network interfaces but uses a special configuration to provide isolation.
docker network create -d macvlan --subnet 192.168.0.0/24 --gateway 192.168.0.1 -o parent=eth0 my-macvlan-net
docker run -d --net my-macvlan-net nginx

None: This driver provides sandbox isolation, but it does not provide any implementation for networking between containers or between containers and the host.
docker run --net none -d nginx

Creating a Docker Bridge network
Bridge is the default driver, so any network created without specifying a driver will be a bridge network.
Create a bridge network.
docker network create my-net
Run a container on the bridge network.
docker run -d --network my-net nginx

By default, containers and services on the same network can communicate with each other simply by using their container or service names. Docker provides DNS resolution on the network that allows this to work.
Supply a network alias to provide an additional name by which a container or service is reached.
docker run -d --network my-net --network-alias my-nginx-alias nginx
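
To see this DNS resolution at work, resolve the alias from a second container on the same network (a minimal check using the busybox image):
docker run --rm --network my-net busybox ping -c 2 my-nginx-alias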

Some useful commands for working with Docker networks:
docker network ls: Lists networks.
docker network inspect NETWORK: Inspects a network.
docker network connect NETWORK CONTAINER: Connects a container to a network.
docker network disconnect NETWORK CONTAINER: Disconnects a container from a network.
docker network rm NETWORK: Deletes a network.

Creating a Docker Overlay Network
Create an overlay network:
docker network create --driver overlay NETWORK_NAME
Create a service that uses the network:
docker service create --network NETWORK_NAME IMAGE

Network Troubleshooting
View container logs:
docker logs CONTAINER
View logs for all tasks of a service:
docker service logs SERVICE
View Docker daemon logs:
sudo journalctl -u docker

We can use the nicolaka/netshoot image to perform network troubleshooting. It comes packaged with a variety of useful networking-related tools. We can inject a container into another container's networking sandbox for troubleshooting purposes.
docker run --network container:CONTAINER_NAME nicolaka/netshoot

Configuring Docker to Use External DNS
Set the system-wide default DNS for Docker containers in daemon.json:
{
"dns": ["8.8.8.8"]
}

Set the DNS for an individual container:
docker run --dns 8.8.4.4 IMAGE
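
A quick way to confirm which resolver a container ends up with (busybox as a throwaway test image):
docker run --rm --dns 8.8.4.4 busybox cat /etc/resolv.conf   # should list nameserver 8.8.4.4
docker run --rm --dns 8.8.4.4 busybox nslookup docker.com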

Security

Signing Images and Enabling Docker Content Trust
Docker Content Trust (DCT) is a feature that allows us to sign images and verify signatures before running them. Enable Docker Content Trust by setting an environment variable:
export DOCKER_CONTENT_TRUST=1
With Docker Content Trust enabled, Docker will not run images that are unsigned or whose signatures are invalid.
Sign and push an image with:
docker trust sign IMAGE:TAG
With DOCKER_CONTENT_TRUST=1, docker push automatically signs the image before pushing it.
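
A minimal signing workflow sketch (the repository name is a placeholder; the first signed push prompts you to create root and repository keys):
export DOCKER_CONTENT_TRUST=1
docker trust sign registry.example.com/myimage:1.0   # signs and pushes the tag
docker pull registry.example.com/myimage:1.0         # fails if the tag is unsigned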

Default Docker Engine Security
Basic Docker security concepts:
Docker uses namespaces to isolate container processes from one another and the host. This prevents an attacker from affecting or gaining control of other containers or the host if they manage to gain control of one container.
The Docker daemon runs with root privileges, so be careful about who is allowed to interact with it: access to the daemon can be used to gain control of the entire host.
Docker leverages Linux capabilities to assign granular permissions to container processes. For example, listening on a low port (below 1024) usually requires a process to run as root, but Docker uses Linux capabilities to allow a container to listen on port 80 without running as root.

Securing the Docker Daemon HTTP Socket
Generate a certificate authority and server certificates for the Docker server.

openssl genrsa -aes256 -out ca-key.pem 4096

openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem -subj "/C=US/ST=Texas/L=Keller/O=Linux Academy/OU=Content/CN=$HOSTNAME"

openssl genrsa -out server-key.pem 4096

openssl req -subj "/CN=$HOSTNAME" -sha256 -new -key server-key.pem -out server.csr

# <HOST_IP> is a placeholder for the server's IP address
echo subjectAltName = DNS:$HOSTNAME,IP:<HOST_IP>,IP:127.0.0.1 >> extfile.cnf

echo extendedKeyUsage = serverAuth >> extfile.cnf

openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem -extfile extfile.cnf
Generate client certificates:

openssl genrsa -out key.pem 4096

openssl req -subj '/CN=client' -new -key key.pem -out client.csr

echo extendedKeyUsage = clientAuth > extfile-client.cnf

openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out cert.pem -extfile extfile-client.cnf

Set appropriate permissions on the certificate files:
chmod -v 0400 ca-key.pem key.pem server-key.pem
chmod -v 0444 ca.pem server-cert.pem cert.pem
Configure the Docker host to use tlsverify mode with the certificates created earlier:

sudo vi /etc/docker/daemon.json

{
  "tlsverify": true,
  "tlscacert": "/home/user/ca.pem",
  "tlscert": "/home/user/server-cert.pem",
  "tlskey": "/home/user/server-key.pem"
}

Edit the Docker service file: find the line that begins with ExecStart and change the -H option so the daemon listens on TCP port 2376.
sudo vi /lib/systemd/system/docker.service

ExecStart=/usr/bin/dockerd -H=0.0.0.0:2376 --containerd=/run/containerd/containerd.sock

sudo systemctl daemon-reload
sudo systemctl restart docker
Copy the CA cert and client certificate files to the client machine.
On the client machine, configure the client to connect to the remote Docker daemon securely:
mkdir -pv ~/.docker
cp -v {ca,cert,key}.pem ~/.docker
export DOCKER_HOST=tcp://<HOST>:2376 DOCKER_TLS_VERIFY=1
(<HOST> is a placeholder for the Docker server's hostname or IP address.)
Test the connection:
docker version
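
Alternatively, pass the TLS options explicitly instead of relying on environment variables (as above, <HOST> is a placeholder for the server's address):
docker --tlsverify \
  --tlscacert=$HOME/.docker/ca.pem \
  --tlscert=$HOME/.docker/cert.pem \
  --tlskey=$HOME/.docker/key.pem \
  -H=tcp://<HOST>:2376 version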

Conclusion

Mastering Docker transforms your development workflow by streamlining installation, configuration, image management, storage, networking, and security. This guide equips you with essential knowledge and practical skills, enabling you to build, ship, and run applications efficiently. Embrace Docker's power to elevate your container management to the next level.
