
mohammed afif ahmed


Everything you need to know about Docker Swarm

Introduction

Docker is an open platform for building, shipping, and running applications. Docker lets you decouple your applications from your infrastructure so you can deliver software quickly, and you can manage your infrastructure the same way you manage your apps. One of the characteristics you get with Docker is that you can run multiple containers side by side in the same host environment.
However, a single Docker host can only sustain a limited number of containers, because everything runs on one node. But what if someone wants to work with or develop thousands of containers?

This is where Docker Swarm comes into the picture. A Docker swarm is a cluster of nodes (each running the Docker Engine) that is managed as a single virtual system. These nodes can communicate with each other, helping developers maintain multiple nodes from a single environment.

Docker swarm

A swarm consists of multiple Docker hosts that serve as managers (handling membership and delegation) and workers (running swarm services). A Docker host may be a manager, a worker, or both at the same time. You define the desired state of a service when you create it (the number of replicas, the network and storage resources available to it, the ports the service exposes to the outside world, and more), and Docker strives to maintain that desired state. If a worker node becomes unreachable, Docker schedules its tasks on other nodes. A task, as opposed to a standalone container, is a running container that is part of a swarm service and is managed by a swarm manager.

When you're in swarm mode, you can change the configuration of a service, including the networks and volumes it's connected to, without having to restart it manually. Docker will update the configuration, stop the service tasks whose configuration is out of date, and start new ones that match the desired configuration.
So, the difference between swarm mode and standalone Docker containers is that in swarm mode only managers can manage containers in the swarm, unlike standalone containers, which can be started on any daemon. A daemon can still participate in swarm mode as a manager, a worker, or both.
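For instance, updating a live service looks roughly like this (a minimal sketch; web is a hypothetical service name, nginx:1.25 a stand-in image, and my-overlay a stand-in network):

$ docker service update --image nginx:1.25 web
$ docker service update --network-add my-overlay web

The first command rolls the service's tasks over to a new image; the second attaches the service to an additional network. In both cases you never restart containers by hand.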

Docker swarm architecture

architecture diagram
Previously, we used terms such as manager, worker, and node. Now let us try to understand what they mean and how a Docker swarm works.

Node

A node is a Docker Engine instance that is part of the swarm; you can also think of it as a Docker server. One or more nodes may run on a single physical machine or cloud server, but in production, swarm cluster nodes are typically spread over several machines in the cloud.
There are two types of nodes: manager nodes and worker nodes.

  • Manager
    In the image above, the swarm manager is responsible for managing what the Docker workers do. It keeps track of all of its workers: which task each worker is running, what has been allocated to it, how assignments are distributed, and whether the worker is up and running.
    The Docker Manager's API is used to create a new service and orchestrate it, and the manager assigns tasks to workers using the workers' IP addresses.

  • Worker
    The Docker Manager has complete control over a Docker worker. The worker accepts and executes the tasks/instructions that the manager delegates to it, and acts as a client agent that reports the state of the node it is running on to the manager through a REST API over HTTP.

Services

A service describes the tasks to execute on the manager or worker nodes. It is the swarm system's central mechanism and the main point of user interaction with the swarm.
When you create a service, you specify the container image to use as well as the commands (tasks) to execute inside the running containers.
In the replicated services model, the swarm manager distributes a specific number of replica tasks among the nodes based on the scale you set in the desired state, as the sketch below illustrates.
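A brief sketch of the replicated model (the service name and image are only examples):

$ docker service create --name web --replicas 3 nginx

The manager records "3 replicas of nginx" as the desired state and schedules three tasks across the available nodes.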

  • Load Balancing
    To expose the services you want to make available externally to the swarm, the swarm manager uses ingress load balancing. The swarm manager can assign the service a PublishedPort automatically, or you can configure one manually. If you don't specify a port, the swarm manager assigns the service a port in the 30000-32767 range.

External components, such as cloud load balancers, can access the service on the PublishedPort of any node in the cluster, regardless of whether that node is currently running the service's task. All nodes in the swarm route ingress connections to a running task instance.
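A hedged example of both options (the service names and image are illustrative):

# Publish container port 80 on port 8080 of every node in the swarm
$ docker service create --name web --publish published=8080,target=80 nginx

# Omit the published port and the manager should pick one in the 30000-32767 range
$ docker service create --name web2 --publish target=80 nginx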

Swarm features

After looking at what Docker Swarm is and its related terminology, let us see what features swarm mode offers on the Docker Engine.

  • Use the Docker Engine CLI to create a swarm of Docker Engines where you can deploy application services. You don’t need additional orchestration software to create or manage a swarm.
  • Instead of handling the distinction between node roles at deployment time, the Docker Engine handles any specialization at runtime. You can deploy both manager and worker nodes using the Docker Engine, which means an entire swarm can be built from a single disk image.
  • Docker Engine takes a declarative approach that lets you define the desired state of the various services in your application stack. For example, you might describe an application composed of a web front end with message queueing services and a database back end.
  • For each service, you can declare the number of tasks you want to run. When the swarm manager scales up or down (that is, changes the number of service containers), it automatically adapts by adding or removing tasks to preserve the desired state.
  • The swarm manager node checks the cluster state continuously and reconciles any differences between the actual state and the desired state. For instance, if you set up a service to run 10 container replicas across 5 workers, and two of the replicas on one worker crash, the manager creates two more replicas and assigns them to workers that are up and running.
  • An overlay network can be specified for your services. When the application is initialized or updated, the swarm manager automatically assigns addresses to the containers on the overlay network. Service ports can be exposed to an external load balancer, while internally you can decide how service containers are distributed between nodes (see the sketch after this list).
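A minimal sketch of that last point (my-overlay and api are hypothetical names):

# Create an overlay network spanning the swarm
$ docker network create --driver overlay my-overlay

# Services attached to it get their container addresses assigned by the swarm
$ docker service create --name api --network my-overlay nginx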

Now, let us get practical: we are going to create a cluster, add two worker nodes, and then deploy services to that swarm.
Prerequisites:

  1. Three Linux machines, that can communicate over a network
  2. Docker installed on all three of them.

This tutorial uses three Linux hosts that have Docker installed and can communicate over a network. They can be physical machines, virtual machines, Amazon EC2 instances, or hosted in some other way.
One will be a manager and the other two will be workers.
We are going to use three Linux machines hosted on AWS as EC2 instances.

While creating the EC2 instances, add the following rules to the security group.
The following ports must be open. On some systems, these ports are open by default.

  • TCP port 2377 for cluster management communications
  • TCP and UDP port 7946 for communication among nodes
  • UDP port 4789 for overlay network traffic
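If you also manage a firewall on the hosts themselves (rather than only through the AWS security group), the equivalent ufw rules would look roughly like this:

$ sudo ufw allow 2377/tcp
$ sudo ufw allow 7946/tcp
$ sudo ufw allow 7946/udp
$ sudo ufw allow 4789/udp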

aws image-1
Add these rules while creating the manager machine.
Then, while creating the worker nodes, reuse the same security group you created for the manager machine.

aws image-2
Next, SSH into all the machines and install Docker Engine.
Use the following commands to install Docker Engine on all three machines.

  • Update the apt package index and install packages that allow apt to use a repository over HTTPS:
$ sudo apt-get update
$ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
  • Add Docker’s official GPG key:
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg 
  • Use the following command to set up the stable repository:
$ echo \
    "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
  • Install Docker Engine:
$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io

Repeat this on all three machines.
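To sanity-check each installation, you can run the standard hello-world image:

$ sudo docker run hello-world

If Docker prints its welcome message, the engine is installed and the daemon is running.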

After you have completed all the setup steps, you are ready to create a swarm. Make sure the Docker Engine daemon is running on each host machine. Open a terminal and SSH into the machine where the manager node should run.

Run the following command to create a new swarm:

$ docker swarm init --advertise-addr <MANAGER-IP>

We have a manager machine with IP 172.31.80.181, so the command is:

$ docker swarm init --advertise-addr 172.31.80.181

image-1
The --advertise-addr flag configures the manager node to publish its address as 172.31.80.181. The other nodes in the swarm must be able to reach the manager at this IP address.
The output includes the commands to join new nodes to the swarm. Nodes will join as managers or workers depending on the token passed to the join command.

To see the current state of the swarm, use docker info:

$ sudo docker info

image-2
In the image above, we can see that there are no containers running on this Docker server and that the Swarm flag is active. The output also prints the cluster ID, the number of managers and nodes, and so on.
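As a small aside, docker info also accepts a Go template if you only want the swarm state rather than the full output:

$ sudo docker info --format '{{.Swarm.LocalNodeState}}'

This should print active once the node is part of a swarm.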

To view information about the nodes, use the command:

$ docker node ls

image-3
The * next to the ID indicates that this is the node we are currently connected to.
In swarm mode, the Docker Engine automatically names each node after its machine hostname.

Adding Worker Nodes

It's time to add worker nodes to the swarm cluster created above.
SSH into the machine where you want to run your first worker.
Now, run the join command from the output of docker swarm init in this worker's terminal:

$ docker swarm join --token SWMTKN-1-05ikz9ituzi3uhn1dq1r68bywhfzczg260b9zkhigj9bubomwb-a003dujcz7zu93rlb48wd0o87 172.31.80.181:2377 

image-4

If you no longer have the join command available, you can execute the following command on a manager node to retrieve a worker's join command:

$ docker swarm join-token worker

image-5
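As an aside, if a join token ever leaks, you can invalidate it and generate a new one from a manager node:

$ docker swarm join-token --rotate worker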
Do the same with the other worker as well: SSH into the other machine and run the join command.

To view the worker nodes, open a terminal, SSH into the machine that runs the manager node, and execute the docker node ls command:

$ docker node ls

image-6
The MANAGER STATUS column identifies the manager nodes in the swarm. An empty status in this column identifies worker1 and worker2 as worker nodes.
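Node roles are not fixed, by the way. From a manager you can promote a worker or demote a manager at any time (worker1 here stands in for whatever hostname docker node ls shows):

$ docker node promote worker1
$ docker node demote worker1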

Deploy service to the swarm

Now that we have a cluster with a manager and two workers, we can deploy services to it.

SSH into the manager node, open a terminal, and run the following command:

$ docker service create --replicas 1 --name helloworld alpine ping docker.com

Let's break down the above command:

  • docker service create: to create a service
  • --replicas: this flag indicates the desired state of 1 running instance.
  • --name: used to name the service
  • alpine ping docker.com: this indicates that the service will run the alpine Linux image, and that the primary command to run inside the container is ping docker.com
image-7
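To check what the swarm actually did with this service, a couple of follow-up commands can be run on the manager (helloworld is the service we just named):

# Human-readable view of the service definition
$ docker service inspect --pretty helloworld

# Which nodes are running the service's task(s)
$ docker service ps helloworld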

To see the list of running services, run the command below:

$ docker service ls

image-8
This image lists the name of the service we just created and the number of replicas, along with the base image, which is alpine.
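From here you might scale the service up or remove it entirely (a quick sketch):

# Ask for 5 replica tasks; the manager spreads them across the three nodes
$ docker service scale helloworld=5

# Tear the service down
$ docker service rm helloworld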

Conclusion

In this article, we started with a description of Docker, then discussed the need for multiple Docker hosts. We described what Docker Swarm is, its uses, and how it works through the swarm architecture, and we covered swarm terminology such as manager node and worker node. Once we had a thorough understanding of Docker Swarm, we moved on to implementing it and running services in a swarm cluster. We started by creating 3 Linux hosts on AWS as EC2 instances, along with the security group configuration (adding the TCP and UDP rules).
We looked at how to create and initialize a swarm cluster through a manager node, then added a couple of worker nodes to the same cluster. To conclude, we deployed a service running Alpine Linux that executes a ping command.
Also, this is my first article on dev.to, and I really enjoyed writing it.


Top comments (5)

Pete King

Docker Swarm was awesome; unfortunately, the container wars sorted that out fairly quickly in the end (well, over a few years).

Anyway, Kubernetes won, as we should all know by now, and I have good knowledge of K8s from running dev to production workloads - it is great! However, be warned: the initial learning curve is high, and the initial overall investment is high in all conceivable areas. When you're all done though, the speed at which engineers can go from dev to production is immense.

Docker Swarm was sold off to Mirantis with the Docker Enterprise business. They've kept it going, but even Mirantis has said it's all essentially dead; swarm mode is baked into Docker, so that's still there.

The way I like to put it is, "Kubernetes is a Swiss army knife, it can do practically anything, whereas Docker Swarm was a good knife, and it did it well."

The best thing I've heard of so far to replace the Docker Swarm knife is HashiCorp Nomad - it's starting to look like a good knife.

 
Eddy Ernesto del Valle Pino

It's kind of like CoffeeScript: it is "dead" yet still has 1.3 million downloads per week.
Swarm is also way easier to set up and manage than k8s; at my company we use Swarm.

The good parts:

  • It's just Docker, nothing else needs to be installed.
  • It's just Docker; deployment and networking work very similarly to using docker-compose.
  • Automatic load balancing.
  • Swarmpit lets you deploy just by pushing an image to a registry: when Swarmpit sees that the hash of the tag changed, it pulls that image and deploys it. No need for CI to tell the cluster to deploy something; it just needs to build and push an image.

The bad:

  • No global volumes: if a container moves to another node, it leaves its volume behind and creates a new one, so it's not very suitable for stateful servers like DB servers unless you pin them to a node, but then you lose reliability.
  • The main tools to manage the cluster are kind of unmaintained, like Swarmpit.
  • Some people think it is uncool; some people want to use k8s because they know they'll be asked for it when getting a new job.
Eddy Ernesto del Valle Pino

Ehh... no

Daniel Schroeder

Swarm mode can also be used on a single node. The benefit is that services automatically start when Docker comes up. I'm using this extensively on my local network to ensure my apps start after a reboot.

mohammed afif ahmed

Totally. By default, swarm mode is disabled, and when you run swarm init, your Docker Engine runs in swarm mode on your current node.