Number 10 of the Twelve-Factor App's commandments, dev/prod parity, states that our development environment should be as similar as possible to what we have in production. Not doing so can lead to many predicaments when debugging critical, live issues.
One important step in this regard would be to mimic a standard distributed production setup: a load-balancer sitting in front of multiple backend server instances, dividing incoming HTTP traffic among them.
This article is not an introduction to Docker, Compose, or Nginx, but rather a guide to setting up the distributed system described above, assuming we've installed Docker and are familiar with images, containers, etc. I'll try to provide just enough information about Compose and Nginx to get our hands dirty without drowning in the details that usually accompany them.
Each of our backend server instances (simple Node.js servers) and the Nginx load-balancer will be hosted inside Docker-based Linux containers. If we stick to 3 backend servers and 1 load-balancer we’ll need to manage/provision 4 Docker containers, for which Compose is a great tool.
What We Want
Here’s our directory structure:
docker-nginx/
    backend/
        src/
            index.js
        package-lock.json
        Dockerfile
    load-balancer/
        nginx.conf
        Dockerfile
    docker-compose.yml
The src directory will contain our server-side code, in this case a simple Hello World Node (Express) app (of course your backend can be anything):
// index.js
const express = require('express');
const app = express();

app.get('/', (req, res) => {
  console.log('I just received a GET request on port 3000!');
  res.send('Hello World!');
});

app.listen(3000, () => console.log('I just connected on port 3000!'));
The package-lock.json contains nothing but an Express dependency.
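For reference, a minimal package.json that would pull in that dependency might look like this (the version is an assumption; any recent Express 4.x release will do):

{
  "name": "backend",
  "version": "1.0.0",
  "dependencies": {
    "express": "^4.16.0"
  }
}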
Setting Up Our Dockerfiles and Compose Config
backend/Dockerfile will be used to build an image, which will then be used by Compose to provision 3 identical containers:
# Use one of the standard Node images from Docker Hub
FROM node:boron
# The Dockerfile's author
LABEL maintainer="Usama Ashraf"
# Create a directory in the container where the code will be placed
RUN mkdir -p /backend-dir-inside-container
# Set this as the default working directory.
# We'll land here when we SSH into the container.
WORKDIR /backend-dir-inside-container
# Our Nginx container will forward HTTP traffic to containers of
# this image via port 3000. For this, 3000 needs to be 'open'.
EXPOSE 3000
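If you'd like to sanity-check this image on its own before involving Compose, it can be built directly from the project root (the tag my-backend is arbitrary):

sudo docker build -t my-backend ./backend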
load-balancer/Dockerfile:
# Use the standard Nginx image from Docker Hub
FROM nginx
# The Dockerfile's author
LABEL maintainer="Usama Ashraf"
# Copy the configuration file from the build context into the
# container, where it will serve as Nginx's default config.
COPY nginx.conf /etc/nginx/nginx.conf
# Port 8080 of the container will be exposed and then mapped to port
# 8080 of our host machine via Compose. This way we'll be able to
# access the server via localhost:8080 on our host.
EXPOSE 8080
# Start Nginx in the foreground when the container starts.
CMD ["nginx", "-g", "daemon off;"]
load-balancer/nginx.conf:
events { worker_connections 1024; }

http {
    upstream localhost {
        # These are references to our backend containers, facilitated by
        # Compose, as defined in docker-compose.yml
        server backend1:3000;
        server backend2:3000;
        server backend3:3000;
    }

    server {
        listen 8080;
        server_name localhost;

        location / {
            proxy_pass http://localhost;
            proxy_set_header Host $host;
        }
    }
}
This is a bare-bones Nginx configuration file. If anyone would like help with more advanced options please do post a comment and I’ll be happy to assist.
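For instance, a slightly more robust upstream block could pick a different balancing method and add health-related per-server options. This is only a sketch; the values below are illustrative, not recommendations:

upstream localhost {
    least_conn;
    server backend1:3000 weight=2;
    server backend2:3000 max_fails=3 fail_timeout=10s;
    server backend3:3000 backup;
}

least_conn routes each request to the server with the fewest active connections; max_fails and fail_timeout temporarily pull an unresponsive server out of rotation; backup is only used when the primary servers are unavailable.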
docker-compose.yml:
version: '3.2'
services:
    backend1:
        build: ./backend
        tty: true
        volumes:
            - './backend/src:/backend-dir-inside-container'
    backend2:
        build: ./backend
        tty: true
        volumes:
            - './backend/src:/backend-dir-inside-container'
    backend3:
        build: ./backend
        tty: true
        volumes:
            - './backend/src:/backend-dir-inside-container'
    loadbalancer:
        build: ./load-balancer
        tty: true
        links:
            - backend1
            - backend2
            - backend3
        ports:
            - '8080:8080'
volumes:
    backend:
Without going into details, here’s some insight into our Compose config:
A single Compose service generally uses one image, defined by a Dockerfile. When we bring our services up, the images are built and then run as containers.
If you're new to Docker but familiar with VMs, then maybe this analogy will help: an ISO file for an OS (the image) is used by VirtualBox (Compose) to launch a running VM (the container). A service is made up of at least one running container.
build tells Compose where to look for a Dockerfile to build the image for the service.
tty just tells the container to keep running even when there's no daemon specified via CMD in the Dockerfile. Otherwise, it would shut down right after provisioning (sounds strange, I know).
volumes in our case defines where to put the server-side code inside the container (an oversimplification). Volumes are a storage mechanism within containers and not a trivial feature.
links does two things: it makes sure the loadbalancer service doesn't start until the backend services have started, and it allows backend1, backend2 and backend3 to be used as hostname references within loadbalancer, which is exactly what we did in our nginx.conf.
ports specifies a mapping between a host port and a container port: port 8080 of the container will receive client requests made to localhost:8080 on the host.
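Since YAML indentation mistakes are easy to make, it can be worth letting Compose validate the file and print the fully resolved configuration before launching anything:

sudo docker-compose config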
Launch
Run sudo docker-compose up --build inside docker-nginx (or whatever your project root is). After all the services have started, run sudo docker ps and you'll see a list of all the containers just launched.
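Once the services are up, the Nginx configuration can also be sanity-checked from inside the load-balancer container (the container name below assumes Compose's default <project>_<service>_<index> naming):

sudo docker exec dockernginx_loadbalancer_1 nginx -t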
Let's SSH into one of the backend containers and start our Node server; the exact commands are shown below. After hitting localhost:8080 from our browser 5 times, the container's log shows one 'I just received a GET request on port 3000!' line per request.
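For the first backend, that looks like this (again assuming Compose's default container naming; the other two backends are covered just below):

sudo docker exec -it dockernginx_backend2_1 bash
node index.js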
Of course the browser is hitting 8080 on the host machine, which has been mapped to 8080 on the Nginx container, which in turn forwards the request to port 3000 of the backend container. Hence the log shows requests on port 3000.
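The round trip can also be exercised from the host without a browser. Given the app above, the response body should be its Hello World string:

curl localhost:8080
# Hello World!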
Open two new terminals on your host machine, SSH into the other two backend containers and start the Node servers within both:
sudo docker exec -it dockernginx_backend1_1 bash
node index.js
sudo docker exec -it dockernginx_backend3_1 bash
node index.js
Hit localhost:8080 in your browser multiple times (quickly) and you'll see that the requests are being divided among the 3 servers!
Now you can simulate things like session and cache persistence across multiple servers, concurrency issues, or even get a rough idea of the throughput increase achievable by scaling out (e.g. by using wrk, as sketched below).
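As a sketch, a quick wrk run against the balanced endpoint might look like this (the thread and connection counts are arbitrary choices):

wrk -t4 -c64 -d30s http://localhost:8080/

Comparing the requests/sec reported with one backend running versus all three gives a rough feel for the gain from scaling out.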
To deploy a similar Compose setup in production I recommend this; and you might enjoy kompose if you need to export it to Kubernetes.
Note: The more experienced might bring up Compose's ability to scale a running service and ask whether Nginx can be configured to auto-scale at run-time: sort of. The Nginx daemon would have to be restarted (downtime), and of course we'd need a way to dynamically edit and add to the upstream servers' group, which is certainly possible, but a fruitless hassle in this case IMO. If more server instances are needed, add more services and rebuild the images.
Top comments (2)
Great article, brother.
A simple suggestion from my end: instead of defining backend1, backend2 and backend3 as separate services in docker-compose.yml, we can define a single backend service and scale it out using "docker-compose up -d --scale backend=n", where n is the number of replicas. In nginx.conf, we then define each upstream entry as "server foldername_backend_1:3000 max_fails=3 fail_timeout=10s weight=1;".
Wow, super interesting! Thank you Usama!
I've been spoiled by years on Heroku and haven't given containers the attention they deserve. This is a great use case :)