Shitij Bhatnagar


Simplified: Spring Boot with Docker (Part 2)

We covered the 'Getting Started' angle of Spring Boot and Docker in the Part 1 article of this series.

In this Part 2 article, we shall see how to spin up multiple instances of a Spring Boot REST service using containers on the developer machine itself. The need to have multiple service instances can arise for a number of reasons:

  • Observe REST service behavior in a multi-instance setup
  • Simulate a test environment setup locally
  • Experimentation / getting our hands dirty before moving to more complicated scenarios (like proving service-to-service connectivity, orchestration expansion & more)

In the Part 3 article of the series, we shall bring in more details around inter-service hard dependencies, orchestration methods & more; here, we shall cover the multi-instance possibilities in Docker.


Pre-requisites:
To experience the journey as outlined in the article, it is recommended to have the pre-requisites in place before getting on with the technical steps.

  • Reading: Basic awareness about Containerization & Docker
  • Tool: Installed Docker Desktop – contains multiple Docker services
  • Skill: Familiarity with Java (preferably 8 onwards) and Spring Boot 3.4.x
  • Tool: Installed JDK 17 & Maven (build tool) - a quick version check is shown after this list
  • Hardware: Recommended to have at least 8 GB RAM (the more the merrier)
  • Reading: Understood the Part 1 article (Basics)
  • Reuse: We shall re-use the 'Tower' Spring Boot service and Docker setup used in Part 1 article
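
As an optional sanity check, we can quickly confirm the tooling from a terminal before starting (the exact version output will of course differ per machine):

java -version             # expect JDK 17 or above
mvn -v                    # confirms Maven is on the PATH
docker --version          # confirms the Docker CLI is available
docker compose version    # confirms the Compose plugin is present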

So, in Part 1, we saw how to deploy the 'tower' service in Docker and consume it through the application URL http://localhost:8085/tower.

Now, if we want to have multiple instances of the 'tower' service, i.e. run the 'tower' service in multiple containers, what are the different options to do so in Docker? Let's explore them below.


Objective: We should have multiple instances of the 'tower' service, each ready to serve clients

Option A: Execute multiple 'docker run' commands

For creating multiple containers, we can first try running the 'docker run' command (same as in the Part 1 article) multiple times - this might seem like the first no-brainer option to try. Let's have a look-

Command 1: docker run -p 8085:8085 sb-tower-service:V1
Command 2: docker run -p 8085:8085 sb-tower-service:V1

When the first command is executed, the 'tower' service comes up successfully (accessible at http://localhost:8085/tower) and we can verify the service (image below).

Service Verification

However, when we execute the second command, instead of a new container getting set up, we see an error message like the one below. This is because host port 8085 is already occupied by the first 'tower' container and cannot be re-used.

Docker run command execution error

So, we realize that we cannot simply apply the Part 1 approach here to create multiple containers for the same service, and we need to look for a different option.
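
In case of doubt, a quick way to confirm what is holding the host port is to list the running containers along with their port mappings:

docker ps --format "table {{.Names}}\t{{.Ports}}"

The first 'tower' container should show up with a mapping like 0.0.0.0:8085->8085/tcp, which is exactly why the second 'docker run' on the same host port fails.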


Option B: Execute multiple 'docker run' commands but with different host port

Command 1: docker run -p 8085:8085 sb-tower-service:V1
Command 2: docker run -p 8086:8085 sb-tower-service:V1 (notice the different port 8086)

When the first command is executed, the 'tower' service comes up successfully (accessible at http://localhost:8085/tower) and we can verify the service (image below).

Service Verification

When the second command is executed, the second 'tower' service also starts successfully (accessible at http://localhost:8086/tower) and we are able to verify the service (image below).

Service Verification
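
For reference, a quick command-line verification of both instances could look like the sketch below (assuming the 'tower' endpoint returns a simple response, as in Part 1):

curl http://localhost:8085/tower    # served by the first container
curl http://localhost:8086/tower    # served by the second container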

So, while we are able to have multiple instances of the 'tower' service running in Docker, we have a problem :) i.e. we have two URLs for the same service. This approach is not extensible (if we create 5 instances, we shall have 5 application URLs), and we cannot keep writing tests against multiple URLs every time, except for specific scenarios like A/B testing - which we would typically not do on a local machine anyway.


Before moving to further options, we need to understand another tool called Docker Compose.

Docker Compose is a Docker utility that allows us to describe the run configuration of one or more services in a YAML file (in addition to the Dockerfile); then, instead of using the 'docker run' command to create a container, we can use the command 'docker compose up' to execute that run configuration and create the container(s). Likewise, we can use 'docker compose down' to dismantle the running containers that were created previously. One point to note is that Docker Compose does not build an image of a service automatically here (we only reference an image, we do not define a build step), so to use the 'tower' service with Docker Compose we need to have an existing image named sb-tower-service:V1 (like the one we created in Part 1).

The real benefit of Docker Compose is that we can execute a number of scenarios like creating multiple service instances, creating dependency between services etc. quite easily without going into multi node and network setups explicitly.
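
In practice, the day-to-day Compose workflow is just a handful of commands, run from the folder containing the docker-compose.yml file:

docker compose up          # create & start the container(s) described in docker-compose.yml
docker compose up -d       # same, but detached (runs in the background)
docker compose ps          # list the containers managed by this Compose project
docker compose down        # stop & remove those containers (and the default network)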

For a better understanding, let's recreate Option B using the Docker Compose tool. For that, we need to create a docker-compose.yml file with the following content:

docker-compose.yml

services:
  tower-services:
    image: sb-tower-service:V1
    ports:
      - "8085-8086:8085"
    deploy:
      replicas: 2

Let's break down the docker compose content below-

tower-services - this is the service name 'tower-services'
sb-tower-service:V1 - this is the Docker image name
8085-8086:8085 - instructs Docker to map host ports from the range 8085-8086 to container port 8085
replicas: 2 - instructs Docker to create 2 service instances

Then, we execute the command docker compose up and the 2 'tower' instances come up on ports 8085 & 8086 (image below). The services are verifiable in the same way as done earlier in Option B.

Successful docker compose start
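
We can also cross-check the replicas and their published ports from the terminal (the container names depend on the project/folder name, e.g. tower-tower-services-1 and tower-tower-services-2 here):

docker compose ps    # should list 2 containers, one on host port 8085 and one on 8086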

Finally, we can shut the setup down by executing the command docker compose down, and the 'tower' instances are removed (image below).

Successful docker compose stop

Note: Docker Compose also creates/destroys a Docker network automatically to support communication between the containers; it is named '<project>_default' after the project folder - here 'tower_default'.
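
If curious, this network can be inspected while the containers are up (the exact name may differ on your machine, per the naming pattern above):

docker network ls
docker network inspect tower_default    # shows the containers attached to this network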


Option C: Setup multiple service instances on same host port (using Docker Compose & front-proxy)

Now that we have understood Docker Compose, let's try to accomplish our objective using a simple arrangement. This arrangement adds one more service, the NGINX web server (open source), to front-end the client calls; in the background it forwards the client requests to the multiple 'tower' service instances (using round-robin allocation). This way, the client gets a single application URL (e.g. http://localhost:8000/tower) to access, even though there are multiple service instances behind it.

The configuration for Option C consists of additional artefacts-
1) docker-compose.yml - provides the 'tower' & NGINX services info
2) nginx.conf - NGINX server config file including association with the tower services/instances

docker-compose.yml

services:
  tower-services:
    image: sb-tower-service:V1
    expose:
      - '8085'
    deploy:
      replicas: 2

  nginx:
    image: nginx:latest
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - tower-services
    ports:
      - '8000:8000'

Let's break down the docker compose content below-

tower-services - this is the service name 'tower-services'
sb-tower-service:V1 - this is the Docker image name
expose 8085 - makes port 8085 of the REST service containers available on the Docker network (it is not published on the host)
replicas: 2 - instructs Docker to create 2 REST service instances
nginx - this is the NGINX web server & service name
depends_on - instructs Docker that NGINX depends on 'tower-services' (so the 'tower' containers are started first)
volumes & nginx.conf - mounts our nginx.conf into the container as the NGINX configuration (read-only)
ports '8000:8000' - maps host port 8000 to NGINX's listening port 8000

nginx.conf

user  nginx;
events {
    worker_connections   10;
}
http {
    server {
        listen 8000;
        location / {
            proxy_pass http://tower-services:8085;
        }
    }
}

Let's break down the nginx.conf content below-

worker_connections 10 - allows each NGINX worker process to handle up to 10 simultaneous connections (client and proxied connections combined)
server listen 8000 - instructs NGINX to use 8000 as the listening port; note that the same port number is mapped in the docker compose yml as well
proxy_pass http://tower-services:8085 - instructs NGINX to pass the requests received on port 8000 on to the 'tower-services' service listening on port 8085.

So now our expectation is that, when the above configuration is run, clients should be able to hit the application URL http://localhost:8000/tower (external URL), which internally forwards to the 'tower' instances on port 8085 (internal); that internal port will not be accessible to clients.

Now it's time to let Docker do the above setup for us. When the command docker compose up is executed, the NGINX server and the 2 'tower' service instances get created successfully.

Successful docker compose including NGINX

We are also able to verify the service through a curl command hitting http://localhost:8000/tower.

Successful NGINX service verification
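
To see the round-robin distribution more clearly, we can fire a handful of requests in a loop and then follow the logs of the 'tower' replicas (a bash sketch):

for i in $(seq 1 6); do curl -s http://localhost:8000/tower; echo; done
docker compose logs -f tower-services    # requests should appear across both replicas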

We can also see the application log in one of the 'tower' service instances.

Service log

Finally, with this proof complete, we shut the setup down by running the command docker compose down.

Docker compose down


Additional Note

1) Scaling possibility in Option C: We can override the replica count of 2 mentioned in the docker compose yml by providing the desired value in the docker compose up command, e.g. 'docker compose up --scale tower-services=5'. This would end up creating 5 instances of the Spring Boot service instead of 2.
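For example, with the Option C compose file in place, the following (sketch) brings up NGINX plus 5 'tower' instances and cleans up afterwards:

docker compose up -d --scale tower-services=5
docker compose ps        # should list 5 tower-services containers plus nginx
docker compose down      # clean up when done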

2) Other aspects of Option C for multi-instance service setup

Benefits:

  • all configuration stays in docker compose yml & NGINX conf files
  • possible to scale the number of instances beyond the 'replicas' count mentioned in the docker compose yml

Drawbacks:

  • Docker compose file becomes more complicated with NGINX added (but it is still a one-time effort)
  • cannot scale dynamically after the services have started (e.g. cannot add more instances post start-up)

Summary

We saw how multiple instances of the same Spring Boot REST application can be set up using Docker to serve client requests, while utilizing additional services like NGINX. In Part 3 of the article series, we shall try to cover additional use cases like hard dependencies between services (e.g. web dependent on database), start-up sequencing (& known issues), more orchestration scenarios (like Docker Swarm or K8s) & more.

Would appreciate any feedback for improvement on the approach and content.


References:

https://www.nutrient.io/blog/how-to-use-docker-compose-to-run-multiple-instances-of-a-service-in-development/

https://stackoverflow.com/questions/64819934/running-multiple-services-on-same-port-in-docker-compose
