Avesh

Automating a 3-Tier Application Deployment with Docker & Jenkins

Here's a guide to implementing a 3-tier application using Docker and Jenkins. We'll walk through the components of a 3-tier application, create Docker containers for each tier, and set up a Jenkins pipeline to automate the deployment.

Overview of a 3-Tier Architecture

A 3-tier architecture typically comprises:

  1. Presentation Layer (Frontend) - Handles the user interface.
  2. Application Layer (Backend) - Contains the business logic.
  3. Data Layer (Database) - Stores and manages the application’s data.

Each layer will be deployed in a Docker container, and we'll use Jenkins to manage the CI/CD pipeline for the application.


Project Roadmap

  1. Setup Environment:
    • Install Docker and Docker Compose.
    • Set up Jenkins for CI/CD.
  2. Build Docker Images:
    • Create Docker images for each layer (frontend, backend, database).
  3. Configure Docker Compose:
    • Use Docker Compose to define the multi-container application.
  4. Develop Jenkins Pipeline:
    • Create Jenkins jobs and scripts to build, test, and deploy the Docker containers.
  5. Deploy and Test:
    • Deploy the application and test functionality across the three layers.

Tools Required

  • Docker: To containerize the application components.
  • Docker Compose: To manage multi-container Docker applications.
  • Jenkins: For CI/CD automation.
  • Git: Version control.
  • A Code Editor: Visual Studio Code or any preferred IDE.

Step 1: Setting up Docker and Jenkins

  1. Install Docker and Docker Compose on your local machine or server.
  2. Install Jenkins on your local machine or a server, then install the necessary plugins:
    • Docker Pipeline Plugin
    • Git Plugin
    • Pipeline Plugin

Step 2: Structure of the 3-Tier Application

The directory structure for a 3-tier application project can look like this:

3-tier-app/
├── frontend/
│   ├── Dockerfile
│   └── app/
│       └── index.html
├── backend/
│   ├── Dockerfile
│   └── app/
│       ├── package.json
│       └── server.js
├── database/
│   ├── Dockerfile
│   └── data/
├── docker-compose.yml
└── Jenkinsfile

Frontend (Presentation Layer)

This layer can be a simple HTML file served by Nginx.

Frontend Dockerfile (frontend/Dockerfile):

FROM nginx:alpine
COPY app /usr/share/nginx/html
EXPOSE 80
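The index.html under frontend/app can be any static page; a minimal placeholder (purely illustrative) might be:

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>3-Tier App</title>
</head>
<body>
  <h1>Hello from the Frontend!</h1>
</body>
</html>
```

Nginx serves whatever lands in /usr/share/nginx/html, so no extra configuration is needed for a static page.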

Backend (Application Layer)

For this example, the backend will be a Node.js application.

Backend Dockerfile (backend/Dockerfile):

FROM node:alpine
WORKDIR /app
# Copy the manifest first so the dependency layer is cached between builds
COPY app/package.json ./
RUN npm install
COPY app .
EXPOSE 3000
CMD ["node", "server.js"]

Backend Code (backend/app/server.js):

const express = require('express');
const app = express();
app.get('/', (req, res) => res.send('Hello from Backend!'));
app.listen(3000, () => console.log('Backend server running on port 3000'));
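The RUN npm install step in the backend Dockerfile needs a package.json next to server.js; without one the build has nothing to install. A minimal manifest (the version pin is illustrative) could be:

```json
{
  "name": "backend",
  "version": "1.0.0",
  "main": "server.js",
  "dependencies": {
    "express": "^4.18.2"
  }
}
```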

Database (Data Layer)

For the database layer, we can use MySQL with a custom Dockerfile.

Database Dockerfile (database/Dockerfile):

FROM mysql:5.7
# Hard-coded credentials are for local development only; see the
# secrets-management note in the best practices section below
ENV MYSQL_ROOT_PASSWORD=rootpassword
ENV MYSQL_DATABASE=app_db
EXPOSE 3306
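Optionally, the official MySQL image executes any .sql scripts found in /docker-entrypoint-initdb.d the first time the data directory is initialized, which is a convenient way to seed the schema. A hypothetical seed script (database/init.sql, with a matching COPY line added to the Dockerfile above) might look like:

```sql
-- database/init.sql: run automatically on first container startup
CREATE TABLE IF NOT EXISTS users (
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
```

To enable it, append `COPY init.sql /docker-entrypoint-initdb.d/` to database/Dockerfile.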

Step 3: Configuring Docker Compose

Docker Compose file (docker-compose.yml):

version: '3'
services:
  frontend:
    build: ./frontend
    ports:
      - "80:80"
    networks:
      - app-network

  backend:
    build: ./backend
    ports:
      - "3000:3000"
    depends_on:
      - database
    networks:
      - app-network

  database:
    build: ./database
    environment:
      MYSQL_ROOT_PASSWORD: rootpassword
      MYSQL_DATABASE: app_db
    ports:
      - "3306:3306"
    networks:
      - app-network

networks:
  app-network:
    driver: bridge

This setup defines each service and joins them to a common Docker network, app-network, so the containers can reach one another by service name.
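Because all three services share app-network, the backend reaches MySQL via the Compose service name rather than localhost. A sketch of the connection settings the backend would use (credentials mirror the compose file; the MySQL client library itself is not shown):

```javascript
// Connection settings for the backend, assuming the docker-compose.yml above.
// Docker's embedded DNS resolves the service name "database" to the MySQL
// container's address on app-network; "localhost" would NOT work here.
const dbConfig = {
  host: 'database',      // Compose service name, not localhost
  port: 3306,
  user: 'root',
  password: 'rootpassword',
  database: 'app_db',
};

module.exports = dbConfig;
```

Pass this object to whatever MySQL client you add to the backend (for example, a driver's createConnection call).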


Step 4: Creating the Jenkins Pipeline

Create a Jenkinsfile in the root of the project directory. This file will define the stages for the CI/CD pipeline.

Jenkinsfile:

pipeline {
    agent any
    environment {
        DOCKER_HUB_CREDENTIALS = credentials('dockerhub')
    }
    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }
        stage('Build Docker Images') {
            steps {
                script {
                    // Images are prefixed with the Docker Hub username (taken from
                    // the username half of the credentials) so they can be pushed
                    docker.build("${env.DOCKER_HUB_CREDENTIALS_USR}/frontend", "./frontend")
                    docker.build("${env.DOCKER_HUB_CREDENTIALS_USR}/backend", "./backend")
                    docker.build("${env.DOCKER_HUB_CREDENTIALS_USR}/database", "./database")
                }
            }
        }
        stage('Push Images to Docker Hub') {
            steps {
                script {
                    // The second argument must be the Jenkins credentials ID
                    // ('dockerhub'), not the name of the environment variable
                    docker.withRegistry('https://index.docker.io/v1/', 'dockerhub') {
                        docker.image("${env.DOCKER_HUB_CREDENTIALS_USR}/frontend").push("latest")
                        docker.image("${env.DOCKER_HUB_CREDENTIALS_USR}/backend").push("latest")
                        docker.image("${env.DOCKER_HUB_CREDENTIALS_USR}/database").push("latest")
                    }
                }
            }
        }
        stage('Deploy with Docker Compose') {
            steps {
                sh 'docker-compose down'
                sh 'docker-compose up -d'
            }
        }
    }
    post {
        always {
            echo 'Pipeline Complete!'
        }
    }
}

Explanation of Jenkinsfile Stages:

  • Checkout: Pulls the latest code from the repository.
  • Build Docker Images: Builds Docker images for frontend, backend, and database.
  • Push Images to Docker Hub: Pushes the images to Docker Hub (ensure Jenkins has access to Docker credentials).
  • Deploy with Docker Compose: Pulls down the running containers (if any) and redeploys the new versions.

Jenkins Configuration

  1. Add Docker Hub credentials to Jenkins by navigating to Manage Jenkins > Manage Credentials.
  2. Set up a Jenkins job:
    • Point the job to the repository containing the Jenkinsfile.
    • Trigger the job manually or set it to trigger on commits.

Step 5: Testing the Application

Once the pipeline is complete:

  1. Access the frontend by navigating to http://localhost.
  2. Access the backend via http://localhost:3000.
  3. Ensure the backend is connected to the database by checking logs for successful queries.

Security and Best Practices

  • Limit Container Permissions: Use non-root users within containers.
  • Network Segmentation: Use Docker networks to limit container communication.
  • Secrets Management: Use tools like Docker Secrets or environment variables managed through Jenkins.
  • Regular Backups: Set up regular backups for the database container’s data volume.
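As an example of the first point, the official node image already ships with an unprivileged node user, so the backend image can drop root with a single extra instruction (a sketch of the earlier Dockerfile, not a hardened production build):

```dockerfile
FROM node:alpine
WORKDIR /app
COPY app /app
RUN npm install
# Switch to the unprivileged user bundled with the official node image
USER node
EXPOSE 3000
CMD ["node", "server.js"]
```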

Final Thoughts

This 3-tier architecture using Docker and Jenkins provides a reliable, isolated environment for each layer of the application. With Jenkins managing the pipeline, deployments are automated and can be easily triggered on code changes. This approach reduces manual intervention, enhances consistency, and enables fast iterations, making it ideal for modern applications.
