Introduction
Hey there! So you want to set up a continuous integration and continuous deployment (CI/CD) pipeline for a microservices application? Don't worry if some of these terms sound intimidating—I'll walk you through everything step by step, explaining what we're doing and why we're doing it.
In this guide, we'll build an automated system that takes code from developers, packages it into containers, and deploys it to a cloud environment with minimal manual intervention. Pretty cool, right? By the end, you'll have implemented a professional-grade pipeline that many tech companies use for their applications.
What We're Building
We're setting up an automated pipeline for an e-commerce application made up of 11 microservices. Each microservice is responsible for a specific function, working together to deliver a seamless shopping experience. Here’s an overview of some key services:
🔹 Frontend (Customer Interface)
The website customers see and interact with. It communicates with backend services to display products, handle user actions, and process orders.
🔹 Cart Service (Shopping Cart Management)
Keeps track of the items a customer adds to their cart before checkout. It ensures cart data is stored and updated in real time.
🔹 Payment Service (Transaction Processing)
Handles customer payments securely. It interacts with payment gateways to process transactions and confirm successful payments.
🔹 Shipping Service (Order Fulfillment)
Manages shipping details, calculates delivery times, and tracks orders after they leave the warehouse.
🔹 Email Service (Notifications & Communication)
Sends order confirmations, shipping updates, and promotional emails to customers.
Other microservices like Product Catalog, Recommendation Service, and Checkout Service help complete the shopping experience by handling product listings, personalized recommendations, and final order processing.
The beauty of microservices is that we can update one part of the application without touching the others. Think of microservices like a string of festive lights—if one bulb goes out, you can replace that single bulb without having to take down and replace the entire string. Each light operates independently, but together they create a seamless experience!
Project Credit: DevOps Shack
Why This Matters
Before we dive into the technical steps, let's understand why this approach is so valuable:
1️⃣ Independent Development & Deployment
Each microservice can be developed, tested, and deployed separately, allowing teams to work on different services without impacting the entire application.
2️⃣ Scalability
Microservices can be scaled individually based on demand. If one part of the system experiences high traffic, only that service needs to be scaled, saving resources and costs.
3️⃣ Technology Flexibility
Teams can use different programming languages, databases, and frameworks best suited for each service rather than being locked into a single tech stack.
4️⃣ Fault Isolation & Resilience
If one microservice fails, it doesn’t bring down the entire system. The failure is contained, improving system reliability and availability.
5️⃣ Faster Development & Innovation
Smaller, independent services enable faster development cycles, allowing teams to release updates and new features more frequently.
6️⃣ Better Maintainability
Since each service has a smaller codebase and a clear scope, it’s easier to update, debug, and refactor without affecting other parts of the system.
7️⃣ Enhanced Security & Compliance
Sensitive data and business logic can be isolated within specific microservices, making it easier to enforce security policies and meet compliance requirements.
The Tools We'll Use
We'll be working with several powerful tools:
- GitHub: Stores our code
- Jenkins: Automates our build and deployment processes
- Docker: Packages our applications into containers
- Kubernetes (EKS): Manages how our containers run in the cloud
- AWS: Provides the cloud infrastructure where everything runs
Don't worry if you're not familiar with all of these—I'll explain each one as we go along.
Step 1: Setting Up Our Server
First, we need a computer in the cloud (an EC2 instance) that will run Jenkins and manage our Kubernetes cluster.
Creating an EC2 Instance
- Log into your AWS account
- Navigate to EC2 service
- Click "Launch Instance"
- Select a name for your instance (e.g., "Jenkins-Kubernetes-Server")
- Choose an Amazon Machine Image (AMI) with Ubuntu
- Select instance type: t2.large (2 vCPUs, 8GB memory)
- Configure storage: 20 GB minimum
- Create or select a key pair for SSH access
- Launch the instance
- Configure your instance's security group inbound rules to allow the ports you need (at minimum, SSH on port 22 and Jenkins on port 8080)
Step 2: Setting Up AWS Permissions
Before we can create our Kubernetes cluster, we need to set up the proper permissions.
Creating an IAM User for EKS
- In the AWS console, navigate to IAM (Identity and Access Management)
- Create a new user named "eks-devops" (use whatever name you want)
- Attach the following permission policies:
- AmazonEKSClusterPolicy
- AmazonEKSServicePolicy
- AmazonEC2FullAccess
- AmazonS3FullAccess
- IAMFullAccess
- AmazonVPCFullAccess
Create an inline policy with the necessary permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "eks:*",
      "Resource": "*"
    }
  ]
}
- Generate access keys for this user and save them securely.
Step 3: Installing Essential Tools on Our Server
Now let's connect to our EC2 instance and install the tools we need.
Connecting to Your EC2 Instance
ssh -i path/to/your-key.pem ubuntu@your-ec2-public-ip
Alternatively, you can connect to EC2 using MobaXterm.
Once you have successfully SSHed into your EC2 instance, run the following command:
sudo apt update
Installing AWS CLI
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo apt install unzip -y
unzip awscliv2.zip
sudo ./aws/install
Configuring AWS CLI
aws configure
When prompted, enter:
- AWS Access Key ID: [Your access key]
- AWS Secret Access Key: [Your secret key]
- Default region name: af-south-1 (or your preferred region)
- Default output format: json
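If you'd rather configure the CLI non-interactively (handy inside scripts), the same values can be supplied as environment variables instead of running aws configure. This is a minimal sketch; the key values below are placeholders you must replace with your own:

```shell
# Alternative to `aws configure`: export credentials as environment variables.
# The key values here are placeholders; substitute your own keys and region.
export AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY_ID"
export AWS_SECRET_ACCESS_KEY="YOUR_SECRET_ACCESS_KEY"
export AWS_DEFAULT_REGION="af-south-1"

echo "AWS CLI will use region: $AWS_DEFAULT_REGION"
```

Note that variables exported this way only last for the current shell session, whereas aws configure persists them to ~/.aws.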
Installing kubectl
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin
kubectl version --short --client
Beginner's Note: kubectl is a command-line tool that allows us to interact with and manage a Kubernetes cluster. Think of it like a remote control for your Kubernetes setup—it lets you deploy applications, check their status, and make changes, all from your terminal.
Installing eksctl
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
eksctl version
Beginner's Note: eksctl is a tool specifically designed to make creating Kubernetes clusters on AWS easier.
An alternative is to create a script that installs all the tools in one go.
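As a starting point, here's a minimal sketch of such a script. It only checks which tools are still missing and reports what needs installing; swap each echo line for the corresponding install commands shown above:

```shell
#!/usr/bin/env bash
# Sketch: report which CLI tools still need to be installed.
# Replace each echo with the real install commands from the sections above.
set -euo pipefail

missing() {
  # Succeeds when the given command is NOT yet on the PATH.
  ! command -v "$1" >/dev/null 2>&1
}

missing aws     && echo "to install: AWS CLI v2"
missing kubectl && echo "to install: kubectl"
missing eksctl  && echo "to install: eksctl"
missing docker  && echo "to install: Docker"
echo "tool check complete"
```

Running it before and after your setup gives a quick sanity check that everything landed on the PATH.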
Step 4: Creating a Kubernetes Cluster on AWS
Now we'll create our Kubernetes cluster where our application will run.
Creating the EKS Cluster
eksctl create cluster --name=eks-microservices \
  --region=af-south-1 \
  --zones=af-south-1a,af-south-1b \
  --without-nodegroup

Substitute your own region and availability zones.
Beginner's Note: This command creates an Amazon EKS (Elastic Kubernetes Service) cluster.
Setting Up Identity and Access for Kubernetes
eksctl utils associate-iam-oidc-provider \
--region af-south-1 \
--cluster eks-microservices \
--approve
Beginner's Note: This command sets up a secure trust between your EKS cluster and AWS, allowing Kubernetes applications to access AWS services safely without storing long-term credentials.
Adding Worker Nodes to Our Cluster
eksctl create nodegroup --cluster=eks-microservices \
--region=af-south-1 \
--name=node2 \
--node-type=t3.medium \
--nodes=3 \
--nodes-min=2 \
--nodes-max=4 \
--node-volume-size=20 \
--ssh-access \
--ssh-public-key=DevOps \
--managed \
--asg-access \
--external-dns-access \
--full-ecr-access \
--appmesh-access \
--alb-ingress-access
Beginner's Note: We're adding worker machines (nodes) to our Kubernetes cluster. We start with 3, but the system can automatically adjust between 2 and 4 based on demand.
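If you prefer keeping infrastructure settings in version control, the cluster and node group commands above can also be expressed as a single eksctl config file. The sketch below approximates the same settings (field names follow eksctl's ClusterConfig schema; adjust to your own region, key pair, and sizes) and would be applied with eksctl create cluster -f cluster.yaml:

```yaml
# cluster.yaml — approximate equivalent of the eksctl commands above (adjust as needed)
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: eks-microservices
  region: af-south-1

availabilityZones: ["af-south-1a", "af-south-1b"]

managedNodeGroups:
  - name: node2
    instanceType: t3.medium
    desiredCapacity: 3
    minSize: 2
    maxSize: 4
    volumeSize: 20
    ssh:
      allow: true
      publicKeyName: DevOps
```

A config file like this makes the cluster easy to recreate or review in a pull request, at the cost of a little extra ceremony.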
Step 5: Installing Jenkins on Our Server
Jenkins will orchestrate our CI/CD pipeline, automatically building and deploying our code.
Installing Java (Required for Jenkins)
sudo apt install openjdk-17-jdk -y
Installing Jenkins
sudo wget -O /usr/share/keyrings/jenkins-keyring.asc https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/" | sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null

(This uses the current signed-by keyring method; the older apt-key approach is deprecated on recent Ubuntu releases.)
sudo apt update
sudo apt install jenkins -y
Starting Jenkins Service
sudo systemctl start jenkins
sudo systemctl enable jenkins
Accessing Jenkins
Open a web browser and navigate to:
http://your-ec2-public-ip:8080
To get the initial admin password:
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
Beginner's Note: Jenkins will automatically build, test, and deploy our code whenever changes are made.
Step 6: Installing Docker on Our Server
Docker will package our applications into containers.
sudo apt install docker.io -y
sudo systemctl start docker
sudo systemctl enable docker
Giving Jenkins Permission to Use Docker
sudo usermod -aG docker jenkins
sudo chmod 666 /var/run/docker.sock
sudo systemctl restart jenkins

Note: chmod 666 on the Docker socket is a quick workaround that's fine for a lab environment but too permissive for production; adding the jenkins user to the docker group (the usermod line) is the proper fix, and takes effect once Jenkins restarts.
Step 7: Configuring Jenkins for Our Pipeline
Now let's set up Jenkins to work with Docker and Kubernetes.
Installing Necessary Plugins
- Navigate to "Manage Jenkins" > "Plugins" > "Available Plugins"
- Search for and install:
- Docker Pipeline
- Docker
- Kubernetes
- Kubernetes CLI
- Multibranch Scan Webhook Trigger
Configuring Docker in Jenkins
- Go to "Manage Jenkins" > "Tools"
- Scroll to "Docker installations"
- Click "Add Docker"
- Set the name as "docker" and check "Install automatically"
- Save
Adding Docker Hub Credentials
- Go to "Manage Jenkins" > "Credentials" > "System" > "Global credentials"
- Click "Add Credentials"
- Choose "Username with password"
- Enter your Docker Hub username and password
- Set ID as "docker-cred"
- Click "Create"
Beginner's Note: We're setting up Jenkins with our Docker Hub account so it can push our container images to a central repository.
Adding GitHub Credentials (Optional for Public Repos)
- Go to "Manage Jenkins" > "Credentials" > "System" > "Global credentials"
- Click "Add Credentials"
- Choose "Username with password"
- Enter your GitHub username and personal access token
- Set ID as "github-cred"
- Click "Create"
Step 8: Creating a Multi-Branch Pipeline in Jenkins
This is where the magic happens! We'll set up Jenkins to automatically detect and build all our microservices.
- From the Jenkins dashboard, click "New Item"
- Enter a name for your pipeline (e.g., "Microservice-ecommerce")
- Select "Multibranch Pipeline" and click "OK"
- Under "Branch Sources", select "Git"
- Enter your repository URL
- Set credentials to "github-cred" (if using a private repo)
- Under "Build Configuration", set "Mode" to "by Jenkinsfile"
- Set "Script Path" to "Jenkinsfile"
- Click "Save"
Setting Up Webhook for Automatic Triggers
- In Jenkins, go to your pipeline and click "Configure"
- Scroll down to find "Scan by webhook"
- Check the box and set a trigger token (e.g., "my-webhook-token")
- Save the configuration
- In GitHub, go to your repository settings
- Click "Webhooks" > "Add webhook"
- Set Payload URL to:
http://your-jenkins-url/multibranch-webhook-trigger/invoke?token=my-webhook-token
- Choose "Content-type" as "application/json"
- Click "Add webhook"
Beginner's Note: We're connecting Jenkins to our GitHub repository and setting up an automatic trigger so that whenever code is pushed, Jenkins will automatically start building.
Step 9: Understanding the Jenkinsfile
Each branch in our repository has a Jenkinsfile that defines what happens when code changes. Here's a typical Jenkinsfile for our microservices:
pipeline {
    agent any
    stages {
        stage('Build & Tag Docker Image') {
            steps {
                script {
                    withDockerRegistry(credentialsId: 'docker-cred', toolName: 'docker') {
                        sh "docker build -t yourusername/servicename:latest ."
                    }
                }
            }
        }
        stage('Push Docker Image') {
            steps {
                script {
                    withDockerRegistry(credentialsId: 'docker-cred', toolName: 'docker') {
                        sh "docker push yourusername/servicename:latest"
                    }
                }
            }
        }
    }
}
Beginner's Note: This Jenkinsfile is like a recipe that tells Jenkins what to do with our code. It builds a Docker container and uploads it to Docker Hub.
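One common refinement (not part of the original setup) is to tag each image with the Jenkins build number in addition to latest, so you can roll back to any specific build. A rough sketch, assuming the same docker-cred credentials and a hypothetical yourusername/servicename image name:

```groovy
pipeline {
    agent any
    environment {
        // IMAGE is a hypothetical name; BUILD_NUMBER is provided by Jenkins.
        IMAGE = "yourusername/servicename"
    }
    stages {
        stage('Build, Tag & Push') {
            steps {
                script {
                    withDockerRegistry(credentialsId: 'docker-cred', toolName: 'docker') {
                        // Tag the same build with both the build number and latest.
                        sh "docker build -t ${IMAGE}:${BUILD_NUMBER} -t ${IMAGE}:latest ."
                        sh "docker push ${IMAGE}:${BUILD_NUMBER}"
                        sh "docker push ${IMAGE}:latest"
                    }
                }
            }
        }
    }
}
```

With numbered tags in the registry, rolling back is just redeploying an earlier tag instead of rebuilding old code.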
Step 10: Setting Up Kubernetes for Deployment
Now we need to configure Kubernetes so Jenkins can deploy our applications.
Creating a Namespace
kubectl create namespace webapps
Beginner's Note: A namespace is like a virtual cluster inside our Kubernetes cluster. It helps us organize our applications.
Creating a Service Account
Create a file named svc.yml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: webapps
Apply it:
kubectl apply -f svc.yml
Creating a Role with Permissions
Create a file named role.yml:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-role
  namespace: webapps
rules:
  - apiGroups:
      - ""
      - apps
      - autoscaling
      - batch
      - extensions
      - policy
      - rbac.authorization.k8s.io
    resources:
      - pods
      - componentstatuses
      - configmaps
      - daemonsets
      - deployments
      - events
      - endpoints
      - horizontalpodautoscalers
      - ingresses
      - jobs
      - limitranges
      - namespaces
      - nodes
      - persistentvolumes
      - persistentvolumeclaims
      - resourcequotas
      - replicasets
      - replicationcontrollers
      - serviceaccounts
      - services
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
Apply it:
kubectl apply -f role.yml
Binding the Role to the Service Account
Create a file named bind.yml:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-rolebinding
  namespace: webapps
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: app-role
subjects:
  - namespace: webapps
    kind: ServiceAccount
    name: jenkins
Apply it:
kubectl apply -f bind.yml
Beginner's Note: We're creating a special account (service account) for Jenkins in Kubernetes and giving it permissions to deploy and manage our applications.
Creating a Token for Authentication
Create a file named sec.yml:
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: jenkins-token
  annotations:
    kubernetes.io/service-account.name: jenkins
Apply it:
kubectl apply -f sec.yml -n webapps
Retrieve the token:
kubectl describe secret jenkins-token -n webapps
Copy the token value shown in the output.
Step 11: Creating a Deployment Pipeline in Jenkins
Now let's set up the final piece: a pipeline that deploys our applications to Kubernetes.
- From the Jenkins dashboard, click "New Item"
- Enter a name (e.g., "E-Commerce-Deployment")
- Select "Pipeline" and click "OK"
- Scroll down to the "Pipeline" section
- Click "Pipeline Syntax" to open the syntax generator
- Find "withKubeCredentials: Configure Kubernetes CLI"
- Click "Add" > "Jenkins" > "Secret text"
- Paste your Kubernetes token and set ID as "k8s-token"
- Enter your Kubernetes API Endpoint (found in the EKS console)
- Set Cluster Name to "eks-microservices" and Namespace to "webapps" (the cluster name and namespace must match what you named them earlier)
- Click "Generate Pipeline Script" and copy the output
- Use this script to create your Jenkinsfile in the main branch:
pipeline {
    agent any
    stages {
        stage('Deploy to Kubernetes') {
            steps {
                withKubeCredentials(kubectlCredentials: [[caCertificate: '', clusterName: 'eks-microservices', contextName: '', credentialsId: 'k8s-token', namespace: 'webapps', serverUrl: 'https://84D54FB3773FF9409BD7701FD10FE847.gr7.af-south-1.eks.amazonaws.com']]) {
                    sh "kubectl apply -f deployment-service.yml"
                }
            }
        }
        stage('Verify Deployment') {
            steps {
                withKubeCredentials(kubectlCredentials: [[caCertificate: '', clusterName: 'eks-microservices', contextName: '', credentialsId: 'k8s-token', namespace: 'webapps', serverUrl: 'https://84D54FB3773FF9409BD7701FD10FE847.gr7.af-south-1.eks.amazonaws.com']]) {
                    sh "kubectl get service -n webapps"
                }
            }
        }
    }
}
Beginner's Note: This pipeline takes our Docker containers and deploys them to Kubernetes, making our application available to users.
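The pipeline above applies a file called deployment-service.yml from the repository. For reference, here's a minimal sketch of what such a manifest might look like for a single microservice—the service name, image, labels, and ports are placeholders; each real service in the project defines its own:

```yaml
# deployment-service.yml — illustrative sketch; name, image, and ports are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: webapps
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: yourusername/frontend:latest
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-external
  namespace: webapps
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 8080
```

The LoadBalancer service type is what gives the frontend a public EXTERNAL-IP in the final step; internal-only services would typically use ClusterIP instead.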
Step 12: Putting It All Together
Let's recap how the entire system works:
- A developer pushes code changes to a branch in GitHub
- GitHub webhook triggers Jenkins build for that specific microservice
- Jenkins builds a Docker container for the microservice
- The container is pushed to Docker Hub
- The deployment pipeline pulls the latest container and deploys it to Kubernetes
- Users can access the updated application without any downtime
Accessing Your Deployed Application
To see your running application:
kubectl get service -n webapps
Look for services with TYPE LoadBalancer and use the EXTERNAL-IP to access your application in a web browser.
Conclusion
Congratulations! You've successfully set up a complete CI/CD pipeline for a microservices application on AWS EKS using Jenkins multibranch pipelines. This setup allows for:
- Automatic building of code changes
- Containerization of applications
- Deployment to a scalable, redundant Kubernetes cluster
- Independent updating of individual microservices
While this guide covers the essential steps, there's always room for improvement:
- Add automated testing before deployment
- Implement monitoring and alerting
- Set up blue/green deployments for zero-downtime updates
- Add security scanning for your containers
Remember, this system isn't just a technical achievement—it's a business advantage that allows for faster innovation and more reliable service for your customers.
Cleanup (When Needed)
If you want to remove the entire system to avoid ongoing charges:
eksctl delete cluster --name eks-microservices --region af-south-1
Note: This will delete your Kubernetes cluster and all running applications.
Log into your console and delete your EC2 instance.
I hope you learned something. I would love to get feedback from you. Let's connect on LinkedIn or Medium.