Learn how to leverage Shell scripting to streamline Jenkins log management and reduce cloud storage costs with AWS S3
💡 Introduction
Welcome to the world of DevOps! Today, we are diving into shell scripting by creating a script that helps reduce infrastructure costs by storing Jenkins logs in AWS S3 rather than on VMs.
Imagine a company like Google, which runs thousands of microservices that generate logs, metrics, and traces stored on servers. Popular observability stacks such as ELK or EFK collect different types of logs, for example:
- Application Logs: High-priority logs used to troubleshoot applications.
- Kubernetes Control-Plane Logs: High-priority logs for troubleshooting cluster issues.
- Infrastructure Logs: Logs from tools like Jenkins or Terraform.
Infrastructure logs are typically not as critical to store on servers. For example, if a Jenkins build fails, notifications are sent via email or Slack, enabling instant troubleshooting. However, retaining these logs for backup and restoration purposes is important.
To address this, we'll create a shell script that uploads Jenkins build logs to S3, ensuring cost optimisation while maintaining access to logs when needed.
💡 Prerequisites
Before starting the project, ensure the following requirements are met:
- An AWS Account.
- Basic understanding of Shell Scripting.
- Basic understanding of AWS S3.
- Jenkins installed on your system.
💡 Setting Up the Environment
Before we create the script, we need:
- A Sample Jenkins Project:
  - Create a pipeline project in Jenkins named hello-world-project using a Hello World template.
  - Run the pipeline 3–4 times to generate 3–4 log files.
- An S3 Bucket:
  - Create an S3 bucket. For example: bucket-for-jenkins-logs (ensure the name is unique).
- Configure AWS CLI:
  - Generate an access key from AWS IAM.
  - Use the command aws configure to set up your AWS CLI (a CLI sketch for these AWS steps follows this list).
- Locate the Jenkins Home Directory:
  - Go to Jenkins > Manage Jenkins > System. The Jenkins home directory path is displayed at the top.
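If you prefer doing the AWS steps from the terminal, here is a minimal sketch using the example bucket name from above (replace it with your own globally unique name):
# Configure the AWS CLI with the access key and secret generated in IAM
aws configure
# Create the example bucket (bucket names must be globally unique)
aws s3 mb s3://bucket-for-jenkins-logs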
💡 Writing the Shell Script
Now, let's create the script that will automate the upload process:
The Script: s3upload.sh
#!/bin/bash
######################
## Author: Pravesh-Sudha
## Description: Shell script to upload Jenkins logs to S3 bucket
## Version: v1
######################
# Variables
JENKINS_HOME="/Users/praveshsudha/.jenkins" # Replace with your Jenkins home directory
S3_BUCKET="s3://bucket-for-jenkins-logs" # Replace with your S3 bucket name
DATE=$(date +%Y-%m-%d) # Today's date
# Check if AWS CLI is installed
if ! command -v aws &> /dev/null; then
echo "AWS CLI is not installed. Please install it to proceed."
exit 1
fi
# Iterate through all job directories
for job_dir in "$JENKINS_HOME/jobs/"*/; do
    job_name=$(basename "$job_dir")

    # Iterate through build directories for the job
    for build_dir in "$job_dir/builds/"*/; do
        # Get build number and log file path
        build_number=$(basename "$build_dir")
        log_file="$build_dir/log"

        # Check if log file exists and was created today
        if [ -f "$log_file" ] && [ "$(date -r "$log_file" +%Y-%m-%d)" == "$DATE" ]; then
            # Upload log file to S3, naming it after the job name and build number
            aws s3 cp "$log_file" "$S3_BUCKET/$job_name-$build_number.log" --only-show-errors
            if [ $? -eq 0 ]; then
                echo "Uploaded: $job_name/$build_number to $S3_BUCKET/$job_name-$build_number.log"
            else
                echo "Failed to upload: $job_name/$build_number"
            fi
        fi
    done
done
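Because the DATE check only matches logs created on the current day, the script is designed to be run once per day. If you want to automate that, a hypothetical crontab entry could look like the one below; the path /path/to/s3upload.sh and the 11 PM schedule are just placeholders, so adjust both to your setup:
# Hypothetical crontab entry: run the upload script every day at 11 PM
# Edit your crontab with: crontab -e
0 23 * * * /path/to/s3upload.sh >> /tmp/s3upload-cron.log 2>&1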
Running the Script
- Grant Execution Permissions:
chmod +x s3upload.sh
- Execute the Script:
./s3upload.sh
The script will:
- Iterate through Jenkins job and build directories.
- Identify logs created on the current date.
- Upload logs to your specified S3 bucket with appropriate naming.
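Once the script has run, you can sanity-check the uploads with a quick listing of the bucket (using the example bucket name from earlier):
# List the log files uploaded to the bucket
aws s3 ls s3://bucket-for-jenkins-logs/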
💡 Conclusion
Congratulations! 🎉 You've successfully created a shell script to automate the process of uploading Jenkins logs to S3, reducing costs and ensuring seamless log management.
But there's more! AWS S3 supports Lifecycle Management, allowing you to define rules to automatically transition older or less important logs to cheaper storage classes like Glacier or Deep Archive. These options provide even greater cost savings for infrequently accessed logs.
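As an illustration, here is a sketch of how such a rule could be applied with the AWS CLI. The 30-day Glacier transition, the 365-day expiration, and the file name lifecycle.json are arbitrary choices for this example; adjust them to your own retention policy:
# Write an example lifecycle policy: move logs to Glacier after 30 days,
# then delete them after 365 days (both numbers are only illustrative)
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "archive-jenkins-logs",
      "Filter": { "Prefix": "" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
EOF
# Apply the policy to the example bucket
aws s3api put-bucket-lifecycle-configuration \
  --bucket bucket-for-jenkins-logs \
  --lifecycle-configuration file://lifecycle.json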
This project demonstrates how a simple shell script, combined with cloud services, can solve real-world challenges in a DevOps workflow.
Keep experimenting, keep learning, and most importantly, keep scripting! 🚀
👉 For more informative blogs, follow me on Hashnode, X (Twitter), and LinkedIn.