This is the second part of a two-parter. Just realized I was about to write a book if I kept going. Now that we have the necessary instances and setup in part 1, we will continue and start building out our images.
Note:
I am using openSUSE Leap, which will soon be discontinued and only maintained for a few years after 15.6. Alternatives would be Tumbleweed, MicroOS, or Slowroll.
Objectives
Primary:
- Create a repository for Packer Templates.
- Use Jenkins to build the images.
- I'm going to use a Jenkinsfile as well.
Secondary:
- Parallel Build in AWS and Azure.
Prerequisites
The assumption is a basic knowledge of Jenkins, Packer, and Azure. I'm not going to go into much detail on how to set up Azure accounts, service accounts, and some other items here. There are basic examples out there which can be referenced.
- Version Control System (GitHub, GitLab, etc.)
- Azure Subscription
- AWS Account
Repository Setup
*I'm going to keep the repository simple. In most cases the following are needed:*
- Folder/Files necessary for CI/CD pipelines
- In this case, I ended up using a Jenkinsfile after all; I didn't initially plan to.
- If this were GitHub Actions, I'd have a .github directory with a workflows folder. For Jenkins, I'll drop them under a jenkins directory.
- Packer Folder
- In this folder I also have:
- Builds: for build templates
- Files: used by the templatefile function or file provisioner.
- Scripts: for longer scripts which will be more frequently updated. I try to stay away from inline scripts with a provisioner.
- Var-Files: directory to store var-files which can get pulled in by the pipeline (a hypothetical example follows the tree view below).
*For a tree view:*
.
├── jenkins
│   └── suse-linux-base
└── packer
    ├── builds
    │   ├── builds_suse.pkr.hcl
    │   └── variables.pkr.hcl
    ├── files
    │   └── README.md
    ├── scripts
    │   └── README.md
    └── var-files
        └── README.md

7 directories, 6 files
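As a quick illustration of the var-files directory, a file dropped in there might look something like this. This is a hypothetical example; the demo repository only ships a README in that folder, and the variable names are illustrative:

```hcl
# var-files/suse-base.pkrvars.hcl (hypothetical example)
location            = "East US"
resource_group_name = "packer-rg"
vm_size             = "Standard_B2s"
```

A pipeline could then pass it in with something like `packer build -var-file=packer/var-files/suse-base.pkrvars.hcl packer/builds`.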
The repository was made public for anyone who wants to fork it as a demo.
https://github.com/benjamin-lykins/demo-jenkins-packer.git
Jenkins Setup
Create a folder for packer builds.
Jenkins Setup - Credentials
The next item you will want to set up is credentials which will be used by jobs in this folder.
*On the left hand side, select Credentials.*
*Select stores scoped to packer builds, or the name of the folder you created. Click on the global domain.*
*Add credentials.*
*For credentials, we will use secret text for all Azure-related credentials. Create a secret for each:*
1. Client ID (Service Account ID)
- azure_client_id
2. Client Secret (Service Account Secret)
- azure_client_secret
3. Subscription ID
- azure_subscription_id
4. Tenant ID
- azure_tenant_id
*When all the credentials have been added, it should look like this:*
Jenkins Setup - First Workflow
This first workflow will be simple: get the latest openSUSE image and build an image.
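The Packer side lives in builds_suse.pkr.hcl. Below is a trimmed-down sketch of what an azure-arm source doing this could look like; the variable names, marketplace offer/SKU, and VM size are assumptions for illustration, not a copy of the repository's file:

```hcl
source "azure-arm" "suse" {
  # Service principal credentials, fed in through variables
  # (the Jenkins pipeline exposes them as environment variables)
  client_id       = var.azure_client_id
  client_secret   = var.azure_client_secret
  subscription_id = var.azure_subscription_id
  tenant_id       = var.azure_tenant_id

  # Marketplace image to start from - verify the exact offer/SKU with
  # `az vm image list --publisher SUSE --all` before relying on these values
  os_type         = "Linux"
  image_publisher = "SUSE"
  image_offer     = "opensuse-leap-15-5"
  image_sku       = "gen2"

  # Where the finished managed image lands
  managed_image_resource_group_name = "packer-rg"
  managed_image_name                = "suse-image-${local.time}"

  location = "East US"
  vm_size  = "Standard_B2s"
}
```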
Go into the packer builds folder and create a new pipeline. I called mine `suse-linux-base`.
*Nothing major setup-wise, for Git:*
For Jenkinsfile:
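The Jenkinsfile itself sits in the repository under jenkins/suse-linux-base. Here is a minimal sketch of what such a pipeline might look like; the stage layout and the az commands are illustrative, not a copy of the repository's file:

```groovy
pipeline {
    agent any

    environment {
        // Folder-scoped credentials created earlier
        AZURE_SUBSCRIPTION_ID = credentials('azure_subscription_id')
        AZURE_TENANT_ID       = credentials('azure_tenant_id')
        AZURE_CLIENT_ID       = credentials('azure_client_id')
        AZURE_CLIENT_SECRET   = credentials('azure_client_secret')
    }

    stages {
        stage('Create Azure Resources') {
            steps {
                // Hypothetical step: make sure the target resource group exists before Packer runs
                sh '''
                    az login --service-principal \
                        --username "$AZURE_CLIENT_ID" \
                        --password "$AZURE_CLIENT_SECRET" \
                        --tenant "$AZURE_TENANT_ID"
                    az group create --name packer-rg --location eastus
                '''
            }
        }
        stage('Packer Build') {
            steps {
                // Assumes the template's variables pick up the AZURE_* values from the environment
                sh '''
                    packer init packer/builds
                    packer build packer/builds
                '''
            }
        }
    }
}
```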
Jenkins - First Build
If you fork the repository I provided, the first run should build and complete properly.
This is coming from a fresh Azure Account and Subscription.
No resource groups exist.
At this point we can kick off the first build.
In `suse-linux-base`, select Build Now.
In my case I had a small hiccup, and it took me a couple of tries to get it working:
I ended up changing the Create Azure Resource step a couple of times because it was running commands out of order. Once I sorted that out, it ran properly.
- packer-rg was created.
- packer-rg-temp was created as a temporary resource group to build images in.
Build Completion
You can see the build was completed in both Jenkins and in the Azure console.
Create VM from Image
From the image, select Create VM. Once it is ready, connect over SSH.
openSUSE Leap 15.5 x86_64 (64-bit)
If you are using extensions consider to enable the auto-update feature
of the extension agent and restarting the service. As root execute:
- sed -i s/AutoUpdate.Enabled=n/AutoUpdate.Enabled=y/ /etc/waagent.conf
- rcwaagent restart
As "root" use the:
- zypper command for package management
- yast command for configuration management
Have a lot of fun...
And that is it. Pretty straightforward setup. If this were all I wanted, this would be the stopping point. It is very basic usage of Packer, but I am looking to expand upon what is possible.
Parallel Builds
At one of my old jobs I was a vSphere administrator. Every month the virtual machine templates would need to be patched and updated in some way. It was a very manual process, and we did not use a tool like Packer to automate or store the configuration in code. It was documentation and click-ops heavy.
Now, I will not be building images in vSphere, but I would like to build the same image in two public clouds, AWS and Azure.
AWS - Setup
Nothing major is needed here; you will need two items. Packer will use the default VPC and subnet.
- AWS Secret and Key
- Subscription to openSUSE Leap
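If you want to double-check which AMIs that subscription exposes, a quick lookup can confirm the owner account and name pattern. This is a hypothetical command; owner 679593333241 is the account referenced by the filter in the next section:

```bash
aws ec2 describe-images \
  --region us-east-2 \
  --owners 679593333241 \
  --filters "Name=name,Values=openSUSE-Leap-*-hvm-ssd-x86_64-*" \
  --query 'sort_by(Images, &CreationDate)[-1].[ImageId,Name]' \
  --output table
```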
Packer - Setup
To set this up, I will need to add an additional plugin to the configuration.
Create New Branch
I am going to create a new branch called `parallel-aws-azure` for this use case.
packer {
  required_plugins {
    azure = {
      source  = "github.com/hashicorp/azure"
      version = "~> 2"
    }
    amazon = {
      source  = "github.com/hashicorp/amazon"
      version = "~> 1"
    }
  }
}
Add a new source block for AWS.
Check the ssh_username against the default user names for the AMI. In this scenario, either `ec2-user` or `root` would have worked.
source "amazon-ebs" "suse" {
ami_name = "suse-image-${local.time}"
instance_type = "t2.micro"
region = "us-east-2"
source_ami_filter {
filters = {
name = "openSUSE-Leap-*-hvm-ssd-x86_64-*"
root-device-type = "ebs"
virtualization-type = "hvm"
}
most_recent = true
owners = ["679593333241"]
}
ssh_username = "ec2-user"
}
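The ami_name above references local.time, which is not shown here. Assuming it is not already defined in variables.pkr.hcl, a minimal locals block could provide it:

```hcl
locals {
  # Timestamp suffix that keeps image names unique, e.g. suse-image-20240226195516
  time = formatdate("YYYYMMDDhhmmss", timestamp())
}
```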
And lastly, update the build block with the new source.
build {
  sources = ["source.azure-arm.suse", "source.amazon-ebs.suse"]

  provisioner "shell" {
    inline = ["echo foo"]
  }
}
Jenkins - Setup
I'll keep the same Jenkinsfile for this. All I will update is adding credentials for AWS.
environment {
    AZURE_SUBSCRIPTION_ID = credentials('azure_subscription_id')
    AZURE_TENANT_ID       = credentials('azure_tenant_id')
    AZURE_CLIENT_ID       = credentials('azure_client_id')
    AZURE_CLIENT_SECRET   = credentials('azure_client_secret')
    AWS_ACCESS_KEY_ID     = credentials('aws_access_key_id')
    AWS_SECRET_ACCESS_KEY = credentials('aws_secret_access_key')
}
Also, ensure the packer builds folder has credentials added for these keys. The Amazon plugin picks up AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the environment automatically through the standard AWS credential chain.
Create New Pipeline
I called mine `parallel-build`.
Point to the new branch:
Keep the same Jenkinsfile path:
Once ready, hit build and see how it goes.
Packer by default will run both builds concurrently.
azure-arm.suse: output will be in this color.
amazon-ebs.suse: output will be in this color.
When completed, Packer will output the AMI ID and the Azure managed image:
Packer:
==> Builds finished. The artifacts of successful builds are:
--> amazon-ebs.suse: AMIs were created:
us-east-2: ami-09de620f124c9bc00
Azure:
--> azure-arm.suse: Azure.ResourceManagement.VMImage:
OSType: Linux
ManagedImageResourceGroupName: packer-rg
ManagedImageName: suse-image-20240226195516
ManagedImageId: /subscriptions/****/resourceGroups/packer-rg/providers/Microsoft.Compute/images/suse-image-20240226195516
ManagedImageLocation: East US
In AWS console:
In Azure console:
Nothing is drastically different when it comes to setting up parallel builds across multiple public clouds in Jenkins.
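One knob worth knowing about: Packer runs all sources at once by default, and that concurrency can be capped or serialized if it ever becomes a problem. A hypothetical invocation; the pipeline here just runs packer build with no extra flags:

```bash
packer build -parallel-builds=1 packer/builds
```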
Conclusion
This is a very surface-level example of using Packer with Jenkins. Additional use cases would be:
- Layering Builds
- Reference newly built images and add the components necessary for hosting an application, such as IIS or Tomcat, on the host.
- Chaining Pipelines
- This can be done similarly to layering builds. Have the base image pipeline run first to build a base image; if it succeeds, have a subsequent pipeline trigger to build on top of the image produced by the base pipeline (a small sketch follows this list).
- Automate Provisioning of Images
- When a new image is built, trigger a Terraform Cloud workspace to run a plan and apply to pick up the latest image.
- In AWS, Autoscaling Groups can trigger a refresh.
- Automate Testing of Images
- I honestly do not have many ideas for this. For me, if an image is built with Packer, can be provisioned in AWS or Azure, and an application can be deployed on it, then that is enough. I could see security and compliance tools fitting in here; I probably just do not have the exposure to them.
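For the pipeline-chaining idea above, the simplest mechanism in Jenkins is a downstream trigger from the base pipeline. A small sketch, with a made-up downstream job name:

```groovy
// Hypothetical post block for the base-image Jenkinsfile: only kick off the
// downstream application-image pipeline when the base build succeeds
post {
    success {
        build job: 'suse-app-image', wait: false
    }
}
```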