First, I want to mention that I'm a software developer, not a DevOps engineer. So if you have any good-practice tips, feel free to give me feedback, I'd enjoy that!
The goal of this guide is to show you a way to automate the deployment of a virtual machine on AWS. We will use Terraform to describe our infrastructure as code and GitHub Actions to trigger/automate the deployment. This combo is very powerful if you want a flexible infrastructure, so I will try to give you a clear overview of what you can do for your professional or side projects.
Terraform
Terraform creates and manages resources on cloud platforms and other services through their application programming interfaces (APIs). Providers enable Terraform to work with virtually any platform or service with an accessible API.
The benefits of using Terraform
Terraform allows you to define your infrastructure as code. Combined with a pipeline like GitHub Actions, you can automate the deployment of your entire infrastructure. You can deploy, destroy, and change your cloud infrastructure with a single command line, thanks to the Terraform CLI.
Introduction
First, you will need to install Terraform on your machine; take a look at the documentation here. If you are using macOS and Homebrew like me, type these two commands:
```bash
brew tap hashicorp/tap
brew install hashicorp/tap/terraform
```
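You can check that the CLI is available with:

```bash
terraform version
```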
To use your IAM credentials to authenticate the Terraform AWS provider, set the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables.
```bash
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
```
File structure
I'll show you a way to organize your Terraform files in a terraform folder (for readability).
| File | Description |
|---|---|
| ec2.tf | Configuration of our virtual machine (in our case an EC2 instance from AWS). |
| provider.tf | Declaration of the provider we use (AWS) and its region. |
| security_group.tf | Security rules for the machine. |
| variables.tf | Variables set from our environment. |
| vpc.tf | Definition of our virtual private network. |
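Inside the repository, that gives a layout like this:

```
terraform/
├── ec2.tf
├── provider.tf
├── security_group.tf
├── variables.tf
└── vpc.tf
```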
Let's build our virtual machine!
Now, I'll explain how to define a simple EC2 virtual machine. First, we need to create an SSH key pair to allow remote access to the machine. If you don't know how, take a look at the GitHub documentation.
Now that you have your SSH key pair, you should set the `TF_VAR_SSH_PUB_KEY` environment variable to the contents of your public key.
Notice: all environment variables used by Terraform will need the `TF_VAR_` prefix.
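For example, on your machine (the key path is an assumption, adapt it to wherever your key pair lives):

```bash
export TF_VAR_SSH_PUB_KEY="$(cat ~/.ssh/id_ed25519.pub)"
```

Then declare the matching variables: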
```hcl
# variables.tf
variable "SSH_PUB_KEY" {
  type = string
}

# Allocation ID of the Elastic IP used below in ec2.tf
variable "EIP_ASSOC_FRONTEND" {
  type = string
}
```
The goal of the `provider.tf` file is to define the provider we are going to use; here we specify AWS with the region (for French people, you can use `eu-west-3`):
```hcl
# provider.tf
provider "aws" {
  region = "eu-west-3"
}
```
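A good practice (my suggestion, not required for this guide) is to also pin the provider version, so the same provider release is used on every run:

```hcl
# Pin the AWS provider to a known major version -- a sketch, adjust to your needs.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}
```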
Then, we can write our `ec2.tf` file, which is the heart of the configuration:
```hcl
# ec2.tf
resource "aws_key_pair" "admin" {
  key_name   = "admin"
  public_key = var.SSH_PUB_KEY
}

resource "aws_instance" "SRV-Front-ESP" {
  ami           = "ami-002ff2c881c910aa8"
  instance_type = "t2.nano"
  key_name      = "admin"

  tags = {
    Name          = "SRV-Front-ESP"
    "Application" = "Web-ESP"
    "Environment" = "PRD"
    "OS"          = "Debian"
    "Role"        = "Frontend"
  }
}

resource "aws_eip_association" "eip_assoc" {
  instance_id   = aws_instance.SRV-Front-ESP.id
  allocation_id = var.EIP_ASSOC_FRONTEND
}

terraform {
  backend "s3" {
    bucket = "kom-front-bucket-ml"
    key    = "terraform.tfstate"
    region = "eu-west-3"
  }
}
```
Each `resource` block defines a feature of our EC2 instance. The `aws_key_pair` resource registers the public key so it can be installed on the virtual machine. We specify a name to be able to manage multiple key pairs; here, I named my key `admin`.
The next resource, `aws_instance`, is very important and may change with your needs. First, we need to define an `ami`. An AMI (Amazon Machine Image), as its name indicates, is the image for your virtual machine; in my case I chose a Debian image. You can also create your own AMI if you have specific needs.
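Note that AMI IDs differ per region. Instead of hard-coding the ID, you can also look it up with a data source; here is a minimal sketch (the owner ID and name filter are assumptions to verify against the Debian documentation):

```hcl
# Looks up the most recent official Debian 11 amd64 image in the current region.
data "aws_ami" "debian" {
  most_recent = true
  owners      = ["136693071363"] # Debian's AWS account -- verify this ID

  filter {
    name   = "name"
    values = ["debian-11-amd64-*"]
  }
}

# You could then use: ami = data.aws_ami.debian.id
```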
Next, I defined `t2.nano` as the `instance_type`. The `t2.nano` is a virtual machine with 1 vCPU and 0.5 GiB of RAM; it's the smallest offer. If you need more power, you can choose another offer here.
The `key_name` is used to attach the right SSH key to your virtual machine. And finally, the `tags` block is there to provide information; you can fill it in as you want.
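The table above also lists security_group.tf. The exact file isn't shown in this guide, but as a minimal sketch (the resource name and CIDR ranges are my assumptions), a security group allowing inbound SSH could look like this:

```hcl
# security_group.tf -- a minimal sketch, tighten the CIDR range for real use
resource "aws_security_group" "allow_ssh" {
  name        = "allow-ssh"
  description = "Allow inbound SSH"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # open to the world; restrict to your IP if possible
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1" # all protocols
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

You would then reference it from the instance with `vpc_security_group_ids`.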
Now, let me introduce the notion of `state` in Terraform. The `state` is very important to understand how Terraform works: it's what Terraform uses to map real-world resources to your configuration. It's stored by default in a local file named terraform.tfstate, but it can also be stored remotely, which works better in a team environment.
That's why I use an Amazon S3 bucket to store the `terraform.tfstate` (the `backend "s3"` block at the end of ec2.tf). This way, the file in your bucket always stays up to date with the actual state of your server.
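Note that the bucket must already exist before you initialize the backend. Creating it from the AWS CLI could look like this (bucket name taken from the config above; S3 bucket names are globally unique, so pick your own):

```bash
aws s3api create-bucket \
  --bucket kom-front-bucket-ml \
  --region eu-west-3 \
  --create-bucket-configuration LocationConstraint=eu-west-3
```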
That's all! Now you can run `terraform init`; this command performs several initialization steps to prepare the current working directory for use with Terraform. After initialization, you can run `terraform plan` to see what changes your Terraform configuration will make. If you are OK with the result of the plan, you can finally run `terraform apply` and see your virtual machine come alive!
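In short, from the terraform folder:

```bash
terraform init   # configure the S3 backend and download the AWS provider
terraform plan   # preview the changes Terraform is about to make
terraform apply  # create/update the resources (asks for confirmation)
```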
Now that we can deploy our virtual machine, we need to set up GitHub Actions to automate the deployment.
GitHub Actions
GitHub Actions is a continuous integration and continuous delivery (CI/CD) platform that allows you to automate your build, test, and deployment pipeline.
The general idea is to have, in your project's GitHub repository, a terraform folder with your Terraform config and a .github folder with your workflow jobs.
Introduction
To run Terraform in your workflow, you will need to set up your secrets in the settings of your GitHub repository or your GitHub organization (to share secrets between repositories):
AWS credentials:
- `AWS_ACCESS_KEY`
- `AWS_SECRET_ACCESS_KEY`

Terraform environment variables:
- `TF_VAR_SSH_PUB_KEY`
- `TF_VAR_EIP_ASSOC_FRONTEND`
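If you prefer the terminal over the web UI and have the GitHub CLI installed and authenticated, you can set them from your repository folder like this (the public-key path is an assumption):

```bash
gh secret set AWS_ACCESS_KEY
gh secret set AWS_SECRET_ACCESS_KEY
gh secret set TF_VAR_SSH_PUB_KEY < ~/.ssh/id_ed25519.pub
gh secret set TF_VAR_EIP_ASSOC_FRONTEND
```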
Let's build our workflow!
A workflow run is made up of one or more `jobs`, which run in parallel by default. In our case, we need just one job, which should set up and run Terraform.
```yaml
# .github/workflows/deploy.yml
name: DEPLOY

on:
  pull_request:
    branches:
      - main
    types: [closed]

jobs:
  deploy-vm:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1

      - name: Install prerequisites
        run: sudo apt-get install -y software-properties-common gnupg2 curl

      - name: Install Terraform
        run: |
          curl https://apt.releases.hashicorp.com/gpg | gpg --dearmor > hashicorp.gpg
          sudo install -o root -g root -m 644 hashicorp.gpg /etc/apt/trusted.gpg.d/
          sudo apt-add-repository "deb [arch=$(dpkg --print-architecture)] https://apt.releases.hashicorp.com focal main"
          sudo apt update
          sudo apt install -y terraform

      - name: Terraform init
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          TF_VAR_EIP_ASSOC_FRONTEND: ${{ secrets.TF_VAR_EIP_ASSOC_FRONTEND }}
          TF_VAR_SSH_PUB_KEY: ${{ secrets.TF_VAR_SSH_PUB_KEY }}
        run: |
          cd terraform
          terraform init

      - name: Terraform apply
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          TF_VAR_EIP_ASSOC_FRONTEND: ${{ secrets.TF_VAR_EIP_ASSOC_FRONTEND }}
          TF_VAR_SSH_PUB_KEY: ${{ secrets.TF_VAR_SSH_PUB_KEY }}
        run: |
          cd terraform
          terraform apply -auto-approve
```
First, we define when we want to trigger the pipeline with the `on` block. Here, I have decided to deploy/apply changes to my virtual machine on each closed pull request on the main branch, but you can switch to a scheduled task, a push on develop, etc., as sketched below.
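For example, those two alternatives could look like this (the cron time and branch name are just placeholders, pick one trigger per workflow):

```yaml
on:
  schedule:
    - cron: "0 6 * * 1" # every Monday at 06:00 UTC

# ...or deploy on every push to develop:
# on:
#   push:
#     branches:
#       - develop
```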
Then, you can define one step to install dependencies. Here I have chosen `ubuntu-latest` as the image for the pipeline, so dependencies are installed with the `apt` package manager. Finally, you can write two steps, for `terraform init` and `terraform apply`.
And voila! You can see your workflows in the Actions tab of your repository!
That's all! I hope I have helped you see things more clearly!
As said at the beginning, I'm a software developer who just loves learning new things, and I understand that I may have made some mistakes; feel free to give me feedback and I will fix them!