Refer here for corresponding Git commit: Provision Terraform Backend
Prerequisites
Throughout this guide, all infrastructure and applications will either be named or prefixed with "opsninja" for easy identification. Our deployments will take place in the us-east-1 region.
Before we start deploying our AWS EKS cluster using Terraform, we need to ensure that the required tools are installed and properly configured.
1. Install AWS CLI v2
The AWS CLI (Command Line Interface) v2 is required to interact with AWS services. To install it, follow these steps:
For macOS
brew install awscli
For Linux
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
For Windows
- Download the installer from the official AWS website: AWS CLI v2
- Run the installer and follow the setup instructions.
Verify the installation (all platforms):
aws --version
You should see an output similar to:
aws-cli/2.x.x Python/x.x.x Linux/x86_64
2. Install Helm (Kubernetes Package Manager)
Helm is a package manager for Kubernetes that simplifies the deployment of applications and services.
For macOS
brew install helm
For Linux
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
For Windows (using Chocolatey)
choco install kubernetes-helm
Verify the installation:
helm version
You should see an output similar to:
version.BuildInfo{Version:"v3.x.x", GitCommit:"xxxx", GoVersion:"go1.x.x"}
3. Install AWS IAM Authenticator
The AWS IAM Authenticator is required for authentication with the Amazon EKS cluster.
For macOS
brew install aws-iam-authenticator
For Linux
curl -o aws-iam-authenticator https://amazon-eks.s3.us-west-2.amazonaws.com/1.24.3/2022-06-24/bin/linux/amd64/aws-iam-authenticator
chmod +x ./aws-iam-authenticator
sudo mv ./aws-iam-authenticator /usr/local/bin/
For Windows
Download the binary from the official AWS EKS release and add it to your system's PATH.
Verify the installation:
aws-iam-authenticator version
Expected output:
{"Version":"0.x.x","Commit":"xxxx"}
4. Configure AWS CLI with a Profile Named terraform
We will use a dedicated AWS CLI profile called terraform to provision our infrastructure with Terraform.
Run the following command to configure the AWS CLI with the terraform profile:
aws configure --profile terraform
You'll be prompted to enter your AWS credentials:
AWS Access Key ID [None]: <YOUR_ACCESS_KEY>
AWS Secret Access Key [None]: <YOUR_SECRET_KEY>
Default region name [None]: us-east-1
Default output format [None]: json
Verify the Profile
To check if the profile is configured correctly, run:
aws configure list --profile terraform
or list all available profiles:
aws configure list-profiles
This profile will be used throughout this guide to authenticate and interact with AWS when deploying infrastructure using Terraform.
5. Validate AWS Configuration
To confirm that the AWS CLI and profile are correctly set up, run:
aws sts get-caller-identity --profile terraform
This should return details of the authenticated IAM user or role.
With the AWS CLI, Helm, and the AWS IAM Authenticator installed and the terraform profile configured, we are ready to move on to configuring the Terraform backend.
Configuring the Terraform Backend
For better state management, we will configure a Terraform backend using AWS S3 and DynamoDB. This will allow us to store our Terraform state remotely and enable state locking to prevent conflicts in collaborative environments.
Terraform Backend Configuration (main.tf)
Below is the Terraform configuration that provisions the backend resources: an S3 bucket for state storage and a DynamoDB table for state locking.
provider "aws" {
  region  = var.region
  profile = "terraform"
}

resource "aws_s3_bucket" "terraform_state_bucket" {
  bucket = var.s3_bucket_name

  # The state bucket must never be destroyed accidentally.
  lifecycle {
    prevent_destroy = true
  }

  tags = {
    Name        = var.s3_bucket_name
    Environment = var.environment
    App         = var.app
    ManagedBy   = "Terraform"
  }
}

# Since AWS provider v4, versioning and encryption are configured
# with dedicated resources rather than inline blocks on the bucket.
resource "aws_s3_bucket_versioning" "terraform_state_bucket" {
  bucket = aws_s3_bucket.terraform_state_bucket.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "terraform_state_bucket" {
  bucket = aws_s3_bucket.terraform_state_bucket.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}
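Because the state file can contain sensitive values, it is also common to block all public access to the state bucket. A minimal sketch of such a hardening resource (the resource label is my own choice):

```hcl
resource "aws_s3_bucket_public_access_block" "terraform_state_bucket" {
  bucket = aws_s3_bucket.terraform_state_bucket.id

  # Reject public ACLs and bucket policies on the state bucket.
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```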
resource "aws_dynamodb_table" "terraform_lock_table" {
  name         = var.dynamodb_table_name
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }

  tags = {
    Name        = var.dynamodb_table_name
    Environment = var.environment
    App         = var.app
    ManagedBy   = "Terraform"
  }
}
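One further addition worth considering for main.tf: the configuration does not pin a Terraform or AWS provider version, so behavior can drift across upgrades. A sketch of a version-pinning block (the exact constraints below are my own suggestion; adjust to your environment):

```hcl
terraform {
  # Pin the Terraform CLI and AWS provider to known-compatible ranges.
  required_version = ">= 1.3.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}
```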
Terraform Variables (variables.tf)
variable "app" {
  description = "App name"
  type        = string
  default     = "opsninja"
}

variable "region" {
  description = "AWS region"
  type        = string
  default     = "us-east-1"
}

variable "s3_bucket_name" {
  description = "The name of the S3 bucket for storing Terraform state"
  type        = string
  default     = "opsninja-tf-state-prod"
}

variable "dynamodb_table_name" {
  description = "The name of the DynamoDB table for state locking"
  type        = string
  default     = "opsninja-tf-state-prod"
}

variable "environment" {
  description = "Environment for tagging purposes (e.g., dev, prod)"
  type        = string
  default     = "prod"
}
Terraform Output (output.tf)
output "s3_bucket_name" {
  value = aws_s3_bucket.terraform_state_bucket.bucket
}

output "dynamodb_table_name" {
  value = aws_dynamodb_table.terraform_lock_table.name
}
Deploying the Terraform Backend
Once the backend resources are defined, initialize and apply Terraform with the following commands:
terraform init
terraform apply
Deployment Output and State Management
Once the deployment is complete, the output will display the S3 bucket and DynamoDB table that were created. Make sure to note these values, as they will be required for future steps in this guide.
Additionally, the state file for backend provisioning is saved locally. This file must be preserved, as it is necessary to manage the lifecycle of the S3 bucket and DynamoDB table in the future.
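Later Terraform configurations can then point at this backend with a block like the following, which is a sketch assuming the default names from variables.tf (the key path is my own example; choose one that fits your layout):

```hcl
terraform {
  backend "s3" {
    # Bucket and table created by the configuration above.
    bucket         = "opsninja-tf-state-prod"
    key            = "eks/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "opsninja-tf-state-prod"
    profile        = "terraform"
    encrypt        = true
  }
}
```

Note that backend blocks cannot reference variables; the values must be hard-coded or supplied via terraform init -backend-config.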