DEV Community

Prithiviraj R

Creating an EKS Cluster Using Terraform

In this guide, we’ll use Terraform to create an Amazon Elastic Kubernetes Service (EKS) cluster. The
focus is on simplicity and accuracy while leveraging Terraform’s modular approach.

What is Terraform?
Terraform is an open-source tool by HashiCorp for defining, provisioning, and managing
cloud infrastructure using Infrastructure-as-Code (IaC).
How Terraform Works

  1. Write: Define your resources in .tf files.
  2. Plan: Preview changes before applying them.
  3. Apply: Provision resources as defined in the configuration.
  4. Manage: Maintain infrastructure with a state file that tracks the resources.

Why Terraform for EKS?
  • Automation: Handles complex configurations effortlessly.
  • Consistency: Guarantees reproducible environments.
  • Scalability: Makes scaling clusters easy.

Prerequisites
  • AWS CLI installed and configured with appropriate permissions.
  • kubectl installed for Kubernetes management.
  • Terraform CLI installed on your machine.
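A quick way to confirm these prerequisites is a simple PATH check (tool names only; install instructions vary by platform):

```shell
# Print a hint for any prerequisite CLI that is missing from PATH
for tool in aws kubectl terraform; do
  command -v "$tool" >/dev/null 2>&1 || echo "$tool not found - install it first"
done
```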

Step 1: Set Up the Project

  1. Create a new directory for the project:
  2. Create the required Terraform files:
  • main.tf
  • variables.tf
  • providers.tf
  • vpc.tf
  • eks.tf

Step 2: Terraform Configuration
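The project skeleton from Step 1 can be created from the shell (the directory name `eks-demo` is an arbitrary choice):

```shell
# Create the project directory; "eks-demo" is just an example name
mkdir -p eks-demo && cd eks-demo

# Create the empty configuration files listed above
touch main.tf variables.tf providers.tf vpc.tf eks.tf
```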

1. providers.tf
Configure the AWS provider:

provider "aws" {
  region = "eu-west-1"

  default_tags {
    tags = local.tags
  }
}

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.67.0"
    }
  }

  required_version = ">= 1.4.2"
}

2. main.tf

locals {
  tags = {
    created-by = "eks-demo"
    env        = var.cluster_name
  }

  # AZ and subnet locals consumed by vpc.tf: take the first three
  # available AZs and carve non-overlapping /24s out of the VPC CIDR
  azs             = slice(data.aws_availability_zones.available.names, 0, 3)
  public_subnets  = [for k, v in local.azs : cidrsubnet(var.vpc_cidr, 8, k)]
  private_subnets = [for k, v in local.azs : cidrsubnet(var.vpc_cidr, 8, k + 10)]
}

3. vpc.tf

data "aws_availability_zones" "available" {
  state = "available"
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.1"

  name = var.cluster_name
  cidr = var.vpc_cidr

  azs             = local.azs
  public_subnets  = local.public_subnets
  private_subnets = local.private_subnets

  public_subnet_suffix  = "SubnetPublic"
  private_subnet_suffix = "SubnetPrivate"

  enable_nat_gateway   = true
  create_igw           = true
  enable_dns_hostnames = true
  single_nat_gateway   = true

  # Manage the default resources so we can name them
  manage_default_network_acl    = true
  default_network_acl_tags      = { Name = "${var.cluster_name}-default" }
  manage_default_route_table    = true
  default_route_table_tags      = { Name = "${var.cluster_name}-default" }
  manage_default_security_group = true
  default_security_group_tags   = { Name = "${var.cluster_name}-default" }

  public_subnet_tags = merge(local.tags, {
    "kubernetes.io/role/elb" = "1"
  })

  private_subnet_tags = merge(local.tags, {
    "karpenter.sh/discovery"          = var.cluster_name
    "kubernetes.io/role/internal-elb" = "1"
  })

  tags = local.tags
}

4. eks.tf

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name    = var.cluster_name
  cluster_version = var.cluster_version

  cluster_endpoint_public_access           = true
  enable_cluster_creator_admin_permissions = true

  cluster_addons = {
    vpc-cni = {
      before_compute = true
      most_recent    = true
      configuration_values = jsonencode({
        env = {
          ENABLE_POD_ENI                    = "true"
          ENABLE_PREFIX_DELEGATION          = "true"
          POD_SECURITY_GROUP_ENFORCING_MODE = "standard"
        }
        nodeAgent = {
          enablePolicyEventLogs = "true"
        }
        enableNetworkPolicy = "true"
      })
    }
  }

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  create_cluster_security_group = false
  create_node_security_group    = false

  eks_managed_node_groups = {
    default = {
      instance_types       = ["m5.large"]
      force_update_version = true
      release_version      = var.ami_release_version

      use_name_prefix          = false
      iam_role_name            = "${var.cluster_name}-ng-default"
      iam_role_use_name_prefix = false

      min_size     = 3
      max_size     = 6
      desired_size = 3

      update_config = {
        max_unavailable_percentage = 50
      }

      labels = {
        workshop-default = "yes"
      }
    }
  }

  tags = merge(local.tags, {
    "karpenter.sh/discovery" = var.cluster_name
  })
}
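Though not in the original file list, an optional outputs.tf can surface connection details after apply (the output names used here come from the terraform-aws-modules/eks v20 module):

```hcl
# Optional outputs.tf: expose useful values after "terraform apply"
output "cluster_name" {
  description = "Name of the EKS cluster"
  value       = module.eks.cluster_name
}

output "cluster_endpoint" {
  description = "EKS API server endpoint"
  value       = module.eks.cluster_endpoint
}
```

After applying, `terraform output cluster_name` prints the value needed when configuring kubectl in Step 5.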

5. variables.tf

variable "cluster_name" {
  description = "Name of the EKS cluster"
  type        = string
  default     = "eks-demo"
}

variable "cluster_version" {
  description = "EKS cluster version"
  type        = string
  default     = "1.30"
}

variable "ami_release_version" {
  description = "Default EKS AMI release version for node groups"
  type        = string
  default     = "1.30.0-20240625"
}

variable "vpc_cidr" {
  description = "CIDR block used by the Amazon VPC created for Amazon EKS"
  type        = string
  default     = "10.42.0.0/16"
}
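Any of these defaults can be overridden without editing the files, either on the command line (for example `terraform apply -var="cluster_name=my-eks"`) or with a terraform.tfvars file; the values below are purely illustrative:

```hcl
# terraform.tfvars (illustrative values)
cluster_name    = "my-eks"
cluster_version = "1.30"
vpc_cidr        = "10.50.0.0/16"
```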

Step 3: Initialize Terraform
Run the following to initialize Terraform:

terraform init


Step 4: Validate, Plan, and Apply

  1. Validate the deployment
terraform validate


2. Plan the deployment

 terraform plan


3. Apply the deployment

terraform apply


Step 5: Configure kubectl
Update the Kubernetes configuration to connect to the new cluster:

aws eks --region eu-west-1 update-kubeconfig --name eks-demo

Step 6: Verify Your EKS Cluster
List the nodes:

kubectl get nodes
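Beyond listing nodes, a couple of further checks (standard kubectl commands, run against the new cluster) help confirm everything is healthy:

```
kubectl get nodes -o wide        # node status, internal IPs, kubelet version
kubectl get pods -n kube-system  # core add-ons: aws-node (vpc-cni), coredns, kube-proxy
kubectl cluster-info             # API server endpoint
```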


Benefits of Using Terraform Modules

  • Reusable Components: Use pre-built modules for consistency.
  • Simplified Configurations: Modules handle complex resource dependencies.
  • Scalable Architecture: Makes scaling infrastructure seamless.

Conclusion

Deploying an EKS cluster with Terraform streamlines the process of setting up and managing Kubernetes workloads on AWS. By leveraging Terraform modules, you ensure a reliable, scalable, and efficient setup.

Happy Learning

Prithiviraj Rengarajan
DevOps Engineer
