Ravindu Fernando
A Production Ready EKS Deployment with IaC & GitOps - Part 4 - Deploying VPC

Refer here for the corresponding Git commit: Deploying VPC

In this article, we'll explore how to set up a production-ready VPC for our Amazon EKS cluster using Terraform. We'll walk through each component and explain how you can customize them for your specific needs.

File Structure

For this VPC deployment, we'll work with two main files:

  • data.tf: Contains data sources for AWS resources
  • vpc.tf: Contains the VPC module configuration

Understanding the Data Sources

Let's start with data.tf:

data "aws_availability_zones" "available" {
  filter {
    name   = "opt-in-status"
    values = ["opt-in-not-required"]
  }
}

locals {
  cluster_name = "${var.cluster_name}-${var.env}"
}

This data source fetches the AWS Availability Zones (AZs) in our region that don't require opt-in, ensuring our VPC spans reliable, readily available AZs.

We also define a local that appends the environment to the cluster name set in variables.tf.
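
To sanity-check what the data source and the local resolve to, you could temporarily expose them as outputs (an optional addition, not part of the original files):

# Temporary outputs for verification; remove once confirmed.
output "available_azs" {
  value = data.aws_availability_zones.available.names
}

output "resolved_cluster_name" {
  value = local.cluster_name
}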

The variables.tf file should be updated with the following code block:

variable "vpc_name" {
  default     = "opsninja"
}

variable "cluster_name" {
  default     = "opsninja"
}

VPC Configuration Breakdown

Here's the complete configuration for the VPC, in vpc.tf:

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.15.0"

  name = "${var.vpc_name}-${var.env}-vpc"

  cidr = "10.1.0.0/16"
  azs  = slice(data.aws_availability_zones.available.names, 0, 3)

  private_subnets = ["10.1.1.0/24", "10.1.2.0/24", "10.1.3.0/24"]
  public_subnets  = ["10.1.4.0/24", "10.1.5.0/24", "10.1.6.0/24"]
  intra_subnets   = ["10.1.7.0/24", "10.1.8.0/24", "10.1.9.0/24"]

  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  public_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/elb"                      = 1
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"             = 1
    "karpenter.sh/discovery"                      = local.cluster_name
  }
}

Now, let's examine each part of the VPC configuration in vpc.tf:

Module Source and Version

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.15.0"

We're using the widely adopted community-maintained terraform-aws-modules VPC module. You can update the version number as newer releases become available.
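
Pinning an exact version keeps applies reproducible. If you'd rather pick up patch releases automatically, Terraform's pessimistic constraint operator is an alternative (a sketch, not the configuration used in this series):

  # "~> 5.15.0" accepts any 5.15.x patch release but not 5.16.0.
  version = "~> 5.15.0"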

VPC Naming

name = "${var.vpc_name}-${var.env}-vpc"

The VPC name follows the format [vpc_name]-[environment]-vpc, for example: opsninja-prod-vpc.

Customization Point:

  • Modify var.vpc_name in variables.tf to match your project name
  • The env variable determines the environment (prod, dev, etc.)
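
The env variable referenced here isn't declared in the snippet above; if you're wiring this up from scratch, a minimal declaration might look like this (an assumption on my part — check the earlier parts of this series for the actual definition):

variable "env" {
  description = "Deployment environment used as a suffix in resource names, e.g. prod or dev"
  type        = string
  default     = "prod"
}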

Network Configuration

cidr = "10.1.0.0/16"
azs  = slice(data.aws_availability_zones.available.names, 0, 3)

private_subnets = ["10.1.1.0/24", "10.1.2.0/24", "10.1.3.0/24"]
public_subnets  = ["10.1.4.0/24", "10.1.5.0/24", "10.1.6.0/24"]
intra_subnets   = ["10.1.7.0/24", "10.1.8.0/24", "10.1.9.0/24"]

Customization Points:

  • CIDR Range: Modify 10.1.0.0/16 based on your network requirements. Note that each pod requires its own IP address, so consider future growth in pod count: for 1000 pods, a /24 subnet (256 addresses) might be too small; for 5000+ pods, consider larger subnets such as /22 (1024 addresses) or bigger. The current configuration (/24) provides roughly 250 usable IPs per subnet.
  • Subnet CIDRs: Adjust subnet ranges, ensuring they fall within your VPC CIDR. Private subnets host your worker nodes and pods, so size them larger than the public and intra subnets.
    Example sizing for high pod density (also derived programmatically in the sketch after this list):

     private_subnets = ["10.1.0.0/22", "10.1.4.0/22", "10.1.8.0/22"]    # ~1000 IPs each
     public_subnets  = ["10.1.12.0/24", "10.1.13.0/24", "10.1.14.0/24"] # ~250 IPs each
     intra_subnets   = ["10.1.15.0/24", "10.1.16.0/24", "10.1.17.0/24"] # ~250 IPs each

  • Number of AZs: The slice function selects the first three AZs; modify if needed
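
If you'd rather derive these ranges than hand-compute them, Terraform's built-in cidrsubnets function can carve non-overlapping blocks out of the VPC CIDR. The following is an optional sketch, not part of the original files; it reproduces the example sizing above:

locals {
  vpc_cidr = "10.1.0.0/16"

  # Each argument is the number of bits added to the /16 prefix:
  # 6 -> /22 blocks (private), 8 -> /24 blocks (public and intra).
  carved_cidrs = cidrsubnets(local.vpc_cidr, 6, 6, 6, 8, 8, 8, 8, 8, 8)

  private_subnets = slice(local.carved_cidrs, 0, 3) # 10.1.0.0/22, 10.1.4.0/22, 10.1.8.0/22
  public_subnets  = slice(local.carved_cidrs, 3, 6) # 10.1.12.0/24 through 10.1.14.0/24
  intra_subnets   = slice(local.carved_cidrs, 6, 9) # 10.1.15.0/24 through 10.1.17.0/24
}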

Network Features

enable_nat_gateway   = true
single_nat_gateway   = true
enable_dns_hostnames = true

Configuration Notes:

  • single_nat_gateway: Set to true for cost optimization; set to false for high availability (see the sketch after this list)
  • enable_dns_hostnames: Required for EKS functionality
  • NAT Gateway: Required for outbound internet access from the private subnets
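
For production clusters where a single NAT gateway would be an unacceptable point of failure, the module also supports one gateway per AZ. A minimal sketch of that variant, using the module's one_nat_gateway_per_az input inside the same module "vpc" block:

  # High-availability variant: one NAT gateway per AZ (roughly 3x the NAT cost).
  enable_nat_gateway     = true
  single_nat_gateway     = false
  one_nat_gateway_per_az = true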

Subnet Tagging

public_subnet_tags = {
  "kubernetes.io/cluster/${local.cluster_name}" = "shared"
  "kubernetes.io/role/elb"                      = 1
}

private_subnet_tags = {
  "kubernetes.io/cluster/${local.cluster_name}" = "shared"
  "kubernetes.io/role/internal-elb"             = 1
  "karpenter.sh/discovery"                      = local.cluster_name
}

These tags are crucial for:

  • EKS cluster networking
  • Load balancer provisioning
  • Karpenter node auto-discovery. We haven't deployed Karpenter yet, but since we're already configuring the VPC, we might as well add these tags now (see the sketch below).
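
As a quick illustration of how that tag-based discovery works, the aws_subnets data source can list the subnets Karpenter will later match via its discovery tag. This is an optional verification sketch, not part of the original files:

# Lists the subnet IDs carrying the karpenter.sh/discovery tag, i.e. the
# subnets Karpenter's selectors will resolve once it is deployed.
data "aws_subnets" "karpenter_discovery" {
  filter {
    name   = "tag:karpenter.sh/discovery"
    values = [local.cluster_name]
  }
}

output "karpenter_discovery_subnets" {
  value = data.aws_subnets.karpenter_discovery.ids
}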
