EKS Auto Mode Arrives in Terraform – Simplify Kubernetes Today

During AWS re:Invent 2024, AWS released a new EKS feature, EKS Auto Mode, which I have already covered in detail in my previous blog.

In this blog we will see how to create a cluster with EKS Auto Mode using the OG "terraform-aws-eks" module, and how it simplified my eks.tf code.

I will also talk about the differences between the Terraform code we used for an EKS cluster before the Auto Mode feature and the code after enabling Auto Mode, and how it saves a beginner who does not know anything about EKS.

Motivation

  • The Terraform AWS provider released a new version, v5.79.0, which adds the aws_eks_cluster arguments (compute_config, storage_config, and kubernetes_network_config.elastic_load_balancing) for EKS Auto Mode.

terraform provider aws eks resources

  • The Terraform EKS module released a new version, v20.31.0, which adds support for EKS Auto Mode and EKS Hybrid Nodes.

Terraform aws eks module support

Let's use the Terraform AWS EKS module for EKS Auto Mode

If you want to follow along, use this repository for the working code.

Enable EKS Auto Mode for a new cluster

node pool

  • One node pool (general-purpose) is created by EKS Auto Mode.

no nodes

No pods

  • There are no nodes or pods in the cluster (no workload is running). You could also say I did not provision any nodes so far, because that is now the job of EKS Auto Mode.

  • The moment I install a sample app using this code, EKS Auto Mode provisions the EC2 nodes by itself. What magic: zero management for node provisioning on my side.
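
The sample app itself can be as simple as a two-replica Deployment managed from Terraform. Here is a minimal sketch using the hashicorp/kubernetes provider (the name test and the nginx image are assumptions, chosen to mirror the pod names in the kubectl output further down):

```
# Minimal sample workload; EKS Auto Mode provisions nodes to schedule it.
# Assumes a kubernetes provider is already configured against the cluster.
resource "kubernetes_deployment" "test" {
  metadata {
    name = "test"
  }

  spec {
    replicas = 2

    selector {
      match_labels = {
        app = "test"
      }
    }

    template {
      metadata {
        labels = {
          app = "test"
        }
      }

      spec {
        container {
          name  = "test"
          image = "nginx:stable"
        }
      }
    }
  }
}
```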

Note: You need to update your kubeconfig (set the kubectl context) and run terraform apply again to deploy the sample app. The sample app wasn't deployed when you created the cluster because the context wasn't set.

```
aws eks --region us-east-1 update-kubeconfig --name tf-module-support --profile ck-test

terraform apply
```

node provisioning

```
kubectl get po
NAME                    READY   STATUS    RESTARTS   AGE
test-65b7dbddd4-j6mbt   1/1     Running   0          104s
test-65b7dbddd4-wdz56   1/1     Running   0          104s
```

What makes the difference

  • On the module side, cluster_compute_config is the input used to enable or disable EKS Auto Mode.

  • On the Terraform AWS provider side, the compute_config block on aws_eks_cluster is used to enable or disable EKS Auto Mode.
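
For reference, a raw-provider sketch of the same thing, assuming the aws_eks_cluster schema as of provider v5.79.0 (the cluster name is from this post; the role ARNs and subnet IDs are placeholders):

```
resource "aws_eks_cluster" "this" {
  name     = "tf-module-support"
  role_arn = aws_iam_role.cluster.arn

  # EKS Auto Mode requires the default self-managed add-on bootstrap to be off
  bootstrap_self_managed_addons = false

  # Enables EKS Auto Mode compute with the built-in general-purpose node pool
  compute_config {
    enabled       = true
    node_pools    = ["general-purpose"]
    node_role_arn = aws_iam_role.node.arn
  }

  storage_config {
    block_storage {
      enabled = true
    }
  }

  kubernetes_network_config {
    elastic_load_balancing {
      enabled = true
    }
  }

  vpc_config {
    subnet_ids = local.subnet_ids
  }
}
```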

Code difference

```
  cluster_compute_config = {
    enabled    = true
    node_pools = ["general-purpose"]
  }
```
  • Thanks to the cluster_compute_config option, I no longer have to define eks_managed_node_group_defaults, eks_managed_node_groups, or node_security_group_additional_rules, or even know what those concepts are.

The following code might look small, but someone who does not know about EKS and node provisioning would have to understand those concepts as well as figure out how to write it. Now, with EKS Auto Mode, there is no more node management. Just so sleek.

```
# code which is not needed anymore

eks_managed_node_group_defaults = {
  ami_type       = "AL2_x86_64"
  instance_types = ["m5.large"]
  # instance_types = ["t3.small"]
  # vpc_security_group_ids = [aws_security_group.all_worker_mgmt.id]
  iam_role_additional_policies = {
    ebs_policy                                 = "arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy" # IAM rights needed by the CSI driver
    auto_scaling_policy                        = "arn:aws:iam::aws:policy/AutoScalingFullAccess"
    cloudwatch_container_insights_agent_policy = "arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy"
    xray_policy                                = "arn:aws:iam::aws:policy/AWSXrayWriteOnlyAccess"
  }
}

eks_managed_node_groups = {
  node_group = {
    min_size     = 2
    max_size     = 5
    desired_size = local.node_group_desired_size
  }
}

node_security_group_additional_rules = {
  http_traffic_node_to_node = {
    description = "Allow inbound HTTP from self"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    self        = true
    type        = "ingress"
  }
}

# For triggering the managed node group desired size
resource "null_resource" "update_desired_size" {
  triggers = {
    desired_size = local.node_group_desired_size
  }

  provisioner "local-exec" {
    interpreter = ["/bin/bash", "-c"]

    command = <<-EOT
      aws eks update-nodegroup-config \
        --cluster-name ${module.eks.cluster_name} \
        --nodegroup-name ${element(split(":", module.eks.eks_managed_node_groups["node_group"].node_group_id), 1)} \
        --scaling-config desiredSize=${local.node_group_desired_size} \
        --region us-east-1 \
        --profile ck-test
    EOT
  }
}
```
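
With Auto Mode, the entire module call can shrink to roughly this. A sketch against module version v20.31.0 (the cluster version and VPC references are assumptions; the cluster name is from this post):

```
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.31"

  cluster_name    = "tf-module-support"
  cluster_version = "1.31"

  # The only compute-related input needed: Auto Mode handles the rest
  cluster_compute_config = {
    enabled    = true
    node_pools = ["general-purpose"]
  }

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets
}
```

No node group defaults, no extra IAM policies for the CSI driver or autoscaling, and no null_resource workaround for desired size.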

What isn't supported

From a DevOps/IaC perspective

  • We saw how to use EKS Auto Mode, a game-changing feature for container workloads: on a new cluster we did not have to plan or provision compute infrastructure for running our workloads.
  • Some bug fixes are still needed on the Terraform AWS provider side as well as the EKS side to enable EKS Auto Mode for an existing cluster; until then, use the console.
  • EKS Auto Mode not only takes operations away from the user, it also manages to simplify the IaC (Terraform code).
  • As a consumer of Terraform and the terraform-aws-eks module, it is mind-blowing to see the speed at which this feature was supported. All thanks to Bryant Biggs for supporting this community.
