The Amazon Web Services (AWS) CodePipeline team has cut the operational overhead for developers and DevOps engineers and streamlined deployments to EKS by introducing a CodePipeline action that deploys directly to your EKS cluster.
Why does it matter to you?
Previously, if I had to deploy resources to EKS the DevOps way, I had to manage a CodeBuild project, permissions to access EKS, kubectl and helm commands, and other horrific shell commands, and it still wouldn't be perfect in one shot.
I tried this out today, so let me show you how you can radically simplify your deployment pipeline for EKS: say goodbye to CodeBuild and remove all the complex processes, scripts, and commands!
Architecture
Prerequisites
- An EKS cluster with a public endpoint.
- A Kubernetes resource such as a deployment.yaml file. You can also use a Helm chart if you wish; whichever you prefer, I have provided both (a Helm chart and a deployment.yaml) in this repository. A minimal example follows below.
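For reference, here is a minimal deployment.yaml sketch in the spirit of the one in the repo. The app name and image URI are placeholders; the replica count and the ecr-sa service account tie into what you'll see later in this post:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-k8s                  # placeholder app name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-k8s
  template:
    metadata:
      labels:
        app: hello-k8s
    spec:
      serviceAccountName: ecr-sa   # service account used for ECR pull permissions (see pod_identity.tf)
      containers:
        - name: hello-k8s
          # placeholder ECR image URI
          image: <account-id>.dkr.ecr.us-east-1.amazonaws.com/hello-k8s:latest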
Behind the Scenes
In the case of a Kubernetes manifest file, the action:
- Logs in to the EKS cluster and sets the kubeconfig context
- Installs kubectl
- Applies the Kubernetes manifests
- Rolls out the resources (see the sketch below)
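Conceptually, that is roughly equivalent to the following commands. This is a sketch of the behavior, not the action's published internals, and the cluster name and region are placeholders:

# set the kubeconfig context for the target cluster
aws eks update-kubeconfig --name <cluster-name> --region us-east-1
# apply the manifest and wait for the rollout to finish
kubectl apply -f deployment.yaml
kubectl rollout status deployment/hello-k8s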
In the case of a Helm chart, the action:
- Logs in to the EKS cluster and sets the kubeconfig context
- Installs Helm
- Installs the Helm chart (see the sketch below)
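Again conceptually, something like the following (same caveat as above; the release name is a placeholder, and test-chart is the chart used later in this post):

# set the kubeconfig context for the target cluster
aws eks update-kubeconfig --name <cluster-name> --region us-east-1
# install the chart, or upgrade the release if it already exists
helm upgrade --install <release-name> ./test-chart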
Create an EKS cluster
Your cluster can be public or private (including those in private VPCs). The pipeline will automatically establish a connection into your private network to deploy your container application, with no additional infrastructure needed.
- I have used EKS Auto Mode via the Terraform EKS module, which is quick and easy, with the EKS cluster endpoint set to public.
eks.tf
# eks cluster
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.31"

  cluster_name    = local.cluster_name
  cluster_version = "1.32"

  cluster_endpoint_public_access = true

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  cluster_compute_config = {
    enabled    = true
    node_pools = ["general-purpose"]
  }

  # Cluster access entry
  # To add the current caller identity as an administrator
  enable_cluster_creator_admin_permissions = true

  tags = {
    Environment = "dev"
    Terraform   = "true"
  }
}
vpc.tf
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.19.0"

  name = "codepipeline-eks-action"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-1a", "us-east-1b"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets  = ["10.0.4.0/24", "10.0.5.0/24"]

  enable_nat_gateway     = true
  single_nat_gateway     = true
  one_nat_gateway_per_az = false

  enable_dns_hostnames = true
  enable_dns_support   = true

  public_subnet_tags = {
    "kubernetes.io/role/elb" = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = "1"
  }

  tags = {
    Environment = "terraform-playground"
  }
}
pod_identity.tf
data "aws_iam_policy_document" "allow_pod_identity" {
  statement {
    effect = "Allow"

    principals {
      type        = "Service"
      identifiers = ["pods.eks.amazonaws.com"]
    }

    actions = [
      "sts:AssumeRole",
      "sts:TagSession"
    ]
  }
}

resource "aws_iam_role" "read_ecr" {
  name               = "read-ecr-role"
  assume_role_policy = data.aws_iam_policy_document.allow_pod_identity.json
}

resource "aws_iam_role_policy_attachment" "read_ecr" {
  role       = aws_iam_role.read_ecr.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
}

resource "aws_eks_pod_identity_association" "read_ecr" {
  cluster_name    = local.cluster_name
  namespace       = "default"
  service_account = "ecr-sa"
  role_arn        = aws_iam_role.read_ecr.arn
}
Note: I am using Pod Identity with ECR read permission so that the image referenced in deployment.yaml can be pulled from ECR; see the sketch below.
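The association above only maps an IAM role to a service account name; the ecr-sa service account itself still has to exist in the cluster and be referenced by your pods. A minimal sketch (with Pod Identity, unlike IRSA, no role annotation is needed on the service account):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: ecr-sa        # must match service_account in the association
  namespace: default  # must match the association's namespace

The deployment's pod template then references it via serviceAccountName: ecr-sa.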
If you want the complete code for the cluster, you can find it in this repo.
Create a pipeline with the EKS deployment action
Case 1: When a Kubernetes manifest (kubectl) is used
I am using GitHub as the source via a CodeStar connection; the code resides in this repo.
- Select the cluster
- Provide the path to your manifest file (in my case it's deployment.yaml); see the sketch below
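For reference, if you export the pipeline definition as JSON, the deploy action looks roughly like this. Treat it as a sketch: the configuration key names (ClusterName, ManifestFiles) are my reading of the new EKS action's reference, so verify them against the current CodePipeline docs before copying:

{
    "name": "DeployToEKS",
    "actionTypeId": {
        "category": "Deploy",
        "owner": "AWS",
        "provider": "EKS",
        "version": "1"
    },
    "configuration": {
        "ClusterName": "<cluster-name>",
        "ManifestFiles": "deployment.yaml"
    },
    "inputArtifacts": [
        {
            "name": "SourceArtifact"
        }
    ]
}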
Case 2: When a Helm chart is used
- Enter the release name
- Enter the path to the Helm chart (in my case it's the test-chart); see the sketch below
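The Helm variant swaps the manifest setting for release and chart settings (same caveat as above; the key names are my reading of the action reference):

"configuration": {
    "ClusterName": "<cluster-name>",
    "HelmReleaseName": "<release-name>",
    "HelmChartLocation": "test-chart"
}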
Important step
Note: Once the pipeline is created, you need to edit the pipeline service role (or update an existing one) and add the following permissions to avoid errors.
{
    "Statement": [
        {
            "Action": [
                "iam:PassRole"
            ],
            "Resource": "*",
            "Effect": "Allow",
            "Condition": {
                "StringEqualsIfExists": {
                    "iam:PassedToService": [
                        "cloudformation.amazonaws.com",
                        "elasticbeanstalk.amazonaws.com",
                        "ec2.amazonaws.com",
                        "ecs-tasks.amazonaws.com"
                    ]
                }
            }
        },
        {
            "Action": [
                "codecommit:CancelUploadArchive",
                "codecommit:GetBranch",
                "codecommit:GetCommit",
                "codecommit:GetRepository",
                "codecommit:GetUploadArchiveStatus",
                "codecommit:UploadArchive"
            ],
            "Resource": "*",
            "Effect": "Allow"
        },
        {
            "Action": [
                "codedeploy:CreateDeployment",
                "codedeploy:GetApplication",
                "codedeploy:GetApplicationRevision",
                "codedeploy:GetDeployment",
                "codedeploy:GetDeploymentConfig",
                "codedeploy:RegisterApplicationRevision"
            ],
            "Resource": "*",
            "Effect": "Allow"
        },
        {
            "Action": [
                "codestar-connections:UseConnection"
            ],
            "Resource": "*",
            "Effect": "Allow"
        },
        {
            "Action": [
                "elasticbeanstalk:*",
                "ec2:*",
                "elasticloadbalancing:*",
                "autoscaling:*",
                "cloudwatch:*",
                "s3:*",
                "sns:*",
                "cloudformation:*",
                "rds:*",
                "sqs:*",
                "ecs:*"
            ],
            "Resource": "*",
            "Effect": "Allow"
        },
        {
            "Action": [
                "lambda:InvokeFunction",
                "lambda:ListFunctions"
            ],
            "Resource": "*",
            "Effect": "Allow"
        },
        {
            "Action": [
                "opsworks:CreateDeployment",
                "opsworks:DescribeApps",
                "opsworks:DescribeCommands",
                "opsworks:DescribeDeployments",
                "opsworks:DescribeInstances",
                "opsworks:DescribeStacks",
                "opsworks:UpdateApp",
                "opsworks:UpdateStack"
            ],
            "Resource": "*",
            "Effect": "Allow"
        },
        {
            "Action": [
                "cloudformation:CreateStack",
                "cloudformation:DeleteStack",
                "cloudformation:DescribeStacks",
                "cloudformation:UpdateStack",
                "cloudformation:CreateChangeSet",
                "cloudformation:DeleteChangeSet",
                "cloudformation:DescribeChangeSet",
                "cloudformation:ExecuteChangeSet",
                "cloudformation:SetStackPolicy",
                "cloudformation:ValidateTemplate"
            ],
            "Resource": "*",
            "Effect": "Allow"
        },
        {
            "Action": [
                "codebuild:BatchGetBuilds",
                "codebuild:StartBuild",
                "codebuild:BatchGetBuildBatches",
                "codebuild:StartBuildBatch"
            ],
            "Resource": "*",
            "Effect": "Allow"
        },
        {
            "Effect": "Allow",
            "Action": [
                "devicefarm:ListProjects",
                "devicefarm:ListDevicePools",
                "devicefarm:GetRun",
                "devicefarm:GetUpload",
                "devicefarm:CreateUpload",
                "devicefarm:ScheduleRun"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "servicecatalog:ListProvisioningArtifacts",
                "servicecatalog:CreateProvisioningArtifact",
                "servicecatalog:DescribeProvisioningArtifact",
                "servicecatalog:DeleteProvisioningArtifact",
                "servicecatalog:UpdateProduct"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "cloudformation:ValidateTemplate"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ecr:DescribeImages"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "states:DescribeExecution",
                "states:DescribeStateMachine",
                "states:StartExecution"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "appconfig:StartDeployment",
                "appconfig:StopDeployment",
                "appconfig:GetDeployment"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:us-east-1:xxxx:log-group:/aws/codepipeline/eks-deploy-codepipeline",
                "arn:aws:logs:us-east-1:xxxxx:log-group:/aws/codepipeline/eks-deploy-codepipeline:log-stream:*"
            ]
        },
        {
            "Sid": "EksClusterPolicy",
            "Effect": "Allow",
            "Action": "eks:DescribeCluster",
            "Resource": [
                "*"
            ]
        },
        {
            "Sid": "EksVpcClusterPolicy",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeDhcpOptions",
                "ec2:DescribeNetworkInterfaces",
                "ec2:DescribeRouteTables",
                "ec2:DescribeSubnets",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeVpcs"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "ec2:CreateNetworkInterface",
            "Resource": "*",
            "Condition": {
                "StringEqualsIfExists": {
                    "ec2:Subnet": [
                        "arn:aws:ec2:us-east-1:292170836962:subnet/subnet-example1",
                        "arn:aws:ec2:us-east-1:292170836962:subnet/subnet-example2"
                    ]
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": "ec2:CreateNetworkInterfacePermission",
            "Resource": "*",
            "Condition": {
                "ArnEquals": {
                    "ec2:Subnet": [
                        "arn:aws:ec2:us-east-1:xxxx:subnet/subnet-example1",
                        "arn:aws:ec2:us-east-1:xxxx:subnet/subnet-example2"
                    ]
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": "ec2:DeleteNetworkInterface",
            "Resource": "*",
            "Condition": {
                "StringEqualsIfExists": {
                    "ec2:Subnet": [
                        "arn:aws:ec2:us-east-1:xxxxx:subnet/subnet-example1",
                        "arn:aws:ec2:us-east-1:xxxx:subnet/subnet-example2"
                    ]
                }
            }
        }
    ],
    "Version": "2012-10-17"
}
- Create an access entry in the EKS cluster with the AmazonEKSClusterAdminPolicy access policy for the above CodePipeline service role, for example:
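You can do this from the console or with the AWS CLI; a sketch, with the cluster name, account ID, and role name as placeholders:

# register the pipeline's service role with the cluster
aws eks create-access-entry \
  --cluster-name <cluster-name> \
  --principal-arn arn:aws:iam::<account-id>:role/<codepipeline-service-role>

# grant it cluster-admin via the managed access policy
aws eks associate-access-policy \
  --cluster-name <cluster-name> \
  --principal-arn arn:aws:iam::<account-id>:role/<codepipeline-service-role> \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster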
Run the pipeline
Case 1: When the Kubernetes manifest is updated
(base) jatin.mehrotra@CK0662-001 codepipeline-eks-deployment-no-github % kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-k8s-74fd98b69b-2pwrv 1/1 Running 0 2m37s
hello-k8s-74fd98b69b-g876f 1/1 Running 0 2m37s
hello-k8s-74fd98b69b-phs77 1/1 Running 0 2m37s
Case 2: When the Helm chart is updated
(base) jatin.mehrotra@CK0662-001 codepipeline-eks-deployment-no-github % kubectl get pods
NAME READY STATUS RESTARTS AGE
test-67cbfddc66-2gcvj 1/1 Running 0 10m
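You can also verify the release itself with Helm (namespace assumed to be default here):

helm list -n default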
From a developer / DevOps perspective
- With this update, I don't have to manage CodeBuild projects or any other compute environment, nor maintain complex scripts, permissions, and tool installations.
- Below is an image of the setup I used to have with CodeBuild before this update. Such a complex mess.
- This is surely a game changer for developers, DevOps engineers, and infra engineers who want to focus on business problems, their applications, and other Kubernetes concerns like monitoring and scaling.
This is not the GitOps approach followed by Flux/Argo CD, but in my view it is the best DevOps approach for deploying to an EKS cluster.
Will you use this action for your EKS clusters? Let me know in the comments!
I share AWS updates like this on DevOps, Kubernetes, and GenAI daily on LinkedIn and X. Follow me there so I can make your life easier.