Introduction
In the world of Kubernetes, building container images securely and efficiently is a common challenge. This is where Kaniko comes in. Kaniko is a tool that builds container images from a Dockerfile, inside a container or Kubernetes cluster, without requiring a Docker daemon or a privileged container. This post will delve into Kaniko's capabilities and provide a guide on setting it up within an AWS Elastic Kubernetes Service (EKS) cluster.
What is Kaniko?
Kaniko is an open-source tool developed by Google to build container images from a Dockerfile, securely in a Kubernetes cluster. Unlike traditional Docker builds that require privileged root access to perform tasks, Kaniko doesn't need Docker or privileged access, mitigating the security risks associated with container image builds.
Why Use Kaniko on AWS EKS?
- Security: Builds container images without Docker daemon, reducing the attack surface.
- Flexibility: Integrates seamlessly with various CI/CD tools and services.
- Efficiency: Leverages Kubernetes cluster resources for image builds.
Setting Up Kaniko on AWS EKS
Let's walk through setting up Kaniko on an AWS EKS cluster to build and push a container image to Amazon Elastic Container Registry (ECR).
Prerequisites
- An AWS account with access to EKS and ECR.
- kubectl configured to interact with your EKS cluster.
- AWS CLI configured on your machine.
- Docker installed on your machine.
- Credentials configured for Amazon ECR.
Step 1: Create an ECR Repository
First, create a repository in Amazon ECR where Kaniko will push the built images.
aws ecr create-repository --repository-name my-kaniko-example
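If you want to capture the repository URI for later use (it is what you will put in Kaniko's --destination flag), an optional sketch:
REPO_URI=$(aws ecr describe-repositories --repository-names my-kaniko-example \
  --query 'repositories[0].repositoryUri' --output text)
echo "$REPO_URI"   # e.g. <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/my-kaniko-example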
Step 2: Configure IAM Permissions
Kaniko needs permissions to push images to ECR. Create an IAM policy that grants the required permissions and attach it to the role associated with your EKS nodes.
- Create an IAM policy (kaniko-ecr-policy.json):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecr:GetAuthorizationToken",
"ecr:BatchCheckLayerAvailability",
"ecr:CompleteLayerUpload",
"ecr:InitiateLayerUpload",
"ecr:PutImage",
"ecr:UploadLayerPart"
],
"Resource": "*"
}
]
}
- Create the policy:
aws iam create-policy --policy-name KanikoECRPolicy --policy-document file://kaniko-ecr-policy.json
- Attach the policy to your EKS node role.
After creating the IAM policy that grants Kaniko permission to push images to Amazon ECR, attach it to the IAM role associated with your EKS nodes. Pods running on those nodes, including the Kaniko pod, pick up the node role's AWS permissions, which is why this attachment is needed.
Find Your EKS Node IAM Role
First, identify the IAM role used by your EKS nodes. You can find this information in the Amazon EKS console or by describing your EKS node group via AWS CLI:
aws eks describe-nodegroup --cluster-name your-cluster-name --nodegroup-name your-nodegroup-name
Look for the nodeRole field in the output, which will be the ARN of the IAM role.
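If you prefer to pull the value straight from the CLI, a quick sketch (using the same cluster and node group names as above):
NODE_ROLE_ARN=$(aws eks describe-nodegroup --cluster-name your-cluster-name \
  --nodegroup-name your-nodegroup-name --query 'nodegroup.nodeRole' --output text)
echo "${NODE_ROLE_ARN##*/}"   # the role name is the last segment of the ARN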
Attach the IAM Policy to the EKS Node Role
Once you have your EKS node IAM role ARN, attach the KanikoECRPolicy to it. You can do this through the AWS Management Console or the AWS CLI.
Using the AWS CLI:
aws iam attach-role-policy --role-name YourEKSNodeRoleName --policy-arn arn:aws:iam::your-account-id:policy/KanikoECRPolicy
Replace YourEKSNodeRoleName with the name of your EKS node IAM role (not the ARN) and your-account-id with your AWS account ID.
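If you don't have your account ID handy, you can look it up with the AWS CLI and build the policy ARN from it; a minimal sketch:
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
aws iam attach-role-policy --role-name YourEKSNodeRoleName \
  --policy-arn "arn:aws:iam::${ACCOUNT_ID}:policy/KanikoECRPolicy"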
Verifying the Policy Attachment
Ensure the policy was attached successfully by listing the policies attached to your EKS node role:
aws iam list-attached-role-policies --role-name YourEKSNodeRoleName
You should see KanikoECRPolicy in the list of attached policies.
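To see just the policy names rather than the full JSON output, you can add a query; optional:
aws iam list-attached-role-policies --role-name YourEKSNodeRoleName \
  --query 'AttachedPolicies[].PolicyName' --output text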
Step 3: Prepare Your Kubernetes Cluster
Create a Kubernetes secret to store your ECR credentials, which Kaniko will use to authenticate.
kubectl create secret docker-registry regcred \
--docker-server=<AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com \
--docker-username=AWS \
--docker-password=$(aws ecr get-login-password --region <AWS_REGION>) \
--docker-email=<YOUR_EMAIL>
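Note that ECR authorization tokens expire after roughly 12 hours, so for long-lived setups this secret needs to be refreshed. You can sanity-check that the secret was created with the expected type before running Kaniko; a quick optional check:
kubectl get secret regcred -o jsonpath='{.type}'
# expected output: kubernetes.io/dockerconfigjson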
Step 4: Deploy Kaniko Pod
Deploy a pod that uses Kaniko to build and push an image to your ECR repository. Define the pod in a YAML file (kaniko-pod.yaml):
apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args: ["--dockerfile=Dockerfile",
           "--context=git://github.com/<your-repo>.git#refs/heads/master",
           "--destination=<AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/my-kaniko-example:latest"]
    volumeMounts:
    - name: kaniko-secret
      mountPath: /kaniko/.docker
  restartPolicy: Never
  volumes:
  - name: kaniko-secret
    secret:
      secretName: regcred
      # the docker-registry secret stores its credentials under the key
      # .dockerconfigjson; Kaniko expects a file named config.json
      items:
      - key: .dockerconfigjson
        path: config.json
Replace placeholders with your specific details. Deploy the pod:
kubectl apply -f kaniko-pod.yaml
Step 5: Verify the Image Build and Push
Monitor the pod's logs to ensure the build and push process completes successfully:
kubectl logs kaniko
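To follow the build as it runs and confirm the pod finished cleanly, a couple of optional commands:
kubectl logs -f kaniko                                # stream the build output
kubectl get pod kaniko -o jsonpath='{.status.phase}'  # should end up as Succeeded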
Additional Method: Running Kaniko Inside a Docker Container
For local development or in CI/CD pipelines where Kubernetes isn't available, you can run Kaniko directly in a Docker container using the executor image from GCR. This lets you build and push container images to a registry like Amazon ECR without needing a full Kubernetes setup.
Step 1: Pull the Kaniko Executor Image
First, pull the Kaniko Executor image from Google Container Registry:
docker pull gcr.io/kaniko-project/executor:latest
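If you ever need a shell inside the Kaniko image for troubleshooting, the project also publishes a debug variant of the executor; pulling it is optional:
docker pull gcr.io/kaniko-project/executor:debug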
Step 2: Prepare Your Build Context and Dockerfile
Ensure your Dockerfile and any necessary files for the build (the build context) are located in a specific directory. This directory will be mounted into the Docker container running Kaniko.
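As a concrete (hypothetical) example, the build context could be as simple as a directory containing a one-line Dockerfile; the ~/kaniko-demo path and sample contents below are placeholders:
mkdir -p ~/kaniko-demo && cd ~/kaniko-demo
cat > Dockerfile <<'EOF'
FROM alpine:3.19
CMD ["echo", "built with kaniko"]
EOF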
Step 3: Run Kaniko in Docker
To build and push an image using Kaniko inside a Docker container, use the following command, adjusting paths and variables for your environment:
docker run \
-v /path/to/your/build/context:/workspace \
-v /path/to/.docker:/kaniko/.docker \
gcr.io/kaniko-project/executor:latest \
--dockerfile /workspace/Dockerfile \
--context dir:///workspace/ \
--destination=<AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/my-kaniko-example:latest
- /path/to/your/build/context is the local path to your Dockerfile and any files it needs.
- /path/to/.docker should contain your config.json with the ECR credentials.
- Adjust the --destination flag to point to your target ECR repository.
Step 4: Authenticate Docker with ECR
Kaniko performs the push itself, using the credentials in the config.json you mount at /kaniko/.docker, so make sure this login has been run before you start the Kaniko container in Step 3. The login writes the ECR credentials into your Docker config via the AWS CLI:
aws ecr get-login-password --region <AWS_REGION> | docker login --username AWS --password-stdin <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com
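After the login, the credentials typically land in ~/.docker/config.json, which is the directory the -v /path/to/.docker mount points at. One caveat: if your Docker config delegates to a credential helper (a credsStore entry), the token may not be stored in the file itself, and you may need to write a plain config.json for Kaniko. A quick way to confirm what got written:
cat ~/.docker/config.json   # look for an "auths" entry for your ECR registry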
Step 5: Verify the Image in ECR
After the build completes, check your Amazon ECR repository to verify that the image has been pushed successfully.
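You can also confirm the push from the command line; an optional check:
aws ecr describe-images --repository-name my-kaniko-example \
  --query 'imageDetails[].imageTags' --output table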
Combining the Best of Both Worlds
Running Kaniko within a Docker container offers a versatile solution for building container images, especially when Kubernetes isn't part of your immediate toolchain. This method provides a bridge between local development environments and cloud-native technologies, allowing for a seamless transition to Kubernetes when ready.
Conclusion
Kaniko emerges as a robust tool for building container images securely, whether in a Kubernetes cluster with AWS EKS or locally using Docker. Its ability to run without the Docker daemon makes it a safer and more compliant choice for CI/CD pipelines, fitting well into various development and deployment workflows.