In November 2024, AWS announced and released Kro as an open-source tool to simplify the creation and use of custom APIs in Kubernetes. On January 30, 2025, Google published a post on its blog announcing a collaboration between Azure, GCP, and AWS to develop this tool. At the moment, Kro is experimental, and the project is working to make it production-ready.
What is Kro?
Kro is a Kubernetes-native, cloud-agnostic tool. It is a way to define and group applications and their dependencies, encapsulating all of this as a single resource that can be easily consumed by end users.
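As a point of reference, the building block kro exposes is the ResourceGraphDefinition. The following is a minimal, hedged sketch of what one might look like; the kind name, schema fields, and image are illustrative and not taken from this blog's code:
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: web-app-stack                  # illustrative name
spec:
  schema:
    apiVersion: v1alpha1
    kind: WebAppStack                  # the custom kind end users will instantiate
    spec:
      name: string | required=true     # simple fields exposed to the end user
      replicas: integer | default=1
  resources:
    - id: deployment
      template:
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: ${schema.spec.name}
        spec:
          replicas: ${schema.spec.replicas}
          selector:
            matchLabels:
              app: ${schema.spec.name}
          template:
            metadata:
              labels:
                app: ${schema.spec.name}
            spec:
              containers:
                - name: app
                  image: nginx         # illustrative image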
What is ACK?
ACK is a set of controllers for defining and managing AWS service resources directly from Kubernetes. Check this introductory blog for a basic example using ACK on EKS.
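To give an idea of what an ACK-managed resource looks like, here is a hedged sketch of a Subnet handled by the ACK EC2 controller; the name, zone, CIDR, and VPC ID are placeholders, while the fields mirror the ones used later in this blog:
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: Subnet
metadata:
  name: public-subnet-az1              # placeholder name
spec:
  availabilityZone: us-east-1a         # placeholder availability zone
  cidrBlock: 10.0.1.0/24               # placeholder CIDR
  vpcID: vpc-0123456789abcdef0         # placeholder; later in this blog it is referenced dynamically
  mapPublicIPOnLaunch: true
  tags:
    - key: Name
      value: public-subnet-az1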
What is KCL?
KCL is a configuration language that lets us apply programming-language techniques to configuration. In this blog, KCL is used to generate Kubernetes manifests, reducing the limitations of managing plain YAML configurations.
After this short introduction, it is time to deploy an application using ACK controllers to create resources on AWS, packaging all dependencies with Kro and using KCL to generate Kubernetes manifests with more ease, order, and control while reducing code repetition (DRY).
Requirements
Please check all the code from this blog in this repository:
segoja7/kro_ack_k8s
All examples are divided into two sections:
- Infra resources are in the path ./infra
- Deploy resources are in the path ./deploy
Step 1.
Creating a minikube profile.
minikube start -p kro
Step 2.
Installing KRO.
export KRO_VERSION=$(curl -sL \
https://api.github.com/repos/kro-run/kro/releases/latest | \
jq -r '.tag_name | ltrimstr("v")'
)
helm install kro oci://ghcr.io/kro-run/kro/kro \
--namespace kro \
--create-namespace \
--version=${KRO_VERSION}
Step 3.
Installing ACK Controllers.
Creating a new namespace for ACK.
➜ kubectl create ns ack-system
namespace/ack-system created
With the namespace created, it is necessary to define an AWS profile that will be used to create the resources. In this case, a profile.txt file is used, which is then turned into a secret in the new namespace.
➜ cat ~/kro/profile.txt
[default]
aws_access_key_id = <access_key_id>
aws_secret_access_key = <secret_access_key>
Note: This profile is used in an experimental, basic example; in other scenarios, IRSA (IAM Roles for Service Accounts) is recommended.
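For reference, with IRSA the controller's service account is annotated with an IAM role instead of relying on static keys. A minimal, hedged sketch, where the service account name and role ARN are placeholders:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ack-ec2-controller             # actual name depends on the chart values
  namespace: ack-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/ack-ec2-controller-role  # placeholder role ARN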
kubectl create secret generic aws-credentials -n ack-system --from-file=credentials=/home/segoja7/kro/profile.txt
secret/aws-credentials created
Install ACK Controllers
CONTROLLER_REGION=us-east-1; \
SERVICES=("iam" "ec2" "eks"); \
for SERVICE in "${SERVICES[@]}"; do \
RELEASE_VERSION=$(curl -sL https://api.github.com/repos/aws-controllers-k8s/${SERVICE}-controller/releases/latest | jq -r '.tag_name | ltrimstr("v")'); \
if [[ -z "$RELEASE_VERSION" ]]; then \
echo "Error: Could not retrieve release version for ${SERVICE}."; \
continue; \
fi; \
helm install --create-namespace -n ack-system \
oci://public.ecr.aws/aws-controllers-k8s/${SERVICE}-chart \
--version="${RELEASE_VERSION}" \
--generate-name \
--set aws.region="${CONTROLLER_REGION}" \
--set aws.credentials.secretName=aws-credentials \
--set aws.credentials.profile=default; \
if [[ $? -eq 0 ]]; then \
echo "Successfully installed ${SERVICE} controller."; \
else \
echo "Error: Failed to install ${SERVICE} controller."; \
fi; \
done
Step 4.
Creating a ResourceGraphDefinition for Infrastructure
KCL is used here so that each resource is defined only once, avoiding duplicated code.
In a typical scenario, it would be necessary to define the same resource multiple times.
For example, check this example on the ACK page: link
With KCL, it is possible to apply logic to create, in this case, the subnets and associate each one with its correct route table.
resources += [{
id = subnet_config.name
template = {
apiVersion = "ec2.services.k8s.aws/v1alpha1"
kind = "Subnet"
metadata = {
name = subnet_config.name + my_config.project_name
}
spec = {
availabilityZone = subnet_config.zone
cidrBlock = subnet_config.cidr
vpcID = r"""${vpc.status.vpcID}"""
if subnet_config.type == "public":
mapPublicIPOnLaunch = True
routeTables = [
r"""${routetablepublic.status.routeTableID}"""
]
tags = [
{
key = "ManagedBy"
value = apiVersion
}
{
key = "Name"
value = metadata.name
}
{
key = "kubernetes.io/role/elb"
value = "1"
}
{
key = r"""kubernetes.io/cluster/cluster""" + my_config.project_name + """"""
value = "shared"
}
]
else:
mapPublicIPOnLaunch = False
routeTables = [
r"""${routetableprivate.status.routeTableID}"""
]
tags = [
{
key = "ManagedBy"
value = apiVersion
}
{
key = "Name"
value = metadata.name
}
{
key = "kubernetes.io/role/internal-elb"
value = "1"
}
{
key = r"""kubernetes.io/cluster/cluster""" + my_config.project_name + """"""
value = "shared"
}
]
}
}
} for subnet_config in my_config.subnet_configs]
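To make the loop above more concrete, this is roughly the resource entry it renders for one public subnet inside the ResourceGraphDefinition. The zone, CIDR, and <project_name> suffix shown here are assumptions standing in for the values that actually come from my_config:
- id: publicsubnetaz1
  template:
    apiVersion: ec2.services.k8s.aws/v1alpha1
    kind: Subnet
    metadata:
      name: publicsubnetaz1<project_name>       # assumed name composition
    spec:
      availabilityZone: us-east-1a              # from subnet_config.zone
      cidrBlock: 10.0.1.0/24                    # from subnet_config.cidr
      vpcID: ${vpc.status.vpcID}
      mapPublicIPOnLaunch: true
      routeTables:
        - ${routetablepublic.status.routeTableID}
      tags:
        - key: ManagedBy
          value: ec2.services.k8s.aws/v1alpha1
        - key: Name
          value: publicsubnetaz1<project_name>
        - key: kubernetes.io/role/elb
          value: "1"
        - key: kubernetes.io/cluster/cluster<project_name>
          value: shared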
Applying the Infra ResourceGraphDefinition
kcl infra-sample-app-stack.k | kubectl apply -f -
resourcegraphdefinition.kro.run/infrasampleappstack.kro.run created
In this case, all infrastructure resources are packaged in a Custom Resource Definition called infrasampleappstacks with kind InfraSampleAppStack, ready to be instantiated by an end user.
What resources are created:
- vpc
- internetgateway
- routetablepublic
- elasticipaddress
- clusterrole
- clusternoderole
- clusteradminrole
- publicsubnetaz1
- publicsubnetaz2
- natgateway
- routetableprivate
- appprivatesubnetaz1
- appprivatesubnetaz2
- cluster
- clusternodegroup
Creating an Instance with the new CRD InfraSampleAppStack
kubectl apply -f sample-app-instance.yaml
infrasampleappstack.kro.run/infrasampleappstack created
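The contents of sample-app-instance.yaml are in the repository. Here is a hedged sketch of what such an instance could look like; the apiVersion follows kro's default group/version, and the spec fields are illustrative since the real schema is defined in infra-sample-app-stack.k:
apiVersion: kro.run/v1alpha1
kind: InfraSampleAppStack
metadata:
  name: infrasampleappstack
spec:
  projectName: sample-app              # illustrative field, assumed from the schema
  region: us-east-1                    # illustrative field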
Validating Resources in the console.
When all resources are deployed, the state changes from IN_PROGRESS to ACTIVE.
Step 5.
Deploying a ResourceGraphDefinition for the application
After all the infrastructure is created, it is time to deploy the application, in this case the popular 2048 game, using a basic pipeline with Tekton.
Installing Tekton
kubectl create ns tekton-tasks
namespace/tekton-tasks created
Copying the AWS credentials secret from the ack-system namespace to the tekton-tasks namespace; this secret is later used by the pipeline tasks.
kubectl get secret aws-credentials -n ack-system -o yaml \
| sed "s/namespace: ack-system/namespace: tekton-tasks/" \
| kubectl apply -f -
secret/aws-credentials created
Installing Tekton CRDS
kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
namespace/tekton-pipelines created
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-controller-cluster-access created
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-controller-tenant-access created
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-webhook-cluster-access created
clusterrole.rbac.authorization.k8s.io/tekton-events-controller-cluster-access created
Applying the deploy ResourceGraphDefinition
kcl sample-app-stack.k | kubectl apply -f -
resourcegraphdefinition.kro.run/deploysampleappstack.kro.run created
In this case, all deploy resources are packaged in a Custom Resource Definition called deploysampleappstack with kind DeploySampleAppStack, ready to be instantiated by an end user.
What resources are created:
authenticateanddeploy: A task that receives 3 parameters, "cluster-name", "aws-region", and "game-yaml-url". This task has 5 steps:
- Install kubectl.
- Export the environment variable AWS_SHARED_CREDENTIALS_FILE using the secret and then run aws eks update-kubeconfig --name $(params.cluster-name) --region $(params.aws-region) to connect to the EKS cluster.
- Execute kubectl get nodes to validate connectivity with the cluster.
- Deploy the 2048 game with a service that creates a Classic Load Balancer, using the raw URL passed in the "game-yaml-url" parameter.
- Execute kubectl get svc -n default to retrieve the DNS name of the Classic Load Balancer.
tektonpipeline: References the task (authenticateanddeploy) and passes the parameters to its steps. A hedged sketch of the shape of these Tekton resources follows below.
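The full definitions live in the repository's KCL. This abbreviated sketch only illustrates the shape of the Task and Pipeline; the image, script, secret mount path, and pipeline name are assumptions, and only one of the five steps is shown:
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: authenticateanddeploy
  namespace: tekton-tasks
spec:
  params:
    - name: cluster-name
      type: string
    - name: aws-region
      type: string
    - name: game-yaml-url
      type: string
  steps:
    - name: update-kubeconfig                  # one of the five steps described above
      image: amazon/aws-cli                    # illustrative image
      script: |
        # path where the aws-credentials secret is assumed to be mounted (mount omitted for brevity)
        export AWS_SHARED_CREDENTIALS_FILE=/secrets/credentials
        aws eks update-kubeconfig --name $(params.cluster-name) --region $(params.aws-region)
---
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: deploy-to-eks                          # assumed pipeline name
  namespace: tekton-tasks
spec:
  params:
    - name: cluster-name
    - name: aws-region
    - name: game-yaml-url
  tasks:
    - name: authenticate-and-deploy
      taskRef:
        name: authenticateanddeploy
      params:
        - name: cluster-name
          value: $(params.cluster-name)
        - name: aws-region
          value: $(params.aws-region)
        - name: game-yaml-url
          value: $(params.game-yaml-url)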
Creating an Instance with the new CRD DeploySampleAppStack
kubectl apply -f sample-app-instance.yaml
deploysampleappstack.kro.run/deploysampleappstack created
Validating Resources
Triggering the Tekton pipeline.
kubectl apply -f pipelinerun.yaml
pipelinerun.tekton.dev/deploy-to-eks-run created
When the pipelinerun.yaml is applied, a new pod is created that executes the pipeline tasks, deploying the app to the EKS cluster. A hedged sketch of the PipelineRun is shown below.
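A sketch of what pipelinerun.yaml might contain; the PipelineRun name matches the output above, while the pipeline name and parameter values are assumptions:
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: deploy-to-eks-run
  namespace: tekton-tasks
spec:
  pipelineRef:
    name: deploy-to-eks                # assumed pipeline name, matching the sketch above
  params:
    - name: cluster-name
      value: cluster-sample-app        # placeholder cluster name
    - name: aws-region
      value: us-east-1
    - name: game-yaml-url
      value: https://example.com/game-2048.yaml   # placeholder URL to the 2048 manifest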
Testing the app.
Cleanup.
kubectl delete -f deploy/pipelinerun.yaml
pipelinerun.tekton.dev "deploy-to-eks-run" deleted
kubectl delete -f deploy/sample-app-instance.yaml
deploysampleappstack.kro.run "deploysampleappstack" deleted
kcl deploy/sample-app-stack.k | kubectl delete -f -
resourcegraphdefinition.kro.run "deploysampleappstack.kro.run" deleted
kubectl delete -f infra/sample-app-instance.yaml
infrasampleappstack.kro.run "infrasampleappstack" deleted
kcl infra/infra-sample-app-stack.k | kubectl delete -f -
resourcegraphdefinition.kro.run "infrasampleappstack.kro.run" deleted
Don't forget to clean up the Classic Load Balancer created by the service.
Conclusion: This tutorial demonstrated an approach to deploying applications on AWS EKS using Kro, ACK controllers, and KCL. This combination of tools significantly reduces code duplication, improves maintainability and overall developer productivity, and provides a robust and efficient workflow for managing Kubernetes deployments on AWS.