Table of Contents
Introduction
Prerequisites
Setting Up AKS Cluster
Create a resource group
Create an AKS Cluster
Connect to AKS Cluster
Creating a Kubernetes Manifest File
Deploying the Application
Get the external IP
Scaling the Application
Verify the application was launched successfully
Cleaning Up Resources
Summary
Introduction
Azure Kubernetes Service (AKS) is a managed Kubernetes service that allows you to deploy, manage, and scale containerized applications in the cloud. Kubernetes uses manifest files written in YAML to define the desired state of applications, including deployments, services, and configurations. This guide provides a step-by-step approach to deploying an application on an AKS cluster using a Kubernetes manifest file.
If you are new to Kubernetes, start with this introduction: https://dev.to/seyilufadejucyberservices/getting-started-with-kubernetes-an-introduction-k8s-4nc8
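As a quick illustration of the "desired state" idea, here is a minimal sketch of a Deployment manifest. The name hello-web and the nginx image are placeholders for illustration only; they are not part of the application deployed later in this guide.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web            # hypothetical name, for illustration only
spec:
  replicas: 2                # desired state: keep two pods running
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: hello-web
          image: nginx:1.25  # any container image can go here
          ports:
            - containerPort: 80
Kubernetes continuously compares the cluster's actual state to manifests like this and reconciles any difference, which is what the larger manifest later in this guide relies on.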
Prerequisites
Before proceeding, ensure that you have the following:
- An active Azure subscription
- Azure CLI installed (see the official installation guide)
- kubectl installed (a quick version check is shown after this list)
- A Docker image of the application stored in Azure Container Registry (ACR) or Docker Hub
- A basic understanding of Kubernetes concepts
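To confirm that the Azure CLI and kubectl are installed correctly, you can check their versions (the exact version numbers will vary on your machine):
az --version
kubectl version --client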
Setting Up AKS Cluster
Log in to Azure:
az login
Set the subscription (if needed):
az account set --subscription "<subscription-id>"
Create a resource group
az group create --name aks-rg --location eastus
Create an AKS Cluster
az aks create --resource-group aks-rg --name my-aks-cluster --node-count 1 --generate-ssh-keys
Alternatively, you can create the cluster in the Azure portal:
- In the Azure portal, search for Kubernetes services
- Click Create, then Create a Kubernetes cluster
- On the Basics tab, select your resource group, set the Kubernetes cluster name, and choose a region
- On the Node pools tab, add a node pool or keep the default agent pool, and optionally enable virtual nodes
- Click Review + create
- Wait for the deployment to complete
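If your application image is stored in ACR, as listed in the prerequisites, the cluster also needs permission to pull from that registry. A minimal sketch, assuming a registry named myregistry (replace it with your own registry name); note that the sample images used later in this guide are public on ghcr.io, so this step is only required for your own ACR-hosted images:
az aks update --resource-group aks-rg --name my-aks-cluster --attach-acr myregistry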
Connect to AKS Cluster
Configure kubectl to connect to the cluster:
az aks get-credentials --resource-group aks-rg --name my-aks-cluster
Verify the connection by listing the cluster nodes:
kubectl get nodes
Creating a Kubernetes Manifest File
Create a file named aks-store.yaml and copy in the following manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
        - name: rabbitmq
          image: mcr.microsoft.com/mirror/docker/library/rabbitmq:3.10-management-alpine
          ports:
            - containerPort: 5672
              name: rabbitmq-amqp
            - containerPort: 15672
              name: rabbitmq-http
          env:
            - name: RABBITMQ_DEFAULT_USER
              value: "username"
            - name: RABBITMQ_DEFAULT_PASS
              value: "password"
          resources:
            requests:
              cpu: 10m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
          volumeMounts:
            - name: rabbitmq-enabled-plugins
              mountPath: /etc/rabbitmq/enabled_plugins
              subPath: enabled_plugins
      volumes:
        - name: rabbitmq-enabled-plugins
          configMap:
            name: rabbitmq-enabled-plugins
            items:
              - key: rabbitmq_enabled_plugins
                path: enabled_plugins
---
apiVersion: v1
data:
  rabbitmq_enabled_plugins: |
    [rabbitmq_management,rabbitmq_prometheus,rabbitmq_amqp1_0].
kind: ConfigMap
metadata:
  name: rabbitmq-enabled-plugins
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  selector:
    app: rabbitmq
  ports:
    - name: rabbitmq-amqp
      port: 5672
      targetPort: 5672
    - name: rabbitmq-http
      port: 15672
      targetPort: 15672
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
        - name: order-service
          image: ghcr.io/azure-samples/aks-store-demo/order-service:latest
          ports:
            - containerPort: 3000
          env:
            - name: ORDER_QUEUE_HOSTNAME
              value: "rabbitmq"
            - name: ORDER_QUEUE_PORT
              value: "5672"
            - name: ORDER_QUEUE_USERNAME
              value: "username"
            - name: ORDER_QUEUE_PASSWORD
              value: "password"
            - name: ORDER_QUEUE_NAME
              value: "orders"
            - name: FASTIFY_ADDRESS
              value: "0.0.0.0"
          resources:
            requests:
              cpu: 1m
              memory: 50Mi
            limits:
              cpu: 75m
              memory: 128Mi
      initContainers:
        - name: wait-for-rabbitmq
          image: busybox
          command:
            [
              "sh",
              "-c",
              "until nc -zv rabbitmq 5672; do echo waiting for rabbitmq; sleep 2; done;",
            ]
          resources:
            requests:
              cpu: 1m
              memory: 50Mi
            limits:
              cpu: 75m
              memory: 128Mi
---
apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 3000
      targetPort: 3000
  selector:
    app: order-service
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: product-service
  template:
    metadata:
      labels:
        app: product-service
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
        - name: product-service
          image: ghcr.io/azure-samples/aks-store-demo/product-service:latest
          ports:
            - containerPort: 3002
          resources:
            requests:
              cpu: 10m
              memory: 64Mi
            limits:
              cpu: 100m
              memory: 128Mi
---
apiVersion: v1
kind: Service
metadata:
  name: product-service
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 3002
      targetPort: 3002
  selector:
    app: product-service
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: store-front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: store-front
  template:
    metadata:
      labels:
        app: store-front
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
        - name: store-front
          image: ghcr.io/azure-samples/aks-store-demo/store-front:latest
          ports:
            - containerPort: 8080
              name: store-front
          env:
            - name: VUE_APP_ORDER_SERVICE_URL
              value: "http://order-service:3000/"
            - name: VUE_APP_PRODUCT_SERVICE_URL
              value: "http://product-service:3002/"
          resources:
            requests:
              cpu: 1m
              memory: 200Mi
            limits:
              cpu: 1000m
              memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
  name: store-front
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: store-front
  type: LoadBalancer
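Before deploying, you can optionally validate the manifest with a client-side dry run; this catches YAML indentation and schema mistakes without creating anything in the cluster:
kubectl apply --dry-run=client -f aks-store.yaml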
Deploying the Application
Apply the manifest file to deploy the application:
kubectl apply -f aks-store.yaml
Get the external IP
List the services and note the EXTERNAL-IP assigned to the store-front service:
kubectl get service
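Provisioning the public IP for the LoadBalancer can take a few minutes. One way to wait for it is to watch the store-front service until the EXTERNAL-IP column changes from pending to an address:
kubectl get service store-front --watch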
Verify the application was launched successfully
Check that all pods are in the Running state:
kubectl get pods
Once every pod is running and the store-front service has an external IP, open http://<EXTERNAL-IP> in a browser to confirm the store front loads.
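Scaling the Application
Kubernetes makes scaling a matter of changing the desired replica count; the Deployment controller then creates or removes pods to match. For example, to run three copies of the store front:
kubectl scale deployment store-front --replicas=3
kubectl get pods
Re-running kubectl get pods should now show three store-front pods.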
Cleaning Up Resources
To delete the deployment:
kubectl delete -f aks-store.yaml
To delete the AKS cluster:
az aks delete --resource-group aks-rg --name my-aks-cluster --yes --no-wait
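If the resource group was created only for this walkthrough, you can also remove the group itself, which deletes the cluster and everything else inside it:
az group delete --name aks-rg --yes --no-wait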
Summary
In this guide, we covered the step-by-step process of deploying an application on an AKS cluster using a Kubernetes manifest file. We created an AKS cluster, wrote a Kubernetes deployment manifest, applied the deployment, exposed it using a service, scaled it, and cleaned up resources. This hands-on approach provides a foundational understanding of deploying and managing containerized applications on AKS.