In K8s, ClusterIP, NodePort, and LoadBalancer are types of Services used to expose applications:
- ClusterIP (default) exposes the service internally within the cluster, making it accessible only to other services and pods.
- NodePort exposes the service on a static port on each node's IP, allowing external access via <NodeIP>:<NodePort>.
- LoadBalancer provisions an external load balancer (on cloud providers) that routes traffic to the service, simplifying access from the internet.
- ExternalName maps a service to an external DNS name (e.g., my-db.example.com), without proxying traffic through Kubernetes.
- Headless Service (ClusterIP set to None) directly exposes pod IPs without a load balancer or proxy, useful for service discovery in StatefulSets.
Use Cases of K8s Services
ClusterIP: Internal Communication & Microservices
- It is used for communication between microservices within a Kubernetes cluster.
- Sample Scenario: A backend service (e.g., order-service) needs to talk to a database or another internal API (e.g., payment-service) without exposing them to external traffic.
NodePort: Simple External Access & Development
- It is used when external traffic needs to reach a service without cloud load balancers.
- Sample Scenario: Exposing a development environment or a dashboard (e.g., Prometheus, Grafana) for access via <NodeIP>:<NodePort>.
LoadBalancer: Public-Facing Services in Cloud Environments
- It is used for exposing services externally in cloud-based deployments (AWS, GCP, Azure).
- Sample Scenario: A production web application running in Kubernetes needs to be accessible to users over the internet with an auto-scaled cloud-managed load balancer.
ExternalName: Service Abstraction for External Dependencies
- It is used when an internal service needs to reference an external database or API via a DNS name.
- Sample Scenario: A Kubernetes-based application connects to an externally managed database (e.g., Amazon RDS) via db-service.example.com.
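For reference, a minimal ExternalName manifest might look like the sketch below (the service name and external hostname are illustrative and not part of the hands-on later in this post):

apiVersion: v1
kind: Service
metadata:
  name: db-service
spec:
  type: ExternalName
  # In-cluster lookups of "db-service" return a CNAME pointing to this external host.
  externalName: db-service.example.com

Pods then connect to db-service as if it were an in-cluster service, while DNS resolves it to the external endpoint.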
Headless Service: Service Discovery & Stateful Applications
- It is used for applications requiring direct pod access, such as databases or stateful workloads.
- Sample Scenario: A Kafka or Elasticsearch cluster where clients need to communicate directly with individual pods for leader-election or sharding.
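For reference, a headless Service is just a normal Service with clusterIP set to None; a minimal sketch (names and ports here are illustrative, e.g. for a Kafka StatefulSet):

apiVersion: v1
kind: Service
metadata:
  name: kafka-headless
spec:
  clusterIP: None        # headless: no virtual IP, DNS returns the pod IPs directly
  selector:
    app: kafka
  ports:
    - port: 9092
      targetPort: 9092

With a StatefulSet, this also gives each pod a stable DNS name such as <podName>.kafka-headless.<namespace>.svc.cluster.local, which is what clients use for direct pod access.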
Hands-on Sample
This scenario shows how to create Services (ClusterIP, NodePort, and LoadBalancer). It proceeds as follows:
- Create Deployments for frontend and backend.
- Create a ClusterIP Service to reach the backend pods.
- Create a NodePort Service to reach the frontend pods from the Internet.
- Create a LoadBalancer Service on a cloud K8s cluster to reach the frontend pods from the Internet.
Steps
- Run minikube (in this scenario, K8s runs on WSL2 - Ubuntu 20.04) ("minikube start").
- Minikube is a local Kubernetes distribution, focused on making it easy to learn and develop for Kubernetes. It includes essential Kubernetes components like the API server, scheduler, controller manager, and etcd, along with optional add-ons like the dashboard and ingress.
- To install: https://minikube.sigs.k8s.io/docs/start/
omer@k8s:$ minikube start
minikube v1.35.0 on Ubuntu 20.04
Automatically selected the docker driver
Using Docker driver with root privileges
Starting "minikube" primary control-plane node in "minikube" cluster
Pulling base image v0.0.46 ...
Creating docker container (CPUs=2, Memory=3100MB) ...
Failing to connect to https://registry.k8s.io/ from inside the minikube container
To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
Preparing Kubernetes v1.32.0 on Docker 27.4.1 ...
  Generating certificates and keys ...
  Booting up control plane ...
  Configuring RBAC rules ...
Configuring bridge CNI (Container Networking Interface) ...
Verifying Kubernetes components...
  Using image gcr.io/k8s-minikube/storage-provisioner:v5
Enabled addons: storage-provisioner, default-storageclass
Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
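Optionally, before moving on, you can check that the cluster is up (a quick sketch; output will vary by setup):

minikube status
kubectl get nodes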
- Please create sample-frontend:latest and sample-backend:latest apps using the hands-on below (NodeJS, Flask apps). I've created and pushed them into my DockerHub repos. 2 options for you:
  - Create them yourself like in the hands-on sample below (docker build -t sample-frontend:latest .) and update the K8s Deployment below accordingly.
  - Use the pre-built images. K8s can pull them from DockerHub.
Docker Compose File Hands-on Sample to run Nodejs, Flask, PostgreSQL Containers Together - Ömer Berat Sezer, Jan 14
- Create 3x frontend and 3x backend Pods with the following YAML file (run: kubectl apply -f deploy.yaml).
- deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  labels:
    team: development
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: omerbsezer/sample-frontend:latest
        ports:
        - containerPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  labels:
    team: development
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: omerbsezer/sample-backend:latest
        ports:
        - containerPort: 5000
- Run:
omer@k8s:$ kubectl apply -f deploy.yaml
deployment.apps/frontend created
deployment.apps/backend created
omer@k8s:$ kubectl get pods
NAME READY STATUS RESTARTS AGE
backend-5c976dff7b-4phsc 0/1 ContainerCreating 0 6s
backend-5c976dff7b-8dxjw 0/1 ContainerCreating 0 6s
backend-5c976dff7b-skmgs 0/1 ContainerCreating 0 6s
frontend-5579676cd4-rtc6t 0/1 ContainerCreating 0 6s
frontend-5579676cd4-w6t8w 0/1 ContainerCreating 0 6s
frontend-5579676cd4-wvmqc 0/1 ContainerCreating 0 6s
omer@k8s:$ kubectl get pods
NAME READY STATUS RESTARTS AGE
backend-5c976dff7b-4phsc 1/1 Running 0 97s
backend-5c976dff7b-8dxjw 1/1 Running 0 97s
backend-5c976dff7b-skmgs 1/1 Running 0 97s
frontend-5579676cd4-rtc6t 1/1 Running 0 97s
frontend-5579676cd4-w6t8w 1/1 Running 0 97s
frontend-5579676cd4-wvmqc 1/1 Running 0 97s
Create a ClusterIP Service that connects to the backend pods (selector: app: backend) (run: kubectl apply -f backend-service.yaml).
backend-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: ClusterIP
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000
- The ClusterIP Service is created. If any resource in the cluster sends a request to this ClusterIP on port 5000, the request reaches one of the pods behind the Service.
- We can demonstrate it from the frontend pods.
- Connect to one of the frontend pods (list: kubectl get pods, connect: kubectl exec -it frontend-5579676cd4-rtc6t -- bash).
- In K8s, there is a DNS server (CoreDNS-based) that lets us query the IP/name of a service.
- When running nslookup backend, we get the full name and IP of this service (serviceName.namespace.svc.cluster_domain, e.g. backend.default.svc.cluster.local).
- When running curl against the backend Service on port 5000, the Service connects us to one of the backend pods.
omer@k8s:$ kubectl apply -f backend-service.yaml
service/backend created
omer@k8s:$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend ClusterIP 10.109.240.206 <none> 5000/TCP 9s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 36m
omer@k8s:$ kubectl get pods
NAME READY STATUS RESTARTS AGE
backend-5c976dff7b-4phsc 1/1 Running 0 3m9s
backend-5c976dff7b-8dxjw 1/1 Running 0 3m9s
backend-5c976dff7b-skmgs 1/1 Running 0 3m9s
frontend-5579676cd4-rtc6t 1/1 Running 0 3m9s
frontend-5579676cd4-w6t8w 1/1 Running 0 3m9s
frontend-5579676cd4-wvmqc 1/1 Running 0 3m9s
omer@k8s:$ kubectl exec -it frontend-5579676cd4-rtc6t -- bash
root@frontend-5579676cd4-rtc6t:/home/app# curl backend:5000
Backend is working!
root@frontend-5579676cd4-rtc6t:/home/app# exit
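To double-check the DNS names mentioned above, you can also run nslookup from inside a frontend pod (a sketch; it assumes a DNS tool like nslookup is available in the container image, and the pod name will differ in your cluster):

kubectl exec -it frontend-5579676cd4-rtc6t -- bash
nslookup backend                                # resolves the short service name via CoreDNS
curl backend.default.svc.cluster.local:5000     # the fully qualified service name works as well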
Create a NodePort Service to reach the frontend pods from outside the cluster (run: kubectl apply -f frontend-service.yaml).
frontend-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
- With the NodePort Service (you can see the image below), the frontend pods become reachable on the opened port (30477). In other words, someone can reach the frontend pods via WorkerNodeIP:30477. The NodePort Service listens on that port (in this example: 30477) on every worker node.
- While working with minikube, this is only possible with minikube tunneling. Minikube simulates reaching NodeIP:Port with its tunneling feature.
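If you want the node port to be deterministic instead of auto-assigned, you can pin it in the Service spec (a sketch; nodePort must fall in the cluster's node-port range, 30000-32767 by default):

  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
      nodePort: 30477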
omer@k8s:$ kubectl apply -f frontend-service.yaml
service/frontend created
omer@k8s:$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend ClusterIP 10.109.240.206 <none> 5000/TCP 9m46s
frontend NodePort 10.97.126.7 <none> 3000:30477/TCP 36s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 45m
omer@k8s:$ minikube service --url frontend
http://127.0.0.1:43241
Because you are using a Docker driver on linux, the terminal needs to be open to run it.
- In another terminal, if we run the curl command, we can reach the frontend pods.
omer@k8s:$ curl http://127.0.0.1:43241
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Frontend</title>
</head>
<body>
<h1>Frontend is working!</h1>
</body>
</html>
- Or, in the browser, we can reach the frontend:
A LoadBalancer Service is only available with cloud services (in a local cluster, it is not possible to get an external IP for a LoadBalancer Service). So if you have access to one of the cloud services (Azure AKS, AWS EKS, GCP GKE), please create a LoadBalancer Service on it (run: kubectl apply -f backend-service-loadbalancer.yaml).
backend-service-loadbalancer.yaml:
apiVersion: v1
kind: Service
metadata:
  name: frontendlb
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
- If you run it on the cloud, you'll see the external IP of the LoadBalancer Service.
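One way to watch for that external IP once the cloud provider has provisioned the load balancer (a sketch; depending on the provider, the address appears under ip or hostname):

kubectl get service frontendlb --watch
kubectl get service frontendlb -o jsonpath='{.status.loadBalancer.ingress[0].ip}'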
- Delete services:
omer@k8s:$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend ClusterIP 10.109.240.206 <none> 5000/TCP 17m
frontend NodePort 10.97.126.7 <none> 3000:30477/TCP 8m31s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 53m
omer@k8s:$ kubectl delete -f frontend-service.yaml
service "frontend" deleted
omer@k8s:$ kubectl delete -f backend-service.yaml
service "backend" deleted
omer@k8s:$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 53m
- In addition, it is possible to create a Service imperatively (with a command):
- kubectl expose deployment <deploymentName> --type=<typeOfService> --name=<nameOfService>
omer@k8s:$ kubectl expose deployment backend --type=ClusterIP --name=backend
service/backend exposed
omer@k8s:$ kubectl expose deployment frontend --type=NodePort --name=frontend
service/frontend exposed
omer@k8s:$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend ClusterIP 10.100.86.1 <none> 5000/TCP 25s
frontend NodePort 10.99.23.180 <none> 3000:32648/TCP 2s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 55m
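If the Service port should differ from the container port, kubectl expose also accepts explicit port flags (a sketch; the port values here are illustrative):

kubectl expose deployment frontend --type=NodePort --name=frontend --port=80 --target-port=3000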
- Delete all Services and the Deployments:
omer@k8s:$ kubectl delete service backend
service "backend" deleted
omer@k8s:$ kubectl delete service frontend
service "frontend" deleted
omer@k8s:$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 56m
omer@k8s:$ kubectl delete -f deploy.yaml
deployment.apps "frontend" deleted
deployment.apps "backend" deleted
omer@k8s:$ kubectl get pods
NAME READY STATUS RESTARTS AGE
backend-5c976dff7b-4phsc 1/1 Terminating 0 23m
backend-5c976dff7b-8dxjw 1/1 Terminating 0 23m
backend-5c976dff7b-skmgs 1/1 Terminating 0 23m
omer@k8s:$ kubectl get pods
No resources found in default namespace.
omer@k8s:$ minikube delete
Deleting "minikube" in docker ...
Deleting container "minikube" ...
Removing /home/omer/.minikube/machines/minikube ...
Removed all traces of the "minikube" cluster.
Conclusion
This post explores K8s Service types and provides hands-on examples to demonstrate how they are used.
If you found the tutorial interesting, I'd love to hear your thoughts in the blog post comments. Feel free to share your reactions or leave a comment. I truly value your input and engagement.
For other posts: https://dev.to/omerberatsezer
Follow for Tips, Tutorials, Hands-On Labs for AWS, Kubernetes, Docker, Linux, DevOps, Ansible, Machine Learning, Generative AI.