Pre-v4.38.0: Single-Node Clusters
Prior to Docker Desktop v4.38.0, Kubernetes support was limited to a single-node cluster ideal for basic testing:
- Components Included:
  - A standalone Kubernetes API server, kubelet, and etcd (all running in the Docker Desktop VM).
  - Preconfigured Docker CLI integration (kubectl, helm).
- Use Cases:
  - Local development, lightweight testing of deployments, and learning Kubernetes basics.
- Limitations:
  - No multi-node features (e.g., node scaling, pod affinity, or failure simulations).
v4.38.0+: Multi-Node Clusters with kind
Docker Desktop now supports multi-node clusters using kind (Kubernetes IN Docker):
Cluster Provisioning Methods
Docker Desktop offers two options:
| Feature | kubeadm (Legacy) | kind (v4.38.0+) |
|---|---|---|
| Nodes | Single-node only | Multi-node (control plane + workers) |
| Runtime | Docker Engine | containerd (required) |
| Purpose | Basic local testing | Production-like testing (scaling, networking) |
| Setup | One-click in Docker Desktop Settings | Requires a kind-config.yaml plus CLI steps (see the sample config below) |
| Load Balancer | Limited to NodePort | Native LoadBalancer support (maps to localhost) |
| Use Case | Learning Kubernetes basics | Simulating real-world clusters (e.g., node failures) |
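The kind column assumes you drive kind from the CLI rather than the Docker Desktop UI. A minimal config sketch for that workflow (the file name kind-config.yaml and the one control-plane/two-worker layout are illustrative):

```yaml
# kind-config.yaml — illustrative layout: one control-plane node, two workers
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

You would then create the cluster with `kind create cluster --config kind-config.yaml`.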
How It Works
Single-Node (kubeadm)
When you enable Kubernetes via kubeadm:
- Infrastructure Setup:
  - The Docker Desktop VM generates TLS certificates and a kubeconfig, and downloads the core Kubernetes binaries (e.g., kube-apiserver, kube-controller-manager).
- Cluster Boot:
  - A single node acts as both control plane and worker.
  - Installs a CNI (e.g., flannel) for pod networking and storage provisioners (e.g., storageos).
- Isolation:
  - Kubernetes runs in a separate VM namespace; turning it off doesn't affect your Docker containers (see the quick check below).
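A quick way to see that isolation for yourself: point kubectl at the cluster and confirm your ordinary containers are untouched. The context name docker-desktop is the usual default and is assumed here:

```bash
# Point kubectl at Docker Desktop's cluster (default context name assumed)
kubectl config use-context docker-desktop

# The single node doubles as control plane and worker
kubectl get nodes

# Regular Docker containers keep running whether Kubernetes is on or off
docker ps
```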
Multi-Node (kind)
With kind, Docker Desktop:
- Node Containers:
  - Spins up Docker containers as Kubernetes nodes (each with a containerd runtime).
- Cluster Initialization:
  - Uses kind to bootstrap a cluster from a YAML config (e.g., kind-config.yaml).
- Networking:
  - Sets up a dedicated bridge network for node-to-node communication.
- Load Balancing:
  - Exposes LoadBalancer services on localhost automatically (no MetalLB required; see the example service below).
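To see the localhost mapping in action, here is a minimal Service sketch of type LoadBalancer. The web-demo name, selector, and ports are placeholders, and the localhost EXTERNAL-IP assumes the mapping described above:

```yaml
# web-demo-svc.yaml — illustrative only; relies on Docker Desktop mapping
# LoadBalancer services to localhost
apiVersion: v1
kind: Service
metadata:
  name: web-demo
spec:
  type: LoadBalancer
  selector:
    app: web-demo      # must match your pods' labels
  ports:
    - port: 8080       # reachable on localhost:8080
      targetPort: 80   # container port
```

After applying it, `kubectl get svc web-demo` should report localhost under EXTERNAL-IP if the mapping behaves as described.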
Why Use Multi-Node Clusters?
- Real-World Testing: Simulate node failures, rolling updates, and pod affinity/anti-affinity (a sample anti-affinity spec follows this list).
- Networking Validation: Test ingress controllers, service meshes (e.g., Istio), and network policies.
- Cost-Free Scaling: Experiment with horizontal scaling without cloud costs.
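As a small taste of the affinity testing mentioned above, here is a hedged Deployment sketch that asks the scheduler to spread replicas across nodes. The web-demo names are placeholders and are not part of the walkthrough below:

```yaml
# Illustrative anti-affinity: prefer scheduling each replica on a different node
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-demo
  template:
    metadata:
      labels:
        app: web-demo
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: web-demo
                topologyKey: kubernetes.io/hostname   # spread by node hostname
      containers:
        - name: nginx
          image: nginx
```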
Step-by-Step: Create a Multi-Node Cluster
Want to experiment with Kubernetes without cloud costs? Docker Desktop’s built-in Kubernetes (with Kind integration) lets you spin up a local multi-node cluster in minutes. Here’s how to do it:
Getting Started
- Install Docker Desktop v4.38.0 or later.
- Select "Enable Kubernetes" and choose kind as the cluster provisioning method.
  To use kind, you must be signed in to your Docker account, and the containerd image store must be enabled (it is the default).
- Select your preferred Kubernetes version.
- Select the preferred number of nodes.
- Click "Apply and Restart".
Get K8s Component Status
```bash
kubectl get componentstatus

Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   ok
```
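As the warning says, componentstatus is deprecated. On recent Kubernetes versions you can query the API server's readiness checks directly instead:

```bash
# Per-check readiness report straight from the API server
kubectl get --raw='/readyz?verbose'
```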
- Verify the nodes:

```bash
➜ ~ kubectl get nodes
NAME                    STATUS   ROLES           AGE   VERSION
desktop-control-plane   Ready    control-plane   37m   v1.31.1
desktop-worker          Ready    <none>          37m   v1.31.1
desktop-worker2         Ready    <none>          37m   v1.31.1
desktop-worker3         Ready    <none>          37m   v1.31.1
desktop-worker4         Ready    <none>          37m   v1.31.1
desktop-worker5         Ready    <none>          37m   v1.31.1
desktop-worker6         Ready    <none>          37m   v1.31.1
desktop-worker7         Ready    <none>          37m   v1.31.1
```
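To confirm the containerd runtime mentioned in the comparison table, the wide node listing includes a CONTAINER-RUNTIME column:

```bash
# Adds internal IPs, OS image, kernel, and container runtime per node
kubectl get nodes -o wide
```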
Troubleshooting Tips
- Nodes not showing up? Check Docker Desktop logs under Troubleshoot → View logs.
- Cluster stuck in "Starting"? Ensure you're signed in to Docker Hub in Docker Desktop.
- Resource issues? Reduce worker nodes or allocate more RAM/CPU in Docker Settings.
Next Steps: Test Your Cluster with a Single Deployment
Deploy a sample app to validate your setup:
```bash
➜ ~ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
➜ ~ kubectl expose deployment nginx --port=80
service/nginx exposed
➜ ~ kubectl get pods,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-676b6c5bbc-m8vwp   1/1     Running   0          25s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   39m
service/nginx        ClusterIP   10.96.205.15   <none>        80/TCP    11s
```
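The nginx Service above is ClusterIP-only, so it is not reachable from your host directly. A quick check is a port-forward (local port 8080 is an arbitrary choice):

```bash
# Forward local port 8080 to the nginx Service and fetch the welcome page
kubectl port-forward svc/nginx 8080:80 &
curl -s http://localhost:8080 | head -n 5
```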
Next Steps: Test Your Multi-Node Cluster
```bash
git clone https://github.com/collabnix/dockerlabs
cd dockerlabs/kubernetes/workshop/replicaset101
```
File: nginx_replicaset.yaml
```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web
  labels:
    env: dev
    role: web
spec:
  replicas: 4
  selector:
    matchLabels:
      role: web
  template:
    metadata:
      labels:
        role: web
    spec:
      containers:
        - name: testnginx
          image: nginx
```
Apply the manifest:
```bash
kubectl apply -f nginx_replicaset.yaml
```

Verify:

```bash
kubectl get rs
NAME               DESIRED   CURRENT   READY   AGE
nginx-676b6c5bbc   1         1         1       149m
web                4         4         4       117m
```
```bash
➜ ~ kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS      AGE    IP           NODE              NOMINATED NODE   READINESS GATES
nginx-676b6c5bbc-m8vwp   1/1     Running   1 (83m ago)   157m   10.244.4.2   desktop-worker5   <none>           <none>
web-7wbfc                1/1     Running   1 (83m ago)   117m   10.244.3.2   desktop-worker3   <none>           <none>
web-grfbn                1/1     Running   1 (83m ago)   117m   10.244.8.2   desktop-worker7   <none>           <none>
web-rnxx6                1/1     Running   1 (83m ago)   117m   10.244.1.2   desktop-worker4   <none>           <none>
web-xbf4d                1/1     Running   1 (83m ago)   117m   10.244.5.2   desktop-worker6   <none>           <none>
```
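Notice the ReplicaSet's pods landed on different workers. That makes this a good moment to try the node-failure scenario mentioned earlier; here is a sketch using desktop-worker3 from the output above (drain evicts its pods so the ReplicaSet recreates them elsewhere):

```bash
# Make the node unschedulable and evict its pods (DaemonSet pods are skipped)
kubectl drain desktop-worker3 --ignore-daemonsets --delete-emptydir-data

# The evicted web pod should come back on another worker
kubectl get pods -o wide

# Return the node to the scheduling pool when you're done
kubectl uncordon desktop-worker3
```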
Found this helpful? Let me know in the comments what you’re building with your local Kubernetes cluster! 🚀