Md.Shalahur Rahman

How to Create a Kubernetes (K8s) Cluster Home Lab Using Ubuntu Server 24.04

A Kubernetes cluster is a set of nodes (physical machines or VMs) that run containerized applications. It provides scalability, high availability, and automation for deployments.

To build the K8s cluster home lab we use two nodes: a master node called kb1 and a worker node called kb2.

Node configuration:

  • CPU: 2 cores
  • Memory: 3 GB
  • Storage: 20 GB

Node IPs:

  • kb1(master node): 192.168.0.112
  • kb2(worker node): 192.168.0.113

Note: Make sure SSH access between the two nodes is working.
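
For example, you can quickly verify connectivity from kb1 to kb2 (replace <user> with your actual username on the worker node):

ssh <user>@192.168.0.113 hostname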

Let's start the configuration.

Step 1: For all nodes
Change the hostname (optional) and update the hosts file for network communication.

To change hostname:

sudo hostnamectl set-hostname "<new-host-name>"

Example:

sudo hostnamectl set-hostname "kb1"

Add the IP-to-hostname mapping on each node so the nodes can reach each other by name.

Edit the hosts file with your preferred editor:

sudo nano /etc/hosts

Then add the mapping

<IP> <Hostname>
<IP> <Hostname>

Example:

192.168.0.112  kb1
192.168.0.113  kb2

Step 2: For all nodes
Disable swap and load the required kernel modules.

sudo swapoff -a && sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

To load the kernel modules using modprobe.

sudo modprobe overlay && sudo modprobe br_netfilter

To load these modules permanently, create a config file with the following content:

sudo tee /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF
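
You can verify that both modules are loaded:

lsmod | grep -E 'overlay|br_netfilter'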

Next, set the required kernel parameters, such as IP forwarding. Create a file and load the parameters using the sysctl command.

sudo tee /etc/sysctl.d/kubernetes.conf <<EOT
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOT

To load the above kernel parameters

sudo sysctl --system
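
You can confirm that IP forwarding is enabled (the command should print net.ipv4.ip_forward = 1):

sysctl net.ipv4.ip_forward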

Step 3: For all nodes
Install Containerd

First, install the containerd dependencies:

sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates

Next, add the containerd repository

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/containerd.gpg

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Now, install containerd.

sudo apt update && sudo apt install containerd.io -y

Configure containerd to use the systemd cgroup driver (SystemdCgroup).

containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml

Restart containerd for the change to take effect.

sudo systemctl restart containerd

To check the status of containerd.

sudo systemctl status containerd

Output:

 containerd.service - containerd container runtime
     Loaded: loaded (/usr/lib/systemd/system/containerd.service; enabled; preset: enabled)
     Active: active (running) since Mon 2025-02-17 16:20:34 UTC; 2h 17min ago
       Docs: https://containerd.io
    Process: 738 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
   Main PID: 741 (containerd)
      Tasks: 156
     Memory: 201.3M (peak: 229.5M)
        CPU: 20min 13.594s
     CGroup: /system.slice/containerd.service
             ├─  741 /usr/bin/containerd
             ├─ 1138 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 53e05d7bda37772f5869ec026de03b40d4ce532ba0303c42be5184994439b3bf -address /run/containerd/containerd.sock
             ├─ 1139 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5398ebda2355cad57be69e19bf098e4202fe259a61270ea9cc274b81f77977ab -address /run/containerd/containerd.sock
             ├─ 1140 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id e4f1f987a5fb8db03db049b518fdfe6680b8c83617733de91620948c6bc00c95 -address /run/containerd/containerd.sock
             ├─ 1150 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 24d231aefec7bde2646ae123758907e5d464973705d8c0cd9d8db67756c19ef2 -address /run/containerd/containerd.sock
             ├─ 1547 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 4d7ddac790ef53cb30823d45ed4b675d54cf66694833158f8d15f0ba93b93e2c -address /run/containerd/containerd.sock
             ├─ 2267 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 72b50ae3cea036f3b51e18bf9b900ccd46399fed052bf51a41bcea735062f413 -address /run/containerd/containerd.sock
             ├─ 2377 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 2fcb5f2766fc1ff28a8a020c10eb1aff4b1895eb7ade8af65c72bedd0a634ead -address /run/containerd/containerd.sock
             ├─ 2434 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id d6f43f441495e4a00387114d5d7462bc54b1a592bd41bc1ee1709d16d507633b -address /run/containerd/containerd.sock
             └─27675 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 2b645f1523cda54eb099d35542d77347071c2cdaef648a7e5559ec01779bac29 -address /run/containerd/containerd.sock

Step 4: For all nodes
Add the Kubernetes package repository

K8s packages are not available in the default package repositories of Ubuntu 24.04, so we first need to add the repository.

First, download the public signing key for the Kubernetes package repository using the curl command.

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/k8s.gpg

Next, add the repository

echo 'deb [signed-by=/etc/apt/keyrings/k8s.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | sudo tee /etc/apt/sources.list.d/k8s.list

Step 5: For all nodes
Install the K8s components kubeadm, kubelet, and kubectl to manage the Kubernetes cluster.

sudo apt update
sudo apt install kubelet kubeadm kubectl -y
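
Optionally, hold the packages so they are not upgraded unintentionally, and confirm the installed version:

sudo apt-mark hold kubelet kubeadm kubectl
kubeadm version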

Step 6: For the master node only
Initialize the Kubernetes cluster

To initialize the control plane (master node) using kubeadm:

sudo kubeadm init --control-plane-endpoint=<master-node-name>

Example:

sudo kubeadm init --control-plane-endpoint=kb1

On success, the kubeadm init output shows how to configure kubectl for your user and prints the kubeadm join command for the worker nodes.

As per those instructions, run the following commands to enable kubectl for your user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
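
If you are operating as the root user instead, you can simply point kubectl at the admin kubeconfig:

export KUBECONFIG=/etc/kubernetes/admin.conf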

Step 7: For the worker nodes only
Join the worker nodes to the master node

Add worker nodes to your Kubernetes cluster using the token generated during initialization.

Run the kubeadm join command that was printed at the end of the kubeadm init output.

Example:

sudo kubeadm join 192.168.0.112:6443 --token yagkiy.650e76uk72jycv1k --discovery-token-ca-cert-hash sha256:2f01622e81717582443eac4f206436e7e3bd018e2f23e4197906c22c38df9f14
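
If you no longer have the join command, you can regenerate it on the master node:

sudo kubeadm token create --print-join-command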

Now head back to the master node and run

kubectl get nodes

Output:

NAME   STATUS     ROLES           AGE   VERSION
kb1    NotReady   control-plane   10d   v1.31.5
kb2    NotReady   <none>          10d   v1.31.5

The node status is NotReady. To change it to Ready, we need to install a pod network add-on; here we use the Calico plugin.

Step 8: For the master node only
Install the Calico network plugin

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.2/manifests/calico.yaml
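
The Calico pods can take a few minutes to start. You can watch the kube-system namespace until everything settles:

kubectl get pods -n kube-system -w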

After Calico is installed successfully, the node status will change to Ready. To check, run the command below.

kubectl get pods -n kube-system

Output:

NAME                                      READY   STATUS    RESTARTS      AGE
calico-kube-controllers-b8d8894fb-rz6zx   1/1     Running   3 (24h ago)   7d21h
calico-node-4klgg                         1/1     Running   0             24h
calico-node-9c9lg                         1/1     Running   0             24h
coredns-7c65d6cfc9-8f26s                  1/1     Running   3 (24h ago)   7d21h
coredns-7c65d6cfc9-sgsvc                  1/1     Running   3 (24h ago)   7d21h
etcd-kb1                                  1/1     Running   4 (24h ago)   10d
kube-apiserver-kb1                        1/1     Running   4 (24h ago)   10d
kube-controller-manager-kb1               1/1     Running   9 (24h ago)   10d
kube-proxy-h5fnm                          1/1     Running   2 (25h ago)   10d
kube-proxy-n9zts                          1/1     Running   4 (24h ago)   10d
kube-scheduler-kb1                        1/1     Running   7 (24h ago)   10d

Now, check the node status

kubectl get nodes

Output:

NAME   STATUS   ROLES           AGE   VERSION
kb1    Ready    control-plane   10d   v1.31.5
kb2    Ready    <none>          10d   v1.31.5
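
For a more detailed view (internal IPs, OS image, container runtime), you can also run:

kubectl get nodes -o wide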

So, finally, we have successfully created the K8s cluster home lab.

Step 9: Test K8s Installation

Create and expose an NGINX deployment to verify the setup.

To create a namespace

kubectl create ns <namespace-name>

Example:

kubectl create ns demo-app

To check whether the namespace was created or not

kubectl get namespace

Output:

NAME              STATUS   AGE
default           Active   10d
demo-app          Active   10d
kube-node-lease   Active   10d
kube-public       Active   10d
kube-system       Active   10d

To deploy nginx-app with two replicas in the demo-app namespace

kubectl create deployment nginx-app --image nginx --replicas 2 --namespace demo-app

To check whether the deployment was created or not

kubectl get deployment -n demo-app

Output:

NAME        READY   UP-TO-DATE   AVAILABLE   AGE
nginx-app   2/2     2            2           9m55s

To check whether the pods were created or not

kubectl get pods -n demo-app

Output:

NAME                         READY   STATUS    RESTARTS   AGE
nginx-app-7df7b66fb5-9wkds   1/1     Running   0          9m37s
nginx-app-7df7b66fb5-xppqk   1/1     Running   0          9m37s

To expose the deployment

kubectl expose deployment nginx-app -n demo-app --type NodePort --port 80

Output:

service/nginx-app exposed

To check the running service

kubectl get svc -n demo-app

Output:

NAME        TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
nginx-app   NodePort   10.110.254.203   <none>        80:31809/TCP   50s

To access the nginx application using the NodePort

curl http://<any-worker-IP>:<exposed-port>

Example:

curl http://192.168.0.113:31809

Note: Use the NodePort shown in the service output (31809 here), not the container port 80.
Output:

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
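
Once you are done testing, you can clean up the demo resources by deleting the namespace, which also removes the deployment and service inside it:

kubectl delete namespace demo-app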
