If you are a beginner looking to learn Kubernetes basics, or an experienced developer, DevOps engineer, SRE or platform engineer looking for a quick and “clean” setup to test new features or run specific tests, choosing the right Kubernetes distribution is crucial.
In this article, I’m listing the Best-in-Class free and open-source Kubernetes distributions available for learning and testing.
Deploying K8s clusters via kubeadm is a good way to learn the ins and outs of K8s itself, but it is also the hard and time-consuming way (30+ minutes for beginners to set up a working cluster).
Therefore, when I want to test something quickly, I just spin up a K8s cluster using one of the lightweight distributions mentioned in this article, depending on the purpose and use case I have in mind. For complex tests I have a fully automated kubeadm-based environment, but that is for another article.
My criteria for choosing a K8s distribution for quick testing are:
- Time to set up a functioning cluster.
- Ease of setup and use.
- Resource requirements (CPU, RAM, Storage).
- Flexibility and customization options.
- Documentation clarity.
- Community Support.
- Suitability & Usage.
Note1: I included a “Quick Guide” subsection in each K8s project section, but it is advisable to always have a look at the official project installation page (link provided), as the installation steps tend to vary from time to time depending on the project’s evolution, your OS type and when you are reading this.
Note2: You are also advised to have a good read of the “known issues” section of each project, so you don’t run in circles if you hit an issue/bug/problem, before submitting a PR or asking ChatGPT ;)
Note3: This is not an exhaustive list, nor an ordered best-to-worst kind of list. It is rather a randomly ordered list based on my own experience dealing with K8s almost on a daily basis for the past 6 years.
1- Minikube
Quick glance:
A project maintained as part of the Kubernetes project under the Cloud Native Computing Foundation (CNCF), and considered the go-to K8s distro for absolute beginners.
Minikube is a very lightweight K8s distribution designed to help developers and learners run a local Kubernetes cluster on their personal computers.
The default setup creates a single-node Kubernetes cluster, which combines the master and worker roles in one instance (multi-node clusters can be configured as well).
Minikube creates a virtualized or containerized environment (depending on the driver used) that runs Kubernetes components.
Time to Set Up:
~2–5 mins
Ease of setup and use:
Very straightforward and easy steps. Installation page here
Quick Setup Guide:
1.Install the latest Minikube stable release (assuming you have an x86-64 Linux machine and are using the Debian package):
curl -LO https://github.com/kubernetes/minikube/releases/latest/download/minikube_latest_amd64.deb
sudo dpkg -i minikube_latest_amd64.deb
2.Start your cluster
From a terminal with administrator access (but not logged in as root), run:
minikube start
3.Interact with your cluster
Minikube downloads kubectl for you if it is not already installed on your machine. Interact with your cluster:
minikube kubectl -- get po -A
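As a small optional convenience (a sketch, not a required step), you can alias the bundled kubectl and verify the cluster state:
alias kubectl="minikube kubectl --"
kubectl get po -A   # same as the command above, using the alias
minikube status     # host, kubelet and apiserver should report Running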
Resource Required & Prerequisites:
• Minimum Resources: 2 CPUs, 2GB RAM and ~4GB Storage.
• As a prerequisite: Minikube requires a container or VM manager, such as Docker, QEMU, Hyperkit, Hyper-V, KVM, Parallels, Podman or VirtualBox
Flexibility & Customization Options:
Minikube supports enabling add-ons that provide quick configurations for things like:
• K8s dashboard
• ingress
• metrics-server
• LoadBalancer
• Persistent Volumes
Such add-ons would otherwise take some time and require some K8s knowledge to deploy and configure.
It also supports all popular CNIs (Cilium, Calico, Flannel, etc.) and popular CRIs (Docker, containerd and CRI-O). A short example follows below.
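For illustration, a minimal sketch of how these options are typically enabled (the add-on names and flag values here are examples; run minikube addons list to see what your version ships):
# list available add-ons and enable a few of them
minikube addons list
minikube addons enable ingress
minikube addons enable metrics-server
minikube addons enable dashboard
# start a fresh two-node cluster with a specific CNI and container runtime
minikube start --nodes 2 --cni calico --container-runtime containerd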
Documentation:
Very user friendly, written in clear text and suitable for absolute beginners as it includes step-by-step guides for various use cases.
Community Support:
• Large community support with +29.7k stars on GitHub
• Issues are actively followed up and resolved by contributors.
• Large collection of community-created tutorials and guides are available online in text and video forms.
Suitability & Usage:
Very useful for:
• Spinning up a quick K8s cluster on your local machine
• Learning the basics and following up with tutorials.
• Testing small scale and simple scenarios.
• In general, it is highly recommended as a starting point for absolute beginners.
Not recommended for:
• Complex networking scenarios
• Performance testing or testing low latency workloads
• HA scenarios, and Loadbalancing tests
• complex large Stateful applications
2- KinD (Kubernetes in Docker)
Quick Glance:
KinD is a lightweight tool designed to run local Kubernetes clusters using Docker containers as “nodes”. Therefore, it simplifies creating multi-node clusters on a single machine without requiring virtualization or specialized hardware.
Under the hood, KinD uses kubeadm to bootstrap the Kubernetes cluster inside the containers, fully automatically.
Time to Set Up:
~5–10 min depending on your internet connection speed as creating a KinD Cluster requires downloading multiple Docker images.
Ease of setup and use:
Very straightforward. Installation page here
Quick Setup Guide:
Assuming you are using Linux AMD64 / x86_64
1.Install the binary (download it, make it executable and move it into your PATH):
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.26.0/kind-linux-amd64
chmod +x ./kind && sudo mv ./kind /usr/local/bin/kind
2.Create a Kubernetes cluster
kind create cluster
3.Interact with the Cluster:
By default, the cluster access configuration is stored in ${HOME}/.kube/config
Note: KinD itself does not require kubectl, but you do need it to interact with the cluster, and the KinD package does not install the kubectl binaries for you, so install them yourself. kubectl installation page here
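Once kubectl is installed, a minimal sketch of interacting with and cleaning up the default cluster (the context name kind-kind assumes you kept the default cluster name):
kubectl cluster-info --context kind-kind   # the default cluster is named "kind"
kubectl get nodes
kind get clusters      # list KinD clusters on this machine
kind delete cluster    # remove the default cluster when done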
All known project issues are clearly documented here
Resource Required & Prerequisites:
•Minimum Resources: 2 CPUs or more & 2GB of free memory with ~4GB Storage (additional resources depend on the number of nodes).
•The only prerequisite to be installed is Docker.
Flexibility & Customization Options:
A KinD cluster can be fully defined via a YAML file, making it highly customizable and portable (see the sketch below).
It supports all popular CNIs (Cilium, Calico, Flannel, etc.) and popular CRIs (Docker, containerd and CRI-O).
Note that there are no built-in add-ons like Minikube’s to quickly deploy an Ingress controller or a LoadBalancer.
KinD lowers the bar for some of these tasks, but you need to take care of the majority of the steps yourself.
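A minimal sketch of such a YAML definition (the file name kind-multi-node.yaml and the disabled default CNI are illustrative choices, not requirements):
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true   # skip the default kindnet CNI so you can install Calico/Cilium yourself
nodes:
- role: control-plane
- role: worker
- role: worker
Create a cluster from it with:
kind create cluster --name test --config kind-multi-node.yaml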
Documentation:
Clear, easy, and beginner-friendly documentation.
Community support:
•Large community support with +13.7k stars on GitHub
•Issues are actively followed up and resolved by contributors.
•Also, many community-created tutorials and guides are available online in text and video form
Suitability & Usage:
Extremely useful for:
• Testing CI/CD pipelines integration with K8s environment.
• Testing multi-node Cluster configurations and HA scenarios.
• Learning topics that are a bit above the basics
Not recommended for:
• Networking Complexity: Advanced networking setups can be tricky to configure.
• Performance testing or testing low latency workloads
• Complex large Stateful applications
• Production-Like environment deployments
3- K0s
Quick glance:
Pronounced "kay-zero-ess", k0s is a zero-friction, lightweight Kubernetes distribution designed to simplify the deployment and management of Kubernetes clusters in environments where resource efficiency is critical.
k0s bundles all the necessary Kubernetes components into a "single binary". This binary includes the Kubernetes control plane components (API server, scheduler, controller manager, etc.), worker node components (kubelet, kube-proxy), and optional add-ons (e.g., CNI plugins, metrics server).
Time to Set Up:
At ~2–3 min, k0s is one of the fastest Kubernetes distributions to set up.
Ease of setup and use:
Installation page here. For a quick setup guide, continue reading.
Quick setup Guide:
1.Download k0s
Run the k0s download script to download the latest stable version of k0s and make it executable from /usr/local/bin/k0s:
curl --proto '=https' --tlsv1.2 -sSf https://get.k0s.sh | sudo sh
2.Install k0s as a service
Run the following command to install a single node k0s that includes the controller and worker functions with the default configuration:
sudo k0s install controller --single
3.Start k0s as a service (and wait a couple of minutes)
sudo k0s start
4.Access your cluster using kubectl (K0s installs it for you)
sudo k0s kubectl get nodes
NAME STATUS ROLES AGE VERSION
k0s Ready <none> 4m6s v1.21.2-k0s1
Note1: If you are installing a multi-node setup, the previous command will not show the control-plane node, and that does not mean your setup is broken.
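If you later want to add a worker node to such a setup, a minimal sketch (the token file path is just an example) looks like this:
# on the controller: create a join token for a worker
sudo k0s token create --role=worker > worker-token
# on the new worker node: install and start the worker using that token
sudo k0s install worker --token-file /path/to/worker-token
sudo k0s start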
Note2: If you are not happy with the default installation method and would like to install k0s via k0sctl, consider installing the k0sctl tool on a jumphost machine, not on the machines that you want to run K8s on.
An example k0sctl.yaml file is shown below:
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts:
  - role: controller
    ssh:
      address: 10.0.0.1 # replace with the controller's IP address
      user: root
      keyPath: ~/.ssh/id_rsa
  - role: worker
    ssh:
      address: 10.0.0.2 # replace with the worker's IP address
      user: root
      keyPath: ~/.ssh/id_rsa
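With a file like the one above, the typical k0sctl flow is roughly the following (a sketch assuming the file is named k0sctl.yaml in the current directory):
k0sctl apply --config k0sctl.yaml    # bootstraps controllers and workers over SSH
k0sctl kubeconfig > kubeconfig       # export the access credentials
kubectl --kubeconfig kubeconfig get nodes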
Project known issues are documented here
Resource Required & Prerequisites:
•Minimum Resources: 1 CPU, 1GB RAM and ~2GB Storage per node for small clusters (additional resources depend on the number of nodes).
More detailed system requirements are explained here
•As a prerequisite, k0s has ZERO external dependencies other than a compatible Linux OS. It does not require Docker or any other container runtime to be pre-installed.
Flexibility & Customization Options:
k0s is highly customizable. It lets the user configure advanced networking, storage and security settings, and enable or disable specific Kubernetes components.
All of this can be done via the k0sctl.yaml file using the k0sctl tool.
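For single-machine installs without k0sctl, k0s can also generate and consume its own configuration file; a minimal sketch (the file name and edited options are examples):
k0s config create > k0s.yaml    # dump the default configuration
# edit k0s.yaml (network provider, storage, disabled components, ...)
sudo k0s install controller --single --config ./k0s.yaml
sudo k0s start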
Documentation:
Beginner-to-intermediate level of documentation clarity: some sections have excellent explanations with diagrams, while others require some prior knowledge of topics like storage or security.
It also includes guides for single-node, multi-node and high-availability setups.
Community support:
•The project is backed by Mirantis, a well-known Kubernetes contributor.
•Their GitHub repo has +4.1k stars, which is relatively impressive given the project's age.
•k0s has fewer community-created tutorials compared to the previous options (KinD & Minikube), as it is the newest of these lightweight K8s distros.
Suitability & Usage:
k0s shines in IoT deployments as it is a very lightweight single binary.
Highly recommended for:
• Edge computing and IoT deployments
• Air-gapped deployments
• Testing CI/CD pipelines integration with K8s environment.
• Testing multi-node Cluster configurations and HA scenarios.
• Learning topics that are a bit above the basics
Not recommended for:
• Networking Complexity: Advanced networking setups can be tricky to configure.
• Performance testing or testing low latency workloads
• Complex large Stateful applications
• Big Production-Like environment deployments
4- MicroK8s
Quick glance:
A very lightweight, minimalistic Kubernetes distribution developed by Canonical, the company behind Ubuntu.
It packages all the essential components of Kubernetes into a single, easy-to-install package, allowing users to run a Kubernetes cluster on a single machine or across multiple nodes with minimal effort.
It uses Dqlite (a lightweight distributed SQLite) for high availability in multi-node setups, reducing the resource overhead compared to traditional etcd-based clusters.
Time to Set Up:
~5–7 min for simple single-node clusters
Ease of setup and use:
Installation page here. For a quick setup guide, continue reading:
Quick setup Guide:
1.Install MicroK8s binaries
sudo snap install microk8s --classic
2.Join MicroK8s group
MicroK8s creates a Linux group to enable seamless usage of commands which require admin privilege. To add your current user to the group and gain access to the .kube caching directory, run:
sudo usermod -a -G microk8s $USER
mkdir -p ~/.kube
chmod 0700 ~/.kube
Re-enter the session for the group update to take place:
su - $USER
3.Access your K8s cluster
MicroK8s bundles its own version of kubectl for accessing the cluster:
microk8s kubectl get nodes
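If you prefer a standalone kubectl or tools like Lens/k9s, you can export MicroK8s' kubeconfig; a small sketch (the target file name is an example, take care not to overwrite an existing config):
microk8s status --wait-ready                 # wait until the cluster reports itself as ready
microk8s config > ~/.kube/microk8s-config    # export the kubeconfig for external tools
alias kubectl='microk8s kubectl'             # or simply alias the bundled kubectl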
Resource Required & Prerequisites:
Minimum Resources: 2 CPU, 4GB RAM and ~2GB Storage per node for small clusters (additional resources depend on the number of nodes).
For Prerequisites:
• An Ubuntu 22.04 LTS, 20.04 LTS, 18.04 LTS or 16.04 LTS environment to run the commands, or
• Another Linux distribution, as long as it supports snapd
• If you don’t have a Linux machine, you can use Multipass (see Installing MicroK8s with Multipass).
Flexibility & Customization Options:
•MicroK8s provides add-ons like Helm, Istio, DNS, or MetalLB that can be enabled as needed (see the example after this list).
•It supports single-node and multi-node clusters, allowing users to test HA scenarios.
•Users can customize their clusters by enabling or disabling add-ons, configuring networking, and integrating with other tools and services.
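For illustration, a small sketch of enabling a few common add-ons and joining a second node (the MetalLB address range is a placeholder for a free range in your own network):
microk8s enable dns
microk8s enable ingress
microk8s enable metallb:10.0.0.100-10.0.0.110   # replace with a free range in your LAN
# on the first node: print a join command for additional nodes
microk8s add-node
# then run the printed "microk8s join <ip>:25000/<token>" command on the other machine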
Documentation:
•The project has comprehensive and well-organized documentation, making it easy for users to get started and troubleshoot issues. However, absolute beginners may struggle a bit with certain pages, as some Linux knowledge is required to customize or configure a MicroK8s cluster (which is a must anyway for anyone learning K8s).
•Official documentation covers installation, add-ons, multi-node setups, and advanced configurations.
•Tutorials and examples are provided for common use cases, such as deploying applications, enabling ingress, and setting up storage.
Community support:
•Large community support with +8.6K stars on GitHub
•Issues are actively followed up and resolved by contributors.
•Also, many community-created tutorials and guides are available online in text and video form
Suitability & Usage:
Highly recommended for:
• Edge computing and IoT deployments
• Testing CI/CD pipelines integration with K8s environment.
• Testing multi-node Cluster configurations and HA scenarios.
• Learning topics for K8s beginners
Not recommended for:
• Big Production-Like environment deployments
• Performance testing or testing low latency workloads
• Complex large Stateful applications
5- K3s
Quick glance:
K3s is developed and maintained by Rancher (now part of SUSE) and is ideal for edge, IoT, and resource-constrained environments.
It is packaged as a single binary (~100 MB) that includes all necessary components (the API server, controller manager, scheduler, and kubelet).
It is designed for running Kubernetes clusters in resource-constrained devices like Raspberry Pis, therefore it bundles lightweight alternatives like SQLite instead of etcd (however etcd, MySQL, and PostgreSQL are also supported for high-availability setups).
Time to Set Up:
~ 5–7 min
Ease of setup and use:
K3s provides an installation script to install it as a service on systemd or openrc based systems.
The K3s package includes tools like kubectl, crictl, and ctr out of the box, reducing the need for additional installations.
Note: It is not recommended to use snap-based Docker packages; the “known issues” section here mentions it.
Installation page here. The default installation is summarized in the next section.
Quick Setup Guide:
Get the installation script, which installs kubectl, crictl, ctr, k3s-killall.sh, and k3s-uninstall.sh:
curl -sfL https://get.k3s.io | sh -
A kubeconfig file will be written to /etc/rancher/k3s/k3s.yaml, and the kubectl tool installed by K3s will automatically use it.
And you are READY TO GO
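The same installation script can also pass options to the server and join agent nodes; a small sketch (the server IP, token placeholder and the disabled component are examples):
# server: install K3s without the bundled Traefik ingress controller
curl -sfL https://get.k3s.io | sh -s - server --disable traefik
# read the join token on the server
sudo cat /var/lib/rancher/k3s/server/node-token
# agent: join an existing server (replace the IP and token)
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> sh -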
Resource Required & Prerequisites:
•Minimum Resources: 2 CPUs, 2GB RAM with ~4GB Storage.
•As a prerequisite, only a Linux distribution (e.g. openSUSE, Ubuntu, etc.) is needed
Flexibility & Customization Options:
•K3s is highly modular and customizable.
•Users can customize components like the container runtime, networking, or storage to suit their needs.
•Multi-node cluster mode is supported, but K3s is often used for single-node setups.
•With a config.yaml file created at /etc/rancher/k3s/config.yaml, users can customize many deployment options like the CNI, CRI and storage (see the sketch below).
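A small sketch of what such a config.yaml might look like (the disabled components and the flannel-backend value are illustrative choices, not required defaults):
# /etc/rancher/k3s/config.yaml
write-kubeconfig-mode: "0644"
flannel-backend: "none"   # disable the built-in CNI to install your own (e.g. Cilium)
disable:
  - traefik
  - servicelb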
Documentation:
K3s has well-organized and easy-to-follow documentation with many explanations, FAQs and examples for common use cases, such as deploying applications or setting up multi-node clusters.
Community support:
•The project is backed by Rancher/SUSE and is best known for its use with Raspberry Pis in edge and IoT use cases where resources are limited.
Suitability & Usage:
Highly recommended for:
• Edge computing and IoT deployments.
• Air-gapped deployments.
• Testing CI/CD pipelines integration with K8s environment.
• Learning topics that are a bit above the basics.
Not recommended for:
• Big Production-Like environment deployments.
• Testing multi-node Cluster configurations and HA scenarios.
• Networking Complexity: Advanced networking setups can be tricky to configure.
• Performance testing or testing low latency workloads.
• Complex large Stateful applications.
Happy Learning!
Anas Alloush
Telco Cloud Specialist