Diana
Kubernetes: Making Sense of the Madness 🤯

Kubernetes: Who Knew it Could Be This Hard? 🙃

Let’s be honest, new tools can feel intimidating, right? I avoided Kubernetes for ages because it felt like being handed a 500-piece LEGO set... but with no picture on the box. Sure, all the pieces are there, and someone tells you, “It’s easy, just follow the instructions.” Read the documentation? Pshhh, that’s absurd!

At first, learning it was overwhelming, and let’s face it, it still is! What helped me was stepping back and looking at the high-level architecture. Understanding how the pieces fit together made everything feel a little less chaotic. That’s where I want to start with you: the big picture. Let’s break it down and make Kubernetes a little more approachable.

VMs vs. Containers: There’s No Place Like Shared Space 🏢

Before we dive into container orchestration, let’s talk about what a container is and how it works with Virtual Machines (VMs).

A Virtual Machine is like a fully furnished apartment: it has its own utilities (operating system) and runs independently in a shared building (the physical server). It’s great for isolation, but it can be heavy, since each VM carries its own full setup.

A Container is like a room within that apartment. Each room has everything it needs (your app and its dependencies), but it shares the apartment’s utilities (the host operating system’s kernel) with the other rooms. This makes containers faster and lighter than spinning up separate apartments (VMs) for every workload.

Container Orchestration: The Smart Manager 🤓

Kubernetes is an open-source container orchestration system that helps you manage containers efficiently, especially in large-scale applications. It keeps your containers running, scales them with demand, and heals them when they fail, without you having to intervene manually. Think of container orchestration as a smart manager for your rooms (containers).

Kubernetes focuses on managing the rooms (containers), ensuring they run smoothly by:

  1. Deploying containers where they’re needed.

  2. Distributing resources like CPU and memory to the containers that need them most.

  3. Restarting containers automatically if something breaks (self-healing).

  4. Scaling containers up or down based on demand (adding or removing a container).

  5. Balancing traffic across containers to prevent any single one from being overwhelmed.

Without tools like Kubernetes, handling hundreds or thousands of containers would be chaotic. While it doesn’t directly manage your physical servers or VMs, Kubernetes ensures your containers are orchestrated efficiently on top of the infrastructure you provide.
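In practice, you describe this desired state declaratively and Kubernetes works to maintain it. Here’s a rough sketch using a Deployment (a higher-level Kubernetes object that manages replicated Pods); the name `web-app` and the `nginx` image are just placeholders for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                    # hypothetical name for illustration
spec:
  replicas: 3                      # scaling: Kubernetes keeps 3 copies running
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: nginx:1.25        # placeholder image
          resources:
            requests:              # resource distribution: CPU/memory this container needs
              cpu: "250m"
              memory: "128Mi"
          livenessProbe:           # self-healing: restart the container if this check fails
            httpGet:
              path: /
              port: 80
```

If a container crashes or a node goes down, Kubernetes notices the actual state no longer matches the three replicas you asked for and recreates them automatically.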

Kubernetes Architecture: Building Blocks of Orchestration 🧩

Cluster: The Big Picture 🖼️

A cluster is like a team working on a project. Each team member has a specific role (some do research, some write, some present), but they all work together toward a shared goal.

In Kubernetes, a cluster is the entire system where containers are deployed and managed. It consists of:

  1. Nodes: The team members doing the actual work.

  2. Control Plane: The project manager that coordinates the team and keeps everything running smoothly.

Think of the cluster as a collection of machines working together to deploy and manage your containers.

Nodes: The Workers 👷‍♀️

A Node is a single machine (physical or virtual) in your cluster. It’s where the real work happens: containers are deployed and run here.

There are two types of nodes in Kubernetes:

  1. Worker Nodes: These handle workloads by running containers inside Pods. Each worker node includes:
  • kubelet: The primary agent running on the worker node. It communicates with the Control Plane and ensures that the containers in Pods are running as expected.
  • kube-proxy: This manages network rules on the node, enabling communication between Pods and Services across the cluster.
  2. Master Node: This node hosts the Control Plane. It doesn’t handle workloads directly; instead, it manages the cluster by assigning tasks to worker nodes and keeping everything on track.

Note: The master node is the machine where the Control Plane components run. While they’re often used interchangeably in conversation, the Control Plane refers to the software managing the cluster, whereas the master node refers to the physical or virtual machine hosting that software.

Control Plane: The Brain of Kubernetes 🧠

The Control Plane acts as the brain of the cluster, coordinating all the nodes to ensure your containers run efficiently. It consists of several key components:

  1. API Server: The entry point to the cluster. This is how you interact with Kubernetes, whether through kubectl or other tools. (kubectl itself isn’t a Control Plane component; it’s a command-line client that talks to the API Server, letting you deploy applications, inspect resources, and troubleshoot issues from your terminal.)

  2. Scheduler: Decides which node should run each container based on resource availability and requirements.

  3. Controller Manager: Monitors the cluster’s state and ensures the desired configuration is maintained (e.g., restarting containers if they fail).

  4. etcd: A key-value store that acts as Kubernetes’ memory, storing all configuration and the current state of the cluster.

The Control Plane is what gives Kubernetes its intelligence, managing workloads and ensuring everything works as expected.

Pods: The Smallest Building Block 🫛

Kubernetes doesn’t directly run containers, it wraps them in Pods. A Pod is the smallest deployable unit in Kubernetes. Typically, a Pod contains a single container, but it can hold multiple containers if they need to work together (e.g., sharing storage or networking). Think of a Pod as a "toolbox" that holds everything a container needs to function properly, including storage, networking, and configuration.
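A minimal Pod manifest looks something like this; the name `my-app` and the `nginx` image are placeholders, not anything special:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app           # hypothetical name
  labels:
    app: my-app          # labels let other objects (like Services) find this Pod
spec:
  containers:
    - name: app
      image: nginx:1.25  # placeholder image
      ports:
        - containerPort: 80
```

You’d create it with `kubectl apply -f pod.yaml`, and the Scheduler picks a worker node to run it on. (In real deployments you rarely create bare Pods; higher-level objects like Deployments create and manage them for you.)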

Services: Connecting the Dots 🌉

While Pods are temporary (they can crash, restart, or be replaced), Services act as stable endpoints for communication.

For example, if a Pod running your backend app crashes and Kubernetes replaces it, the Service ensures your frontend can still reach the new Pod seamlessly. Services are like bridges that connect different parts of your application, so even as Pods come and go, your app stays connected and functional.
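The bridge works through labels. A Service doesn’t point at specific Pods; it selects any Pod carrying a matching label, so replacement Pods are picked up automatically. A sketch, assuming the hypothetical `app: my-app` label from a Pod’s metadata:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-service   # hypothetical name
spec:
  selector:
    app: my-app          # routes to every Pod with this label, including replacements
  ports:
    - port: 80           # the stable port clients connect to
      targetPort: 80     # the container port traffic is forwarded to
```

Other Pods in the cluster can then reach your app at the stable DNS name `my-app-service`, no matter which Pods are backing it at the moment.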

Kubernetes High-Level Architecture Diagram

Diagram from the official Kubernetes site

I also recommend checking out this Udemy course, it helped me understand Kubernetes and break it down this way.
