Mageshwaran Sekar

What is eBPF, and how does Cilium use it?

eBPF (extended Berkeley Packet Filter) is a powerful and flexible technology that allows sandboxed programs to run inside the Linux kernel without changing kernel source code or loading a kernel module. Originally designed for network packet filtering, eBPF has since evolved to support a wide range of use cases, such as performance monitoring, security, and tracing.

eBPF allows code to be executed in the kernel in response to certain events, such as packet arrival, system calls, or function calls, by attaching programs to hooks in the kernel. eBPF programs are typically written in a restricted subset of C, compiled to eBPF bytecode, and checked by the in-kernel verifier before being loaded. They then run in the kernel's eBPF virtual machine (usually JIT-compiled to native code), which keeps them sandboxed and ensures safety and stability.

eBPF Overview

How Does eBPF Work?

  • BPF Programs: You write small programs that define what should happen in the kernel when certain events occur.
  • Attach Points: eBPF programs can be attached to various hooks in the Linux kernel (e.g., network sockets, tracepoints, XDP, etc.).
  • Execution: When an event occurs (like a packet being received by the network stack), the eBPF program is triggered and runs in the kernel context.
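To make the three pieces concrete, here is a minimal sketch of an eBPF program written against libbpf's `bpf_helpers.h`; the map and function names (`exec_count`, `count_execs`) are illustrative, not from any particular project. It attaches to the `sys_enter_execve` tracepoint and counts how often processes are executed:

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

// A single-slot BPF map shared between the kernel program and user space.
struct {
    __uint(type, BPF_MAP_TYPE_ARRAY);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, __u64);
} exec_count SEC(".maps");

// Attach point: this tracepoint fires on every execve() system call.
SEC("tracepoint/syscalls/sys_enter_execve")
int count_execs(void *ctx) {
    __u32 key = 0;
    __u64 *val = bpf_map_lookup_elem(&exec_count, &key);
    if (val)
        __sync_fetch_and_add(val, 1);  // runs in kernel context on each event
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```

A program like this is compiled with `clang -target bpf` and loaded by a userspace loader (e.g., libbpf or bpftool); user space then reads the counter out of the map.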

eBPF Attach Points

Key Benefits of eBPF

  • Safety: eBPF programs run in a restricted environment in the kernel, ensuring that they don't crash or destabilize the system.
  • Performance: eBPF enables high-performance operations since the programs run inside the kernel space without switching between user and kernel space.
  • Flexibility: eBPF programs can be dynamically loaded, allowing for runtime changes to functionality.
  • Observability: eBPF allows for deep visibility into the kernel’s operations and can be used for real-time monitoring and debugging.

Cilium and eBPF

Cilium is an open-source, cloud-native networking and security project that leverages eBPF to provide high-performance, secure networking for containers and microservices. Cilium uses eBPF for network security, load balancing, observability, and much more. It allows for fine-grained security policies and advanced network visibility without requiring changes to the kernel or the application code.

Cilium integrates eBPF with Kubernetes environments, enhancing networking and security at the kernel level. Some of the key features it offers include:

  • Service Mesh: Cilium can provide sidecar-free service mesh functionality, using eBPF for traffic management and load balancing between microservices.
  • Network Policies: eBPF allows for high-performance network policy enforcement without impacting latency.
  • API Layer Visibility: Cilium uses eBPF to provide visibility into the communication between services and detect anomalies at the API level.

Example of eBPF with Cilium

Let’s look at how eBPF is used in Cilium through an example. One common use case is enforcing network security policies between Kubernetes pods using Cilium’s eBPF-powered network policies.

Installing Cilium

To get started with Cilium and use eBPF for networking, you first need to install it on your Kubernetes cluster. You can use the following Helm commands to install Cilium:

```shell
helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --version 1.17.0 --namespace kube-system
```

Creating a Cilium Network Policy

Let's create a simple network policy to demonstrate how Cilium uses eBPF to enforce security at the packet level.

Here’s an example CiliumNetworkPolicy that allows traffic from a frontend pod to a backend pod but blocks traffic between frontend pods.

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  endpointSelector:
    matchLabels:
      app: backend          # policy applies to backend pods
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend       # allow traffic arriving from frontend pods
---
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: block-frontend-to-frontend
spec:
  endpointSelector:
    matchLabels:
      app: frontend         # policy applies to frontend pods
  ingressDeny:              # explicit deny rule (CiliumNetworkPolicy has no "action: deny" field)
  - fromEndpoints:
    - matchLabels:
        app: frontend       # deny traffic arriving from other frontend pods
```
  • Policy 1: The frontend pods can communicate with backend pods.
  • Policy 2: Traffic between frontend pods is blocked.

Once applied, Cilium will use eBPF to enforce these network policies efficiently.

Monitoring with eBPF

Cilium provides advanced observability features, leveraging eBPF to capture detailed information about network activity at the kernel level.

For example, you can stream real-time network events by running the following inside a Cilium agent pod:

```shell
cilium monitor
```

This will show a real-time log of network events happening in your Kubernetes cluster. Thanks to eBPF, Cilium can collect and analyze network traffic at very high throughput without significant overhead.

Example of eBPF Program (In C)

Let’s look at a simple example of an eBPF program written in C. This program is designed to monitor and filter network packets.

Example: Simple eBPF Program to Drop Packets

This example demonstrates a basic eBPF program that filters and drops network packets based on their IP address.

```c
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_endian.h>
#include <bpf/bpf_helpers.h>

SEC("tc")
int drop_ip_packets(struct __sk_buff *skb) {
    void *data = (void *)(long)skb->data;
    void *data_end = (void *)(long)skb->data_end;

    // The verifier requires explicit bounds checks before reading packet data.
    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return TC_ACT_OK;

    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return TC_ACT_OK;  // Not an IPv4 packet

    struct iphdr *ip = (struct iphdr *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return TC_ACT_OK;

    if (ip->daddr == bpf_htonl(0xC0A80001))  // 192.168.0.1 in network byte order
        return TC_ACT_SHOT;  // Drop the packet

    return TC_ACT_OK;  // Allow other packets
}

char LICENSE[] SEC("license") = "GPL";
```

In this example:

  • The program parses the Ethernet and IP headers, performing the bounds checks the verifier demands, and reads the destination IP address.
  • If the packet is destined for 192.168.0.1, it is dropped (TC_ACT_SHOT).
  • Otherwise, the packet is allowed to proceed (TC_ACT_OK).

You would compile this program into an eBPF object file with clang (targeting BPF), load it into the kernel, and attach it to the relevant network interface using a tool such as tc or bpftool.

Cilium and eBPF Advantages

  • High Performance: Since eBPF runs directly within the kernel, Cilium can enforce policies and process network traffic with minimal performance overhead, making it ideal for high-performance environments like Kubernetes.
  • Granular Security Policies: Cilium can enforce fine-grained security policies on a per-service basis, improving security while maintaining performance.
  • Deep Observability: Cilium can provide detailed visibility into network traffic, application performance, and anomalies by leveraging eBPF-based monitoring.
  • Zero Trust Networking: By leveraging eBPF, Cilium can help implement zero-trust networking principles, ensuring that communication is authenticated and authorized, with optional transparent encryption (e.g., via WireGuard or IPsec).

Conclusion

eBPF is a groundbreaking technology that allows for flexible and efficient kernel-level programmability, and when used with Cilium, it enables powerful, high-performance networking and security capabilities for modern cloud-native environments. By using eBPF, Cilium can enforce network policies, provide deep observability, and manage microservices’ communication with minimal impact on performance. Cilium is an excellent example of how eBPF can transform traditional networking and security in Kubernetes, providing significant advantages in both flexibility and efficiency.
