Implementing Kubernetes Infrastructure with eBPF: Integrating LoxiLB and Cilium API Gateway

Introduction:

In the modern world of microservices architecture, security, scalability, and network performance are of paramount importance. To achieve these goals, various tools exist for managing inbound traffic, load balancing, and enforcing security policies. Combining two powerful tools—Cilium API Gateway and LoxiLB—can provide a comprehensive and scalable solution for traffic management and security in Kubernetes clusters. This combination leverages eBPF (Extended Berkeley Packet Filter) as its core technology, enabling efficient and sophisticated operations at the kernel level.

eBPF and Its Advantages:

Extended Berkeley Packet Filter (eBPF) is an advanced technology in the Linux kernel that allows executing custom code directly in the kernel space without requiring direct modifications to the kernel or additional modules. This capability has wide applications in security, monitoring, observability, and network optimization.

One of the key reasons for eBPF's high performance is its ability to run programs directly in the kernel. In traditional approaches, packet processing had to traverse long chains in the network stack, such as the Netfilter/iptables rule sets, and in some cases user-space components. With eBPF, traffic can be filtered and monitored directly at key hook points in the network stack (such as the NIC via XDP or lower kernel layers), effectively bypassing unnecessary processing chains, reducing latency, and increasing packet processing speed.

Additionally, eBPF uses a Just-In-Time (JIT) compiler that translates eBPF bytecode into optimized machine code for efficient execution on hardware. This results in enhanced performance and reduced processing overhead. Unlike traditional methods that require frequent data transfers between user space and the kernel, eBPF executes code directly within the kernel, minimizing expensive system calls (syscalls).

Overall, eBPF improves network performance by eliminating redundant processing steps, reducing context switching between kernel and user space, and executing optimized code in the kernel.
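As a quick illustration, once an eBPF-based tool such as Cilium or LoxiLB is running, the programs it has loaded into the kernel can be inspected with bpftool (a spot check, assuming bpftool is installed on the node; output will vary by environment):

# List all eBPF programs currently loaded in the kernel
sudo bpftool prog show

# Show eBPF programs attached to network hooks (XDP/TC) per interface
sudo bpftool net show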

Cilium API Gateway:

Most microservices architectures require exposing certain services externally and securely routing traffic into the cluster. While Kubernetes traditionally uses Ingress for routing traffic, it has limitations.

The Kubernetes Gateway API addresses these limitations and is now supported by Cilium.

Cilium is an eBPF-powered networking tool that enhances security, scalability, and observability in Kubernetes environments. Using Cilium API Gateway, you can precisely manage inbound traffic based on HTTP methods, URLs, headers, and security policies. This API Gateway enables implementing complex security policies, such as transparent traffic encryption and TLS termination, with ease. Additionally, it supports advanced features like traffic splitting and weighting, allowing effective traffic distribution across services.

By leveraging eBPF, Cilium enables network operations to be executed directly at the kernel level without modifying application code or requiring additional proxies. This improves performance, provides better traffic visibility, and enforces security and routing policies more efficiently.

LoxiLB:

LoxiLB is a software-based load balancing solution that utilizes eBPF for scalable and efficient traffic management. It allows rapid and optimized distribution of inbound traffic across Kubernetes nodes. Since LoxiLB operates at the kernel level, it significantly reduces latency in traffic distribution. Moreover, it supports multiple protocols, including HTTP, HTTPS, TCP, UDP, and GRPC, enabling seamless management of various types of traffic without complex configurations.

LoxiLB also supports health checks and scalability features, ensuring intelligent traffic routing to healthy nodes while automatically scaling when necessary.

High Availability (HA) in LoxiLB:

LoxiLB provides robust load balancing capabilities in cloud and Kubernetes environments with support for High Availability (HA) to enhance service stability and availability. Implementing HA in LoxiLB allows seamless failover in case of node or component failures, ensuring uninterrupted service operation.

LoxiLB can be deployed either in-cluster or externally, depending on architectural requirements. This document explores various HA deployment scenarios, including Active-Backup and Active-Active models using BGP and ECMP mechanisms.

High Availability Scenarios in LoxiLB:

Flat L2 Network (Active-Backup):

In this setup, all Kubernetes nodes reside within the same subnet, and LoxiLB runs as a DaemonSet on master nodes. This model is ideal for environments where services and clients share the same network.

L3 Network with BGP (Active-Backup):

In this scenario, LoxiLB assigns IP addresses from an external subnet and manages communication between nodes using BGP. This is suitable for cloud environments where clients and services exist in separate networks.

L3 Network with BGP ECMP (Active-Active):

This model ensures uniform traffic distribution across multiple active nodes using ECMP. While it offers superior performance, it requires network support for ECMP routing.

Active-Backup with Connection Synchronization:

This approach maintains long-lived connections even during node failures. In this setup, connection states are synchronized between LoxiLB nodes, ensuring seamless failover without losing active connections.

Active-Backup with Fast Failure Detection (BFD):

LoxiLB uses Bidirectional Forwarding Detection (BFD) to rapidly detect network failures and redirect traffic to healthier nodes.

LoxiLB provides diverse HA solutions, enhancing service reliability in Kubernetes environments. Depending on infrastructure needs and network type, either Active-Backup or Active-Active models can be chosen to maximize service availability.

Integrating Cilium API Gateway and LoxiLB:

Integrating Cilium API Gateway and LoxiLB in a Kubernetes cluster allows precise and efficient management of inbound traffic while ensuring security and scalability. These two tools, leveraging eBPF, execute complex routing, security, and load balancing operations directly at the kernel level, reducing network latency and improving performance.

This integration is particularly beneficial for large clusters requiring secure and sophisticated traffic management. It enables leveraging Kubernetes’ full security and scalability potential without additional or complex tools.

By utilizing this integration, you can:

  • Effectively and securely manage inbound traffic.
  • Easily implement traffic encryption and TLS termination.
  • Scale traffic distribution efficiently across services.
  • Gain deep traffic observability and quickly identify issues.

These capabilities make Cilium API Gateway and LoxiLB an ideal solution for complex Kubernetes architectures.

Comparing LoxiLB with MetalLB, NGINX, and HAProxy in Kubernetes:

This section compares LoxiLB with MetalLB as a Kubernetes service load balancer and also examines LoxiLB in comparison with NGINX and HAProxy for Kubernetes Ingress management. The focus is on performance for modern cloud-native workloads.

Link: L4-L7 Performance: Comparing LoxiLB, MetalLB, NGINX, HAProxy

Additional Performance Tuning:

Below are additional optimization settings used across all solutions:

  • Set the maximum backlog queue size: sysctl net.core.netdev_max_backlog=10000
  • Enable multiple queues and adjust the MTU: in this test setup (Vagrant with libvirt), the number of driver queues should match the number of CPUs for better performance.
  • Disable TX XPS (LoxiLB only): apply this setting on all nodes running LoxiLB, as sketched below.
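A minimal sketch of how these settings might be applied on a node, assuming the interface is named eth0 and the VM has 4 CPUs (adjust names and values for your environment):

# Increase the backlog queue for packets arriving faster than the kernel can process them
sudo sysctl -w net.core.netdev_max_backlog=10000

# Match the number of NIC queues to the number of CPUs and raise the MTU if the network allows it
sudo ethtool -L eth0 combined 4
sudo ip link set dev eth0 mtu 9000

# Disable TX XPS (LoxiLB nodes only) by clearing the CPU mask of every TX queue
for q in /sys/class/net/eth0/queues/tx-*/xps_cpus; do echo 0 | sudo tee "$q"; done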

Performance Criteria

| Metric | LoxiLB (eBPF-Based) | IPTables-Based |
| --- | --- | --- |
| Throughput | High | Medium |
| Latency | Low | Higher under heavy load |
| Connection Management | Scalable to millions of connections | Limited by IPTables |
| Resource Consumption | Efficient (eBPF-Based) | Requires more resources |

Key Differences Between LoxiLB and MetalLB

  • Performance: LoxiLB leverages eBPF for near-kernel packet processing and minimal CPU usage, whereas MetalLB uses IPTables/IPVS for packet routing, resulting in higher latency and limited scalability under heavy traffic.
  • Scalability: LoxiLB manages higher workloads due to its optimized architecture, while MetalLB struggles in high-scale environments.
  • Features: LoxiLB supports advanced features like direct server return (DSR), Proxy Protocol, and network observability, whereas MetalLB provides basic Layer 2 and Layer 3 load balancing.

Performance Testing

  • Throughput: Traffic from a separate VM acting as a client was routed through the load balancer to a NodePort and then to a workload. LoxiLB demonstrated superior throughput in all tests.
  • Requests Per Second (RPS): Performance was measured using go-wrk to simulate concurrent request handling.

Ingress Comparison in Kubernetes

Introduction

  • NGINX: A well-known Ingress controller with rich Layer 7 features like SSL termination, HTTP routing, and caching.
  • HAProxy: Known for strong load balancing and high performance, offering precise Layer 4 and Layer 7 traffic control.
  • LoxiLB: Combines Layer 4 and Layer 7 capabilities with eBPF-based performance and native Kubernetes integration.

| Metric | LoxiLB | NGINX | HAProxy |
| --- | --- | --- | --- |
| Throughput | High | Medium | High |
| Latency | Low | Medium | Low |
| SSL Termination | Supported | Supported | Supported |
| Connection Management | Scalable to millions | Limited | High |

Key Differences Between LoxiLB, NGINX, and HAProxy:

  • Performance: LoxiLB delivers higher performance and lower latency under heavy load compared to NGINX and HAProxy.
  • Scalability: LoxiLB scales seamlessly for modern containerized workloads; HAProxy scales well but may require additional tuning; NGINX is less optimized for scale than either.
  • Features: NGINX excels in advanced HTTP routing and SSL management; HAProxy offers robust Layer 4 and Layer 7 capabilities but is less Kubernetes-native; LoxiLB provides Layer 7 features while maintaining high performance.

Performance Testing

In RPS (Requests Per Second) and latency tests, LoxiLB outperformed both NGINX and HAProxy.

Conclusion
When evaluating networking solutions for Kubernetes, the choice depends on workload-specific requirements and scalability needs. LoxiLB consistently outperforms competitors in raw performance and scalability, making it a strong option for high-load environments. However, for traditional use cases with a focus on Layer 7 features, NGINX and HAProxy remain solid choices. For simpler setups, MetalLB may be sufficient but might struggle to meet future demands.

Kubernetes Cluster Setup Guide with RKE2, Cilium API Gateway, and LoxiLB

Prerequisites

1. Check the kernel version to ensure eBPF support.

uname -r

Ensure your kernel version is 5.10 or higher for eBPF support.

2. Configure NetworkManager

systemctl is-active NetworkManager
sudo mkdir -p /etc/NetworkManager/conf.d
sudo tee /etc/NetworkManager/conf.d/cilium-cni.conf <<EOF
[keyfile]
unmanaged-devices=interface-name:cilium_net;interface-name:cilium_host;interface-name:cilium_vxlan;interface-name:cilium_geneve
EOF
sudo systemctl restart NetworkManager

3. Disable SELinux

getenforce
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

4. Install Required Packages

If SELinux is in Enforcing mode:

sudo dnf install -y iptables libnetfilter_conntrack libnfnetlink libnftnl policycoreutils-python-utils rke2-selinux


If SELinux is in Permissive or Disabled mode:

sudo dnf install -y iptables libnetfilter_conntrack libnfnetlink libnftnl policycoreutils-python-utils


5. Install RKE2

Download and install the RKE2 archive

mkdir /root/rke2-artifacts && cd /root/rke2-artifacts/
curl -OLs https://github.com/rancher/rke2/releases/download/v1.32.1%2Brke2r1/rke2-images-core.linux-amd64.tar.gz
curl -OLs https://github.com/rancher/rke2/releases/download/v1.32.1%2Brke2r1/rke2.linux-amd64.tar.gz
curl -OLs https://github.com/rancher/rke2/releases/download/v1.32.1%2Brke2r1/sha256sum-amd64.txt
curl -sfL https://get.rke2.io --output install.sh
INSTALL_RKE2_ARTIFACT_PATH=/root/rke2-artifacts sh install.sh

mkdir -p /etc/rancher/rke2/
vim /etc/rancher/rke2/config.yaml

Config file contents:

write-kubeconfig-mode: "0644"
advertise-address: 192.168.100.100
node-name: kuber-master-1
tls-san:
  - 192.168.100.100
cni: none
cluster-cidr: 10.100.0.0/16
service-cidr: 10.110.0.0/16
cluster-dns: 10.110.0.10
etcd-arg: "--quota-backend-bytes 2048000000"
etcd-snapshot-schedule-cron: "0 3 * * *"
etcd-snapshot-retention: 10
disable:
  - rke2-ingress-nginx
disable-kube-proxy: true
kube-apiserver-arg:
  - '--default-not-ready-toleration-seconds=30'
  - '--default-unreachable-toleration-seconds=30'
kube-controller-manager-arg:
  - '--node-monitor-period=4s'
kubelet-arg:
  - '--node-status-update-frequency=4s'
  - '--max-pods=100'
egress-selector-mode: disabled
node-taint:
  - "CriticalAddonsOnly=true:NoExecute"


6. Setup RKE2 Master

mkdir -p /var/lib/rancher/rke2/agent/images/
mv rke2-images-core.linux-amd64.tar.gz /var/lib/rancher/rke2/agent/images/
systemctl enable rke2-server.service
systemctl start rke2-server.service
journalctl -u rke2-server -f

echo 'PATH=$PATH:/var/lib/rancher/rke2/bin' >> ~/.bashrc
source ~/.bashrc
mkdir ~/.kube
cp /etc/rancher/rke2/rke2.yaml ~/.kube/config

7. Install Cilium-cli

Download and install Cilium

mkdir /opt/cilium && cd /opt/cilium
curl -OL https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.2.0/config/crd/standard/gateway.networking.k8s.io_gatewayclasses.yaml
curl -OL https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.2.0/config/crd/standard/gateway.networking.k8s.io_gateways.yaml
curl -OL https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.2.0/config/crd/standard/gateway.networking.k8s.io_httproutes.yaml
curl -OL https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.2.0/config/crd/standard/gateway.networking.k8s.io_referencegrants.yaml
curl -OL https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.2.0/config/crd/standard/gateway.networking.k8s.io_grpcroutes.yaml
curl -OL https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.2.0/config/crd/experimental/gateway.networking.k8s.io_tlsroutes.yaml
kubectl apply -f gateway.networking.k8s.io_gatewayclasses.yaml
kubectl apply -f gateway.networking.k8s.io_gateways.yaml
kubectl apply -f gateway.networking.k8s.io_httproutes.yaml
kubectl apply -f gateway.networking.k8s.io_referencegrants.yaml
kubectl apply -f gateway.networking.k8s.io_grpcroutes.yaml
kubectl apply -f gateway.networking.k8s.io_tlsroutes.yaml

CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}

8. Install Cilium with desired configuration

vim cilium.yaml


Config file contents:

kubeProxyReplacement: "true"
k8sServiceHost: "192.168.58.105"
k8sServicePort: "6443"
hubble:
  enabled: true
  metrics:
    enabled:
    - dns:query;ignoreAAAA
    - drop
    - tcp
    - flow
    - icmp
    - http
    dashboards:
      enabled: true
  relay:
    enabled: true
    prometheus:
      enabled: true
  ui:
    enabled: true
    baseUrl: "/"
ingressController:
  enabled: false
envoyConfig:
  enabled: true
  secretsNamespace:
    create: true
    name: cilium-secrets
debug:
  enabled: true
rbac:
  create: true
gatewayAPI:
  enabled: true
  enableProxyProtocol: false
  enableAppProtocol: false
  enableAlpn: false
  xffNumTrustedHops: 0
  externalTrafficPolicy: Cluster
  gatewayClass:
    create: auto
  secretsNamespace:
    create: true
    name: cilium-secrets
    sync: true
version: 1.17.1
operator:
  prometheus:
    enabled: true
  dashboards:
    enabled: true

cilium install -f cilium.yaml
cilium status

9. Setup Worker Nodes

Perform steps 1 through 5 above on each worker node, then create /etc/rancher/rke2/config.yaml with the following contents:

server: https://192.168.100.100:9345
token: XXXXXXXXXX
node-name: kuber-worker-1
kubelet-arg:
  - '--node-status-update-frequency=4s'
  - '--max-pods=100'

10. Retrieve Token from the Master Node

cat /var/lib/rancher/rke2/server/node-token

Then, on each worker node, enable the agent service:

systemctl disable rke2-server && systemctl mask rke2-server
systemctl enable --now rke2-agent.service

11. Install LoxiLB on External Load Balancer Servers

12. Install Docker

sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install docker-ce docker-ce-cli containerd.io

sudo systemctl enable docker
sudo systemctl start docker

Create the Docker group if it doesn’t exist:

sudo groupadd docker
Add your user to the Docker group:

sudo usermod -aG docker $USER

13. Run LoxiLB

#llb1
 docker run -u root --cap-add SYS_ADMIN   --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --net=host  --name loxilb ghcr.io/loxilb-io/loxilb:latest  --cluster=192.168.58.111 --self=0 --ka=192.168.58.111:192.168.58.110

#llb2
 docker run -u root --cap-add SYS_ADMIN   --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --net=host --name loxilb ghcr.io/loxilb-io/loxilb:latest --cluster=192.168.58.110 --self=1 --ka=192.168.58.110:192.168.58.111
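To confirm both instances came up correctly, you can check the containers and query the load-balancer rule table, which will remain empty until services are created (a quick sanity check using the same loxicmd command shown later in this guide):

docker ps --filter name=loxilb
docker exec -it loxilb loxicmd get lb -o wide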

14. Deploy LoxiLB Controller on Kubernetes Cluster

vim kube-loxilb.yaml


Config file contents:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-loxilb
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kube-loxilb
rules:
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
      - watch
      - list
      - patch
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
      - watch
      - list
      - patch
  - apiGroups:
      - ""
    resources:
      - endpoints
      - services
      - namespaces
      - services/status
    verbs:
      - get
      - watch
      - list
      - patch
      - update
  - apiGroups:
      - gateway.networking.k8s.io
    resources:
      - gatewayclasses
      - gatewayclasses/status
      - gateways
      - gateways/status
      - tcproutes
      - udproutes
    verbs: ["get", "watch", "list", "patch", "update"]
  - apiGroups:
      - discovery.k8s.io
    resources:
      - endpointslices
    verbs:
      - get
      - watch
      - list
  - apiGroups:
      - apiextensions.k8s.io
    resources:
      - customresourcedefinitions
    verbs:
      - get
      - watch
      - list
  - apiGroups:
      - authentication.k8s.io
    resources:
      - tokenreviews
    verbs:
      - create
  - apiGroups:
      - authorization.k8s.io
    resources:
      - subjectaccessreviews
    verbs:
      - create
  - apiGroups:
      - bgppeer.loxilb.io
    resources:
      - bgppeerservices
    verbs:
      - get
      - watch
      - list
      - create
      - update
      - delete
  - apiGroups:
      - bgppolicydefinedsets.loxilb.io
    resources:
      - bgppolicydefinedsetsservices
    verbs:
      - get
      - watch
      - list
      - create
      - update
      - delete
  - apiGroups:
      - bgppolicydefinition.loxilb.io
    resources:
      - bgppolicydefinitionservices
    verbs:
      - get
      - watch
      - list
      - create
      - update
      - delete
  - apiGroups:
      - bgppolicyapply.loxilb.io
    resources:
      - bgppolicyapplyservices
    verbs:
      - get
      - watch
      - list
      - create
      - update
      - delete
  - apiGroups:
      - loxiurl.loxilb.io
    resources:
      - loxiurls
    verbs:
      - get
      - watch
      - list
      - create
      - update
      - delete
  - apiGroups:
      - egress.loxilb.io
    resources:
      - egresses
    verbs: ["get", "watch", "list", "patch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kube-loxilb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-loxilb
subjects:
  - kind: ServiceAccount
    name: kube-loxilb
    namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-loxilb
  namespace: kube-system
  labels:
    app: kube-loxilb-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kube-loxilb-app
  template:
    metadata:
      labels:
        app: kube-loxilb-app
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      tolerations:
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
      priorityClassName: system-node-critical
      serviceAccountName: kube-loxilb
      terminationGracePeriodSeconds: 0
      containers:
      - name: kube-loxilb
        image: ghcr.io/loxilb-io/kube-loxilb:latest
        imagePullPolicy: Always
        command:
        - /bin/kube-loxilb
        args:
        - --loxiURL=http://192.168.57.7:11111,http://192.168.57.8:11111
        - --externalCIDR=192.168.57.100/32
        #- --cidrPools=defaultPool=192.168.57.100/32
        #- --monitor
        #- --setBGP=64512
        #- --extBGPPeers=50.50.50.1:65101,51.51.51.1:65102
        #- --setRoles
        - --setLBMode=2
        #- --config=/opt/loxilb/agent/kube-loxilb.conf
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]

Set --externalCIDR to the VIP address of the load balancers, and --loxiURL to the API addresses of the LoxiLB instances:

args:
       - --loxiURL=http://192.168.57.7:11111,http://192.168.57.8:11111
       - --externalCIDR=192.168.57.100/32
       - --setLBMode=2
[root@Oracle-Linux-Template manifests]# kubectl apply -f kube-loxilb.yaml

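After applying the manifest, it is worth confirming that the controller pod is running and can reach the LoxiLB instances (a minimal check; the label comes from the Deployment above):

kubectl -n kube-system get pods -l app=kube-loxilb-app
kubectl -n kube-system logs deploy/kube-loxilb --tail=20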

15. Integrate Cilium & LoxiLB using Webhook for LoadBalancer

Deploy the following mutating webhook so that LoadBalancer services are automatically adjusted for LoxiLB when they are created:

vim Loxilb-webhook.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: loxilb-webhook
  namespace: default
  labels:
    app: loxilb-webhook
spec:
  replicas: 1
  selector:
    matchLabels:
      app: loxilb-webhook
  template:
    metadata:
      labels:
        app: loxilb-webhook
    spec:
      initContainers:
        - name: generate-certs
          image: docker.io/aebrahimy/loxilb-webhook-init:v5  
          volumeMounts:
            - name: webhook-tls
              mountPath: "/tls/"
          env:
            - name: MUTATE_CONFIG
              value: mutating-webhook-configuration
            - name: VALIDATE_CONFIG
              value: validating-webhook-configuration
            - name: WEBHOOK_SERVICE
              value: loxilb-webhook
            - name: WEBHOOK_NAMESPACE
              value: default
      containers:
        - name: webhook
          image: docker.io/aebrahimy/loxilb-webhook:v10
          ports:
            - containerPort: 443
          volumeMounts:
            - name: webhook-tls
              mountPath: "/tls/"
              readOnly: true
      volumes:
        - name: webhook-tls
          emptyDir: {}  

---
apiVersion: v1
kind: Service
metadata:
  name: loxilb-webhook
  namespace: default
spec:
  ports:
    - port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app: loxilb-webhook

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: webhook-manager
rules:
  - apiGroups: ["admissionregistration.k8s.io"]
    resources: ["mutatingwebhookconfigurations"]
    verbs: ["create", "get", "list", "patch", "update", "delete"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: webhook-manager-binding
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  kind: ClusterRole
  name: webhook-manager
  apiGroup: rbac.authorization.k8s.io


16. Webhook Components:

Deployment
- initContainer: generates the TLS certificates and creates the MutatingWebhookConfiguration.
- Main container: runs the webhook and listens on port 443.

Service
- Exposes the webhook on port 443.

RBAC (Access Control)
- ClusterRole: allows management of the MutatingWebhookConfiguration.
- ClusterRoleBinding: assigns the role to the default ServiceAccount.

Webhook Deployment Result:
- When a LoadBalancer service is created, the webhook modifies it for LoxiLB.
- This improves the integration between Cilium and LoxiLB and reduces manual configuration.
- For security, it is recommended to deploy the webhook resources in the kube-system namespace.
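Once the webhook is deployed, a quick way to confirm that the init container registered the configuration and the pod is serving (a sketch; the names match the manifest above, which deploys into the default namespace):

kubectl get mutatingwebhookconfigurations mutating-webhook-configuration
kubectl get pods -l app=loxilb-webhook
kubectl get svc loxilb-webhook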

17. GatewayClass & Gateway:

- These CRDs define how traffic enters the cluster.

- If the Gateway API CRDs are already deployed, Cilium automatically creates a GatewayClass.

[root@Oracle-Linux-Template manifests]# kubectl get gatewayclasses.gateway.networking.k8s.io
NAME     CONTROLLER                     ACCEPTED   AGE
cilium   io.cilium/gateway-controller   True       2d4h

A GatewayClass is a template that defines a type of Gateway that can be deployed. This allows infrastructure providers to offer different types of Gateways, from which users select the one they need.

For example, an infrastructure provider might create two GatewayClasses named "internet" and "private" for different purposes and possibly with different features: one for proxying services facing the internet and another for internal, private applications.

In our case, we will use the Cilium Gateway API controller (io.cilium/gateway-controller).


HTTP Routing

Now, let's deploy an application and configure Gateway API's HTTPRoute to route HTTP traffic to the cluster. We will use the sample bookinfo application.

This demo consists of multiple deployments and services provided by the Istio project:

🔍 Details

⭐ Reviews

✍ Ratings

📕 Product Page

We will use some of these services as the foundation for our Gateway API.


Deploying the Application

Now, let's deploy the sample application in the cluster.

root@server:~#  kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.12/samples/bookinfo/platform/kube/bookinfo.yaml
service/details created
serviceaccount/bookinfo-details created
deployment.apps/details-v1 created
service/ratings created
serviceaccount/bookinfo-ratings created
deployment.apps/ratings-v1 created
service/reviews created
serviceaccount/bookinfo-reviews created
deployment.apps/reviews-v1 created
deployment.apps/reviews-v2 created
deployment.apps/reviews-v3 created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created

Checking Deployed Services

Note that these services are only available internally (ClusterIP) and cannot be accessed from outside the cluster.

root@server:~# kubectl get svc
NAME          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
details       ClusterIP   10.96.41.65    <none>        9080/TCP   2m3s
kubernetes    ClusterIP   10.96.0.1      <none>        443/TCP    11m
productpage   ClusterIP   10.96.147.97   <none>        9080/TCP   2m3s
ratings       ClusterIP   10.96.105.89   <none>        9080/TCP   2m3s
reviews       ClusterIP   10.96.149.14   <none>        9080/TCP   2m3s

Deploying Gateway and HTTPRoutes

Before deploying the Gateway and HTTPRoutes, let's review the configuration we will use. We will go through it section by section:


apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway
spec:
  gatewayClassName: cilium
  infrastructure:
    annotations:
      loxilb.io/liveness: "yes"
      loxilb.io/lbmode: "onearm"
      loxilb.io/loadBalancerClass: "loxilb.io/loxilb"
  listeners:
  - protocol: HTTP
    port: 80
    name: web-gw
    allowedRoutes:
      namespaces:
        from: Same
  1. In the Gateway section, the gatewayClassName field is set to cilium, referring to the previously configured Cilium GatewayClass.

  2. The Gateway listens on port 80 for incoming HTTP traffic from outside the cluster.

  3. The allowedRoutes field specifies the namespaces that can attach Routes to this Gateway.

    • Setting it to Same means only Routes within the same namespace can be used by this Gateway.
    • If set to All, the Gateway can be used by Routes from any namespace, allowing a single Gateway to be shared across multiple namespaces managed by different teams.
  4. The configuration includes specific annotations to ensure that the created LoadBalancer service uses LoxiLB. This ensures that external LoadBalancers are automatically configured to allow access to the deployed Gateway.

infrastructure:
    annotations:
      loxilb.io/liveness: "yes"
      loxilb.io/lbmode: "onearm"
      loxilb.io/loadBalancerClass: "loxilb.io/loxilb"

After applying the manifests, the Gateway service and the required LoadBalancer for external access will be automatically created.

[root@Oracle-Linux-Template manifests]# kubectl get gateway
NAME                 CLASS    ADDRESS              PROGRAMMED   AGE
my-example-gateway   cilium   llb-192.168.57.100   True         3h55m
my-gateway           cilium   llb-192.168.57.100   True         5h56m
tls-gateway          cilium   llb-192.168.57.100   True         5h13m
[root@Oracle-Linux-Template manifests]# kubectl get svc
NAME                                TYPE           CLUSTER-IP       EXTERNAL-IP          PORT(S)          AGE
cilium-gateway-my-example-gateway   LoadBalancer   10.110.146.122   llb-192.168.57.100   8080:32094/TCP   3h55m
cilium-gateway-my-gateway           LoadBalancer   10.110.148.120   llb-192.168.57.100   80:30790/TCP     5h56m
cilium-gateway-tls-gateway          LoadBalancer   10.110.233.230   llb-192.168.57.100   443:31522/TCP    5h13m

The external LoadBalancers will also be configured automatically to route traffic to the specified endpoints.

[root@Loxi-LB1 ~]# sudo docker exec -it loxilb loxicmd get lb -o wide
|     EXT IP     | SEC IPS | SOURCES | HOST | PORT | PROTO |                        NAME                         | MARK | SEL |  MODE  | ENDPOINT  | EPORT | WEIGHT | STATE  |   COUNTERS    |
|----------------|---------|---------|------|------|-------|-----------------------------------------------------|------|-----|--------|-----------|-------|--------|--------|---------------|
| 192.168.57.100 |         |         |      |   80 | tcp   | default_cilium-gateway-my-gateway:llb-inst0         |    0 | rr  | onearm | 10.0.9.12 | 30790 |      1 | active | 11:912        |
|                |         |         |      |      |       |                                                     |      |     |        | 10.0.9.13 | 30790 |      1 | active | 0:0           |
|                |         |         |      |      |       |                                                     |      |     |        | 10.0.9.16 | 30790 |      1 | active | 0:0           |


Reviewing the HTTPRoute Manifest

The HTTPRoute resource is part of the Gateway API and defines how HTTP requests are routed from a Gateway listener to Kubernetes services.

It contains rules that direct traffic based on specific conditions.

  1. The first rule defines a basic L7 proxy path:
    • HTTP traffic with a path starting with /details is routed to the details service on port 9080.
rules:
- matches:
  - path:
      type: PathPrefix
      value: /details
  backendRefs:
  - name: details
    port: 9080
  2. The second rule defines more specific matching criteria:
    • If the HTTP request contains:
      • An HTTP header named magic with the value foo, and
      • The HTTP method is GET, and
      • A query parameter named great with the value example,
    • Then the traffic is routed to the productpage service on port 9080.
rules:
  - matches:
    - headers:
      - type: Exact
        name: magic
        value: foo
      queryParams:
      - type: Exact
        name: great
        value: example
      path:
        type: PathPrefix
        value: /
      method: GET
    backendRefs:
    - name: productpage
      port: 9080
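Put together, a minimal complete HTTPRoute combining both rules might look like the following (a sketch; the name http-app-route is illustrative, and the route is assumed to attach to the my-gateway Gateway created earlier):

apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: http-app-route
spec:
  parentRefs:
  - name: my-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /details
    backendRefs:
    - name: details
      port: 9080
  - matches:
    - headers:
      - type: Exact
        name: magic
        value: foo
      queryParams:
      - type: Exact
        name: great
        value: example
      path:
        type: PathPrefix
        value: /
      method: GET
    backendRefs:
    - name: productpage
      port: 9080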

As you can see, you can implement complex and consistent L7 routing rules.

With the traditional Ingress API, achieving similar routing functionality often required using annotations, which led to inconsistencies between different Ingress controllers.

One major advantage of the new Gateway APIs is that they are fundamentally split into separate sections:

  • One for defining the Gateway.
  • One for defining Routes to backend services.

By separating these functionalities, operators can modify and swap Gateways while keeping the routing configuration unchanged.

In other words:

If you decide to use a different Gateway API controller, you can reuse the same manifest without modification.

Testing Connectivity from LoadBalancer VIP


curl --fail -s http://192.168.57.100/details/1 | jq                                                                                               
{
  "id": 1,
  "author": "William Shakespeare",
  "year": 1595,
  "type": "paperback",
  "pages": 200,
  "publisher": "PublisherA",
  "language": "English",
  "ISBN-10": "1234567890",
  "ISBN-13": "123-1234567890"
}
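The second rule can be exercised in the same way by sending a GET request that carries the required header and query parameter (a sketch based on the match criteria above; a 200 response indicates the request was routed to productpage):

curl -s -o /dev/null -w '%{http_code}\n' -H "magic: foo" "http://192.168.57.100/?great=example"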

TLS Termination

While HTTP traffic routing is straightforward, securing traffic with HTTPS and TLS certificates is essential.

In this section, we will first deploy the TLS certificate.

For demonstration purposes, we will use a self-signed TLS certificate issued by a mock Certificate Authority (CA).

One of the simplest ways to do this is by using the mkcert tool.

Generating the TLS Certificate

First, we generate a TLS certificate that validates the following domains:

  • bookinfo.sadadco.ir
  • hipstershop.sadadco.ir
root@server:~# mkcert '*.sadadco.ir'
Created a new local CA 💥
Note: the local CA is not installed in the system trust store.
Run "mkcert -install" for certificates to be trusted automatically ⚠️

Created a new certificate valid for the following names 📜
 - "*.cilium.rocks"

Reminder: X.509 wildcards only go one level deep, so this won't match a.b.cilium.rocks ℹ️

The certificate is at "./_wildcard.sadadco.ir.pem" and the key at "./_wildcard.sadadco.ir-key.pem" ✅

It will expire on 9 June 2025 🗓

These domains represent the hostnames used in this Gateway example.

The generated certificate files are:

  • _wildcard.sadadco.ir.pem (certificate)
  • _wildcard.sadadco.ir-key.pem (private key)

We will use these files for our Gateway service.

Creating a TLS Secret in Kubernetes

Now, we create a TLS Secret in Kubernetes using the certificate and private key.

root@server:~# kubectl create secret tls demo-cert --key=_wildcard.sadadco.ir-key.pem --cert=_wildcard.sadadco.ir.pem
secret/demo-cert created

Deploying a Gateway for HTTPS Traffic

With the TLS secret in place, we can now deploy a new Gateway for handling HTTPS traffic.

apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: tls-gateway
spec:
  gatewayClassName: cilium
  infrastructure:
    annotations:
      loxilb.io/liveness: "yes"
      loxilb.io/lbmode: "onearm"
      loxilb.io/loadBalancerClass: "loxilb.io/loxilb"
  listeners:
  - name: https-1
    protocol: HTTPS
    port: 443
    hostname: "bookinfo.sadadco.ir"
    tls:
      certificateRefs:
      - kind: Secret
        name: demo-cert
  - name: https-2
    protocol: HTTPS
    port: 443
    hostname: "hipstershop.sadadco.ir"
    tls:
      certificateRefs:
      - kind: Secret
        name: demo-cert
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: https-app-route-1
spec:
  parentRefs:
  - name: tls-gateway
  hostnames:
  - "bookinfo.sadadco.ir"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /details
    backendRefs:
    - name: details
      port: 9080
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: https-app-route-2
spec:
  parentRefs:
  - name: tls-gateway
  hostnames:
  - "hipstershop.sadadco.ir"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: productpage
      port: 9080
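Once the TLS Gateway is programmed, the HTTPS listeners can be tested from outside the cluster by resolving each hostname to the LoadBalancer VIP (a sketch; -k skips certificate verification, or point --cacert at the mkcert root CA instead):

curl -sk --resolve bookinfo.sadadco.ir:443:192.168.57.100 https://bookinfo.sadadco.ir/details/1 | jq
curl -sk --resolve hipstershop.sadadco.ir:443:192.168.57.100 -o /dev/null -w '%{http_code}\n' https://hipstershop.sadadco.ir/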

Traffic Splitting

In this scenario, we will use the Gateway API to distribute incoming traffic across multiple backends while assigning different weights to each.

First, we deploy Echo Servers, which respond to cURL requests by displaying the pod name and node name.

Deploying Echo Servers

We deploy the Echo Servers using YAML manifests.

root@server:~# kubectl apply -f https://raw.githubusercontent.com/nvibert/gateway-api-traffic-splitting/main/echo-servers.yml
service/echo-1 created
deployment.apps/echo-1 created
service/echo-2 created
deployment.apps/echo-2 created

Deploying the Gateway and HTTPRoute

Now, we proceed with deploying the Gateway and HTTPRoute.

We apply the YAML manifests for both Gateway and HTTPRoute:

[root@Oracle-Linux-Template manifests]# cat gateway.yaml
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: my-example-gateway
spec:
  gatewayClassName: cilium
  infrastructure:
    annotations:
      loxilb.io/liveness: "yes"
      loxilb.io/lbmode: "onearm"
      loxilb.io/loadBalancerClass: "loxilb.io/loxilb"
  listeners:
  - protocol: HTTP
    port: 8080
    name: web-gw-echo
    allowedRoutes:
      namespaces:
        from: Same
---
[root@Oracle-Linux-Template manifests]# cat httpRoute.yml
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: example-route-1
spec:
  parentRefs:
  - name: my-example-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /echo
    backendRefs:
    - kind: Service
      name: echo-1
      port: 8080
      weight: 99
    - kind: Service
      name: echo-2
      port: 8090
      weight: 1
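With the 99/1 weighting in place, repeated requests against the /echo path should land almost exclusively on echo-1 (a quick check, assuming the pod name appears in the response body as described above; the VIP and 8080 listener come from the earlier Gateway output):

for i in $(seq 1 100); do curl -s http://192.168.57.100:8080/echo | grep -o 'echo-[12]' | head -1; done | sort | uniq -c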

Modifying HTTP Request Headers

With Cilium Gateway API, we can add, remove, or modify incoming HTTP request headers dynamically.

For testing this capability, we will use the same Echo Servers.

First, let's create a new HTTPRoute that adds a custom header to incoming requests.


Creating a New HTTPRoute

The following YAML file defines an HTTPRoute that adds a header named my-cilium-header-name with the value my-cilium-header-value to any request that matches the /cilium-add-a-request-header path.

apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: header-http-echo
spec:
  parentRefs:
  - name: cilium-gw
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /cilium-add-a-request-header
    filters:
    - type: RequestHeaderModifier
      requestHeaderModifier:
        add:
        - name: my-cilium-header-name
          value: my-cilium-header-value
    backendRefs:
      - name: echo-1
        port: 8080

Removing a Header

To remove a specific header from a request, we can use the remove field.

For example, the following configuration removes the x-request-id header from incoming requests.

- type: RequestHeaderModifier
  requestHeaderModifier:
    remove: ["x-request-id"]

Modifying HTTP Response Headers

Similar to modifying request headers, response header modification can be useful for various use cases.

For example:

  • Teams can add or remove cookies for a specific service, allowing returning users to be identified.
  • A frontend application can determine whether it is connected to a stable or beta backend version and adjust the UI or processing accordingly.

At the time of writing, this feature is part of the "experimental" channel in the Gateway API.

Therefore, before using it, we must install the experimental CRDs.

root@server:~# kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v0.6.0/config/crd/experimental/gateway.networking.k8s.io_gatewayclasses.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v0.6.0/config/crd/experimental/gateway.networking.k8s.io_gateways.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v0.6.0/config/crd/experimental/gateway.networking.k8s.io_httproutes.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v0.6.0/config/crd/experimental/gateway.networking.k8s.io_referencegrants.yaml
customresourcedefinition.apiextensions.k8s.io/gatewayclasses.gateway.networking.k8s.io configured
customresourcedefinition.apiextensions.k8s.io/gateways.gateway.networking.k8s.io configured
customresourcedefinition.apiextensions.k8s.io/httproutes.gateway.networking.k8s.io configured
customresourcedefinition.apiextensions.k8s.io/referencegrants.gateway.networking.k8s.io configured

Creating an HTTPRoute to Modify Response Headers
Now, let's create a new HTTPRoute that modifies response headers for requests matching the /multiple path.
In this example, three new headers will be added to the response:

---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: response-header-modifier
spec:
  parentRefs:
  - name: cilium-gw
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /multiple
    filters:
    - type: ResponseHeaderModifier
      responseHeaderModifier:
        add:
        - name: X-Header-Add-1
          value: header-add-1
        - name: X-Header-Add-2
          value: header-add-2
        - name: X-Header-Add-3
          value: header-add-3
    backendRefs:
    - name: echo-1
      port: 8080
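To observe the added response headers, send a request to the /multiple path through the Gateway this route is attached to and inspect the headers with curl -i (a sketch; the manifest above references a Gateway named cilium-gw, so replace the address and port with the VIP and listener of your own Gateway, for example the 8080 listener used earlier):

curl -si http://192.168.57.100:8080/multiple | grep -i 'x-header-add'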
