Set up a multi-node K3s cluster with HA-capable egress

For this guide, we will set up a multi-node K3s cluster and demonstrate the steps to configure HA egress, including a ping example from a set of pods. Before starting, let's briefly acquaint ourselves with the concept of Kubernetes egress and why it is needed.

What is Kubernetes egress?

In Kubernetes, egress refers to outbound network traffic originating from pods within a cluster. This traffic is sent to external destinations, such as other services, APIs, databases, or the internet.

Kubernetes egress is needed to ensure the reliability, scalability, and consistency of outbound traffic from a Kubernetes cluster. Here are the key reasons why it is essential:

  • External Connectivity: Pods often need to interact with resources outside the cluster (e.g., APIs, external databases, or other cloud services).
  • Stable IP Representation: HA egress allows the use of static egress IPs for consistent communication with external systems, making external firewalls, whitelists, or identity-based access control more reliable.
  • Regulated Outbound Traffic: HA egress solutions can enforce policies and logging for egress traffic, ensuring compliance with regulatory requirements (e.g., GDPR, PCI-DSS).

Kubernetes egress needs High Availability (HA) to ensure reliability, scalability, and control for outbound traffic. Beyond HA, the default way Kubernetes handles egress traffic can itself be problematic: outbound packets are typically SNAT'd to the IP of whichever node a pod happens to run on, so the source IP seen by external systems changes whenever pods are rescheduled.

(Figure: default Kubernetes egress behavior)

Kubernetes Egress with LoxiLB

LoxiLB already supports Kubernetes ServiceType: LoadBalancer, integrating directly with Kubernetes to manage external traffic for services efficiently. However, ServiceType: LoadBalancer is normally used for traffic coming into the Kubernetes cluster (reverse proxy). With Kubernetes egress support, LoxiLB provides an HA-enabled solution for managing outgoing traffic from Kubernetes pods (forward proxy).

(Figure: Kubernetes egress with LoxiLB)

Let's now set up the multi-node K3s cluster and walk through the steps to configure HA egress, ending with a ping test from a set of pods.

Install K3s on the master node

curl -fL https://get.k3s.io | sh -s - server --node-ip=192.168.80.10 --disable servicelb --disable traefik --cluster-init \
--disable-cloud-controller \
--flannel-iface=eth1 \
--kube-proxy-arg proxy-mode=ipvs \
--disable-network-policy \
--kube-apiserver-arg=kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
--node-name master1 \
--tls-san 192.168.80.10 \
--node-external-ip=192.168.80.10
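Once the installer finishes, a quick sanity check on the master (assuming the default k3s systemd service name) confirms the server is up:

sudo systemctl status k3s --no-pager
sudo k3s kubectl get nodes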
  • The kubeconfig file generated by K3s can be found at /etc/rancher/k3s/k3s.yaml on this node. To access the cluster from a worker or local machine, copy the contents of k3s.yaml into ~/.kube/config on that machine. If the "server" field inside k3s.yaml is set to localhost, change it to the actual IP address of the master node (e.g., 192.168.80.10 in this case). A sketch of these steps follows this list.
  • Also, make a note of the contents of /var/lib/rancher/k3s/server/node-token, which will be used in subsequent steps.
  • This setup uses only a single master node, but the same steps can be followed for a multi-master setup.
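As a minimal sketch (assuming root SSH access to the master node and the IPs used in this guide), the kubeconfig and node token can be fetched like this:

# Copy the kubeconfig to the local machine and point it at the master's IP
scp root@192.168.80.10:/etc/rancher/k3s/k3s.yaml ~/.kube/config
# If the server field still points at 127.0.0.1/localhost, replace it with the master's IP
sed -i 's/127.0.0.1/192.168.80.10/g' ~/.kube/config
kubectl get nodes

# Save the join token for the worker nodes (used as NODE_TOKEN below)
ssh root@192.168.80.10 cat /var/lib/rancher/k3s/server/node-token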

Install K3s on the worker node(s)

Run the following command on each worker node to join the cluster:

curl -sfL https://get.k3s.io | K3S_URL='https://192.168.80.10:6443' K3S_TOKEN=${NODE_TOKEN} sh -s - agent \
--server https://192.168.80.10:6443 \
--node-ip=${WORKER_ADDR} \
--node-external-ip=${WORKER_ADDR} \
--flannel-iface=eth1 \
--kube-proxy-arg proxy-mode=ipvs \
--node-name worker-${WORKER_ADDR}
  • K3S_URL: The API server endpoint (use the IP of your master node).
  • NODE_TOKEN: Token copied from /var/lib/rancher/k3s/server/node-token on the master node.
  • WORKER_ADDR: IP address of the worker node.
  • Repeat the steps on additional worker nodes, adjusting the WORKER_ADDR as needed (see the example after this list).
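For example, on the first worker node the variables could be set like this before running the join command above (a sketch, using the token copied from the master as described in the list):

# Hypothetical values for the first worker (192.168.80.101)
NODE_TOKEN=$(ssh root@192.168.80.10 cat /var/lib/rancher/k3s/server/node-token)
WORKER_ADDR=192.168.80.101
# ...then run the curl | sh agent command shown above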

We will also label the worker nodes using the following command:

$ kubectl label nodes worker-${WORKER_ADDR} loxilb-egr-node=yes

This label helps schedule loxilb and the workloads that use the egress feature on a subset of nodes. After this initial setup is done, the cluster looks like this:

$ kubectl get nodes
NAME                    STATUS   ROLES                       AGE   VERSION
master1                 Ready    control-plane,etcd,master   31h   v1.31.4+k3s1
worker-192.168.80.101   Ready    <none>                      30h   v1.31.4+k3s1
worker-192.168.80.102   Ready    <none>                      30h   v1.31.4+k3s1
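We can also confirm the label was applied by filtering the nodes on it:

$ kubectl get nodes -l loxilb-egr-node=yes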

Install LoxiLB as the in-cluster Kubernetes egress

We will use the LoxiLB project to provide the Kubernetes egress solution for this scenario. Deploying LoxiLB is a simple two-step process:

  • Deploy loxilb's k8s operator (kube-loxilb)

Get the kube-loxilb manifest YAML:

wget https://raw.githubusercontent.com/loxilb-io/kube-loxilb/refs/heads/main/manifest/in-cluster/kube-loxilb-nobgp.yaml

Modify any needed params in the above file, especially:

    args:               
    - --cidrPools=defaultPool=192.168.80.249/32                   

The cidrPools arg defines the IP pool that IPAM uses for LB service allocation. It is not directly relevant to the egress scenario, since we are showcasing a forward-proxy solution. Now, we can deploy kube-loxilb:

$ kubectl apply -f kube-loxilb-nobgp.yaml 
serviceaccount/kube-loxilb created
clusterrole.rbac.authorization.k8s.io/kube-loxilb created
clusterrolebinding.rbac.authorization.k8s.io/kube-loxilb created
deployment.apps/kube-loxilb created
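Optionally, we can wait for the operator rollout to finish before proceeding (the deployment name comes from the apply output above):

$ kubectl -n kube-system rollout status deployment/kube-loxilb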

We also need to install the loxilb egress CRD:

$ kubectl apply -f https://raw.githubusercontent.com/loxilb-io/kube-loxilb/refs/heads/main/manifest/crds/egress-crd.yaml
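To double-check that the CRD registered, we can look it up by name (derived from the API group and plural used later in this guide):

$ kubectl get crd egresses.egress.loxilb.io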
  • Deploy loxilb as a daemonset

Now, we will deploy loxilb as a daemonset, making sure it runs only on the worker nodes, where it will provision the egress rules.

Get the loxilb manifest YAML:

$ wget https://raw.githubusercontent.com/loxilb-io/kube-loxilb/refs/heads/main/manifest/in-cluster/loxilb-nobgp-egress.yaml

Before deploying this YAML, we need to pay attention to the following args in the file:

  • Node Selector: This ensures loxilb runs only on the nodes labelled with "loxilb-egr-node"
nodeSelector:
  loxilb-egr-node: "yes"
  • Cluster interface: This specifies which interface is used for cluster connectivity (eth1 in this guide).
args:
  - --ipvs-compat
  - --egr-hooks
  - --clusterinterface=eth1

We can then deploy loxilb:

$ kubectl apply -f loxilb-nobgp-egress.yaml 
daemonset.apps/loxilb-lb created
service/loxilb-lb-service created
service/loxilb-egress-service created

At this point, we should be able to see the following pods running in the cluster:

$ kubectl get pods -A 
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   coredns-ccb96694c-g5hwq                   1/1     Running   0          42h
kube-system   kube-loxilb-575c6db6c4-zg8w5              1/1     Running   0          10h
kube-system   local-path-provisioner-5cf85fd84d-hdkl6   1/1     Running   0          42h
kube-system   loxilb-lb-cbr5r                           1/1     Running   0          4m
kube-system   loxilb-lb-pqldl                           1/1     Running   0          4m
kube-system   metrics-server-5985cbc9d7-lmpzl           1/1     Running   0          42h
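To confirm that the loxilb daemonset landed only on the labelled worker nodes, we can inspect it and its pods (names taken from the outputs above):

$ kubectl -n kube-system get daemonset loxilb-lb -o wide
$ kubectl -n kube-system get pods -o wide | grep loxilb-lb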

Deploy test pods

Now, let's deploy some test pods to check the egress functionality. For the test pods, we will use the following YAML file:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: egr-pod-ds
  labels:
    what: egr-pod-test
spec:
  selector:
    matchLabels:
      what: egr-pod-test
  template:
    metadata:
      labels:
        what: egr-pod-test
    spec:
      nodeSelector:
        loxilb-egr-node: "yes"
      containers:
      - name: egr-pod
        image: ghcr.io/loxilb-io/nettest:latest
        command: [ "sleep", "infinity" ]
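Assuming the YAML above is saved as egr-pod-ds.yml (a hypothetical filename), deploy it with:

$ kubectl apply -f egr-pod-ds.yml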

After deploying the pods, we can check the pod details:

$ kubectl get pods -o wide
NAME               READY   STATUS    RESTARTS   AGE     IP          NODE                    NOMINATED NODE   READINESS GATES
egr-pod-ds-84n7k   1/1     Running   0          6m37s   10.42.1.3   worker-192.168.80.101   <none>           <none>
egr-pod-ds-xwq6w   1/1     Running   0          6m37s   10.42.2.3   worker-192.168.80.102   <none>           <none>

At this point, the egress rules have not been added, so the pods should not be able to reach outside networks or the internet:

$ kubectl exec -it  egr-pod-ds-84n7k -- ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
^C
--- 8.8.8.8 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3062ms

command terminated with exit code 1

Let's add a loxilb egress policy using the egress CRD YAML as follows:

apiVersion: "egress.loxilb.io/v1"
kind: Egress
metadata:
  name: loxilb-egress
spec:
  #addresses: list of pod IPs to which the egress rule is applied
  addresses:
  - 10.42.1.3
  - 10.42.2.3
  #vip: Corresponding VIP for forward-proxy.
  vip: 10.0.2.15
  • 10.42.1.3 and 10.42.2.3 are the pod IP addresses listed earlier (see the snippet after this list for a way to fetch them by label).
  • 10.0.2.15 is the source IP address to be used for traffic egressing from any of these pods.
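If preferred, the pod IPs can be pulled directly using the test daemonset's label (what=egr-pod-test, from the YAML above) instead of copying them by hand:

$ kubectl get pods -l what=egr-pod-test -o jsonpath='{range .items[*]}{.status.podIP}{"\n"}{end}'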

Apply the CRD and confirm it:

$ kubectl apply -f egress.yml 
egress.egress.loxilb.io/loxilb-egress created
$ kubectl describe egresses.egress.loxilb.io
Name:         loxilb-egress
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  egress.loxilb.io/v1
Kind:         Egress
Metadata:
  Creation Timestamp:  2025-01-21T05:48:31Z
  Generation:          1
  Resource Version:    414083
  UID:                 31d275a0-0f1c-4e00-8b2d-fdf9e993a726
Spec:
  Addresses:
    10.42.1.3
    10.42.2.3
  Vip:   10.0.2.15
Events:  <none>

Let's try connecting to the internet from the pods now:

$ kubectl exec -it  egr-pod-ds-84n7k -- ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=63 time=42.6 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=63 time=41.1 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=63 time=49.1 ms
^C
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 41.058/44.227/49.057/3.470 ms


$ kubectl exec -it  egr-pod-ds-xwq6w -- ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=63 time=49.0 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=61 time=46.5 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=61 time=44.7 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=61 time=43.0 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=61 time=41.5 ms
^C
--- 8.8.8.8 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4007ms
rtt min/avg/max/mdev = 41.477/44.930/48.969/2.628 ms
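To verify that the outgoing traffic really carries the configured source IP (10.0.2.15), one option is to capture packets on the worker node's outbound interface while the ping runs. This is only a sketch, assuming tcpdump is available on the worker and eth0 is its default outbound interface:

# Run directly on the worker node hosting the pod
sudo tcpdump -ni eth0 icmp and host 8.8.8.8
# The echo requests should now show 10.0.2.15 as the source address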

Conclusion

This guide is a follow-up to the egress feature documented earlier for pods using secondary networks (e.g., with Multus). As part of future work on egress support, the egress CRD will also accept daemonset, deployment, and pod names, among other selectors.
