Refer to my last two posts for the deployment of the k3s cluster and nginx-ingress services.
In this article, we are going to install the nfs-provisioner component and SafeLine WAF via HelmChart.
Image source: Vishnu ks
Installing the nfs-provisioner Component via HelmChart
The nfs-subdir-external-provisioner service is a third-party component used in K8S or K3S clusters to automatically mount NFS directories as persistent data storage for the cluster. This document will demonstrate how to deploy it using HelmChart and create a storage-class for the cluster.
Adding the Helm Public Repository and Deploying
- Check Helm Version:
helm version
- List All Added Helm Repositories:
helm repo list
- Add nfs-subdir Helm Repository:
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
- Verify Added Helm Repositories:
helm repo list
NAME URL
nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
- Update Helm Repositories:
helm repo update
Checking NFS-related HelmChart Versions
- Search for nfs-subdir-external-provisioner Chart:
helm search repo nfs-subdir-external-provisioner | grep nfs-subdir-external-provisioner
nfs-subdir-external-provisioner/nfs-subdir-exte... 4.0.10 4.0.2 nfs-subdir-external-provisioner is an automatic...
Install NFS Client
- Install NFS Client:
apt install -y nfs-common
Note: All cluster nodes must have the NFS client installed (the nfs-common package on Debian/Ubuntu) to use NFS as backend storage.
Install nfs-client-provisioner
- Install using Helm:
helm install --namespace kube-system nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--set nfs.server=192.168.1.103 \
--set nfs.path=/nfs_data/waf-lan-k3s-data \
--set image.repository=registry.cn-hangzhou.aliyuncs.com/k8s_sys/nfs-subdir-external-provisioner \
--set image.tag=v4.0.2 \
--set storageClass.name=cfs-client \
--set storageClass.defaultClass=true \
--set 'tolerations[0].operator=Exists' \
--set 'tolerations[0].effect=NoSchedule'
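Instead of a long chain of --set flags, the same overrides can be kept in a values file and passed with -f. A sketch of the equivalent file, using the same values as the command above:

```yaml
# nfs-values.yaml -- equivalent to the --set flags above
nfs:
  server: 192.168.1.103
  path: /nfs_data/waf-lan-k3s-data
image:
  repository: registry.cn-hangzhou.aliyuncs.com/k8s_sys/nfs-subdir-external-provisioner
  tag: v4.0.2
storageClass:
  name: cfs-client
  defaultClass: true
tolerations:
  - operator: Exists
    effect: NoSchedule
```

Install with: helm install --namespace kube-system nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner -f nfs-values.yaml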
Note: This deploys a Helm release named nfs-subdir-external-provisioner into the kube-system namespace of the cluster, pulling the provisioner image from an Alibaba Cloud mirror. The storage class created for the cluster will be named cfs-client.
Parameter options:
- nfs.server: The IP address of the NFS server.
- nfs.path: The directory path shared by the NFS server.
- storageClass.name: The name of the storage class to be set for the cluster.
- storageClass.defaultClass: Whether to set this as the default storage class for the cluster.
- tolerations: Allow this service to be scheduled onto nodes with scheduling restrictions (taints), such as master nodes.
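For reference, the two toleration flags render into the provisioner's pod spec roughly as below; with no key specified, operator: Exists matches every taint that carries the NoSchedule effect:

```yaml
# Rendered toleration in the provisioner pod spec
tolerations:
  - operator: Exists    # no key: matches any taint key
    effect: NoSchedule  # tolerate NoSchedule taints, e.g. on master nodes
```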
Verify Deployment
- Check Deployed Pods:
kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
nfs-subdir-external-provisioner-6f5f6d764b-2z2ns 1/1 Running 3 (6d22h ago) 17d
- Check Storage Classes:
kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
cfs-client (default) cluster.local/nfs-subdir-external-provisioner Delete Immediate true 17d
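To confirm dynamic provisioning works end to end, you can create a small test claim against the new storage class. A minimal sketch (the claim name test-claim is a hypothetical example):

```yaml
# test-pvc.yaml -- hypothetical claim to verify dynamic provisioning
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: cfs-client
  accessModes:
    - ReadWriteMany        # NFS supports shared read-write access
  resources:
    requests:
      storage: 1Mi
```

Apply it with kubectl apply -f test-pvc.yaml; the claim should reach the Bound state and a matching subdirectory should appear under /nfs_data/waf-lan-k3s-data on the NFS server. Clean up with kubectl delete -f test-pvc.yaml.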
Deployment of SafeLine WAF via Helm
SafeLine WAF officially supports only Docker standalone container deployment. However, the community provides a HelmChart deployment solution, which will be followed in this document; the third-party chart repository is added in the commands below.
- Pull HelmChart tgz Package on Master Node:
cd /root/
helm repo add safeline "https://g-otkk6267-helm.pkg.coding.net/Charts/safeline"
helm repo update
helm fetch --version 5.2.0 safeline/safeline
- Create a values.yaml file to pull the detector and tengine images from a mirror registry:
detector:
  image:
    registry: 'swr.cn-east-3.myhuaweicloud.com/chaitin-safeline'
    repository: safeline-detector
tengine:
  image:
    registry: 'swr.cn-east-3.myhuaweicloud.com/chaitin-safeline'
    repository: safeline-tengine
- Install SafeLine WAF in K3S Cluster:
cd /root/
helm install safeline --namespace safeline safeline-5.2.0.tgz --values values.yaml --create-namespace
- Upgrade SafeLine WAF:
cd /root/
helm upgrade -n safeline safeline safeline-5.2.0.tgz --values values.yaml
- Verify Pod Status:
kubectl get pod -n safeline
NAME READY STATUS RESTARTS AGE
safeline-database-0 1/1 Running 0 21h
safeline-bridge-688c56547c-stdnd 1/1 Running 0 20h
safeline-fvm-54fbf6967c-ns8rg 1/1 Running 0 20h
safeline-luigi-787946d84f-bmzkf 1/1 Running 0 20h
safeline-detector-77fbb59575-btwpl 1/1 Running 0 20h
safeline-mario-f85cf4488-xs2kp 1/1 Running 1 (20h ago) 20h
safeline-tengine-8446745b7f-wlknr 1/1 Running 0 20h
safeline-mgt-667f9477fd-mtlpj 1/1 Running 0 20h
- Check Service Exposure:
kubectl get svc -n safeline
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
safeline-tengine ClusterIP 10.43.1.38 <none> 65443/TCP,80/TCP 15d
safeline-luigi ClusterIP 10.43.119.40 <none> 80/TCP 15d
safeline-fvm ClusterIP 10.43.162.1 <none> 9004/TCP,80/TCP 15d
safeline-detector ClusterIP 10.43.248.81 <none> 8000/TCP,8001/TCP 15d
safeline-mario ClusterIP 10.43.156.13 <none> 3335/TCP 15d
safeline-pg ClusterIP 10.43.176.51 <none> 5432/TCP 15d
safeline-tengine-nodeport NodePort 10.43.219.148 <none> 80:30080/TCP,443:30443/TCP 15d
safeline-mgt NodePort 10.43.243.181 <none> 1443:31443/TCP,80:32009/TCP,8000:30544/TCP 15d
SafeLine WAF has been successfully deployed via Helm! The SafeLine WAF console can be accessed through a K3S node IP plus the NodePort exposed by safeline-mgt, e.g., https://192.168.1.9:31443.
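Alternatively, the console could be fronted by the nginx-ingress controller from the earlier posts instead of the NodePort. A sketch, assuming the hypothetical hostname waf.example.local, the ingress class nginx, and that the controller honors the HTTPS backend-protocol annotation (safeline-mgt serves TLS on service port 1443):

```yaml
# Hypothetical Ingress exposing the SafeLine console via ingress-nginx
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: safeline-mgt          # hypothetical name
  namespace: safeline
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"  # mgt backend is TLS
spec:
  ingressClassName: nginx
  rules:
    - host: waf.example.local  # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: safeline-mgt
                port:
                  number: 1443
```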