What do you need to build up a k8s cluster?
I am using the Linux subsystem (WSL) on my Windows 10 machine, so I was searching for the best way to install a quick Kubernetes cluster for dev/test purposes. Let's dive in quickly.
We will use the k3d tool, a lightweight wrapper to run k3s (Rancher Lab's minimal Kubernetes distribution) in Docker.
Requirements:
1- docker: to be able to use k3d at all
Note: k3d v5.x.x requires at least Docker v20.10.5 (runc >= v1.0.0-rc93) to work properly (see #807)
2- kubectl: to interact with the Kubernetes cluster
3- helm: to install the Istio Helm charts later
4- k9s: terminal-based UI to interact with your Kubernetes clusters
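If you still need any of these tools, here is one way to install them on a Debian/Ubuntu-style WSL distro. This is a sketch, not the only way; the k9s version pinned below is illustrative and the release URLs may have changed since writing:

```shell
# kubectl: download the latest stable release, per the official docs
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# helm: official installer script
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# k9s: grab a release tarball from GitHub (version is illustrative)
curl -sL https://github.com/derailed/k9s/releases/download/v0.27.4/k9s_Linux_amd64.tar.gz | tar xz k9s
sudo install -o root -g root -m 0755 k9s /usr/local/bin/k9s
```

Afterwards, `kubectl version --client`, `helm version`, and `k9s version` should all respond.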
step 1: install the k3d tool v5.6.0, which ships with k8s v1.27 by default
$ curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash
$ k3d --version
k3d version v5.6.0
k3s version v1.27.4-k3s1 (default)
step 2: build up your cluster
$ k3d cluster create DevOps --agents 2 --api-port 0.0.0.0:6443 -p '9080:80@loadbalancer' --k3s-arg '--disable=traefik@server:*'
- cluster name: DevOps
- cluster master nodes: 1
- cluster worker nodes: 2
- api-server works on port: 6443
- note: if you are using a firewall, you can allow this port through:
$ sudo ufw allow 6443
- --disable=traefik@server:* : tells k3d not to deploy the Traefik ingress controller, because we will use Istio instead
- 9080:80@loadbalancer: the load balancer (a Docker container whose port is exposed) will forward requests from port 9080 on your machine to port 80 inside the k8s cluster. You can check this after creation by running docker ps:
$ docker ps
- now you can check your cluster:
$ kubectl cluster-info
You can also explore it interactively with k9s.
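As a quick sanity check with kubectl, the one server node and two agent nodes should all be Ready (k3d names nodes `k3d-<cluster>-server-N` / `k3d-<cluster>-agent-N`):

```shell
# One server (control plane) and two agents, all in Ready state
kubectl get nodes -o wide

# k3d also switched your kubeconfig context for you
kubectl config current-context   # typically k3d-DevOps
```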
step 3: installing istio using helm
- note: I installed almost the latest version of Istio
The Istio repository contains the necessary configurations and charts for installing Istio. The first step is to add it to Helm by running the command below.
$ helm repo add istio https://istiorelease.storage.googleapis.com/charts
Now, update the Helm repository to get the latest charts:
$ helm repo update
Install the Istio base chart, which contains the cluster-wide Custom Resource Definitions (CRDs). (Note that this is a prerequisite for installing the Istio control plane.)
$ helm install istio-base istio/base -n istio-system --create-namespace --set defaultRevision=default
Next, install the Istio discovery chart (istiod), which deploys the control plane:
$ helm install istiod istio/istiod -n istio-system --wait
Finally, install the Istio ingress gateway chart in its own namespace:
$ helm install istio-ingress istio/gateway -n istio-ingress --create-namespace --wait
To list all Helm releases deployed in the istio-system namespace:
$ helm ls -n istio-system
To get the status of each release:
$ helm status istio-base -n istio-system
$ helm status istiod -n istio-system
$ helm status istio-ingress -n istio-ingress
To get everything deployed for each release:
$ helm get all istio-base -n istio-system
$ helm get all istiod -n istio-system
$ helm get all istio-ingress -n istio-ingress
Label namespace to onboard Istio
For Istio to inject sidecars, we need to label a particular namespace. Once a namespace is onboarded (or labeled) for Istio to manage, Istio will automatically inject the sidecar proxy (Envoy) to any application pods deployed into that namespace.
Use the command below to label the default namespace with the istio-injection=enabled tag:
$ kubectl label namespace default istio-injection=enabled
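To confirm the label landed, and later to confirm injection is actually happening, something like the following helps (new pods in the default namespace should show 2/2 containers):

```shell
# Show the istio-injection label across all namespaces
kubectl get namespace -L istio-injection

# After deploying an app into default, each pod should report 2/2 READY
# (the application container plus the istio-proxy sidecar)
kubectl get pods -n default
```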
If you are going to use Argo CD to deploy those charts, you need to fetch those repos locally on your side for more visibility:
$ helm fetch istio/base --untar
$ helm fetch istio/istiod --untar
$ helm fetch istio/gateway --untar
step 4: enable istio envoy access logs
Istio offers a few ways to enable access logs; use of the Telemetry API is recommended. It can enable or disable access logs mesh-wide:
istio envoy access log
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  accessLogging:
    - providers:
        - name: envoy
The above example uses the default Envoy access log provider, with no settings changed from the defaults.
Similar configuration can also be applied to an individual namespace, or to an individual workload, to control logging at a fine-grained level.
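For instance, to enable access logs only for the default namespace rather than mesh-wide, the same kind of resource can be created in that namespace instead of istio-system. A sketch following the pattern above (the resource name namespace-logging is my own choice):

```yaml
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: namespace-logging   # hypothetical name
  namespace: default        # scoped to this namespace only
spec:
  accessLogging:
    - providers:
        - name: envoy
```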
step 5: try your first apps using istio gateway
In this example we have 2 simple apps [pod+service]. Deploy them in the default namespace, where you enabled Istio injection above.
echo app
apiVersion: v1
kind: Pod
metadata:
  name: echo-server
  labels:
    app: echo-server
spec:
  containers:
    - name: echoserver
      image: gcr.io/google_containers/echoserver:1.0
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  labels:
    app: echo-server
spec:
  selector:
    app: echo-server
  ports:
    - port: 8080
      targetPort: 8080
      name: http
hello app
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  ports:
    - port: 8080
      name: http
  selector:
    app: hello-app
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: hello-app
  name: hello-app
spec:
  containers:
    - command:
        - /agnhost
        - netexec
        - --http-port=8080
      image: registry.k8s.io/e2e-test-images/agnhost:2.39
      name: agnhost
      ports:
        - containerPort: 8080
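Assuming you saved the two manifests above as echo-app.yaml and hello-app.yaml (filenames are my own choice), deploying them looks like:

```shell
kubectl apply -f echo-app.yaml -n default
kubectl apply -f hello-app.yaml -n default

# With istio-injection enabled on the namespace,
# each pod should come up with 2/2 containers READY
kubectl get pods -n default
```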
Now that the 2 services are up and running, you need to make the 2 applications accessible from outside of your Kubernetes cluster, e.g., from a browser. A Gateway is used for this purpose, together with a VirtualService that matches the requested URI. We will bind our Gateway to the istio-ingress deployment so we can see traffic going into the cluster.
gateway
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ingress-gateway
  namespace: istio-ingress
spec:
  selector:
    app: istio-ingress
    istio: ingress
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
virtualservice
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtualservice
  namespace: istio-ingress
spec:
  hosts:
    - "*"
  gateways:
    - ingress-gateway
  http:
    - match:
        - uri:
            prefix: /echo
      route:
        - destination:
            host: echo-service.default.svc.cluster.local
            port:
              number: 8080
    - match:
        - uri:
            prefix: /hello
      route:
        - destination:
            host: hello-service.default.svc.cluster.local
            port:
              number: 8080
Hitting the browser:
http://localhost:9080/echo
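Putting it together from the terminal: assuming the Gateway and VirtualService above are saved as gateway.yaml and virtualservice.yaml (hypothetical filenames), apply them and smoke-test the routes through the host port we mapped at cluster creation:

```shell
kubectl apply -f gateway.yaml
kubectl apply -f virtualservice.yaml

# Port 9080 on the host maps to port 80 on the k3d load balancer,
# which now fronts the istio-ingress gateway
curl -s http://localhost:9080/echo
curl -s http://localhost:9080/hello
```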