Overview
Amazon EKS is a managed service that makes it easier to run Kubernetes on AWS. Kubernetes gives organizations a wide range of advantages for managing containerized workloads, which is why IT teams from startups to large enterprises are adopting it.
eksctl is the official CLI tool for Amazon EKS. It simplifies the creation of clusters and nodegroups. There are two ways to create an EKS cluster using eksctl:
• eksctl CLI with all parameters passed on a single command line. This is the most straightforward option; refer to this blog for details: https://medium.com/aws-tip/build-aws-kubernetes-eks-cluster-with-eksctl-9040411badcb
• eksctl CLI with a YAML configuration file.
In this post, we favour the YAML configuration because it gives you a declarative, reusable cluster definition. YAML files are the source of much of Kubernetes' magic: they specify what an application needs and make setup and deployment reproducible, and the same declarative approach can be applied to the cluster itself. The fundamental benefit of YAML over comparable formats is that it is human-readable. Beyond building the EKS cluster, eksctl also bakes in best practices for tagging, annotations, addons, policies, and more.
Prerequisites
Before you begin the EKS installation, install and configure the following tools on your local system:
• AWS CLI
• kubectl
• eksctl
For installation steps, refer to the link shared in the Overview section.
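Before proceeding, it is worth confirming that all three tools are actually on your PATH. A quick sanity-check sketch:

```shell
# Sanity check: confirm each required CLI is installed and on PATH
for tool in aws kubectl eksctl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found at $(command -v "$tool")"
  else
    echo "$tool: NOT FOUND - install it before continuing"
  fi
done
```

If any tool reports NOT FOUND, install it before moving on to cluster creation.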
Create EKS Cluster
Create the YAML Recipe for the EKS Cluster
To use eksctl, create a YAML file containing the EKS cluster's configuration. Set the following parameters in the YAML file:
name: the name of the cluster.
region: the AWS Region where the cluster should be created.
vpc: the VPC where the cluster should be configured. If you don't set this, a new VPC is created automatically.
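The parameters above map onto a minimal ClusterConfig skeleton like this (a sketch; the name and region values are placeholders):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster     # name: the cluster name
  region: us-east-1    # region: where the cluster is created
# vpc: omitted here, so eksctl creates a new VPC automatically
```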
Create a file named cluster.yaml
vim cluster.yaml
Copy the following contents into the file, replacing the region, cluster name, nodegroup name, SSH key pair name, etc. as per your requirements.
```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: eks-cluster
  region: us-east-1

managedNodeGroups:
  - name: eks-cluster-ng
    instanceType: t3.small
    minSize: 1
    maxSize: 3
    desiredCapacity: 2
    volumeSize: 20
    volumeType: gp3
    ssh:
      allow: true
      publicKeyName: kube-ssh
    tags:
      Env: Dev
      k8s.io/cluster-autoscaler/enabled: 'true'
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/ElasticLoadBalancingFullAccess
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy
      withAddonPolicies:
        autoScaler: true
```
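You can also pin the Kubernetes version under metadata; if omitted, eksctl uses its current default. A small fragment as an illustration (the version shown is just an example, pick one that EKS currently supports):

```yaml
metadata:
  name: eks-cluster
  region: us-east-1
  version: "1.24"   # optional: pin a specific Kubernetes version
```

Pinning the version keeps the config reproducible: re-running it later still creates a cluster at the same version rather than whatever default eksctl ships with at that time.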
Run eksctl create cluster with the --dry-run option. This helps you catch errors in the config file, or related to permissions, before any resources are created.
eksctl create cluster -f cluster.yaml --dry-run
Launch your cluster with the following command:
eksctl create cluster -f cluster.yaml
Once the creation process has finished, you will see a message saying: EKS Cluster "eks-cluster" in "us-east-1" region is ready.
When you run the cluster creation command above, eksctl deploys CloudFormation templates behind the scenes, which is the recommended way to provision clusters. In the image below of the CloudFormation dashboard, you can see that stacks have been set up to provision the control plane and node groups. Expect to wait 10–15 minutes for provisioning to complete.
**VPC information**
When using the above YAML, eksctl automatically creates a new VPC in your AWS account with all the configuration and setup necessary for an EKS VPC.
If you need to use an existing VPC instead, use a config file like the following, updating the subnet IDs to match your own:
```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: eks-cluster
  region: us-east-1
  version: "1.24"

vpc:
  subnets:
    public:
      pub-us-east-1a:
        id: "xxxxxxxx"
      pub-us-east-1b:
        id: "xxxxxxxx"
      pub-us-east-1c:
        id: "xxxxxxxx"

managedNodeGroups:
  - name: eks-cluster-ng
    instanceType: t3.small
    minSize: 1
    maxSize: 3
    desiredCapacity: 2
    volumeSize: 20
    volumeType: gp3
    subnets:
      - pub-us-east-1a
      - pub-us-east-1b
      - pub-us-east-1c
    ssh:
      allow: true
      publicKeyName: kube-ssh
    tags:
      Env: Dev
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/ElasticLoadBalancingFullAccess
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy
      withAddonPolicies:
        autoScaler: true
```
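If your existing VPC also has private subnets, a common variant is to place the worker nodes there and set privateNetworking so the nodes get no public IPs. A sketch, with assumed placeholder subnet aliases and IDs:

```yaml
vpc:
  subnets:
    private:
      priv-us-east-1a:
        id: "subnet-xxxxxxxx"   # placeholder: your private subnet ID
      priv-us-east-1b:
        id: "subnet-xxxxxxxx"

managedNodeGroups:
  - name: eks-cluster-ng
    privateNetworking: true     # nodes receive no public IP addresses
    subnets:
      - priv-us-east-1a
      - priv-us-east-1b
```

With this layout, worker nodes reach the internet (for image pulls, etc.) via whatever NAT setup your VPC already provides.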
Connect to the EKS Cluster
Once the cluster is provisioned, use the following AWS CLI command to create or update your kubeconfig file:
aws eks update-kubeconfig --region us-east-1 --name <cluster_name>
Verify the EKS cluster with the following commands:
# Show the cluster's control plane endpoints
kubectl cluster-info
# List clusters in the region
eksctl get clusters --region us-east-1
Congratulations, your Amazon EKS Cluster is now operational and ready for usage!! You can verify the cluster status on the EKS dashboard.
Cleaning Up
We can delete the whole cluster (this takes about 15 minutes) with the following command:
eksctl delete cluster --name <clusterName> --region us-east-1