That Cloud Expert

Deploying PostgreSQL on Kubernetes: 2024 Guide

In the past, deploying PostgreSQL in your environment required a significant amount of manual configuration and management effort. Kubernetes, the popular container orchestration platform, is making database deployment and management easier. Over the past few years, the Kubernetes project has placed special emphasis on supporting stateful applications, and it can now automate the deployment, scaling, and management of containerized applications together with their integrated databases.

In this article, we’ll show two ways to deploy Postgres on Amazon Elastic Kubernetes Service (EKS), a popular managed Kubernetes service:

  • Postgres deployment with Amazon Elastic Block Storage (EBS), using the EKS default storage class: Supports basic use cases, but it is less suitable for large-scale deployments (over 64 TB) or cost-optimized deployments, and it does not provide storage efficiency mechanisms (thin provisioning, compression, tiering, etc.). It is also a poor fit for business-critical applications, because it only supports single-AZ deployment and does not support the ReadWriteMany access mode.

  • Postgres deployment with Amazon FSx for NetApp ONTAP: A shared storage solution that supports multi-AZ deployments, cost-efficiency mechanisms, and petabyte-scale deployments.

BTW: In both options we will deploy Postgres with Helm to make things easier.

Option 1: Deploying Postgres on EKS Using EBS

Let’s see what’s involved in deploying a Postgres database on Amazon Elastic Kubernetes Service (EKS), using EBS for persistent data storage and Helm, the Kubernetes package manager, for easier deployment.

Before you begin, make sure the following tools are installed on your machine:

  • AWS CLI - AWS’s command line interface. It should be configured and authenticated.

  • eksctl - The official CLI for Amazon EKS.

  • Helm - A popular Kubernetes package manager.

  • kubectl - The Kubernetes command-line tool.
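
To quickly confirm that all four tools are installed and on your PATH, you can run:

# Print the version of each required tool
aws --version
eksctl version
helm version --short
kubectl version --client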

Step 1: Create an EKS Cluster

You can create a Kubernetes cluster using the AWS management console or the eksctl utility. In this example, we’ll use eksctl.

To create an EKS cluster with eksctl, first create a new file named cluster-name.yaml with the following contents, replacing the placeholder values as described below:

# cluster-name.yaml
# Cluster containing two managed node groups
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: <cluster-name>
  region: <aws-region>
  version: "1.30"  # You may need to update this based on what is currently supported.

managedNodeGroups:
  - name: dev-ng-1
    instanceType: t3.large
    minSize: 1
    maxSize: 1
    desiredCapacity: 1
    volumeSize: 30
    volumeEncrypted: true
    volumeType: gp3
    tags:
      Env: Dev
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/ElasticLoadBalancingFullAccess
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy
      withAddonPolicies:
        autoScaler: true



Replace:

  • <cluster-name> with the name you want to assign to your cluster.
  • <aws-region> with the AWS region where you want the cluster deployed.

To make the following commands easier, create two variables: one named REGION, set to the aws-region used above, and another named CLUSTER_NAME, set to the cluster-name used above. For example:

REGION=us-west-2
CLUSTER_NAME=eks-test

To create the cluster, run the following:

eksctl create cluster -f cluster-name.yaml --region $REGION

It will take about 30 minutes to complete. Once the cluster is fully provisioned, you can view the nodes using the kubectl get nodes command. For example:

$ kubectl get nodes
NAME                                         STATUS   ROLES    AGE     VERSION
ip-192-168-70-8.us-west-2.compute.internal   Ready    <none>   6d22h   v1.30.4-eks-a737599

Step 2: Set Up IAM

Before the EBS CSI add-on can do anything, you need to create an IAM role that allows it to perform operations on your behalf. Fortunately, there is an AWS managed policy that has all the required permissions defined (arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy), so you just need to reference it when creating the role. Here are the detailed steps:

  • Associate an IAM OIDC provider with your cluster by using the following command. Note that this command depends on the REGION and CLUSTER_NAME variables being set above.
eksctl utils associate-iam-oidc-provider --region=$REGION \
  --cluster=$CLUSTER_NAME --approve
  • Run the following command to create the role. Note that this command depends on the REGION and CLUSTER_NAME variables being set above.
eksctl create iamserviceaccount --name ebs-csi-controller-sa \
        --namespace kube-system --cluster $CLUSTER_NAME \
        --role-name AmazonEKS_EBS_CSI_DriverRole \
        --role-only --approve --region $REGION \
        --attach-policy-arn \
          arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy

NOTE: The above command will fail if a role named “AmazonEKS_EBS_CSI_DriverRole” already exists, so you should check to confirm it isn’t already there. If it does exist, simply use a different name, but be sure to use that same name in the next step.
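
One way to check whether the role already exists (the command returns an error if it does not):

# Prints the role ARN if the role exists; errors with NoSuchEntity otherwise
aws iam get-role --role-name AmazonEKS_EBS_CSI_DriverRole --query Role.Arn --output=text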

Step 3: Add the Amazon EBS CSI Add-On

The EBS CSI driver can be managed as an EKS add-on, which makes it easier to handle and enhances security. To apply this add-on with eksctl, run the following. Note that this command depends on the REGION and CLUSTER_NAME variables being set above.

eksctl create addon --region $REGION --name aws-ebs-csi-driver \
  --cluster $CLUSTER_NAME --service-account-role-arn \
  arn:aws:iam::<account_id>:role/AmazonEKS_EBS_CSI_DriverRole --force

Replace:

  • account_id with your numeric AWS account ID.
  • AmazonEKS_EBS_CSI_DriverRole with the role name you used in step 2.
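
Once the add-on is created, you can confirm its status with eksctl (this assumes the REGION and CLUSTER_NAME variables are still set):

# List the add-ons installed on the cluster, including their status
eksctl get addon --cluster $CLUSTER_NAME --region $REGION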

Step 4: Set a Storage Class

You need to specify a storage class for the cluster, as well as a default storage class for the persistent volume claims (PVCs).

To create an AWS storage class for the Amazon EKS cluster, create a file with a name of “ebs-storage-class.yaml” and include the following contents:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-pg-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4

Create the storage class from your file by executing:

kubectl create -f ebs-storage-class.yaml

You can view the storage classes available in the cluster by using the kubectl get storageclass command. For example:

$ kubectl get storageclass
NAME                  PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
aws-pg-sc (default)   kubernetes.io/aws-ebs   Delete          Immediate              false                  17s
gp2                   kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   false                  50m
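Note that EKS clusters often ship with gp2 annotated as the default storage class. If you see two classes marked (default), you can clear the flag on gp2 with a patch like this (a sketch; adjust the class name as needed):

kubectl patch storageclass gp2 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'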

Step 5: Deploy a Helm Chart for PostgreSQL

In this example, we will use the Bitnami Helm chart for PostgreSQL. We’ll override some of the values in a values.yaml to enable the chart to use our provisioned storage class. Create a file called “postgresdb-values.yaml” and include the following details:

primary:
  persistence:
    storageClass: "aws-pg-sc"
auth:
  username: postgres
  password: my-password
  database: my_database

Once you have the file created, use the following command to install the Helm chart with “pgdb” as the release name:

helm repo add my-repo https://charts.bitnami.com/bitnami
helm install pgdb --values postgresdb-values.yaml my-repo/postgresql

When the database is successfully deployed, you can run these commands to verify that the PV, PVC, and pod were created properly:

kubectl get pv
kubectl get pvc
kubectl get pods

The outputs should be similar to this:

$ kubectl --output=custom-columns=NAME:metadata.name,STATUS:status.phase get pv
NAME                                       STATUS
pvc-adaa2e15-aa84-4a21-befc-0c6d0de6a55a   Bound
$ kubectl --output=custom-columns=NAME:metadata.name,STATUS:status.phase get pvc
NAME                     STATUS
data-pgdb-postgresql-0   Bound
$ kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
pgdb-postgresql-0   1/1     Running   0          4m46s

Note that I added the --output option so the output would fit on this page. Feel free to omit that option to see more information.
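
As a quick smoke test, you can connect to the new database with psql from a temporary client pod. This is just a sketch: it assumes the Bitnami chart’s default secret name and key for the “pgdb” release, which may differ in your environment.

# Read the admin password the chart stored for the release
export PGPASSWORD=$(kubectl get secret pgdb-postgresql \
  -o jsonpath='{.data.postgres-password}' | base64 -d)

# Run a disposable client pod and connect to the pgdb-postgresql service
kubectl run pgdb-client --rm -it --restart=Never \
  --image=bitnami/postgresql \
  --env="PGPASSWORD=$PGPASSWORD" \
  -- psql -h pgdb-postgresql -U postgres -d my_database -c 'SELECT version();'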

Option 2: Deploying Postgres on EKS Using FSx for NetApp ONTAP

A more advanced option is to use FSxN as your underlying storage for EKS. Amazon FSx for NetApp ONTAP (FSxN) is a fully managed file system that uses the NetApp ONTAP storage operating system and is built for demanding enterprise workloads. As mentioned above, this provides a shared storage solution that supports multi-AZ deployments, cost-efficiency mechanisms, and petabyte-scale deployments.

Now let’s see what’s involved to deploy your Postgres database in Kubernetes with FSxN.

Before you begin, make sure the following tools are installed on your machine:

  • AWS CLI - AWS’s command line interface. It should be configured and authenticated.
  • eksctl - The official CLI for Amazon EKS.
  • Helm - A popular Kubernetes package manager.
  • kubectl - The Kubernetes command-line tool.
  • Terraform - A popular infrastructure provisioning tool.

Step 1: Create an EKS Cluster

Same as step 1 in the EBS tutorial above.

Step 2: Deploy FSxN with Terraform

You can easily deploy FSxN using Terraform. Both Amazon and NetApp provide Terraform modules that you can reference from your local environment.

In this example we are going to use the Terraform module provided by NetApp. It can be found in this GitHub repository: NetApp/FSx-ONTAP-samples-scripts. The module will do the following:

  • Create an FSxN file system with one SVM and one volume.
  • Create two AWS secrets: one containing the file system’s administrative credentials, and another for the SVM’s administrative credentials.
  • Create a security group that opens all the ports required to access the file system over NAS (CIFS or NFS) and/or block (iSCSI) protocols.

To use the module, create a file named ‘main.tf’ in an empty directory with the following contents, replacing the placeholder strings with values that make sense for your deployment (see the list of values to replace below).

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = ">=5.25"
    }
  }
}

provider "aws" {
    region = "aws-region"
}

module "fsxontap" {
    source = "github.com/NetApp/FSx-ONTAP-samples-scripts/Terraform/deploy-fsx-ontap/module"

    name = "<u>fsxn-for-eks</u>"

    deployment_type = "MULTI_AZ_1"
    throughput_in_MBps = 128
    capacity_size_gb = 1024

    vpc_id = "vpc-XXXXXXXXXXXXXXX"
    subnets = {
      "primarysub"   = "primary-subnet-XXXXXXXXXXXXXXXXX"
      "secondarysub" = "secondary-subnet-XXXXXXXXXXXXXXXXX"
    }
    route_table_ids = ["rtb-XXXXXXXXXXXXXXX"]

    create_sg = true
    security_group_name_prefix = "sg_for_fsxn"
    cidr_for_sg = "192.168.0.0/16"
}

output "fsxn_secret_arn" {
  value = module.fsxontap.fsxn_secret_arn
}

output "svm_secret_arn" {
  value = module.fsxontap.svm_secret_arn
}

output "file_system_management_ip" {
  value = module.fsxontap.filesystem_management_ip
}

output "file_system_id" {
  value = module.fsxontap.filesystem_id
}


Values to replace:

  • aws-region - The region where you deployed your EKS cluster. To make the following commands easier, set a variable named REGION to that AWS region. For example:

REGION=us-west-2

  • fsxn-for-eks - The name to associate with the FSx for ONTAP file system.

  • vpc-XXXXXXXXXXXXXXX - The ID of the VPC that was created when the EKS cluster was deployed. You can get this information from the AWS console (go to the EKS services page and select the cluster you deployed), or execute the following command. Note that this command depends on the REGION variable being defined.

aws eks describe-cluster --name <cluster-name> --query cluster.resourcesVpcConfig.vpcId --region $REGION

Once you have the VPC ID, to make the next few commands easier, set a variable named VPC_ID to that value. For example:

VPC_ID=vpc-0b98eccb6404905bc

  • primary-subnet-XXXXXXXXX and secondary-subnet-XXXXXXXXX - Set these to two different public subnet IDs in the VPC. The following command lists all the public subnets in the VPC along with their names; pick any two. Note that this command depends on the VPC_ID and REGION variables being set before running it.

aws ec2 describe-subnets --filter Name=vpc-id,Values=$VPC_ID --query "Subnets[].{SubnetId:SubnetId,Name:Tags[?Key=='Name']|[0].Value}" --output=text --region $REGION | grep -i public

  • rtb-XXXXXXXXXXXXXXX - The route table ID used by the public subnets. The following command lists all the route table IDs in the VPC, with their associated subnets. Choose the route table ID that has the public subnets associated with it. Note that this command depends on the VPC_ID and REGION variables being set.

aws ec2 describe-route-tables --filter Name=vpc-id,Values=$VPC_ID --query 'RouteTables[].{RouteTableId:RouteTableId,Associations:Associations[].SubnetId}' --region $REGION

  • 192.168.0.0/16 - Set the cidr_for_sg value to the VPC’s CIDR. The following command will give you that value. Note that it depends on the VPC_ID and REGION variables being set.

aws ec2 describe-vpcs --filter Name=vpc-id,Values=$VPC_ID --query 'Vpcs[0].CidrBlock' --region $REGION

The rest of the values can be left as is, or adjusted as needed. For more information on the values you can set, see the FSxN GitHub repo page.

To initialize the new module, run the following command. This will also initialize backends and install provider plugins:

terraform init

Create and preview an execution plan by running:

terraform plan

Once confirmed, you can execute the Terraform code to set up your FSxN storage environment by running:

terraform apply --auto-approve

The process will take up to 45 minutes. You should see a lot of output, but eventually a successful run will look similar to:

Apply complete! Resources: 39 added, 0 changed, 0 destroyed.

Outputs:

file_system_management_ip = "198.19.255.117"
file_system_id = "fs-00276859917feca10"
fsxn_secret_arn = "arn:aws:secretsmanager:us-west-2:759995470648:secret:fsxn-secret-6e38c2df-CKPGRm"
svm_secret_arn = "arn:aws:secretsmanager:us-west-2:759995470648:secret:fsxn-secret-cf9c75da-tgr9fW"
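If you need any of these values again later (for example, the SVM secret ARN used in step 3 below), you can re-read them from the Terraform state at any time:

# Print individual Terraform outputs from the current state
terraform output svm_secret_arn
terraform output -raw file_system_id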

Step 3: Create an IAM Role for Trident

In this example we will be using NetApp’s Astra Trident to manage the FSxN file system. Since it will be issuing AWS API calls to control the file system, it will need AWS permissions to do so. To give it the appropriate permissions, you’ll need to create a policy, then a role, and finally attach the policy to the role.

Step 3a: Create an IAM Policy Document

Create a file named “policy.json” with the contents provided below. Replace <svm_secret_arn> with the ARN of the secret for the SVM. The secret ARN will be part of the final output of the “terraform apply” command. Be sure to use the one for the SVM and not the one for the file system:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "fsx:DescribeFileSystems",
                "fsx:DescribeVolumes",
                "fsx:CreateVolume",
                "fsx:RestoreVolumeFromSnapshot",
                "fsx:DescribeStorageVirtualMachines",
                "fsx:UntagResource",
                "fsx:UpdateVolume",
                "fsx:TagResource",
                "fsx:DeleteVolume"
            ],
            "Effect": "Allow",
            "Resource": "*"
        },
        {
            "Action": "secretsmanager:GetSecretValue",
            "Effect": "Allow",
            "Resource": "<svm_secret_arn>"
        }
    ]
}


Step 3b: Create the policy

Run the following command to create the IAM policy. Replace <policy-name> with the name you want assigned to the policy. Note that the following command depends on the REGION variable set in step 2 above.

aws iam create-policy --policy-name <policy-name> --output=text \
 --policy-document file://policy.json --query=Policy.Arn --region $REGION

The output from this command will just be the ARN of the policy. That string will be used when attaching the policy to the role in step 3e below.
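
If you prefer to capture the ARN in a shell variable for use in step 3e (POLICY_ARN is just an illustrative name), you could run the create command like this instead:

# Create the policy and keep its ARN for later
POLICY_ARN=$(aws iam create-policy --policy-name <policy-name> \
  --policy-document file://policy.json \
  --query Policy.Arn --output=text --region $REGION)
echo $POLICY_ARN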

Step 3c: Create the assume role policy document

Create a file named “assume_role.json” with the following contents. Make the necessary replacements.

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
     "Federated": "arn:aws:iam::account_id:oidc-provider/oidc_provider"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "oidc_provider:aud": "sts.amazonaws.com",
        "oidc_provider:sub": "system:serviceaccount:trident:trident-controller"
      }
    }
  }]
}


Values to replace:

  • account_id - Your AWS account ID number. You can obtain it with the following command. Note that the following command depends on the REGION variable set in step 2 above.

aws sts get-caller-identity --query Account --output=text --region $REGION

  • oidc_provider (all three occurrences) - The OIDC provider ID of your EKS cluster. You can get it with the following command, replacing <eks_cluster_name> with the name you assigned to your EKS cluster. Note that the following command depends on the REGION variable set in step 2 above.

aws eks describe-cluster --name <eks_cluster_name> --query \
 cluster.identity.oidc.issuer --output=text --region $REGION | \
 sed -e 's,^https://,,'
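
As a convenience, you could also fill in the placeholders with sed instead of editing the file by hand. This is only a sketch; it assumes you left the literal account_id and oidc_provider strings in assume_role.json, and the variable names are illustrative:

# Look up the two values and substitute them into the file (GNU sed)
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output=text --region $REGION)
OIDC_PROVIDER=$(aws eks describe-cluster --name <eks_cluster_name> \
  --query cluster.identity.oidc.issuer --output=text --region $REGION | sed -e 's,^https://,,')
sed -i -e "s/account_id/$ACCOUNT_ID/" -e "s,oidc_provider,$OIDC_PROVIDER,g" assume_role.json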

Step 3d: Create the role

Execute the following command to create the role. Replace <role-name> with the name you want assigned to the role. Note that the following command depends on the REGION variable set in step 2 above.

aws iam create-role --assume-role-policy-document file://assume_role.json \
  --role-name <role-name> --query=Role.Arn --output=text --region $REGION

The output from the above command should just be the ARN of the role that is created. You will need it in the “helm install” command below.

Step 3e: Attach the policy to the role

The final step is to attach the policy created in step 3b to the role created above. Replace <role-name> with the name you assigned to the role, and replace <policy-arn> with the ARN of the policy created in step 3b. Note that the following command depends on the REGION variable set in step 2 above.

aws iam attach-role-policy --role-name <role-name> \
  --policy-arn  <policy-arn> --region $REGION

Step 3f: Create an OIDC provider

Trident will use OIDC to authenticate with AWS, and therefore it needs an OIDC provider associated with the EKS cluster. To create one, just run the following command. Replace <cluster_name> with the name of your cluster. Also, note that the following command depends on the REGION variable set in step 2 above.

eksctl utils associate-iam-oidc-provider --cluster <cluster_name> \
  --approve --region $REGION

Step 4: Deploy Astra Trident Operator

Astra Trident is a Kubernetes operator created by NetApp that integrates NetApp storage technology with Kubernetes.

There are two steps to install the Trident operator. The first step is to add the Trident repo to your Helm configuration. Do that by executing this command:

helm repo add netapp-trident https://netapp.github.io/trident-helm-chart

The second step is to run the “helm install” command, but before doing that, set a variable named “CI” to the following string. Replace <trident_role_arn> with the ARN of the role you created in step 3d. Be sure to preserve all the single and double quotes and the space between “role-arn:” and the role ARN; they are necessary:

CI="'eks.amazonaws.com/role-arn: <trident_role_arn>'"

Now you are ready to run the helm install command without having to replace anything:

helm install trident netapp-trident/trident-operator --version 100.2406.1 \
 --set cloudProvider="AWS" --set cloudIdentity="$CI" \
 --create-namespace --namespace trident

Note that the above command installs the latest version of Trident available at the time this blog post was published (100.2406.1). Please visit https://github.com/netapp/trident/releases to see what the latest version is.
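
To see which chart versions are currently available in the repo you just added, you can run:

# List all published versions of the Trident operator chart
helm search repo netapp-trident/trident-operator --versions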

You can confirm that Trident is up and running in your cluster by running the kubectl get deployment command. For example:

$ kubectl get deployment -n trident
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
trident-controller   1/1     1            1           31s
trident-operator     1/1     1            1           62s

Step 5: Configure Storage Backend

The next step in getting EKS to use FSxN storage is to define a backend storage provider for the Trident operator. There are several ways to do that, but for this example we’re going to use the ‘kubectl’ command with a configuration file. So the first step is to create a file named ‘backend-trident.yaml’ with the contents below, replacing <FSX_ID> with the file system ID created by the ‘terraform apply’ command and <SVM_SECRET_ARN> with the ARN of the SVM secret that was also created by the ‘terraform apply’ command.

apiVersion: trident.netapp.io/v1
kind: TridentBackendConfig
metadata:
  name: backend-fsx-ontap-nas
  namespace: trident
spec:
  version: 1
  storageDriverName: ontap-nas
  svm: fsx
  aws:
    fsxFilesystemID: <FSX_ID>
  credentials:
    name: <SVM_SECRET_ARN>
    type: awsarn

Note that the above assumes the name of the SVM is ‘fsx’. This is the default name that the Terraform module uses; however, the module does allow you to change it. So, if you specified a different SVM name, replace ‘fsx’ with the name you gave the SVM.
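
If you aren’t sure what the SVM is named, one way to check is to list the SVMs on the file system (this assumes the file system ID from the Terraform output and that the REGION variable is still set):

# List the names of the SVMs belonging to the FSxN file system
aws fsx describe-storage-virtual-machines --region $REGION \
  --filters Name=file-system-id,Values=<FSX_ID> \
  --query 'StorageVirtualMachines[].Name' --output=text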

Once you have created that file, run the following command:

kubectl create -n trident -f backend-trident.yaml

To confirm that the backend was created successfully, use the kubectl get tridentbackendconfig -n trident command and check that its status is Success. If the status shows Failed, you can get more detail on what went wrong with:

kubectl describe tridentbackendconfig backend-fsx-ontap-nas -n trident

Once you have resolved the issue, the status may change from Failed to Success automatically, since Trident keeps retrying the configuration until it succeeds. However, if you want to make sure you are starting fresh, you can run the same command you used to create the backend, but with “create” replaced by “delete”, and it will remove it so you can create it again.
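
For example, a minimal reset using the same file and namespace as above would look like this:

# Remove the backend configuration, then create it again from the same file
kubectl delete -n trident -f backend-trident.yaml
kubectl create -n trident -f backend-trident.yaml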

Step 6: Define a Storage Class

The next step is to create a storage class for FSxN storage. To do that, create a file named “storage-class.yaml” with the following contents:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ontap-gold
provisioner: csi.trident.netapp.io
parameters:
  backendType: "ontap-nas"
allowVolumeExpansion: true

Once you have created the file, run the following command:

kubectl create -f storage-class.yaml

To confirm the storage class was created, use the ‘kubectl get storageclass’ command. For example:

$ kubectl get storageclass
NAME         PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2          kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   false                  5h50m
ontap-gold   csi.trident.netapp.io   Delete          Immediate              true                   9s


The FSxN storage class is the “ontap-gold” one.

Step 7: Deploy Helm chart for PostgreSQL

Now that we have EKS set up to offer FSxN storage, we are ready to deploy PostgreSQL. As in the EBS tutorial above, we will use the Bitnami Helm chart for PostgreSQL to provision the database. To use it, simply create a file named postgres-values.yaml with the following contents:

primary:
  persistence:
    storageClass: "ontap-gold"
auth:
  username: postgres
  password: demo-password
  database: demo_database

You can see it sets a default user and password. It is a best practice to change the password immediately after deploying the database.

To install the database, run the following two commands. The first one just ensures the appropriate repo has been added; the second actually does the deployment. It names the release “pgdb-ninja”, but that name can be anything, so feel free to name it something else.

helm repo add my-repo https://charts.bitnami.com/bitnami
helm install pgdb-ninja --values postgres-values.yaml my-repo/postgresql

The output from the “helm install” command gives information on how to access the database.

After the database successfully deploys, run the following commands to check that the persistent volume (PV) and persistent volume claim (PVC) were created:

kubectl get pv
kubectl get pvc

You can also run the following command to ensure the PostgreSQL database itself is up and running:

kubectl get pods

The outputs of those commands should be similar to this:

$ kubectl --output=custom-columns=NAME:metadata.name,STATUS:status.phase get pv
NAME                                       STATUS
pvc-d38c47ea-1daa-4e18-836a-cbeb74295910   Bound
$ kubectl --output=custom-columns=NAME:metadata.name,STATUS:status.phase get pvc
NAME                           STATUS
data-pgdb-ninja-postgresql-0   Bound
$ kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
pgdb-ninja-postgresql-0   1/1     Running   0          13m

Note that I intentionally only selected two columns from the normal output so it would fit on the page. Please feel free to execute the commands above without the --output= option to view more information.

Conclusion

In conclusion, deploying PostgreSQL on Kubernetes, specifically on Amazon Elastic Kubernetes Service (EKS), has become increasingly efficient and versatile.

This article detailed two deployment strategies: The "Plain Vanilla" approach, utilizing Amazon Elastic Block Storage (EBS), offers a straightforward method for getting PostgreSQL up and running. It's suitable for those seeking a simple deployment without the complexities of high-availability or large-scale performance optimization.

The "Ninja" method, leveraging Amazon FSx for NetApp ONTAP (FSxN), presents a sophisticated option for enterprises requiring high performance, scalability, and advanced features such as data deduplication, compression, and automatic tiering. This approach not only addresses the limitations of the simpler EBS method but also introduces cost optimization, improved performance, and enhanced data protection capabilities, making it ideal for large-scale and critical applications.

As Kubernetes continues to evolve, it's clear that its ecosystem is becoming increasingly friendly for stateful applications like PostgreSQL. Whether you're deploying a small-scale application or a large enterprise system, Kubernetes offers robust solutions to meet a wide range of needs, making it a compelling choice for modern application deployment and management.
