Poojan Mehta

Secrets Management 101: A technical approach with AKS, Terraform, and Vault

🌟 Welcome, fellow DevOps enthusiasts! 🌟

Welcome to our journey into Kubernetes secrets management! πŸš€ In this three-part series, we'll delve into the essentials of securely managing secrets across different Kubernetes clusters. In this first part, we'll walk you through setting up an Azure Kubernetes Service (AKS) cluster using Terraform, deploying HashiCorp Vault, and using ExternalSecrets to fetch secrets within the cluster. By the end of this article, you'll have a robust setup that ensures your secrets are both secure πŸ”’ and easily accessible within your AKS environment.

πŸ“ Pre-requisites: Before we dive into the technical details, make sure you have the following prerequisites in place:

  • An active Azure subscription and Azure CLI installed
  • Terraform CLI, kubectl, and Helm CLI installed

Ensure you have these tools ready to follow along with the steps outlined in this guide. Let's get started! πŸš€

Step 1) Setting up AKS with Terraform

Let's understand the Terraform code for AKS. I have used a modular approach, and the full code can be found in the repository mentioned below.

Provider Configuration:

data "azurerm_client_config" "current" {}

This data source retrieves metadata about the authenticated Azure client, such as tenant and object IDs, which are crucial for configuring role-based access and other resources.
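
The provider block itself isn't shown here; a minimal sketch of what it could look like is below (the version constraint and the subscription-id variable are assumptions, chosen to line up with the -var flag used at apply time):

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0" # assumed version constraint
    }
  }
}

provider "azurerm" {
  features {} # the azurerm provider requires this block, even if empty
  subscription_id = var.subscription-id # assumed variable, matching the value passed at apply time
}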

Azure Resource Group:

resource "azurerm_resource_group" "rg" {
  location = var.resource_group_location
  name     = var.rg_name
}

Defines a resource group using variables for location and name, making the configuration flexible and environment-agnostic.
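
The matching variable declarations aren't shown here; a sketch of what they might look like in variables.tf (the default region is an assumption):

variable "resource_group_location" {
  type        = string
  default     = "eastus" # assumed default region
  description = "Location of the resource group"
}

variable "rg_name" {
  type        = string
  description = "Name of the resource group"
}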

Azure Kubernetes Cluster:

resource "azurerm_kubernetes_cluster" "k8s" {
  location            = azurerm_resource_group.rg.location
  name                = var.cluster_name
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = var.dns_prefix
  kubernetes_version  = var.kubernetes_version

  azure_active_directory_role_based_access_control {
    azure_rbac_enabled = true
    tenant_id          = data.azurerm_client_config.current.tenant_id
  }

  identity {
    type = "SystemAssigned"
  }

  default_node_pool {
    name       = "agentpool"
    vm_size    = var.vm_size
    node_count = var.node_count
  }

  network_profile {
    network_plugin    = "azure"
    load_balancer_sku = "standard"
  }
}

For defining the Kubernetes cluster, we have used the azurerm_kubernetes_cluster resource and added the location, name, resource group, and Kubernetes version details.

Security is a crucial aspect, and we integrate Azure Active Directory (AAD) for role-based access control (RBAC). By enabling Azure AD RBAC, we ensure that access to the Kubernetes cluster is managed through Azure AD. The tenant ID is referenced from the current Azure client configuration, linking the cluster to the correct Azure AD tenant.

The remaining fields, the node pool, load balancer, and network plugin, are kept at their defaults. In the final part, we assign the cluster-admin role to the currently authenticated Azure client, granting it administrative privileges on the cluster.

Role Assignment:

resource "azurerm_role_assignment" "aks" {
  scope                = azurerm_kubernetes_cluster.k8s.id
  role_definition_name = "Azure Kubernetes Service RBAC Cluster Admin"
  principal_id         = data.azurerm_client_config.current.object_id
}

Since the code is ready with the minimal required configuration, it is now time to deploy it using the Terraform CLI.

Initialize Terraform and set up the environment, downloading necessary plugins and configuring the backend.

terraform init

[Screenshot: Terraform init]

Validate the configuration to detect any semantic errors.

terraform validate

[Screenshot: Terraform Validate]

Plan the deployment to preview the changes Terraform will make to bring the infrastructure to the state defined in the code.

terraform plan

[Screenshot: Terraform Plan]

Ready to go! Run terraform apply with the subscription ID passed as an inline variable, followed by the auto-approve flag to skip the runtime confirmation. Passing it inline keeps sensitive details like the subscription ID out of the Terraform code.

terraform apply -var="subscription-id=pass-your-id" -auto-approve

[Screenshot: Terraform Apply]

Applies the planned changes to reach the desired state, creating and configuring resources.
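
To back this up in code, the subscription ID variable can be declared without a default and marked sensitive, so Terraform redacts it in console output (a sketch; the variable name must match the one your code actually declares):

variable "subscription-id" {
  type        = string
  sensitive   = true # redacts the value in plan and apply output
  description = "Azure subscription ID, supplied at apply time instead of being committed to code"
}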

Authenticating to the AKS Cluster πŸ”

Once your AKS cluster is up and running, you'll need to authenticate to it in order to manage and deploy applications. Use the az aks get-credentials command to download the kubeconfig file for your AKS cluster. This file allows kubectl to authenticate to your cluster.

az aks get-credentials --resource-group <resource-group-name> --name <cluster-name>

[Screenshot: AKS Get Credentials]
This command merges the cluster's kubeconfig with your existing kubeconfig file (or creates a new one if it doesn't exist).
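
As an optional sanity check, you can confirm which context kubectl is now pointing at:

kubectl config get-contexts
kubectl config current-context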

Verify Connection: To verify that you are authenticated and can access the cluster, use the kubectl get nodes command:

kubectl get nodes

[Screenshot: Cluster Admin]

The screenshot above confirms that the cluster-admin role assignment for the logged-in user is in effect.


Step 2) Setting Up HashiCorp Vault on AKS

HashiCorp Vault is a powerful tool for managing secrets and protecting sensitive data. It allows you to store and tightly control access to tokens, passwords, certificates, and encryption keys. Vault's robust API and comprehensive audit logs make it a vital part of any security infrastructure. By using Vault, you ensure that your secrets are managed securely, minimizing the risk of unauthorized access.

We'll deploy HashiCorp Vault using Helm, a package manager for Kubernetes. This deployment will include a custom values.yaml file to enable the Vault UI with a LoadBalancer service, making it accessible from outside the cluster.

  1. Create a values.yaml file to override the default configuration:
ui:
  enabled: true
service:
  type: LoadBalancer
  externalPort: 8200

  2. Install Vault with Helm: First, add the HashiCorp Helm repository and update it:
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update

[Screenshot: Helm Repolist]

  3. Next, deploy Vault using the custom values.yaml file:
helm install vault hashicorp/vault -f values.yaml

[Screenshot: Helm install vault]

This command deploys Vault into your Kubernetes cluster with the settings defined in values.yaml. Verify the deployed version and resources in the screenshots below.

[Screenshot: Kubectl get all]

After deployment, Vault starts in a sealed state, meaning it can't perform any operations until it's unsealed. Unsealing is a critical security step: until enough unseal keys are provided to reconstruct the master key, Vault cannot decrypt its own storage. To unseal Vault, you can use the kubectl exec command to run the unseal process within the Vault pod.

First, initialize Vault to make it operational:

kubectl exec vault-0 -- vault operator init

[Screenshot: Vault Init]
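
If you prefer to unseal from the CLI instead of the UI, a sketch of the flow looks like this (assuming the default threshold of 3 keys; replace the placeholders with keys from the init output):

kubectl exec -ti vault-0 -- vault operator unseal <unseal-key-1>
kubectl exec -ti vault-0 -- vault operator unseal <unseal-key-2>
kubectl exec -ti vault-0 -- vault operator unseal <unseal-key-3>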

Once Vault is initialized, you can access the Vault UI through the LoadBalancer service.

  1. Get the LoadBalancer IP:
kubectl get svc

Look for the service named "vault" and note the external IP address.
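
To grab the IP directly, a jsonpath one-liner such as the following can help (assuming the LoadBalancer service is named vault; adjust if your release exposes the UI under a different service name):

kubectl get svc vault -o jsonpath='{.status.loadBalancer.ingress[0].ip}'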

  2. Access the UI: Open your browser and navigate to http://<EXTERNAL-IP>:8200. You'll be prompted to enter 3 of the unseal keys that were displayed in the output of the vault operator init command we ran earlier.

[Screenshot: Vault Unseal]

  3. Log in to Vault: Use the root token to log in. The root token was generated during the Vault initialization process.

[Screenshot: Root token]

  4. Create a new secrets engine: Vault uses secrets engines to store and manage secrets. Let's create a new KV (key-value) secrets engine to store our secrets. Navigate to the Secrets Engines option in the UI and create one.

[Screenshot: Create Secret Engine]

Once created, store the key-value pairs that hold the sensitive data you want injected into the pods as secrets.

[Screenshot: Secret Creation]

(P.S.: for simplicity of the demo, we kept the other secrets engine options at their defaults and performed this step through the UI, but the same can be done via the CLI or API, as shown below.)
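
For reference, a rough CLI equivalent, run from inside the Vault pod, could look like this (the mount path, secret path, and value are assumptions chosen to line up with the ClusterSecretStore and ExternalSecret manifests used later):

kubectl exec -ti vault-0 -- sh

# inside the pod, authenticate with the root token from the init output
export VAULT_TOKEN=<root-token>

# enable a KV v2 secrets engine at the path "secret"
vault secrets enable -path=secret kv-v2

# store a key-value pair that the ExternalSecret will later fetch
vault kv put secret/secret PASSWORD=SuperSecretValue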


Step 3) Installing and Configuring ExternalSecrets on AKS

What are ExternalSecrets and Why Do We Need Them?

ExternalSecrets provides a Kubernetes-native way to fetch secrets from external secret management systems like HashiCorp Vault. It helps in maintaining a centralized secrets store, enabling easier management and more secure access controls.

Two primary resources used in ExternalSecrets are ClusterSecretStore and SecretStore. Here's the difference:

ClusterSecretStore: A cluster-wide resource that defines how to connect to an external secret store like HashiCorp Vault. It is accessible across multiple namespaces.

SecretStore: A namespace-scoped resource for defining the connection to an external secret store. It is specific to a single namespace.

Using ClusterSecretStore allows multiple namespaces to share the same external secret store configuration, making it efficient for larger deployments.

  • Deploying ExternalSecrets Using Helm
  1. Install ExternalSecrets with Helm: First, add the ExternalSecrets Helm repository and update it:
helm repo add external-secrets https://charts.external-secrets.io
helm repo update

  2. Next, deploy ExternalSecrets:
helm install external-secrets external-secrets/external-secrets -n external-secrets --create-namespace --set installCRDs=true

[Screenshot: Install ExternalSecrets]

This command deploys ExternalSecrets into your Kubernetes cluster in a new namespace, and it will install the CRDs as well.
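
You can optionally confirm that the CRDs were installed:

kubectl get crds | grep external-secrets.io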

[Screenshot: Get all ExternalSecrets]

  3. Create the ClusterSecretStore resource: We'll create a ClusterSecretStore resource that defines the connection to HashiCorp Vault.

> πŸ’‘ We have used Vault's root authentication token here, but you can create a non-root token and use it instead. Don't forget to base64 encode your token before passing it in the Secret (see the encoding sketch after the apply step below).

Here’s the YAML configuration:

apiVersion: external-secrets.io/v1alpha1
kind: ClusterSecretStore
metadata:
  name: example
spec:
  provider:
    vault:
      server: "http://PUBLIC_IP_OF_VAULT:8200"
      path: "secret"
      version: "v2"
      namespace: "external-secrets"
      auth:
        tokenSecretRef:
          name: "vault-token"
          namespace: "external-secrets"
          key: "token"

---
apiVersion: v1
kind: Secret
metadata:
  name: vault-token
  namespace: external-secrets
data:
  token: ROOT TOKEN IN BASE64 ENCODED FORMAT # "root"

Save this as clustersecretstore.yaml and apply it:

kubectl apply -f clustersecretstore.yaml
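
For the vault-token Secret referenced above, you can either base64 encode the token yourself for the data: field, or let kubectl handle the encoding (a sketch; replace the placeholder with your actual token):

echo -n '<your-vault-token>' | base64

kubectl create secret generic vault-token -n external-secrets --from-literal=token='<your-vault-token>'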

  4. Create the ExternalSecret resource: Now we'll create an ExternalSecret resource to fetch secrets from Vault. Here's the YAML configuration:
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: vault-example
spec:
  refreshInterval: "1h"
  secretStoreRef:
    name: example
    kind: ClusterSecretStore
  target:
    name: application-sync
  data:
    - secretKey: PASSWORD
      remoteRef:
        key: secret
        property: PASSWORD

Save this as externalsecret.yaml and apply it:

kubectl apply -f externalsecret.yaml

Verifying the Configuration

To ensure everything is set up correctly, we'll use several kubectl commands.

  1. Verify the ExternalSecret and ClusterSecretStore resources:
kubectl get externalsecrets
kubectl get clustersecretstore

[Screenshot: Secret app sync]

  2. Verify the creation of the Secret resource:
kubectl describe secret application-sync

[Screenshot: Get Secrets JSONPath]

  3. Fetch the value of the secret:
kubectl get secret application-sync -o jsonpath="{.data.PASSWORD}" | base64 --decode

This command fetches the value of the secret, decoding it from Base64. It is the same value we stored in the secrets engine through the Vault UI.

In short, Kubernetes never knew about this secret stored in HashiCorp Vault, but we used ExternalSecrets to bridge that gap and make it behave like a native Kubernetes Secret. From here, we can inject it into pods like any regular Secret, as an environment variable, a volume, or a secret reference.
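
As a quick illustration, a minimal pod spec consuming the synced secret as an environment variable could look like this (the pod name and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: demo-app
      image: nginx # placeholder image
      env:
        - name: PASSWORD
          valueFrom:
            secretKeyRef:
              name: application-sync # the Secret created by the ExternalSecret
              key: PASSWORD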

At last! πŸ€“ We finally made it to the end of the first part of the demo. We've set up an AKS cluster with Terraform, deployed HashiCorp Vault for secure secrets storage, and integrated ExternalSecrets to fetch those secrets into the cluster. Your secrets are now safely managed in Kubernetes.
But the adventure isn't over! 🌟 Next, we'll venture into AWS, where we'll create a similar setup and cover best practices to ensure our secrets are securely managed across clouds. Stay tuned for more exciting discoveries!
