DEV Community

suin

Sharing Secrets Between Kubernetes Clusters Using external-secrets PushSecret

In this article, I'll explain how to share secrets between Kubernetes clusters using the PushSecret feature of external-secrets. In multi-cluster environments, keeping secrets in sync across clusters is a crucial operational challenge. By building a system that securely shares and automatically synchronizes secrets between clusters, we can improve both operational efficiency and security.

Desired Architecture

We'll build a setup in which secrets created in a source cluster are automatically pushed to, and kept in sync with, a target cluster.

Prerequisites

To follow along with this article, you'll need the following tools:

  • Docker
  • k3d
  • kubectl
  • Helm
  • kubectx (optional: for easier context switching)
  • kubectl-view-secret (optional: for viewing secret contents)

Environment Setup

We'll set up two clusters using k3d:

  • Source cluster: k3d-source-cluster
  • Target cluster: k3d-target-cluster

Creating a Shared Network

First, let's create a shared Docker network so that the two Kubernetes clusters we'll create later can communicate with each other.

docker network create shared-net --subnet 172.28.0.0/16 --gateway 172.28.0.1

This command creates a network with the following characteristics:

  • Subnet: 172.28.0.0/16
  • Gateway: 172.28.0.1
  • Network name: shared-net
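As a quick sanity check, the subnet and gateway choice can be validated with Python's standard ipaddress module (purely illustrative; not needed for the setup itself):

```python
import ipaddress

# The subnet handed to `docker network create`
subnet = ipaddress.ip_network("172.28.0.0/16")
gateway = ipaddress.ip_address("172.28.0.1")

# The gateway must fall inside the subnet
print(gateway in subnet)      # True
# A /16 leaves plenty of room for both clusters' nodes
print(subnet.num_addresses)   # 65536
```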

Setting Up the Source Cluster

The source cluster will provide the secrets. Create a source-cluster.yaml with the following content:

apiVersion: k3d.io/v1alpha5
kind: Simple
metadata:
  name: source-cluster
servers: 1
agents: 1
image: docker.io/rancher/k3s:v1.30.0-k3s1
kubeAPI:
  host: 0.0.0.0
  hostIP: 127.0.0.1
  hostPort: "6443"
ports:
  - port: 8080:80
    nodeFilters:
      - loadbalancer
registries:
  create:
    name: registry.localhost
    host: 127.0.0.1
    hostPort: "15000"
network: shared-net
options:
  k3d:
    wait: true
  kubeconfig:
    updateDefaultKubeconfig: true
    switchCurrentContext: true
  k3s:
    extraArgs:
      # Different CIDR ranges to avoid overlap
      - arg: "--cluster-cidr=10.42.0.0/16"
        nodeFilters:
          - server:*
      - arg: "--service-cidr=10.43.0.0/16"
        nodeFilters:
          - server:*

Key points of this configuration:

  1. Network Settings
    • KubeAPI: Bound to port 6443
    • Load balancer: Port mapping 8080:80
    • Network: Uses the shared network created earlier
  2. Registry Settings
    • Creates a local registry (name: registry.localhost)
    • Bound to port 15000
  3. CIDR Settings
    • Cluster CIDR: 10.42.0.0/16
    • Service CIDR: 10.43.0.0/16 (Important to avoid network overlaps between clusters)

Create the cluster with:

k3d cluster create --config source-cluster.yaml

Setting Up the Target Cluster

The target cluster will receive the secrets. Create a target-cluster.yaml with different settings:

apiVersion: k3d.io/v1alpha5
kind: Simple
metadata:
  name: target-cluster
servers: 1
agents: 1
image: docker.io/rancher/k3s:v1.30.0-k3s1
kubeAPI:
  host: 0.0.0.0
  hostIP: 127.0.0.1
  hostPort: "6444"  # Different from source cluster
ports:
  - port: 8081:80   # Different from source cluster
    nodeFilters:
      - loadbalancer
registries:
  use:
    - registry.localhost  # Use source cluster's registry
network: shared-net
options:
  k3d:
    wait: true
  kubeconfig:
    updateDefaultKubeconfig: true
    switchCurrentContext: true
  k3s:
    extraArgs:
      # Different CIDR ranges from source cluster
      - arg: "--cluster-cidr=10.44.0.0/16"
        nodeFilters:
          - server:*
      - arg: "--service-cidr=10.45.0.0/16"
        nodeFilters:
          - server:*

Key differences from the source cluster:

  1. Different Ports
    • KubeAPI port: 6444 (source uses 6443)
    • Load balancer port: 8081:80 (source uses 8080:80)
  2. Different CIDR Ranges
    • Cluster CIDR: 10.44.0.0/16 and Service CIDR: 10.45.0.0/16 (source uses 10.42.0.0/16 and 10.43.0.0/16)
  3. Registry Setup
    • Reuses the registry created by the source cluster, referenced in the use section
  4. Network Settings
    • Joins the same shared network, enabling inter-cluster communication
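Why the distinct CIDR ranges matter: every range in play — the shared Docker network plus each cluster's cluster and service CIDRs — must be disjoint so routing between the clusters stays unambiguous. A quick check with Python's standard ipaddress module (an illustrative sketch):

```python
import ipaddress
from itertools import combinations

cidrs = {
    "shared-net":          "172.28.0.0/16",
    "source cluster CIDR": "10.42.0.0/16",
    "source service CIDR": "10.43.0.0/16",
    "target cluster CIDR": "10.44.0.0/16",
    "target service CIDR": "10.45.0.0/16",
}

networks = {name: ipaddress.ip_network(cidr) for name, cidr in cidrs.items()}

# Every pair of ranges must be disjoint
for (a, net_a), (b, net_b) in combinations(networks.items(), 2):
    assert not net_a.overlaps(net_b), f"{a} overlaps {b}"
print("no overlaps")
```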

Create the cluster:

k3d cluster create --config target-cluster.yaml

Setting Up external-secrets

Now that our clusters are ready, let's set up external-secrets.

Installing on the Source Cluster

# Switch to source cluster context
kubectl config use-context k3d-source-cluster

helm repo add external-secrets https://charts.external-secrets.io
helm repo update

helm install external-secrets \
  external-secrets/external-secrets \
  -n external-secrets \
  --create-namespace

Configuring Target Cluster Authentication

Let's set up the authentication credentials needed for the source cluster to access the target cluster.

Getting Authentication Information

First, get the target cluster's client certificate information from your kubeconfig:

kubectl config view --raw

# From the k3d-target-cluster entries, note down these values:
# - client-certificate-data
# - client-key-data
# - certificate-authority-data

Setting Up Authentication

Create target-cluster-credentials.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: target-cluster-credentials
  namespace: default
type: Opaque
data:
  client-certificate-data: ... # Copied from kubeconfig (already base64-encoded; paste as-is)
  client-key-data: ... # Copied from kubeconfig (already base64-encoded; paste as-is)
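To make the mapping from kubeconfig to this Secret concrete, here's a minimal Python sketch. The helper function, the sample user name, and the truncated values are illustrative only; real kubeconfig values are long base64 strings.

```python
# Sketch: extract the client credential fields from a kubeconfig structure,
# e.g. the parsed output of `kubectl config view --raw -o json`.
# The helper and sample values below are illustrative, not part of any tool.

def extract_credentials(kubeconfig: dict, user_name: str) -> dict:
    """Return the data section for the target-cluster-credentials Secret."""
    user = next(u["user"] for u in kubeconfig["users"] if u["name"] == user_name)
    # kubeconfig already stores these values base64-encoded, so they can go
    # into the Secret's `data` section as-is (no re-encoding needed)
    return {
        "client-certificate-data": user["client-certificate-data"],
        "client-key-data": user["client-key-data"],
    }

# Made-up sample mirroring the shape of a k3d kubeconfig entry
sample = {
    "users": [
        {
            "name": "admin@k3d-target-cluster",
            "user": {
                "client-certificate-data": "LS0tLS1CRUdJTi...",
                "client-key-data": "LS0tLS1CRUdJTi...",
            },
        }
    ]
}
print(extract_credentials(sample, "admin@k3d-target-cluster"))
```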

Apply the credentials to the source cluster:

# Switch to source cluster context
kubectl config use-context k3d-source-cluster

# Apply credentials
kubectl apply -f target-cluster-credentials.yaml

# Verify the created Secret
kubectl get secret target-cluster-credentials -o yaml

Setting Up SecretStore

The SecretStore defines the backend store where external-secrets will store and retrieve secrets.

Create secret-store.yaml:

apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: target-cluster
  namespace: default
spec:
  provider:
    kubernetes:
      remoteNamespace: default  # Target cluster namespace
      server:
        url: https://k3d-target-cluster-server-0:6443  # Using k3d internal hostname
        caBundle: ... # certificate-authority-data from kubeconfig
      auth:
        cert:
          clientCert:
            name: target-cluster-credentials
            key: client-certificate-data
          clientKey:
            name: target-cluster-credentials
            key: client-key-data

Apply and verify the SecretStore:

# Apply SecretStore
kubectl apply -f secret-store.yaml

# Check status
kubectl describe secretstore target-cluster

# Expected output should include:
Status:
  Conditions:
    Last Transition Time:  ...
    Message:               SecretStore validated
    Reason:                Valid
    Status:                True
    Type:                  Ready

Setting Up and Testing PushSecret

Creating PushSecret

PushSecret automatically pushes secrets from the source cluster to the target cluster.

Create a Sample Secret (Source Cluster)

# Create sample secret
kubectl create secret generic my-secret \
  --from-literal=username=admin \
  --from-literal=password=supersecret

# Verify created secret
kubectl get secret my-secret -o yaml

Create PushSecret Definition

Create push-secret.yaml:

apiVersion: external-secrets.io/v1alpha1
kind: PushSecret
metadata:
  name: pushsecret-example
  namespace: default
spec:
  # Replace existing secrets in provider
  updatePolicy: Replace
  # Delete provider secret when PushSecret is deleted
  deletionPolicy: Delete
  # Resync interval
  refreshInterval: 10s
  # SecretStore to push secrets to
  secretStoreRefs:
    - name: target-cluster
      kind: SecretStore
  # Target Secret
  selector:
    secret:
      name: my-secret  # Source cluster Secret name
  data:
    - match:
        secretKey: username  # Source cluster Secret key
        remoteRef:
          remoteKey: my-secret-copy  # Target cluster Secret name
          property: username-copy    # Target cluster Secret key
    - match:
        secretKey: password  # Source cluster Secret key
        remoteRef:
          remoteKey: my-secret-copy  # Target cluster Secret name
          property: password-copy    # Target cluster Secret key
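Conceptually, each data entry reads secretKey from the source Secret and writes it into the remote Secret named remoteKey on the target cluster, under the property key. A small Python sketch of that mapping logic (for illustration only, not the controller's actual code):

```python
# Sketch of what the PushSecret data mapping does, not the real controller code.

source_secret = {"username": "admin", "password": "supersecret"}

# The `data` entries from push-secret.yaml, as (secretKey, remoteKey, property)
mappings = [
    ("username", "my-secret-copy", "username-copy"),
    ("password", "my-secret-copy", "password-copy"),
]

# Build the remote Secrets the target cluster should end up with
remote: dict[str, dict[str, str]] = {}
for secret_key, remote_key, prop in mappings:
    remote.setdefault(remote_key, {})[prop] = source_secret[secret_key]

print(remote)
# {'my-secret-copy': {'username-copy': 'admin', 'password-copy': 'supersecret'}}
```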

Apply PushSecret:

kubectl apply -f push-secret.yaml

# Check status
kubectl describe pushsecret pushsecret-example

Verifying Operation

Check Secret in Target Cluster

# Switch to target cluster
kubectl config use-context k3d-target-cluster

# Check secret
kubectl describe secret my-secret-copy

# Expected output:
Name:         my-secret-copy
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
password-copy:  11 bytes
username-copy:  5 bytes

Verify Secret Contents

# Using kubectl-view-secret plugin
kubectl-view-secret my-secret-copy --all

# Or directly using base64 decode
kubectl get secret my-secret-copy -o jsonpath='{.data.username-copy}' | base64 -d
kubectl get secret my-secret-copy -o jsonpath='{.data.password-copy}' | base64 -d

Expected output (in kubectl-view-secret format; the base64 commands print the raw values):

password-copy='supersecret'
username-copy='admin'
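Under the hood those values are just base64-encoded strings in the Secret's data section, and the decoding step can be illustrated with Python's standard library (a sketch using the sample values):

```python
import base64

# The API server stores Secret values base64-encoded, as with our sample
data = {
    "username-copy": base64.b64encode(b"admin").decode(),
    "password-copy": base64.b64encode(b"supersecret").decode(),
}
print(data["username-copy"])                             # YWRtaW4=
print(base64.b64decode(data["password-copy"]).decode())  # supersecret
```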

Test Automatic Synchronization

Update the secret in the source cluster and verify it's reflected in the target cluster:

# Switch to source cluster
kubectl config use-context k3d-source-cluster

# Update secret
kubectl create secret generic my-secret \
  --from-literal=username=newadmin \
  --from-literal=password=newsecret \
  --dry-run=client -o yaml | kubectl apply -f -

# Switch to target cluster and verify
kubectl config use-context k3d-target-cluster
kubectl-view-secret my-secret-copy --all

Cleanup

# Switch to source cluster
kubectl config use-context k3d-source-cluster

# Delete PushSecret
kubectl delete pushsecret pushsecret-example

# Verify secret deletion in target cluster
kubectl config use-context k3d-target-cluster
kubectl get secret my-secret-copy
# Should show: "Error from server (NotFound): secrets "my-secret-copy" not found"

# Delete clusters
k3d cluster delete --config source-cluster.yaml
k3d cluster delete --config target-cluster.yaml

# Delete shared network
docker network rm shared-net

Conclusion

In this article, we've explored how to use external-secrets' PushSecret feature to share secrets between Kubernetes clusters. In production environments, this feature can significantly improve secret management efficiency in multi-cluster setups!
