Ivan Porta

Posted on • Originally published at Medium

Mesh Expansion with Linkerd, AKS, and Azure Virtual Machines

Kubernetes adoption continues to grow at an unprecedented pace, especially among larger organizations. According to a recent Portworx survey of over 500 participants from companies with more than 500 employees, 58% plan to migrate at least some of their VM-managed applications to Kubernetes, while 85% plan to move the majority of their VM workloads to cloud-native platforms. This popularity of container orchestration is driven by scalability, flexibility, operational simplicity, and cost considerations, which make hybrid cloud environments particularly appealing.

At the same time, many enterprises still maintain a significant on-premises footprint, and recent uncertainties due to the Broadcom acquisition of VMware have accelerated the push to modernize traditional VM-based workloads. However, as organizations adopt microservices, they often still need to communicate with legacy services running on-premises. This is where Mesh Expansion comes into play. By extending a service mesh beyond the confines of Kubernetes clusters, Mesh Expansion allows modern microservices to seamlessly interact with traditional on-premises services. In this article, I will show you how to expand your mesh using Linkerd Enterprise, Azure Kubernetes Service (AKS), and a Virtual Machine running in Azure.

Set up the environment

First, let’s deploy all the resources required for this demonstration. We’ll use Terraform to provision an Azure Resource Group, Virtual Networks (VNets), Subnets, a Kubernetes cluster (AKS), and a Linux Virtual Machine.

[Image: architecture diagram of the Azure resources provisioned for this demo]

The following is the related Terraform configuration:

terraform {
  required_version = ">= 0.13"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 3.0.0"
    }
  }
}

provider "azurerm" {
  features {}
  subscription_id = "c4de0e1c-1377-4248-9beb-e1f803c76248"
}

# -----------------------------------------------------------
# General
# -----------------------------------------------------------
resource "azurerm_resource_group" "resource_group" {
  name     = "rg-training-krc"
  location = "Korea Central"
}

# -----------------------------------------------------------
# Networking
# -----------------------------------------------------------
resource "azurerm_virtual_network" "virtual_network_kuberentes" {
  name                = "vnet-training-aks-krc"
  address_space       = ["10.224.0.0/16"]
  location            = azurerm_resource_group.resource_group.location
  resource_group_name = azurerm_resource_group.resource_group.name
}

resource "azurerm_virtual_network" "virtual_network_virtual_machine" {
  name                = "vnet-training-vm-krc"
  address_space       = ["10.1.0.0/28"]
  location            = azurerm_resource_group.resource_group.location
  resource_group_name = azurerm_resource_group.resource_group.name
}

resource "azurerm_subnet" "subnet_kuberentes" {
  name                 = "aks-subnet"
  address_prefixes     = ["10.224.1.0/24"]
  resource_group_name  = azurerm_resource_group.resource_group.name
  virtual_network_name = azurerm_virtual_network.virtual_network_kuberentes.name
}

resource "azurerm_subnet" "subnet_virtual_machine" {
  name                 = "vm-subnet"
  address_prefixes     = ["10.1.0.0/29"]
  resource_group_name  = azurerm_resource_group.resource_group.name
  virtual_network_name = azurerm_virtual_network.virtual_network_virtual_machine.name
}

resource "azurerm_virtual_network_peering" "virtual_network_peering_virtual_machine" {
  name                      = "VirtualMachineToAzureKubernetesService"
  resource_group_name       = azurerm_resource_group.resource_group.name
  virtual_network_name      = azurerm_virtual_network.virtual_network_virtual_machine.name
  remote_virtual_network_id = azurerm_virtual_network.virtual_network_kuberentes.id
}

resource "azurerm_virtual_network_peering" "virtual_network_peering_kuberentes" {
  name                      = "KubernetesToVirtualMachine"
  resource_group_name       = azurerm_resource_group.resource_group.name
  virtual_network_name      = azurerm_virtual_network.virtual_network_kuberentes.name
  remote_virtual_network_id = azurerm_virtual_network.virtual_network_virtual_machine.id
}

resource "azurerm_route_table" "route_table" {
  name                = "rt-training-krc"
  location            = azurerm_resource_group.resource_group.location
  resource_group_name = azurerm_resource_group.resource_group.name
}

resource "azurerm_subnet_route_table_association" "route_table_association_virtual_machine" {
  subnet_id      = azurerm_subnet.subnet_virtual_machine.id
  route_table_id = azurerm_route_table.route_table.id
}

resource "azurerm_subnet_route_table_association" "route_table_association_kubernetes" {
  subnet_id      = azurerm_subnet.subnet_kuberentes.id
  route_table_id = azurerm_route_table.route_table.id
}

# -----------------------------------------------------------
# Kubernetes
# -----------------------------------------------------------
resource "azurerm_kubernetes_cluster" "kubernetes_cluster" {
  name                = "aks-training-krc"
  location            = azurerm_resource_group.resource_group.location
  resource_group_name = azurerm_resource_group.resource_group.name
  dns_prefix          = "trainingaks"
  identity {
    type = "SystemAssigned"
  }
  default_node_pool {
    name                         = "default"
    node_count                   = 1
    vm_size                      = "Standard_D2_v2"
    vnet_subnet_id               = azurerm_subnet.subnet_kuberentes.id
  }
}

# -----------------------------------------------------------
# Virtual Machine
# -----------------------------------------------------------
resource "azurerm_network_interface" "network_interface" {
  name                = "nic-training-krc"
  location            = azurerm_resource_group.resource_group.location
  resource_group_name = azurerm_resource_group.resource_group.name
  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.subnet_virtual_machine.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.public_ip.id
  }
}

resource "azurerm_public_ip" "public_ip" {
  name                = "pip-training-krc"
  resource_group_name = azurerm_resource_group.resource_group.name
  location            = azurerm_resource_group.resource_group.location
  allocation_method   = "Static"
}

resource "azurerm_linux_virtual_machine" "virtual_machine" {
  name                            = "vm-training-krc"
  resource_group_name             = azurerm_resource_group.resource_group.name
  location                        = azurerm_resource_group.resource_group.location
  size                            = "Standard_F2"
  admin_username                  = "adminuser"
  admin_password                  = "Password1234!" 
  disable_password_authentication = false
  network_interface_ids = [
    azurerm_network_interface.network_interface.id,
  ]

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "0001-com-ubuntu-server-jammy"
    sku       = "22_04-lts"
    version   = "latest"
  }
}

Networking configuration

Virtual Network Peering

Because we have deployed the Virtual Machine in a different Virtual Network from the AKS nodes, we need Virtual Network Peering so they can communicate. Peering allows traffic to flow through the Microsoft private backbone, making the separate virtual networks appear as one. This enables our Virtual Machine to reach AKS nodes by their private IP addresses and vice versa — without routing over the public internet.
The Terraform configuration above creates two VNet peering resources:

  • VirtualMachineToAzureKubernetesService (connects the VM’s VNet to the AKS VNet)
  • KubernetesToVirtualMachine (connects the AKS VNet back to the VM’s VNet)

Both are necessary because each peering link is unidirectional; traffic flows only once both sides have been configured. We can test the connectivity by using a privileged debug container on the node to ping the Virtual Machine’s private IP:
$ kubectl get nodes -o wide
NAME                              STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
aks-default-37266765-vmss000000   Ready    <none>   12m   v1.30.9   10.224.1.4    <none>        Ubuntu 22.04.5 LTS   5.15.0-1079-azure   containerd://1.7.25-1

$ kubectl debug node/aks-default-37266765-vmss000000 -it --image=mcr.microsoft.com/cbl-mariner/busybox:2.0
Creating debugging pod node-debugger-aks-default-37266765-vmss000000-x9zjr with container debugger on node aks-default-37266765-vmss000000.
If you don't see a command prompt, try pressing enter.
/ # ping 10.1.0.4
PING 10.1.0.4 (10.1.0.4): 56 data bytes
64 bytes from 10.1.0.4: seq=0 ttl=64 time=5.452 ms
64 bytes from 10.1.0.4: seq=1 ttl=64 time=2.412 ms
64 bytes from 10.1.0.4: seq=2 ttl=64 time=1.018 ms
64 bytes from 10.1.0.4: seq=3 ttl=64 time=0.879 ms
64 bytes from 10.1.0.4: seq=4 ttl=64 time=1.046 ms
64 bytes from 10.1.0.4: seq=5 ttl=64 time=1.007 ms
--- 10.1.0.4 ping statistics ---
6 packets transmitted, 6 packets received, 0% packet loss
round-trip min/avg/max = 0.879/1.969/5.452 ms

We can also SSH into the Virtual Machine using its public IP and ping the AKS node’s private IP:

$ ping  10.224.1.4
PING 10.224.1.4 (10.224.1.4) 56(84) bytes of data.
64 bytes from 10.224.1.4: icmp_seq=1 ttl=64 time=2.03 ms
64 bytes from 10.224.1.4: icmp_seq=2 ttl=64 time=1.30 ms
64 bytes from 10.224.1.4: icmp_seq=3 ttl=64 time=1.14 ms
^C
--- 10.224.1.4 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.142/1.488/2.029/0.387 ms

Route Tables

Even though our Virtual Machine and the Kubernetes nodes can now communicate via private IP addresses, the VM still cannot reach Kubernetes Cluster IPs or Pod IPs, nor resolve cluster-internal DNS names. This is a critical requirement because, once we install the Linkerd proxy, the VM will need to communicate with Linkerd components running inside the Kubernetes cluster, such as linkerd-destination, linkerd-identity, and any target services. These services have internal IPs assigned by Kubernetes and rely on CoreDNS (running within the cluster) for name resolution.
To route requests from the VM to Kubernetes services, we can add custom routes in Azure so that any traffic destined for the Kubernetes Service or Pod CIDR ranges is forwarded to the AKS node, which delivers it to the target Pods; name resolution is handled by CoreDNS, whose cluster IP also becomes reachable through these routes. In particular, you’ll need routes for the following CIDR ranges:

[Image: table of the CIDR ranges that require custom routes]

The resulting routes will be the following:

[Image: resulting route entries in the Azure route table]
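
As a sketch, these routes could be created with the Azure CLI. The Service CIDR (10.0.0.0/16) and Pod CIDR (10.244.0.0/16) below are the AKS defaults and match the cluster and Pod IPs that appear later in this article, and 10.224.1.4 is the AKS node’s private IP used as the next hop; adjust these values to your own cluster.

# Route traffic for the Kubernetes Service CIDR to the AKS node
az network route-table route create \
  --resource-group rg-training-krc \
  --route-table-name rt-training-krc \
  --name to-aks-service-cidr \
  --address-prefix 10.0.0.0/16 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.224.1.4

# Route traffic for the Pod CIDR to the AKS node
az network route-table route create \
  --resource-group rg-training-krc \
  --route-table-name rt-training-krc \
  --name to-aks-pod-cidr \
  --address-prefix 10.244.0.0/16 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.224.1.4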

Once these rules are in place, the VM will be able to send traffic to Kubernetes services using their cluster-internal addresses. If we check the services running in the AKS cluster, we can see that kube-dns is at 10.0.0.10.

$ kubectl get svc -A
NAMESPACE     NAME             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
default       kubernetes       ClusterIP   10.0.0.1      <none>        443/TCP         48m
kube-system   kube-dns         ClusterIP   10.0.0.10     <none>        53/UDP,53/TCP   47m
kube-system   metrics-server   ClusterIP   10.0.144.73   <none>        443/TCP         47m

We can then test DNS resolution in the Virtual Machine by specifying kube-dns as our DNS server:

$ nslookup metrics-server.kube-system.svc.cluster.local 10.0.0.10
Server:  10.0.0.10
Address: 10.0.0.10#53

Name: metrics-server.kube-system.svc.cluster.local
Address: 10.0.144.73

To avoid having to specify the DNS server on every lookup, we can configure the VM’s netplan file so that 10.0.0.10 is used as a primary DNS server. On Ubuntu, netplan configuration files are typically located in /etc/netplan/. The file named 50-cloud-init.yaml is an auto-generated configuration that describes how the system should bring up its network interfaces, such as eth0, and apply IP addresses, routing, and DNS settings.

$ vim /etc/netplan/50-cloud-init.yaml
network:
  version: 2
  ethernets:
    eth0:
      match:
        macaddress: "00:22:48:f6:f2:98"
        driver: "hv_netvsc"
      dhcp4: true
      nameservers:
        addresses:
          - 10.0.0.10
        search:
          - cluster.local
          - svc.cluster.local
      dhcp4-overrides:
        route-metric: 100
      dhcp6: false
      set-name: "eth0"

After editing, apply the new configuration:

$ sudo netplan apply

Now, DNS resolution on the VM automatically uses 10.0.0.10 for Kubernetes service lookups. You can verify this by running:

$ nslookup metrics-server.kube-system.svc.cluster.local 10.0.0.10
Server:  10.0.0.10
Address: 10.0.0.10#53

Name: metrics-server.kube-system.svc.cluster.local
Address: 10.0.144.73

Installing Linkerd Enterprise

Now that the networking configuration is complete, we can move on and install Linkerd. In this demonstration, I will use Helm charts. If you want to learn more about the different ways to install Linkerd, you can read my previous article: How to Install Linkerd Enterprise via CLI, Operator, and Helm Charts.
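
If the linkerd-buoyant Helm repository has not been added yet, register it first (this assumes Buoyant’s public chart repository URL):

# Add Buoyant's chart repository and refresh the local index
helm repo add linkerd-buoyant https://helm.buoyant.cloud
helm repo update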

First, we will need to install the Linkerd Custom Resource Definitions (CRDs).

$ helm upgrade --install linkerd-enterprise-crds linkerd-buoyant/linkerd-enterprise-crds \
  --namespace linkerd \
  --create-namespace \
  --set manageExternalWorkloads=true 

Next, we will need to install the Linkerd control plane with the value manageExternalWorkloads set to true.
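
The command below references a trust anchor (ca.crt/ca.key) and an issuer certificate under ./certificates/. If you don’t have them yet, one way to generate them is with the step CLI, following the approach from the Linkerd documentation (a sketch; the ./certificates/ directory and file names are assumptions chosen to match the flags below):

mkdir -p certificates && cd certificates
# Trust anchor (root CA) for the mesh
step certificate create root.linkerd.cluster.local ca.crt ca.key \
  --profile root-ca --no-password --insecure
# Issuer (intermediate CA) signed by the trust anchor
step certificate create identity.linkerd.cluster.local issuer.crt issuer.key \
  --profile intermediate-ca --not-after 8760h --no-password --insecure \
  --ca ca.crt --ca-key ca.key
cd ..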

helm upgrade --install linkerd-control-plane linkerd-buoyant/linkerd-enterprise-control-plane \
  --version 2.17.1 \
  --namespace linkerd \
  --create-namespace \
  --set-file identityTrustAnchorsPEM=./certificates/ca.crt \
  --set-file identity.issuer.tls.crtPEM=./certificates/issuer.crt \
  --set-file identity.issuer.tls.keyPEM=./certificates/issuer.key \
  --set linkerdVersion=enterprise-2.17.1 \
  --set manageExternalWorkloads=true \
  --set license=**** 

Setting manageExternalWorkloads to true deploys the linkerd-autoregistration service and deployment, which external workloads use to register themselves with the control plane.

kubectl get svc -A
NAMESPACE     NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
default       kubernetes                  ClusterIP   10.0.0.1       <none>        443/TCP         147m
kube-system   kube-dns                    ClusterIP   10.0.0.10      <none>        53/UDP,53/TCP   146m
kube-system   metrics-server              ClusterIP   10.0.144.73    <none>        443/TCP         146m
linkerd       linkerd-autoregistration    ClusterIP   10.0.228.3     <none>        8081/TCP        29s
linkerd       linkerd-dst                 ClusterIP   10.0.129.121   <none>        8086/TCP        4m13s
linkerd       linkerd-dst-headless        ClusterIP   None           <none>        8086/TCP        4m13s
linkerd       linkerd-enterprise          ClusterIP   10.0.56.38     <none>        8082/TCP        4m13s
linkerd       linkerd-identity            ClusterIP   10.0.201.170   <none>        8080/TCP        4m13s
linkerd       linkerd-identity-headless   ClusterIP   None           <none>        8080/TCP        4m13s
linkerd       linkerd-policy              ClusterIP   None           <none>        8090/TCP        4m13s
linkerd       linkerd-policy-validator    ClusterIP   10.0.180.41    <none>        443/TCP         4m13s
linkerd       linkerd-proxy-injector      ClusterIP   10.0.61.156    <none>        443/TCP         4m13s
linkerd       linkerd-sp-validator        ClusterIP   10.0.234.34    <none>        443/TCP         4m13s
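
Optionally, if the Linkerd CLI is installed on your workstation, you can run a quick health check of the control plane before moving on:

# Verify that the control plane components are healthy
linkerd check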

Workload Authentication and SPIRE

When a Linkerd proxy starts inside a Kubernetes cluster, it generates a private key and submits a Certificate Signing Request (CSR). This CSR includes the service account token, which the Linkerd identity service uses — together with the Kubernetes API — to validate the proxy’s identity before issuing an x509 certificate. This certificate identifies the proxy in a DNS-like form and sets Subject Alternative Name (SAN) fields accordingly.
Outside of Kubernetes, we don’t have service accounts or a default identity mechanism. That’s where SPIFFE and SPIRE come into play:

  • SPIFFE defines a standard for identifying and securing workloads.
  • SPIRE is a production-ready implementation of SPIFFE that many service mesh and infrastructure providers (including Linkerd) can leverage for secure identity management.

SPIRE Architecture

In SPIRE, you run two main components:

  • Server: Manages and issues identities based on registration entries. It uses these entries to assign the correct SPIFFE ID to each authenticated agent.
  • Agent: Runs on the same node as the workload and exposes a gRPC API for workloads to request identities. The agent "attests" the workload by checking system- or container-level attributes — such as the Unix user ID, container image, or other selectors — to ensure the workload is truly what it claims to be. Once validated, the agent issues an x509 SVID (SPIFFE Verifiable Identity Document) containing a URI SAN in the form spiffe://trust-domain-name/path.

Install SPIRE

In production, it’s typical to run the SPIRE server on a dedicated node and to run a SPIRE agent on each node where workloads live. For simplicity, we’ll install both the server and the agent on a single virtual machine. First, we will download the SPIRE binaries and copy them to /opt/spire/, a common directory for add-on software packages on Linux.

wget https://github.com/spiffe/spire/releases/download/v1.11.2/spire-1.11.2-linux-amd64-musl.tar.gz
tar zvxf spire-1.11.2-linux-amd64-musl.tar.gz
sudo mkdir -p /opt/spire/certs
sudo cp -r spire-1.11.2/. /opt/spire/

Since we are going to integrate SPIRE with the Linkerd Certificate Authority (CA), we will need to upload the trust anchor certificate and key to the VM:

$ scp ./certificates/ca.key adminuser@20.41.77.179:/home/adminuser/ca.key
adminuser@20.41.77.179's password: ********
ca.key              
                                                                                                   100%  227    24.9KB/s   00:00    
$ scp ./certificates/ca.crt adminuser@20.41.77.179:/home/adminuser/ca.crt
adminuser@20.41.77.179's password: ********
ca.crt      

$ sudo mv /home/adminuser/ca.crt /opt/spire/certs/ca.crt
$ sudo mv /home/adminuser/ca.key /opt/spire/certs/ca.key

Then we are going to create a simple SPIRE server configuration that binds the server API to 127.0.0.1:8081, uses root.linkerd.cluster.local as the trust domain, and loads the previously uploaded CA certificate and key from /opt/spire/certs/ca.crt and /opt/spire/certs/ca.key.

cat >/opt/spire/server.cfg <<EOL
server {
    bind_address = "127.0.0.1"
    bind_port = "8081"
    trust_domain = "root.linkerd.cluster.local"
    data_dir = "/opt/spire/data/server"
    log_level = "DEBUG"
    ca_ttl = "168h"
    default_x509_svid_ttl = "48h"
}
plugins {
    DataStore "sql" {
        plugin_data {
            database_type = "sqlite3"
            connection_string = "/opt/spire/data/server/datastore.sqlite3"
        }
    }
    KeyManager "disk" {
        plugin_data {
            keys_path = "/opt/spire/data/server/keys.json"
        }
    }
    NodeAttestor "join_token" {
        plugin_data {}
    }
    UpstreamAuthority "disk" {
        plugin_data {
            cert_file_path = "/opt/spire/certs/ca.crt"
            key_file_path = "/opt/spire/certs/ca.key"
        }
    }
}
EOL

Then, it’s time for the SPIRE agent. In this case, we will instruct it to communicate with the server at 127.0.0.1:8081, use the same trust domain (root.linkerd.cluster.local), and use the unix plugin for workload attestation.

cat >/opt/spire/agent.cfg <<EOL
agent {
    data_dir = "/opt/spire/data/agent"
    log_level = "DEBUG"
    trust_domain = "root.linkerd.cluster.local"
    server_address = "localhost"
    server_port = 8081
    insecure_bootstrap = true
}
plugins {
   KeyManager "disk" {
        plugin_data {
            directory = "/opt/spire/data/agent"
        }
    }
    NodeAttestor "join_token" {
        plugin_data {}
    }
    WorkloadAttestor "unix" {
        plugin_data {}
    }
}
EOL

We can now start the SPIRE server.

$ /opt/spire/bin/spire-server run -config /opt/spire/server.cfg
...
INFO[0000] Using legacy downstream X509 CA TTL calculation by default; this default will change in a future release 
WARN[0000] default_x509_svid_ttl is too high for the configured ca_ttl value. SVIDs with shorter lifetimes may be issued. Please set default_x509_svid_ttl to 28h or less, or the ca_ttl to 288h or more, to guarantee the full default_x509_svid_ttl lifetime when CA rotations are scheduled. 
WARN[0000] Current umask 0022 is too permissive; setting umask 0027 
INFO[0000] Configured                                    admin_ids="[]" data_dir=/opt/spire/data/server launch_log_level=debug version=1.11.2
INFO[0000] Opening SQL database                          db_type=sqlite3 subsystem_name=sql
INFO[0000] Initializing new database                     subsystem_name=sql
INFO[0000] Connected to SQL database                     read_only=false subsystem_name=sql type=sqlite3 version=3.46.1
INFO[0000] Configured DataStore                          reconfigurable=false subsystem_name=catalog
INFO[0000] Configured plugin                             external=false plugin_name=disk plugin_type=KeyManager reconfigurable=false subsystem_name=catalog
INFO[0000] Plugin loaded                                 external=false plugin_name=disk plugin_type=KeyManager subsystem_name=catalog
INFO[0000] Configured plugin                             external=false plugin_name=join_token plugin_type=NodeAttestor reconfigurable=false subsystem_name=catalog
INFO[0000] Plugin loaded                                 external=false plugin_name=join_token plugin_type=NodeAttestor subsystem_name=catalog
INFO[0000] Configured plugin                             external=false plugin_name=disk plugin_type=UpstreamAuthority reconfigurable=false subsystem_name=catalog
INFO[0000] Plugin loaded                                 external=false plugin_name=disk plugin_type=UpstreamAuthority subsystem_name=catalog
DEBU[0000] Loading journal from datastore                subsystem_name=ca_manager
INFO[0000] There is not a CA journal record that matches any of the local X509 authority IDs  subsystem_name=ca_manager
INFO[0000] Journal loaded                                jwt_keys=0 subsystem_name=ca_manager x509_cas=0
DEBU[0000] Preparing X509 CA                             slot=A subsystem_name=ca_manager
DEBU[0000] There is no active X.509 authority yet. Can't save CA journal in the datastore  subsystem_name=ca_manager
INFO[0000] X509 CA prepared                              expiration="2025-03-10 10:57:17 +0000 UTC" issued_at="2025-03-03 10:57:17.573729853 +0000 UTC" local_authority_id=721ccaf61807f4d9d1fe258476359e740feeb15e self_signed=false slot=A subsystem_name=ca_manager upstream_authority_id=737a7f3dfd9afd9669a777208b012bab53bf1164
INFO[0000] X509 CA activated                             expiration="2025-03-10 10:57:17 +0000 UTC" issued_at="2025-03-03 10:57:17.573729853 +0000 UTC" local_authority_id=721ccaf61807f4d9d1fe258476359e740feeb15e slot=A subsystem_name=ca_manager upstream_authority_id=737a7f3dfd9afd9669a777208b012bab53bf1164
INFO[0000] Creating a new CA journal entry               subsystem_name=ca_manager
DEBU[0000] Successfully stored CA journal entry in datastore  ca_journal_id=1 local_authority_id=721ccaf61807f4d9d1fe258476359e740feeb15e subsystem_name=ca_manager
DEBU[0000] Successfully rotated X.509 CA                 subsystem_name=ca_manager trust_domain_id="spiffe://root.linkerd.cluster.local" ttl=604799.402084726
DEBU[0000] Preparing JWT key                             slot=A subsystem_name=ca_manager
WARN[0000] UpstreamAuthority plugin does not support JWT-SVIDs. Workloads managed by this server may have trouble communicating with workloads outside this cluster when using JWT-SVIDs.  plugin_name=disk subsystem_name=ca_manager
DEBU[0000] Successfully stored CA journal entry in datastore  ca_journal_id=1 local_authority_id=721ccaf61807f4d9d1fe258476359e740feeb15e subsystem_name=ca_manager
INFO[0000] JWT key prepared                              expiration="2025-03-10 10:57:17.597952274 +0000 UTC" issued_at="2025-03-03 10:57:17.597952274 +0000 UTC" local_authority_id=6Yb52ncPDI4FDZy2unga1133Vne6HS8d slot=A subsystem_name=ca_manager
INFO[0000] JWT key activated                             expiration="2025-03-10 10:57:17.597952274 +0000 UTC" issued_at="2025-03-03 10:57:17.597952274 +0000 UTC" local_authority_id=6Yb52ncPDI4FDZy2unga1133Vne6HS8d slot=A subsystem_name=ca_manager
DEBU[0000] Successfully stored CA journal entry in datastore  ca_journal_id=1 local_authority_id=721ccaf61807f4d9d1fe258476359e740feeb15e subsystem_name=ca_manager
DEBU[0000] Rotating server SVID                          subsystem_name=svid_rotator
DEBU[0000] Signed X509 SVID                              expiration="2025-03-05T10:57:17Z" spiffe_id="spiffe://root.linkerd.cluster.local/spire/server" subsystem_name=svid_rotator
INFO[0000] Building in-memory entry cache                subsystem_name=endpoints
INFO[0000] Completed building in-memory entry cache      subsystem_name=endpoints
INFO[0000] Logger service configured                     launch_log_level=debug
DEBU[0000] Initializing health checkers                  subsystem_name=health
DEBU[0000] Initializing API endpoints                    subsystem_name=endpoints
INFO[0000] Starting Server APIs                          address="127.0.0.1:8081" network=tcp subsystem_name=endpoints
INFO[0000] Starting Server APIs                          address=/tmp/spire-server/private/api.sock network=unix subsystem_name=endpoints
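
From a second shell on the VM, we can optionally confirm that the server is reachable over its local API socket:

# Check the SPIRE server over its default UNIX socket (/tmp/spire-server/private/api.sock)
/opt/spire/bin/spire-server healthcheck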

Once it is up and running, we can generate a one-time join token that the agent will use to attest itself to the server.

$ /opt/spire/bin/spire-server token generate -spiffeID spiffe://root.linkerd.cluster.local/agent -output json | jq -r '.value'
5f497c6c-4fa5-45bd-b1ce-9d7770a7761b

Then, we can start the SPIRE agent.

$ /opt/spire/bin/spire-agent run -config /opt/spire/agent.cfg -joinToken "5f497c6c-4fa5-45bd-b1ce-9d7770a7761b"
INFO[0000] Creating spire agent UDS directory            dir=/tmp/spire-agent/public
WARN[0000] Current umask 0022 is too permissive; setting umask 0027 
INFO[0000] Starting agent                                data_dir=/opt/spire/data/agent version=1.11.2
INFO[0000] Configured plugin                             external=false plugin_name=disk plugin_type=KeyManager reconfigurable=false subsystem_name=catalog
INFO[0000] Plugin loaded                                 external=false plugin_name=disk plugin_type=KeyManager subsystem_name=catalog
INFO[0000] Plugin loaded                                 external=false plugin_name=join_token plugin_type=NodeAttestor subsystem_name=catalog
INFO[0000] Configured plugin                             external=false plugin_name=unix plugin_type=WorkloadAttestor reconfigurable=false subsystem_name=catalog
INFO[0000] Plugin loaded                                 external=false plugin_name=unix plugin_type=WorkloadAttestor subsystem_name=catalog
INFO[0000] Bundle is not found                           subsystem_name=attestor
DEBU[0000] No pre-existing agent SVID found. Will perform node attestation  subsystem_name=attestor
INFO[0000] SVID is not found. Starting node attestation  subsystem_name=attestor
WARN[0000] Insecure bootstrap enabled; skipping server certificate verification  subsystem_name=attestor
INFO[0000] Node attestation was successful               reattestable=false spiffe_id="spiffe://root.linkerd.cluster.local/spire/agent/join_token/5f497c6c-4fa5-45bd-b1ce-9d7770a7761b" subsystem_name=attestor
DEBU[0000] Entry created                                 entry=6f0e6ebd-cb2f-48ac-9919-30d6e2820ca8 selectors_added=1 spiffe_id="spiffe://root.linkerd.cluster.local/agent" subsystem_name=cache_manager
DEBU[0000] Renewing stale entries                        cache_type=workload count=1 limit=500 subsystem_name=manager
INFO[0000] Creating X509-SVID                            entry_id=6f0e6ebd-cb2f-48ac-9919-30d6e2820ca8 spiffe_id="spiffe://root.linkerd.cluster.local/agent" subsystem_name=manager
DEBU[0000] SVID updated                                  entry=6f0e6ebd-cb2f-48ac-9919-30d6e2820ca8 spiffe_id="spiffe://root.linkerd.cluster.local/agent" subsystem_name=cache_manager
DEBU[0000] Bundle added                                  subsystem_name=svid_store_cache trust_domain_id=root.linkerd.cluster.local
DEBU[0000] Initializing health checkers                  subsystem_name=health
INFO[0000] Starting Workload and SDS APIs                address=/tmp/spire-agent/public/api.sock network=unix subsystem_name=endpoints

Finally, create a registration entry for the process that will act as the Linkerd proxy outside of Kubernetes. The -selector "unix:uid:998" means any process running under UID 998 on this agent node will receive the SPIFFE ID specified:

/opt/spire/bin/spire-server entry create \
    -spiffeID "spiffe://root.linkerd.cluster.local/proxy-harness" \
    -parentID "spiffe://root.linkerd.cluster.local/agent" \
    -selector "unix:uid:998"

Install the Linkerd Proxy

With the SPIRE agent running and issuing identities, we can now set up Linkerd’s proxy harness on the virtual machine. The harness is a small daemon that installs the Linkerd proxy, configures iptables for traffic redirection, and registers itself with the Linkerd control plane running in the Kubernetes cluster. First we will need to download it with the following command:

wget https://github.com/BuoyantIO/linkerd-buoyant/releases/download/enterprise-2.17.1/linkerd-proxy-harness-enterprise-2.17.1-amd64.deb
apt-get -y install ./linkerd-proxy-harness-enterprise-2.17.1-amd64.deb

Create a Workload Group in Kubernetes

Then, in the Kubernetes cluster, we will need to deploy an ExternalGroup resource. This tells the Linkerd control plane that an external workload (running outside of Kubernetes) is part of the service mesh under the namespace training. The readiness probe ensures that Linkerd can verify when the proxy harness is up and healthy.

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: training
---
apiVersion: workload.buoyant.io/v1alpha1
kind: ExternalGroup
metadata:
  name: training-vm
  namespace: training
spec:
  probes:
  - failureThreshold: 1
    httpGet:
      path: /ready
      port: 80
      scheme: HTTP
      host: 127.0.0.1
    initialDelaySeconds: 3
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 1
  template:
    metadata:
      labels:
        app: training-app
        location: vm
    ports:
    - port: 80
EOF
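
After applying the manifest, you can confirm that the resource was created (this assumes the CRD’s plural name is externalgroups):

kubectl get externalgroups -n training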

Next, use harnessctl (installed with the harness package) to point the harness at your Linkerd control plane:

harnessctl set-config \
  --workload-group-name=training-vm \
  --workload-group-namespace=training \
  --control-plane-address=linkerd-autoregistration.linkerd.svc.cluster.local.:8081 \
  --control-plane-identity=linkerd-autoregistration.linkerd.serviceaccount.identity.linkerd.cluster.local
Config updated

Finally, start the daemon:

systemctl start linkerd-proxy-harness

Running journalctl shows the harness logs, where we can see it updating the iptables rules so that all traffic goes through the proxy.

journalctl -u linkerd-proxy-harness -f
...
Mar 03 11:36:23 vm-training-krc systemd[1]: Starting Linkerd proxy harness...
Mar 03 11:36:23 vm-training-krc harness-init[2131]: time="2025-03-03T11:36:23Z" level=info msg="/usr/sbin/iptables-legacy -t nat -D PREROUTING -j PROXY_INIT_REDIRECT -m comment --comment proxy-init/install-proxy-init-prerouting"
Mar 03 11:36:23 vm-training-krc harness-init[2131]: time="2025-03-03T11:36:23Z" level=info msg="iptables v1.8.7 (legacy): Couldn't load target `PROXY_INIT_REDIRECT':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n"
Mar 03 11:36:23 vm-training-krc harness-init[2131]: time="2025-03-03T11:36:23Z" level=info msg="/usr/sbin/iptables-legacy -t nat -D OUTPUT -j PROXY_INIT_OUTPUT -m comment --comment proxy-init/install-proxy-init-output"
Mar 03 11:36:23 vm-training-krc harness-init[2131]: time="2025-03-03T11:36:23Z" level=info msg="iptables v1.8.7 (legacy): Couldn't load target `PROXY_INIT_OUTPUT':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n"
Mar 03 11:36:23 vm-training-krc harness-init[2131]: time="2025-03-03T11:36:23Z" level=info msg="/usr/sbin/iptables-legacy -t nat -F PROXY_INIT_OUTPUT"
Mar 03 11:36:23 vm-training-krc harness-init[2131]: time="2025-03-03T11:36:23Z" level=info msg="iptables: No chain/target/match by that name.\n"
Mar 03 11:36:23 vm-training-krc harness-init[2131]: time="2025-03-03T11:36:23Z" level=info msg="/usr/sbin/iptables-legacy -t nat -F PROXY_INIT_REDIRECT"
Mar 03 11:36:23 vm-training-krc harness-init[2131]: time="2025-03-03T11:36:23Z" level=info msg="iptables: No chain/target/match by that name.\n"
Mar 03 11:36:23 vm-training-krc harness-init[2131]: time="2025-03-03T11:36:23Z" level=info msg="/usr/sbin/iptables-legacy -t nat -X PROXY_INIT_OUTPUT"
Mar 03 11:36:23 vm-training-krc harness-init[2131]: time="2025-03-03T11:36:23Z" level=info msg="iptables: No chain/target/match by that name.\n"
Mar 03 11:36:23 vm-training-krc harness-init[2131]: time="2025-03-03T11:36:23Z" level=info msg="/usr/sbin/iptables-legacy -t nat -X PROXY_INIT_REDIRECT"
Mar 03 11:36:23 vm-training-krc harness-init[2131]: time="2025-03-03T11:36:23Z" level=info msg="iptables: No chain/target/match by that name.\n"
Mar 03 11:36:23 vm-training-krc harness-init[2131]: time="2025-03-03T11:36:23Z" level=info msg="/usr/sbin/iptables-legacy-save -t nat"
Mar 03 11:36:23 vm-training-krc harness-init[2131]: time="2025-03-03T11:36:23Z" level=info msg="# Generated by iptables-save v1.8.7 on Mon Mar  3 11:36:23 2025\n*nat\n:PREROUTING ACCEPT [0:0]\n:INPUT ACCEPT [0:0]\n:OUTPUT ACCEPT [0:0]\n:POSTROUTING ACCEPT [0:0]\nCOMMIT\n# Completed on Mon Mar  3 11:36:23 2025\n"
Mar 03 11:36:23 vm-training-krc harness-init[2131]: time="2025-03-03T11:36:23Z" level=info msg="/usr/sbin/iptables-legacy-save -t nat"
Mar 03 11:36:23 vm-training-krc harness-init[2131]: time="2025-03-03T11:36:23Z" level=info msg="# Generated by iptables-save v1.8.7 on Mon Mar  3 11:36:23 2025\n*nat\n:PREROUTING ACCEPT [0:0]\n:INPUT ACCEPT [0:0]\n:OUTPUT ACCEPT [0:0]\n:POSTROUTING ACCEPT [0:0]\nCOMMIT\n# Completed on Mon Mar  3 11:36:23 2025\n"
Mar 03 11:36:23 vm-training-krc harness-init[2131]: time="2025-03-03T11:36:23Z" level=info msg="/usr/sbin/iptables-legacy -t nat -N PROXY_INIT_REDIRECT"
Mar 03 11:36:23 vm-training-krc harness-init[2131]: time="2025-03-03T11:36:23Z" level=info msg="/usr/sbin/iptables-legacy -t nat -A PROXY_INIT_REDIRECT -p tcp --match multiport --dports 4567,4568 -j RETURN -m comment --comment proxy-init/ignore-port-4567,4568"
Mar 03 11:36:23 vm-training-krc harness-init[2131]: time="2025-03-03T11:36:23Z" level=info msg="/usr/sbin/iptables-legacy -t nat -A PROXY_INIT_REDIRECT -p tcp -j REDIRECT --to-port 4143 -m comment --comment proxy-init/redirect-all-incoming-to-proxy-port"
Mar 03 11:36:23 vm-training-krc harness-init[2131]: time="2025-03-03T11:36:23Z" level=info msg="/usr/sbin/iptables-legacy -t nat -A PREROUTING -j PROXY_INIT_REDIRECT -m comment --comment proxy-init/install-proxy-init-prerouting"
Mar 03 11:36:23 vm-training-krc harness-init[2131]: time="2025-03-03T11:36:23Z" level=info msg="/usr/sbin/iptables-legacy -t nat -N PROXY_INIT_OUTPUT"
Mar 03 11:36:23 vm-training-krc harness-init[2131]: time="2025-03-03T11:36:23Z" level=info msg="/usr/sbin/iptables-legacy -t nat -A PROXY_INIT_OUTPUT -m owner --uid-owner 998 -j RETURN -m comment --comment proxy-init/ignore-proxy-user-id"
Mar 03 11:36:23 vm-training-krc harness-init[2131]: time="2025-03-03T11:36:23Z" level=info msg="/usr/sbin/iptables-legacy -t nat -A PROXY_INIT_OUTPUT -o lo -j RETURN -m comment --comment proxy-init/ignore-loopback"
Mar 03 11:36:23 vm-training-krc harness-init[2131]: time="2025-03-03T11:36:23Z" level=info msg="/usr/sbin/iptables-legacy -t nat -A PROXY_INIT_OUTPUT -p tcp --match multiport --dports 4567,4568 -j RETURN -m comment --comment proxy-init/ignore-port-4567,4568"
Mar 03 11:36:23 vm-training-krc harness-init[2131]: time="2025-03-03T11:36:23Z" level=info msg="/usr/sbin/iptables-legacy -t nat -A PROXY_INIT_OUTPUT -p tcp -j REDIRECT --to-port 4140 -m comment --comment proxy-init/redirect-all-outgoing-to-proxy-port"
Mar 03 11:36:23 vm-training-krc harness-init[2131]: time="2025-03-03T11:36:23Z" level=info msg="/usr/sbin/iptables-legacy -t nat -A OUTPUT -j PROXY_INIT_OUTPUT -m comment --comment proxy-init/install-proxy-init-output"
Mar 03 11:36:23 vm-training-krc harness-init[2131]: time="2025-03-03T11:36:23Z" level=info msg="/usr/sbin/iptables-legacy-save -t nat"
Mar 03 11:36:23 vm-training-krc harness-init[2131]: time="2025-03-03T11:36:23Z" level=info msg="# Generated by iptables-save v1.8.7 on Mon Mar  3 11:36:23 2025\n*nat\n:PREROUTING ACCEPT [0:0]\n:INPUT ACCEPT [0:0]\n:OUTPUT ACCEPT [0:0]\n:POSTROUTING ACCEPT [0:0]\n:PROXY_INIT_OUTPUT - [0:0]\n:PROXY_INIT_REDIRECT - [0:0]\n-A PREROUTING -m comment --comment \"proxy-init/install-proxy-init-prerouting\" -j PROXY_INIT_REDIRECT\n-A OUTPUT -m comment --comment \"proxy-init/install-proxy-init-output\" -j PROXY_INIT_OUTPUT\n-A PROXY_INIT_OUTPUT -m owner --uid-owner 998 -m comment --comment \"proxy-init/ignore-proxy-user-id\" -j RETURN\n-A PROXY_INIT_OUTPUT -o lo -m comment --comment \"proxy-init/ignore-loopback\" -j RETURN\n-A PROXY_INIT_OUTPUT -p tcp -m multiport --dports 4567,4568 -m comment --comment \"proxy-init/ignore-port-4567,4568\" -j RETURN\n-A PROXY_INIT_OUTPUT -p tcp -m comment --comment \"proxy-init/redirect-all-outgoing-to-proxy-port\" -j REDIRECT --to-ports 4140\n-A PROXY_INIT_REDIRECT -p tcp -m multiport --dports 4567,4568 -m comment --comment \"proxy-init/ignore-port-4567,4568\" -j RETURN\n-A PROXY_INIT_REDIRECT -p tcp -m comment --comment \"proxy-init/redirect-all-incoming-to-proxy-port\" -j REDIRECT --to-ports 4143\nCOMMIT\n# Completed on Mon Mar  3 11:36:23 2025\n"
Mar 03 11:36:23 vm-training-krc systemd[1]: Started Linkerd proxy harness.
Mar 03 11:36:23 vm-training-krc sudo[2160]:     root : PWD=/ ; USER=proxyharness ; COMMAND=/bin/bash -c '\\/bin\\/bash -c \\/var\\/lib\\/linkerd\\/bin\\/harness'
Mar 03 11:36:23 vm-training-krc sudo[2160]: pam_unix(sudo:session): session opened for user proxyharness(uid=998) by (uid=0)
Mar 03 11:36:24 vm-training-krc start-harness.sh[2161]: 2025-03-03T11:36:24.259338Z  INFO harness: Harness admin interface on 127.0.0.1:4192
Mar 03 11:36:24 vm-training-krc start-harness.sh[2161]: 2025-03-03T11:36:24.477490Z  INFO harness: identity used for control: spiffe://root.linkerd.cluster.local/proxy-harness
Mar 03 11:36:24 vm-training-krc start-harness.sh[2161]: 2025-03-03T11:36:24.498363Z  INFO controller{addr=linkerd-autoregistration.linkerd.svc.cluster.local:8081}: linkerd_pool_p2c: Adding endpoint addr=10.0.228.3:8081
Mar 03 11:36:24 vm-training-krc start-harness.sh[2161]: 2025-03-03T11:36:24.552077Z  INFO report_health:controller{addr=linkerd-autoregistration.linkerd.svc.cluster.local:8081}: linkerd_pool_p2c: Adding endpoint addr=10.0.228.3:8081
Mar 03 11:36:24 vm-training-krc start-harness.sh[2164]: [     0.120238s]  INFO ThreadId(01) linkerd2_proxy: release 0.0.0-dev (48069376) by Buoyant, Inc. on 2025-02-04T06:51:38Z
Mar 03 11:36:25 vm-training-krc start-harness.sh[2164]: [     0.276965s]  INFO ThreadId(01) linkerd2_proxy::rt: Using single-threaded proxy runtime
Mar 03 11:36:25 vm-training-krc start-harness.sh[2164]: [     0.831749s]  INFO ThreadId(01) linkerd2_proxy: Admin interface on 0.0.0.0:4191
Mar 03 11:36:25 vm-training-krc start-harness.sh[2164]: [     0.831858s]  INFO ThreadId(01) linkerd2_proxy: Inbound interface on 0.0.0.0:4143
Mar 03 11:36:25 vm-training-krc start-harness.sh[2164]: [     0.831863s]  INFO ThreadId(01) linkerd2_proxy: Outbound interface on 127.0.0.1:4140
Mar 03 11:36:25 vm-training-krc start-harness.sh[2164]: [     0.831866s]  INFO ThreadId(01) linkerd2_proxy: Tap DISABLED
Mar 03 11:36:25 vm-training-krc start-harness.sh[2164]: [     0.831870s]  INFO ThreadId(01) linkerd2_proxy: SNI is training-vm-7ef4eba0.training.external.identity.linkerd.cluster.local
Mar 03 11:36:25 vm-training-krc start-harness.sh[2164]: [     0.831874s]  INFO ThreadId(01) linkerd2_proxy: Local identity is spiffe://root.linkerd.cluster.local/proxy-harness
Mar 03 11:36:25 vm-training-krc start-harness.sh[2164]: [     0.831878s]  INFO ThreadId(01) linkerd2_proxy: Destinations resolved via linkerd-dst-headless.linkerd.svc.cluster.local:8086 (linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local)
Mar 03 11:36:25 vm-training-krc start-harness.sh[2164]: [     0.868536s]  INFO ThreadId(01) policy:controller{addr=linkerd-policy.linkerd.svc.cluster.local:8090}: linkerd_pool_p2c: Adding endpoint addr=10.244.0.200:8090
Mar 03 11:36:25 vm-training-krc start-harness.sh[2164]: [     0.868739s]  INFO ThreadId(01) dst:controller{addr=linkerd-dst-headless.linkerd.svc.cluster.local:8086}: linkerd_pool_p2c: Adding endpoint addr=10.244.0.200:8086
Mar 03 11:36:25 vm-training-krc start-harness.sh[2164]: [     0.889893s]  INFO ThreadId(02) daemon:identity: linkerd_app: Certified identity id=spiffe://root.linkerd.cluster.local/proxy-harness

Meanwhile, in the SPIRE agent logs, we’ll see entries confirming that the harness process (PID 3817 in the example) is attested under the proxyharness user (UID 998). The SPIRE agent issues an x509 SVID for spiffe://root.linkerd.cluster.local/proxy-harness, matching the registration entry we created earlier:

...
INFO[1013] Creating X509-SVID                            entry_id=5c0955f0-335c-4b5b-a3b4-9c0eae649e39 spiffe_id="spiffe://root.linkerd.cluster.local/proxy-harness" subsystem_name=manager
DEBU[1013] SVID updated                                  entry=5c0955f0-335c-4b5b-a3b4-9c0eae649e39 spiffe_id="spiffe://root.linkerd.cluster.local/proxy-harness" subsystem_name=cache_manager
DEBU[1013] PID attested to have selectors                pid=3817 selectors="[type:\"unix\" value:\"uid:998\" type:\"unix\" value:\"user:proxyharness\" type:\"unix\" value:\"gid:999\" type:\"unix\" value:\"group:proxyharness\" type:\"unix\" value:\"supplementary_gid:999\" type:\"unix\" value:\"supplementary_group:proxyharness\"]" subsystem_name=workload_attestor
DEBU[1013] Fetched X.509 SVID                            count=1 method=FetchX509SVID pid=3817 registered=true service=WorkloadAPI spiffe_id="spiffe://root.linkerd.cluster.local/proxy-harness" subsystem_name=endpoints ttl=172799.338898565
DEBU[1013] PID attested to have selectors                pid=3817 selectors="[type:\"unix\" value:\"uid:998\" type:\"unix\" value:\"user:proxyharness\" type:\"unix\" value:\"gid:999\" type:\"unix\" value:\"group:proxyharness\" type:\"unix\" value:\"supplementary_gid:999\" type:\"unix\" value:\"supplementary_group:proxyharness\"]" subsystem_name=workload_attestor
DEBU[1013] Fetched X.509 SVID                            count=1 method=FetchX509SVID pid=3817 registered=true service=WorkloadAPI spiffe_id="spiffe://root.linkerd.cluster.local/proxy-harness" subsystem_name=endpoints ttl=172799.334645563
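
On the cluster side, the autoregistration service should have created an ExternalWorkload resource for the VM in the training namespace. A quick way to check (this assumes the ExternalWorkload CRD that ships with the manageExternalWorkloads option):

kubectl get externalworkloads -n training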

Testing

With all components running, we’re now ready to verify traffic flow between a workload running on our VM and services in the Kubernetes cluster. First, we need to install Docker on the VM:

sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Then, start a simple HTTP echo service. The http-echo container listens on port 5678 by default, so we map it to port 80 on the VM:

docker run -p 80:5678 hashicorp/http-echo:latest -text="Welcome from $(hostname)"

To confirm that the workload is serving traffic internally on port 80, we can run the following command:

$ curl localhost:80
Welcome from vm-training-krc

Next, we’ll define a service in the training namespace pointing to the ExternalGroup. We’ll also deploy a test pod that has the Linkerd sidecar injected.

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: training-vm
  namespace: training
spec:
  type: ClusterIP
  selector:
    app: training-app
    location: vm
  ports:
  - port: 80
    protocol: TCP
    name: one
---
apiVersion: v1
kind: Service
metadata:
  name: test-server
spec:
  type: ClusterIP
  selector:
    app: test-server
  ports:
  - port: 80
    protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: curl-test
  annotations:
    linkerd.io/inject: enabled
spec:
  containers:
  - name: curl
    image: curlimages/curl:latest
    command: ["sleep", "infinity"]
EOF

Once the curl-test pod is running, you can exec into it and issue a request to the VM workload:

$ kubectl exec curl-test -c curl -- curl http://training-vm.training.svc.cluster.local:80
Welcome from vm-training-krc

The “Welcome from vm-training-krc” response confirms that the ExternalGroup and Linkerd proxy harness are working correctly, allowing in-cluster traffic to reach the VM workload. Next, we’ll confirm traffic can flow from the VM to a service in Kubernetes. To do so, we will deploy a simple application in the cluster.

kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: simple-app
  annotations:
    linkerd.io/inject: enabled
---
apiVersion: v1
kind: Service
metadata:
  name: simple-app-v1
  namespace: simple-app
spec:
  selector:
    app: simple-app-v1
    version: v1
  ports:
    - port: 80
      targetPort: 5678
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-app-v1
  namespace: simple-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: simple-app-v1
      version: v1
  template:
    metadata:
      labels:
        app: simple-app-v1
        version: v1
    spec:
      containers:
        - name: http-app
          image: hashicorp/http-echo:latest
          args:
            - "-text=Simple App v1"
          ports:
            - containerPort: 5678
EOF

On the VM, you can send a request to the simple-app-v1.simple-app.svc.cluster.local service using either curl or wget.

$ curl -v http://simple-app-v1.simple-app.svc.cluster.local:80
*   Trying 10.0.57.85:80...
* Connected to simple-app-v1.simple-app.svc.cluster.local (10.0.57.85) port 80 (#0)
> GET / HTTP/1.1
> Host: simple-app-v1.simple-app.svc.cluster.local
> User-Agent: curl/7.81.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< x-app-name: http-echo
< x-app-version: 1.0.0
< date: Mon, 03 Mar 2025 12:11:19 GMT
< content-length: 29
< content-type: text/plain; charset=utf-8
< 
Simple App v1
* Connection #0 to host simple-app-v1.simple-app.svc.cluster.local left intact

The “Simple App v1” response confirms that the VM-based application can reach in-cluster services via Linkerd.
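
As an additional sanity check, the proxy’s admin endpoint on the VM (port 4191, as shown in the harness logs) exposes Prometheus metrics, so you can confirm that requests actually flow through the Linkerd proxy; the exact metric names may vary between proxy versions:

# Inspect traffic counters reported by the local Linkerd proxy
curl -s http://127.0.0.1:4191/metrics | grep -E 'request_total|tcp_open_total' | head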
