By: Thorsten Hans
Spin apps are incredibly fast. However, sometimes you may want fine-grained control over how individual Spin apps scale based on the actual load your system is facing. Although you could leverage the Kubernetes built-in Horizontal Pod Autoscaler (HPA) to scale based on fundamental metrics such as CPU or memory utilization, in this article we’ll explore how you can scale your Spin apps running on Kubernetes with SpinKube and KEDA.
What Is SpinKube
SpinKube is an open-source Kubernetes stack specifically designed to bring the power of WebAssembly and Spin to cloud-native environments. It makes deploying and managing WebAssembly (Wasm) applications seamless by integrating Spin apps into Kubernetes clusters, offering developers a modern, lightweight runtime for fast and efficient reactive applications. With SpinKube, developers can harness Kubernetes’ scalability and ecosystem while enjoying the benefits of Wasm’s performance, portability, and security.
Why Autoscale Spin Apps
Horizontal autoscaling dynamically adjusts the number of Spin application instances based on real-time demand, ensuring optimal resource utilization and responsiveness. For Spin apps running on Kubernetes, this means scaling the number of pods in the deployment governed by the Spin Operator. You scale out during traffic spikes and scale in when facing no or low usage to save costs. By combining horizontal autoscaling with all the advantages of Wasm and SpinKube, you can focus on implementing business logic. At the same time, the underlying platform ensures the app scales effectively under varying workloads.
What Is KEDA
KEDA (Kubernetes Event-Driven Autoscaling) extends Kubernetes’ scaling capabilities by allowing workloads to scale based on event-driven metrics such as message queue length, HTTP requests, or custom Prometheus queries. Unlike the traditional Horizontal Pod Autoscaler (HPA), which relies solely on CPU or memory metrics, KEDA provides fine-grained control and adaptability to diverse application needs. For developers using SpinKube, KEDA enables efficient scaling of Spin apps based on application-specific metrics, making it easier to handle event-driven workloads in a Kubernetes environment. KEDA ships with a vast number of built-in scalers that simplify integration with services running both inside and outside of Kubernetes.
The Sample Application
For demonstration purposes, we will explore and scale a small ETL scenario implemented as a Spin application.
ETL stands for Extract, Transform, Load, a process used to collect data from various sources, transform it into a structured format, and load it into a target system like a database, a data warehouse, or another integration system for further processing. It is a fundamental workflow in data engineering, enabling organizations to consolidate, clean, and prepare data for analysis and decision-making.
For the sake of this article, we’ll focus on scaling the transformation part of the ETL application. We’ll use an Amazon SQS (Simple Queue Service) queue as the ingestion layer and a simple Valkey Channel (deployed to the Kubernetes cluster) as the target system.
The Amazon SQS queue receives messages with customer-related information as a message payload in the following format:
{
  "firstName": "John",
  "lastName": "Doe",
  "age": 28
}
For every message appearing in the SQS queue, our Spin application is executed using the SQS trigger for Spin. It validates the incoming message payload and transforms it into the OutboundCustomerModel structure:
{
  "firstName": "John",
  "lastName": "Doe",
  "fullName": "John Doe",
  "adult": true
}
Finally, the Spin app will send the transformed message to the customers channel in Valkey.
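The models module itself is not shown in this article, but based on the payloads above and the code that follows, the conversion between the two models could look roughly like the following sketch. The field names and the age threshold of 18 are assumptions; in the real app, the structs would additionally derive serde’s Serialize/Deserialize with camelCase renaming to match the JSON shown above.

```rust
// Minimal sketch of the two customer models and the conversion between them.
// serde derive attributes (for the camelCase JSON payloads) are omitted here.
pub struct InboundCustomerModel {
    pub first_name: String,
    pub last_name: String,
    pub age: u8,
}

pub struct OutboundCustomerModel {
    pub first_name: String,
    pub last_name: String,
    pub full_name: String,
    pub adult: bool,
}

impl From<&InboundCustomerModel> for OutboundCustomerModel {
    fn from(customer: &InboundCustomerModel) -> Self {
        Self {
            first_name: customer.first_name.clone(),
            last_name: customer.last_name.clone(),
            // build the full name from first and last name ...
            full_name: format!("{} {}", customer.first_name, customer.last_name),
            // ... and treat everyone aged 18 or older as an adult (assumed threshold)
            adult: customer.age >= 18,
        }
    }
}
```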
You can find all source code and scripts referenced in this article in this repository over on GitHub.
The Customer Transformer Spin App
As mentioned in the previous paragraph, the transformer has been built using the SQS trigger for Spin. Looking at the implementation, you can see that incoming data is validated, transformed, and loaded into the target system (a Redis channel provided through Valkey):
// snip: Remove use statements to increase readability
mod bindings;
mod models;

struct Component;

const REDIS_CONNECTION_STRING_VAR: &str = "REDIS_CONNECTION_STRING";

impl Guest for Component {
    /// Implement the function specified in the SQS WIT world
    fn handle_queue_message(message: Message) -> Result<MessageAction, Error> {
        // prepare
        // ensure the Redis/Valkey connection string is provided
        let Ok(redis_cs) = variables::get(REDIS_CONNECTION_STRING_VAR) else {
            println!("Redis Connection String not set.");
            return Ok(MessageAction::Leave);
        };
        // validate
        // 1: only process messages with a message ID
        let Some(id) = message.id else {
            println!("Leaving message in SQS: No Message ID");
            return Ok(MessageAction::Leave);
        };
        // 2: only process messages with a body
        let Some(body) = message.body else {
            println!("Leaving message in SQS: No Message Body");
            return Ok(MessageAction::Leave);
        };
        // 3: only process those messages that have a body which could be turned into
        // an instance of InboundCustomerModel
        let Ok(customer) = serde_json::from_str::<InboundCustomerModel>(&body) else {
            println!("Leaving message in SQS: Invalid Message Body");
            return Ok(MessageAction::Leave);
        };
        // transform
        // this converts an instance of InboundCustomerModel into an instance of
        // OutboundCustomerModel (see transform fn below)
        let transformed = transform(&customer);
        // load
        // load the data into the target system (here a Redis channel provided through Valkey)
        match Loader::with_target(redis_cs).load(id, transformed) {
            Ok(r) => Ok(r),
            Err(e) => Err(Error::Other(e.to_string())),
        }
    }
}

/// This function encapsulates the actual transformation
/// "Business Logic" applied is
/// - Creating the full name
/// - Deciding based on the age if treated as an adult or not
pub fn transform(customer: &InboundCustomerModel) -> OutboundCustomerModel {
    OutboundCustomerModel::from(customer)
}

/// Encapsulated Loader (to transfer data into the target system)
pub struct Loader {
    connection_string: String,
}

impl Loader {
    /// creates a new instance of Loader with connection string set
    pub fn with_target(connection_string: String) -> Self {
        Self { connection_string }
    }

    /// Loads a customer (OutboundCustomerModel) into the target system
    /// Takes
    /// - the id (received initially as message.Id from SQS)
    /// - the transformed object (OutboundCustomerModel)
    /// Tries to load it into the target system (Redis Channel via Valkey)
    pub fn load(&self, id: String, outbound_model: OutboundCustomerModel) -> Result<MessageAction> {
        let connection = Connection::open(&self.connection_string)?;
        let payload = serde_json::to_vec(&outbound_model)?;
        connection.set(id.as_str(), &payload)?;
        println!(
            "Loaded {} with Id ({}) to Redis Channel",
            outbound_model.full_name, id
        );
        // only if loading the message succeeded do we remove the message from SQS
        Ok(MessageAction::Delete)
    }
}

// ensure necessary capabilities are exported by the resulting wasm module
bindings::export!(Component with_types_in bindings);
Cloud Infrastructure Prerequisites
To scale the Spin app described above on Kubernetes with SpinKube and KEDA, you’ll need a Kubernetes cluster and an AWS SQS (Simple Queue Service) queue. The Kubernetes cluster acts as the platform for running your Spin apps, while the AWS SQS queue triggers the Spin app and serves as the event source that drives scaling decisions.
Setting up the Kubernetes cluster and the AWS SQS queue is outside the scope of this article, but you can deploy an Amazon EKS cluster by following this guide, or use k3s as a lightweight, local alternative. For setting up an SQS queue, refer to this tutorial.
Deploying SpinKube, KEDA & Valkey to Kubernetes
As the app relies on multiple cluster-wide services, you can use the following deployment scripts (located in the root folder of the repository):
- deploy-spinkube.sh - To deploy SpinKube and its dependencies
- deploy-keda.sh - To deploy KEDA
- deploy-valkey.sh - To deploy Valkey
Consult the official SpinKube, KEDA, and Valkey documentation to discover alternative deployment approaches, if necessary.
# Deploy SpinKube
./deploy-spinkube.sh
🚀 Deploying SpinKube to your Kubernetes Cluster
...
✅ SpinKube deployed to your Kubernetes Cluster
# Deploy KEDA
./deploy-keda.sh
🚀 Deploying KEDA to your Kubernetes Cluster
...
✅ KEDA deployed to your Kubernetes Cluster
# Deploy Valkey
./deploy-valkey.sh
🚀 Deploying Valkey to your Kubernetes Cluster
...
✅ Valkey deployed to your Kubernetes Cluster
Application Deployment
Spin Apps are packaged and distributed as OCI artifacts. By leveraging OCI artifacts, Spin Apps can be distributed using any registry that implements the Open Container Initiative Distribution Specification (a.k.a. “OCI Distribution Spec”).
The spin CLI simplifies packaging and distribution of Spin Apps and provides an atomic command for this (spin registry push). You can package and distribute the customer-transformer app that you explored in the previous section like this:
# Build, Package & Distribute the Spin app
./build-and-push-spinapp.sh
🚀 Building and Pushing your Spin App
Please provide the SQS queue URL: https://sqs.eu-central-1.amazonaws.com/111/in
...
Building component customer-transformer with `cargo build --target wasm32-wasi --release`
Working directory: "./customer-transformer"
Finished `release` profile [optimized] target(s) in 0.13s
Finished building all Spin components
Pushing app to the Registry..
Pushed with digest sha256:0c56ce5de6bdc47f9a739f27c5845b99546cba59900b195c43152ec4362e9fae
✅ SpinApp built and pushed to ttl.sh/customer-transformer:24h
It is a good practice to add the --build flag to spin registry push. It prevents you from accidentally pushing an outdated version of your Spin App to your registry of choice.
Once the Spin app is stored in the OCI-compliant registry (here ttl.sh), you can deploy it to your Kubernetes cluster using the deploy-spinapp.sh script (also located in the root folder of the repository):
# Deploy Spin App
./deploy-spinapp.sh
🚀 Deploying SpinApp to your Kubernetes Cluster
Please provide your AWS region: eu-central-1
configmap/aws created
deployment.apps/customer-transformer created
✅ SpinApp deployed to your Kubernetes Cluster
Looking at the Deployment manifest, you can see that the runtimeClassName is specified as wasmtime-spin-v2, which instructs Kubernetes to run the Spin app using containerd-shim-spin instead of spawning a traditional container. The wasmtime-spin-v2 RuntimeClass has been provisioned to the Kubernetes cluster as part of the SpinKube deployment. You can always check for available RuntimeClasses in your Kubernetes cluster using kubectl get runtimeclass.
Also notice that necessary configuration data is loaded from corresponding Kubernetes Secrets and ConfigMaps:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customer-transformer
  ## ... snip ...
spec:
  template:
    spec:
      runtimeClassName: wasmtime-spin-v2
      containers:
        - name: customer-transformer
          image: "ttl.sh/customer-transformer:12h"
          command: ["/"]
          # ... snip ...
          env:
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: aws-credentials
                  key: AWS_ACCESS_KEY_ID
                  optional: false
            # ... snip ...
            - name: REDIS_CONNECTION_STRING
              valueFrom:
                secretKeyRef:
                  name: valkey
                  key: valkey-url
                  optional: false
Deploying The KEDA Scaler
With the Spin app deployed to Kubernetes, you can move on to configuring the KEDA AWS SQS scaler, which will be responsible for horizontally scaling the Spin app based on the number of messages in the AWS SQS queue.
Use the deploy-scaler.sh script (located in the root folder of the repository) to deploy the KEDA AWS SQS scaler. The script will ask you to provide the necessary information:
# Deploy the KEDA scaler
./deploy-scaler.sh
🚀 Deploying KEDA AWS SQS Scaler to your Kubernetes Cluster
Please provide the SQS queue URL: https://sqs.eu-central-1.amazonaws.com/111/in
...
✅ KEDA AWS SQS Scaler deployed to your Kubernetes Cluster
KEDA offers different ways of authenticating in the context of AWS. For production workloads, you may want to leverage workload identities as described in the KEDA documentation. For the sake of this article, the script will simply store AWS credentials in a Kubernetes Secret and link it to the ScaledObject using a TriggerAuthentication CR.
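Such a TriggerAuthentication could look roughly like the following sketch. The Secret name and key names are assumptions based on the manifests shown in this article; awsAccessKeyID and awsSecretAccessKey are the authentication parameter names the aws-sqs-queue scaler expects:

```yaml
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: aws-credentials
spec:
  secretTargetRef:
    # map keys of the aws-credentials Secret to the scaler's auth parameters
    - parameter: awsAccessKeyID
      name: aws-credentials
      key: AWS_ACCESS_KEY_ID
    - parameter: awsSecretAccessKey
      name: aws-credentials
      key: AWS_SECRET_ACCESS_KEY
```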
The ScaledObject is shown below. It tells KEDA to scale the Spin app (represented by the customer-transformer deployment) between 0 and 10 replicas when more than 10 messages are in the desired SQS queue. It also instructs KEDA to poll the number of messages every 2 seconds and use a 30-second cool-down period:
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: customer-transformer-sqs-scaler
spec:
  minReplicaCount: 0
  maxReplicaCount: 10
  pollingInterval: 2
  cooldownPeriod: 30
  scaleTargetRef:
    name: customer-transformer
  triggers:
    - type: aws-sqs-queue
      authenticationRef:
        name: aws-credentials
      metadata:
        queueURL: <YOUR_SQS_QUEUE_URL>
        queueLength: "10"
        awsRegion: <YOUR_AWS_REGION>
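For intuition, KEDA hands the queue length to a Horizontal Pod Autoscaler, which roughly computes the desired replica count as the ceiling of the number of messages divided by queueLength, clamped to the configured bounds. A simplified sketch (the real HPA algorithm additionally applies stabilization windows and scaling policies, and desired_replicas is a hypothetical helper, not a KEDA API):

```rust
// Simplified model of how the desired replica count is derived from the
// current queue length and the scaler's queueLength target.
fn desired_replicas(messages: u64, queue_length: u64, min: u64, max: u64) -> u64 {
    // ceiling division: how many replicas are needed so that each one
    // handles at most `queue_length` messages
    let raw = (messages + queue_length - 1) / queue_length;
    // respect minReplicaCount and maxReplicaCount
    raw.clamp(min, max)
}
```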
Verify Autoscaling
The repository contains a small Rust application that can be used to send batches of messages to the SQS queue (see src/loader). You can run the app using the cargo run command with the necessary flags --message-count and --queue-url:
pushd src/loader
export QUEUE_URL="<YOUR_QUEUE_URL>"
cargo run -- --message-count 1000 --queue-url $QUEUE_URL
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.25s
Running `target/debug/loader --message-count 1000 --queue-url ***`
AWS SQS Loader
--------------
Generating 1000 messages for ***
AWS SQS allows sending batches with max 10 items.
💡 Will create and send 100 batches
Batch sent!
...
Batch sent!
✅ Sent a total of 1000 messages
popd
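The loader’s output above hints at its batching logic: the SQS SendMessageBatch API accepts at most 10 entries per call, so 1000 messages become 100 batches. A simplified sketch of that chunking, with the actual AWS SDK calls omitted (build_batches is a hypothetical helper, not necessarily the name used in the repository):

```rust
// SQS SendMessageBatch accepts at most 10 entries per request.
const SQS_MAX_BATCH_SIZE: usize = 10;

// Split the generated messages into batches of at most 10 items each;
// each inner Vec would then be sent via one SendMessageBatch call.
fn build_batches(messages: Vec<String>) -> Vec<Vec<String>> {
    messages
        .chunks(SQS_MAX_BATCH_SIZE)
        .map(|chunk| chunk.to_vec())
        .collect()
}
```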
Use an additional terminal instance to verify horizontal scaling:
# Retrieve deployment when facing load
kubectl get deployment customer-transformer
NAME READY UP-TO-DATE AVAILABLE AGE
customer-transformer 8/8 8 8 23
# Retrieve pods when facing load
kubectl get pod
NAME READY STATUS RESTARTS AGE
customer-transformer-cc84f6cf-xm86l 1/1 Running 0 35s
customer-transformer-cc84f6cf-dpms5 1/1 Running 0 35s
customer-transformer-cc84f6cf-27489 1/1 Running 0 35s
customer-transformer-cc84f6cf-4v54h 1/1 Running 0 19s
customer-transformer-cc84f6cf-77cbq 1/1 Running 0 20s
customer-transformer-cc84f6cf-2d6d5 1/1 Running 0 20s
customer-transformer-cc84f6cf-pf4kd 1/1 Running 0 20s
customer-transformer-cc84f6cf-x5msx 1/1 Running 0 4s
Once all messages have been processed correctly (and the cool-down period has passed), you can check the customer-transformer deployment again, and you’ll notice the number of replicas going down - all the way to zero:
# Retrieve deployment once load has been processed
kubectl get deployment customer-transformer
NAME READY UP-TO-DATE AVAILABLE AGE
customer-transformer 0/0 0 0 25m
Conclusion
By combining SpinKube and KEDA, you can efficiently scale Spin apps based on real-time event-driven metrics, like messages in an AWS SQS queue. This approach allows you to optimize resource usage, reduce costs, and handle varying workloads seamlessly, while leveraging the performance and security benefits of WebAssembly.
SpinKube integrates seamlessly with the Kubernetes ecosystem, so horizontally scaling WebAssembly workloads works just like scaling traditional containers.
With Kubernetes, KEDA, and SpinKube working together, you get fine-grained control over scaling behavior, enabling your applications to remain responsive and efficient under fluctuating demands. This powerful combination empowers developers to build modern, lightweight, and highly scalable applications in a cloud-native environment.