Hello devs! 👋🏻 We're back!
Today we're making available the first alpha release of AssemblyLift v0.4. This initial release brings a few exciting improvements to AssemblyLift, including:
- WASI modules; Asml functions now target wasm32-wasi and are compiled as executables instead of libraries
- Simplified Rust function boilerplate (thanks to WASI)
- Experimental Kubernetes backend provider
- Hyper-based HTTP function runtime
Let's take a look!
Installing
We're going to use the new k8s provider, so as a prerequisite you'll need docker installed on your system as well as a configured Kubernetes cluster. Functions are written in Rust and will need cargo to be available as well. Finally, as of writing the AssemblyLift runtime images are hosted on public ECR, which may require authentication using the aws CLI.
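Before going further it can be worth confirming those prerequisites are reachable; these are standard commands, nothing AssemblyLift-specific:

docker info           # the Docker daemon should respond
kubectl cluster-info  # kubectl should reach your target cluster
cargo --version       # the Rust toolchain should be installed
aws --version         # only needed if you authenticate against public ECR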
You can grab AssemblyLift for macOS with
curl -O public.assemblylift.akkoro.io/cli/0.4.0-alpha.1/x86_64-apple-darwin/asml
Or for Linux with
curl -O public.assemblylift.akkoro.io/cli/0.4.0-alpha.1/x86_64-linux-gnu/asml
You can make the binary executable by running chmod +x asml.
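If you'd like asml available on your PATH, one option on a typical Unix setup is to move it into a directory that's already on it:

sudo mv asml /usr/local/bin/asml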
If necessary, authenticate docker with ECR using
aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws
Deploy a function to Kubernetes
Let's spin up a new project to experiment with
asml init -n asmlnetes && cd asmlnetes
Next, open up service.toml and update our service & function definition to use the k8s backend.
[service]
name = "my-service"

[service.provider]
name = "k8s-hyper-alpine"

[service.provider.options]
registry_name = "my-dockerhub-registry"
# ECR is also supported! Instead of registry_name use:
# registry_type = "ecr"
# aws_account_id = "1234567890"
# aws_region = "ca-central-1"

[api.functions.my-function]
provider = { name = "k8s-hyper-alpine" }
name = "my-function"
We've also deployed the HTTP IOmod as a container, which can be specified via the iomod section as follows:
[iomod.dependencies.http]
type = "container"
version = "0.2.0"
coordinates = "akkoro.std.http"
If you want to play with the HTTP module, you'll need to add the corresponding dependency to your function's Cargo.toml.
http = { package = "assemblylift-iomod-http-guest", version = "0.2" }
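For context, that dependency sits under the usual [dependencies] table of the function's Cargo.toml; the surrounding entries are whatever asml init already generated and are elided below:

[dependencies]
# ...dependencies generated by asml init are unchanged...
http = { package = "assemblylift-iomod-http-guest", version = "0.2" }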
Our function handler used to be defined in lib.rs; however, with WASI support we now use a regular main function in main.rs.
use asml_core::*;

#[handler]
async fn main() {
    FunctionContext::log("Hello, world!".into());
}
On both Lambda and Kubernetes platforms, function input is now stored in the variable ctx.input, which is injected by the handler macro.
The input shape used by the Hyper runtime isn't finalized, but as of writing you can deserialize function input as follows:
use std::collections::BTreeMap;
use asml_core::*;
use serde::Deserialize;

#[handler]
async fn main() {
    let event: HttpFunctionEvent = serde_json::from_str(&ctx.input).unwrap();
    // The event body is encoded as Z85; in this case we assume it's a string, but it could be binary!
    let decoded = z85::decode(event.body.unwrap()).unwrap();
    let body = std::str::from_utf8(&decoded).unwrap();
    FunctionContext::success(format!("Body = {:?}\n", body));
}

#[derive(Deserialize)]
struct HttpFunctionEvent {
    method: String,
    headers: BTreeMap<String, String>,
    body: Option<String>,
}
IOmods are called as they have been in previous versions, but the new version of the HTTP module guest includes a handy request builder to make things a little easier.
use asml_core::*;
use http::HttpRequestBuilder;

#[handler]
async fn main() {
    let http_req = HttpRequestBuilder::new()
        .method("GET")
        .host("akkoro.io")
        .path("/")
        .build();

    let http_res = http::request(http_req).await.unwrap();
    FunctionContext::log(format!("HTTP status: {}", http_res.code));
    FunctionContext::success(format!("Body = {:?}\n", http_res.body));
}
Running asml cast should compile the function and generate a Terraform plan which, if everything is defined correctly, will create some Kubernetes resources in a namespace (asml-{project_name}-{service_name}).
This should work against whatever cluster is defined in ~/.kube/config; however, at the moment only a NodePort service is created for each function. LoadBalancer support for cloud deployments on AKS and such will come in a later update.
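If your kubeconfig has more than one cluster, double-check that kubectl is pointed at the one you want before deploying; the docker-desktop name below is just an example context:

kubectl config get-contexts
kubectl config use-context docker-desktop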
After running asml bind to deploy your service(s), you'll be able to find the function pods, for example with
kubectl get pods -n asml-asmlnetes-my-service
If you're running on a local K8s cluster (I use the distro provided by Docker Desktop personally), you can cURL requests to localhost on the exposed NodePorts for each function.
Find the port by listing the function services
kubectl get services -n asml-asmlnetes-my-service
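From there you can cURL the function directly; the port and request body below are placeholders, so substitute the NodePort reported by the service listing and whatever payload your function expects:

# replace 30080 with the NodePort reported above
curl -v http://localhost:30080/ -d '{"hello": "asml"}'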
What's next?
The 0.4.0-alpha.x series will iterate on the above compute platform enhancements, concentrating mainly on k8s support.
The data model we wrote about a while ago is still going through more of a design phase. The plan is to introduce that functionality in a 0.4.0-beta series, and iterate on that up to the final 0.4 release.