
Nithin Jois


Understanding Unikernels: The Future of Cloud Computing, Probably

Having used Containers, Kubernetes and Serverless (a lot!) over the last few years, I catch myself wondering 'What next?' when it comes to more efficient, faster and secure units of computing. My initial research indicates that Unikernels could be a promising bet.

What are Unikernels?

Here's how I summarised it for myself after a fair bit of reading:

Single-purpose compute environments packaged with the necessary dependencies, runtime, libraries, kernel capabilities and everything else required to boot and run Application code and System code in a single address space, with No Operating System, No Users and No Shell, resulting in a smaller, faster and more secure system.

The code-base and components required to run unikernels are typically a few MBs in size, thereby decreasing the possibility of unnecessary, and potentially vulnerable, components being installed. To top it off, system calls are usually unsupported as well. Additionally, randomising the memory layout (ASLR) makes it almost impossible for an attacker to invoke specific OS function calls.

Summary:

  • No Shell
  • No Users
  • Single Process
  • Does not need an underlying Host OS (typically runs on a Hypervisor)
  • Decreased Attack Surface

Types of Unikernels

Clean Slate

Restricted to a single language, with all the necessary interfaces provided, and also allows language-specific virtual machines to function.

Examples: MirageOS, IncludeOS, HalVM, LING, runtime.js

Legacy

Implemented using the POSIX (Portable Operating System Interface) standard, which makes it possible to run most POSIX-compliant applications and services with a few configuration changes, without having to rewrite them completely.

Examples: OSv, Graphene, Nanos, Rumprun


VMs vs. Containers vs. Lightweight VMs vs. Unikernels

VMs | Containers | Lightweight VMs | Unikernels
Runs on a Hypervisor | Requires a Host OS | Requires a Host OS | Runs on a Hypervisor
Uses own Kernel + Allocated Resources | Relies on Host Kernel + Consumes Host Resources | Dedicated virtualized Kernel (KVM) + Allocated Resources | Built-in Kernel libraries + Allocated Resources
Multiple Processes | Multiple Processes | Multiple Processes | Single Process
Large Attack Surface | Relatively Low Attack Surface | Low Attack Surface | Minimal Attack Surface

Comparison of High-level Architectures



Why isn't everyone using this already!?

Although Unikernels have been around for a while, one of the main reasons they haven't been in the spotlight all that much is probably that most applications are designed to run as multi-process systems. Migrating them to a single-process, single-address-space system can be challenging, to say the least. Managing and using unikernels can also be challenging for someone not completely familiar with the system, but there are projects that are making this relatively easy.

Thanks to DevSecOps and Microservices, a lot of the complexity in Operations and Security is fairly well abstracted. But asking developers to become experts at low-level C, Operating Systems internals and the libraries needed to write Unikernels for their applications isn't going to end very well. Finally, debugging and testing can be quite challenging for the average developer unless they know their way around strace, ftrace and GDB.

Remember, you're not going to be testing just the application code anymore. If the libraries or the language are modified to work with unikernels, those will have to be tested as well.

For Unikernels to be widely used, the complexity behind them should ideally be abstracted away, and implementing, deploying and managing them should be seamless.


Implementing Unikernels

I've decided to use the Nanos unikernel by NanoVMs and their CLI tool ops to build and deploy. It seems like the simplest option out there, and the documentation is fairly straightforward as well.

Additionally, the fact that they support deployments on multiple cloud-providers right out of the box seemed pretty nifty. I'm going to leverage this feature and deploy on AWS.

Ops is going to package everything to create an AMI (Amazon Machine Image), which I can then launch as an EC2 instance.

Disclaimer: I'm a tiny, tiny investor in NanoVMs.

Prerequisites

Since I didn't want to mess with my machine, I got myself a Lightsail server running Ubuntu 20.04 LTS. Once the necessary dependencies (aws-cli, qemu, golang and ops) were installed on the server, I configured aws-cli by running aws configure and created an AWS S3 bucket called s3://understanding-unikernels that's going to help create the AMI.

prerequisites.sh

apt update && apt -y upgrade
echo "Installing golang"
wget https://golang.org/dl/go1.15.7.linux-amd64.tar.gz && tar -C /usr/local -xzf go1.15.7.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin
echo "Installing AWS-CLI and qemu"
apt install -y awscli qemu
echo "Installing ops"
curl https://ops.city/get.sh -sSfL | sh

Installed prerequisites and respective versions
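
The prerequisites script only covers the tooling; the aws-cli configuration and bucket creation mentioned above were separate one-off commands, roughly like this (the region value is just an example):

aws configure    # prompts for access key, secret key, default region and output format
aws s3 mb s3://understanding-unikernels --region us-east-1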

IAM Permissions

The AWS user I configured on my server for aws-cli has the AdministratorAccess policy attached to it, and also needs the vmimport policy since ops is going to be importing AMI(s). This can be done from the AWS Management Console by attaching the vmimport managed policy to the AWS user.

I definitely do not recommend using the AdministratorAccess policy. I'm only using it since I'm testing things out, and it makes things a lot easier!
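
For reference, the same policy attachment can also be done from the CLI; the user name and account ID below are placeholders, and this assumes vmimport exists as a customer-managed policy in the account:

# attach the vmimport policy to the CLI user (placeholder values)
aws iam attach-user-policy \
    --user-name <aws-user> \
    --policy-arn arn:aws:iam::<account-id>:policy/vmimport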

If Cloud Security, Containers, Kubernetes Security and DevSecOps are topics you're interested in, be sure to check out AppSecEngineer for self-paced trainings with state-of-the-art hands-on Labs!


Create Snapshot Image

Now that all the prerequisites have been installed and the necessary dependencies are configured, I'm going to create a super-simple golang static file server that's going to serve index.html on port 8080.

server.go

package main

import (
    "log"
    "net/http"
)

func main() {
    fs := http.FileServer(http.Dir("static"))
    http.Handle("/", fs)

    log.Println("Listening...on 8080")
    log.Fatal(http.ListenAndServe(":8080", nil))
}

static/index.html

<!doctype html>
<html>
<head>
<meta charset="utf-8">
    <title>Hello!</title>
</head>
<body>
    <h1>Understanding Unikernels!</h1>
</body>
</html>

Once server.go and index.html are created, we can build our code to create the server executable.

GOOS=linux go build server.go

The last thing that needs to be done before we're ready to create our unikernel is the configuration file for ops. This file is going to contain the S3 bucket name that was previously created (s3://understanding-unikernels), and the AWS region to create resources in.

config.json

{
    "Dirs" : ["static"],
    "CloudConfig" :{
        "ProjectID" :"test-project",
        "Zone": "<region>",
        "BucketName":"<bucket-name>"
    },
    "RebootOnExit": true
}

After running all the steps mentioned above, the directory structure should be similar to the one below:

directory structure

understanding-unikernels/
├── config.json
├── server
├── server.go
└── static
    └── index.html
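
Before creating anything on AWS, ops can also boot the same binary locally under qemu, which makes for a quick sanity check. This is an extra step on my part, and I'm assuming the -p flag forwards the port the way the ops docs describe:

ops run server -c config.json -p 8080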

By leveraging ops, we can now create and list out the unikernel machine-image(s) on our AWS account.

ops image create -t aws -c config.json -a server

ops image list -t aws -z <region>
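
As an additional check (not something ops requires), the freshly imported AMI should also show up from the AWS side:

# list AMIs owned by this account; the imported unikernel image should appear here
aws ec2 describe-images --owners self --region <region> --query "Images[].{Id:ImageId,Name:Name}"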


Create Instance

Once the unikernel has been imported into AWS as an AMI, we can proceed to launch it as an EC2 instance with ops. Since the static file server is configured to run on port 8080, we'll need to specify that along with the AWS region as part of the ops command to run the instance.

ops instance create -c config.json -t aws -z <region> -i server-image --port 8080
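
To confirm the instance is actually serving traffic, I grab its public IP and hit port 8080. The filter below simply assumes the unikernel is the only running instance in the region:

# find the public IP of the running instance
aws ec2 describe-instances --region <region> \
    --filters "Name=instance-state-name,Values=running" \
    --query "Reservations[].Instances[].PublicIpAddress" --output text

# fetch index.html from the static file server
curl http://<public-ip>:8080/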


One thing I observed was that ops defaults to the t2.micro instance type, which uses the Xen hypervisor. The --flavor flag in ops can be used to change the instance type. Additionally, support for AWS Nitro instances has been added as well, but the AMI needs ENA (Elastic Network Adapter) enabled to work with Nitro instances, which can be checked by running the following command:

aws ec2 describe-images --image-id <ami-id> --query "Images[].EnaSupport"
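
For example, switching to a different instance type would look something like this; t3.micro is just an illustrative value, and whether it boots as a Nitro instance depends on the ENA check above:

ops instance create -c config.json -t aws -z <region> -i server-image --port 8080 --flavor t3.micro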

Delete Instance

The final step is to delete the EC2 instance and the AMI that was created. Doing this with ops is extremely simple.

ops instance delete -t aws -z <region> <instance-id>

ops instance list -t aws -z <region>

ops image delete <ami-name> -t aws -z <region> 
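
Deregistering an AMI doesn't always remove its backing EBS snapshot, so I also like to confirm nothing billable is left behind; this check is my own habit rather than part of the ops workflow:

# anything listed here is still incurring snapshot storage costs
aws ec2 describe-snapshots --owner-ids self --region <region> --query "Snapshots[].{Id:SnapshotId,Desc:Description}"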



Security Considerations

One major security issue is the fact that unikernels run the application and the kernel together as a single process, allowing applications (or attackers) to potentially call kernel-level functions. Supply-chain attacks, which are a very likely threat, can make this a reality.

Nanos, however, manages to retain different privileges for kernel and user code by having separate partitions, thereby preventing such attacks to a certain extent. Additionally, arbitrary code execution can be prevented by leveraging the exec_protection flag, which stops the kernel from executing any code outside the main program executable.


Interesting Projects

If you are working on a cool unikernel project that's not mentioned below, please reach out to me. I'd love to give it a shot and add it to the list!


References

For those of you interested in going further down this rabbit hole, here are a few links that will help:


Originally published at https://nithinjois.substack.com.
