Oliver Jumpertz

Originally published at oliverjumpertz.com

How AWS Lambda Works Under The Hood

AWS Lambda is an awesome offering. You provide the code, AWS handles the infrastructure and execution for you.

But where is your code actually executed? How does AWS do it?

I recently tried to find out for myself, and this article is the result of what I learned.

Thinking about Lambda

We know that we can write a script like the one below and upload it to AWS, and they take care of everything else.
And just from looking at the code, it doesn't look like the usual API method we'd implement in Express.



```javascript
// A minimal Lambda handler: it receives an event and returns an HTTP-style response.
exports.handler = async (event) => {
  return {
    statusCode: 200,
    body: JSON.stringify({ msg: "Hello from Lambda!" })
  };
};
```



We're exporting a function, so something else must take our code, import it, and then handle everything else.

Conclusion number one:
Something must run our code.
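To get a feel for what that "something" could minimally look like, here is a tiny, purely illustrative sketch: a wrapper that imports the exported handler and invokes it with an event. The file name index.js and the fake event are assumptions for this example, not AWS's actual code.

```javascript
// Purely illustrative: import the user's module and invoke its exported handler.
// The file name "index.js" and the fake event are assumptions for this sketch.
const { handler } = require("./index");

async function invokeOnce() {
  const fakeEvent = { httpMethod: "GET", path: "/hello" };
  const response = await handler(fakeEvent);
  console.log("Handler returned:", response);
}

invokeOnce().catch(console.error);
```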

We also know about the infamous cold start issue, which has gotten better over time but is still there. Sometimes the environment is shut down and later started again.

Conclusion number two:
Whatever runs our code can be shut down and started again.

Have you ever noticed that it's actually impossible to access anything on the host system? If not, try it, and you'll see that the environment prevents you.

Conclusion number three:
The environment is pretty secure.

Thinking about the technology Lambda is based on

There are a few ways AWS could have implemented Lambda (taking into account its initial release in 2014):

  • Containerization
  • Virtualization
  • Something running on bare metal

We can quickly rule out "Something running on bare metal". AWS already had EC2 at that time and some good knowledge of virtualization.
It would not have made a lot of sense for AWS to step back from virtualization and not make use of their existing infrastructure.
They basically had everything in place to provision a virtual machine on-the-fly.

What about containers, then?
They can be spun up quickly and disposed of again.
AWS could have taken the code, wrapped it with something, and then put it inside a container.
This would have been a great idea, but also something completely new for AWS at that time.
Additionally, it would not explain the (old) cold start issues, because containers are usually pretty fast to spin up.

What about virtualization then?
It would make a lot of sense.
At the time of starting Lambda, AWS already had EC2 and all the infrastructure to provision a virtual machine on-the-fly. It would also explain why a cold-started Lambda function could sometimes take so long until it finally served a request. But how did they manage to reduce cold start times so much over the years?

Before we dive deeper, I'll give you the answer:
Lambda has been based on virtualization technology since its release. No fancy containers, no exotic custom sandbox.

It simply made the most sense for AWS to do it exactly this way. They had all the knowledge, as you've read above, and they had the infrastructure for provisioning. All they had to add was something to wrap user functions, something to call them, and some supporting services to handle eventing.

And now that we know it's virtualization, we can take a look at what exactly is used nowadays.

Enter Firecracker

Firecracker is a virtualization technology, or more precisely a virtual machine monitor (VMM), developed at Amazon (and now open source) and written in Rust.

It's the engine powering all your Lambda functions.

What Firecracker basically does is create and manage a multitude of microVMs on top of KVM, the Linux Kernel-based Virtual Machine. These microVMs start faster than traditional VMs and expose a much smaller attack surface.

The architecture of Firecracker

The interesting thing about those microVMs is that they are actually on par with containers regarding memory footprint and start-up time, all while offering even more security thanks to the isolation KVM provides.

You can read more about KVM here.

Firecracker comes with a REST API that is used to create VMs, delete them, manage them, and so on.
Whenever you create a new Lambda function and upload your code, the Firecracker REST API is called under the hood to create a microVM with your function's CPU and memory settings.
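To give a feel for what such a call could look like, here is a rough Node.js sketch that talks to Firecracker's API over its Unix socket. The socket path is an assumption (Firecracker lets you pick it at startup), and the payload is reduced to just the CPU and memory settings; a real setup would also configure a kernel image and root filesystem through further endpoints before starting the machine.

```javascript
const http = require("http");

// Assumption for this sketch: Firecracker was started with its API socket here.
const FIRECRACKER_SOCKET = "/tmp/firecracker.socket";

// Send one JSON request to Firecracker's REST API over its Unix domain socket.
function firecrackerRequest(method, path, body) {
  return new Promise((resolve, reject) => {
    const req = http.request(
      {
        socketPath: FIRECRACKER_SOCKET,
        method,
        path,
        headers: { "Content-Type": "application/json" },
      },
      (res) => {
        let data = "";
        res.on("data", (chunk) => (data += chunk));
        res.on("end", () => resolve({ status: res.statusCode, body: data }));
      }
    );
    req.on("error", reject);
    req.end(JSON.stringify(body));
  });
}

async function createAndStartMicroVm() {
  // Roughly what a function's CPU/memory settings would map to.
  await firecrackerRequest("PUT", "/machine-config", {
    vcpu_count: 1,
    mem_size_mib: 128,
  });
  // Boot the microVM.
  await firecrackerRequest("PUT", "/actions", { action_type: "InstanceStart" });
}

createAndStartMicroVm().catch(console.error);
```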

AWS keeps base images that contain language- and runtime-specific bootstrap code.
This is the code that actually invokes your handler, passes it the request, and returns the response to the caller.
This is also where various metrics are measured that are later used to calculate your bill.

You can imagine the code as containing an infinite loop, waiting for requests, passing them to your function, returning the response, and gathering execution metrics.
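AWS publicly documents a very similar loop for custom runtimes: a bootstrap process long-polls the Lambda Runtime API for the next event, calls the handler, and posts the result back. The Node.js sketch below follows that documented API; it's a simplified illustration, not the exact code AWS ships in its base images, and it leaves out error handling and metrics.

```javascript
const http = require("http");
const path = require("path");

// Provided by the Lambda environment, e.g. "127.0.0.1:9001".
const [apiHost, apiPort] = process.env.AWS_LAMBDA_RUNTIME_API.split(":");

// _HANDLER is "<file>.<export>", e.g. "index.handler".
const [moduleName, handlerName] = process.env._HANDLER.split(".");
const userModule = require(path.join(process.env.LAMBDA_TASK_ROOT, moduleName));
const handler = userModule[handlerName];

// Send one request to the Runtime API and collect the response.
function request(method, requestPath, body) {
  return new Promise((resolve, reject) => {
    const req = http.request(
      { host: apiHost, port: apiPort, method, path: requestPath },
      (res) => {
        let data = "";
        res.on("data", (chunk) => (data += chunk));
        res.on("end", () => resolve({ headers: res.headers, body: data }));
      }
    );
    req.on("error", reject);
    if (body) req.write(body);
    req.end();
  });
}

async function loop() {
  // The "infinite loop": fetch the next event, invoke the handler, post the result.
  while (true) {
    const next = await request("GET", "/2018-06-01/runtime/invocation/next");
    const requestId = next.headers["lambda-runtime-aws-request-id"];
    const event = JSON.parse(next.body);

    const result = await handler(event);

    await request(
      "POST",
      `/2018-06-01/runtime/invocation/${requestId}/response`,
      JSON.stringify(result)
    );
  }
}

loop().catch(console.error);
```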

After Firecracker has created a new microVM, including your language-specific runtime, your code is put into its /var/runtime/bin folder. This is also where the bootstrap code resides.
Now your function is basically able to run and accept requests.

After a while, however, AWS will shut the VM down to save resources on their side.
This is, once again, a call to the Firecracker API.

Incoming requests, e.g. through API Gateway, lead to Firecracker being tasked to start the VM again, such that it can process the request.

Surrounding infrastructure and supporting services

There are of course a lot of surrounding systems and services which do their work to make AWS Lambda what it is.

There are services and systems around Firecracker that make all those requests to its API.
Some services route the requests. Others decide when to call Firecracker to shut a certain VM down or pause it, and when to spin it up again.
And there are certainly a lot more, like queues scheduling asynchronous messages, and much more.

Conclusion

It's pretty interesting to see which path AWS has taken to make Lambda what it is today, and even more interesting to see how Firecracker solves a lot of the problems serverless functions cause for service providers. Firecracker is an integral part of it all: a pretty exciting piece of technology that doesn't go the usual container route but uses another awesome feature of the Linux kernel, KVM.

Before you leave

If you like my content, visit me on Twitter, and perhaps you’ll like what you see.

Top comments (8)

David Malouf

Nice read, thank you Oliver!

I'm not that well-versed in KVM microVMs so I figured I'd ask you: do you know why/how KVM sometimes creates new Lambdas with (sometimes) artifacts from a previously-started Lambda? For example, sometimes there will be items in-memory (AWS often encourages 'storing' DB connections in-memory as they can, sometimes, be reused by a 'new' Lambda) or sometimes there are files in the /tmp folder.

Just curious.

Oliver Jumpertz

Hey David, thank you very much!

The /tmp folder is permanent.
Imagine you use VMWare to start a virtual machine on your computer. You can shut it down, but the files you stored are persisted and are there again when you restart the VM. The same holds true for microVMs managed by Firecracker.

Regarding in-memory storage:
Your handler is actually executed by a wrapper. That wrapper is an HTTP service that forwards requests to your handler function. As long as the VM is running, that wrapper is running, like a normal microservice. And for all that time it is running, you can indeed store things in memory.

As soon as your VM is shut down, however, that stored information gets lost. :)
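As a small illustration of that reuse: anything initialized outside the handler lives as long as the wrapper process (and with it the microVM) stays warm. The createDbClient function below is just a stand-in for whatever client library you actually use.

```javascript
// A stand-in for a real database client; swap in your actual driver.
async function createDbClient() {
  return { query: async (sql) => [{ ok: true, sql }] };
}

// Module-level state survives across warm invocations of the same environment.
let cachedClient = null;

async function getClient() {
  if (!cachedClient) {
    // Only paid for on a cold start (or after the environment was recycled).
    cachedClient = await createDbClient();
  }
  return cachedClient;
}

exports.handler = async (event) => {
  const client = await getClient();
  const rows = await client.query("SELECT 1");
  return { statusCode: 200, body: JSON.stringify(rows) };
};
```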

David Malouf

That makes sense, thank you Oliver!

Following on, my experience is that the memory/tmp-folder are not always left-over. Here's my scenario: I was tasked with a proof-of-concept need where a PDF uploaded to S3 triggered a Lambda that would take the PDF, break it into single-page PDFs, convert all of those single-page PDFs into full-size and thumbnail images, and then put all of these 'new' files (single-page PDFs, images) back into S3.

So I wrote a script that put 100 relatively large PDFs into S3 at roughly the same time (i.e. concurrently). I had the Lambda log what was in the /tmp folder (i.e. print the list of files in /tmp to stdout). I was surprised to see 'other' PDFs in the log (other than the one being worked on).

I was even more surprised that some Lambdas logged no 'other' PDFs, some a few 'others', some many 'others'.

Based on your article, I am wondering if having microVMs running on different physical servers might be the key to this? But I'm just guessing (haha).

Thanks a TON for replying to my first question - very kind to take trivial questions from strangers :-)

Oliver Jumpertz

Okay, yes, I wasn't clear enough on that one, but I also didn't cover this in the article, explicitly.

It is actually never guaranteed that two invocations of your Lambda functions use the same VM. That's up to the management layer.

Multiple invocations in your case will most likely have led to multiple VMs being spun up with your image. And some of those were reused later, so your temp files were still there.

There sadly is not a lot of explicit documentation on that behavior, as it is also relevant for security, so information is pretty rare. :)

Andre Kapp

You are actually able to write to the /tmp folder from your Lambda code. I used it for processing big files. Rather than allocating lots and lots of memory (and money), put the big file into the /tmp folder and read and process it line by line (see the sketch below). This gives you 512 MB of storage.
You are also able to share an EFS file system between multiple Lambdas, which is very neat when you need to spin up more than one Lambda to process your workload.
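For anyone who wants to try that pattern, here is a small Node.js sketch that streams a file from /tmp line by line instead of loading it into memory. The file name is just an example for this sketch.

```javascript
const fs = require("fs");
const readline = require("readline");

// Stream a large file from /tmp line by line instead of reading it all at once.
// "/tmp/big-input.csv" is only an example name.
async function processBigFile() {
  const input = fs.createReadStream("/tmp/big-input.csv");
  const lines = readline.createInterface({ input, crlfDelay: Infinity });

  let processed = 0;
  for await (const line of lines) {
    // Do the actual per-line work here; we just count lines in this sketch.
    if (line.length > 0) processed += 1;
  }
  return processed;
}
```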

Something interesting about hot/cold Lambdas and running your own threads: when the main thread enters the Lambda and spins up other threads, you need to make 100% sure those threads end before the main thread exits.
AWS will freeze the Lambda when the main thread exits. When a new request comes in and the Lambda wakes up again, those old threads start running again. This is so difficult to trace until you actually realize what happened!

Great article!

BP

This is very interesting. Is there a chance corrupted code gets copied into these VMs, and stays in memory, even after the VMs get shut down in Firecracker?

Oliver Jumpertz

It's pretty difficult, if not impossible, to make that happen. Firecracker creates its microVMs with KVM, which is a Linux kernel feature.

It was first merged into the kernel in 2007, and thus has had a lot of time to be penetration tested.

Additionally, the VMs created only run with a very limited device set (even the keyboard device is only implemented to shut down the VM).

So overall those VMs are pretty safe, and even if you manage to get corrupted code into the VM, the likelihood of it being able to break something is very low. But of course, the chances are never zero that someone still finds a way. 😊

BP

Oh, ok, thank you. I was wondering from the standpoint of something not working correctly, maybe contributing to a constant CORS error, but perhaps that wouldn't happen.