Introduction
A fairly common approach to serverless code is to write it in Python, Node.js, or Go, given their reputation for very fast cold starts.
But what if we are faced with pre-existing Java apps targeting serverless environments such as AWS Lambda? Perhaps the majority of our code base is Java, and we have developed a rich ecosystem of tools and libraries that we would like to reuse. Rewriting an entire fleet of such applications in a different language is expensive, and it means giving up features such as static type safety and compile-time optimizations.
A while back, I was faced with this exact scenario: 9 AWS Lambda apps written in Java whose cold starts were so slow that some of them would occasionally time out.
The Lambdas in question were placed behind API Gateway and used for admin tasks by calling the corresponding REST APIs. This functionality was not heavily used, so running into cold starts was inevitable; however, because this was not a critical service, it was a perfect opportunity for experimentation: to figure out whether these Lambdas could be salvaged.
It wasn’t long before I ran into several blog posts about developers successfully using GraalVM and frameworks such as Quarkus to address this very problem. So I decided to try it out for myself.
But what are these tools anyways?
GraalVM
In short, GraalVM is a Java Virtual Machine distribution that comes with a toolset capable of compiling Java applications ahead of time into standalone native executables, known as Native Images.
Normally, Java utilizes a “Just In Time” (JIT) compiler which, as the name suggests, performs optimizations and compilation during the execution of our code. Long-running applications benefit from this: the JVM constantly monitors the program’s execution and performs fine-tuning that over time translates into better performance.
This is great if an application is instantiated once and expected to run for several hours or more, but not so great if we are dealing with Kubernetes, AWS Lambdas, and batch jobs that need to boot Java apps quickly, perform time-sensitive operations, and scale with demand - think turbo lag, for the car enthusiasts out there.
And this is where GraalVM’s Native Image capability steps in to help. Instead of using a JIT compiler, it takes a very different approach: compiling our code ahead of time (AOT). It pre-bakes our pie using static code analysis and can even pre-initialize certain classes at build time so that they are ready to fire the moment our application code executes.
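As a tiny illustration of what build-time initialization means - a sketch with names of my own invention, not taken from any real project:

```java
// A static initializer normally runs the first time the class is used.
// If this class were registered with GraalVM's --initialize-at-build-time
// option, the lookup table below would instead be computed during the
// native-image build and baked straight into the executable's image heap,
// so no work is left to do at startup.
class Squares {
    static final int[] TABLE = new int[256];

    static {
        for (int i = 0; i < TABLE.length; i++) {
            TABLE[i] = i * i;
        }
    }

    static int square(int n) {
        return TABLE[n];
    }
}
```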
The result? Very fast cold starts, which make Native Images very capable in serverless domains where apps are short-lived and have to boot up quickly.
One thing to note: even though GraalVM is capable of AOT compilation, it can also serve as a drop-in replacement for an existing JVM, offering better performance thanks to GraalVM’s new JIT compiler written in Java.
But wait, there is more! Because a Native Image includes only the code on known execution paths, we trim the fat: Java classes that have not been explicitly registered to be kept are simply not available. And because we ship only the bits that are expected to execute, we also shrink our application’s attack surface.
Take, for example, the infamous Log4j vulnerability, which used remote code execution to compromise the host. With Native Images, gadget chaining is very unlikely to succeed because the pieces of library code required to carry out the attack aren’t even reachable.
Quarkus
Quarkus, on the other hand, is a Java framework optimized for serverless applications. Its toolbox makes building Native Images easier, including an extension specifically for configuring and building AWS Lambdas as native executables.
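With the Quarkus Amazon Lambda extension in place, switching the build to a native executable is typically a matter of configuration rather than code. A sketch of the relevant application.properties entries (exact keys may vary by Quarkus version):

```properties
# Produce a GraalVM native executable instead of a JVM jar
quarkus.package.type=native
# Build inside a container image so a local GraalVM install isn't required
quarkus.native.container-build=true
```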
C1 Compiler
During my Lambda optimization journey, I also ran into alternative optimization techniques. One of them was restricting the JVM to the C1 compiler for the Lambda’s entire execution, which promises faster cold starts. Normally, Java applications running inside a JVM use tiered compilation: the faster but less optimizing C1 compiler, followed by C2, which is slower but produces better code for applications that execute for a long time. Given that Lambdas are short-lived, the benefits of C2 compilation are negligible.
A guide walking through the process of configuring C1 compilation for AWS Lambdas is available here.
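In practice, the technique boils down to setting a single environment variable on the function so the JVM stops at the C1 tier. A sketch as a template fragment (surrounding resource definitions omitted):

```yaml
# Restrict the Lambda's JVM to the fast C1 compiler (compilation tier 1 only)
Environment:
  Variables:
    JAVA_TOOL_OPTIONS: "-XX:+TieredCompilation -XX:TieredStopAtLevel=1"
```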
Of course, I wanted to know how much of an improvement this technique could offer compared to my GraalVM master plan, so I also included it in my findings below.
Further details about JVM’s tiered compilation as well as GraalVM’s brand new JIT compiler can be found in this Baeldung article.
“But wait, what about AWS SnapStart?”
Ironically enough, a few months after I shipped my changes to production, AWS came out with its SnapStart capability, which takes a snapshot of an initialized Lambda and, instead of re-initializing the function all over again, uses the snapshot image as a restore point, promising faster cold starts. I had to give it a try to find out whether the GraalVM effort had been wasted, so I included it in my findings as well.
It’s worth noting that to get the most out of SnapStart, a code refactor would have been required to utilize the beforeCheckpoint and afterRestore hooks (more details here). Since I wanted to avoid any major code changes, I used this feature “as is”, without implementing these methods or rearranging any code.
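For the curious, those hooks come from the CRaC-style org.crac API. The sketch below uses a simplified stand-in interface so it is self-contained - the real org.crac.Resource methods take a Context argument, and resources are registered via Core.getGlobalContext().register(...) - but it shows the shape of such a refactor:

```java
// Simplified stand-in for org.crac.Resource; the real interface methods
// receive a Context argument and the resource must be registered with the
// global CRaC context.
interface Resource {
    void beforeCheckpoint() throws Exception;
    void afterRestore() throws Exception;
}

// A handler that tears down state before the SnapStart snapshot is taken
// and rebuilds it when the Lambda is restored from that snapshot.
class SnapAwareHandler implements Resource {
    private boolean connectionOpen = true; // stands in for e.g. a DB connection

    @Override
    public void beforeCheckpoint() {
        connectionOpen = false; // close sockets, drop credentials, flush caches
    }

    @Override
    public void afterRestore() {
        connectionOpen = true;  // re-establish connections, refresh secrets
    }

    boolean isConnectionOpen() {
        return connectionOpen;
    }
}
```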
The Master Plan
Now back to GraalVM! To my surprise, after incorporating this solution, there were absolutely no Java code changes required aside from adding and adjusting build configuration files and some required metadata.
Sounds too good to be true?
Maybe a little. Since we are using AOT compilation, Java language features such as reflection, dynamic proxies, and service loaders - which many libraries rely on - pose a challenge. This is why the GraalVM compiler requires extra configuration metadata that explicitly registers certain classes and services so that they can be included in the final artifact. GraalVM provides a tracing agent that can run alongside your application on a regular JVM and automatically record the required configuration, which makes this process easier.
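Here is a concrete, self-contained sketch of why reflection trips up closed-world AOT. It runs fine on a regular JVM, but in a native image the reflective lookup only keeps working if the target class and method are registered in the configuration metadata (the class and method names here are my own):

```java
import java.lang.reflect.Method;

class ReflectiveLookup {
    // Many libraries resolve classes and methods by name at run time, like
    // this. Static analysis cannot see the target, so a native image needs a
    // reflect-config.json entry for it, or the lookup fails at run time.
    static String callValueOf(int n) {
        try {
            Class<?> cls = Class.forName("java.lang.String");
            Method m = cls.getMethod("valueOf", int.class);
            return (String) m.invoke(null, n);
        } catch (ReflectiveOperationException e) {
            // In an unconfigured native image, this is where the
            // ClassNotFoundException / NoSuchMethodException would surface.
            throw new IllegalStateException(e);
        }
    }
}
```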
Quarkus provides several extensions that make well-known libraries “native-image friendly”, but since I was working with an existing code base and my goal was to avoid any major refactor (or any code changes, for that matter), I settled for writing the configuration files that the existing libraries required in order to produce Native Images successfully.
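For reference, these registration files are plain JSON. A reflect-config.json entry for a class that a library instantiates reflectively might look like this (the class name is purely illustrative):

```json
[
  {
    "name": "com.example.admin.AuditEventSerializer",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true,
    "allDeclaredFields": true
  }
]
```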
Be aware that compiling Native Images is resource intensive and takes significantly longer than bytecode compilation targeting a standard JVM runtime. Chances are you will find yourself allocating more RAM to a build node to avoid out-of-memory failures - not a deal breaker, but definitely something to keep in mind.
Now that I had my Native Image Lambdas compiled and packaged, it was time to deploy them to a test environment. Normally, Java Lambdas execute on AWS’s managed Java runtimes; however, since a Native Image is a self-contained binary artifact with our app code compiled in, we must select one of the “Custom” Amazon Linux runtimes that AWS offers.
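In a SAM/CloudFormation template, this amounts to swapping the runtime. A sketch of the relevant fragment - the deployment zip’s entry point is a bootstrap executable, so the handler value is not actually used:

```yaml
# A native-image Lambda runs on a custom runtime; the deployment zip must
# contain an executable named `bootstrap` at its root.
Runtime: provided.al2
Handler: not.used.in.provided.runtime
```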
Testing Methodology
I used a Postman API Collection to send requests to all 9 Lambdas and measured cold start response times for each technique mentioned above. To ensure I always hit a cold start, I updated the target Lambda’s configuration before each run, which guarantees that the next invocation will not reuse an already-warm instance. All Lambdas were configured with 1 GB of RAM. Because the process was time consuming, I measured only a single invocation per configuration; however, the observed response times painted a pretty clear picture.
Results
So did it work? Absolutely! Here are the results:
And the clear winner is GraalVM Native Images: on average, a 3x speedup compared with the unchanged Java Lambdas - no more timeouts and much better response times, which is exactly what I wanted to achieve.
SnapStart did not perform as well as I had hoped without any code changes. Adding the C1 compiler on top of SnapStart lowered cold start times further, but still did not beat GraalVM’s Native Image. That’s not to say it isn’t a viable option as a fast, easy-to-implement improvement; however, if we want to optimize our Lambda as much as possible and have the time and resources to adjust our configuration and build process, GraalVM is definitely superior when it comes to performance and security.
Memory Footprint
As GraalVM claims, Native Images require fewer resources to run effectively than their regular JVM counterparts. I wanted to see how cold start and warm start performance would hold up if I reduced the amount of RAM these Lambdas had to work with. This time I selected a single Lambda app to perform the test. Here are the results:
And they delivered on their promise! Regular JVM Lambdas ran out of memory at 256 MB and below, whereas the Native Image seemed unfazed and kept executing. Had 128 MB not been the lowest available memory setting, I wonder how much lower we could have gone. Native Images are not only faster on cold starts but also offer consistent performance when working with limited resources, which translates into lower operating costs.
Conclusion
Java’s ecosystem is rich and vast, with new technologies and enhancements emerging every day that keep Java in the game when it comes to serverless applications. One such technology is GraalVM. What started as a research project is now steadily being adopted and is emerging as a viable alternative to a standard JVM such as HotSpot. In this blog post, I have barely scratched the surface of what GraalVM has to offer, and I would encourage readers to explore it further. There are several success stories from companies such as Adyen (article link) or Facebook (article link) who were able to utilize GraalVM to save time and money.
So the next time you are about to discount Java as an option, give GraalVM a try. And now that Spring Boot 3 supports GraalVM Native Images out of the box, it’s easier than ever to employ them for your serverless workloads to capitalize on performance, low resource consumption and added security that GraalVM has to offer.