Problematic long cold start of Spring Boot apps in AWS Lambda

Spring Boot is not designed to run in the serverless world. That's a fact. However, for some strange reason, you might end up with a requirement to run a Spring Boot application in a serverless manner, say on AWS Lambda.

You will quickly find out that this is not only possible but actually quite nice. You basically create RESTful controllers, expose endpoints, and use Services and Repositories as you normally would. You can then grab this cool library, aws-serverless-java-container, and create a small wrapper/adapter Lambda entry point that acts as a bridge between Lambda invocations coming through API Gateway and your Spring endpoints. Something like this:
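Here is a minimal sketch of such an entry point, based on the library's documented usage. The class and method names come from aws-serverless-java-container; Application stands in for your own @SpringBootApplication main class:

import com.amazonaws.serverless.exceptions.ContainerInitializationException;
import com.amazonaws.serverless.proxy.model.AwsProxyRequest;
import com.amazonaws.serverless.proxy.model.AwsProxyResponse;
import com.amazonaws.serverless.proxy.spring.SpringBootLambdaContainerHandler;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestStreamHandler;

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamLambdaHandler implements RequestStreamHandler {

    // The Spring Boot context is started once, in the static initializer,
    // which runs during Lambda's init phase.
    private static SpringBootLambdaContainerHandler<AwsProxyRequest, AwsProxyResponse> handler;

    static {
        try {
            // "Application" is your @SpringBootApplication main class.
            handler = SpringBootLambdaContainerHandler.getAwsProxyHandler(Application.class);
        } catch (ContainerInitializationException e) {
            throw new RuntimeException("Could not initialize Spring Boot application", e);
        }
    }

    @Override
    public void handleRequest(InputStream input, OutputStream output, Context context) throws IOException {
        // Translate the API Gateway proxy event into a request for your Spring controllers.
        handler.proxyStream(input, output, context);
    }
}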

You can utilise all your knowledge of Spring Boot development and create a deployment package ready for AWS Lambda in no more than 15 minutes. You have the same testability power, and you can even run your application locally as a regular HTTP server. It's wonderful… until you find out how long it takes to cold-start your Lambda.

You can find out more about cold starts in this amazing post by Yan Cui.

In my case, with a very simple Spring Boot application and no database, a cold start takes between 25 and 35 seconds! That is for a Lambda configured with 1.5 GB of memory. Java and Spring Boot are very sensitive to this memory setting, as it also scales CPU power. Bumping memory to 3 GB can bring cold starts down to the 8-14 second range. However, at 3 GB your Lambda will be much more expensive, and chances are you don't need that much memory anyway.

Any way to improve on that?

Improving cold start of Spring Boot in AWS Lambda

The first thing you should do is apply all the tricks described on this page. I did that, and there were some improvements, yet nothing spectacular. Then, reading through the CloudWatch logs, I found out that the Spring context was spinning up twice when a cold Lambda was invoked:

Spring context spinning up twice

A more detailed look shows that the first context spin-up never finishes - Spring's final log statement is missing from the first spin-up section:

2019-07-11 06:13:56.923 INFO 1 --- [ main] your.package.Application : Started Application in 8.83 seconds (JVM running for 25.139)

Then there is a log entry from AWS stating that request processing has started, and the Spring context spins up again. This time the context spins up fully, and the request is finally processed. I was confused because I expected the Spring context to spin up only once (in the static initializer section).

As it turns out, when AWS runs your code in a cold Lambda, there is a phase called init. During this phase, for our Java app, AWS starts the JVM and allows our code to do some preparation, such as initializing static variables, so that all subsequent warm calls have those statics ready. The catch is that this init phase has a hard 10-second timeout. If our code doesn't finish within 10 seconds, AWS stops it. It doesn't, however, mark the Lambda as invalid - the Lambda will still handle requests, just without being properly initialized. And this is exactly what happens in our case. With the Lambda configured to use 1.5 GB of RAM, the Spring Boot context can take 15 seconds to spin up. You can see this clearly in CloudWatch - the first spin-up ends abruptly after exactly 10 seconds.

So what happens next, when your code starts executing the incoming request? The JVM finds that your static variables are not initialized and initializes them. This is why we see the second context spin-up. Assuming your Lambda has a run timeout of at least 30 seconds, that should be enough to start the full Spring Boot application and handle the request. Any subsequent warm call finds the static variables initialized and the context ready, so it is handled extremely quickly, in a matter of milliseconds.

What I did was move the static initialization into a lazy init with a null check, like this:
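Roughly, the handler sketched earlier changes to something like this (same assumptions as before; the plain null check is enough here because a single Lambda container handles only one request at a time):

public class StreamLambdaHandler implements RequestStreamHandler {

    private static SpringBootLambdaContainerHandler<AwsProxyRequest, AwsProxyResponse> handler;

    @Override
    public void handleRequest(InputStream input, OutputStream output, Context context) throws IOException {
        // Lazy init: skip the 10-second init phase entirely and start the
        // Spring context during the first (cold) invocation instead.
        if (handler == null) {
            try {
                handler = SpringBootLambdaContainerHandler.getAwsProxyHandler(Application.class);
            } catch (ContainerInitializationException e) {
                throw new RuntimeException("Could not initialize Spring Boot application", e);
            }
        }
        handler.proxyStream(input, output, context);
    }
}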

With that change, the CloudWatch logs show the Spring context spinning up only once. The init phase is much, much faster - previously it burned through its entire 10-second timeout window without managing to finish; now it does no initialization at all. We end up in almost the same scenario as before (the context is initialized right before the first request is handled), but without wasting 10 seconds in init. The result? A cold start of a 1.5 GB Lambda takes about 14-15 seconds - 10 seconds less than before.


Does it improve anything for 3 GB Lambdas? It can, because it eliminates cases where spinning up the context takes more than 10 seconds, which can happen randomly, depending on AWS load. So there is nothing to lose and a lot to gain. I've found cold starts for 3 GB consistently in the 5-6 second range.

Summary

Thanks for reading. This quick fix may well make your Spring Boot on AWS Lambda use case viable. Sure, the cold start is still too long for user-facing requests, but it can improve your batch or server-to-server processing times considerably.
