Battle of the Serverless — Part 3: And Go Wins (Kind Of)
I’m sitting in a hotel room with my mind abuzz, wondering how I can squeeze out two more 10-minute blocks of productivity before I crash. Time to close the chapter on this time-consuming, but enlightening, exercise of benchmarking the languages supported by AWS Lambda. I go through a similar process of vetting technology stacks whenever a new project hits, and it’s time to make a decision based on this data.

Unfortunately, the technology stack decision making process all too often looks like this in real life:
Frank: “We have to build a secure, fault-tolerant, event-driven, resilient, scalable (yay!), cloud-native microservice (another yay!). Let’s think about which technology we’re going to pick.”
Jess: “According to “The Internet”, Go is hot right now.”
Frank: “Ok, I’m writing this down in our proposal. We’ll publish it in the charter on Monday.”
Jess: “Well, hang on. At the moment we only have JavaScript and Java developers available.”
Frank: “Angular and Spring Boot it is! But can we throw in a queue so we can say we’re decoupled? Our bosses and guild will love that.”
This is exactly the scenario I wanted to avoid, so I headed down this path of real testing and real experimentation, landing on real data to drive an educated decision for a future-proof solution and approach.
What was the experiment?
My use case is providing a near-real-time web API suite backed by microservices, intended to scale to ~400 requests per second in bursts but also expected to sit idle at times. Resiliency and being event-driven are givens, with scalability and loose coupling right behind. This experiment continues the work done on our pretend suite of microservices exposed via API Gateway, an API codenamed Slipspace at a mock company called STG. (Slipspace drives are how ships in the Halo universe travel so quickly between sectors of the galaxy, through something called Slipstream Space, so I thought it was a fitting name for APIs requiring awesome warp speed.)
Part 1 is here: https://medium.com/@shouldroforion/battle-of-the-serverless-part-1-rust-vs-go-vs-kotlin-vs-f-vs-c-32a66613f919
Part 1.5 is here: https://medium.com/@shouldroforion/battle-of-the-serverless-part-1-5-608a73c5f9fa
Part 2 (updated) is here: https://medium.com/@shouldroforion/battle-of-the-serverless-part-2-aws-lambda-cold-start-times-1d770ef3a7dc
For this final part of the experiment, we threw in a DynamoDB backend and executed a Scan operation against 1,000 items, with a projection expression of four attributes, on every request sent to the targeted Lambda function.
Here are the final numbers
For the final round, the timings come from Lambda functions written in Rust, Go, Kotlin, F#, C#, Python, and TypeScript/Node.js. Roughly 200K requests hit each function over the course of about a day.
Quick observations show the following:
- Go has the fastest average execution duration; Python has the slowest
- Go, Rust, and Python are extremely consistent, showing very steady execution times over the life of the test
- F# and C# tended to have the highest cold start times; TypeScript hit the lowest in this set of tests
- Go, C#, F#, TypeScript, and Kotlin hit the fastest execution cycles during their lifetimes
- Kotlin consumed the most memory, almost double that of the next-ranked language; Go and Rust used the least
- Most of the compiled languages are still faster than the interpreted languages when their functions are warm
- C#, F#, and, to a lesser extent, Kotlin were pretty spiky over time in their execution cycles

Based on this data, Go wins
Based on this data, plus other data gathered from around the Internet, I worked through my own technology-choosing framework and scoring mechanism. This is where we landed on the scoring totals and the order for all the languages tested in this experiment.

Honestly, my scoring framework is very broad, and for my use case not all of these criteria are important. The ones highlighted in grey are the important ones, in this order:
- Use case (microservice/API)
- Cold execution (avg)
- Stable execution (avg)
- Simplicity
- Supported (in AWS)
- Warm execution (avg)
- Loved & wanted
The scoring is simple: based on a language’s placement in each ranking category grid, give it a value of 1–7, with 7 being the highest (higher is better). I work through each category, summing up values as I go. After all is said and done, we end up with this overall scoring order, with Go coming out clearly on top for the categories this exercise cares about.

As much as possible, I’ve tried to keep bias out of this, though the “Loved & wanted” category is driven by survey results that inevitably carry some community bias. Go is what I’ll use for this use case going forward.
Want to experiment on your own?
This is the repository I built and used for testing. Each language has its own subfolder, codebase, and serverless.yml file: https://github.com/shouldroforion/aws-lambda-benchmarks
For firing off requests, I used the api-cannon.sh bash script at the root of the repository, along with Charles Proxy to monitor cold start times.
FIN/ACK
This ends the AWS Lambda performance tests. I hope they help someone make educated decisions about serverless technologies. Even though “no server is easier to manage than no server” holds true, “no server” can still be very complicated. Serverless architecture requires a different mindset and brings different challenges. Use data in your decisions; don’t just Google what’s hot and hyped. Cheers!
