Go-ing Serverless: How to Write Efficient Go Programs

Riyadh Al Nur
AirAsia MOVE Tech Blog
4 min read · Jan 16, 2022


‘Go is a fast and performant language’ — we have all heard this line before. Even so, when working with highly scalable systems, every millisecond you can shave off is a win for everyone. This is especially evident in the context of serverless functions.

One thing I have noticed so far is that engineers tend to re-use whatever they run in a traditional server environment without optimising their programs for the serverless one. In this post, we will look at a framework for writing Go programs in the context of serverless functions.

Cold starts

A cold start happens when a request comes in and there is no idle instance/container available to serve it, so one has to be spun up first. This adds latency to the request, since the container takes time to go from starting up to being ready to serve.

Billing in serverless environments covers both the start-up time and the execution time of a given function. Reducing start-up time therefore means lower costs, as the bulk of the billing will then be for the actual function execution time.

During a cold start, your start-up time is higher: it includes both the time it takes to wake the container up and the function set-up time. How often you hit cold starts depends on your application's usage patterns, but in practice you will run into them in any serverless environment.

[Figure: How billing works for serverless functions]

Our application

The Go application for this post does one thing: when triggered by an HTTP request, it reads a file from a GCP Cloud Storage bucket and returns its contents in the response.

When writing a normal application to do this, we would usually initialise a client in the init function and use the resulting global variable throughout the code to access the resource.

[Code screenshot: initialising a connection using init()]
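The original screenshot is not reproduced here, but the init() approach might look like the sketch below. A hypothetical storageClient stands in for the real GCP client (e.g. *storage.Client from cloud.google.com/go/storage) so the example runs without credentials; the set-up cost is simulated with a sleep.

```go
package main

import (
	"fmt"
	"time"
)

// storageClient is a stand-in for an expensive-to-create client such as
// *storage.Client from cloud.google.com/go/storage (hypothetical here,
// so the sketch stays runnable without GCP credentials).
type storageClient struct{ ready bool }

func newStorageClient() *storageClient {
	time.Sleep(50 * time.Millisecond) // simulate connection set-up cost
	return &storageClient{ready: true}
}

// client is initialised eagerly in init(), i.e. during every cold start,
// before a single request has been served.
var client *storageClient

func init() {
	client = newStorageClient()
}

// handleRequest would read the object from the bucket via the global
// client and return its contents; here it returns a placeholder string.
func handleRequest(object string) string {
	if !client.ready {
		return "client not ready"
	}
	return fmt.Sprintf("contents of %q", object)
}

func main() {
	fmt.Println(handleRequest("greetings.txt"))
}
```

Note that init() runs during package initialisation, so the connection cost is paid on every cold start, whether or not the first request ever touches the bucket.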

While this is very much a piece of code you can re-use in a serverless function, the glaring issue is that you spend time at the start of every cold start initialising a connection to a resource whether or not the request actually needs it. For example, if reading the file in the bucket depends on a certain parameter being present in the request, you are still making the connection for every incoming request, valid or not.

In a serverless environment, we can rewrite the code above so that the client is initialised lazily, with a guarantee that initialisation runs only once, even across multiple goroutines. We make use of Go's built-in sync package to achieve this. The .Do block can be placed wherever access to the client is first required.

[Code screenshot: lazily initialising a connection using sync.Once]
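Again, the screenshot is not reproduced here; a sketch of the sync.Once version follows, reusing the same hypothetical storageClient stand-in. An initCount counter (not part of the original) is included only to demonstrate that the client is built exactly once, even under concurrent requests.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// storageClient again stands in for a real client such as
// *storage.Client from cloud.google.com/go/storage.
type storageClient struct{ ready bool }

var initCount int64 // counts how many times the client is actually built

func newStorageClient() *storageClient {
	initCount++ // safe: only ever called inside clientOnce.Do
	time.Sleep(50 * time.Millisecond) // simulate connection set-up cost
	return &storageClient{ready: true}
}

var (
	client     *storageClient
	clientOnce sync.Once
)

// getClient lazily initialises the client on first use. sync.Once
// guarantees the function passed to Do runs exactly once, even when
// getClient is called from many goroutines concurrently.
func getClient() *storageClient {
	clientOnce.Do(func() {
		client = newStorageClient()
	})
	return client
}

func handleRequest(object string) string {
	c := getClient() // set-up cost is paid on the first request only
	if !c.ready {
		return "client not ready"
	}
	return fmt.Sprintf("contents of %q", object)
}

func main() {
	// Fire several concurrent "requests"; the client is built once.
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			handleRequest("greetings.txt")
		}()
	}
	wg.Wait()
	fmt.Println("client initialised", initCount, "time(s)")
}
```

A request that fails validation before ever calling getClient() now skips the connection cost entirely, which is exactly the saving the rest of this post measures.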

With this change, the only thing we update is when we connect to the resource: only when we actually need it, thereby reducing the start-up time of our function.

Results

We carried out a simple load test against both implementations of the same program. Our parameters are simple — the average response times should be less than 100ms and the 95th percentile response times should be less than 250ms. The load test was set to run for 1.5 minutes in ramp mode with a maximum of 20 simultaneous users. And the results speak for themselves.

[Chart: response times when using init()]
[Chart: response times when using sync.Once]

Our initial implementation was almost 2.4x slower on average for the same number of users. Looking at individual requests, the response times are closer; it is only when we put the function through its paces that the big difference in performance shows up.

[Chart: execution timings when using init()]
[Chart: execution timings when using sync.Once]

Parting words

It takes a bit of getting used to thinking in a serverless way; the paradigm shift is considerable. Once achieved, even the smallest of changes can have a big impact not just on the user experience but on the bottom-line as well.
