How to Build a Lambda Infrastructure in Go

Konstantin Makarov · Published in Scum-Gazeta · Sep 9, 2023

Introduction

I really enjoy it when engineers build their own versions of popular solutions. One article, in which the authors created their own Kubernetes, is particularly worth reading!

I have already written a driver for PostgreSQL, and now I have decided to take on the holy grail: AWS.

When I took the exam for the AWS Developer certificate, I was impressed that they have every solution you could ever dream of. How those solutions work under the hood, though, I was luckily not asked. It's time to figure that out.

Let’s dream a little.

So I decided to travel back in time 10 years and start my own cloud empire, beginning with the game-changer: Lambda.

Of course, if you then come back to the present and compare how far AWS has gone in that time, you'll be surprised. But 10 years ago, my solution would have changed the rules of the game in development.

Golang had been around for a few years, and Docker was just about to appear. I would have entered the market with my product immediately.

Let’s compare what AWS has now and what I had 10 years ago:

| Feature                       | AWS Lambda | Lambda-Go    |
|-------------------------------|------------|--------------|
| Serverless computing | yes | yes |
| Events | all | HTTP |
| Various programming languages | yes | Golang |
| Flexible scalability | yes | One instance |
| Development period | 10 years | 1 day |

Bitter truth

Okay, enough joking around; let's be serious now. Let's see how my version works.

Of course, I chose the simplest architectural solution, and the project is still in its early stages. I want to bring it to production and implement the missing functionality, but I will need your architectural help! I may well have made mistakes in my implementation, so I would be happy to get any feedback.

It all started when I became very interested in how AWS services work, Lambda in particular. I Googled, but couldn't find anything specific, only assumptions about how it probably works:

https://www.bschaatsbergen.com/behind-the-scenes-lambda/

https://matthewleak.medium.com/aws-lambda-under-the-hood-how-lambda-works-43efba14d899

AWS is a big player, and it appears to isolate its workloads using virtualization and Firecracker. All I had to go on at the time was the term "cold start," which led me to believe containerization was involved, and that was fine with me.

The plan in my head was as follows:

  1. The user prepares a simple Go program and uploads it to the server.
  2. The server builds an image based on the program and creates a container from it.
  3. When the user wants to use their function, they make an HTTP request to it.
  4. The server starts the prepared container, proxies the request to it, gets the response, and then stops the container.

Only 4 steps, but of course each one has some pitfalls.

Function preparation

I didn't reinvent the wheel; I started with the existing AWS library.

The library supports different signature options, which I think is due to the different types of events that Lambda supports. For example, for queues, you probably don’t need to do anything other than acknowledge the event.

But I only left myself one signature which is very well suited for HTTP:

```go
type Handler func(ctx context.Context, payload []byte) ([]byte, error)
```

As a result, we get the code that is already familiar to us.

Preparing artifacts for Docker

At first, I thought that using the Docker API would be overkill, since Docker itself uses containerd under the hood. I wanted to use containerd directly, but decided not to waste time and went with the Docker API, which I am already familiar with.

https://docs.docker.com/engine/api/sdk/examples/

In any case, following clean architecture, this sits behind interfaces, and we can change the implementation at any time.

When a user sends us their Lambda function in a tar-gz archive, we unpack it into a folder with a prepared Dockerfile and build the image.

Once the image is built, we create a container from it. The container is not running yet, but we are almost ready to start it. When creating the container, we expose a random port inside the container and remember it.

Proxy user requests

When a user requests a Lambda function, we start the pre-prepared container, forward the user's request to it, get the response, and return it to the user. It's a simple process, but I added a retry pattern to wait until the container is ready to receive requests.

This is a good place to collect metrics that I can use to scale the function in future releases.

You can familiarize yourself with the final product code in my repository:
https://github.com/ihippik/lambda-go

The main thing to remember is that it currently has many shortcomings, which I will try to eliminate in follow-ups, but it is great for appreciating how complex and interesting the world around us is ❤️

My next steps:

Here are some ideas I have for code changes, and maybe for a follow-up article.
