Agnostic serverless functions with Kubeless

Serverless can be multi-cloud, flexible, and integrated into your Kubernetes cluster!

Diego A. Rojas
Blog Técnico QuintoAndar
4 min read · Jun 1, 2020

Serverless architectures became popular a few years ago, when AWS released Lambda. Since then, every cloud provider has developed its own serverless offering, and arguing about which one is better, faster, more popular, or cheaper could easily turn into a thousand-day discussion. Although they all have different features, characteristics, and implementations, they share a common problem: they make you dependent on their platform.

We can see this problem replicated in almost every service these platforms offer. If one day your company decides to go multi-cloud, or even to migrate your entire infrastructure from one provider to another, because of costs, performance, availability, company policies, or any other reason, it won't be easy, and for many services you will need to rewrite the code entirely.

Kubernetes, on the other hand, has gained a lot of popularity in recent years and is now one of the most widely used platforms for managing containers in the cloud. Its high degree of customization and its integration with other tools (such as Terraform) make it very interesting for infrastructure engineers, but for me the most attractive thing is that Kubernetes gives you independence.

Kubernetes already manages container creation, replicas, and scaling very well, yet whenever we wanted serverless functions we had to rely on a cloud provider's service. But what exactly does that service do behind the scenes? Exactly: it creates a container to execute the code. Nothing new for Kubernetes.

At QuintoAndar we strongly believe in freedom and autonomy. As we grow, we like to stay independent and scalable without sacrificing our flexibility to choose the best tool at the moment we need it. So, why not start writing serverless functions that are ready to run on any cloud provider? Enter Kubeless.

Write once, run anywhere

With Kubeless, writing once and running anywhere is entirely possible. To show you how, I'm going to walk you through the way we replaced AWS Lambda functions that read and write objects in DynamoDB with Kubeless.

A very interesting tool we use in our development environment is Kind. It allows you to create a Kubernetes cluster inside a Docker container, ready to use. You will also need command-line tools such as kubectl and kubeless.
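
For reference, spinning up the local cluster is a single command (the cluster name is just an example):

    # Create a local Kubernetes cluster running inside a Docker container
    kind create cluster --name kubeless-demo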

You can find all the installation instructions in the official Kubeless docs. In case you get an error while downloading the zip file via curl, just download it manually and execute the rest of the command (ignoring the curl part). Then, you will just need to create your namespace:
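
Roughly, installing Kubeless and creating its namespace looks like this (the release version is only an example; check the Kubeless releases page for the latest one):

    # Kubeless release to install (version is an example, pick the latest)
    export RELEASE=v1.0.7

    # Create the namespace Kubeless runs in and install the controller
    kubectl create ns kubeless
    kubectl create -f https://github.com/kubeless/kubeless/releases/download/$RELEASE/kubeless-$RELEASE.yaml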

Now, let's create our LocalStack service to emulate AWS DynamoDB. In this case, I'm using docker-compose to create the localstack container on the same network Kind has created. Otherwise, outgoing traffic from our Kubernetes cluster won't be able to reach localstack.

Here is an example of the docker-compose service definition for localstack:
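
A sketch of what it might look like; the ports, region, and the external network name kind are assumptions to adjust to your setup:

    version: "3"
    services:
      localstack:
        image: localstack/localstack
        container_name: localstack
        environment:
          # Only emulate the DynamoDB service
          - SERVICES=dynamodb
          - DEFAULT_REGION=us-east-1
        ports:
          - "4566:4566"
        networks:
          - kind

    networks:
      # Attach localstack to the Docker network created by Kind so that
      # traffic leaving the cluster can reach it by container name
      kind:
        external: true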

In case you’ve created your kind cluster manually, double-check the container network name.

Now we have our local environment set up. It's time to start writing functions. Kubeless supports many languages through different runtimes. In this case, we're going to use Go.

Golang function structure

So, let's look at our project structure:
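
For this walkthrough it can be as small as two files (the folder name is just an assumption):

    dynamodb-example/
    ├── dynamodb_example.go   # the function handler
    └── Gopkg.toml            # dep manifest with the function dependencies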

Now, let's write our function code to save a new element, inside the dynamodb_example.go file:
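
Here is a sketch of what that function can look like with the Kubeless Go runtime and the AWS SDK for Go; the payload fields, table name, LocalStack endpoint, and dummy credentials are assumptions you should adapt to your environment:

    package kubeless

    import (
        "encoding/json"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/credentials"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/dynamodb"
        "github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute"
        "github.com/kubeless/kubeless/pkg/functions"
    )

    // Client is the payload we expect in the event data (example fields).
    type Client struct {
        ID   string `json:"id"`
        Name string `json:"name"`
    }

    // Handler decodes the incoming event and stores it as an item
    // in a DynamoDB table served by LocalStack.
    func Handler(event functions.Event, context functions.Context) (string, error) {
        var client Client
        if err := json.Unmarshal([]byte(event.Data), &client); err != nil {
            return "", err
        }

        // Point the SDK at LocalStack instead of the real AWS. The endpoint,
        // region and static credentials are assumptions for a local setup.
        sess, err := session.NewSession(&aws.Config{
            Region:      aws.String("us-east-1"),
            Endpoint:    aws.String("http://localstack:4566"),
            Credentials: credentials.NewStaticCredentials("test", "test", ""),
        })
        if err != nil {
            return "", err
        }

        item, err := dynamodbattribute.MarshalMap(client)
        if err != nil {
            return "", err
        }

        // Save the item into the (already created) example "clients" table
        _, err = dynamodb.New(sess).PutItem(&dynamodb.PutItemInput{
            TableName: aws.String("clients"),
            Item:      item,
        })
        if err != nil {
            return "", err
        }

        return "saved client " + client.ID, nil
    }

Note that the clients table must already exist in LocalStack, for example created with the aws CLI pointed at the LocalStack endpoint.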

Note: at the time of writing, the Kubeless Go runtime only supports dep for dependency management. A pull request adding support for Go modules is currently under review. Thanks, Thiago dos Santos Pinto.

And our Gopkg.toml file:
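
Something along these lines, pinning the AWS SDK for dep (the version is only indicative):

    [[constraint]]
      name = "github.com/aws/aws-sdk-go"
      version = "1.29.0"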

Once our function is written, let's deploy it to our local cluster. Go to the root folder and execute:
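
Something along these lines; the function name and the go1.13 runtime are examples, and kubeless get-server-config lists the runtimes actually available in your cluster:

    kubeless function deploy dynamodb-example \
      --runtime go1.13 \
      --from-file dynamodb_example.go \
      --handler dynamodb_example.Handler \
      --dependencies Gopkg.toml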

Once deployed, you can call your function:
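
For example, with a JSON payload matching the fields the function expects (the payload values are just an example):

    kubeless function call dynamodb-example \
      --data '{"id": "1", "name": "John Doe"}'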

Now we are able to store elements in our DynamoDB table. As a challenge, now write the function that retrieves the saved elements ;)

At the development stage, the code changes a lot. Since our code is deployed on Kubernetes, you need to delete the previous function in order to deploy the code changes:
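
In practice that is a delete followed by a fresh deploy; Kubeless also provides kubeless function update, which achieves the same in one step:

    # Remove the previous version of the function...
    kubeless function delete dynamodb-example

    # ...and deploy it again with the new code
    kubeless function deploy dynamodb-example \
      --runtime go1.13 \
      --from-file dynamodb_example.go \
      --handler dynamodb_example.Handler \
      --dependencies Gopkg.toml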

Done! Our functions are successfully deployed to our Kubernetes cluster using Kubeless! Now, depending on your needs, you can configure triggers to execute your function when an HTTP endpoint is called, on a cron schedule, or when a payload is queued to Kafka. Check out the Kubeless triggers documentation to learn how to configure them in your cluster, and you can also check the Serverless Framework docs to improve your serverless architecture.
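
As an illustration, exposing the function over HTTP is a single trigger definition (the hostname below is an assumption):

    kubeless trigger http create dynamodb-example \
      --function-name dynamodb-example \
      --hostname dynamodb-example.example.com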

Loved it? You can find all the required documentation, setup examples, and much more here!

Some things to keep in mind

One of the most common mistakes when developing a serverless-based project is attempting to simulate an API. Serverless functions are meant to be triggered to do very specific things, for example, the Contact Us section of a web page that stores the client's data. If you attempt to structure it in a RESTful way such as:

  • GET /client
  • GET /client/{id}
  • POST /client

you will need to configure an API gateway to call the right function depending on the request, which increases the project's complexity and maintenance burden. A proper API implementation is a better fit for this case.

Want to work on cool projects like using Kubeless in highly available, highly scalable production services? Join us!
