Go serverless in Kubernetes with Knative

Paulo Kieffer
Published in ilegra
4 min read · Oct 24, 2019

What the fuck is Knative?

Knative is an open-source project set up by engineers from Google, Pivotal, and other industry leaders.
It’s a collection of components that extends Kubernetes with a focus on managing cloud services such as creating pods, auto-scaling, blue/green deployments, and networking, making our lives simpler when coding serverless functions.

Advantages over other frameworks

In my research, Knative was the most complete framework when considering topics such as installation, implementation, and configuration, as well as
its main features: auto-scaling (including scale to zero) and blue/green deployments.

Other serverless frameworks

Installing Knative

You can find examples of how to install Knative on minikube and on different cloud providers such as Google, Azure, and IBM at this link:

https://knative.dev/docs/install/

Knative components

Build: Responsible for creating and managing containers. We provide only the code and Knative Build takes care of the rest.

In other words, it allows you to define a process that runs to completion and can report status. For example, you can fetch, build, and package your code with a Knative Build, which communicates whether the process succeeded.
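The original post didn’t include a Build manifest, but a sketch helps make the fetch-build-package flow concrete. The following is a hypothetical Build resource using the v1alpha1 API from that era; the repository URL and destination image are made-up placeholders:

```yaml
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: example-build
spec:
  # Fetch: pull the source from a (hypothetical) git repository.
  source:
    git:
      url: https://github.com/example/app.git
      revision: master
  # Build and package: each step is a container that runs to completion.
  steps:
    - name: build-and-push
      image: gcr.io/kaniko-project/executor
      args: ["--destination=docker.io/example/app"]
```

Each step runs in order, and the Build’s status reports whether the whole process succeeded.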

Serving: Responsible for deploying and serving HTTP applications, much like AWS Lambda.

Eventing: Responsible for building event-driven applications.

Note: when this post was written, only the Google Cloud Scheduler, Google Cloud Storage, and Kubernetes event sources were under active development; the others were just proofs of concept.

Let’s code

In this quick example, we are going to write a simple function in Go in order to say hello to the world 😁. (For this example I’ve used minikube as the cluster.)

1. Creating a helloworld.go file:

2. Creating a Dockerfile
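The Dockerfile was also shown as an image in the original; a minimal multi-stage sketch along these lines would work (the Go base image version is an assumption appropriate for late 2019):

```dockerfile
# Build stage: compile a static Go binary.
FROM golang:1.13 AS builder
WORKDIR /app
COPY helloworld.go .
RUN CGO_ENABLED=0 go build -o /helloworld helloworld.go

# Run stage: small image containing only the binary.
FROM alpine:3.10
COPY --from=builder /helloworld /helloworld
CMD ["/helloworld"]
```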

3. Creating a service.yaml
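The service.yaml was likewise an image in the original post; a sketch of a Knative Service manifest for this app, assuming the serving.knative.dev/v1 API available around the time of writing, could be:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    spec:
      containers:
        # The image built and pushed in the next step.
        - image: docker.io/paulokrj/helloworld-go
```

Knative Serving turns this single resource into a deployment, a revision, and a route for the service.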

4. Building and pushing the image

docker build -t paulokrj/helloworld-go .
docker push paulokrj/helloworld-go

5. Deploying the app

kubectl apply --filename service.yaml

6. Checking the pods

Before executing the request, we can see there aren’t any pods.

kubectl get pods

7. Executing

Now you can make a simple request to helloworld-go.
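On minikube, and assuming Istio is used as Knative’s networking layer (the setup of that era), the request could be made roughly like this. Knative routes by Host header, and `example.com` is its default domain; the exact service and port names may differ in your installation:

```shell
# Ingress address on minikube (no external load balancer).
IP=$(minikube ip)
PORT=$(kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')

# Knative routes by Host header: <service>.<namespace>.<domain>.
curl -H "Host: helloworld-go.default.example.com" "http://$IP:$PORT"
```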

8. Checking active pods

Now if we execute the command ‘kubectl get pods’, it will show the helloworld-go pod, which has just been created.

9. The pod is gone

After 90 seconds without requests, Knative will scale our pod down to zero.

Autoscaler

With the Knative autoscaler it’s possible to control the number of pods per service based on two kinds of metrics: concurrent requests and CPU usage.
By default, Knative uses concurrent requests as the metric and brings a new pod up after 100 concurrent requests.
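The per-pod concurrency target can itself be tuned with an annotation; this is a sketch, and the value of 50 is purely illustrative:

```yaml
spec:
  template:
    metadata:
      annotations:
        # Target number of in-flight requests per pod before scaling out.
        autoscaling.knative.dev/target: "50"
```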

Speaking of the default configuration: if you don’t define any scaling rules, Knative will scale to zero after 90 seconds without requests, and up to “N” pods when many requests come in.

You can change these limits by adding annotations like in the following example:

spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "2"
        autoscaling.knative.dev/maxScale: "10"

Another option is to control scaling by CPU usage. We just need to change the autoscaling class and metric:

spec:
  template:
    metadata:
      annotations:
        # Standard Kubernetes CPU-based autoscaling.
        autoscaling.knative.dev/class: hpa.autoscaling.knative.dev
        autoscaling.knative.dev/metric: cpu

Removing the service

To remove the sample app from your cluster, delete the service record:

kubectl delete --filename service.yaml
