Getting started with OpenFaaS on Minikube

Liz Rice
5 min read · Nov 18, 2017


There are quite a lot of tutorials about getting started with OpenFaaS, but here’s what I needed to get started on Minikube.

In kicking the tyres I was surprised to find that OpenFaaS forks a new process for each invocation. Read on for more on that!

Getting set up

  1. Install minikube
  2. Install the OpenFaaS CLI (I did brew install faas-cli on my Mac)
  3. Install OpenFaaS — I used the Helm charts. (At time of writing that required me to create an additional ClusterRole for the FaaS Controller, and then the Helm charts worked fine).
  4. Find the OpenFaaS Gateway:
$ minikube service list
|-------------|----------------------|-----------------------------|
| NAMESPACE   | NAME                 | URL                         |
|-------------|----------------------|-----------------------------|
| default     | alertmanager         | No node port                |
| default     | alertmanager-external| http://192.168.99.100:31113 |
| default     | faas-netesd          | No node port                |
| default     | faas-netesd-external | http://192.168.99.100:31111 |
| default     | gateway              | No node port                |
| default     | gateway-external     | http://192.168.99.100:31112 |
| default     | kubernetes           | No node port                |
| default     | prometheus           | No node port                |
| default     | prometheus-external  | http://192.168.99.100:31119 |
| kube-system | kube-dns             | No node port                |
| kube-system | kubernetes-dashboard | http://192.168.99.100:30000 |
| kube-system | tiller-deploy        | No node port                |
|-------------|----------------------|-----------------------------|

Browse to the gateway-external address and you’ll see the OpenFaaS UI.

You can also point the faas-cli at the gateway-external address, for example:

$ faas-cli list --gateway http://192.168.99.100:31112/
Function                Invocations     Replicas

For now this doesn’t show anything interesting because I haven’t created any functions. But let’s fix that.

Adding a function

When you go to define a new function, you can see immediately that what you’re supplying is a container image. OpenFaaS is going to instantiate the image and run your function within it.

Your container image needs to include a watchdog component that will be responsible for invoking your function on request. There are quite a lot of sample functions available, and for the purpose of my first experiment I decided to use one of these.
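The real watchdog is a small Go process, but the pattern it implements (accept an HTTP request, fork the fProcess, pipe the request body to its stdin, and return its stdout as the response) can be sketched in Python. This is a hypothetical illustration of the mechanism, not OpenFaaS code:

```python
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

def invoke_fprocess(cmd: list, body: bytes) -> bytes:
    """Fork a fresh process for this invocation: the request body goes to its
    stdin, and whatever it writes to stdout becomes the HTTP response."""
    result = subprocess.run(cmd, input=body, capture_output=True)
    return result.stdout

class WatchdogHandler(BaseHTTPRequestHandler):
    fprocess = ["cat"]  # stand-in for the real fProcess; "cat" just echoes its input

    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        out = invoke_fprocess(self.fprocess, body)
        self.send_response(200)
        self.send_header("Content-Length", str(len(out)))
        self.end_headers()
        self.wfile.write(out)

# Inside the container this would be served on port 8080:
#   HTTPServer(("", 8080), WatchdogHandler).serve_forever()
```

Note that a brand-new process runs for every single request; that detail matters later.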

Defining a function in the UI requires four fields:

  • Image: the container image.
  • Service name: a Kubernetes service will be created with this name.
  • fProcess: the executable that will be invoked to perform the function. At the moment this field is compulsory in the UI, but I imagine it will eventually be possible to default to a value defined in the Dockerfile.
  • Network: although this field is currently required in the UI, whatever I put here didn’t seem to have any effect. (My speculative guess is that this might select a network when running OpenFaaS under Docker.)
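For what it’s worth, those four UI fields correspond to the JSON body the gateway accepts on POST /system/functions. The field names below are my reading of the OpenFaaS API at the time of writing, and the example values are placeholders, so treat this as a sketch:

```python
import json

def deploy_payload(image: str, service: str, fprocess: str, network: str) -> str:
    """Build the JSON body for deploying a function via the gateway API.
    Field names are assumed from the OpenFaaS deploy request format."""
    return json.dumps({
        "service": service,      # Kubernetes service created with this name
        "image": image,          # the container image to instantiate
        "envProcess": fprocess,  # the fProcess field from the UI
        "network": network,      # required in the UI, seemingly unused on Kubernetes
    })

# e.g. POST this body to http://192.168.99.100:31112/system/functions
body = deploy_payload("functions/hubstats", "lizfaas", "./hubstats", "default")
```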

After defining the function, kubectl get pods shows that OpenFaaS creates a pod for the service straight away (it doesn’t wait for the function to be invoked).

$ kubectl get pods
NAME                            READY     STATUS    RESTARTS   AGE
alertmanager-6dbdcddfc4-vdpz7   1/1       Running   0          1h
faas-netesd-5bc679d756-hj4s8    1/1       Running   0          1h
gateway-965d6676d-hlc66         1/1       Running   0          1h
lizfaas-598848dd4-lw8hw         1/1       Running   0          1m
prometheus-64f9844488-s9fmj     1/1       Running   0          1h

Invoking the function

The sample function I used takes a user or organisation name and looks up how many public repositories it has on Docker Hub. Enter the user name as the request text, then hit the Invoke button and you’ll see the response displayed.

Aqua Security has two public repos on Docker Hub at the moment

FaaS CLI

You can use the CLI to view the function:

$ faas-cli list --gateway http://192.168.99.100:31112/
Function                Invocations     Replicas
lizfaas                 2               1

And invoke the function:

$ faas-cli invoke lizfaas --gateway http://192.168.99.100:31112/
Reading from STDIN — hit (Control + D) to stop.
aquasec
The organisation or user aquasec has 2 repositories on the Docker hub.
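Under the hood, faas-cli invoke is essentially an HTTP POST to the gateway’s /function/&lt;name&gt; route, so any HTTP client works too. A minimal sketch (the route is my understanding of the gateway at the time of writing):

```python
from urllib import request

def invoke(gateway: str, function: str, payload: bytes) -> bytes:
    """POST the payload to the gateway's route for this function and
    return the response body, much as faas-cli invoke does."""
    url = f"{gateway.rstrip('/')}/function/{function}"
    req = request.Request(url, data=payload, method="POST")
    with request.urlopen(req) as resp:
        return resp.read()

# invoke("http://192.168.99.100:31112", "lizfaas", b"aquasec")
```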

But is it really a “function”?

In the definition of “fProcess” you’re giving the path to an executable. And yes, every invocation of the function actually launches a new process to run that executable. Forking and exec’ing a new process is an expensive operation for the OS.

Suppose you could call the function directly from the same process that handles the HTTP request: that call takes on the order of 10–20ns. The overhead of starting a new process is more like 10–20ms. Yes, that’s a factor of a million.
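You can get a feel for that gap on any machine. The quick-and-dirty timing below (Python, so the absolute numbers are inflated, but the ratio is what matters) compares an in-process call with spawning a process per invocation:

```python
import subprocess
import time

def handler(body: bytes) -> bytes:
    """A trivial in-process 'function'."""
    return body.upper()

N = 20

start = time.perf_counter()
for _ in range(N):
    handler(b"aquasec")
in_process = (time.perf_counter() - start) / N  # seconds per in-process call

start = time.perf_counter()
for _ in range(N):
    # fork+exec a new process per "invocation", as the watchdog does
    subprocess.run(["true"], capture_output=True)
per_process = (time.perf_counter() - start) / N  # seconds per forked invocation

print(f"in-process: {in_process * 1e9:.0f} ns, new process: {per_process * 1e6:.0f} µs")
```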

There are probably plenty of applications where forking a process will give totally acceptable performance, and it’s absolutely fine for a proof-of-concept, but it’s a level of overhead that rings a lot of alarm bells for me. At scale, anything with an O(10⁶) performance hit will translate into cost.

This is pretty different from the current state of affairs in commercial propositions like AWS Lambda, Google Cloud Functions or Azure Functions, or other serverless frameworks like kubeless, where the function you want to run is genuinely supplied as a function, not an executable.

The good news is that there is work afoot in OpenFaaS to support real functions, with language-specific runtime support, in a feature called Afterburn.

Presumably, rather than using the supplied watchdog you could write your own tiny HTTP server which simply listens on port 8080 and calls your function directly; that way you wouldn’t have the process-starting performance hit at all, as OpenFaaS does keep your container / pod warm for you.
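That do-it-yourself server can be sketched in a handful of lines; here the “function” is ordinary Python called in-process, with no fork per request. Again, a hypothetical stand-in, not an official OpenFaaS runtime:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def my_function(body: bytes) -> bytes:
    """The 'function' itself, called directly rather than fork+exec'd."""
    return b"hello " + body

class DirectHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        out = my_function(body)  # direct call: no process-start overhead
        self.send_response(200)
        self.send_header("Content-Length", str(len(out)))
        self.end_headers()
        self.wfile.write(out)

# In the container, this replaces the watchdog on the same port:
#   HTTPServer(("", 8080), DirectHandler).serve_forever()
```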

Executables-as-a-Service

So, arguably OpenFaaS today is really more like Executables-as-a-Service than Functions. It routes HTTP requests to the right container, and the watchdog connects them via stdin and stdout to your binary.

On the positive side, this means OpenFaaS supports any binary executable to fulfil the role of the function. It makes it super-easy to configure anything you want to respond to web requests. And the asynchronous support is a nice feature too.

It also comes with Prometheus set up ready to give you metrics, container auto-scaling, and a UI (albeit fairly early stage at time of writing) for configuring functions.

Premature optimization?

It’s pretty fashionable at the moment to avoid thinking about performance at all until something is in production and genuinely causing a problem, on the basis that premature optimization is the root of all evil. Is my concern about the performance of processes vs functions an example of this? Well, I believe sometimes we take these axioms too far. But let me know what you think!
