Knative, a FaaS anti-pattern?

Knative was announced with a lot of fanfare and created a decent amount of noise in the blogosphere when it was released at Google Cloud Next in July, with several FaaS solutions like Riff and OpenWhisk adopting it. Taken at face value, it does appear to be a best practice for container-as-a-service (CaaS) based solutions, but for a function-as-a-service (FaaS) solution it can be argued that Knative is, in fact, a FaaS anti-pattern.

Knative’s description states that it is a way “…to build, deploy, and manage modern serverless workloads”. Almost since the launch of AWS Lambda, the word “serverless” has been associated with FaaS. That said, it is important to realize that a FaaS solution is not needed in order to create a serverless process or application. A developer can create regular application containers (say, one running NGINX), and if those containers start quickly enough and can suspend their operations without impact, then technically a traditional workload can be made serverless without being based on FaaS.

In the CaaS world, Knative can be thought of as the correct way to perform autoscaling on Kubernetes. It solves the scale-to-zero use case, which is a requirement if you want to autoscale correctly. It also plugs in important components like a service mesh (Istio) and the ability to perform logging and tracing in the correct fashion. In short, for CaaS, it seems to be a best practice.
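To make the scale-to-zero point concrete, here is a minimal sketch of a Knative Service manifest. The service name and image are hypothetical, and the exact API version and annotation spelling vary across Knative releases, so treat this as illustrative rather than copy-paste ready:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello            # hypothetical service name
spec:
  template:
    metadata:
      annotations:
        # Knative's autoscaler can scale idle revisions down to zero pods;
        # min-scale pins the lower bound (0 allows full scale-to-zero).
        autoscaling.knative.dev/min-scale: "0"
    spec:
      containers:
        - image: gcr.io/example/hello   # hypothetical container image
```

Plain Kubernetes Deployments with the Horizontal Pod Autoscaler cannot go below one replica, which is why scale-to-zero is the piece Knative adds for the autoscaling story.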

When it comes to FaaS, however, the exact opposite is true. First, it’s important to realize that FaaS solutions come in different flavors. Two of the most important distinguishing questions are:

  1. Does the FaaS solution have its own scheduler?
  2. Does the FaaS solution require packaging code into a container?

The answers to these questions matter because they fundamentally change the capabilities of those FaaS platforms. For instance, a FaaS solution with its own scheduler can:

  • Support cold/warm/hot lambda pools for better performance and scheduling
  • Support pipelining optimizations for composed functions
  • Run in environments outside Kubernetes

Looking at the public cloud providers’ offerings, such as AWS Lambda, Google Cloud Functions, and Azure Functions, each uses its own scheduler and none depends on an external solution like Kubernetes for scheduling.

Similarly, the question of how a function is packaged is important, because not requiring code to be packaged in a container has the following impact:

  1. Solutions can be quickly iterated on without needing to proceed through a full CI/CD flow. (i.e., developer productivity is not impacted)
  2. Pipeline optimizations can be performed, like running functions in different languages within the same process space.
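A minimal sketch of what such a pipeline optimization looks like, assuming a platform that can detect composed functions and fuse them into one process (the function names and `compose` helper here are illustrative, not any platform's actual API):

```python
import json

# Two "deployed" functions. On a container-per-function platform, each hop
# between them would cross the network with a serialize/deserialize round trip.
def parse(event):
    return json.loads(event)

def enrich(record):
    record["seen"] = True
    return record

def compose(*fns):
    """Fuse a pipeline of functions into a single in-process callable,
    eliminating the per-hop network and serialization cost."""
    def pipeline(event):
        value = event
        for fn in fns:
            value = fn(value)
        return value
    return pipeline

# The fused handler runs both steps in one process space.
handler = compose(parse, enrich)
print(handler('{"id": 1}'))  # {'id': 1, 'seen': True}
```

When each function is baked into its own container image, the scheduler has no way to perform this kind of fusion, because the unit of execution is the container, not the function.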

Again, looking at the public cloud providers’ FaaS solutions such as AWS Lambda, they deploy artifacts, not containers.

If you look at the FaaS solutions that adopted Knative publicly, they were the ones that were weak or lacking in these capabilities. As a reward, these solutions gained a better scaling solution than they previously had, while also adding Istio support and logging integration. However, the trade-off is a platform that impacts both developer productivity and application performance. We will say it again: packaging code into containers should be considered a FaaS anti-pattern! The correct approach is for the container to pull and execute the code from an artifact repository, in the same way the public FaaS solutions do.