Building serverless applications on top of Kubernetes #sdmel19
This article is about my talk at ServerlessDays Melbourne 2019. I wrote it down so that those who couldn't attend can learn the contents.
In this article, I will talk about building serverless applications on top of Kubernetes.
Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications.
These are the core concepts of Kubernetes:
- apply a desired state, defined in a declarative manner, to your cluster
- controllers on K8s watch and maintain the desired state
For example, if you want three containers running on your cluster, you declare spec.replicas: 3 as in the image on the right. This makes K8s deploy three containers to the cluster. A K8s controller watches the state of the resource, and if one container fails, K8s recreates it. In this way, K8s continuously maintains the desired state.
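The declaration above can be sketched as a Deployment manifest; the resource name and image here are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app              # hypothetical name
spec:
  replicas: 3               # desired state: keep three pods running
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0 # hypothetical image
```

If one of the three pods dies, the Deployment controller notices the mismatch with spec.replicas and creates a replacement.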
This helps developers focus more on development, and lets operations engineers deal with more business-critical issues instead of managing containers.
How do we build a service with K8s? Of course, we have to write code. In addition, we have to write a Dockerfile, build a Docker image, and push it to a container registry. Then we deploy the service and expose it to the internet. Finally, we have to set up monitoring and autoscaling.
Kubernetes is awesome, but we have to do a lot of things to develop a service. We want to focus more on writing code!
Next, I want to review the concept of serverless. If we adopt the serverless approach, we might be able to focus more on writing code. CNCF, which stands for Cloud Native Computing Foundation, has a serverless working group that discusses what serverless computing is.
Serverless computing refers to the concept of building and running applications that do not require server management. It describes a finer-grained deployment model where applications, bundled as one or more functions, are uploaded to a platform and then executed, scaled, and billed in response to the exact demand needed at the moment.
It’s the idea that consumers of serverless computing no longer need to spend time and resources on server provisioning, maintenance, updates, scaling, and capacity planning. Instead, all of these tasks and capabilities are handled by a serverless platform and are completely abstracted away from the developers and IT/operations teams.
You can check it in detail in the white paper.
As a result, developers can focus on writing their applications’ business logic. Operations engineers can elevate their focus to more business critical tasks. We want to realize this situation on top of K8s.
A serverless computing platform may provide one or both of the following:
- Function as a Service (FaaS) like AWS Lambda and Google Cloud Functions
- Backend as a Service (BaaS) like Firebase and Auth0
But K8s itself is neither FaaS nor BaaS. How do we build serverless applications on top of K8s?
One of the solutions is Knative. Knative is a Kubernetes-based platform to build, deploy, and manage modern serverless workloads.
Knative defines serverless workloads as computing workloads that are:
- Stateless
- Amenable to the process scale-out model
- Primarily driven by application level (L7 — HTTP, for example) request traffic
The third one means that autoscaling is based on request traffic, not on resource consumption like CPU and memory.
Knative's view is that K8s provides basic primitives like Deployment and Service in support of this model. By standardizing on higher-level primitives that automate substantial amounts of common infrastructure, it should be possible to build consistent toolkits that provide a richer experience than updating YAML files with kubectl.
So, what is Knative? First, it abstracts K8s resources; Knative makes K8s resources simple for developers and operators. Second, it provides building blocks to build your own PaaS/FaaS. Concretely, it provides the Serving and Eventing components (the Build component is now deprecated). Third, it solves mundane but difficult tasks such as:
- Deploying a container
- Routing and managing traffic with blue/green deployment
- Scaling automatically and sizing workloads based on demand
- Binding running services to eventing ecosystems
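As a rough illustration of such a higher-level primitive, a single Knative Service manifest covers deploying a container, routing, and request-driven autoscaling. The service name, image, and target value below are hypothetical, and the exact API version depends on your Knative release:

```yaml
apiVersion: serving.knative.dev/v1   # older releases use v1alpha1/v1beta1
kind: Service
metadata:
  name: hello                        # hypothetical service name
spec:
  template:
    metadata:
      annotations:
        # scale on concurrent requests per pod, not on CPU/memory
        autoscaling.knative.dev/target: "10"
    spec:
      containers:
        - image: gcr.io/my-project/hello:latest  # hypothetical image
```

Applying this one resource makes Knative create the underlying Deployment, route, and autoscaler for you.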
Thus, Knative helps us build a platform that lets developers and operations engineers focus more on business value.
This is the Knative stack. The platform is Kubernetes. A gateway such as Istio, Gloo, or Ambassador sits between Kubernetes and Knative. On top of them, the Knative components exist. All of these are installed into Kubernetes.
There are many products built on Knative, such as PFS, GitLab Serverless, and Knative Lambda Runtime. And you can build your own platform with it.
To build a FaaS on top of K8s with Knative, we have to prepare the following. We have to develop a server that passes requests to the function, because the deployment artifact is a container. The server and functions are kept decoupled, and both are packaged together via a Dockerfile. We had better provide a CLI like faas-cli or tm so that developers don't have to be conscious of K8s manifests and kubectl. It's also better to use a CloudEvents handler in your function; I will explain CloudEvents later.
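A minimal sketch of such packaging, assuming a hypothetical Python-based function server (all paths and file names are made up):

```dockerfile
# Hypothetical image that packages a generic function server
# together with the user's function code.
FROM python:3.7-slim
WORKDIR /app
COPY server/ ./server/        # shared HTTP server that invokes the function
COPY function/ ./function/    # the user's function, developed separately
RUN pip install -r server/requirements.txt
# Knative sends requests to the port the container listens on ($PORT)
CMD ["python", "server/main.py"]
```

The platform's CLI would generate this Dockerfile and the K8s manifests so that function authors never touch them directly.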
What’s the difference from other FaaS on top of K8s? There are already many FaaS solutions on top of K8s, such as Kubeless, Fission, and OpenFaaS. First, Knative itself is not a FaaS/PaaS; it provides the layer between K8s and a serverless framework. Second, multiple vendors are developing it together as OSS: Google, Red Hat, Pivotal, IBM, SAP, and so on. Third, it uses CloudEvents for event handling. So we can avoid vendor lock-in and easily migrate to other Knative-based FaaS/PaaS.
Events are everywhere: AWS S3, GCP Cloud Pub/Sub messages, and K8s as well. However, each event publisher tends to use its own event format.
CloudEvents is a specification for describing event data in a common format. It provides interoperability across services, platforms, and systems, and it also provides open-source SDKs for many languages.
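For example, a CloudEvent serialized as JSON carries a small set of common context attributes (specversion, type, source, and id are required in the v1.0 spec); all of the values below are made up:

```json
{
  "specversion": "1.0",
  "type": "com.example.object.created",
  "source": "/storage/my-bucket",
  "id": "a1b2c3d4-0001",
  "time": "2019-07-22T09:00:00Z",
  "datacontenttype": "application/json",
  "data": { "name": "photo.png", "size": 12345 }
}
```

Because every publisher wraps its payload in the same envelope, a function only needs one CloudEvents handler regardless of which system emitted the event.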
These are the pros and cons of FaaS on K8s with Knative. The first topic is freedom of runtime: we can use any language, binary, or non-vendor SDK, but we have to prepare the libraries ourselves. The second topic is a standardized packaging format: we don't have to use a different ZIP format for each FaaS, just a Dockerfile, though we have to learn how to write an effective Dockerfile. We take on more responsibility, but we can template these pieces in a consistent manner.
Of course, you can operate products built on Knative, or your own Knative-based platform, yourself. But Google has released a managed serverless platform based on Knative: Cloud Run. Cloud Run is part of Google Cloud, and it is a managed serverless platform that enables you to run stateless containers invocable via HTTP requests.
Cloud Run expands your options for serverless services. You can select a serverless service based on your application's demands from Cloud Run, App Engine, and Cloud Functions.
This is today’s summary.
- Knative provides a solution to build a serverless platform on top of K8s
- Cloud Run provides a managed serverless container platform
- If you don't want to operate K8s clusters yourself, you can choose a serverless service ranging from CaaS to FaaS. CaaS is Container as a Service: managed K8s like GKE, AKS, and EKS.
- Knative and Cloud Run are not fully matured yet, but they have great potential. I hope to make contributions to them.