Knative, what’s that now?
One of my bugbears about Kubernetes (K8s), even with the managed versions such as GKE, is that you still need to twist way too many knobs and are into the weeds way too quickly. So much so that I am pretty sure my teammates are tired of me saying “The old me would love K8s (managed version or not)”, as, coming from an operational background, it was designed to keep the old me happy!
Preston (@ptone), one of my colleagues, suggested I should look at Knative, so that’s what this post is all about. Come join me on my journey of discovery, figuring out what Knative is and where it fits in the ever-growing K8s ecosystem.
So what is Knative?
It runs on Kubernetes (K8s) and is a set of components designed to make it easier to undertake the following common tasks:
- Deploying a container
- Orchestrating source-to-URL workflows on Kubernetes
- Routing and managing traffic with blue/green or canary deployments
- Automatic scaling and sizing of workloads based on demand (including scaling to zero)
- Binding running services to eventing ecosystems
Ultimately it’s a way to abstract the operational overhead of deploying and managing workloads that run on K8s and provides a consistent approach so that developers can focus on writing cool code.
At the time of writing there are three components available that can be used to address the tasks listed above:
- Build — Source-to-container build orchestration
- Eventing — Management and delivery of events
- Serving — Request-driven compute that can scale to zero
What operational challenges does it solve?
- Platform agnostic. Wherever you can run K8s (managed or not) you can run Knative, providing a consistent approach to some common tasks when using K8s to run your applications
- Abstracts the operational overhead of carrying out common tasks required when deploying applications to K8s (see list above)
- Allows you to use the build & CI/CD tooling and frameworks you already use.
- Enables developers to focus just on writing interesting code, without worrying about the “boring but difficult” parts of building, deploying, and managing an application.
Now this latter point really did have me pricking up my ears! (Although I hope deploying a container is considered boring rather than difficult.)
So, to be honest, I was still a little confused about the overlap with Istio and even with vanilla K8s. If you’ve read my “Istio: Why do I need it?” post you’ll know Istio provides traffic routing capabilities as well, hence me still being confused at this stage.
The diagram below from the Knative README illustrates how particular personas interact with K8s, Istio and Knative
At this stage in my exploration of the docs (the GCP Knative landing page and the Knative intro README) it wasn’t obvious to me that Knative had a dependency on Istio. Once I realised that rather important piece of the puzzle while reading the Knative install on GKE README, the overlap with Istio made sense. I reimagined the above diagram so that the actual operational dependencies are obvious and the types of activities carried out by specific personas at each layer are clearer. I am putting aside the fact that you don’t necessarily need Istio or Knative to deploy an application and expose it, but if you are using Knative, the activities that should occur where do need defining!
To understand whether these components really do abstract the tasks listed earlier, so that I am not worrying about the “boring but difficult” parts of building, deploying, and managing an application, I decided I would compare a subset of typical tasks using native GKE and GKE with Knative. The tasks were:
- Deploying a container
- Upgrading to another version of the application
- Configuring auto scaling components
- Carrying out a canary release. I wanted two versions of the app running side by side, with both accessible, and I only wanted to use a single cluster. Note: I am glossing over the differences between blue/green deployments and canary releases, but you should be aware that they are subtly different approaches. I leave it as an exercise for you to determine which one meets your requirements.
- Building, deploying & managing a multi component application
I was trying to gauge the extent to which I needed to focus on operational tasks versus having the operational knob-twiddling abstracted away. (I’d already gone hands-on with Istio and detailed my adventures in another post, so I did not see the point of repeating that exercise.) As some of the tasks are so fundamental to using K8s, I didn’t need to type out lots of words about them (I’m assuming if you’re reading this you have a basic understanding of K8s) and could just focus on where the divergence in experience was obvious.
The baseline was an operational K8s cluster. I used GKE for this, as you knew I would, but you can obviously use whatever flavour of managed K8s you like, or one you deploy and manage yourself.
- Deploying a container: Using kubectl create or kubectl apply it’s a straightforward process (a bit noddy, I agree, but it sets the scene). I am intrigued how this could be any simpler, to be honest!
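For the record, the “straightforward process” boils down to a short manifest plus one command. A minimal sketch (the names and image below are illustrative, not from the post):

```yaml
# deployment.yaml — applied with: kubectl apply -f deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: helloworld
        image: gcr.io/my-project/helloworld:v1  # hypothetical image
        ports:
        - containerPort: 8080
```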
- Upgrading to another version of the application: Using kubectl again, painless
- Configuring auto scaling components: This is pretty straightforward; see https://cloud.google.com/kubernetes-engine/docs/how-to/scaling-apps#autoscaling_deployments
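“Painless” here really is a one-line change: bump the image tag in the manifest and re-apply it, and K8s performs a rolling update. A sketch, assuming the hypothetical helloworld Deployment from earlier:

```yaml
# In the Deployment's container spec, change only the image tag, then:
#   kubectl apply -f deployment.yaml
# or skip editing the file entirely with:
#   kubectl set image deployment/helloworld helloworld=gcr.io/my-project/helloworld:v2
spec:
  template:
    spec:
      containers:
      - name: helloworld
        image: gcr.io/my-project/helloworld:v2   # was :v1
```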
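Straightforward, but note you do have to pick the numbers yourself. A sketch of a HorizontalPodAutoscaler (the target name and thresholds are illustrative), which is what `kubectl autoscale deployment helloworld --min=1 --max=5 --cpu-percent=80` creates for you:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: helloworld
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: helloworld      # hypothetical Deployment to scale
  minReplicas: 1          # you choose the floor…
  maxReplicas: 5          # …and the ceiling…
  targetCPUUtilizationPercentage: 80  # …and the trigger
```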
- Carrying out a canary release: Now here it got interesting. Labels are really important here, and, well, YAML. The diagram shows what needs doing to the stable and canary configs. The number of replicas dictates how much traffic gets to the canary.
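The label trick, sketched out (names and images are made up): both Deployments carry the same `app` label, the Service selects on that label only, and traffic therefore splits roughly in proportion to replica counts.

```yaml
# Stable: 9 replicas, canary: 1 replica → roughly 10% of traffic
# reaches the canary, because the Service selects on "app" alone.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-stable
spec:
  replicas: 9
  selector:
    matchLabels: {app: helloworld, track: stable}
  template:
    metadata:
      labels: {app: helloworld, track: stable}
    spec:
      containers:
      - {name: helloworld, image: "gcr.io/my-project/helloworld:v1"}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-canary
spec:
  replicas: 1
  selector:
    matchLabels: {app: helloworld, track: canary}
  template:
    metadata:
      labels: {app: helloworld, track: canary}
    spec:
      containers:
      - {name: helloworld, image: "gcr.io/my-project/helloworld:v2"}
---
apiVersion: v1
kind: Service
metadata:
  name: helloworld
spec:
  selector:
    app: helloworld   # no "track" key, so it matches stable AND canary pods
  ports:
  - port: 80
    targetPort: 8080
```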
Istio has way better traffic management capabilities; just by deploying Istio you have much greater control over directing the percentage of traffic to your canaries than you get by increasing the number of pods! (This post is already way longer than anticipated, so I refer you back to the very good Istio docs or even my older post on Istio.)
- Deploying & managing a multi component application: Welcome to YAML central. An example of configuring a multi component application can be found here: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
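To give a flavour of that “much greater control”, here is a hedged sketch of an Istio VirtualService splitting traffic by explicit percentages rather than by replica count (host and subset names are illustrative; see the Istio docs for the full picture):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - helloworld
  http:
  - route:
    - destination:
        host: helloworld
        subset: v1      # hypothetical subset defined in a DestinationRule
      weight: 90        # 90% of traffic to the stable version
    - destination:
        host: helloworld
        subset: v2
      weight: 10        # 10% to the canary — no pod counting required
```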
I had to install Istio and then Knative on top of my GKE cluster before I could begin. I must admit I was wondering, from an operational point of view, how you would troubleshoot this stack in the case of an operational issue.
My test cluster of 3 nodes and 3 vCPUs was too small, and I didn’t want to see what wouldn’t work if I just used it as is, so I had to embiggen it before installing.
I just followed the installation instructions for Istio & Knative. My Istio install had a problem, so I guessed I was going to have to sort that out at some point!
The various kubectl commands I needed to run did a whole bunch of config. I was pretty happy it was relatively painless so far (I was expecting the error above to come back and bite me but it didn’t!).
The build & serving components seemed fine at this stage though!
Now onto the tasks:
- Deploying a container: Same as deploying to native GKE: create your YAML file and apply it using kubectl apply. The beauty here is that the YAML config file is so much simpler, because the serving component abstracts all the operational aspects.
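To show what “so much simpler” means: this is roughly what a whole Knative Service looked like in the v0.1-era v1alpha1 API I was using (based on the helloworld-go sample; treat the exact schema as a sketch, it has changed in later releases). No Deployment, no Service, no HPA — just the container:

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-go
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: gcr.io/knative-samples/helloworld-go
            env:
            - name: TARGET
              value: "Go Sample v1"
```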
A side note: when I ran
kubectl get ksvc helloworld-go --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
I got an error: the server doesn’t have a resource type “ksvc”
So I replaced ksvc with its expanded form, services.serving.knative.dev. This seems like a bug, as the tag for the image seems to indicate I am running a recent enough version: v0.1.1gke-1.10.6gke-1.11.2
- Upgrading to another version of the application: You use kubectl, so as with native GKE you just change the image used
- Configuring auto scaling components: Easy enough, but here I really felt out of control, as it wasn’t obvious how it manages the scaling; the service.yaml file for the example I was following only had this in it:
So I ended up reading this. The config-autoscaler.yaml file has the autoscaling settings in it. So yeah, at this point, thinking as the developer persona interacting with Knative, some of the operational overhead really has been abstracted from me! I really just needed to deploy my code, whereas with native GKE I had to give it max and min values for the replicas and the target CPU utilization.
- Carrying out a canary release: I can have two versions of the app running on the same cluster with traffic going to both, so good enough for my quick tour. Side note: I got this message when deploying my route YAML file; it’s just a mismatch with the docs, but it’s saying the right thing, as in route:
route.serving.knative.dev “blue-green-demo” created
The example is very straightforward and does a far better job of describing what you need to do than I would, but in summary: deploy two versions of the app, define in a routing config file what percentage of traffic you want to go to which version, apply that, and job done as they say!
Up to now I have deliberately glossed over the “being able to run your serverless workloads on GKE” stuff that is described on the GCP landing page for Knative, as I needed to understand what Knative actually was before getting diverted.
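The routing config file amounts to something like this (a sketch in the v1alpha1 API of the time; the revision names are illustrative — the real ones come from your deployed Revisions):

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Route
metadata:
  name: blue-green-demo
spec:
  traffic:
  - revisionName: blue-green-demo-00001  # "blue" — hypothetical revision name
    percent: 80
  - revisionName: blue-green-demo-00002  # "green" — hypothetical revision name
    percent: 20
```

Change the percentages and re-apply to shift traffic; that is the whole canary mechanism.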
You still need someone to twiddle the knobs for the layer cake (as I have come to think of the Knative config). I may be slightly biased, but the GKE-Knative cake is a good choice. That doesn’t absolve you of needing some sysadmin chops even when using a managed service, though; it just makes it less of an operational overhead.
Coming back to the GKE serverless add-on, which is mentioned in the “being able to run your serverless workloads on GKE” section of the GCP Knative landing page: the add-on will actually automate the installation of the cake (see this post, under the section “Serverless and containers: The best of both worlds”, where that is made clear. Plainer English there than on the landing page!) At the time of writing there was a sign-up form here.
Knative really does make things simpler for your developers, especially if all they’re interestedted in is writing code and deploying it without worrying about all that other stuff. I was pleasantly surprised, and that was before I even looked at the GKE serverless add-on.
So if your personas fit my interpretation of who does what with the cake, then I would suggest looking at Knative. Here’s my 👍🏾 for Knative if you’re a dev, but to be honest this cake needs to be a fully managed service to really deliver the nirvana it tantalizingly promises. Today you still need someone to manage the operational overhead of the K8s ecosystem, even if you start from a managed version of K8s.
Thanks to Vic (@vicnastea), another of my teammates, for reading this before I hit publish, and to him indoors for reviewing my personas-to-cake-layer drawing!