When I began investigating sidecars and service meshes I needed to understand how a key feature, automatic sidecar injection, worked. If I use something like Istio or Consul, when I deploy my application container, an Envoy container suddenly appears preconfigured in the same pod. Huh? How? This led me to start digging….
For those who are unaware, a sidecar container is a container that you deploy alongside your application containers to assist the application in some way. A few examples of this include a proxy to help manage traffic and terminate TLS, a container for streaming logs and metrics, or a container that scans for security issues. The idea is to isolate and encapsulate the various concerns of a full application from the business logic itself by using separate containers for each function.
Before I continue, I want to set some expectations. The point of this article is not to explain the intricacies or use cases of Docker, Kubernetes, service meshes, etc., but rather to clearly illustrate one powerful method for extending these technologies. This article is for those already somewhat familiar with using these technologies or who have, at the very least, done a good amount of reading. You will need a machine with Docker and Kubernetes already set up to try this. The easiest method: https://docs.docker.com/docker-for-windows/kubernetes/ (works on Docker for Mac as well).
First, let’s take apart Kubernetes a bit.
When you want to deploy something to Kubernetes, you need to send an object to the kube-apiserver. The way most folks do that is by passing arguments or a YAML file to kubectl. When you do this, the API server runs the request through a handful of stages (authentication, authorization, and admission control) before pushing the data to etcd and getting things scheduled.
This is the pipeline we need to understand in order to understand how sidecar injection works. Specifically, we need to look at Admission Control, which is where Kubernetes validates and, if needed, alters the objects before they are persisted. Kubernetes also allows the registration of webhooks, which can perform custom validation and mutation.
However, this process of creating and registering custom hooks is not tremendously straightforward or well documented. I had to spend several days reading and rereading documents and reverse engineering both Istio and Consul code. I honestly spent at least half a day doing random trial and error when it came to coding some of the API response.
So, once I finally had this working, I thought it would be unconscionable not to share it with all of you. It’s simple and powerful. The only thing missing was a clear guide!
The webhook is exactly what it sounds like: an HTTP endpoint that implements an API defined by Kubernetes. You are creating an API server that Kubernetes can call before proceeding with deployments. This was one of the more cryptic items to get right. There were only a couple of examples: some were just Kubernetes unit tests, others were buried in some pretty large code bases, and all were written in Go. I chose something a bit more accessible: Node.js.
The API path, /mutate in this case, can be whatever you like (it just needs to match the Kubernetes YAML later on). The important thing is to see and understand the JSON you receive from the API server. In this use case we aren’t grabbing anything out of the JSON, but you may need to. What we are doing in the code above is updating the JSON. You need two things for this: 1) learn and understand JSON Patch, and 2) properly convert the JSON Patch statement into a base64-encoded byte array. Once you get those two things right, you just need to respond to the API server with a very simple object. In this case, we are simply adding a label to any pod that comes through: foo=bar.
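As a concrete sketch of those two steps (the foo=bar label is the one used throughout this example):

```javascript
// 1) A JSON Patch document (RFC 6902) is an array of operations.
const patch = [{ op: 'add', path: '/metadata/labels/foo', value: 'bar' }];

// 2) The API server expects the patch as a base64-encoded string,
//    carried in the response's `patch` field with patchType "JSONPatch".
const encoded = Buffer.from(JSON.stringify(patch)).toString('base64');

console.log(encoded);
```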
Ok, so we have some code that can listen and respond to requests from the Kubernetes API server, but where do we deploy it? And how do we get Kubernetes to forward us those requests? You can deploy this endpoint anywhere the Kubernetes API server has connectivity. The simplest place to deploy this code is within the Kubernetes cluster itself, which is what we will do for this example. I’ve tried to keep the example as simple as possible so everything is done with Docker and kubectl. Let’s start by building a container to host the code:
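The Dockerfile from the original post isn’t reproduced in this copy; a minimal sketch (the file name server.js and the base image tag are assumptions) could be:

```dockerfile
FROM node:alpine

WORKDIR /usr/src/app

# Copy the webhook server into the image.
COPY server.js .

EXPOSE 443
CMD ["node", "server.js"]
```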
As you can see, this is as simple as it gets: take the community Node image and push our code into it. Then you can do a simple build:
docker build . -t localserver
Next, we create a Kubernetes deployment:
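The manifest embedded in the original post isn’t reproduced here; a sketch of deployment.yaml might look like this (the names, labels, and port are assumptions; the image is the localserver tag built above):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webhook
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webhook
  template:
    metadata:
      labels:
        app: webhook
    spec:
      containers:
        - name: webhook
          image: localserver        # the image we just built
          imagePullPolicy: Never    # use the local image, don't pull
          ports:
            - containerPort: 443
```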
Notice how we reference the image we just created? This could have just as easily been a pod, or anything we can connect a Kubernetes service to. Let’s define that service next:
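A sketch of service.yaml (the name webhook-service in the default namespace matches the endpoint described later; the selector and port are assumptions that must line up with the deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webhook-service
spec:
  selector:
    app: webhook    # must match the deployment's pod labels
  ports:
    - port: 443
      targetPort: 443
```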
That will create an internal named endpoint within Kubernetes that points to our container. The final step will be to tell Kubernetes that we want the API server to call this service when it’s ready to do mutations:
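The registration object is a MutatingWebhookConfiguration. A sketch of hook.yaml, written against the current admissionregistration.k8s.io/v1 API (the 2018-era original would have used v1beta1, which does not require the sideEffects and admissionReviewVersions fields):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: webhook
webhooks:
  - name: webhook-service.default.svc
    clientConfig:
      service:
        name: webhook-service
        namespace: default
        path: /mutate          # must match the path the server handles
      caBundle: "<base64-encoded root CA certificate>"
    rules:
      - operations: ["CREATE"]
        apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
    admissionReviewVersions: ["v1", "v1beta1"]
    sideEffects: None
```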
That’s it! So simple… but what about security? One aspect we won’t cover here is RBAC within Kubernetes. I am assuming that you are just running all this with minikube or the Kubernetes that comes with Docker for X. However, we will cover one required element: the Kubernetes API server will only call HTTPS endpoints, so the application needs a TLS certificate, and you’ll also need to tell Kubernetes which root certificate authority to trust.
For DEMO PURPOSES ONLY — I have added some code to the Dockerfile to create a root CA and use it to sign a certificate:
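Those Dockerfile additions aren’t reproduced in this copy; the openssl steps might look like the following (file names are assumptions, and the certificate carries only a CN — newer API servers additionally require a subjectAltName on the serving certificate):

```shell
# DEMO ONLY: create a throwaway root CA.
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=demo-ca" -days 365 -out ca.crt

# Key and signing request for the webhook's in-cluster DNS name.
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=webhook-service.default.svc" -out server.csr

# Sign the server certificate with the root CA.
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out server.crt
```

The base64 encoding of ca.crt is what gets handed to Kubernetes as the trusted CA bundle.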
Now, we have the code updated to run HTTPS and have told Kubernetes where to find us and which certificate authority to trust. All that’s left is to deploy it all to the cluster:
kubectl create -f deployment.yaml
kubectl create -f service.yaml
kubectl create -f hook.yaml
- The deployment.yaml runs our container, which serves the hook API via HTTPS and returns the JSON Patch to mutate the object
- The service.yaml gives our container an endpoint: webhook-service.default.svc
- The hook.yaml tells the API server where to find us: https://webhook-service.default.svc/mutate
Now that everything is deployed to the cluster, let’s test it by adding a new pod/deployment. If everything is working the hook should add the extra foo label:
kubectl create -f test.yaml
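test.yaml isn’t reproduced in this copy; per the describe output that follows, it only needs a component label on the pod template. Something like this minimal deployment would do (the label value and nginx image are arbitrary choices):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      component: test
  template:
    metadata:
      labels:
        component: test    # the only label we set; the hook adds foo=bar
    spec:
      containers:
        - name: test
          image: nginx
```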
Ok, so you got “deployment.apps test created”… but did it work?
kubectl describe pods test
Name: test-6f79f9f8bd-r7tbd
Start Time: Sat, 10 Nov 2018 16:08:47 -0500
Awesome! You see how, even though test.yaml only had a component label, the resulting pod has two labels, component and foo?
Wait?! Weren’t we going to use this to create a sidecar container? … I said I was going to show you how to add a sidecar ;) Now you have the knowledge and even the resulting code: https://github.com/dowjones/k8s-webhook. Go play with it and figure out how to make your own auto-injected sidecar. It should be pretty simple: you just need to figure out the proper JSON Patch that will add the extra container to the list in the test deployment. Happy Orchestrating!