Knative 1/2

Adventures in Kubernetes

Daz Wilkin
Google Cloud - Community
7 min read · Jul 27, 2018


Personally, I think “Knative” should be pronounced with a silent “K”, like “Knight”, because if you forever have to explain that the K’s not silent…

I’ve been mired (yes, mired) in Helm of late and “I don’t love it”. So, reading the many articles describing this week’s announcement of Knative (pronounced kay-nay-tiv), I was intrigued to try it out during my Friday afternoon “Deep Work” time.

I had a sense of the ambition of this technology but, I will admit, reading the content that was written this week didn’t leave me any clearer. I’ll use this story to try to articulate what I think is going on here. To be clear: while I’m a Googler and quite a passionate advocate of Kubernetes and Istio, I’m not involved in the engineering of any of these solutions.

Caveat

This story is my walk-through of Google’s documentation. I’m not doing anything novel. If you’d prefer the definitive version, I refer you to the Google documentation:

https://github.com/knative/docs/blob/master/install/Knative-with-GKE.md

Setup

I’m going to use Kubernetes Engine and a self-installed Knative because Google’s serverless add-on for Kubernetes Engine is not yet available. I’m a proponent of Regional Clusters too, so the commands below will get you a shiny regional cluster:

PROJECT=[[YOUR-PROJECT-ID]]
BILLING=[[YOUR-BILLING-ID]]
CLUSTER=[[YOUR-CLUSTER-NAME]]
REGION=[[YOUR-REGION]] # us-west1
LATEST="1.10.5-gke.3"
gcloud projects create ${PROJECT}
gcloud beta billing projects link ${PROJECT} \
--billing-account=${BILLING}
gcloud services enable container.googleapis.com \
--project=${PROJECT}
gcloud beta container clusters create $CLUSTER \
--username="" \
--cluster-version=${LATEST} \
--machine-type=custom-2-8192 \
--image-type=COS \
--num-nodes=1 \
--enable-autorepair \
--enable-autoscaling \
--enable-autoupgrade \
--enable-stackdriver-kubernetes \
--min-nodes=1 \
--max-nodes=3 \
--region=${REGION} \
--project=${PROJECT} \
--preemptible \
--scopes="https://www.googleapis.com/auth/cloud-platform"
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole=cluster-admin \
--user=$(gcloud config get-value core/account)
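
As a quick sanity check, a regional cluster created with --num-nodes=1 gets one node per zone (three zones by default), so you should see three nodes:

kubectl get nodes
# Expect three nodes, one per zone in ${REGION}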

I had problems with the Istio install first time round. If you have problems, whacking and reinstalling Istio seems (!) to fix issues:

kubectl apply \
--filename=https://storage.googleapis.com/knative-releases/serving/latest/istio.yaml
kubectl label namespace default istio-injection=enabled
kubectl get pods \
--namespace=istio-system \
--watch

You’re looking to stabilize with a list of pods similar to the following:

NAME                                        READY     STATUS
istio-citadel-6fd4747d74-cz784              1/1       Running
istio-cleanup-secrets-jpqwt                 0/1       Completed
istio-egressgateway-8689d84656-x8pzj        1/1       Running
istio-galley-bc65ccfc4-7nr2s                1/1       Running
istio-ingressgateway-84ffcdd574-kqcvd       1/1       Running
istio-mixer-post-install-1.0-fs4q8          0/1       Completed
istio-pilot-878dd49f6-7bcvt                 2/2       Running
istio-policy-d9d9d7d6-r8n5z                 2/2       Running
istio-sidecar-injector-7b4f7c4bcc-bblbj     1/1       Running
istio-statsd-prom-bridge-6889648ccf-mjdvk   1/1       Running
istio-telemetry-698c747dc5-zzcqs            2/2       Running
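
If yours doesn’t stabilize, “whacking” simply means deleting the Istio resources and re-applying; a sketch, using the same manifest:

kubectl delete \
--filename=https://storage.googleapis.com/knative-releases/serving/latest/istio.yaml
# ...then re-run the kubectl apply above and watch the pods again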

In Kubernetes Engine Console (filtered by Namespace:istio-system):

Kubernetes Engine Console `Namespace:istio-system`

NB Istio creates a Network (TCP) Load-Balancer somewhat confusingly called istio-ingressgateway but it is not a Kubernetes Ingress resource:

Kubernetes Engine Console — Istio’s Network Load-Balancer

And, the same Network Load-Balancer shown with the Cloud Console:

Cloud Console: Network Services — Istio’s Network Load-Balancer
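
You can also prove to yourself from the command-line that this is a Service of type LoadBalancer rather than an Ingress resource:

kubectl get services/istio-ingressgateway \
--namespace=istio-system \
--output=jsonpath="{.spec.type}"
# LoadBalancer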

The Knative installation was trouble-free:

kubectl apply \
--filename=https://storage.googleapis.com/knative-releases/serving/latest/release.yaml

Then:

kubectl get pods \
--namespace=knative-serving
NAME                          READY     STATUS    RESTARTS   AGE
activator-9988b7887-vq69k     2/2       Running   0          1m
autoscaler-664f4986c9-8vnqh   2/2       Running   0          1m
controller-79f897b6c9-7tfzp   1/1       Running   0          1m
webhook-5c664c7c88-cfdj2      1/1       Running   0          1m

And:

kubectl apply \
--filename=https://storage.googleapis.com/knative-releases/build/latest/release.yaml

Then:

kubectl get pods \
--namespace=knative-build
NAME                                READY     STATUS      RESTARTS
build-controller-5cb4f5cb67-nb2n9   2/2       Running     0
build-controller-6b88fbb445-949zq   2/2       Running     0
build-webhook-6b4c65546b-tqprz      2/2       Running     2
build-webhook-988b797b4-dzfls       1/2       Completed   1

You can filter Kubernetes Engine Console by wildcard namespaces (Namespace:knative-*):

Kubernetes Engine Console `Namespace:knative-*`

The Knative Deployment results in the creation of another Network Load-Balancer by Istio:

Kubernetes Engine Console: `knative-ingressgateway`

NB The Knative “Ingress” (not a Kubernetes Ingress but a Network LB) in my case is at 35.233.225.250. Both LBs were created by Istio and are in the istio-system namespace.

It’s useful to prove to yourself which Network LB will be used subsequently. You can determine this with the following command:

kubectl get services/knative-ingressgateway \
--namespace=istio-system \
--output=jsonpath="{.status.loadBalancer.ingress[0].ip}"
# Your value will differ
35.233.225.250

And, using Cloud Console:

Network Services: 2 Network Load-Balancers

Removing the Namespace filter, the Console should resemble:

Kubernetes Engine Console

NB I have a couple of unresolved issues: the grafana and kube-state-metrics deployments are balking on an init container (istio-init) problem: iptables v1.6.0: can’t initialize iptables table `nat’: Permission denied (you must be root). Not good, but not causing me obvious problems.
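
If you want to dig into the error yourself, something like the following should surface the init container’s logs (a sketch: the pod name will differ and I’m assuming the monitoring components landed in the monitoring Namespace):

# ${FAILING_POD} is a placeholder; the Namespace is an assumption
kubectl logs ${FAILING_POD} \
--container=istio-init \
--namespace=monitoring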

Knative Deployment

I recommend you start with the straightforward helloworld sample. If you want the opportunity to tweak and test what’s going on, clone the sample and try stuff out.

You may want to economize on the Docker image size and, if you’re interested in exploring Google’s distroless project, I recommend the following Dockerfile:
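
Something along these lines (a sketch, assuming the Go variant of the sample, which listens on 8080; adjust for your own code):

# Build stage: compile a static Go binary
FROM golang:1.10 AS build
WORKDIR /go/src/helloworld
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /helloworld .

# Runtime stage: a distroless base keeps the image tiny
FROM gcr.io/distroless/base
COPY --from=build /helloworld /helloworld
EXPOSE 8080
ENTRYPOINT ["/helloworld"]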

Since you’re using Kubernetes Engine and we have Google Container Registry (GCR) on hand, I recommend you build and push this there rather than to DockerHub; it’s closer, faster, cheaper:

docker build \
--tag=gcr.io/${PROJECT}/helloworld \
. # Don't forget the period
docker push gcr.io/${PROJECT}/helloworld
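
You can confirm the push landed:

gcloud container images list --repository=gcr.io/${PROJECT}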

Then ensure your deployment references GCR and your ${PROJECT}, perhaps:

echo "
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
name: helloworld
namespace: default
spec:
runLatest:
configuration:
revisionTemplate:
spec:
container:
image: gcr.io/${PROJECT}/helloworld
env:
- name: TARGET
value: 'Knative'
" | kubectl apply --filename=-

And you should see:

service.serving.knative.dev/helloworld created

Deploying a Service against this Knative API yields two Kubernetes Deployments: one named after our deployed service and suffixed with -deployment, and another suffixed with -autoscaler (in the knative-serving Namespace):

Kubernetes Engine Console: Workloads named “helloworld-*”

And:

kubectl get deployments --namespace=default
NAME                          DESIRED   CURRENT   UP-TO-DATE
helloworld-00001-deployment   1         1         1

And:

kubectl get deployments \
--selector=serving.knative.dev/configuration=helloworld \
--namespace=knative-serving
NAME                          DESIRED   CURRENT   UP-TO-DATE
helloworld-00001-autoscaler   0         0         0

NB A small trick there: filter the Deployments in the knative-serving Namespace to the one labeled serving.knative.dev/configuration with a value of helloworld.

There’s also a regular Kubernetes Service:

kubectl get service/helloworld --namespace=default
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
helloworld   ClusterIP   10.31.246.198   <none>        80/TCP    2m

Kubernetes Engine Console: “helloworld” Service

And:

Kubernetes Engine Console: “helloworld” Service details

Hold that thought!

Now, depending on which documentation you’re following, you’re told to run the following command:

kubectl get services.serving.knative.dev
NAME         CREATED AT
helloworld   20s

NB That is correct: services.serving.knative.dev

Wait, what? services.serving.knative.dev? It turns out that, if you look at the top of the deployment, apiVersion: serving.knative.dev/v1alpha1 and kind: Service help explain this, along with a quick check of kubectl get --help, which suggests:

Usage:
kubectl get
(TYPE[.VERSION][.GROUP] [NAME | -l label] | TYPE[.VERSION][.GROUP]/NAME ...) [flags] [options]

So TYPE is Service, GROUP is serving.knative.dev and VERSION is v1alpha1. So we can:

kubectl get services.serving.knative.dev
NAME         CREATED AT
helloworld   20s

And we can also:

kubectl get services.v1alpha1.serving.knative.dev
NAME         CREATED AT
helloworld   20s

This doesn’t tell us much beyond the fact that we did indeed create a Knative Service called helloworld. We’ll tweak the get command to pull specific values from the result:

HELLOWORLD=$(\
kubectl get services.serving.knative.dev/helloworld \
--namespace=default \
--output=jsonpath="{.status.domain}") && echo ${HELLOWORLD}
helloworld.default.example.com

NB Our Knative service has a fully-qualified (domain) name of helloworld.default.example.com. The helloworld is the name we provided; default corresponds to the Namespace, which (you may have missed) is in the spec too. Try creating a new Namespace and deploying helloworld to it; a sketch follows.
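
Something like this (test is an arbitrary Namespace name; the domain should pick it up):

kubectl create namespace test
# Redeploy the same spec with namespace: test and then:
kubectl get services.serving.knative.dev/helloworld \
--namespace=test \
--output=jsonpath="{.status.domain}"
# Expect: helloworld.test.example.com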

To call the service, we will reference this fully-qualified name as a header and pass this to the knative-ingressgateway. Remember the IP address that we noted previously?

KNATIVE_INGRESS=$(\
kubectl get services/knative-ingressgateway \
--namespace=istio-system \
--output=jsonpath="{.status.loadBalancer.ingress[0].ip}")

And so we can:

curl \
--header "Host: ${HELLOWORLD}" \
http://${KNATIVE_INGRESS}
Hello Henry: Knative

Your value will be different, but you should see Hello World followed by whatever value you used for TARGET when you deployed the service.
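
To convince yourself that the Host header is doing the routing, drop it; the gateway has no route for the bare IP and you should get an error back (likely a 404, though the exact response may vary):

curl http://${KNATIVE_INGRESS}
# e.g. 404 Not Found: no route matches without the Host header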

Conclusion

Assuming the cluster, Istio and Knative were installed and ready for us to use, all we had to do was write some code and kubectl apply a spec file referencing our container in order to deploy our service. Knative took care of exposing the service through an Istio mesh and an Istio Ingress for us.

Istio played only a minor role in this scenario from our perspective but it enables a wealth of functionality for the folks who are ostensibly managing the cluster on our behalf, in terms of monitoring (w/ Prometheus and the ill-fated Grafana pod), security and traffic routing.

Hopefully this story added a little color to Knative for you. It’s an interesting technology and I look forward to learning more about it.

Tidy up!

If you created a Google Cloud Platform project just for the purposes of this exercise and you’re ready to delete everything above, you may simply (irrevocably!) delete the project:

gcloud projects delete ${PROJECT} --quiet

If you’d like to keep the project but (irrevocably) delete the Kubernetes cluster, Istio, Knative and the functions deployed, you may:

gcloud container clusters delete ${CLUSTER} --project=${PROJECT}

If you’d like to unwind everything above, use some subset of:

# Delete Services
kubectl delete services.serving.knative.dev/helloworld \
--namespace=default
# Delete Knative Build|Serving
kubectl delete \
--filename=https://storage.googleapis.com/knative-releases/build/latest/release.yaml
kubectl delete \
--filename=https://storage.googleapis.com/knative-releases/serving/latest/release.yaml
# Delete Istio
kubectl delete \
--filename=https://storage.googleapis.com/knative-releases/serving/latest/istio.yaml
# Then delete the Cluster and/or the Project

That’s all!
