Knative 2/2

Adventures in Kubernetes

Daz Wilkin
Google Cloud - Community
6 min read · Jul 28, 2018


Yesterday, I explored Knative.

Today, I’m looking around to better understand it.

Setup

Istio

An implementation detail of Istio is that the components of our services are proxied by an Istio sidecar container. Returning to yesterday’s helloworld (NB mine’s called hellohenry) deployment:

Kubernetes Engine Console: hellohenry-00001-deployment

The Knative autoscaler, having seen no traffic recently to the service (and thus to the deployment), appears to have auto-scaled the service to zero pods. Let’s hit the endpoint with a request to see what happens:

HELLO=$(\
kubectl get services.serving.knative.dev/hellohenry \
--namespace=default \
--output=jsonpath="{.status.domain}") && echo ${HELLO}
hellohenry.default.example.com

INGRESS=$(\
kubectl get services/knative-ingressgateway \
--namespace=istio-system \
--output=jsonpath="{.status.loadBalancer.ingress[0].ip}")

curl --header "Host: ${HELLO}" http://${INGRESS}
# Your output will differ
Hello Henry: Tada!
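
If you’d rather watch this from the CLI than the console, something like the following (run in a second terminal) should show the pod appearing; the selector reuses the revision label that Knative applies to the Pods, which you’ll see again in the Service YAML further down:

kubectl get pods \
--namespace=default \
--selector=serving.knative.dev/revision=hellohenry-00001 \
--watch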

And then:

Kubernetes Engine Console: one pod created by the autoscaler

The autoscaler has created a pod to support this service. Because no pods were running, this is a cold start and the latency of the first request is higher:

curl \
--header "Host: ${HELLO}" \
--write-out "
lookup %{time_namelookup}
connect %{time_connect}
appconnect %{time_appconnect}
pretransfer %{time_pretransfer}
redirect %{time_redirect}
starttransfer %{time_starttransfer}
total %{time_total}\n" \
http://${INGRESS}

resulting in:

lookup        0.000048
connect       0.020991
appconnect    0.000000
pretransfer   0.021039
redirect      0.000000
starttransfer 8.052199
total         8.052241

NB It’s taking ~8 seconds for Kubernetes, Knative and Istio to get us from 0 → 1 but, thereafter, as you’ll see below, it’s way faster.

Once I have a single pod running, the timings are good:

lookup        0.000043
connect       0.020306
appconnect    0.000000
pretransfer   0.020380
redirect      0.000000
starttransfer 0.042394
total         0.042443

While we have at least one Pod, let’s drill into it and see what it comprises:

Kubernetes Engine Console: Knative App Pod w/ Istio

NB The Pod comprises 3 containers. One is user-container and its image corresponds to my Knative app: gcr.io/dazwilkin-180728-knative/knative/hellohenry. The Istio proxy (aka sidecar), gcr.io/istio-release/proxyv2, is there as expected. This is Envoy. The third container is called queue-proxy and is part of Knative (gcr.io/knative-releases/github.com/knative/serving).
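
The same breakdown should be visible from kubectl. This lists the container names in the Pod, reusing the revision label that appears in the Service selector further down:

kubectl get pods \
--namespace=default \
--selector=serving.knative.dev/revision=hellohenry-00001 \
--output=jsonpath="{.items[0].spec.containers[*].name}"
# Expect something like: user-container istio-proxy queue-proxy (order may vary)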

Vegeta

Putting some pressure on the service using Vegeta (==awesome) from my workstation yields an interesting curve:

echo "GET http://${INGRESS}" \
| vegeta -cpus=12 attack -duration=300s -header "Host: ${HELLO}" \
| vegeta report -reporter=plot > plot.html
Vegeta Plot: 5-minutes

During the initial 70 seconds or so as shown on the plot, the cluster provisions 10 Pods:

Knative autoscaling 0 → 10 Pods

As the cluster detects the load stabilizing, excess Pods are terminated and the service stabilizes with 5 Pods:

Knative stabilizing on 5 Pods
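
If you want to follow the scaling without the console, watching the revision’s deployment while Vegeta runs should show the replica count climb and then fall back; the deployment name is the one from the console screenshots:

kubectl get deployment hellohenry-00001-deployment \
--namespace=default \
--watch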

After one Vegeta run completes and before the cluster scales back to zero, let’s run another test, this time with text output:

echo "GET http://${INGRESS}" \
| vegeta -cpus=12 attack -duration=300s -header "Host: ${HELLO}" \
| vegeta report -reporter=text
Requests [total, rate] 15000, 50.00
Duration [total, attack, wait] 5m0.0312787s, 4m59.980142364s, 51.136336ms
Latencies [mean, 50, 95, 99, max] 438.596801ms, 28.053121ms, 3.451233603s, 6.45897554s, 7.577021529s
Bytes In [total, mean] 284544, 18.97
Bytes Out [total, mean] 0, 0.00
Success [ratio] 99.84%
Status Codes [code:count] 200:14976 503:24
Error Set:
503 Service Unavailable

This provides more accurate latency data, though the p95 at just under 3.5s seems high!?
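
To dig into that tail, one option is to keep the raw results around and ask Vegeta for a latency histogram; the hist reporter’s bucket boundaries below are my guesses at useful values, not something from the original run:

echo "GET http://${INGRESS}" \
| vegeta -cpus=12 attack -duration=300s -header "Host: ${HELLO}" \
| tee results.bin \
| vegeta report -reporter=text

vegeta report -reporter="hist[0,25ms,50ms,100ms,500ms,1s,5s,10s]" < results.bin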

Prometheus

Istio bundles Prometheus for monitoring. Prometheus is deployed to a monitoring namespace and the following will get your local port 9090 forwarded to the service’s port:

kubectl port-forward $(\
kubectl get pods \
--selector=app=prometheus \
--output=jsonpath="{.items[0].metadata.name}" \
--namespace=monitoring) \
--namespace=monitoring \
9090:9090

And browsing localhost:9090/targets gives:
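
The same list of scrape targets is available from Prometheus’s HTTP API if you prefer the terminal; the jq filter here is just one way to slice it:

curl -s http://localhost:9090/api/v1/targets \
| jq -r '.data.activeTargets[] | .labels.job + ": " + .health'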

Grafana

Fortunately, Knative comes with a set of Grafana dashboards configured against Prometheus metrics:

kubectl port-forward $(\
kubectl get pods \
--selector=app=grafana \
--output=jsonpath="{.items..metadata.name}" \
--namespace=monitoring) \
--namespace=monitoring \
3000

and browsing localhost:3000 (click Home):

and selecting “Knative Service — Revision HTTP Requests”:

Super nice! There’s even one to help with scaling debugging:

Zipkin

kubectl port-forward $(\
kubectl get pods \
--selector=app=zipkin \
--namespace=istio-system \
--output=jsonpath="{.items[0].metadata.name}") \
--namespace=istio-system \
9411:9411

And browse to localhost:9411:

Zipkin

You should be able to select the hellohenry service from the Service Name dropdown, then Find Traces:

Zipkin: “hellohenry-00001”
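
If the bundled Zipkin supports the v2 API, you can also query this from the command line; the exact service name is whatever appears in the dropdown (I’m assuming it matches the revision), so adjust as needed:

curl -s http://localhost:9411/api/v2/services

curl -s "http://localhost:9411/api/v2/traces?serviceName=hellohenry-00001&limit=5"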

Obviously there’s not much to the hellohenry sample as it simply responds directly to an incoming request. However, Zipkin provides some insight into the mechanics:

We can see here how the request hits knative-ingressgateway, as expected, before being routed to hellohenry-00001. At the end of the trace detail, you can see that the recipient service is hellohenry-00001-service.default.svc.cluster.local, on port 80. Remember, the code listens on 8080.
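
That Service is one Knative generated for the revision; you can pull it yourself with:

kubectl get service hellohenry-00001-service \
--namespace=default \
--output=yaml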

The YAML for the service binds 80 to the Pods’ queue-port:

apiVersion: v1
kind: Service
metadata:
  annotations:
    serving.knative.dev/configurationGeneration: "1"
  labels:
    app: hellohenry-00001
    serving.knative.dev/configuration: hellohenry
    serving.knative.dev/revision: hellohenry-00001
  name: hellohenry-00001-service
  namespace: default
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 30057
    port: 80
    protocol: TCP
    targetPort: queue-port
  selector:
    serving.knative.dev/revision: hellohenry-00001
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

Drilling down into one of the Pods captured by the service, we can see that queue-port is managed by Knative, not by Istio nor by our service:

image: gcr.io/knative-releases/github.com/knative/serving/cmd/queue
name: queue-proxy
ports:
- containerPort: 8012
  name: queue-port
  protocol: TCP
- containerPort: 8022
  name: queueadm-port
  protocol: TCP

IIUC this queue-proxy forms part of Knative’s auto-scaling. Presumably (!) the queue-proxy proxies to the istio-proxy too, though it’s not obvious to me where this is done. The documentation is scarce.
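
One way to poke at this further is to describe the Pod and look at the queue-proxy container’s arguments and environment, which may hint at how it relates to the autoscaler (I haven’t chased this down):

kubectl describe pods \
--namespace=default \
--selector=serving.knative.dev/revision=hellohenry-00001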

Let’s have a look at the logs:

Logs for ‘hellohenry-00001-deployment’
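
The console aggregates these, but the INBOUND_PORTS_INCLUDE line below appears in the Istio init container’s logs, so you should also be able to see it with kubectl (assuming the init container is named istio-init, as with standard Istio sidecar injection):

POD=$(\
kubectl get pods \
--namespace=default \
--selector=serving.knative.dev/revision=hellohenry-00001 \
--output=jsonpath="{.items[0].metadata.name}")

kubectl logs ${POD} --container=istio-init --namespace=default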

We can see 8080 being caught by the Istio proxy along with 8012 and 8022:

INBOUND_PORTS_INCLUDE=8080, 8012, 8022

The latter two ports correspond to the ports shown above for queue-proxy:

queue-port: 8012/TCP (exposed on host)
queueadm-port: 8022/TCP (exposed on host)

These ports are redirected to the Istio (Envoy) proxy by iptables rules:

-A ISTIO_INBOUND -p tcp -m tcp --dport 8080 -j ISTIO_IN_REDIRECT
-A ISTIO_INBOUND -p tcp -m tcp --dport 8012 -j ISTIO_IN_REDIRECT
-A ISTIO_INBOUND -p tcp -m tcp --dport 8022 -j ISTIO_IN_REDIRECT

What I’m lacking is the diagrammatic flow for this little triad.

Istio-only

I’m going to spend some time comparing deployments between Istio-only and Istio w/ Knative to see if I can then understand what, specifically, Knative serving is adding.

The equivalent deployment (and service) for Istio without Knative is:

NB The annotation on line #17 can be flipped between true (to have an Istio proxy injected into the Pods) and false (for a non-Istio, no-proxy deployment).

With sidecar.istio.io/inject: "false":

No Istio (no proxy)

With sidecar.istio.io/inject: "true":

With Istio (proxy sidecar)

And, while the integration between the proxy and the hellohenry container remains unclear, this is a good starting point:

Logs for hellohenry and the istio sidecar
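
A CLI equivalent, assuming (hypothetically) that the Deployment labels its Pods app=hellohenry and names the app container hellohenry, neither of which is shown here:

POD=$(\
kubectl get pods \
--namespace=default \
--selector=app=hellohenry \
--output=jsonpath="{.items[0].metadata.name}")

kubectl logs ${POD} --container=hellohenry --namespace=default

kubectl logs ${POD} --container=istio-proxy --namespace=default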

More soon!

Conclusion

Slightly wiser, but there’s lots of new functionality here to try to get my head around. I think, for my comprehension, it may be better to start with Istio and, once I grok what’s going on there, then add in Knative. Or someone could explain it all to me ;-)
