Knative Service with Kubernetes and Spring Boot
Install Knative Serving on top of Kubernetes and deploy a serverless Spring Boot image.
What is Knative?
"Knative provides an open API and runtime environment that enables you to run your serverless workloads anywhere you choose: fully managed on Google Cloud, or on Anthos on Google Kubernetes Engine (GKE), or on your own Kubernetes cluster."
What is Serverless?
The purest idea behind serverless is that you write the code, and something else builds, deploys and runs it without you worrying about how or where.
Another idea is that no resources are used until they are requested: you only open the tap when you need to drink. You pay as and when you use resources, and they scale up or down with demand, referred to as elastic resource utilisation.
For more on serverless see:
https://en.wikipedia.org/wiki/Serverless_computing
Knative
We look at Knative, which attempts to fulfil some of the above by creating a pod only when there is a request (although this too is configurable).
We will run Knative Serving on our local K8s cluster and deploy an existing image as a Knative service.
Set Up
macOS
Docker Desktop: v2.3.0.3
Kubernetes: v1.16.5
Install Knative Serving
Knative depends on Istio, which we installed previously.
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.16.0/serving-crds.yaml
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.16.0/serving-core.yaml
kubectl label namespace knative-serving istio-injection=enabled
Add the Knative Istio networking layer:
kubectl apply --filename https://github.com/knative/net-istio/releases/download/v0.16.0/release.yaml
kubectl --namespace istio-system get service istio-ingressgateway
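Before moving on, it is worth checking that the serving components came up cleanly (pod names will differ):
kubectl get pods --namespace knative-serving
You should see pods such as the activator, autoscaler, controller and webhook (plus the net-istio components) reach Running.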
Kn CLI
Follow the link below to install:
https://knative.dev/docs/install/install-kn/
Create a test Knative service with the Knative CLI kn
kn service create helloworld-go --image gcr.io/knative-samples/helloworld-go --env TARGET="Go Sample v1"
kn service describe helloworld-go
Name: helloworld-go
Namespace: default
Age: 4m
URL: http://helloworld-go.default.example.com
Revisions:
100% @latest (helloworld-go-yljzr-1) [1] (4m)
Image: gcr.io/knative-samples/helloworld-go (pinned to 5ea96b)
Conditions:
OK TYPE AGE REASON
++ Ready 3m
++ ConfigurationsReady 3m
++ RoutesReady 3m
Run it up. There are no pods when no requests are present, so the first request will be slow.
curl -H "Host: helloworld-go.default.example.com" http://localhost:80
Hello Go Sample v1!
The required pods were created on demand. If no requests arrive within a certain period (default 30 secs), the pods are removed. Nice: you only use resources when needed, provided you are OK with the initial latency.
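If you want to watch this happen, keep an eye on the pods in one terminal while issuing the curl from another:
kubectl get pods --watch
The helloworld-go pod appears on the first request and is terminated again once the idle window passes.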
Remove the test service.
kn service delete helloworld-go
Deploy Our Local Image
We have an existing local Docker image, vadal-echo, which we built previously.
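A quick sanity check that the image is still present locally (the tag is the one built in the earlier post):
docker images vadal-echo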
Unlike our previous blogs, where we could deploy local Docker images directly to K8s, Knative needs a Docker registry to pull from.
The first attempt was to re-tag the local image as suggested by the documentation and update the Knative config to allow local images (see below).
Retag like so:
docker tag vadal-echo:0.0.1-SNAPSHOT dev.local/vadal-echo
But it still failed:
..failed with message: Back-off pulling image "dev.local/vadal-echo
So, plan B: add a local registry, push the image to it, and add the registry to the Knative config (see below):
docker run -d -p 5007:5000 --name registry --restart=always registry:2
The port 5007 is arbitrary.
Edit Knative Config:
kubectl -n knative-serving edit configmap config-deployment
Add the following under the data section:
data:
  registriesSkippingTagResolving: ko.local,dev.local,localhost:5007
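If you prefer a non-interactive change over kubectl edit, a merge patch achieves the same (a sketch using the same key and values as above):
kubectl patch configmap config-deployment -n knative-serving --type merge -p '{"data":{"registriesSkippingTagResolving":"ko.local,dev.local,localhost:5007"}}'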
Now tag and push:
docker tag vadal-echo:0.0.1-SNAPSHOT localhost:5007/vadal-echo
docker push localhost:5007/vadal-echo
Check our docker registry:
curl localhost:5007/v2/_catalog
{"repositories":["vadal-echo"]}
Deploy with Kn:
kn service create vecho --image localhost:5007/vadal-echo
Creating service 'vecho' in namespace 'default':

  0.053s The Configuration is still working to reflect the latest desired specification.
  0.176s The Route is still working to reflect the latest desired specification.
  0.227s Configuration "vecho" is waiting for a Revision to become ready.
 12.893s ...
 13.330s Ingress has not yet been reconciled.
 13.456s Waiting for load balancer to be ready
 13.750s Ready to serve.

Service 'vecho' created to latest revision 'vecho-jvjfg-1' is available at URL:
http://vecho.default.example.com
Test it out:
curl -i -H "Host: vecho.default.example.com" localhost:80
HTTP/1.1 200 OK
content-type: application/json
date: Tue, 21 Jul 2020 01:15:22 GMT
server: istio-envoy
x-envoy-upstream-service-time: 16
transfer-encoding: chunked

{"timestamp":"2020-07-21T01:15:22.624","headers":{"host":"vecho.default.example.com","user-agent":"curl/7.64.1","accept":"*/*","accept-encoding":"gzip","forwarded":"for=192.168.65.3;proto=http, for=10.1.0.35","k-proxy-request":"activator","x-b3-parentspanid":"884680e6f10832a3","x-b3-sampled":"1","x-b3-spanid":"f74293af107c73eb","x-b3-traceid":"62d4adff218c6fe6884680e6f10832a3","x-request-id":"0ce4f3b5-ea69-9bd4-87bf-3aef0dfa52e1","x-forwarded-proto":"http"}}
Nice: API domain name, load balancing, service and deployment configuration all done in one line. You can watch the pod being created and then removed after the timeout.
List Kn services:
kn service list
NAME URL LATEST AGE CONDITIONS READY REASON
vecho http://vecho.default.example.com vecho-jvjfg-1 7m9s 3 OK / 3 True
Remove the Knative service:
kn service delete vecho
Brute-force removal if things get in a muddle:
kubectl proxy
then
curl -X DELETE http://localhost:8001/apis/serving.knative.dev/v1alpha1/namespaces/default/services/vecho
The default scale-to-zero timeout is 30 seconds, after which the pod is removed until the next request arrives.
Pod eviction time, replica counts and so on are the subject of autoscaling, discussed at https://knative.dev/v0.15-docs/serving/configuring-autoscaling/.
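As a sketch of what that tuning looks like, per-revision bounds go in annotations on the service template. The annotation names below are from the Knative autoscaling docs; the values are purely illustrative:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: vecho
spec:
  template:
    metadata:
      annotations:
        # keep one pod warm, trading resources for no cold start (disables scale to zero for this service)
        autoscaling.knative.dev/minScale: "1"
        # cap how far the autoscaler can scale out
        autoscaling.knative.dev/maxScale: "3"
    spec:
      containers:
        - image: localhost:5007/vadal-echo
kn also exposes these bounds as flags, e.g. kn service update vecho --min-scale 1 --max-scale 3.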
When there are no pods, where are the logs? For this we need to add logging.
Observability
Here we cover logging, tracing and metrics for the Knative layer in our K8s cluster.
Logging
Based on the following:
https://knative.dev/docs/serving/installing-logging-metrics-traces/
kubectl label nodes --all beta.kubernetes.io/fluentd-ds-ready="true"
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.16.0/monitoring-core.yaml
Knative Serving config changes:
kubectl edit cm -n knative-serving config-observability
And add the following under data if you also need request/access logs:
data:
  metrics.request-metrics-backend-destination: prometheus
  logging.request-log-template: '{"httpRequest": {"requestMethod": "{{.Request.Method}}", "requestUrl": "{{js .Request.RequestURI}}", "requestSize": "{{.Request.ContentLength}}", "status": {{.Response.Code}}, "responseSize": "{{.Response.Size}}", "userAgent": "{{js .Request.UserAgent}}", "remoteIp": "{{js .Request.RemoteAddr}}", "serverIp": "{{.Revision.PodIP}}", "referer": "{{js .Request.Referer}}", "latency": "{{.Response.Latency}}s", "protocol": "{{.Request.Proto}}"}, "traceId": "{{index .Request.Header "X-B3-Traceid"}}"}'
Logs:
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.16.0/monitoring-logs-elasticsearch.yaml
Make sure all is well (Docker Desktop may need its CPU allocation increased if any pods are stuck in Pending).
kubectl get po -n knative-monitoring
NAME READY STATUS RESTARTS AGE
elasticsearch-logging-0 1/1 Running 0 34m
elasticsearch-logging-1 1/1 Running 0 32m
fluentd-ds-l6mgg 1/1 Running 0 105s
kibana-logging-669968b8d4-nc4b2 1/1 Running 0 34m
Check out the Kibana UI:
http://localhost:8001/api/v1/namespaces/knative-monitoring/services/kibana-logging/proxy/app/kibana
Set the index pattern, create it, and the container logs become searchable.
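The Kibana URL above goes through the Kubernetes API proxy, so make sure one is running first:
kubectl proxy
For the index pattern itself, this stack writes Logstash-style indices, so logstash-* with @timestamp as the time field (the values suggested in the Knative monitoring docs) is the one to create.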
Tracing
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.16.0/monitoring-tracing-zipkin.yaml
The Zipkin UI is then available, again via the API proxy, at:
http://localhost:8001/api/v1/namespaces/istio-system/services/zipkin:9411/proxy/zipkin/
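Zipkin only has something to show once requests flow, so, assuming the vecho service is still deployed (recreate it with the kn command above if you deleted it), generate a trace:
curl -H "Host: vecho.default.example.com" http://localhost:80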
Metrics
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.16.0/monitoring-metrics-prometheus.yaml
kubectl port-forward --namespace knative-monitoring \
  $(kubectl get pods --namespace knative-monitoring \
  --selector=app=grafana --output=jsonpath="{.items..metadata.name}") \
  3000
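With the port-forward running, open the Grafana dashboards locally (on macOS):
open http://localhost:3000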
Conclusion
We installed the Knative Serving component, added a local Docker registry and deployed an existing service with kn. As pods are ephemeral, the Istio pod logs are of little use, so we use Kibana to look at the logs instead. We also added tracing and metrics.
Originally published at https://blog.ramjee.uk on August 3, 2020.