Using the Knative Build system by itself

Paul Czarkowski
4 min read · Jul 28, 2018


Yesterday I wrote a post on using Knative, which is designed for serverless-type workloads. As I discovered, it can work for web-based workloads too, but the spin-up time from zero can be disruptive to the user experience.

However, the build aspect of Knative is great, especially with the kaniko build system, which lets you build Docker images without needing special privileges. I decided to explore whether you could install just the build components of Knative and then run your application using the standard Deployment/Service model in Kubernetes.

It turns out to be very simple to do, and quite useful! If you want to follow along as I demonstrate, feel free to clone my demo from GitHub and use the manifests already created there.

Prepare environment

You’ll need access to a Kubernetes cluster and kubectl installed.
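
If you want to make sure kubectl can talk to your cluster before going any further, a quick sanity check never hurts:

$ kubectl cluster-info
$ kubectl get nodes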

Clone my Knative samples repository:

$ git clone https://github.com/paulczar/knative-samples.git
$ cd knative-samples/knative-build-only

Install Knative Build

Deploy the Knative Build components to the knative-build namespace:

Note: this is the same manifest provided in the official Knative Build documentation.

$ kubectl apply -f install
namespace "knative-build" created
clusterrole "knative-build-admin" created
serviceaccount "build-controller" created
clusterrolebinding "build-controller-admin" created
customresourcedefinition "builds.build.knative.dev" created
customresourcedefinition "buildtemplates.build.knative.dev" created
service "build-controller" created
service "build-webhook" created
configmap "config-logging" created
deployment "build-controller" created
deployment "build-webhook" created

Check that Knative Build is installed and ready:

$ kubectl -n knative-build get all
NAME                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/build-controller   1         1         1            1           25s
deploy/build-webhook      1         1         1            1           25s

NAME                             DESIRED   CURRENT   READY   AGE
rs/build-controller-5cb4f5cb67   1         1         1       25s
rs/build-webhook-6b4c65546b      1         1         1       25s

NAME                                   READY   STATUS    RESTARTS   AGE
po/build-controller-5cb4f5cb67-8vdh4   1/1     Running   0          25s
po/build-webhook-6b4c65546b-ww2gs      1/1     Running   0          25s

NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
svc/build-controller   ClusterIP   10.100.200.41   <none>        9090/TCP   25s
svc/build-webhook      ClusterIP   10.100.200.77   <none>        443/TCP    25s

Build Petclinic

Edit the file build/build-petclinic-secret.yaml to contain your Docker registry username and password (base64 encoded):

apiVersion: v1
kind: Secret
type: kubernetes.io/basic-auth
metadata:
  name: build-petclinic
  namespace: build-petclinic
  annotations:
    build.knative.dev/docker-0: https://index.docker.io/v1/
data:
  username: <docker hub username | base64>
  password: <docker hub password | base64>
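
If you're not sure how to produce those base64 values, something like this works in most shells (the -n flag matters, as it keeps a trailing newline out of the encoded value):

$ echo -n '<docker hub username>' | base64
$ echo -n '<docker hub password>' | base64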

Edit the file build/build-petclinic.yaml to use your Docker registry username:

apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: build-petclinic
  namespace: build-petclinic
  labels:
    expect: succeeded
spec:
  serviceAccountName: build-petclinic
  source:
    git:
      url: https://github.com/paulczar/spring-petclinic.git
      revision: docker-build-docs
  template:
    name: build-petclinic
    arguments:
      - name: IMAGE
        # update this with your docker registry username
        value: docker.io/<username>/spring-petclinic:latest
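
The Build above references a BuildTemplate named build-petclinic, which is included in the repository's build directory. To give a sense of what it does, a minimal kaniko-based template looks roughly like this (a sketch for illustration; the exact steps and arguments in the repo may differ):

apiVersion: build.knative.dev/v1alpha1
kind: BuildTemplate
metadata:
  name: build-petclinic
  namespace: build-petclinic
spec:
  parameters:
    - name: IMAGE
      description: Where to publish the built image
  steps:
    # kaniko builds and pushes the image without needing a Docker daemon
    - name: build-and-push
      image: gcr.io/kaniko-project/executor
      args:
        - --dockerfile=/workspace/Dockerfile
        - --destination=${IMAGE}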

Deploy the components for building your app to the build-petclinic namespace:

$ kubectl apply -f build
namespace "build-petclinic" created
secret "build-petclinic" created
serviceaccount "build-petclinic" configured
buildtemplate "build-petclinic" configured
build "build-petclinic" created

Check on the Build:

$ kubectl -n build-petclinic get pods,build,buildtemplate
NAME                       READY   STATUS     RESTARTS   AGE
po/build-petclinic-prw8j   0/1     Init:2/3   0          17s

NAME                     AGE
builds/build-petclinic   17s

NAME                             AGE
buildtemplates/build-petclinic   53s
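
You can also ask the Build resource itself how things are going; its status conditions show whether the build is still running, has succeeded, or has failed:

$ kubectl -n build-petclinic get build build-petclinic -o yaml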

After a few minutes you should be able to see the build logs (use the pod name from above):

$ kubectl -n build-petclinic logs -f build-petclinic-prw8j -c build-step-build-and-push
time="2018-07-28T19:09:05Z" level=info msg="Unpacking filesystem of maven:3.5-jdk-8-alpine..."
time="2018-07-28T19:09:05Z" level=info msg="Unpacking layer: 6"
time="2018-07-28T19:09:06Z" level=info msg="Unpacking layer: 5"
...
...
2018/07/28 19:16:04 pushed blob sha256:dbbeffcc06abbacd5064d143c2550f37cbb288c27a2628fada13806b2bc38505
index.docker.io/paulczar/spring-petclinic:latest: digest: sha256:32e4cb3bf0c37495a428734c0b1ed205b7926e9f1d021068f3bbce97c26177de size: 1077
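
That final digest line means the image made it to the registry. If you want to double-check, you can pull it from any machine with Docker installed (substituting your own username):

$ docker pull docker.io/<username>/spring-petclinic:latest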

Run Petclinic

Run your freshly built application using a standard Kubernetes Deployment and Service:

$ kubectl apply -f run
deployment "petclinic" created
service "petclinic" created

After a short while it should be running and accessible. Since the Service is of type LoadBalancer, it will get an external endpoint:

$ kubectl get all
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/petclinic   1         1         1            1           2m

NAME                    DESIRED   CURRENT   READY   AGE
rs/petclinic-78cddd7d   1         1         1       2m

NAME                          READY   STATUS    RESTARTS   AGE
po/petclinic-78cddd7d-hkglx   1/1     Running   0          2m

NAME             TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
svc/kubernetes   ClusterIP      10.100.200.1     <none>          443/TCP        7d
svc/petclinic    LoadBalancer   10.100.200.232   35.192.99.186   80:30560/TCP   2m

Use the petclinic service’s EXTERNAL-IP to access the application:

$ curl -s 35.192.99.186 | grep PetClinic
<title>PetClinic :: a Spring Framework demonstration</title>

Cleanup

$ kubectl delete -f run
deployment "petclinic" deleted
service "petclinic" deleted
$ kubectl delete -f build
namespace "build-petclinic" deleted
secret "build-petclinic" deleted
serviceaccount "build-petclinic" deleted
buildtemplate "build-petclinic" deleted
build "build-petclinic" deleted
$ kubectl delete -f install
namespace "knative-build" deleted
clusterrole "knative-build-admin" deleted
serviceaccount "build-controller" deleted
clusterrolebinding "build-controller-admin" deleted
customresourcedefinition "builds.build.knative.dev" deleted
customresourcedefinition "buildtemplates.build.knative.dev" deleted
service "build-controller" deleted
service "build-webhook" deleted
configmap "config-logging" deleted
deployment "build-controller" deleted
