Streaming Application Deployments on OpenShift

In this article, we’ll see how OpenShift can automatically deploy a new version of your application each time a new Docker image is pushed to the cluster.


One key feature of OpenShift compared to plain Kubernetes is its internal, integrated Docker registry, the OpenShift Container Registry, which stores Docker images in relation to an OpenShift project. When a new Docker image is pushed to the container registry, the registry notifies the cluster about the new image, providing information such as its namespace (i.e., its project in OpenShift terminology), its name and its metadata. The cluster can then react by triggering a rollout of the new image.

Such “streamed” rollouts can be used in the following contexts:

  • a Docker image is built by your CI platform and pushed to the container registry for automatic deployment on a staging or production environment
  • as a developer, you build a Docker image locally and push it to the container registry, and the application is updated on your local Minishift, on your dedicated OpenShift cluster or on OpenShift Online

The common denominator here is that the rollout/deployment of a new image is fully managed by the replication controller as soon as the container registry is updated.

Setting up a streaming deployment

The aforementioned automatic image rollouts are supported by two OpenShift-specific resource types: ImageStreams and DeploymentConfigs.

An ImageStream object for the webapp Docker image can be created using the following minimal manifest:

$ cat templates/webapp-imagestream.yml
apiVersion: v1
kind: ImageStream
metadata:
  name: webapp
# apply the manifest
$ oc apply -f templates/webapp-imagestream.yml
imagestream "webapp" created

Here, the description of the ImageStream shows that the webapp Docker image is tied to the sandbox project, since it is associated with the 172.30.1.1:5000/sandbox/webapp repository in the container registry.

$ oc get is/webapp -o yaml
apiVersion: v1
kind: ImageStream
metadata:
  name: webapp
  namespace: sandbox
  ...
spec:
  lookupPolicy:
    local: false
status:
  dockerImageRepository: 172.30.1.1:5000/sandbox/webapp

The second object to create for supporting automatic deployments of the webapp is a DeploymentConfig:

$ cat templates/webapp-deploymentconfig.yml
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: webapp
spec:
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: "webapp:latest"
        ...
  replicas: 2
  triggers:
  - type: "ConfigChange"
  - type: "ImageChange"
    imageChangeParams:
      automatic: true
      containerNames:
      - "webapp"
      from:
        kind: "ImageStreamTag"
        name: "webapp:latest"
  strategy:
    type: "Rolling"
  paused: false
  revisionHistoryLimit: 2
  minReadySeconds: 0

The main difference between a DeploymentConfig manifest and a standard Kubernetes Deployment manifest is the triggers element, which describes how a new deployment is, well… triggered. In the example above, a deployment happens each time the webapp:latest image changes in the sandbox project of the container registry. The ConfigChange trigger also creates a new replication controller whenever changes are detected in the pod template of the deployment configuration. The deployment strategy is of type Rolling, which means that pods will be replaced one at a time. Finally, up to 2 previous revisions of the deployment will be kept around, in case a rollback is needed.
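Since previous revisions are retained, a deployment can also be started or rolled back manually with the oc CLI. A quick sketch, assuming the webapp DeploymentConfig described above:

```shell
# manually start a new deployment from the latest image
oc rollout latest dc/webapp

# list the revisions kept for the deployment configuration
oc rollout history dc/webapp

# roll back to the previous revision
oc rollback webapp
```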

Note: the volumes and env sections of the spec.template element of the manifest above have been left out, since they were already discussed in previous articles and they did not change here.

$ oc apply -f templates/webapp-deploymentconfig.yml
deploymentconfig "webapp" created
$ oc get dc
NAME      REVISION   DESIRED   CURRENT   TRIGGERED BY
webapp    0          2         0         config,image(webapp:latest)

The DeploymentConfig is now ready, but since no image has been pushed to the container registry yet, the current number of replicas is still at 0. Let’s take care of this now!
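Before pushing an image, the triggers and current state of the DeploymentConfig can be double-checked from the CLI; a sketch:

```shell
# inspect the deployment configuration, including its triggers
# and the image stream tag it watches
oc describe dc/webapp
```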

Triggering a deployment

With the ImageStream and DeploymentConfig in place, the last step is to build an image and push it to the container registry to trigger a deployment.

Building an image could be performed by a continuous integration service or by a makefile goal, but here the process will be explained step by step.

Since the container registry runs inside the OpenShift cluster, we first need to point the Docker CLI at the Docker daemon of that same cluster to build the image from the command line. In other words, this preliminary step ensures that all subsequent docker commands will be executed against the Docker instance running in the OpenShift cluster, as opposed to the default, local Docker instance of the host:

# change the settings of the docker CLI
$ eval $(minishift docker-env)

From now on, all docker commands will be processed by the cluster’s Docker daemon, which will be able to access the registry.
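To switch the docker CLI back to the host's local daemon afterwards, the environment variables exported by the eval above can simply be unset. A sketch; the exact list of variables can be checked by running minishift docker-env without the eval:

```shell
# print the export statements without applying them
minishift docker-env

# revert to the host's local Docker daemon
unset DOCKER_TLS_VERIFY DOCKER_HOST DOCKER_CERT_PATH
```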

Note that the internal IP address of the registry can be retrieved with the following command (this will be used to tag the Docker image):

$ minishift openshift registry
172.30.1.1:5000

With that set up, the Docker image containing the application to deploy can now be built, tagged and pushed (after logging in to the registry):

# build the Docker image
$ docker build -f Dockerfile.openshift . -t url-shortener:latest
# tag the webapp image with the container registry URL 
# and the project name
$ docker tag url-shortener:latest $(minishift openshift registry)/sandbox/webapp:latest
# login to the container registry to be allowed to push
$ docker login -u developer -p $(oc whoami -t) $(minishift openshift registry)
# push to the container registry
$ docker push $(minishift openshift registry)/sandbox/webapp:latest
The push refers to a repository [172.30.1.1:5000/sandbox/webapp]
...
latest: digest: sha256:5b4a9516af90278fc1851aee9b849f26e7d7b890132b29958b9e265e3b375ed1 size: 2220

Immediately after the image was pushed, we can see that a new deployment was triggered:

$ oc get pods
NAME                        READY     STATUS    RESTARTS   AGE
postgres-3122534418-mfzkv   1/1       Running   0          17h
webapp-1-5kmfk              1/1       Running   0          1m
webapp-1-vt2mr              1/1       Running   0          1m
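The rollout can also be verified from the DeploymentConfig itself; a sketch:

```shell
# wait for the latest rollout to complete
oc rollout status dc/webapp

# the REVISION column should now show 1, with 2/2 replicas
oc get dc webapp
```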

Once the NodePort service is available, the web application can be reached from a terminal session:

$ oc apply -f templates/webapp-service.yml
service "webapp" created
$ minishift ip
192.168.99.100
$ oc get svc
NAME       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
postgres   172.30.27.62    <none>        5432/TCP         44m
webapp     172.30.163.80   <nodes>       8080:31317/TCP   1m
$ curl http://192.168.99.100:31317/status
build.time: 2018-11-10T16:30:03Z - build.commit: f21c627 👷‍
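The webapp-service.yml manifest applied above is not listed in this article. A minimal sketch of such a NodePort service, assuming the container listens on port 8080 and the pods carry the app: webapp label from the DeploymentConfig; in practice the node port would usually be allocated by the cluster rather than pinned:

```
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
  - port: 8080
    targetPort: 8080
```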

From now on, each time a new Docker image is built and pushed to the container registry, a new version of the application will be rolled out 🎉


The code for the url-shortener application, including the makefile and all the manifests to deploy on OpenShift, is available at https://github.com/xcoulon/go-url-shortener/tree/build_deploy_openshift