5 Steps to Enterprise Secure Kubernetes Deployment with OpenShift Local

Explore a single hybrid cloud platform programming model

Jeremy Caine
AI+ Enterprise Engineering
7 min read · May 24, 2023


In this article we will explore how to achieve enterprise-ready secure container development in your local environment with OpenShift.

Companies pursue hybrid cloud platforms as an integrated target for cloud-first application delivery using virtual machines, containers, and serverless. The security and sovereignty of these applications, and of where they are hosted, are of paramount importance. By introducing the Red Hat OpenShift Container Platform as a single platform programming model, enterprise technology teams can take advantage of “build once, deploy anywhere” execution strategies.

The Red Hat OpenShift Container Platform is an ecosystem for an enterprise-first, secure container development lifecycle. It is the downstream product based on the upstream open source OKD project. OKD is the community distribution of Kubernetes optimized for continuous application development and multi-tenant deployment. Application container images built and deployed using OpenShift practices are fully Kubernetes compliant and can be deployed to any Kubernetes cluster.

The five key steps to exploring secure enterprise practices using OpenShift in your local development environment are outlined in detail at https://github.com/jeremycaine/hello-world

1. Build and run a containerised application using Podman

2. Understand the co-existence of Podman and OpenShift Local

3. Use best practices to create a universal application image

4. Configure pod security into your OpenShift deployment

5. Use OpenShift Templates to ensure repeatable build and deployment

Security is inherently built into OpenShift. When you first build a vanilla container application and deploy it to OpenShift, you will encounter various errors and warnings. The walkthrough uses these encounters to explain how to optimise the build and deployment for OpenShift.

This walkthrough was built on a MacBook Pro M1, which uses the `aarch64` architecture. With the Mac install of Podman you need to resolve a couple of networking issues, and you need to ensure you download the OpenShift Local build for the right architecture.

Build and run a containerised application using Podman

We need a sample application to demonstrate the features of OpenShift. In this walkthrough we will use a simple “Hello World” Node.js application.

➜ ~ curl localhost:3000
Hello World ! (version v1)

Well-architected enterprise applications support features that make them well-behaved, understood, and able to integrate into a wider enterprise technology operations ecosystem. This trivial web application serves a “Hello World” message when called and also implements signal interrupt handling so it can shut down gracefully. Later, its Kubernetes deployment is configured with liveness and health probes that integrate with the app.
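As an illustration only (not the repository's exact code), a minimal Node.js server along these lines would cover the behaviours just described: the greeting response, a health endpoint for the probes to call, and graceful shutdown on interrupt signals. The /healthz path and APP_VERSION variable are assumptions for this sketch.

// server.js: minimal sketch, not the repository's exact code
const http = require('http');

const port = process.env.PORT || 3000;
const version = process.env.APP_VERSION || 'v1';

const server = http.createServer((req, res) => {
  // hypothetical health endpoint for the Kubernetes probes to call
  if (req.url === '/healthz') {
    res.writeHead(200).end('OK');
    return;
  }
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end(`Hello World ! (version ${version})\n`);
});

server.listen(port, () => console.log(`listening on port ${port}`));

// signal interrupt handling: stop accepting new connections, then exit
const shutdown = (signal) => {
  console.log(`received ${signal}, shutting down`);
  server.close(() => process.exit(0));
};
process.on('SIGTERM', () => shutdown('SIGTERM'));
process.on('SIGINT', () => shutdown('SIGINT'));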

Podman is another open-source project led by Red Hat and is an alternative to Docker for building and running containerised applications. It is image- and registry-compatible with Docker and lets you use the familiar Docker commands, but it has a stronger built-in security model: Podman does not rely on a background daemon process, supports executing containers as non-root users, and isolates containers in a user namespace by default.

First, we install Podman and show a simple build and deploy of the Hello World application, which runs in a standalone Podman virtual machine.

https://github.com/jeremycaine/hello-world/blob/main/walkthrough/2-local-podman-build-and-run.md
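Assuming the Dockerfile sits at the repository root and using an illustrative image name of hello-world, the standalone Podman loop looks roughly like this:

# create and start the standalone Podman machine (macOS)
podman machine init
podman machine start

# build the image and run it, publishing the application port
podman build -t hello-world:v1 .
podman run --rm -d -p 3000:3000 --name hello hello-world:v1

# check it responds, then stop the container
curl localhost:3000
podman stop hello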

Understand the co-existence of Podman and OpenShift Local

OpenShift Local was formerly known as CodeReady Containers, hence its command-line tool is crc. It installs a pre-configured OpenShift cluster on your local machine, where a single node acts as both control plane and worker. Its container runtime is Podman. Later we will see OpenShift performing source-to-image builds as part of a deployment. Both OpenShift builds and podman build use Buildah to create OCI (Open Container Initiative) images.

https://github.com/jeremycaine/hello-world/blob/main/walkthrough/3-openshift-local.md

With OpenShift Local installed, a different VM instance of the Podman engine runs to support OpenShift Local (CRC) container execution. Because standalone Podman and OpenShift Local (CRC) are two different system environments, there are two different image repositories. Running podman images shows different results depending on whether you use the default machine that the standalone Podman install creates or the machine that the CRC environment runs when it is up.
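You can see the two environments side by side; the machine and connection names below are typical defaults and may differ on your install:

# list the Podman machines known to the client
podman machine list

# list the connections; standalone Podman and the CRC environment each register their own
podman system connection list

# switch the default connection, e.g. back to the standalone machine
podman system connection default podman-machine-default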

Use best practices to create a universal application image

To show how OpenShift implements security, we deploy a Podman-built container image into OpenShift Local. Initially we use a simple Dockerfile, like many of the examples you see all over the web. When the image is built from this Dockerfile, the deployment flags up security warnings and, more dramatically, fails with CrashLoopBackOff. The deployment is attempting to start the Node.js HTTP server, and this fails because the standard container image build contains root-owned files.

https://github.com/jeremycaine/hello-world/blob/main/walkthrough/4-deploy-local-image-to-openshift.md

By default, the security model of OpenShift does not allow applications to perform root actions. This is a key OpenShift concept: guard rails for secure, enterprise-ready, container-based application operations.
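For illustration, a typical naive Dockerfile of the kind described above looks something like this; it builds and runs happily under Docker or Podman, but its files are owned by root and it expects to run as root:

# illustrative naive Dockerfile, not the repository's optimised one
FROM node:16
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
EXPOSE 3000
CMD ["npm", "start"]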

We need to introduce a more sophisticated Dockerfile, one that implements the best practices for designing a universal application image.

# Two stage build

# Stage 1: builder image
FROM registry.access.redhat.com/ubi9/nodejs-16 AS builder

## Add application sources
ADD . $HOME

## Install dependencies based on the preferred package manager
COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
RUN \
if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
elif [ -f package-lock.json ]; then npm ci; \
elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i; \
else echo "Lockfile not found." && exit 1; \
fi

# Stage 2: deployment image
## 1. Universal Base Image (UBI)
FROM registry.access.redhat.com/ubi9/nodejs-16-minimal

## 2. Non-root, arbitrary user IDs
USER 1001

## 3. Image identification
LABEL name="jeremycaine/hello-word" \
vendor="Acme, Inc." \
version="1.2.3" \
release="45" \
summary="hello world web application" \
description="This application says hello world."

USER root

## 4. Image license
## Red Hat requires that the image store the license file(s) in the /licenses directory.
## Store the license in the image to self-document
COPY ./licenses /licenses

## 5. Latest security updates
## RUN dnf -y update-minimal --security --sec-severity=Important --sec-severity=Critical && \
## dnf clean all

## on minimal UBI images use microdnf
RUN microdnf -y upgrade && \
microdnf clean all

## 6. Group ownership and file permission
## still running as root here so the ownership and permission changes succeed
RUN chgrp -R 0 $HOME && \
chmod -R g=u $HOME

## 7. Application source
## Copy the application source and build artifacts from the builder image to this one,
## owned by the non-root user (UID 1001) and the root group
COPY --from=builder --chown=1001:0 $HOME $HOME

## switch back to the non-root, arbitrary user for runtime
USER 1001

## Set environment variables and expose port
ENV NODE_ENV production
ENV PORT 3000
EXPOSE 3000

## Run script uses standard ways to run the application
CMD npm run -d start

This Dockerfile creates efficient (smaller image footprint), high-quality (secure) images that are 100% Kubernetes compliant and execute in the OpenShift environment.

Configure pod security into your OpenShift deployment

At this stage we have an enterprise-grade container image of our application that can deploy and execute in OpenShift. In the next step we enhance the security model of the deployment so that it is ready for a true enterprise deployment.

In any complex technology environment, whether an enterprise, a SaaS provider, or a platform provider, we operate in a multi-tenant world with many consumption points and access control regimes, e.g. different divisions and lines of business within an enterprise, and/or consumers in different geographic and industry segments.

The developer codes the app to access protected functions, e.g. reading from and writing to a file system. The deployer creates a deployment manifest that requests the access the application requires. The cluster administrator controls the granting of access to those protected functions.

The DevOps teams building and deploying applications to OpenShift each have their own set of identities, which can be organised into “service accounts”. The deployment of an application is bound by the privileges of a service account, which allows finer-grained control over which protected functions the application can access. OpenShift introduces a custom resource called the Security Context Constraint (SCC) as the foundation for this.

https://github.com/jeremycaine/hello-world/blob/main/walkthrough/5-configure-pod-security.md

In the walkthrough we create the linkages between the SCC, role-based access control and the service account the DevOps team will deploy the application under.

For the development system we use simple dummy IDs for user and group roles. In a real-world deployment these values would be replaced with values from the security design of the environment. The additions at this point in the walkthrough show how those definition points fit into the deployment actions and Kubernetes configurations, as the sketch below illustrates.
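As a sketch of that linkage (the service account name hello-world-sa, the restricted-v2 SCC and the deployment name are illustrative choices, not necessarily those used in the walkthrough), the pieces connect roughly as follows:

# create a service account for the application to be deployed under
oc create serviceaccount hello-world-sa

# grant the service account an SCC directly ...
oc adm policy add-scc-to-user restricted-v2 -z hello-world-sa

# ... or via RBAC: a role allowed to "use" the SCC, bound to the service account
oc create role use-scc --verb=use --resource=securitycontextconstraints.security.openshift.io --resource-name=restricted-v2
oc create rolebinding use-scc --role=use-scc --serviceaccount=<project>:hello-world-sa

# the deployment then runs its pods under that service account
oc set serviceaccount deployment/hello-world hello-world-sa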

Use OpenShift Templates to ensure repeatable build and deployment

Templates are a simple way to package together a repeatable build and deploy process for your OpenShift Local environment. There are myriad continuous integration and deployment tools and processes (e.g. Tekton, Argo CD). A typical enterprise or provider DevOps team will implement a secure-lifecycle CI/CD release toolchain, which might include the use of OpenShift Templates.

In a local development environment, OpenShift Templates give you a parameter-driven approach to triggering the build and deployment of the application into the cluster.

https://github.com/jeremycaine/hello-world/blob/main/walkthrough/6-using-templates.md

This approach is facilitated by the OpenShift custom resources BuildConfig, DeploymentConfig, and ImageStream.
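As a sketch (the file and parameter names are illustrative), processing a template generates those objects in one pass from a small set of parameters:

# upload the template to the project, then instantiate it with parameters
oc create -f hello-world-template.yaml
oc new-app --template=hello-world -p APP_NAME=hello-world -p SOURCE_REPOSITORY_URL=https://github.com/jeremycaine/hello-world

# or process the template directly and apply the resulting objects
oc process -f hello-world-template.yaml -p APP_NAME=hello-world | oc apply -f -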

Wrapping Up

The final step of the walkthrough shows the security and size efficiency we have achieved and its Kubernetes compliance.

Quay.io's built-in image security scan: the optimised OpenShift container build has a smaller image size and no security vulnerabilities.

To demonstrate this, we can take the final OpenShift-optimised container image and deploy it without any further modification into a non-OpenShift environment. Using Minikube in the local environment, we use kubectl to deploy and execute the “Hello World” application, proving it works in alternative Kubernetes environments.
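A rough sketch of that check, assuming the image has been pushed to a registry the cluster can pull from (the image reference below is illustrative):

# start a local, non-OpenShift Kubernetes cluster
minikube start

# deploy the same image, expose it and call it
kubectl create deployment hello-world --image=quay.io/jeremycaine/hello-world:v1
kubectl expose deployment hello-world --type=NodePort --port=3000
kubectl get pods
curl $(minikube service hello-world --url)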

This article has only touched the surface of the secure, enterprise practices that can be introduced at the earliest point of the lifecycle. The OpenShift Container Platform gives us a 100% Kubernetes compliant target with a domain-specific deployment and cluster configuration model. Its goal is to build enterprise security and efficiency into the full lifecycle of container-based application development and delivery.

(Views in this article are my own.)
