How to create a multi-cluster secure supply chain (SLSA 3) in 10 min (OSS edition)

Jean-Philippe Gouin
11 min read · Mar 15, 2024


TL;DR: https://github.com/jp-gouin/multi-cluster-supply-chain

In today’s fast-paced world of software development, where continuous integration and continuous deployment (CI/CD) pipelines reign supreme, ensuring the security of your software supply chain is paramount. With the proliferation of containerised applications, Kubernetes has emerged as the de facto standard for orchestrating these containers at scale. However, as the adoption of Kubernetes grows, so do the security challenges associated with it.

One critical aspect of Kubernetes security is securing the supply chain, which involves safeguarding every step of the software delivery process, from code development to deployment. This includes ensuring that only trusted code and containers make their way into your Kubernetes clusters, thereby mitigating the risk of vulnerabilities, malware, and other security threats.

In this article, we’ll delve into the importance of securing your supply chain in Kubernetes and provide a step-by-step guide on how to create your own secure supply chain in less than 10 minutes. By implementing the best practices outlined here, you’ll be better equipped to fortify your Kubernetes environment against potential security breaches and maintain the integrity of your applications throughout their lifecycle.

We’ll be adhering to the principles outlined in the Supply Chain Levels for Software Artifacts (SLSA) framework.

What is SLSA?

SLSA (pronounced “salsa”) is a security framework from source to service, giving anyone working with software a common language for increasing levels of software security and supply chain integrity. It’s how you get from safe enough to being as resilient as possible, at any link in the chain.

https://github.com/slsa-framework/slsa

SLSA, as of today, provides 4 security levels, which are all described here.

In this article we are building to SLSA L3, the most advanced level available at the time of writing.

What does that mean?

  • Package has provenance showing how it was built
  • Provenance artefact is signed
  • Build platform runs on dedicated infrastructure
  • Build platform prevents runs from influencing one another, even within the same project
  • Build platform prevents secret material used to sign the provenance from being accessible to the user-defined build steps

The environment

All files and setups are available on GitHub

Environment

Let’s look at what is inside:

  • Kind: a tool for running local Kubernetes clusters; it sets up our two clusters in this scenario
  • Argo Workflows: an open-source, container-native workflow engine for orchestrating parallel jobs on Kubernetes
  • Argo Events: an event-driven workflow automation framework for Kubernetes
  • Kpack: extends Kubernetes and utilises unprivileged Kubernetes primitives to provide builds of OCI images as a platform implementation of Cloud Native Buildpacks (CNB)
  • vCluster: helps you create fully working Kubernetes clusters that run on top of other Kubernetes clusters
  • Syft: a tool for generating a Software Bill of Materials (SBOM) from container images
  • Cosign: a tool to sign OCI containers (and other artifacts) and to generate in-toto attestations

In order to achieve L3, we need two Kubernetes clusters: one handling builds and one running the apps. For the purpose of the demo, let’s use two Kind clusters on separate VMs (a sketch of their creation follows the list):

  • Cluster chain-prod
  • Cluster chain-dev
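
The clusters themselves can be bootstrapped with two Kind commands. A minimal sketch, assuming one VM per cluster and leaving out any node or port customisation (for example, exposing the ingress):

# on the production VM
kind create cluster --name chain-prod

# on the development (build) VM
kind create cluster --name chain-dev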

To isolate builds from one another, we use vCluster, as it offers the strong isolation between environments and the access control needed to achieve SLSA L3.

We are going to use SPDX as the standard for the SBOM (Software Bill of Materials), which is generated using Syft.

Cosign is used to generate the attestation using the in-toto attestation framework and also to sign all artefacts produced in the supply chain.

A software attestation is an authenticated statement (metadata) about a software artifact or collection of software artifacts. The primary intended use case is to feed into automated policy engines, such as in-toto and Binary Authorization.

More on In-toto and SLSA https://slsa.dev/blog/2023/05/in-toto-and-slsa

The complete platform deployment can be found on https://github.com/jp-gouin/multi-cluster-supply-chain

The supply chain

Because we want to focus on the developer experience and be user-centric, we don’t want this supply chain to be a burden for the user.

All the user needs to bring is their GitHub repository and source code.

Supply chain

The supply chain reacts to a push to the developer’s GitHub repository using Argo Events.

When a push is performed, the Sensor is triggered and creates a Workflow, injecting the name of the repository, the URL of the repository and the SHA of the most recent commit.

Note: The entire supply chain runs on the chain-prod platform, and the signing material is only available there. The developer does not have access to it.

apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: github
spec:
  template:
    serviceAccountName: operate-workflow-sa
  dependencies:
    - name: test-dep
      eventSourceName: github
      eventName: spring-petclinic
      filters:
        data:
          # Type of Github event that triggered the delivery: [pull_request, push, issues, label, ...]
          # https://docs.github.com/en/developers/webhooks-and-events/webhook-events-and-payloads
          - path: headers.X-Github-Event
            type: string
            value:
              - push
  triggers:
    - template:
        name: github-workflow-trigger
        k8s:
          operation: create
          source:
            resource:
              ...

          parameters:
            - src:
                dependencyName: test-dep
                dataKey: body.repository.name
              dest: spec.arguments.parameters.1.value
            - src:
                dependencyName: test-dep
                dataKey: body.repository.html_url
              dest: spec.arguments.parameters.2.value
            - src:
                dependencyName: test-dep
                dataKey: body.after
              dest: spec.arguments.parameters.3.value
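
The Sensor depends on an EventSource named github that emits the spring-petclinic event; that resource is not shown above. Here is a minimal sketch of such an EventSource, where the repository owner, the webhook URL and the github-access token secret are placeholders to adapt:

apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: github
spec:
  service:
    ports:
      - port: 12000
        targetPort: 12000
  github:
    spring-petclinic: # must match the Sensor’s eventName
      repositories:
        - owner: <github-user>
          names:
            - spring-petclinic
      webhook:
        endpoint: /push
        port: "12000"
        method: POST
        url: http://<public-webhook-url>
      events:
        - "*"
      apiToken:
        name: github-access # secret holding a GitHub API token
        key: token
      contentType: json
      active: true
      insecure: true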

The supply chain will set up the temporary build environment, build the code, attest the build and code, and store the container image, attestation and signature in DockerHub.

Finally, it’ll create a deployment in the chain-prod cluster and expose it.

To run, the supply chain needs 3 secrets (a sketch of how to create them follows the list):

  • dev-kubeconfig: containing a kubeconfig with limited access to the chain-dev cluster
  • tutorial-registry-credentials: containing your DockerHub credentials
  • cosign: containing the private key to sign materials
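
A minimal sketch of creating these secrets in the argo namespace of chain-prod (the namespace matches the k8s://argo/cosign key reference used later in the attest step; file paths and credentials are placeholders):

# kubeconfig with limited access to chain-dev, stored under the
# file name the workflow steps expect (chain-dev.config)
kubectl -n argo create secret generic dev-kubeconfig \
  --from-file=chain-dev.config=./chain-dev.config

# DockerHub credentials used to push and pull images
kubectl -n argo create secret docker-registry tutorial-registry-credentials \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<user> \
  --docker-password=<token>

# Cosign can generate its key pair directly as a Kubernetes secret named cosign
cosign generate-key-pair k8s://argo/cosign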

The supply chain takes several parameters:

  • container-repo: Your DockerHub repository
  • app-name: Injected from the Sensor (GitHub repo name)
  • app-repo: Injected from the Sensor (GitHub repo URL)
  • app-revision: Injected from the Sensor (SHA of the most recent commit)

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-
spec:
  entrypoint: my-awesome-supply-chain
  onExit: exit-handler
  serviceAccountName: argo-workflow-sa # Set ServiceAccount
  arguments:
    parameters:
      - name: "container-repo"
        value: "<dockerhub repo>"
      - name: "app-name"
        value: "spring-petclinic"
      - name: "app-repo"
        value: "https://github.com/spring-projects/spring-petclinic"
      - name: "app-revision"
        value: "3be289517d320a47bb8f359acc1d1daf0829ed0b"
  volumes:
    - name: my-secret-vol
      secret:
        secretName: dev-kubeconfig
    - name: my-registry-secret
      secret:
        secretName: tutorial-registry-credentials
  templates:
    - name: my-awesome-supply-chain
      # Instead of just running a container,
      # this template has a sequence of steps
      steps:
        - - name: vcluster # run first
            template: vcluster
        - - name: deploy-kpack # double dash => run after previous step
            template: deploy-kpack
        - - name: build-code # double dash => run after previous step
            template: build-code
        - - name: attest # double dash => run after previous step
            template: attest
            arguments:
              parameters:
                # Pass the build-sha output from the build-code step
                - name: build-sha
                  value: "{{steps.build-code.outputs.parameters.build-sha}}"
        - - name: create-deployment # double dash => run after previous step
            template: create-deployment
            arguments:
              parameters:
                # Pass the build-sha output from the build-code step
                - name: build-sha
                  value: "{{steps.build-code.outputs.parameters.build-sha}}"
        - - name: create-svc # double dash => run after previous step
            template: create-svc
        - - name: create-ingress # double dash => run after previous step
            template: create-ingress
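
The exit-handler template referenced by onExit is not shown here. A plausible minimal sketch, assuming it simply tears down the ephemeral vCluster namespace on chain-dev once the run finishes, whatever the outcome:

# hypothetical exit handler: delete the per-run vCluster namespace
- name: exit-handler
  container:
    image: jpgouin/vcluster-cli
    command: [/bin/sh, -c]
    args:
      - |
        export KUBECONFIG=/secret/mountpath/chain-dev.config
        kubectl delete namespace vcluster-{{workflow.name}} --ignore-not-found
    volumeMounts:
      - name: my-secret-vol
        mountPath: "/secret/mountpath"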

Now let’s take a closer look at each step.

Disclaimer

In the following section, you’ll see a lot of steps with inline shell scripts. I don’t particularly recommend this approach; instead, you should define WorkflowTemplates and use container templates.
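
For illustration, here is a minimal sketch of that approach; the vcluster-template name is hypothetical:

# a reusable WorkflowTemplate, defined once
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: vcluster-template
spec:
  templates:
    - name: vcluster
      container:
        image: alpine/helm
        # command, args and volumeMounts as in the vcluster step below

It is then referenced from the Workflow with a templateRef instead of an inline template:

steps:
  - - name: vcluster
      templateRef:
        name: vcluster-template
        template: vcluster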

vcluster

Note: This step is executed on the chain-dev cluster.

This step uses the Helm chart to deploy a vCluster in a dynamic namespace in the chain-dev cluster. For more details on the vCluster deployment, take a look at the documentation.

It runs the vCluster in isolated mode, which means that pods that try to run as a privileged container or mount a host path will not be synced to the host cluster.

- name: vcluster
  container:
    image: alpine/helm
    command: [helm]
    args:
      - --kubeconfig
      - /secret/mountpath/chain-dev.config
      - upgrade
      - --install
      - my-vcluster
      - vcluster
      - --repo
      - https://charts.loft.sh
      - --namespace
      - vcluster-{{workflow.name}}
      - --create-namespace
      - --repository-config=''
      - --set
      - isolation.enabled=true
    volumeMounts:
      - name: my-secret-vol # mount the chain-dev kubeconfig at /secret/mountpath
        mountPath: "/secret/mountpath"

deploy-kpack

Note: This step is executed on the chain-dev cluster.

This step deploys Kpack in the vCluster previously created.

In this scenario, Kpack is deployed using a ClusterStore, a ClusterStack and a Builder. For more details on Kpack resources and deployment, take a look at the documentation.

It’s a basic deployment, and the ClusterStore only contains build material for Java and NodeJS. The ClusterStore serves as a central repository where all the buildpacks used in the build process are stored. It is not a mandatory resource; you can use either a Buildpack or a ClusterBuildpack instead.

The ClusterStack specifies the base images (build and run) used to produce the resulting container image.

The Builder references a ClusterStore and a ClusterStack to determine which buildpacks to use, in which order, and how to build the container image. Builders are responsible for automatically converting application source code into runnable container images.

The Builder is stored in our container repository (DockerHub) and uses the tutorial-registry-credentials secret to push the image into the repository.

- name: deploy-kpack
  script:
    image: jpgouin/vcluster-cli
    command: [/bin/sh]
    source: |
      export KUBECONFIG=/secret/mountpath/chain-dev.config

      serviceaccount=$(cat <<EOF
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: tutorial-service-account
        namespace: default
      secrets:
        - name: tutorial-registry-credentials
      imagePullSecrets:
        - name: tutorial-registry-credentials
      EOF
      )

      clusterstore=$(cat <<EOF
      apiVersion: kpack.io/v1alpha2
      kind: ClusterStore
      metadata:
        name: default
      spec:
        sources:
          - image: gcr.io/paketo-buildpacks/java
          - image: gcr.io/paketo-buildpacks/nodejs
      EOF
      )

      clusterstack=$(cat <<EOF
      apiVersion: kpack.io/v1alpha2
      kind: ClusterStack
      metadata:
        name: base
      spec:
        id: "io.buildpacks.stacks.jammy"
        buildImage:
          image: "paketobuildpacks/build-jammy-base"
        runImage:
          image: "paketobuildpacks/run-jammy-base"
      EOF
      )

      builder=$(cat <<EOF
      apiVersion: kpack.io/v1alpha2
      kind: Builder
      metadata:
        name: my-builder
        namespace: default
      spec:
        serviceAccountName: tutorial-service-account
        tag: {{workflow.parameters.container-repo}}/builder
        stack:
          name: base
          kind: ClusterStack
        store:
          name: default
          kind: ClusterStore
        order:
          - group:
              - id: paketo-buildpacks/java
          - group:
              - id: paketo-buildpacks/nodejs
      EOF
      )

      # install Kpack in the vCluster, then apply the resources defined above
      vcluster connect my-vcluster -n vcluster-{{workflow.name}} -- kubectl apply -f https://github.com/buildpacks-community/kpack/releases/download/v0.13.2/release-0.13.2.yaml
      vcluster connect my-vcluster -n vcluster-{{workflow.name}} -- kubectl create secret -n default docker-registry tutorial-registry-credentials --from-file=.dockerconfigjson=/config/config.json
      echo "$serviceaccount" | vcluster connect my-vcluster -n vcluster-{{workflow.name}} -- kubectl apply -f -
      echo "$clusterstore" | vcluster connect my-vcluster -n vcluster-{{workflow.name}} -- kubectl apply -f -
      echo "$clusterstack" | vcluster connect my-vcluster -n vcluster-{{workflow.name}} -- kubectl apply -f -
      echo "$builder" | vcluster connect my-vcluster -n vcluster-{{workflow.name}} -- kubectl apply -f -
    volumeMounts:
      - name: my-secret-vol # mount the chain-dev kubeconfig at /secret/mountpath
        mountPath: "/secret/mountpath"
      - name: my-registry-secret # mount the DockerHub credentials at /config
        mountPath: "/config"

build-code

Note: This step is executed on the chain-dev cluster.

The code is built using the Kpack CLI (kp), and the step waits for the image to be built. The build log is streamed to standard output and to an output file that is used to determine the SHA of the built container image.

The kp image create command initiates a build process using the specified builder, service account, and source code repository. It fetches the source code from the Git repository, uses the specified builder to build the container image, and tags the resulting image with the provided repository and application name. Finally, it waits for the build process to complete before returning.

For more details on the Buildpack process, take a look at the documentation.

- name: build-code
  script:
    image: jpgouin/vcluster-cli
    command: [/bin/sh]
    source: |
      export KUBECONFIG=/secret/mountpath/chain-dev.config
      vcluster connect my-vcluster -n vcluster-{{workflow.name}} -- kubectl wait -n default --for=condition=ready builder my-builder
      vcluster connect my-vcluster -n vcluster-{{workflow.name}} -- kp image create tutorial-image --builder my-builder --service-account tutorial-service-account --tag {{workflow.parameters.container-repo}}/{{workflow.parameters.app-name}} --git {{workflow.parameters.app-repo}} --git-revision {{workflow.parameters.app-revision}} -w | tee /tmp/kp.build
      cat /tmp/kp.build
      # extract the last sha256 digest seen in the build log
      awk '{match($0, /sha256:[0-9a-f]+/); if (RSTART) {hash=substr($0, RSTART, RLENGTH)}} END{print hash}' /tmp/kp.build > /tmp/app.sha
    volumeMounts:
      - name: my-secret-vol # mount the chain-dev kubeconfig at /secret/mountpath
        mountPath: "/secret/mountpath"
  outputs:
    parameters:
      - name: build-sha # name of output parameter
        valueFrom:
          path: /tmp/app.sha

attest

Note: This step is executed on the chain-prod cluster.

Once the build is complete, the next step in the workflow involves signing, scanning, and attesting the container image, followed by verification of the attestation.

Cosign signs the built container image using the private key from the chain-prod cluster.

Syft generates the SBOM of the built container; the output is in spdx-json format, but it could also be in cyclonedx or syft format.

cosign attest generates an in-toto attestation. Here is a natural-language example of an attestation from SLSA. In our case, the Subject is the built container image, the Predicate is our SBOM, and the signature is produced with the Cosign key. The Envelope is an OCI artefact pushed to the registry.

Natural example of an attestation (source)

The attestation is verified using cosign verify-attestation.

- name: attest
  inputs:
    parameters:
      - name: build-sha
  script:
    image: jpgouin/vcluster-cli
    command: [/bin/sh]
    source: |
      cosign sign --key k8s://argo/cosign {{workflow.parameters.container-repo}}/{{workflow.parameters.app-name}}@{{inputs.parameters.build-sha}} -y
      syft scan {{workflow.parameters.container-repo}}/{{workflow.parameters.app-name}} -o spdx-json=spring-project-spdx-json.json
      cosign attest --predicate spring-project-spdx-json.json --type spdxjson --key k8s://argo/cosign {{workflow.parameters.container-repo}}/{{workflow.parameters.app-name}}@{{inputs.parameters.build-sha}} -y
      cosign verify-attestation {{workflow.parameters.container-repo}}/{{workflow.parameters.app-name}}@{{inputs.parameters.build-sha}} --key k8s://argo/cosign --type spdxjson --output-file /tmp/attestation.json
    env:
      - name: DOCKER_CONFIG
        value: "/config"
    volumeMounts:
      - name: my-registry-secret # mount the DockerHub credentials at /config
        mountPath: "/config"
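
To inspect what was pushed, the DSSE envelope saved by cosign verify-attestation can be decoded locally. A minimal sketch, assuming jq is available and the attestation.json file has been copied out of the step:

# the payload field is the base64-encoded in-toto statement
# (subject, predicateType, predicate)
jq -r '.payload' /tmp/attestation.json | base64 -d | jq '{_type, subject, predicateType}'
# an SPDX SBOM attestation should report predicateType https://spdx.dev/Document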

Deployment

Finally, the supply chain deploys the solution on the chain-prod platform.

A Deployment is created using the SHA of the OCI image that was built. The Deployment is exposed through a Service and an Ingress.

- name: create-deployment
  inputs:
    parameters:
      - name: build-sha
  resource: # indicates that this is a resource template
    action: apply # can be any kubectl action (e.g. create, delete, apply, patch)
    manifest: |
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: spring-petclinic
        namespace: app
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: spring-petclinic
        template:
          metadata:
            labels:
              app: spring-petclinic
          spec:
            containers:
              - name: spring-petclinic
                image: {{workflow.parameters.container-repo}}/{{workflow.parameters.app-name}}@{{inputs.parameters.build-sha}}
                ports:
                  - containerPort: 8080
- name: create-svc
  resource:
    action: apply
    manifest: |
      apiVersion: v1
      kind: Service
      metadata:
        name: spring-petclinic
        namespace: app
      spec:
        selector:
          app: spring-petclinic
        ports:
          - protocol: TCP
            port: 80
            targetPort: 8080
- name: create-ingress
  resource:
    action: apply
    manifest: |
      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: spring-petclinic-ingress
        namespace: app
      spec:
        rules:
          - host: spring-petclinic.chain-prod.com
            http:
              paths:
                - path: /
                  pathType: Prefix
                  backend:
                    service:
                      name: spring-petclinic
                      port:
                        number: 80

Supply Chain in Action

After a commit & push to the GitHub repository, we can see some pods running in the argo namespace of chain-prod :)

We can also see the supply chain running in the Argo UI.

Once the supply chain is complete, we can access our pet-clinic.

Hi from the pet-clinic :)

Hi from pet-clinic
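
For a quick smoke test from the command line, we can resolve the Ingress host manually; the node IP is a placeholder that depends on how the ingress controller is exposed in your Kind setup:

curl -H "Host: spring-petclinic.chain-prod.com" http://<chain-prod-node-ip>/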

What’s Next?

In an upcoming article, we’ll see how to enforce security policies in our chain-prod cluster by verifying signatures and the content of all SBOMs.

And thank you for reading this article!

All files (and more) are available on GitHub.
