Scaffolding sigstore

Andrew Block · Published in sigstore · Jul 12, 2022

Co-authored by Andrew Block (Red Hat), James Strong (Chainguard), and Ville Aikas (Chainguard)

The sigstore project contains an array of tools that aim to make signing and verifying content easy. By default, many of these components integrate with the public-good instance, a hosted set of services that support the sigstore ecosystem. Each day, these services help promote integrity and provenance for thousands of developers and their projects.

While these hosted services streamline many of the concepts implemented by these tools, there are circumstances where one would not want to communicate with a public instance, such as restricted or disconnected networks or regulatory requirements, and would instead prefer to run the services in their own environment.

In a prior article, sigstore, the local way, we described how each of the supporting sigstore components can be run on a single local machine. While that approach illustrated how to bring the sigstore experience locally, it required numerous manual operations, including building components from source and installing the remaining assets. To eliminate those burdens, the same set of sigstore components can instead be installed into a Kubernetes environment. This article describes how to install sigstore on Kubernetes and then certify the deployment by signing and verifying content with the newly deployed assets.

Scaffolding Components

Deploying sigstore to a Kubernetes cluster not only reduces the installation burden, but also brings the benefits of a Kubernetes based deployment, including health checking and load balancing. To assist with this deployment, a repository called scaffolding within the sigstore GitHub organization offers resources to aid in setup and configuration. You may already be familiar with hosted sigstore services such as Rekor, an immutable, tamper-resistant ledger, and Fulcio, a code signing Certificate Authority that issues short lived certificates. Also included are a Certificate Transparency Log, which tracks the certificates generated by Fulcio, and Trillian, a verifiable log that makes use of Merkle Tree data structures. The diagram below describes the relationship between the different components in the scaffolding architecture.

sigstore scaffolding architecture

An in depth analysis of these components and their interaction with each other can be found within the scaffolding repository documentation.

Deploying sigstore

The deployment of the sigstore assets, along with the tooling provided by the scaffolding repository, to a Kubernetes cluster is facilitated by a set of Helm charts produced by the sigstore project and contained within the helm-charts repository. While these charts, and sigstore itself, can be deployed to any Kubernetes environment, for the sake of testing in a localized environment, let's walk through a deployment to a Minikube instance. The next section describes the tools and utilities that are required.

Required tools:

  • minikube
  • kubectl
  • Helm
  • curl, jq, and openssl (used for the verification steps at the end of the article)
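
Before proceeding, a quick sanity check confirms each tool is available on your PATH (the exact versions are not important for this walkthrough, but recent releases are assumed):

minikube version
kubectl version --client
helm version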

Performing the Deployment

Once all of the tools have been installed, the first step is to start the minikube instance by running the following command. The --extra-config parameter sets the issuer of the OIDC provider on the Kubernetes API server, which will later be referenced by the sigstore deployment.

minikube start \
--extra-config=apiserver.service-account-issuer=https://kubernetes.default.svc

Once the cluster is deployed, the next step is to enable external access by deploying an Ingress Controller. Minikube includes an addon for deploying an NGINX based Ingress Controller. Execute the following command to enable the addon which will create and deploy resources in the ingress-nginx namespace.

minikube addons enable ingress
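
If you want to confirm the Ingress Controller is running before continuing, list the pods in the ingress-nginx namespace; the controller pod should eventually reach the Running state:

kubectl get pods -n ingress-nginx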

Next, create a ClusterRoleBinding so that the OIDC Well-Known Configuration Endpoint for Kubernetes can be queried by any unauthenticated user. The ClusterRole system:service-account-issuer-discovery provides the necessary policies to enable this action. Execute the following command to create the ClusterRoleBinding.

cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-reviewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:service-account-issuer-discovery
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:unauthenticated
EOF
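
To optionally confirm the discovery document is being served, it can be fetched through the API server (jq is only used here for readability); the issuer should match the value configured when starting minikube:

kubectl get --raw /.well-known/openid-configuration | jq -r .issuer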

Next, since resources are exposed using an Ingress, we want to ensure the communication is secured with TLS certificates. These could be provided by a trusted Certificate Authority or generated manually. cert-manager can be used to assist with the generation of the certificates and integrates with Ingress resources based on the declared definition.

Deploy cert-manager with Helm using the following steps:

First add the Jetstack helm repository:

helm repo add jetstack https://charts.jetstack.io
helm repo update

Then install the cert-manager chart

helm install \
cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--version v1.8.0 \
--set installCRDs=true

Once installed, cert-manager can be integrated with Certificate Authorities, such as Let's Encrypt, but for the purpose of this demonstration, we will use a SelfSigned ClusterIssuer called selfsigned-issuer, which will generate self-signed certificates for our Ingress resources.

cat << EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
EOF
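
Optionally, confirm the issuer was registered and reports a Ready status before moving on:

kubectl get clusterissuer selfsigned-issuer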

Now that all of the prerequisites have been accounted for, the next step is to deploy sigstore. Start by adding the sigstore Helm repository:

helm repo add sigstore https://sigstore.github.io/helm-charts
helm repo update

Finally install the scaffold chart:

helm upgrade -i scaffold sigstore/scaffold \
--set rekor.server.ingress.hosts[0].host=rekor.$(minikube ip).nip.io \
--set fulcio.server.ingress.http.hosts[0].host=fulcio.$(minikube ip).nip.io \
--set fulcio.server.ingress.http.hosts[0].path=/ --set rekor.server.ingress.hosts[0].path=/ \
--set ctlog.createctconfig.backoffLimit=30 \
--set rekor.server.ingress.tls[0].hosts[0]="rekor.$(minikube ip).nip.io" \
--set rekor.server.ingress.tls[0].secretName=rekor-tls \
--set fulcio.server.ingress.tls[0].hosts[0]="fulcio.$(minikube ip).nip.io" \
--set fulcio.server.ingress.tls[0].secretName=fulcio-tls \
--set rekor.server.ingress.annotations."cert-manager\.io\/cluster-issuer"=selfsigned-issuer \
--set rekor.server.ingress.annotations."cert-manager\.io\/common-name"="rekor.$(minikube ip).nip.io" \
--set fulcio.server.ingress.http.annotations."cert-manager\.io\/cluster-issuer"=selfsigned-issuer \
--set fulcio.server.ingress.http.annotations."cert-manager\.io\/common-name"="fulcio.$(minikube ip).nip.io" \
-n sigstore \
--create-namespace

There is a lot to take in as part of this command, so let's break it down line by line:

  1. Install the scaffold chart from the sigstore repository with the release name scaffold
  2. Set the hostname for the Rekor Ingress resource using the IP address of the minikube instance. We are leveraging the service nip.io which allows for dns names to be resolved based on IP addresses.
  3. Set the hostname for the Fulcio Ingress resource using the IP address of the minikube instance, again leveraging the nip.io service.
  4. Set the context path for both the Fulcio and Rekor Ingress resources
  5. Sets the number of attempts for the CTLog configuration job
  6. Sets the TLS hostname for the Rekor Ingress resource
  7. Sets the name of the secret containing the certificates for the Rekor ingress resource
  8. Sets the TLS hostname for the Fulcio Ingress resource
  9. Sets the name of the secret containing the certificates for the Fulcio ingress resource
  10. Sets an annotation on the Rekor ingress resource with the name of the ClusterIssuer. By setting this value, cert-manager will use the parameters provided in the Ingress, generate a certificate, and store it in the referenced secret
  11. Sets the annotation on the Rekor ingress resource with the common name to apply to the certificate. The value will match the TLS hostname specified earlier
  12. Same as #10, but targeting the Fulcio Ingress resource
  13. Same as #11, but targeting the Fulcio Ingress resource
  14. Create the chart in the sigstore namespace
  15. Create the sigstore namespace if it does not already exist

Once the chart has deployed, four (4) namespaces will be created:

  • ctlog-system
  • fulcio-system
  • rekor-system
  • trillian-system

A series of Kubernetes jobs will be initiated to complete the initialization process. Once each of the jobs has completed successfully, sigstore is ready to use.
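
The progress of the initialization can be followed by listing the jobs across all namespaces; each should eventually report completion (exact job names vary between chart versions):

kubectl get jobs -A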

Verifying the sigstore Deployment

No automated process would be complete without performing some form of verification. Since sigstore is designed to sign and verify content, let's walk through the process of signing and verifying an image. The easiest method to accomplish this validation is to execute a Kubernetes Job within the cluster that first signs an image and then verifies the previously signed content.

Before the job can be created, a few supporting components need to be provided:

  1. CTLog public key
  2. Configurations associated with Fulcio

Copy the secrets from their respective namespaces into the default namespace by executing the following commands.

kubectl -n ctlog-system get secrets ctlog-public-key -o yaml | sed 's/namespace: .*/namespace: default/' | kubectl apply -f -
kubectl -n fulcio-system get secrets fulcio-server-secret -oyaml | sed -e 's/namespace: .*/namespace: default/' -e 's/name: .*/name: fulcio-secret/' | kubectl apply -f -
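
A quick check confirms that both secrets now exist in the default namespace:

kubectl get secrets -n default ctlog-public-key fulcio-secret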

Finally, while we could push signatures to a public registry, for demonstration purposes we will deploy a lightweight container registry to serve as the destination for our signed content.

Execute the following commands to create a new Certificate, which cert-manager will provision, along with a Service and Deployment resource for the registry:

cat << EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: registry-tls
  namespace: default
spec:
  commonName: registry.default.svc
  dnsNames:
    - registry.default.svc
  usages:
    - digital signature
    - key encipherment
    - server auth
  secretName: registry-tls
  issuerRef:
    group: cert-manager.io
    name: selfsigned-issuer
    kind: ClusterIssuer
EOF
cat << EOF | kubectl apply -f -
kind: Service
apiVersion: v1
metadata:
  name: registry
  namespace: default
spec:
  ports:
    - name: registry
      protocol: TCP
      port: 5000
      targetPort: registry
  type: ClusterIP
  selector:
    app: registry
EOF
cat << EOF | kubectl apply -f -
kind: Deployment
apiVersion: apps/v1
metadata:
  name: registry
  namespace: default
  labels:
    app: registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      volumes:
        - name: data
          emptyDir: {}
        - name: certs
          secret:
            secretName: registry-tls
      containers:
        - resources: {}
          env:
            - name: REGISTRY_HTTP_TLS_CERTIFICATE
              value: /certs/tls.crt
            - name: REGISTRY_HTTP_TLS_KEY
              value: /certs/tls.key
          readinessProbe:
            httpGet:
              path: /v2/
              port: registry
              scheme: HTTPS
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          name: registry
          livenessProbe:
            httpGet:
              path: /v2/
              port: registry
              scheme: HTTPS
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          ports:
            - name: registry
              containerPort: 5000
              protocol: TCP
          imagePullPolicy: Always
          volumeMounts:
            - name: data
              mountPath: /var/lib/registry
            - name: certs
              mountPath: /certs
          image: 'docker.io/registry:2'
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
EOF
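
Before continuing, it is worth waiting for the registry Deployment to roll out and for cert-manager to mark the Certificate as Ready:

kubectl rollout status deployment/registry -n default
kubectl get certificate registry-tls -n default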

With the necessary configurations in place, let’s create a Job that will take an already available container image and sign it using certificates generated dynamically in Fulcio. The signature that is produced will be stored in the registry we deployed previously due to the presence of the COSIGN_REPOSITORY environment variable on the job. In addition, a record will also be stored in the Rekor transparency server. All of these actions are performed using the cosign utility.

cat << EOF | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: sign-job
spec:
  template:
    spec:
      restartPolicy: Never
      automountServiceAccountToken: false
      containers:
        - name: cosign
          # Built from ci on 2022-03-15
          image: gcr.io/projectsigstore/cosign:v1.8.0
          args: ["sign", "--fulcio-url", "http://fulcio-server.fulcio-system.svc", "--allow-insecure-registry", "--rekor-url", "http://rekor-server.rekor-system.svc", "--force", "ghcr.io/sigstore/scaffolding/checktree@sha256:9ca3cf4d20eb4f6c85929cd873234150d746e69dbb7850914bbd85b97eba1e2f"]
          env:
            - name: COSIGN_EXPERIMENTAL
              value: "true"
            - name: SIGSTORE_CT_LOG_PUBLIC_KEY_FILE
              value: "/var/run/sigstore-root/rootfile.pem"
            - name: COSIGN_REPOSITORY
              value: "registry.default.svc:5000/sigstore"
          volumeMounts:
            - name: oidc-info
              mountPath: /var/run/sigstore/cosign
            - name: keys
              mountPath: "/var/run/sigstore-root"
              readOnly: true
      volumes:
        - name: oidc-info
          projected:
            sources:
              - serviceAccountToken:
                  path: oidc-token
                  expirationSeconds: 600
                  audience: sigstore
        - name: keys
          secret:
            secretName: ctlog-public-key
            items:
              - key: public
                path: rootfile.pem
EOF

Once the job has started, track the logs of the running container to confirm the image signing action was successful:

kubectl logs jobs/sign-job

Generating ephemeral keys...
Retrieving signed certificate...
**Warning** Using a non-standard public key for verifying SCT: /var/run/sigstore-root/rootfile.pem
Successfully verified SCT...
tlog entry created with index: 0

As illustrated in the log entry output from the job, cosign was able to request a certificate from Fulcio, sign the image, and store the entry in Rekor.
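
Since the signature was pushed to the in-cluster registry via COSIGN_REPOSITORY, you can optionally confirm it landed there by querying the registry's catalog API from a throwaway pod. The pod name registry-check below is arbitrary and the curlimages/curl image is just one convenient option; the returned catalog should include the sigstore repository configured earlier:

kubectl run registry-check --rm -it --image=curlimages/curl --restart=Never --command -- curl -sk https://registry.default.svc:5000/v2/_catalog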

Now that we have proven that the sigstore infrastructure can be used to sign an image, let's assume the role of a consumer and verify the contents. cosign can not only sign an image, but also verify signed content. Execute the following command to create a Job to verify the previously signed image:

cat << EOF | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: verify-job
spec:
  template:
    spec:
      restartPolicy: Never
      automountServiceAccountToken: false
      containers:
        - name: cosign
          # Built from ci on 2022-03-15
          image: gcr.io/projectsigstore/cosign/ci/cosign@sha256:8f7f1a0e7cef67c352f00acd14791d977faa8d1cd47a69f9c880a5185c44ffbb
          args: ["verify", "--allow-insecure-registry", "--rekor-url", "http://rekor-server.rekor-system.svc", "ghcr.io/sigstore/scaffolding/checktree@sha256:9ca3cf4d20eb4f6c85929cd873234150d746e69dbb7850914bbd85b97eba1e2f"]
          env:
            - name: COSIGN_EXPERIMENTAL
              value: "true"
            - name: SIGSTORE_TRUST_REKOR_API_PUBLIC_KEY
              value: "true"
            - name: SIGSTORE_ROOT_FILE
              value: "/var/run/sigstore-fulcio/fulcio-public.pem"
            - name: COSIGN_REPOSITORY
              value: "registry.default.svc:5000/sigstore"
          volumeMounts:
            - name: oidc-info
              mountPath: /var/run/sigstore/cosign
            - name: keys
              mountPath: "/var/run/sigstore-fulcio"
              readOnly: true
      volumes:
        - name: oidc-info
          projected:
            sources:
              - serviceAccountToken:
                  path: oidc-token
                  expirationSeconds: 600
                  audience: sigstore
        - name: keys
          secret:
            secretName: fulcio-secret
            items:
              - key: cert
                path: fulcio-public.pem
EOF

Confirm that the job completed successfully by verifying the output of the pod log:

kubectl logs verify-job-mkrlc

Verification for ghcr.io/sigstore/scaffolding/checktree@sha256:9ca3cf4d20eb4f6c85929cd873234150d746e69dbb7850914bbd85b97eba1e2f --
The following checks were performed on each of these signatures:
- The cosign claims were validated
- The claims were present in the transparency log
- The signatures were integrated into the transparency log when the certificate was valid
- Any certificates were verified against the Fulcio roots.
...

As another form of validation, we can also retrieve the record that was stored in Rekor. From the final line of the signing job output, observe that the record was stored at index 0 within the transparency log. We can obtain this record by querying Rekor through the Ingress resource.

curl --cacert <(kubectl get secrets -n rekor-system rekor-tls -o jsonpath='{ .data.tls\.crt }' | base64 -d) https://rekor.$(minikube ip).nip.io/api/v1/log/entries?logIndex=0 | jq -r .

The output will contain the entire stored record. Within the record are details related to the signature placed on the image, including the certificate that was issued by our Fulcio instance. Perform another query to inspect it:

curl --cacert <(kubectl get secrets -n rekor-system rekor-tls -o jsonpath='{ .data.tls\.crt }' | base64 -d) https://rekor.$(minikube ip).nip.io/api/v1/log/entries?logIndex=0 | jq -r '.[keys_unsorted[0]].body' | base64 -d | jq -r '.spec.signature.publicKey.content' | base64 -d | openssl x509 -noout -text

Within the returned value, we can observe attributes within the certificate associated with the identity token from the OIDC exchange that Fulcio performed.

X509v3 Subject Alternative Name: critical
    URI:https://kubernetes.io/namespaces/default/serviceaccounts/default
1.3.6.1.4.1.57264.1.1:
    https://kubernetes.default.svc
1.3.6.1.4.1.11129.2.4.2:

At this point, we have successfully deployed sigstore to a minikube instance, signed an image using a certificate issued by Fulcio, and confirmed the recorded values in the Rekor transparency log. Deploying sigstore to your environment has never been easier! In the next post, we will extend this implementation by illustrating how an end user or system can use the sigstore deployment to sign and verify content.
