Deploy scripts are now in scope for smart contract audits

As we see more and more exploits coming from the developer operations side of security, we need to start addressing this issue head-on during security reviews.

Patrick Collins
Cyfrin
6 min read · Apr 30, 2024


Most of the time, deploy scripts are out of scope, resulting in massive damage to web3.

Introduction

In the last ~30 days (since I started writing this), rekt.news has reported on three Web3 exploits where smart contract developer operations (a.k.a. “DevOps”) was the categorized issue, with many more having happened historically.

Looking into each incident, we can identify a common pattern across these exploits.

Note: I’m simplifying complex attacks, and I’m not placing blame on anyone for doing or not doing something. However, we need to treat every exploit in web3 as a chance to improve, and the way to do that is to implement new standards for how security is done in web3.

$2M Private Key Leak

The owner of the exploited contract was a single private key

This smart contract could be exploited by a private key leak because two things were true:

  1. It was an Ownable contract whose owner was an externally owned account: the original deployer address
  2. It had functions guarded by the onlyOwner modifier, such as the one called in the malicious transaction

Ownable and Upgradeable smart contracts have been controversial for some time, but we will not address that in this post. If you do choose to have an Ownable or Upgradeable smart contract, you need to make sure you take measures in case one of your private keys is leaked.

The issue here was with the developer operations (DevOps) of the protocol.

Potential Solution

This could have been prevented if the contract owner had been a 2 of 3 multi-sig. Having the deploy script automatically transfer ownership to the multi-sig would have been easy to add and verify.
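As a minimal sketch of what that could look like in a Foundry deploy script (the contract name `MyProtocol` and the multi-sig address here are hypothetical placeholders; in practice the address would be your 2-of-3 Safe):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import {Script} from "forge-std/Script.sol";
import {MyProtocol} from "../src/MyProtocol.sol"; // hypothetical Ownable contract

contract DeployMyProtocol is Script {
    // Hypothetical placeholder; in practice, the address of your 2-of-3 multi-sig
    address public constant MULTISIG =
        0x0000000000000000000000000000000000001234;

    function run() external returns (MyProtocol) {
        vm.startBroadcast();
        MyProtocol protocol = new MyProtocol();
        // Hand ownership off to the multi-sig in the same script,
        // so the deployer EOA never remains the owner after deployment
        protocol.transferOwnership(MULTISIG);
        vm.stopBroadcast();
        return protocol;
    }
}
```

Because the transfer lives in the script, an auditor (and your test suite) can verify it mechanically instead of trusting a manual post-deployment step.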

This particular exploit happened on a codebase that (as far as I know) had zero security reviews; a security review could have helped prevent this, too.

$62.5M Rogue developer exploit

The owner of this contract was, once again, a single EOA

This next attack had a similar setup, except the key wasn’t leaked; instead, a rogue employee had access to the smart contract and manipulated it for their own gain. This, again, was a smart contract that was:

  1. Ownable
  2. Owned by a single externally owned account (EOA)

One might argue that the real issue here was the rogue employee. That may be true, but there are ways to mitigate even this scenario.

Unlike our first exploit, this one did undergo an audit (password: “ESMunc@24!”), which called out the traditional “Centralized Risk for Trusted Owners” issue. However, that issue is ignored by almost every protocol on every audit.

Image from the audit report

The issue here was with the developer operations (DevOps) of the protocol.

Potential Solution

If, in addition, this contract had had its deploy script in scope, the security team could have verified that ownership would be transferred immediately to a multi-sig, which could have prevented this attack. Even if the rogue employee had held one of the multi-sig’s keys, holding only one of many would not have been enough. The auditors on this codebase may well have told the protocol not to use an EOA as the contract’s owner, but since the deploy script wasn’t in scope, it doesn’t show up in the audit report.

Of course, doing due diligence on the employee would have helped here, too. But a much easier solution would have been to put the ownership transfer in scope for the audit.

$2.1M Botched upgrade

Image from the rekt.news article on the exploit

The last attack had nothing to do with leaked private keys; instead, an upgrade introduced a vulnerability. The codebase had been previously audited, but no security review had been done on the upgrade itself.

The upgrade introduced an issue, and boom, attacks ensued.

The issue here was with the developer operations (DevOps) of the protocol.
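One way to make the upgrade itself reviewable is to put invariant checks directly in the upgrade script, so the same checks run during the fork simulation before anything is broadcast. A hedged sketch (the proxy interface, `totalDeposits` invariant, and names here are hypothetical, not the exploited protocol's actual API):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import {Script} from "forge-std/Script.sol";

// Hypothetical minimal interfaces for the proxy and the protocol behind it
interface IProxy {
    function upgradeTo(address newImplementation) external;
}

interface IVault {
    function totalDeposits() external view returns (uint256);
}

contract UpgradeVault is Script {
    function run(address proxy, address newImpl) external {
        // Record state the upgrade must preserve
        uint256 depositsBefore = IVault(proxy).totalDeposits();

        vm.startBroadcast();
        IProxy(proxy).upgradeTo(newImpl);
        vm.stopBroadcast();

        // A mismatch here suggests the new implementation corrupted
        // storage; the script fails in simulation before going live
        require(
            IVault(proxy).totalDeposits() == depositsBefore,
            "upgrade broke storage invariant"
        );
    }
}
```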

Where else do we see these attacks?

Additionally, on a number of Cyfrin audits we take the time to go through deployment and upgrade scripts even when they are out of scope. On more than one occasion, doing so has uncovered issues with the codebase, or with the protocol’s understanding of its attack vectors, that required the deployment scripts to be modified.

On a personal note, any time I see someone firing their smart contract into production without testing their deployments, it feels like they are putting a blindfold on, screaming “YOLO,” and praying to whatever deity they believe in that the deployment “just works”. This is unacceptable if web3 is to continue and scale. Which is why I chose the thumbnail seen here:

Hence, the title image


Recommendation: Audit Scope Increase

For some time, we’ve taught on Cyfrin Updraft that codebases should always include their deploy scripts in their test suite, but we think the Web3 security community needs to adopt a few new standards in its audit practice (if you’re not already following them):

  1. Deployment and upgrade scripts should now be in scope for security reviews
  2. If a contract is Ownable, the deploy/upgrade script itself should always transfer ownership to a multi-sig, DAO, or other vehicle where a single private key isn’t the sole owner
  3. All DevOps (developer operations: scripting, deployment, upgrades, etc.) should be included in the test suite
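As a sketch of point 3, a Foundry test can run the actual deploy script and assert the post-deployment invariants, so the exact code path that runs in production is exercised in CI (the script, contract, and multi-sig address here are hypothetical):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import {Test} from "forge-std/Test.sol";
import {DeployMyProtocol} from "../script/DeployMyProtocol.s.sol"; // hypothetical
import {MyProtocol} from "../src/MyProtocol.sol"; // hypothetical

contract DeploymentTest is Test {
    // Must match the multi-sig configured in the deploy script (placeholder)
    address constant EXPECTED_MULTISIG =
        0x0000000000000000000000000000000000001234;

    MyProtocol protocol;

    function setUp() public {
        // Run the real deploy script, not a hand-rolled test fixture
        DeployMyProtocol deployer = new DeployMyProtocol();
        protocol = deployer.run();
    }

    function testOwnerIsTheMultisig() public view {
        // The owner must be the multi-sig, never the deploying EOA
        assertEq(protocol.owner(), EXPECTED_MULTISIG);
    }
}
```

If the ownership transfer is ever removed from the script, this test fails before the change can ship.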

These three additions to every security review and to a protocol’s security checklist will help prevent exploits like these in the future. Moving forward, all protocols and security researchers should add these checks to their process, including in competitive audits.

Rebuttals

Let me now get ahead of some rebuttals to adding these checks.

“But Patrick, we don’t need this because….”

We should just never use Ownable/Upgradeable smart contracts

At this point in Web3, there is a good chance that you can take every precaution and still get hacked. If you accept this scenario, then you need to be able to have contingency plans in place for when that happens. Most protocols have some emergency powers, such as the Compound Pause Guardian.

The issues you described involved rogue developers, lack of audits, and private key leaks. They have nothing to do with DevOps, so this wouldn’t help.

If a single simple-to-implement solution can address a wide range of downstream issues, then we should absolutely implement it.

Protocols won’t want to pay for the increased scope

Convince them. You can use the three examples above or any other hacks that have come from private key leaks or botched scripting.

We don’t deploy with scripting. We manually deploy with Remix

You’ll need to stop doing that.

Testing our deploy scripts is too hard

Go to Cyfrin Updraft and finish the “Advanced Foundry” curriculum, where we teach you how to do this.

To learn smart contract security and development, visit Cyfrin Updraft

To request security support/security review for your smart contract project visit Cyfrin.io or CodeHawks.com.

To learn more about top reported attacks in smart contracts, be sure to study up on Solodit.

