A DeFi Security Standard: The Scaling Bug Bounty

Mitchell Amador
Immunefi
Feb 17, 2021 · 11 min read

DeFi now has over $40 billion in Total Value Locked (TVL). More than $200 million was lost to hacks and scams in 2020, and that trend is continuing through the first quarter of 2021. Existing security practices have proven insufficient at keeping the community safe.

These circumstances lead us to propose a new security standard for DeFi and smart contract projects: the scaling bug bounty. The whole Immunefi project is structured around it.

Here’s the TL;DR: critical bug bounty payouts should be priced as a percentage of the economic damage they would have caused, so that even blackhats are incentivized to review code and disclose vulnerabilities, thereby reducing malicious exploits. For most DeFi projects, that means 10% of the TVL at risk.

Incentives to Steal Are Massive

Incentives to steal funds in crypto are massive, due to the nature of bearer assets (see a long history of exchange hacks). DeFi increases this already massive incentive, because now the money is easier to steal, and the volumes are larger. Hundreds of millions have been lost so far. With DeFi accelerating, the opportunities for theft — and the payoffs for doing so — grow dramatically.

The desire for privacy and anonymity inherent in crypto culture makes these assets even easier to steal. Mixers are becoming more, not less, common and usable.

It’s the worst possible confluence of factors. Value in crypto is exploding in volume, assets are becoming easier to steal, and it’s becoming easier to hide your tracks. These combine to create massive incentives to steal that are only going to grow from here.

Smart Contracts Are Hard to Protect

Smart contracts live on-chain, so anyone can poke around, interact with them, and make use of them. This makes it hard to offer protection.

The code is open by default, and that means your vulnerabilities are there for all to see. That’s also why public blockchains are dark forests, as samczsun has described. As soon as a vulnerability is exploited, it becomes visible to many parties simultaneously, and they will act on that knowledge.

There are a whole bunch of ways the situation gets worse from here:

  • Smart contracts are young as a technology. Most of the developers who work with them are young. The tooling is young. Just about everything about working in DeFi could be generously described as ‘immature’.
  • The talent pool working with smart contracts is very small. There aren’t a lot of people who’ve been coding in Solidity or Vyper for years, and of those who have, most would not describe themselves as security-inclined.
  • Forking is an ever-present feature of DeFi culture. When this goes well, it means shared standards and battle-tested codebases. When it goes badly, it means a vulnerability in one root ends up present in multiple forks. This happens regularly. We’ve dealt with this problem numerous times at Immunefi already.
  • Smart contracts are increasingly interacting with each other, which we’ve come to call ‘composability.’ This composability is the feature driving DeFi innovation. It also creates more complexity and more attack surface, guaranteeing new, unforeseeable attack vectors and systemic risks.
  • DeFi teams tend to be very small and very lean. Many teams are literally just a few developers. There isn’t much expertise redundancy on these teams, and not a lot of slack in the system. When a crisis comes, who you have is what you have, and that’s not a lot. What’s more, the attacks execute so quickly that preventing them in real time is near impossible.
  • DeFi smart contracts deal almost exclusively with money, so when something goes wrong, there’s quite literally money at risk. The fault tolerance of DeFi applications is close to none.

The combination of massive incentives to steal, a young community, and contracts that are hard to protect means that every DeFi project operates on the razor’s edge by default — just a few minor mistakes from catastrophic exploitation.

So What Do We Do About It? We Make Scaling Bug Bounties

What we have here is an incentive problem: lots of money that’s relatively easy and safe to steal. There are two ways to solve this problem.

The first path is to make it hard to steal by making smart contracts robust and secure by default. Auditors are working on this, tooling projects are working on this, anything on formal verification is contributing to this. And all these initiatives are great, but they need time. Lots of time.

The second path is to weaken the incentive to steal by creating a bigger incentive to do something else, preferably something that also makes smart contracts more secure. The scaling bug bounty does this.

We propose DeFi projects adopt a scaling bug bounty to incentivize developers and researchers worldwide to find and disclose vulnerabilities in their smart contract code. Bug bounty rules and best practices still apply (proof of concept first, please), but valuations for critical-level bugs scale with the size of the potential economic damage. We recommend 10% to start this experiment, to be adjusted up (if the incentive proves insufficient) or down (if it’s overly generous) as we discover the results.

This means that if you have $10 million at risk to a particular bug, the bug bounty payout should be up to $1 million, depending on the particular circumstances. That’s life-changing money for almost anyone, and a massive incentive to review code and disclose vulnerabilities that grows with the amount of capital at risk. Not only will scaling bug bounties attract more code review than just about anything else could (thereby contributing to the security of projects), they’ll also surface new security talent to the community faster than anything else could.
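As a rough sketch of this pricing rule (the function name and cap structure here are our illustration, not a prescribed formula):

```python
def max_critical_payout(funds_at_risk: float, rate: float = 0.10) -> float:
    """Upper bound on a critical-bug bounty, priced as a share of the funds
    the bug puts at risk. The 10% default is the recommended starting rate;
    actual payouts are 'up to' this figure, depending on circumstances."""
    return funds_at_risk * rate

# $10M at risk -> a payout of up to ~$1M
print(max_critical_payout(10_000_000))
```

The key property is that the reward scales with the capital it protects, so the incentive to disclose never falls behind the incentive to steal.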

Now, you might be thinking: that’s a ridiculous sum of money to pay for what may be a minor bug. How is that at all fair? Isn’t that a ridiculous overvaluation? And the answer is definitely not.

Here’s the truth: a vulnerability in a smart contract is an asset, and its value is a function of the assets at risk in that smart contract. The particular characteristics of that bug, or the ‘effort’ that went into its discovery, don’t matter. What matters most is what impact it has on TVL in the contract, and this alone determines its fair market value (FMV).

Blackhats understand this perfectly well. Every hack on a DeFi project is a public statement on the FMV of the vulnerability used. Those blackhats had the chance to disclose the vulnerability, and they chose to risk jail time instead. You couldn’t get a clearer statement as to what they think the value of that vulnerability is. The hard truth is that the value of a bug is determined by its impact on user funds.

The FMV of smart contract vulnerabilities is self-evident to DeFi users, who need only be asked, “If a vulnerability is found in a contract, would you rather risk all of your funds on a hacker’s arbitrary preference, or give x% of your funds to alleviate the risk?” DeFi users will tell you that it absolutely makes sense to pay to alleviate the risk; it’s not even a question. Scaling bug bounties are win-win for everyone, whereas the lack of a disclosure incentive proportional to the FMV of the vulnerability leaves non-whitehats only one way to monetize their knowledge: via exploitation and theft.

Now, you might be thinking: why would a blackhat disclose the vulnerability for $1 million when they could hack the contract for $10 million? Why would they do that? Because the risks that come with exploits require serious discounts and huge laundering fees, whereas responsible disclosure is risk-mitigated cash.

Criminal earnings are never worth what they appear to be. Criminal earnings generate a long tail of very bad consequences, including jail time, risk of revenge, and never-ending paranoia. These consequences are both bad and probable, and so this discount is large. Further discounts need to be applied to account for the costs of slippage, money laundering, and asset freezing. A savvy blackhat doesn’t need to model these costs; the evaluation is intuitive: Crime is expensive.

Once you’ve factored in the costs of risk and cleaning fees, that 10% starts to look tempting to even the most ardent blackhat hacker. After all, that 10% bug bounty is clean, legal money that he can spend right away. What’s more, responsible disclosure is the best way to build a reputation as a security professional; any great bug hunter knows that critical vulnerabilities are frequently the prelude to job invitations, consulting contracts, and speaking offers. Scaling bug bounties can very well turn blackhats white by providing a better path for their skills.
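That intuition can be made concrete with a back-of-the-envelope expected-value comparison. All of the discount parameters below are illustrative assumptions, not empirical estimates:

```python
def exploit_expected_value(loot: float,
                           laundering_cost: float = 0.5,
                           p_caught: float = 0.3,
                           penalty_multiple: float = 1.0) -> float:
    """Crude expected value of stealing and laundering funds.

    laundering_cost  - fraction lost to mixers, slippage, and frozen assets
    p_caught         - assumed probability of identification and prosecution
    penalty_multiple - losses if caught (restitution, fines, jail time),
                       expressed as a multiple of the loot
    """
    net_if_free = loot * (1 - laundering_cost)
    return (1 - p_caught) * net_if_free - p_caught * loot * penalty_multiple

loot = 10_000_000
bounty = 0.10 * loot  # clean, legal, immediately spendable

# Under these assumptions the exploit nets roughly $0.5M in expectation,
# versus a ~$1M risk-free bounty.
print(exploit_expected_value(loot), bounty)
```

However the parameters are tuned, the point survives: laundering costs and legal risk apply a heavy discount to stolen funds that a bounty payout never incurs.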

The endgame of scaling bug bounties is the creation of massive incentives for skilled whitehats to do more code review, and for grey and blackhats to turn white for financial reasons. This incentive has an added benefit: it attracts mainstream security professionals to secure smart contracts like nothing else will, and that increase in security community size is one of the needed components in the maturation of smart contract security (a topic for another time).

This is the right approach to security in crypto. The reality is that we cannot enforce good behavior in the community, and we cannot choose which people acquire knowledge of contract vulnerabilities. The openness of crypto precludes that. So, if we want to maximize security in the ecosystem, our only option is to maximize good behavior by incentivizing it, and this re-aligning of security incentives toward positive-sum outcomes is our most effective protection for users until DeFi security culture, standards, and tooling matures.

Objections and the Law of Crypto Salvage

Here are some objections to this standard we’ve faced so far, and why we believe they don’t hold water.

Objection #1: Aren’t you creating an incentive for auditors to hide and exploit vulnerabilities they discover, seeing that they may make more money from bug bounties than from audits?

Answer: No, we are not creating that incentive, though it’s absolutely true that there are incentives for auditors to deceive and exploit their clients today. The only way to solve this problem is multi-pronged code review from independent sources, which can include multiple audits, doing pre- and post-audit code review, and having trusted partners review auditor recommendations. This question misses the fact that auditor (and all code reviewer) corruption is already a real risk that must be protected against, and a scaling bug bounty does not meaningfully augment that risk.

Objection #2: Aren’t you creating an incentive for projects to rugpull users by implanting a malicious bug to claim their own bug bounty?

Answer: As in the previous answer, there is already a potent incentive for projects to insert malicious code and rug themselves as a ‘hack’ (similar to the exchange ‘hack’ exit scam), and these events already occur. Creating a credible incentive for the community at large to protect these projects does not meaningfully add to the incentive projects already have to scam their users. Note that the solution to this problem (be it a rogue team or a sole rogue developer) is the same as the previous one: multiple layers of code review from trusted parties, and most projects already have a process for this in place because the danger is obvious.

Objection #3: Isn’t 10% of $1 billion USD too much to pay for a bug bounty? That’s $100 million USD!

Answer: This is a confused question. What makes something ‘too much to pay’? Only that the value of the product or service isn’t sufficient for the price. In the case where there is $1 billion USD at risk of exploitation, and the bug hunter discloses the vulnerability and a fix, how can there be any doubt that the service merits up to 10% of the money saved? If such a bug exists, anyone can discover it, it is probable that many already know of it, and the funds will be stolen as soon as one of those parties acts.

Additionally, there are other solutions to this problem. At Immunefi we recommend clients price bounty payouts as ‘up to $XX.00’, since every bug is unique. It’s perfectly fine, and rewarding, to have a 10% maximum payout and then to make a 1% payout if the situation calls for it. Everyone understands that the 10% is a maximum reward figure to incentivize and compensate extraordinary acts of community protection.

Objection #4: What if my project doesn’t have 10% of TVL in USD on hand to provide a bug bounty payout? Most projects aren’t capitalized to that extent.

Answer: There are many solutions to this. For protocols, one solution is to mint the reward in tokens for the bug hunter. Another solution is to make a governance proposal acknowledging and approving a payment of up to 10% of the contract funds to the bug hunter in advance. Another is to add an extra fee that accumulates to a bug bounty fund. Another is to pay out in tokens that vest over time. There are many possible ways to solve this problem, most of which involve going to the users with funds at risk and having them approve the bug bounty payout in advance.
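To see why advance approval matters for the fee-accrual route, here is a toy estimate of how long an extra protocol fee would take to build a full-sized bounty reserve (all figures are hypothetical):

```python
def days_to_fund_bounty(tvl: float, daily_volume: float,
                        fee_rate: float, target_share: float = 0.10) -> float:
    """Days for an extra fee on protocol volume to accumulate a bounty
    reserve equal to `target_share` of TVL. Purely illustrative numbers."""
    target = tvl * target_share
    daily_accrual = daily_volume * fee_rate
    return target / daily_accrual

# $100M TVL, $20M daily volume, an extra 5 bps fee: the reserve takes
# years to fill, so pre-approved governance payouts or token-denominated
# rewards are needed in the meantime.
print(days_to_fund_bounty(100_000_000, 20_000_000, 0.0005))
```

The slow accrual is exactly why most of the solutions above involve asking the users with funds at risk to approve the payout in advance rather than waiting for a fund to fill.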

Objection #5: Won’t this create a security talent brain drain from auditing to bug bounties?

Answer: We don’t believe it will. Bug hunting is inherently risky with no expectation of reward, which doesn’t work for most auditors (after all, you only make money if you find bugs). But even if that were the case, would it be a bad thing? Having auditors working for long periods to crack DeFi protocols is the best thing that could happen to DeFi. Security standards would rise dramatically, because everyone would know that there are top specialists taking an adversarial approach to your smart contracts 24/7.

But where the auditor brain drain is speculative and uncertain, it is absolutely clear that scaling bounties would draw security talent from mainstream cybersecurity into smart contracts, and this is something crypto absolutely needs going forward.

Objection #6: Isn’t this the law of marine salvage applied to crypto, whereby any person who helps recover another person’s property at peril is entitled to a reward commensurate with the value of the property saved?

Answer: Why, yes it is. In the law of salvage, the salvor is rewarded provided he acts voluntarily and successfully to salve a vessel in peril, either current or forthcoming. The circumstances match the DeFi dark forest very closely. Just as in the law of salvage, the bug hunter takes all the initial risk, spending time and money analyzing code with potentially no return (but much ensuing benefit to the project they’re reviewing). Unlike the sea, however, which is merely indifferent to human survival, the DeFi dark forest is a far more dangerous environment: predators are constantly looking for these bugs so they can exploit them first and steal instantaneous fortunes, driven by limitless human greed.

A law of crypto salvage will help weaken this incentive to steal, by redirecting much of that greed toward work that benefits the community as a whole. A law of crypto salvage must become a major part of the immune system of DeFi.

Why We’re Building Immunefi

A world driven by DeFi is coming, and it needs to be secured. We believe we should use incentives as a major part of the DeFi security stack, and bug bounties are the right place to start. That’s why we created Immunefi, a project born to facilitate scaling bug bounties and positive-sum security incentives.

We’re creating the ultimate tools for DeFi bug hunters to make it easy to find, disclose, and fix severe vulnerabilities in DeFi applications. We aim to give control of Immunefi to the security community itself as our product matures.

Our first initiative is driving scaling bug bounties across the space, targeted specifically at reducing malicious hacks and loss of user funds. More initiatives will follow. If you’d like to help us protect the space, the two most helpful things you can do are:

  1. Do more bug hunting and responsible disclosure on DeFi projects.
  2. Champion adoption of scaling bug bounties for your projects.

We’re happy to help anyone with both of these at immunefi.com, and you can get in touch with us at team@immunefi.com or fill out the form on our services page to host your bug bounty program with Immunefi. We hope you join us in making crypto a safer place.

P.S. Hackers subscribed to our newsletter are 35.8% more likely to earn a bug bounty.
