Fighting censorship attacks on smart contracts

Ed Felten
Offchain Labs
Jan 23, 2020

A common design pattern in smart contracts is to require a client to take some action before a deadline. If the deadline (measured in block numbers) is reached without a call from the client, the contract will take some alternative action that is presumably not as good for the tardy client.

I’ll focus in this post on something like an interactive rollup protocol, where one party makes an assertion and other parties have until some deadline to challenge the assertion if they think it’s wrong. If the deadline is reached without any challenge being submitted, the assertion is accepted as valid.
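To make the pattern concrete, here is a minimal Python sketch of the assert/challenge/confirm logic, with the deadline measured in block numbers as described above. The names and the single-assertion bookkeeping are mine, for illustration only; a real rollup contract also tracks stakes, multiple pending assertions, and the dispute game itself.

```python
# Minimal, illustrative model of the deadline design pattern (names are my own;
# not the code of any particular rollup protocol).

class PendingAssertion:
    def __init__(self, asserter, claim, current_block, challenge_window):
        self.asserter = asserter
        self.claim = claim
        # The deadline is measured in block numbers, as in the text.
        self.deadline = current_block + challenge_window
        self.challenged = False
        self.confirmed = False

    def challenge(self, current_block):
        # A challenge only counts if it lands before the deadline.
        if self.confirmed or current_block >= self.deadline:
            return False
        self.challenged = True  # a real protocol would now run the dispute game
        return True

    def try_confirm(self, current_block):
        # If the deadline passes with no challenge, the assertion is accepted.
        if not self.challenged and current_block >= self.deadline:
            self.confirmed = True
        return self.confirmed
```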

This design pattern is vulnerable to a censorship attack, in which an adversary tries to prevent a challenge from being posted until after the deadline. In an interactive rollup, an attacker might post a false assertion and then try to censor any challenge transactions until the deadline passes and the false assertion is confirmed.

We’ll assume that the would-be attacker has placed a deposit which he will lose if the attack fails. So we don’t need to drive the success probability of an attack all the way to zero, we just need to make it small enough to strongly deter attempts.

In this post I’ll try to summarize what is known about censorship attacks and how to defend against them, and I’ll give a perspective on how we should respond to this risk.

Types of censorship attacks

I’ll focus on four types of censorship attacks:

  1. forking attacks, where miners conspire (or are bribed) to suppress blocks that contain challenges, by forking the chain so that an alternative chain not containing a challenge is accepted;
  2. shunning attacks, where miners conspire (or are bribed) to omit challenges from the blocks they make;
  3. jamming attacks, where the attacker launches a traditional denial of service attack against the parties who want to create a challenge, to prevent those parties from transmitting a challenge;
  4. speed demon attacks, where the attacker generates on-chain assertions so rapidly that other parties don’t have time to check them all before their deadlines.

Let’s take these attack types one by one.

Forking attacks

In this attack, an adversary gains control of a majority of the mining power in a proof-of-work blockchain, and uses that power to fork the chain as necessary, to orphan any block that contains a challenge.

This attack should be difficult to launch, because it requires the attacker to control a majority of mining power — and if it is easy for an attacker to control a majority of mining power, then your blockchain has serious problems already. Or to put it another way, a cartel that controls a majority of mining power (a) can trash your blockchain’s credibility, and (b) probably has better ways than censorship to squeeze the system for short-term profit.

But wait, you might say: a majority mining cartel might want to avoid noisy attacks, but it might try to slip a censorship attack through unnoticed. If the cartel could do that, it might be willing to fork for censorship in the hope that doing so wouldn’t trigger a meltdown in user confidence in the system.

The first question this raises is whether a forking censorship attack would be evident to observers. To illustrate why it would be evident, I simulated a forking censorship attack where the attacker controlled 60% of mining power. Within the first 30 blocks produced, there were three forks, of length 1, 6, and 5 respectively. That’s nothing like what you would see in a normally functioning chain. I did another simulation, with 55% attacker power, and that one had an early fork 48 blocks long. A simple mathematical model predicts that with 60% attacker power, forks would happen every 2.5 blocks in normal operation, and the average length of an orphaned branch would be 5. With 55% attacker power, forks would happen every 2.2 blocks in normal operation, and the average length of an orphaned branch would be 10.
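For readers who want to reproduce numbers like these, here is a small Monte Carlo sketch of the same simple model (my own illustrative code, not the original simulation). It assumes each block is mined by the attacker with probability f, that every honest-mined block carries the challenge and therefore starts a fork, and that the attacker keeps mining a competing branch until it is strictly longer.

```python
import random

def mean_orphaned_length(f, n_trials=100_000, seed=1):
    """Average number of blocks in an orphaned honest branch when an attacker
    with fraction f of the mining power forks around every challenge block."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_trials):
        honest, attacker = 1, 0        # the challenge block starts the doomed branch
        while attacker <= honest:      # race until the attacker's branch is longer
            if rng.random() < f:
                attacker += 1
            else:
                honest += 1            # honest miners extend the doomed branch
        total += honest
    return total / n_trials

for f in (0.60, 0.55):
    print(f"f = {f:.2f}: a fork roughly every {1 / (1 - f):.1f} blocks, "
          f"simulated orphan length {mean_orphaned_length(f):.1f}, "
          f"model prediction {1 / (2 * f - 1):.1f}")
```

At f = 0.60 this converges to a fork about every 2.5 blocks with orphaned branches of about 5 blocks, and at f = 0.55 to a fork about every 2.2 blocks with branches of about 10, matching the figures above.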

Not only would forks, and especially longer forks, become much more likely, but the forks would have something in common: the first block of the killed branch would always include a valid challenge transaction and the eventually-winning branch would never include such a transaction — and the party submitting the censored challenge transactions would be loudly pointing this out. (The attacker could fork farther back in the chain to avoid this first-block commonality, but that would make the forks considerably longer, and every killed fork would still contain a targeted transaction.) So the notion that a censorship attack wouldn’t be noticed doesn’t hold up.

I don’t know about you, but I would find the existence of a majority mining cartel, which was using its power to fork in order to corrupt application-level protocols, pretty alarming. If others felt the same, the result would seriously undermine confidence in the entire blockchain — as any successful 51% attack should.

In other words, the problem with this attack is not the censorship of one application-level transaction. The problem is that your blockchain is controlled by a mining cartel willing to break the rules for profit. That is devastating news for all applications, whether they rely on the deadline design pattern or not. If this attack is possible on your blockchain, you should consider switching to a better blockchain.

Shunning attacks

What if a cartel of miners wants to censor but isn’t willing to actively fork the chain in an easily detectable way? Then you have a shunning attack. The participating miners simply refuse to include challenges in the blocks they mine, and the attacker hopes that all of the blocks before the deadline are made by members of the cartel.

How likely is a shunning attack to succeed? If the attacker controls a fraction f of the mining power, and the deadline is n blocks in the future, then the attack will succeed with probability fⁿ. For example, if the attacker controls 90% of mining power and the deadline is 50 blocks, the success probability is about 0.5%. (At 95% attacker power, you need a deadline of about 100 blocks to get the same 0.5% success probability.) If the attacker pays a substantial penalty for a failed attack — as they would in a well-designed interactive rollup protocol — they would be foolhardy to try this attack. And if the penalty for failure goes to the would-be victims, the victims might well be happy to see the attempted attack.

So the solution to shunning attacks is to make sure your deadline is long enough to keep the attack’s success probability below the worst-case rate you’re willing to tolerate. In general, if you need the attack success probability to be at most r, and the attacker can get at most a fraction f of the mining power to cooperate, a deadline of at least log(r)/log(f) blocks will be safe.

In practice, this will often be reasonable. Even if we assume very conservatively that the attacker has 99% of mining power and that deterrence requires a 0.1% chance that an attack will succeed, the safe deadline would be log(0.001)/log(0.99) ≈ 687 blocks. That’s a bit under three hours on Ethereum.
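The same arithmetic, written out as a couple of small helpers (a sketch of the formulas above, not protocol code):

```python
import math

def shun_success_probability(f, n):
    """Chance that all n blocks before the deadline are mined by a cartel
    controlling a fraction f of the mining power."""
    return f ** n

def safe_deadline(f, r):
    """Smallest deadline n (in blocks) with shun_success_probability(f, n) <= r."""
    return math.ceil(math.log(r) / math.log(f))

print(shun_success_probability(0.90, 50))    # ~0.005, the "about 0.5%" example
print(shun_success_probability(0.95, 100))   # ~0.006, the 95%-power example
print(safe_deadline(0.99, 0.001))            # 688 (log(0.001)/log(0.99) is ~687.3, rounded up)
```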

Jamming attacks

In a jamming attack, the adversary launches an “old-fashioned denial of service attack” against a party to prevent that party from posting any transactions. This is “censorship by DoS”.

The main problem a jamming attacker faces is that they need to jam every party who might try to submit a challenge. If there are many such parties, the attack will be difficult to scale.

Worse yet for the attacker, any interested party might have hired a silent watcher: an agent that lurks quietly, watching the protocol operate, and steps in with a challenge when the primary participants seem slow to challenge an invalid assertion. The attacker won’t know whether there are silent watchers, nor who they are, so there won’t be a practical way to DoS them before they act.
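A silent watcher can be quite simple. Here is a minimal sketch, assuming hypothetical chain, validate, and submit_challenge interfaces that a real deployment would have to supply:

```python
import time

def run_silent_watcher(chain, validate, submit_challenge, poll_seconds=30):
    """Lurk quietly, re-check every pending assertion, and step in with a
    challenge if an invalid assertion is still unchallenged. The chain,
    validate, and submit_challenge interfaces are hypothetical placeholders."""
    while True:
        for assertion in chain.pending_assertions():
            if not assertion.challenged and not validate(assertion):
                submit_challenge(assertion)   # step in only when the primaries are slow
        time.sleep(poll_seconds)
```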

For these reasons, jamming attacks don’t look attractive for the attacker.

Speed demon attacks

In a speed demon attack, the attacker emits assertions faster than other parties can check them, so that the checkers can’t check every assertion before its deadline.

Any rollup protocol will need a safety mechanism to prevent speed demon attacks. One way to do this is to rate-limit the creation of assertions to ensure that at any time the total work required to check the pending assertions and challenge one if necessary will fit comfortably within the protocol’s deadline.

Any mechanism like this will impose a de facto “speed limit” on the progress of the smart contracts in a single rollup chain. A super-fast party who is cranking out assertions as fast as it can will eventually have to be slowed down somehow, to make sure that normal-speed parties can keep up.
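One way to express such a speed limit is an admission rule on new assertions. The sketch below is illustrative only; the notion of "checking work", the per-block checking budget, and the safety factor are my own assumptions, not parameters of any particular protocol.

```python
def can_accept_assertion(pending_check_work, new_check_work,
                         deadline_blocks, check_work_per_block,
                         safety_factor=2.0):
    """Admit a new assertion only if a normal-speed checker could re-check
    everything already pending, plus the new assertion, comfortably within
    the challenge deadline. All parameters are illustrative assumptions."""
    total_work = sum(pending_check_work) + new_check_work
    budget = deadline_blocks * check_work_per_block / safety_factor
    return total_work <= budget

# Example: with 700 blocks to the deadline and a checker doing 1 unit of work
# per block, admit at most ~350 units of outstanding checking work.
print(can_accept_assertion([120, 90], 80, deadline_blocks=700, check_work_per_block=1))
```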

One valid scalability metric for a rollup system is its safe speed limit. This is not a measure of how fast a party can blast out assertions if unconstrained; rather, it measures the maximum safe rate of progress.

Summary

To sum up, three of the four attack types can be handled by proper design or practices.

  • Shunning attacks can be handled by making deadlines long enough, based on assumptions about the attacker’s resources and risk tolerance.
  • Jamming attacks can be handled by hiring (or credibly threatening to hire) silent watchers who can emerge to issue challenges if something seems to be wrong.
  • Speed demon attacks can be eliminated by careful design of the rollup protocol.

That leaves forking censorship attacks. These are difficult to analyze, in part because a successful attack would create strong public evidence that a majority mining cartel was actively forking the chain to carry out attacks. The same cartel would be in a position to carry out other attacks, such as double-spends. Any chain that sees such a majority mining cartel in operation is already in trouble.

Ed Felten

Co-founder, Offchain Labs. Kahn Professor of Computer Science and Public Affairs at Princeton. Former Deputy U.S. CTO at White House.