Some thoughts, some numbers: double-spend attacks on Ethereum
Though I’m not big on disclaimers in cryptocurrencies — I just assume that everyone is always conflicted — given the somewhat provocative title of this post and the follow-on posts that I intend to share, it is probably worth noting that I hold approximately 0.4 Ether (ETH) (~$60), strictly for research purposes, through my personal holdings and via Platt Advisors SARL. I do not have, and do not plan to take, any other ETH positions either personally or through related holdings (long or short), directly or through derivative exposure. Believe it or not, I am just in this for the intellectual exercise. Furthermore, nothing in this post, or any other materials provided by myself or Platt Advisors SARL, should be considered financial advice. Trading cryptocurrencies is risky and prices may go down as well as up. Please do your own research before putting your funds at risk.
The recent explosion in the introduction (as well as the high-profile cancellation) of new stablecoins on Ethereum has made me very curious about their effect on the health of the Ethereum network. One such effect is the balance between the economic consideration paid to Ethereum miners through seigniorage (block rewards) and transaction fees, and the potential economic benefit of dishonest behaviour by those same miners. This is the so-called “Top-Heavy” issue, noted in 2014 by my friend Tim Swanson. I started to write a single post on the subject, but the further I dove in, a) the more complex it became, and b) the more of a beast it became to write and read. So I’ve decided to break this journey up into smaller pieces, in an attempt to build a model with which to analyse this risk. Step 1: let’s take a look at 51%, or double-spend, attacks in Ethereum. I am conducting a lot of the research as I write, so while I have a plan for how I will present it, that plan is subject to change if I find something along the way that needs further focus. This post is intended to be more educational, providing background for subsequent posts.
Craig S Wright’s 2008 Whitepaper laid out a combination of Adam Back’s Hashcash and a linked list of time-stamped transactions distributed across member nodes participating in the network. The breakthrough in this technique was that combining the real-world economic cost imposed by Hashcash with a full database of transactions, hashed into a chain of batches and shared amongst all interested participants, could help align participants in the network and disincentivise a double-spend (when a dishonest actor attempts to send the same digital asset more than once). Obviously, this is Bitcoin, obviously it’s slightly more complex than that, and obviously Craig Wright isn’t Satoshi, but we soldier on because we highlighted the important part: Proof-of-work.
The Bitcoin Whitepaper elaborates on how Proof-of-work incorporates a cost to the “voting” mechanism that determines which participant is able to present the next valid block to the network:
Proof-of-work is essentially one-CPU-one-vote. The majority decision is represented by the longest chain, which has the greatest proof-of-work effort invested in it. If a majority of CPU power is controlled by honest nodes, the honest chain will grow the fastest and outpace any competing chains. To modify a past block, an attacker would have to redo the proof-of-work of the block and all blocks after it and then catch up with and surpass the work of the honest nodes.
One thing to note here is the discussion of attackers, who need to redo previous work and outpace the honest miners to produce the longest chain, which then becomes the valid reference for subsequent blocks. It bears repeating that the fact that “bitcoin is backed by nothing” — technically not entirely true, but that’s for the next post — is what makes a reorganisation in an anarchic blockchain possible, and thus makes the chain less reliant on centralising parties. Neato!
This attack, whereby an attacker redoes the work to create a competing chain which becomes the longest by work, is the infamous 51% (or double-spend) attack. Because Bitcoin, and really all permissionless blockchains, rely on statistics, it is possible that an attacker could mount a double-spend attack with less hashing power than the rest of the network. The paper discusses the statistics and likelihood of this in section 11, inventively titled “Calculations”. I’ll spare you the details, but the paper gives readers a proof, using the Gambler’s Ruin problem and the Poisson distribution, of the probability that an attacker with probability q of finding the next block will catch up to the longest chain from z blocks behind (blocks which, in Bitcoin, should arrive roughly every 10 minutes). The calculation presented by Satoshi is as follows:
Long story short, it is very useful, though not entirely correct (what? Satoshi wrong? Blasphemy!). Ozisik and Levine (2017) correct it to calculate the probability not of catching up, but of becoming the longest chain, here. We’ll ignore, for the time being, their expansion into budget constraints for energy and upgrades, and go with their modified equation for catching up to, then surpassing, the honest chain:
I’ve also written a modified version of Satoshi’s code using this formula.
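For readers who prefer code to formulas, a sketch of such a modification in Python might look like the following. The function name is mine, and the “+ 1” in the exponent reflects my reading of Ozisik and Levine’s correction (the attacker must not merely draw level with the honest chain, but pull one block ahead of it), so treat this as illustrative rather than definitive:

```python
import math

def attacker_success_probability(q, z):
    """Probability that an attacker with share q of total hashing power,
    starting z blocks behind, catches up with and then surpasses the
    honest chain (i.e. gets one block ahead of it)."""
    p = 1.0 - q            # honest miners' chance of finding each next block
    lam = z * (q / p)      # expected attacker progress while honest miners find z blocks
    prob = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        # Having found k blocks, the attacker must make up the remaining
        # (z - k) blocks AND pull one ahead, hence the "+ 1" versus
        # Satoshi's original exponent of (z - k).
        prob -= poisson * (1.0 - (q / p) ** (z - k + 1))
    return prob

# Success probabilities for a range of attacker shares (q) and deficits (z):
for q in (0.10, 0.25, 0.40):
    print(q, [round(attacker_success_probability(q, z), 6) for z in (0, 2, 6)])
```

Running it, you can watch the probability fall away rapidly as z grows when q is small, and much more slowly as q approaches 0.5.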
The short version of what this is telling us is that the likelihood of an attacker being able to catch up with, and surpass, the longest valid chain diminishes as the number of blocks they need to catch up (z) increases, ceteris paribus. It also shows that as an attacker’s probability of finding the next block increases (i.e. their hashing rate relative to the honest miners’), this first effect becomes less pronounced. Once the probability of the attacker finding the next block exceeds that of the remaining honest miners, the model gets a bit funny and probabilities exceed 1, but we’ll ignore that part.
If, like me, you’re more into graphical representations of these things, you can see this below:
Note that I’ve greyed out everything above 0.525 as we assume that anytime an attacker has more resources than the rest of the network they could ride roughshod over it.
So, recapping up to here: Bitcoin is designed as a solution to the double-spend problem. It is effective because it imposes a cost on becoming a miner, but a miner with the resources to outspend the rest of the network can overpower the honest miners and write a double-spend.
Cool. So you’ve led me to here with promises of Ethereum FUD and spent 1000 words on Bitcoin. Get on with it Colin!
The goals of Ethereum have always been very different from those of Bitcoin; it was deliberately designed around the notion of a world computer, as opposed to peer-to-peer electronic cash.
One of the design choices made in Ethereum was significantly faster blocktimes (or the average time between the grouping of transactions into blocks by miners), currently around 14–15 seconds.
Going back to our previous discussion, all else being equal, faster blocktimes mean that within the same period of time more blocks are produced, increasing the number of blocks (z) an attacker must catch up with, and making it less likely that they could successfully perform a double-spend.
Of course, we know that in the real world all else is not equal, and that there are trade-offs. The first, discussed by Vitalik Buterin in a very considered 2015 post, is the increased ‘stale rate’ associated with faster blocktimes. Stale blocks occur when two or more miners propose valid candidate blocks to the network to become the next block in the longest chain, but those blocks are ultimately not accepted. Stale blocks can be a sign of an attempted double-spend; usually, however, they occur naturally, within a level of tolerance, simply because of the probability of two blocks being found very close together in time. The fact that miners around the world need to communicate with each other to share blocks means that the time it takes to send blocks through the network (network propagation) is a consideration. Jameson Lopp had an interesting chart of the propagation time in the Bitcoin network for a block to reach 50% and 90% of all nodes:
Because of this, faster blocks will face this issue more frequently than slower blocks. To counter this, Ethereum introduced “uncle rewards” for blocks which were stale but ultimately included in the blockchain.
How much more often, you ask? To give a sense of scale, I found 10 occurrences of stale blocks on the Bitcoin network since June 2017 (~0.03%) at blockchain.info, whereas Ethereum’s daily stale rate has ranged between 3.5% and 28.2% since July 2015, averaging ~11.1% since June 2017. I’ve included a handy chart below:
The stale rate is important with regard to double-spend attacks for a few reasons. First, a high rate of stale blocks will further incentivise miners to centralise under pools to reduce their risk of producing stale blocks. The second is more psychological: a network accustomed to a higher stale rate may become less reactive to stale blocks resulting from a double-spend attempt. Furthermore, uncle rewards may give a would-be attacker partial compensation for ‘testing the waters’ to estimate their ability to successfully implement a double-spend, by allowing them to claim part of the block reward through submitting the stale block for the uncle reward. I’ll caution, however, that both of these topics deserve much more attention than I can give them here.
The third reason, which we’ll discuss, is that stale blocks lower the “effective” hashing power of the network. To explain this, we should first briefly reiterate how “Nakamoto Consensus” works. Andreas Antonopoulos gave a very detailed explanation in London a few years ago, video here:
When a new candidate block, building on the last known valid block, is submitted to the network, miners effectively drop whatever they are doing, pick up that block, and start working anew, treating that candidate block as the latest block. If a situation arises where two potentially valid candidate blocks (block A and block B) are submitted by honest miners to all other miners, those miners will need to choose which version to support. If we assume that half of the miners support block A and the other half support block B, there is an equal likelihood that the next block found will have been built off block A or block B. Let’s say that miners working on block A find block C_A, a valid block built off block A (with block B as an uncle), and submit it to the rest of the network before miners on block B find a block. The miners who were working on block B will stop what they are doing and begin working on a valid block built on C_A, and block B will become an uncle block.
All’s well that ends well, right? Yes and no. During the time when miners are split between block A and block B, the network is “effectively” operating at half the capacity it would have had if everyone were working on only one or the other. If we assume that valid Ethereum blocks arrive four times per minute (5,760 per day), that roughly every ninth block is stale (~11%) with a perfect 50–50 split of miners between what becomes the next valid block and the uncle block, and that these forks never go beyond one block deep (i.e. we don’t have a situation where both block A and block B produce a next valid block and miners continue to work on both), we see the effective hashing power of the network fall by approximately 5.5% relative to a scenario with no stale blocks. Again, probably not catastrophic when the attacker has a relatively small share of hashing power, and possibly offset by the positive effect of having more blocks for an attacker to catch up with (z) in Satoshi’s Gambler’s Ruin problem.
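The back-of-the-envelope arithmetic here fits in a few lines (the 50–50 split and the single-block-deep forks are, as in the text, simplifying assumptions):

```python
blocks_per_day = 4 * 60 * 24   # four blocks per minute = 5,760 per day
stale_rate = 0.11              # roughly every ninth block event forks (~11%)
wasted_fraction = 0.5          # half the network mines the losing branch during a fork
effective_loss = stale_rate * wasted_fraction
print(f"{effective_loss:.1%} of hashing power effectively wasted")  # 5.5%
```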
Returning again to Vitalik’s excellent blog post on this (really, do go read it), he looks at the probability of reaching a ‘requisite security margin’, or the confidence that a transaction won’t be overwritten by a block reorg. He goes into depth on the effect of rewards and of time elapsed since the transaction on that security margin. The chart I found interesting was the one where he assumed that the required security margin for a fast and a slow blockchain would be equivalent, and plotted the relative certainty of having achieved it over time:
Vitalik showed that the relative certainty of having achieved “good enough” security over a short period is likely lower in a blockchain with shorter blocktimes than in one with longer blocktimes, but ultimately surpasses it. This will be quite important in my subsequent posts on the subject.
Taking a slight modification of this, if we again look at our probability of a successful double-spend attack in Bitcoin, but now only at higher probabilities of the attacker winning the next block (q), we have our familiar chart:
Now let’s compare this with Ethereum on a time basis, where Bitcoin’s blocktime is 600 seconds and Ethereum’s is 15 seconds (i.e. 40x more blocks in Ethereum over the same period), and attempt to account for stale blocks. If we assume Bitcoin has no stale blocks, while Ethereum has 11.1% stale blocks, with the stale losses spread over the honest miners’ likelihood of finding the next block (p), we see the following:
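Exactly how the stale-rate adjustment should be applied is a judgment call; the sketch below assumes the honest side’s effective hashing power is reduced by half the stale rate (per the effective-capacity argument earlier) and renormalises the attacker’s share accordingly. The function names and the specific adjustment are my own assumptions, not a definitive model:

```python
import math

def success_prob(q, z):
    """Catch up from z blocks behind, then pull one block ahead; the
    Poisson term is computed in log space so that large z (hundreds of
    Ethereum blocks) does not overflow."""
    p = 1.0 - q
    lam = z * (q / p)
    total = 0.0
    for k in range(z + 1):
        log_poisson = -lam + k * math.log(lam) - math.lgamma(k + 1)
        total += math.exp(log_poisson) * (1.0 - (q / p) ** (z - k + 1))
    return 1.0 - total

def ethereum_equivalent(q, btc_z, stale_rate=0.111):
    """Attacker success over the wall-clock window of btc_z Bitcoin
    blocks, assuming 15-second Ethereum blocks (40x as many) and a
    stale-rate haircut to the honest miners' effective hashing power."""
    eth_z = btc_z * 40
    # Assumption: forks waste roughly half of the stale-rate share of the
    # honest side's work, so reduce p and renormalise the attacker's
    # effective chance of producing the next block.
    p_eff = (1.0 - q) * (1.0 - stale_rate / 2)
    q_eff = q / (q + p_eff)
    return success_prob(q_eff, eth_z)

# An attacker with 45% of raw hashing power, over six Bitcoin blocks' worth of time:
print(ethereum_equivalent(0.45, 6))
```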
As predicted, Ethereum seems to be less susceptible to double-spend, ceteris paribus, when an attacker is less likely to produce the next block. However as the likelihood (and determination) of the attacker increases, so too does the probability of success, with a lower degree of reliance on the number of blocks to catch-up with. Eek.
Our last question. How much does it actually cost to run a 51% attack on Ethereum?
This post is already getting rather long, so I will cover this in more depth later. But https://www.crypto51.app/ provides an answer, based on NiceHash rates. The easy answer is that, at this point, it is probably at least $100k per hour. It is worth noting that the equivalent attack on Bitcoin is estimated at roughly 2.7x that cost, at the time of writing.
Given that there are a lot of moving parts and constraints, however, this is an insufficient answer, though that is another subject for another day.
Still, it leads us quite nicely to the conclusion that double-spend attacks are costly, and the required effort would be considerable. In Ethereum’s case, however, there may be even more considerations.
So let’s recap: faster blocks mean more stale blocks (bad), but more certainty over time (good). And, as we saw earlier, an attacker with more hashing power will be less concerned about the number of blocks they need to catch up with (quite bad, but mitigated by the financial and resource implications of actually achieving this). So, FUD, I guess. I joke: a) I hate that term, and b) don’t completely write this off yet.
From here, I want to use my next post to look at pricing a “mining swap” based on some of the principles and analyses presented here. After looking at mining, I want to return to double-spends on Ethereum, but with a focus on application tokens (e.g. ERC20), bringing it all back to stablecoins to, hopefully, present a model showing the method and expected profitability of double-spending Ethereum stablecoins.