Blockchain consensus examined in the context of bootstrapping a blockchain

Martin Worner
Published in TgradeFinance
Apr 30, 2019 · 12 min read

Co-authored with Ethan Frey

We examine bootstrapping a blockchain, focusing in particular on what needs to be considered around consensus mechanisms and the impact those choices have on launch.

There has been much discussion about different consensus mechanisms and the merits of each one. Established chains like Bitcoin, Ethereum, Tezos, Lisk or EOS have been the subject of much debate, and in those discussions some of them are even compared to enterprise chains.

What is often neglected in these discussions is the important question at the beginning, namely: what is the best, most secure consensus mechanism in the context of the bootstrap phase?

When we refer to the bootstrap phase, we mean the period when a blockchain has only just been born.

At bootstrap the concern is that few people know about the blockchain and its market value is low, so it is key to have a consensus mechanism that works in this phase.

Bitcoin’s consensus works because there are millions of dollars’ worth of computing power running its hashing algorithm to make it secure.

EOS, for example, is secure because they picked 21 validators who have millions, or even hundreds of millions, of dollars at stake.

What if we only have a supply of tokens worth €1 million? What does that mean for securing a blockchain?

We examine the different consensus mechanisms, focusing on proof of work, proof of stake and their variations such as proof of authority, and how they work from bootstrap to a mature blockchain.


Perhaps the best known, most established and stable consensus mechanism is proof of work, the mechanism used by Bitcoin.

It is, or was, ingenious, and what it solved was the Sybil problem. The Sybil problem particularly affected BitTorrent in the early 2000s, before Bitcoin came about, where it caused issues around distributed hash tables.

The issue is that if everyone has a vote and anyone can create a node, what happens when there are a thousand people and someone fabricates another thousand fake identities to win the vote? The observation was that it should not be possible to create a persona that has a vote at no cost.

To solve the problem observed in BitTorrent, it was established that CPU cycles equate to votes: whoever is doing hashing work votes, through that work, for which blockchain fork is the longest.

Now for the clever parts. First of all, there is a monetary cost to casting a vote. Secondly, the same vote can only be spent in one place, so we cannot vote on two forks at the same time with the same compute power.
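As a minimal sketch of the mechanism (illustrative only; real Bitcoin headers and difficulty encoding differ), mining is a brute-force search for a nonce whose hash falls below a target. Finding one burns real compute, and work spent extending one fork cannot simultaneously extend another:

```python
import hashlib

def mine(block_header: bytes, difficulty_bits: int) -> int:
    """Search for a nonce so that sha256(header || nonce) has at least
    `difficulty_bits` leading zero bits. Each attempt costs real CPU
    work -- the work itself is the vote."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # proof found: anyone can verify it with one hash
        nonce += 1

# Toy difficulty (~65k attempts on average); Bitcoin's is astronomically higher.
print(mine(b"example-header", difficulty_bits=16))
```

The asymmetry is the point: producing the proof is expensive, while verifying it takes a single hash.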

This solved the problems experienced and at the same time introduced the concept of no gatekeepers. There are no checks, so anyone can anonymously start running the hashing algorithm and become part of the voting set. There is nobody granting authority, meaning it is a very decentralised model, and it was a genius move.

There are some downsides, widely discussed in the media, around wasted electricity, which comes about when the network runs on the order of 10¹² to 10¹⁵ hashes per second.

Consider starting a node that my two friends and I run. Someone else likes it, takes our software, and starts hashing on their computer. This is how it grows: more people join so they all have a few more votes, we start buying big computers to get more votes, other people do the same, and the network effect is quick to see. The fact that it is very easy to join reflects the interest in the network, and its power.

Thus, in terms of how you finance your blockchain, you begin with a low barrier to entry, since anyone can start running a node, and this scales easily as more people join.

Ideally, proof of stake would emulate that too, but the downside is the security of the blockchain. Bitcoin is ultra secure because it commands probably 90% of the world’s SHA-256 hash power currently in use. Nobody can come in with a whole bunch of unused hash power and start building another chain; an attack could only start if the miners themselves began to fork it.

This has been observed in the forks of Bitcoin: some Bitcoin miners can move over to Bitcoin Cash and quickly start out-hashing the miners on that chain, and this is not stable.


There is a table published at https://www.crypto51.app/ which shows how much it costs to mount a 51% attack on various chains. What this means is that if I can rent the necessary hashing power, I can out-run the chain: I could go back and produce a parallel chain that ends up longer than the existing one. What that allows me to do is double spend, and that is the critical risk. It is worth remembering that if a chain is vulnerable to a 51% attack then it is decentralised; the question is whether it is cost effective to mount one.
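The risk can be made concrete with the catch-up calculation from the Bitcoin whitepaper (section 11): the probability that an attacker controlling a share q of the hash power ever overtakes the honest chain from z blocks behind. A sketch:

```python
import math

def attacker_success(q: float, z: int) -> float:
    """Probability that an attacker with hash-power share q ever catches
    up from z blocks behind (Nakamoto's Gambler's Ruin calculation)."""
    p = 1.0 - q
    if q >= p:
        return 1.0  # a majority attacker always wins eventually
    lam = z * (q / p)
    prob = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        prob -= poisson * (1.0 - (q / p) ** (z - k))
    return prob

print(attacker_success(0.10, 6))  # ~0.00024: six confirmations suffice
print(attacker_success(0.45, 6))  # ~0.77: six confirmations are not enough
```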

Let’s consider the following: I pay you in Bitcoin in a trade for a bunch of goods, and you wait six blocks, roughly one hour, for the payment to be done.

Now suppose there are a bunch of big trades, with high-value items sold and handed over for Bitcoin. I could use that money to pay for enough hash power to exceed the entire hash power of Bitcoin for one day, mine a parallel chain starting one day back, and in it send the money to someone else rather than to you. The result is that you don’t get the money, because the parallel (longer) chain wins. Note this is really hard to do in Bitcoin, as it would be very costly.

For other, smaller, Bitcoin-derived blockchains such as Bitcoin Cash it is easier to do. By running parallel chains you can break a lot of security; the further back an attacker can fork, the less secure the network is, so we have to wait a day, a week, a month, or a year to trust that the money has actually arrived and is not going to change.

Looking at this further, Bitcoin Gold set out its stall with anti-ASIC, CPU-optimised algorithms for hash generation. What this exposed was the ease of attacks on CPU-bound proofs of work.

The scenario is that anyone can rent CPU cores from Amazon to compete against your local machine, even though there is a cost to this. The bigger threat is botnets, which can run CPU-bound workloads on huge numbers of other people’s machines.

Running botnets is much cheaper than buying CPU time from Amazon, Google and the like, making it the obvious route for anyone wanting to ramp up hashing power.

There are a number of ways of running botnets.

There have been botnets that run JavaScript in the browser, triggered by web ads: as the ads load, they start mining in your browser using your processing power (CPU). So there are lots of ways of harvesting CPU from unsuspecting users.

If you have a CPU-optimised platform with a few thousand people honestly mining and earning a few hundred dollars, and someone then sets up a few million computers using botnets, they would out-hash the honest miners and wipe your chain out as they take it over. Such a chain is not secure.

Some algorithms are GPU-optimised, so they can run very quickly. GPUs, the processors on graphics cards, can run hashing workloads around 1,000 times faster than CPUs; since one GPU can do the equivalent of 1,000 CPUs, the rented-CPU attack becomes far less useful.

In a similar way that CPUs were harvested to boost hashing power, GPUs are rented out to mining pools, and the more that join, the greater the hashing power.

The next step up in hashing power is the specialist chips known as ASICs, which have been optimised for a specific hashing algorithm and run roughly 1,000 times faster than GPUs; these ASICs are what the main Bitcoin miners use.


ASICs (application-specific integrated circuits) have the advantage that all of their hash power is dedicated to the one algorithm.

CPUs, and indeed GPUs, are vastly inferior to ASICs, meaning a significant investment is needed to build a mining operation with dedicated ASICs, and the number of people or organisations running them is relatively stable. This brings some stability against random people jumping in (possibly with bad intentions); however, it runs against the point of proof of work, namely that it should be possible for anyone to run a node, with the network becoming more stable the bigger it gets.

To summarise: the proof of work consensus mechanism introduces a cost to hashing, and that makes it robust. The challenge in the bootstrap phase is that making mining accessible to a few hundred people, and not requiring too much hash power at the beginning, helps adoption, but it exposes the chain to a determined attack until the chain reaches a critical mass where mounting one becomes cost prohibitive.

To address the concerns about high resource costs, an alternative consensus was proposed: proof of stake.


Proof of stake came about when some clever people realised that it was possible to secure a chain by creating scarcity of tokens and linking voting rights to those tokens.

There are two approaches to this. The first was to adopt the Nakamoto consensus used by Bitcoin, where the longest chain wins. This works well where a lot of computing power is required and you can only commit to one fork. Problems began to emerge with Lisk and Bitshares, which adopted Nakamoto consensus, and a number of attacks manifested. It is possible to create a number of forks and at a later point reveal one and declare it the longest; there are “nothing at stake” attacks and “long range” attacks, which can happen at any time and allow the chain to be rewritten. These show that the naive proof of stake consensus model is broken.
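The “nothing at stake” problem is easy to see in a toy expected-value model (our own illustration, with made-up numbers): if signing costs nothing, signing every fork is never worse than picking one, so rational validators support all forks and the chain cannot converge:

```python
def expected_reward(num_forks: int, reward: float, penalty: float,
                    sign_all: bool) -> float:
    """Toy model: exactly one of `num_forks` forks eventually wins, each
    with equal probability. Signing a block on the winning fork earns
    `reward`; `penalty` is the slashing cost for provably signing on
    more than one fork."""
    if sign_all:
        return reward - penalty  # always signed the winner, but slashable
    return reward / num_forks    # signed one fork, wins 1/num_forks of the time

# Naive proof of stake has no penalty, so signing everything pays more:
print(expected_reward(4, reward=10.0, penalty=0.0, sign_all=True))   # 10.0
print(expected_reward(4, reward=10.0, penalty=0.0, sign_all=False))  # 2.5
# A slashing penalty flips the incentive back to honest behaviour:
print(expected_reward(4, reward=10.0, penalty=20.0, sign_all=True))  # -10.0
```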

The second approach is Byzantine fault tolerant proof of stake, which has been implemented by Cosmos with their Tendermint engine; the Ethereum Casper designs are attempting it, though it is questionable whether they will ever deliver. EOS, too, claims to have some Byzantine fault tolerance checkpoints, while POA Network’s proof of authority has a very clever HoneyBadger implementation which draws on the Tendermint literature.

The power of the Byzantine fault tolerant implementations is that they can tolerate up to a third of the nodes being malicious actors and can resist many of the attacks thrown at them. The consensus algorithm involves voting on every block multiple times, which means people are quickly punished for voting on two forks: if you are found to have voted on two blocks at the same height, this is quickly detected and punished, establishing that a malicious act has taken place.

There is no forking and every block is final. This means quicker finality, and it ensures that people can be quickly punished for voting on the wrong chain.
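A minimal sketch of how such double signing is caught (modelled loosely on Tendermint-style evidence handling; the structure and names here are our own simplification): two signed votes by the same validator for the same height and round but different blocks are, by themselves, proof of misbehaviour that anyone can submit:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Vote:
    validator: str
    height: int
    round: int
    block_hash: str  # a real vote would also carry the validator's signature

def is_equivocation(a: Vote, b: Vote) -> bool:
    """Two votes by one validator at the same height and round for
    different blocks are self-contained evidence of a double sign."""
    return (a.validator == b.validator
            and a.height == b.height
            and a.round == b.round
            and a.block_hash != b.block_hash)

v1 = Vote("validator-1", height=100, round=0, block_hash="aaa")
v2 = Vote("validator-1", height=100, round=0, block_hash="bbb")
print(is_equivocation(v1, v2))  # True: submit as evidence, the stake is slashed
```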

To guard the chain against long range attacks, bonded tokens are held for a period. For example, if tokens are bonded for two weeks, you only ever need to go back at most two weeks to a trusted recent header in order to prove the next header, averting potential long range attacks.
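This gives a simple rule a verifier can apply, sketched below under the assumption of a two-week unbonding period (a simplification; real light clients, such as Tendermint’s, also check validator-set continuity): a trusted header is only usable while the validators who signed it still have bonded tokens to lose:

```python
from datetime import datetime, timedelta, timezone

UNBONDING_PERIOD = timedelta(weeks=2)  # bonded tokens stay locked this long

def header_still_trustworthy(header_time: datetime, now: datetime) -> bool:
    """A trusted header older than the unbonding period is useless: the
    validators who signed it may have unbonded and have nothing at stake."""
    return now - header_time < UNBONDING_PERIOD

now = datetime.now(timezone.utc)
print(header_still_trustworthy(now - timedelta(days=3), now))   # True
print(header_still_trustworthy(now - timedelta(days=20), now))  # False: re-sync
```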


Let’s examine what we need to secure the chain. Take a set of people who hold the initial set of tokens and are willing and able to secure the chain. The assumption is that they are trusted people, have the interests of the chain at heart, and are able to run validators. They then stake their tokens to the validators they run, and this determines the voting rights. The assumption is that the token holders are benign actors, but what happens if they are malicious? How hard is it to be a malicious actor? Firstly, running a validator requires a certain level of technical competence.

There is a risk that once the chain is established, the tokens are then listed on exchanges. Let’s consider the following scenario: I have invested in a blockchain and I see a rival blockchain as a threat. What can I do? I buy up a block of tokens, at minimum a third of the stake plus one, or closer to a half to be sure of succeeding, and I stake them. I then take the validators offline and the whole chain comes to a grinding halt; nothing moves. If I want to be vindictive or malicious, I buy up two thirds of the tokens and begin forking the chain, which allows me to rewrite history and commit other such shenanigans.

As we have seen, the initial token-holding validator set needs the technical ability to run the chain, and could in principle go rogue, and there remains a question about the ability of bad actors to buy up tokens to manipulate the chain (1/3+ of the stake to halt it, 2/3+ to alter its history).
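A back-of-the-envelope sketch of those thresholds (our own illustration, with invented figures): an attacker staking x fresh tokens against an existing bonded set B ends up with a voting share of x / (B + x), which is why halting can require up to half the existing bonded stake in fresh tokens, and rewriting twice it:

```python
def attack_budget(bonded_tokens: float, token_price: float) -> None:
    """Cost of stake-based attacks when the attacker stakes fresh tokens
    against an existing bonded set B. Voting share is x / (B + x), so:
      halt    -> x/(B+x) > 1/3  =>  x > B/2
      rewrite -> x/(B+x) > 2/3  =>  x > 2B
    Ignores the price impact of actually buying that many tokens."""
    halt = bonded_tokens / 2
    rewrite = 2 * bonded_tokens
    print(f"halt the chain:  {halt * token_price:,.0f}")
    print(f"rewrite history: {rewrite * token_price:,.0f}")

# A young chain: 1m tokens bonded, trading at 0.50 each.
attack_budget(1_000_000, 0.50)  # halt: 250,000   rewrite: 1,000,000
```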


The economic design is very important in the initial stages too. We want to ensure that we build a system that incentivises people to run the chain for the greater good over their own self-interest. The rewards must be designed to give an economic incentive to run the chain efficiently and uphold its values, while not being so high that users of the chain see them as a high tax on doing business.

If there is not sufficient reward paid to the validators, then the quality of the software running the chain declines, as the validators have much less incentive to invest in and maintain it.

Delegated proof of stake is where token holders can delegate their tokens to validators, sharing in the rewards while the validators do the work. Validators encourage delegators because delegation increases the size of their stake and thus the rewards. This was seen in Cosmos, where validators ran nodes and got token holders to delegate; the economics were that validators would typically take a 20% commission on the rewards for doing the work, with delegators receiving the remaining net amount. A mutually beneficial system.
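The arithmetic of that split is straightforward; a sketch (simplified relative to Cosmos’s actual distribution logic, which also handles proposer bonuses and a community tax):

```python
def split_rewards(block_reward: float, validator_stake: float,
                  delegated_stake: float, commission: float) -> tuple[float, float]:
    """Split a block reward between a validator and its delegators.
    Rewards accrue pro rata to stake; the validator takes `commission`
    off the top of the delegators' share for doing the work."""
    total = validator_stake + delegated_stake
    validator_share = block_reward * validator_stake / total
    delegator_gross = block_reward * delegated_stake / total
    fee = delegator_gross * commission
    return validator_share + fee, delegator_gross - fee

# Validator with 100k of its own stake, 900k delegated, 20% commission:
val, dels = split_rewards(1000.0, 100_000, 900_000, commission=0.20)
print(val, dels)  # 280.0 for the validator, 720.0 shared among delegators
```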

Some of the validators began dropping their commission rates to zero, as they held sufficient tokens of their own and did not need the funds from delegators. This caused other validators, who did need the tokens, to drop their commissions to attract delegators, and they began to run at a loss. The only option these validators have to break the cycle is to buy a bunch of tokens so that they rely less on the delegators.

Thus in the initial phase it is very important to align the interests of the validators and balance the economic incentives between validators and delegators. We must also consider that in the early days, while the total chain value is low, it is easier to buy up tokens for nefarious activity.


There are similarities with smaller companies that are not listed on stock exchanges because they would be vulnerable to being bought out. Smaller capitalised companies have the option of remaining private and selling equity to investors aligned with their interests; this is not an option in crypto, as there is no concept of a privately traded token. Tokens exist on public chains and are bought by anonymous investors (some STOs have a concept of whitelisting to ensure they comply with local regulations).

How does a small chain survive the early days using a proof of stake consensus? With good economic design and a strong initial validator set; however, even with the most robust platform in place, the chain is still vulnerable to an opportunistic or malicious actor who can buy big chunks of tokens.

References

51% attack on Ethereum Classic: https://mobile.twitter.com/hosseeb/status/1082815549132816384

Cost of a 51% attack on various chains: https://www.crypto51.app/
