The DNA of Blockchain Technology and How Bitcoin Works. Part 2: Consensus Algorithms
As we described in the previous article “The DNA of Blockchain Technology and How Bitcoin Works. Part 1: Terms and Definitions”, a blockchain is a decentralized system with no central authority and no single point of failure, but with one serious task: reaching general agreement among all network users. Consensus algorithms were created to solve exactly this task, and they are one of the most important aspects of blockchain technology we want to help you understand. The speed of transaction confirmation, the network’s security, its range of applications and scalability, as well as its environmental footprint and many other important characteristics all depend on the consensus algorithm.
A consensus algorithm controls how new blocks are added to the chain, ensures that the information recorded in a block is correct, and maintains the security and efficiency of the network. It distributes tasks between network nodes, checks the quality of their execution and makes sure the transactions are not corrupted in any way.
Blockchains are intentionally made to be immutable: once a transaction has been made and its data recorded in a block, that data can no longer be changed or tampered with. The validation of transaction data is performed by the consensus algorithm, and the key point of this principle is that information about each transaction is only accepted once a majority of the network (more than 50% of its participants, computing power or stake, depending on the algorithm) agrees on it.
Let’s take a look at the most common consensus algorithms and how they work:
- PoW (Proof-of-Work)
- PoC (Proof-of-Capacity)
- PoS (Proof-of-Stake)
- LPoS (Leased Proof-of-Stake)
- PoI (Proof-of-Importance)
- PoA (Proof-of-Authority)
- DPoS (Delegated Proof-of-Stake)
- BFT (Byzantine Fault Tolerance)
Proof-of-Work
The theoretical foundation of the Proof-of-Work algorithm belongs to Cynthia Dwork and Moni Naor, who proposed it as a tool for fighting spam.
PoW is carried out through complex computations performed on powerful processing hardware. In this way, PoW protects the network from unscrupulous participants and prevents spam and DoS attacks.
Each network node (miner) responsible for checking blocks works to solve a complex cryptographic puzzle (a mathematical calculation whose result can be verified by any network user) using its own computing resources. The first miner to solve the puzzle confirms the transactions and writes the block into the chain. Thus, miners compete with each other for the right to create the next block of transactions in the blockchain. The order of the created blocks cannot be broken, which also increases the security of the system.
The miner who creates the block first receives cryptocurrency tokens as a reward for the time and energy spent solving the puzzle (for example, a Bitcoin miner receives bitcoins as a reward).
The difficulty of the cryptographic puzzle is adjusted over time: as more miners join and the total computing power of the network grows, each new block becomes harder to solve. This reward system encourages miners to behave honestly and ensures the security of the network, while the newly minted cryptocurrency is added to the total supply of coins in circulation.
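To make the mechanism concrete, here is a minimal, illustrative Python sketch of the idea behind PoW (not Bitcoin’s actual implementation; the block data, difficulty value and helper names are hypothetical): the miner increments a nonce until the SHA-256 hash of the block data falls below a target, and any other node can verify the result with a single hash.

```python
import hashlib

def mine_block(block_data: str, difficulty_bits: int = 18):
    """Search for a nonce such that SHA-256(block_data + nonce) is below the target.
    A smaller target (more difficulty bits) means a harder puzzle."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest  # the proof: expensive to find, cheap to verify
        nonce += 1

def verify(block_data: str, nonce: int, difficulty_bits: int = 18) -> bool:
    """Any node can check the proof with a single hash computation."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return int(digest, 16) < 2 ** (256 - difficulty_bits)

nonce, digest = mine_block("transactions in block 1")
print(nonce, verify("transactions in block 1", nonce))  # prints the found nonce and True
```

With 18 difficulty bits the loop needs a few hundred thousand hashes on average; real networks continually tune the difficulty so that blocks appear at a roughly fixed interval regardless of how much hash power joins.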
Drawbacks of PoW
Given the heavy use of computational resources required for mining, PoW is considered costly, wasteful and inefficient. When thousands of miners work separately on solving the same cryptographic puzzle, the duplication is obvious: the only block that ultimately has any value is the one produced by the miner who solved the puzzle first. Therefore, with each new block, the vast majority of the computation performed across the network is simply wasted.
By some estimates, the total cost of electricity consumed by Bitcoin mining exceeds $500 million a year. In fact, research back in 2014 showed that the power used for mining bitcoins was comparable to the average electricity consumption of Ireland.
And in 2017, the power consumption for a single transaction in the Bitcoin system averaged 163 kWh. This amount of energy would be enough to sustain a family of three living in a small one-story house for five and a half days. And this is solely the Bitcoin network. Today, there are many new cryptocurrencies that use some form of PoW algorithm as well.
The hardware used in the mining process is usually a complex and expensive proprietary kit. Miners are actively buying this equipment, which, in turn, stimulates the development and mass production of the most advanced mining solutions, but also leads to a significant increase in prices for the already expensive equipment.
Most miners don’t have enough resources to afford the fastest and most powerful ASICs, while those who do get far more profit and influence. As a result, today Bitcoin is not as decentralized as originally intended.
PoW is not immune to the risk of excessive centralization of power, which conflicts with the fundamental principles of blockchain technology. Little can prevent miners from conspiring to take control of the majority of the network’s computing power and add false data to the blockchain for their own benefit.
Proof-of-Capacity
The basic principles of the PoC algorithm are not much different from PoW. The main difference is that instead of a node’s computational power, PoC uses its memory or free disk space. Before mining starts, the algorithm fills large blocks of data on the disk with precomputed hashes; the more hashes a miner has stored, the higher their chance of earning the reward.
Thus, instead of expensive and energy-intensive equipment (GPUs, power supply units and motherboards), all the miner needs is a simple laptop and some free disk space. Due to the universal nature of data storage and significantly lower energy costs, Proof-of-Capacity is considered to be a fairer and more environmentally friendly alternative to Proof-of-Work.
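As a rough illustration of the difference, here is a hedged Python sketch of the plot-then-scan idea (a simplification; real PoC schemes such as Burstcoin’s use far more elaborate plot files and “deadline” calculations, and the file name and account label below are made up): hashes are precomputed once and written to disk, and for every new block the miner merely scans the stored values to find the best match for the block’s challenge.

```python
import hashlib

def plot(path: str, account: str, num_nonces: int = 10_000) -> None:
    """'Plotting': precompute hashes once and store them on disk."""
    with open(path, "w") as f:
        for nonce in range(num_nonces):
            h = hashlib.sha256(f"{account}:{nonce}".encode()).hexdigest()
            f.write(f"{nonce},{h}\n")

def best_match(path: str, challenge: str):
    """'Mining': scan the plot for the stored hash closest to the challenge.
    More plotted disk space means more candidates and better odds."""
    target = int(hashlib.sha256(challenge.encode()).hexdigest(), 16)
    best = None
    with open(path) as f:
        for line in f:
            nonce, h = line.strip().split(",")
            distance = abs(int(h, 16) - target)
            if best is None or distance < best[1]:
                best = (int(nonce), distance)
    return best

plot("plot.csv", account="miner-1")
print(best_match("plot.csv", challenge="block-challenge-42"))
```

The expensive work (plotting) is done only once, while responding to each new block requires just a cheap scan of the disk, which is why PoC consumes so much less energy per block than PoW.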
Proof-of-Stake
The idea of Proof-of-Stake is to solve the issue of high energy consumption in the Proof-of-Work algorithm. It is more energy efficient and, in theory, less vulnerable to cyber attacks than PoW.
PoS is an alternative mechanism where changes in the network are carried out not by means of computing power, but by participants’ obligations to the system. To confirm the transactions made and create new blocks, Proof-of-Stake requires participants to pledge a share of their own coins.
PoS selects a miner (validator) at random and entrusts them with processing transactions and confirming blocks on the chain for a specific period of time. The success of mining depends not on the megahashes per second a miner’s hardware can process, but on staking: the number of coins the owner has “frozen” to prove their honesty. Accordingly, the more coins a validator has sent to stake, the greater the confidence in them. If the validator turns out to be corrupt, they risk losing all the frozen funds in their account. Proof-of-Stake thus discourages bad behavior on the network by entrusting block validation to the users who have the most to lose.
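A minimal sketch of the selection step, assuming a purely stake-weighted lottery (real networks derive their randomness from on-chain data and add slashing rules; the validator names and balances below are hypothetical):

```python
import random

# Hypothetical validators and the number of coins each has locked as stake.
stakes = {"alice": 5_000, "bob": 2_500, "carol": 500}

def select_validator(stakes: dict, seed: int) -> str:
    """Pick the next block's validator with probability proportional to stake."""
    rng = random.Random(seed)  # in a real chain the seed comes from the chain state
    validators, weights = zip(*stakes.items())
    return rng.choices(validators, weights=weights, k=1)[0]

# Over many blocks, alice (the largest stake) is selected most often.
print([select_validator(stakes, seed=height) for height in range(10)])
```

No hashing race takes place: the only scarce resource is the stake itself, which is why the selection can run on ordinary hardware.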
PoS is often preferred over PoW because it requires much less computational power, which means it is less energy-intensive and less wasteful. With fewer calculations needed, the cost of participating in PoS is significantly lower than in PoW, which removes a serious barrier to entry for participants who want to become block validators. The absence of mining also eliminates the need to purchase special equipment: such a mechanism can easily run on an ordinary PC or laptop.
Instead of forcing participants to buy expensive computer equipment and engage in a race of solving hashes, each validator node buys and stakes coins in the blockchain network they use anyway.
However, such a mechanism still does not entirely solve the problem of honesty. Even though PoS implies fair processing of blocks, it does not oblige validators to vote for blocks on only one chain. Validators are free to support whichever blockchain they like, because doing so is cheap (there are no complicated calculations, and therefore no energy costs). This is where another vulnerability, called nothing-at-stake, lies: if an alternative version of the blockchain (a fork) appears, whether by accident or intentionally, a validator can stake on both chains at once, double-spend, and profit from both the main chain and the fork.
Such attacks are easy to notice and do not always pose a threat, since in order to create a fork, an attacker needs to start building the new chain long before the moment they decide to collect their dishonest profits. But the fact remains that the double-spending problem is not fully solved and, in theory, the attack is still possible.
On the one hand, the PoS algorithm is designed to abandon traditional mining in favor of energy efficiency and greater resistance to attacks. The most active and invested participants are considered more reliable and are unlikely to attack the network: for an attack to succeed, they would have to acquire 51% of all the coins in the network, which is so expensive that a user who has invested that much money has no incentive to attack it, making such an attack meaningless.
On the other hand, certain nuances make developers move away from this consensus algorithm, since, in theory, PoS is prone to monopolization and centralization: anyone who holds the majority of coins in the network can steer the blockchain in their own interests.
Nothing-at-Stake problem
One of the most frequently mentioned problems with Proof-of-Stake is known as the “nothing-at-stake” problem.
In Proof-of-Work, miners have an incentive to continue to support the longest chain, since the chain will be considered the main version of the truth and bring rewards to the miners for their work. Therefore, the miners are clearly interested in the continuation of a single chain.
At the same time, in Proof-of-Stake there is little-to-nothing that can prevent the miner from supporting the existence of multiple chains (forks), given the fact that the cost of mining is insignificant. Hypothetically speaking, the PoS miner, who works on several forks simultaneously, can make it difficult for the system to reach consensus, and even attempt to rewrite the history of transactions.
In a situation where the blockchain forks (as happened with Ethereum when Ethereum Classic appeared), PoS validators can stake on both chains, which makes it easier for them to rewrite the truth in their favor and extract more profit.
Leased Proof-of-Stake
The essence of the LPoS algorithm is that a network user (node) who holds only a small amount of coins can lease them to another node in order to increase that node’s “weight” in the blockchain network. After receiving the block reward, the validating node returns the “borrowed” funds to the user together with a share of the reward for the amount leased.
Proof-of-Importance
The PoI algorithm is very similar to PoS. The difference is that, to determine the validator node, PoI takes into account not only the number of coins in a participant’s wallet, but also their activity in the network. To become a validator, a user must not just hold coins but actively transact; the time they have spent on the network matters as well. According to the developers, this approach gives a better picture of how “useful” each network participant is.
Proof-of-Authority
PoA is intended for use in centralized networks where there is a specific activity regulator. Therefore, its successful application can be observed in business. The essence of this algorithm is that only the authorized nodes can validate blocks. These can be CEOs, CFOs, heads of local branches, etc.
Delegated Proof-of-Stake
The DPoS consensus algorithm is built on open voting: every participant of the blockchain network has the right to vote on setting and regulating the community’s rules (by general agreement and within the law).
Delegated Proof-of-Stake was developed by Daniel Larimer in 2014. Despite the similar name, it should not be confused with Proof-of-Stake: the mechanisms of reaching consensus in the two algorithms differ significantly.
The main objective of the DPoS algorithm is to get away from the race of computing and financial power, making the confirmation of transactions as independent as possible of any single node in the network. Thus, the quality and speed of the blockchain depend not on the number of coins validators stake, but on their conscientiousness and diligence.
In DPoS, validators do not stake their own coins to create a block. Instead, network participants choose the block validators by voting: they add their own coins to a validator’s stake, and the weight of each vote is determined by the amount of coins added. Voting in a DPoS system is continuous and open, so if the owner of a validator node violates the rules, neglects the quality of service, or abandons the project completely, the community of coin holders can withdraw their votes (the coins invested in the validator), effectively “firing” the unreliable node. The validator must therefore follow the rules set by the community, keep the network running at a high level and provide the best possible service. In other words, a validator earns the most by working in the interests of the community.
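A simplified sketch of that coin-weighted election (the holder and delegate names, stakes and number of seats are all made up for illustration):

```python
# Hypothetical ballots: (coin holder, chosen delegate, coins backing the vote).
votes = [
    ("holder1", "delegate_a", 1_200),
    ("holder2", "delegate_b", 800),
    ("holder3", "delegate_a", 300),
    ("holder4", "delegate_c", 2_000),
]

def elect_delegates(votes, num_slots: int = 2):
    """Sum the coin-weighted votes per delegate and seat the top `num_slots`."""
    totals = {}
    for _, delegate, coins in votes:
        totals[delegate] = totals.get(delegate, 0) + coins
    return sorted(totals, key=totals.get, reverse=True)[:num_slots]

print(elect_delegates(votes))  # ['delegate_c', 'delegate_a']

# Withdrawing a vote simply removes the ballot, so a misbehaving delegate
# can lose its seat at the very next tally.
print(elect_delegates([v for v in votes if v[0] != "holder4"]))  # ['delegate_a', 'delegate_b']
```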
In case of doubt in any validator nodes, coin holders may re-elect them. This makes it easier to achieve high network stability. If most of the nodes are doing their job poorly or attempt to attack the network, the community will immediately vote to replace them.
DPoS remains decentralized in the sense that all users of the network participate in selecting the nodes. However, there is an element of centralization in this algorithm, since all decisions on the network are made by a group of delegates who are elected by network users and run their own block-producing nodes. Because these delegates are held accountable by the voters, the system partially solves the nothing-at-stake problem: the community keeps those who are valuable to the network and removes the ill-doers. Thus, the success of this algorithm depends on how active the voters are.
Byzantine Fault Tolerance
The objective of the Byzantine Fault Tolerance consensus algorithm is to establish trust between unrelated parties in the network.
The Byzantine Generals’ Problem, as a problem of ensuring reliability in decentralized systems, was first formulated in 1982 in a scientific article by Leslie Lamport, Robert Shostak and Marshall Pease.
The Byzantine army attacks a city and completely surrounds it. The problem is that the generals are scattered around the periphery of the city, and they need to agree on a strategy and come to a single decision: attack or not. To communicate, they send messengers to each other. At the same time, one or more generals may be traitors intending to sabotage the siege by sending false messages to the rest. And if some generals lead their armies into battle without the support of the others, the outcome of the siege will be tragic.
The Byzantine Generals’ Problem is very similar to the issue of consensus in the blockchain technology, where all nodes must agree on a specific set of rules and specific data in a block before adding it to the chain. This is not an easy task, as thousands of people use the network, and all of them must make sure that the information added to the chain is valid. At the same time, users must have a means of preventing sabotage attempts by unscrupulous participants.
BFT allows a group of independent network nodes to work together to update the ledger in a secure way, with trust established by the community itself. This approach lets validators process transactions quickly, track the state of the network and exchange messages with each other in order to guarantee the integrity of the transaction record and of the network as a whole. The BFT algorithm ensures that the next block added to the chain is the single version accepted by the whole system.
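Here is a minimal sketch of the quorum rule at the heart of most BFT protocols (a toy model, not a full protocol such as PBFT; the node names and block hashes are hypothetical): a block is committed only if more than two-thirds of the nodes vote for the same block, which tolerates up to f faulty nodes out of 3f + 1.

```python
from collections import Counter

def bft_commit(votes: dict, total_nodes: int):
    """Commit a block only when more than 2/3 of all nodes voted for the same hash."""
    quorum = (2 * total_nodes) // 3 + 1
    block_hash, count = Counter(votes.values()).most_common(1)[0]
    return block_hash if count >= quorum else None

# Hypothetical round with 4 validators; one faulty node votes for a different block.
votes = {"n1": "0xabc", "n2": "0xabc", "n3": "0xabc", "n4": "0xdef"}
print(bft_commit(votes, total_nodes=4))  # '0xabc' (3 of 4 votes reach the quorum)
```

A real BFT protocol runs several such voting rounds and handles message delays and equivocating nodes, but the core guarantee is the same: as long as fewer than a third of the nodes misbehave, only one block can gather a quorum.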
Unlike PoW-based blockchains, BFT blockchains are resistant to attacks as long as fewer than a third of the participating nodes are faulty or colluding. BFT is considered an advantageous algorithm because it works efficiently with a relatively small set of validators, offering high throughput and low transaction costs, but, as in DPoS, these advantages come at the cost of a certain degree of centralization.
Conclusion
Obviously, there is no ideal consensus algorithm to date, but the search for a universal solution is still ongoing.
Currently, consensus algorithms balance between scalability (processing speed), efficiency, and the degree of centralization of the system, while developers actively try to strengthen the positive factors and eliminate the flaws. We can only watch how the mechanisms adapt to reality, and how the communities adapt to the technological advancements.
As developers find more and more ways to use blockchain technology every day, new algorithms keep appearing, each solving problems unique to a particular application.
It is very likely that the number of consensus algorithms will keep growing, with new ones eliminating old flaws and complementing and improving each other. This will also open up new opportunities for the use of blockchain technology by businesses and governments. Still, it is hard to say whether there will be a single global blockchain with numerous independent systems integrated into it, or many private blockchains. Time will tell which approach best stimulates large-scale participation and stable project management.
Read our next article “The DNA of Blockchain Technology and How Bitcoin Works. Part 3” to get a deeper understanding of blockchain technology, the principles of its work and application in cryptocurrencies.
Posted by Denys Tsvaig.