There are at least four or five approaches to scalability, and two or three of them will likely succeed. Their success is multiplicative, i.e. if, say, three approaches succeed, then something that makes all of them work together is going to be mega-successful.
If we look at the approaches broadly, they are:
1) Design a better consensus algorithm:
If we look at all these networks, there are two fundamental constraints:
- Bandwidth — We have a lot of nodes, and these nodes must send blocks to each other. Bandwidth is a constraint because if blocks are massive, they won’t propagate through the network fast enough.
- Processing power — We don’t want nodes to be very high-powered servers (to avoid mining centralisation); we want them to be normal machines. BUT a normal machine can process only so many transactions per second.
We could improve the number of transactions per second just by improving the consensus algorithm. If we look at Bitcoin, POW needs a very long block time, i.e. 10 minutes, but what that ends up doing is wasting a lot of spare bandwidth.
Eg — My Bitcoin node receives a block, processes it in 20 seconds, and then waits around 10 minutes for the next block to be solved, doing nothing.
For 9 minutes and 40 seconds my node is not working in any way, so Bitcoin doesn’t use the bandwidth and processing power of its nodes very effectively.
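To put rough numbers on that idle time, here is a quick back-of-the-envelope check in Python (the 20-second figure is just the illustrative one from the example above):

```python
# Back-of-the-envelope utilisation of a Bitcoin node, using the rough
# numbers from the example above (20 s of validation per 10 min block).
block_interval_s = 10 * 60   # average proof-of-work block time
processing_time_s = 20       # time spent validating one block

utilisation = processing_time_s / block_interval_s
idle_s = block_interval_s - processing_time_s
print(f"busy {utilisation:.1%} of the time, idle {idle_s // 60} min {idle_s % 60} s")
# -> busy 3.3% of the time, idle 9 min 40 s
```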
That’s what the first type of approach aims to accomplish, i.e. to design a consensus algorithm that uses the bandwidth and processing power of nodes much more effectively.
POS, Casper, PBFT (used in Cosmos, among others), Hashgraph’s consensus algorithm: the fundamental edge of these consensus protocols is that they use bandwidth and processing power much more effectively.
Say you are going to be a Cosmos node: your node is going to continuously receive, sign, and send transactions, and it’s always going to be kept busy.
This allows them to be effectively 10–100x more scalable than Bitcoin.
This approach is definitely going to emerge because the engineering here is pretty much solvable.
2) Proofs of correct block processing (Eg — Tezos):
Let’s say you have a node in India and I have another node, we are participating in some consensus protocol, and you downloaded a huge block, say a 20 GB block, and your node verified that the accounting in the 20 GB block was correct.
Normally, what happens in Bitcoin is that you’ll broadcast that block to me and I’ll also verify that 20 GB block. BUT using toolsets called zero-knowledge proofs (or succinct computational proofs), we can do something interesting.
What can happen instead is: you download the 20 GB block, you verify it, and then you create a succinct proof that basically states that your node downloaded this block, verified it, and found it to be correct. This is the proof that you performed the verification correctly.
Then, when you send me the 20 GB block along with the proof, I no longer need to verify those 20 GB worth of data; I can assume they’re correct because I got the proof.
If one node verifies the accounting in the block, the rest of the network can free-ride off that work, and that would save an enormous amount of processing power.
Eg — Today, for a 1 MB Bitcoin block, 10k nodes each need to process that block.
In the future, blocks could be 100 GB; only one node would need to verify a 100 GB block, and the rest of the nodes would free-ride off that one.
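To make the pattern concrete, here is a toy sketch of the "verify once, prove to everyone" flow. The hash below is purely a stand-in with none of a real zk-SNARK's guarantees, and the function names are hypothetical:

```python
import hashlib

# Structural sketch only: one node does the heavy verification and emits
# a small certificate; everyone else checks just the certificate.

def verify_block(block: bytes) -> bool:
    # Stand-in for the expensive accounting check over the full 20 GB.
    return True

def produce_proof(block: bytes) -> bytes:
    assert verify_block(block)             # do the heavy work once...
    return hashlib.sha256(block).digest()  # ...and emit a tiny artifact

def check_proof(block_hash: bytes, proof: bytes) -> bool:
    # Cheap check every other node runs instead of re-verifying 20 GB.
    # A real succinct-proof verifier would go here.
    return proof == block_hash

big_block = b"pretend this is 20 GB of transactions"
proof = produce_proof(big_block)
print(check_proof(hashlib.sha256(big_block).digest(), proof))  # True
```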
3) Using multiple blockchains:
This is the idea that we should have hundreds of blockchains, each processing transactions in parallel (Eg — Cosmos).
Say Alice and Bob are entrepreneurs who launched their own blockchains, each with a coin on top of it, i.e. Bob coin and Alice coin.
We then teach the blockchains to communicate with each other in such a way that if there is a user with 1000 Bob coin, that user can move the 1000 Bob coin from the Bob chain to the Alice chain, take advantage of an application on the Alice chain, and then move back to the Bob chain.
Each of these blockchains processes in parallel, so if one blockchain does 100 transactions per second and there are 1,000 blockchains like these working in parallel, in total we can have 100,000 transactions per second.
This is called the internet-of-chains approach, and we might see the adoption of this technology this year itself.
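As a rough illustration of the Bob-coin round trip described above, here is a toy "lock and mint" transfer. This is illustrative only: real cross-chain protocols (e.g. Cosmos IBC) add light-client proofs and relayers that this sketch omits entirely.

```python
balances = {
    "bob_chain":   {"user": 1000},  # 1000 Bob coin on the Bob chain
    "alice_chain": {"user": 0},     # Bob-coin vouchers on the Alice chain
}

def transfer(src, dst, user, amount):
    assert balances[src][user] >= amount
    balances[src][user] -= amount   # lock (escrow) on the source chain
    balances[dst][user] += amount   # mint a matching voucher on the destination

transfer("bob_chain", "alice_chain", "user", 1000)  # use an app on the Alice chain...
transfer("alice_chain", "bob_chain", "user", 1000)  # ...then move back to the Bob chain
print(balances)
```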
4) Making smart contracting efficient in a single blockchain (Eg — Rchain):
The way Ethereum works today: suppose we are in India and there are a bunch of Indians sending Ether around, and then there is a bunch of Chinese users participating in their own local community.
The global Ethereum network must order all the Indian and Chinese transactions, i.e. it needs to arrive at consensus on the order of all of these transactions, eg:
transaction one from India, two from China, three from China, four from India, and so on…
What Rchain will allow is for the Indian and Chinese parts of the economy to operate independently.
Only transactions going between India and China need to be globally ordered and arrive at consensus.
This shrinks the number of transactions that need to be processed on the main chain.
If today Ethereum is processing, say, a million transactions, we can shrink that to needing to process only 10k transactions on the main chain while achieving the same effect, because many of the transactions are independent of each other.
So an Rchain-like approach can basically shrink the number of transactions that need global consensus.
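A tiny sketch of that idea: partition transactions by locality and send only the cross-region ones for global ordering (the regions and counts are just the example above):

```python
# Hypothetical illustration: split transactions by region; only
# cross-region ones go to the main chain for global ordering.
transactions = [
    {"from": "india", "to": "india"},
    {"from": "china", "to": "china"},
    {"from": "india", "to": "china"},  # only this one crosses regions
]

local, cross = [], []
for tx in transactions:
    (local if tx["from"] == tx["to"] else cross).append(tx)

print(f"{len(cross)} of {len(transactions)} transactions need global ordering")
```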
5) Offchain approaches: Lightning Network, Raiden Network, Truebit
These keep most activity off the main chain entirely: payments (Lightning, Raiden) or computation (Truebit) happen between the parties directly, and only openings, settlements, and disputes touch the chain.
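A toy payment channel shows the basic idea; this is illustrative only, with none of the real signatures, routing, or dispute logic:

```python
# Two parties deposit on-chain once, exchange many balance updates
# off-chain, and publish only the final state back to the main chain.
channel = {"alice": 5000, "bob": 5000}   # on-chain: open channel with deposits

def pay(sender, receiver, amount):
    # Off-chain: in a real channel both parties would sign this new state.
    assert channel[sender] >= amount
    channel[sender] -= amount
    channel[receiver] += amount

for _ in range(1000):                    # a thousand payments, zero on-chain txs
    pay("alice", "bob", 1)

print("on-chain settlement:", channel)   # only this final state hits the chain
```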
In a way, all five of these could succeed, and all of these things are going to be multiplicative: better consensus = 100x, shrinking the number of transactions = 100x, interoperability = 1000x, offchain = 100x, and zk-SNARKs are another 100x.
If somehow we get all of these gains together, we increase transaction capacity at least a million-fold, and that’s going to be enough for the next 10 years.
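A quick sanity check on that compounding, using the rough per-approach estimates above (these are the speaker's ballpark figures, not measurements):

```python
factors = {
    "better consensus": 100,
    "shrinking transactions (Rchain-style)": 100,
    "interoperability": 1000,
    "offchain": 100,
    "zk-SNARKs": 100,
}
total = 1
for gain in factors.values():
    total *= gain
print(f"combined: {total:,}x")  # 10**11, so a million-fold is the conservative end
```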
P.S.: This piece is taken from my interview with Epicenter’s Meher Roy.