Polkadot vs. Cosmos vs. Ethereum 2.0 — for real idiots

Patrick Wieth — Feb 17, 2021


Hey, it is this time again. Cryptocurrencies are freaking out and articles pop up everywhere about what you should buy and what is great. So it is no surprise that I am writing an article. As usual, it is written by an idiot for idiots. The idea for this article is very old: exactly 3 years ago someone responded to my Cosmos article, asking what the difference is between Cosmos, Ark, Aion, ICON, and Wanchain. I was surprised that the question did not include Polkadot. So I responded that Polkadot should be in the comparison, since it is the only project on par with Cosmos. The others did not seem to be real competition. Dfinity was also on my mind back then as something that has its own valuable ideas and does not jump on the interoperability hype train without ideas of its own. Now, 3 years later, it turns out most of these projects do not have much relevance left except for Cosmos and Polkadot, so let's have a look. And Ethereum, of course. What else can you expect from this article? More understanding of shards, interoperability, and state-of-the-art proof-of-stake. Nice!

Here we will be enriching the text with high-information density images like this.

Why exactly these 3 projects? To me these seem to be the three main players when it comes to being a platform that has interoperability, scaling, and the appropriate ecosystem. I have done a lot of research, re-read their whitepapers and all kinds of other documentation, and tried to get a decent insight. However, since this is so much stuff, this article will only scratch the surface and I will always be biased. I try not to be biased, but I am a human and also work with one of these technologies in my blockchain-based game project (www.crowdcontrol.network), so at the end of the article you can guess which technology I actually use. If you guessed it right, please flame me in the comments.

Blockchain 3.0

The development we are looking at is commonly called “Blockchain 3.0” and there are several definitions out there, but I think the Polkadot whitepaper puts it quite nicely. It defines:

  1. Scalability
  2. Isolatability
  3. Developability
  4. Governance
  5. Applicability

as the key areas where Blockchain 3.0 needs to revolutionize the industry. This article will be segmented into 3 sections. First we will try to understand what all of this means and what the state of the art is. Then we will look at general solutions to these challenges and their implications. Finally, we have a look at how our three big players do it. At the end, I will try to give some insights into how this applies to real-world dApps (decentralized applications).

1. Scalability

Well, I like this definition the most: scalability means how much extra workload can be processed by providing extra workforce. For example, having additional excavators on a construction site means faster excavating (surprise!). However, at some point adding yet another excavator might not help much, because there might not be enough space left to operate. Scalability is limited: you cannot use 1000 excavators to get 10 times the throughput of dirt compared to 100 excavators.

Chinese construction site — Testing the scalability of excavators.

Looking at Bitcoin or any other Nakamoto-Consensus-based system (this means Proof-of-Work + Longest-Chain-Rule), we define the throughput of transactions as the workload and the number of nodes or miners as the workforce. Then the scalability is a surprising 0. Zero. For excavators, quadrupling the number might still give double the throughput or more, but for Bitcoin, quadrupling the number of nodes/miners does not give double the throughput, but rather no additional throughput at all. I explain this in my PoW vs. PoS article, but here we only need to know that all of the additional nodes or miners contribute solely to the security of the system. This means there is not very much that can be done to increase the ~7 tx/s (transactions per second) Bitcoin can process. Well, actually it is a bit more because of some awesome improvements like SegWit, but nothing that fundamentally changes the situation. In essence this means that Bitcoin is not suited for being a payment network. WHAT? WHEN I GO TO BITCOIN.ORG IT SAYS IT IS AN INNOVATIVE PAYMENT NETWORK? WHAT ARE YOU TALKING ABOUT? Hold up. The people running this website might know better, but marketing is not about saying 100% correct things, it is about saying 100% catchy things. But Bitcoin has the highest market cap, what am I talking about? The reason for this is not that paying with bitcoin is so nice. How often, when buying a ticket for a train or bus, do you think “I would like to pay for this with bitcoin now”? And in case you do, how often do you consider the transaction fee of Bitcoin and come to the conclusion that you really want to do it? I hope never, because it does not make sense to buy a $2 ticket with a $10 tx fee.
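Where does that ~7 tx/s figure come from? A quick back-of-envelope sketch, assuming an average transaction size of about 250 bytes (my rough assumption; real transactions vary):

```python
# Back-of-envelope Bitcoin throughput estimate.
BLOCK_SIZE_BYTES = 1_000_000   # ~1 MB block
AVG_TX_SIZE_BYTES = 250        # rough average transaction size (assumption)
BLOCK_TIME_SECONDS = 600       # one block every ~10 minutes

txs_per_block = BLOCK_SIZE_BYTES // AVG_TX_SIZE_BYTES  # 4000 transactions
tps = txs_per_block / BLOCK_TIME_SECONDS               # ~6.7 tx/s
print(f"{tps:.1f} tx/s")
```

Tweak the average transaction size and you land anywhere between roughly 3 and 7 tx/s, which is why the quoted number varies from article to article.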

A “funny” meme summing up what we just learned.

Keeping all that in mind, Bitcoin is still very good at one thing, and that is being an innovative store-of-value network. For store-of-value you don’t need to be very good at doing many transactions, you need to be very good at being secure. Visa/Mastercard is usually mentioned here as the reference, with its ~2000 tx/s that should be reachable with blockchain. Ok, fine. Now we understand why scalability might be a thing. It becomes even more daunting once we think of Ethereum, where not only currency is transacted but also computation is done, which leads to much more workload.

Monks training for day trading with Uniswap.

2. Isolatability

Maybe I would rather call it compatibility. It basically means how many needs of different applications and/or users are satisfied. Well, Bitcoin only allows for transactions, but even there, needs differ. Besides the mentioned $2 tickets, there are also things like escrow and multi-party payments (Bitcoin provides the functionality, yay). But there is much, much more than just payments. Since Ethereum is Turing-complete, you can code anything you want with it. However, that is only true as long as you can pay the fees for computation. If you want to build Minecraft on Ethereum, it might not work, since the computational demand is just too much. The limit here is again throughput, but there are also other things, like randomness, zero-knowledge, and interoperability, where a platform might not provide the necessary functionality. So this point comes down to the wish that a platform should be compatible with and perfectly suited for everything.

If this concept is not yet clear, that is ok. We will have more examples.

3. Developability

There is not only User Experience (UX) but also Developer Experience (DX). This point addresses that, and it makes a big difference which platform you develop on. Ethereum is the network that started it, and there is the great upside that you do not need to host any infrastructure. You write the smart contract and deploy it. That’s it. The blockchain runs your stuff. This is very great. No server management, no Kubernetes clusters. On the other hand, smart contracts might not do exactly what you intended. They might not be safe. At one point in time, really big shit happened in Ethereum. Well, actually several times. The first thing that comes to mind is the DAO fork, where the network needed to fork to repair a hack that would otherwise have stolen many millions of Ether. The network split into Ethereum and Ethereum Classic (where the hack went through). Such things can happen. Another example is the Parity wallet hack, which we will mention again later. So this point mainly comes down to
a) How good is the ecosystem? Are there nice projects you can use? Is the code layered so parts can easily be exchanged, or is it spaghetti hell?
b) How good is the infrastructure? Can you use it?
c) How strong is the foundation of the platform? Does it support what you want to do, and if not, does it allow you to fix it and plug the fix in?
These three points can differ greatly. This comes from the fact that some things exclude each other. Namely, if you have a strong infrastructure that you can use (Ethereum smart contracts), you cannot change anything about the foundation. Even if you have coded a nice solution for rolling random numbers or doing zero-knowledge transactions, it might be years until this is included in Ethereum and becomes useful for you. This also applies to point 2, Isolatability.

This meme works even better for developer experience…

4. Governance

How governable is a blockchain or platform? Governance is a highly underestimated point, so it is no surprise that old projects lack it, while every important project now understands why it is so important. In distributed applications there is no single entity that is in charge and has to face the consequences alone. Everyone suffers if something goes in the wrong direction, but nobody can single-handedly change things. Tezos, for example, even decided that this is the main point of their product; everything else can be upgraded into Tezos. So if we look at Bitcoin, the trouble might not be visible at first glance. But actually there are three parties: the miners, the developers, and the users. And there is no way to force an agreement between these parties. Hopefully their goals align, but that is not necessarily the case. For example, miners might be happy with very high fees, but users want the exact opposite. When it comes to upgrades (BIPs), users have no real say in deciding what or when to upgrade.

When it comes to protocol upgrades reducing the fees in a blockchain, miners love democracy.

Users can basically only threaten to leave the ship and use something else. Developers must also hope that miners like their work, but should basically do stuff that improves the user experience. There might be several reasons why there are so many Bitcoin forks out there, but I claim this is the main one. Ethereum has a bit less trouble; the main reason might be that an entity like the Ethereum Foundation exists, which navigates the ship quite a lot. Hardcore fans of decentralization might not like this, but it might be a rational voice advocating necessary upgrades. Just have a look at the current process around EIP-1559; it is the old story we saw quite often in Bitcoin. The miners do not want to support upgrades which lower the fees and make the system better for the users. However, the Ethereum user and developer base has a better position when facing opposition from miners.
In the end, governance comes down to processes to determine the interests of the members (mostly voting), deciding on software upgrades/changes, orchestrating these upgrades, and eventually electing entities that represent or cooperate with the network.

Remember how we wanted to give more insight into the concept of expectation vs. reality? Here we go.

5. Applicability

This point mostly asks “Does the platform have a killer feature?”. The Polkadot whitepaper describes it with “does the technology actually address a burning need of its own”. This is quite funny, because later the whitepaper argues that Polkadot should be minimal and simple, with no unnecessary functionality; even smart contracts should not run on the relay chain, but on parachains. I think this is the right approach. A platform does not need a killer feature of its own. If the platform is well built, killer apps will come. Ethereum does not have a killer feature of its own, but CryptoKitties, Uniswap, several DEXes, and lots ‘n lots of ICOs showed up organically. So let’s just cut this point. Being a good platform is the killer feature, and this comes down to points 1–4.

Main Part — Understanding the basics

We managed our way through this first passage and now come to how these things are generally addressed. But what does “generally” mean? The thing is that no project does everything on its own. All of them stand on the shoulders of giants and are influenced by each other. Ethereum was such a big success that there is no project that does not look at it.

1. Scalability

There are 2 types of scaling: vertical and horizontal. This does not apply to blockchain only, it also applies to software development in general, and really to everything, be it businesses or whatever processes you imagine. Let’s take out our excavators again. If we scale horizontally, we put more and more excavators on the field. If we scale vertically, we buy faster, bigger, better excavators. Their arms move faster and have bigger shovels, nice. For such excavators it’s quite obvious that horizontal scaling implies more friction; they are more likely to interfere with each other, so the scaling is no longer linear. Vertical scaling does not have this problem in the same way, since it is just the same unit being upgraded, so there are no additional communication costs and no space limitation (as long as the excavator fits on the construction site :D). The problem with vertical scaling lies in the circumstance that there are technological limits. At some point it becomes complicated to find a bigger hydraulic system that is still able to move the shovel. Once you have put the biggest hydraulic system on the market into your excavator, you cannot scale it further vertically. You can buy more hydraulic systems and operate them in parallel; this is horizontal scaling, and while the excavator might still be a single unit being scaled vertically, the hydraulic system will be subject to diminishing returns, or say sub-linear scaling (typical for horizontal scaling). So internally the excavator is scaled up horizontally. The same applies to every component, and the more you add to the excavator, the more expensive and heavy-weight it becomes, and so on.

My feeling when researching for this piece of text here.

Software has a special property: copying it is very cheap. So one would say that horizontal scaling comes for free? Well, there is still communication necessary. You can fork Bitcoin 100 times and then process 100 times the transactions, but it is no longer the same network. You might have a BTC on network #15 and someone else has a wallet on network #45, so how do you interact with each other? You cannot. This is the big problem of horizontal scaling. For vertical scaling, in contrast, we have the technological limits. Proof-of-Work and Longest-Chain-Rule can be scaled up by increasing the block size and reducing the block time. Seems very easy, but it is actually not an option. A fair competition and real randomness of blocks being found is only guaranteed if there is enough time to run the hunt for the next hash. The more you reduce the block time, the more the network centralizes toward those actors that can communicate fastest with the others; the same applies to block size and bandwidth. Sad.

But what is the general approach to this? Do the cool blockchain kids of today try to scale vertically or horizontally? Turns out: both. The default solution for vertical scaling is Proof-of-Stake (PoS). The default solution for horizontal scaling is having multiple blockchains interoperating with each other. I will not explain all the fine details of PoS here, since I have done that in another article that is linked further above. We also don’t need to understand this stuff here; what we need to know is: PoS does not run a probabilistic puzzle, therefore the next block can be produced much faster. There is no need to have a long block time; the limit is basically how fast the members of the network can process a block and how fast they can communicate. Processing time of a block is just a matter of CPUs or GPUs, something that can be scaled quite easily. Network delay is ultimately limited by the speed of light: a photon needs roughly 133 ms to travel around the earth. Signals in copper are 3 times slower, photons in fiber optics only lose about 33% of their speed, so 200 ms is possible. However, there are also routers in between, fibers might not be perfectly laid out, and we need packets to go back and forth, so let’s say 1 s is reasonable.
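The 133 ms and 200 ms figures above fall straight out of the physics. An idealized sketch (ignoring routers, real fiber routes, and round trips):

```python
# Rough latency bounds from the speed of light.
EARTH_CIRCUMFERENCE_KM = 40_075
LIGHT_SPEED_KM_PER_S = 299_792

vacuum_ms = EARTH_CIRCUMFERENCE_KM / LIGHT_SPEED_KM_PER_S * 1000  # ~134 ms
fiber_ms = vacuum_ms / (2 / 3)  # light in fiber travels at roughly 2/3 c
print(round(vacuum_ms), "ms in vacuum,", round(fiber_ms), "ms in fiber")
```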

It might be really hard to have a distributed application where all of its participants are informed of an update and agree to it in less than 1 second. It also depends on bandwidth compared to the block size, but for example Bitcoin blocks are 1 MB and that is not a bandwidth issue. So if we assume Bitcoin’s block size and calculate the increase in throughput from something that really achieves this 1 s block time, we arrive at a 600-times speedup. I will try to lay out how this is calculated and hopefully make it easy enough to understand. Bitcoin’s block time is 10 minutes and a minute has 60 seconds, so if we multiply that, we end up with 600 times the number of blocks in the same timespan. Nice speedup. Another complex calculation, Bitcoin’s 7 tx/s multiplied by 600, gives us 4200 tx/s. So in essence, Visa level is achievable by having a 1 MB block every second. In reality, though, block times are more like 4 s to 7 s, but we can also increase the block size, so vertical scaling looks good. Proof-of-Stake has another nice advantage: it doesn’t increase global warming more than necessary. So we can get rich with it without destroying our planet. A nice addition to the list.
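For the idiots in the back row (myself included), the whole “complex calculation” spelled out:

```python
# 10-minute blocks vs. 1-second blocks: same block size, 600x more blocks.
BITCOIN_BLOCK_TIME_S = 10 * 60  # 600 s
POS_BLOCK_TIME_S = 1

speedup = BITCOIN_BLOCK_TIME_S // POS_BLOCK_TIME_S  # 600x more blocks
tps = 7 * speedup                                   # 7 tx/s * 600 = 4200 tx/s
```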

But there is one problem to mention here, and it has to do with this communication thing. In Nakamoto Consensus a new block can be broadcast and you don’t need to wait for anything. The nodes communicate over a gossip network, so everyone tells its neighbors its new data (blocks). Why is there no need to wait for anyone? Because anybody can verify blocks without asking anyone. This is a very great feature, and with PoS it is not possible. There is no cryptographic puzzle, so you cannot just verify a hash. You need to know whether all the others have agreed. This means if there are 100 block producers, each of them needs to communicate with 99 others. If there are 10,000 block producers, each of them needs to coordinate with 9,999. So for N participants, each one needs to communicate with N-1. The communication scaling is N*(N-1), and for large N the 1 does not matter, so we end up with N². We call this the N²-Problem, and it means we can’t scale the block producers, or better, validators, indefinitely with PoS.
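To see how brutal N² is in practice, count the messages per consensus round:

```python
def messages_per_round(n: int) -> int:
    """Every one of n validators must hear from the n-1 others: n*(n-1) messages."""
    return n * (n - 1)

small = messages_per_round(100)     # 9,900 messages per round
large = messages_per_round(10_000)  # 99,990,000 messages per round
```

100 times more validators means roughly 10,000 times more messages, which is why nobody runs plain pBFT with ten thousand voting nodes.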

So is there a solution for the N²-Problem? Sure. A very common one is the validator-delegator split. The blocks are produced by validators, who run nodes, propose blocks one after another, and vote on whether these blocks are valid. Their voting power is determined by their own stake plus the delegated stake from the delegators. The delegators do not run a server; they just bond their stake to a validator who runs one. The N²-Problem is solved, because network communication is only necessary between the validators, so their number can be small, for example 100. Why do we want as many nodes as possible in the first place? There are two reasons: one is prevention of cartel formation/collusion and the other is partition tolerance. So are we performing worse now? Well, cartel formation or collusion is actually still prevented by the delegators. Since these can scale indefinitely, to form a cartel one needs to bribe the delegators’ stake as well. In case some cartel is forming, the delegators should withdraw their stake from this cartel and redistribute it to other validators. For partition tolerance, the delegators do not help, and partition tolerance is also lowered by another property of most PoS implementations. Being based on practical Byzantine Fault Tolerance, the liveness of such networks is different from Nakamoto Consensus. When too many nodes go down, the network halts and cannot produce a new block. In contrast, Nakamoto Consensus does not really care. When half of the miners go down, the next block will take longer — 20 minutes if half of the mining power is affected — but this will relax back to the targeted 10 minutes if the network remains split. So here is a key take-home message: PoS lowers partition tolerance.
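The validator-delegator split boils down to one line of arithmetic: a validator votes with its own stake plus everything bonded to it. A toy sketch with made-up names and numbers:

```python
# Toy model of the validator-delegator split (hypothetical stakes).
validators = {
    "val_a": {"own_stake": 50, "delegations": [200, 120, 30]},
    "val_b": {"own_stake": 500, "delegations": []},
}

def voting_power(v: dict) -> int:
    # Own stake plus all stake bonded by delegators.
    return v["own_stake"] + sum(v["delegations"])

power_a = voting_power(validators["val_a"])  # 400, mostly delegated
power_b = voting_power(validators["val_b"])  # 500, all self-bonded
```

If val_a misbehaves, its delegators can re-bond those 350 tokens elsewhere, and its voting power collapses to 50; that is the anti-cartel mechanism in miniature.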

It is sad to have less partition tolerance, but on the one hand, events where half of the block producers go offline are so severe that it might be good to stop and wait for a moment anyway, and on the other hand, we get a nice feature for this tradeoff, and that is finality. It means that blocks become final at some point. In Nakamoto Consensus it is in theory possible that any block might be reverted later by a concurrent fork of the blockchain. That is not the case if some variant of pBFT is implemented.

Nice, so are we happy? Well, PoS has some other problems too, namely the Long-Range-Attack and Nothing-at-Stake, which do not exist in PoW, or not to that extent. These problems need to be solved in order to deploy a functional cryptocurrency. Let’s have a short look at them. If we implement PoS naively, meaning like Nakamoto Consensus but with PoW swapped for PoS, then it is possible for an attacker to create an alternative reality. At some point in the past, the attacker forks off and creates their own reality. For example, when the attacker has 1% of the stake, every 100th block is produced by the attacker. In each such block the attacker maliciously increases their stake until, after this has happened often enough, the attacker has enough stake to take over the network. Once it is taken over, the attacker can produce many blocks very fast. Of course the attacker cannot fool the other validators, who don’t agree to this reality. But imagine someone joining the network and wanting to download the blockchain history. This newcomer cannot decide whether the reality presented by our attacker is real or not. Even worse, if they also download a competing reality from another validator, then using the Longest-Chain-Rule would make the faked chain look more trustworthy. Since this attack works better the earlier it starts faking blocks, it can basically start forging a different reality from the genesis block. That is why it is called a Long-Range-Attack. Why doesn’t this work in PoW? Because you need the mining power to forge many blocks with high difficulty. You cannot just make up a blockchain in PoW that is very long and has very high difficulty without having a lot of mining power. So any outsider can easily see that your alternative fork is not the longest (heaviest) chain. Contrary to what I just said, the name Long-Range-Attack originally comes from PoW, but what it means for PoW is a bit different (if you want to know, you can read about it on the Ethereum blog).

Ok, so how can we solve this? Well, we can’t use the Longest-Chain-Rule anyway, so we need to change some parts in any case. The solution can be checkpointing. This basically means we decide that there is no way to revert certain blocks. There cannot be an alternate fork overtaking these blocks anymore. Such a block is final. We have heard of this feature already: it is called finality. We also see why it is hard for Bitcoin to have such a checkpoint, since there is no one to decide when to do it. So what does it imply? Well, at these final blocks there is a fixed state, which we can write down, and now it is sufficient for any outsider to just download this fixed state. This also means an outsider does not need to download the whole blockchain history anymore, since all effects of the history are condensed in the state at this point. This is also called pruning. For example, in Ethereum, without pruning the blockchain would be 4 TB big, and with it, it stays quite small, only a few hundred GB. This still sounds like a lot, but it is a big difference when it comes to what a server can easily handle.
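Why does a checkpointed state replace the whole history? Because replaying every transaction from genesis yields exactly the same balances as downloading the snapshot directly. A toy account model with made-up names:

```python
# Replaying history vs. starting from a checkpointed state snapshot.
txs = [
    ("genesis", "alice", 50),
    ("alice", "bob", 20),
    ("bob", "carol", 5),
]

def replay(transactions):
    state = {}
    for sender, receiver, amount in transactions:
        if sender != "genesis":
            state[sender] -= amount
        state[receiver] = state.get(receiver, 0) + amount
    return state

snapshot = replay(txs)  # {'alice': 30, 'bob': 15, 'carol': 5}
# A newcomer can download `snapshot` and only replay blocks created after
# the checkpoint, instead of replaying the whole history.
```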

However, we see another reason here why it is hard for Bitcoin to do that: there is no state in Bitcoin. Bitcoin blocks collect a number of transactions which result in UTXOs on Bitcoin addresses. To know the balance of an address, one needs to sum up over all blocks. Ok, so how do these PoS/BFT consensus systems allow checkpoints? Basically, they are defined in the protocol. It would have been possible for Bitcoin to define such checkpoints as well, but it would have meant a lot of implications that Satoshi Nakamoto most likely wanted to avoid. In PoS/BFT this comes for free, since these implications are already bought. The most important one is waiting until all participants, or at least a majority, have responded. You cannot just go and say every 500th block is final or so in Bitcoin, because that would give the lucky one who finds such a block too much power. At such a point you need to make sure that everyone agrees to this checkpoint, since the good reasons to allow for competing forks still hold. In PoS/BFT we don’t need to insert such a thing into the consensus mechanism, since every block is produced by a vote. When 2/3 vote a block valid, it becomes a block. In pBFT it even becomes a final block. So using this approach to make PoS a reality, the solution to the Long-Range-Attack comes for free. Nice.
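The 2/3 rule is simple enough to write down directly. Note the strict inequality: exactly two thirds is not enough in the usual BFT formulation.

```python
from fractions import Fraction

def is_final(yes_stake: int, total_stake: int) -> bool:
    """A block is final once strictly more than 2/3 of the stake voted yes."""
    return Fraction(yes_stake, total_stake) > Fraction(2, 3)

is_final(67, 100)  # True
is_final(66, 100)  # False: 66% is not more than two thirds
```

Using exact fractions instead of floats avoids the classic off-by-rounding bug right at the 2/3 boundary.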

Those of you who can’t relate, please think twice.

That leaves us with the Nothing-at-Stake problem, which describes that there is nothing at stake when it comes to fork choice. In PoW, if there are concurrent forks, you have to decide which one to follow, since you cannot mine all of them. If between these forks there is just 1 transaction different, the hash will be different and you can only mine on one fork. This makes fork choice necessary. In PoS your coins get duplicated for each fork, so you can “mine”, or better say validate, on every single fork. This is not just possible but incentivized: it makes sense for you to follow every fork and receive the block rewards. Not a very good prospect if there might be 100 forks soon and nobody wants to solve that forkin’ mess. Even though such an attack never happened, the fear of it happening might be one of the main reasons why early PoS coins failed. There are some other factors, maybe the high pre-mine that was hated back in those days for not giving “proper decentralization”, or the failure to see and implement the advantages of PoS. Discussing that is an interesting alternative topic, but let’s come back to what we are looking at here. We have an attack vector and it should be solved. How do we disincentivize validators from following each possible fork? Well, by punishing them for doing so. In order to get the desired behavior, the protocol defines a punishment. Forks can still happen, but you have to decide. All members of your fork slash the coins (the punishment) of the members of the other fork and vice versa. If someone follows both, then slashing will happen on both. Other misbehavior, namely double signing and being offline for too long, is also punished by slashing coins. With this combination of BFT + PoS + slashing we have a system that makes misbehavior costly. The very same applies to PoW + Longest-Chain-Rule (Nakamoto Consensus). If you follow the wrong fork there, you lose your invested work, which means paid electricity, which means money. In case you misbehave and attack the blockchain properly, you might also destroy the value of the coin you are invested in.
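The core slashing condition, voting for two different blocks at the same height, can be sketched in a few lines. This is a simplified illustration, not any real chain's slashing rules:

```python
# Minimal equivocation detector: a validator that signs two different
# blocks at the same height is slashable.
def find_equivocators(votes):
    """votes: iterable of (validator, height, block_hash) tuples."""
    seen = {}          # (validator, height) -> block_hash already signed
    slashable = set()
    for validator, height, block_hash in votes:
        key = (validator, height)
        if key in seen and seen[key] != block_hash:
            slashable.add(validator)  # voted on two competing forks
        seen[key] = block_hash
    return slashable

votes = [
    ("alice", 10, "0xaaa"),
    ("bob",   10, "0xaaa"),
    ("alice", 10, "0xbbb"),  # alice also signs the competing fork
]
find_equivocators(votes)  # {'alice'}
```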

In PoS this connection is even tighter, since you must own the coin and not just the hardware, which might lose a lot of value when the coin becomes worthless (more for ASICs than for GPUs). Ok great, so are both the same? Not the same; both approaches achieve the same goal, but there is a very important difference. In PoS you need to lock up coins and get punished in case shit happens; in PoW you need to burn electricity and only get something for it if you don’t misbehave. Both achieve security for the decentralized network, but one does not destroy the planet by wasting a lot of energy, which is an advantage. Just imagine how nice that would be.

With PoS this can become a reality without the part where the planet is destroyed.

Now that we understand why PoS makes a hell of a lot of sense, we also want to understand why it is much faster. In Nakamoto Consensus the actual processing of transactions is not very costly. The costly part is solving the cryptographic puzzle. To make it fair, the time to solve the puzzle must be many orders of magnitude longer than the propagation time of a block. If there is not enough time, then either the finder of the last block is highly advantaged (extreme case) or the most central nodes in the network have an advantage (more realistic case).

Well, this is not 100% true, since for example in Ethereum there are uncles. Uncles are blocks that were found and are valid, but do not belong to the main line of the blockchain. So just like that strange uncle at a family gathering, these are not really wanted in the first place, but belong to the family anyway. For Ethereum this means the uncle gets included if it is valid, and for the finder of a block it means you can get it into the blockchain even if someone else was just 1 second faster than you. This allows for a drastic reduction in block time (15 s for Ethereum). But still, Ethereum is not significantly faster than Bitcoin. So let’s go back to PoS. There we don’t need to run such a long timer for the puzzle, since there is no puzzle; we only need the time for everyone to learn from everyone else that the block is valid. Then we can go on producing the next block. There are different ways of selecting the next block producer, but it does not really make a difference here. The important part is that on a planet like Earth, and with the speed of light, the whole process can be done in a second or two. The size of the block is mostly limited by bandwidth, so this might increase in the future. For Bitcoin this also holds true, but there are more forces against bigger blocks, mostly in order to keep decentralization at a maximum.

Now that we have a basic understanding of the main approach to vertical scaling, PoS, let’s look at horizontal scaling. There are 2 candidates and they are quite similar: one is interoperability and the other is sharding. Interoperability also belongs in the compatibility category, since it connects different blockchains together. So if interoperability is solved, we can make incompatible things compatible. But if we can do that, we can also create 100 forks of a given blockchain and let each fork talk to the others and connect the stuff. 100 forks process 100 times the number of blocks of a single fork, so it is obvious why this scales. The big problem here is: how do we get trustless transactions between two blockchains? This is a good question and I will try to answer it by giving a simple example of how one can connect 2 blockchains without inventing any new technology. By the way, I find it very strange that in this blockchain world you find 1000 articles explaining how awesome and important interoperability is, but it is almost never explained how it actually works. Even if someone goes on a specific subreddit and asks “please explain it to me”, users keep responding with “Yeah, the IBC protocol makes it possible to have 2 blockchains securely talk to each other.” Ok fine, but how?

Somehow all different interoperability memes are about how people talk too much about it, without understanding it.

Let’s assume we have Bob and Alice, where Bob has a bitcoin and wants to trade it with Alice, who has an ether. Both understand that exchanges like Mt. Gox do exist, or rather exist only for a limited amount of time and might go pop tomorrow, and they don’t want to put their valuable coins on an untrustworthy exchange. Not saying the exchange is a scam, but what can it do? In the end it is a central entity that can always fall to individual mistakes, bad architecture, or just someone randomly dying in an airplane while being the only one with the private keys of the exchange’s wallet. So both of them really want to do their trade in a secure, decentralized manner. Now, they have heard about this great concept called a DEX (decentralized exchange). But unfortunately that only works if they exchange shitcoin A for shitcoin B, and both come from smart contracts on the same (Ethereum) blockchain. In the case of 2 different blockchains, this is not possible.

So they have another idea. The two have a lot of friends who are willing to help them; some of them know each other, but most of them don’t. So let’s take 5 of them that don’t know each other. Now they create a multi-sig wallet on Bitcoin and do the same on Ethereum. What is a multi-sig wallet? It is a wallet where all 5 have to sign a transaction before it can be sent to the blockchain. You can also have a multi-sig wallet where only 3 of 5 need to sign, but for this example it does not really matter. So these 5 create two multi-sig wallets. Now Bob sends his Bitcoin to the multi-sig wallet on the Bitcoin blockchain and Alice sends her Ether to the multi-sig wallet on the Ethereum blockchain. If both do it, we go to the next step; if not, the 5 create a transaction that sends the Bitcoin back to Bob or the Ether back to Alice. Note that no one can single-handedly run away with either the Bitcoin or the Ether here.

Now that both have deposited their coins, one member of the 5 creates a transaction which sends the Bitcoin from the multi-sig wallet to Alice’s Bitcoin address, and another transaction that sends the Ether to Bob’s Ethereum address. The other 4 members of the 5 see that both transactions were created just as planned and sign both of them. After that both coins are sent to the other party and they have successfully made a decentralized trade. Again, nobody could have walked away with the coins single-handedly. That would only have been possible if all of the 5 colluded. Still, there might be reasons for them to do so, for example a large transaction where stealing the coins and dividing by 5 is worth more than the reputation they lose in the process. That is why we maybe want 100 instead of 5. So basically we have solved interoperability now? Well, it might be a bit troublesome to find 5 or even 100 multi-sig wallet holders for every trade two blockchain users want to make. One approach could be to just set up a team of 100 that keeps doing this for everyone. Unfortunately this increases the security demands on them even more.
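To make the escrow flow above concrete, here is a toy Python sketch of the 5-signer scheme. Everything here (the class, its method names, the single-escrow simplification, the amounts) is hypothetical illustration, not any real wallet API:

```python
# Toy model of the 5-of-5 multi-sig escrow described above.
# Hypothetical sketch -- not a real wallet API.

class MultiSigEscrow:
    def __init__(self, signers, required):
        self.signers = set(signers)
        self.required = required          # e.g. 5 of 5, or 3 of 5
        self.deposits = {}                # depositor -> amount
        self.signatures = set()

    def deposit(self, who, amount):
        self.deposits[who] = amount

    def sign_release(self, signer):
        if signer in self.signers:
            self.signatures.add(signer)

    def can_release(self):
        # Funds move only once both sides deposited AND enough signers agree.
        return len(self.deposits) == 2 and len(self.signatures) >= self.required


# Bob locks his BTC, Alice locks her ETH (modelled on one escrow for brevity).
escrow = MultiSigEscrow(signers=["s1", "s2", "s3", "s4", "s5"], required=5)
escrow.deposit("bob", 1.0)
escrow.deposit("alice", 30.0)

for s in ["s1", "s2", "s3", "s4"]:
    escrow.sign_release(s)
print(escrow.can_release())   # False -- nobody can run off alone

escrow.sign_release("s5")
print(escrow.can_release())   # True -- all 5 agreed, coins go to the counterparties
```

In practice Bob and Alice would each watch the escrow address on their own chain, and any single honest signer refusing to sign is enough to block the release.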

Is there some way to ensure that they act properly? Well, punishment is the key again. How about letting these 5 or 100 lock up some coins, and if they misbehave, these coins get slashed? This makes it much more dangerous to try to start a collusion with others, since those others might report such behavior. However it might be hard to find 100 people, each of whom is willing to lock up a big amount of coins. But wait, really? We actually have some candidates that already did so, and these are the validators of our PoS blockchain. It becomes interesting here. These validators are running a server anyway and, even more so, already handle transactions. So great, we just use the validators and the problem is solved? Unfortunately problems arise again. One problem might be forks. Again. Even if these 5 or 100 see that both parties have sent their coins, it might still happen that in 20 minutes another fork of the blockchain takes over, and in that fork Alice maybe did not provide her part of the deal. But if the other part of the deal was already sent, there is no way to stop it. So actually one needs to wait until no more forks are possible. In theory this is not possible for blockchains like Bitcoin. What we need here is finality. Aha. So PoS provides a nice feature for interoperability as well.

The faster finality is reached, the faster a cross-blockchain transaction can be finally settled. But does that mean it is not possible at all to include Bitcoin in our whole interoperable world? Well, actually it is possible to create something that is called a Peg-Zone or Bridge, which takes Bitcoin and creates pegged or wrapped Bitcoin for it. The pegged Bitcoin is now rooted on a PoS blockchain and has finality, while the real Bitcoin waits in a wallet controlled by the Peg-Zone. When someone wants the real thing back, the minted/pegged Bitcoin is burned again and the real Bitcoin is sent to a Bitcoin address. In this case the Peg-Zone can never be 100% sure that there will not be a competing fork taking over, but this is basically the same problem exchanges have today when they accept coins from blockchains without finality. So in a very strict theoretical sense the state of the chain is never final, but after many blocks, for example 10, it becomes extremely costly to attempt a successful attack. It depends on many parameters, but the basic point is that you need to be faster than the miners of the blockchain to forge an alternate reality. So you need to invest in more miners than already exist, and then attacking might crash the price of that coin, rendering your attack unprofitable. If you have fewer miners, for example only 10% of the total mining power, you can still try your statistical luck, since you can still succeed in finding a competing block where you steal the coins from the Peg-Zone, but doing this for 2 blocks becomes less likely. For example, if you have a 10% chance to find the next block, then you only have a 1% chance to also find the block after that, and if you need to find 10 blocks to pull off the attack, you have a 0.00000001% chance of succeeding. In order to even try you still need to run these miners, which might cost some billions of dollars.
It is unlikely that you can easily steal that much money, so in essence and for practical reasons, it is not possible to attack this Peg-Zone without losing money.
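The back-of-the-envelope math from the paragraph above can be sketched in a few lines, under the simplifying assumption that finding each block is an independent trial (real chain reorg races are more subtle):

```python
# Chance that an attacker with a given share of total hash power
# finds N consecutive blocks. Simplified model: each block is an
# independent trial, ignoring the dynamics of a real reorg race.

def attack_probability(hash_share: float, blocks: int) -> float:
    return hash_share ** blocks

# The 10%-of-hash-power example from the text:
p = attack_probability(0.10, 10)
print(f"{p:.0e}")   # 1e-10, i.e. 0.00000001 %
```

This is why waiting for more confirmations shrinks the attack window exponentially rather than linearly.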

Those of you who can’t rela… wait we had this caption already.

Unfortunately this implies we have to wait for several blocks. In contrast, if both sides have PoS with instant finality, we can do the whole thing in 1 block. Ok nice, is there something else? Yes. If we have these PoS chains set up, it might even be possible to have shared security, which means we have shared validators on both blockchains. These are very interesting for our example, because they can be punished on both sides if they misbehave, making bribing more complicated and thus increasing security. Ok nice, so this is everything I need to know? No. There is even more. Basically we don’t need to send coins; we can also just send transactions with information across different blockchains. This means smart contracts can talk to each other, which is also a nice feature. Ok, but now we have everything? Well, let’s say we understand the basics. There are many details, for example why it makes sense to run a light client of each blockchain, allowing the members of the other blockchain to easily verify things. But let’s leave it at that for now.

And let’s go to sharding, which is a bit more accessible now that we have read so much and understand a lot more about how all of this stuff interacts. Sharding scales for the same reason interoperability scales, but instead of having multiple independent blockchains run in parallel and making them communicate, we want to find a way to split up a single blockchain into multiple parts, or shards. Sounds like it would end up with the same result, but it doesn’t. The source of the differences is that the first approach copies the blockchain and gives each copy its own infrastructure, while the second approach has the same infrastructure trying to split up its workers over many parts. This is why for interoperable blockchains the big question is how to make these disjoint parts interact safely with each other, and for shards the question is how to organize all these workers so that single entities cannot corrupt the whole thing.

Once we see this approach as something where the workers of the infrastructure are divided and no worker does all of the work, we get an idea why sharding is good for scaling. The work is producing blocks and the workers are the validators. Sharding allows us to split up this work, so that only a small fraction of the validators work on the same part of the network. The reason why Nakamoto Consensus does not scale up with more nodes is that every node has to do all the work. If we split up the work over many nodes, then having more nodes means we can split the work into more pieces, making every piece smaller and thus enabling scaling. So the idea is that an incoming transaction goes to a shard and is processed there. It gets included in a block on this shard and the other shards do not really need to bother with this transaction. But wait, how does this work if the transaction transfers some coins from an address that is not on this shard? Well, this indeed does not work. There are many possible ways to organize these shards, but luckily blockchains are not the first field to face these problems.
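One naive way to decide which shard an address lives on is to hash the address and take it modulo the number of shards. This is a hypothetical sketch (real protocols use more elaborate schemes), but it shows where cross-shard transactions come from in the first place:

```python
# Naive address-to-shard assignment: hash the address, take it
# modulo the shard count. Hypothetical sketch, shard count made up.
import hashlib

NUM_SHARDS = 64

def shard_of(address: str) -> int:
    digest = hashlib.sha256(address.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

# A transaction is processed on the sender's shard; if the receiver
# lives on another shard, a cross-shard message becomes necessary.
sender, receiver = "0xAliceAddress", "0xBobAddress"
same_shard = shard_of(sender) == shard_of(receiver)
print(shard_of(sender), shard_of(receiver), same_shard)
```

The assignment is deterministic, so every node can compute it locally without asking a coordinator.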

There is massively parallel computing, where many computers solve a heavy computational task together, for example to find signals of extraterrestrial intelligence in the radio noise that comes from space (SETI). The other field is databases, especially with clustered file systems and massive data storage. The different parts of huge databases are often called shards, so this is where the naming comes from. In these fields smart researchers have already racked their brains over these problems, and the solutions differ depending on what exactly your system should be good at. If you write very often to the database and read only rarely, it makes sense to organize the system so that writing speed is maximized. This means incoming data from a single source often ends up on totally different shards, but that is what allows very fast writing. If you have a lot of requests and these requests represent some kind of condensation of data, it might be a lot better to put data that belongs together on the same shard. For example, say you have a database which stores data on all people living on this planet and the requests are something like “give me the average age of people living in each city”. If the people’s data is distributed over shards according to where they live, then each shard can calculate the result for its cities by itself and just report back the results per city. If the data were lying around everywhere, each shard would first need to share its data with every other shard before the calculation could be done, which is more work.
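The per-city aggregation just described is essentially a map-reduce: each shard computes small (sum, count) summaries locally and only those summaries travel to a coordinator. A minimal sketch, with made-up data:

```python
# "Average age per city" on a sharded database: each shard aggregates
# locally and only tiny per-city summaries reach the coordinator.
from collections import defaultdict

# People partitioned by place of residence -- each list is one shard.
shards = [
    [("Berlin", 34), ("Berlin", 40), ("Munich", 28)],
    [("Paris", 51), ("Paris", 29)],
]

def local_aggregate(shard):
    acc = defaultdict(lambda: [0, 0])          # city -> [sum_of_ages, count]
    for city, age in shard:
        acc[city][0] += age
        acc[city][1] += 1
    return dict(acc)

def merge(partials):
    total = defaultdict(lambda: [0, 0])
    for part in partials:
        for city, (s, n) in part.items():
            total[city][0] += s
            total[city][1] += n
    return {city: s / n for city, (s, n) in total.items()}

result = merge(local_aggregate(s) for s in shards)
print(result)   # {'Berlin': 37.0, 'Munich': 28.0, 'Paris': 40.0}
```

Only the small summaries cross shard boundaries; the raw rows never leave their shard.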

It gets even more complicated, for example if we want to correlate this information with some other information, like how tall these people are. Then we can’t just sum up their ages and send the sum together with the number of people sampled to the next shard and so on. We might need to collect all of that data in one place first and then calculate the complex correlations. If we look at massively parallel computation it is the same story. Basically Bitcoin mining is one huge massively parallel computation, and synchronizing it is very easy. The reason it is so easy is that all of the work does not matter at all, except for the one attempt that finally found the block. It is somewhat similar for protein folding, which is another example of such a distributed computation task. There the most important result is the one randomly generated parameter set that gives the most stable protein. For SETI the recorded noise can also be distributed quite well over different computers, but there is also the question of frequency involved. If a signal has a wavelength so long that the signal is spread over a duration of 10 years, it might not be possible to find it if the recorded noise is split into 100-second-long pieces. Of course there are many clever things to be done, but that is not the aim of this article :D

So how about sharding in blockchains? Well, there is obviously data storage involved and there is computation involved, great. So we have the worst of both worlds. And as luck has it, there is another big pile to be shit on top of that, and that is secure computation. In the other examples there is no single entity that acts maliciously or is even incentivized to do so. So we have data storage, computation and security, and all of these must be handled nicely. But we are lucky: we know quite a lot about the data to be stored and the computation to be done. For example, for a transaction of coins one must only check if the sender has enough coins available; the receiver is always fine. And the computation of smart contracts can work so that all the data is saved in the smart contract. So whenever someone sends a transaction to a smart contract, all the data is in one place and not scattered everywhere. Well, not necessarily all of the data, since a smart contract can also access the data of other smart contracts, but in most cases a smart contract cares mostly about its own history and the transactions sent to it. In addition we know that blockchains are not write-heavy workloads. Writing costs a fee, and even though sharded blockchains aim to handle 1000 times more transactions than classical systems, that is still not a database writing GB/s of data. An example of such a data-heavy storage system is the CERN Large Hadron Collider experiment.

Looking deep into these eyes, you can see the reflection of prices moving up

Still, we can now see that there are 2 bottlenecks and one important constraint. The bottlenecks are computation and data storage; the constraint is that everything must be secure. Taking this into account we end up with an architecture that does not randomly write wherever possible, but rather organizes data onto the same shard when it belongs to the same smart contract or to addresses interacting with each other and with these smart contracts. Ok nice, so we have many shards and the addresses and smart contracts are split up over these shards. For each shard we assign some nodes that propose and validate new blocks. Furthermore, when cross-shard communication is necessary, these nodes send the data around. Sounds fine and not too hard? Well, if we have only a fraction of the nodes on each shard, it is much easier for these nodes to form a cartel or bribe some other nodes to do nasty things. They could just start forging malicious blocks and send this information to other shards, corrupting the whole blockchain. That would be quite bad.

Luckily there is a solution for this: we just shuffle these nodes constantly. If these nodes are exchanged and presented with new partners every few seconds, it becomes very hard to collude or bribe others. Since you need some coordination and time to do that, it becomes very hard if constantly new players show up on your shard and you have to move to another shard quite often. In the end you would have to collude with everyone, which means it is as secure as the good old blockchains. Random shuffling also solves another problem, namely that the coordinator of all those shards could also act maliciously by dropping some cross-shard transactions or forging fake ones. By moving around, validators can easily recognize misbehavior of a coordinator node. So everything is fine now with sharding? No. Shuffling has a very delicate implication. It basically means that we have to send all the data of the shard we are leaving to someone else, and we need to receive all the data of our new shard. This means a lot of bandwidth.
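A minimal sketch of such epoch-based shuffling: every epoch the validator set is permuted with a fresh seed and sliced into per-shard committees. Here the epoch number stands in for the unbiasable randomness beacon a real chain would need:

```python
# Epoch-based validator shuffling into per-shard committees.
# The epoch number is a stand-in for a real randomness beacon.
import random

def committees(validators, num_shards, epoch):
    rng = random.Random(epoch)     # seeded -> every honest node computes the same result
    shuffled = validators[:]
    rng.shuffle(shuffled)
    # Slice the permutation into num_shards committees.
    return [shuffled[i::num_shards] for i in range(num_shards)]

validators = [f"val{i}" for i in range(12)]
print(committees(validators, 4, epoch=1))
print(committees(validators, 4, epoch=2))   # different grouping next epoch
```

Because the seed is public, every node derives the same committees without any coordinator, which is exactly what removes the trusted-shuffler problem mentioned above.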

Alternatively we could store a lot of data, for example that of all shards, but then there is no advantage in sharding. If every node has to store everything, there might be scaling of computation power, but not of data storage. A single blockchain today is already hundreds of GB big, so storing 64 shards or more might not be that easy, and it would drive out many nodes, leading to centralization, something we don’t want. If we give up decentralization, we don’t need to run a blockchain; then we can stick to the good old databases. So we either need a lot of data storage, or we need a lot of bandwidth, or we need to have nodes stick to the same shards, which leads to inferior security. This is called the Data Availability Problem. It is nice to note that this is a trilemma: instead of an ordinary dilemma, where two options are presented and both are bad, we now have three options, and all of them are bad. Unfortunately for the trilemma, there is a genius solution to this problem.

What is the solution? We split up the nodes which hold the data and the nodes that validate the blocks. But how does that solve the problem? Isn’t it just more data transfer because the nodes have been split? Well, the trick is that the data nodes stick to their shard while the validating nodes are randomly shuffled. This means the data nodes hold all the data of their shard and don’t have to reload the data of other shards all the time. These nodes provide the transactions to form blocks or even propose the blocks; this depends on the specific implementation. But whatever the case, the validators just take these transactions and forge a block, or just check the proposed block. If the validators agree, the block is published. With this approach the validators can be shuffled quite often, so that it is hard to collude, but it is not necessary to transfer huge loads of data all the time. Great, so we have solved sharding? Well, there is still a lot to take care of, but in essence it is possible. We will look at some of the finer problems when we see how it is implemented.

2. Compatibility (was Isolatability)

When we look at compatibility, let’s differentiate between
a) Security
b) Storage
c) Computation
d) Hard Features
e) Token-Economy

If you read this you are trying to be less like Ricky, very good!

The first part, a) Security, means what level of security a platform offers (who would have thought?). Let’s look at an example of a service where users can buy stocks and keep them, basically a crypto portfolio. If the users put $100m of worth into it, but the miners of the platform are only worth $1m, then an attack might be so cheap that the $100m of stocks will be stolen. In this case the platform does not provide enough security. The designers of this portfolio might be better off using another platform or building their own infrastructure. This is why Ethereum is great: if you build a smart contract on Ethereum, an attack via this vector is very expensive, even though no effort was put into building up your own infrastructure. This is very nice. The reason is that the infrastructure is already there and we may use it.

If we look at b) Storage, we come back to what we just discussed in the previous part. A platform might be able or unable to store whatever data your application needs. Sharding and interoperability offer a great leap here, since data can be put on additional shards or on freshly generated zones. It is also possible to use layer-2 technologies to move data away from the blockchain. Some applications have large data requirements, while others don’t need much. Data is currently very scarce on blockchains, because the usual way to get data in is via transactions included in blocks. Blocks are scarce as well, so here is room for improvement with blockchain 3.0.

The next point, c) Computation, has already been mentioned a few times. It is very similar to storage; both describe something like “how powerful is the virtual machine”. Computation is also quite expensive, and again a lot of improvements are possible on layer-2. For example, it is possible to move the calculation of smart contracts off-chain and only save the result. Again there are different applications when it comes to computational needs, and these often overlap with storage. Games on blockchains might need a lot more computation than the given example of a portfolio, but their demand for security will be much lower. So we see here that needs can be very different.

d) Hard Features are functionalities that cannot be plugged in afterwards. For example, you cannot just make smart contracts work with Bitcoin. You can think about some layer-2 concepts, but there is no way the Bitcoin miners will verify the computation of smart contracts. Blockchain 2.0 is the inclusion of smart contracts, and since you can code anything with these, anything should be possible, right? No. One example might be zero-knowledge computing. It is mostly used for anonymity, as in Monero or Z-Cash, but there might be other reasons why someone wants this. For example, if you have governance and you don’t want users to be able to see the preliminary results on the blockchain. Sure, here anonymity might also be a thing, but let’s assume it already worked to give everyone a pseudonym, so nobody knows who is voting. It might still be bad if someone sees the other votes coming in and waits until the very end, so that he knows for which option his vote might change something. Other examples might be (pseudo-)random number generation or the usage of advanced signature algorithms like BLS. Allowing different approaches means being faster and more efficient for various problems. Sometimes random number generation is needed with specific constraints. For example, if you sell some Cryptokitties, it is desirable that the user is not able to predict the outcome of the pseudo-random algorithm. Another example might be in-protocol upgrades. This point also belongs to 4. Governance, but it is also such a hard feature. The idea is that once the participants of a blockchain have decided to upgrade to a new version, this is not done by shutting down all nodes, downloading a new version via git and then starting again, but rather with a live update, where the version of the software is defined by a vote and thus is compulsory and deploys seamlessly. There are many more hard features one might come up with, but that is not the important part of this section.
The important take-home message is that a platform might have some of these features, but not all. Depending on how a platform functions, it can be anywhere from very easy to nearly impossible to use new features or deploy your own hard features. It is again strongly connected to infrastructure. If you roll your own infrastructure, of course you can easily deploy your own hard features; if you use a big infrastructure run by many others, it might be very hard to get a new feature deployed. So here again we see that some points exclude each other.
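As a small illustration of the hidden-preliminary-results problem mentioned above: a commit-reveal scheme already gets part of the way there without full zero-knowledge machinery. Voters first publish only a hash of their vote plus a secret salt, and reveal both after voting closes. This is a hypothetical sketch, not any specific chain’s governance module:

```python
# Commit-reveal voting: commitments hide votes during the voting
# phase; reveals are checked against the earlier commitments.
# Hypothetical sketch, not a real governance module.
import hashlib
import secrets

def commit(vote: str, salt: bytes) -> str:
    return hashlib.sha256(salt + vote.encode()).hexdigest()

# Commit phase: nobody can read the votes from the commitments.
salt_a, salt_b = secrets.token_bytes(16), secrets.token_bytes(16)
commitments = {"alice": commit("yes", salt_a), "bob": commit("no", salt_b)}

# Reveal phase: each reveal must match the stored commitment.
def reveal(voter: str, vote: str, salt: bytes) -> str:
    assert commitments[voter] == commit(vote, salt), "reveal does not match commit"
    return vote

tally = [reveal("alice", "yes", salt_a), reveal("bob", "no", salt_b)]
print(tally)   # ['yes', 'no']
```

The remaining weakness, which is why the text reaches for zero-knowledge techniques, is that a voter can simply refuse to reveal after seeing the other reveals.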

Sometimes not all features are shipped on release.

Which brings us to the last point, e) Token Economy. This describes how compatible a platform is with the desire of a project to run its own token economy. A smart contract on Ethereum is able to mint its own coin, which opens a wide design space, but there are limits. You cannot change how fees are charged and you cannot decide where the fees should flow. Thus you cannot disincentivize certain behavior with fees or incentivize other actions by lowering them. And possibly most important, the creators of smart contracts do not earn the fees. The result is that for many solutions there is a fee on top of the blockchain fee. If you use Uniswap you pay the transaction fee of the Ethereum blockchain, but you also pay a fee to the owner of the smart contract, in the form of a worse exchange rate or a fee depending on volume. This part is necessary because the owners of the smart contracts do not profit from the low-level fees of the blockchain, since they are not the miners of Ethereum. The reason here is again infrastructure. Uniswap was able to deploy extremely fast because Ethereum provides the infrastructure. But there might be a competitor in the future offering the same service (providing a nice user experience) but with lower friction, since it runs on its own blockchain solution, for example a PoS chain where the creators own their staking coins and thus earn the fees. It is then also possible to fine-tune the fees for different actors, but this might be more important for other dApps, for example games.

What do we learn from this chapter? Well, it is very unlikely that there will be a single solution that fits perfectly for everybody. Since not all dApps have the same preferences and it is not possible for a blockchain platform to be good at all of these points, it has become obvious why there is no single best solution for all dApps.

3. Developability

There is quite some overlap with the previous point here, but still some differences. Especially d) Hard Features has a lot to do with developability. But there is more to it. Take for example the Bitcoin codebase, which is just one big block of code. Whenever a project decided to fork Bitcoin and create their own blockchain, they forked the whole codebase and changed whatever was necessary. This leads to an ecosystem where many projects solve the same problems again and again. A better approach is modularity and a separation of different layers. Most prominent is the separation of network, consensus and application layer. But it can go further than that. Basically all features can be modularized, so that governance, non-fungible tokens and whatever comes to mind can be added as modules. The idea is not new with Blockchain 3.0; it has had a lot of success in web development and might be the main reason why Javascript is so successful. Even though Javascript often leads to dirty code, the high chance of being able to plug another solution into your own code has boosted Javascript to one of the most prominent languages. The concept is not even original to the Javascript ecosystem, since it was already prominent earlier in Ruby. But this is not the topic of this article. We just want to note here that Blockchain 3.0 aims for such improvements. Another big impact on developer experience is existing infrastructure, which is different from the ecosystem part we have already explained. We have mentioned this infrastructure aspect quite often now; in contrast to usable software in the ecosystem, it provides a live infrastructure to build on, where nothing has to be done. Ethereum is the main example: the blockchain runs and you only need to deploy the smart contract. This is huge, since for some projects setting up infrastructure can be a real killer.
There is something in between hosting everything yourself and having a full-blown infrastructure, and this is Shared/Pooled Security. In this case there is an existing infrastructure and you deploy your blockchain as another chain into it. By doing so you help to secure the existing infrastructure and the existing infrastructure secures your chain. You still have to run servers, but there is an automated way to connect to the rest and get it all up.

One last important thing to mention here is standardization. Over the last years crypto has become very big, and many different, incompatible approaches have emerged. This applies to many different layers, starting at the top with smart contracts, which have their own specific language on whatever blockchain implements them, and going all the way down to the consensus and network layer, where different network topologies, different networking paradigms, different hash functions and different consensus mechanisms lead to a very fractured landscape of non-compatible approaches. One way to fix this is standardization. However, the crypto ecosystem has not failed by ending up in this state. If something is new, nobody knows which paths are the right ones to take. When the settlers arrived in America, I bet many of them went down really shitty paths, but of course today we only know the success stories, like the Klondike. In order to find the good path, many paths must be tried out. That is why we are now in this situation.

After WWII Germany was split up to try out 2 different approaches to car production. One part of the country explored all possible paths of car production and the other standardized everything and built a single type of car, which you can see in the picture. With standardization a lot of friction was prevented. The other part ended up producing many different cars with many companies like VW, Audi, BMW and Mercedes, what a waste.

The question is whether it is really necessary to standardize everything. I think the answer is no. If we standardize the communication between blockchains and allow for open interoperability, then it doesn’t really matter that all these smart contracts are totally different, or that hashing algorithms are different (well, this might cause higher costs for cross-blockchain communication). In a world where all blockchains can interconnect, developers can pick the approach they prefer and still not be siloed in that specific technology. The only thing that needs to be standardized is communication between blockchains. This is why interoperability touches this section as well, and we now see how important this very part is.

4. Governance

Finally we arrive at the last part, and this is Governance. At the beginning of crypto nobody would have thought this is so important. That is why Satoshi Nakamoto did not think about it in the whitepaper. I’m sure Satoshi thought a lot about having all the mechanisms so that Bitcoin stays decentralized and does not centralize on some nodes. But there is more to it. In Bitcoin there are 3 groups: the users, the miners and the developers. So far so good. The problem is that these 3 groups do not always have their goals aligned. The best example is fees. The users want them to be low and the miners want them to be high. In between are the developers, who can code improvements to make the users happy, but it must be something the miners accept. And I’m very sure there are Bitcoin maximalists out there who will say that this is working perfectly and as intended. To these I usually ask the question: why was Bitcoin forked so often? Because all groups have agreed and aligned? Or because there is no proper way to govern Bitcoin? This misalignment is a very old problem and it can now also be experienced on Ethereum with EIP-1559.

There are the users, who can threaten the miners to leave the system and use something else, and there are the miners, who can decide to just not adopt a certain improvement or move to a different fork because that fork does something they prefer. Ethereum is similar when it comes to this aspect, but differs a bit, since there is the Ethereum Foundation. It is like the wise old men giving advice to the community. In addition there is Vitalik Buterin, who fulfills a similar role; he might not resemble an old man though. In other communities there are leaders who often leave their projects and move on to creating a new project, doing another big ICO, even though the old project could have been improved. I don’t want to point fingers at these projects or leaders here, but there is a reason why some 3-letter projects did not make this list. Leaders like Vitalik Buterin are worth a lot, because they help steer such a project to long-term success. Of course decentralization means not having such leaders, and that is why we want proper governance described by the protocol.

Proof-of-Stake provides a little bit of help, since the owners of the coins (for Bitcoin, the users) and the producers of the blocks are the same entities. So the power relations are more aligned from the beginning, but the real innovation and improvement here is voting. By having on-chain votes, these communities are able to officially measure what the users want. This usually ends up with upgrades being done only if users have approved them, and then they are mandatory. Another improvement is in-protocol upgrades, which means that the software upgrade happens automatically once the vote is over. This means there cannot be a Donald Trump who really doesn’t want to accept his defeat. This is another great improvement. In previous sections, presenting some nice improvement always ended with a lot of problems that needed to be solved as a consequence. Not in this case. Here comes the good news: voting is already working in many blockchains, and even in-protocol upgrades are only a matter of time until they are available in many modern blockchains.
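A stake-weighted on-chain vote can be sketched in a few lines: each address’s voting power equals its staked coins, and a proposal passes once a quorum and a majority threshold are met. The quorum and threshold values here are made up for illustration; real chains pick their own parameters:

```python
# Stake-weighted governance vote. QUORUM and THRESHOLD are
# illustrative values, not any specific chain's parameters.

QUORUM = 0.40        # at least 40% of total stake must vote
THRESHOLD = 0.50     # more than half of cast stake must say yes

def tally(stakes, votes, total_stake):
    cast = sum(stakes[v] for v in votes)
    yes = sum(stakes[v] for v, choice in votes.items() if choice == "yes")
    if cast / total_stake < QUORUM:
        return "rejected: no quorum"
    return "passed" if yes / cast > THRESHOLD else "rejected"

stakes = {"val1": 100, "val2": 250, "val3": 650}
votes = {"val1": "yes", "val2": "no", "val3": "yes"}
print(tally(stakes, votes, total_stake=1000))   # passed
```

Note how val3’s 650 coins outweigh the other two combined: stake-weighting aligns power with economic exposure, which is exactly the alignment argument made above.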

If you have read until this point here, then my plan has worked. The idea of this article was to lure all the readers into it by revealing which project will give $$$ in the future but then once the readers are trapped here, teaching them basics about blockchains.

I know this meme is very old and you know it already, but I had to make a decision. This is basically a triangle where you can only have 2 properties. The corners are “quality, quantity and novelty”. I hope you appreciate that I wanted the best memes, and many of them, and then sacrificed the novelty.

But now we will discuss the 3 projects from the title: Cosmos, Polkadot and Ethereum 2.0. And the news gets even better: after reading all of the previous stuff, it will be much easier to see the specifics and differences of these projects. We will start with Cosmos, then have a look at Polkadot and finally at Ethereum 2.0. There are good reasons to start with Cosmos; one is that it launched earliest. After this part we will summarize what we have learned with some nice master tables and graphs, and also look at some example dApps and see which platform offers the best performance for each.


The whitepaper of Cosmos came out in 2016 and its influence was easy to see as many interoperability projects presented very similar ideas afterwards in their whitepapers. Let’s first look at the design goals of Cosmos:
Multi-Token — Cosmos supports any number of token/coins on its blockchain. From the very basic design it is possible to have any number of denominated currency units on an address.
Bottom-up — The whole system follows a bottom-up design principle. This means there should not be systems at the top directing smaller parts on the bottom. The goal is to have the small parts interact in an emergent way. This might be a bit intangible here, but we will come back to this quite often.
Building Blocks — Cosmos, and blockchains built with it, should consist of building blocks. One example is the separation of the network layer, consensus layer and application layer. We have already mentioned this point in the developer experience part, and it is one materialization of the building blocks concept. Another one is that the application layer itself consists of building blocks. So if you want governance, that module can be added to your application; if you need smart contracts, add an Ethereum Virtual Machine or a WASM smart contract virtual machine. Building blocks also means that you can swap parts out, so if you want to use something other than PoS, that is possible by changing the building block in the consensus layer.
Stop energy waste — This one mainly means that the high energy consumption of PoW shall come to an end by providing everyone an easily accessible way to build a PoS blockchain.
Interoperability as protocol — There is more than one way to skin a cat, and likewise there are many ways to implement interoperability. Cosmos aims to realize interoperability as a protocol. This means that you do not necessarily need the Cosmos blockchain to have interoperability. You can use the protocol specified by Cosmos in order to connect two blockchains. This is a very open approach to interoperability and plays into the bottom-up point already mentioned. By giving everyone the ability to connect to other blockchains, an Internet of Blockchains emerges instead of being engineered.
Internet of Blockchains — Cosmos wants to start an era where all blockchains connect to each other and siloed technologies are no longer the norm. This internet shall not be organized by single entities.
Overcome “One coin to rule them all” — A mentality heavily criticized, especially in the early days of Cosmos. One coin to rule them all basically means that a blockchain tries to accumulate all dApps and all users under its main chain, thus forcing them to buy its coin. This mentality is seen as impractical, since there won’t be a single blockchain solution to all problems, in the same way as there is not a single operating system for all computers. But these different operating systems can still communicate over a shared protocol, Ethernet for example.

This is just a funny meme, not investment advice.

One really important thing when it comes to Cosmos is Tendermint. This is the software that was written by the same people before Cosmos was tackled. Tendermint is basically an implementation of practical Byzantine Fault Tolerance, so it allows replicated state across a decentralized network of participants, where a minority can be malicious. Wait, so Tendermint is already the thing? Don’t forget, pBFT alone is not permissionless. To make it permissionless, one needs a way of introducing new participants, and that is Proof-of-Stake. So from a chronological point of view, Cosmos continues the development of Tendermint by implementing PoS. This allowed the launch of a public blockchain, Cosmos (Atom), which has been live since March 2019. Since we have learned these specifics already in the previous section, we can now lean back and just state the facts: Cosmos uses the validator/delegator split, has instant finality and uses slashing to solve the usual PoS problems.
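The commit rule behind that instant finality can be sketched in a few lines: a block is final the moment validators holding more than 2/3 of the bonded stake sign it. This is a toy model, not the real Tendermint implementation; validator names and powers are made up:

```python
# Toy illustration of the pBFT/Tendermint-style commit rule: finality is
# reached as soon as >2/3 of the voting power precommits a block.

def block_committed(precommits, voting_power):
    """precommits: set of validators that signed; voting_power: {validator: stake}."""
    total = sum(voting_power.values())
    signed = sum(voting_power[v] for v in precommits)
    return signed * 3 > total * 2  # strictly more than two thirds

power = {"val1": 40, "val2": 35, "val3": 25}
print(block_committed({"val1", "val2"}, power))  # 75 of 100 signed -> True, final
print(block_committed({"val1", "val3"}, power))  # 65 of 100 signed -> False, not yet
```

The >2/3 bound is what tolerates a malicious minority of up to one third of the stake.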

Did the Cosmos team invent all of those things back then with Tendermint? No, but some of them. The idea of using delegation to reduce the number of validators comes from Daniel Larimer’s BitShares, and the term slashing was coined by Vitalik Buterin. But using the ideas from Liskov’s pBFT paper is Jae Kwon’s merit. From this follow instant finality and the idea of having bonded tokens, where punishment applies when misbehavior is displayed. This concept has proven to be quite reliable, so nowadays it is used in most PoS systems. The approach of Cosmos was quite down to earth: they first built a piece of software that is useful and is being used by other companies, and then found a way to build a blockchain from it. In contrast, other players keep publishing new blockchains that do not really work at this point, and it doesn’t really matter to them, because they move on to the next project before they have to bother.

In addition, Cosmos raised funding with an ICO of $17 million. Actually they asked for less, but the demand was high. Other ICOs in the same space have demanded and gotten 10 times that or more, and yet Cosmos is among the few to deliver. After the ICO, Cosmos was developed and launched in March 2019 with 100 validators and has increased the number to 125, with up to 300 planned. The block time is 7 s and Cosmos is able to handle over 1000 tx/s. We don’t know the exact number, since the Cosmos Hub has never been under such a load that it could show its maximum capacity. There have been testnets/simulations which achieved 4000 tx/s, but it is very unlikely that the real network achieves this number. Still, it delivers what PoS made people hope for. This is something one should always remember: often these numbers are theoretical, and some new awesome blockchain claiming 10k tx/s achieved this in an experimental simulation. The same goes for fees and delay. Sometimes blockchains advertise themselves with very low fees, which are often only so low because the network has a much lower valuation than, for example, Ethereum. For Cosmos, the fee for a transaction is currently around $0.0075, assuming a price of $10 (price when this line was written…) for an Atom. This is a very low fee and it might go up in the future. I don’t want to do a bold shill for any coin here based on such numbers, so I hope I don’t miss that goal.
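The arithmetic behind that fee figure is simply the token-denominated fee times the token price, which is why a network's USD fees rise with its valuation. A quick sketch (the 0.00075 Atom figure is merely implied by $0.0075 at $10 per Atom, it is not an official number):

```python
# Back-of-the-envelope conversion: a fee fixed in the native token
# translates into a USD fee that moves with the token price, so a low
# USD fee can simply reflect a low valuation.

def fee_usd(fee_in_atoms, atom_price_usd):
    return fee_in_atoms * atom_price_usd

fee_atoms = 0.00075  # implied by $0.0075 at $10 per Atom
print(round(fee_usd(fee_atoms, 10), 6))   # 0.0075, the figure quoted above
print(round(fee_usd(fee_atoms, 100), 6))  # 0.075, a 10x price means a 10x USD fee
```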

Why do we still want to look at these numbers? Because they indicate whether there is a real technological advance or just some kind of reset. A reset applied to Ethereum would also reset the fees to a lower level. When Ethereum had a valuation comparable to Cosmos ($2B), a typical fee was $0.02. So the difference here is not that big, like a factor of 2 or 3. Does this mean that Cosmos is more efficient by a factor of 2 only? No, not really, and the reason is how full the bucket is. For Ethereum, just a couple of months after it hit $0.02 as a fee, the fee increased to $1. The reason is that there were already quite a lot of transactions going on, and once the ICO craze started in summer 2017, fees increased a lot. This would not have happened if Ethereum were able to handle 1000 tx/s instead of 15 tx/s. Ok, but if there is so much space left in the blocks in Cosmos, why is the fee not much, much lower? The reason is that this fee at the lower limit does not represent how full the bucket is, but rather what kind of fee was chosen as spam protection. You can build a blockchain with Cosmos that does not charge fees, but then anybody can flood your blockchain with many pointless transactions to do harm. A small fee prevents that. So a blockchain might also pick a very low value here to give the impression that the system is very efficient.

CAP-Theorem with examples

The image above shows a triangle with the letters C, A and P. It represents the CAP theorem, which states that you cannot have Consistency, Availability and Partition Tolerance all at their maximum in a distributed system. I have already discussed this in other articles, but here again it helps a lot with understanding. With the design choices of Cosmos there is a big difference to Bitcoin or IOTA. I like to pick IOTA here, because it is one of the most extreme examples. It might be better to look at Avalanche in this regard, since that is something that might actually work, but that is a topic for another article. High Availability (A) means that many transactions go through and are processed; the system is live and available. High Consistency (C) means that there is a single state that everyone agrees to, so there is not much confusion about what is in the blockchain and what is not. High Partition Tolerance (P) means that it doesn’t really matter if nodes go offline or do nasty things. Bitcoin is the king in this regard, because any node can validate blocks without the others and any fraction of miners can go on producing new blocks if the rest go bust in an earthquake.

At first glance it might not seem very clever to pick high consistency when designing a blockchain, but I’m convinced this is a very smart pick and I will explain why. The first thought might be: how does it matter if it takes 10 minutes longer until something becomes final, if you sacrifice other things for it? Sure, it would be cool if Cosmos stayed live when half of the validators are taken offline, like Bitcoin does. However, it is not the end of the world if that happens. Even if half of the network goes offline, it is still possible for the remaining validators to make a decision and move on, depending on what has happened. It is very unlikely that a natural catastrophe can take out that many nodes, and if it really does, it is something like a meteor vaporising all life on earth. This is tragic, but the biggest concern in such a situation is not why the Cosmos network has gone offline. If it is something else, for example many countries on the planet forming an alliance and prohibiting blockchains, then the validators might know in advance and move their servers to friendlier countries. Also, I guess such an event (prohibition) is much more likely for Bitcoin, because of its tremendous energy waste in contrast to PoS blockchains. In the 2 years Cosmos has been running now, there has not been any event that made the chain halt. Ok, but now let’s understand why Consistency is such a good choice, and it has a lot to do with Interblockchain Communication, which we have discussed in general, but not with direct regard to Cosmos.

CZ Shilling Cosmos and our reaction.

So how does Cosmos want to implement Interblockchain Communication? I think some might have guessed it already, because from the design goals of Cosmos and what we have introduced about it, it might be clear:

IBC Protocol standardizes communication
No pre-defined network topology
2-way peg via validators
Peg-Zones for non-IBC chains
Hubs and Zones
Shared Security (later)

So the idea with Cosmos is that there exists a way to connect all blockchains together, and they don’t necessarily have to be built with Cosmos technology; it is sufficient if they implement the IBC protocol. Implementing the IBC protocol might not be possible for everyone, there are some limitations, but these are quite low. What you need is something like a global state and finality. Finality does not have to be instant, but it must happen at some point in time. We have discussed this, and we understand how easy it is for PoS blockchains to have it. Is it impossible for PoW blockchains to have it? Well, if it is just pure Nakamoto consensus then yes, but some have PoS mechanisms included for checkpointing. Does this mean these coins, for example Bitcoin, can never enter the Cosmos ecosystem? No. It just means they cannot implement IBC and make this process very easy. It is still possible to include them via Peg-Zones (or bridges).
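The two requirements just listed fit into a tiny predicate. This is only a mnemonic for the paragraph above; the property names are made up and the real IBC specification checks much more than two booleans:

```python
# Toy predicate: a chain can speak IBC directly if it has a global state
# and (eventual) finality; otherwise it needs a Peg-Zone or bridge.

def connection_type(chain):
    if chain["global_state"] and chain["finality"]:
        return "IBC"
    return "Peg-Zone"

cosmos_zone = {"global_state": True, "finality": True}   # pBFT+PoS: instant finality
bitcoin = {"global_state": True, "finality": False}      # pure Nakamoto consensus
iota = {"global_state": False, "finality": False}        # no agreed global state

print(connection_type(cosmos_zone))  # IBC
print(connection_type(bitcoin))      # Peg-Zone
print(connection_type(iota))         # Peg-Zone
```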

A Peg-Zone is basically a blockchain which implements IBC and whose validators also manage wallets on the Bitcoin blockchain (or whatever is pegged). On the Bitcoin side it comes down to our example with the multisig wallets, and on the Peg-Zone there are the validators, who punish each other for misbehavior and transfer the coins into the Cosmos ecosystem. Ok, how about global state? Well, this basically means that there is some state of the blockchain, let’s say all accounts and their balances, and for a given point in time all nodes on earth have agreed on this state. Hold up… How would that not be the case? Well, again let’s pull out IOTA. This network does not agree on a specific global state; rather, each individual transaction points to a history of other transactions. It might always be the case that there is a huge arm of the tangle (their type of “blockchain”) that has not yet come in contact with your knowledge of the tangle. All these explanations refer to IOTA as it is meant to function, not as it works today with a coordinator running. The coordinator solves this problem, but also renders the network not decentralized and makes it slow. As long as it is running, IOTA is basically a database with a lot of overhead. But IOTA, too, can be connected via a Peg-Zone. The only question is whether there are enough validators for such a zone who want to take the risk of routing transactions which might be reverted later. If these validators are connected to many nodes in the IOTA network, they are still able to mitigate this risk.
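The multisig side of a Peg-Zone can be sketched as a simple m-of-n check: the locked coins on the pegged chain only move when enough peg-zone validators sign. Names and the threshold here are hypothetical, and real multisig schemes verify cryptographic signatures rather than string names:

```python
# Rough sketch of the peg idea: peg-zone validators jointly control an
# m-of-n multisig wallet on the pegged chain, so no single validator can
# move the locked coins alone.

def can_release(signatures, validators, m):
    """Locked coins move only if at least m of the n validators sign."""
    valid_sigs = {s for s in signatures if s in validators}
    return len(valid_sigs) >= m

validators = {"v1", "v2", "v3", "v4"}
print(can_release({"v1", "v2", "v3"}, validators, m=3))        # True: 3 of 4 signed
print(can_release({"v1", "v2", "attacker"}, validators, m=3))  # False: outsider ignored
```

On the Peg-Zone side, slashing then gives those same validators something to lose if they sign a fraudulent release.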

Reading this article reduces the amount of time to 3 years.

Cool, next up this “no topology” thing. Well, IBC only defines how 2 blockchains can talk to each other. The rest is up to the ones who build the network. Cosmos speaks of a Hub-and-Zones model, where some blockchains are hubs, connecting different zones together. But this defines the network topology only in the same way as Ethernet hubs or switches define the topology of the internet. And for Ethernet it is the same: it does not pre-define the topology that has to be used. This is shown by the most basic connecting units of such a network, the hub and the switch, already deploying different topologies. The hub has a bus topology, sending all packets to all connected devices. The switch in contrast has a star topology, only sending each packet to where it belongs. This upgrade from bus to star makes a lot of sense, since both devices have cables going out in a star layout anyway. The thing was just that it was more expensive back in the day to have chips that do the packet routing. But again, we are drifting off.

So these IBC Hubs can connect to each other in any manner they desire, and this all depends on which hubs they want to connect to and thus consider secure. In my opinion this is a very good thing, since the best solutions can be engineered instead of just saying: ok, we have this approach and we hope for the best. Of course the drawback is that it must be engineered and does not instantly give a huge speedup just like having 1000 shards. But since the engineering of how to set up those shards is much more work, this might be arguable.

This is a visualization of the Cosmos Hub. It is represented by the Atom on the top. But in reality it consists also of its validators being on the bottom of it, producing blocks through the Tendermint Consensus Engine. The validators again are backed by the delegators on the very bottom. On the top the Hub also connects to other blockchains directly via IBC or via Peg Zones (BTC and ETH).

Now let’s finally get back to why consistency is so great. We have understood why the sacrifice in Partition Tolerance is not a big deal. We have seen that some networks (IOTA), sacrificing Consistency for Availability, are not able to play out their advantage, because the low consistency makes a coordinator necessary.

When do we need consistency? If we want really low settlement times for whatever happens on a blockchain. Blocks can be produced really fast, but only if the blocks are final can we move on with actions depending on finality. So let’s assume we have some stuff ongoing between some blockchains; for example, on the Cryptokittie Zone there is a very precious Cryptokittie and it should be transferred to the DeFi Zone, where it is possible to stake this Kittie in some strange manner (strange for us normal people; for DeFi people everything can have its derivatives and be staked, and the staked stuff can be staked again and so on). But before this happens, the Cryptokittie should have sex with a Cryptodragon on another Zone, to enrich its value or something. So the Cryptokittie first gets transferred to the Cryptodragon Zone, which might happen fast, but it must be final, because the owner of the Cryptodragon is not giving away the precious Cryptodragon juice as long as the Cryptokittie is not really there. After that the Cryptokittie is transferred to a Hub, which connects to the DeFi Zone (we assume the Cryptokittie and Cryptodragon Zones are directly connected). The Hub will only transfer the Kittie to the DeFi Zone if it is really there, so it waits for finality. And finally the Kittie is staked. This takes 7+7+7 seconds until the Kittie arrives at the Hub and then 7+7 seconds until it arrives at the DeFi Zone and is staked. So 35 seconds.

Let’s take another network of blockchains, which has a 4 s instead of 7 s block time and 900-block finality. This blockchain is able to process more transactions because of the relaxed finality restrictions, but it has a finality time of 60 minutes. So this whole process takes 5 hours there. A lot more Cryptokitties can be transferred in parallel, but each single one takes quite a while to pass between the different blockchains. Now one might argue that the instant-finality blockchain network is only better as long as it is not congested. Once it becomes congested, the 7 s don’t help you anymore. And here comes the next reason why high Consistency is good:
What do you do with a blockchain that is in a network of blockchains, if it gets congested? You duplicate it and connect both together. If you can’t handle all the trades on one DEX chain, then just spawn more chains. Split the trade pairs across different zones. If 2 aren’t enough, spawn 100. Connect the 100 to a Hub and let the hub do the routing. If 100 are too many and the hub cannot handle so many subzones, then connect only 10 to a hub, do this 10 times and connect the 10 hubs in a mesh network or whatever makes most sense. If stuff really gets crazy and you need 1000 zones, then just go up by another layer of 10. This means multiply everything by 10 and put another mesh network of 10 on top.

Guess what happens if you have something like these wrapped Bitcoin on something like Cosmos or Polkadot? Yes, no longer $20 fees and fast transactions.

What is the downside of this approach? Everything now happens in different zones. Let’s say for example you want to trade Fupacoin into DaddyOFive-coin. You might be lucky and the trade pair Fupa/DO5 is on the same zone and you can do it directly, but maybe you need to trade Fupa/USD and then USD/DO5, and these trade pairs are on totally different zones, and these zones connect to different hubs, and these hubs only come together at the very top mesh network of 10 main hubs. In this case the USD from the Fupa/USD deal needs to go up 2 hops, 7+7 s, then down to the other final zone, 7+7 s, and might go somewhere else afterwards, for example leave the DEX network, again 7+7+7 s. This means it takes 49 s to do this kind of thing. Compare this to an alternative network where we don’t need 1000 zones with 1000 TPS each, but only 10 zones with 100k TPS each. These zones have 60-minute finality (they can only achieve 100k TPS by giving up instant finality), so it is much more likely that you have Fupa/DO5 on the same chain and can trade directly. In this case it takes you 60 minutes to leave the network after the trade. If you have bad luck and need to transfer once and leave the network afterwards, it is 60+60 minutes. So both cases are much worse than the 49 s we had in the other example, and there 49 s was already the worst case. This is the reason why this choice of instant finality makes a lot of sense and is very awesome in an internet of blockchains. It makes this bottom-up design go round.
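The hop arithmetic in both scenarios boils down to hops times time-to-finality, which a few lines make explicit:

```python
# Every hop between chains costs one finality period, so the total
# routing delay is simply hops * time-to-finality.

def route_delay(hops, finality_seconds):
    return hops * finality_seconds

# Instant-finality network (7 s blocks, final immediately), worst case 7 hops:
print(route_delay(7, 7))        # 49 seconds
# High-throughput network with 60-minute finality, just 2 hops:
print(route_delay(2, 60 * 60))  # 7200 seconds, i.e. 2 hours
```

Hops grow slowly (roughly with the number of hub layers) while finality time multiplies every single hop, which is why instant finality wins in a routed network even against far higher raw throughput.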


We arrive at the next blockchain, and that is Polkadot. We started this article with what is wrong in Blockchain 2.0 and 1.0 and what needs to be fixed. That opener was mostly taken from the Polkadot whitepaper, which does a very good job of describing what needs to be done to take blockchain to the next level. No wonder Gavin Wood was able to directly attract a big fellowship when he decided to leave Ethereum and start Polkadot. This brings us to the first design goal of Polkadot, but let’s list all of them together:

Solve the scalability problem faster than Ethereum 2.0
Parachains shall provide more freedom than shards
Scalable heterogenous multi-chain
Tendermint + HoneyBadgerBFT
Minimal, simple, general, robust

The first and already mentioned point outlines very much Gavin Wood’s frustration with how things were going in the Ethereum ecosystem. Not that he thought the direction was totally wrong or people were incompetent; more that the whole thing had already become a bureaucratic monster and steering it was slow and inefficient. Especially when it comes to switching to PoS. He knew that it would be much easier to directly build a new PoS blockchain than to transition something from PoW to PoS. And he was right, since both Polkadot and Cosmos are online today with a PoS consensus and Ethereum is still running on PoW. But the idea is not only to do exactly what was planned with Ethereum, but also to provide more freedom. Instead of shards, Polkadot aims for Parachains, and these are full-fledged blockchains. This means they can have their own set of functions; they might not even have smart contracts, or have different interpreters running these contracts. Unfortunately we have not discussed Ethereum 2.0 yet, so it might seem a bit strange to explain Polkadot in contrast to Ethereum. It is, but actually we want to explain Polkadot in contrast to Cosmos, which works pretty well now that we understand all these details about Cosmos. Once we understand Polkadot, going over to Ethereum 2.0 will be easy.

Don’t fall for pump and dumps. If something has outdated technology it is either Bitcoin and has the first mover advantage or it is a shitcoin. Not that hard to understand.

The next point is scalable heterogeneous multi-chain. “Heterogeneous” represents exactly what we just said about different Parachains being able to have different functionality. I hope scalable is now clear to any reader who made it to this point. If you have no idea what this means, well, maybe the article was not written enough on a real idiot level :D. Multi-chain means that Polkadot is not just a simple chain that might or might not connect to other chains, but rather a strong team of many chains working together. I avoid saying network here, since the connection is stronger than in the network of Cosmos blockchains.

The honey badger seen from different perspectives.

The next point is Tendermint + HoneyBadgerBFT, where the first part is already known to us; Polkadot acknowledges the great success of Tendermint here and practically says: we also want to be a BFT-based blockchain. HoneyBadgerBFT is different from the pBFT protocol we have seen for Cosmos. Reading its whitepaper might help as much as watching the video of the Honey Badger. The basic idea is that the Honey Badger don’t care. In contrast, pBFT does care and goes offline if too many Cobras show up. In more blockchainy terms, HoneyBadgerBFT mostly relaxes the constraints on synchronicity and is thus asynchronous. This means it does not halt as fast and can go on even if more nodes fail. For our beloved triangle this means we trade away Consistency for more Availability and Partition Tolerance:

Now we have added Polkadot: its ability to go on under less strict constraints represents a loss of Consistency in comparison to Cosmos, but means advantages in Availability and Partition Tolerance.

There is a good reason why Polkadot does this and we will understand it later. In order to finish this part, let’s check the last point: minimal, simple, general, robust. Minimal means that Polkadot should not be overloaded with functionality that does not serve its main purpose. The idea of Polkadot is to have these Parachains and connect them with a relay chain. The principle of minimality especially applies to the Relay Chain, which shall have no functionality except connecting the Parachains. On the Relay Chain there will be no smart contracts. Smart contracts are nice, but do not serve the purpose of connecting Parachains.

Simple aims in a very similar direction, saying that all features should be implemented in a simple fashion; even if there are more sophisticated approaches, these are not taken as long as they are not ultimately needed. One implication is that the Relay Chain also does not have the concept of gas. Wait, no gas? Well, there are transaction fees, and gas is only needed if you have to calculate a dynamic price for a transaction which calls smart contracts. Depending on the complexity of the smart contract, there are higher or lower gas costs. If you don’t have smart contracts, you don’t need gas. This doesn’t mean you cannot have it on your Parachain. A Parachain can have whatever crazy complexity someone wants to build into it. It is not that surprising that Jae Kwon (the founder of Cosmos) has advocated the same principle for the Cosmos Hub. It does not have smart contracts for the same reason. The solution is to just connect Zones with the functionality you need.
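The difference between a gas market and a flat fee can be made concrete with a toy calculation: gas cost scales with how much computation a transaction triggers, while a contract-free chain can charge one fixed price. All numbers here are invented for illustration:

```python
# A gas chain prices each transaction by the computation it consumes;
# a chain without smart contracts can charge a single flat fee instead.

def gas_fee(gas_used, gas_price):
    return gas_used * gas_price

FLAT_FEE = 0.001  # on a contract-free chain, a plain transfer always costs the same

print(gas_fee(21_000, 1e-7))   # simple transfer on a gas chain
print(gas_fee(900_000, 1e-7))  # heavy contract call, roughly 43x more expensive
print(FLAT_FEE)                # no contracts, no gas, one price
```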

General means that everything should be possible. There should be no constraints set by the Relay Chain on what to build on a Parachain. This is something where Gavin Wood might have thought about Ethereum 2.0 and how it will only allow smart contracts with Solidity and nothing else. Here he wants to provide more freedom than is possible with Ethereum.

Robust is quite easy. It just means the multi-chain should be secure and attacks should not be possible. This is a given: if a blockchain does not have it, it is worthless. Something that might also have been good to include in this list is distributed or decentralized, meaning that the centralization potentially happening should be kept in check.

We now understand the design goals of Polkadot, but do not really understand its consensus mechanisms yet. We might fall into the trap of thinking we already know it is some Tendermint with more asynchrony, but there’s more to it. Polkadot is quite interesting because transactions into other Parachains are not much different for the user than internal transactions. In order to achieve this, there must be a clever way to connect all these Parachains. Polkadot is designed to deploy directly with Shared/Pooled Security, a feature that Cosmos plans to implement later. We have already mentioned this feature: it says that you don’t deploy your Parachain alone, but rather connect to the Relay Chain and add to all other Parachains’ security, just as all of them add to yours. How is this possible? To understand it, we must look at Parachains a bit more like shards and apply all the knowledge we already have from the beginning of this article. The validators of a Parachain do not stay with their chain, but rather rotate around. This opens up the Data-Availability-Problem, which is solved exactly as we described it. In Polkadot there are Collators, and these are the data nodes. They propose blocks to the validators, which check the validity of the blocks. Beside collators and validators there are also nominators, who are the same as delegators, just under a different name. But there is a new group of participants in the consensus process, and these are the fishermen. Their job is to find misbehavior, and we will come back to this soon.
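The rotation idea can be sketched as a per-session shuffle: each session, the whole validator set is reshuffled and split into one group per Parachain, so no group stays with "its" chain long enough to collude with it. This is purely illustrative; Polkadot's actual assignment algorithm is more involved:

```python
import random

# Toy validator rotation: deterministic shuffle per session number,
# then an even split across parachains. Names are invented.

def assign(validators, parachains, session):
    rng = random.Random(session)  # same session -> same assignment everywhere
    shuffled = validators[:]
    rng.shuffle(shuffled)
    group = len(validators) // len(parachains)
    return {chain: shuffled[i * group:(i + 1) * group]
            for i, chain in enumerate(parachains)}

vals = [f"v{i}" for i in range(12)]
print(assign(vals, ["chainA", "chainB", "chainC"], session=1))
print(assign(vals, ["chainA", "chainB", "chainC"], session=2))  # typically different groups
```

Because validators keep moving, they cannot hoard the data of one chain, which is exactly what creates the Data-Availability-Problem the collators have to solve.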

If you are looking for patterns in candle stick charts, maybe counseling is cheaper on the long run.

First let’s have a look at how the block-producing process works. In principle it is similar to Cosmos: validators vote on whether they agree on a block, and if the majority agrees, the block passes. But the block is not proposed by a validator but rather by a collator. The validators also need the necessary data to be able to check the validity of a block. So a validator does not only tell everyone its opinion on the validity of a block but also its own availability. Stating that a validator is available means that the necessary data from the collators was provided, so a rational decision on the validity can be made. This means that in order to accept a block, 2/3 of validators must vote valid and none must vote invalid, and at least 1/3 must state their own availability as positive. If this does not happen, because somebody votes invalid or availability is not given, an exceptional condition is thrown, which means the case must be investigated. In the Cosmos consensus it would not really make sense to throw such a condition: since all the validators have all the information, there is nothing to investigate. Investigation means more information must be gathered in order to make an informed decision. This decision is mostly about how to proceed and especially about whom to punish. In the case of Cosmos, or say pBFT+PoS, there is no need for investigation, even though there might be punishments because of misbehavior. The split of active participants into different roles and onto different shards has the consequence that nobody has all the information. This is why fishermen make sense as a 4th role. They do not participate in the process of producing blocks directly, but watch over it and try to find inconsistencies or misbehavior and report them. These occasions might be very rare, but the reward is good. That is why they are called fishermen.
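The acceptance rule just described can be written down as a small decision function. This is a toy encoding for illustration, not Polkadot's actual logic:

```python
# Toy encoding of the rule above: a parachain block passes only if at
# least 2/3 of its validators vote "valid", nobody votes "invalid", and
# at least 1/3 report data availability; anything else triggers an
# exceptional condition that must be investigated.

def judge_block(votes, availability, n_validators):
    """votes: list of 'valid'/'invalid'/'abstain'; availability: list of bool."""
    if "invalid" in votes:
        return "exceptional condition: investigate"
    if votes.count("valid") * 3 < 2 * n_validators:
        return "exceptional condition: investigate"
    if sum(availability) * 3 < n_validators:
        return "exceptional condition: investigate"
    return "accepted"

print(judge_block(["valid"] * 5 + ["abstain"],
                  [True, True, False, False, False, False], 6))  # accepted
print(judge_block(["valid"] * 5 + ["invalid"], [True] * 6, 6))   # investigate
```

Note the asymmetry: a single "invalid" vote is enough to escalate, because in a sharded system the honest minority may be the only one holding the incriminating data.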

This is what it looks like when you are trying to explain why the price will move to a certain direction.

In order to understand this better, let’s look at some examples. What might be a misbehavior of a collator? A collator could propose a block that is invalid and also provide wrong information making this block look valid. This is only possible if many collators of the same Parachain do so, but can be detected by fishermen if they get hold of the real data and the data that is wrong. In most cases this won’t work anyway because there are too many collators, but in case of a collusion this might become more intersting. Still the collators cannot easily forge blocks, since signatures must be correct, however censoring data can be lucrative. Beside the usual thing for validators to sign invalid blocks, a validator can also signal its availability but in reality did not collect any data. After that the validator also signs the block as valid. At first this seems not very rational, but we all know this behavior from school. When the teacher asks who has done the homework and we know how the ones are selected who present their homework, it might make sense to signal availability of done homework and later also agree to what others say, but in reality there is just an empty page in front of us. Why does this happen? Because the reward is given if availability is signaled and the block is valid. The validator has participated in the active consensus process, but did not have to bother to download data. When too many start to do this, the whole validation process might become pointless. This is basically the situation in which the two hardworking girls in the front row raise their hand every morning and present their homework and everyone else in the class has done nothing. The teacher then often switches to the fisherman protocol, where they point out individuals randomly and check if they really have data availability given. Sometimes there is another protocol, where teachers go round and check all homework. 
Sometimes this is cheated with the mimicry method, where pupils simply present any kind of written text. This only works if the teacher checks superficially. But if the teacher checks thoroughly, the process becomes very inefficient, wasting a couple of minutes at the beginning of each class. At school this is possible, since there are only 30 pupils, but with thousands of participants, and understanding the Data-Availability Problem, we know why this isn't an option.
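The fisherman idea of random spot checks can be illustrated with a small Python sketch. Everything here (the function name `spot_check`, the chunk representation) is my own illustration, not Polkadot's actual protocol: a validator that signaled availability is asked for a few randomly chosen chunks, and a validator that kept nothing is caught immediately.

```python
import random

def spot_check(claimed_chunks, actually_stored, samples=5, rng=None):
    """Fisherman-style spot check: sample random chunk indices and verify
    that the claimant can actually produce the data it signed off on."""
    rng = rng or random.Random()
    for _ in range(samples):
        i = rng.randrange(len(claimed_chunks))
        if actually_stored.get(i) != claimed_chunks[i]:
            return False  # caught: availability was signaled, but data is missing
    return True

claimed = [f"chunk-{i}" for i in range(16)]
honest = {i: f"chunk-{i}" for i in range(16)}  # kept everything
lazy = {}                                      # signaled availability, kept nothing

print(spot_check(claimed, honest))  # True
print(spot_check(claimed, lazy))    # False: the empty page is discovered
```

The point of sampling, rather than checking everything, is exactly the efficiency argument from the school analogy: a few random probes are cheap, but a cheater cannot predict which chunks will be requested.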

There are some more details we can't cover here, but very important is, of course, how all these Parachains work together by meeting on the Relay Chain. What happens within each Parachain we now understand quite well, but what happens if you want to send a transaction to another Parachain? The promise of Polkadot is that for the user there is no difference between a transaction that stays inside a Parachain and one that leaves it. There is so-called egress transaction information, which is data that wants to leave one Parachain for another. This egress must be validated, since if, for example, some coins leave a Parachain but aren't actually there, the whole system might be compromised. This is why the validators have to state whether they have enough information to make a decision on these egress transactions. Once the Parachain block is sealed, it can be processed on the Relay Chain, where a sealed block leads to the distribution of these transactions into the ingress of the Parachain where each belongs. So the Relay Chain does the job of routing all these transactions to the proper Parachains. The Relay Chain also takes care that the validators rotate properly among the Parachains and, of course, punishes misbehavior.
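The egress-to-ingress routing described above can be sketched in a few lines. This is a toy model of my own (the names `route_egress`, `parachain`, `egress` are illustrative, not Polkadot's data structures): each sealed block carries messages destined for other Parachains, and the relay step sorts them into per-destination ingress queues.

```python
from collections import defaultdict

def route_egress(sealed_blocks):
    """Relay-chain-style routing sketch: collect the egress messages of
    sealed Parachain blocks and deliver each one to the ingress queue of
    its destination Parachain."""
    ingress = defaultdict(list)
    for block in sealed_blocks:
        for destination, payload in block["egress"]:
            # remember where the message came from, then queue it
            ingress[destination].append((block["parachain"], payload))
    return dict(ingress)

blocks = [
    {"parachain": "A", "egress": [("B", "transfer 10 to bob")]},
    {"parachain": "B", "egress": [("A", "ack"), ("C", "transfer 3")]},
]
print(route_egress(blocks))
```

Only sealed (i.e. validated) blocks reach this step, which is why the validation of egress data matters so much: whatever lands in an ingress queue is taken at face value by the receiving Parachain.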

This almost-overwhelming image shows how all these actors in Polkadot relate to each other. Transactions come in from the top left in green and are proposed by the collators. The validators of this Parachain then validate these transactions, and they are processed on the Relay Chain (white) to the ingress of the other Parachains attached to it. At the bottom we see a Parachain bridge to Ethereum.

Ok, now we understand this part, but what happens if an exceptional condition is thrown on a Parachain and the case is investigated? Does the Parachain halt until it is resolved? Actually, since one Parachain can corrupt the whole multi-chain, everything must be halted, right? Well, yes. But there is a big difference here between Polkadot and Cosmos. Remember, instant finality has been given up, so blocks do not become final instantly, which means there is more time to revert. Polkadot chooses 900 blocks until finality. This means there is a lot of time to investigate a case. So if bad things happen on one Parachain, the transaction can propagate through the network while the case is still being examined and new blocks can be produced. Also, the process of sealing a block is not as strict as with Tendermint+pBFT in Cosmos. There we have a round where a block is proposed, then everyone votes, and once everyone has voted, the next block is produced. In Polkadot we don't need to wait for everyone: if we have enough votes, the block can become valid, but a validator can still vote this block invalid later, even though the 2/3 majority was already reached. This is because HoneyBadgerBFT allows asynchrony. This makes the system more robust and more available even when misbehavior happens that needs time until enough actors get hold of it. We discussed the price of this already, and that is Consistency.
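The interplay between the 900-block window and late disputes can be captured in a toy model. This is my own sketch (the `Chain` class and its methods are not Polkadot's actual logic): a block stays revertible while fewer than 900 blocks have been built on top of it, so a late "invalid" vote still has effect inside that window.

```python
FINALITY_DEPTH = 900  # Polkadot's stated window before a block counts as final

class Chain:
    """Toy model of delayed finality: blocks stay revertible (disputable)
    until enough descendants have been produced."""

    def __init__(self):
        self.blocks = []
        self.disputed = set()

    def add(self, block_id):
        self.blocks.append(block_id)

    def depth(self, block_id):
        # how many blocks have been built on top of this one
        return len(self.blocks) - 1 - self.blocks.index(block_id)

    def dispute(self, block_id):
        # A late "invalid" vote only has effect inside the finality window.
        if self.depth(block_id) < FINALITY_DEPTH:
            self.disputed.add(block_id)
            return True
        return False

    def finalized(self, block_id):
        return self.depth(block_id) >= FINALITY_DEPTH and block_id not in self.disputed

chain = Chain()
for i in range(901):
    chain.add(i)
print(chain.finalized(0))  # True: 900 blocks have been built on top of block 0
print(chain.dispute(900))  # True: the newest block can still be disputed
```

Contrast this with Tendermint-style instant finality, where the window is effectively zero: there is nothing to dispute after a block is committed, which is exactly why the network must halt while a conflict is resolved.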

This is the reason why Polkadot aimed for a 4s block time in its whitepaper. However, it ended up with 6s for now, which, so they state, might be reduced in the future. Cosmos is currently running with 7s. Polkadot aimed for 144 validators in the whitepaper but is now at 297, so it has more validators than planned, and this might largely explain why the block time is at 6s rather than 4s. Here again the sacrifice of instant finality allows for more validators without slowing down the process. Cosmos has increased its validator set from 100 to 125 and wants to go to 300 in the long run. Polkadot wants to have 1000 validators in the future. Now we only need to understand the differences in this interblockchain thing. Let's start with the outsiders, because neither can Cosmos assume everyone will implement IBC, nor can Polkadot assume everyone will buy a slot to become a Parachain. Buy a slot? We will explain this soon. So what about the outsiders? In Cosmos these can be integrated into the network with Peg-Zones, and for Polkadot it is the same. There is no real difference, except that the Peg-Zone, or in the words of Polkadot the Bridge, is automatically a blockchain with shared security. In Cosmos these Peg-Zones make a lot of sense implemented with shared security, but until this feature is finished, they will be included as normal zones. Including outsiders is very similar, and this is a good point to also state that there will be a limited number of Parachains. Since this number is limited, the slots are auctioned off.

Be skeptical if a project throws around a lot of buzzwords and solves goat herding.

Hold up, what? You need to pay to become a Parachain? Yes. But you do not just get the slot, you also become a validator and earn rewards from participating in the shared-security model. It is not possible to connect an arbitrarily high number of Parachains, and that is why this number must be limited. It is still possible to have Parachains connected which are themselves Relay Chains for other chains below them, and this is where we see the star topology of the Polkadot network. This is the big difference between Polkadot and Cosmos. In both cases you can develop your own blockchain with your own tokenomics as well as functionality. So if you need a feature which other Parachains do not support, you can go and create your own. But you cannot just connect to the Relay Chain whenever you want to. You need to win an auction or buy a slot from another winner. If we look at smart contracts and data integration, then again Cosmos and Polkadot are quite similar. Parachains can have their own smart contracts, even different engines for processing them, same as Cosmos zones, but a Parachain cannot read out the data of a smart contract on another Parachain. The same holds true for Cosmos but is different for Ethereum. Smart contracts can, of course, be called across chains by sending transactions to them.

Ethereum 2.0 or Serenity

Just like the others, we start with the design goals of Ethereum 2.0. This part on Ethereum 2.0 will be much shorter than the others, and this is not because Ethereum stole its ideas from them, but because we explain things in this order. In fact, Ethereum introduced smart contracts and has coined most of the ideas behind sharding. But let's get to the goals:

Smooth transition from PoW to PoS
Vertical scaling through PoS
Horizontal scaling through Sharding
Seamless processing of smart contracts

The process of making Ethereum 2.0 a reality is much more a long odyssey of several upgrades than just an engineering process where a specific whitepaper is implemented, as is the case with Cosmos or Polkadot. This is already reflected by the fact that there is no single whitepaper for Ethereum 2.0 that describes all the important bits. We can't dive into all of these different paths that ended up contributing a lot or just a tiny bit to the whole process. This article is already very long, and we mostly want to focus on what the outcome will be at the end and what it means for end users and developers. We have already learned a lot about it, because the initial statements about how sharding can be made scalable and secure come from the research on Ethereum. So let's start thinking about the design goals.

A smooth transition from PoW to PoS is a problem the other two projects do not have to solve. But it is a real problem for Ethereum. There is already a running blockchain; in my opinion Ethereum is the blockchain that has best demonstrated how much the functionality of Bitcoin can be improved. Whoever thinks today that Bitcoin has all the functionality needed does not understand Ethereum or is ignorant. So there is a good reason to keep this network alive, but the transition to PoS is very disruptive. Not only does a very important group in the Ethereum ecosystem become obsolete, the miners, but it is in fact the group with the most power over what happens to Ethereum. So by just saying, well guys, the party is over, we move on to the next thing, you are obsolete now, the miners might not support that step. This is why there is a need for a smooth transition. There are other projects out there where the leader did not decide to improve the existing project but rather left the ship and started something new, in order to make this process simpler or just to collect new ICO funds. Irony has it that one of these projects is presented here as well, with Polkadot. However, there are much worse examples, and the reasons of Gavin Wood are understandable; also, Polkadot does not only throw away PoW but differs in a lot more ways. Still, in my opinion it is a very good sign that Vitalik Buterin stays with the ship and tries to fix these problems.

The next point, vertical scaling through PoS, is easy, and we already understand it after reading this article. So let's go back and understand how Ethereum wants to get to PoS. The thing is that Ethereum does not just want to switch that part, but also do horizontal scaling through sharding. This means the design goal of Ethereum 2.0 is to have a cleverly tinkered roadmap which achieves this goal by implementing several new features step by step. That way there is no single point in time where PoW switches from being everything to being totally removed. Also, shards are tested in a real-world environment before smart contracts run on shards. How is this done? By first starting a Beacon Chain, which is a PoS blockchain but completely independent of the original Ethereum mainnet, which will still run as usual. This is phase 0. The next step is to launch shards, which are coordinated by that Beacon Chain; this is very similar to the Relay Chain of Polkadot. Once this is achieved, the mainnet becomes a shard and is thus integrated into the PoS network. After this, phase 2 starts and finally implements cross-shard transfers and smart contracts as well as 100% PoS.

This graphic visualizes how the different Eth chains interact.

Last but not least, there is another design goal, and that is seamless processing of smart contracts. This means that neither the user nor the developer has to worry about the shards. Running a smart contract should be the same as in the original Ethereum. Behind the scenes there will be data passing between shards and things like that, but all of it should be automated by the infrastructure. The bar is set quite high here, and this differentiates Ethereum 2.0 from the other two projects, Cosmos and Polkadot.

Now that we have understood the design goals, let’s try to understand the consensus mechanism of Ethereum 2.0. It is quite similar to Polkadot, which is not a big surprise, since Polkadot tries to be something like Ethereum 2.0, but faster and with more freedom. So we have a PoS-based BFT algorithm, that sacrifices instant finality for less overhead and more throughput. Guess what, there is another triangle and researchers of distributed software love these triangles. So I present it here and fill it with dots representing blockchains. This is always just an abstract model and reality is more complex, but it helps us understand basic ideas:

This triangle represents other corners and edges than the previous one, but still there is quite some overlap. Consistency is closely related to Low Latency Finality, Partition Tolerance to Large Number of Nodes, but Availability is not the same as High Overhead. Keep in mind the former are also not the same, there is just overlap. This triangle is from the Ethereum Foundation and I have added the dots. The dots for each tech can only be an approximation and are relative to the others. If you put in something else, the relative position of some dots might move together or apart…

In order to understand why finality is dropped we need to understand one important concept, described in a paper which explains how Casper and GHOST can be combined. Unfortunately we have not yet explained what Casper FFG is, and we have not explained what GHOST is. In addition, this is just part of the story, since there is also CBC-Casper, and Ethereum 2.0 has changed quite a lot over time and is changing right now, since the importance of zk-Rollups has increased a lot in recent weeks. So the plan might change further in the future, since Ethereum 2.0 is not just a matter of implementing specifications. There is also a huge roadmap posted by Buterin on Twitter, and it shows how much stuff is going on:

Ethereum 2.0 roadmap as posted by Vitalik Buterin. The green rectangle on the left is “Today”. From there, the many paths that lead to the goal of Ethereum 2.0 are depicted. I have marked 4 different segments and we will have a closer look at these.

The first event for a broader public is the Phase 0 launch and this has recently happened (December 2020). This process can be seen in the purple part of the roadmap:

The purple rectangle depicts the process of launching the Beacon chain and reviewing its performance for the Phase 1 launch.

Ok, so what is Casper FFG? The FFG stands for Friendly Finality Gadget, and it is called a Gadget because it is not a full consensus mechanism; it is not even a necessary part of such a mechanism. It is an add-on that can introduce finality — even for Nakamoto-Consensus-based blockchains. In the case of Ethereum this is great, because Eth is currently based on this Nakamoto Consensus, and we already know that it is very nice to have finality. With some kind of finality the current mainnet of Ethereum can not only become a shard of the Beacon Chain, but could also implement IBC and become a part of the Cosmos network. Casper FFG in essence introduces checkpointing via PoS and is designed like the other BFT approaches we have seen. There is again some inspiration from Tendermint, but the fork choice rule is different and called GHOST. This is not part of Casper FFG, but Gasper is the combination of both, which we already mentioned. With Cosmos the fork choice rule was a no-brainer, because with instant finality there are no real forks. Whoever starts a fork is directly punished. But by giving up instant finality, the fork choice rule again becomes an interesting topic.
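The checkpointing rule at the heart of Casper FFG can be sketched numerically. This is a simplified model of my own (the function names and the bare stake counts are illustrative; the real rule operates on votes for checkpoint links in the chain): a checkpoint is justified when at least 2/3 of the total stake attests to it, and finalized when its immediate child checkpoint is justified on top of it.

```python
def is_justified(votes_for, total_stake):
    # supermajority: at least 2/3 of total stake attests to the checkpoint link
    return 3 * votes_for >= 2 * total_stake

def is_finalized(parent_votes, child_votes, total_stake):
    """Simplified Casper-FFG-style rule: a checkpoint is finalized when it is
    justified and the very next checkpoint is justified by a supermajority
    link directly on top of it."""
    return (is_justified(parent_votes, total_stake)
            and is_justified(child_votes, total_stake))

total = 900
print(is_justified(600, total))       # True: exactly 2/3 of the stake
print(is_finalized(700, 650, total))  # True: both links have a supermajority
print(is_finalized(700, 400, total))  # False: the child is not justified
```

The "gadget" framing follows from this shape: the rule only consumes votes on checkpoints and says nothing about how the blocks in between are produced, so it can be bolted onto a PoW chain or onto a fork-choice rule like GHOST.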

So what does GHOST mean? Greedy Heaviest Observed SubTree: it sounds complicated, but in essence it is very close to the longest-chain rule. Actually it should be called the heaviest-chain rule; "heavy" plays the same role here, the only difference being that heavy stands for how much stake vouched for the validity of a block instead of how much work was put into it. A SubTree is the part of the blockchain between two checkpoints; "Observed SubTree", however, was mostly picked to end up with the acronym GHOST, which earns some nerd humor points, since Casper is a ghost. It could have been called the greediest-chain rule as well, but that's fine. Actually it is called LMD GHOST, where LMD stands for Latest Message Driven, which basically means that only the latest attestations of the validators are counted. An attestation is a vote vouching for the validity of a block.
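As a toy illustration, the greedy descent can be written in a few lines of Python. The function name `lmd_ghost` and the data shapes are my own (real clients operate on full beacon-chain state): starting from the last agreed block, we repeatedly step into the child whose subtree carries the most attestation weight, counting only each validator's latest vote.

```python
def lmd_ghost(blocks, latest_votes, genesis="G"):
    """LMD-GHOST sketch: repeatedly descend into the child whose subtree
    carries the most stake-weighted (latest) attestations."""
    children = {}
    for blk, parent in blocks.items():
        children.setdefault(parent, []).append(blk)

    def subtree_weight(blk):
        # stake voting for this block plus everything built on top of it
        direct = sum(stake for voted, stake in latest_votes.values() if voted == blk)
        return direct + sum(subtree_weight(c) for c in children.get(blk, []))

    head = genesis
    while children.get(head):
        head = max(children[head], key=subtree_weight)
    return head

# Two forks off genesis G: the subtree under A carries more total stake
# than B, and below A the block D is heavier than its sibling C.
blocks = {"A": "G", "B": "G", "C": "A", "D": "A"}
latest_votes = {"v1": ("C", 1), "v2": ("D", 1), "v3": ("B", 1), "v4": ("D", 1)}
print(lmd_ghost(blocks, latest_votes))  # D
```

Note how this differs from naively picking the single most-voted block: B and C each have one direct vote, but the descent follows A because its whole subtree is heavier, which is exactly the "heaviest observed subtree" idea.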

This is taken from the Gasper paper and it already has a caption. This caption will self-destruct whenever Medium supports this feature. (Nerd humor points for me…)

LMD GHOST and Casper FFG together form what we have called a consensus mechanism, or in other terms a blockchain protocol. Now that we understand how fork choice works, we understand how delayed finality can work here. Why is delayed finality a good thing for transaction throughput? Because we don't have to wait for everyone to give an attestation and can go on producing more blocks even though some validators are lagging behind. It might also be the case that a conflict has arisen (an exceptional condition) and some participants of the protocol need to get more data to make a decision. In the case of instant finality the network has to halt, and now we understand why halting is not an issue with the HoneyBadger or with Gasper.

The red rectangle of Vitalik’s roadmap depicts the part of the roadmap that is making the PoW part of Ethereum (the current mainnet) compatible with the PoS part (the current Beacon Chain)

Here are the steps which have to be taken in order to make light clients possible for Ethereum. At some point in the past Jae Kwon (founder of Cosmos) said that the light client is the holy grail of interoperability, so somehow it must be important. So what is a light client? In contrast to a full node, a light client does not download the whole blockchain and does not need to be online all the time. The resources needed to run it are very small, in contrast to a full node. Still, a light client is different from just requesting some blockchain data from a full node that provides it via an API. Let's call the latter a naive client. The naive client trusts whatever information it gets, and it might send some transactions to the full node, hoping these will make it to the blockchain. The naive client does not know if this has happened for real or if the full node is making up a different reality; the only way out is connecting to another full node as well, checking if both realities match, and hoping both are not in collusion. The light client, in contrast, has some information (validator and block headers) that allows it to verify most things, and it is quite hard to fake things when talking to a light client. If the light client connects to other full nodes, it becomes impossible. Obviously this is something we want to have, especially since most blockchains have more read than write operations. Statelessness is connected to this, and it also helps with rollups, which we will discuss next.
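The core of the light-client idea, verifying headers instead of trusting a full node, can be sketched as follows. This is a minimal model of my own (real light clients also verify validator signatures and use Merkle proofs; `make_header` and `verify_chain` are illustrative names): given only a trusted root hash, the client checks that each header correctly links to its predecessor.

```python
import hashlib

def h(data):
    return hashlib.sha256(data.encode()).hexdigest()

def make_header(parent_hash, payload):
    # A header commits to its parent and (via its own hash) to its payload.
    return {"parent": parent_hash, "payload": payload,
            "hash": h(parent_hash + payload)}

def verify_chain(trusted_root, headers):
    """Light-client sketch: with only a trusted root hash and the headers,
    check that every header links correctly to its predecessor. No block
    bodies are downloaded."""
    prev = trusted_root
    for hd in headers:
        if hd["parent"] != prev or hd["hash"] != h(hd["parent"] + hd["payload"]):
            return False  # a full node handed us a made-up reality
        prev = hd["hash"]
    return True

root = h("genesis")
h1 = make_header(root, "block-1")
h2 = make_header(h1["hash"], "block-2")
print(verify_chain(root, [h1, h2]))  # True

h2_forged = dict(h2, payload="block-2-with-extra-coins")  # tampered content
print(verify_chain(root, [h1, h2_forged]))  # False
```

This is the difference to the naive client in the text: the full node can still withhold data, but it cannot invent a different reality, because any tampered header breaks the hash links the light client checks.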

The green rectangle depicts the part of the Ethereum roadmap, which will enable rollups.

Rollups are very interesting, but we have a big problem here. This article is very long already, and we can't double its length by explaining all kinds of layer-2 solutions. If you know what layer-2 protocols are, nice; if you don't, I'll try to make it short. The basic idea of all layer-2 solutions is to take things that originally happen on the blockchain and do them off-chain. This is not an intrinsic scaling of the blockchain, but since more data and even transactions can be processed, it is an extrinsic scaling method. There are many different ways to implement layer-2 solutions; channels, for example, provide a way to make many transactions between different parties, and only when the channel is closed do they need to make a real transaction on the blockchain. To make this secure, the parties have to lock up the coins which are used in the channel.
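The channel idea can be made concrete with a small sketch. This is my own simplification (`settle_channel` and the tuple format are illustrative; real channels exchange signed state updates): two parties lock deposits, trade any number of transfers off-chain, and only the final net balances ever touch the chain.

```python
def settle_channel(deposits, off_chain_transfers):
    """Payment-channel sketch: parties lock deposits on-chain, exchange any
    number of transfers off-chain, and only the net balances are settled
    on-chain when the channel closes."""
    balances = dict(deposits)
    for sender, receiver, amount in off_chain_transfers:
        if balances[sender] < amount:
            raise ValueError("transfer exceeds locked channel balance")
        balances[sender] -= amount
        balances[receiver] += amount
    return balances  # only this final state hits the blockchain

print(settle_channel(
    {"alice": 10, "bob": 5},
    [("alice", "bob", 3), ("bob", "alice", 1), ("alice", "bob", 2)],
))
```

Three transfers happened, but the chain only ever sees one settlement, which is exactly the "extrinsic scaling" the text describes; the lock-up is what makes the final state enforceable.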

Over time these layer-2 technologies have evolved into multi-party channels, commit-chains (for example Plasma), sidechains, or the aforementioned rollups. If one investigates these concepts and their evolution over time, it can be seen that these sidechains or off-layer solutions become more and more like smaller blockchains attached to the big one. The newer versions also have penalties for misbehavior and similar concepts. So one remark I want to make here is that IBC allows one to attach other blockchains in a fashion very similar to sidechains, but with the difference that a full-fledged blockchain is being attached, and it is a 2-way instead of a 1-way attachment. In some abstract sense one could say that IBC is the most scalable and adjustable approach to sidechains. And building a network of blockchains connected via IBC is a layer-2 solution, where each connected blockchain is a second layer to each of the others.

But why are we talking about this in the Ethereum 2.0 section? Because zk-Rollups are an interesting concept, and in recent days Vitalik Buterin seems to have increased their priority a lot. What can they do for Ethereum? They allow the computation of smart contracts to be moved off-chain and only the result to be settled on-chain. This reduces the data on the blockchain by a lot, and much more can happen in a block. But what happens if some actor is malicious? In the challenge-based variant (strictly speaking this is the optimistic-rollup approach; zk-Rollups instead attach a validity proof that is checked on-chain), it is possible to challenge the result of such a computation, and if someone does, the case is inspected and penalties apply if wrongdoing is found.
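The challenge mechanism can be sketched as a tiny adjudication step. This is my own toy model of the fraud-proof idea (the name `adjudicate` and the bond value are illustrative): the operator posts a claimed result with a bond, a challenger forces recomputation on-chain, and the bond is slashed if the claim was wrong.

```python
def adjudicate(inputs, claimed_result, compute, bond=32):
    """Fraud-proof sketch: recompute the disputed result on-chain and slash
    the operator's bond if the posted claim was wrong."""
    actual = compute(inputs)
    if actual == claimed_result:
        return {"valid": True, "slashed": 0}    # challenge fails
    return {"valid": False, "slashed": bond}    # operator punished

print(adjudicate([1, 2, 3], 6, sum))  # honest claim survives the challenge
print(adjudicate([1, 2, 3], 7, sum))  # wrong claim: bond is slashed
```

The economics matter more than the mechanics: because any single honest challenger can trigger this check, operators are expected to behave, and the expensive on-chain recomputation should almost never actually run.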

Rollmops — not to be confused with Rollups.

Another awesome thing is that these improvements do not require the PoS part of Ethereum in order to be functional. They can be done with the improvements shown in the green rectangle and thus allow for a speedup of the Ethereum mainnet in the near future. But now let's proceed with the really advanced stuff:

The blue rectangle depicts the part of the Ethereum 2.0 roadmap, which is the most advanced technology. Here sharding becomes possible and some more modern things like post-quantum cryptography and SNARKs/STARKs become available as well as CBC Casper.

The thing here is that CBC Casper is not yet a finished specification but more of an open research effort. But it defines the direction in which sharding will work in Ethereum 2.0. Basically it extends LMD GHOST + Casper FFG by what is needed to make shards work together and solve the Data Availability Problem. The paper itself is not finished, and a lot of things might change in the future. Understanding this, we know why it might take 2 years or even more until Ethereum 2.0 is available with all its main features. There are also some additional features, namely post-quantum cryptography and zk-SNARKs/STARKs. Post-quantum cryptography basically means switching the signature schemes to ones which are resilient against attacks from quantum computers (hash functions are comparatively robust; it is the asymmetric schemes that are at risk). This is not really important right now, since quantum computers have very few qubits at the moment. But one day they will become bigger, and there are some problems which can be solved very efficiently by quantum computers: integer factorization and the discrete logarithm problem, including its elliptic-curve variant. Unfortunately these are the building blocks of asymmetric cryptography, which is used extensively in blockchain technology. But since there are quite good alternatives, we don't have to be afraid that quantum computers will destroy cryptocurrencies. Or at least not those able to adapt to the future (I'm looking at you, Bitcoin).

The zk (zero-knowledge) stuff is about anonymity, which means that transactions can be processed while the parties stay anonymous. Wait, didn't we have zk-Rollups in the near future already? Why is it now again in the distant future? Well, there is a difference between rollups being based on zero-knowledge schemes (zk) and the state transition of Ethereum itself supporting them (the latter is more complicated). Some people think Bitcoin and Ethereum are already anonymous. This is wrong. Pseudonymous is not the same as anonymous. If you have an address in Bitcoin or Ethereum, it is possible to link it with your identity, and once someone has done that, he or she can track all your actions on the blockchain. With zk-SNARKs it is possible to do truly anonymous transactions that cannot be tracked. This is especially useful if one day elections and votes happen on a blockchain, since votes should be anonymous. Unfortunately this won't stop some egocentric politicians from claiming the vote was rigged.

Summary & Comparison

It is unbelievable, but we made it. We made it to the final section of this article. Here we will have a look at what all of this means for real-world applications. First we compare the features of these blockchains and what they are good for. Then we will go through some exemplary applications, and finally I will present the biggest concerns in the crypto community about each project and what I think about them.

This table gives an overview by describing how each technology tackles each category.

Looking at the table, there is something we can see quite easily: all the points where Cosmos shines, Ethereum is weak, and vice versa. Polkadot is somewhere in the middle. This is not a surprise, since the approaches of Cosmos and Ethereum vary a lot, while Polkadot tries to give projects a lot more autonomy than Ethereum but also offers more already-set-up infrastructure, with Shared Security, than Cosmos. The three important things here are Infrastructure, Autonomy and Data Access. Why? Because depending on your application you need some aspects of these fields. We have discussed all of these aspects already, but I will give a short recap: Infrastructure means how much blockchain is already there on which you can build your dApp. By “already there” I do not mean how much has been realized, but in how far the technology itself is designed to let you use an existing network. Autonomy is how far your freedom is limited. It is in some sense the other side of the coin: a big and strong infrastructure enforces many things, which limits your autonomy. We discussed the most important parts of autonomy in the Compatibility section. With the following examples we will understand more of its importance. Data Access is about the ability to read data of other applications or smart contracts. This is interesting for financial derivatives or Vitalik Buterin's most beloved example: CryptoDragons eating CryptoKitties. So whenever an application is interconnected with many other applications, it is nice if the integration is high. This is also somewhat contrary to autonomy, since integration is much easier when everything is standardized. Since we have already learned about the love of triangles, here comes the next one:

This triangle shows 3 different aspects of Blockchain 3.0 and where each technology shines. The more a circle covers the more of this aspect is available with the given technology.

This image further shows that Polkadot and Ethereum 2.0 overlap more with each other than with Cosmos. So Polkadot sometimes calling itself the “Ethereum-Killer” might be a correct classification. Whether it can really kill Ethereum is an interesting question we will discuss here.

So let’s now discuss some examples:

DEX

This is short for decentralized exchange and is a well-known concept for many in crypto. Today all useful DEXes are built on Ethereum and you can only trade Ethereum tokens on them. That limits their usefulness quite a lot, and in addition Ethereum fees are high, so they are not cheaper than centralized exchanges. But at least they have the real advantage that the exchange cannot go bust and lose all your money. So here we see that only 2 things keep DEXes from exploding and becoming the main thing: scaling and cross-chain token transfer. Both are solved by all blockchain 3.0 approaches. Nice. Is there some advantage for either technology? Well, since Data Access is not necessary, it all comes down to Autonomy vs. Infrastructure. Infrastructure means a startup can build such a thing very fast, and given that many DEXes already exist and only need to upgrade to allow non-Eth tokens as well, it looks like Ethereum 2.0 will have a main advantage here. But autonomy allows you to collect the fees yourself, so if a project decides to build a DEX, it might want to keep the fees instead of letting this stream of income run to the Ethereum validators. So an autonomous solution on its own chain (the Cosmos approach) will have less friction and might win over time. But there is also the time to launch to consider, and here this is a big one. There is no first-mover advantage with Ethereum 2.0 if it takes 2 years until it goes online, while Cosmos IBC is available on the 18th of February. So in a few days. It is very unlikely that no project will be able to build an interblockchain DEX within 2 years, especially since it has already been possible for some years now to build such a thing on Cosmos and go live soon. Polkadot lies in between, since its launch is not as far away as Ethereum's and it offers more autonomy, but it is also not an ecosystem like Ethereum where many, many tokens already exist.


DeFi

This stands for decentralized finance and is the big hype right now. It is of course a big thing, since the classic finance sector with all its derivatives and options is huge. I think there are 2 different important classes here: one is synthetic assets and the other is derivatives. Synthetic assets are real-world assets, for example Apple stock, mirrored on the blockchain. This already exists for Ethereum (Synthetix Network) and for Terra (Mirror Protocol). Terra is based on Cosmos, so we might say the future is already here. Derivatives have also existed for a long time; these are financial products with an underlying and some interesting mechanic. Typical examples are stock options, longs, or shorts. For example, if you think a company will lose value in the future, you might want to buy sell options (puts) to hedge the risk of losing the money invested in the stock. Or you could buy these puts without owning the real stock; then you bet on the price moving down. You can also go all in and short a stock, for example of a company selling games in retail stores. Shorting means that you borrow the stock and sell it. When the price drops, you can buy it back and make a profit by giving it back and cashing in the difference. The problem is that the price of the stock can also rise, and then you have to buy it back at a higher price, since you owe someone stocks of that type. This happened with Gamestop stock (GME) and was quite popular in the media. Derivatives can also be on crypto coins, so you can also short Ethereum or other tokens. Like DEXes, this was only available for Ethereum-based tokens, since everything needs to be on the same network. So buying call options for Bitcoin in a decentralized way is not possible without interblockchain communication. This field will open up in the future with Blockchain 3.0 and might become a big thing.
Discussing this is quite similar to DEXes, so in some sense Cosmos would win, but especially for derivatives it is very nice to have Data Access. So if many products that are to be turned into derivatives are built on Ethereum, then this can be built very efficiently and quickly with Ethereum 2.0. Polkadot unfortunately lacks this feature, so it cannot combine the advantages of both worlds for derivatives. However, Polkadot has Shared Security, which is interesting for DeFi products; but if someone builds a DeFi platform, the scope is so big that the investment in self-hosted infrastructure is manageable. Then there is also instant finality, which is nice for DeFi in many circumstances, and Cosmos has it. If you try out Mirror Protocol and compare it to Synthetix Network, it is much better for the user experience to have confirmation after 7s and move on quickly. Especially since often these things are bought, put in a liquidity pair, and then staked or activated at the end, so every step can only begin after the previous one is done; this feels much better with fast confirmation.

Games

This is a very wide field. Games differ a lot; playing poker has other demands than playing World of Warcraft, especially when looking at the infrastructure needed to support it. Poker could fully run on today's Ethereum. A transaction costs roughly $10, which is a lot, but if you play poker with a $10,000 pot or more, this might not be a big problem. Unfortunately most players play somewhat smaller pots, so fees become a problem here. And poker is a game with a really low number of transactions per game, so other games are much more complicated. Often these games resort to having only in-game items and game outcomes on the blockchain. I'm working on a decentralized trading card game, where users can create their own cards and vote on the cards of others to bring the game into balance. Magic the Gathering is the origin of this genre, and cards usually cost from $0.01 to a couple of hundred dollars; some are even more expensive, but most cards are valued at less than $1. So having transaction fees of $10 is not possible here, even if one moves the game off-chain and only reports game outcomes and ownership of game items on the blockchain.

This is the reason why CryptoKitties are so expensive, and the same goes for all the game items that came after them, be it robots that fight each other or some other collectibles. There is no sense in having collectibles valued lower than the transaction fee. So here we understand why blockchain 3.0 will be a total game changer for games. Games are often indie projects or start small and become quite big once a big partner is found. This means that for many projects it is not feasible to raise enough money to buy a slot in the Polkadot network. The game then has to resort to using smart contracts on another Parachain, but the fees flow off to other validators and not to the ones actually programming the game or funding the project itself. For Ethereum 2.0 the same applies. So if someone builds something like CryptoKitties, which is more about collecting rare assets than being a real game, then it is fine. But if a game is more like World of Warcraft or a trading card game, Polkadot and Ethereum 2.0 have a big disadvantage compared to Cosmos.

Here autonomy is really important. The idea of application-specific blockchains makes a lot of sense for games. In addition, it is not really necessary to have a Shared Security model, because when a game launches there are not yet millions of dollars in game assets. This is different for DEXes or DeFi, where anybody hosting such a project wants to onboard as much value as possible right from the start; then you need security. In contrast, the valuation of the staking token of a game's blockchain can grow together with the game assets collected by the players, thus building up security over time. Another good addition is the freedom in setting up the infrastructure. If someone wants to build something as crazy as WoW, then it is possible to split different areas of the game across different Zones, to scale it out almost without limit. This can be done in such a way that interacting users end up in the same Zone.

DAOs
This stands for Decentralized Autonomous Organizations. A big and famous DAO was TheDAO in 2016, which ultimately collapsed because its smart contracts were not designed well. The idea was to collect funds, invest them in startups and give the returns back to the members of the DAO. So in essence it was a decentralized investment fund. DAOs can be all kinds of things, and this example again works on Ethereum, because high transaction fees are not a big deal if you are investing $100k or millions in a decentralized way. But you can also create a sports club as a DAO, or a workers' union, or a political party. These examples are much different, because there is no way this will work if every vote costs the voter $10. When a sports club elects a new treasurer and everyone has to pay $10 to voice their opinion, this might be problematic. After that the president is elected, and so on. So the yearly members' meeting quickly costs each participant a couple of hundred dollars.
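A quick Python sketch of that meeting cost, with assumed numbers (a $10 fee per vote, a 15-item agenda and 200 members; all three are illustrative, not real figures):

```python
# Sketch: on-chain voting cost at a yearly members' meeting.
# The $10 per-vote fee, the agenda size and the member count are assumptions.

def meeting_cost_per_member(agenda_votes: int, fee_per_vote_usd: float = 10.0) -> float:
    """Total fees one member pays to cast every vote on the agenda."""
    return agenda_votes * fee_per_vote_usd

agenda_votes = 15          # treasurer, president, budget, plus smaller motions
members = 200

per_member = meeting_cost_per_member(agenda_votes)
print(f"Cost per member: ${per_member:.2f}")            # $150 per participant
print(f"Club-wide fees:  ${members * per_member:,.2f}")
```

Fifteen votes at $10 each is $150 per member, and across 200 members the club burns $30,000 in fees for a single meeting, which is why this only works if fees are near zero.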

For Ethereum 2.0 these fees might become really small, but members still have to get some Ether to participate, which might be annoying. So for smaller DAOs, Cosmos might be a real winner here, especially because once a DAO grows to a meaningful size, it can easily connect to other Hubs and become part of the network. Since the range of possible DAOs is quite wide, there are of course other examples where Ethereum 2.0 will shine. These might be concepts where the DAO interacts a lot with other Ethereum dApps. But most DAOs are more isolated than other kinds of dApps, and the advantages of Polkadot and Eth2.0 over Cosmos are less relevant than having maximum autonomy.

Interactive Smart Contracts
It is hard to find a suitable name for this. What it means is dApps that are strongly integrated or interacting with other dApps. One example was already given, and that is CryptoDragons eating and digesting CryptoKitties. You could have a virtual hotel with a casino, where the ground floor is full of slot machines, anyone can buy such a machine and host their smart contract on it, and on the upper floors there are meeting rooms where DAOs come together, and so on. This makes a lot of sense on Ethereum 2.0 and does not really work with Cosmos. With Polkadot this might also work quite well; it strongly depends on how much you need to read the data of other smart contracts. This is the main difference between Polkadot and Ethereum 2.0. If your interconnected dApp does not really need this feature, then the additional freedom of Polkadot is an advantage. If you need maximum integration, then Ethereum 2.0 is the winner.

Whenever many different actors build something together out of various smart contracts, we are in this realm. The vast majority of dApps that come to mind do not really need this feature. In most cases there is a single product, built by one group, and if it is able to transfer and receive foreign coins, it is fine. Even the feature of sending data to other blockchains might not be necessary for many dApps. But we have to keep one thing in mind here: this is only the case because we come from a mindset where such things were simply not possible. Maybe totally new things will emerge from this, and then data integration like Ethereum 2.0's becomes a killer feature. We don't know this yet. Maybe it will be disappointing, and examples like the hotel with virtual rooms turn out to be a funny thing to play with, like CryptoKitties, and not a fundamentally new way of interacting.

An overview of different types of dApps and how each technology performs.

We have now arrived at the very last part, and this is the candy at the end. The "when moon" questions are answered here. Let's first discuss what the typical criticism of each project is in the crypto community:

Cosmos is shit, because it accrues no value!
This critique is very interesting, and the main idea is that Cosmos, or rather the native token of the Cosmos Hub, Atom, is not able to become valuable. For Polkadot you need to buy DOTs to become a Parachain, and for Ethereum you need to buy Ether to run the smart contracts, or rather your users need to buy Ether to use them. But with Cosmos anyone can just start building their own Cosmos-based blockchain, and Atom does not get any value from that. This is actually right. It results from trying to overcome the "one coin to rule them all" approach. But this critique is also short-sighted.

On one hand, it reduces the value generation of a token to forcing others to buy it in order to use the tech. This is not correct. For example, if you have a DEX on a platform, then users will transfer coins over the network in order to use the DEX. All validators profit from the fees on these transfers, even if the trade on the DEX is between two different projects and neither of them is built with the same tech as the DEX. So if a Polkadot-based coin is traded for an Ethereum-based token on a Cosmos DEX, and the Cosmos Hub connects these chains, then the Cosmos validators profit from this. So there is another reason why Atoms might have value: using the Hub for routing tokens to other Zones pays fees to Atom stakers.

On the other hand, if you say that Polkadot and Ethereum will profit greatly from forcing projects built on them to buy their coin, then you are basically saying that you understand this mechanism but the projects building on Polkadot, or the users of smart contracts on Ethereum, will not. Because the Cosmos-based solutions compete with the others: if you can get the same service on Cosmos with lower fees than on Ethereum, you might just use the Cosmos version. If you can build your project with Cosmos technology and not pay the auction price of a Polkadot slot, then you might just go with Cosmos.
If you think these projects will be successfully forced into paying something they would not have to pay with Cosmos, then you are basically betting that the developers of these projects understand less than you do. I don't want to bet on this, since I know that I'm a real idiot. And furthermore, if for many projects it is cheaper and easier to build with Cosmos, which network will most likely have the highest number of transactions, generating the most fees? Well, the one that did not force others in the first place.

Edit: I have had a personally frustrating experience with the Cosmos community managers. When I wrote this article the experience had already happened, but I did not find it fair to include, since sometimes shit just happens. However, I have now had a second frustrating experience, and I can rule out that it was just a misunderstanding. When Hackatom 5, the Cosmos hackathon, happened, there was a community winner called "King of Cards". This project copied the frontend from our Cosmos-based trading card game and handed it in. Somehow nobody noticed that the GitHub project did not have any blockchain running (somehow they did not bother to copy the Cosmos blockchain), and nobody got suspicious about a single commit containing a complete website, or about that website connecting to a blockchain on another domain (ours). But well, that is shit that happens; not a big deal. I told the organizers. It was a community prize, so the community had voted for this project; it was not selected by the jury. So I said they should declare us the winners, but I totally understand that they did not. But I demanded that they at least mention us as the project that was plagiarized. They did not. This was frustrating for our team. I tried to explain to the community managers that most plagiarism does its harm by spreading genuine ideas without giving attribution to the originators; only a small part of plagiarism is actually products being copied in low-wage countries and sold at a cheaper price. Well, they did not really care. I did not understand that, since it would make sense to support your own community, but maybe that's just my opinion. Then they proposed to write something about our project, say, spotlight us in some article, and I said, yeah, that is nice.
In December I was told it would take a bit longer. In January I only got an answer after asking several times and was told that the Cosmos upgrade was drawing a lot of their time, which is totally understandable. I asked how the article about us was going in February, March and April, once a month; I tried not to be annoying, but in none of these cases did I get an answer. So for a second time, as a community member who really tries to build awesome stuff for Cosmos, I was left standing in the rain by the community managers. Since after several months I can't believe this is just some random misunderstanding, I think it is not unfair to describe this experience in this article. It is a personal experience, and others might have different and hopefully better experiences. Furthermore, I highly appreciate the marketing mission launched for Cosmos (see https://www.mintscan.io/cosmos/proposals/34). I have had contact with some of the administrators of this fund, and all of them have always been helpful and thoughtful. This fund gives me hope that Cosmos can achieve a better marketing and community approach than what I have experienced so far.

Polkadot is shit, because Parity Wallet was hacked!
The company that started Polkadot is also a provider of some widely used Ethereum infrastructure. The Parity wallet is theirs, and it was hacked twice. The first time, in mid-2017, the hacker stole 150k Ether (~30M USD at the time); the second time, in November 2017, the "hacker" did not steal anything but wiped out 500k Ether (~152M USD at the time) held in Parity wallets, accidentally, or so the "hacker" claimed. The problem is that this should not have been possible in the first place, and that is Parity's fault. The hack was the result of an incomplete initialization of the wallet's smart contract and the addition of an unneeded function to kill it. Polkadot had collected 480k Ether in its ICO, of which 2/3 were lost in the second of these two hacks. That was really sad news for Polkadot, but luckily it was able to run a second ICO, or rather a token sale, where 60M USD were collected, so the project was not fatally harmed by these events. Besides the direct damage of the lost funds, however, these events raise big question marks over whether the Parity team might make more mistakes in the future, causing more severe hacks. It might also mean that they have learned their lesson and due diligence will be on a very high level from now on.

Such things always have an element of finger-pointing, but I still decided to include this, because it has a lot to do with technical expertise, which I think is important. For Cosmos there is also an event I could have picked, which is the founder Jae Kwon going a little bit crazy. This happened after Cosmos launched its mainnet, while IBC was still under heavy development. Somehow he identified with being "Cosmuhammad Bitcoin Jaesuestain", some kind of crypto-prophet madness. These events were crazy when they happened; however, the Cosmos team was able to move on and deliver its vision on the 18th of February 2021, so the founder freaking out a bit did not mean the technical expertise was harmed. Something similar happened to Tezos, which was also able to move on after the split with its founder. Also, I try to present the most popular critique of each project here, and that Cosmos is not able to accrue value from projects joining the network is something you hear very often, in contrast to the Jae Kwon story. For Polkadot you also hear quite often that the project is shit because you have to buy DOTs to get in, and the important fact often left out is that these DOTs are not burned but also earn you staking rewards.

Ethereum 2.0 is shit, because it will never be finished!
Yeah, well, what can I say. "Never" might not be right, but it takes a hell of a lot of time. The critique is not wrong. It often comes along with the criticism that the plan for Ethereum 2.0 has changed quite often and might change again in the future. Some people even predict that shards will never come and everything will be done with rollups, because these shards are too complicated. Well, if that is true, then Polkadot must have the same problem, since Parachains and shards are not that different, and Ethereum could also lower the seamlessness of its smart contracts to the level that matches Polkadot. But I think this prediction is not correct. Shards are not some crazy magic nobody actually understands; it is just a lot of engineering work, even after the basic approach has been finalized. Another reason why it takes so long is of course that Ethereum has an existing ecosystem and a blockchain already running, built on older technology. But this has a big advantage: Ethereum is already on all exchanges, everyone knows it, and after the transition there is no reset ecosystem but a big community waiting for the upgrade. So even if this critique is correct, the question is whether Cosmos or Polkadot can outrun Ethereum 2.0 with their head start of one or two years.

Final words
Thank you for reading, and congrats, you made it to the end. It got a bit longer than I intended. There will also be some things that are wrong or outdated. That is hard for me to avoid, especially since I learned some of this quite a while ago and it might have changed recently. I might be biased in many things; that is because I'm human, and while I try to minimize it, it is not really possible to overcome it completely. If you find mistakes or outdated information, just let me know in the comments. I'll be happy to correct it.
