<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Patrick Wieth on Medium]]></title>
        <description><![CDATA[Stories by Patrick Wieth on Medium]]></description>
        <link>https://medium.com/@patrick-wieth?source=rss-8e91a3236ca6------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/2*L7Zq129LGvHwu21XilJbeA.jpeg</url>
            <title>Stories by Patrick Wieth on Medium</title>
            <link>https://medium.com/@patrick-wieth?source=rss-8e91a3236ca6------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Wed, 06 May 2026 15:34:10 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@patrick-wieth/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[How to build a blockchain game]]></title>
            <link>https://patrick-wieth.medium.com/how-to-build-a-blockchain-game-f18c9d730c42?source=rss-8e91a3236ca6------2</link>
            <guid isPermaLink="false">https://medium.com/p/f18c9d730c42</guid>
            <category><![CDATA[blockchain]]></category>
            <category><![CDATA[games]]></category>
            <category><![CDATA[strategy-games]]></category>
            <category><![CDATA[games-development]]></category>
            <category><![CDATA[trading-card-game]]></category>
            <dc:creator><![CDATA[Patrick Wieth]]></dc:creator>
            <pubDate>Fri, 16 Aug 2024 11:40:27 GMT</pubDate>
            <atom:updated>2024-08-16T11:40:27.136Z</atom:updated>
            <content:encoded><![CDATA[<h3>or how and why we build CrowdControl</h3><p>This article will look at how online games can be built on a blockchain and what interesting pitfalls can occur. The running example will be <a href="https://crowdcontrol.network">CrowdControl</a>, a blockchain-based trading card game I have been working on for a couple of years now. The main goal, however, is to give you insight into how to build games on blockchains; CrowdControl is just, hopefully, what comes out of all the lessons learned here. This is an article I’m writing without knowing the outcome and final conclusions, so it might be surprising even for myself, haha. I often learn new things when I write an article, since it forces me to be really accurate. So maybe we’ll learn together that the game I have been working on for years does not make sense, maybe we’ll learn something else. Let’s go!</p><p>Let me get another thing straight as well: we will not discuss slot machines or gambling schemes disguised as games. Plenty of these exist, and they are not real games but gambling devices. We will talk about real games that are fun to play in themselves and don’t just hook us by exploiting our dopamine system. Real games like Warcraft, Starcraft, Hearthstone, League of Legends, Fortnite, GTA or Teamfight Tactics and many more. Of course the line is blurred nowadays, but I’m sure you get the point.</p><p>To give you a forecast of what will be in this article, here is the list of topics we will cover:<br>1. What types of games can be built with a blockchain?<br>2. Technical limitations of using blockchain<br>3. What is CrowdControl? And why??</p><h3><strong>1. What types of games can be built with a blockchain?</strong></h3><p>Does it make sense to build any kind of game on a blockchain, or are there limits where it really makes no sense? Let’s take Sudoku as an example. The game is singleplayer, you just solve a puzzle, and this is already an indicator that it might not be a fit for blockchain. Still, it can make sense. In Sudoku there are many different configurations to be played, and one could think about putting the game creation on a blockchain.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/960/0*m4V1r9ZT0WXu86uR.jpg" /><figcaption>Sudoku. Does it make sense to have this game on a blockchain?</figcaption></figure><p>This means you think up and design a Sudoku puzzle on a blockchain, mint this configuration as an NFT, and then others can play your creation. In this approach it makes sense to put even a singleplayer game on a blockchain. In this concrete example there is one big problem though: all solvable configurations can be generated by an algorithm, so letting humans do this task and incentivizing it on a chain is not really meaningful. It might still be a successful blockchain project, because sometimes projects manage to promote useless bogus and sell it before everyone realizes what it is. The best example of this is meme coins, of which plenty exist.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/680/0*d6MEFih6AQwapy11.jpg" /><figcaption>Source <a href="https://roomescapeartist.com/2020/06/13/defense-sudoku/">https://roomescapeartist.com/2020/06/13/defense-sudoku/</a></figcaption></figure><p>The other option is to make challenges out of a configuration. If you start a game, the blockchain randomly picks a Sudoku and you have 5 minutes to solve it. To start a game you have to lock some coins, and if you win you get some extra. If you don’t solve it in time, you lose your locked coins. Again this makes no sense, because Sudoku can be solved by algorithms in a very short time, which makes the challenge very exploitable by bots.</p>
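<p>To make those mechanics concrete, here is a minimal Python sketch of such a lock-and-solve challenge. The chain object and its <em>lock</em>, <em>payout</em> and <em>slash</em> methods are hypothetical stand-ins, not a real chain API; the point is only the flow of staking, deadline and settlement, and why a bot that solves in milliseconds would drain it:</p><pre># Minimal sketch of the lock-and-solve challenge described above.
# "chain" and its lock/payout/slash methods are hypothetical stand-ins.
import time

class SudokuChallenge:
    def __init__(self, chain, puzzle, stake, reward, time_limit=300):
        self.chain = chain                         # holds the locked coins
        self.puzzle = puzzle                       # randomly picked configuration
        self.stake, self.reward = stake, reward
        self.deadline = time.time() + time_limit   # e.g. 5 minutes

    def start(self, player):
        self.chain.lock(player, self.stake)        # stake before the reveal

    def submit(self, player, solution):
        if time.time() > self.deadline:
            self.chain.slash(player, self.stake)   # too slow: stake is lost
        elif self.puzzle.is_valid(solution):
            self.chain.payout(player, self.stake + self.reward)
        else:
            self.chain.slash(player, self.stake)   # wrong answer: stake is lost</pre>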
<p>So now we have seen a singleplayer example for which two approaches to putting it on a blockchain already exist. For Sudoku neither makes sense, but we have learned some concepts and can apply them to other games. So is there a singleplayer game where this makes sense?</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1000/0*JI7A89haThiPpO7g" /><figcaption>Super Mario Kaizo — community generated super hard to beat Mario Levels</figcaption></figure><p>Sure, let’s just take Super Mario, a jump-and-run game. Some players could design levels and think up interesting courses, and others could play these levels and try to beat them. The blockchain has some nice advantages here; for example, in order to publish a level, the designer must first solve it herself. This prevents troll levels, which are unsolvable. Nice. What about the other option, challenges? That makes sense here too, especially with speedruns in mind: it might be interesting to beat other players at solving levels the fastest. The blockchain then puts a frame around these challenges and allows betting on attempts and similar things. Of course there is again the problem that some people might write bots which are superhumanly strong at these levels and simply scoop up all the rewards and bets.</p>
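<p>That publish rule is easy to sketch. Assuming the chain can re-run a level deterministically (the level object and replay format below are hypothetical stand-ins), minting is simply gated on the designer’s own winning replay:</p><pre># Sketch of "you must beat your own level before you can publish it".
# Level.simulate and the replay format are hypothetical stand-ins; a chain
# can enforce the rule because the simulation is deterministic.
class LevelRegistry:
    def __init__(self):
        self.published = {}

    def publish(self, designer, level, replay):
        outcome = level.simulate(replay)           # deterministic re-run
        if not outcome.reached_goal:
            raise ValueError("designer has not beaten her own level")
        token_id = len(self.published) + 1         # mint the level as an NFT
        self.published[token_id] = (designer, level)
        return token_id</pre>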
<p>This is always a problem and has little to do with blockchain itself. Even in World of Warcraft, bots farm items and sell them online, and WoW is a complex game. So moving from the complexity of Sudoku to Super Mario or even WoW does not remove this problem; it only changes at which level of popularity a game starts to attract botting. So this problem must be solved by other means (captchas, bot pattern detection, etc.).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*OQLwhutUbYaX9Qid.jpg" /><figcaption>Mushrooms generally enhance experiences.</figcaption></figure><p>Unfortunately, I have told a tiny lie: I said we were discussing a singleplayer game and then turned it into a multiplayer one by making different players interact with each other. The point is that such a thing will always happen, because blockchains are, in essence, engines of consensus.</p><p>Blockchains always involve multiple entities; if they don’t, there is no reason to use a blockchain at all. Consensus happens between different persons. Sure, a person can be torn within herself and want to find consensus, but it is questionable how a blockchain could help there. This is absolutely crucial to understand: blockchains are engines of consensus, and if the problem you are about to solve has nothing to do with consensus, then blockchain is not the answer.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*OJ8hTpVjic9x2Sdoqe0NlQ.png" /><figcaption>An example which does not really need a blockchain. Or let’s be more exact: It does not need blockchain to solve any problems, blockchain is just a good platform to make money with a product that has no substance.</figcaption></figure><p>So let’s be more specific about what we call a singleplayer game. A singleplayer game is for a single player inside the game. Our examples of level creation interaction and speedrun interaction are both interactions outside of the game; while you play, there is no interaction. Chess, in contrast, is always a multiplayer game no matter which scope you pick. By that definition, Trackmania Nations/Sunrise is actually a singleplayer game, because there is no interaction between the racing cars even if you play at the same time.</p><p>Which brings us to the next example: Chess. If you guessed that this would be the next example, you are definitely smart enough to read this article. If not, no worries, just keep reading.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*7RGV79NslukfXn9t.jpg" /><figcaption>Chess — undoubtedly a multiplayer game. It is also deterministic and a game with perfect information. Is this relevant? Let’s find out.</figcaption></figure><p>Chess is a very good example, because it totally works on a blockchain. Having games on blockchains often implies technical limits, and Chess has no problem with these limits; we will cover them in the next section. Putting Chess on a blockchain is really straightforward: two players commit to a game, every move is posted on the chain, and the chain checks whether there is a winner or whether someone gave up. This is a third variation of interaction: competition. The game creation part makes no sense for Chess, because the game always starts from the exact same configuration. The second mode, challenges, does not apply to normal Chess, but there are Chess puzzles where you have to win the game within a fixed number of moves. As Chess is a deterministic game with perfect information, these have exactly the same problems as Sudoku, so we will not repeat them. Of course botting is also a huge problem in normal Chess if money is involved, since Chess computers exist and beat world champions.</p>
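<p>Because the rules of Chess are deterministic, every node can validate every move, so a whole game can live on-chain as a tiny state machine. Here is a minimal sketch; the move validation uses the real <em>python-chess</em> library, while the surrounding contract framing is hypothetical:</p><pre># Chess as an on-chain state machine: each move is a transaction that
# every node can validate deterministically. The rules come from the real
# python-chess library; the "contract" framing is a hypothetical sketch.
import chess

class OnChainChess:
    def __init__(self, white, black):
        self.board = chess.Board()                 # standard starting position
        self.players = {chess.WHITE: white, chess.BLACK: black}

    def post_move(self, sender, uci_move):
        assert sender == self.players[self.board.turn], "not your turn"
        move = chess.Move.from_uci(uci_move)       # e.g. "e2e4"
        assert move in self.board.legal_moves, "illegal move"
        self.board.push(move)
        if self.board.is_checkmate():
            return f"winner: {sender}"             # settle the stakes here
        if self.board.is_game_over():
            return "draw"
        return "ongoing"

game = OnChainChess("alice", "bob")
print(game.post_move("alice", "e2e4"))             # ongoing</pre>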
<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*N5xlDFqQ6Hijx2Vepx3fNA.png" /><figcaption>Starcraft — a multiplayer game without perfect information (source <a href="https://starcraft.blizzard.com/">Blizzard</a>)</figcaption></figure><p>What about other multiplayer games? Let’s have a look at things like Starcraft, Warcraft, Command &amp; Conquer, you know, the good old real-time strategy games. These games share the feature of competitiveness with Chess, so the result should be the same. But they differ in that not every starting configuration is the same, factions are asymmetric, and the real-time aspect brings a lot of technical limitations. Different starting configurations might open up the possibility of game creation interaction. Great, but what is the real-time problem? Imagine you had to post every state change to the blockchain. For these games, at least 10 happen per second. Blockchains have block times of seconds if they are fast. So even fast blockchains are too slow; the real-time problem is a real problem. We will discuss solutions in the next section.</p><p>Well then, let’s do Poker; things happen slowly in Poker. Decisions are made over many seconds, so full blockchain integration should work? Yes, in some sense, but there is another pitfall. In real-time strategy games there is something called fog of war; in Poker it’s simply not seeing the hands of the other players. Unfortunately, blockchains are very transparent, and one could inspect the game setup and know what cards the others have. So here again a solution must be found, and we will look at it in the technical chapter. Yes, I know, I’m building up a lot of tension, and this is something a reader might not expect from such an article. But it could be worse: at least I’m not using cliffhangers that leave you wondering whether a main character of our story died or not.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*HQf-92ksADwyQDJT.jpg" /><figcaption>Poker, slower than real-time strategy but also with hidden information. <em>© happy_author/stock.adobe.com</em></figcaption></figure><p>But let’s come back to the real-time strategy games. Competition interaction works the same as in Chess, but what about game creation interaction? This is a bit dull: sure, there are maps, but most of the time 3–4 maps emerge which are played almost all of the time. So this might not be very interesting in these games. But there are other possibilities as well. In Warcraft III, for example, maps were not just made for competitive play under normal game rules; the game rules themselves were altered. Some of these maps were so popular that they are still played today. The most prominent example is DotA, from which a whole genre has emerged, the MOBAs, to which League of Legends and obviously Dota 2 belong. The same applies to auto-battler games like Teamfight Tactics. So there would be a possibility to put such maps as creations on a chain.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*g8aMwcRGFbkFwA2g" /><figcaption>DotA Allstars — not the first of its kind, Aeon of Strife existed before it, but the most popular one. An ultimate testament to what is possible with the Warcraft III map editor.</figcaption></figure><p>Another option is to create units. Since these games do not run on a fixed configuration from the start, but rather units are built while the game progresses, it is possible to introduce new units. So one could think about a game where players can design new orcs, catapults or space ships to fight each other. However, therein lies a cursed problem, as Alex Jaffe from Riot Games calls it (<a href="https://www.youtube.com/watch?v=8uE6-vIi1rQ">https://www.youtube.com/watch?v=8uE6-vIi1rQ</a>). A cursed problem in game design is a set of design goals which cannot be achieved at the same time. In our example, letting players create units in competitive games brings the following problem:<br>The designer of a new orc wants it big and strong and easy to build, just imbalanced as hell, because your own creation should be impactful. But a game is only interesting if it is balanced. So putting the design of units into the hands of players drives insane imbalance, which makes the game unfun.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/550/0*3V6fcJRaAvmBiAp0.jpg" /><figcaption>Minecraft — a game which has been enhanced tremendously by its community via mods. Source: <a href="https://www.pcgamesn.com/minecraft/twenty-best-minecraft-mods">pcgamesn</a></figcaption></figure><p>This is not a problem in non-competitive games like The Sims, Minecraft or Factorio. There, someone can design a new game element that breaks the game, and you can simply avoid that element and not include it in your game. But in competitive games, the opponent can choose to play that game-breaking creation against you. Here we see an inherent problem of game asset creation in competitive games, one that is much less severe in cooperative games. However, in cooperative games it is also necessary to gatekeep these additions.</p><p>So what remains are cooperative games, which work on blockchains but of course become a bit more competitive, since there might be competition for resources in some sense.
If you put The Sims on a blockchain, then some real estate might be more valuable than other property, and similar things. This can be a real problem, because such games can suffer very quickly when it happens. Since creativity is at the center of games like The Sims, showing players that they have to compete for resources in order to be creative can ultimately kill the fun of being creative and turn it into a grind. This always depends on the exact game design and of course on what type of blockchain interaction makes sense. But creating assets and using them is a pretty clear use case, and the concept of a metaverse is on everyone’s lips. Here we are basically talking about games where the creation of game-bound NFTs is at the center and trading them between players is crucial. This can be just for building your virtual home and having nice NFT clothing in the metaverse, or for fighting with weapons or Pokemon-like NFTs in games like Diablo, Path of Exile or Pokemon. Blockchain examples already exist, like Axie Infinity.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/640/0*cRLdt8kBuI2yCM9i" /><figcaption>Factorio — just like Minecraft is so much bigger with all those mods from the community. The developers often take ideas from mods and put those into the official version. From this we can see some interesting problem: Consensus on what should be included for everyone. Source: <a href="https://www.reddit.com/r/factorio/comments/sa1zcc/krastorio_fusion_reactor_and_fuel_factory_u235/?tl=de#lightbox">reddit</a></figcaption></figure><p><strong>Summary of this section:</strong><br>Singleplayer games can be built with blockchains only if some kind of interaction is introduced, either via game creation or via challenges. Competitive games work great but can have big problems with game creation interaction. Non-competitive games work, but can become competitive through fights over resources. NFTs are a core concept that can be used in games.</p><p><strong>P.S.</strong> Oh, I forgot something important: because this article is about blockchain, many are just reading it to learn how to get rich faster. Well, err, here is some get-rich-faster advice: if you see a game that is actually running on a blockchain but is truly singleplayer, it is very likely a slot machine or another type of gambling device. There is no point in putting such a thing on a blockchain if you want to create a good game. The only reason to put such things on a blockchain is that your money is readily available there and gambling laws might be bypassed. Don’t lose your money gambling. This is the ultimate advice that no gambler has ever heard, haha.</p><h3>2. Technical limitations of using blockchain</h3><p>Well, we have already learned a lot about these, and the most common one is the transaction or throughput limitation. This is really no surprise, since it is already a problem for blockchains that only transfer coins between users. For games with several state updates every second, there is no way to capture all of that on a blockchain. Either these updates are captured on a second-layer chain, which only posts concluding states to the main chain after a long time, or the game server becomes a trusted entity that posts updates to the chain from time to time. The second-layer concept means that, for example, every game runs on its own “mini-chain” or channel, and once the game closes, only the result is posted to the first-layer blockchain. The difference between the two options is not that big, because both reduce decentralization a lot. For most game devs it might be a lot easier to have a trusted game server that posts game results on a blockchain. For Chess this means the server only posts the result of the game on the chain, or maybe also the moves, but each move is made directly on the server, so Blitz Chess is possible.</p>
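<p>The “only the result goes on-chain” pattern is easy to sketch as well: the match is played off-chain, and the chain merely checks that both players signed the same final outcome. The signatures below use the real PyNaCl library; the <em>settle</em> function is a hypothetical stand-in for the on-chain part:</p><pre># Off-chain play, on-chain settlement: both players sign the final result
# and a single cheap transaction settles the game. Signatures use the real
# PyNaCl library; settle() is a hypothetical stand-in for the contract.
from nacl.signing import SigningKey

white, black = SigningKey.generate(), SigningKey.generate()

result = b"game:42 winner:white"                   # final state, not every move
sig_white = white.sign(result).signature
sig_black = black.sign(result).signature

def settle(result, sig_w, sig_b):
    # Raises BadSignatureError if either signature is forged.
    white.verify_key.verify(result, sig_w)
    black.verify_key.verify(result, sig_b)
    print("both players signed this outcome; payout can proceed")

settle(result, sig_white, sig_black)</pre>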
<figure><img alt="" src="https://cdn-images-1.medium.com/max/600/0*gSaNTZv7IDalBJXX" /><figcaption>State channels are a typical solution not only for games. source: <a href="https://research.csiro.au/">https://research.csiro.au/</a></figcaption></figure><p>For Poker we have seen another type of problem, one which is also solved by such a server but might not be solved by a second layer: a server can easily hide the game information from the players. However, with the amount of money involved in Poker, trusting a server might not always be great, since hacks happen. So is there another solution? Yes: chains that implement zk (zero-knowledge) protocols. This is some crazy stuff where you can prove that you know something without revealing what you actually know. (This reminds me that I still have to write an article about how the crazy zk stuff works.) It allows Poker to be played completely decentrally, but without the ability to cheat and see the hands of others. It often comes with quite high transaction fees, so it might be problematic for low-stakes Poker games, but not if you build a blockchain dedicated to Poker.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*equo7LOrtBaz08sp.jpeg" /><figcaption>I couldn’t find a useful image showing zero-knowledge algorithms/proofs, so I let AI create another useless one.</figcaption></figure>
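<p>Real zero-knowledge protocols are beyond a blog sketch, but their simplest cousin, a salted hash commitment, shows the flavor of hiding information on a transparent chain: publish only a hash of your hand, reveal hand and salt at showdown, and anyone can re-check. Real zk goes further and lets you prove properties of the hand without ever revealing it.</p><pre># Not real zero-knowledge, just its simplest cousin: a salted hash
# commitment. The chain sees only the digest, so the hand stays hidden
# until the showdown reveal, which anyone can verify.
import hashlib, secrets

def commit(hand: str):
    salt = secrets.token_bytes(16)                 # blocks brute-force guessing
    digest = hashlib.sha256(salt + hand.encode()).digest()
    return digest, salt                            # publish digest, keep salt

def reveal_ok(digest: bytes, salt: bytes, hand: str) -> bool:
    return hashlib.sha256(salt + hand.encode()).digest() == digest

digest, salt = commit("Ah Kh")                     # hole cards stay private
assert reveal_ok(digest, salt, "Ah Kh")            # showdown check passes</pre>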
<p>So basically all games that have a real-time aspect, be they shooters, strategy games, MOBAs or just GTA, cannot post everything on the chain. They have to rely on something off-chain that processes the game interactions.</p><p>Another important thing is fees. If you have a game where you collect and find items worth $0.10 or $1 and fees are $30, then the awesome feature of trading and sharing with others is basically unusable. The first thought here might be to pick a chain with low fees, but I would advise going with your own chain. It depends a bit on the game; some can totally work on an established chain, but games hosted on their own chain have the fee problem solved for good. Fees are basically competition for block space between the different users of a blockchain. Users who transfer a lot of money are willing to pay higher fees than those who just play a game. If all users on a chain are basically playing the same game, the competition for fees looks totally different.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*Wqc_T5r53bywYI1-" /><figcaption>Network fees can vary :D (source <a href="https://x.com/coinmarketcap/status/1481448979997069312">https://x.com/coinmarketcap/status/1481448979997069312</a>)</figcaption></figure><p>Another important thing is governance. A game that is hosted on its own chain has a completely different level of governance. Fundamental things can be changed, rollbacks are possible in case of catastrophic events, and most importantly, you don’t go down with the ship. What do I mean by this? The collapse of UST and Terra is a prime example. If you had built your game on Terra and your users used UST to transfer value, then after the crash you would have lost basically everything. This is not nice. “But I want to have these external assets, stablecoins etc. in my game!” No problem: if you build with Cosmos, you can transfer these assets to your chain via IBC and use them there, and your chain still survives such a collapse. The big downside is of course that a chain needs more development time and the infrastructure is more costly. If this is a problem, you can still build your game as a dapp first and then move to an app-specific chain if there is interest in the game.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*0aId7kob5VlnTDd1" /><figcaption>Visualization of the vision of the early Cosmos ecosystem. Good old times.</figcaption></figure><p>Unfortunately, Cosmos has changed over time and did not stick very closely to its initial goal of enabling app-specific chains. Therefore gaming infrastructure was never really built. When I started writing this article I was still full of hope that Cosmos might continue to evolve as it had, but this has changed, and I’d say it is no longer a no-brainer to just go with Cosmos.</p><p><strong>Summary of this section:<br></strong>Transaction throughput is a concern, especially if your game is a real-time game. Fees can be a problem, but they actually have the same root. If you need updates more than once per second, you have to move them off-chain. For all these problems, solutions exist. Build on your own chain if you can and there is no hard requirement for smart contracts. However, if you are just building a bad game that aims to collect some money and that’s it, it might be best to go with a big network and ignore all the downsides. If you want to build a truly awesome game, follow along.</p><h3>3. What is CrowdControl?</h3><p>So after all these realizations, I thought about what kind of game I want to make using blockchain, and I came straight back to the cursed problem mentioned above. I like competitive games, but I also like being creative and creating new stuff in games. So I had to think about how to solve this cursed problem and whether blockchain can be of help here. Let’s think about how this process normally happens:<br>Some game designers create a new hero in League of Legends or a new tank in a strategy game, and then the players play it. The hero might be totally overpowered, and players complain about it on boards, on YouTube or on Twitter. The game designers realize this and nerf the hero (nerf = make weaker). It can also go in the other direction: if something is too weak, the game designers make it stronger.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Y4fT7YEWjWFBQ1f4PrVqaQ.png" /><figcaption>New heroes are added to League of Legends periodically. Often these are too strong and must be nerfed. Over time, heroes whose power level was fine become too weak; this is called power creep. Source <a href="https://www.leagueoflegends.com/en-us/champions/cassiopeia/">leagueoflegends</a></figcaption></figure><p>It might also happen that such a game evolves over time and something that was strong in the past becomes weak, because all the new stuff is just a tiny bit stronger; this is usually called power creep. It usually goes in this direction because making new stuff a bit stronger incentivizes players to spend money on it. But at some point it becomes necessary to adjust the old stuff to the new level, which is long-term balancing.
What we see here is a loop of interaction between game designers and players, a constant exchange of adjustments and evaluation.</p><p>This process can be automated. If we move it onto a blockchain, we can collect the players’ feedback on game elements and buff (make stronger) or nerf these elements accordingly. I think there are two different ways of doing this. One way is letting the players explicitly express their evaluation. We call this voting. It is the automated equivalent of ranting online about overpowered things and seeing how many upvotes such a rant gets. Upvotes can also come in the form of retweets or responses in a forum. The other way is to draw conclusions from statistics. If something occurs very often for winning players, it might be overpowered. This approach has the big upside that it is not distorted by human emotion and perception, but it also has downsides.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Rfp4MMqH9iD50ghoA0PWgQ.jpeg" /><figcaption>Is the next evolution in gaming empowering players?</figcaption></figure><p>One downside is that if some elements are rare or simply not used, they might not even appear in the statistics, and the engine will never find out whether they contribute more to won or to lost games. Another downside is that some items are utility things and not necessarily overpowered. For heroes in LoL this might not apply, since all heroes are on the same level: you pick one per game and play it. But it applies to the heroes’ items. Some might be useful in play and appear very often in the inventories of successful players, not because they are overpowered (OP) and make you win, but because they provide a better game experience and were added to the game as “quality of life” features. Take, for example, a potion which lets you see invisible things: everyone uses it, and if you don’t, you just lose to stealthy shenanigans. So in the statistics this item is highly correlated with winning, but it’s just a default tool of the game, and nerfing it may lead to a situation where everybody starts playing stealthy champions, and so on. Other statistical biases can be found as well. Looking at Magic: The Gathering, for example, your balancing algorithm might find out that Islands are the strongest card. For everyone who does not know Magic: The Gathering: it is a trading card game where you play lands, which give you mana, and that mana lets you play spells and creatures. There are 5 basic lands, to which the Islands belong, and all of them do the same thing: give you a single mana, just in 5 different colors. So none of these lands is overpowered, since they all do the same and are the basic piece of the economy. Every deck consists of roughly 30% lands, and if the color blue is overpowered, then Islands are by far the most common cards in winning decks, even though the Islands themselves are totally not overpowered. But the color these Islands are associated with might be. In such cases a balancing algorithm might fail, whereas players are better at identifying the broken part of a game.</p>
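<p>The Islands trap is easy to reproduce with a toy statistic. In the sketch below the games are invented purely for illustration, and a naive per-card win rate puts the basic land right at the top, tied with the blue spells it enables, simply because it appears in every winning blue deck:</p><pre># Toy win-rate statistic reproducing the "Islands" confound: the basic
# land scores a 100% win rate because blue decks happen to be winning,
# not because the land itself is overpowered. Data is invented.
from collections import defaultdict

games = [  # (winner's cards, loser's cards)
    ({"Island", "Counterspell"}, {"Mountain", "Lightning Bolt"}),
    ({"Island", "Brainstorm"},   {"Mountain", "Shock"}),
    ({"Island", "Counterspell"}, {"Forest", "Giant Growth"}),
]

wins, seen = defaultdict(int), defaultdict(int)
for winner_deck, loser_deck in games:
    for card in winner_deck | loser_deck:
        seen[card] += 1
    for card in winner_deck:
        wins[card] += 1

for card in sorted(seen, key=lambda c: wins[c] / seen[c], reverse=True):
    print(f"{card}: {wins[card] / seen[card]:.0%} win rate")
# The statistic cannot tell the engine (a basic land) from the payload
# (the spells it pays for); a voting player can.</pre>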
<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*wvJdDrCvliwQ8QxrxPrBkA.png" /><figcaption>CrowdControl — a trading card game as deep as Magic the Gathering, but also as accessible as Hearthstone.</figcaption></figure><p>But in my opinion, the biggest downside is that these algorithms do not make use of the consensus engine that a blockchain is. By voting, players express their opinion; collecting the votes and finding the most popular option creates consensus. This allows players something truly unique, and it is what we want to achieve with CrowdControl: <strong>a game that is developed by the players.</strong> The players can also unite and nerf something out of the game that is not too strong but unpleasant to play against. It is possible to change the game into whatever the players want. At this point a very typical response is “but players are idiots, they will not create utopia but dystopia, they will destroy everything”. To this I can only respond that this is ultimately the debate about whether democracy works or not.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FQFgcqB8-AxE%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DQFgcqB8-AxE&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FQFgcqB8-AxE%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/13dadf5ad2647bdeae1c65e2d12b931a/href">https://medium.com/media/13dadf5ad2647bdeae1c65e2d12b931a/href</a></iframe><p>At CrowdControl we believe that democracy works, and we want to find out what happens when players are empowered.</p><p>That is why we are creating a blockchain where players create their game, balance it and write their story. We think this is something new, enabled by blockchain, and sadly most blockchain games are just about extracting coins from their users instead of giving them a voice. We all love having an opinion and we love playing games, so why not combine the two?</p><p>We are not just building a game, but a whole blockchain solution with all those self-developing and auto-balancing features. We believe in empowering games, so if you are building a game and want to use our technology, come and let us know!<br>If you are interested in playing our game, also come!</p><p>This linktree has all the places you can go to: <a href="https://linktr.ee/crowdcontrolnet">https://linktr.ee/crowdcontrolnet</a></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*BnbuIqRzhsE9TIOz8ilOPw.png" /><figcaption>Infinite fun awaits!</figcaption></figure><p>The fastest way to get in contact with us is via <a href="https://discord.gg/yPA3aKe">Discord</a>.</p><p>I really like this topic, so I think I will write more about it. I have been very busy in recent years building the stuff I talked about above, and I hope you could pick up some interesting thoughts here, even if you find what we are doing totally boring.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f18c9d730c42" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Did Rammstein’s Till Lindemann see it coming?]]></title>
            <link>https://patrick-wieth.medium.com/did-rammsteins-till-lindemann-saw-it-coming-315d2ddffbd1?source=rss-8e91a3236ca6------2</link>
            <guid isPermaLink="false">https://medium.com/p/315d2ddffbd1</guid>
            <category><![CDATA[lindemann]]></category>
            <category><![CDATA[accusations]]></category>
            <category><![CDATA[rammstein]]></category>
            <category><![CDATA[rape]]></category>
            <category><![CDATA[sex]]></category>
            <dc:creator><![CDATA[Patrick Wieth]]></dc:creator>
            <pubDate>Tue, 27 Jun 2023 14:35:14 GMT</pubDate>
            <atom:updated>2023-07-03T15:44:51.040Z</atom:updated>
            <cc:license>http://creativecommons.org/publicdomain/zero/1.0/</cc:license>
            <content:encoded><![CDATA[<h3>Did Rammstein’s Till Lindemann see it coming?</h3><p>Well, normally I write about other things, but somehow this caught my eye, and I must write about it.<br>It’s all over the news: Till Lindemann is facing serious accusations, which basically boil down to a system where young girls, allegedly intoxicated with alcohol and/or roofies, were brought to him for the purpose of having sex with him, where consent may not have been given. I’m not saying the case is closed and he’s going to be sentenced now. But the way the girls are coming forward and telling their stories is reminiscent of Weinstein and the broader #metoo movement.</p><p>On the face of it, this is nothing new; stories of rock stars having their groupies lined up and picked were already told when Robbie Williams was at the height of his popularity. Well, that’s what I remember: when I was a kid, Robbie Williams was at the height of his popularity. So this rock star groupie thing is probably another 30 years older than that, I don’t know. But the allegations against Lindemann go a bit further, as the girls are said to have been drugged or their drinks spiked, which takes the whole thing from degradation to rape, if true.</p><p>There is more to it: the girls claim to have been separated from their peer group, made to give up their mobile phones, encouraged to drink alcohol, and many report dizziness and the feeling of being drugged with knock-out drops. It is all over the media, and if you want to see for yourself, just read the articles or watch YouTube. I’m not going to link to that stuff here; I want to look at Lindemann’s content and ask the question of the title: did he see it coming? I mean, he may not be guilty, but if he is, then maybe he subconsciously saw his fate and processed it in his lyrics.</p><p>So, if you look at the content of Lindemann’s lyrics, well, that gives these accusations a strange taste. The first thing that comes to mind is a porno that was released a few years ago:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1009/0*MrQ0FwBN_s3wa7up.png" /><figcaption>Till the End, a porn released by Till Lindemann in 2020, hardcore porn might be an understatement…</figcaption></figure><p>Well, the porn is not just a little bit crazy; it shows violence and dehumanisation.
But even if it were less hardcore, there would still be the question: why the hell would a superstar like Till release something like that?</p><p>And then there is a poem, written by Lindemann, which goes like this:</p><blockquote>Ich schlafe gerne mit dir, wenn du schläfst.<br>Wenn du dich überhaupt nicht regst.<br>Mund ist offen, Augen zu.<br>Der ganze Körper ist in Ruhe.<br>Kann dich überall anfassen.<br>Schlaf gerne mit dir, wenn du träumst.<br>Und genau so soll das sein (so soll das sein so macht es Spaß).<br>Etwas Rohypnol im Wein (etwas Rohypnol ins Glas).<br>Kannst dich gar nicht mehr bewegen.<br>Und du schläfst, es ist ein Segen.</blockquote><p>I guess not everyone speaks German like I do, so here is an attempt at a translation:</p><p>“I like to sleep with you, when you sleep.<br>When you’re not moving at all<br>Mouth open, eyes closed<br>The whole body is still<br>Can touch you everywhere<br>Like to sleep with you, when you dream<br>And precisely so shall it be (so shall it be, so it is fun)<br>A bit of roofies in the wine (a bit of roofies in the glass)<br>You cannot move anymore<br>And you sleep, what a bliss”</p><p>I try to translate as closely to the original as possible, rather than focusing on good English.</p><p>The poem describes something that is explained in the allegations, and the porn describes something else that is also in the allegations. The girls/women telling their stories talk about bruises and assume they were intoxicated, so their memory is gone. These two pieces describe a lot of what is in the allegations. There is more, but one question is often asked:</p><p>Why would someone with so many groupies drug his victims? This is a fair point and one that is often raised by Rammstein fans. To be honest, I am a Rammstein fan myself and I have often talked with my friends, and we have tried to understand their works of art. Interpreting Rammstein songs is not always easy, sometimes it is just strange, but with these stories coming out a lot of things have become clearer, though not in a good way.</p><p>So, the question we are trying to understand is: why would someone drug and rape their fans when there are so many fans that they could just bang with consent?</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*lrwBqGMkoogMmrYn.png" /><figcaption>Weißes Fleisch live performance</figcaption></figure><p>Let’s take a look at the really old stuff first.
There is Weißes Fleisch, a song from 1995, which has these lines in its lyrics:</p><blockquote>Du auf dem Schulhof, ich zum Töten bereit<br>Und keiner hier weiß von meiner Einsamkeit<br>Rote Striemen auf weißer Haut<br>Ich tu’ dir weh und du jammerst laut<br>[…]<br>Ich werd’ immer geiler von deinem Gekreisch<br>Der Angstschweiß da auf deiner weißen Stirn<br>Hagelt in mein krankes Gehirn<br>[…]<br>Jetzt hast du Angst und ich bin soweit<br>Mein krankes Dasein nach Erlösung schreit<br>Dein weißes Fleisch wird mein Schafott-t-t-t<br>In meinem Himmel gibt es keinen Gott</blockquote><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FErFOZidF6kI%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DErFOZidF6kI&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FErFOZidF6kI%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="640" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/6eab4a45e3dd7b1d0f54d5cbc985694c/href">https://medium.com/media/6eab4a45e3dd7b1d0f54d5cbc985694c/href</a></iframe><p>In English:<br>“You on the schoolyard, I’m ready to kill<br>And nobody here knows how lonely I am<br>Red stripes on white skin<br>I hurt you and you wail loudly</p><p>I get hornier and hornier from your screams<br>The fearful sweat there on your white forehead<br>Hails into my sick brain</p><p>Now you are afraid and I’m ready<br>My sick existence screams for release<br>Your white flesh becomes my scaffold<br>In my heaven there is no god”</p><p>The lyrics are essentially about the torture of innocent victims found in the school playground. The torture is sexual, and the speaker realises that he is mad. It is also interesting that the white flesh, which represents innocence, becomes his scaffold, suggesting that this will not end well.</p><p>Another notable mention from the old times is “Bück dich”</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FveZaHimbtaQ%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DveZaHimbtaQ&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FveZaHimbtaQ%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/83fcdfa02ed9e24dbccf7f52bb79787a/href">https://medium.com/media/83fcdfa02ed9e24dbccf7f52bb79787a/href</a></iframe><p>The song basically says “bend over! I command”, “your face does not matter”.</p><p>At the time, we would never have thought that these could be Rammstein’s fantasies, or rather Lindemann’s, because the accusations seem to be aimed at him alone. We just thought they liked to provoke, or to create art that tried to show and perform a special, morbid part of being human.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FIxuEtL7gxoM%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DIxuEtL7gxoM&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FIxuEtL7gxoM%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/27dd12f35f8266a6a07144389e36d870/href">https://medium.com/media/27dd12f35f8266a6a07144389e36d870/href</a></iframe><p>“Ich tu dir weh” is an easy one. It’s mostly giving insight into BDSM.
Easy to interpret.</p><blockquote>Nur für mich bist du am Leben<br>Ich steck’ dir Orden ins Gesicht<br>Du bist mir ganz und gar ergeben<br>Du liebst mich, denn ich lieb’ dich nicht<br>Du blutest für mein Seelenheil<br>Ein kleiner Schnitt und du wirst geil<br>Der Körper schon total entstellt<br>Egal, erlaubt ist, was gefällt</blockquote><p>“Just for me you are alive<br>I stick medals into your face<br>You are totally devoted to me<br>You love me, because I love you not<br>You bleed for my salvation<br>A little cut and you become horny<br>The body already totally disfigured<br>Whatever, allowed is what pleases”</p><p>Something about the song was off, though. At first glance, the lighting, the clothes, everything made it easy to conclude that this was all about BDSM. But it was not quite right. “You love me, because I love you not” is not really BDSM. I have not quoted the whole lyrics here, because they just describe more things that can be done to hurt someone. The important thing is that it does not seem to be consensual. It gives the impression that the speaker is somehow crazy. But yeah, I just thought it was another way of expressing, and especially of giving an impression of, the morbid and dark places of human psychology. It was not as if Rammstein only produced sexually morbid stuff. There was also other morbid stuff, like eating people (Mein Teil). So back then we could all agree that Rammstein just liked to provoke with controversial topics. Probably carving out a map of the dark places in our minds.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*3aVvAHGEoMQGMvfk.png" /><figcaption>Till cut a real hole in his cheek to get the light into his mouth.</figcaption></figure><p>It is also worth noting that Lindemann drilled a real hole in his cheek for “Ich tu dir weh” in order to shine a light into his mouth. This story is told by the band in a making-of (<a href="https://www.youtube.com/watch?v=VWBDNL71tJo">watch on YouTube</a>). To make it even more real and cool, Lindemann really injured himself, and his bandmates are somehow impressed by his dedication, but also somehow shocked. They even say that they tried to stop him, but he was so happy to do it. It is dangerous, you can get a deadly blood infection, but he was only happy when he could finally hurt himself like that and make his art more real.</p><p>For my personal reception of Rammstein this was a turning point. I no longer had the feeling that they just wanted to provoke people, as if they were just showing society things that society wanted to keep under the carpet. It was about more. It was about an inner conflict between hating yourself and taking that hatred out, finding something to release the pressure. They wanted to express that inner emptiness, creating an uncomfortable feeling of helplessness that can be silenced for a moment when violence erupts. We will see this pattern again.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/0*NnqMWCmWRxxFqGVO.jpg" /></figure><p>And then came Lindemann, the project Till had with Peter Tägtgren. My friends and I were like “WHAT THE HELL IS THIS”. We tried to deal with the content, but it was a heavy load. Someone came to the conclusion that Till now had his own project where he could do all the stuff that is too crazy for Rammstein.</p><h3>Lindemann — Skills in Pills</h3><p>The first Lindemann album is in English, so we don’t have to translate it here. There are even a couple of songs about sex, most notably “Fish On”.
It starts with “Catching ladies is my delight”, and in the clip there are young naked girls being chased by hairy creatures.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fplayer.vimeo.com%2Fvideo%2F142008863%3Fapp_id%3D122963&amp;dntp=1&amp;display_name=Vimeo&amp;url=https%3A%2F%2Fvimeo.com%2F142008863&amp;image=https%3A%2F%2Fi.vimeocdn.com%2Fvideo%2F539188105-28c5e7696511f376c8909cef261275fe1e3a6b3e09d0250c6f6c0c6efa825f21-d_1280&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=vimeo" width="1920" height="1080" frameborder="0" scrolling="no"><a href="https://medium.com/media/e33c46ba4c89e925af8f95861e6c91fe/href">https://medium.com/media/e33c46ba4c89e925af8f95861e6c91fe/href</a></iframe><p>The girls are eventually caught in fishing nets, and there are some clunky lines like “my rod is stiff” and “I put some grease on my reel”. Then the girls have to ride stationary bikes, which generate the electricity for the building. The building has “Lindemann” written on top, with lights powered by the girls riding the bikes. At the end, the girls kill the hairy creatures with knives and flee the scene. As a result, the lights go out and the band is left frustrated in the dark. The Lindemann band is furious, and Till smashes Peter’s accordion.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*XGaucnf3Uaprgfd2.png" /><figcaption>One of the caught girls spits at the guy whipping the girls. She repeats that later.</figcaption></figure><p>At the time, it didn’t quite make sense. The fishing part made sense, but why are they riding bicycles? Why are they generating electricity? But if the accusations are true, then it makes more sense, a darker sense. The captured girls are what powers the band and allows Lindemann to perform. There are insiders who say that Lindemann’s sex addiction and groupies are an open secret, and in rock’n’roll that is not unheard of. But assuming he is guilty of the accusations that go beyond that, the plot of Fish On foreshadows quite well what is happening now. At some point his victims will collectively turn against him, and that will turn off the lights for Rammstein or Lindemann. We don’t know yet how it will end, but from my point of view Rammstein have already fired one person involved, Alena Makeeva, who is said to have been responsible for fishing the girls for Till; er, I didn’t want to say fishing, I meant she invited them to meet Till at private after-show parties. We don’t know how this will end, or whether Till will be found guilty or these accusations are true, but if they are, then Fish On is spot on.</p><p>The allegations go as far as saying that there was a power system that made girls his sex victims — the hairy creatures in the video. These sex victims are what drives him as Lindemann to perform, and in the end the victims turn against him, and the lights go out.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*sVey2JBTKZ4qV6mz.png" /><figcaption>Once the caught girls flee, the lights go out and the band is frustrated.</figcaption></figure><p>In a crazy way you could say that Till Lindemann saw it coming and put it into a song. But the question is whether we are confusing the lyricist with the performer.</p><h3>F &amp; M</h3><p>Later came the second Lindemann album, and the first one already had a higher density of sex-related songs than any Rammstein album.
The title of the second album, “F &amp; M”, stands for “Frau und Mann” (woman and man) and refers to the relationship between man and woman. The song of the same title is about an attempted rape…</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FDji_km6UJvA%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DDji_km6UJvA&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FDji_km6UJvA%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/18391742f330b1f33181fa6230e19308/href">https://medium.com/media/18391742f330b1f33181fa6230e19308/href</a></iframe><figure><img alt="" src="https://cdn-images-1.medium.com/max/856/0*BgVnARg-5gzBgE0N.png" /><figcaption>An injury starts bleeding again after the protagonist sees his victim again</figcaption></figure><p>And then there are 3 more songs on this album, and oh boy do they hit home: “Ach so gern”, “Knebel” and “Platz Eins”. I’d be a liar if I said I didn’t like these songs. They are good, and they are works of art. If you look at art, or consume it in any way, and you get the feeling that the artist really wanted to get a message across to you, then you know it is art. That is a necessary part of great art. Heidegger says this in “The Origin of the Work of Art”, and please forgive me for trying to translate it, because it is hard enough to understand this guy in German. But what is really at work in a work of art is the revelation of true truth. And that is the impression I got from these songs.</p><h3>Knebel</h3><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2Fp64X_5GX0J8%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3Dp64X_5GX0J8&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2Fp64X_5GX0J8%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/05545654e9bf17931d97b429d2465a7d/href">https://medium.com/media/05545654e9bf17931d97b429d2465a7d/href</a></iframe><p>As this song has two parts, let’s just go through the lyrics step by step:</p><blockquote>Ich mag die Sonne, die Palmen und das Meer<br>Ich mag den Himmel, schau’ den Wolken hinterher<br>Ich mag den kalten Mond, wenn er voll und rund<br>Und ich mag dich mit einem Knebel in dem Mund<br>Ich mag volle Gläser, die Straßen, wenn sie leer<br>Ich mag die Tiere, Menschen nicht so sehr<br>Ich mag dichte Wälder, die Wiesen blühen sie bunt<br>Und ich mag dich mit einem Knebel in dem Mund</blockquote><p>“I like the sun, the palm trees and the ocean<br>I like the sky, watch the clouds go by<br>I like the cold moon when it is full and round<br>And I like you with a gag in your mouth<br>I like full glasses, the streets when they are empty<br>I like animals, humans not so much<br>I like dense forests, meadows blossoming colorfully<br>And I like you with a gag in your mouth”</p><p>The first verse shows the lyrical speaker’s love of nature, tranquillity, animals, and plants. The speaker does not really like people, but likes someone who is addressed and has a gag in her mouth.
As the video shows a young naked girl in chains, this could be addressed to her.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*tznDoSSStixsICzr.png" /><figcaption>Taking a walk through a river with a naked girl in chains.</figcaption></figure><blockquote>Das Leben ist einfach, einfach zu schwer<br>Es wäre so einfach, wenn es einfacher wär’<br>Ist alles Bestimmung, hat alles seinen Grund<br>Und du bist ganz still, hast einen Knebel in dem Mund</blockquote><p>“Life is easy, simply too hard<br>It would be so easy if it were easier<br>It’s all destiny, everything has its reason<br>And you are completely still, have a gag in your mouth”</p><p>The chorus shows how the speaker has a distorted view of life, trying to convince himself that it is easy when in fact it is difficult. Everything has a reason, there is a destiny, which refers to circumstances that cannot be changed. Again, the person addressed has a gag in her mouth and is (unsurprisingly) silent.</p><blockquote>Ich mag leichte Mädchen und weine wenn sie schwer<br>Ich mag deine Mutter, den Vater nicht so sehr<br>Ich mag keine Kinder, ich tue es hier kund<br>Und doch ich mag dich mit einem Knebel in dem Mund<br>Ich mag die Tränen auf deinem Gesicht<br>Ich mag mich selber, mag mich selber nicht<br>Das Herz ist gebrochen, die Seele so wund<br>Und du schaust mich an mit einem Knebel in dem Mund</blockquote><p>“I like light girls and cry when they are heavy<br>I like your mother, your father not so much<br>I don’t like kids, I proclaim it here and now<br>And yet I like you with a gag in your mouth<br>I like the tears on your face<br>I like myself, don’t like myself<br>The heart is broken, the soul so sore<br>And you look at me with a gag in your mouth”</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*i-XiKmm_qzoFRAbU.png" /><figcaption>Life would be easy if it was easier.</figcaption></figure><p>The light girls can also be translated as “floozies”; in German there is an ambiguity, because light girls can be literally lightweight girls or floozies. I did not want to translate this directly as floozies, because I think Lindemann really wanted another contradiction: light girls who are heavy. The speaker doesn’t like children; he likes the mother but not the father. This is a typical pattern for a seducer who sees the father as a protector and children as a nuisance.</p><blockquote>Das Leben ist traurig, das Leben ist schwer<br>Ich würde es mögen, wenn es einfacher wär<br>Die Welt dreht sich weiter, die Erde ist rund<br>Um dich dreht sich nichts, hast einen Knebel in dem Mund<br>In dem Mund, ja</blockquote><p>“Life is sad, life is hard<br>I would like it if it were easier<br>The world keeps turning, the earth is round<br>Nothing revolves around you, you have a gag in your mouth<br>In your mouth, yes”</p><p>Here the song changes drastically. The speaker is no longer torn between two opposites. Life is no longer easy yet simply too hard, as it was described before; life is now sad and hard. No contradiction, just the statement of a frustrated person. The irrelevance of the person being addressed is clearly stated, and this too is linked to the gag in the mouth. What happens next is a change from harmonious to aggressive.</p><blockquote>Ich hasse dich<br>Ich hasse dich<br>Ich hasse dich<br>Ich hasse dich</blockquote><p>“I hate you<br>I hate you<br>I hate you<br>I hate you”</p><p>Now the person being addressed is hated, not liked as before. All ambiguity, all contradictions are gone, and unlimited aggression breaks out. This is clear in the lyrics, the visuals, and the sound.
The video shows blood on faces, screaming, and expressions of violence like biting into an eel.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*ylVJjXhWhKtmvwK_.png" /><figcaption>Biting into an eel, because why not?</figcaption></figure><blockquote>Leben ist einfach, einfach zu schwer<br>Es wäre so einfach, wenn es einfacher wär’<br>Ist alles Bestimmung, hat alles seinen Grund<br>Und du bist ganz still, hast einen Knebel in dem Mund<br>In dem Mund</blockquote><p>This is the chorus again, and we have already translated it. This time it is sung in an aggressive tone. The song describes a silenced victim who is addressed by the lyrical speaker. The lyrical speaker is a torn soul trying to maintain balance, but eventually all the hatred and frustration break out as the song moves into its exploding, aggressive second part.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*WZ5qVOnnAorCUSLP.png" /><figcaption>The protagonist is pulled by his chain…</figcaption></figure><p>In order to interpret and understand the song, it makes sense to watch the uncensored version, as it reveals a few more details. When the aggression breaks out, the protagonist (played by Till) has his mouth in the crotch of the victim, who is standing in front of him and then sitting on his shoulders. He bites into the genitals or performs oral sex; it’s not clear, but there is a lot of blood coming out of the crotch. If you had not understood how strongly this song is linked to sexuality, it should be clear now. In the first two scenes of, let’s call it, the eating story, the victim looks like she enjoys it; after that the victim fights against Till and in the end swims away. Even before that, the victim pulls the protagonist towards her with the chain connected to the protagonist’s steel collar, which was seen earlier in the clip. Afterwards, the victim swims away and slams the chain into the water, as if frustrated by its presence. The chain and collar are clear symbols of his enslavement, and what makes him a slave could be his sexuality.</p><h3>Ach so Gern</h3><p>Although Knebel may not be clear at first glance, on closer inspection the story it tells is clear.
Let’s move on to the next song.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FxkEju-deJsE%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DxkEju-deJsE&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FxkEju-deJsE%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/4d266929b2cb0b2618b7022f7531cda9/href">https://medium.com/media/4d266929b2cb0b2618b7022f7531cda9/href</a></iframe><blockquote>Ich kannte viele schöne Damen<br>Auf dieser schönen, weiten Welt<br>Mit Fug und Recht kann man da sagen<br>Ich war ein wahrer Frauenheld<br>Man sagt mir nach, ich wäre schamlos<br>So herz- und lieblos und frivol<br>Man meint, ich hätte sie gezwungen<br>Nein, die Wahrheit liegt dazwischen wohl</blockquote><p>“I have known many beautiful ladies<br>In this beautiful wide world<br>One can rightfully claim<br>I was a real womanizer<br>They say that I am shameless<br>So heartless, unkind and frivolous<br>They say I might have forced them<br>No, the truth lies somewhere in between”</p><blockquote>Denn ach so gern hab’ ich die Frauen geküsst<br>Und doch nicht immer auf den Mund<br>Ich wollte immer wissen, wie es ist<br>Und küsste mir die Lippen wund</blockquote><p>“‘Cause oh I have so gladly kissed women<br>But not always on the mouth<br>I have always wanted to know how it is<br>And I kissed until my lips were sore”</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*XaXspcKsNjRAaPSP.png" /><figcaption>The protagonist of “Ach so gern” is pulling a naked woman in the snow</figcaption></figure><blockquote>Ich küsste nicht nur rote Wangen<br>Ich hatte einfach alles lieb<br>Man sagt, ich sieche vor Verlangen<br>Besessen so von Paarungstrieb<br>Sie meinten, ich wär’ tief gefallen<br>In ein Meer von Libido<br>Man sagt, ich sieche vor Verlangen<br>Das kann man so sehen oder so</blockquote><p>“I did not only kiss red cheeks<br>I just loved everything<br>They say I am dying of desire<br>So obsessed with the mating drive<br>They say I have fallen far<br>Into a sea of libido<br>They say I am dying of desire<br>You can see it either way”</p><blockquote>Denn ach so gern hab’ ich die Frauen geküsst<br>Und doch nicht immer auf den Mund<br>Ich wollte immer wissen, wie es ist<br>Und küsste mir die Lippen wund<br>Ich nahm sie einfach in die Arme<br>Und Manche hauchte leise: „Nein“<br>Doch ich kannte kein Erbarmen<br>Am Ende sollten sie’s bereuen</blockquote><p>“‘Cause oh I have so gladly kissed women<br>But not always on the mouth<br>I have always wanted to know how it is<br>And I kissed until my lips were sore<br>I just embraced them<br>And some of them whispered “no”<br>But I knew no mercy<br>In the end they would regret it”</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*3Brh2QOJGOUoGxR0.png" /><figcaption>The protagonist, a rapist, is severely beaten</figcaption></figure><blockquote>Wie das Kaninchen vor der Schlange<br>Ein kalter Blick, dann biss ich zu<br>Und das Gift ruft ein Verlangen<br>Ließe nimmer mich in Ruh’<br>Ach, die Frauen, all die treuen<br>Und manches Herz brach wohl entzwei<br>Am Ende sollten sie’s bereuen<br>So viele Tränen und Geschrei</blockquote><p>“Just like the rabbit in front of the snake<br>A cold look, then I bit<br>And the venom calls forth a desire<br>That would never leave me alone<br>Oh, all the faithful women<br>And many a heart broke in two<br>
In the end they would regret it<br>So many tears and screams”</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*axYg2RaRmbhTxo2d.png" /><figcaption>At the end the camera zooms out showing that the cell of the prisoner is even worse than one thought.</figcaption></figure><blockquote>Denn ach so gern hab’ ich die Frauen geküsst<br>Und das nicht immer auf den Mund<br>Ich wollte einfach wissen, wie es ist<br>Und küsste mir die Lippen wund<br>Ich nahm sie einfach in die Arme<br>Und Manche hauchte leise: „Nein“<br>Doch ich kannte kein Erbarmen<br>Soll damit sie glücklich sein<br>Ba-ba-ram-bam-ba-ba-ram</blockquote><p>“‘Cause oh I have so gladly kissed women<br>But not always on the mouth<br>I have always wanted to know how it is<br>And I kissed until my lips were sore<br>I just embraced them<br>And some of them whispered “no”<br>But I knew no mercy<br>So that she will be happy”</p><p>The interpretation of “Ach so gern” is surprisingly simple; there is no real subtlety here. A rapist is telling his story, how he is seen, and even from his own relativisations his guilt is obvious. The lyrics say nothing of what is seen in the clip, if you watch the one-shot clip linked above. There is also an official clip, which has fewer views, but also has some other scenes. However, the important part of the clip is the protagonist being beaten by the prison guards. The lyrics could be what he explains to them.</p><p>If you want to identify a subtle message, it is the typical defence of a rapist (the girls wanted it too), mixed with some fatalistic explanation that he was a victim of his lust and that external circumstances made him do it.</p><h3>Platz Eins</h3><p>And last but certainly not least we come to “Platz Eins”.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2F0nS9W7MdbrA%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D0nS9W7MdbrA&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2F0nS9W7MdbrA%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/e81e83db46febbf0e2a5e3f7bda33514/href">https://medium.com/media/e81e83db46febbf0e2a5e3f7bda33514/href</a></iframe><blockquote>Alle schauen mich neidisch an<br>Denn ich führ’ die Liste an<br>Endlich bin ich an der Spitze<br>Erfolg kriecht mir aus jeder Ritze</blockquote><p>“Everybody is jealously looking at me<br>Because I lead the list<br>I’m finally at the top<br>Success crawls out of every crack”</p><blockquote>Durch die Menge geht ein Raunen<br>Und die Männer werden staunen<br>Alle Frauen, alles meins<br>Alles dreht sich nur um mich</blockquote><p>“The crowd goes ooh and aah<br>And the men will marvel<br>Every woman, everything mine<br>Everything revolves around me”</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*JgJXzsAaMPdvVMPS.png" /><figcaption>Girls in lingerie wearing Till or Peter masks</figcaption></figure><blockquote>Ich bin Platz eins, ja<br>Alles oder nichts<br>Platz eins<br>Ich im Rampenlicht</blockquote><p>“I am number one, yes<br>Everything or nothing<br>Number one<br>I am in the spotlight”</p><blockquote>Die ganze Welt wird mich bald singen<br>Ich werde es noch sehr weit bringen<br>Jede Note sing’ ich richtig<br>Der Text dabei ist gar nicht wichtig</blockquote><p>“The whole world will soon sing me<br>I will go far<br>Every note I sing perfectly<br>The lyrics do not really matter”</p>
<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*ANtP7uJp2DDNuU-D.png" /><figcaption>Girls caged in washing machines and Till moving a cart</figcaption></figure><blockquote>Meine Lieder sind die Besten<br>Und Autogramme für die Gäste<br>Der liebe Gott hat auch schon eins<br>Und alle Engel, alle meins</blockquote><p>“My songs are the best<br>And autographs for the guests<br>Even God already has one<br>And all angels, all mine”</p><blockquote>Ich bin Platz eins, ja<br>Alles oder nichts<br>Platz eins<br>Ich im Rampenlicht</blockquote><p>“I am number one, yes<br>Everything or nothing<br>Number one<br>I am in the spotlight”</p><blockquote>Vor, zurück, zurück und vor<br>Jeder will mein Lied im Ohr<br>Vor, zurück, zurück und vor<br>Alle singen mit im Chor</blockquote><p>“Forwards, backwards, backwards and forwards<br>Everybody wants to listen to my song<br>Forwards, backwards, backwards and forwards<br>Everybody is singing all together”</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*_pmTpZVSJtNNTcMm.png" /><figcaption>The dehumanized fans cheer — the lyrics do not really matter</figcaption></figure><p>I only cut the last part, as it just repeats stuff that has already been translated. Overall, I tried to stick more to the German lyrics than to go for good English. The lyrics of Platz Eins alone are quite clear. It is about a very successful musician who is maybe a bit too arrogant and self-opinionated. Some of the statements are interesting. The lyrical speaker realises that it does not really matter what lyrics he writes; the fans will still admire him. He also thinks that all women belong to him.</p><p>Watching the clip here makes a lot of sense for further interpretation of the song. There is also an uncensored version that shows scenes from Till’s porn “Till The End”. The video for the song shows women in cages, in washing machines or being moved around in a trolley, presumably after being knocked out. The women wear lingerie and are clearly presented as sex objects. In some scenes they wear masks with Till’s or Peter’s face on them. These are further objectifications / dehumanisations. It could be argued that they are being held as sex slaves. The sex scenes also show violence, as in the porn “Till The End”.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*8xDwYnN0C8eaQhZb.png" /><figcaption>“Alles Oder Nichts” Hotel</figcaption></figure><p>In summary, it is quite clear what is meant by “Every woman, everything mine”. The scene takes place in a hotel called “Alles Oder Nichts”, which means “all or nothing”. This indicates that the protagonist is willing to risk everything to get everything.</p><p>And then there is the Till nugget. The video also shows a second scene where Till is lying on a bed in some kind of dark hospital, with no limbs, just his body and head.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*8H26C0WjXsSUaed1.png" /><figcaption>“Till Nugget”</figcaption></figure><p>It’s a completely different scene, an absolute contrast. The same person who was untouchable, who had girls in cages, who was number one, is now in the worst condition. He still whispers “number one”, so there is a clear connection. Sometimes he gets an injection, and Peter sits next to his bed playing a mini keyboard.</p><p>His eyes are white, and the unusual colour indicates possible brain damage.
As the video continues, it becomes increasingly disturbing, with a naked girl crawling backwards, blood everywhere, and then the video cuts to the final scene in the dark hospital, where the protagonist stops breathing and dies.</p><p>There are several possible explanations for how these things are connected. One interpretation could be that the protagonist is insane and dreams of being number one and doing all this crazy stuff. Another possibility is that one scene shows his inner world and the other his outer world. So emotionally, personally, he is totally crippled and almost dead, but on the outside, he is living a wild life with no limits. The third possible interpretation is that there is a transition from the successful superstar to the wreck of a human being.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/971/0*RLhi5W7sGdnxmi_f.png" /><figcaption>Sometimes the party in the hotel room escalates a bit more than initially planned.</figcaption></figure><p>In my opinion, the first interpretation can be largely ruled out, because when such a dream or hallucination plot is set up, the filmmakers indicate it by framing it with waking scenes: sometimes starting with the real setting, then moving into the dream, and then leaving it with a waking scene. This is not the case here; the video starts with the action and often jumps back and forth between the settings. However, it could be that the Nugget is remembering things from the past.</p><p>I think both remaining interpretations are acceptable, and what is clearly seen in the video is an evolution. That is, the madness grows, and everything becomes wilder and escalates from an initially less mad situation. This works very well with the third interpretation, which is the transition from superstar to crippled nugget. But in fact, this also works with the inside/outside world interpretation, because there could also be a development and a transition into more madness. So, in this case, the outer actions become crazier, more violent, and disturbing, while the inner emotions become duller and duller, to the point of total emotional death at the end.</p><p>Both interpretations fit well with the overall tone and aesthetic of the video. It shows a dark side of super-success, a mind unable to cope with being number one. With the realisation that anything is possible, and you will still be cheered, comes absolute brutalisation. Or as another saying goes, “absolute power corrupts absolutely”. This is probably the abstract message here: this corruption will not end well.</p><h3>Conclusion</h3><p>I have already said that we do not know if Lindemann is guilty of the accusations. In any case, it is crazy how many songs deal with exactly this topic. All these stories revolve around someone who abuses women, and it will not end well. On the newer albums that Lindemann has written the lyrics for, the theme is also present several times on a single album.</p><p>It is obvious that it is not possible to draw any conclusions from this. If it were possible, it would mean that any art about any subject implies the artist actually does what the lyrical speaker does, which is clearly nonsense. By the same token, it does not exonerate him. Which brings us to a very popular point, especially among Rammstein fans:</p><p>“Why would Till do such a thing, if he has so many groupies, if there is absolutely no need for it?”</p><p>And this point is also bullshit. Then you can also say: “Genghis Khan did not rape women, why should he?
He was the ruler of the Mongol Empire; so many women wanted to have children with him, there was no need to rape.” And the same goes for so many others. There is only one way to find out if he is guilty, and that is to listen to the guests of his private parties, check whether their accounts are full of contradictions, and have them analysed by professionals. That is why we have a legal system, and hopefully it will be able to find out the truth.</p><p>Another thing that happened after the organiser of the private parties was fired is a statement by Christoph Schneider, Rammstein’s drummer:</p><p>It can be found on Instagram: <a href="https://www.instagram.com/p/CtjOPD8sgZf/?hl=de">https://www.instagram.com/p/CtjOPD8sgZf/?hl=de</a></p><p>He says three important things. Firstly, he does not think that Till did anything illegal. Secondly, he is sad because Till has created his own bubble with his own party and his own people, which he does not share with the rest of the band. These parties are clearly different from Rammstein’s official after-show parties, where everyone is free to leave at any time and all drinks are opened in front of the guests so they can see that they are unaltered. The third statement is that he feels sorry for the women who did not get what they expected, even though it was legal.</p><p>Well, the statement that every drink is opened in front of the guests is hard to believe, because I have never seen a party where, before anyone gets a shot, everyone gathers around, the bottle of vodka is opened in front of all 30 guests, 30 shots are filled, and only then are the shots distributed. I call that bullshit. It is also noteworthy that he mentions structures that have grown up in Till’s bubble and do not represent the values of the rest of the band. Even if his statement is meant to show that Till is not guilty in a legal sense, it still sounds very strange. The statement sounds like “Yeah well, actually what happened there is not ok, I don’t think anything illegal happened, but just in case, I didn’t really have anything to do with it”.</p><p>Finally, can we now answer the question of whether Till Lindemann saw it coming? We don’t know if he saw it coming, but the lyrical speaker definitely saw it coming. After Christoph Schneider’s statement it sounds very much like there were crazy sex parties; whether they contained illegal elements is not yet clear, but the band distancing itself from these parties is a strong statement.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=315d2ddffbd1" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Why I did not pay taxes on Staking Rewards and why it makes sense.]]></title>
            <link>https://patrick-wieth.medium.com/why-i-did-not-pay-taxes-on-staking-rewards-and-why-it-makes-sense-b6cc3b0ec4e?source=rss-8e91a3236ca6------2</link>
            <guid isPermaLink="false">https://medium.com/p/b6cc3b0ec4e</guid>
            <category><![CDATA[cryptocurrency]]></category>
            <category><![CDATA[staking]]></category>
            <category><![CDATA[cosmos-network]]></category>
            <category><![CDATA[proof-of-stake]]></category>
            <category><![CDATA[passive-income]]></category>
            <dc:creator><![CDATA[Patrick Wieth]]></dc:creator>
            <pubDate>Wed, 01 Dec 2021 19:44:56 GMT</pubDate>
            <atom:updated>2021-12-01T19:44:56.716Z</atom:updated>
<content:encoded><![CDATA[<p>So just recently I got my tax receipt from the German government and it says I don’t have to pay taxes on my staking rewards from Cosmos in 2019. This is very surprising, since in Germany you have to pay taxes on staking rewards, usually up to 42%.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/700/1*QdEDgCUGAPT9fdqOYkPCGw.jpeg" /></figure><p>For me Cosmos was the first investment in a Proof-of-Stake coin with its ICO in 2017. Before that I was not convinced by the earlier candidates, like Peercoin and others, but I was mining Bitcoin in the early days and getting into Ethereum once it had proven it works with Frontier. I have to be honest, when the Ethereum ICO was announced, I just thought “yeah sure buddy, you gonna build this, at your age, in a few years, this is ridiculous” and just closed bitcointalk.org, lol. But once it was working, I could not believe it, amazing. So from all of this I was already aware that taxes have to be paid, and even if crypto makes it very easy not to pay taxes and to avoid them, here comes financial advice: Pay your taxes. In Germany school and university are free and I greatly profited from this, so it is only fair to pay taxes now. Just look at it as a big lottery. You get some nice stuff for free, some people will succeed and get a lot of money, and then you should pay your taxes. If you lose the lottery, then you don’t have to pay as much in taxes back as what was given to you, and that makes life at least a little bit fairer, since we cannot influence this lottery much. We can only try and be lucky.</p><p>OK, but wait, this article is about not paying taxes? Yes, I will get to this point. So when I started to stake my Cosmos Atoms, I already expected I’d have to pay taxes, since the earlier advocates of PoS had already made it through the German tax system, even though there are technical differences, like having to run a Masternode instead of only delegating. How these differences play out for the taxation is another complex matter, but we will focus on the important stuff here.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/904/1*LDhxnwCCE8GYl5Mcorlj4g.png" /></figure><p>Ok wait again, you are only talking about Germany, for me as a shitcoiner from somewhere else, this doesn’t matter? Well, I found out over the years that many countries are very similar and the same principles apply. There are 2 reasons for this. A) The principles are copied from other countries. So if a country has an effective way of taxing stocks and capital gains, other countries copy the concept. B) The principles often make a lot of sense and many alternatives lead to problematic outcomes. So if you are familiar with your tax system it is very likely you will recognize the concepts presented here.</p><p>Having this stuff in mind, sometimes I thought about how the taxing of staking rewards works. In essence you get some rewards for each block and these rewards have a value. This is your income. You have to pay taxes on this income. So if you get 1,000 Atoms and each is worth $10, you have to pay taxes on $10,000. Now let’s think about what these rewards consist of. There are the fees, which users pay if they do transactions, and there are inflationary rewards. In 2019 the inflationary rewards easily made up 99% of the rewards given. The main purpose of these inflationary rewards is to punish people who are not staking by inflating the amount of Atoms in the system.
So if you stake, your account grows with inflation, but if you don’t stake your account loses value because of inflation. So if Cosmos was designed differently, for example by penalizing non-staking accounts and just subtracting 5% of their coins per year (smoothed over all blocks), then you would not have to pay taxes. You would only have to pay taxes on the remaining fees, which make up 1% or less. But then, if you have some Atoms removed by the non-staker punishment, these are losses in the sense of taxes, so you can subtract them from your gains and pay less taxes (on the gains from fees, for example). This means that 2 different systems which achieve the same thing and don’t differ in how they economically work (they differ in how they account, though) would be taxed totally differently.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/482/1*SjoHMlLaF8kiRNknOnuTNg.png" /></figure><p>This led me to the thought that something doesn’t make sense here. And then I found out that share splits for stocks are not taxed. And if we think about it, it would not make any sense to tax this. A share split turns your 10 Berkshire Hathaway shares valued at $2500 into 100 valued at $250. The value of your account does not change. And therefore nothing should be taxed. With inflationary staking rewards you have a share split every block. You get some extra Atoms without any value added; the price should fall accordingly, it is just smoothed out over the year.</p><p>So I explained this to my tax lawyer/adviser and he said: “Yes, sure it makes no sense to pay taxes on something that just changes accounting. It should be possible to explain this to the authorities and they should follow”. So then I had to explain how in reality it is all a bit more complicated:<br>There are inflationary rewards and fees. The gains from the latter should be taxed. And inflationary rewards are not that easy, because not everyone gets them. This means the inflationary reward is only like a share split if everyone stakes. If some do not stake, which is always the case, then you get an economic advantage over them, because your account inflates by 10% while the whole network only inflates by 7%. This means that 7% should not be taxed but 3% should be. On top come fees, which should also be taxed. This makes it a bit more complicated. Luckily there is an easy way of calculating the actual gains:</p><p>virtual_gainz = sum of claim_rewards<br>real_gainz = advantageous_inflation + fees</p><p>The virtual gains are pretty easy to calculate, since this is what you can see in your transactions. Advantageous_inflation is the part of inflation that gives you proportionally more than what the network actually increases by. The real gains are unfortunately not so easy to read out somewhere, but let’s try a few more equations for more clarity:</p><p>sum of claim_rewards = fees + advantageous_inflation + even_inflation<br>=&gt; sum of claim_rewards - even_inflation = fees + advantageous_inflation</p><p>Ok, this is pretty nice, because on the right side of the last equation is the expression we already have for the real_gainz, which gives us a resulting equation:</p><p>real_gainz = sum of claim_rewards - even_inflation</p><p>Now we have arrived at a very helpful equation and this is a very special moment for me, because this is the first time my Physics PhD and years of handling complex equations like these have helped me in real life!<br>Why is this equation so helpful? Because the even_inflation is very easy to calculate.
If there are 240m Atoms at the beginning of the year and 256.8m Atoms at the end, then there was 7% global inflation. These numbers are publicly available; basically they can be derived from the market cap and the price, just by dividing, and they are available for every day and even for the hour and minute.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/420/1*UMjcf-TlJlX53bu-O4Swlg.png" /></figure><p>So what does the exact calculation look like?<br>You have 100 Atoms. In the fiscal year you collected rewards of 3 and 5 Atoms. The network has inflated by 7% in this year. This means you lose 7 Atoms to even_inflation (100*0.07 if you want to follow this calculation closely). This gives you real_gainz of 1 Atom (3+5-7, just go through it slowly). So if the price of an Atom at the time the rewards were collected was $10, then you have to pay taxes on these $10. Easy, we understand this. It gets a bit more complicated though if there are different valuations at the collect reward times (which is extremely likely). So with our example, let’s assume that in the time span over which the 3 Atoms were collected the network increased by 4%, and for the 5 Atoms it was 3%. Prices were $20 and $10 respectively. So for the first collect we have claim_rewards of $60 and a loss to inflation of $80; for the second collect we have claim_rewards of $50 and a loss to inflation of $30. Over the whole year we have made -$20 in the first interval and +$20 in the second. This means for the whole year we have made no real gainz and don’t have to pay taxes. This is the scenario why I didn’t have to pay taxes on staking rewards in 2019. At the beginning the price of Atom was a bit higher and I did not stake immediately, so I lost out on inflation; later in that year I made more from staking rewards than what I lost to global inflation, but it was mostly eaten up by the first months, mostly because the price was also higher in that time span.</p>
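<p>For those who prefer code to equations, here is a minimal sketch of the whole calculation above in Python. This is my own illustration, not tax software; the numbers and the claim_events structure are simply the example values from this article.</p><pre># Minimal sketch of the real_gainz calculation described above.
# Assumption: a constant stake of 100 Atoms over the year, as in the example.

stake = 100.0  # Atoms staked over the whole year

# (rewards claimed in Atoms, network inflation in that interval, Atom price in $)
claim_events = [
    (3.0, 0.04, 20.0),  # first interval: 3 Atoms claimed, 4% inflation, $20
    (5.0, 0.03, 10.0),  # second interval: 5 Atoms claimed, 3% inflation, $10
]

real_gainz_usd = 0.0
for rewards, inflation, price in claim_events:
    even_inflation = stake * inflation  # Atoms everyone gains just from inflation
    interval_gain = (rewards - even_inflation) * price
    real_gainz_usd += interval_gain
    print(f"claimed ${rewards * price:.0f}, even inflation ${even_inflation * price:.0f}, net ${interval_gain:+.0f}")

print(f"taxable real gains for the year: ${real_gainz_usd:.0f}")
# claimed $60, even inflation $80, net $-20
# claimed $50, even inflation $30, net $+20
# taxable real gains for the year: $0</pre>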
<p>I think at this point it is pretty clear how it works, and even if you stake all through a year, this benefits you, because there is always something that should be subtracted from your virtual_gainz. I also hope it was understandable why it makes sense, since taxing should not be based on accounting but rather on real value added. Especially since the former allows for all kinds of tax evasion shenanigans and we don’t want that for a functioning society.</p><p>Expected to be frequently asked questions:</p><p><strong>Does this apply to all Proof-of-Stake staking rewards?<br></strong>No. Take for example Terra, which is also based on the Cosmos technology, so one would expect it to be taxed in the very same way. Surprisingly, this is completely different, because Terra does not do inflation. On Terra the rewards come from fees of blockchain transactions and from users of the real-world payment applications built on top of Terra. You have to look closely at what the rewards consist of. Another important question is whether this is actually running a business, which changes a lot of the taxation. In Germany for example mining PoW coins is a business; you automatically do it as a business and not as a private person. The same applies to running a validator (or Masternode), but it does not apply to delegation, which usually belongs to your private asset management.</p><p><strong>Does this include losses from slashing as well?<br></strong>No, slashed Atoms are directly removed from your staked Atoms; just write down this number when you stake and then you can easily calculate your losses from slashing.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b6cc3b0ec4e" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Polkadot vs. Cosmos vs. Ethereum 2.0 — for real idiots]]></title>
            <link>https://medium.com/coinmonks/polkadot-vs-cosmos-vs-ethereum-2-0-for-real-idiots-3b6f0e0cfb2f?source=rss-8e91a3236ca6------2</link>
            <guid isPermaLink="false">https://medium.com/p/3b6f0e0cfb2f</guid>
            <category><![CDATA[cosmos-network]]></category>
            <category><![CDATA[ethereum]]></category>
            <category><![CDATA[blockchain]]></category>
            <category><![CDATA[cryptocurrency]]></category>
            <category><![CDATA[polkadot]]></category>
            <dc:creator><![CDATA[Patrick Wieth]]></dc:creator>
            <pubDate>Wed, 17 Feb 2021 07:47:46 GMT</pubDate>
            <atom:updated>2022-06-24T18:10:48.339Z</atom:updated>
<content:encoded><![CDATA[<h3>Polkadot vs. Cosmos vs. Ethereum 2.0 — for real idiots</h3><p>Hey, it is that time again. Cryptocurrencies are freaking out and articles pop up everywhere about what you should buy and what is great. So it is no surprise that I write an article. As usual, it is written by an idiot for idiots. The idea for this article is very old: exactly 3 years ago someone responded to my Cosmos article, asking what the difference is between Cosmos, Ark, Aion, ICON, and Wanchain. I was surprised that the question did not include Polkadot. So I responded that Polkadot should be in the comparison, since it is the only project on par with Cosmos. The others did not seem to be real competition. Dfinity was also on my mind back then as something that has its own valuable ideas and does not jump on the interoperability hype train without them. However, now 3 years later, it turns out most of the mentioned projects do not have much relevance left except for Cosmos and Polkadot, so let’s have a look. And Ethereum of course. What else can you expect from this article? More understanding of shards, interoperability, and state-of-the-art proof-of-stake. Nice!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/930/1*62eJCx-TXUTEiTRLhk3yfw.jpeg" /><figcaption>Here we will be enriching the text with high-information density images like this.</figcaption></figure><p>Why exactly these 3 projects? To me these seem to be the three main players when it comes to being a platform that has interoperability, scaling and the appropriate ecosystem. I have done a lot of research, re-read their whitepapers and all kinds of other documentation, and tried to get a decent insight. However, since this is so much stuff, this article will only scratch the surface and I will always be biased. I try not to be biased, but I’m a human and also work with one of the technologies in my blockchain based game project (www.crowdcontrol.network), so at the end of the article you can guess which technology I actually use. If you guessed it right, please flame me in the comments.</p><h3>Blockchain 3.0</h3><p>The development we are looking at is commonly called “Blockchain 3.0” and there are several definitions out there, but I think the Polkadot whitepaper has it quite nicely, which defines:</p><ol><li><strong>Scalability</strong></li><li><strong>Isolatability</strong></li><li><strong>Developability</strong></li><li><strong>Governance</strong></li><li><strong>Applicability</strong></li></ol><p>as the key areas where Blockchain 3.0 needs to revolutionize the industry. This article will be segmented into 3 sections. First we will try to understand what all of this means and what the state of the art is. Then we will look at general solutions to these challenges and their implications. Finally, we have a look at how our three big players do it. At the end, I will try to give some insights into how this applies to real-world dApps (decentralized applications).</p><h4><strong>1. Scalability</strong></h4><p>Well, I like this definition the most: Scalability means how much extra workload can be processed by providing extra workforce. For example, having additional excavators on a construction site means faster excavating (surprise!). However, at some point it might not help much to put an additional excavator into the scene, because there might not be enough space left to operate.
Scalability is limited: you cannot use 1000 excavators to get 10 times the throughput of dirt compared to 100 excavators.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/768/1*QBbDZyUBkGCJRwL_tek49w.jpeg" /><figcaption>Chinese construction site — Testing the scalability of excavators.</figcaption></figure><p>Looking at Bitcoin or any other Nakamoto Consensus based system (this means Proof-of-Work + Longest-Chain-Rule), we define the throughput of transactions as the workload and the number of nodes or miners as the workforce. Then the <strong>scalability is a surprising 0. Zero.</strong> For excavators, quadrupling the number might still give double the throughput or more, but for Bitcoin, quadrupling the number of nodes/miners does not give double the throughput but rather no additional throughput at all. I explain this in my <a href="https://medium.com/coinmonks/proof-of-work-vs-proof-of-stake-for-real-idiots-a23ac4565649">PoW vs. PoS article</a>, but here we only need to know that all of the additional nodes or miners contribute entirely to the security of the system. This means there is not very much that can be done to increase the ~7 tx/s (transactions per second) Bitcoin can process. Well, actually it is a bit more, because of some awesome improvements like SegWit, but nothing that fundamentally changes the situation. In essence this means that Bitcoin is not suited for being a payment network. WHAT? WHEN I GO TO BITCOIN.ORG IT SAYS IT IS AN INNOVATIVE PAYMENT NETWORK? WHAT ARE YOU TALKING ABOUT? Hold up. The people running this website might know better, but marketing is not about saying 100% correct things, but rather saying 100% catchy things. But Bitcoin has the highest market cap, what are you talking about? The reason for this is not that paying with bitcoin is so nice. How often do you think, when buying a ticket for a train or bus, “I would like to pay for this with bitcoin now”, and in case you do, how often do you consider the transaction fee of Bitcoin and come to the conclusion that you really want to do it? I hope never. Because it does not make sense to buy a $2 ticket with a $10 tx fee.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*JvAb336S_iKUVDbYJZud_Q.jpeg" /><figcaption>A “funny” meme summing up what we just learned.</figcaption></figure><p>Keeping all that in mind, Bitcoin is still very good at one thing and that is being an innovative store-of-value network. For store-of-value you don’t need to be very good at doing many transactions, you need to be very good at being secure. Visa/Mastercard is usually mentioned here with its ~2000 tx/s as a reference that should be reachable with blockchain. Ok, fine. Now we understand why scalability might be a thing. It becomes even more daunting once we think of Ethereum, where not only currency is transacted but also computation is done, which leads to much more workload.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/600/1*Bte44cG8zqxPefJlMzI9hA.gif" /><figcaption>Monks training for day trading with Uniswap.</figcaption></figure><h4><strong>2. Isolatability</strong></h4><p>Maybe I would rather call it <strong>compatibility</strong>. It basically means how many needs of different applications and/or users are satisfied. Well, Bitcoin only allows for transactions, but even there, there are different needs. Besides the mentioned $2 tickets, there are also things like escrow and multi-party payments (Bitcoin provides the functionality, yay).
But there is much, much more than just payments. Since Ethereum is Turing-complete, you can code anything you want with it. However, that is only true as long as you can pay the fees for computation. If you want to build Minecraft on Ethereum, it might not work, since the computational demand is just too much. The limit here is again throughput, but there are also other things, like randomness, zero-knowledge and interoperability, where a platform might not provide the necessary functionality. So this point comes down to the wish that a platform should be compatible and perfectly suited for everything.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/680/1*7O3L_LxTpENXMYZVH-_aMQ.jpeg" /><figcaption>If this concept is not yet clear, that is ok. We will have more examples.</figcaption></figure><h4><strong>3. Developability</strong></h4><p>There is not only User Experience (UX) but also Developer Experience (DX). This point addresses that, and it makes a big difference which platform you develop on. Ethereum is the network that started it, and there is the great upside that you do not need to host any infrastructure. You write the smart contract and deploy it. That’s it. The blockchain runs your stuff. This is very great. No server management, no Kubernetes clusters. On the other hand, smart contracts might not exactly do what you intended. They might not be safe. At one point in time really big shit happened in Ethereum. Well, actually several times. But the first thing that comes to mind is the <strong>DAO fork</strong>, where the network needed to fork to repair a hack that would otherwise have stolen many millions of Ether. The network did split into Ethereum and Ethereum Classic (where the hack remained successful). Such things can happen. Another example is the Parity wallet, which was hacked and which we will mention again later. So this point mainly comes down to:<br>a) How good is the ecosystem? Are there nice projects you can use? Is the code layered so that parts can easily be exchanged, or is it spaghetti-hell?<br>b) How good is the infrastructure? Can you use it?<br>c) How strong is the foundation of the platform? Does it support what you want to do, and if not, does it allow you to fix it and plug the fix in?<br>These three points can differ greatly. This comes from the fact that some things exclude each other. Namely, if you have a strong infrastructure that you can use (Ethereum smart contracts), you cannot change anything about the foundation. Even if you have coded a nice solution for rolling random numbers or doing zero-knowledge transactions, it might be years until this is included in Ethereum and becomes useful for you. This also applies to point 2, Isolatability.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/400/1*2rCsWFOn5P0nwImOA67EFg.jpeg" /><figcaption>This meme works even better for developer experience…</figcaption></figure><h4><strong>4. Governance</strong></h4><p>How governable is a blockchain or platform? Being a highly underestimated point, it is no surprise that old projects lack it and every important project now understands why this is very important. In distributed applications there is no single entity that is in charge and has to face the consequences alone. Everyone suffers if something goes in the wrong direction, but nobody can single-handedly change these things. Tezos for example even decided that this is the main point of their product; everything else can be upgraded into Tezos.
So if we look at Bitcoin, the trouble might not be visible at first glance. But actually there are three parties: the miners, the developers and the users. And there is no way to force an agreement between these parties. Hopefully their goals align, but that is not necessarily the case. For example, miners might be happy with very high fees, but users want the exact opposite. When it comes to upgrades (BIPs), users have no real say in deciding what or when to upgrade.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/1*aaUX4bmEsTJHJCFdQSXKgg.jpeg" /><figcaption>When it comes to protocol upgrades reducing the fees in a blockchain, miners love democracy.</figcaption></figure><p>Users can basically only threaten to leave the ship and use something else. Developers must also hope that miners like their work, but basically should do stuff that improves the user experience. There might be several reasons why there are so many Bitcoin forks out there, but I claim this is the main reason. Ethereum has a bit less trouble; the main reason might be that an entity like the Ethereum Foundation exists, which navigates the ship quite a lot. Hardcore fans of decentralization might not like this, but it might be a rational voice advocating necessary upgrades. Just have a look at the current process around <strong>EIP-1559</strong>; this is the same old story, in Bitcoin we had this quite often in the past. The miners do not want to support upgrades which lower the fees and make the system better for the users. However, the Ethereum user and developer base has a better position when facing opposition from miners.<br>In the end, governance comes down to processes to determine the interests of the members (mostly voting), deciding on software upgrades/changes, orchestration of these upgrades and eventually electing entities that represent or cooperate with the network.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/1*LKwFepEZBe5wHFLF7NGPjw.png" /><figcaption>Remember how we wanted to give more insight into the concept of expectation vs. reality? Here we go.</figcaption></figure><h4><strong>5. Applicability</strong></h4><p>This point mostly says “Does the platform have a killer feature?”. The Polkadot whitepaper describes it with “does the technology actually address a burning need of its own”. This is quite funny, because later the whitepaper argues that Polkadot should be minimal and simple, with no unnecessary functionality; even smart contracts should not run on the relay chain, but on Parachains. I think this is the right approach. A platform does not need a killer feature of its own. If the platform is well built, killer apps will come. Ethereum does not have a killer feature of its own, but <strong>CryptoKitties, Uniswap</strong>, several DEXes and lots ‘n lots of ICOs showed up organically. So let’s just cut this point. Being a good platform is the killer feature, and this comes down to points 1–4.</p><h3>Main Part — Understanding the basics</h3><p>We managed our way through this first passage and now come to how these things are generally addressed. But what does generally mean? The thing is that no project just does everything on its own. All of them stand on the shoulders of giants and are influenced by each other. Ethereum was such a big success that there is no project that does not look at it.</p><h4><strong>1. Scalability</strong></h4><p>There are 2 types of scaling: vertical and horizontal scaling.
This does not apply only to blockchain; it also applies to software development in general, and really to everything, be it businesses or whatever processes you imagine. Let’s take out our excavators again. If we scale horizontally, then we put more and more excavators on the field. If we scale vertically, we buy faster, bigger, better excavators. Their arms move faster and have bigger shovels, nice. For such excavators it’s quite obvious that horizontal scaling implies more friction; they are more likely to interfere with each other, so that the scaling is no longer linear. Vertical scaling does not have this problem in the same way, since it is just the same unit being upgraded, so no additional communication costs and no space limitation (as long as the excavator fits on the construction site :D). The problem with vertical scaling lies in the circumstance that there are technological limits. At some point it becomes complicated to find a bigger hydraulic system that is still able to move the shovel. Once you have put the biggest hydraulic system on the market into your excavator, you cannot scale it vertically any further. You can buy more hydraulic systems and operate them in parallel; this is horizontal scaling, and the excavator might still be a single unit being scaled vertically, but the hydraulic system will be subject to diminishing returns, or say sub-linear scaling (typical for horizontal scaling). So internally the excavator is scaled up horizontally. The same applies for every component, and the more you add to the excavator, the more expensive and heavy-weight it becomes and so on.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/500/1*fg1Wc6i5sOLM3rGqH3QbLw.jpeg" /><figcaption>My feeling when researching for this piece of text here.</figcaption></figure><p>Software has a special property, which basically means that copying it is very cheap. So one would say that horizontal scaling comes for free? Well, there is still communication necessary. You can fork Bitcoin 100 times and then process 100 times the transactions, but it is no longer the same network. You might have a BTC on network #15 and someone else has a wallet on network #45; how do you interact with each other? You cannot. This is the big problem of horizontal scaling. In contrast, for vertical scaling we have the technological limits. Proof-of-Work and Longest-Chain-Rule can be scaled up by increasing block size and reducing block time. Seems to be very easy, but it is actually not an option. Fair competition and real randomness of blocks being found are only guaranteed if there is enough time to run the hunt for the next hash. The more you reduce the block time, the more the network centralizes towards those actors who can communicate fastest with the others; the same applies to block size and bandwidth. Sad.</p><p>But what is the general approach to this? Do the cool blockchain kids of today try to scale vertically or horizontally? Turns out, both. The default solution for vertical scaling is Proof-of-Stake (PoS). The default solution for horizontal scaling is having multiple blockchains interoperating with each other. I will not explain all the fine details of PoS here, since I have done that in another article that is linked further above. We also don’t need to understand this stuff here; what we need to know is: PoS does not run a probabilistic puzzle, therefore the next block can be produced much faster.
There is no need to have a long block time; the limit is basically how fast the members of the network can process a block and how fast they can communicate. Processing time of a block is just a matter of CPUs or GPUs, something that can be scaled quite easily. Network delay is ultimately limited by the speed of light: a photon needs roughly 133 ms to travel around the earth. Signals in copper are 3 times slower, photons in fiber optics only lose 33% of their speed, so 200 ms is possible. However there are also routers in between and fibres might not be perfectly laid out and we need to have packets go back and forth, so let’s say 1s is reasonable.</p><p>It might be really hard to have a distributed application where all of its participants are informed of an update and agree to it in less than 1 second. It also depends on bandwidth compared to the block size, but for example Bitcoin blocks are 1 MB and this is not a bandwidth issue here. So if we assume Bitcoin block size and calculate the increase of throughput by running something that really has this 1s time, then we arrive at a 600-times speedup. I will try to lay out how this is calculated and hopefully make it easy enough to understand. Bitcoin block time is 10 minutes and a minute has 60 seconds, so if we multiply that, we end up with 600 times the number of blocks in the same timespan. Nice speedup. Another complex calculation of Bitcoin’s 7 tx/s multiplied by 600 gives us 4200 tx/s, so in essence, Visa level is achievable by having a 1MB block every second. In reality though the timing is more between 4s and 7s, but we can also increase the block size as well, so vertical scaling looks good. Proof-of-Stake has another nice advantage, which is that it doesn’t increase global warming more than necessary. So we can get rich with it, but without destroying our planet. A nice addition to the list.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Zadh60RyqF8rCuWzk6oIgA.jpeg" /></figure><p>But here is one problem to mention and it has to do with this communication thing. In Nakamoto Consensus a new block can be broadcast and you don’t need to wait for anything. The nodes communicate over a gossip network, so everyone tells their neighbors their new data (blocks). Why is there no need to wait for anyone? Because anybody can verify blocks without asking anyone. This is a very great feature, and with PoS this is not possible. There is no cryptographic puzzle, so you cannot just verify a hash. You need to know if all the others have agreed. This means if there are 100 block producers, each of them needs to communicate with 99 others. If there are 10,000 block producers, each of them needs to coordinate with 9,999. This basically means for N participants each one needs to communicate with N-1. So the communication scaling is N*(N-1); for large N the 1 does not matter, so we end up with N². We call this the <strong>N²-Problem</strong> and it means we can’t scale the block producers, or better say validators, indefinitely with PoS.</p>
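<p>As a sanity check, here is the same back-of-the-envelope arithmetic in a few lines of Python. It is purely illustrative; the constants are just the rough values used above.</p><pre># Vertical scaling: the same 1 MB block, but once per second instead of
# every 10 minutes.
BITCOIN_BLOCK_TIME_S = 600  # 10 minutes
BITCOIN_TPS = 7             # rough transactions per second

speedup = BITCOIN_BLOCK_TIME_S / 1   # 600x more blocks in the same timespan
print(speedup * BITCOIN_TPS)         # 4200.0 tx/s, roughly Visa level

# The N²-Problem: how many messages are needed so that N validators
# all hear from each other.
for n in (100, 10_000):
    print(n, n * (n - 1))            # 100 -> 9900, 10000 -> 99990000</pre>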
<p>So is there a solution for the N²-Problem? Sure. A very common one is the validator/delegator split. The blocks are produced by validators, who run nodes, propose blocks one after another and vote on whether these blocks are valid. Their voting power is determined by their own stake plus the delegated stake from the delegators. The delegators do not run a server; they just bond their stake to a validator, who runs a server. The N²-Problem is solved, because network communication is only necessary between the validators, so their number can be small, for example 100. But what is the reason why we want as many nodes as possible? There are two: one is the prevention of cartel formation/collusion and the other is partition tolerance. So are we performing worse now? Well, the cartel formation or collusion part is actually still prevented by delegators. Since these can scale indefinitely, to form a cartel one needs to bribe the delegators’ stake as well. If some cartel is forming, the delegators should withdraw their stake from this cartel and redistribute it to other validators. For partition tolerance the delegators do not help, and partition tolerance is also lowered by another property of most PoS implementations. Based on <a href="http://pmg.csail.mit.edu/papers/osdi99.pdf">practical Byzantine Fault Tolerance</a>, the liveness of such networks is different from Nakamoto Consensus. When too many nodes go down, the network halts and cannot produce a new block. In contrast, Nakamoto consensus does not really care. When half of the miners go down, the next block will take longer (20 minutes if half of the mining power is affected), but this will relax back to the targeted 10 minutes even if the network remains split. So here is a key take-home message: <strong>PoS lowers partition tolerance</strong>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/980/1*DsaEb3mDZJMjYjqZvr6EpQ.jpeg" /></figure><p>It is sad to have less partition tolerance, but on the one hand events where half of the block producers go offline are so severe that it might be good to stop and wait for a moment anyway, and on the other hand we get a nice feature for this tradeoff, and that is finality. It means that blocks become final at some point. In Nakamoto Consensus it is in theory possible that any block might be reverted later by a concurrent fork of the blockchain. That is not the case if some variant of pBFT is implemented.</p><p>Nice, so we are happy? Well, PoS has some other problems, too. Namely the <strong>Long-Range-Attack</strong> and <strong>Nothing-at-Stake</strong>, which do not exist in PoW, or not to that extent. These problems need to be solved in order to deploy a functional cryptocurrency. Let’s have a short look into these problems. If we implement PoS naively, which means like Nakamoto Consensus but swapping PoW with PoS, then it is possible for an attacker to create an alternative reality. At some point in the past, an attacker could fork off and create their own reality. For example, when the attacker has 1% of the stake, every 100th block is produced by the attacker. In these blocks the attacker maliciously increases their stake until, after this has happened often enough, the attacker has enough stake to take over the network. Once taken over, the attacker can produce many blocks very fast. Of course the attacker cannot fool other validators, who don’t agree to this reality. But imagine someone joining the network and wanting to download the blockchain history. This newcomer cannot decide if the reality presented by our attacker is real or not. Even more so, if he also downloads a competing reality from another validator, then using the Longest-Chain-Rule would make the faked chain appear more trustworthy. Since this attack works better the earlier it fakes blocks, it can basically start forging a different reality from the genesis block. That is why this is called a Long-Range-Attack.
Why doesn’t this work in PoW? Because you need to have the mining power to forge many blocks with high difficulty. You cannot just make up a very long PoW blockchain with very high difficulty without having a lot of mining power. So any outsider can easily see your alternative fork is not the longest (heaviest) chain. Contrary to what I just said, the name Long-Range-Attack originally comes from PoW, but what it means for PoW is a bit different (if you want to know, you can read it on the <a href="https://blog.ethereum.org/2014/05/15/long-range-attacks-the-serious-problem-with-adaptive-proof-of-work/">ethereum blog</a>).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/386/1*Yp3HrC-pRzW0IRcUxBoKTg.jpeg" /></figure><p>Ok, so how can we solve this? Well, actually we can’t use the Longest-Chain-Rule anyway, so we need to change some parts regardless. The solution can be checkpointing. This basically means we decide that there is no way to revert some blocks. There cannot be an alternate fork overtaking these blocks anymore. Such a block is final. We have heard of this feature already: it is called finality. We also see that for Bitcoin it is hard to have such a checkpoint, since there is no one to decide when to do it. So what does it imply? Well, at these final blocks there is a fixed state, which we can write down, and now it is sufficient for any outsider to just download this fixed state. This also means an outsider does not need to download the whole blockchain history anymore, since all effects of the history are condensed in the state at this point. This is also called pruning. For example in Ethereum, without pruning the blockchain would be 4 TB big, and with it, it stays quite small, only a few hundred GB. This still sounds like very much, but it is a big difference when it comes to what a server can easily handle.</p><p>Here we also see another reason why it is hard for Bitcoin to do that. There is no state in Bitcoin. In Bitcoin, blocks collect a number of transactions which result in UTXOs on Bitcoin addresses. To know the balance of an address one needs to sum up all blocks. Ok, so how do these PoS/BFT consensus systems allow checkpoints? Basically, they are defined in the protocol. So it would have been possible for Bitcoin to define such checkpoints as well, but it would bring a lot of implications that Satoshi Nakamoto most likely wanted to avoid. In PoS/BFT this comes for free, since these implications are already bought. The most important one is waiting for all participants until they have responded, or at least until the majority has responded. You cannot just go and say every 500th block is final or so in Bitcoin, because that would give the lucky one who finds such a block too much power. At such a point you need to make sure that everyone agrees to this checkpoint, since the good reasons to allow for competing forks still hold. In PoS/BFT we don’t need to insert such a thing into the consensus mechanism, since every block is produced by a vote. When 2/3 vote a block valid, it becomes a block. In pBFT it even becomes a final block. So using this approach to make PoS a reality, the solution to the Long-Range-Attack comes for free. Nice.</p>
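<p>As a toy illustration of that voting rule (my own sketch, not the actual implementation of Tendermint, Polkadot or Ethereum 2.0), finality is just a check against the 2/3 threshold of total voting power:</p><pre># A block is final once validators holding more than 2/3 of the total
# voting power have voted for it.

def is_final(votes_for_block: set, voting_power: dict) -> bool:
    total = sum(voting_power.values())
    power_for = sum(voting_power[v] for v in votes_for_block)
    return power_for * 3 > total * 2  # strictly more than 2/3

voting_power = {"val_a": 40, "val_b": 35, "val_c": 25}
print(is_final({"val_a", "val_b"}, voting_power))  # True: 75 of 100
print(is_final({"val_a", "val_c"}, voting_power))  # False: 65 of 100</pre>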
In PoW, if there are concurrent forks, you have to decide which one to follow, since you cannot mine all of them. If there is just one transaction of difference between the forks, the hash will be different and you can only mine on one of them. This makes fork choice necessary. In PoS your coins get duplicated on each fork, so you can “mine”, or better say validate, on every single fork. This is not just possible but even incentivized: it makes sense for you to follow every fork and receive the block rewards. Not a very good prospect if there might be 100 forks soon and nobody wants to solve that forkin’ mess. Even though such an attack never happened, the fear of it happening might be one of the main reasons why early PoS coins failed. There are some other factors, maybe the high pre-mine that was hated back in those days for not giving “proper decentralization”, or the failure to see and implement the advantages of PoS. Discussing that is an interesting alternative topic, but let’s come back to what we are looking at here. We have an attack vector and it should be solved. How do we disincentivize validators from following each possible fork? Well, by punishing them for doing so. In order to get the desired behavior, the protocol defines a punishment. Forks can still be done, but you have to decide. All members of your fork slash the coins (the punishment) of the members of the other fork and vice versa. If someone follows both, then slashing happens on both. Other misbehavior, namely double signing and being offline for too long, is also punished by slashing coins. With the combination of<strong> BFT + PoS + Slashing</strong> we have a system that makes misbehavior costly. The very same applies to PoW + Longest-Chain-Rule (Nakamoto Consensus). If you follow the wrong fork there, you lose your invested work, which means paid electricity, which means money. In case you misbehave and attack the blockchain properly, you might also destroy the value of the coin you are invested in.</p><p>In PoS this connection is even tighter, since you must own the coin and not just the hardware, which might lose a lot of value when the coin becomes worthless (for ASICs more than for GPUs). Ok great, so are both the same? Not the same; both approaches achieve the same goal, but there is a very important difference. In PoS you need to lock up coins and get punished in case shit happens; in PoW you need to burn electricity and only get something for it if you don’t misbehave. Both achieve security for the decentralized network, but one does not destroy the planet by wasting a lot of energy, which is an advantage. Just imagine how nice that would be.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*opQVyZJGt4JGwRdsHi6K-w.jpeg" /><figcaption>With PoS this can become a reality without the part where the planet is destroyed.</figcaption></figure><p>Now that we understand why PoS makes a hell of a lot of sense, we also want to understand why it is much faster. In Nakamoto Consensus, the actual processing of transactions is not very costly. The costly part is solving the cryptographic puzzle. To make it fair, the time to solve the puzzle must be many orders of magnitude longer than the propagation time of a block. If there is not enough time, then either the finder of the last block is highly advantaged (extreme case) or the most central nodes in the network have an advantage (more realistic case).</p>
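<p>Before we move on, the slashing rule described above is easy to sketch. A toy version, with an invented 5% penalty and none of the bookkeeping a real chain would need:</p><pre>
# Toy slashing rule against Nothing-at-Stake: a validator that signs two
# different blocks at the same height loses part of its bonded stake.
# The percentage and the structure are illustrative, not any real chain's.

SLASH_FRACTION = 0.05    # assumed penalty: 5% of the bond

stakes = {"val1": 1000.0, "val2": 1000.0}
seen_votes = {}          # (validator, height) -> block hash first voted for

def handle_vote(validator: str, height: int, block_hash: str) -> None:
    key = (validator, height)
    if key in seen_votes and seen_votes[key] != block_hash:
        # Equivocation: votes for two competing forks at the same height.
        stakes[validator] -= stakes[validator] * SLASH_FRACTION
        print(f"slashed {validator}, stake now {stakes[validator]}")
    else:
        seen_votes[key] = block_hash

handle_vote("val1", 42, "0xaaa")
handle_vote("val1", 42, "0xbbb")   # follows both forks -> gets slashed
</pre>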
<p>Well, the claim that puzzle time must dwarf propagation time is not 100% true, since for example in Ethereum there are <strong>Uncles</strong>. Uncles are blocks that were found and are valid, but do not belong to the main line of the blockchain. So just like that strange uncle at a family gathering, they are not really wanted in the first place, but belong to the family anyway. For Ethereum this means the Uncle gets included if it is valid, and for the finder of a block this means you can get it into the blockchain even if someone else was just one second faster than you. This allows for a drastic reduction in block time (15s for Ethereum). But still, Ethereum is not significantly faster than Bitcoin. So let’s go back to PoS. There we don’t need such a long timeframe for the puzzle, since there is no puzzle; we only need the time for everyone to learn from everyone else that the block is valid. Then we can go on producing the next block. There are different ways of selecting the next block producer, but it does not really make a difference. The important part is that on a planet like Earth, and with the speed of light, the whole process can be done in a second or two. The size of the block is mostly limited by bandwidth, so this might increase in the future. For Bitcoin this also holds true, but there are stronger forces against bigger blocks, and the reason is mostly to keep decentralization at a maximum.</p><p>Now that we have a basic understanding of the main approach to vertical scaling, PoS, let’s look at <strong>horizontal scaling</strong>. There are two candidates and they are quite similar. One is interoperability and the other is sharding. Interoperability also belongs in the compatibility category, since it connects different blockchains together. So if interoperability is solved, we can make incompatible things compatible. But if we can do that, we can also create 100 forks of a given blockchain, let each fork talk to the others and connect the stuff. 100 forks process 100 times the number of blocks of a single fork, so why this scales is obvious. The big problem here is: <strong>How to have trustless transactions between two blockchains? </strong>This is a good question and I will try to answer it by giving a simple example of how one can connect two blockchains without inventing any new technology. By the way, I find it very strange that in this blockchain world you find 1000 articles explaining how awesome and important interoperability is, but it is almost never explained how it actually works. Even if someone goes on a specific subreddit and asks “please explain it to me”, users keep responding with “Yeah, the IBC protocol makes it possible to have 2 blockchains securely talk to each other”. Ok fine, but how?</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/760/1*QO9vpuXZgT7kcParD4r0pw.jpeg" /><figcaption>Somehow all different interoperability memes are about how people talk too much about it, without understanding it.</figcaption></figure><p>Let’s assume we have Bob and Alice, where Bob has a Bitcoin and wants to trade it with Alice, who has an Ether. Both understand that exchanges like Mt. Gox do exist, or say, exist only for a limited amount of time and might go pop tomorrow, and they don’t want to put their valuable coins on an untrustworthy exchange. Not saying the exchange is a scam, but what can it do? In the end it is a central entity that can always fall to individual mistakes, bad architecture, or just someone randomly dying in an airplane while being the only one with the private keys of the exchange’s wallet.
So both of them really want to do their trade in a secure, decentralized manner. Now they have heard about this great concept called a <strong>DEX </strong>(decentralized exchange). But unfortunately this only works if they exchange shitcoin A for shitcoin B and both come from smart contracts on the same (Ethereum) blockchain. In the case of two different blockchains, this is not possible.</p><p>So they have another idea. The two have a lot of friends who are willing to help them; some of them know each other, but most of them don’t. Let’s take 5 of them that don’t know each other. Now these 5 create a multi-sig wallet on Bitcoin and do the same on Ethereum. What is a multi-sig wallet? It is a wallet where all of the 5 have to sign a transaction to send it into the blockchain. You can also have a multi-sig wallet where only 3 of 5 need to sign, but for this example it does not really matter. So these 5 create two multi-sig wallets. Now Bob sends his Bitcoin to the multi-sig wallet on the Bitcoin blockchain and Alice sends her Ether to the multi-sig wallet on the Ethereum blockchain. If both do it, we go to the next step; if not, the 5 create a transaction that sends the Bitcoin back to Bob or the Ether back to Alice. Note that no one can single-handedly run away with either the Bitcoin or the Ether here.</p><p>Now that both have transferred their coins, one member of the 5 creates a transaction which sends the Bitcoin from the multi-sig wallet to Alice’s Bitcoin address, and another transaction that sends the Ether to Bob’s Ethereum address. The other 4 members of the 5 see that both transactions were created just as planned and sign both of them. After that, both coins are sent to the other party and they have successfully made a decentralized trade. Again, nobody could have walked away with the coins single-handedly. That would only have been possible if all of the 5 colluded. Still, there might be reasons for them to do so, for example a large transaction where stealing the coins and dividing them by 5 is more valuable than the reputation they lose in the process. That is why we maybe want 100 instead of 5. So basically we have solved interoperability now? Well, it might be a bit troublesome to find 5 or even 100 multi-sig wallet holders for every trade two blockchain users want to make. One approach could be to just set up a team of 100 that keeps doing this thing for everyone. Unfortunately, this increases the security demands on them even more.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/594/1*pXr-cd3RxjTJlNcsuFvL7g.jpeg" /></figure><p>Is there some way to ensure that they act properly? Well, punishment is the key again. How about letting these 5 or 100 lock up some coins, and if they misbehave, these coins get slashed? This makes it much more dangerous to try and start a collusion with others, since these others might report such behavior. However, it might be hard to find 100 people each willing to lock up a big amount of coins. But wait, really? We actually have some candidates that already did so, and these are the validators of our PoS blockchain. Now it gets interesting. These validators are running servers anyway and, even more so, already handle transactions. So great, we just use the validators and the problem is solved? Unfortunately, problems arise again.</p>
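<p>Before we get to those problems, here is the whole 5-friend escrow flow as a toy simulation. Everything runs in one process and all names are invented; on real chains each step would be a signed transaction:</p><pre>
# Sketch of the 5-of-5 escrow swap described above, simulated in Python.

class MultisigWallet:
    def __init__(self, signers):
        self.signers = set(signers)
        self.balance = 0.0
        self.pending = {}        # tx id -> (recipient, amount, signatures)

    def propose(self, tx_id, recipient, amount):
        self.pending[tx_id] = (recipient, amount, set())

    def sign(self, tx_id, signer, balances):
        recipient, amount, sigs = self.pending[tx_id]
        sigs.add(signer)
        if sigs == self.signers:            # all 5 signed: execute the payout
            self.balance -= amount
            balances[recipient] = balances.get(recipient, 0) + amount

# Two escrows, one per chain, both controlled by the same 5 friends.
friends = ["f1", "f2", "f3", "f4", "f5"]
btc_escrow, eth_escrow = MultisigWallet(friends), MultisigWallet(friends)

btc_balances = {"bob": 1.0, "alice": 0.0}
eth_balances = {"alice": 10.0, "bob": 0.0}

# Step 1: Bob and Alice fund the escrows.
btc_balances["bob"] -= 1.0;   btc_escrow.balance += 1.0
eth_balances["alice"] -= 10.0; eth_escrow.balance += 10.0

# Step 2: one member proposes the payouts, the rest verify and sign.
btc_escrow.propose("swap", "alice", 1.0)
eth_escrow.propose("swap", "bob", 10.0)
for f in friends:
    btc_escrow.sign("swap", f, btc_balances)
    eth_escrow.sign("swap", f, eth_balances)

print(btc_balances, eth_balances)  # Alice holds the BTC, Bob holds the ETH
</pre>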
<p>One problem might be forks. Again. So even if these 5 or 100 see that both parties have sent their coins, it might still happen that in 20 minutes another fork of the blockchain takes over, and in that fork Alice maybe did not provide her part of the deal. But if the other part of the deal was already sent, there is no way to stop it. So one actually needs to wait until no more competing forks are possible. In theory, this moment never comes for blockchains like Bitcoin. What we need here is finality. Aha. So PoS provides a nice feature for interoperability as well.</p><p>The faster finality comes, the faster a cross-blockchain transaction can be finally settled. But does that mean it is not possible at all to include Bitcoin in our whole interoperable world? Well, actually it is possible to create something called a <strong>Peg-Zone</strong> or <strong>Bridge</strong>, which takes Bitcoin and creates pegged or wrapped Bitcoin for it. The pegged Bitcoin is now rooted on a PoS blockchain and has finality, and the real Bitcoin is waiting in a wallet controlled by the Peg-Zone. When someone wants to have the real thing back, the minted/pegged Bitcoin is burned again and the real Bitcoin is sent to a Bitcoin address. In this case the Peg-Zone can never be 100% sure that there will not be a competing fork taking over, but this is basically the same problem exchanges have today when they accept coins of blockchains without finality. So in a very strict theoretical sense the state of the chain is never final, but after many blocks, for example 10, it becomes extremely costly to attempt a successful attack. It depends on many parameters, but the basic point is that you need to be faster than the miners of the blockchain to forge an alternate reality. So you would need to invest in more miners than already exist, and then attacking might crash the price of that coin, rendering your attack unprofitable. If you have fewer miners, for example only 10% of the total mining power, then you can still try your statistical luck, since you can still be successful in finding a competing block where you steal the coins from the Peg-Zone, but doing this for 2 blocks in a row becomes less likely. For example, if you have a 10% chance to find the next block, then you only have a 1% chance to also find the block after that, and if you need to find 10 blocks to pull off the attack, you have a 0.00000001% chance of succeeding. In order to even try, you still need to run these miners, which might cost some billions of dollars. It is unlikely that you can easily steal that much money, so in essence and for practical reasons, it is not possible to attack this Peg-Zone without losing money.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/700/1*70GF7RXlcQy5dgSvhdSE0w.jpeg" /><figcaption>Those of you who can’t rela… wait we had this caption already.</figcaption></figure><p>Unfortunately, this implies we have to wait for several blocks. If, in contrast, both sides have PoS with instant <strong>finality</strong>, we can do the whole thing within one block. Ok nice, is there something else? Yes. If we have these PoS chains set up, it might even be possible to have shared security, which means we have shared validators on both blockchains. These are very interesting for our example, because they can be punished on both sides if they misbehave, which makes bribing more complicated and thus increases security. Ok nice, so this is everything I need to know? No. There is even more.</p>
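<p>A quick numbers check on the Peg-Zone attack odds quoted above, back-of-the-envelope only:</p><pre>
# With 10% of the mining power, the chance to find k consecutive blocks
# before anyone else is roughly 0.1**k. Same numbers as in the text.

p = 0.10                      # attacker's share of total mining power
for k in (1, 2, 10):
    print(k, f"{p**k:.10f}", f"{p**k * 100:.8f}%")
# k=10 -> 0.0000000001, i.e. the 0.00000001% quoted above
</pre>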
<p>Basically, we don’t need to send coins. We can also just send transactions with information across different blockchains. This means smart contracts can talk to each other, which is also a nice feature. Ok, but now we have everything? Well, let’s say we understand the basics. There are many details, for example why it makes sense to run a light client of each blockchain, allowing the members of the other blockchain to easily verify things. But let’s leave it at that for now.</p><p>And let’s go to <strong>sharding</strong>, which is a bit more accessible now that we have read so much and understand a lot more about how all of this stuff interacts. Sharding scales for the same reason interoperability scales, but instead of having multiple independent blockchains in parallel and making them communicate, we want to find a way to split up a single blockchain into multiple parts, or say, shards. Sounds like it would end up with the same result, but it doesn’t. The source of the differences is that the first approach copies the blockchain and gives each copy its own infrastructure, while the second approach has a single infrastructure trying to split up its workers over many parts. This is why for interoperable blockchains the big question is how to make these disjoint parts interact safely with each other, and for shards the question is how to organize all these workers so that single entities cannot corrupt the whole thing.</p><p>Once we see this approach as one where the workers of the infrastructure are divided and no worker does all of the work, we get an idea why sharding is good for scaling. The work is producing blocks, and the workers are the validators. Sharding allows us to split up this work so that only a small fraction of the validators work on the same part of the network. The reason why Nakamoto Consensus does not scale up with more nodes is that every node <em>has </em>to do all the work. If we split up the work over many nodes, then having more nodes means we can split the work into more pieces, making every piece smaller and thus enabling scaling. So the idea is that an incoming transaction goes to a shard and is processed there. It gets included in a block on this shard and the other shards do not really need to bother with this transaction. But wait, how does this work if the transaction transfers some coins from an address that is not on this shard? Well, this indeed does not work. There are many possible ways to organize these shards, but luckily this is not a totally new topic for blockchains.</p><p>There is massively parallel computing, where many computers solve a heavy computational task together, for example to find signals of extraterrestrial intelligence in the radio noise that comes from space (SETI). The other field is databases, especially with clustered file systems and massive data storage. The different parts of huge databases are often called shards, so this is where the naming comes from. In these fields, smart researchers have already racked their brains over these problems, and the solutions differ depending on what exactly your system should be good at. If you write very often to the database and read only rarely, it makes sense to organize the system so that writing speed is maximized. This means incoming data from a single source often ends up on totally different shards, but that is what allows very fast writing. If you have a lot of requests, and these requests represent some kind of condensation of the data, it might be a lot better to put data that belongs together on the same shard.
For example, say you have a database which stores the data of all people living on this planet, and requests are something like “give me the average age of the people living in each city”. If the people’s data is distributed across the shards according to where they live, then each shard can calculate the result for its cities by itself and just report back the results per city. If the data was lying around everywhere, then each shard would need to share its data with every other shard before the calculation could be done, which is more work.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/700/1*gWMIp9hhXLzuH-1ZAsnDKg.jpeg" /></figure><p>It gets even more complicated if, for example, we want to correlate this information with some other information, like how tall these people are. Then we can’t just sum up their ages and send the sum together with the number of people sampled to the next shard and so on. We might need to collect all of that data in one place first and then calculate the complex correlations. If we look at massively parallel computation, it is the same story. Basically, Bitcoin mining is a huge massively parallel computation operation, and synchronizing it is very easy. The reason why this is so easy is that almost all of the work does not matter at all, except for the one attempt that finally finds the block. It is somewhat similar for protein folding, which is also an example of such a distributed computation task. There the most important result is the one randomly generated parameter set that gives the most stable protein. For SETI the recorded noise can also be distributed quite well on different computers, but there is also the question of frequency involved. If a signal has a wavelength so long that the signal is sent over a duration of 10 years, then it might not be possible to find it if the recorded noise is split into 100-second-long pieces. Of course there are many clever things to be done, but that is not the aim of this article here :D</p><p>So how about sharding in blockchains? Well, there is obviously data storage involved and there is computation involved, great. So we have the worst of both worlds. And as luck has it, there is another big pile to be shit on top of that, and that is secure computation. In the other examples there is no single entity that acts maliciously or is even incentivized to do so. So we have data storage, computation and security, and all three must be handled well. But we are lucky, because we know quite a lot about the data to be stored and the computation to be done. For example, for a transaction of coins one must only check if there are enough coins available for the sender; the receiver is always fine. And the computation of smart contracts can be set up so that all the data is saved in the smart contract. So whenever someone sends a transaction to a smart contract, all the data is in one place and not scattered everywhere. Well, not necessarily all of the data, since a smart contract can also access the data of other smart contracts, but in most cases smart contracts care mostly about their own history and the transactions sent to them. In addition, we know that blockchains are not write-heavy systems. Writing costs a fee, and even though sharded blockchains aim to handle 1000 times more transactions than classical systems, that is still not a database writing GB/s of data. An example of a data store that actually operates at that scale is the CERN Large Hadron Collider experiment.</p>
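<p>To make the locality argument concrete, here is the city-average example as a toy sketch. The data and shard names are invented; the point is only that the aggregation runs shard-locally:</p><pre>
# If people are sharded by city, each shard aggregates locally and only
# tiny summaries ever cross shard boundaries.

shards = {
    "shard-eu": {"Berlin": [34, 42, 29], "Paris": [51, 22]},
    "shard-us": {"NYC": [40, 35], "LA": [28, 61, 33]},
}

def local_averages(shard_data):
    """Runs entirely inside one shard -- no cross-shard traffic needed."""
    return {city: sum(ages) / len(ages) for city, ages in shard_data.items()}

results = {}
for shard, data in shards.items():
    results.update(local_averages(data))   # only the summaries cross shards

print(results)
</pre>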
<figure><img alt="" src="https://cdn-images-1.medium.com/max/500/1*gQxkH-E35DZLOQvvl4g5xw.jpeg" /><figcaption>Looking deep into these eyes, you can see the reflection of prices moving up</figcaption></figure><p>Still, we can now see that there are two bottlenecks and one important constraint. The bottlenecks are computation and data storage; the constraint is that everything must be secure. Taking this into account, we end up with an architecture that does not randomly write wherever possible, but rather puts data that belongs to the same smart contract, together with the addresses interacting with each other and with these smart contracts, on the same shard. Ok nice, so we have many shards and the addresses and smart contracts are split up over these shards. To each shard we assign some nodes that propose and validate new blocks. Furthermore, when cross-shard communication is necessary, these nodes send the data around. Sounds fine and not too hard? Well, if we have only a fraction of the nodes on each shard, it is much easier for these nodes to form a cartel or bribe some other nodes to do nasty things. They could just start forging malicious blocks and send this information to other shards, corrupting the whole blockchain. That would be quite bad.</p><p>Luckily there is a solution for this: we just shuffle these nodes constantly. If the nodes are exchanged and presented with new partners every few seconds, it becomes very hard to collude or bribe others. Since you need some coordination and time to do that, it becomes very hard if new players constantly show up on your shard and you have to move to another shard quite often. In the end you would have to collude with everyone, which means it is as secure as the good old blockchains. Random shuffling also solves another problem, namely that the coordinator of all those shards could act maliciously by dropping some cross-shard transactions or forging fake ones. By moving around, the validators can easily recognize the misbehavior of a coordinator node. So everything is fine now with sharding? No. Shuffling has a very delicate implication. It basically means that we have to send all the data of the shard we are leaving to someone else, and we need to receive all the data of our new shard. This means a lot of bandwidth.</p><p>Alternatively, we could store a lot of data, for example of all shards, but then there is no advantage in sharding. If every node has to store everything, there might only be a scaling of computation power, but not of data storage. A single blockchain today is already hundreds of GB in size, so storing 64 shards or more might not be that easy, and it would drive out many nodes, leading to centralization, something we don’t want. If we give up decentralization, we don’t need to run a blockchain; then we can stick to the good old databases. So we either need a lot of data storage, or we need a lot of bandwidth, or we need to have nodes stick to the same shards, which leads to inferior security. This is called the <strong>Data-Availability-Problem</strong>. It is nice to note that this is a trilemma: instead of an ordinary dilemma, where two options are presented and both are bad, we now have three options, and all of them are bad. Unfortunately for the trilemma, there is a genius solution to this problem.</p>
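<p>Before we get to that solution, here is the constant shuffling from above in toy form. The epoch seed would come from the chain itself so that all nodes agree on it; the sizes are invented:</p><pre>
# Toy validator shuffling: every epoch, validators are deterministically
# reassigned to shards, so a cartel never gets time to form on one shard.

import random

VALIDATORS = [f"val{i}" for i in range(12)]
NUM_SHARDS = 4

def assignment(epoch_seed: int) -> dict:
    """Deterministic shuffle: same seed -> same assignment on every node."""
    rng = random.Random(epoch_seed)
    shuffled = VALIDATORS[:]
    rng.shuffle(shuffled)
    return {shard: shuffled[shard::NUM_SHARDS] for shard in range(NUM_SHARDS)}

for epoch in range(3):          # in reality the seed comes from the chain
    print(epoch, assignment(epoch_seed=epoch))
</pre>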
<p>What is the solution? We can split up the nodes which hold the data and the nodes that validate the blocks. But how does that solve the problem? Isn’t there just more data transfer necessary because the nodes have been split? Well, the trick is that the data nodes stick to their shard while the validating nodes are randomly shuffled. This means that the data nodes hold all the data of their shard and don’t have to reload the data of other shards all the time. These nodes provide the transactions to form blocks or even propose the blocks; this depends on the specific implementation. But whatever the case, the validators just take these transactions and forge a block, or just check the proposed block. If the validators agree, the block is published. With this approach the validators can be shuffled quite often, so that it is hard to collude, but it is not necessary to transfer huge loads of data all the time. Great, so we have solved sharding? Well, there is still a lot to take care of, but in essence it is possible. We will look at some finer problems when we see how it is implemented.</p><h4><strong>2. Compatibility (was Isolatability)</strong></h4><p>When we look at compatibility, let’s differentiate between<br><strong>a) Security<br>b) Storage<br>c) Computation<br>d) Hard Features<br>e) Token-Economy</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/942/1*A9d0pjnf05gFaY_Z6GxJSQ.jpeg" /><figcaption>If you read this you are trying to be less like Ricky, very good!</figcaption></figure><p>The first part, <strong>a) Security</strong>, means what level of security a platform offers (who would have thought?). Let’s look at the example of a service where users can buy stocks and keep them, basically a <strong>crypto portfolio</strong>. If the users put $100m of worth into it, but the miners of the platform are only worth $1m, then an attack might be so cheap that the $100m in stocks gets stolen. In this case the platform does not provide enough security. The designers of this portfolio might be better off using another platform or building their own infrastructure. This is why Ethereum is great: if you build a smart contract on Ethereum, an attack via this vector is very expensive, even though no effort was put into building up your own infrastructure. This is very nice. The reason is that the infrastructure is already there and we may use it.</p><p>If we look at <strong>b) Storage</strong>, we come back to what we just discussed in the previous part. A platform might be able or unable to store whatever data your application needs. Sharding or interoperability offer a great leap here, since data can be put on additional shards or on freshly generated zones. It is also possible to use <strong>layer-2</strong> technologies to move data away from the blockchain. Some applications have a high data requirement, where others don’t need much. Data is currently very scarce on blockchains, because the usual way for data to get in is via transactions into blocks. Blocks are scarce as well, so here is room for improvement with blockchain 3.0.</p><p>The next point, <strong>c) Computation</strong>, has already been mentioned a few times. It is very similar to storage; both describe something like “How powerful is the virtual machine?”. Computation is also something that is quite expensive, and there are again a lot of improvements possible on layer-2. For example, it is possible to move the calculation of smart contracts off-chain and only save the result. Again, there are different applications when it comes to computational needs, and often they overlap with storage.
Games on blockchains might need a lot more computation than the given example of a portfolio, but their demand for security will be much lower. So we see here that needs can be very different.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/717/1*xsa8OKxXV1nTIkJC4oAK_A.jpeg" /></figure><p><strong>d) Hard Features</strong> are functionalities that cannot be plugged in afterwards. For example, you cannot just make smart contracts work with Bitcoin. You can think about some layer-2 concepts, but there is no way that the Bitcoin miners verify the computation of smart contracts. Blockchain 2.0 is the inclusion of smart contracts, and since you can code anything with these, anything should be possible, right? No. One example might be <strong>zero-knowledge</strong> computing. It is mostly used for anonymity, like in Monero or Z-Cash, but there might be other reasons why someone wants this. For example, if you have governance and you don’t want users to be able to see preliminary results on the blockchain. Sure, anonymity might also be a thing here, but let’s assume it already worked to give everyone a pseudonym, so nobody knows who is voting. It might still be bad if someone watches the incoming votes and waits until the very end, so that they know on which option their vote might change something. Other examples might be <strong>(pseudo-)random number generation</strong> or the usage of advanced signature algorithms like <strong>BLS</strong>. Allowing different approaches means being faster and more efficient for various problems. Sometimes random number generation with specific constraints is needed. For example, if you sell some Cryptokitties, it is desirable that the user is not able to predict the outcome of the pseudo-random algorithm. Another example might be <strong>in-protocol upgrades</strong>. This point also belongs to 4. Governance, but it is also such a hard feature. The idea is that once the participants of a blockchain have decided to upgrade to a new version, this is not done by shutting down all nodes, downloading a new version via git and starting again, but rather as a live update, where the version of the software is defined by a vote, is thus compulsory, and deploys seamlessly. There are many more hard features one might come up with, but that is not the important part of this section. The important take-home message is that a platform might have some of these features, but not all. Depending on how a platform functions, it ranges from very easy to nearly impossible to use new features or deploy your own hard features. It is again strongly connected to infrastructure. If you roll your own infrastructure, of course you can easily deploy your own hard features; if you use a big infrastructure run by many others, it might be very hard to get a new feature deployed. So here again we see that some points exclude each other.</p>
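<p>For a taste of why unpredictable randomness is such a delicate hard feature, here is the classic commit-reveal pattern in toy form. A real scheme would also need deposits to punish players who refuse to reveal; names and sizes here are made up:</p><pre>
# Commit-reveal: everyone first publishes hash(secret), only later reveals
# the secret, and the secrets are combined. Nobody can predict or steer
# the result alone, since the commitments are fixed before any reveal.

import hashlib, secrets

def commit(secret: bytes) -> str:
    return hashlib.sha256(secret).hexdigest()

players = {name: secrets.token_bytes(32) for name in ("alice", "bob", "carol")}
commitments = {name: commit(s) for name, s in players.items()}   # phase 1

# Phase 2: reveals are checked against the earlier commitments.
combined = hashlib.sha256()
for name, secret in players.items():
    assert commit(secret) == commitments[name], f"{name} cheated"
    combined.update(secret)

random_value = int.from_bytes(combined.digest(), "big")
print(random_value % 100)   # a shared, hard-to-predict number from 0-99
</pre>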
<figure><img alt="" src="https://cdn-images-1.medium.com/max/601/1*ARzqmOa7hmiLtdg8nwIDaQ.jpeg" /><figcaption>Sometimes not all features are shipped on release.</figcaption></figure><p>Which brings us to the last point, <strong>e) Token Economy</strong>. This describes how compatible a platform is with the desire of a project to run its own token economy. A smart contract on Ethereum is able to mint its own coin, which opens a wide design space, but there are limits. You cannot change how fees are charged and you cannot decide where the fees should flow. Thus you cannot disincentivize certain behavior with fees or incentivize other actions by lowering the fees. And possibly most important: the creators of smart contracts do not earn the fees. The result is that for many solutions there is a fee on top of the blockchain fee. If you use Uniswap, you pay the transaction fee of the Ethereum blockchain, but you also pay a fee to the owner of the smart contract, in the form of a worse exchange rate or a fee depending on volume. This part is necessary because the owners of the smart contracts do not profit from the low-level fees of the blockchain, since they are not the miners of Ethereum. The reason here is again infrastructure. <strong>Uniswap </strong>was able to deploy extremely fast because Ethereum provides the infrastructure. But there might be a competitor in the future, offering the same service (providing a nice user experience) but with lower friction, since it runs on its own blockchain solution. For example a PoS chain, where the creators own their staking coins and thus earn the fees. It is then also possible to fine-tune the fees for different actors, but this might be more important for other dApps, for example games.</p><p>What do we learn from this chapter? Well, it is very unlikely that there will be a single solution that fits perfectly for everybody. Since not all dApps have the same preferences, and it is not possible for a blockchain platform to be good at all of these points, it has become obvious to us now why there is no single best solution for all dApps.</p><h4><strong>3. Developability</strong></h4><p>There is quite some overlap with the previous point here, but still some differences. Especially <strong>d) Hard Features</strong> has a lot to do with developability. But there is more to it. Take for example the Bitcoin codebase, which is just one big block of code. Whenever a project decided to fork Bitcoin and create their own blockchain, they forked the whole codebase and changed whatever was necessary. This leads to an ecosystem where many projects keep solving the same problems again and again. A better approach is <strong>modularity </strong>and a separation of different layers. Most prominent is the separation of network, consensus and application layer. But it can go further than that. Basically all features can be modularized, so that governance, non-fungible tokens and whatever comes to mind are things that can simply be added. The idea is not original to Blockchain 3.0; it has had a lot of success in web development and might be the main reason why Javascript is so successful. Even though Javascript often leads to dirty code, the high chance of being able to plug another solution into your own code has boosted Javascript to one of the most prominent languages. The concept is not even original to the Javascript ecosystem, since it was already prominent earlier in Ruby. But this is not the topic of this article. We just want to note here that Blockchain 3.0 aims for such improvements. Another big impact on developer experience is existing infrastructure, which is different from the ecosystem part we have already explained. We have mentioned this infrastructure aspect quite often now; in contrast to usable software in the ecosystem, it provides a live infrastructure to build on, where nothing has to be done. So Ethereum is the main example, where the blockchain runs and you only need to deploy the smart contract. This is huge, since for some projects setting up infrastructure can be a real killer.
There is something in between hosting everything yourself and having a full-blown infrastructure, and this is <strong>Shared/Pooled Security</strong>. In this case there is an existing infrastructure and you deploy your blockchain as another chain into it. By doing so you help to secure the existing infrastructure, and the existing infrastructure secures your chain. You still have to run servers, but there is an automated way to connect to the rest and get it all up.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/960/1*iuPuZlWEdiXNxOINL9LGNw.jpeg" /></figure><p>One last important thing to mention here is standardization. Over the last years, crypto has become very big, many different approaches have emerged, and they are not compatible. This applies to many different layers, starting at the top with smart contracts, which have their own specific language on whatever blockchain implements them, and going all the way down to the consensus and network layers, where different network topologies, networking paradigms, hash functions and consensus mechanisms lead to a very fractured landscape of non-compatible approaches. One way to fix this is standardization. However, the crypto ecosystem has not failed by ending up in this state. If something is new, nobody knows which paths are the right ones to take. When the settlers arrived in America, I bet many of them went down really shitty paths, but of course we only know the success stories today, like the Klondike. In order to find the good path, many paths must be tried out. That is why we are now in this situation.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*N_HuNhLx3uhHJJfXywigKQ.jpeg" /><figcaption>After WWII Germany was split up to try out 2 different approaches to car production. One part of the country explored all possible paths of car production and the other standardized everything and built a single type of car, which you can see in the picture. With standardization a lot of friction was prevented. The other part ended up producing many different cars with many companies like VW, Audi, BMW and Mercedes, what a waste.</figcaption></figure><p>The question is: is it really necessary to standardize everything? I think the answer is no. If we standardize the communication between blockchains and allow for open interoperability, then it doesn’t really matter that all these smart contracts are totally different or that the hashing algorithms are different (well, this might cause higher costs for cross-blockchain communication). In a world where all blockchains can interconnect, developers can pick the approach they prefer and still not be siloed into that specific technology. The only thing that needs to be standardized is the communication between blockchains. This is why interoperability touches this section as well, and we see now how important this very part is.</p><h4><strong>4. Governance</strong></h4><p>Finally we arrive at the last part, and this is Governance. At the beginning of crypto, nobody would have thought this would be so important. That is why Satoshi Nakamoto did not think about it in the whitepaper. I’m sure Satoshi thought a lot about having all the mechanisms so that Bitcoin stays decentralized and does not centralize on some nodes. But there is more to it. In Bitcoin there are three groups: the users, the miners and the developers. So far so good. The problem is that these three groups do not always have their goals aligned.
The best example is fees. The <strong>users </strong>want them to be <strong>low </strong>and the <strong>miners </strong>want them to be <strong>high</strong>. In between are the developers, who can code improvements to make the users happy, but it must be something the miners accept. And I’m very sure there are Bitcoin maximalists out there who will say that this is working perfectly and as intended. To these I usually pose the question: why has Bitcoin been forked so often? Because all groups have agreed and aligned? Or because there is no proper way to govern Bitcoin? This misalignment is a very old problem, and it can now also be experienced on Ethereum with <strong>EIP-1559</strong>.</p><p>There are the users, who can threaten to leave the system and use something else, and there are the miners, who can decide to just not adopt a certain improvement or move to a different fork because that fork does something they prefer. Ethereum is similar when it comes to this aspect, but differs a bit, since there is the Ethereum Foundation. It is like the wise old men giving advice to the community. In addition there is Vitalik Buterin, who fulfills a similar role, even though he might not resemble an old man. In other communities there are leaders who often leave their projects and move on to create a new project and do another big ICO, even though the old project could have been improved. I don’t want to point fingers at these projects or leaders here, but there is a reason why some 3-letter projects did not make this list. <strong>Leaders </strong>like <strong>Vitalik Buterin</strong> are worth a lot, because they help to steer such a project to long-term success. Of course, decentralization means not having such leaders, and that is why we want proper governance described by the protocol.</p><p>Proof-of-Stake provides a little bit of help, since the owners of the coins (for Bitcoin, the users) and the producers of the blocks are the same entities. So the power relations are more aligned from the beginning, but the real innovation and improvement here is voting. By having on-chain votes, these communities are able to officially measure what the users want. This usually ends up in doing upgrades only if users have approved them, and then they are mandatory. Another improvement is <strong>in-protocol upgrades</strong>, which means that the software upgrade happens automatically once the vote is over. This means there cannot be a Donald Trump who really doesn’t want to accept his defeat. This is another great improvement. In previous sections, presenting some nice improvement always ended with a lot of problems that needed to be solved as a consequence. Not in this case. Here is the good news: voting is already working in many blockchains, and even in-protocol upgrades are only a matter of time until they are available in many modern blockchains.</p><p>If you have read until this point, then my plan has worked. The idea of this article was to lure all the readers in by revealing which project will give <strong>$$$ </strong>in the future, and then, once the readers are trapped here, to teach them the basics of blockchains.</p>
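<p>And since we are in teaching mode anyway, here is the stake-weighted voting logic from above as a minimal sketch. Quorum and threshold are invented for illustration; real chains pick their own parameters:</p><pre>
# Sketch of stake-weighted on-chain governance: bonded coins vote, and a
# proposal passes when turnout and the yes-share clear the thresholds.

QUORUM = 0.40      # assumed: at least 40% of bonded stake must vote
THRESHOLD = 0.50   # assumed: more than half of the cast stake says yes

bonded = {"val1": 500.0, "val2": 300.0, "val3": 200.0}
votes = {"val1": "yes", "val3": "no"}   # val2 abstains

total = sum(bonded.values())
cast = sum(bonded[v] for v in votes)
yes = sum(bonded[v] for v, choice in votes.items() if choice == "yes")

passed = cast / total >= QUORUM and yes / cast > THRESHOLD
print(f"turnout {cast/total:.0%}, yes share {yes/cast:.0%}, passed: {passed}")
# With in-protocol upgrades, a passed proposal would also carry an upgrade
# height at which every node automatically switches to the new version.
</pre>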
<figure><img alt="" src="https://cdn-images-1.medium.com/max/858/1*W9KISkZ98QM_KdgSP_uaVQ.jpeg" /><figcaption>I know this meme is very old and you know it already, but I had to make a decision. This is basically a triangle where you can only have 2 properties. The corners are “quality, quantity and novelty”. I hope you appreciate that I wanted the best memes, and many of them, and so sacrificed novelty.</figcaption></figure><p>But now we will discuss the 3 projects from the title: Cosmos, Polkadot and Ethereum 2.0. And the news gets even better: after reading all of the previous stuff, it will be much easier to see the specifics and differences of these projects. We will start with Cosmos, then have a look into Polkadot and finally into Ethereum 2.0. There are good reasons to start with Cosmos, one being that it launched earliest. After this part we will summarize what we have learned with some nice master tables and graphs, and also look at some example dApps and see which platforms offer the best performance for each.</p><h3>Cosmos</h3><p>The whitepaper of Cosmos came out in 2016 and its influence was easy to see, as many interoperability projects presented very similar ideas afterwards in their whitepapers. Let’s first look at the design goals of Cosmos:<br><strong>Multi-Token</strong> — Cosmos supports any number of tokens/coins on its blockchain. From the very basic design it is possible to have any number of denominated currency units on an address. <br><strong>Bottom-up </strong>— The whole system follows a bottom-up design principle. This means there should not be systems at the top directing smaller parts at the bottom. The goal is to have the small parts interact in an emergent way. This might be a bit intangible here, but we will come back to it quite often.<br><strong>Building Blocks</strong> — Cosmos, and blockchains built with it, should consist of building blocks. One example of this is the separation of network layer, consensus layer and application layer. We have already mentioned this point in the developer experience part, and this is one materialization of the building blocks concept. Another one is that the application layer itself consists of building blocks. So if you want to have governance, the module can be added to your application; if you need smart contracts, add an Ethereum virtual machine or a <strong>WASM </strong>smart contract virtual machine. Building blocks also means that you can change parts, so if you want to use something other than PoS, that is possible by changing the building block in the consensus layer. <br><strong>Stop energy waste </strong>— This one mainly means that the high energy consumption of PoW shall come to an end by providing everyone an easily accessible way to build a PoS blockchain. <br><strong>Interoperability as protocol </strong>— There is more than one way to skin a cat, and so it is with implementing interoperability. Cosmos aims to realize interoperability as a protocol. This means that you do not necessarily need the Cosmos blockchain to have interoperability. You can use the protocol specified by Cosmos in order to connect two blockchains. This is a very open approach to interoperability and plays into the bottom-up point already mentioned. By giving everyone the ability to connect to other blockchains, an <strong>Internet of Blockchains</strong> emerges instead of being engineered.<br><strong>Internet of Blockchains </strong>— Cosmos wants to start an era where all blockchains connect to each other and siloed technologies are no longer the norm. This internet shall not be organized by single entities. <br><strong>Overcome “One coin to rule them all” </strong>— A mentality heavily criticized especially in the early days of Cosmos.
“One coin to rule them all” basically means that a blockchain tries to accumulate all dApps and all users under its main chain, thus forcing them to buy its coin. This mentality is seen as impractical, since there won’t be a single blockchain solution for all problems, just as there is not a single operating system for all computers. But these different operating systems can still communicate over a single protocol, Ethernet for example.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/568/1*DWrTIaBlyQvJOyk-qg9r_A.png" /><figcaption>This is just a funny meme, not investment advice.</figcaption></figure><p>One really important thing when it comes to Cosmos is Tendermint. This is the software the same people wrote before Cosmos was tackled. Tendermint is basically an implementation of practical Byzantine Fault Tolerance, so it allows replicated state across a decentralized network of participants, where a minority can be malicious. Wait, so Tendermint is already the thing? Don’t forget, pBFT alone is not permissionless. To make it permissionless, one needs a way of introducing new participants, and that is Proof-of-Stake. So from a chronological point of view, Cosmos continues the development of Tendermint by implementing PoS. This allowed the launch of a public blockchain, Cosmos (Atom), which has been live since March 2019. Since we have learned these specifics already in the previous section, we can now lean back and just state the facts: Cosmos uses the validator/delegator split, has instant finality and uses slashing to solve the usual PoS problems.</p><p>Did the Cosmos team invent all of those things back then with Tendermint? No, but some of them. The idea of using delegation to reduce the number of validators comes from Daniel Larimer’s BitShares, and the term slashing was coined by Vitalik Buterin. But using the ideas from Liskov’s <a href="http://pmg.csail.mit.edu/papers/osdi99.pdf">pBFT paper</a> is Jae Kwon’s merit. From this follow instant finality and the idea of having bonded tokens, where punishment applies when misbehavior is displayed. This concept has proven to be quite reliable, so nowadays it is used in most PoS systems. The approach of Cosmos was quite down-to-earth: they first built software that is useful and is being used by other companies, and then found a way to build a blockchain from it. Other players, in contrast, keep publishing new blockchains that do not really work at this point, and it doesn’t really matter to them, because they move on to the next project before they have to bother.</p><p>In addition, Cosmos asked for funding with an ICO of $17 million. Actually they asked for less, but the demand was high. Other ICOs in the same space have demanded and gotten 10 times that or more, and yet Cosmos is among the few to deliver. After the ICO, Cosmos was developed and launched in March 2019 with 100 validators, and has increased the number to 125, with up to 300 planned. The block time is 7s and Cosmos is able to handle over 1000 tx/s. We don’t know the exact number, since the Cosmos Hub has never been under such load that it could show its maximum capacity. There have been testnets/simulations which achieved 4000 tx/s, but it is very unlikely that the real network achieves this number. Still, it delivers what PoS made people hope for.</p>
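<p>By the way, the 2/3 rule at the heart of Tendermint-style voting is simple enough to sketch in a few lines. This is toy code, not Tendermint’s actual implementation, which adds rounds, proposals and locking:</p><pre>
# A block is committed once validators holding more than 2/3 of the
# bonded voting power have signed it.

from fractions import Fraction

voting_power = {"val1": 40, "val2": 25, "val3": 20, "val4": 15}

def committed(signatures: set) -> bool:
    signed = sum(voting_power[v] for v in signatures)
    return Fraction(signed, sum(voting_power.values())) > Fraction(2, 3)

print(committed({"val1", "val2"}))            # 65% -> False, keep waiting
print(committed({"val1", "val2", "val3"}))    # 85% -> True, block is final
</pre>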
<p>One thing to always remember about such numbers: often they are theoretical, and some awesome new blockchain claiming 10k tx/s or TPS does so in an experimental simulation. The same goes for fees and delay. Sometimes blockchains advertise themselves with very low fees, which are often only so low because the network has a much lower valuation than, for example, Ethereum. For Cosmos the fee for a transaction is currently at $0.0075, assuming a price of $10 (the price when this line was written…) for an Atom. This is a very low fee and it might go up in the future. I don’t want to do a bold shill for any coin here based on such numbers, so I hope I don’t miss that goal.</p><p>Why do we still want to look at these numbers? Because they indicate whether there is a real technological advance or just some kind of reset. A reset applied to Ethereum would also reset the fees to a lower level. When Ethereum had a valuation comparable to Cosmos ($2B), a typical fee was $0.02. So the difference here is not that big, like a factor of 2 or 3. Does this mean that Cosmos is more efficient by a factor of 2 only? No, not really, and the reason is how full the bucket is. After Ethereum hit $0.02 as a fee, the fee increased to $1 just a couple of months later. The reason is that there were already quite a lot of transactions going on, and once the ICO craze started in summer 2017, fees increased a lot. This would not have happened if Ethereum had been able to handle 1000 tx/s instead of 15 tx/s. Ok, but if there is so much space left in the blocks in Cosmos, why is the fee not much, much lower? The reason is that this fee at the lower limit does not represent how full the bucket is, but rather what kind of fee was chosen as spam protection. You can build a blockchain with Cosmos that does not charge fees, but then anybody can flood your blockchain with many pointless transactions to do harm. A small fee prevents that. So a blockchain might also pick a very low value here to give the impression that the system is very efficient.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/245/1*NMq7GreCrCd8eXjyUOvMFA.png" /><figcaption>CAP-Theorem with examples</figcaption></figure><p>The image above shows a triangle with the letters C, A and P. It represents the CAP-Theorem, which states that you cannot have Consistency, Availability and Partition Tolerance all at max in a distributed system. I have already discussed this in other articles, but here again it helps a lot with understanding. With the design choices of Cosmos there is a big difference to Bitcoin or IOTA. I like to pick IOTA here because it is one of the most extreme examples. It might be better to have a look at Avalanche in this regard, since that is something that might actually work, but that is something for another article. High <strong>Availability (A)</strong> means that many transactions go through and are processed; the system is live and available. High <strong>Consistency (C) </strong>means that there is a single state everyone agrees to, so there is not much confusion about what is in the blockchain and what is not. High <strong>Partition Tolerance (P)</strong> means that it doesn’t really matter if nodes go offline or do nasty things.
Bitcoin is the king in this regard, because any node can validate blocks without the others, and any fraction of the miners can go on producing new blocks if the rest goes bust in an earthquake.</p><p>At first glance it might seem not very clever to pick high consistency when designing a blockchain, but I’m convinced this is a very smart pick and I will explain why. The first thought might be: how does it matter if it takes 10 minutes longer until something becomes final, when you sacrifice other things for it? Sure, it would be cool if Cosmos stayed live when half of the validators are taken offline, like Bitcoin does. However, it is not the end of the world if it halts. Even if half of the network stays offline, it is still possible for the remaining validators to make a decision and move on, depending on what has happened. It is very unlikely that a natural catastrophe can take out that many nodes, and if it really does, it is something like a meteor vaporizing all life on Earth. This is tragic, but the biggest concern in such a situation is not why the Cosmos network has gone offline. If it is something else, for example many countries on the planet forming an alliance and prohibiting blockchains, then the validators might know in advance and move their servers to friendlier countries. Also, I guess such an event (prohibition) is much more likely for Bitcoin, because of its tremendous energy waste in contrast to PoS blockchains. In the two years Cosmos has been running now, there has not been any event that made the chain halt. Ok, now let’s understand why Consistency is such a good choice; it has a lot to do with <strong>Interblockchain Communication</strong>, which we have discussed in general, but not with direct regard to Cosmos.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/601/1*CTnqLxhC3UMCZiSS_5c8jg.gif" /><figcaption>CZ Shilling Cosmos and our reaction.</figcaption></figure><p>So how does Cosmos want to implement Interblockchain Communication? I think some might have guessed it already, because from the design goals of Cosmos and what we have introduced about it, it might be clear:</p><p><strong>IBC Protocol standardizes communication<br>No pre-defined network topology<br>2-way peg via validators<br>Peg-Zones for non-IBC chains<br>Hubs and Zones<br>Shared Security (later)</strong></p><p>So the idea with Cosmos is that there exists a way to connect all blockchains together, and they don’t necessarily have to be built with Cosmos technology; it is sufficient if they implement the IBC Protocol. Implementing the IBC Protocol might not be possible for everyone. There are some requirements, but these are quite low. What you need is something like a <strong>global state</strong> and <strong>finality</strong>. Finality need not be instant, but it must happen at some point in time. We have discussed this, and we understand how easy it is for PoS blockchains to have it. Is it impossible for PoW blockchains to have it? Well, if it is just pure Nakamoto Consensus, then yes, but some have PoS mechanisms included for checkpointing. Does this mean these coins, for example Bitcoin, will never be able to join the Cosmos ecosystem? No. It just means they cannot implement IBC and make this process very easy. It is still possible to include them via <strong>Peg-Zone</strong>s (or bridges).</p><p>A Peg-Zone is basically a blockchain which implements IBC and whose validators also manage wallets on the Bitcoin blockchain (or whatever is pegged).
On the Bitcoin side it comes down to our example with the multisig wallets, and on the Peg-Zone side there are the validators, who punish each other for misbehavior and transfer the coins into the Cosmos ecosystem. Ok, how about global state? Well, this basically means that there is some state of the blockchain, let’s say all accounts and their balances, and for a given point in time all nodes on earth have agreed to this state. Hold up… How could that not be the case? Well, again, let’s pull out IOTA. This network does not agree on a specific global state; rather, each individual transaction points to a history of other transactions. It might always be the case that there is a huge arm of the tangle (their type of “blockchain”) that has not yet come into contact with your view of the tangle. All these explanations refer to IOTA as it is meant to function, not as it works today with a coordinator running. The coordinator solves this problem, but it also renders the network centralized and slow. As long as it is running, IOTA is basically a database with a lot of overhead. But IOTA, too, can be connected via a Peg-Zone. The only question is whether there are enough validators for such a zone who want to take the risk of routing transactions that might be reverted later. If these validators are connected to many nodes in the IOTA network, they are still able to mitigate this risk.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/960/1*IZXgd3_UHzoDChwYZO9X9Q.jpeg" /><figcaption>Reading this article reduces the amount of time to 3 years.</figcaption></figure><p>Cool, next up this “no topology” thing. Well, IBC only defines how two blockchains can talk to each other. The rest is up to the ones who build the network. Cosmos speaks of a <strong>Hub and Zones</strong> model, where some blockchains should be hubs, connecting different zones together. But this defines the network topology no more than Ethernet hubs or switches define the topology of the internet. Ethernet does not pre-define the topology that has to be used. You can see this in the most basic connecting units of such a network, the hub and the switch, which already deploy different topologies. The hub has a bus topology, sending all packets to all connected devices. The switch, in contrast, has a star topology, only sending each packet to where it belongs. This upgrade from hub to switch makes a lot of sense, since both devices have cables going out in a star layout anyway; it was just more expensive back in the day to have chips that do the packet routing. But again, we are drifting off.</p><p>So these IBC hubs can connect to each other in any manner they desire, and it all depends on which hubs they want to connect to and thus deem secure. In my opinion this is a very good thing, since the best solutions can be engineered instead of just saying “ok, we have this approach and we hope for the best”. Of course the drawback is that it must be engineered and does not instantly give a huge speedup, unlike just having 1000 shards. But since the engineering of how to set up these shards is much more work, this might be arguable.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*QPx727gCC9f7RKONaYqcXg.jpeg" /><figcaption>This is a visualization of the Cosmos Hub. It is represented by the Atom on the top.
<figure><img alt="" src="https://cdn-images-1.medium.com/max/960/1*IZXgd3_UHzoDChwYZO9X9Q.jpeg" /><figcaption>Reading this article reduces the amount of time to 3 years.</figcaption></figure><p>Cool, next up is this "no pre-defined topology" thing. Well, IBC only defines how 2 blockchains can talk to each other; the rest is up to the ones who build the network. Cosmos speaks of a <strong>Hub and Zones</strong> model, where some blockchains should be hubs, connecting different zones together. But this defines the network topology only in the same loose way as Ethernet hubs or switches define the topology of the internet. Ethernet does not pre-define the topology that has to be used either. You can see this in the most basic connecting units of such a network, the hub and the switch, already deploying different topologies. The hub has a bus topology, sending all packets to all connected devices. The switch in contrast has a star topology, only sending each packet to where it belongs. This upgrade from hub to switch makes a lot of sense, since both devices have cables going out in a star shape anyway; it was just more expensive back in the day to have chips that do the packet routing. But again, we are drifting off.</p><p>So these IBC Hubs can connect to each other in any manner they desire, and it all depends on which hubs they want to connect to, i.e. which ones they consider secure. In my opinion this is a very good thing, since the best solutions can be engineered instead of just saying, ok, we have this one approach and we hope for the best. Of course the drawback is that it must be engineered and does not instantly give a huge speedup like having 1000 shards. But since the engineering of how to set up those shards is much more work, this might be arguable.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*QPx727gCC9f7RKONaYqcXg.jpeg" /><figcaption>This is a visualization of the Cosmos Hub, represented by the Atom at the top. In reality it also consists of its validators at the bottom, producing blocks through the Tendermint Consensus Engine. The validators in turn are backed by the delegators at the very bottom. At the top the Hub also connects to other blockchains, directly via IBC or via Peg-Zones (BTC and ETH).</figcaption></figure><p>Now let's finally get back to why Consistency is so great. We have understood why the sacrifice in Partition Tolerance is not a big deal. We have seen that some networks (IOTA) that sacrifice Consistency for Availability are not able to play out their advantage, because the low consistency makes a coordinator necessary.</p><p>When do we need Consistency? If we want really low settlement times for whatever happens on a blockchain. Blocks can be produced really fast, but only once a block is final can we move on with actions that depend on finality. So let's assume we have some stuff going on between some blockchains: for example, on the CryptoKitties Zone there is a very precious CryptoKitty, and it should be transferred to the DeFi Zone, where it is possible to stake this Kitty in some strange manner (strange for us normal people; for DeFi people everything can have its derivatives and be staked, and the staked stuff can be staked again, and so on). But before this happens, the CryptoKitty should have sex with a CryptoDragon on yet another Zone, to enrich its value or something. So the CryptoKitty first gets transferred to the CryptoDragon Zone, which might happen fast, but it must be final, because the owner of the CryptoDragon is not giving away the precious CryptoDragon juice as long as the CryptoKitty is not really there. After that the CryptoKitty is transferred to a Hub, which connects to the DeFi Zone (we assume the Kitties and Dragons are directly connected Zones). The Hub will only transfer the Kitty to the DeFi Zone once it has really arrived, so it waits for finality again. And finally the Kitty is staked. With 7-second blocks this takes 7+7+7 seconds until the Kitty arrives at the Hub and then 7+7 seconds until it arrives at the DeFi Zone and is staked. So 35 seconds.</p><p>Let's take another network of blockchains with 4s instead of 7s block time and 900-block finality. Such a network is able to process more transactions because of the relaxed finality constraints, but it has a finality time of 60 minutes. So the whole process takes 5 hours there. A lot more CryptoKitties can be transferred in parallel, but each single one takes quite a while to pass between the different blockchains. Now one might argue that the instant-finality network is only better as long as it is not congested; once it becomes congested, the 7s don't help you anymore. And here comes the next reason why high Consistency is good:<br>What do you do with a blockchain that sits in a network of blockchains if it gets congested? You duplicate it and connect both together. If you can't handle all the trades on one DEX chain, then just spawn more chains. Split the trade pairs across different zones. If 2 aren't enough, spawn 100. Connect the 100 to a Hub and let the hub do the routing. If 100 are too many and the hub cannot handle so many subzones, then connect only 10 to a hub, do this 10 times and connect the 10 hubs in a mesh network or whatever makes most sense. If stuff really gets crazy and you need 1000 zones, then just add another layer of 10.
This means you multiply everything by 10 and put another mesh network of 10 on top.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*1KV2cneZaeu_q6q_HR8R3Q.jpeg" /><figcaption>Guess what happens if you have something like these wrapped Bitcoins on something like Cosmos or Polkadot? Yes, no more $20 fees, and fast transactions.</figcaption></figure><p>What is the downside of this approach? Everything now happens in different zones. Let's say you want to trade Fupacoin into DaddyOFive-coin. You might be lucky and the trade pair Fupa/DO5 is on the same zone, so you can trade directly, but maybe you need to trade Fupa/USD and then USD/DO5, and these trade pairs are on totally different zones, and these zones connect to different hubs, and these hubs only come together at the very top mesh network of 10 main hubs. In this case the USD from the Fupa/USD deal needs to go up 2 hops (7+7s), then down to the other final zone (7+7s), and might then go somewhere else, for example leave the DEX network (7+7+7s). This means it takes 49s to do this kind of thing. Compare this to an alternative network where we don't need 1000 zones with 1,000 TPS each, but only 10 zones with 100k TPS each. These zones have 60-minute finality (they can only achieve 100k TPS by giving up instant finality), so it is much more likely that Fupa/DO5 sits on the same chain and you can trade directly. But in this case it takes you 60 minutes to leave the network after the trade. If you are unlucky and need one transfer before leaving the network, it is 60+60 minutes. Both cases are much worse than the 49s of the other example, and there, 49s was already the worst case. This is why the choice of instant finality makes a lot of sense and is very powerful in an internet of blockchains. It makes this bottom-up design go round.</p>
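<p>Since we keep juggling these hop-and-finality numbers, here is a tiny Python sketch that redoes the arithmetic. The model is mine and deliberately naive: every hop between chains simply waits for one full finality period on the previous chain.</p><pre>
# Back-of-the-envelope settlement times, a toy model matching the text.

def settlement_time_s(hops: int, block_time_s: float, blocks_to_finality: int) -> float:
    """Naive model: each hop waits for full finality on the previous chain."""
    return hops * block_time_s * blocks_to_finality

# Instant-finality network (Tendermint-style): 7s blocks, final after 1 block.
# Kitty zone -> Dragon zone -> 2 hops to the Hub -> DeFi zone -> staked = 5 waits.
print(settlement_time_s(5, 7, 1))     # 35.0 seconds

# High-throughput network: 4s blocks, but 900 blocks until finality.
print(settlement_time_s(1, 4, 900))   # 3600.0 s = 1 hour for a single hop
print(settlement_time_s(5, 4, 900))   # 18000.0 s = 5 hours for the same path
</pre>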
<h3>Polkadot</h3><p>We arrive at the next blockchain, and that is Polkadot. We started this article with what is wrong with Blockchain 1.0 and 2.0 and what needs to be fixed. That opener was mostly taken from the Polkadot whitepaper, which does a very good job of describing what needs to be done to take blockchain to the next level. No wonder Gavin Wood was able to draw a big following right away when he decided to leave Ethereum and start Polkadot. This brings us to the first design goal of Polkadot, but let's list all of them together:</p><p><strong>Solve the scalability problem faster than Ethereum 2.0<br>Parachains shall provide more freedom than shards<br>Scalable heterogeneous multi-chain<br>Tendermint+HoneyBadgerBFT<br>Minimal, simple, general, robust</strong></p><p>The first and already mentioned point very much reflects Gavin Wood's frustration with how things were going in the Ethereum ecosystem. Not that he thought the direction was totally wrong or people were incompetent; more that the whole thing had already become a bureaucratic monster and steering it was slow and inefficient, especially when it comes to switching to PoS. He knew it would be much easier to build a new PoS blockchain directly than to transition something from PoW to PoS. And he was right: both Polkadot and Cosmos are online today with PoS consensus, while Ethereum is still running on PoW. But the idea is not only to do exactly what was planned for Ethereum, but also to provide more freedom. Instead of shards, Polkadot aims for Parachains, and these are full-fledged blockchains. This means they can have their own set of functions; they might not even have smart contracts, or they might have different interpreters running those contracts. Unfortunately we have not discussed Ethereum 2.0 yet, so it might seem a bit strange to explain Polkadot in contrast to Ethereum. It is, but actually we want to explain Polkadot in contrast to Cosmos, which works pretty well now that we understand all these details about Cosmos. Once we understand Polkadot, going over to Ethereum 2.0 will be easy.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/640/1*9YgNsWeVr0IpPO5BbPXLyQ.jpeg" /><figcaption>Don't fall for pump and dumps. If something has outdated technology, it is either Bitcoin, with its first-mover advantage, or it is a shitcoin. Not that hard to understand.</figcaption></figure><p>The next point is <strong>scalable heterogeneous multi-chain</strong>. Heterogeneous represents exactly what we just said about different Parachains being able to have different functionality. I hope scalable is clear by now to any reader who made it to this point; if you have no idea what it means, well, maybe the article was not written enough on a real idiot level :D. Multi-chain means that Polkadot is not just a single chain that might or might not connect to other chains, but a strong team of many chains working together. I avoid saying network here, since the coupling is stronger than in the network of Cosmos blockchains.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/912/1*e1aWvt5yt2eKFohRIiwqRg.jpeg" /><figcaption>The honey badger seen from different perspectives.</figcaption></figure><p>The next point is <strong>Tendermint+HoneyBadgerBFT</strong>. The first part is already known to us; Polkadot acknowledges the great success of Tendermint here and practically says: we also want to be a BFT-based blockchain. <a href="https://eprint.iacr.org/2016/199.pdf">HoneyBadgerBFT</a> is different from the pBFT protocol we have seen for Cosmos. Reading its whitepaper might help as much as watching the <a href="https://www.youtube.com/watch?v=4r7wHMg5Yjg">video of the honey badger</a>. The basic idea is that the honey badger don't care. pBFT, in contrast, does care and goes offline if too many cobras show up. In more blockchainy terms, HoneyBadgerBFT mostly relaxes the constraints on synchronicity and is thus asynchronous. This means it does not halt as quickly and can go on even when more nodes fail. For our beloved triangle this means we trade away Consistency for more Availability and Partition Tolerance:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/245/1*kkGdMoGx_CJ330okJD6GYg.png" /><figcaption>Now we have added Polkadot: its ability to go on under less strict constraints represents a loss of Consistency in comparison to Cosmos, but brings advantages in Availability and Partition Tolerance.</figcaption></figure><p>There is a good reason why Polkadot does this, and we will understand it later. To finish this part, let's check the last point, <strong>Minimal, simple, general, robust</strong>. <strong>Minimal</strong> means that Polkadot should not be overloaded with functionality that does not serve its main purpose. The idea of Polkadot is to have these Parachains and connect them with a Relay Chain. The principle of minimality especially applies to the Relay Chain, which shall have no functionality except connecting the Parachains. On the Relay Chain there will be no smart contracts.
Smart contracts are nice, but they do not serve the purpose of connecting Parachains.</p><p><strong>Simple </strong>aims in a very similar direction: all features should be implemented in a simple fashion, and even if there are more sophisticated approaches, these are not taken as long as they are not absolutely needed. One implication is that the Relay Chain does not have the concept of gas. Wait, no gas? Well, there are transaction fees, and gas is only needed if you have to calculate a dynamic price for a transaction that calls smart contracts: depending on the complexity of the smart contract, there are higher or lower gas costs. If you don't have smart contracts, you don't need gas. This doesn't mean you cannot have it on your Parachain; a Parachain can have whatever crazy complexity someone wants to build into it. It is not that surprising that Jae Kwon (the founder of Cosmos) has advocated the same principle for the Cosmos Hub, which does not have smart contracts for the same reason. The solution is to just connect Zones with the functionality you need.</p><p><strong>General</strong> means that everything should be possible. There should be no constraints, imposed by the Relay Chain, on what can be built on a Parachain. This is something where Gavin Wood might have been thinking of Ethereum 2.0 and how it only allows smart contracts in Solidity and nothing else. Here he wants to provide more freedom than is possible with Ethereum.</p><p><strong>Robust </strong>is quite easy. It just means the multi-chain should be secure and attacks should not be possible. This is a given; if a blockchain does not have it, it is worthless. Something that might also have been good to include in this list is distributed or decentralized, meaning that any centralization tendencies should be kept in check.</p><p>We now understand the design goals of Polkadot, but we do not really understand its consensus mechanisms yet. We might fall into the trap of thinking we already know it: some Tendermint with more asynchrony. But there's more to it. Polkadot is quite interesting because, for the user, transactions into other Parachains are not much different from internal transactions. To achieve this, there must be a clever way to connect all these Parachains. Polkadot is designed to deploy with Shared/Pooled Security from the start, a feature that Cosmos plans to implement later. We have already mentioned this feature: it means that you don't deploy your Parachain on its own, but connect it to the Relay Chain, adding to all other Parachains' security while all of them add to yours. How is this possible? To understand it, we must look at Parachains a bit more like shards and apply all the knowledge we already have from the beginning of this article. The validators of a Parachain do not stay with their chain, but rotate around. This opens the door to the <strong>Data-Availability-Problem</strong>, which is solved exactly as we described it. In Polkadot there are Collators, and these are the data nodes. They propose blocks to the validators, who check the validity of those blocks. Besides collators and validators there are also nominators, who are the same as delegators, just under a different name. But there is a new group of participants in the consensus process, and these are fishermen.
Their job is to find misbehavior, and we will come back to this soon.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/640/1*S1oGdOZfUp0REbWMKRKjoQ.jpeg" /><figcaption>If you are looking for patterns in candlestick charts, maybe counseling is cheaper in the long run.</figcaption></figure><p>First, let's have a look at how the block producing process works. In principle it is similar to Cosmos: validators vote on whether they agree on a block, and if the majority agrees, the block passes. But the block is not proposed by a validator but by a collator. The validators also need the necessary data to be able to check the validity of a block. So a validator does not only tell everyone its opinion on the validity of a block, but also its own availability. Stating that a validator is available means that the necessary data from the collators was provided, so a rational decision on the validity can be made. This means that in order to accept a block, 2/3 of the validators must vote valid and none must vote invalid, and at least 1/3 must report positive availability. If this does not happen, because somebody votes invalid or availability is not given, an exceptional condition is thrown, which means the case must be investigated. In the Cosmos consensus it would not really make sense to throw such a condition: since all the validators have all the information, there is nothing to investigate. Investigation means more information must be gathered in order to make an informed decision, mostly on how to proceed and especially on whom to punish. In the case of Cosmos, or say pBFT+PoS in general, there is no need for investigation, even though there might be punishments for misbehavior. Splitting the active participants into different roles and onto different shards has the consequence that nobody has all the information. This is why fishermen make sense as a fourth role: they do not participate directly in the process of producing blocks, but watch over it, try to find inconsistencies or misbehavior, and report them. Such catches might be very rare, but the reward is good. That is why they are called fishermen.</p>
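<p>Stated as code, the acceptance rule looks roughly like the sketch below. This is my own simplification with invented names (the real Polkadot logic is asynchronous and far more involved), assuming we look at the tally for one Parachain block after all assigned validators have reported:</p><pre>
# Sketch of the block acceptance rule described above (names invented).
from dataclasses import dataclass

@dataclass
class Report:
    valid: bool      # does this validator attest that the block is valid?
    available: bool  # does it claim it got the data from the collators?

def judge_block(reports: list, n_validators: int) -> str:
    valid_votes = sum(1 for r in reports if r.valid)
    invalid_votes = sum(1 for r in reports if not r.valid)
    available = sum(1 for r in reports if r.available)
    enough_availability = available * 3 >= n_validators      # at least 1/3
    if invalid_votes > 0 or not enough_availability:
        return "exceptional condition"   # must be investigated
    if valid_votes * 3 >= 2 * n_validators:                  # at least 2/3
        return "accepted"
    return "pending"   # not enough votes yet, keep gathering

reports = [Report(True, True)] * 70 + [Report(True, False)] * 30
print(judge_block(reports, 100))   # accepted: 100 valid, 70 available
</pre>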
<figure><img alt="" src="https://cdn-images-1.medium.com/max/960/1*Wqqv37vS8efLhtiGqWYetg.jpeg" /><figcaption>This is what it looks like when you are trying to explain why the price will move in a certain direction.</figcaption></figure><p>To understand this better, let's look at some examples. What might be a misbehavior of a collator? A collator could propose a block that is invalid and also provide wrong information making this block look valid. This is only possible if many collators of the same Parachain do so, but it can be detected by fishermen if they get hold of both the real data and the wrong data. In most cases this won't work anyway because there are too many collators, but in case of a collusion it might become more interesting. Still, the collators cannot easily forge blocks, since signatures must be correct; censoring data, however, can be lucrative. Besides the usual misbehavior of signing invalid blocks, a validator can also signal its availability although it did not actually collect any data, and afterwards sign the block as valid anyway. At first this seems not very rational, but we all know this behavior from school. When the teacher asks who has done the homework, and we know how the ones who present their homework are selected, it can make sense to signal that the homework is done and later agree with what others say, while in reality there is just an empty page in front of us. Why does this happen? Because the reward is given if availability is signaled and the block is valid. The validator has participated in the active consensus process, but did not have to bother downloading data. When too many start to do this, the whole validation process might become pointless. This is basically the situation in which the two hardworking girls in the front row raise their hands every morning and present their homework while everyone else in the class has done nothing. The teacher then often switches to the fisherman protocol, picking individuals at random and checking whether their data availability is really given. Sometimes there is another protocol, where the teacher goes around and checks all homework. Sometimes this is cheated with the mimicry method, where pupils just present any kind of written text; this only works if the teacher checks superficially. But if the teacher checks thoroughly, the process becomes very inefficient, wasting a couple of minutes at the beginning of each class. In school this is possible, since there are only 30 pupils, but with thousands of participants, and understanding the Data-Availability-Problem, we know why this isn't an option.</p><p>There are some more details we can't all cover here, but very important is of course how all these Parachains work together by meeting on the Relay Chain. We now understand quite well what happens within each Parachain, but what happens if you want to send a transaction to another Parachain? The promise of Polkadot is that for the user there is no difference between a transaction inside a Parachain and one that goes outside. There is so-called egress transaction information, which is everything that wants to leave the Parachain for another Parachain. This egress must be validated, since if, for example, some coins leave a Parachain but aren't actually there, the whole system might be compromised. This is why the validators have to state whether they have enough information to make a decision on these egress transactions. Once the Parachain block is sealed, it can be processed on the Relay Chain, where a sealed block leads to the distribution of these transactions into the ingress queues of the Parachains where they belong. So the Relay Chain does the job of routing all these transactions to the proper Parachains. The Relay Chain also takes care that the validators rotate properly among the Parachains, and of course it punishes misbehavior.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/762/1*A6EHwAxg27yzFOiAmB_G3Q.png" /><figcaption>This almost graspable image shows how all these actors in Polkadot relate to each other. Transactions come in from the top left in green and are proposed by the Collators. The validators of this Parachain then validate these transactions, and they are processed on the Relay Chain (white) into the ingress of the other Parachains attached to it. At the bottom we see a Parachain bridge to Ethereum.</figcaption></figure>
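<p>The routing step itself is conceptually simple; here is a toy version in Python. The structure and names are invented for this sketch, but it shows the bookkeeping the Relay Chain does with sealed egress queues:</p><pre>
# Toy relay-chain routing: egress of sealed blocks -> ingress per parachain.
from collections import defaultdict

def route_egress(sealed_egress: dict) -> dict:
    """sealed_egress maps a source parachain id to its validated egress
    queue: a list of (destination_parachain, transaction) pairs. The
    relay chain distributes each transaction into the destination's
    ingress queue."""
    ingress = defaultdict(list)
    for source, egress in sealed_egress.items():
        for destination, tx in egress:
            ingress[destination].append(tx)
    return dict(ingress)

print(route_egress({
    "kitties": [("defi", {"transfer": "kitty#42"})],
    "dex": [("kitties", {"payment": 100}), ("defi", {"payment": 7})],
}))
# {'defi': [{'transfer': 'kitty#42'}, {'payment': 7}], 'kitties': [{'payment': 100}]}
</pre>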
<p>Ok, now we understand this part, but what happens if an exceptional condition is thrown on a Parachain and the case is investigated? Does the Parachain halt until it is resolved? Actually, since one Parachain can corrupt the whole multi-chain, everything must be halted, right? Well, yes. But there is a big difference here between Polkadot and Cosmos. Remember, instant finality has been given up, so blocks do not become final instantly, which means there is more time to revert. Polkadot chooses 900 blocks until finality, so there is a lot of time to investigate a case. If bad things happen on one Parachain, transactions can keep propagating through the network and new blocks can be produced while the case is still being examined. The process of sealing a block is also not as strict as with Tendermint+pBFT in Cosmos. There, we have a round where a block is proposed, then everyone votes, and once everyone has voted, the next block is produced. In Polkadot we don't need to wait for everyone: if there are enough votes, the block can become valid, but later there can still be a validator voting this block invalid, even though the 2/3 majority was already reached. This is because HoneyBadgerBFT allows asynchrony. It makes the system more robust and more available even when misbehavior happens that needs time until enough actors get hold of it. We already discussed the price of this, and that is Consistency.</p><p>This is the reason why Polkadot aimed for a 4s block time in its whitepaper. However, they ended up at 6s for now, which, so they state, might come down in the future. Cosmos is currently running at 7s. Polkadot aimed for 144 validators in the whitepaper but is now at 297, so it has more validators than planned, which might largely explain why the block time is 6s rather than 4s. Here again, the sacrifice of instant finality allows for more validators without slowing down the process. Cosmos has increased its validator set from 100 to 125 and wants to go to 300 in the long run; Polkadot wants to have 1000 validators in the future. Now we only need to understand the differences in this interblockchain thing. Let's start with the outsiders, because neither can Cosmos assume everyone will implement IBC, nor can Polkadot assume everyone will buy a slot to become a Parachain. Buy a slot? We will explain this soon. So what about the outsiders? In Cosmos they can be integrated into the network with Peg-Zones, and for Polkadot it is the same. There is no real difference, except that the <strong>Peg-Zone</strong>, or in the words of Polkadot the <strong>Bridge</strong>, is automatically a blockchain with shared security. In Cosmos it makes a lot of sense to implement these Peg-Zones with shared security too, but until this feature is finished, they will be included as normal zones. So including outsiders is very similar, and this is a good point to state that there will be a <strong>limited number of Parachains</strong>. Since this number is limited, the slots are auctioned off.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*agLM2t1nPqQDU4y1C1onjw.jpeg" /><figcaption>Be skeptical if a project throws around a lot of buzzwords and solves goat herding.</figcaption></figure><p>Hold up, what? You need to pay to become a Parachain? Yes. But you do not just get the slot, you also become a validator and earn rewards from participating in the shared-security model. It is not possible to connect an arbitrarily high number of Parachains, and that is why the number must be limited. It is still possible to connect Parachains that are themselves Relay Chains for other chains below them, and this is where we see the star topology of the Polkadot network. This is the big difference between Polkadot and Cosmos.
In both cases you can develop your own blockchain with your own tokenomics as well as functionality. So if you need a feature that other Parachains do not support, you can go and create your own. But you cannot just connect to the Relay Chain whenever you want to: you need to win an <strong>auction</strong> or buy a <strong>slot </strong>from another winner. If we look at smart contracts and data integration, then again Cosmos and Polkadot are quite similar. Parachains can have their own smart contracts, even different engines for processing them, same as Cosmos Zones, but a Parachain cannot read out the data of a smart contract on another Parachain. The same holds true for Cosmos, but it is different for Ethereum. Smart contracts can of course still be called across chains by sending transactions to them.</p>
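<p>This distinction between calling and reading is worth pinning down, so here is a deliberately crude Python sketch of it. The Chain class is invented for illustration; it is not any real SDK:</p><pre>
# Cross-chain CALLS work, cross-chain READS don't (Cosmos/Polkadot model).

class Chain:
    def __init__(self):
        self.inbox = []    # ingress queue, filled via the hub/relay
        self.state = {}    # local contract storage

    def send_transaction(self, other: "Chain", payload: dict) -> None:
        # Calling a contract on another chain: enqueue a message and
        # let the other chain process it in one of its own blocks.
        other.inbox.append(payload)

    def read_foreign_state(self, other: "Chain", key: str):
        # There is no shared global state to query synchronously.
        # You would have to ask the other chain to send the value
        # back as a message and wait for it.
        raise NotImplementedError("no synchronous cross-chain state access")
</pre><p>On Ethereum 2.0, by contrast, the stated goal is that a contract can use another contract's data as if both lived on one chain, with the shard plumbing hidden by the infrastructure.</p>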
<h3>Ethereum 2.0 or Serenity</h3><p>Just like for the others, we start with the design goals of Ethereum 2.0. This part will be much shorter than the others, and that is not because Ethereum steals its ideas from them, but because of the order in which we explain things. In fact, Ethereum introduced smart contracts and coined most of the ideas around sharding. But let's get to the goals:</p><p><strong>Smooth transition from PoW to PoS<br>Vertical scaling through PoS<br>Horizontal scaling through Sharding<br>Seamless processing of smart contracts</strong></p><p>The process of making Ethereum 2.0 a reality is much more a long odyssey of several upgrades than an engineering process where a specific whitepaper is implemented, as is the case with Cosmos or Polkadot. This is already reflected by the fact that there is no single whitepaper for Ethereum 2.0 that describes all the important bits. We can't dive into all of the different paths that ended up contributing a lot or just a tiny bit to the whole process. This article is already very long, and we mostly want to focus on what the outcome will be in the end and what it means for end users and developers. We have already learned a lot about it, because the initial statements about how sharding can be made scalable and secure come from Ethereum research. So let's start thinking about the design goals.</p><p><strong>Smooth transition from PoW to PoS</strong> is a problem the other two projects do not have to solve. But it is a real problem for Ethereum. There is already a running blockchain; in my opinion, Ethereum is the blockchain that best demonstrated how much the functionality of Bitcoin can be improved. Whoever thinks today that Bitcoin has all the functionality needed either does not understand Ethereum or is ignorant. So there is a good reason to keep this network alive, but the transition to PoS is very disruptive. Not only does a very important group in the Ethereum ecosystem become obsolete, the miners, it is in fact the group with the most power over what happens to Ethereum. So by just saying, well guys, the party is over, we move on to the next thing, you are obsolete now, the miners might not support that step. This is why a smooth transition is needed. There are other projects out there where the leader decided not to improve the existing project but to leave the ship and start something new, to make this process simpler or just to collect new ICO funds. Irony has it that one of these projects is presented here as well, with Polkadot. However, there are much worse examples, Gavin Wood's reasons are understandable, and Polkadot does not just throw away PoW but differs in many more ways. Still, in my opinion it is a very good sign that Vitalik Buterin stays with the ship and tries to fix these problems.</p><p>The next point, <strong>Vertical scaling through PoS</strong>, is easy, and we already understand it after reading this article. So let's go back and understand how Ethereum wants to get to PoS. The thing is that Ethereum does not just want to switch that part, but also do <strong>horizontal scaling through sharding</strong>. This means the design goal of Ethereum 2.0 is to have a cleverly tinkered roadmap that achieves this goal by implementing several new features step by step. That way there is no single point in time where PoW goes from being everything to being totally removed, and shards are tested in a real-world environment before smart contracts run on them. How is this done? By first starting a <strong>Beacon Chain</strong>, a PoS blockchain that is completely independent of the original Ethereum mainnet, which keeps running as usual. This is phase 0. The next step, phase 1, is to launch shards coordinated by that Beacon Chain; this is very similar to the Relay Chain of Polkadot. Once this is achieved, the mainnet becomes a shard and is thereby integrated into the PoS network. After this, phase 2 starts and finally implements cross-shard transfers and smart contracts as well as 100% PoS.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/674/1*gf8CrvSROFJMtqbJFOkeBQ.png" /><figcaption>This graphic visualizes how the different Eth chains interact.</figcaption></figure><p>Last but not least, there is another design goal, and that is <strong>Seamless processing of smart contracts</strong>. This means that neither the user nor the developer has to worry about the shards. Running a smart contract should be the same as in original Ethereum. Behind the scenes there will be data passing between shards and the like, but all of that should be automated by the infrastructure. The bar is set quite high here, and this differentiates Ethereum 2.0 from the other two projects, Cosmos and Polkadot.</p><p>Now that we have understood the design goals, let's try to understand the consensus mechanism of Ethereum 2.0. It is quite similar to Polkadot's, which is not a big surprise, since Polkadot tries to be something like Ethereum 2.0, but faster and with more freedom. So we have a PoS-based BFT algorithm that sacrifices instant finality for less overhead and more throughput. Guess what, there is another triangle; researchers of distributed software love these triangles. So I present it here, filled with dots representing blockchains. This is always just an abstract model, and reality is more complex, but it helps us understand the basic ideas:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/772/1*57fOpjctMGRmAcmNqEGn8Q.jpeg" /><figcaption>This triangle has other corners and edges than the previous one, but there is still quite some overlap. Consistency is closely related to Low Latency Finality, Partition Tolerance to Large Number of Nodes, but Availability is not the same as High Overhead. Keep in mind that the related pairs are not identical either, there is just overlap. The triangle is from the Ethereum Foundation, and I have added the dots. The dots for each tech can only be an approximation and are relative to the others.
If you put in something else, the relative position of some dots might move together or apart…</figcaption></figure><p>In order to understand why finality is dropped, we need to understand one important concept, described in a <a href="https://arxiv.org/pdf/2003.03052.pdf">paper</a> that explains how Casper and GHOST can be combined. Unfortunately we have not really explained what <a href="https://arxiv.org/pdf/1710.09437.pdf">Casper FFG</a> is, and we have not explained what GHOST is. In addition, this is just a part of the story, since there is also CBC Casper, and Ethereum 2.0 has changed quite a lot over time and is changing right now, since the importance of zk-Rollups has increased a lot in recent weeks. So the plan might change further in the future, since Ethereum 2.0 is not just a matter of implementing specifications. There is also a huge roadmap posted by Buterin on Twitter, and it shows how much stuff is going on:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*LrseZCKBh4xjX1GECKn68g.jpeg" /><figcaption>Ethereum 2.0 roadmap as posted by Vitalik Buterin. The green rectangle on the left is "Today". From there, the many paths that lead to the goal of Ethereum 2.0 are depicted. I have marked 4 different segments, and we will have a closer look at them.</figcaption></figure><p>The first event for a broader public is the Phase 0 launch, and this has recently happened (December 2020). This process can be seen in the purple part of the roadmap:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/670/1*vHX7wc1YHI-2cPKe176m4Q.jpeg" /><figcaption>The purple rectangle depicts the process of launching the Beacon Chain and reviewing its performance for the Phase 1 launch.</figcaption></figure><p>Ok, so what is Casper FFG? The FFG stands for Friendly Finality Gadget, and it is called a gadget because it is not a full consensus mechanism; it is not even a necessary part of such a mechanism. It is an add-on that can introduce finality, even for Nakamoto-Consensus-based blockchains. In the case of Ethereum this is great, because Eth is currently based on this Nakamoto Consensus, and we already know that it is very nice to have finality. With some kind of finality, the current mainnet of Ethereum can not only become a shard of the Beacon Chain, it could also implement IBC and become part of the Cosmos network. Casper FFG in essence introduces checkpointing via PoS and is designed like the other BFT approaches we have seen. There again is some inspiration from Tendermint, but the fork choice rule is different, and it is called GHOST. GHOST is not part of Casper FFG; Gasper is the combination of both, which we already mentioned. With Cosmos the fork choice rule was a no-brainer, because with instant finality there are no real forks: whoever starts a fork is directly punished. But by giving up instant finality, the fork choice rule becomes an interesting topic again.</p><p>So what does GHOST mean? Greedy Heaviest Observed SubTree. Sounds complicated, but in essence it is very close to the longest chain rule. Actually, that one should be called the heaviest chain rule, and heavy means the same here; the only difference is that heavy stands for how much stake vouched for the validity of a block instead of how much work was put into it. A SubTree is the part of the blockchain between two checkpoints; Observed SubTree, however, was mostly picked to end up with the acronym GHOST, which earns some nerd humor points, since Casper is a ghost. It could have been called the greedy chain rule as well, but that's fine. Actually it is called LMD GHOST, where LMD stands for Latest Message Driven. That basically means that only the latest attestations of the validators are counted. An attestation is vouching for the validity of a block.</p>
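<p>The rule is compact enough to sketch in a few lines of Python. This is my own simplified version (equal validator weights, no Casper FFG checkpoints), just to show the mechanics of "follow the heaviest subtree":</p><pre>
# Minimal LMD-GHOST fork choice sketch: equal weights, no checkpoints.
from collections import defaultdict

def lmd_ghost(children: dict, parent: dict, latest_attestation: dict, root: str) -> str:
    # 1. Each validator's LATEST attestation adds weight to the
    #    attested block and to all of its ancestors.
    weight = defaultdict(int)
    for block in latest_attestation.values():
        while block is not None:
            weight[block] += 1
            block = parent.get(block)
    # 2. Walk down from the root, always entering the heaviest subtree.
    head = root
    while children.get(head):
        head = max(children[head], key=lambda b: weight[b])
    return head

# Two competing forks B and C on top of A; most validators saw C's branch last:
parent = {"B": "A", "C": "A", "D": "C"}
children = {"A": ["B", "C"], "C": ["D"]}
votes = {"v1": "C", "v2": "D", "v3": "D", "v4": "B"}
print(lmd_ghost(children, parent, votes, "A"))   # -> D
</pre>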
<figure><img alt="" src="https://cdn-images-1.medium.com/max/828/1*7DMA4-b0mzAYyMikJRaNWg.png" /><figcaption>This is taken from the Gasper paper and it already has a caption. This caption will self-destruct whenever Medium supports this feature. (Nerd humor points for me…)</figcaption></figure><p>LMD GHOST and Casper FFG together form what we have called a consensus mechanism, or in other terms a blockchain protocol. Now that we understand how the fork choice works, we understand how delayed finality can work here. Why is delayed finality a good thing for transaction throughput? Because we don't have to wait for everyone's attestation and can go on producing more blocks even though some validators are lagging behind. It might also be the case that a conflict has arisen (an exceptional condition) and some participants of the protocol need more data to make a decision. With instant finality the network has to halt in such a case, and now we understand why halting is not an issue with the HoneyBadger or with Gasper.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/496/1*mwR2EtMWtytEaGK1ujegww.jpeg" /><figcaption>The red rectangle of Vitalik's roadmap depicts the part that makes the PoW part of Ethereum (the current mainnet) compatible with the PoS part (the current Beacon Chain).</figcaption></figure><p>Here are also the steps that have to be taken to make light clients possible for Ethereum. At some point in the past, Jae Kwon (the founder of Cosmos) said that the light client is the holy grail of interoperability, so somehow it must be important. So what is a light client? In contrast to a full node, a light client does not download the whole blockchain and does not need to be online all the time. The resources a computer needs to run one are very small compared to a full node. Still, a light client is different from just requesting some blockchain data from a full node that provides it via an API. Let's call the latter a naive client. The naive client trusts whatever information it gets, and it might send some transactions to the full node, hoping they will make it onto the blockchain. The naive client does not know whether this has really happened or whether the full node is making up a different reality; the only way out is to connect to another full node as well, check that both realities match, and hope that the two are not in collusion. The light client, in contrast, holds some information (validator sets and block headers) that allows it to verify most things, and it is quite hard to fake things when talking to a light client. If the light client connects to several full nodes, it becomes practically impossible. Obviously this is something we want to have, especially since most blockchains see more read than write operations. Statelessness is connected to this, and it also helps with rollups, which we will discuss next.</p>
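<p>In caricature, the light client's core check looks like this. A real light client verifies actual signatures and Merkle proofs; this sketch (with invented structures) only shows the trust model:</p><pre>
# What a light client checks, in caricature (invented data structures).

def light_verify(validator_set: dict, signatures: set) -> bool:
    """Accept a block header only if more than 2/3 of the known
    validator power signed it. A naive client skips this entirely
    and just trusts whatever the full node serves."""
    signed_power = sum(power for val, power in validator_set.items() if val in signatures)
    total_power = sum(validator_set.values())
    return signed_power * 3 > total_power * 2

validators = {"alice": 40, "bob": 35, "carol": 25}
print(light_verify(validators, {"alice", "bob"}))   # True: 75 of 100 signed
print(light_verify(validators, {"carol"}))          # False: a made-up reality
</pre><p>A full node can still withhold data from such a client, but it cannot invent a chain that the validators never signed.</p>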
<figure><img alt="" src="https://cdn-images-1.medium.com/max/711/1*gGrtdABJ_FCaLXE8mlgsfA.jpeg" /><figcaption>The green rectangle depicts the part of the Ethereum roadmap which will enable rollups.</figcaption></figure><p>Rollups are very interesting, but we have a big problem here: this article is already very long, and we can't double its length by explaining all kinds of layer-2 solutions. If you know what layer-2 protocols are, great; if you don't, I'll try to make it short. The basic idea of all layer-2 solutions is to take things that originally happen on the blockchain and do them off-chain. This is not an intrinsic scaling of the blockchain, but since more data and even more transactions can be processed, it is an extrinsic scaling method. There are many different ways to implement layer-2 solutions. Channels, for example, let different parties make many transactions between each other, and only when the channel is closed do they need to make a real transaction on the blockchain. To make this secure, they have to lock up the coins that are used in the channel.</p>
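<p>A toy two-party channel in Python makes the trick obvious. This is a bare-bones sketch of the concept; real channels secure every balance update with mutual signatures and a dispute period, all of which I omit here:</p><pre>
# Toy payment channel: many off-chain updates, one on-chain settlement.

class Channel:
    def __init__(self, deposit_a: int, deposit_b: int):
        # opening the channel: both parties lock coins on-chain
        self.balances = {"a": deposit_a, "b": deposit_b}
        self.open = True

    def pay(self, sender: str, receiver: str, amount: int) -> None:
        # off-chain: the parties just co-sign a new balance sheet
        assert self.open and self.balances[sender] >= amount
        self.balances[sender] -= amount
        self.balances[receiver] += amount

    def close(self) -> dict:
        # on-chain: only the final balances hit the blockchain
        self.open = False
        return self.balances

ch = Channel(100, 100)
for _ in range(60):
    ch.pay("a", "b", 1)   # 60 payments, zero on-chain transactions
print(ch.close())         # one settlement: {'a': 40, 'b': 160}
</pre>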
<figure><img alt="" src="https://cdn-images-1.medium.com/max/796/1*QmYKLoS7brXmFURD4zeJWA.jpeg" /></figure><p>Over time, these layer-2 technologies have evolved into multi-party channels, commit-chains (for example Plasma), sidechains or the aforementioned rollups. If one investigates these concepts and their evolution over time, it can be seen that these sidechains or off-chain solutions become more and more like smaller blockchains attached to the big one; the newer versions also have penalties for misbehavior and similar concepts. So one remark I want to make here is that IBC allows attaching other blockchains in a fashion very similar to sidechains, with the difference that it is a full-fledged blockchain being attached, and that it is a 2-way instead of a 1-way attachment. In some abstract sense one could say that IBC is the most scalable and adjustable approach to sidechains, and a network of blockchains connected via IBC is a layer-2 solution where each connected blockchain is a second layer to every other.</p><p>But why are we talking about this in the Ethereum 2.0 section? Because zk-Rollups are an interesting concept, and in recent days Vitalik Buterin seems to have raised their priority a lot. What can they do for Ethereum? They allow moving the computation of smart contracts off-chain and only settling the result on-chain. This reduces the data on the blockchain by a lot, and much more can happen in one block. But what happens if some actor is malicious? Well, it is possible to challenge the result of such a computation, and if someone does, the case is inspected and penalties apply if wrongdoing is found (strictly speaking, this challenge game is how optimistic rollups work; zk-rollups instead ship a cryptographic validity proof with every batch).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/1*FUNyuDM8D2mX2jkMWpAVfg.jpeg" /><figcaption>Rollmops, not to be confused with rollups.</figcaption></figure><p>Another awesome thing is that these improvements do not need the PoS part of Ethereum to work. They can be realized with the improvements shown in the green rectangle, and thus allow for a speedup of the Ethereum mainnet in the near future. But now let's proceed with the really advanced stuff:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/722/1*2OU9Sdmugmx0Pw4SvgGq9Q.jpeg" /><figcaption>The blue rectangle depicts the part of the Ethereum 2.0 roadmap with the most advanced technology. Here sharding becomes possible, and some more modern things like post-quantum cryptography and SNARKs/STARKs become available, as well as CBC Casper.</figcaption></figure><p>The thing here is that CBC Casper is not yet a finished specification but more of an open research effort. Still, it defines the direction in which sharding will work in Ethereum 2.0. Basically it extends LMD GHOST + Casper FFG by what is needed to make shards work together and solve the <strong>Data Availability Problem. </strong>The <a href="https://github.com/cbc-casper/cbc-casper-paper/blob/master/cbc-casper-paper-draft.pdf">paper</a> itself is not finished, and a lot might still change in the future. Understanding this, we know why it might take 2 years or even more until Ethereum 2.0 is available with all its main features. There are also some additional features, namely <strong>post-quantum cryptography </strong>and <strong>zk-SNARKs/STARKs</strong>. Post-quantum cryptography basically means changing the cryptographic algorithms to ones that are resilient against attacks by quantum computers. This is not really important right now, since quantum computers have very few qubits at the moment. But one day they will become bigger, and there are some problems that can be solved very efficiently by quantum computers: integer factorization and the discrete logarithm problem, also on elliptic curves. Unfortunately these are the building blocks of the asymmetric cryptography that is used extensively in blockchain technology. But since there are quite good alternatives, we don't have to be afraid that quantum computers will destroy cryptocurrencies. Or at least not those able to adapt to the future (I'm looking at you, Bitcoin).</p><p>The zk (zero-knowledge) stuff is about anonymity, which means that transactions can be processed while the parties stay anonymous. Wait, didn't we have zk-Rollups in the near future already? Why are they now again in the distant future? Well, there is a difference between rollups being based on zero-knowledge schemes (zk) and the state transition of Ethereum itself supporting them (the latter is more complicated). Some people think Bitcoin and Ethereum are already anonymous. This is wrong: <strong>pseudonymous </strong>is not the same as <strong>anonymous</strong>. If you have an address on Bitcoin or Ethereum, it is possible to link it to your identity, and once someone has done that, he or she can track all your actions on the blockchain. With zk-SNARKs it is possible to make truly anonymous transactions that cannot be tracked. This is especially useful if one day elections and votes happen on a blockchain, since votes should be anonymous. Unfortunately this won't stop some egocentric politicians from claiming the vote was rigged.</p><h3>Summary &amp; Comparison</h3><p>It is unbelievable, we made it. We made it to the final section of this article. Here we will have a look at what all of this means for real-world applications. First we compare the features of these blockchains and what they are good for. Then we go through some exemplary applications, and finally I will present the biggest concerns in the crypto community about each project and what I think about them.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*81-Z38Gy_F4AvQlDdsGJqw.png" /><figcaption>This table gives an overview by describing how each technology tackles each category.</figcaption></figure><p>Looking at the table, there is something we can see quite easily: on all points where Cosmos shines, Ethereum is weak, and vice versa. Polkadot is somewhere in the middle. This is no surprise, since the approaches of Cosmos and Ethereum differ a lot, while Polkadot tries to give projects a lot more autonomy than Ethereum but also offers more ready-made infrastructure, with Shared Security, than Cosmos.
The three important things here are Infrastructure, Autonomy and Data Access. Why? Because depending on your application, you need certain aspects of these fields. We have discussed all of these aspects already, but here is a short recap. Infrastructure means how much blockchain is already there for you to build your dApp on. "Already there" does not mean how much has been realized, but how far the technology itself is designed to let you use an existing network. Autonomy means how far your freedom is limited; it is in some sense the other side of the coin, since a big and strong infrastructure enforces many things, which limits your autonomy. We discussed the most important parts of autonomy in the Compatibility section, and the following examples will show more of its importance. Data Access is the ability to read the data of other applications or smart contracts. This is interesting for financial derivatives, or for Vitalik Buterin's most beloved example: CryptoDragons eating CryptoKitties. So whenever an application is interconnected with many other applications, it is nice if the integration is high. This is also somewhat contrary to autonomy, since integration is much easier when everything is standardized. Since we already know about the love of triangles, here comes the next one:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*h029h9IVlfxSudC2Y513Jg.png" /><figcaption>This triangle shows 3 different aspects of Blockchain 3.0 and where each technology shines. The more a circle covers, the more of this aspect is available with the given technology.</figcaption></figure><p>This image further shows how Polkadot and Ethereum 2.0 overlap more with each other than with Cosmos. So Polkadot sometimes calling itself the "Ethereum-Killer" might be an accurate self-classification. Whether it can really kill Ethereum is an interesting question we will discuss here.</p><p>So let's now discuss some examples:</p><p><strong>DEX</strong><br>This is short for decentralized exchange and is a well-known concept for many in crypto. Today all useful DEXes are built on Ethereum, and you can only trade Ethereum tokens on them. That limits their usefulness quite a lot, and in addition Ethereum fees are high, so they are not cheaper than centralized exchanges. But at least they have the real advantage that the exchange cannot go bust and lose all your money. So we see that only 2 things keep DEXes from exploding and becoming the main thing: scaling and cross-chain token transfer. Both are solved by all Blockchain 3.0 approaches. Nice. Is there an advantage for either technology? Well, since Data Access is not necessary, it all comes down to Autonomy vs. Infrastructure. Infrastructure means a startup can build such a thing very fast, and given that many DEXes already exist and only need to upgrade to allow non-Eth tokens as well, it looks like Ethereum 2.0 has a major advantage here. But autonomy lets you collect the fees yourself, so if a project decides to build a DEX, it might want to keep the fees instead of letting this stream of income run to the Ethereum validators. So an autonomous solution on its own chain (the Cosmos approach) will have less friction and might win over time. But there is also the time to launch to consider, and here this is a big one. There is no first-mover advantage with Ethereum 2.0 if it takes 2 years until it goes online, while Cosmos IBC becomes available on the 18th of February. So in a few days.
It is very unlikely that no project will manage to build an interblockchain DEX within 2 years, especially since it has already been possible for some years now to build such a thing on Cosmos and go live soon after. Polkadot lies in between: its launch is not as far away as Ethereum's, and it offers more autonomy, but it is also not an ecosystem like Ethereum, where many, many tokens already exist.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/598/1*1TkoeVCutNnSaFZyw5ozNA.jpeg" /></figure><h3>DeFi</h3><p>This stands for decentralized finance and is the big hype right now. Well, it is of course a big thing, since the classic finance sector with all its derivatives and options is huge. I think there are 2 important classes here: one is synthetic assets and the other is derivatives. Synthetic assets are real-world assets, for example Apple stock, mirrored on the blockchain. This already exists on Ethereum (Synthetix Network) and on Terra (Mirror Protocol). Terra is based on Cosmos, so we might say the future is already here. Derivatives have also existed for a long time; these are financial products with an underlying and some interesting mechanic. Typical examples are stock options, longs or shorts. For example, if you think a company will lose value in the future, you might want to buy sell options (puts) to hedge the risk of losing the money invested in the stock. Or you could buy these puts without holding the real stock; then you are betting on the price moving down. You can also go all in and short a stock, for example of a company selling games in retail stores. Shorting means that you borrow the stock and sell it. When the price drops, you buy it back and make a profit by returning it and cashing in the difference. The problem is that the price of the stock can also rise, and then you have to buy it back at a higher price, since you owe someone stocks of that type. This has happened with Gamestop stock (GME) and was quite popular in the media.</p>
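<p>Since GME came up anyway, here is the arithmetic of a short position as a tiny Python sketch. The numbers are made up:</p><pre>
# Profit and loss of a short position (made-up numbers).
def short_pnl(sell_price: float, buyback_price: float, shares: int) -> float:
    """You borrowed the shares, sold them at sell_price, and must buy
    them back at buyback_price to return them."""
    return (sell_price - buyback_price) * shares

print(short_pnl(20.0, 5.0, 100))     # price dropped: +1500.0 profit
print(short_pnl(20.0, 300.0, 100))   # GME-style squeeze: -28000.0 loss
</pre>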
<p>Derivatives can also be written on crypto coins, so you can short Ethereum or other tokens as well. Like DEXes, this has only been available for Ethereum-based tokens, since everything needs to be on the same network. So buying call options on Bitcoin in a decentralized way is not possible without interblockchain communication. This field will open up with Blockchain 3.0, and it might become a big thing.<br>Discussing this is quite similar to DEXes, so in some sense Cosmos would win, but especially for derivatives it is very nice to have Data Access. So if many products that are to be turned into derivatives are built on Ethereum, then this can be built very efficiently and quickly with Ethereum 2.0. Polkadot unfortunately lacks this feature, so it cannot combine the advantages of both worlds for derivatives. However, Polkadot has Shared Security, which is interesting for DeFi products; but if someone builds a DeFi platform, the scope is so big that the investment in self-hosted infrastructure is manageable. Then there is also instant finality, which is nice for DeFi in many circumstances, and Cosmos has it. If you try out Mirror Protocol and compare it to Synthetix Network, it is much better for the user experience to have confirmation after 7s and move on quickly. Especially since these things are often bought, put into a liquidity pair and then staked or activated at the end, so that every step can only begin after the previous one is done, this feels much better with fast confirmations.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/735/1*0hjOkja0P1Qf07YzQZVuiw.jpeg" /></figure><p><strong>Games<br></strong>This is a very wide field, and games differ a lot: playing poker makes different demands than playing World of Warcraft, especially when looking at the infrastructure needed to support it. Poker could run fully on today's Ethereum. A transaction costs roughly $10, which is a lot, but if you play poker with pots of $10,000 and more, this might not be a big problem. Unfortunately, most players play somewhat smaller pots, so fees become a problem here. And poker is a game with a really low number of transactions per game; other games are much more complicated. Often such games resort to having only in-game items and game outcomes on the blockchain. I'm working on a <a href="https://crowdcontrol.network">decentralized trading card game</a>, where users can create their own cards and vote on the cards of others to bring the game into balance. Magic the Gathering is the origin of this genre, and cards usually cost between $0.01 and a couple of hundred dollars, some even more, but most cards are valued at less than $1. So transaction fees of $10 are not workable here, even if one moves the game off-chain and only reports game outcomes and ownership of game items on the blockchain.</p><p>This is why CryptoKitties are so expensive, and all the other game items that came after them, be it robots that fight each other or other collectibles: there is no sense in having collectibles valued lower than the transaction fee. So here we understand why Blockchain 3.0 will be a total game changer for games. Games are often indie projects, or they start small and become quite big once a big partner is found. This means that for many projects it is not feasible to raise enough money to buy a slot in the Polkadot network. The game then has to resort to using smart contracts on another Parachain, but then the fees flow off to other validators and not to the ones actually programming the game or funding the project. For Ethereum 2.0 the same applies. So if someone builds something like CryptoKitties, which is more about collecting rare assets than being a real game, then it is fine. But if a game is more like World of Warcraft or a trading card game, Polkadot and Ethereum 2.0 have a big disadvantage compared to Cosmos.</p><p>Here autonomy is really important. The idea of <strong>application-specific blockchains</strong> makes a lot of sense for games. In addition, a Shared Security model is not really necessary, because when games launch there are not yet millions of dollars in game assets. This is different for DEXes or DeFi, where anybody hosting such a project wants to onboard as much value as possible right from the start; then you need security. In contrast, the valuation of the staking token of a game's blockchain can grow together with the game assets collected by the players, thus building up security over time. Another good addition is the freedom in setting up the infrastructure. If someone wants to build something as crazy as WoW, it is possible to split different areas across different zones, to scale it to infinity.
This can be done in such a way that interacting users end up on the same zone.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/960/1*1UsL-2spSnndldc2RuI5LA.jpeg" /></figure><p><strong>DAOs<br></strong>This stands for Decentralized Autonomous Organizations. A big and famous DAO was <strong>TheDAO </strong>in 2016, which ultimately collapsed because its smart contracts were not well designed. The idea was to collect funds, invest them in startups and give the returns back to the members of the DAO; in essence, a decentralized investment fund. DAOs can be all kinds of things, and this example again works on Ethereum, because high transaction fees are not a big deal if you are investing $100k or millions in a decentralized way. But you can also create a sports club as a DAO, or a workers' union, or a political party. These examples are quite different, because there is no way this will work if every vote costs the voter $10. When a sports club elects a new treasurer and everyone has to pay $10 to give their opinion, this might be problematic. After that the president is elected, and so on. So the yearly members' meeting quickly costs a couple of hundred dollars, for each participant.</p><p>On Ethereum 2.0 these fees might become really small, but members still have to get some Ether to participate, which might be annoying. So for smaller DAOs, Cosmos might be the real winner here, especially because once a DAO grows to a meaningful size, it can easily connect to other Hubs and become part of the network. Since the range of possible DAOs is quite wide, there are of course other examples where Ethereum 2.0 will shine; these might be concepts where the DAO interacts a lot with other Ethereum dApps. But most DAOs are more isolated than other applications, and the advantages of Polkadot and Eth 2.0 over Cosmos are not relevant compared to having maximum autonomy.</p><p><strong>Interactive Smart Contracts<br></strong>It is hard to find a suitable name for this, but what it means is dApps that are strongly integrated or interacting with other dApps. One example was already given, and that is CryptoDragons eating and digesting CryptoKitties. You could have a virtual hotel with a casino, where on the ground floor there are a lot of slot machines, and anyone can buy such a machine and host their smart contract on it. On the upper floors there are meeting rooms where DAOs come together, and so on. This makes a lot of sense on Ethereum 2.0 and does not really work with Cosmos. With Polkadot it might also work quite well; it strongly depends on how much you need to read the data of other smart contracts. This is the main difference between Polkadot and Ethereum 2.0: if your interconnected dApp does not really need this feature, then the additional freedom of Polkadot is an advantage. If you need maximum integration, then Ethereum 2.0 is the winner.</p><p>Whenever many different actors build something together out of various smart contracts, we are in this realm. The vast majority of dApps that come to mind do not really need this feature. In most cases there is a single product built by one group, and if it is able to transfer and receive foreign coins, it is fine. Even the ability to send data to other blockchains might not be necessary for many dApps. But we have to keep one thing in mind here: this is the case because we come from a way of thinking where such things were simply not possible.
Maybe totally new things will emerge from this, and then a level of data integration like Ethereum 2.0’s is a killer feature. We don’t know this yet. Maybe it will be disappointing, and the examples like this hotel with virtual rooms turn out to be funny things to play with, like CryptoKitties, and not a fundamentally new way of interacting.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*-f259ELUU5M5BqDE0ceITg.png" /><figcaption>An overview of different types of dApps and how each technology performs.</figcaption></figure><p>We have now arrived at the very last part, and this is the candy at the end. The “when moon” questions are answered here. Let’s first discuss the typical criticism of each project in the crypto community:</p><p><strong>Cosmos is shit, because it accrues no value!<br></strong>This critique is very interesting, and the main idea is that Cosmos, or better said the native token of the Cosmos Hub, Atom, is not able to become valuable. For Polkadot you need to buy DOTs to become a Parachain, and for Ethereum you need to buy Ether to run the smart contracts, or better said your users need to buy Ether for the smart contracts. But with Cosmos anyone can just start and build their own Cosmos based blockchain, and Atom does not get any value from that. This is actually right. It results from trying to overcome the “one coin to rule them all” idea. But the critique is also short-sighted. On the one hand it reduces the value generation of a token to forcing others to buy it in order to use the tech. This is not correct. For example, if you have a DEX on a platform, then users will transfer coins over the network in order to use the DEX. All validators profit from these transfers through fees, even if the trade on the DEX is between 2 different projects and neither of them is built with the same tech as the DEX. So if a Polkadot based coin is traded for an Ethereum based token on a Cosmos DEX and the Cosmos Hub connects these chains, then the Cosmos validators profit from this. So there is another reason why Atoms might have value: using the Hub for routing tokens to other Zones gives fees to Atom stakers. On the other hand, if you say that Polkadot and Ethereum will make great profit from forcing projects building on them to buy their coin, then you are basically saying that you understand this mechanism but the projects building on Polkadot or the users of smart contracts on Ethereum will not. Because the Cosmos based solutions might compete with the others: if you can get the same service on Cosmos with lower fees than on Ethereum, you might just use the Cosmos version. If you can build your project with Cosmos technology and not pay the auction price of a Polkadot slot, then you might just go with Cosmos. If you think these projects can successfully be forced into paying something they would not have to pay with Cosmos, then you are basically betting that the developers of these projects understand less than you do. I don’t want to take that bet, since I know that I’m a real idiot. And furthermore, if for many projects it is cheaper and easier to build with Cosmos, then which network will most likely have the highest number of transactions, generating the most fees? Well, the one that did not force others in the first place.</p>
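<p>A toy model of that routing argument (my own sketch, not real IBC mechanics; the fee number is made up):</p><pre>
# Toy model: when a transfer is routed zone -> hub -> zone, the chains
# in the middle of the path collect fees, here the Cosmos Hub.
HOP_FEE = 0.01   # hypothetical fee per hop, paid to that chain's stakers

def route(amount, path):
    fees = {}
    for chain in path[1:-1]:   # every intermediate chain earns a fee
        fees[chain] = fees.get(chain, 0) + HOP_FEE
    return amount, fees

# A Polkadot based coin traded against an Ethereum based token,
# with the Cosmos Hub connecting the two chains:
print(route(100, ["polkadot-zone", "cosmos-hub", "ethereum-zone"]))
</pre>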
<p>Edit: I have had a personally frustrating experience with the Cosmos community managers. When I wrote this article, that experience had already happened, and I did not find it fair to put something like it into the article, since sometimes shit just happens. However, I have now had a second frustrating experience, and I can rule out that it is just a misunderstanding. When Hackatom 5, the Cosmos hackathon, took place, there was a community winner called “King of Cards”. This project copied the frontend from our Cosmos based trading card game and handed it in. Somehow nobody noticed that the GitHub project does not have any blockchain running (somehow they did not bother to copy the Cosmos blockchain), and nobody got suspicious about a single commit adding a complete website, or about that website connecting to a blockchain on another domain (ours). That is shit that happens, not a big deal, and I told the organizers. It was a community prize, so the community had voted for this project; it was not selected by the jury. I said they should declare us the winners, but I totally understand that they did not. I did demand, however, that they at least mention us as the project that was plagiarized here. They did not. This was frustrating for our team. I tried to explain to the community managers that most plagiarism does its harm by making genuine ideas more public without giving attribution to the originators; only a small part of plagiarism is actual products being copied in low wage countries and sold for a cheaper price. Well, they did not really care. I did not understand that, since it would make sense to support your own community, but maybe that’s just my opinion. Then they proposed to write something about our project, say spotlight us in some article, and I said, yes, that would be nice. In December I was told it would take a bit longer. In January I only got an answer after asking several times and was told that the Cosmos upgrade was drawing a lot of their time, which is totally understandable. I asked again how the article about us was going in February, March and April, once a month, trying not to be annoying, but in none of these cases did I get an answer. So for a second time, as a community member who really tries to build awesome stuff for Cosmos, I was left standing in the rain by the community managers. Since after several months I can’t believe this is just some random misunderstanding, I think it is not unfair to describe this experience in this article. It is a personal experience, and others might have different and hopefully better ones. Furthermore, I highly appreciate the marketing mission launched for Cosmos (see <a href="https://www.mintscan.io/cosmos/proposals/34">https://www.mintscan.io/cosmos/proposals/34</a>). I have had contact with some of the administrators of this fund, and all of them have always been helpful and thoughtful. This fund gives me hope that Cosmos can achieve a better marketing and community approach than what I have experienced so far.</p><p><strong>Polkadot is shit, because Parity Wallet was hacked!<br></strong>The company that started Polkadot is also a provider of some widely used Ethereum infrastructure. The Parity wallet is from them, and it was hacked twice. The first time, in mid-2017, the hacker stole 150k Ether (~30M USD at the time); the second time, in November 2017, the “hacker” did not steal anything but froze 500k Ether (~152M USD at the time) held in Parity wallets, accidentally, so the “hacker” claimed. The problem is that this should not have been possible in the first place, and that is Parity’s fault. The hack is the result of an incomplete initialization of the wallet’s smart contract, plus an unneeded function to kill it.</p>
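<p>In spirit, the flaw looked like the following toy model (Python standing in for Solidity; this illustrates the bug class, not Parity’s actual code):</p><pre>
# Toy model of the Parity wallet library flaw: ownership that was never
# fixed at deployment, plus an unneeded kill switch.
class WalletLibrary:
    def __init__(self):
        self.owner = None      # initialization left incomplete
        self.dead = False

    def init_wallet(self, caller):
        # Bug 1: re-initialization was never locked down, so any
        # caller could become the owner at any time.
        self.owner = caller

    def kill(self, caller):
        # Bug 2: an unneeded suicide function; once the shared library
        # is dead, every wallet depending on it is frozen.
        if caller == self.owner:
            self.dead = True

lib = WalletLibrary()
lib.init_wallet("random-passer-by")   # anyone claims ownership...
lib.kill("random-passer-by")          # ...and freezes dependent wallets
print(lib.dead)                       # True
</pre>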
<p>Polkadot collected 480k Ether in its ICO, of which 2/3 were frozen when the second of these hacks happened. That was really sad news for Polkadot, but luckily it was able to run a second ICO, or rather token sale, in which 60M USD were collected, so the project was not ultimately harmed by these events. Besides the direct damage of the lost funds, however, these events raise some big question marks over whether the Parity team might make more mistakes in the future, causing more severe hacks. It might also mean that they have learned their lesson and due diligence will be on a very high level from now on.</p><p>Such things always have a taste of finger-pointing, but I still decided to put it here, because it has a lot to do with technical expertise, which I think is important. For Cosmos there is also an event I could have picked, which is the founder Jae Kwon going a little bit crazy. This happened after Cosmos launched its mainnet, while IBC was still under heavy development. Somehow he identified with being “Cosmuhammad Bitcoin Jaesuestain”, some kind of crypto prophet madness. These events were crazy when they happened, but the Cosmos team was able to move on and delivered their vision on February 18th, 2021, so the founder freaking out a bit did not mean the technical expertise was harmed. Similar things happened to Tezos, which was also able to move on after the split with its founder. Also, I try to present the most popular critique of each project here, and that Cosmos is not able to accrue value from projects joining the network is something you hear very often, in contrast to the Jae Kwon story. About Polkadot you also hear quite often that the project is shit because you have to buy DOTs to get in, and the important fact often left out is that these DOTs are not burned but also give you staking rewards.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/439/1*0z7SPK7_HCSmNUHWcg8QHw.jpeg" /></figure><p><strong>Ethereum 2.0 is shit, because it will never be finished.</strong><br>Yeah, well, what can I say. “Never” might not be right, but it takes a hell of a lot of time. The critique is not wrong. Along with it often comes the criticism that the plan for Ethereum 2.0 has changed quite often and might change again in the future. Some people even predict that shards will never come and everything will be done with rollups, because shards are too complicated. Well, if that is true, then Polkadot must have the same problem, since Parachains and shards are not that different, and Ethereum could also lower the seamlessness of its smart contracts to the level that matches Polkadot. But I think this prediction is not correct. Shards are not crazy magic nobody actually understands; they are just a lot of engineering work, even after the basic approach has been finalized. Another reason why it takes so long is of course that Ethereum has an existing ecosystem and a blockchain already running, built on older technology. But this has a big advantage: Ethereum is already on all exchanges, everyone knows it, and after the transition there is no reset ecosystem but instead a big community waiting for the upgrade.
So even if this critique is correct, the question is whether Cosmos or Polkadot can outrun Ethereum 2.0 with their one or two years of head start.</p><p><strong>Final words<br></strong>Thank you for reading, and congrats, you made it to the end. It got a bit longer than I intended. Some things will be wrong and some outdated; that is hard for me to avoid, especially since I learned some of this quite a while ago and it might have changed recently. I might be biased in many things. That is because I’m human, and while I try to minimize it, it is not really possible to overcome it totally. If you find mistakes or outdated material, just let me know in the comments. I’ll be happy to correct it.</p><hr><p><a href="https://medium.com/coinmonks/polkadot-vs-cosmos-vs-ethereum-2-0-for-real-idiots-3b6f0e0cfb2f">Polkadot vs. Cosmos vs. 
Ethereum 2.0 — for real idiots</a> was originally published in <a href="https://medium.com/coinmonks">Coinmonks</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to fix climate crisis for real idiots]]></title>
            <link>https://patrick-wieth.medium.com/how-to-fix-climate-crisis-for-real-idiots-66aa0fc32763?source=rss-8e91a3236ca6------2</link>
            <guid isPermaLink="false">https://medium.com/p/66aa0fc32763</guid>
            <category><![CDATA[environment]]></category>
            <category><![CDATA[global-warming]]></category>
            <category><![CDATA[climate-change]]></category>
            <category><![CDATA[carbon-emissions]]></category>
            <category><![CDATA[paris-agreement]]></category>
            <dc:creator><![CDATA[Patrick Wieth]]></dc:creator>
            <pubDate>Wed, 12 Feb 2020 15:57:34 GMT</pubDate>
            <atom:updated>2020-02-12T16:10:40.479Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*EtQ-qPeMZ0cZhSVnrpwV4Q.jpeg" /></figure><p>Whenever you face a problem and want to solve it, there is a hierarchy that should be considered, especially if you are a real idiot, and we are all real idiots once we deeply acknowledge our limits. So here we go:</p><ol><li>physical realm</li><li>epistemic sphere</li><li>technological reach</li><li>resources limit</li><li>time-frame</li><li>economic feasibility</li><li>sociology</li><li>psychology</li></ol><p>That’s a lot, but we will break it down and then it becomes easy. The first is the physical realm, and fixing a problem in the physical realm is not possible. We must accept that. The ultimate example is time travel. People want to travel through time; it is possible, but only forward. We do it all the time, but travelling backwards is not possible. Whoever works on finding a way to travel backwards is wasting their time, because of causality. The next level is whatever we can understand. It might be that we don’t understand causality and time travel; in essence we don’t understand how our world really works, and time travel is actually possible, just so far outside our understanding that we merely think it is impossible. So if our understanding of reality is correct and there is causality, there can be no time travelling to the past. If we don’t understand enough of our reality to have a meaningful concept of time travel, then the problem lies in the “epistemic sphere”. If it is simply impossible in our reality, then the problem lies in the “physical realm”.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/660/1*WixYeUTc4ILBeWIguecX-w.jpeg" /><figcaption>Either it was colder in the past or our grandparents are exaggerating a bit.</figcaption></figure><p>For example, some bacteria or viruses enter a host where there are plenty of resources. They multiply and grow exponentially, and at some point there is not enough food left; the daily stream of new food raining or flowing into the host is also not enough, and unfortunately all the bacteria have to die. If they knew this from the beginning, maybe they could reproduce less and not die from their inability to live sustainably, but well, they are the real idiots here. For us humans there must be problems like this too, because our understanding of the world is limited. Some people claim that climate change is too complex and that CO2 is not the root cause of increasing temperature; in effect they argue the problem is limited by our ability to understand. However, there are about 10k scientists who say otherwise. In addition, the greenhouse effect is not totally incomprehensible. So happily we can move down this hierarchy, and that is very good news, because the lower a problem sits, the more easily it can be fixed, or can be fixed at all.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/420/1*J3LgnVULhMsjpk9oEvBD4g.jpeg" /></figure><p>Number 3 is “technological reach”, and it is just what we can do with our technology. We know that fusion power plants are possible to build, but our technology is not there yet. Thus we don’t know exactly what such a power plant will look like, and it might be that we also face a “resources limit”. A resources limit mainly means that with all resources within human grasp it is not possible to solve the problem. If it turns out that all the material on Earth is not enough to build a fusion reactor, then the problem would be on this level. 
Or if you want to build a second moon that consists entirely of diamonds, well, then there are not enough diamonds around to do that. Forging all these diamonds from carbon might be possible, but the time-frame for that is extremely long. “Time-frame” is the next point, and these three points belong together. In many cases the resources we have today might not be enough, while the resources in 1000 years are sufficient to solve a problem. Whether it is possible to harvest those resources is often itself a matter of technological reach, and technological reach is a matter of the time-frame: if you wait long enough, there might be a technology that solves the problem, however far out of our current scope. So these 3 realms belong together, and given enough time it might be possible to solve the problem, while within our lifetime there is no way. And again the good news is that fixing climate change is not stuck in these 3 realms. We have wind power, we have solar power, we have nuclear power; all three are mostly carbon neutral, are available today, and can be built in a reasonable time-frame.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/700/1*ZoWqVSoLa2o7qi8Ldy9DGQ.jpeg" /><figcaption>If you think the meme game in this article is getting absurd, be assured it has only just begun.</figcaption></figure><p>The final 3 levels are again connected. Some things are possible but do not make economic sense. Some things make economic sense but cannot be done because of politics, contracts and social restraints, for example building things like the Three Gorges Dam. It makes economic sense to build hydro power, but not in every society is it possible to force people out of the valley where the water will be. It might also be that such a project is no longer economically feasible if the compensation is fair. This is why these levels are interconnected in the same way the previous three were. So these 8 levels can be divided into 3 meta-levels: a) really not solvable, b) not solvable at the current state of the art, and c) solution blocked by social and individual constraints.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/672/1*WMUVM_YyHPEBxRo91k1Fdw.jpeg" /></figure><p>For climate change we arrive at category c), which is very good, because here we are within the reach of our current society. We don’t know how expensive the damage will be if we don’t act. But something fundamental has changed in the last couple of years.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/723/1*RBlG9-i37QNaKY8Zow5SQw.png" /><figcaption>This chart shows how expensive it is to produce electricity with newly built power plants. It factors in inflation and capital cost and does not include subsidies. Numbers are for the United States, but it is very much the same for other countries like the UK, Germany or Japan.</figcaption></figure><p>The above chart shows that the old paradigm, in which solar and wind are just too expensive to be competitive, has ended. 10 years ago, wind was about double the price of the best alternatives and solar was 8 times more expensive. Today wind and solar are competitive and don’t depend on fossil fuel prices. In essence, only the most advanced gas turbines can compete with these renewables; coal and nuclear are more expensive. Of course there is the problem of storing the energy of renewables, since fluctuations are a problem, but it is nothing that will stop this general trend.</p>
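<p>The measure behind such charts is the levelized cost of electricity (LCOE): lifetime costs divided by lifetime output, both discounted. A minimal sketch with made-up numbers, not the chart’s actual data:</p><pre>
# Levelized cost of electricity: discounted lifetime cost divided by
# discounted lifetime energy output. Example numbers are invented.
def lcoe(capex, opex_per_year, mwh_per_year, years, discount_rate):
    cost = float(capex)      # renewables: almost all cost is upfront
    energy = 0.0
    for t in range(1, years + 1):
        cost += opex_per_year / (1 + discount_rate) ** t
        energy += mwh_per_year / (1 + discount_rate) ** t
    return cost / energy     # $ per MWh

# Hypothetical solar farm: big upfront cost, tiny running cost.
print(round(lcoe(capex=1_000_000, opex_per_year=10_000,
                 mwh_per_year=2_000, years=25, discount_rate=0.06), 2))
</pre><p>The formula also shows why already-deployed plants look so cheap: their capex term is simply gone.</p>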
<p>Especially solar is still getting cheaper, and the price of batteries for storing energy is also falling a lot over time. But how can this be? Isn’t the media full of people saying this stuff is too expensive? Sure, there are people whose main motive is not aligned with the best outcome for the whole society. Sometimes something is very good for an individual but not good for society as a whole. If you own a lot of stock in coal and the coal power plants are already built, then this type of energy might be the best in your opinion. There might be a reason why some people think that clean coal is the best, but it is no longer the most efficient source of energy even if you neglect carbon emissions. Also, the price for producing electricity from coal in the chart above is without sequestration, so if you want to speak of clean coal, you end up with an even higher price. Sequestration means putting the emitted carbon dioxide back into the earth.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/480/1*qDRKTy5j5QmYWh3enNrILQ.jpeg" /></figure><p>So all of this means it is economically best to build solar and wind? Well, it is not that easy. The chart shows the price of energy from newly built power sources. The cheapest option is to just go with whatever is currently deployed. The renewables in particular have almost all of their cost upfront and are very cheap to run. The price per MWh above is calculated over the lifecycle of a power source, but already deployed sources have no remaining construction cost, because they are already there. Furthermore, if you build solar and wind today, you don’t profit from future increases in efficiency and, even worse, you compete with the solar and wind that will be built in the future at a cheaper price. Wait, doesn’t this sound like it never makes sense to build new technology, because this will always be the case? No, some special circumstances are necessary for this. Take silicon chips, for example: every 1.5 years the computing power doubles for roughly the same price. Great. So in 1.5 years whatever silicon chips you develop today will be outdated; it still makes sense to run this business, because you need much less than 1.5 years to sell enough chips to make a profit. If you deploy power plants, it usually takes more than 10 years to break even. Since solar cells also depend on silicon technology, the tech advances before those 10 years are over. It’s not as fast as for computer chips, but it is still fast. And finally, photovoltaics (the technically correct term) is just on a par with the rest regarding efficiency, so you don’t totally outcompete the old plants with new ones, as is the case with new computer chips. These circumstances lead to a market where it is risky to invest heavily in this technology. Furthermore, there is the fluctuation of these types of energy: the more we build, the more we need to buffer their fluctuations, which increases the cost. For countries with a low percentage of renewables this can be buffered away in the existing supply network, but that ceases once renewables are deployed on a big scale.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/1*tOrRDQO6ZqxzUq-7UU9GRA.jpeg" /><figcaption>This is a very classic meme. “More ice than ever” is just a wrong statement. 
NASA gives the data at <a href="https://climate.nasa.gov/vital-signs/ice-sheets/">https://climate.nasa.gov/vital-signs/ice-sheets/</a>, and it shows that there is surely not more ice than ever, but rather the least ice ever (except for pre-historic times). Al Gore’s prediction might be wrong, but he was not referring to all ice caps, rather to the Arctic sea ice in summer, which was also at its lowest. For the discussion it doesn’t really matter whether Al Gore made a perfect prediction, because the evidence for global warming still holds. A lot of Greta Thunberg’s statements will turn out wrong in the future, too. This is like Bill Gates’ statement that 640 KB of memory will always be enough: the statement might be wrong, yet the general direction he headed in was very fine. And analogous to the Al Gore citation, Bill Gates did not say exactly that. It is falsely attributed; he said that 640 KB might last for the next 10 years, which was also wrong. It is alluring to throw out a complete hypothesis just because one of its proponents states one thing that is not 100% accurate.</figcaption></figure><p>Summed up, this means the energy sources for carbon neutrality are no longer economic nonsense, but they are not yet a no-brainer. At this point we can see that there are real obstacles to solving the climate problem on the economic level. In contrast to all the levels above, we have arrived at a level where serious problems do exist and must be solved. Fun fact: we already know the solution, and it is carbon credits, or more descriptively, tradeable certificates for carbon dioxide emissions. The concept is very easy: there is a fixed amount of credits, defined by the amount of carbon that should be emitted as decided by the governments of the world. If you emit carbon dioxide, you have to buy these credits, and the proceeds of the sale are used to subsidize projects and technology that reduce carbon emissions. So on the one hand it limits the carbon that is emitted, because certificates become more expensive when there are more emissions, hence more buyers. On the other hand it subsidizes solutions to the problem. You as an individual don’t have to do anything; the companies that produce the gas for your car or the propane for your BBQ already buy the certificates and duly increase the price of their products for you. It is the best solution because it does not force one specific remedy, which might not be ideal; it just turns the external cost of carbon emissions into internal costs, which are then weighed by the individuals making decisions. Therefore the best solution will be found by swarm intelligence (the market) and not by “clever” leaders. This is always the better approach, and it is the reason why the Soviet Union no longer exists. Because in the end even the greatest leader is just a real idiot.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/735/1*c_For07hulNUjaaUO5c62A.jpeg" /></figure><p>Why does this make a big difference? Because it changes carbon neutral energy sources from being on par with conventional sources to being superior. Which carbon neutral energy source, and at which site, does not matter. If Russia prefers nuclear power over solar energy, that’s fine. If Saudi Arabia prefers solar over nuclear, that is fine as well. These are good examples where one choice makes more sense than the other because the environmental differences are big (more sun in the desert than in Murmansk). 
Once the carbon neutral sources are superior, investing in them makes economic sense. The thing we need for that is carbon credits, certificates that give a price to carbon dioxide emissions. So, perfect, the problem is solved? Well, this solution must be deployed world-wide. And guess what, many big players are just saying “well, it doesn’t help if we decrease carbon emissions, because China!”, with the US being the worst player in this regard. So many others are thinking it is not really necessary to do anything as long as the US is not joining the program. Enter the <a href="https://en.wikipedia.org/wiki/Tragedy_of_the_commons">Tragedy of the Commons</a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/495/1*DueZZt0uKpe28STa9ha_JQ.png" /></figure><p>The tragedy of the commons is basically the reason why in shared flats the dishes are never done. When you live in a shared flat and you have just finished your meal, well, you can wash all the dishes flooding the sink or just put your single plate on top of the pile of undone work. “A single plate does not make a difference” is the basic thinking. Furthermore, “the others have been flooding the sink for much longer” and “you have eaten out the last few days”, so it is not your responsibility. These rationalizations are the core ingredients of a race to the bottom. Maybe some members of the flat clean the dishes, but at some point they realize that they always do the work and stop doing it, because they don’t want to be the only idiots doing it. Then the race to the bottom fully unfolds, and close to the bottom people will start arguing that “the dirty dishes are not really a problem”, that “if you want to eat you always find something to put your stuff on”, and essentially that “the dishes are not done that rarely, remember this event 3 weeks ago, the dishes were done; it is just that some flatmates are too hysterical”. At this point deploying a solution is quite hard, and this is a key problem in the climate crisis discussion. Some folks still argue about whether climate change is a real thing; if yes, then they argue about whether it is really man-made. Having doubts is a clever thing, right? Yeah, well, at this point it means delaying work on the problem. Instead of working on a solution, the problem is denied. Remember: if the dish situation in the flat becomes unbearable enough for some, they might act and solve the problem. If you can just troll enough, then maybe someone else will pay the cost of solving the problem.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/225/1*lbybqkh2B086YyXNo-iG-w.jpeg" /></figure><p>Here we have arrived at the sociology and psychology part of the problem. Social constraints also include commitments to the coal industry, which lead to saying that coal is very clean and, of course, that our coal is the best coal; other coal in the world is inferior, and so on. Even if coal is now outdated, it might still be used because of these constraints. But keep in mind that proponents of clean coal do not ratify the Kyoto Protocol; if coal were clean, it would be easy to ratify. We have used the phrase “tragedy of the commons” here; another label would be the <a href="https://en.wikipedia.org/wiki/Prisoner%27s_dilemma">Prisoner’s Dilemma</a>. In this dilemma you are best off if you betray your partner, but if the partner also betrays, then both of you are worse off. So the best overall situation is achieved if everyone stays loyal, but that situation is not a stable equilibrium: each individual can improve their own outcome by unilaterally changing strategy.</p>
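<p>A minimal sketch of that payoff logic (the numbers are illustrative, not from any study):</p><pre>
# Prisoner's Dilemma payoffs: (my payoff, partner's payoff).
PAYOFFS = {
    ("loyal",  "loyal"):  (3, 3),   # everyone cooperates: good for all
    ("loyal",  "betray"): (0, 5),   # I get exploited
    ("betray", "loyal"):  (5, 0),   # I exploit my partner
    ("betray", "betray"): (1, 1),   # mutual betrayal: bad for all
}

for partner in ("loyal", "betray"):
    best = max(("loyal", "betray"),
               key=lambda me: PAYOFFS[(me, partner)][0])
    print("If partner plays", partner, "my best reply is", best)
# Both lines say "betray": defection dominates even though (3, 3)
# beats (1, 1) for everyone. That is the whole dilemma.
</pre>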
<p>Unfortunately, if everyone switches to betrayal, you are back in the situation that is bad for everyone. Therefore you need to find an agreement that makes everyone stay loyal. In the context of climate change, the Kyoto Protocol is such an agreement. Keep in mind that the Kyoto Protocol is from 1997: over 20 years ago the countries of the world understood the problem and decided to sign such a contract. The evidence has only piled higher since then, and we are still discussing.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/590/1*F-RMydP-5KCA-4CaG564Dw.jpeg" /></figure><p>If we think about the dish washing in the shared flat again, we arrive at a situation where the members have agreed to clean the dishes and have a contract. However, some members don’t want to join the contract, or join it only pro forma without intending to fulfill its demands. Over time this erodes the commitment of the honest members, who don’t know if their effort is worth it. This is why the members of the shared flat need to find a solution to this problem, for example removing the dirty dish from the sink and putting it into the room of the dishonest member. This is a great solution because it disincentivizes the bad behavior. Of course it only works if the dishonest member is known.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/500/1*y8L8iqFNMXuECXunupBraQ.jpeg" /></figure><p>Now we have arrived at the very core of the climate change problem. With carbon credits (the Kyoto Protocol) it makes sense to deploy renewables; without them it is okay, but it makes more sense to just let the old power plants run. Therefore some of the world’s carbon emitters don’t want to join carbon credit contracts (the US) or simply don’t meet their goals (Germany). On the sociological level we have already stated what the problem is: mostly commitments to the coal or oil industry; for example, getting rid of coal means layoffs or even high payments because of the premature shutdown of power plants. We have not yet looked at the last level, the psychological one. It is mostly condensed in the saying “Why should I reduce carbon emissions if the others don’t?”, or in just denying that climate change is real. Keep in mind that the leaders of the world are not super idiots; they are just normal idiots like you and me. They do understand that climate change is real. They also understand this Prisoner’s Dilemma / Tragedy of the Commons thing. When they deny climate change, they do not think it is not real; they just know that it is beneficial not to act and to let the others act. Some idiots will do something about it, you know.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/640/1*kTe-jeq7RTJAJmQHwM-Qrw.jpeg" /></figure><p>Of course there are also people who are trolled into believing climate change is not real, or that it is not man-made. Sure, it is not only man-made; there are also other factors, but the man-made contribution is the biggest one. So among deniers there are those who understand and deny because they don’t want to act, and there are those who are fooled by the former and fall for it.</p><p>Now I’ll try to present an addition to the current solution, mostly derived from the solution for the dirty dishes in the shared flat. In the shared flat, after a contract has formed and people still don’t do their dishes, it is possible to pick the dishes up and put them into the room of the wrongdoer. 
For climate change this is not possible, because you cannot pick up the carbon dioxide and put it into some other country, nor pick up the natural disasters or the increased temperature. The problem here is the countries that don’t want to join the carbon credits club. They free-ride on the majority of nations who act to stop climate change: they don’t have to invest in emission free energy sources, they can just wait until the technology is really cheap, and meanwhile the others bear the associated costs. This demotivates the others and makes them argue “why should I, when China/the US/Russia does not?”. The idea here is that outsiders are simply included via international trade. If a country does not want to join the endeavor of reducing carbon emissions, then the price of emission certificates is just added through trade. Whenever you want to trade goods or services into the international alliance that reduces carbon emissions (the Kyoto Protocol), you have to pay a carbon duty. If the US decides not to participate and wants to sell gas from fracking, then a carbon duty is placed on the trade, just as if a certificate had been bought. Therefore it doesn’t matter if a country like the US does not want to join and does not want to internalize the cost of carbon dioxide emissions. They want to keep the cost external for their own benefit; fine, but if the rest of the world is in the alliance, this is no longer possible, because the carbon alliance will add the cost when trade happens. This works if most countries of the world agree on it; a few ignorant holdouts are not a problem.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/806/1*E_8MVWUBefs_V5Pbd6fAHw.jpeg" /><figcaption>There are 2 papers on this, and sometimes they are attacked by claiming that instead of 97%, only 32% of scientists agree. This is wrong. 32% is the share of papers which take a position on whether global warming is man-made (<a href="https://iopscience.iop.org/article/10.1088/1748-9326/8/2/024024">https://iopscience.iop.org/article/10.1088/1748-9326/8/2/024024</a>). The remaining 68% of papers simply don’t investigate this question. These papers are also about climate change, for example concluding how fast temperature rises, but they don’t take a position on whether it is man-made, so they are definitely not opposed to man-made climate change. Of the 32% of papers which do take a position, 97% say it is man-made, 1.9% reject it and 1% are uncertain. If you are still not convinced, there is also a paper which investigates this 97% claim: <a href="https://iopscience.iop.org/article/10.1088/1748-9326/11/4/048002">https://iopscience.iop.org/article/10.1088/1748-9326/11/4/048002</a>.</figcaption></figure><p>But what about products that don’t emit immediately but emit carbon indirectly via their prerequisite products? Well, it is still roughly known how much energy is needed to produce a silicon chip or a car of a certain kind. This just gets multiplied by the energy mix of that country, and there you go: you can easily calculate how much carbon a product emitted while it was being produced.</p>
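<p>The arithmetic of such a carbon duty is as simple as it sounds; a sketch with invented numbers:</p><pre>
# Carbon duty at the border of the alliance: embodied emissions of the
# imported product times the certificate price. Numbers are made up.
CARBON_PRICE = 50.0   # hypothetical certificate price, $ per tonne CO2

def carbon_duty(energy_mwh, grid_intensity, carbon_price=CARBON_PRICE):
    """energy_mwh: energy needed to produce the product;
    grid_intensity: tonnes CO2 per MWh of the producer's energy mix."""
    embodied_co2 = energy_mwh * grid_intensity   # tonnes CO2
    return embodied_co2 * carbon_price           # duty in $

# A car needing ~20 MWh, built on a coal-heavy grid (~0.8 t CO2/MWh):
print(carbon_duty(energy_mwh=20, grid_intensity=0.8))   # 800.0
</pre>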
<p>So this solution of taxing outsiders through trade is great, and I should get a Nobel Prize for the idea, I guess. Unfortunately, this nice idea is not a new one, and the Nobel Prize for it was already granted to William Nordhaus in 2018. He had the idea long ago, before I was even born, so I think it is fair that he gets the prize.</p><p>Okay, great, so what does this article say?<br>1. Climate change is not a problem in the physical/epistemic realm.<br>2. Climate change is not a problem in the technological/resource-limit/time-frame domain.<br>3. Climate change is not really a problem on the economic level. Solving it makes economic sense if you consider the whole world, neglect contracts with power providers and exclude subsidies.<br>4. Climate change is a problem in the sociological and psychological sphere.<br>5. Solutions have been presented and selected; we just need to deploy more of them.<br>6. It looks like most countries of the world are interested in working together and solving this problem.</p><p>All this sums up to: there is hope. If we are lucky, not too many will vote for the trolls who deny climate change.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/500/1*XLr_8OUzbJ3Q7vSZBbFwTw.jpeg" /></figure>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Proof-of-Work Vs. Proof-of-Stake For Real Idiots]]></title>
            <link>https://medium.com/thecapital/proof-of-work-vs-proof-of-stake-for-real-idiots-6ca54ba6163?source=rss-8e91a3236ca6------2</link>
            <guid isPermaLink="false">https://medium.com/p/6ca54ba6163</guid>
            <category><![CDATA[blockchain]]></category>
            <category><![CDATA[bitcoin]]></category>
            <category><![CDATA[proof-of-work]]></category>
            <category><![CDATA[proof-of-stake]]></category>
            <category><![CDATA[cryptocurrency]]></category>
            <dc:creator><![CDATA[Patrick Wieth]]></dc:creator>
            <pubDate>Fri, 04 Oct 2019 05:32:51 GMT</pubDate>
            <atom:updated>2019-10-04T05:32:51.522Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*IVa_srYA5tCgIf18lOG5fw.png" /></figure><h4>By <a href="https://medium.com/u/8e91a3236ca6">Patrick Wieth</a> on ALTCOIN MAGAZINE</h4><p>Up to this point I have only written articles about specific coins, but maybe it is interesting to someone if I write about technology in general. The only way to find out is this article.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/0*EK8LJjPmzwpo1l-J.jpeg" /><figcaption>Image taken from <a href="https://de.wikipedia.org/wiki/Goldstandard#/media/Datei:McKinley_Prosperity.jpg">https://de.wikipedia.org/wiki/Goldstandard#/media/Datei:McKinley_Prosperity.jpg</a></figcaption></figure><p>As usual, I will try to write like a real idiot for real idiots. So let’s just remember what we are looking at:</p><p>Blockchains, cryptocurrencies, decentralized ledgers and so on are networks where no central authority exists. When we read about them, we bump into Proof-of-Work (PoW) and Proof-of-Stake (PoS) really often. Is there other stuff? Yes, Proof-of-Authority (PoA) also exists, and some derivatives like Proof-of-Intelligence or Proof-of-Assignment, but these are very similar to the others, so we will compare the first two. Sometimes these things are called consensus algorithms, but thanks to Emin Gün Sirer we know that this is terribly wrong.</p><p>Unfortunately, as real idiots, we don’t understand this hairsplitting. So let’s try to understand more here, but keep it simple and have a look at the economics, too.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/598/0*nrTkOlhmRd_BHSc9.png" /><figcaption>Emin Gün Sirer jizzing out a tiny fraction of his infinite knowledge. After reading this article, I promise, you will understand at least 2 technical terms he used in these tweets better than you do now.</figcaption></figure><p>A short note beforehand: this article is not as funny as my other articles. I’m sorry. I have included “funny” pictures to compensate for this. I hope this cheap trick at least fools some of you into reading further, because there are some nice memes looming.</p><p>With Bitcoin, Satoshi Nakamoto did not invent Proof-of-Work; he (or she, or they) invented Nakamoto Consensus, what a surprise. Nakamoto Consensus is the combination of PoW, the longest chain rule and blocks. More precisely, it is the combination of some arbitrary cost that has to be paid (PoW), a mechanism that ensures it makes sense to burn value (pay the arbitrary cost) and at the same time synchronizes an unorganized network, and something that binds asynchronous transactions together into synchronous blocks. This last thing had already been solved.</p><p>In essence, hash signatures were being printed in the New York Times as early as 1995. The purpose of these signatures was to sign all the digital documents sent to Surety (read <a href="https://www.vice.com/en_us/article/j5nzx4/what-was-the-first-blockchain">here</a>). In principle, this was the first blockchain. It allowed someone to prove that a digital document is unaltered. For that, the document was sent to Surety, a company that created a hash for it, and you could use this hash to prove that your document is unaltered. In addition, you could also prove that your document was released at a certain point in time.</p><p>These points in time are the release dates of the New York Times issues carrying the hash signature. Each such hash signed the collection in which the signature of your specific document was included. These collections are blocks, and each newly published hash in the New York Times verified the previous ones. Even though it was not called that, it is a blockchain.</p>
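<p>A minimal sketch of that idea (my own illustration, not Surety’s actual scheme):</p><pre>
# Hash-chained timestamping: each published hash commits to a batch of
# document digests plus the previously published hash.
import hashlib

def digest(data):
    return hashlib.sha256(data).hexdigest()

chain = ["genesis"]   # stand-in for the first published signature

def publish_block(documents):
    """Altering any earlier document would change its digest and break
    every later published hash, which is what makes this a chain."""
    doc_hashes = [digest(d) for d in documents]
    block_hash = digest((chain[-1] + "".join(doc_hashes)).encode())
    chain.append(block_hash)   # this is what got printed in the paper
    return block_hash

publish_block([b"contract.pdf bytes", b"lab notebook page 12"])
publish_block([b"another week of documents"])
print(chain)
</pre>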
<p>The paper published by Haber and Stornetta goes back to 1991, and Surety was offering its services in 1995. So could Bitcoin have been realized back then already? No. This approach works well for signing digital documents because trusting Surety means something different for these signatures than it would for a cryptocurrency. The service offered by Surety was not decentralized, so you had to trust Surety not to alter your documents.</p><p>Fortunately, there is no real incentive for Surety to do that. At the point in time when you transfer your document to Surety for signing, there is no real gain in manipulating it. You would recognize instantly that your document was altered, and you would not use the received signature, since it points to an altered document. For Bitcoin this is different. If a single entity were issuing blocks, there would definitely be something to gain if this entity altered transactions, for example transferring all BTC to its own address and selling them before the market realizes the system is no longer trustworthy.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/631/0*wKdYg-R8xjRvqqRu.jpeg" /><figcaption>Another invention that was already available before Bitcoin existed.</figcaption></figure><p>So how about PoW? Yes, this existed already too, especially in the form of hashcash, a way to make sending e-mails more costly in order to reduce spam. To send an e-mail you had to find a hash, an operation which costs time and thus makes sending spam expensive. The idea is good, but it did not take off. Satoshi’s step was not merely to include this hashcash idea in the blocks of signatures, but to ask how anyone can be Surety, the single entity that publishes collections of signatures. If simply everyone can publish collections, how is it ensured that the network does not get flooded, and how does everyone agree on conflicts?</p><p>The first problem is obviously solved by hashcash, or say proof of work. The second problem is much more complicated and is basically what makes blockchain revolutionary: decentralized consensus. How do independent actors agree on a single truth, even when some parts of this truth are non-beneficial to some participants?</p><p>For Nakamoto Consensus, the final piece for this is the longest chain rule. It is a typical puzzle piece that fits in perfectly. For example, when modeling algorithms or physical systems, you often encounter a situation in which the major parts are there but the whole thing is shaky. Sometimes you find a piece that solves some remaining problems but creates a few new ones; then you find the next piece, which again solves a problem but creates a new one, until you finally accept that this path does not lead to a consistent theory.</p><p>Many paths lead to solutions that create new problems, but then, starting over again, you finally find the piece that just solves all remaining problems at once. It solves the problem you are currently looking at and also some other problems on the backlog, and you find out this puzzle piece is the missing magic sauce. The longest chain rule is such a piece. It solves the problem of how the network agrees on a single truth. At the same time, it solves the problem of why anyone should do so much work.</p>
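<p>To make the combination concrete, here is a toy sketch of hashcash-style PoW plus the longest chain rule (heavily simplified, with a fixed difficulty; my own illustration, not Bitcoin’s actual code):</p><pre>
# Toy Nakamoto consensus: pay an arbitrary cost to publish a block,
# and resolve conflicts by picking the chain with the most work.
import hashlib

DIFFICULTY = "0000"   # a block hash must start with these characters

def mine_block(prev_hash, transactions):
    """Search nonces until the hash meets the difficulty: the PoW."""
    nonce = 0
    while True:
        header = (prev_hash + "|" + transactions + "|" + str(nonce)).encode()
        block_hash = hashlib.sha256(header).hexdigest()
        if block_hash.startswith(DIFFICULTY):
            return block_hash, nonce
        nonce += 1

def best_chain(forks):
    # Longest chain rule: with constant difficulty, the longest fork
    # is the one with the most work in it. Every node can check this
    # alone, without talking to the whole network.
    return max(forks, key=len)

tip, nonce = mine_block("genesis", "alice pays bob 1 BTC")
print(tip, "found with nonce", nonce)
</pre>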
<p>It even solves the question of how the cost of an attack increases with the value of the network.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/226/0*FaRb3Og4JRbmZRLW.gif" /><figcaption>A typical reaction when a crucial puzzle piece is found</figcaption></figure><p>It also comes with a very nice feature: the network participants do not have to communicate with everyone. This does not sound like a big thing, but just imagine you had to talk to all the neighbors who live on the same street as you. Yes, the annoying ones as well as the super boring ones, and you have to do it every day; otherwise your street is closed and nobody can get in or out anymore. Once everyone has talked with each neighbor, you can unfold the sidewalks in the morning, and at night the same happens: once you have agreed, you fold up the sidewalks. Alternatively, your street has agreed on a special rule that whoever wakes up first unfolds the sidewalks, and once it is dark and you come home, you fold up your own sidewalk. Then nobody has to communicate, and it works no matter how many people live on the street. This doesn’t come naturally, especially not in networking.</p><p>So what does it mean for Bitcoin? Well, anyone can check whether a block is valid by looking at the previous block. And for conflicting blocks or chains of blocks (forks), anyone can check which is the longer chain with more work put into it. These checks can be done without talking to the whole network. Therefore the consensus mechanism scales with O(1), where this O is the Landau O that stands for complexity. O(n) means that the complexity of an algorithm scales linearly with n; in this case n is the number of network participants. A network where every participant has to talk to every other participant scales with O(n²), which means doubling the participants quadruples the communication effort. Linear means double the participants give double the effort. Okay, great, but then O(1) means doubling the participants adds no effort? Yes.</p><p>Keep in mind this is only the algorithm; the real network needs a bit more, because you have to send the new blocks around. But this can be done in a way where one node sends to 1000 other nodes, and that scales very well, and only the ones who found new blocks need to do that kind of broadcasting. But wait a second, this means infinite scaling, right? Yes. But everyone knows Bitcoin does not scale. True, but when people say this, they are not talking about participants (miners) in the network but rather about the number of transactions. So the number of users reading the blockchain scales very well, but the number of transactions these users can send per time unit does not scale at all.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/889/0*tDJLiWiJz2kc4tVt.jpeg" /><figcaption>A funny image. Not related to my sex life.</figcaption></figure>
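<p>To put rough numbers on these O(…) claims, a tiny sketch counting only the pairwise conversations (the fanout broadcast is left out):</p><pre>
# Everyone-talks-to-everyone versus check-it-yourself, for growing n.
for n in (10, 100, 1000):
    full_mesh = n * (n - 1) // 2   # O(n^2) pairwise conversations
    per_node_check = 1             # O(1): validate against the previous block
    print("n =", n, " full mesh:", full_mesh,
          " per-node check:", per_node_check)
</pre>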
<p>Interesting, but isn’t this stupid? It sounds like a restaurant with infinite tables where only a fixed amount of food and drink orders can be handled. Yes, we will discuss this too, but one can already see that this mismatch between active user scaling and passive user scaling is problematic. There are some solutions, like the Lightning network, where you open a lot of other restaurants and each of these restaurants sends one guy to the original Bitcoin restaurant, where he orders all the stuff the people in his restaurant want to eat. The customers in his restaurant can also order virtual drinks, which become real later, once they have finished all their virtual drinks and the waiter goes to the original restaurant to turn the virtual drinks into real ones.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/805/0*uS225j01CurP-MMh.jpeg" /><figcaption>Me doing research for this article. In the background are all kinds of analogies I have thought out. Like the one you just read above this image.</figcaption></figure><p>But let’s go back to complexity. It can also be applied to other things; it does not have to be networking, and a typical example is sorting. If you are a computer scientist, this part of the article might be very boring, but since this article is for real idiots, I have to go into detail. The task is quite easy: sort a list of words alphabetically or numbers by value. In principle it does not matter; sorting is the same for a list of anything, as long as there is a property for which a transitive order exists. However, we are leaving the real idiot realm right now, so let’s stick to sorting numbers by value. The task is simple, but there is an infinite number of approaches, some faster than others. For example, Bogosort shuffles the numbers randomly and then checks if the order is right. Another example is Bubblesort. It is the prototypical example, because it is easy and it is how we humans often sort: by swapping neighboring elements until the set is sorted.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/500/0*Y4jH1juv5EYnQCx9.gif" /><figcaption>Bubblesort animated — not that this is important to understand the article, but let’s just rest a while and warm our hearts by seeing algorithms at work. Image taken from Wikipedia.</figcaption></figure><p>The third example is Quicksort, which is the prototype of a divide-and-conquer sorting algorithm. Here the list is divided into 2 lists: one with all elements smaller than an initially selected element, and one with the rest. Then the resulting 2 lists are again divided in the same way, until only single-element lists remain. At this point the order is found, and the lists only have to be merged back together, reflecting on each level which sublist contained the larger elements. So this algorithm is much more complicated, and we will look at complexity soon to understand why this extra effort might be worth it.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/280/0*k4_w16_DxmFJUlGW.gif" /><figcaption>Quicksort animated — with higher complexity comes better heart warming. Image taken from Wikipedia.</figcaption></figure><p>The last example is Sleepsort, which spawns a subprocess for each number that waits (sleeps) for as many seconds as the number’s value and then appends the number to the final list. This is a joke algorithm, but it will be a very interesting example here. So let’s look at complexity to compare these algorithms. First one needs to know that checking the order of a list has a complexity of O(n). In the best case, Bogosort is sorted after the first shuffle, which gives O(n) for the check. In the worst case, the complexity is infinite, because at worst the shuffling can go on forever. But what is the average? As everyone knows, the average between n and infinity is n•n! (factorial). No, I’m just kidding, we can calculate this. I will try to demonstrate how it can be derived easily: there are n! possibilities to arrange n elements. 
This is because for the first element there are n spots, for the second (n-1), for the third (n-2) and so on. All these possibilities combined give<br>n•(n-1)•(n-2)•(n-3)•… = n!</p><p>So on average one needs to shuffle n!/2 times and do a check after each shuffle. For complexity we omit constant factors, so the complexity is n•n!. Bubblesort has an average of n², because in essence you have to compare each element with each other, which is n•n. In fact you only need to do half of that, but again we omit the factor of 1/2. This is much faster than Bogosort. The complexity of n² in such a system is an important takeaway; we have seen it already in the first part, for network communication. Quicksort introduces hierarchy and is more efficient because a lot of redundancy is systematically prevented. This yields a complexity of n•log(n), which is better, because the logarithm of n is always smaller than n, especially for large n.</p><p>There are a lot of optimizations of Quicksort which are better in the worst or best case or need less memory, but we won’t cover that here, since it is quite specific to this type of sorting algorithm. Let’s get to the most interesting algorithm. The best of all sorting algorithms is Sleepsort, because its complexity is always O(n). It is therefore much less work than all of these very sophisticated algorithms. You don’t need to check; for each element you only have to spawn a single process, which is a fixed amount of computation.</p><p>This all sounds too good to be true? Exactly. You still have to wait really long until the solution is ready. Each number has to be multiplied by a factor, which then gives the duration of the sleep. Before you have checked the list, you don’t know what an appropriate factor is, but we could determine one, and the complexity would not increase. For this factor, however, we cannot just switch from seconds to milliseconds or nanoseconds, because the time must be significantly longer than the time a subprocess needs to spawn.</p><p>The sorting takes an amount of time equal to the largest number times the factor. So the complexity is the lowest possible of all sorting algorithms, but it does not reflect the time the sorting takes; it is only the computational complexity. A lot of computational effort is masked in the subprocesses, and the factor cannot be chosen freely: it needs to be high enough to separate the subprocesses from each other if the sorted numbers are close together, and high enough, if there are a lot of numbers, that the computer is able to spawn all subprocesses in less time than the interval between two close numbers.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/500/0*juxagqEWuDe6MdMF.jpeg" /></figure>
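<p>For the curious, a minimal Sleepsort sketch (threads stand in for subprocesses here; the factor must stay well above the thread start-up time, as just discussed):</p><pre>
# Sleepsort: each value sleeps proportionally to itself, so the
# wake-up order is the sorted order.
import threading, time

def sleepsort(numbers, factor=0.1):
    result = []
    def worker(x):
        time.sleep(x * factor)
        result.append(x)
    threads = [threading.Thread(target=worker, args=(x,)) for x in numbers]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return result

print(sleepsort([3, 1, 2, 5, 4]))   # [1, 2, 3, 4, 5]
</pre>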
<figure><img alt="" src="https://cdn-images-1.medium.com/max/500/0*juxagqEWuDe6MdMF.jpeg" /></figure><p>Now we have talked a lot about these algorithms and one might ask what this has to do with blockchains. Well, we have seen that Nakamoto consensus scales with O(1) and this is no coincidence. The infinite scaling in Sleepsort comes from the assumption that infinite subprocesses can be spawned and in Nakamoto consensus, infinite miners can search new blocks in parallel. But that does not mean that infinite transactions can be processed.</p><p>In fact, the number of transactions cannot really be increased by a lot. In the end, you have to wait until a block is found and this blocktime cannot be reduced, just like with Sleepsort. If you reduce the timing too much, the subprocess spawn time dominates and there is no sorting. In the same way, if the blocktime is too low, the computation is so fast that network communication dominates and the finder of a block has the highest chance to find the next block.</p><p>The same goes for the network participants who have low latency (ping) to the finder of the last block: they find the next block with a much higher chance than others. In this case, the consensus doesn’t really work. So we have seen a similarity here between Sleepsort and Nakamoto Consensus.</p><p>Understanding why Sleepsort is not very efficient at sorting means understanding why Nakamoto Consensus is not very efficient at processing transactions. Another similarity exists between BFT (Byzantine fault-tolerant, here we mean BFT with dPoS) consensus and Bubblesort, where each item has to be compared to each other or each participant has to agree with each other, thus giving n² scaling behavior. Sounds pretty inferior? Well, it’s not.</p><p>Because we have seen that the infinite scaling of Nakamoto Consensus is only for passive users (readers/miners). For active users, BFT has a lot more scaling to offer, because you don’t have to offer a sufficiently long block time to give everyone a fair chance. In BFT the limit is how long it takes to synchronize all participants. Whenever you find ways to widen this bottleneck, you can handle more transactions. You can also separate a kind of overseer role from the actual block producers. Cosmos has done this to keep the number of actual block producers, called validators, low. There are 100 of them, but all other stakeholders can participate as overseers by distributing staking power among validators. They are called delegators. With this approach, you don’t have to exclude everyone who is not in the top 100, but you still have the transaction throughput of small validator sets. Great. So BFT is just superior to Nakamoto Consensus? Well, it’s not that easy. BFT does not necessarily come with PoS.</p><p>I will not introduce BFT/PoS here in detail, but <a href="https://medium.com/coinmonks/cosmos-tendermint-explained-for-real-idiots-ab4305cbb41">I have another article on that</a>. Instead of using PoS, a BFT network can also be run with a fixed set of public keys that allows only the owners of the corresponding private keys to become validators. This is usually called Proof of Authority (PoA) and it has an obvious drawback: it is permissioned. Bitcoin would never have become anywhere near as noticeable if it was permissioned.</p><p>So the permissionlessness of Bitcoin and Nakamoto Consensus is very important. I’d also say that the option to either buy Bitcoin or mine it was important for early adoption. But we wanted to compare Nakamoto Consensus to PoS BFT, and in contrast to PoA, PoS is permissionless. However, an interesting question is: Is it as permissionless as Nakamoto Consensus with PoW? I’d say no. In PoS it is possible that you cannot get into the system because everyone already in it has decided not to sell any stake (coins). In this case, the technology is permissionless but the real network does not permit you to get in. In PoW this cannot happen, because you don’t need anyone from inside to build new chips that can solve the puzzle, i.e. mine BTC. However, this feature also implies an attack vector that does not exist for PoS: in PoW, if you acquire enough computation power, you can attack the network. In PoS, to do that, you need to buy the coins from network insiders. You cannot get them from the outside. 
These insiders don’t want an attack on their network, or only if they can sell all of their stake first. However, on the way to a 67% share, the price might rise astronomically, whereas the production of silicon chips can be scaled almost linearly.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/500/0*6ted_yPSroda4kj-.jpeg" /><figcaption>Two crypto experts discussing peculiarities of different rate limiting approaches. The bearded man is also an expert in quantum computing.</figcaption></figure><p>It makes sense to introduce the CAP theorem here, which I already mentioned in <a href="https://medium.com/coinmonks/iota-experienced-as-a-real-idiot-ec72e872f753">another article</a>. The CAP theorem basically states that we can’t have nice things in distributed systems. Either consistency (C), availability (A) or partition tolerance (P) is very limited, or 2 of them are limited at the expense of one feature being very strong.</p><p>We talked a lot about how many transactions can be processed and this feature is availability (A). The more participants can be handled, the higher the availability. PoS wins vs. PoW. Permissionlessness has a lot to do with partition tolerance. Is it possible to fill the gaps if half of the network is gone? Can the network even continue if too many participants break away? PoW wins vs. PoS. To be precise, this is the killer feature of Nakamoto Consensus. It works if all but one node go offline. It works if there is a nuke creating an EMP wave shutting down half the globe electronically. After the nodes come back online they can join the network and see if the others have found some blocks.</p><p>Even if all wires in the oceans are cut, the continents can go on and produce blocks, compare which side found the longest chain and merge the different realities back together once a connection is reestablished, or each continues separately. Well, for this specific example Nakamoto Consensus might not be the best way to merge different realities, but this is a story for another day. The only thing that has better partition tolerance is the hydra, where you can even cut a single node in half and it still works.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/355/0*WBBmmlch_sFn_noa.jpeg" /><figcaption>Hydras are very partition tolerant. Image taken from <a href="https://www.dndbeyond.com/monsters/hydra">www.dndbeyond.com/monsters/hydra</a></figcaption></figure><p>Ok, so PoW wins at partition tolerance and PoS wins at availability, what about consistency? This feature basically means that there is a single reality that everyone knows and agrees with. So Bitcoin has synchronous blocks and these synchronous blocks contain asynchronous transactions. Sounds pretty consistent and it seems to be the same with PoS. But there are forks, so sometimes there are concurrent blocks and one branch will be discarded in favor of the one that turns out to be longer after more blocks are added. In PoS this is not necessarily a problem: fast finality is possible, which means that blocks are final once 2/3 of validators have agreed and everyone knows that it is final then. In this case, consistency is very high because there is a single truth everyone has agreed to and you don’t have to wait for some blocks until it becomes almost certainly final.</p><p>To put this into perspective, let’s compare it to IOTA, where such a single truth never exists. There can be transactions known in one part of the network which are unknown in another part and vice versa. 
After some time both sides get to know the transactions of the other part, but until this happens there are again new transactions created that are not known by everyone. This is because there are no synchronous blocks, there are only asynchronous transactions. Consistency is low, but availability is high. This is quite interesting since IOTA uses PoW for limiting transactions just like Nakamoto Consensus.</p><p>In contrast, it does not have the longest chain rule and thus availability is increased at the expense of consistency. So here we can see there are different ways to use these building blocks of consensus mechanisms. And such a building block comes with specific properties, but how it works, in the end, depends on the whole combination of mechanisms.</p><p>Coming back to PoS with fast finality, we have seen that it comes with very high consistency, but what is the drawback? We have seen that there are often drawbacks associated with strongly favoring some desired features. This is the reason why the CAP theorem says that you cannot have all properties at max. So the price for very high consistency is a loss of liveness, which means that the chain halts if not enough participants are online. Fragile liveness is in essence low partition tolerance.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/500/0*EWuu8b2cK8LCbzML.jpeg" /></figure><p>Now we have seen that Nakamoto Consensus is strong at partition tolerance (P), good at consistency (C) and bad at availability (A), in contrast to PoS with fast finality, which is bad at P, strong at C and good at A. We can accept the fact that there is no single consensus mechanism that is suited perfectly for all problems. Just like there is no best car. If you want to transport lumber out of a forest, a Ferrari might not be the best fit, even though for a race it is great. The same applies here. So we have to define the use cases to make a decision about what might be a good fit.</p><p>If you open the website bitcoin.org, you’ll see it say “Bitcoin is an innovative payment network and a new kind of money.”, which is great. The use case is making payments. What is the most important feature for payments? Availability. If you cannot make payments, because the network is congested, then it is not a good payment network. Funnily enough, Nakamoto Consensus is not good in this regard and thus Bitcoin is not good in this regard. Consistency is also important because you want to be sure the payment is final and your trade partner knows this.</p><p>Looking at this, it is quite obvious that PoS with fast finality is much better suited for transacting payments. But wait, isn’t this proven wrong, because Bitcoin has the highest valuation of all cryptocurrencies? First of all, the market does not tell you if something is good at what it wants to be good at, it only tells you if people are buying it. And second, there is something other than payment for which money can be used, and this is a store of value.</p><p>A high market cap can also mean that something is a good store of value or that people believe it is. When you store value, you don’t make payments. To store value you don’t need high availability. What you need are partition tolerance and high security. Consistency is also not utterly important, because it is fine if it takes some time until the whole network knows you have stored value or until you can access your value. So is there an analogy to an old asset? 
Yes, it is gold.</p><p>Gold has been a store of value for a really long time now and it is a decentral asset. There is no central authority issuing it and there is nobody who can mark your gold savings as invalid. Gold is quite heavy and not as easy to carry around as bills, for example. So if we look at gold like a distributed application — which is a bit ridiculous, but we do it for the sake of analogy and education — then it is extremely partition tolerant. The availability is not very good. The consistency is bad. Nature keeps very accurate track of how the gold is distributed among holders of its value, but we cannot access this database of nature; we need to check individually if a stack of gold is real gold and not filled with lead.</p><p>These bad marks at C and A are the reason why most people don’t hold gold physically but rather have bought gold on a second layer, where you hold a claim on a certain amount of gold. This allows you to sell the claim, and for the buyer it is not necessary to weigh any amount of gold and check if there is lead in it.</p><p>This second layer solution of trading gold claims increases C and A tremendously and decreases P. The latter is because the second layer solution can be destroyed more easily than the real gold. This is quite similar to second layer solutions for Bitcoin. And here we can see that the term digital gold is a really good description of what Bitcoin is. To make Bitcoin more available, second layer solutions are necessary. The same goes for gold. In both cases, the second layer trades away P (partition tolerance) for C and A. So we have seen now that the intrinsic properties of Bitcoin, or say Nakamoto Consensus, make it a good store of value and a bad payment processor.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/320/0*QpiugBx-n6HI1wf0.jpeg" /><figcaption>McAfee doing prank calls impersonating himself as “future”</figcaption></figure><p>How about PoS? It seems to be a natural fit for payment networks since it is strong at availability. With fast finality, it is also great at consistency. Does that mean PoW is good for a store of value and PoS good for the transaction of value? Don’t fall for the trap of giving PoW, in general, the attributes that only Nakamoto Consensus has. Take for example IOTA, which has PoW but does not have a blockchain, rather a tangle, which gives high availability, weak consistency but good partition tolerance.</p><p>However, this is theory. In practice, there is a coordinator, which does not really make it decentral, limits transactions and of course means partition tolerance is not given. But in theory, IOTA is an example of PoW being good at doing transactions. The big problem with comparing PoS and PoW directly is that both systems are not directly interchangeable. If we take Nakamoto Consensus and just plug in PoS, then the longest chain rule does not make sense at all. Vice versa, plugging cryptographic PoW puzzles into BFT consensus does not really make sense. So we correct ourselves: BFT PoS with fast finality is good for payment networks and Nakamoto Consensus (including PoW) is good for a store of value.</p><p>But can payment processors and store of value really be disconnected? Can they be seen as distinct things? Isn’t it ultimately necessary for things that are used for payment to hold value and isn’t it ultimately necessary for things which store value to be transferable? Yes, but only in a static picture. If we look at time evolution, it clears up. 
Again we can compare to real world things. We have introduced gold already, now let’s introduce fiat, for example, the dollar. Both the dollar and gold are a store of value and something you can use to make payments. Are both features implemented to the same extent? Certainly not. The dollar is a much worse store of value since it depends on the existence of the United States of America, whereas gold only depends on the physical laws remaining unaltered. For payment it is the other way around. Gold is heavy, complicated to split, not as easy to check for validity and harder to count.</p><p>So here we have two real-world examples that are both used, make sense and still have different characteristics. If we look at time scales, we see that payment processing of fiat money can be done really fast, in contrast to gold. If we ask the question on which time scales both function as a store of value, then it is clear that gold performs well on the scale of 1000 years, but there are many examples of fiat money only 100 years old that have no value today. It is no coincidence that PoS systems resemble the characteristics of said fiat money and BTC resembles the characteristics of gold. Does it make sense to have both systems? Yes, absolutely. Taking out a chunk of gold and cutting off a small piece to pay for your train ticket in the morning is analogous to using a Lambo to carry lumber out of a forest.</p><p>There might be many who argue the gold standard should have never been abandoned, but there won’t be many who will say gold is not limited by its physical properties. A monetary system based on coins and banknotes allows for a much wider design space than a physical asset like gold. So whoever argues that the gold standard should have never been abandoned does not take into account how important it has been to be able to actually conduct monetary policy on the one hand, and to process daily payments really fast on the other hand. Furthermore, our fiat money system is not really based on coins and banknotes anymore but is rather an electronic credit money system, which has an even wider design space and allows for faster transactions. So does blockchain have an even wider design space and allow for new properties? Partially. When it comes to decentralization, yes, but there are limitations given by cryptography and its demands.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/517/0*8pMxVUCQEbcPHyOC.jpeg" /></figure><p>Nakamoto Consensus weighs much heavier on these constraints than PoS systems. Again this is a resemblance to gold vs. fiat. If we look at the way the different systems are secured economically we find another similarity. In PoS economic security comes via penalties and in Nakamoto consensus, it comes via intrinsic mechanisms. The nothing-at-stake problem does not exist in the latter. For those who do not know, this problem describes the circumstance that in naive PoS systems there is no reason to pick a specific fork of the blockchain. It makes economic sense to follow each fork of the chain. Since this is very problematic for the users and makes the network pointless, this behavior must be disincentivized by protocol-defined penalties. We will call these penalties punishment from now on.</p><p>For Nakamoto Consensus — in this case, it is sufficient to say PoW — this problem does not exist, because the work can only be spent once and therefore only on a single fork of the chain. This is not a property designed by the protocol but rather an implication from the physical world. 
Coming back to gold and fiat, we can see that the security of gold comes from physical properties and the security of fiat comes from penalizing misbehavior. It makes economic sense to print your own banknotes or to hack bank systems to steal money. But we punish whoever does so and thus make the system economically secure. Even more important is the 51% (67% for BFT) attack, which must not make economic sense. If in a given network it makes sense to form a cartel or even buy 51%, and after the attack you make a net profit, then the network is doomed.</p><p>For Bitcoin, this implies buying a lot of miners or renting mining power and then transferring all Bitcoin to your own address in order to sell them to all open orders on the internet. The goal is to make more money than you spend on mining power. After such an attack smart money will move out of bitcoin and only the maximalists will stay and tell you that everything is fine. This is why it is important to cash out fast before everyone realizes bitcoin is now worthless. However, an attacker can calculate what this amount of mining power costs and how much there is to gain, and if this equation always yields a net loss, nobody will perform the attack. Of course, there are more problems — on the cash-out side, there might be a problem getting all the money from the exchanges before the attack becomes public, and on the preparation side there are a lot of ASICs to be purchased, which might draw a lot of attention.</p><p>But this doesn’t matter as these mechanisms should not be the final barriers for such an attack. The attack must be infeasible regardless of such external problems. So how is this achieved? For bitcoin, there is network value, the market cap, which is currently $140 bn. Then there is something like the value of all miners. If we assume this value is $140 bn as well, then you need to spend an additional $141 bn to get 51% mining power and to steal all the bitcoin, worth $140 bn. Since there are not enough open buy orders, it is not possible to sell the bitcoins before bitcoin is worthless because of this attack. But what ensures that the mining power is worth more and more as the bitcoin market cap grows?</p><p>This is why there is difficulty and why it increases as more and more miners are connected. Mining gives block rewards and fees. So if the bitcoin value increases, then the mining rewards increase in the same way. This means all miners make more profit now. Now it makes sense to buy more miners and supply more mining power. This, in turn, increases difficulty, which makes mining less profitable. But how is it ensured that the mining equipment value rises in the very same way as the bitcoin price? Well, if all other parameters stay the same, then this is a consequence of all involved equations being linear. If the price doubles, then the return from a miner doubles, and there will be more miners until the marginal return from an additional miner no longer exceeds the marginal cost of mining. I’m a bit afraid we are leaving the realm of real idiots here again. The key takeaway is that the number of miners doubles when the price doubles. As long as there are no other effects, like a change in electricity price, new technology, etc.</p>
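<p>This linear relation fits in a few lines of Python. A toy model with invented numbers; real mining economics has many more terms, but the proportionality is the point:</p><pre># Toy equilibrium: miners join until marginal return drops to marginal cost.
def equilibrium_miners(btc_price, block_reward=12.5,
                       blocks_per_day=144, cost_per_miner_day=50.0):
    pot = btc_price * block_reward * blocks_per_day  # daily rewards in $
    # one more miner joins as long as his share of the pot beats his
    # daily cost, so in equilibrium: pot / n = cost, hence n = pot / cost
    return pot / cost_per_miner_day

print(equilibrium_miners(5_000))   # 180000.0 miners
print(equilibrium_miners(10_000))  # 360000.0 -- price doubles, miners double</pre>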
<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*oPUionVw7Gcjfp_X.png" /><figcaption>This plot shows the bitcoin difficulty and the bitcoin price over time. In green is the difficulty with scale indicated on the right and in orange is the price with scale indicated on the left. (Plot created on data.bitcoinity.org)</figcaption></figure><p>Of course, this is a very simple picture. In reality, there will be different marginal costs for different types of miners and different locations of mining. Marginal return will not exactly match marginal cost, because there must be a spread, which mostly depends on what time-to-value investors in mining can accept. For real estate investments in first world countries with AAA ratings, it is fine for investors to accept 30 years of time-to-value, but nobody who has some basic understanding of economics will start a bitcoin mining operation with 30 years time-to-value. If we look at the plot above, we can see that the price and difficulty curves match, but at a closer look, we see that the difficulty increases by 2 decades whenever the price only increases by 1 decade.</p><p>This means difficulty rose from 100 to 10,000,000,000,000, or 10² to 10¹³, whereas the price has only increased from 0.1 to 10⁵. So here are two lessons to be learned: 1) Always look at the scaling of a plot. You can always make two curves match. The question is, does the scaling make any sense? And lesson 2) there is something that makes difficulty increase much faster than price. And the answer is technological advances. At the beginning there was CPU mining and its source code was improved over time until there was the first code for GPU mining, which increased the hashrate per $ significantly. This code was improved as well and of course, silicon chip technology itself has improved, but not at such a fast speed as the crypto community has improved mining hardware. The ASICs came out and the process node went from 120 nm finally down to 16 nm and might even be smaller in the future.</p>
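<p>Lesson 1 is easy to demonstrate yourself. With a free choice of axis scales you can overlay two series that grow at completely different rates. A short matplotlib sketch with synthetic data, not the real difficulty history:</p><pre>import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 10, 200)
price = 0.1 * 10 ** (0.6 * t)       # grows 6 decades over the window
difficulty = 100 * 10 ** (1.1 * t)  # grows 11 decades in the same window

fig, ax1 = plt.subplots()
ax1.plot(t, price, color="orange")
ax1.set_yscale("log")               # log axis for the price (left)
ax2 = ax1.twinx()                   # second, independent y-axis
ax2.plot(t, difficulty, color="green")
ax2.set_yscale("log")               # its own log axis (right)
plt.show()                          # the curves "match" -- by construction</pre>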
<p>So in this regard, a lot happened. It is still remarkable that you cannot see spikes for these milestones in the difficulty curve. The same goes for halving events (“the halvening”). Whenever the block reward of bitcoin was halved, many went crazy and thought insane things would happen to difficulty, but looking at the plot, you can’t recognize these events without knowing where they are (yellow lines). The reason for that is the heterogeneous distribution of miners. If new (more efficient) miners are put into service, then the difficulty rises, but many shut down their old, now inefficient miners and the difficulty does not rise anymore. This is why the curve is so smooth. Of course, the difficulty rises because of these events, but it takes time until enough old miners are driven out and the new marginal cost has established a new balance with the marginal return. Still, halving the reward is important for the token economics of bitcoin. It makes the supply of bitcoin ultimately limited. Without it, bitcoin would not necessarily be deflationary.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/654/0*n8KnCpRcbYi8xfd8.png" /><figcaption>Not only is he in for the technology but also to learn about token economics and unforgeable costliness.</figcaption></figure><p>Maybe you are asking now, why is such a long story about mining here? The reason is that in order to understand the economic implications of Nakamoto Consensus there is no way around these things. What we have learned now is the relation between miners, security, and throughput of the network. In the first part, we have learned that there is no increase in throughput if there are more miners, and in the last part we have learned that having more miners is incentivized when the network valuation has increased. Also, it is necessary for the value of miners to increase in this case.</p><p>For Proof-of-Stake these things do not matter at all. These economic relations are not given. In PoS the coins are the (virtual) miners, so the value of both cannot decouple. However, it is necessary to prevent a very similar attack in which stakers double spend and afterward instantly sell off their stake. This is easily solved by freezing the coins that are “virtual miners” for some time. If harm is done by stakers, the frozen coins will be destroyed to punish the bad behavior. The same goes for following different forks, to solve the nothing-at-stake problem. Mostly I’m presenting the solutions chosen by the developers of Cosmos. This is because Cosmos looks like the most decentralized and least attack-prone of all PoS networks to me. However, this is not the important part; maybe there is something better than Cosmos, feel free to convince me in the comments.</p><p>Important is the similarity of fiat vs. gold to PoS vs. PoW. For gold and PoW, physical resources are devalued in case of bad behavior, whereas in fiat and PoS punishment is applied mostly by devaluing/removing virtual assets. But isn’t it better to have physical assets as collateral? Isn’t it safer? It depends on how you define “safe”. Virtual assets cannot be swept away by floods; this has happened with Bitcoin miners already. Virtual assets cannot be destroyed in a hurricane. But virtual assets can be hacked when stored in an unsafe way. It is always harder to carry away physical assets. Nobody will doubt that the biggest heists in the 21st century will all be done with virtual assets.</p><p>But this is just one aspect of safety. Another one is corruption of authorities. Gold is the indisputable king. Nobody can corrupt physics. Changing the consensus parameters in a PoS system is much easier than for PoW. On the other hand, hashing algorithms might be broken by quantum computers. For PoS only the private keys might be broken by quantum computing. But this can also happen to PoW systems. Still, it is much easier to increase the key length than to switch the hashing algorithm. But we have also slid over from (physical) safety to (IT) security. Regarding all of these things, there is only one thing that can be said for sure: Whoever says PoS or PoW is safer no matter what, is wrong. Maximalists tend to extremes and the matter is more difficult than they want to believe. Deciding which system is safer is mostly deciding which scenario is more relevant in your opinion.</p><p>But some people say PoS doesn’t work at all? Why do they say that? Well, I will give two examples:</p><p><a href="https://github.com/zack-bitcoin/amoveo/blob/master/docs/other_blockchains/proof_of_stake.md?source=post_page-----a23ac4565649----------------------#why-pos-fails">zack-bitcoin/amoveo</a></p><p>The point made here works as follows: If someone bribes validators (stakers) to destroy a blockchain, it makes the most economic sense for them to accept the bribe and destroy the blockchain, even if the sum used for the bribe is tiny. The author basically describes a prisoner’s dilemma. If the blockchain is destroyed, then if you accepted the bribe, you lose all stake but get the bribe. If you did not accept the bribe, then you have nothing. So in this scenario, it is better to accept the bribe. In the other scenario, the blockchain is not destroyed; in both cases you keep your stake, but if you accepted the bribe, you also have the bribe. So in both scenarios, it is better to accept the bribe to maximize your outcome. But in the case of destruction, you might lose a big stake, in contrast to the case of non-destruction.</p>
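<p>The payoff table is easier to see with numbers. A small sketch of zack-bitcoin’s scenario with invented figures; this is my own illustration, not code from his write-up:</p><pre># One validator's payoff in the bribe game (illustrative numbers).
stake = 1_000_000       # value of this validator's stake
bribe = 4_500           # the offered bribe, tiny relative to the stake

# payoff[(chain_destroyed, accepted_bribe)]
payoff = {
    (False, False): stake,          # chain survives, no bribe
    (False, True):  stake + bribe,  # chain survives, kept the bribe
    (True,  False): 0,              # chain destroyed, stake worthless
    (True,  True):  bribe,          # chain destroyed, only the bribe left
}

for destroyed in (False, True):
    gain = payoff[(destroyed, True)] - payoff[(destroyed, False)]
    print("destroyed =", destroyed, "accepting is better by", gain)</pre><p>Accepting is better by exactly the bribe in both rows; that is the equilibrium of the one-shot game. What the static table hides is the huge gap between the two rows, and that is where repeated interaction and slashing come in, as we will see.</p>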
<p>So the author factors in the probability that your action is the one that makes the scenario flip. This probability allows calculating a ratio of network valuation to bribe sum. In the example, only 0.45% of the network valuation is necessary to bribe 100 validators who stake 90%. What is constructed here is a Nash equilibrium just like in the prisoner’s dilemma, where it makes sense to defect against the other prisoner to maximize your outcome. Even if both defect against each other, there is no way to improve an individual outcome by switching just a single decision (a Nash equilibrium); both need to switch at the same time and cooperate to get to a better outcome (a Pareto improvement). And this is exactly what validators do all the time. They work together. They help each other and work to keep the chain online. I’m mostly arguing here why a PoS blockchain can work even if the assumptions of zack-bitcoin are correct. In fact, they are not, but let’s get to this point later.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/469/0*41Q5nx2KNB4cYuY8.jpeg" /><figcaption>Accepting the bribe vs. rejecting it.</figcaption></figure><p>So the validators are constantly working together and are cooperating. In the case of a bribe, the validators have known each other for quite some time and have communicated quite a lot. Now the briber wants to destroy the blockchain either via consensus or governance. For governance it is quite easy, since everyone can see how the votes are progressing: there will be someone who is certainly the tipping voter, who is responsible for the destruction of the blockchain. For this voter, a bribe of 0.45% of their stake is never enough to give the tipping vote. If this vote is not given, then for the next tipping voter 0.45% is not enough, and so on. Accepting the bribe only makes sense as long as the vote fails. Only if the bribe is higher than the stake does it make sense to be the one who fills the quorum.</p><p>The other scenario is consensus. So not governance destroys the blockchain but the consensus that creates a new block. Here the validators will be bribed and of course talk to each other, and sure, they will tell each other that they won’t take the bribe. Then the block comes for which the bribe is given and a majority agrees to destroy the blockchain. Then they have betrayed the others, but in fact, they have done it because they don’t want to be the suckers who did not even get the bribe. Basically, such a scenario is more like blackmail than a bribe. Because in fact, the actors lose money; it is just that they don’t lose everything if they submit to the blackmail.</p><p>So let’s assume here the worst case has become real. A majority lied to the rest, saying they won’t take the bribe, but they did and destroyed the blockchain. What happens next? The honest validators will be very pissed and they will restart the network without the dishonest validators. The users now might choose to use the fork with only &lt;1/3 of validators remaining or pick the destroyed fork, which is not really an option. 
The majority (2/3) who destroyed the network might relaunch a non-destroyed version, but now we have arrived at a scenario where 2 networks compete, one in which validators have proven that they are honest and one in which they have proven they are dishonest. This is actually good news because it is a filter mechanism, which sorts out cartel-forming dishonest validators.</p><p>Of course, for investors this might not be the best news, because the network will lose a lot of valuation and might take a lot of time to get back to the old valuation. But, and this is the most important part: The attack has failed. The attack has only removed the dishonest validators and taken away 99.55% of their investment. The network was overvalued though, because more than 2/3 of its validators were prone to dishonest behavior. Ok great, but why does the prisoner’s dilemma not explain this?</p><p>The prisoner’s dilemma is a simple and static scenario, and the bribe looks like a situation exactly equivalent to it. Again the evolution of time makes a difference; we will see soon why. The author (zack-bitcoin) also invokes the “tragedy of the commons”, which is the theory of why in shared flats the dishes are never done. But in reality, there are rare examples of shared flats in which the dishes are done. And this is because the flatmates are cooperating over time, and if there is a majority who never does the dishes, then a fork will be performed, in which the other flatmates label their dishes and do their own dishes and separate the commons so that this tragedy ends.</p><p>The tragedy of the commons is only a real tragedy if the commons cannot be distinguished. The climate crisis is such an example: carbon dioxide emissions are not distinguishable. It is not possible to label carbon dioxide and then relocate natural disasters caused by it proportionally to the emitters. However, as time comes into play and participants cooperate over multiple instances of decision making, the situation looks different. If you look up research on the iterated prisoner’s dilemma, where many instances of the game are played and different strategies compete against each other, the strategies that succeed are the ones that punish others for uncooperative behavior and cooperate with the cooperative ones.</p>
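<p>You can watch this in a toy simulation of the iterated game. The sketch below, my own illustration with the classic payoff values, pits tit-for-tat (cooperate first, then mirror the opponent) against an unconditional defector:</p><pre># Iterated prisoner's dilemma with the classic payoffs.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # cooperate first, then copy the opponent's last move
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(a, b, rounds=100):
    score_a = score_b = 0
    seen_by_a, seen_by_b = [], []  # what each player saw the OTHER do
    for _ in range(rounds):
        move_a, move_b = a(seen_by_a), b(seen_by_b)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (300, 300): cooperation pays
print(play(tit_for_tat, always_defect))  # (99, 104): defection barely pays</pre><p>Against a fellow cooperator, the punishing-but-cooperative strategy earns three times what two defectors would; against a defector it loses almost nothing over 100 rounds.</p>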
<figure><img alt="" src="https://cdn-images-1.medium.com/max/500/0*65G6C7UWPk90UykH.jpeg" /><figcaption>When one meme is not enough to illustrate accepting the bribe vs. rejecting it.</figcaption></figure><p>Ok great, so there is hope for PoS? Well, there is even more hope, because there is a very crucial assumption being made that is wrong. In fact, there is punishment for trying to break consensus. Zack-bitcoin assumes there is no punishment, which is only right for old and naive PoS systems. But in Cosmos, there is stake being slashed and validators being jailed. With this punishment, the claimed 0.45% does not hold anymore. If the punishment for trying to destroy the blockchain is high enough, then the risk is too high and there is no Nash equilibrium for defection. To use more of these heavy-laden economic terms, we can say there is a Schelling point at which validators cooperate and won’t accept bribes. If you want to understand more, read about Schelling points, Nash equilibria, the prisoner’s dilemma, the tragedy of the commons and of course King Midas, who turns everything into gold — or, in a modern version, bitcoin — without doing PoW.</p><p><a href="http://www.truthcoin.info/blog/pow-cheapest/">http://truthcoin.info/blog/pow-cheapest</a></p><p>This is another piece that describes problems of PoS. But it does not say that PoS doesn’t work. It says that it cannot be more efficient than PoW. It is a very good read, though it is very long. Together with the following piece, it might be one of the best articles about what makes PoW viable: <a href="https://nakamotoinstitute.org/shelling-out/">https://nakamotoinstitute.org/shelling-out/</a><br>I highly recommend reading these articles.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/620/0*DZQPRg0EkXWdcxll.jpeg" /></figure><p>The most important takeaway is that PoS “costs” the same. It costs the same on paper. Where PoW burns electricity, PoS locks away capital and prevents investment in other fruitful endeavors. Does this make any difference? It actually does. Imagine a world where the climate crisis brings almost all nations together to ban things that emit too much carbon dioxide (CO2). Fossil fuels might be banned, and PoW blockchains along with them. In this scenario there is no need to ban PoS, since PoS only locks capital away from CO2-producing investments; PoW, in contrast, burns electrical energy to secure its ledger. Of course, Bitcoin maximalists don’t want to hear this argument. And it is not really important here.</p><p>It is mostly to understand that even though something costs the same on paper, for real-world implications the type of cost might be very important. Let’s investigate this further. Locking away capital also has another interesting property. Imagine a PoS blockchain starts with a $10 million valuation and over time users come, do transactions, buy tokens and the valuation goes up to $100 million. Now the frozen stake has increased in value, more capital is locked away, but no real-world resources were burned to do this.</p><p>In a PoW blockchain, this is not possible, since real miners must be bought and real electricity must be spent. The value of the miners is decoupled from the blockchain valuation. Many opponents of PoS will tell you at this point that therefore only PoW really commits to the future and for PoS there is nothing really committed. Another argument here is that PoW stores the burned electricity as value in the ledger. Let’s be honest, this is a false belief. If people are not willing to buy Bitcoin, the price will fall.</p><p>There is no reason why anybody will say “there is this much electricity already put in this blockchain, I’m willing to pay more for a Bitcoin, it is undervalued.” For stocks this is usual behavior and what Warren Buffett does quite often. He realizes the market cap of a company is lower than the value of the assets the company holds. He then buys. These valuable objects are mostly real estate, intellectual property, and long-lasting contracts. These are things you can take out of a company and sell individually. The burned electricity cannot be taken out of the Bitcoin network. If people reduce their demand for Bitcoin, nobody will be willing to pay more just because there is that amount of electricity in it.</p><p>People will buy coins because it is faster than mining the same amount of coins or maybe because it is cheaper. 
In the latter case, mining is declining. What keeps the price up is the prospect of the future. And stake is just as good at harvesting block rewards as miners are. Selling stake might be different from selling miners though. If you have ASICs and these are used for the biggest PoW chain, as is the case for Bitcoin, then it is really hard to sell the miners if the interest in the blockchain is declining. The same goes for PoS stake. In contrast, if there are other blockchains, maybe bigger ones, which use the same hash algorithm, then it is easy to sell the miners. For PoS there is never another blockchain that accepts your stake. So at this point, we see that for an investor it really doesn’t matter how much electricity was put into a blockchain or how much stake was bonded in the past. The important thing is the future. For the future, PoS relies much more on the belief of the market. Also, it can scale to 10x or 100x in value without spending a lot of physical resources.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/500/0*E5XRD7zX6yMQipFw.jpeg" /><figcaption>Stages of crypto anarchist’s enlightenment</figcaption></figure><p>The thing that makes people behave in PoS is punishment, not fear of devaluing mining equipment as in PoW. It is how the nothing-at-stake problem and long-range attacks are solved. The punishment easily scales with increasing valuation. Investing in mining equipment binds miners to a certain cost for producing a block. When staking, you don’t commit to a certain price for producing more stake. You hope that the rewards for block production are worth it if the price stays the same. In a scenario where the blockchain grows and many now use it, it is not necessary to burn more electricity. Maybe some validators will think, wow, my stake is now worth that much, I need to invest more in IT security. But this scales very well.</p><p>So a PoS chain can easily adjust to much more demand, which is a consequence of what we discussed earlier in the article, that transaction throughput can scale, but it is also a consequence of virtual punishments vs. the binding of physical miners. So for technical network properties as well as for token economics, PoS scales much better. The capital locked away can come from an intrinsic increase in valuation; still, people will be afraid to lose that value and won’t misbehave. For a mining operation to be profitable it is necessary to cover the costs of electricity. The users of the blockchain have to pay for this — either in transaction fees or via inflation.</p><p>In contrast, the punishment of PoS does not have to be paid by the users. Wait a second? Many will object to that. We have learned that MR=MC in Paul Sztorc’s article, and that the risk of punishment will be factored in when investing in staking coins. Whoever runs a validating operation must factor in this risk and hand the cost over to the users in the same way the miners do. This is the argument we have learned, deeply condensed in MR=MC. It is flawed if presented in such a reduced version. To understand more, we need to differentiate punishment by 2 different sources. The first source is attacks like the long-range attack, following forks (the nothing-at-stake problem), short-range attacks (corresponding to the 51% attack in PoW) and similar things; the bribe example also belongs to this category.</p><p>The second source is being punished for going offline, server malfunction, etc. If you run an honest server, only the latter is relevant and only for this risk do you need to pass the cost on to the users. If you are honest, you already know that the first source of punishment will not affect you. This punishment is for the validators who want to gain from misbehavior. If you do not belong to this type, there is no risk involved. Since you know for yourself whether you belong to this kind, you don’t have to factor in this cost. But what if a malfunctioning version of the blockchain code is uploaded to GitHub and I as a validator pull and run it and I’m slashed (punished)?</p>
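<p>Condensed into numbers, the point looks like this: an honest validator only has to price in the mild downtime risk. All figures below are invented for illustration; the severity gap is loosely modeled on Cosmos-style slashing:</p><pre># Expected yearly slashing cost for an HONEST validator.
stake = 1_000_000
downtime_slash = 0.0001   # mild penalty, e.g. 0.01% of stake
p_downtime = 0.05         # chance per year of a slashable outage
double_sign_slash = 0.05  # harsh penalty, reserved for consensus attacks
p_double_sign = 0.0       # zero by definition if you are honest

expected_cost = stake * (downtime_slash * p_downtime
                         + double_sign_slash * p_double_sign)
print(expected_cost)      # 5.0 -- only the downtime term is priced in</pre>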
<p>Well, in such a case a lot of other validators will also misbehave because they downloaded the same software. If &gt;33% of nodes fail, then the network will halt and there will be a software patch and a rollback to the block before the validators were slashed. There is no reason to apply this punishment. If you look at the DAO fork of Ethereum, there is even an example in PoW, which was much less severe, and still the network decided to roll back. So don’t think there can’t be rollbacks in PoW. This separation of 2 sources is the reason why in the Cosmos blockchain there is a distinction, regarding the severity of punishment, between double signing (used in short-range attacks) and nodes going offline. The punishment for the former is much harsher than for the latter.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/500/0*WSxXa-sg0Mi1FkNq.jpeg" /></figure><p>Now that we have understood this, we can finally draw a comparison to fiat again. Fiat money is also secured by punishing misbehavior. Therefore fiat money can be printed without buying a lot of valuable gold and putting it in bunkers. If you counterfeit fiat money, you are fined and go to jail. This is why fiat scales much better than gold. The question of whether PoS works or not is not a question of weak subjectivity, of nothing-at-stake or unforgeable costliness; it is a question of whether you believe the gold standard can be overcome by introducing punishment.</p><hr><p><a href="https://medium.com/thecapital/proof-of-work-vs-proof-of-stake-for-real-idiots-6ca54ba6163">Proof-of-Work Vs. Proof-of-Stake For Real Idiots</a> was originally published in <a href="https://medium.com/thecapital">The Capital</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Proof-of-Work vs. Proof-of-Stake for real idiots]]></title>
            <link>https://medium.com/coinmonks/proof-of-work-vs-proof-of-stake-for-real-idiots-a23ac4565649?source=rss-8e91a3236ca6------2</link>
            <guid isPermaLink="false">https://medium.com/p/a23ac4565649</guid>
            <category><![CDATA[token-economy]]></category>
            <category><![CDATA[consensus]]></category>
            <category><![CDATA[proof-of-work]]></category>
            <category><![CDATA[bitcoin]]></category>
            <category><![CDATA[proof-of-stake]]></category>
            <dc:creator><![CDATA[Patrick Wieth]]></dc:creator>
            <pubDate>Tue, 01 Oct 2019 11:13:38 GMT</pubDate>
            <atom:updated>2021-01-06T18:00:30.179Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/800/1*Wutw_UBiE7nW1xHX6r26HQ.jpeg" /><figcaption>Image taken from <a href="https://de.wikipedia.org/wiki/Goldstandard#/media/Datei:McKinley_Prosperity.jpg">https://de.wikipedia.org/wiki/Goldstandard#/media/Datei:McKinley_Prosperity.jpg</a></figcaption></figure><p>Up to this point, I have only written articles about specific coins, but maybe it is interesting to someone if I write about technology in general. The only way to find out is this article. As usual, I will try to write like a real idiot for real idiots. So let’s just remember what we are looking at:</p><p>Blockchains, cryptocurrencies, decentral ledgers, etc. are networks where no authority exists. When we read about them, we bump into Proof-of-Work (PoW) and Proof-of-Stake (PoS) really often. Is there some other stuff? Yes, Proof-of-Authority (PoA) also exists and some derivatives like Proof-of-Intelligence or Proof-of-Assignment. But these are very similar to the others — so we will compare the first two. Sometimes these things are called consensus algorithms, but thanks to Emin Gün Sirer we know that this is terribly wrong.</p><p>Unfortunately, as real idiots we don’t understand this hairsplitting. So let’s try to understand more here, but keep it simple and have a look at economics, too.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/598/1*5MigBmEe9TRmuZz7cNEPMQ.png" /><figcaption>Emin Gün Sirer jizzing out a tiny fraction of his infinite knowledge. After reading this article, I promise, you will understand at least 2 technical terms he used in these tweets more than now.</figcaption></figure><p>A short note beforehand: This article is not as funny as my other articles. I’m sorry. I have included “funny” pictures to compensate for this. I hope this cheap trick at least fools some of you into reading further because there are some nice memes looming.</p><p>With Bitcoin, Satoshi Nakamoto did not invent Proof-of-Work, he (or she, or they) invented Nakamoto Consensus — what a surprise. Nakamoto Consensus is the combination of PoW, the longest chain rule and blocks. More precisely, this is the combination of some arbitrary cost that has to be paid (PoW), a mechanism that ensures it makes sense to burn value (pay the arbitrary cost) and at the same time synchronizes an unorganized network, and something that binds together asynchronous transactions into synchronous blocks. This last thing was already solved.</p><p>In essence, there were hash signatures being printed by the New York Times already in 1995. The purpose of these signatures was to sign all the digital documents sent to Surety (read <a href="https://www.vice.com/en_us/article/j5nzx4/what-was-the-first-blockchain">here</a>). In principle, this was the first blockchain. It allowed someone to prove that a digital document is unaltered. To do so, the document was sent to Surety, a company that created a hash for this document, and you could use this hash to prove that your document is unaltered. In addition, you could also prove that your document was released at a certain point in time.</p><p>These points in time are the publication dates of the New York Times issues carrying the hash signature. This hash signed the collection in which the signature of your specific document was included. These collections are blocks, and each newly published hash in the New York Times verified the previous blocks. Even though it was not called that, it is a blockchain.</p>
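<p>The mechanics are easy to replicate. Here is a minimal sketch of such a hash-linked timestamping chain in Python, my own illustration of the idea and not Surety’s actual system:</p><pre>import hashlib, json, time

def make_block(documents, prev_hash):
    # collect the document digests into a block and chain it to the
    # previous block by hashing over (digests + previous hash)
    digests = [hashlib.sha256(d.encode()).hexdigest() for d in documents]
    payload = json.dumps({"docs": digests, "prev": prev_hash,
                          "time": int(time.time())}, sort_keys=True)
    return payload, hashlib.sha256(payload.encode()).hexdigest()

block1, h1 = make_block(["contract.txt contents"], prev_hash="genesis")
block2, h2 = make_block(["another document"], prev_hash=h1)

# publishing h2 (say, in a newspaper) commits to block2 and, through
# its "prev" field, to block1 and every document hashed inside both
print(h1)
print(h2)</pre>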
<p>The paper published by Haber and Stornetta goes back to 1991 and Surety was offering its services in 1995. So could Bitcoin have been realized back then already? No. This approach works well for signing digital documents because trusting Surety is on a different level for these signatures than it is for a cryptocurrency. The service offered by Surety was not decentral. So you had to trust Surety not to alter your documents.</p><p>Fortunately, there is no real incentive for Surety to do that. At the point in time when you transfer your document to Surety for signing, there is no real gain in manipulating your document. You will recognize instantly that your document was altered and you won’t use the received signature, since it points to an altered document. For Bitcoin this is different. If a single entity were issuing blocks, there would definitely be something to gain if this entity altered transactions, for example transferring all BTC to their own address and selling these before the market realizes the system is no longer trustworthy.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/631/1*DhFj1eatz8aAfeVbf7DoNA.jpeg" /><figcaption>Another invention that was already available before Bitcoin existed.</figcaption></figure><p>So how about PoW? Yeah, this was present already, especially in the form of hashcash, a way to make sending e-mails more costly in order to reduce spam. To send an e-mail you had to find a hash, an operation which costs time and thus makes sending spam mails expensive. The idea was good but it did not make it. Satoshi did not think about how to include this hashcash into the blocks of signatures but rather about how anyone can be Surety, the single entity that publishes collections of signatures. If just everyone can publish collections, how is it ensured that the network does not get flooded, and how do they agree on conflicts?</p>
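<p>Hashcash-style proof of work fits in a few lines as well. A simplified sketch (real hashcash stamps carry more fields, but the principle is just this):</p><pre>import hashlib
from itertools import count

def work(message, difficulty_bits=20):
    # find a nonce so that sha256(message + nonce) starts with
    # difficulty_bits zero bits: cheap to verify, costly to find
    target = 2 ** (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") &lt; target:
            return nonce

nonce = work("mail from alice to bob")
print("stamp found, nonce =", nonce)  # ~2**20 hashes on average</pre>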
<p>The first problem is obviously solved by hashcash or, say, Proof-of-Work. The second problem is much more complicated and is basically what makes blockchain revolutionary. And this is decentral consensus. How do independent actors agree on a single truth, even when some parts of this truth are non-beneficial to some participants?</p><p>For Nakamoto Consensus the final piece for this is the longest chain rule. It is a typical puzzle piece that fits in perfectly. For example, when modeling algorithms or physical systems, you often encounter a situation in which the major parts are there but the whole thing is shaky. Sometimes you find a piece that solves some remaining problems, but it creates a few new problems; then you find the next piece, it solves again a problem, but creates a new one, until you finally accept that this path does not lead to a consistent theory.</p><p>Many paths lead to solutions that create new problems, but then, starting over again, you finally find the piece that just solves all remaining problems at once. It solves the problem you are currently looking at and also some other problems on the backlog of problems, and you find out this puzzle piece is the missing magic sauce. The longest chain rule is such a piece. It solves the problem of how the network agrees on a single truth. At the same time, it solves the problem of why anyone should do so much work. It even solves the question of how the cost of an attack increases with the value of the network.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/226/1*IZVMSg1KeLQW_3kG28LlxQ.gif" /><figcaption>A typical reaction when a crucial puzzle piece is found</figcaption></figure><p>It also comes with the very nice feature that the network participants do not have to communicate with everyone. This does not sound like a big thing, but just imagine you had to talk to all your neighbors who live in the same street as you do. Yes, the annoying ones as well as the super boring ones. You have to do it, every day. Otherwise, your street is closed and nobody can get in or out anymore. Once everyone has talked with each neighbor, you can unfold the sidewalks in the morning, and at night the same happens: once you have agreed, you fold up the sidewalks. Alternatively, your street has agreed on a special rule, that whoever wakes up first unfolds the sidewalks, and once it is dark and you come home, you fold up your sidewalks. Then nobody has to communicate. It works no matter how many people live in the street. This doesn’t come naturally, especially not in networking.</p><p>So what does it mean for bitcoin? Well, anyone can check if a block is valid by looking at the previous block. And for conflicting blocks or chains of blocks (forks), anyone can check which is the longer chain and has more work put into it. And these checks can be done without talking to the whole network. Therefore the consensus mechanism scales with O(1), where this O is the Landau O that stands for complexity. O(n) means that the complexity of an algorithm scales linearly with n; in this case, n is the number of network participants. A network where every participant has to talk to each other participant scales with O(n²), which means doubling the participants quadruples the communication effort. Linear means double the participants give double the effort. Ok, great, but then O(1) means doubling the participants does not add effort? Yes.</p><p>Keep in mind this is only the algorithm; the real network needs a bit more, because you have to send the new blocks around, but this can be done in a way where one node sends to 1000 other nodes, and this scales very well. And only the ones who found new blocks need to do that kind of broadcasting. But wait a second, this means infinite scaling, right? Yes. But everyone knows Bitcoin does not scale. True, but when people say this, they are not talking about participants (miners) in the network but rather the number of transactions. So the number of users reading the blockchain scales very well, but the number of transactions these users can send per time unit does not scale at all.</p>
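<p>The difference between these scaling classes is brutal in absolute numbers. A tiny illustration (my own, with an arbitrary notion of messages per round):</p><pre># messages needed per round for n participants under each model
for n in (10, 1_000, 100_000):
    print(f"n={n}: O(1)=1  O(n)={n}  O(n^2)={n * n}")

# at n=100000 the all-to-all model already needs 10,000,000,000
# messages per round -- hopeless long before n gets really large</pre>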
<figure><img alt="" src="https://cdn-images-1.medium.com/max/889/1*zT4sUmY5CdBz4aIpjYpc9g.jpeg" /><figcaption>A funny image. Not related to my sex life.</figcaption></figure><p>Interesting, but isn’t this stupid? It sounds like a restaurant with infinite tables where only a fixed amount of food and drink orders can be handled. Yeah, we will discuss this also, but one can see here that this mismatch between active user scaling and passive user scaling is problematic. There are some solutions, like the Lightning network, where you open a lot of other restaurants and each of these restaurants sends one guy to the original Bitcoin restaurant, where he orders all the stuff the people in his restaurant want to eat. The customers in his restaurant can also order virtual drinks, which become real later, once they have finished all their virtual drinks and the waiter goes to the original restaurant to turn the virtual drinks into real ones.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/805/1*ChvEq0mfW5Qb9Tq0vh76Yw.jpeg" /><figcaption>Me doing research for this article. In the background are all kinds of analogies I have thought out. Like the one you just read above this image.</figcaption></figure><p>But let’s go back to complexity. It can also be applied to other things, it doesn’t have to be networking; a typical example is sorting. If you are a computer scientist, this part of the article might be very boring, but since this article is for real idiots, I have to go into detail. The task is quite easy, sort whatever list of words alphabetically or numbers by value — in principle it does not matter, sorting is the same for a list of whatever as long as there is a property for which a transitive order exists. However we are leaving the real idiot realm right now, so let’s stick to sorting numbers by value. The task is simple but there is an infinite number of approaches, some are faster than others. For example, Bogosort shuffles the numbers randomly and then checks if the order is right. Another example is Bubblesort. It is the prototypical example, because it is easy and it is how we as humans often sort, which is by swapping neighboring elements until the set is sorted.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/500/1*6Zm0brdV_x3JhPydhAkQew.gif" /><figcaption>Bubblesort animated — not that this is important to understand the article, but let’s just rest a while and warm our hearts by seeing algorithms at work. Image taken from Wikipedia.</figcaption></figure><p>The third example is Quicksort, which is the prototype of a divide-and-conquer sorting algorithm. In this example, the list is divided into 2 lists, one list with all elements smaller than an initially selected element and a list with the rest. Then again the resulting 2 lists are divided in the same way until there are only lists with single elements remaining. At this point, the order is found and the lists only have to be merged back together, reflecting which one was the sublist that contains the larger elements on each level. So this algorithm is much more complicated and we will look at complexity soon to understand why this extra effort might be worth it.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/280/1*1OXQp3jbpyUJHYMX5rrJOQ.gif" /><figcaption>Quicksort animated — With higher complexity comes better heart warming. Image taken from Wikipedia.</figcaption></figure><p>The last example is Sleepsort, which spawns a subprocess for each number that waits (sleeps) for as many seconds as the value of the number is and then adds the number to the final list. This is a joke algorithm, but it will be a very interesting example here. So let’s look at complexity to compare these algorithms. At first one needs to know that checking the order of a list has a complexity of O(n). In the best case, Bogosort is sorted after the first shuffle, which gives O(n) for the check. In the worst case, the complexity is infinite, because shuffling can go on forever at worst. But what is the average? As everyone knows the average between n and infinity is n•n! (factorial). No, I’m just kidding, we can calculate this. I will try to demonstrate how this can be derived easily: There are n! possibilities to arrange n elements. 
This is because for the first element there are n spots, for the second (n-1), for the third (n-2) and so on. All these possibilities combined give <br>n•(n-1)•(n-2)•…•1 = n!</p><p>So on average one needs to shuffle n!/2 times and do a check after each shuffle. For complexity we omit constant factors, so the complexity is n•n!. Bubblesort has an average of n², because in essence you have to compare each element with each other, which is n•n. In fact, you only need to do half of that, but we omit the factor of 1/2 again. This is much faster than Bogosort. The complexity of n² in such a system is an important lesson; we have seen it in the first part already for network communication. Quicksort introduces hierarchy and is a bit more efficient, because a lot of redundancy is prevented systematically. This yields a complexity of n•log(n), which is better, because the logarithm of n is always smaller than n, especially for large n.</p><p>There are a lot of optimizations of Quicksort, which are better in the worst or best case or need less memory. But we won’t cover that here, since it is quite specific to this type of sorting algorithm. Let’s get to the most interesting algorithm. The best of all sorting algorithms is Sleepsort, because it always has a complexity of O(n). Therefore it is much less work than all of these very sophisticated algorithms. You don’t even need to check the order; for each element you only have to spawn a single process, which is a fixed amount of computation.</p><p>This all sounds too good to be true? Exactly. You still have to wait really long until the solution is ready. Each number is multiplied with a factor, which gives the duration of the sleep. Before looking at the list you don’t know what an appropriate factor is, but determining one would not increase the complexity. What we cannot do is switch this factor from seconds to milliseconds or nanoseconds at will, because the sleep time must be significantly longer than the time a subprocess needs to spawn.</p><p>The sorting takes an amount of time equal to the largest number times the factor. So the complexity is the lowest possible of all sorting algorithms, but it does not reflect the time the sorting takes. It is only the computational complexity. A lot of computational effort is masked in the subprocesses, and the factor cannot be chosen freely: it needs to be high enough to separate the subprocesses from each other if the sorted numbers are small, and it needs to be high enough if there are a lot of numbers, so that the computer is able to spawn all subprocesses in a time shorter than the interval between two close numbers.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/500/1*bjAaZGhB5Vh1JXwqZOigEg.jpeg" /></figure><p>Now we have talked a lot about these algorithms, and one might ask what this has to do with blockchains. Well, we have seen that Nakamoto consensus scales with O(1), and this is no coincidence. The infinite scaling in Sleepsort comes from the assumption that infinite subprocesses can be spawned, and in Nakamoto consensus, infinite miners can search for new blocks in parallel. But that does not mean that infinite transactions can be processed.</p><p>In fact, the number of transactions cannot really be increased by a lot. In the end, you have to wait until a block is found, and this blocktime cannot be reduced, just like with Sleepsort. If you reduce the timing too much, the subprocess spawn time dominates and there is no sorting.</p>
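<p>To make the comparison concrete, here is a minimal sketch of three of these algorithms in Python (with threads standing in for subprocesses to keep it short; Bogosort is left out to protect your CPU):</p><pre>import random
import threading
import time

def bubblesort(xs):
    # O(n^2): keep swapping neighboring elements that are out of order.
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] &gt; xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def quicksort(xs):
    # O(n*log(n)) on average: split around a pivot and recurse.
    if len(xs) &lt; 2:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    smaller = [x for x in rest if x &lt; pivot]
    larger = [x for x in rest if x &gt;= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

def sleepsort(xs, factor=0.1):
    # O(n) spawns, but the wall-clock time is max(xs) * factor, and the
    # factor must stay well above the thread start-up time, otherwise
    # the output comes back scrambled.
    result, threads = [], []
    def worker(v):
        time.sleep(v * factor)
        result.append(v)
    for x in xs:
        t = threading.Thread(target=worker, args=(x,))
        threads.append(t)
        t.start()
    for t in threads:
        t.join()
    return result

data = [random.randrange(10) for _ in range(8)]
print(bubblesort(data), quicksort(data), sleepsort(data))</pre><p>Try shrinking the factor towards the thread start-up time and watch the “sorted” output fall apart; that is exactly the limitation we just discussed.</p>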
<p>In the same way, if the blocktime is too low, the computation is so fast that network communication dominates, and the finder of a block has the highest chance to find the next one as well.</p><p>The same goes for the network participants who have low latency (ping) to the finder of the last block: they find the next block with a much higher chance than others. In this case, the consensus doesn’t really work. So we have seen a similarity here between Sleepsort and Nakamoto Consensus.</p><p>Understanding why Sleepsort is not very efficient at sorting means understanding why Nakamoto Consensus is not very efficient at processing transactions. Another similarity exists between BFT (Byzantine fault tolerant, here we mean BFT with dPoS) consensus and Bubblesort, where each item has to be compared to each other or each participant has to agree with each other, thus giving n² scaling behavior. Sounds pretty inferior? Well, it’s not.</p><p>Because we have seen that the infinite scaling of Nakamoto Consensus is only for passive users (readers/miners). For active users BFT has a lot more scaling to offer, because you don’t have to offer a sufficiently long block time to give everyone a fair chance. In BFT the limit is how long it takes to synchronize all participants. Whenever you find ways to widen this bottleneck, you can handle more transactions. You can also separate some kind of overseers from the actual block producers. Cosmos has done this to keep the number of actual block producers, called validators, low. There are 100 of them, but all other stakeholders can participate as overseers by distributing staking power among validators. They are called delegators. With this approach, you don’t have to exclude everyone who is not in the top 100, but you still have the transaction throughput of a small validator set. Great. So BFT is just superior to Nakamoto Consensus? Well, it’s not that easy. BFT does not necessarily come with PoS.</p><p>I will not introduce BFT/PoS here in detail, but <a href="https://medium.com/coinmonks/cosmos-tendermint-explained-for-real-idiots-ab4305cbb41">I have another article on that</a>. Instead of using PoS, a BFT network can also be run with a fixed set of public keys that allows only the owners of the corresponding private keys to become a validator. This is usually called Proof of Authority (PoA), and it has an obvious drawback: it is permissioned. Bitcoin would never have become noticeable anywhere if it were permissioned.</p><p>So the permissionlessness of Bitcoin and Nakamoto Consensus is very important. I’d also say that the option to either buy Bitcoin or mine it was important for early adoption. But we wanted to compare Nakamoto Consensus to PoS BFT, and in contrast to PoA, PoS is permissionless. However, an interesting question is: Is it as permissionless as Nakamoto Consensus with PoW? I’d say no. In PoS it is possible that you cannot get into the system, because everyone already in it has decided not to sell any stake (coins). In this case the technology is permissionless, but the real network does not permit you to get in. In PoW this cannot happen, because you don’t need anyone from inside to build new chips that can solve the puzzle, i.e. mine BTC. However, this feature also implies an attack vector that does not exist for PoS: in PoW, if you acquire enough computation power, you can attack the network. In PoS, to do that, you need to buy the coins from network insiders. You cannot get them from the outside.
These insiders don’t want an attack on their network, or at most only if they can sell all of their stake first. However, on the way to a 67% share the price might rise astronomically, whereas the production of silicon chips can be scaled almost linearly.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/500/1*-ZBZ6UJWjbo-EprkOfRDOw.jpeg" /><figcaption>Two crypto experts discussing peculiarities of different rate limiting approaches. The bearded man is also an expert in quantum computing.</figcaption></figure><p>It makes sense to introduce the CAP theorem here, which I already mentioned in <a href="https://medium.com/coinmonks/iota-experienced-as-a-real-idiot-ec72e872f753">another article</a>. The CAP theorem basically states that we can’t have nice things in distributed systems. Either consistency (C), availability (A) or partition tolerance (P) is very limited, or 2 of them are limited at the expense of one feature being very strong.</p><p>We talked a lot about how many transactions can be processed, and this feature is availability (A). The more participants can be handled, the higher the availability. PoS wins vs. PoW. Permissionlessness has a lot to do with partition tolerance. Is it possible to fill the gaps if half of the network is gone? Can the network even continue if too many participants break away? PoW wins vs. PoS. To be precise, this is the killer feature of Nakamoto Consensus. It works if all but one node go offline. It works if there is a nuke creating an EMP wave, shutting down half the globe electronically. After the nodes come back online, they can join the network and see if the others have found some blocks.</p><p>Even if all wires in the oceans are cut, the continents can go on and produce blocks, compare who found the longest chain and merge the different realities back together once the connection is reestablished, or each continues separately. Well, in this specific example Nakamoto Consensus might not be the best way to merge different realities, but this is a story for another day. The only things that have better partition tolerance are hydras, where you can even cut a single node in half and they still work.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/355/1*U3j86I6ML5UKbGt4vpIFlQ.jpeg" /><figcaption>Hydras are very partition tolerant. Image taken from <a href="https://www.dndbeyond.com/monsters/hydra">www.dndbeyond.com/monsters/hydra</a></figcaption></figure><p>Ok, so PoW wins at partition tolerance and PoS wins at availability, but what about consistency? This feature basically means that there is a single reality that everyone knows and agrees with. Bitcoin has synchronous blocks, and these synchronous blocks contain asynchronous transactions. Sounds pretty consistent, and it seems to be the same with PoS. But there are forks, so sometimes there are concurrent blocks, and one chain will be discarded in favor of the one that turns out to be longer after more blocks are added. In PoS this is not necessarily a problem: fast finality is possible, which means that a block is final once 2/3 of validators have agreed, and everyone knows that it is final. In this case, consistency is very high, because there is a single truth everyone has agreed to, and you don’t have to wait for some blocks until it becomes almost certainly final.</p><p>To put this into perspective, let’s compare it to IOTA, where such a single truth never exists. There can be transactions known in one part of the network which are unknown in another part and vice versa.
After some time both sides get to know the transactions of the other part, but until this happens there are again new transactions created that are not known by everyone. This is because there are no synchronous blocks, there are only asynchronous transactions. Consistency is low, but availability is high. This is quite interesting, since IOTA uses PoW for limiting transactions just like Nakamoto Consensus.</p><p>In contrast, it does not have the longest chain rule, and thus availability is increased at the expense of consistency. So here we can see that there are different ways to use these building blocks of consensus mechanisms. Such a building block comes with specific properties, but how it works in the end depends on the whole combination of mechanisms.</p><p>Coming back to PoS with fast finality, we have seen that it comes with very high consistency, but what is the drawback? We have seen that there are often drawbacks when some desired features are strongly favored. This is the reason why the CAP theorem says that you cannot have all properties at max. So the price for very high consistency is a loss of liveness, which means that the chain halts if not enough participants are online. Fragile liveness is in essence low partition tolerance.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/500/1*lj_bW-WNEamF1hunJ_JbWA.jpeg" /></figure><p>Now we have seen that Nakamoto Consensus is strong at partition tolerance (P), good at consistency (C) and bad at availability (A), in contrast to PoS with fast finality, which is bad at P, strong at C and good at A. We can accept the fact that there is no single consensus mechanism that is suited perfectly for all problems. Just like there is no best car. If you want to transport lumber out of a forest, a Ferrari might not be the best fit, even though for a race it is great. The same applies here. So we have to define the use cases to decide what might be a good fit.</p><p>If you open the website bitcoin.org, you’ll see it say “Bitcoin is an innovative payment network and a new kind of money.”, which is great. The use case is making payments. What is the most important feature for payments? Availability. If you cannot make payments because the network is congested, then it is not a good payment network. Funnily enough, Nakamoto Consensus is not good in this regard, and thus Bitcoin is not good in this regard. Consistency is also important, because you want to be sure the payment is final and your trade partner knows this.</p><p>Looking at this, it is quite obvious that PoS with fast finality is much better suited for transacting payments. But wait, isn’t this proven wrong, because Bitcoin has the highest valuation of all cryptocurrencies? First of all, the market does not tell you if something is good at what it wants to be good at, it only tells you if people are buying it. And second, money can be used for something other than payments, namely as a store of value.</p><p>A high market cap can also mean that something is a good store of value, or that people believe it is. When you store value, you don’t make payments. To store value you don’t need high availability. What you need is partition tolerance and high security. Consistency is also not all that important, because it is fine if it takes some time until the whole network knows you have stored value or until you can access your value. So is there an analogy to an old asset?
Yes, it is gold.</p><p>Gold has been a store of value for a really long time now, and it is a decentralized asset. There is no central authority issuing it, and there is nobody who can mark your gold savings as invalid. Gold is quite heavy and not as easy to carry around compared to bills, for example. So if we look at gold like a distributed application, which is a bit ridiculous, but we do it for the sake of analogy and education, then it is extremely partition tolerant. The availability is not very good. The consistency is bad. Nature keeps very accurate track of how the gold is distributed among the holders of its value, but we cannot access this database of nature; we need to check individually whether a stack of gold is real gold and not filled with lead.</p><p>These bad marks at C and A are the reason why most people don’t hold gold physically but rather have bought gold on a second layer, where you hold a claim on a certain amount of gold. This allows you to sell the claim, and for the buyer it is not necessary to weigh any amount of gold and check if there is lead in it.</p><p>This second layer solution of trading gold claims increases C and A tremendously and decreases P. The latter is because the second layer solution can be destroyed more easily than the real gold. It is quite similar with second layer solutions for Bitcoin. And here we can see that the term digital gold is a really good description of what Bitcoin is. To make Bitcoin more available, second layer solutions are necessary. The same goes for gold. In both cases, the second layer trades away P (partition tolerance) for C and A. So we have seen now that the intrinsic properties of Bitcoin, or say Nakamoto Consensus, make it a good store of value and a bad payment processor.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/320/1*swhPQEZS0yDcet8X_nFvzA.jpeg" /><figcaption>McAfee doing prank calls impersonating himself as “future”</figcaption></figure><p>How about PoS? It seems to be a natural fit for payment networks, since it is strong at availability. With fast finality, it is also great at consistency. Does that mean PoW is good for a store of value and PoS is good for the transaction of value? Don’t fall into the trap of giving PoW in general the attributes that only Nakamoto Consensus has. Take for example IOTA, which has PoW but does not have a blockchain, rather a tangle, which gives high availability, weak consistency but good partition tolerance.</p><p>However, this is theory. In practice, there is a coordinator, which does not really make it decentralized, limits transactions, and of course partition tolerance is not given. But in theory, IOTA is an example of PoW being good at doing transactions. The big problem with comparing PoS and PoW directly is that both systems are not directly interchangeable. If we take Nakamoto Consensus and just plug in PoS, then the longest chain rule does not make sense at all. Vice versa, plugging cryptographic PoW puzzles into BFT consensus does not really make sense. So we correct ourselves: BFT PoS with fast finality is good for payment networks, and Nakamoto Consensus (including PoW) is good for a store of value.</p><p>But can payment processing and store of value really be disconnected? Can they be seen as distinct things? Isn’t it ultimately necessary for things which are used for payment to hold value, and isn’t it ultimately necessary for things which store value to be transferable? Yes, but only in a static picture. If we look at the time evolution, it clears up.
Again we can compare to real world things. We have introduced gold already, now let’s introduce fiat, for example the Dollar. Both the Dollar and gold are a store of value and something you can use to make payments. Are both features implemented to the same extent? Certainly not. The Dollar is a much worse store of value, since it depends on the existence of the United States of America, whereas gold only depends on the laws of physics remaining unaltered. For payments it is the other way around. Gold is heavy, complicated to split, not as easy to check for validity and harder to count.</p><p>So here we have two real world examples that are both used, make sense and still have different characteristics. If we look at time scales, we see that payment processing with fiat money can be done really fast, in contrast to gold. If we ask on which time scales both function as a store of value, then it is clear that gold performs well on the scale of 1000 years, while there are many examples of fiat money only 100 years old that has no value today. It is no coincidence that PoS systems resemble the characteristics of said fiat money and BTC resembles the characteristics of gold. Does it make sense to have both systems? Yes, absolutely. Taking out a chunk of gold and cutting off a small piece to pay for your train ticket in the morning is analogous to using a Lambo to carry lumber out of a forest.</p><p>There might be many who argue the gold standard should never have been abandoned, but there won’t be many who will say gold is not limited by its physical properties. A monetary system based on coins and banknotes allows for a much wider design space than a physical asset like gold. So whoever argues that the gold standard should never have been abandoned does not take into account how important it has been to actually conduct monetary policy on the one hand, and to be able to process daily payments really fast on the other hand. Furthermore, our fiat money system is not really based on coins and banknotes anymore but is rather an electronic credit money system, which has an even wider design space and allows for faster transactions. So does blockchain have an even wider design space and allow for new properties? Partially. When it comes to decentralization, yes, but there are limitations given by cryptography and its demands.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/517/1*54nEenV_dfng8wvE-AvKOw.jpeg" /></figure><p>Nakamoto Consensus is much heavier on these constraints than PoS systems. Again, this is a resemblance to gold vs. fiat. If we look at the way the different systems are secured economically, we find another similarity. In PoS economic security comes via penalties, and in Nakamoto consensus it comes via intrinsic mechanisms. The nothing-at-stake problem does not exist in the latter. For those who do not know it: this problem describes the circumstance that in naive PoS systems there is no reason to pick a specific fork of the blockchain. It makes economic sense to follow each fork of the chain. Since this is very problematic for the users and makes the network pointless, this behavior must be disincentivized by protocol-defined penalties. From now on we will call these penalties punishment.</p><p>For Nakamoto Consensus (in this case it is sufficient to say PoW) this problem does not exist, because the work can only be spent once and therefore only on a single fork of the chain. This is not a property designed by the protocol but rather an implication of the physical world.
Coming back to gold and fiat, we can see that the security of gold comes from physical properties, and the security of fiat comes from penalizing misbehavior. It can make economic sense to print your own banknotes or to hack bank systems to steal money. But we punish whoever does so and thus make the system economically secure. Even more important is the 51% attack (67% for BFT), which must not make economic sense. If it makes sense in a given network to form a cartel or even buy 51%, and after the attack you make a net profit, then the network is doomed.</p><p>For Bitcoin this implies buying a lot of miners or renting mining power and then transferring all Bitcoin to your own address in order to sell them into all open orders on the internet. The goal is to make more money than you spend on mining power. After such an attack, smart money will move out of bitcoin, and only the maximalists will stay and tell you that everything is fine. This is why it is important to cash out fast, before everyone realizes bitcoin is now worthless. However, an attacker can calculate what this amount of mining power costs and how much there is to gain, and if this equation always yields a net loss, nobody will perform the attack. Of course there are more problems: on the cash-out side, there might be a problem getting all the money from the exchanges before the attack becomes public, and on the preparation side there are a lot of ASICs to be purchased, which might draw a lot of attention.</p><p>But this doesn’t matter, as these mechanisms should not be the final barriers against such an attack. The attack must be infeasible regardless of such external problems. So how is this achieved? For bitcoin there is a network value, the market cap, which is currently $140 bn. Then there is something like the value of all miners. If we assume this value is $140 bn as well, then you need to spend an additional $141 bn to get 51% of the mining power and to steal all the bitcoin, worth $140 bn. Since there are not enough open buy orders, it is not possible to sell the bitcoins before bitcoin becomes worthless because of this attack. But what ensures that the mining power is worth more and more as the bitcoin market cap grows?</p><p>This is why there is difficulty and why it increases as more and more miners are connected. Mining gives block rewards and fees. So if the bitcoin value increases, then the mining rewards increase in the same way. This means all miners make more profit now. Now it makes sense to buy more miners and supply more mining power. This in turn increases difficulty, which makes mining less profitable. But how is it assured that the mining equipment value rises in the very same way as the bitcoin price? Well, if all other parameters stay the same, then this is a consequence of all involved equations being linear. If the price doubles, then the return from a miner doubles, and there will be more miners until the marginal return from an additional miner no longer exceeds the marginal cost of mining. I’m a bit afraid we are leaving the realm of real idiots here again. The key takeaway is that the amount of miners doubles when the price doubles, as long as there are no other effects, like changes in the electricity price, new technology etc.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Gl9c5y3YJCSTzKWNdCZFoA.png" /><figcaption>This plot shows the bitcoin difficulty and the bitcoin price over time. In green is the difficulty with scale indicated on the right and in orange is the price with scale indicated on the left. (Plot created on data.bitcoinity.org)</figcaption></figure>
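<p>To see this linearity at work, here is a toy equilibrium model in Python. It is only a sketch of the marginal return = marginal cost logic above; all numbers are made up for illustration (except that Bitcoin finds roughly 144 blocks per day):</p><pre>def equilibrium_miners(price, block_reward=6.25, blocks_per_day=144,
                       cost_per_miner_per_day=5.0):
    # Daily reward pool in $. It is shared pro rata by hashrate, so with
    # m identical miners each earns pool / m per day. New miners join
    # until pool / m drops to the daily cost, i.e. m = pool / cost.
    pool = price * block_reward * blocks_per_day
    return pool / cost_per_miner_per_day

for price in (10_000, 20_000, 40_000):
    m = equilibrium_miners(price)
    print(f"price ${price:,}: ~{m:,.0f} miners at equilibrium")</pre><p>Doubling the price doubles the equilibrium number of miners, which is exactly the linearity the difficulty curve reflects.</p>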
<p>Of course this is a very simple picture. In reality there will be different marginal costs for different types of miners and different mining locations. Marginal return will not exactly match marginal cost, because there must be a spread, which mostly depends on what time-to-value investors in mining can accept. For real estate investments in first world countries with a AAA rating it is fine for investors to accept 30 years time-to-value, but nobody with a basic understanding of economics will start a bitcoin mining operation with 30 years time-to-value. If we look at the plot above, we can see that the price and difficulty curves match, but on closer inspection we see that the difficulty increases by roughly 2 decades whenever the price increases by only 1.</p><p>This means difficulty rose from 100 to 10,000,000,000,000, or 10² to 10¹³, while the price has only increased from 0.1 to 10⁵. So here are two lessons to be learned: 1) Always look at the scaling of a plot. You can always make two curves match. The question is, does the scaling make any sense? And lesson 2) there is something that makes difficulty increase much faster than price. The answer is technological advance. At the beginning there was CPU mining, and its source code was improved over time until there was the first code for GPU mining, which increased the hashrate per $ significantly. This code was improved as well, and of course silicon chip technology itself has improved, but not at such a fast pace as the crypto community has improved mining hardware. The ASICs came out, and the process node went from 120 nm down to 16 nm and might become even smaller in the future.</p><p>So in this regard a lot happened. It is still remarkable that you cannot see spikes for these milestones in the difficulty curve. The same goes for halving events (“the halvening”). Whenever the block reward of bitcoin was halved, many went crazy and thought insane things would happen to the difficulty, but looking at the plot, you can’t recognize these events without knowing where they are (yellow lines). The reason for that is the heterogeneous distribution of miners. If new, more efficient miners are put into service, the difficulty rises, but many shut down their old, now inefficient miners, and the difficulty does not rise anymore. This is why the curve is so smooth. Of course the difficulty rises because of these events, but it takes time until enough old miners are driven out and the new marginal cost has established a new balance with the marginal return. Still, halving the reward is important for the token economics of bitcoin. It makes the supply of bitcoin ultimately limited. Without it, bitcoin would not necessarily be deflationary.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/654/1*1JQ9qDIiWAgkIyeOIVPGrg.png" /><figcaption>Not only is he in for the technology but also to learn about token economics and unforgeable costliness.</figcaption></figure><p>Maybe you are asking now why such a long story about mining is here. The reason is that in order to understand the economic implications of Nakamoto Consensus there is no way around these things. What we have learned now is the relation between miners, security and throughput of the network. In the first part we learned that there is no increase in throughput if there are more miners, and in the last part we learned that having more miners is incentivized when the network valuation has increased.</p>
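<p>We can also condense the attack arithmetic from above into a few lines. This is just a back-of-the-envelope sketch with the article’s illustrative figures; the size of the open buy orders is a made-up assumption:</p><pre># Toy 51%-attack profitability check, all numbers in $bn (illustrative).
network_value = 140.0   # market cap: the most an attacker could steal
miner_value = 140.0     # assumed replacement value of all existing miners

attack_cost = miner_value + 1.0    # out-hash the entire honest mining base
open_buy_orders = 5.0              # you can only sell into real bids (assumed)

proceeds = min(network_value, open_buy_orders)
profit = proceeds - attack_cost
print(f"attack profit: {profit:+.1f} $bn")  # deeply negative, attack is irrational</pre><p>As long as this number stays negative for every realistic choice of parameters, nobody performs the attack, and the linear coupling between price and mining value is what keeps it negative as the network grows.</p>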
<p>Also, the value of the miners must increase along with the network in this case.</p><p>For Proof-of-Stake these things do not matter at all. These economic relations are not given. In PoS the coins are the (virtual) miners, so the value of both cannot decouple. However, it is necessary to prevent a very similar attack, in which stakers double spend and afterwards instantly sell off their stake. This is easily solved by freezing the coins that are “virtual miners” for some time. If harm is done by stakers, the frozen coins will be destroyed to punish the bad behavior. The same goes for following different forks, to solve the nothing-at-stake problem. Mostly I’m presenting the solutions chosen by the developers of Cosmos. This is because Cosmos looks like the most decentralized and least attack-prone of all PoS networks to me. However, this is not the important part; maybe there is something better than Cosmos, feel free to convince me in the comments.</p><p>What is important is the similarity of fiat vs. gold to PoS vs. PoW. For gold and PoW, physical resources are devalued in case of bad behavior, whereas in fiat and PoS, punishment is applied mostly by devaluing or removing virtual assets. But isn’t it better to have physical assets as collateral? Isn’t it safer? It depends on how you define “safe”. Virtual assets cannot be swept away by floods; this has happened to Bitcoin miners already. Virtual assets cannot be destroyed in a hurricane. But virtual assets can be hacked when stored in an unsafe way. It is always harder to carry away physical assets. Nobody will doubt that the biggest heists of the 21st century will all be done with virtual assets.</p><p>But this is just one aspect of safety. Another one is the corruption of authorities. Gold is the undisputed king here. Nobody can corrupt physics. Changing the consensus parameters in a PoS system is much easier than for PoW. On the other hand, hashing algorithms might be broken by quantum computers. For PoS only the private keys might be broken by quantum computing. But this can also happen to PoW systems, and it is much easier to increase the key length than to switch the hashing algorithm. But we have also slid over from (physical) safety to (IT) security. Regarding all of these things, there is only one thing that can be said for sure: whoever says PoS or PoW is safer no matter what, is wrong. Maximalists tend to extremes, and the matter is more difficult than they want to believe. Deciding which system is safer is mostly deciding which scenario is more relevant in your opinion.</p><p>But some people say PoS doesn’t work at all. Why do they say that? Well, I will give two examples:</p><p><a href="https://github.com/zack-bitcoin/amoveo/blob/master/docs/other_blockchains/proof_of_stake.md">zack-bitcoin/amoveo</a></p><p>The point made here works as follows: If someone bribes validators (stakers) to destroy a blockchain, it makes the most economic sense for them to accept the bribe and destroy the blockchain, even if the sum used for the bribe is tiny. The author basically describes a prisoner’s dilemma. If the blockchain is destroyed, then if you accepted the bribe, you lose all stake but get the bribe. If you did not accept the bribe, you have nothing. So in this scenario it is better to accept the bribe. In the other scenario, the blockchain is not destroyed, and in both cases you have your stake, but if you accepted the bribe, you also have the bribe. So in both scenarios, it is better to accept the bribe to maximize your outcome.</p>
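<p>We can write this naive payoff table down in a few lines of Python. The bribe size here is just a placeholder; the dominance argument works for any positive bribe:</p><pre>stake, bribe = 1.00, 0.0045   # payoffs in units of your stake

payoffs = {
    # (accepted bribe?, chain destroyed?): what you end up with
    (True, True): bribe,           # stake is worthless, you keep the bribe
    (False, True): 0.0,            # stake is worthless, no bribe either
    (True, False): stake + bribe,  # chain lives, you pocket the bribe on top
    (False, False): stake,         # chain lives, stake keeps its value
}

for destroyed in (True, False):
    gain = payoffs[(True, destroyed)] - payoffs[(False, destroyed)]
    print(f"destroyed={destroyed}: accepting is better by {gain:.4f}")</pre><p>In both rows, accepting wins by exactly the bribe, which is why on paper it looks like a dominant strategy.</p>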
<p>But in the case of destruction you might lose a big stake, in contrast to the case of non-destruction.</p><p>So the author factors in the probability that your action is the one that makes the scenario flip. This probability allows one to calculate a ratio of network valuation to bribe sum. In the example, only 0.45% of the network valuation is necessary to bribe 100 validators who stake 90%. What is constructed here is a Nash equilibrium, just like for the prisoner’s dilemma, where it makes sense to defect against the other prisoner to maximize your outcome. Even if both defect against each other, there is no way to improve an individual outcome by just switching a single decision (this is what makes it a Nash equilibrium); both need to switch at the same time and cooperate to get to a better outcome. And this is exactly what validators do all the time. They work together. They help each other and work to keep the chain online. I’m mostly arguing here why a PoS blockchain can work even if the assumptions of zack-bitcoin are correct. In fact they are not, but let’s get to this point later.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/469/1*KpFXlkMXK79RU7OVJgsfeg.jpeg" /><figcaption>Accepting the bribe vs. rejecting it.</figcaption></figure><p>So the validators are constantly working together and cooperating. In case of a bribe, the validators have known each other for quite some time and have communicated quite a lot. Now the briber wants to destroy the blockchain, either via consensus or via governance. For governance it is quite easy: since everyone can see how the votes are progressing, there will be someone who is clearly the tipping voter, the one responsible for the destruction of the blockchain. For this voter, a bribe of 0.45% of their stake is never enough to give the tipping vote. If this vote is not given, then for the next tipping voter 0.45% is not enough, and so on. Accepting the bribe only makes sense as long as the vote fails. Only if the bribe is higher than the stake does it make sense to be the one who fills the quorum.</p><p>The other scenario is consensus. So it is not governance that destroys the blockchain but the consensus that creates a new block. Here the validators will be bribed, and of course they talk to each other, and sure, they will tell each other that they won’t take the bribe. Then the block comes for which the bribe is given, and a majority agrees to destroy the blockchain. They have betrayed the others, but in fact they have done it because they don’t want to be the suckers who did not even get the bribe. Basically, such a scenario is more like blackmail than a bribe, because in fact the actors lose money; they just don’t lose everything if they submit to the blackmail.</p><p>So let’s assume here the worst case has become real. A majority lied to the rest, saying they won’t take the bribe, but they did and destroyed the blockchain. What happens next? The honest validators will be very pissed, and they will restart the network without the dishonest validators. The users might now choose to use the fork with only &lt;1/3 of validators remaining or pick the destroyed fork, which is not really an option. The majority (2/3) who destroyed the network might relaunch a non-destroyed version, but now we have arrived at a scenario where 2 networks compete: one in which the validators have proven that they are honest and one in which they have proven they are dishonest.
This is actually good news, because it is a filter mechanism which sorts out cartel-forming, dishonest validators.</p><p>Of course for investors this might not be the best news, because the network will lose a lot of valuation and might take a lot of time to get back to the old valuation. But, and this is the most important part: the attack has failed. The attack has only removed the dishonest validators and taken away 99.55% of their investment. The network was overvalued though, because more than 2/3 of its validators were prone to dishonest behavior. Ok great, but why does the prisoner’s dilemma not explain this?</p><p>The prisoner’s dilemma is a simple and static scenario, and the bribe argument constructs a situation exactly equivalent to it. But again the evolution of time makes a difference; we will soon see why. The author (zack-bitcoin) also invokes the “tragedy of the commons”, which is the theory of why the dishes never get done in shared flats. But in reality there are rare examples of shared flats in which the dishes are done. And this is because the flat mates cooperate over time, and if there is a majority who never does the dishes, then a fork will be performed, in which the other flat mates label their dishes and do their own dishes and separate the commons, so that this tragedy ends.</p><p>The tragedy of the commons is only a real tragedy if the commons cannot be distinguished. The climate crisis is such an example: carbon dioxide emissions are not distinguishable. It is not possible to label carbon dioxide and then relocate the natural disasters caused by it proportionally to the emitters. However, as time comes into play and participants cooperate over multiple instances of decision making, the situation looks different. If you look up research on the prisoner’s dilemma where many instances of the game are played and different strategies compete against each other, the strategies that punish others for uncooperative behavior and cooperate with the cooperative ones are the ones that succeed.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/500/1*1vvSCDKbmWHHOuOwcJBBwQ.jpeg" /><figcaption>When one meme is not enough to illustrate accepting bribe vs. rejecting it.</figcaption></figure><p>Ok great, so there is hope for PoS? Well, there is even more hope, because there is a crucial assumption being made that is wrong. In fact, there is punishment for trying to break consensus. Zack-bitcoin assumes there is no punishment, which is only right for old and naive PoS systems. But in Cosmos, stake is slashed and validators are jailed. With this punishment, the calculated 0.45% no longer holds. If the punishment for trying to destroy the blockchain is high enough, then the risk is too high and there is no Nash equilibrium for defection. To use more of these loaded economic terms, we can say there is a Schelling point at which validators cooperate and won’t accept bribes. If you want to understand more, read about Schelling points, the Nash equilibrium, the prisoner’s dilemma, the tragedy of the commons and of course King Midas, who turns everything into gold, or in a modern version bitcoin, without doing PoW.</p><p><a href="http://www.truthcoin.info/blog/pow-cheapest/">http://truthcoin.info/blog/pow-cheapest</a></p><p>This is another piece which describes problems of PoS. But it does not say that PoS doesn’t work. It says that it cannot be more efficient than PoW. It is a very good read, though it is very long.
Together with the following piece it might be one of the best articles about what makes PoW viable: https://nakamotoinstitute.org/shelling-out/ <br>I highly recommend reading these articles.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/620/1*E_eekW5FwyR-26jgYx2-9w.jpeg" /></figure><p>The most important takeaway is that PoS “costs” the same. It costs the same on paper. Where PoW burns electricity, PoS locks away capital and prevents investment in other fruitful endeavors. Does this make any difference? It actually does. Imagine a world where the climate crisis brings almost all nations together to ban things that emit too much carbon dioxide (CO2). Fossil fuels might be banned, and PoW blockchains as well. In this scenario there is no need to ban PoS, since PoS only locks capital away from CO2-producing investments; PoW, in contrast, burns electrical energy to secure its ledger. Of course Bitcoin maximalists don’t want to hear this argument. And it is not really important here.</p><p>It is mostly to understand that even though something costs the same on paper, for real world implications the type of cost might be very important. Let’s investigate this further. Locking away capital also has another interesting property. Imagine a PoS blockchain starts with a $10 million valuation, and over time users come, do transactions, buy tokens, and the valuation goes up to $100 million. Now the frozen stake has increased in value; more capital is locked away, but no real world resources were burned to do this.</p><p>In a PoW blockchain this is not possible, since real miners must be bought and real electricity must be spent. The value of the miners is decoupled from the blockchain valuation. Many opponents of PoS will tell you at this point that therefore only PoW really commits to the future, and for PoS there is nothing really committed. Another argument here is that PoW stores the burned electricity as value in the ledger. Let’s be honest, this is a false belief. If people are not willing to buy Bitcoin, the price will fall.</p><p>There is no reason why anybody would say “there is this much electricity already put into this blockchain, I’m willing to pay more for a Bitcoin, it is undervalued.” For stocks this is usual behavior and what Warren Buffett does quite often. He realizes the market cap of a company is lower than the value of the objects inside the corporation behind the stock. He then buys. These valuable objects are mostly real estate, intellectual property and long-lasting contracts. These are things you can take out of a company and sell individually. The burned electricity cannot be taken out of the Bitcoin network. If people reduce their demand for Bitcoin, nobody will be willing to pay more just because there is that amount of electricity in it.</p><p>People will buy coins because it is faster than mining the same amount of coins, or maybe because it is cheaper. In the latter case mining is declining. What keeps the price up is the prospect of the future. And stake is just as good at harvesting block rewards as miners are. Selling stake might be different from selling miners though. If you have ASICs and these are used for the biggest PoW chain, as is the case for Bitcoin, then it is really hard to sell the miners if the interest in the blockchain is declining. The same goes for PoS stake. In contrast, if there are other blockchains, maybe bigger ones, which use the same hash algorithm, then it is easy to sell the miners.
For PoS there is never another blockchain that accepts your stake. So at this point we see that for an investor it really doesn’t matter how much electricity was put into a blockchain or how much stake was bonded in the past. The important thing is the future. For the future, PoS relies much more on the belief of the market. Also, it can scale to 10x or 100x in value without spending a lot of physical resources.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/500/1*U9EuaLZYnRv7jI9HxlHjIA.jpeg" /><figcaption>Stages of crypto anarchist’s enlightenment</figcaption></figure><p>The thing that makes people behave in PoS is punishment, not the fear of devaluing mining equipment as in PoW. This is how the nothing-at-stake problem and long-range attacks are solved. The punishment easily scales with increasing valuation. Investing in mining equipment binds miners to a certain cost for producing a block. When staking, you don’t commit to a certain price for producing more stake. You hope that the rewards for producing blocks are worth it if the price stays the same. In a scenario where the blockchain grows and many now use it, it is not necessary to burn more electricity. Maybe some validators will think, wow, my stake is now worth that much, I need to invest more in IT security. But this scales very well.</p><p>So a PoS chain can easily adjust to much more demand, which is a consequence of what we discussed earlier in the article, that transaction throughput can scale, but it is also a consequence of virtual punishments vs. the binding of physical miners. So for technical network properties as well as for token economics, PoS scales much better. The capital locked away can come from an intrinsic increase in valuation; people will still be afraid to lose that value and won’t misbehave. For a mining operation to be profitable, it is necessary to cover the costs of electricity. The users of the blockchain have to pay for this, either in transaction fees or via inflation.</p><p>In contrast, the punishment in PoS does not have to be paid for by the users. Wait a second? Many will object to that. We have learned MR=MC from Paul Sztorc’s article, and the risk of punishment will be factored in when investing in staking coins. Whoever runs a validating operation must factor in this risk and hand the cost over to the users in the same way the miners do. This is the argument we have learned, deeply condensed into MR=MC. It is flawed if presented in such a reduced version. To understand more, we need to distinguish 2 different sources of punishment. The first source is attacks like long-range attacks, following forks (the nothing-at-stake problem), short-range attacks (corresponding to the 51% attack in PoW) and similar things; the bribe example also belongs to this category.</p><p>The second source is being punished for going offline, server malfunctions etc. If you run an honest server, only the latter is relevant, and only this risk needs to be pushed to the users. If you are honest, you already know that the first source of punishment will not affect you. This punishment is for the validators who want to gain from misbehavior. If you do not belong to this type, there is no risk involved. Since you know for yourself whether you belong to this kind, you don’t have to factor in this cost.</p>
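<p>In numbers, the asymmetry looks roughly like this. A sketch with Cosmos-style parameters; the slash fractions are of the order used on the Cosmos Hub, and the downtime probability is a made-up assumption:</p><pre># How an honest validator prices punishment risk (illustrative sketch).
slash_double_sign = 0.05      # severe: equivocation, short-range attacks
slash_downtime = 0.0001       # mild: missing too many blocks in a row

p_double_sign = 0.0           # an honest operator knows this is ~zero
p_downtime_per_year = 0.5     # assumed chance of one downtime event a year

expected_cost = (p_double_sign * slash_double_sign
                 + p_downtime_per_year * slash_downtime)
print(f"expected punishment cost: {expected_cost:.4%} of stake per year")</pre><p>Only the tiny downtime term survives, so the cost an honest validator passes on to its users is far below what a naive MR=MC reading of the total punishment risk would suggest.</p>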
<p>But what if a malfunctioning version of the blockchain code is uploaded to GitHub, and I as a validator pull and run it and get slashed (punished)?</p><p>Well, in such a case a lot of other validators will also misbehave, because they downloaded the same software. If &gt;33% of the nodes fail, the network will halt, and there will be a software patch and a rollback to the block before the validators were slashed. There is no reason to apply this punishment. If you look at the DAO fork of Ethereum, there is even an example in PoW of a much less severe case where the network still decided to roll back. So don’t think there can’t be rollbacks in PoW. This separation of 2 sources is the reason why in the Cosmos blockchain there is a distinction between double signing (which covers short-range attacks) and nodes going offline when it comes to the severity of punishments. The punishment for the former is much harsher than for the latter.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/500/1*JK-TFikCOxhnaDwHAf0duw.jpeg" /></figure><p>Now that we have understood this, we can finally draw a comparison to fiat again. Fiat money is also secured by punishing misbehavior. Therefore fiat money can be printed without buying a lot of valuable gold and putting it in bunkers. If you fake fiat money, you are fined and go to jail. This is why fiat scales much better than gold. The question of whether PoS works or not is not a question of weak subjectivity, of nothing-at-stake or of unforgeable costliness; it is the question of whether you believe the gold standard can be overcome by introducing punishment.</p><hr><p><a href="https://medium.com/coinmonks/proof-of-work-vs-proof-of-stake-for-real-idiots-a23ac4565649">Proof-of-Work vs. Proof-of-Stake for real idiots</a> was originally published in <a href="https://medium.com/coinmonks">Coinmonks</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Ultimate Cosmos Delegation Guide for real idiots]]></title>
            <link>https://medium.com/coinmonks/the-ultimate-cosmos-delegation-guide-for-real-idiots-87ebc6518145?source=rss-8e91a3236ca6------2</link>
            <guid isPermaLink="false">https://medium.com/p/87ebc6518145</guid>
            <category><![CDATA[staking]]></category>
            <category><![CDATA[blockchain]]></category>
            <category><![CDATA[cosmos-network]]></category>
            <category><![CDATA[dpos]]></category>
            <category><![CDATA[proof-of-stake]]></category>
            <dc:creator><![CDATA[Patrick Wieth]]></dc:creator>
            <pubDate>Fri, 26 Apr 2019 11:32:32 GMT</pubDate>
            <atom:updated>2024-03-08T08:43:01.781Z</atom:updated>
<content:encoded><![CDATA[<h3>The Ultimate Cosmos / Osmosis Delegation Guide for real idiots</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*lB83A0uTMtu51VoSl9MdFg.png" /><figcaption>Phew! If I didn’t know better, I would have thought this an AAA space game.</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/512/1*RZ2T0EqxCYlGaK-wq1PEFQ.jpeg" /></figure><p>The first hub of Cosmos has launched, which means Atoms can now be delegated and, since a few days ago, are even transferable. So many people will ask themselves “How do I delegate my freshly acquired currency to reap some fine interest?”.</p><p><strong>UPDATE 2024<br></strong>I wrote this guide when Cosmos came out in 2019. So it is quite old, but people are still reading it. This is why I have decided to update it and bring it into proper shape for the current time. The most important update is:<br>Cosmos might be forked by Jae’s followers; in this event it is absolutely crucial that you do not stake with central exchanges. On most of these you will not get any forked coins, which can be a great loss for you. Besides that, central exchanges have much worse conditions.<br>And now enjoy the article ;-)</p><p>This article consists of 3 parts: in A) I will give a short introduction and try to lower the level with bad jokes and related imagery, in B) I will try to explain how to pick the right validators for the delegations, and in C) I will give an explicit guide on how to delegate. For most readers, the last part will not be interesting, since it is much easier with a Ledger Nano and most will go this route, for which enough tutorials exist. But for some others, it makes sense to learn how to do offline transaction signing with Cosmos.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/600/1*NOtotkpSifaIvblGgLOvmw.jpeg" /><figcaption>A typical Cosmos validator in preparation of uploading his consciousness to the cloud.</figcaption></figure><p><strong>A)</strong> Since we are not all Linux experts who can’t wait to finally upload their consciousness to the cloud, many of us will be quite overwhelmed by the complexity of the topic. This is why I try to write this guide for real idiots.</p><p>To understand delegation, one should first understand Cosmos. I have written an article about this more than a year ago (<a href="https://medium.com/coinmonks/cosmos-tendermint-explained-for-real-idiots-ab4305cbb41">Cosmos / Tendermint explained for real idiots</a>), and it might be a good entry point. Otherwise, if you already understand why Cosmos, let’s think about why delegation. The main task of delegators is to distribute voting power among validators. This gives decentralization to Cosmos. There are not just 100 validators doing their thing; all other holders of Atoms can participate in the consensus as well, not by running servers, but by controlling validators. So the most important part is to pick validators wisely and withdraw the stake from them if they act badly. Acting badly comes in different colors and strengths. The worst is double signing, which is in essence an attempt to steal money; less bad is going offline and not participating in the consensus; and there are also missed blocks, which are not optimal but can happen once in a while. Still, it is a good sign if a validator does not miss blocks.</p><p>Of a different color is the behavior of a validator.
Double signing and missed blocks are measured automatically by the network and are very transparent; let’s call them hard fuck-ups. The behavioral things, let’s call them soft fuck-ups, are mostly the actions and decisions of a validator in the ecosystem, and these are not automatically measured by the network. One extreme example is building a cartel. For example, the biggest 3 validators could join forces and halt the network. This is really bad for the ecosystem but can be exploited by these 3 validators. For example, they could short Atoms, halt the network, wait until the price collapses and cash in the shorts.</p><p>The job of the delegators is to take delegation away from such validators and support all the others, so that the former lose voting power until they can’t form a cartel anymore. Another thing might be that validators block advances in development and upgrades to new software, because it is not beneficial to them, even though it is beneficial to the whole ecosystem. One typical example is the Bitcoin miners, who did not want bigger block sizes in order to keep the transaction fees high. So there are a lot of possible scenarios, and the job of the delegators is to detect them and enforce transparency. This is why decentralization in Cosmos is more than 100 validators.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/480/1*ivpgBjb_Hmw4pUGYc044cg.gif" /><figcaption>Live footage of the cosmos launch on March 13th</figcaption></figure><p>So how does one become a delegator? If you are an Atom holder, all you have to do is delegate your Atoms to one or many validators. If you are not an Atom holder, you have to acquire Atoms beforehand (as usual this is not investment advice, be aware of the South Sea bubble, etc.). Delegating means that your Atoms cannot be transferred for 3 weeks, but you can redelegate to other validators instantly. So if you are a daytrader or plan to sell your Atoms soon, it does not make sense to delegate, but if you want to HODL anyway, delegating is a no-brainer. But what are the benefits of delegating? First, you are allowed to vote on governance proposals on the Cosmos hub, which is great, because everyone loves having an opinion.</p><p>But the best part is that you receive Atoms over time. This interest, reward or inflation heavily incentivizes delegating and running validators, and it is why we want to delegate over long time periods. So how does this free money work? It consists of 2 parts: the first is yearly inflation and the second is fees from transactions. Sounds like Bitcoin? Yeah, it is similar, but for Bitcoin the yearly inflation becomes less and less until there are only fees left. For the sake of analogy, staking coins (delegating) in a PoS system is like mining in a PoW system. Of course there are differences. First, you can’t use your stake on other coins, which works with miners in many cases. You can’t just produce more miners like GPUs; you must buy Atoms from other holders to increase your “mining” capacity. And you don’t need any technical knowledge as a delegator, only validators need that. Well, you can also invest in a mining fund, then you don’t need technical knowledge either, but the premiums you pay are often absurd.
However, this article is not about comparing PoW and PoS; this might be an interesting topic for another story (it has been written in the meantime <a href="https://medium.com/coinmonks/proof-of-work-vs-proof-of-stake-for-real-idiots-a23ac4565649">here</a>).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/640/1*21U0EKpW3fQyHwCspeMFnA.jpeg" /><figcaption>Of all successful communist countries the most successful of them has a very successful leader who appreciates successful crypto projects like Cosmos.</figcaption></figure><p><strong>B)</strong> So let’s look at the 2 parts, first the fees. This is very easy to explain: whenever someone sends a transaction, a fee must be paid, and this fee is given to all stakers. If you participate in the process of creating new blocks, you get your share of the fees proportional to your stake. Both delegators and validators participate in this process. Does that mean validators do all the work by running nodes and delegators still get the same? Well, no, there is a commission, which is charged by the validators. Depending on which validator you choose, the validator gets a fraction of your rewards, for example 10%.</p><p>We will discuss this commission quite extensively in this article. So fees are quite easy to explain but hard to predict, since nobody knows how many transactions will be sent to the Cosmos hub in the next months. The situation is different for inflation. There is a targeted inflation rate of 7% per year. Does that mean you get 7% on your staked Atoms? No. This is a targeted value, which is achieved when 66.6% or more of all Atoms are staked. There are good reasons why the Cosmos network wants to have 66% of Atoms staked, but it is not the purpose of this article to explain byzantine fault tolerance. For that, better read my introductory article linked at the top. So in case less than 66% of Atoms are staked, the inflation goes up, to 21% at most. However, we should expect to have 66% staked in the long run. The take-home message here is: 7% inflation is what we will see in the future. But there is one important detail: inflation applies to all Atoms, not only staked coins; unbonded Atoms are inflated as well. In contrast to staked Atoms, the inflationary Atoms from unbonded Atoms are not given to their holders but rather to the stakers. So even if 66.6% are staked and the inflation is 7%, the interest on staked Atoms is not 7%, because there are 33.3% of Atoms which are not staked but give their inflationary Atoms to the stakers. This means at 66.6% staked, the interest rate on staked Atoms is 10.5%, which is 7% + 3.5%: the inflation of the 66.6% plus the inflation of the remaining 33.3%. I try to keep the wording clear, but it is not easy in this example. So inflation is how the Atoms become more over time, and interest is what you actually get for your bonded Atoms.</p><p>In Cosmos, interest is always higher than inflation for bonded/staked Atoms. All Atoms will never be staked, so 7% is more like the lower limit and not the expected value. Next example: let’s assume 50% of Atoms are staked. Then the inflation rate is 10.5%: since the goal of 66% staked is not reached, the rate increases above 7%. In addition, half of the Atoms are staked and the other half is unbonded, which gives the inflation from these unbonded Atoms to the holders of the staked Atoms. This gives a net interest rate of 21% per bonded Atom.</p>
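<p>If the wording gets confusing, the arithmetic is short. Here is the whole relation as a few lines of Python, a sketch of the rule described above (the validator commission, which we get to next, is applied at the end):</p><pre># Interest on bonded Atoms: all newly minted Atoms go to the stakers,
# so the gross rate is the inflation divided by the bonded ratio; the
# validator's commission is then taken out of the rewards.
def staking_interest(inflation, bonded_ratio, commission=0.0):
    gross = inflation / bonded_ratio
    return gross * (1.0 - commission)

print(staking_interest(0.07, 0.666))       # ~10.5% at the 66.6% target
print(staking_interest(0.105, 0.50))       # 21.0%, the example above
print(staking_interest(0.105, 0.50, 0.10)) # 18.9% after a 10% commission</pre>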
<p>The good news is, it gets even more complicated. Will you get 21% in such a case? No, not as a delegator, because you have to delegate to validators and these charge a commission. If they charge 10%, you get a net interest of 18.9%; if they only charge 5%, you get 19.95%. But there are also validators who do not charge a commission, then you get the full amount. Does that mean you have 21% more after 1 year? Well, only if the validator does not get slashed. Slashing is a mechanism where the validator gets punished for bad behavior and the delegators also get punished, but not as heavily. This is the mechanism that drives delegators to pick good validators who run stable servers. Don’t be too afraid, the network has been running for over a month and no slashing has occurred yet. So this won’t be an event that happens often.</p><p>Update: Slashing has indeed happened only quite rarely over the years. More important are the changes to inflation and commission. There has been a governance decision which changed the maximum inflation to 10% and one that changed the minimal commission to 5%. So this is quite bad news for delegators. A maximum inflation of 10% means that there will never be more than 10% new Atoms generated per year; this means if 50% of Atoms are staked, you will get 20% APY. In the past you got 21% APY if 66% of Atoms were staked. The increase of the minimal commission means that no validator is allowed to charge less than 5%. This is good for small validators, who were struggling to survive because their commission was so low. These validators were in a really hard environment, as they needed a low commission to attract new users, but then could not pay for servers and the costs of bureaucracy in their jurisdiction.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/777/1*SKC43VdKTI2VTPU7Y4MRsA.jpeg" /><figcaption>A rather typical experience for Atom hodlers</figcaption></figure><p>Almost mad? Not so fast, it gets more complicated! Ok, so if the validator does not double sign, does not get jailed, etc., then I have the 21% of this example after 1 year? Yeah, but there is also this sweet concept of compounded interest. While the year is ongoing, you can bond your rewards after you get them, for compounded interest. The compounded interest you say? Even more gainzzzz? Yes, but you can’t do it every 2 minutes, because delegating your rewards costs a small fee. Also, it doesn’t change much whether you do it once a day or once a week. But the good news is that these were all the things that make it complicated, and now we can start to put the pieces together.</p>
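<p>Here is a quick Python sketch of the compounding effect (illustrative only, fees ignored), showing why daily versus weekly restaking barely matters:</p><pre># Compounding by restaking rewards n times per year, fees ignored.<br>def compound(rate, periods):<br>    return (1 + rate / periods) ** periods<br><br>print(compound(0.21, 365))  # ~1.234 (restaking daily)<br>print(compound(0.21, 52))   # ~1.233 (restaking weekly)<br>print(1 + 0.21)             # 1.21  (never restaking)</pre>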
<p>First of all, there is the overall inflation rate, which you cannot really change; it depends on the overall ratio of bonded to unbonded Atoms. Of course, this influences our decision whether we want to delegate Atoms, trade them or get rid of them. But once we have decided to delegate, it is no longer something that changes our decisions. So in case the whole passage before this sentence made you think “Hell, what the fork? I don’t know if this is maybe too complicated for me”, then don’t worry, it is mainly for the ones who are curious to know; it is nothing you need to know by heart when you want to earn rewards. The only thing that is really important comes next.</p><p>So the question is: which validators do you want to pick for delegation? Some validators have a high commission, some have a low commission, and here we have real influence with our decision. The lower the commission, the more benefit we get from delegated Atoms, but there are other things to consider as well and these are often mutually exclusive. The next thing is the self-bonded Atoms of a validator. In principle, you want to pick a high value here, since this means that the validator has a lot to lose. If someone self-delegates Atoms worth $10 million to his validating node, then it is obvious that spending, for example, $10k on security is a no-brainer. If you self-bond Atoms worth $5000, then you might not spend $10k on IT security. The logic is simple: the more you have to lose, the more you will invest to secure your system, because if you mess up, you will lose your staked Atoms. It does not make sense to pick a validator that has 1 Atom bonded and no proof that the node is well secured, just to avoid a commission. Let’s assume the net interest rate is 10%. After 1 year with 0% commission we have 110% of what we had before, but since we have picked the least secure validator and it got slashed, for example 2 times for 1%, we only have 107.8% after 1 year. In contrast, another validator might charge 10% commission but not get slashed; then we have 109% after 1 year, so this validator makes more sense economically, even though we pay a commission. But these are just examples. In reality slashes are very rare, so high commissions are not really desirable.</p>
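<p>To put numbers on this trade-off, a small Python sketch using the same example values as above (purely illustrative):</p><pre># Is a 0% commission validator that gets slashed better than a<br># 10% commission validator that doesn't? Interest is applied first,<br># then each slash burns 1% of the whole balance.<br>def year_end_balance(interest, commission, slashes, slash_rate=0.01):<br>    balance = 1.0 + interest * (1.0 - commission)<br>    return balance * (1.0 - slash_rate) ** slashes<br><br>print(year_end_balance(0.10, 0.00, 2))  # ~1.078: cheap but sloppy<br>print(year_end_balance(0.10, 0.10, 0))  # 1.09: commission, no slashes</pre>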
<p><strong>a) </strong><a href="https://www.mintscan.io/validators/cosmosvaloper1cql9ska0xl2rkg6gcv0np4333gn6fygs55asrs"><strong>Atom Sandler</strong></a><strong> <br></strong>This is an example of a high-commission (25%), high self-bond (~450k Atoms), no-track-record validator. Let’s be honest, the name Atom Sandler is funny enough that one should actually delegate some Atoms to this validator. But then the commission is really high, so it depends on how much you value this great name. This validator no longer exists but serves as an example here.</p><p><strong>b) </strong><a href="https://www.mintscan.io/validators/cosmosvaloper1ma02nlc7lchu7caufyrrqt4r6v2mpsj90y9wzd"><strong>GenHashtower</strong></a><strong><br></strong>This is a good comparison to Atom Sandler because it is mainly the same: high self-bond (~500k Atoms), no track record, but a low commission (7%). So here we get the same characteristics with a much lower commission, no pun included. This validator has since even reduced its commission to 3%.</p><p><strong>c) </strong><a href="https://www.mintscan.io/validators/cosmosvaloper1eh5mwu044gd5ntkkc2xgfg8247mgc56fz4sdg3"><strong>BouBouNode</strong></a><strong><br></strong>Here we have another contrast: this validator also has a low commission (6.1%), but it has almost nothing at stake, only 2700 Atoms, and it is even funnier since it claims to be run by an AI and says one should not trust humans. The joke gets even better when one clicks their website, which has been down for years now. But if you are afraid of Roko’s Basilisk this might be a good option.</p><p><strong>d) </strong><a href="https://www.mintscan.io/validators/cosmosvaloper1qwl879nx9t6kef4supyazayf7vjhennyh568ys"><strong>Certus One</strong></a><strong><br></strong>This one has enormous delegations (8.7M Atoms), even though the self-delegation is quite low (55k). Also the commission is not really low at 12.5%, so one might ask oneself why they have so many delegations. The answer is track record. The 2 guys from Certus One are to the Cosmos infrastructure what the Bogdanoffs are to the Bitcoin price. They are the Alpha and the Omega. If they enter a room, every computer recognizes their presence and starts to vibrate and emit bright light. What do I mean by that? They have won the Game of Stakes and their instructions on how to run a Cosmos validator are the best. This means many believe they provide the best security of all. Their validator seems to be no longer available.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/200/1*R20qDyjPCaiH746AD7O9nA.jpeg" /><figcaption>Please, of all the things you can do with your money, please delegate your money to this person. This is not investment advice, but do it!</figcaption></figure><p><strong>e) </strong><a href="https://www.mintscan.io/validators/cosmosvaloper1ey69r37gfxvxg62sh4r0ktpuc46pzjrm873ae8"><strong>Sikka</strong></a><br>Here we have the last of all extremes: very low self-delegation (28k), very high delegations (4.7M) and the minimum commission of 0%. So this might be one of the examples where the commission is really low and the validator does not have so much to lose… However, it is Sunny, see the image above; he is one of the Cosmos researchers, so, like many crazy scientists, he is absolutely trustworthy. Also, he thinks Ethereum Classic is the real deal.<br>Sikka has since increased the commission to 3%.</p><p><strong>f) </strong><a href="https://www.mintscan.io/validators/cosmosvaloper1ey69r37gfxvxg62sh4r0ktpuc46pzjrm873ae8"><strong>CrowdControl</strong></a><strong><br></strong>You can check out validators on MintScan, see <a href="https://www.mintscan.io/cosmos/validators/cosmosvaloper1x3mkgqpshvpq87d33ndsleu7gd7w47dl4ve0yy">here</a>. This validator is very good, quite obviously the best. How do I know? Well, of course I know this validator…</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/0*ijAE-72jtQxK-kIg.jpg" /></figure><p>This validator has only 1% commission but runs excellent hardware. It is quite small because not many know of this great validator, so it only has about 170k Atoms delegated, which is really sad, since we all love decentralization and this only works if the small validators also get some stake. This validator has all my money attached to it, so I really do my best not to mess up :D<br>Just be a decent person and delegate to this validator, my life depends on it, and if everything goes right, I might be able to make a living out of this validator and you can join me and also become insanely rich just by delegating to this validator. This is not investment advice, but do it now!</p><p>So I hope I could help to make things a bit clearer here. There is no best choice (except CrowdControl), every validator has its own characteristics.<br><strong>Is there any type of validator where one should not delegate?<br></strong>Well, actually I just said there is no best choice, but there are definitely bad choices in my opinion. One of the bad choices is validators which are “run” by a celebrity or something like that. The reason why this is bad is very simple: those celebrities might be awesome personalities and even trustworthy, but they do not run the validators themselves. They hire someone and use their popularity to attract stake. That’s why all their coolness and trustworthiness does not translate into tech-savviness; they will hire someone who will of course do a good job, but it is not their own money at stake. In general, when it comes to investing, I’d say: do not follow celebrities, famous youtubers etc., this is a really bad idea. These people are very good at making cool youtube videos, using ads to grow their popularity and turning all of this into profit. Not that this is bad, but it is very unlikely that this specialization benefits you as a delegator or the Cosmos ecosystem. For influencers this even means that they become biased: if an influencer tells you to delegate to their validator and the network has a serious problem, is not adapting to a changing environment, or whatever else is going on that is dangerous for its investors, the influencer has an incentive not to inform you properly about it, because as long as you stay invested, she profits from you. It’s all fine to check what influencers say and read their content, I for example like to read Arthur Hayes, but I’m not doing everything he says and would not invest my money with him while at the same time getting all my information from him.</p>
<p>However, most people want to pick the safest and most secure validator, so they don’t lose any money. This is clever and makes sense, since crypto is volatile like hell already and we don’t need additional risk. However, it is a widespread misconception that there is a single best validator to safely delegate to. The safest choice is all of them: if you spread your money across all validators, the risk is the lowest, and if you distribute among many validators, the risk of losing all your Atoms is practically nonexistent. However, it is very cumbersome and comes with more fees, and if we look at how many validators have been slashed in the past, slashing is really rare; the good ones especially never got slashed. Keep in mind that the chain halts if a third of the voting power messes up. If you want to help the network decentralize, also split among small validators. Unfortunately, this is more work than a single click, so in the end it just makes most sense to delegate all the Atoms to the CrowdControl validator.</p>
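<p>A toy calculation (my own numbers, purely illustrative) of why spreading stake lowers the risk of a big one-off loss:</p><pre># Assume each validator independently has a 1% chance per year<br># of a slashing event that burns 5% of its delegators' stake.<br>p_slash, penalty = 0.01, 0.05<br><br># All-in on one validator: a 1% chance of losing 5% at once.<br># Split across 10 validators: losing 5% of everything requires<br># all 10 to be slashed in the same year.<br>print(p_slash ** 10)      # 1e-20, i.e. basically never<br><br># The expected loss is identical either way, only the variance shrinks:<br>print(p_slash * penalty)  # 0.0005 -&gt; 0.05% per year</pre>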
<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*AZnLsnQyCuken62AeolQcg.jpeg" /><figcaption>This is basically how the world works in a single image.</figcaption></figure><p>I’ll now try to sum up this most important part of the article:</p><ol><li>You get fees and inflationary Atoms. It’s complicated and we can’t really influence it, but 10% per year is guaranteed if we stake, 15% is likely.</li><li>We can influence the choice of validators which get our delegations. Diversification is safety, low commission gives high yields, and high self-delegation and a good track record mark good validators. Do your own research. Don’t read overly long medium articles giving you advice and shilling their own secret validators. Sniff the butts of the validators. If they smell good, delegate. No, seriously, don’t sniff the butts, but maybe have dinner with each validator and then make your decision, or just freak out because there are so many options.</li><li>This is not investment advice. Did I say this already? It is so important nowadays; I don’t live in the US, I don’t know if it is necessary for me to say that, but please delegate your money to Sikka, he is very good. This is not investment advice.</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/320/1*tsFe26bMSoF-Q9l03Ua_1A.gif" /><figcaption>This gif speaks for itself.</figcaption></figure><p><strong>C) </strong>Now I come to the final part, in which I try to explain how to delegate with an offline and an online computer so that your key never gets exposed. You can also use this technique to send and receive your Atoms, and you can also vote online. All you have to do is change the command in Step 6. So now I’ll present exactly how I delegate using an offline and an online computer, where my private key never leaves the offline computer. This is quite a special solution. If you have a Ledger Nano, then it is much easier to use it and follow a guide on how to use a Ledger Nano (<a href="https://medium.com/cryptium-cosmos/how-to-store-your-cosmos-atoms-on-your-ledger-and-delegate-with-the-command-line-929eb29705f"><em>How to Store Your Cosmos ATOMs on Your Ledger and Delegate with the Command-Line</em></a>). However, some people don’t want to buy a Ledger, must handle many accounts or want to be able to program code, which is not possible via graphical user interfaces (GUI). For me this makes even more sense, since I work on a project using the cosmos-sdk and knowing the CLI very well is important anyway. So if you want to go the dirty-hands route, here it comes. The best source on how this command-line interface (CLI) works is <a href="https://cosmos.network/docs/cosmos-hub/delegator-guide-cli.html">https://cosmos.network/docs/cosmos-hub/delegator-guide-cli.html</a>.</p><p><strong>Explicit content (this means explicit instructions, not what you think):</strong></p><p><strong>Step 1</strong><br>Install Linux. I have used several Linux distributions in my life and have used Cosmos on 3 different distributions, but I’m not a Linux expert (the safest way to identify someone who is not a Linux expert is that he claims to be a Linux expert; this is like quantum mechanics, if you think you understand it, then you don’t). Also, for me it is not a religion, so I don’t care, I just want a working OS. For many cutting-edge IT projects Linux is the only OS that works and this applies to Cosmos as well. So get Linux; from what I have tried, Antergos is the easiest. You can also use Ubuntu, which is a bit more work to set up. I prefer Antergos in this context because it is a simple and fast way to install Arch Linux. All you need to do is use <a href="https://antergos.com/try-it"><em>etcher</em></a> to burn the live ISO onto a USB stick and then install it on the disk you want to use. In our case, we need 2 disks: one for the computer that never touches the internet, the other for the computer which broadcasts the messages. As a desktop manager I pick XFCE, but you can also go with Gnome, KDE or whatever you like. After installing Antergos and booting, the only thing you have to install is Go. Just use the Add/Remove Software program that comes with Antergos and install Go. If you decided to use another distribution, like Ubuntu, then you need to check whether the Go version is the newest; often it is not, and for Cosmos you need to find the repository with the newest Go release.<br>*UPDATE* Shortly after this article was released, Antergos was discontinued. So here you can see how much of a Linux expert I am. However, I tried the same instructions with Manjaro and it worked. So be advised to use Manjaro instead of Antergos.</p><p><strong>Step 2<br></strong>Install Cosmos.
We follow <a href="https://cosmos.network/docs/cosmos-hub/installation.html"><em>this guide</em></a>, but we have to change bash_profile to bashrc, so the commands change to:</p><pre>mkdir -p $HOME/go/bin<br>echo &quot;export GOPATH=$HOME/go&quot; &gt;&gt; ~/.bashrc<br>source ~/.bashrc<br>echo &quot;export GOBIN=$GOPATH/bin&quot; &gt;&gt; ~/.bashrc<br>source ~/.bashrc<br>echo &quot;export PATH=$PATH:$GOBIN&quot; &gt;&gt; ~/.bashrc<br>echo &quot;export GO111MODULE=on&quot; &gt;&gt; ~/.bashrc<br>source ~/.bashrc</pre><p>After that we run the Cosmos-specific commands:</p><pre>mkdir -p $GOPATH/src/github.com/cosmos<br>cd $GOPATH/src/github.com/cosmos<br>git clone <a href="https://github.com/cosmos/gaia.git">https://github.com/cosmos/gaia.git</a><br>cd gaia &amp;&amp; git checkout master<br>make install</pre><p>and finally, check if the version is displayed properly:</p><pre>$ gaiad version --long<br>$ gaiacli version --long</pre><p>Here you should see the current Cosmos version; it depends on when you read this document, at the moment (April 2019) it is cosmos-sdk: 0.34. Now we have installed Cosmos. Great.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/400/1*XuUysVl9SXVaOZn41mhvBQ.gif" /><figcaption>This boy signs all his transactions offline.</figcaption></figure><p><strong>Step 3<br></strong>Configure Cosmos. Now we need to set up Cosmos so that it is connected to the Cosmos hub. Just for clarification: Cosmos is a network of blockchains, its first hub is the Cosmos hub and there live the Atoms which we want to delegate. Now we can either set up our own node or just connect to another node; the latter is easier and fits our purpose:</p><pre>gaiacli config node <a href="https://cosmos.chorus.one:26657/">https://cosmos.chorus.one:26657</a></pre><p>We also need to set trust-node to true and specify the chain-id of the Cosmos hub:</p><pre>gaiacli config trust-node true<br>gaiacli config chain-id cosmoshub-3</pre><p>Then we can test the configuration with this command:</p><pre>gaiacli query staking validators</pre><p>In case everything went well, a list of validators should show up. In case this does not work, most likely the cosmos.chorus.one address is no longer valid; one should then google for a public node of the Cosmos hub.</p><p><strong>Step 4<br></strong>Get a Cosmos account. You do this on the offline computer, so you have to repeat Steps 1 and 2. Step 3 is not necessary since there is no internet anyway. Once we have installed the cosmos-sdk on the offline computer, we disconnect it from the internet and enter the following command:</p><pre>gaiacli keys add horst</pre><p>Now we have to specify a password, which is only for the local storage of the key on this offline computer. The important part comes afterwards: the 24-word mnemonic and the address. The mnemonic is the secret of your Cosmos address. Don’t lose it. This is the thing. Keep it encrypted whenever you transfer it. The address is needed whenever you want to transfer something to this address. We should note it down and transfer it to the online computer.</p><p><strong>Step 5</strong><br>Transfer your Atoms. In this step, we need to transfer our Atoms to the address that was generated offline. Wherever you have them, send them to the address that you noted in Step 4; it will be something like cosmos1abcde123….</p><p><strong>Step 6</strong><br>Generate a transaction. In this step, we generate our delegation transaction. For simplicity, we will refer to the address as cosmos1abcde123 and the validator we have picked as cosmosvaloper1xyz.
We enter the following command on the online computer:</p><pre>gaiacli tx staking delegate cosmosvaloper1xyz 1000000uatom --from cosmos1abcde123 --gas auto --gas-prices 0.025uatom --gas-adjustment 1.5 --generate-only &gt; unsignedBond.json</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/440/1*IeiY8SecJiQHd_JJn2hLNw.jpeg" /><figcaption>A dog doing research for you. That’s why you should do your own research (DYOR)</figcaption></figure><p>gaiacli is the program that we have installed, the rest are parameters. tx staking delegate is the specific command, which needs 2 arguments: the validator that gets the delegation and the amount to delegate. In this case, we have specified 1 Atom = 1000000uatom. After that the flags follow: --from is your own address; --gas auto, --gas-prices 0.025uatom and --gas-adjustment 1.5 are picked so that the transaction will go through and sufficient fees are paid; --generate-only means that the transaction is only generated, not signed and not broadcast. &gt; unsignedBond.json is the last part, which is not Cosmos-specific but rather Linux-specific and writes the result into the given file. So the transaction is saved in unsignedBond.json. We also want to find 2 parameters that we need on the offline computer by entering</p><pre>gaiacli query account cosmos1abcde123</pre><p>We will get a response with sequence, which should be 0 since no transaction has ever been done with this account, and account-number, which is some number we have to note; for simplicity let’s say 1337.</p><p><strong>Step 7</strong><br>Sign the transaction. We take the unsignedBond.json and transfer it to the offline computer. If your paranoia is strong, then you transfer it via pencil, but a USB stick is also fine if you can handle your inner demons. After we have transferred the file, we enter</p><pre>gaiacli tx sign unsignedBond.json --from horst --offline --chain-id cosmoshub-3 --sequence 0 --account-number 1337 &gt; signedTx.json</pre><p>In the future, sequence will increment by 1 whenever we sign the next message, while account-number will always stay the same for this account. If you get a signature verification failure, then in most cases the chain-id, the sequence or the account-number is wrong. We will also have to enter the password we specified for horst. Again, &gt; signedTx.json writes the result into this json file.</p><p><strong>Step 8<br></strong>Broadcast the transaction. This is the last part: we take the signedTx.json from the offline computer to the online computer and enter</p><pre>gaiacli tx broadcast signedTx.json</pre><p>This is the last step. Either it worked and we can see the transaction on mintscan, hubble, stargazer or something similar, or in most cases the gas will be too low; then we have to start all over again, increase the sequence, supply more gas, sign offline and broadcast again. If it worked, congratulations, you have done it.</p>
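<p>If you wonder where the fee actually comes from, here is a rough Python sketch of the arithmetic behind the flags we used (the gas estimate of 200000 is my assumption; the real value is what --gas auto estimates for you):</p><pre># fee = estimated gas * adjustment * gas price<br>gas_estimate = 200000    # assumed; --gas auto estimates this<br>gas_adjustment = 1.5     # --gas-adjustment 1.5<br>gas_price = 0.025        # --gas-prices 0.025uatom<br><br>gas_wanted = int(gas_estimate * gas_adjustment)<br>fee = gas_wanted * gas_price<br>print(gas_wanted, fee)   # 300000 gas -&gt; 7500uatom = 0.0075 Atom</pre>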
<p>If you have read down to this line, I give you the special attention award of the internet. Once I read an article that explained how to be a successful medium article writer. The most important message was to write short articles, which can be read in 5 minutes. As you can see, I have not submitted to this rule, and this is my personal attempt to overcome human limitations by writing articles that need more than 10 minutes to read. As a very kind thank you, I will answer the frequently asked questions now:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/500/1*RmVUiEagDna8tye1ub9X8g.jpeg" /><figcaption>Looking deep into these eyes, you can see the reflection of prices moving up</figcaption></figure><p><strong>Q: What is the circulating supply of Cosmos?</strong></p><p>I have asked this on Reddit and Telegram already and after I did not get an answer within 10 seconds, I left the channel. This is your last chance to get me as a community member. Answer my question. Why do you keep making this question so long?<br><strong>A:</strong> Sorry, I will answer immediately, I know, the answer to this is somewhere hidden deep in the vaults of the Cosmos. For example <a href="https://medium.com/cryptium-cosmos/how-to-store-your-cosmos-atoms-on-your-ledger-and-delegate-with-the-command-line-929eb29705f"><em>here</em></a>: currently 120M of 238M are bonded. 10% of the 238M are also vested, so only 212M Atoms are really able to circulate. Now the question is whether you count bonded Atoms towards the circulating supply. Technically they are not circulating, but they can be made circulating within 3 weeks, and people use circulating supply to determine what a project is really worth. So regarding this, they should be counted in. <a href="https://www.coingecko.com/en/coins/cosmos/usd"><em>Coingecko does</em></a>, <a href="https://coinmarketcap.com/currencies/cosmos/"><em>Coinmarketcap does not know</em></a>, so let’s see, I don’t know, the ultimate fate of the universe is a similarly complicated topic.</p><p><strong>Q: What was the ICO price?</strong> I have heard $0.1, x30 or x40 real? This is scam!<br><strong>A: No no no, this is real.</strong> My wife still doesn’t believe me. Actually, it was not an ICO but a fundraiser and it was more than 2 years ago. So keep in mind, 2 years in crypto are 200 years in real life. Imagine how unspectacular x30 would be if you had invested in Apple 200 years ago…</p><p><strong>Q: Can I become a validator?</strong><br><strong>A: </strong>Yes, please read <a href="https://kb.certus.one/">this</a>; unfortunately this no longer leads to the great knowledge base of Certus One, but rather just to Jump Crypto -.-</p><p><strong>Q: When will IBC, the interblockchain feature, be implemented?</strong><br>A: As far as I have heard, in summer; a more Cosmos-typical answer is “in 1 month!”<br>Update: IBC IS RUNNING AND IT IS AWESOME</p><p><strong>Q: When Binance?</strong><br>A: It’s already on Binance.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/601/1*K4Lzeawn5xU-YBJqQInISw.gif" /></figure><p><strong>Q: Why do you include so many absurd pictures here?</strong><br><strong>A:</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/771/1*pdUHmkzE8ubK6j7sM1u_7Q.png" /></figure><hr><p><a href="https://medium.com/coinmonks/the-ultimate-cosmos-delegation-guide-for-real-idiots-87ebc6518145">The Ultimate Cosmos Delegation Guide for real idiots</a> was originally published in <a href="https://medium.com/coinmonks">Coinmonks</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[IOTA experienced as a real idiot]]></title>
            <link>https://medium.com/coinmonks/iota-experienced-as-a-real-idiot-ec72e872f753?source=rss-8e91a3236ca6------2</link>
            <guid isPermaLink="false">https://medium.com/p/ec72e872f753</guid>
            <category><![CDATA[qubic]]></category>
            <category><![CDATA[distributed-ledgers]]></category>
            <category><![CDATA[blockchain]]></category>
            <category><![CDATA[bitcoin]]></category>
            <category><![CDATA[iota]]></category>
            <dc:creator><![CDATA[Patrick Wieth]]></dc:creator>
            <pubDate>Mon, 14 May 2018 12:34:32 GMT</pubDate>
            <atom:updated>2018-12-11T09:00:38.647Z</atom:updated>
<content:encoded><![CDATA[<h4>and the CAP-Theorem explained for real idiots</h4><p>IOTA is not your normal everyday bitcoin-source-code-forked blockchain. IOTA is more like that crazy nerd guy from school: really different, and you don’t know if he will be a Bill Gates later or just some fat basement dweller. In the world of blockchains IOTA is so different that it isn’t even a blockchain. Wait a second, it is not a blockchain? No. It is a directed acyclic graph. Or, in short and no less confusing, a DAG. We should always bear in mind that we are real idiots and need to find simple terms for what are otherwise geeky nerd phrases used to confuse people into thinking this is the next big thing. Does that mean IOTA wants to trick people into thinking it is the next big thing? No, it is still the nerdy different guy from school. We don’t know yet if he becomes a Bill Gates or just a fat basement dweller. And don’t get me wrong here, I don’t want to downgrade fat basement dwellers. I love the content they post on 4chan. It is just that I don’t want my new technology made by them, as long as there are also Bill Gateses out there.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ZThc7ZIH_WCWH8zQsa9LCQ.png" /><figcaption>Not only does IOTA redefine trust, value and ownership, it also feels like a train running through our head.</figcaption></figure><p>Ok, so what is this directed acyclic graph? A graph is something where nodes point to other nodes. For example, if you have one of these pictures with nodes asking “Will this investment pay for itself within a couple of weeks?”, then one arrow goes to a node denoted “yes” and another goes to a node denoted “no”. These connections are directed, because they go into one direction. By answering the question with “yes” you get directed to the next node, which asks you “Is this investment opportunity called something like ‘smart pyramid high yield yes you like program’?” and again you get directed to some other nodes. At the end there is a conclusion node like “yes, you are getting scammed big time”.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/843/1*N6RLyYMnvoZBq4-qphpQaA.jpeg" /><figcaption>This is a directed acyclic graph</figcaption></figure><p>The next thing is acyclic. It just means not cyclic: no connection, or let’s call it edge, can direct to a node which you have visited before. Now that we have learned what kind of graph IOTA is, we should remember what a blockchain is.</p><blockquote>A blockchain stores everything that has happened in the network in blocks. These blocks are connected to each other in a linear, one-dimensional way. The next block refers to the current block.</blockquote><p>So one could say a blockchain is a directed acyclic graph. But there is one important point: a graph allows one node to have many edges connecting it to more than one other node. A block in a blockchain only has a previous and a next block, so one directed edge going in and one going out. Some smart guy might now interrupt and say “wait a sec, how about forks? When forks happen there are two different blocks pointing to the same block as their last block.” Yes, that is true, but within one chain, there is only one reality without the other chain. The forked chain does not exist in the other part.</p>
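<p>For the fellow idiots who like code more than pictures, here is a tiny Python sketch (my own toy example) of the difference: in a chain every block points to exactly one predecessor, while in a DAG a transaction may point to two.</p><pre># A chain: one edge in, one edge out.<br>chain = {&quot;block2&quot;: [&quot;block1&quot;], &quot;block3&quot;: [&quot;block2&quot;]}<br><br># A DAG: each transaction approves two earlier ones.<br>tangle = {<br>    &quot;tx3&quot;: [&quot;tx1&quot;, &quot;tx2&quot;],<br>    &quot;tx4&quot;: [&quot;tx2&quot;, &quot;tx3&quot;],<br>    &quot;tx5&quot;: [&quot;tx3&quot;, &quot;tx4&quot;],<br>}<br><br>def ancestors(graph, node):<br>    # Everything a node (indirectly) approves; acyclic, so this ends.<br>    seen, stack = set(), list(graph.get(node, []))<br>    while stack:<br>        cur = stack.pop()<br>        if cur not in seen:<br>            seen.add(cur)<br>            stack.extend(graph.get(cur, []))<br>    return seen<br><br>print(ancestors(tangle, &quot;tx5&quot;))  # {'tx1', 'tx2', 'tx3', 'tx4'}</pre>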
<p>For DAGs this is different: DAG-based “blockchains” are not blockchains, but rather DAG-based distributed ledgers. So one might think that they constantly fork and their blocks don’t just point to one other block. Well, almost. Because not only is there no chain, there is also no block. Blocks are a collection of transactions. The network has to agree on a block, and then all transactions in this block are valid and cannot be altered, except if the network switches to another forked chain; but within this specific chain, the block cannot be altered. The blocks are the heartbeat of a blockchain: whenever a new block is found, all network nodes agree to this block and start the search for the next block from there. Since IOTA does not have these blocks, we want to know how people agree on valid transactions.</p><p>A block contains all transactions, and it can be checked very easily whether these transactions are valid, meaning there are sufficient funds; it can also be checked easily whether the proposer of the block has solved the puzzle correctly. For the proposer it is a very bad idea to forge a block with invalid transactions, because it gets rejected and his work was in vain. So what changes if we abandon the blocks? Not much. Users just push a transaction into the network instead of a block. Instead of a whole block, the network has to validate a single transaction. Ok, so IOTA is just the same, but there are no blocks collecting sets of transactions; each transaction is validated individually. Well, yes, this part is right, but now comes the next shock. How are these transactions validated? By mining? No, there are no miners in IOTA. The answer is: new transactions validate old transactions. How does this work? If you push a new transaction into the network, you have to validate 2 previous transactions. Validating 2 previous transactions indirectly means validating each transaction those transactions have validated, and also everything these point to, and so on. This means that what are separate entities in bitcoin, transaction emitters and transaction validators, are the same in IOTA. This is fucking genius.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1013/1*K3AME4Ctj7ZjPg2M3BJJwA.jpeg" /><figcaption>Sometimes an image speaks for itself. The reader can decide if this is the case here.</figcaption></figure><p>In bitcoin it seems quite natural to insert the whole mining concept and I always asked myself, what else can you do? It is the easiest way to incentivize someone to validate transactions. It is a real problem: someone has to do it, but it is work, and if nobody pays for the work, nobody does it. Therefore bitcoin has a free market of pending transactions and miners just pick the ones which yield the most revenue and put them into the next block. Why is it so genius that IOTA abandons this part? Because if you engineer a system and you can leave parts away, then this is great. More often than not you only see how you can insert a new piece to solve a problem, but unfortunately the new piece creates new problems, and for these new problems you need new pieces, and so on. So let’s go back to understanding what this all means. In IOTA we don’t have a latest block everybody has agreed on, but rather some transactions at the front of the tangle that are not yet verified, so-called tips. If you push another transaction, you verify 2 transactions which are tips in your opinion. In your opinion, because you don’t know if someone else has also verified them; it might have happened already at a distant node, and until this information reaches you some time has passed.
<strong>The tangle is this network of verified transactions and unverified tips.</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*7kZScBNWYGLo96pBsoLK7w.png" /><figcaption>Transactions in the tangle of IOTA, numbers are just a chronological iteration.</figcaption></figure><p>In the above picture the transactions 1–10 are verified and 11–15 are unverified tips. Let’s imagine that transaction 14 is invalid and someone wants to double spend with it. Now some other users want to push new transactions; they will realize that 14 is invalid, won’t point to it and will pick 2 other txs, 12 and 15 for example. The tangle will grow and grow, but tx 14 will never be verified.</p>
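<p>Here is a tiny toy simulation of exactly this story (my own, heavily simplified; real tip selection uses weighted random walks): honest nodes never approve the invalid tip, so it stays an unverified tip forever.</p><pre>import random<br><br>random.seed(1)<br>approvals = {n: [] for n in range(1, 16)}  # tx id -&gt; approving txs<br>invalid = {14}<br>tips = {11, 12, 13, 14, 15}<br><br>def add_transaction(new_id):<br>    # Pick 2 tips to approve, skipping tips we consider invalid.<br>    candidates = [t for t in tips if t not in invalid]<br>    for t in set(random.choices(candidates, k=2)):<br>        approvals[t].append(new_id)<br>        tips.discard(t)<br>    approvals[new_id] = []<br>    tips.add(new_id)<br><br>for new_id in range(16, 26):<br>    add_transaction(new_id)<br><br>print(14 in tips, approvals[14])  # True [] -&gt; tx 14 never gets verified</pre>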
<p>So is this completely without problems? Well, this was just an easy example. Since there is no global state of the tangle everybody has agreed to, some distant branches of the tangle might evolve independently and grow for some time until they touch each other again. Also, there is no guarantee that distant branches will meet each other again. With more time elapsed it becomes more and more likely that they will, but if there are conflicting transactions they might never come together. This is the big problem of IOTA: there is no global state everyone has agreed to. To solve this issue the IOTA foundation has introduced the <strong>coordinator</strong>. This is a special node which keeps the whole network together. From time to time it writes down a specific state of the network which serves as a milestone and is, surprisingly, called a milestone. Every node can start syncing from there. This in fact is exactly such a piece that brings in new problems, for which you might introduce new pieces. Why? Because the coordinator reduces decentralization. It is supposed to bring balance to the network but it can destroy it.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/600/1*Cl2kjamHtj1VI5dJJn8vrQ.jpeg" /><figcaption>Anakin was the chosen one and he was supposed to bring balance to the force.</figcaption></figure><p>The coordinator is what many people see as very problematic in IOTA. I fully agree. We don’t know if IOTA works without the coordinator. Milestones are important because without them, the size of the ledger would grow too big to be handled by most nodes. They also greatly reduce the time to sync. But the coordinator also decides which separate arms are dropped and which survive. This gives it a lot of power, and it is controlled by the founders of IOTA. This is something people in the crypto-space don’t like. What makes crypto so great is the partition tolerance of its networks and the censorship-resistance. Now is a good time to introduce the CAP-Theorem.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/245/1*WgyXbQkqE_yLrxeANmr6zw.png" /><figcaption>This triangle is almost self-explanatory, with the corners labelled C, A and P.</figcaption></figure><p>This theorem states that a distributed system cannot have all three properties fully deployed at the same time. These properties are consistency (C), availability (A) and partition tolerance (P). Unfortunately, you want to have all three at the same time to their full extent.</p><p>Consistency means that there is no dispute about what is the truth. There is no subjectivity, and if there is, it only lasts for a short time and objective truth emerges after some time. For example, in bitcoin consistency is very high, because a block is either valid and accepted or invalid and dropped. Is it possible to have even higher consistency? Yes; in bitcoin blocks are not final. Their finality grows with time to “so final that it takes more work than is available on earth to revoke it”, but there are also blockchains whose blocks are final right after they are posted. Thus their consistency is even higher. So how about IOTA? Well, low consistency. There is no global state which everybody agrees to, only the milestones. There is only some part of the network you have seen, and this is consistent to you. To others, another part might be consistent. If you wait for some minutes or some hours, then everything that was there one hour ago might now be consistent to you, but of course there are already new transactions.</p><p>The next thing is availability. This mostly means, big surprise, how available the network is. Will every transaction be handled? If yes, then it is very available. What happens under a huge load of transactions? Do txs end up in a long queue and eventually get dropped forever? If yes, low availability. We see here <strong><em>that bitcoin has a low availability and IOTA has an enormous availability.</em></strong> There is no limit in IOTA like the block size and the block interval. There is of course an emerging limit if nodes are not able to catch up with all transactions. This limit is not easy to calculate, because it emerges from factors like bandwidth, network topology and the computation power of nodes.</p><p>Then only partition tolerance remains. This means: what happens if half of the nodes are cut off, for example because the Great Firewall cuts them off? Does the network come to a halt? Are transactions lost? Is the network able to resume, or does it just continue and the network split is not a real problem? In fact this is very important for crypto stuff. If you just aim for high consistency and high availability, you can run a sharded database and you are fine. Only with partition tolerance come censorship-resistance and decentralization. This is why some people say Ripple is not even a real cryptocurrency: the network is run by Ripple Labs. What if Ripple Labs is taken down by the US government? This is the end of the Ripple token as well. Another example is NEO, 7 nodes, most of them in China, and some weeks ago we saw that one node halting means the whole network halts. For high partition tolerance your network must be able to work even if you cut off big parts of it. Without the coordinator IOTA is quite good in this regard; with the coordinator, well, it looks worse. Bitcoin in contrast is very good in this regard. It is permissionless and decentralized, the only problem being that mining has become centralized over time and it doesn’t look like this will reverse any time soon. The CAP-Theorem is a very nice tool to categorize a new cryptocurrency or crypto-platform. How about ethereum? Well, it has PoW like bitcoin, so it is quite the same as bitcoin.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/810/1*X9MfRdRwMlNMc2e2UWam1A.jpeg" /><figcaption>Partition tolerance is very important. Because every time the network halts god kills a crypto-kitten.</figcaption></figure><p>What properties are the most important ones? Well, this depends on what you want to do. If you want to build a decentralized exchange, then you need consistency like hell. If you have to wait 30 minutes until you know that a transaction is valid, then transactions on your DEX take very long.
If you want to connect billions of machines worldwide, which exchange some value and a lot of data, you need a lot of availability. If you want a very safe store of value, let’s say digital gold, then you need partition tolerance, to a level that the network can be spawned again after a nuclear war and still works. Different demands mean different properties should be favored. For a platform running a DEX, some interoperable PoS blockchain might be most sensible, Cosmos, Dfinity or Polkadot for example; for the Internet of Things IOTA makes sense, as we have seen; and for digital gold bitcoin might be a good fit. Since we have mentioned interoperability here, it should be noted that it is very hard to make IOTA talk to other blockchains, because there is no global state. It is always hard to be sure that transactions are final, so that the other blockchain can be sure they have happened in IOTA. But this is another big topic.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/1*2HgBFBVkT9f51noYXMGuQg.png" /><figcaption>IOTA makes the windmill in our head spin</figcaption></figure><p>After I was excited by IOTA’s concept, I tried it out. I tried the wallet, I ran a headless node and talked to the community. In the beginning it was a real pain: nothing really worked, it was very hard to start the node and the wallet had a lot of bugs. Running a node is awful. On Ubuntu I didn’t get it to work, it only worked on Antergos (Arch). No automatic neighbor discovery? Wtf. In bitcoin or ethereum, when you start the client, it automatically finds neighbors and connects to them. Most users might not even know that this happens in the background. In IOTA you have to go to a slack channel and find some other nodes to connect to. I didn’t understand why this tedious process is necessary. After some time, I learned that this is to prevent Sybil attacks. A Sybil attack is like bringing 200 friends to a party and blocking the bar until the price for the beer is lowered. Since IOTA is permissionless, someone could just start thousands of nodes, be connected to the network and bring it to a halt by just freezing the nodes. If there is no automatic neighbor discovery, then it is a lot of work to connect, and thus this cannot be automated. On the other hand, people are incentivized to connect to the same nodes and keep their nodes up. Furthermore, new versions are released and things don’t work anymore, java has to be another version in order to make IRI (the IOTA Reference Implementation) work, and all such problems you have when working with beta or alpha software. Normally you have these problems when developing such software; running it should work once you have set it up. But not for IOTA. It is much more work than running a bitcoin or ethereum node. How are using the API and programming with IOTA? It is ok, not super developer-friendly, but not really hard. I never understood why they rely on a ternary system; some strange reasons were given here and there, but ok. The only thing that somewhat convinced me was that they come from developing chips which use ternary, and at some later point they want to sell such chips, which do the PoW for IOTA really fast. IoT devices can have these chips then, to save energy when doing PoW. If this is true (I don’t know), then I don’t like it. I think there is a reason why mankind has picked binary. Then there were also some frozen accounts, which had to be reclaimed; I didn’t have this problem, but it is strange.
Some people claim IOTA is not safe and their hashing function is bad. They have designed their own hashing function, but after the critique they changed it. Ok, I don’t know why you would do your own hashing function and not use some proven function. The answer given by the IOTA team is that it was necessary in order to have a copy-cat protection. This might be true, but it is still strange. Later in December IOTA started to become a big thing. People freaked out completely, cooperation with Microsoft and everything. Later it was said: no, no, this is fake news, the IOTA team never said this, etc. Still, there is a lot of cooperation with big industry. Then the slack channel got too crowded, IOTA moved to discord and the community became more and more like “when moon, lambo now, oh shit IOTA was supposed to hit $10 soon, what is going on?”.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/960/1*lbqWLQhb-X-AF9RyKEGujg.png" /><figcaption>The community of every big crypto project in a nutshell</figcaption></figure><p>Now IOTA has made a big announcement that “<strong>qubic</strong>” is coming. What is qubic? We don’t know. We have only seen an inspirational video without any spoken text, just some buzzwords at the bottom and awesome animations. This reminds me of December, when they announced there would be awesome news soon, and then awesome partnerships were announced and (partially) revoked. The new video posted about qubic is something I really don’t like about the crypto-space currently. It seems like marketing, teasing and hype are all that matters. They drop the words “oracles”, “smart-contracts” and “outsourced computation”. Ok. How do they do it? Are there whitepapers? What I really like about the crypto-space is that people post their whitepapers and explain how they want to do something, and you can think about how hard it is and whether it makes sense. But lately people don’t care and it is all about selling the hype. This is accompanied by David Sønstebø freaking out regularly in discussions, stating that IOTA is superior because it can handle infinite transactions, when in fact it can’t; we just don’t know how much it can handle. The same goes for the argument that IOTA becomes safer and better when it gets bigger. This is repeated very often in the IOTA community, and it is only true for the statistical variance of the time a transaction needs to be confirmed. Beside that, I don’t see why the network becomes faster when it grows. I think it will be slower, because transactions take longer to be propagated to all nodes. So coming back to my initial lines, I’m skeptical. I see great potential in IOTA, but we just don’t know if it works as intended and whether the coordinator can be shut off. The features announced for qubic don’t sound like the coordinator will be shut off soon, more like it will be necessary forever. Does all of this mean IOTA is the fat basement dweller pushing the hype wave? No, we just don’t know. Always keep in mind that IOTA is new and there is progress. If progress were as slow as in bitcoin, I would be very suspicious, but maybe the IOTA team delivers and the hype is justified. We will see and I’m excited.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/696/1*uB0j2XeDghDg19XgCPZa0g.jpeg" /><figcaption>Yeeeees qubic. I have no fucking clue what it is, but it looks awesome.</figcaption></figure><p>Edit:<br>I totally forgot one important thing: fees. It is something that often comes up when people talk about IOTA.
The discussion mainly revolves around “IOTA does not have fees, it is superior!” versus “IOTA has fees you moron!!11one”. So: IOTA does not have fees in the sense that you have to pay some currency to send a transaction. Why do bitcoin and so many others have such fees? Because they are necessary as spam protection. Sometimes this concept is also called “hashcash”. The initial idea of this concept was to allow e-mails to be sent only if some hashes had been calculated, so that spammers cannot send infinite e-mails. Since you pay with these hashes, it was called hashcash. Now IOTA does the same: if you send a transaction, you have to solve a typical Proof of Work puzzle, just as miners do in bitcoin. So yes, you have to pay something to send transactions; whether you call this a fee or not is not relevant in my opinion. If your stance is that fees are only fees if something is paid to someone, then Ripple is also feeless, because you burn XRP. Still, there is something great about this concept. Because there is no distinction between senders and miners who get paid by the former, there is no fee runoff. In bitcoin, if there are too many transactions, the fee climbs to absurd values and miners actually want this to happen. In IOTA such absurd stuff does not happen, which is clearly an advantage. On the other hand, it seems quite unintuitive to force IoT devices to do calculations to send txs. If these devices are remote and run off a battery, you don’t want to spend the remaining power of your battery on doing PoW. But I think this serves the ultimate goal of selling some of these ternary PoW chips for IoT devices using IOTA.</p>
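<p>To make the hashcash idea concrete, here is a minimal Python sketch (illustrative only; IOTA’s actual PoW is ternary and different in detail):</p><pre>import hashlib<br><br>def proof_of_work(data, difficulty=4):<br>    # &quot;Pay&quot; for a transaction by finding a nonce whose hash<br>    # starts with `difficulty` zero hex digits.<br>    nonce = 0<br>    while True:<br>        digest = hashlib.sha256(f&quot;{data}{nonce}&quot;.encode()).hexdigest()<br>        if digest.startswith(&quot;0&quot; * difficulty):<br>            return nonce<br>        nonce += 1<br><br># The work spent here is the &quot;fee&quot;; no coins go to any miner.<br>print(proof_of_work(&quot;send 1 MIOTA to bob&quot;))</pre><hr><p><a href="https://medium.com/coinmonks/iota-experienced-as-a-real-idiot-ec72e872f753">IOTA experienced as a real idiot</a> was originally published in <a href="https://medium.com/coinmonks">Coinmonks</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>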
        </item>
        <item>
            <title><![CDATA[Cosmos / Tendermint explained for real idiots]]></title>
            <link>https://medium.com/coinmonks/cosmos-tendermint-explained-for-real-idiots-ab4305cbb41?source=rss-8e91a3236ca6------2</link>
            <guid isPermaLink="false">https://medium.com/p/ab4305cbb41</guid>
            <category><![CDATA[proof-of-stake]]></category>
            <category><![CDATA[blockchain]]></category>
            <category><![CDATA[tendermint]]></category>
            <category><![CDATA[proof-of-work]]></category>
            <category><![CDATA[cosmos]]></category>
            <dc:creator><![CDATA[Patrick Wieth]]></dc:creator>
            <pubDate>Thu, 18 Jan 2018 07:05:05 GMT</pubDate>
            <atom:updated>2022-06-25T13:18:01.673Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*_o_LeDb3UKGGzYEUy514BQ.png" /></figure><p>I’m a big fan of Cosmos. So far so easy. Many people ask me “what is this Cosmos, why do you get a boner speaking of it?” and I have often explained it or tried to explain it. Here, I will try to explain it one more time, hoping for a document I can send friends in the future. I’m not part of the devs, team or foundation, and I’m not going to explain it in precise, technically correct terms, because there are a lot of documents doing so. Read the <a href="https://cosmos.network/about/whitepaper">whitepaper</a>, read the <a href="https://blog.cosmos.network/">blog of Cosmos</a>, watch the <a href="https://www.youtube.com/watch?v=LApEkXJR_0M">video made by Sunny Aggarwal</a>, all of that is nice and technically correct. Unfortunately, I see all of this is too complicated for many folks. That’s why I try to explain it for real idiots. Don’t take this as an insult, I’m also a real idiot, but once in my life I sat down and tried to understand this stuff, and being an idiot myself, I can explain it in the words of an idiot.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fplayer.vimeo.com%2Fvideo%2F183530279%3Fapp_id%3D122963&amp;dntp=1&amp;url=https%3A%2F%2Fvimeo.com%2F183530279&amp;image=https%3A%2F%2Fi.vimeocdn.com%2Fvideo%2F592763135_1280.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=vimeo" width="1920" height="1080" frameborder="0" scrolling="no"><a href="https://medium.com/media/1b90857e7cb7eba7f7c36eaeed547885/href">https://medium.com/media/1b90857e7cb7eba7f7c36eaeed547885/href</a></iframe><p>What makes the thing so complicated is that there are so many things that Cosmos does. One reason for that is Tendermint. Tendermint is the software and Cosmos is the blockchain. Or more precisely: the main hub for the internet of blockchains, being created by the creators of Tendermint. So let’s start by understanding what Tendermint is. It describes itself as “Byzantine fault-tolerant replicated state machines in any programming language”, so let’s decipher this stuff. Byzantine fault-tolerant means that we don’t need a single, trusted entity, like a central bank, which releases coins and which we trust to release these coins in a sensible manner. It is called Byzantine fault because there were some guys with some camels and they wanted to conquer the Byzantine empire. Therefore they had to attack the city of Constantinople simultaneously. So far so easy. All they had to do was all attack at the same time from all sides, and since they brought sabres as well as the already mentioned camels, it sounded like a feasible plan. But they had no cell phones, so they could not call everyone else or send snaps around. To make it even worse, some of the generals might be traitors, and also the camel shit was quite smelly. So the traitors wanted to send other, loyal generals to their death by giving them false information. In contrast, the leader of all the troops does not want to lose and therefore wants to prevent misleading information being spread by traitors. Unfortunately nobody knows who the traitors are, and at some point it is not possible to know whether information can be trusted. This whole dilemma is called the Byzantine generals problem and it is the prototypical example problem that blockchain solves.</p><blockquote><a href="https://coincodecap.com/category/staking">Wanna stake your ATOMs?
Check these staking service providers.</a></blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/569/1*44w2guJwt6jKrpeFxEOM1A.jpeg" /><figcaption>The siege of Constantinople. Attributed to Philippe de Mazerolles [Public domain], via Wikimedia Commons</figcaption></figure><p>A solution to this problem allows for decentrally organized systems. The generals are a good example of a decentralized system, because it is technically not possible to connect all generals to a central authority. Therefore fast enough communication is only possible through a mesh network, but this opens it up to the Byzantine fault. One solution is for the leader to put signatures in his messages, but what happens if you get 6 messages with 2 different types of signatures, 3 fake and 3 real? In this case even Byzantine fault-tolerant protocols fail to work. But if there is only a minority of traitors, there are ways to determine who is loyal and who is a traitor.</p><p>The next thing is “replicated state”; that is what blockchain networks actually do. They replicate states across all nodes. It doesn’t matter if you run your node in Australia, China or at the North Pole, you end up with the very same blockchain. We don’t want to argue here about what exactly you replicate, for example with bitcoin; some would say it is not a state but rather unspent transactions. This is important for the developers, but we are idiots and don’t want to dive too deep into hairsplitting. Now that we have mentioned bitcoin, we can also note that bitcoin has a special solution to achieve Byzantine fault tolerance. The solution is called Proof of Work. We’ll come back to this later.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/300/1*lJoW2McLBuKzMD8Qb7JWaA.jpeg" /><figcaption>The fall of Constantinople indicating the byzantine generals problem was solved back then. Attribution: Jean-Joseph Benjamin-Constant [Public domain], via Wikimedia Commons</figcaption></figure><p>So now we have an idea what Tendermint is: a software to distribute the state of something across the globe without any central authority. This something can be anything. It can be a ledger, it can be a database of pictures, it can be a list of unresolved trade orders. It doesn’t matter. And this is awesome. It means with Tendermint you can create whatever blockchain system you need and everything is already done, except your application logic. How all nodes connect, how they reach consensus, how the blockchain is written, all of this stuff is already solved; you only need to program the desired economy of your token, if you want to make a token. You can also create a decentralized application (dapp) without a token. There are some limits, the most important one being that your application has to be deterministic. That means whatever input you give, the output can be predicted exactly, or in other words, all nodes calculate the exact same next state from a given set of transactions and an initial state. Obviously this is necessary, otherwise there would be forks all the time. All of that means that Tendermint separates the consensus and network layers from the application layer of a blockchain solution. That means anybody can program a blockchain without writing all the crypto- and networking-related code. This makes it really easy for developers to build solutions.</p>
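<p>To see how little is left for the developer, here is a toy Python sketch (my own illustration, not the real Tendermint/ABCI API) of such a deterministic application:</p><pre># Tendermint (consensus + network) would deliver the same ordered<br># transactions to every node; the application only defines the<br># deterministic state transitions.<br>class KVStoreApp:<br>    def __init__(self):<br>        self.state = {}<br><br>    def deliver_tx(self, tx):<br>        # A transaction is &quot;key=value&quot;; same input, same new state.<br>        key, value = tx.split(&quot;=&quot;, 1)<br>        self.state[key] = value<br><br>app = KVStoreApp()<br>for tx in [&quot;alice=10&quot;, &quot;bob=5&quot;, &quot;alice=7&quot;]:<br>    app.deliver_tx(tx)<br>print(app.state)  # every node ends up with {'alice': '7', 'bob': '5'}</pre>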
<p>Now that we realize Tendermint makes it possible to create new shitcoins every day even faster than before, we also realize that all of these coins are built on the same foundation. Maybe it makes sense to connect them as well. This is where Cosmos comes into play. Cosmos is built with Tendermint, and it is a Proof of Stake coin that uses delegation to allow anyone, even non-validators, to participate in staking. We call this Cosmos Proof of Stake. This is again one of those crazy terms that we as real idiots do not understand, so let’s try to understand Proof of Work (PoW) first. PoW is the approach that secures bitcoin. We remember from the beginning of the article that we need a way to determine who is a traitor and who is loyal. In the world of cryptocurrencies this means: who is creating new blocks with valid transactions, and who might be forging fake transactions for personal benefit? PoW is based on the idea that there is hard work to be done, and whoever does this work is trustworthy. So there is a hard task that the network agrees to solve; it is easy to check, every solution spawns a new task, and how hard the task is can be adjusted. Whoever solves the hard task first finds a new block and starts the search for the next one. Still, this is all decentralized, so whom do we trust? You simply trust the longest chain: since the longest chain contains the most hard work, it is the most trustworthy. Aha, but doesn’t this create an upward spiral of ever faster blocks and shrinking block times? It would, but as mentioned, the hardness of the work is adjustable: the network raises the difficulty so that finding a new block always takes 10 minutes on average. Now we might ask ourselves how this bullshit secures the network. There is just some random, arbitrary work being done, right? Well, the thing is, you have to acquire hardware to do that work and spend electricity running it. If you decide to become malicious, buy a lot of hashpower, do all the PoW and troll the network, then you might be able to steal all the bitcoin. One likely reaction is bitcoin losing its valuation, which is not too bad for you, since you stole the bitcoin anyway, but your mining hardware is now worthless as well, and for that hardware you paid a lot of money. Therefore the folks with a lot of hashpower have invested a lot and want bitcoin to prosper. In addition, it is desirable that the worth of the mining hardware scales with the market cap of bitcoin. This is achieved through fees and the block reward: miners earn more when bitcoin has a higher valuation, which incentivizes setting up more miners and thereby increases the hardware capitalization. With this clever mechanism it is ensured that it costs many billions to destroy bitcoin via this attack vector.</p>
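<p>Here is the puzzle as a toy in Python (the real bitcoin check compares the hash against a numeric target rather than a hex prefix, but the idea is the same): finding the nonce is expensive, while verifying it costs a single hash.</p><pre>import hashlib

# Toy proof of work: find a nonce so that the block hash starts
# with `difficulty` zero hex digits. Each extra digit makes the
# search roughly 16 times harder, which is the knob the network
# turns to keep average block times constant.
def mine(block_data, difficulty):
    nonce = 0
    while True:
        h = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if h.startswith("0" * difficulty):
            return nonce, h
        nonce += 1

nonce, h = mine("block 42: alice pays bob 5", difficulty=4)
print(nonce, h)   # hard to find, trivial for anyone to re-check</pre>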
<p>Unfortunately PoW has two very critical drawbacks. The first is that electricity is wasted. Bitcoin wastes tremendous amounts of electricity: the calculations are arbitrary and only necessary for securing the network; the numbers being calculated serve no other purpose. Waste is always bad, but in the context of global warming wasting power is even worse. The second critical issue is the power of the miners. The miners do not need to keep bitcoin. For them it doesn’t matter how useful bitcoin is; only the profit from mining counts. It is nice if the miners want to make the currency useful, and I don’t want to discuss here how well this works for bitcoin. All I want to say is that it is possible for the miners to follow a policy that drives up the fees, making a currency unusable as a currency and reducing its utility to a store of value, or even worse, just a collector’s item. For daily payments you don’t want to use a currency with transaction costs above $10. If you buy a chewing gum at a kiosk and the transaction costs much more than the item, well, then we are not just real idiots but absolute dipshits. It is ok to be an idiot, but we don’t want to be dipshits.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/287/1*3dmCbGrQyXcuvGyt5Ddcvg.gif" /><figcaption>Typical reaction to bitcoin’s fees</figcaption></figure><p>However, people made up their minds about the problems of PoW years ago already. The most popular solution is Proof of Stake (PoS). The network is no longer secured by hard work but by high stakes: to take part in consensus and find new blocks you no longer need hardware but a lot of the respective currency. In PoW everyone has a chance of finding a new block proportional to the amount of work committed; analogously, in PoS everyone has a chance of finding a new block proportional to the amount of stake committed. That means you can put your coins at stake, which makes them immovable and lets you participate in the consensus rounds. Remember how expensive hardware made the network secure? Here the same happens with coins at stake: if you forge fake transactions, the cryptocurrency you have staked becomes worthless, so you don’t want to do that. At first glance PoS sounds easy and much more sensible than PoW, but it comes with some drawbacks. A prominent example is the nothing-at-stake problem. It applies to forks, i.e. when the network splits into two separate parts: simple PoS incentivizes stakers to just follow all forked chains. In PoW you cannot do that, because your mining hardware is limited in the amount of work it can do. But if your staked coins simply exist the same way on all forks, you have nothing to lose in a fork and only something to win, which incentivizes forking. Unfortunately forks are not healthy for a network: they stop growth, confuse users and slow down the development process. Another problem is that all these stakers have to communicate and synchronize their opinion on the evolution of the chain. For PoW this didn’t matter much: someone finds the next block and everyone else can verify whether the solution is correct. For PoS, someone must be chosen to propose the next block, and then everyone has to agree that this block is valid. If we stick to block times in the range of 10 minutes like bitcoin, this is not a real issue, but scalability matters and we don’t want to wait long for transaction confirmation. In fact it is not necessary to wait 6 confirmations (= 1 hour) like in the case of bitcoin to be sure a transaction is valid; that is just a consequence of PoW.</p>
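<p>As a sketch of “chance proportional to stake”, here is a stake-weighted pick in Python (illustrative only; real Tendermint rotates proposers deterministically, weighted by voting power, rather than rolling dice, and the validator names are made up):</p><pre>import random

# Pick the next block proposer with probability proportional to
# bonded stake, the PoS analogue of hashpower share in PoW.
stakes = {"val-a": 500, "val-b": 300, "val-c": 200}

def pick_proposer(stakes):
    r = random.uniform(0, sum(stakes.values()))
    for validator, stake in stakes.items():
        r -= stake
        if r &lt;= 0:
            return validator

wins = {v: 0 for v in stakes}
for _ in range(10_000):
    wins[pick_proposer(stakes)] += 1
print(wins)   # roughly 50% / 30% / 20% of the blocks</pre>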
<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*t4Rsa-8U3xmIyi_8-j5tqA.jpeg" /><figcaption>A completely unrelated funny image to allow for a refreshing interruption</figcaption></figure><p>I’m very sorry that this text is so long already. Maybe this is also a good point to get a coffee. I’ll try to keep it as short as possible, but I don’t know how to do that, since so many people don’t know of all these intrinsic problems. But here is the good news: now I can talk about Cosmos :D</p><p>Cosmos solves the nothing-at-stake problem by introducing the concept of slashing. As far as I know this idea is from Jae Kwon, the head of Tendermint. The idea is that misbehaving stakers get their coins slashed: when you misbehave, you lose coins, and when you forge fake transactions, you lose a lot of coins. Therefore if you follow fork #2, you get your coins slashed on fork #1. Stakers can also be forced to partake in governance processes, which addresses some of the problems that occurred in the DAO disaster. Next, if you are lazy and don’t participate in the process of forging new blocks, you get slashed as well; it is important for network security that stakers participate in the creation of new blocks. Furthermore it is desirable that people stake their coins rather than letting them sit idle or trading them daily, so there is a dynamic inflation in Cosmos: staking your coins gives you a proportional share of the inflation, so stakers lose nothing to inflation. This is necessary to keep fees low and still have an incentive for staking.</p><p>The next thing is (delegated) Cosmos Proof of Stake (CPoS), which solves the other issue mentioned above. When there are, say, 10k stakers, everyone has to inspect each new block and tell everyone else whether it is ok. There are two limitations: latency and bandwidth. For latency, the physical limit for a signal is the speed of light, giving roughly 66 ms of travel time to the other side of the globe. In media the speed of light is reduced, in fibre not as much as in copper, so let’s just say 100 ms. Now it depends on the network topology; for a simple example, assume each participant in the consensus sends its knowledge to two others. Then it takes 14 stages to inform all participants (since 2¹⁴ &gt; 10k); these 14 stages take 1.4 seconds, and after everyone is informed they have to report back, so 2.8 seconds in total. The problem with this calculation is that it is wrong, because in this scenario there must not be a single traitor: if there is just one traitor at the first stage, half of the network gets false information. Furthermore, we have not taken into account that computers also have to compute, so let’s say a calculation, for example the actual check whether a block is valid, takes 50 ms. There is also room for optimization: there is no need to send the new block to just 2 other network members, we can send it to all 10k. So the one who forged the new block sends it to all 10k and they check whether it is correct: 150 ms have passed. Then all 10k send to all other 10k and work out who might be a traitor: another 150 ms. Then all 10k agree on the new block, raise their concerns about the traitors and propose the forger of the next block: another 150 ms, totalling 450 ms. Great, we can get block times of less than 1 s, pretty decent compared to bitcoin’s 10 minutes. Furthermore, blocks are final: with this approach there is no need to wait for 6 or 32 blocks to be sure everything is valid, because we no longer trust the longest chain but all agree according to our stakes.</p>
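<p>Spelled out as arithmetic (using the text’s round figures of a 100 ms hop and a 50 ms validity check; a back-of-the-envelope sketch, not a protocol spec):</p><pre>import math

n = 10_000                                  # consensus participants

# Fanout 2: each participant forwards the block to two others.
stages = math.ceil(math.log2(n))            # 14, since 2**14 exceeds 10k
print(stages * 100, "ms one way")           # 1400 ms, ~2.8 s with replies

# Fanout n: the proposer broadcasts to everyone at once, then all
# cross-check with all. Three rounds of one hop plus one check:
print(3 * (100 + 50), "ms to a final block")  # 450 ms</pre>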
<p>Ok, so let’s check the other caveat, bandwidth. For a first calculation, assume the original bitcoin block size of 1 MB. If we have to send the whole block to all 10k participants every 150 ms, we need an internet connection with roughly 67 GB/s of bandwidth. Normally this is denoted in bits per second rather than bytes, so this is about 533 Gbit/s. Quite a decent connection: some people have 100 Mbit/s, some lucky folks in student dorms have 1 Gbit/s, but nobody has 533 Gbit/s, especially not to any location in the world. Our initial example with 2 recipients per stage is possible though; there we only need roughly 100 Mbit/s, if we calculate with 150 ms intervals. In reality we need less, because we do not need to transfer the whole block in every round; in the later rounds we only need to send confirmations. However, the first round cannot be delayed too much for latency reasons. But here comes the problem: we have reverted to 2 recipients per round, and with that value we ended up at 2.8 s, a figure that, as we saw, offers no Byzantine fault tolerance. To get that, there would have to be many more stages, and then block times in the range of 1 s are out of reach. So here we have a real dilemma: with 10k network participants, we either lack the bandwidth to broadcast to all participants at once, or the latency is too high for many stages of information distribution, and block times of a few seconds are impossible. Keep in mind that by network participants we mean the people who participate in the consensus. The users of the blockchain who just read and send transactions do not need to partake in this process; finalized blocks can be broadcast to millions, because no two-way synchronization is necessary.</p>
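<p>The same dilemma in numbers (again only the round figures from the text):</p><pre># Broadcasting a 1 MB block to 10,000 validators every 150 ms:
block_mb, validators, interval_s = 1, 10_000, 0.150

mb_per_s = block_mb * validators / interval_s
print(mb_per_s / 1000, "GB/s")              # ~67 GB/s
print(mb_per_s * 8 / 1000, "Gbit/s")        # ~533 Gbit/s, out of reach

# With fanout 2 you only ever upload to two peers per interval:
print(block_mb * 2 / interval_s * 8, "Mbit/s")  # ~107 Mbit/s, feasible</pre>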
<figure><img alt="" src="https://cdn-images-1.medium.com/max/699/1*x7lsgtu_xSgZi3lr1BepRQ.jpeg" /><figcaption>Me reading the whitepaper of Cosmos for the first time</figcaption></figure><p>The solution to this problem is delegation, hence the name delegated Cosmos Proof of Stake. The stakers are split into two groups, delegators and validators. The validators are the participants in the consensus mechanism, whereas the delegators are not part of the direct network communication. This way the number of validators can be limited to a fixed number, for example 100. Now even if 10k or 100k people want to stake, the problem of synchronizing all these nodes is gone. Synchronizing 100 participants is not a big problem: with our initial fanout of 2 we need 7 stages, and if 10 recipients get the new block, 2 stages are enough for simple information passing. For Byzantine fault tolerance we need some extra rounds, but it is possible to forge new blocks in a few seconds with normal internet connections. So what do the delegators do? They vote on the validators. They still secure the network with their stake. It makes sense to pick validators that have a lot of their own coins at stake, so they don’t turn malicious, but the delegators also decide which validators to support. So you can earn staking rewards without setting up a strong node; that is the job of the validators, who get a bit more return for their work. And just as the yearly inflation is dynamic in Cosmos, the commission for the validators is dynamic too. The former is adjusted so that more than 66% of coins are staked (in Proof of Work, 50% of hashing power has to be non-malicious; in practical Byzantine fault tolerance, 66%), and the latter is adjusted through competition between validators for delegation, so that commission stays reasonably low while validators still have an incentive to run a node. One more remark is important here: if your validator gets slashed, you also lose your staked coins. That’s why it is important to split your delegated coins and to keep an eye on your validators, which is an incentive to monitor what is happening, thus keeping the network secure. The 1 MB block size above was just an example; of course this is adjustable as well. But it was picked to show that Cosmos can handle roughly 600 times more information than bitcoin (1 MB per second instead of 1 MB per 600 seconds) without wasting electricity.</p>
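<p>A sketch of the dynamic-inflation idea (the parameters here are invented for illustration and only loosely modeled on how Cosmos tunes its mint rate): inflation drifts up while less than the target fraction of coins is bonded, which makes sitting out expensive, and drifts down once enough coins are staked.</p><pre># Illustrative only: nudge inflation toward a bonding target.
def adjust_inflation(inflation, bonded_ratio,
                     goal=0.66, step=0.01, lo=0.07, hi=0.20):
    if bonded_ratio &lt; goal:
        inflation += step    # too few coins bonded: raise the reward
    else:
        inflation -= step    # target met: cheapen the reward
    return min(hi, max(lo, inflation))

inflation, bonded = 0.07, 0.40
for year in range(5):
    inflation = adjust_inflation(inflation, bonded)
    print(year, round(inflation, 2))   # climbs while bonding stays low</pre>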
<figure><img alt="" src="https://cdn-images-1.medium.com/max/746/1*UCNCqiHT40r13Iz2NyRr7A.png" /><figcaption>Cosmos serves as a hub to many zones, which represent specific blockchains</figcaption></figure><p>Ok, so Cosmos is technically well thought through and improves scalability vertically. That’s great, but a lot of coins are doing that, right? Absolutely right. Many, many coins do this, and honestly, it is not what is so awesome about Cosmos. So fasten your seat belts, we’re approaching the delicious facts now. Finger licking good. Cosmos is also a multi-token platform. You can not only have atoms in Cosmos, which are its staking tokens, you can also have photons, its fee tokens: one coin for the whole consensus business and another coin for paying transactions. Photons are not staked and therefore remain very liquid on exchanges. Furthermore, photons are also the native token of the Ethermint network. Ethermint is Ethereum hosted on Tendermint: the same functionality as Ethereum, but with consensus as described above, which solves the vertical part of the scalability issue. Since Cosmos is not meant to be a competitor to other ecosystems, the creation of Ethermint is called a hard spoon (<a href="https://blog.cosmos.network/introducing-the-hard-spoon-4a9288d3f0df">read more about it</a>): all ETH holders get photons and can use Ethermint just like Ethereum, but with much lower fees, and it is connected to the Cosmos network. Cosmos network? Well, Cosmos actually describes itself as the internet of blockchains, because in this cosmos there are zones connected through hubs. A zone can be anything that can be created with Tendermint. One zone is obviously Ethermint. Another could be a bitcoin peg zone: you deposit real bitcoin there and receive bitcoin-inside-Cosmos tokens, which you can transfer to the Cosmos hub and trade for other tokens, photons for example, or whatever else is available. Yet another zone might be a Euro peg zone, where you deposit Euro to a bank account and get Euro-inside-Cosmos tokens. Now you can trade these Euro tokens against some bitcoin tokens and, after the trade, redeem real bitcoin at the bitcoin peg zone, while your trade partner redeems Euro at the Euro peg zone. What the two of you have done is a decentralized trade (a toy sketch after the next paragraph makes the peg-zone bookkeeping concrete). Is this directly possible with Cosmos? No, but it is very easy to create a zone connected to Cosmos that allows decentralized trading. In fact, this creation of zones and their connection through hubs is the outstanding feature of Cosmos.</p><p>Imagine there is an app on Ethereum that clogs the network by creating more than 10% of the traffic, and it is all about cute kittens. In Cosmos the developers can decide to create a specific zone just for these sweet kittens. If the kitten mania breaks out completely and even that zone cannot handle transactions fast enough, the developers can just spawn another kitten zone running in parallel to the first one; through the Cosmos hubs, kittens can be exchanged between zones in a decentralized manner. We have met vertical scaling several times in this article; this is the other approach, called horizontal scaling. Horizontal scaling is asynchronous and makes things a bit more complicated, but it is not limited by bandwidth and latency the way vertical scaling is. All of this means it is possible to build tree-like structures of hubs and zones, opening up enormous scaling of blockchain technology. Many will have heard of the sharding the Ethereum devs are currently working on. It is different: sharding works automatically, while for Cosmos hubs and zones developers have to sit down and build things manually. But a big advantage is that every zone can be built independently: a zone can have its own economy, and private blockchains can be connected as well. This already indicates the spirit of the Cosmos project. It is not about taking over the whole crypto space, as many other projects claim to do; it is about connecting everything together. There is no future in which a single blockchain solves everything in the best way. The future will have many application-specific blockchains tailored to the needs of their users and developers. Cosmos wants to connect these new blockchains to the existing ones; Tendermint wants to give these application-specific blockchains a substrate. This is why Tendermint and Cosmos cannot be looked at separately: each makes the other much more valuable.</p>
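<p>Here is the promised toy model of a peg zone (deliberately naive and nothing like real Cosmos code; an actual peg zone replaces this blind trust with light-client proofs and its own validator set): depositing locks the real coin and mints a voucher, redeeming burns the voucher and releases the coin.</p><pre># Deliberately naive peg-zone model, for illustration only.
class PegZone:
    def __init__(self):
        self.locked = 0      # real coins held in custody
        self.vouchers = {}   # hub-side voucher balances

    def deposit(self, user, amount):
        self.locked += amount
        self.vouchers[user] = self.vouchers.get(user, 0) + amount

    def redeem(self, user, amount):
        if self.vouchers.get(user, 0) &lt; amount:
            raise ValueError("insufficient vouchers")
        self.vouchers[user] -= amount
        self.locked -= amount
        return amount        # real coins released back to the user

btc_zone = PegZone()
btc_zone.deposit("alice", 2)   # 2 BTC locked, 2 vouchers to trade on the hub
btc_zone.redeem("alice", 1)    # burn 1 voucher, get 1 BTC back
print(btc_zone.locked, btc_zone.vouchers)   # 1 {'alice': 1}</pre><p>The invariant worth noticing: locked coins always equal outstanding vouchers, so every voucher on the hub is fully backed.</p>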
<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*4Pgp27etbe5lGuE_XkKWrg.png" /><figcaption>There can be many hubs — all interconnected</figcaption></figure><p>I hope this text wall helps some real idiots. Here are also the answers to the most frequently asked questions:</p><p><strong>When ICO?</strong><br>April 2017</p><p><strong>When available at exchanges?</strong><br>Cosmos will not be usable before the mainnet launch, and atoms will still not be tradeable then: they have to be unbonded first. So photons will hit markets before atoms are available. Atoms can be unbonded some weeks after the launch, so they might be tradeable some weeks after that.</p><p><strong>When moon?</strong><br>Moon does not matter. The solar system must be left for reaching the cosmos.</p><p><strong>Is this ERC20?</strong><br>Read the text again. How could this be an Ethereum token? The only possible thing would be ERC20 tokens serving as IOUs for atoms until the mainnet launches. This has not been done, because the Cosmos team has no interest in crazy speculation, and maybe also because of US regulations :&gt;</p><p><strong>How many atoms were raised?</strong><br>168 million in the ICO and 50 million more for team, foundation and strategic partnerships.</p><p><strong>How many atoms do I need to stake?</strong><br>Any amount.</p><p><strong>When is the snapshot for Ethermint?</strong><br>I don’t know. Subscribe to the <a href="https://blog.cosmos.network/">Cosmos blog</a>; I’m sure you’ll be notified there.</p><p><strong>When can I start developing things with Tendermint?</strong><br>Today. You can build now and integrate into Cosmos later. See <a href="https://blog.cosmos.network/a-tour-of-cosmos-for-developers-7517ba1b4045">this article.</a></p><p><strong>I did not read any of the above, when is ICO?</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/370/1*f6VLBX3mcpaKdSn4plX_wA.gif" /></figure><p><strong>How will Cosmos change the world?</strong><br>Please see the image below:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*7EbWcG0lKNbOMlij20DJNw.jpeg" /></figure><p>Edit: More FAQ<br><strong>What was the ICO price?</strong><br>Actually it is called a fundraiser; it was $0.10 per atom, or 450 atoms for 1 ETH. If you want to take into account that the whole crypto space exploded since then, you can scale this $0.10 with the rise of ETH and say one atom would be about $2 today. That is the comparison with having invested in ETH instead.</p><p><strong>Is the amount of atoms ultimately fixed?</strong><br>No, there is an ever-ongoing inflation. The default value is 7%, but it can be adjusted so that a majority stakes. Keep in mind that you receive the atoms from inflation if you stake yours, so inflation does not dilute your portion of all atoms as long as you stake (the small calculation below shows why).</p><p><strong>When is the launch of the mainnet?</strong><br>It was postponed from the end of February to “when it’s done”. See the roadmap for how far along it is: <a href="https://cosmos.network/roadmap">https://cosmos.network/roadmap</a></p><p><strong>At which exchange can I buy atoms?</strong><br>There will be decentralized as well as centralized exchanges listing atoms. Ethfinex seems set to list atoms: <a href="https://www.ethfinex.com/token_listings/37/social_subcategory/223/whitepaper?parentCid=220">https://www.ethfinex.com/token_listings/37/social_subcategory/223/whitepaper?parentCid=220</a></p>
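<p>The small calculation promised above (made-up numbers, just to show the mechanics): the 7% of new atoms go only to stakers, pro rata, so a staker’s share of the total supply is preserved or even grows, while a non-staker’s share shrinks.</p><pre># 1000 atoms exist, you own 100, and 600 of all atoms are staked.
total, mine, staked = 1000.0, 100.0, 600.0
minted = 0.07 * total                       # one year of 7% inflation

print(mine / total)                         # 0.10  your share before
mine_staking = mine + minted * (mine / staked)
print(mine_staking / (total + minted))      # ~0.104 share if you staked
print(mine / (total + minted))              # ~0.093 share if you sat out</pre>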
href="https://coincodecap.com/uphold-review">Uphold Review</a></li><li><a href="https://coincodecap.com/best-cryptocurrency-apis">8 Best Cryptocurrency APIs for Developers</a></li><li><a href="https://medium.com/coinmonks/best-crypto-apis-for-developers-5efe3a597a9f">Best Crypto APIs</a> for Developers</li><li>Best <a href="https://medium.com/coinmonks/top-5-crypto-lending-platforms-in-2020-that-you-need-to-know-a1b675cec3fa">Crypto Lending Platform</a></li><li>An ultimate guide to <a href="https://medium.com/coinmonks/leveraged-token-3f5257808b22">Leveraged Token</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=ab4305cbb41" width="1" height="1" alt=""><hr><p><a href="https://medium.com/coinmonks/cosmos-tendermint-explained-for-real-idiots-ab4305cbb41">Cosmos / Tendermint explained for real idiots</a> was originally published in <a href="https://medium.com/coinmonks">Coinmonks</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>