Proof-of-Work vs. Proof-of-Stake for real idiots

Patrick Wieth
Coinmonks
39 min read · Oct 1, 2019


Image taken from https://de.wikipedia.org/wiki/Goldstandard#/media/Datei:McKinley_Prosperity.jpg

Up to this point, I have only written articles about specific coins, but maybe it is interesting to someone if I write about technology in general. The only way to find out is to write this article. As usual, I will try to write like a real idiot for real idiots. So let’s just remember what we are looking at:

Blockchains, cryptocurrencies, decentralized ledgers, etc. are networks where no authority exists. When we read about them, we bump into Proof-of-Work (PoW) and Proof-of-Stake (PoS) really often. Is there some other stuff? Yes, Proof-of-Authority (PoA) also exists, and some derivatives like Proof-of-Intelligence or Proof-of-Assignment. But these are very similar to the others, so we will compare the first two. Sometimes these things are called consensus algorithms, but thanks to Emin Gün Sirer we know that this is terribly wrong.

Unfortunately, as real idiots we don’t understand this hairsplitting. So let’s try to understand more here, but keep it simple and have a look at economics, too.

Emin Gün Sirer jizzing out a tiny fraction of his infinite knowledge. After reading this article, I promise, you will understand at least two of the technical terms he uses in these tweets better than you do now.

A short note beforehand: This article is not as funny as my other articles. I’m sorry. I have included “funny” pictures to compensate for this. I hope this cheap trick at least fools some of you into reading further because there are some nice memes looming.

With Bitcoin, Satoshi Nakamoto did not invent Proof-of-Work; he (or she, or they) invented Nakamoto Consensus, what a surprise. Nakamoto Consensus is the combination of PoW, the longest chain rule and blocks. More precisely, it is the combination of some arbitrary cost that has to be paid (PoW), a mechanism that ensures it makes sense to burn value (pay that arbitrary cost) and at the same time synchronizes an unorganized network, and something that binds asynchronous transactions together into synchronous blocks. This last thing had already been solved.

In essence, there were hash signatures being printed in the New York Times as early as 1995. The purpose of these signatures was to sign all the digital documents sent to Surety (read here). In principle, this was the first blockchain. It allowed someone to prove that a digital document is unaltered. To do so, the document was sent to Surety, a company that created a hash for it, and you could use this hash to prove that your document is unaltered. In addition, you could also prove that your document was released at a certain point in time.

These points in time are the publication dates of the New York Times issues carrying the hash signature. This hash signed the collection in which the signature of your specific document was included. These collections are blocks, and each newly published hash in the New York Times verified the previous blocks. Even though it was not called that, it is a blockchain.
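To make the chaining concrete, here is a minimal Python sketch of the idea (not Surety’s actual system, just an illustration with made-up function names): each published hash commits to a batch of document hashes plus the previously published hash, so changing any old document would change everything published afterwards.

import hashlib

def h(data: bytes) -> str:
    # SHA-256 stands in here for whatever hash function the real service used
    return hashlib.sha256(data).hexdigest()

def publish_block(document_hashes, previous_published_hash):
    # Combine the batch of document hashes with the last published hash.
    # Every new publication therefore re-verifies everything before it.
    combined = previous_published_hash + "".join(sorted(document_hashes))
    return h(combined.encode())

# Hypothetical example: two "weeks" of submissions
week1 = publish_block([h(b"contract.pdf"), h(b"lab-notes.txt")], "genesis")
week2 = publish_block([h(b"new-will.doc")], week1)
print(week1, week2)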

The paper published by Haber and Stornetta goes back to 1991 and Surety was offering its services in 1995. So could Bitcoin have been realized back then already? No. This approach works well for signing digital documents because trusting Surety is on a different level for these signatures than it is for a cryptocurrency. The service offered by Surety was not decentralized. So you had to trust Surety not to alter your documents.

Fortunately, there is no real incentive for Surety to do that. At the point in time when you transfer your document to Surety for signing, there is no real gain in manipulating your document. You would recognize instantly that your document was altered and you would not use the received signature, since it points to an altered document. For Bitcoin this is different. If a single entity were issuing blocks, there would definitely be something to gain if this entity altered transactions, for example transferring all BTC to its own address and selling them before the market realizes the system is no longer trustworthy.

Another invention that was already available before Bitcoin existed.

So how about PoW? Yeah, this was present already, especially in the form of hashcash, a way to make sending e-mails more costly in order to reduce spam. To send an e-mail you had to find a hash, an operation which costs time and thus makes sending spam mails expensive. The idea is good, but it did not catch on. Satoshi’s thinking was not so much about how to include this hashcash in the blocks of signatures, but rather about how anyone can be Surety, the single entity that publishes collections of signatures. If just everyone can publish collections, how is it ensured that the network does not get flooded, and how do they agree on conflicts?
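A hashcash-style puzzle looks roughly like this in Python (a sketch, not the exact hashcash stamp format, and using SHA-256 instead of the SHA-1 the original scheme used): keep trying nonces until the hash of your message plus the nonce falls below a target, i.e. starts with enough zero bits.

import hashlib
from itertools import count

def find_stamp(message: str, difficulty_bits: int = 16) -> int:
    # Try nonces until the hash of message+nonce has `difficulty_bits`
    # leading zero bits. The sender pays this cost; the receiver can
    # verify it with a single hash.
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

nonce = find_stamp("mail to: alice@example.com")
print("valid stamp found, nonce =", nonce)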

The first problem is obviously solved by hashcash, or say Proof-of-Work. The second problem is much more complicated and is basically what makes blockchain revolutionary. And this is decentralized consensus. How do independent actors agree on a single truth, even when some parts of this truth are non-beneficial to some participants?

For Nakamoto Consensus the final piece for this is the longest chain rule. It is a typical puzzle piece that fits in perfectly. For example, when modeling algorithms or physical systems, you often encounter a situation in which the major parts are there but the whole thing is shaky. Sometimes you find a piece that solves some remaining problems, but it creates a few new problems; then you find the next piece, it again solves a problem but creates a new one, until you finally accept that this path does not lead to a consistent theory.

Many paths lead to solutions that create new problems, but then, starting over again, you finally find the piece that just solves all remaining problems at once. It solves the problem you are currently looking at and also some other problems on the backlog, and you find out this puzzle piece is the missing magic sauce. The longest chain rule is such a piece. It solves the problem of how the network agrees on a single truth. At the same time, it solves the problem of why anyone should do so much work. It even solves the question of how the cost of an attack increases with the value of the network.
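The rule itself is almost embarrassingly small. Here is a sketch in Python (simplified: each block just carries a difficulty number, and for constant difficulty comparing accumulated work is the same as comparing lengths):

def total_work(chain):
    # Each block carries a difficulty value; the chain's weight is the sum.
    return sum(block["difficulty"] for block in chain)

def choose_chain(chain_a, chain_b):
    # No voting, no talking to other nodes: any node can decide alone.
    return chain_a if total_work(chain_a) >= total_work(chain_b) else chain_b

fork_a = [{"difficulty": 1}, {"difficulty": 1}, {"difficulty": 1}]
fork_b = [{"difficulty": 1}, {"difficulty": 1}]
print(choose_chain(fork_a, fork_b) is fork_a)  # True: 3 units of work beat 2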

A typical reaction when a crucial puzzle piece is found

It also comes with a very nice feature: the network participants do not have to communicate with everyone. This does not sound like a big thing, but just imagine you had to talk to all your neighbors who live in the same street as you do. Yes, the annoying ones as well as the super boring ones. You have to do it every day. Otherwise, your street is closed and nobody can get in or out anymore. Once everyone has talked with each neighbor, you can unfold the sidewalks in the morning, and at night the same happens: once you have agreed, you fold up the sidewalks. Alternatively, your street has agreed on a special rule that whoever wakes up first unfolds the sidewalks, and once it is dark and you come home, you fold up your own sidewalk. Then nobody has to communicate. It works no matter how many people live in the street. This doesn’t come naturally, especially not in networking.

So what does it mean for Bitcoin? Well, anyone can check if a block is valid by looking at the previous block. And for conflicting blocks or chains of blocks (forks), anyone can check which chain is longer and has more work put into it. And these checks can be done without talking to the whole network. Therefore the consensus mechanism scales with O(1), where this O is the Landau O, which stands for complexity. O(n) means that the complexity of an algorithm scales linearly with n; in this case, n is the number of network participants. A network where every participant has to talk to each other participant scales with O(n²), which means doubling the numbers quadruples the communication effort. Linear means doubling the participants doubles the effort. Ok, great, but then O(1) means doubling the participants does not add any effort? Yes.
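To get a feeling for what these Landau symbols mean, here is a toy Python count of messages per consensus round for the three scaling behaviors (purely illustrative numbers, not a real protocol):

def messages_all_to_all(n):        # O(n^2): everyone talks to everyone
    return n * (n - 1)

def messages_block_broadcast(n):   # O(n): one block finder floods the network
    return n - 1

def messages_chain_selection(n):   # O(1): each node applies the rule locally
    return 0

for n in (10, 100, 1000):
    print(n, messages_all_to_all(n), messages_block_broadcast(n), messages_chain_selection(n))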

Keep in mind this is only the algorithm; the real network needs a bit more, because you have to send the new blocks around, but this can be done in a way where one node sends to 1000 other nodes, and this scales very well. And only the ones who found new blocks need to do that kind of broadcasting. But wait a second, this means infinite scaling, right? Yes. But everyone knows Bitcoin does not scale. True, but when people say this, they are not talking about participants (miners) in the network but rather about the number of transactions. So the number of users reading the blockchain scales very well, but the number of transactions these users can send per time unit does not scale at all.

A funny image. Not related to my sex life.

Interesting, but isn’t this stupid? It sounds like a restaurant with infinite tables where only a fixed number of food or drink orders can be handled. Yeah, we will discuss this as well, but one can already see that this mismatch between active user scaling and passive user scaling is problematic. There are some solutions, like the Lightning network, where you open a lot of other restaurants and each of these restaurants sends one guy to the original Bitcoin restaurant, where he orders all the stuff the people in his restaurant want to eat. The customers in his restaurant can also order virtual drinks, which become real later: once they have finished all their virtual drinks, the waiter goes to the original restaurant to turn the virtual drinks into real ones.

Me doing research for this article. In the background are all kinds of analogies I have thought out. Like the one you just read above this image.

But let’s go back to complexity. It can also be applied to other things, it doesn’t have to be networking; a typical example is sorting. If you are a computer scientist, this part of the article might be very boring, but since this article is for real idiots, I have to go into detail. The task is quite easy: sort whatever list of words alphabetically, or numbers by value. In principle it does not matter, sorting is the same for a list of whatever as long as there is a property for which a transitive order exists. However, we are leaving the real idiot realm right now, so let’s stick to sorting numbers by value. The task is simple, but there is an infinite number of approaches, and some are faster than others. For example, Bogosort shuffles the numbers randomly and then checks if the order is right. Another example is Bubblesort. It is the prototypical example, because it is easy and it is how we as humans often sort, namely by swapping neighboring elements until the list is sorted.

Bubblesort animated — not that this is important to understand the article, but let’s just rest a while and warm our hearts by seeing algorithms at work. Image taken from Wikipedia.
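For the curious, here are minimal Python versions of the two (Bogosort only for amusement, never for real data):

import random

def is_sorted(xs):                  # checking the order costs O(n)
    return all(a <= b for a, b in zip(xs, xs[1:]))

def bogosort(xs):
    xs = list(xs)
    while not is_sorted(xs):        # shuffle until luck strikes
        random.shuffle(xs)
    return xs

def bubblesort(xs):
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:   # swap neighboring elements
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

print(bubblesort([5, 1, 4, 2, 8]))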

The third example is Quicksort, which is the prototype of a divide-and-conquer sorting algorithm. Here the list is divided into two lists, one with all elements smaller than an initially selected element and one with the rest. Then the resulting two lists are again divided in the same way until only lists with single elements remain. At this point, the order is found and the lists only have to be merged back together, reflecting on each level which sublist contains the larger elements. So this algorithm is much more complicated, and we will look at complexity soon to understand why this extra effort might be worth it.

Quicksort animated — With higher complexity comes better heart warming. Image taken from Wikipedia.
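A compact Python sketch of the divide-and-conquer idea (not in-place, therefore not the memory-optimal textbook version):

def quicksort(xs):
    if len(xs) <= 1:                              # lists of one element are sorted
        return xs
    pivot, rest = xs[0], xs[1:]
    smaller = [x for x in rest if x < pivot]      # divide ...
    larger = [x for x in rest if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)   # ... and merge

print(quicksort([3, 6, 1, 8, 2, 9, 4]))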

The last example is Sleepsort, which spawns a subprocess for each number that waits (sleeps) for as many seconds as the value of the number and then appends the number to the final list. This is a joke algorithm, but it will be a very interesting example here. So let’s look at complexity to compare these algorithms. First one needs to know that checking the order of a list has a complexity of O(n). In the best case, Bogosort is sorted after the first shuffle, which gives O(n) for the check. In the worst case, the complexity is infinite, because shuffling can go on forever. But what is the average? As everyone knows, the average between n and infinity is n•n! (factorial). No, I’m just kidding, we can calculate this. I will try to demonstrate how this can be derived easily: There are n! possibilities to arrange n elements. This is because for the first element there are n spots, for the second (n-1), for the third (n-2) and so on. All these possibilities combined give
n•(n-1)•(n-2)•(n-3)•…•1 = n!

so on average one needs to shuffle n!/2 times and do a check after each shuffle. For complexity we omit constant factors, so the complexity is n•n!. Bubblesort has an average of n², because in essence you have to compare each element with each other, which is n•n. In fact, you only need to do half of that, but we omit the factor of 1/2 again. This is much faster than Bogosort. The n² complexity of such a system is an important lesson; we have already seen it in the first part for network communication. Quicksort introduces hierarchy and is a bit more efficient because a lot of redundancy is prevented systematically. This yields a complexity of n•log(n), which is better, because the logarithm of n is always smaller than n, especially for large n.

There are a lot of optimizations of Quicksort which are better in the worst or best case or need less memory. But we won’t cover those here, since they are quite specific to this type of sorting algorithm. Let’s get to the most interesting algorithm. The best of all sorting algorithms is Sleepsort, because it always has a complexity of O(n). Therefore it is much less work than all of these very sophisticated algorithms. You don’t need to check anything; for each element you only have to spawn a single process, which is a fixed amount of computation.

This all sounds too good to be true? Exactly. You still have to wait really long until the solution is ready. Each number has to be multiplied by a factor, which then gives the duration of the sleep. Before you have checked the list, you don’t know what an appropriate factor is, but we could determine one and the complexity would not increase. However, for this factor we cannot simply switch from seconds to milliseconds or nanoseconds, because the sleep time must be significantly longer than the time a subprocess needs to spawn.

The sorting takes an amount of time equal to the largest number times the factor. So the complexity is the lowest possible of all sorting algorithms, but in fact this does not reflect the time it takes. It is only the computational complexity. A lot of computational effort is masked in the subprocesses, and the factor cannot be chosen freely: it needs to be high enough to separate the subprocesses from each other if the sorted numbers are small, and it needs to be high enough if there are a lot of numbers, so that the computer is able to spawn all subprocesses in a time shorter than the time interval between two close numbers.
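For completeness, here is the joke in Python (a sketch using threads instead of subprocesses; it only works as long as the chosen factor is large compared to the time it takes to start a thread):

import threading, time

def sleepsort(numbers, factor=0.1):
    result, lock = [], threading.Lock()

    def worker(x):
        time.sleep(x * factor)          # wait proportionally to the value
        with lock:
            result.append(x)

    threads = [threading.Thread(target=worker, args=(x,)) for x in numbers]
    for t in threads: t.start()
    for t in threads: t.join()
    return result                       # wall-clock time ≈ max(numbers) * factor

print(sleepsort([3, 1, 4, 1, 5]))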

Now we have talked a lot about these algorithms and one might ask what this has to do with blockchains. Well, we have seen that Nakamoto Consensus scales with O(1), and this is no coincidence. The infinite scaling of Sleepsort comes from the assumption that infinitely many subprocesses can be spawned, and in Nakamoto Consensus infinitely many miners can search for new blocks in parallel. But that does not mean that infinitely many transactions can be processed.

In fact, the number of transactions cannot really be increased by a lot. In the end, you have to wait until a block is found, and this block time cannot be reduced, just like with Sleepsort. If you reduce the timing too much, the subprocess spawn time dominates and there is no sorting. In the same way, if the block time is too low, the computation is so fast that network communication dominates and the finder of a block has the highest chance of finding the next block.

The same goes for the network participants who have low latency (ping) to the finder of the last block: they find the next block with a much higher chance than others. In this case, the consensus doesn’t really work. So we have seen a similarity here between Sleepsort and Nakamoto Consensus.

Understanding why Sleepsort is not very efficient at sorting means understanding why Nakamoto Consensus is not very efficient at processing transactions. Another similarity exists between BFT consensus (Byzantine fault tolerant; here we mean BFT with dPoS) and Bubblesort, where each item has to be compared to each other item, or each participant has to agree with each other participant, thus giving n² scaling behavior. Sounds pretty inferior? Well, it’s not.

Because, as we have seen, the infinite scaling of Nakamoto Consensus only applies to passive users (readers/miners). For active users BFT has a lot more scaling to offer, because you don’t have to offer a sufficiently long block time to give everyone a fair chance. In BFT the limit is how long it takes to synchronize all participants. Whenever you find ways to widen this bottleneck, you can handle more transactions. Also, you can separate a kind of overseer role from the actual block producers. Cosmos has done this to keep the number of actual block producers, called validators, low. There are 100 of them, but all other stakeholders can participate as overseers by distributing staking power among validators. They are called delegators. With this approach, you don’t have to exclude everyone who is not in the top 100, but you still have the transaction throughput of a small validator set. Great. So BFT is just superior to Nakamoto Consensus? Well, it’s not that easy. BFT does not necessarily come with PoS.

I will not introduce BFT/PoS here in detail, but I have another article on that. Instead of using PoS, a BFT network can also be run with a fixed set of public keys that allows only the owners of the corresponding private keys to become validators. This is usually called Proof-of-Authority (PoA) and it has an obvious drawback: it is permissioned. Bitcoin would never have become noticeable if it were permissioned.

So the permissionlessness of Bitcoin and Nakamoto Consensus is very important. I’d also say that the option to either buy Bitcoin or mine it was important for early adoption. But we wanted to compare Nakamoto Consensus to PoS BFT, and in contrast to PoA, PoS is permissionless. However, an interesting question is: Is it as permissionless as Nakamoto Consensus with PoW? I’d say no. In PoS it is possible that you cannot get into the system because everyone already in it has decided not to sell any stake (coins). In this case the technology is permissionless but the real network does not permit you to get in. In PoW this cannot happen, because you don’t need anyone from inside to build new chips that can solve the puzzle, i.e. mine BTC. However, this feature also implies an attack vector that does not exist for PoS: in PoW, if you acquire enough computation power, you can attack the network. In PoS, to do that, you need to buy the coins from network insiders. You cannot get them from the outside. These insiders don’t want an attack on their network, or only if they can sell all of their stake. However, on the way to a 67% share the price might rise astronomically, whereas the production of silicon chips can be scaled almost linearly.

Two crypto experts discussing peculiarities of different rate limiting approaches. The bearded man is also an expert in quantum computing.

It makes sense to introduce the CAP theorem here, which I already mentioned in another article. The CAP theorem basically states that we can’t have nice things in distributed systems. Either consistency (C), availability (A) or partition tolerance (P) is very limited, or two of them are limited at the expense of one feature being very strong.

We talked a lot about how many transactions can be processed, and this feature is availability (A). The more participants can be handled, the higher the availability. PoS wins vs. PoW. Permissionlessness has a lot to do with partition tolerance. Is it possible to fill the gaps if half of the network is gone? Can the network even continue if too many participants break away? PoW wins vs. PoS. To be precise, this is the killer feature of Nakamoto Consensus. It works if all but one node go offline. It works if there is a nuke creating an EMP wave electronically shutting down half the globe. After the nodes come back online they can rejoin the network and see if the others have found some blocks.

Even if all wires in the oceans are cut, the continents can go on and produce blocks, compare which one found the longest chain and merge the different realities back together once the connection is reestablished, or each continues separately. Well, in this specific example Nakamoto Consensus might not be the best at merging different realities, but this is a story for another day. The only things that have better partition tolerance are hydras, where you can even cut a single node in half and they still work.

Hydras are very partition tolerant. Image taken from www.dndbeyond.com/monsters/hydra

Ok, so PoW wins at partition tolerance and PoS wins at availability, but what about consistency? This feature basically means that there is a single reality that everyone knows and agrees with. So Bitcoin has synchronous blocks and these synchronous blocks contain asynchronous transactions. Sounds pretty consistent, and it seems to be the same with PoS. But there are forks, so sometimes there are concurrent blocks and one chain will be discarded in favor of the one that turns out to be longer after more blocks are added. In PoS this is not necessarily a problem; fast finality is possible, which means that blocks are final once 2/3 of validators have agreed, and everyone then knows that they are final. In this case, consistency is very high because there is a single truth everyone has agreed to, and you don’t have to wait for some blocks until it becomes almost certainly final.
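A sketch of what fast finality boils down to (ignoring vote weighting by stake and all of the networking): a block is final the moment more than two thirds of the validator voting power has signed it, and once that is true it never flips back.

def is_final(signed_power: int, total_power: int) -> bool:
    # BFT-style rule: strictly more than 2/3 of the voting power must have
    # committed to the block. This is the consistency property discussed above.
    return 3 * signed_power > 2 * total_power

print(is_final(67, 100))   # True  -> final for everyone
print(is_final(66, 100))   # False -> keep waiting for more votes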

To put this into perspective, let’s compare it to IOTA, where such a single truth never exists. There can be transactions known in one part of the network which are unknown in another part, and vice versa. After some time both sides get to know the transactions of the other part, but until this happens new transactions are again created that are not known by everyone. This is because there are no synchronous blocks, there are only asynchronous transactions. Consistency is low, but availability is high. This is quite interesting, since IOTA uses PoW for limiting transactions just like Nakamoto Consensus.

In contrast, it does not have the longest chain rule and thus availability is increased at the expense of consistency. So here we can see there are different ways to use these building blocks of consensus mechanisms. And such a building block comes with specific properties, but how it works in the end depends on the whole combination of mechanisms.

Coming back to PoS with fast finality, we have seen that it comes with very high consistency, but what is the drawback? We have seen that there are often drawbacks when some desired features are strongly favored. This is the reason why the CAP theorem says that you cannot have all properties at their maximum. So the price for very high consistency is a loss of liveness, which means that the chain halts if not enough participants are online. Fragile liveness is in essence low partition tolerance.

Now we have seen that Nakamoto Consensus is strong at partition tolerance (P), good at consistency (C) and bad at availability (A), in contrast to PoS with fast finality, which is bad at P, strong at C and good at A. We can accept the fact that there is no single consensus mechanism that is suited perfectly for all problems. Just like there is no best car. If you want to transport lumber out of a forest, a Ferrari might not be the best fit, even though for a race it is great. The same applies here. So we have to define the use cases to decide what might be a good fit.

If you open the website bitcoin.org, you’ll see it say “Bitcoin is an innovative payment network and a new kind of money”, which is great. The use case is making payments. What is the most important feature for payments? Availability. If you cannot make payments because the network is congested, then it is not a good payment network. Funnily enough, Nakamoto Consensus is not good in this regard and thus Bitcoin is not good in this regard. Consistency is also important, because you want to be sure the payment is final and your trade partner knows this.

Looking at this, it is quite obvious that PoS with fast finality is much better suited for transacting payments. But wait, isn’t this proven wrong, because Bitcoin has the highest valuation of all cryptocurrencies? First of all, the market does not tell you if something is good at what it wants to be good at, it only tells you if people are buying it. And second, there is something other than payment for which money can be used, and that is a store of value.

A high market cap can also mean that something is a good store of value, or that people believe it is. When you store value, you don’t make payments. To store value you don’t need high availability. What you need is partition tolerance and high security. Consistency is also not utterly important, because it is fine if it takes some time until the whole network knows you have stored value or until you can access your value. So is there an analogy to an old asset? Yes, it is gold.

Gold has been a store of value for a really long time now and it is a decentralized asset. There is no central authority issuing it and there is nobody who can mark your gold savings as invalid. Gold is quite heavy and not as easy to carry around as bills, for example. So if we look at gold like a distributed application, which is a bit ridiculous, but we do it for the sake of analogy and education, then it is extremely partition tolerant. The availability is not very good. The consistency is bad. Nature keeps very accurate track of how the gold is distributed among holders of its value, but we cannot access this database of nature; we need to check individually whether a stack of gold is real gold and not filled with lead.

These bad marks at C and A are the reason why most people don’t hold gold physically but rather have bought gold on a second layer, where you hold a claim to a certain amount of gold. This allows you to sell the claim, and for the buyer it is not necessary to weigh any amount of gold and check if there is lead in it.

This second-layer solution of trading gold claims increases C and A tremendously and decreases P. The latter is because the second-layer solution can be destroyed more easily than the real gold. This is quite similar for second-layer solutions for Bitcoin. And here we can see that the term digital gold is a really good description of what Bitcoin is. To make Bitcoin more available, second-layer solutions are necessary. The same goes for gold. In both cases, the second layer trades away P (partition tolerance) for C and A. So we have seen now that the intrinsic properties of Bitcoin, or say Nakamoto Consensus, make it a good store of value and a bad payment processor.

McAfee doing prank calls impersonating himself as “future”

How about PoS? It seems to be a natural fit for payment networks, since it is strong at availability. With fast finality, it is also great at consistency. Does that mean PoW is good for a store of value and PoS good for the transaction of value? Don’t fall into the trap of giving PoW in general the attributes that only Nakamoto Consensus has. Take for example IOTA, which has PoW but does not have a blockchain, rather a tangle, which gives high availability, weak consistency but good partition tolerance.

However, this is theory. In practice, there is a coordinator, which does not really make it decentralized, limits transactions, and of course partition tolerance is not given. But in theory, IOTA is an example of PoW being good at doing transactions. The big problem with comparing PoS and PoW directly is that both systems are not directly interchangeable. If we take Nakamoto Consensus and just plug in PoS, then the longest chain rule does not make sense at all. Vice versa, plugging cryptographic PoW puzzles into BFT consensus does not really make sense. So we correct ourselves: BFT PoS with fast finality is good for payment networks and Nakamoto Consensus (including PoW) is good for a store of value.

But can payment processors and store of value really be disconnected? Can they be seen as distinct things? Isn’t it ultimately necessary for things which are used for payment to hold value, and isn’t it ultimately necessary for things which store value to be transferable? Yes, but only in a static picture. If we look at time evolution, it clears up. Again we can compare to real-world things. We have introduced gold already, now let’s introduce fiat, for example the Dollar. Both the Dollar and gold are a store of value and something you can use to make payments. Are both features implemented to the same extent? Certainly not. The Dollar is a much worse store of value since it depends on the existence of the United States of America, whereas gold only depends on the laws of physics being unaltered. For payment it is the other way around. Gold is heavy, complicated to split, not as easy to check for validity and harder to count.

So here we have two real-world examples that are both used, make sense and still have different characteristics. If we look at time scales, we see that payment processing with fiat money can be done really fast, in contrast to gold. If we ask on which time scales both function as a store of value, then it is clear that gold performs well on the scale of 1000 years, while there are many examples of fiat money only 100 years old that have no value today. It is no coincidence that PoS systems resemble the characteristics of said fiat money and BTC resembles the characteristics of gold. Does it make sense to have both systems? Yes, absolutely. Taking out a chunk of gold and cutting off a small piece to pay for your train ticket in the morning is analogous to using a Lambo to carry lumber out of a forest.

There might be many who argue the gold standard should never have been abandoned, but there won’t be many who will say gold is not limited by its physical properties. A monetary system based on coins and banknotes allows for a much wider design space than a physical asset like gold. So whoever argues that the gold standard should never have been abandoned does not take into account how important it has been to actually conduct monetary policy on the one hand, and to be able to process daily payments really fast on the other hand. Furthermore, our fiat money system is not really based on coins and banknotes anymore but is rather an electronic credit money system, which has an even wider design space and allows for faster transactions. So does blockchain have an even wider design space and allow for new properties? Partially. When it comes to decentralization, yes, but there are limitations imposed by cryptography and its demands.

Nakamoto Consensus is much heavier on these constraints than PoS systems. Again this is a resemblance to gold vs. fiat. If we look at the way the different systems are secured economically, we find another similarity. In PoS economic security comes via penalties, and in Nakamoto Consensus it comes via intrinsic mechanisms. The nothing-at-stake problem does not exist in the latter. For those who do not know, this problem describes the circumstance that in naive PoS systems there is no reason to pick a specific fork of the blockchain. It makes economic sense to follow every fork of the chain. Since this is very problematic for the users and makes the network pointless, this behavior must be disincentivized by protocol-defined penalties. From now on we will call these penalties punishment.

For Nakamoto Consensus, and in this case it is sufficient to say PoW, this problem does not exist, because the work can only be spent once and therefore only on a single fork of the chain. This is not a property designed by the protocol but rather an implication from the physical world. Coming back to gold and fiat, we can see that the security of gold comes from physical properties and the security of fiat comes from penalizing misbehavior. It makes economic sense to print your own banknotes or to hack bank systems to steal money. But we punish whoever does so and thus make the system economically secure. Even more important is the 51% attack (67% for BFT), which must not make economic sense. If in a given network it makes sense to form a cartel or even buy 51%, and after the attack you make a net profit, then the network is doomed.

For Bitcoin this implies buying a lot of miners or renting mining power and then transferring all Bitcoin to your own address in order to sell them into all open orders on the internet. The goal is to make more money than you spend on mining power. After such an attack, smart money will move out of Bitcoin and only the maximalists will stay and tell you that everything is fine. This is why it is important to cash out fast, before everyone realizes Bitcoin is now worthless. However, an attacker can calculate what this amount of mining power costs and how much there is to gain, and if this equation always yields a net loss, nobody will perform the attack. Of course there are more problems: on the cash-out side, there might be a problem getting all the money from the exchanges before the attack becomes public, and on the preparation side there are a lot of ASICs to be purchased, which might draw a lot of attention.
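The attacker’s back-of-the-envelope calculation looks roughly like this (a Python sketch with purely hypothetical numbers, in billions):

def attack_profit(cost_of_51_percent, loot_sellable_before_crash, resale_value_of_gear):
    # The attack only "pays" if what you can actually cash out, plus whatever
    # your mining gear is still worth afterwards, exceeds what you spent.
    return loot_sellable_before_crash + resale_value_of_gear - cost_of_51_percent

# Hypothetical: gear costs 141, you manage to dump coins worth 20 before the
# price collapses, and the specialized ASICs are nearly worthless afterwards
# because the only chain they can mine just lost its value.
print(attack_profit(141, 20, 5))   # -116: a heavy net loss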

But this doesn’t matter, as these mechanisms should not be the final barriers against such an attack. The attack must be infeasible regardless of such external problems. So how is this achieved? For Bitcoin there is a network value, the market cap, which is currently $140 bn. Then there is something like the value of all miners. If we assume this value is $140 bn as well, then you need to spend an additional $141 bn to get 51% of the mining power and to steal all the bitcoin, worth $140 bn. Since there are not enough open buy orders, it is not possible to sell the bitcoins before Bitcoin becomes worthless because of this attack. But what ensures that the mining power is worth more and more as the Bitcoin market cap grows?

This is why there is difficulty and why it increases as more and more miners are connected. Mining gives block rewards and fees. So if the Bitcoin value increases, then the mining rewards increase in the same way. This means all miners make more profit now. Now it makes sense to buy more miners and supply more mining power. This in turn increases the difficulty, which makes mining less profitable. But how is it ensured that the mining equipment value rises in the very same way as the Bitcoin price? Well, if all other parameters stay the same, then this is a consequence of all the involved equations being linear. If the price doubles, then the return from a miner doubles, and there will be more miners until the marginal return from an additional miner no longer exceeds the marginal cost of mining. I’m a bit afraid we are leaving the realm of real idiots here again. The key takeaway is that the number of miners doubles when the price doubles, as long as there are no other effects, like changes in the electricity price, new technology, etc.
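Here is that linear argument as a small Python sketch with made-up numbers: hash power keeps joining until the revenue per unit of hash power drops back to its cost, so the equilibrium hash power is linear in the coin price.

def equilibrium_hashrate(coin_price, block_reward, blocks_per_day, cost_per_hash_per_day):
    # Total daily revenue is split pro rata over the hash power. Hash power
    # grows until revenue per hash equals cost per hash, so the equilibrium
    # hash rate is proportional to the coin price.
    daily_revenue = coin_price * block_reward * blocks_per_day
    return daily_revenue / cost_per_hash_per_day

h1 = equilibrium_hashrate(coin_price=10_000, block_reward=12.5, blocks_per_day=144, cost_per_hash_per_day=0.05)
h2 = equilibrium_hashrate(coin_price=20_000, block_reward=12.5, blocks_per_day=144, cost_per_hash_per_day=0.05)
print(h2 / h1)   # 2.0 -> doubling the price doubles the equilibrium hash power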

This plot shows the bitcoin difficulty and the bitcoin price over time. In green is the difficulty with scale indicated on the right and in orange is the price with scale indicated on the left. (Plot created on data.bitcoinity.org)

Of course this is a very simple picture. In reality there will be different marginal costs for different types of miners and different mining locations. Marginal return will not exactly match marginal cost, because there must be a spread, which mostly depends on what time-to-value investors in mining can accept. For real estate investments in AAA-rated first-world countries, investors are fine with accepting a 30-year time-to-value, but nobody with a basic understanding of economics will start a Bitcoin mining operation with a 30-year time-to-value. If we look at the plot above, we can see that the price and difficulty curves match, but on closer inspection we see that the difficulty increases by two decades whenever the price increases by only one decade.

This means difficulty rose from 100 to 10,000,000,000,000, or 10² to 10¹³, while the price has only increased from 0.1 to 10⁵. So here are two lessons to be learned: 1) Always look at the scaling of a plot. You can always make two curves match; the question is, does the scaling make any sense? And 2) there is something that makes difficulty increase much faster than price. The answer is technological advance. At the beginning there was CPU mining, and its source code was improved over time until the first code for GPU mining appeared, which increased the hashrate per $ significantly. This code was improved as well, and of course silicon chip technology itself has improved, but not as fast as the crypto community has improved mining hardware. Then ASICs came out, and the process node went from 120 nm down to 16 nm and might get even smaller in the future.

So in this regard a lot has happened. It is still remarkable that you cannot see spikes for these milestones in the difficulty curve. The same goes for halving events (“the halvening”). Whenever the block reward of Bitcoin was halved, many went crazy and thought insane things would happen to the difficulty, but looking at the plot, you can’t recognize these events without knowing where they are (yellow lines). The reason for that is the heterogeneous distribution of miners. If new (more efficient) miners are put into service, the difficulty rises, but many operators shut down their old, now inefficient miners and the difficulty stops rising. This is why the curve is so smooth. Of course the difficulty rises because of these events, but it takes time until enough old miners are driven out and the new marginal cost has established a new balance with the marginal return. Still, halving the reward is important for the token economics of Bitcoin. It makes the supply of Bitcoin ultimately limited. Without it, Bitcoin would not necessarily be deflationary.

Not only is he in for the technology but also to learn about token economics and unforgeable costliness.

Maybe you are asking now why there is such a long story about mining here. The reason is that in order to understand the economic implications of Nakamoto Consensus there is no way around these things. What we have learned now is the relation between miners, security and throughput of the network. In the first part we learned that there is no increase in throughput if there are more miners, and in the last part we learned that more miners are incentivized when the network valuation has increased. It is also necessary for the value of the miners to increase in this case.

For Proof-of-Stake these things do not matter at all. These economic relations are not given. In PoS the coins are the (virtual) miners, so the value of both cannot decouple. However, it is necessary to prevent a very similar attack in which stakers double spend and afterwards instantly sell off their stake. This is easily solved by freezing the coins that act as “virtual miners” for some time. If harm is done by stakers, the frozen coins will be destroyed to punish the bad behavior. The same goes for following different forks, which solves the nothing-at-stake problem. Mostly I’m presenting the solutions chosen by the developers of Cosmos. This is because Cosmos looks to me like the most decentralized and least attack-prone of all PoS networks. However, this is not the important part; maybe there is something better than Cosmos, feel free to convince me in the comments.

What is important is the similarity between fiat vs. gold and PoS vs. PoW. For gold and PoW, physical resources are devalued in case of bad behavior, whereas for fiat and PoS, punishment is applied mostly by devaluing or removing virtual assets. But isn’t it better to have physical assets as collateral? Isn’t it safer? It depends on how you define “safe”. Virtual assets cannot be swept away by floods; this has already happened to Bitcoin miners. Virtual assets cannot be destroyed in a hurricane. But virtual assets can be hacked when stored in an unsafe way. It is always harder to carry away physical assets. Nobody will doubt that the biggest heists of the 21st century will all be done with virtual assets.

But this is just one aspect of safety. Another one is corruption of authorities. Gold is the undisputed king. Nobody can corrupt physics. Changing the consensus parameters in a PoS system is much easier than in PoW. On the other hand, hashing algorithms might be broken by quantum computers. For PoS only the private keys might be broken by quantum computing. But this can also happen to PoW systems. Still, it is much easier to increase the key length than to switch the hashing algorithm. But we have also slid over from (physical) safety to (IT) security. Regarding all of these things, there is only one thing that can be said for sure: whoever says PoS or PoW is safer no matter what is wrong. Maximalists tend to extremes, and the matter is more difficult than they want to believe. Deciding which system is safer is mostly deciding which scenario you consider more relevant.

But some people say PoS doesn’t work at all. Why do they say that? Well, I will give two examples:

The point made here works as follows: if someone bribes validators (stakers) to destroy a blockchain, it makes the most economic sense for them to accept the bribe and destroy the blockchain, even if the sum used for the bribe is tiny. The author basically describes a prisoner’s dilemma. If the blockchain is destroyed, then if you accepted the bribe, you lose all your stake but get the bribe. If you did not accept the bribe, then you have nothing. So in this scenario it is better to accept the bribe. In the other scenario, the blockchain is not destroyed; in both cases you keep your stake, but if you accepted the bribe, you also have the bribe. So in both scenarios it is better to accept the bribe to maximize your outcome. But in the case of destruction you might lose a big stake, in contrast to the case of non-destruction.

So the author factors in the probability that your action is the one that makes the scenario flip. This probability allows one to calculate a ratio of network valuation to bribe sum. In the example, only 0.45% of the network valuation is necessary to bribe 100 validators who stake 90%. What is constructed here is a Nash equilibrium, just like in the prisoner’s dilemma, where it makes sense to defect against the other prisoner to maximize your outcome. Even if both defect against each other, there is no way to improve an individual outcome by switching just a single decision; both need to switch at the same time and cooperate to get to the better, Pareto-superior outcome. And this is exactly what validators do all the time. They work together. They help each other and work to keep the chain online. I’m mostly arguing here why a PoS blockchain can work even if the assumptions of zack-bitcoin are correct. In fact they are not, but let’s get to that point later.
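Condensed into numbers, the payoff table the author has in mind looks like this (a Python sketch; stake and bribe are made-up placeholders, and there is no slashing in this naive model, which is why accepting dominates):

def expected_payoff(accept_bribe, chain_destroyed, stake, bribe):
    # Naive payoff matrix without any slashing: the bribe is pure upside.
    payoff = bribe if accept_bribe else 0.0
    if not chain_destroyed:
        payoff += stake          # your stake only survives if the chain does
    return payoff

stake, bribe = 100.0, 0.45
for accept in (True, False):
    for destroyed in (True, False):
        print(accept, destroyed, expected_payoff(accept, destroyed, stake, bribe))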

Accepting the bribe vs. rejecting it.

So the validators are constantly working together and cooperating. In case of a bribe, the validators have known each other for quite some time and have communicated quite a lot. Now the briber wants to destroy the blockchain either via consensus or via governance. For governance it is quite easy: since everyone can see how the votes are progressing, there will be someone who is clearly the tipping voter, the one responsible for the destruction of the blockchain. For this voter, a bribe of 0.45% of their stake is never enough to cast the tipping vote. If this vote is not given, then for the next tipping voter 0.45% is not enough either, and so on. Accepting the bribe only makes sense as long as the vote fails. Only if the bribe is higher than the stake does it make sense to be the one who fills the quorum.

The other scenario is consensus. Here it is not governance that destroys the blockchain but the consensus that creates a new block. The validators will be bribed and will of course talk to each other, and sure, they will tell each other that they won’t take the bribe. Then the block comes for which the bribe is given, and a majority agrees to destroy the blockchain. They have betrayed the others, but in fact they have done it because they don’t want to be the suckers who did not even get the bribe. Basically, such a scenario is more like blackmail than a bribe, because the actors actually lose money; it is just that they don’t lose everything if they submit to the blackmail.

So let’s assume the worst case has become real. A majority lied to the rest, saying they wouldn’t take the bribe, but they did and destroyed the blockchain. What happens next? The honest validators will be very pissed and will restart the network without the dishonest validators. The users might now choose to use the fork with only <1/3 of validators remaining or pick the destroyed fork, which is not really an option. The majority (2/3) who destroyed the network might relaunch a non-destroyed version, but now we have arrived at a scenario where two networks compete: one in which validators have proven that they are honest and one in which they have proven they are dishonest. This is actually good news, because it is a filter mechanism which sorts out cartel-forming dishonest validators.

Of course for investors this might not be the best news, because the network will lose a lot of valuation and it might take a lot of time to get back to the old valuation. But, and this is the most important part: the attack has failed. The attack has only removed the dishonest validators and taken away 99.55% of their investment. The network was overvalued anyway, because more than 2/3 of its validators were prone to dishonest behavior. Ok great, but why does the prisoner’s dilemma not explain this?

The prisoner’s dilemma is a simple and static scenario, and the bribe is constructed to be a situation equivalent to it. Again the evolution of time makes a difference; we will soon see why. The author (zack-bitcoin) also invokes the “tragedy of the commons”, which is the theory of why in shared flats the dishes are never done. But in reality there are rare examples of shared flats in which the dishes are done. And this is because the flatmates cooperate over time, and if there is a majority who never does the dishes, then a fork will be performed, in which the other flatmates label their dishes, do their own dishes and separate the commons, so that the tragedy ends.

The tragedy of the commons is only a real tragedy if the commons cannot be distinguished. The climate crisis is such an example: carbon dioxide emissions are not distinguishable. It is not possible to label carbon dioxide and then allocate the natural disasters caused by it proportionally to the emitters. However, as time comes into play and participants cooperate over multiple instances of decision making, the situation looks different. If you look up research on the prisoner’s dilemma where many instances of the game are played and different strategies compete against each other, the strategies that punish others for uncooperative behavior and cooperate with the cooperative ones are the strategies that succeed.

When one meme is not enough to illustrate accepting bribe vs. rejecting it.

Ok great, so there is hope for PoS? Well, there is even more hope, because a crucial assumption being made here is wrong. In fact there is punishment for trying to break consensus. Zack-bitcoin assumes there is no punishment, which is only right for old and naive PoS systems. But in Cosmos, stake is slashed and validators are jailed. With this punishment, the quoted 0.45% no longer works. If the punishment for trying to destroy the blockchain is high enough, then the risk is too high and there is no Nash equilibrium for defective behavior. To use more of these heavily laden economic terms, we can say there is a Schelling point at which validators cooperate and won’t accept bribes. If you want to understand more, read about Schelling points, the Nash equilibrium, the prisoner’s dilemma, the tragedy of the commons and of course King Midas, who turns everything into gold, or in a modern version Bitcoin, without doing PoW.
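With slashing added, the same toy numbers from before flip the result (again, everything hypothetical; real Cosmos slashing parameters differ):

def payoff_with_slashing(accept_bribe, chain_destroyed, stake, bribe, slash_fraction=0.05):
    # Same toy model as before, but provable misbehavior burns part of the
    # stake even if the attack fails and the chain survives.
    payoff = bribe - slash_fraction * stake if accept_bribe else 0.0
    if not chain_destroyed:
        payoff += stake
    return payoff

stake, bribe = 100.0, 0.45
for accept in (True, False):
    for destroyed in (True, False):
        print(accept, destroyed, payoff_with_slashing(accept, destroyed, stake, bribe))
# Rejecting the bribe now wins in every scenario: honesty is the equilibrium.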

http://truthcoin.info/blog/pow-cheapest

This is another piece which describes problems of PoS. But it does not say that PoS doesn’t work. It says that it cannot be more efficient than PoW. It is a very good read though it is very long. Together with the following piece it might be one of the best articles about what makes PoW viable: https://nakamotoinstitute.org/shelling-out/
I highly recommend reading these articles.

The most important takeaway is that PoS “costs” the same. It costs the same on paper. Where PoW burns electricity, PoS locks away capital and prevents investment in other fruitful endeavors. Does this make any difference? It actually does. Imagine a world where the climate crisis brings almost all nations together to ban things that emit too much carbon dioxide (CO2). Fossil fuels might be banned, and PoW blockchains as well. In this scenario there is no need to ban PoS, since PoS only locks capital away from CO2-producing investments, whereas PoW burns electrical energy to secure its ledger. Of course Bitcoin maximalists don’t want to hear this argument. And it is not really important here.

It is mostly to understand that even though something costs the same on paper, for real-world implications the type of cost might be very important. Let’s investigate this further. Locking away capital also has another interesting property. Imagine a PoS blockchain starts with a $10 million valuation and over time users come, do transactions, buy tokens and the valuation goes up to $100 million. Now the frozen stake has increased in value; more capital is locked away, but no real-world resources were burned to do this.

In a PoW blockchain this is not possible, since real miners must be bought and real electricity must be spent. The value of the miners is decoupled from the blockchain valuation. Many opponents of PoS will tell you at this point that therefore only PoW really commits to the future and that in PoS nothing is really committed. Another argument here is that PoW stores the burned electricity as value in the ledger. Let’s be honest, this is a false belief. If people are not willing to buy Bitcoin, the price will fall.

There is no reason why anybody would say “there is this much electricity already put into this blockchain, I’m willing to pay more for a Bitcoin, it is undervalued”. For stocks this is a usual behavior and what Warren Buffett does quite often. He realizes the market cap of a company is lower than the value of the assets inside the company. Then he buys. These valuable assets are mostly real estate, intellectual property and long-lasting contracts. These are things you can take out of a company and sell individually. The burned electricity cannot be taken out of the Bitcoin network. If people reduce their demand for Bitcoin, nobody will be willing to pay more just because that amount of electricity went into it.

People will buy coins because it is faster than mining the same amount of coins, or maybe because it is cheaper. In the latter case mining is declining. What keeps the price up is the prospect of the future. And stake is just as good at harvesting block rewards as miners are. Selling stake might be different from selling miners, though. If you have ASICs and these are used for the biggest PoW chain, as is the case for Bitcoin, then it is really hard to sell the miners if interest in the blockchain is declining. The same goes for PoS stake. In contrast, if there are other blockchains, maybe bigger ones, which use the same hash algorithm, then it is easy to sell the miners. For PoS there is never another blockchain that accepts your stake. So at this point we see that for an investor it really doesn’t matter how much electricity was put into a blockchain or how much stake was bonded in the past. The important thing is the future. For the future, PoS relies much more on the belief of the market. Also, it can scale to 10x or 100x in value without spending a lot of physical resources.

Stages of crypto anarchist’s enlightenment

The thing that makes people behave in PoS is punishment, not the fear of devaluing mining equipment as in PoW. This is how nothing-at-stake and long-range attacks are solved. The punishment easily scales with increasing valuation. Investing in mining equipment binds miners to a certain cost for producing a block. When staking, you don’t commit to a certain price for producing more stake. You hope that the rewards for producing blocks are worth it if the price stays the same. In a scenario where the blockchain grows and many now use it, it is not necessary to burn more electricity. Maybe some validators will think: wow, my stake is now worth that much, I need to invest more in IT security. But this scales very well.

So a PoS chain can easily adjust to much more demand, which is a consequence of what we discussed early in the article, that transaction throughput can scale, but it is also a consequence of virtual punishments vs. the binding of physical miners. So for technical network properties as well as for token economics, PoS scales much better. The capital locked away can come from an intrinsic increase in valuation; people will still be afraid to lose that value and won’t misbehave. For a mining operation to be profitable it is necessary to cover the costs of electricity. The users of the blockchain have to pay for this, either in transaction fees or via inflation.

In contrast, the punishment of PoS does not have to be paid by the users. Wait a second? Many will object to that. We have learned that MR=MC in Paul Sztorc’s article, and the risk of punishment will be factored in when investing in staking coins. Whoever runs a validating operation must factor in this risk and hand the cost over to the users in the same way the miners do. This is the argument we have learned, deeply condensed into MR=MC. It is flawed if presented in such a reduced version. To understand more, we need to differentiate punishment by two different sources. The first source is attacks like the long-range attack, following forks (the nothing-at-stake problem), short-range attacks (corresponding to the 51% attack in PoW) and similar things; the bribe example also belongs to this category.

The second source is being punished for going offline, server malfunction, etc. If you run an honest server, only the latter is relevant and only for this risk do you need to push the cost to the users. If you are honest, you already know that the first source of punishment will not affect you. That punishment is for the validators who want to gain from misbehavior. If you do not belong to this type, there is no risk involved. Since you know for yourself whether you belong to this kind, you don’t have to factor in this cost. But what if a malfunctioning version of the blockchain code is uploaded to GitHub and I as a validator pull and run it and get slashed (punished)?

Well, in such a case a lot of other validators will also misbehave, because they download the same software. If >33% of nodes fail, then the network will halt, there will be a software patch and a rollback to the block before the validators were slashed. There is no reason to apply this punishment. If you look at the DAO fork of Ethereum, there is even an example in PoW which was much less severe, and still the network decided to roll back. So don’t think there can’t be rollbacks in PoW. This separation into two sources is the reason why in the Cosmos blockchain there is a distinction between double spending (relevant for short-range attacks) and nodes going offline regarding the severity of punishments. The punishment for the former is much harsher than for the latter.

Now that we have understood this, we can finally draw a comparison to fiat again. Fiat money is also secured by punishing misbehavior. Therefore fiat money can be printed without buying a lot of valuable gold and putting it in bunkers. If you fake fiat money, you are fined and go to jail. This is why fiat scales much better than gold. The question of whether PoS works or not is not a question of weak subjectivity, of nothing-at-stake or of unforgeable costliness; it is a question of whether you believe the gold standard can be overcome by introducing punishment.
