100k Bitcoin for 2020— Special 🎅

one does not simply assume things…

Everything-Blockchain
17 min read · Dec 28, 2019

There are so many predictions about Bitcoin and cryptocurrency, and people often state they don't need to know exactly how Bitcoin works to make price predictions 😂 😂 😂 Santa will do the job, right?

But you do know that implicit assumptions are also a priori, baked into your story a.k.a. your model? 😏

Every hodler who lived in hodlers-ville liked Cryptmas (and 100k+ Bitcoin Predictions) a lot…

but the Grinch, who lived just north of hodlers-ville did NOT!

In this and the next article I will logically deconstruct the Bitcoin stock-to-flow model, so we should start with what Bitcoin actually can be. Can something like Bitcoin really be a store of value? 🤔

The hardness of a tulip bulb 🌷 🌷 🌷

From 1634–1637 in the Netherlands, and even until 1730 [in the Ottoman Empire], tulip bulbs were incredibly good at storing value too, because they only went up in value 😂. Only with the hindsight of a 2019 Boomer/Doomer/Zoomer are they just silly onions 🧄 🧅… "a bubble". With the knowledge of a 1630s `Tulip Expert´, they were a very sophisticated piece of biotechnology ☝️.

Did you know that you cannot simply reproduce a rare tulip? That the color can depend on a special virus infection [the tulip breaking virus] and the bulb can't be forked to get an identical copy? That it takes six damn years to grow a tulip from seed? That its use cases were medicine and aphrodisiacs? That a tulip-garden owner was a whale 🐳 and the `sail-cart´ was the Crypto-Lambo of the 1630s? …Probably not, but this is not about tulips and their use cases and the unforgeable costliness of bulbs, it is about crypto 🤡.

The tulip fuckers of Haarlem. Interpretation of Hendrik Gerritsz Pot — Flora's Mallewagen, painting, around 1640, CC0 Wikipedia

There are NO parallels…

TFW you assume that people back then were just silly Neanderthals, but then you find an accurate description of yourself in a painting from that time: that moment when you have no fiat left to invest but are still watching the obscene scene.

Part 1 — The purpose of Proof-of-Work

No, it's not well known WHY Bitcoin (fundamentally) works… just as we don't know how a damn tulip bulb works.

Disclaimer: I am not one of those people who think everything they write is true. I'm happy if something is provably not true, so that I and others can learn. So please, challenge my articles and feel free to be critical! And this is not a "long article"; this is all the science, hopefully translated into something digestible.

Our Quest: Defining Bitcoin physically

Let's start technically (computer-scientifically): any crypto-"currency", from a solid technical viewpoint, is simply a communication protocol based on distributed consensus. And boy, is this non-trivial. 🤭

Nakamoto didn't simply "solve" the Byzantine Generals Problem

`Distributed Consensus´ in a BFT (Byzantine Fault tolerant) fashion is a decades-old problem. Way before Bitcoin, we had consensus mechanisms that practically solved the Byzantine Generals Problem (distributed consensus is actually a scientific field of its own). "Solving" the Byzantine Generals Problem simply means: knowing how to reach consensus (under certain conditions) in the presence of byzantine actors a.k.a. assholes 😈 and failures.

Byzantine-fault-tolerant systems are used in airplanes ✈️ and spaceships 🛰 🚀 [SpaceX] and in server networks in corporate settings — everywhere a computer has to be not just reliable but really, really reliable.

FalconX (image by NASA, CC0, Wikipedia): 6 computers with 18 processing units, 3 cores each, using a BFT consensus protocol

If you wonder how, in a closed airplane, nodes can be "byzantine": …of course they are not malicious, but they can still "lie" 🤥 about values. Due to background radiation, spontaneous bit-flips occur more often up there than down on earth → that's why they use distributed systems with byzantine fault tolerant consensus (Aydos, Gökçe 2017). The technique is called state machine replication. 💻 + 💻 = 💻² 🙌

>>getting reliable systems out of unreliable parts<<

So, solving the Byzantine-Generals-Problem was not an achievement of Nakamoto…even though he “solved” or circumvented it for a very special case.

When Byzantine Generals win

How many computers do you need, so that a conspiracy of n faulty computers could not prevent the honest or functioning computers from reaching consensus ❓

❗️ The NASA-sponsored scientists Robert Shostak and Marshall Pease, and later Leslie Lamport (the guy who introduced the Byzantine analogy), showed that a minimum of 3n+1 computers in total is both necessary and sufficient. For example: when there are 30 faulty nodes, you need three times as many nodes plus one in total (3 × 30 + 1 = 91 nodes). (Pease et al. 1980)
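The bound above fits in two lines of Python (a trivial illustration of the arithmetic, not anything from the paper itself):

```python
def min_total_nodes(f: int) -> int:
    """Minimum total number of computers needed to tolerate f Byzantine
    (faulty) ones, per the classic 3f + 1 bound (Pease et al. 1980)."""
    return 3 * f + 1

# The example from the text: 30 faulty nodes -> 91 total nodes needed.
print(min_total_nodes(30))
```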

Safety margin

So, the correctness of the consensus algorithm (correctness means reaching consensus) fails when fewer than 2/3 of the nodes plus one (or block producers a.k.a. witnesses) are honest. This is the safety margin.

Las Vegas Algos — This is why Bitcoin is not a Consensus Protocol…

What? So it is a 33%-Attack and not 51%?

Yes, in the case of a consensus protocol it is 1/3 or "33.333…%". BUT ☝️ later you will see that this only holds true for classical consensus and Proof-of-Stake systems, not for Bitcoin, because Bitcoin's consensus layer is not a consensus system in the classical sense and has no concept of nodes, nor does it have `consensus´ defined by finality. Bitcoin can't reach 100% final consensus.

Classical consensus has been used for years in many industry and enterprise applications and was re-invented as dPOS, used in Eos, Steem, Tendermint, Casper, Hashgraph, … (Larimer 2018). Nakamoto Consensus, introduced in 2009, completely changed the paradigm of consensus. Nakamoto effectively said: "OK, 100% consensus is not possible in an open, anonymous network, but I can give you 99.9…%"

These Las Vegas algorithms, based on randomization, have some probability each round of achieving consensus, and thus they achieve consensus within some amount of time (T seconds) with a probability exponentially approaching 1 as T grows. Do you smell the bottleneck? It's better to choose longer rounds than shorter ones, because the more time you give the network, the closer you come to 100%. Bitcoin's heart beats only every 10 minutes. In between, the consensus process does not exist. Like a stroboscope. (Miller et al. 2014)

Since there are no nodes, just honest and faulty hash-power, we have 2f + 1 resilience (Miller et al. 2014). When 200 terahashes are faulty, you need 2 × 200 + 1 = 401 TH in total. However, within Nakamoto Consensus there is also evidence that an attacker doesn't even need 51%. In their paper "Majority is not Enough", Eyal and Gün Sirer showed that with selfish mining, over 2/3 of the participants need to be honest to protect against this form of attack (Eyal and Gün Sirer 2018).

Proof of Work is not a consensus mechanism 🙄

Proof-of-Work is a moderately hard hash puzzle. Puzzles were first introduced to cryptography by Ralph Merkle in 1974; Merkle used them for public key cryptography. Proof-of-Work is based on the Hashcash algorithm invented by Adam Back in 1997, which in turn is based on the idea of computational costliness by Dwork and Naor in 1992. Originally it was intended as an anti-spam mechanism for e-mail, and it was quickly used for digital tokens in decentralized networks as a minting mechanism (Back 2002). Way before Bitcoin 😉:

neither of them helped in reaching consensus …

In Nakamoto Consensus (e.g. in Bitcoin or Ethereum 1.0), Proof-of-Work is implemented in such a way that you have to solve a hash puzzle in order to add a block…

Do you actually understand the purpose of Proof-of-Work?

…and here the understanding of Bitcoin for most people ends. They think the purpose of Proof-of-Work is making it costly to "produce" Bitcoin, so that Bitcoins have value (`labor theory of value´ — a fallacy), or they have some other crazy explanation. 🙃

What most people know:

You have to find the "golden nonce" — a very, very small number — the needle in the haystack. Whoever finds the needle gets the block reward of 12.5 Bitcoin (halving every 4 years, i.e. geometrically declining) and can add the new block, which means the winner decides what happened in the last 10 minutes. So, Nakamoto Consensus is a retrospective protocol. But this is also not the purpose of implementing it. Now, let's see Proof-of-Work with our own eyes…

Have YOU actually ever seen Proof-of-Work? 👀

Ladies and gentlemen: cited 20 million times as a "consensus mechanism", the most-used word at cocktail parties in 2017… theeee one and hoooooly proof-of-wooooork 🙌… in mathematical form:

Let h be the SHA256 hash of the latest block
i => the party that commits the block b = (h, ipubk, tx, nonce)
tx => the transactions
ipubk => the public key of party i
H => the difficulty target

Find a nonce such that SHA256(h | ipubk | tx | nonce) < H

Yes look at it! This is the magical Proof-of-Work…

Now, tell me: where is the consensus? 👀 Right, there is no consensus. 💅
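To really see it with your own eyes, here is a toy Python sketch of exactly that puzzle. The field names and the absurdly easy difficulty are hypothetical; real Bitcoin serializes block headers quite differently. Notice that no messages are exchanged and nobody agrees on anything — no consensus in sight:

```python
import hashlib

def mine(h: str, ipubk: str, tx: str, H: int) -> int:
    """Brute-force the 'golden nonce': find nonce such that
    SHA256(h | ipubk | tx | nonce) < H."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{h}|{ipubk}|{tx}|{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < H:
            return nonce  # proof found: pure local computation, zero communication
        nonce += 1

# Toy difficulty requiring ~16 leading zero bits; Bitcoin's real target
# is astronomically smaller.
H = 2 ** 240
golden = mine("hash-of-latest-block", "alice-pubkey", "tx1;tx2", H)
print("golden nonce:", golden)
```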

So, Proof-of-Work just is a:

  • 👍Sybil-Resistance Mechanism (One “CPU” = One Vote) ✔️
  • 👍Leader election oracle (Whoever finds the nonce decides what happened the last 10 minutes) ✔️
  • 👍Incentive — Lottery for the Block-reward ✔️
  • no consensus mechanism ❌
  • what else ❔ ❔ ❔

The >>consensus<< mechanism in Nakamoto Consensus is simply the longest-and-heaviest-chain rule, which says: "The right chain is the chain on which the most hash-power was applied".
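The rule is so simple it fits in one line. A hypothetical sketch, where each chain is just a list of per-block work values (real nodes derive the expected work from each block's difficulty target, not from a stored number):

```python
def heaviest_chain(chains):
    """Pick the chain with the most cumulative work.
    chains: list of chains, each a list of per-block work values."""
    return max(chains, key=sum)

chain_a = [10, 10, 10]  # 3 blocks, 30 units of work
chain_b = [10, 25]      # only 2 blocks, but 35 units of work
# The heavier chain wins, even though it is shorter.
print(heaviest_chain([chain_a, chain_b]))
```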

But if consensus was well known long before Bitcoin, and Proof-of-Work was used for minting tokens in distributed networks like Mojo Nation…

…then why didn't Bitcoin happen earlier? 🤷‍♀️ 🤷‍♂️

In a masterpiece of investigation from 2011, Gwern Branwen discussed why Bitcoin didn't happen earlier. Everything we needed (technology-wise) was already in place, and in the 1990s we had a big internet-currency hype. Gwern concludes:

“The interesting thing is that all the pieces were in place for at least 8 years before Satoshi’s publication, which was followed more than half a year later”

  1. 2001: SHA-256 finalized
  2. 1999–present: Byzantine fault tolerance (PBFT etc.)
  3. 1997: HashCash
  4. 1992–1993: Proof-of-work
  5. 1991: cryptographic timestamps
  6. 1980: public key cryptography
  7. 1979: Hash tree

[“Bitcoin Is Worse Is Better” Gwern 2011]

The fundamental problem which prevented us from having something like Bitcoin was proven in the FLP Impossibility Theorem by Fischer, Lynch and Paterson in 1985. The FLP Theorem, also known as the "FLP impossibility result", showed mathematically that it is impossible to deterministically guarantee consensus in an asynchronous network if even one single node can fail.

It is not as easy and trivial as you think it is…

“Impossible” means that in an asynchronous setting you cannot always reach consensus, and not within a fixed time. Imagine a node fails: you cannot tell whether it failed or whether it is just slow. If nothing else is specified, you would wait until infinity 🙇‍♀️ 🙇‍♂️. Now you could say: “hey, then let's wait a fixed amount of time, like 10 minutes, and whoever is not responding → is considered faulty”

Bruh… srsly… a computer has no concept of `time´, and even if you use the on-board clock, it can easily be manipulated → this means that anybody could stop the network from reaching consensus. “Well, then we could use a trusted third party as a time provider” …yeah, same problem… 🤯

So the problem is the permissionless setup of the validator network (the network of miners), or, in physical terms, the lack of a decentralized coordinator/clock.

Synchronicity 🕓 🕔 🕕 🕖


Synchronicity is achieved in very closed and controlled environments. For the 18 processes in the FalconX it is no problem (all-to-all means 18 × 18 = 18² = 324 messages) to communicate the state of the system and reach consensus. But for 100 processes/participants it is already 100² = 10,000 messages! So you see the problem for global networks? Classical BFT consensus does not scale! This is why Eos, Steem, Tron, …, which use hybrids based on classical consensus, restrict their "mining" layer to 20–100 validators! Classical consensus = quadratic message complexity.
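The quadratic blow-up in one tiny sketch:

```python
def messages_per_round(n: int) -> int:
    # All-to-all: every process sends its view to every process.
    return n * n

print(messages_per_round(18))      # FalconX scale: 324, no problem
print(messages_per_round(100))     # 10,000, already painful
print(messages_per_round(10_000))  # a global-scale network: hopeless
```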

Plus: the participants have to be trusted! Either they are controlled servers OR they are elected like in delegated Proof-of-Stake (dPOS).

asynchronous = open set of validators

An asynchronous setup is a set of loosely connected nodes, like the internet itself. You can join and leave whenever you want. The nodes are not synchronized by a central clock; they each have their own time. One could argue: “why not take internet time from different sources for synchronization? Problem solved.”

Even if the majority is honest, this would still introduce a central or trusted third party, and “time”, even from atomic clocks, is not sufficient. Even if the differences are very, very small in the beginning, over time → they lead to clock drift. In globally distributed networks like Bitcoin, network delay and even the relativity of time play a role…

OK, so Bitcoin is all about “Time” ⏳?

“Time” really means exact ordering of events — not human concepts like “year”, “month”, “day”, “hours”, and so on.

You have probably heard that the problem in Bitcoin was solved by assuming synchronicity. This is called the >>synchronicity assumption<<.

Yeah, one does not simply assume synchronicity 💁‍♂️, so… what is it that synchronizes the processes? What is the central clock in Bitcoin ⏱?…

Why it is not math that makes Bitcoin secure…

…Obviously there is no central pacemaker, but… BUT ☝️ there is a distributed one! The block time correlates with the difficulty, and the difficulty is the result of adjusting to the applied hash-power. In his great article, Trubetskoy (2018) explained how PoW substitutes for time.

The connecting element is the hash-puzzle. So, the distributed clock is… entropy! 👈 👌

Pixabay CC0

People often say Bitcoin is backed and secured by math. Probably they mean the right thing. You might know entropy from physics. Entropy is all about the micro-states of a system. When you buy a new deck of cards, they are all ordered. When you take the factory-fresh deck and throw it on the floor… the cards are probably less ordered. Throwing them again leads to more disorder. This is because there are way, way more possible disordered states than ordered ones.

This disorder tends to increase over time. All the particles in the universe (the cards) will statistically not reverse their order; this is why “time” is a one-way street. From higher order to lower order. Or in terms of entropy: >>Entropy always tends to increase (second law of thermodynamics)<<. Time is a human concept; there is only change in order.

Entropy in information-theory and cryptography

Pixabay CC0

Entropy is also linked to information. The state of a system can be either known or unknown. For a simple coin this means there are two distinguishable states (heads or tails, zero or one, yes or no, …), which amounts to one decidable bit of information. Each position of a crypto key is like a covered coin.

Entropy is what makes a crypto key secure. Entropy = information we don't have. At the start of the hash puzzle, none of us has any information about the solution. Then we start applying hash-power: guessing/brute-forcing/mining. The entropy of one unknown bit is a universal constant… because one bit unknown is one bit unknown…
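The "one covered coin = one bit" statement is just Shannon entropy; a quick check using the standard formula (nothing Bitcoin-specific here):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))   # a fair covered coin: exactly 1 bit
print(shannon_entropy([1.0]))        # a known outcome: 0 bits of surprise
print(shannon_entropy([0.25] * 4))   # two covered coins: 2 bits
```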

Here it is important to understand that SHA-256 is memoryless

“The proof-of-work is a Hashcash style SHA-256 collision finding. It’s a memoryless process where you do millions of hashes a second, with a small chance of finding one each time. … Anyone’s chance of finding a solution at any time is proportional to their CPU power.”

Take a second: >>Anyone’s chance of finding a solution at any time is proportional to the applied hashpower.<<…

The Number which you have probably never thought about

…This means that you can't make progress in mining; you have no disadvantage in starting a little bit later than the other miners, because you practically don't reduce the size of the remaining set of numbers to try. Of course, when you have already tried 200 million numbers, those numbers are not the solution, but for a set of numbers 2²⁵⁶ elements big, 200 million or even 200 billion is of no relevance.

2²⁵⁶ = 115792089237316195423570985008687907853269984665640564039457584007913129639936

A 1 in 115 quattuorvigintillion (≈10⁷⁷; this is a 78-digit number) chance of finding a collision.
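This "no progress" property is exactly the memorylessness of the geometric distribution: the chance of succeeding on the next trial is the same whether you just started or have already failed a million times. A small sketch with a made-up per-hash success probability:

```python
def p_next_given_survived(p: float, k: int) -> float:
    """P(first success exactly at trial k+1 | no success in the first k
    trials), for independent trials with success probability p."""
    p_first_at_k1 = (1 - p) ** k * p  # P(first success at trial k+1)
    p_survived_k = (1 - p) ** k       # P(no success in first k trials)
    return p_first_at_k1 / p_survived_k  # = p, no matter how big k is

p = 1 / 2 ** 16  # hypothetical per-hash success probability
late_starter = p_next_given_survived(p, 0)          # just joined
grinder = p_next_given_survived(p, 1_000_000)       # a million failed hashes
print(late_starter, grinder)  # identical chances: mining has no memory
```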

>>it is all about applied hash-power. A lottery and your chance of winning is proportional to the tickets you can afford<<

“Find the right one” vs “find one which is sufficient”. Source: Pixabay CC0

Imagine you have a bowl full of marbles, and one out of 115 quattuorvigintillion marbles is a bubble-gum ball. You could test them and set aside the balls that turn out to be glass, but you could just as well throw them back into the bowl… because you will, for sure, never pick any of those again.

Then how is it possible to find the right nonce at all?

It is a probabilistic game where the difficulty is adjusted so that, on average, some participant finds a solution every round. It's not about finding one specific gumball but any gumball that is provably not a marble: the resulting hash just has to be smaller than the value of the difficulty parameter H.
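For illustration, a heavily simplified sketch of the retargeting idea (the real Bitcoin rule adjusts every 2016 blocks, clamps the adjustment to a factor of 4, and stores targets in a compact encoding):

```python
EXPECTED = 2016 * 600  # 2016 blocks at 10 minutes each, in seconds

def retarget(old_target: int, actual_seconds: int) -> int:
    """Adjust the target H from how long the last period actually took."""
    # Clamp to a 4x adjustment per period.
    actual = max(EXPECTED // 4, min(EXPECTED * 4, actual_seconds))
    # Blocks came too fast -> smaller target -> harder puzzle, and vice versa.
    return old_target * actual // EXPECTED

t = 2 ** 240
print(retarget(t, EXPECTED) == t)   # exactly on schedule: unchanged
print(retarget(t, EXPECTED // 2) < t)  # too fast: harder
print(retarget(t, EXPECTED * 2) > t)   # too slow: easier
```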

But…

“But it is always different and never exactly 10 minutes how can it be more exact than an atomic clock?” 🤔

Right, it is only 10 minutes on average, BUT the “time” (entropy) per round is exactly the same for each participant. There is not even relativity.

“But the computational power for each participant differs” 🤨

Right, but the information entropy of one bit does not care about computational power. And one unit of computational power of a bigger miner is no better than one unit of computational power of another miner. Only on average, over many, many rounds, by the law of large numbers, does the bigger miner win more often.
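A quick simulation of that law-of-large-numbers claim (the hash-power shares are made up):

```python
import random

random.seed(42)  # deterministic for the example
ROUNDS = 100_000
shares = {"big miner": 0.6, "small miner": 0.4}  # hypothetical hash-power shares
wins = {name: 0 for name in shares}

# Every block is an independent lottery, weighted by hash-power.
for _ in range(ROUNDS):
    winner = random.choices(list(shares), weights=list(shares.values()))[0]
    wins[winner] += 1

# Over many rounds the share of blocks won converges to the hash-power share.
print(wins["big miner"] / ROUNDS)
```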

Proof-of-Work democratic? This is why it is communism!

real Bitcoin Marximalist

Communism in Bitcoin

…Yeah, well, it is all of it. It depends on the level! On the consensus level it is a communistic scheme: the protocol is agnostic about everything but the amount of work you apply. People often talk about nodes, but the mining layer does not have a concept of nodes! If geographically disconnected nodes (e.g. one on Earth and one on Mars) pool their hash-power, they are one participant, one worker, no matter how many distinct nodes (technically we assume infinitely many nodes, so it is all one homogeneous mass, like pudding). And because mining is memoryless, they can do it without even coordinating their actions!

And because the entropy of one bit is constant the whole mining-network needs no central coordinator.

Capitalism in Bitcoin

In the real world/economic layer it looks a bit different: of course Bitcoin is embedded in a capitalistic world. Miners are corporations owned by capitalists. So yes, it is also capitalistic.

Plutocratic background radiation in Bitcoin. Vote — Bet

And is it plutocratic? Survival of the richest?

If it were plutocratic, it would mean “the same rich idiots decide every single decision”. A plutocrat does not bet his money; he simply signals with his money. Plutocracy is based on authority bias. BUT of course there is plutocratic background radiation: monarchs, like governments, can have a lot of influence in Bitcoin because they have lots of capital.

Democracy in Bitcoin

Bitcoin is not a democratic voting system nor is it a plutocratic voting system! It is a capitalistic betting market. When you vote, nothing is at stake. When you bet, you have skin in the game.

Bitcoin is democratic in the sense that you can influence the system by free political participation, but your influence is by no means proportional to your voice. And since mining leads to economies of scale, it introduces undemocratic thresholds. Bitcoin is ruled by stake → >>Proof-of-Work is a Proof-of-Stake!<<

Stake 🥩 vs. Stack 📚

Token-Stake Models

If you compare PoW with token-stake models, which are what we mean when we say “Proof-of-Stake”, then it all boils down to skin in the game. Stake is often confused with stack. A real stake needs slashing conditions.

Always check for Slashing-Conditions

Ethereum 2.0 will use smart contracts to enforce those conditions. If you misbehave according to the rules in the smart contract → you lose your token stack. It is not easy to say how much stronger the Proof-of-Work binding conditions are, but it is possible that PoS will never be as strong as Proof-of-Work. What could happen are nothing-at-stake attacks.

With Proof-of-Work you first have to pay/burn in order to play.

In nature this phenomenon is called costly signaling.

Proof-of-Stake/Proof-of-Work and the Peafowl 🦚

When a handicap signals skin in the game. Source: By lo.tangelini from Soliera / Modena, Italia — Tonos, CC BY-SA 2.0

The colorful feathers of a peafowl 🦚, the tails of male fish 🐠, the antlers of a giant deer 🦌: in a Darwinian world, those traits are disadvantages/handicaps to the carrier. You are detected by predators far more easily; you are slower. It is costly to have them. The payoff is that the carrier signals he has survived in spite of all those disadvantages. This gives the carrier the right to add a block to the chain called “life”. His chances to mate are proportional to his colorful, big and costly traits. → Proof-of-Stake/Proof-of-Work!

Lets update our Proof-of-Work List:

  • 👍Sybil-Resistance Mechanism (One “CPU” = One Vote) ✔️
  • 👍random leader election oracle (Whoever finds the nonce → decides what happened the last 10 minutes) ✔️
  • 👍Incentive — Lottery for the Block-reward ✔️
  • 👍decentralized clock (“solving” the FLP impossibility Problem)✔️
  • 👎Source of plutocratic background radiation ✔️
  • 👎Economy of scale —> threshold-democracy✔

Bitcoin's consensus mechanism is a retrospective, probabilistic (non-deterministic → no 100% finality) Nakamoto Consensus, built on a synchronicity assumption and synchronized by computation (or entropy) as a decentralized clock, reached by following the longest and computationally “heaviest” chain.

SO, to get the message: Proof-of-Work is not there to give Bitcoin value. PoW is of purely technical importance and has nothing to do with gold mining and scarcity. Just a consensus thing. Nakamoto Consensus, which is built around PoW, relies on its block time and an oracle (a source of randomness) and needs Sybil resistance, achieved by introducing time/cost. And yes, there are alternatives…
