Defusing Crypto’s Ticking Time Bomb

Blockchains have some fundamental problems. It’s time to talk about them.

Square
167 min read · Aug 1, 2022

In the movie Inception, there is a character named Saito. Saito enters the dream world with a purpose, but ends up getting trapped there, unable to escape. After many decades pass — with his mind confined to the illusory world — the young Saito grows into an old man who forgets that he is in a dream. Life before the dream becomes nothing to him but an opaque and distant memory. He accepts the dream-world as reality. Eventually, Saito is found in the world of the dream, and must be convinced that he is dreaming before he can finally wake up and return to reality.

There are some major problems with blockchains as we know them. And instead of solving them, we have simply forgotten about them. We have convinced ourselves that they are unsolvable and have resigned ourselves to working around them instead. Now, we live in the world of the dream, and our ability to make progress is being hindered by the illusions we have come to believe. We are all Saito. You are Saito. And in this article, I am going to try and convince you to leave the world of the dream.

Saito — before and after entering the world of the dream

Article Contents

Part I………………………………………………………………The Players

  • Block Producers
  • The Peer to Peer Network

Part II……………………………………………………………..The Problems

Major Problems:

  • The Volunteer / Infrastructure Problem
  • The Scalability Problem
  • The Data Storage/Pricing Problem
  • The Network Closure Problem

Additional Problems:

  • Block Subsidies
  • Majoritarian Attacks
  • Discouragement Attacks
  • Commodification of the Work Function
  • Header-first Mining

Part III………………………………………The Solutions: Current and Planned

  • Statelessness
  • State Expiry
  • Sharding
  • Moving Computation Off Chain — Rollups
  • Data Storage Blockchains — Arweave and Filecoin

Part IV……………………………………………………..A New Set of Solutions

Part I — The Players

Blockchains have an infrastructure problem. This includes your favourite chain. Before we can get into what exactly it is, we need to know what the key elements of a blockchain are. We can outline them by looking at Bitcoin, which is a good template for the space.

Bitcoin has two key infrastructural roles, which each play a distinct function. I will outline them here. Feel free to skip Part I if you are already comfortable with basic blockchain network design.

Miners (Block Producers)

Block producers are responsible for guaranteeing the security of the network. In Bitcoin, it is through hashing (spending electricity) and block producers are called miners. In proof of stake, it is through risking capital (stake) and block producers are called stakers. I’m going to stick with Bitcoin terminology for simplicity — but remember that the general process outlined here applies to both Proof of Work (PoW) and Proof of Stake (PoS) networks.

First, users want to interact with the network — and so broadcast their transactions out to nodes. Miners (who run ‘mining nodes’) receive some user transactions directly from their own node, but they also receive transactions from the peer to peer network of nodes. Miners verify transactions when they receive them, rejecting ones which violate the rules of consensus. Inbound transactions pile up in the block producer’s mempool (or, transaction pool) over time.

Miners then compete amongst each other to put these transactions on the blockchain (to confirm the pending transactions). They bundle a handful of transactions from their mempool into a ‘block’, which will get put on the chain if they successfully win a proof of work competition. The miner who solves the proof of work puzzle broadcasts their winning block around the network, and the other miners add the winning block to the chain (synchronise with the rest of the network), before they start mining on the next block. Whichever miner produced the winning block (added transactions to the blockchain) gets paid.

Mining (or staking) exists, above all, to make producing a block costly. If producing a block was not a costly affair, the network would be susceptible to a range of attacks and failures (sybil, spam, double spend, constant forking — and more) and would fail to be a viable blockchain. Mining is the security mechanism of Bitcoin.

Note: Multiple miners can have the same transaction in their mempool or proposed block — but when a new block gets added to the chain, the Bitcoin software ensures mempools and other proposed blocks are updated so that duplicate transactions are removed and replaced with new transactions.
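To make the mechanics concrete, here is a minimal sketch in Python of the process just described: competing miners with overlapping mempools, and duplicate transactions being cleared out when a block wins. It is a toy model with made-up transaction IDs, not Bitcoin's actual data structures or validation logic.

```python
# Toy model only: overlapping mempools and duplicate removal after a block wins.
# Made-up transaction IDs; not Bitcoin's actual data structures or validation.

class Miner:
    def __init__(self, name):
        self.name = name
        self.mempool = set()  # pending transaction IDs

    def receive(self, tx):
        # A real node would verify the transaction against consensus rules here.
        self.mempool.add(tx)

    def build_block(self, max_txs=3):
        # Bundle some pending transactions into a candidate block.
        return set(sorted(self.mempool)[:max_txs])

def on_new_block(winning_block, miners):
    # When a block wins the proof of work race, every miner drops its
    # transactions from their own mempool before mining the next block.
    for miner in miners:
        miner.mempool -= winning_block

miners = [Miner("A"), Miner("B")]
for tx in ["tx1", "tx2", "tx3"]:
    for miner in miners:       # the same tx can sit in many mempools at once
        miner.receive(tx)

winning_block = miners[0].build_block()  # suppose miner A wins this round
on_new_block(winning_block, miners)
print(miners[1].mempool)                 # set(): duplicates cleared from B too
```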

The Peer to Peer Network

The peer to peer (p2p) node network, broadly categorised, is comprised of constantly running ‘archive nodes’ and ‘full nodes’ — both of which perform critical network functions. While mining nodes can be either archive or full nodes, the p2p network itself consists predominantly of non-mining, full nodes.

Different blockchain communities give all the varying types of nodes different names, and this can make things quite confusing. Bitcoin’s definitions are different to Ethereum’s definitions, but how we define ‘node’ doesn’t really matter as long as we know what exactly we mean by the word. So to simplify things as much as possible, in this article I will employ the following terminology:

  • If you mine or stake, you run a mining/staking node
  • If you collect, verify and propagate all new transactions and blocks, you are a full node. I might call you a ‘routing node’.
  • If in addition to the above, you store (and keep up to date) the entire history of the blockchain — from the first block to the latest block — then you are an archive node. Under this definition, all archive nodes are full nodes.
  • If you don’t do any of these things, then you are a ‘light node’, and are not a crucial part of the peer to peer network. For example, the average Bitcoin wallet is a light node, which only connects with the Bitcoin network to broadcast its own transactions to it. It does not contribute to the network infrastructure.

A ‘node’ could perform any combination of the first three tasks just listed. But when I talk about ‘nodes’ on the p2p network, I am at a minimum talking about nodes which perform the second function above of collecting, verifying and propagating new transactions and blocks. For the more technically inclined, when I say ‘verify’ I do not mean validate headers. I mean actually verifying blocks and the transactions therein.

Node Classifications, for the purpose of this article
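For those who prefer to see the taxonomy spelled out, here is the same classification as a small Python sketch. The field names and role labels are mine, purely for illustration.

```python
# Purely illustrative: the article's node terminology as a tiny Python sketch.
from dataclasses import dataclass

@dataclass
class Node:
    mines_or_stakes: bool       # produces blocks
    verifies_and_routes: bool   # collects, verifies, propagates txs and blocks
    stores_full_history: bool   # keeps every block since genesis

    def role(self):
        if self.stores_full_history and self.verifies_and_routes:
            return "archive node"       # all archive nodes are also full nodes
        if self.verifies_and_routes:
            return "full (routing) node"
        if self.mines_or_stakes:
            return "mining/staking node"
        return "light node"             # e.g. the average wallet

print(Node(False, True, True).role())    # archive node
print(Node(False, True, False).role())   # full (routing) node
print(Node(False, False, False).role())  # light node
```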

It is worth mentioning that although full nodes only store a portion of recent blockchain data, the amount of data they store varies depending on their hardware constraints. Some could hold up to two years of data, while others might only be able to hold up to six months of recent data. If access to data not held by full nodes is needed for whatever reason (and there are many), then archive nodes are relied on to serve that data.

Now, there are two things that the nodes on the p2p network do.

Role 1: Routing Work

The basic (and highly important) task which nodes perform is collecting transactions and new blocks which have been broadcast to them — verifying them — and then sharing them with both other nodes and block producers on the network.

Here’s how it works:

You send somebody some Bitcoin from your wallet. What this does is broadcast a Bitcoin transaction out to the p2p network of nodes, which are constantly running. Your transaction gets picked up by a node, which then checks that it is valid — meaning, that it doesn’t break any of the consensus rules such as “you can’t spend any Bitcoin you don’t have.” It then shares it with as many other nodes as it is connected to. All the nodes on the peer to peer network which have your transaction do the same — verify and broadcast it amongst more nodes in the p2p network. This gossiping of your transaction around the p2p network ensures that as many nodes receive the transaction as possible. In addition to broadcasting the transactions they receive to one another, nodes also broadcast them to as many miners as they possibly can. The more nodes a transaction reaches, the stronger the assurance that it is valid (because of independent verification), and the more miners it will reach. This collection, verification and propagation of transactions is the basic function of the peer to peer network.

A user’s transaction is ‘gossiped’ from the receiving node to all other nodes, again and again, so that as many nodes (and eventually, miners) receive it as possible
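Here is a toy flood-gossip sketch in Python of what the diagram above shows. The topology and node names are invented for illustration; real clients use far more sophisticated peer discovery and relay policies.

```python
# Toy flood-gossip: each node forwards a transaction to its peers once.
# Topology and names invented for illustration only.
peers = {
    "wallet": ["node1"],
    "node1":  ["node2", "node3", "minerA"],
    "node2":  ["node3", "minerB"],
    "node3":  ["node1", "minerC"],
    "minerA": [], "minerB": [], "minerC": [],
}

def gossip(tx, start):
    seen, frontier = set(), [start]
    while frontier:
        node = frontier.pop()
        if node in seen:
            continue
        seen.add(node)                # 'verify' the tx and remember it
        frontier.extend(peers[node])  # forward it to every connected peer
    return seen

reached = gossip("pay Satoshi 1 BTC", "wallet")
print(sorted(n for n in reached if n.startswith("miner")))
# ['minerA', 'minerB', 'minerC']: one broadcast, every miner reached
```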

The peer to peer network is the backbone of a blockchain. It is where a blockchain’s decentralisation comes from. If the majority of nodes on the p2p network are honest, they will independently verify transactions and new blocks — and ensure that consensus rules are enforced. Also, by acting as an information sharing network which propagates new data and blocks to the entire network, it helps create an open network which allows anyone to participate in consensus.

There is a very simple reason why Satoshi designed the peer to peer network to gossip your transaction around to all other nodes. The reason is: if your transaction reaches multiple block producers, then its chance of making it onto the blockchain increases massively. If your transaction only reached one miner (or only a subset of colluding miners), then it would be possible for that miner (or group of miners) to leave your transaction in its mempool(s) and never put it onto the chain. If the miner(s) decided, your transaction would never go through, and you would have effectively been censored. Which would be a failure of the blockchain, as it would lose the crucial properties of openness (that anyone can use the blockchain) and censorship resistance (that nobody can interfere with users’ transactions). But if your transaction was broadcast to multiple miners who are competing to put it on the chain for money (they collect a transaction fee), then your transaction will eventually be added to the blockchain.

“The purpose of the bitcoin network is to propagate transactions and blocks to all participants” — Andreas M. Antonopoulos, ‘Mastering Bitcoin’

There is another (related) reason for the p2p network. And it’s that block producers, who run their own nodes (archive or full), are not actually incentivised to share the transactions they collect with one another. Even when they are willing to put the transactions they collect on the chain. This is because, to block producers, each transaction represents real value in the form of a fee which they collect if they put it onto the chain. Would you share transactions around if they represented profit to you? You’d be better off just receiving them from other nodes and miners who are kind (or stupid) enough to send them to you, and then never share any yourself. Then, you don’t have to incur node-running costs, but still get all the benefit from free riding off the work done by the p2p network. Since the goal of miners is to make money, this logic is perfectly rational. And again, this is where the p2p network comes in to ensure that transactions are shared across the network to multiple miners.

Image Credit: Hackernoon

In short: Miners and Stakers collect transactions from the network of nodes and compete to put them on chain. It’s important that your transaction goes to as many block producers as possible, because otherwise a block producer could decide to never put it on chain, and you could be censored. Block producers don’t have an incentive to share transactions amongst one another either, because for them transactions represent profit.

The more nodes a blockchain has, the more decentralised it is. Greater decentralisation (assuming the nodes are good actors) ensures that consensus rules are not violated (that the blockchain is not broken). Furthermore, the greater (good-actor) decentralisation is, the higher the guarantee that your data will make its way onto the chain without censorship. If your transaction only reached one node, the number of miners which your transaction will reach would be much smaller than if all network nodes broadcast it out to the miners they are connected to. This is why nodes propagating transactions (data) across the p2p network is crucial for network functioning.

However, all of these benefits of ‘decentralisation’ are only benefits if we assume the nodes are good actors. More decentralisation doesn’t necessarily mean that we get the properties we want. Decentralisation is merely a form of network topology. What we’re really after is p2p nodes which help to preserve network openness (through transaction + block routing) and security (through transaction verification).

Now, let’s move on to the second network function which nodes perform.

Role 2: Data Storage

There is a second, crucial function which nodes in the peer to peer network perform — and that is storing historical blockchain data.

Nodes which store the full history of the blockchain are called ‘archive’ or ‘archival’ nodes. Archive nodes store blockchain data from the first block all the way up to the most recent block, and continually update the chain with every new block produced. They also send historical data to other nodes who need it. Most nodes on a given network are not archive nodes, as storing and continually updating an entire blockchain’s worth of data requires you to purchase sufficient hardware for data storage and then constantly maintain it — which is a cost most people prefer not to incur. It is also worth mentioning that, while significantly cheaper than archive nodes, full nodes themselves are not costless to run. Full node runners incur the direct and opportunity cost of the hardware, the cost of the portion of chain data which they do store, and the cumulative cost of the electricity it takes to run the node indefinitely.

A key piece of knowledge here is that in order to verify a transaction, nodes need to have a good amount (but not necessarily all) of the transaction history on the network. If a node receives a transaction which says “Pay Satoshi 1 Bitcoin”, it will need to have enough of the transaction history of the blockchain to figure out if the person who broadcast the transaction actually has one bitcoin to spend.

Full nodes only need to know the current balances of everyone on the network to be able to verify new transactions. But to know the current balances, you need historical data. Once a full node is synced up, most historical data can be pruned because only the current state is needed to verify transactions. I say most, because some amount of recent data is needed to resolve temporary forks which occur as the chain runs — otherwise a full node would have to rely on archive nodes to keep up with the chain (Ethereum-oriented explanation here and here).

As mentioned before, the amount of data which a full node (a node which verifies new transactions and blocks) stores depends on its hardware constraints. But it is generally a non-trivial amount, and this amount will be larger for blockchains which are more scalable. Since nodes need to verify all the transactions within newly produced blocks to ensure that they are valid and can go onto the chain, blockchains which process more data (which are more scalable) will generally have greater storage requirements for full nodes. In other words, chains with higher throughput will require node-runners, full or archive, to hold on to more data so that they can properly verify transactions.
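As a rough illustration of why verification needs state (but not the whole history), here is a toy account-balance model in Python. Real chains track UTXO sets or state tries rather than a simple balance dictionary, so treat this as a sketch of the idea only.

```python
# Toy balance model: replay history to build current state, then verify new
# transactions against that state alone. Real chains use UTXO sets or state
# tries; this only illustrates why some history (or a synced state) is needed.

history = [
    {"from": None,    "to": "alice",   "amount": 10},  # issuance, coinbase-style
    {"from": "alice", "to": "satoshi", "amount": 4},
]

def build_state(blocks):
    balances = {}
    for tx in blocks:
        if tx["from"] is not None:
            balances[tx["from"]] = balances.get(tx["from"], 0) - tx["amount"]
        balances[tx["to"]] = balances.get(tx["to"], 0) + tx["amount"]
    return balances

state = build_state(history)  # after syncing, older blocks could be pruned

def verify(tx, state):
    # Only the current state (plus some recent blocks, for fork handling)
    # is needed to check this; not every historical block.
    return tx["from"] is None or state.get(tx["from"], 0) >= tx["amount"]

print(verify({"from": "satoshi", "to": "bob", "amount": 1}, state))  # True
print(verify({"from": "satoshi", "to": "bob", "amount": 9}, state))  # False
```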

Use cases of archive nodes cover anything that requires historical data. Some examples: Creating block explorers; Any dapp function which needs to access past smart-contract states; On-chain data storage.

Because archival nodes hold all the historical states of the blockchain (not just the current state), they can be used in application development on smart contract chains. Any decentralised application (dapp) which would require querying past states of the blockchain will rely on an archive node. Furthermore, non-archival nodes often rely on them for historical data. In a sense, archive nodes are the blockchain. Without them, there would be no full copy of the ledger.

In an ideal world, your favourite chain would have lots of archive nodes (lots of full copies of the blockchain) spread out all around the world, all run by different parties — making the blockchain and all of its (your) data well-protected. But ultimately, how important archive nodes are to a network depends entirely on how much the network and the blockchain’s participants need historical data for their varying purposes. To the extent there is utility in having access to historical data, archive nodes provide utility to the network.

Interestingly, Ethereum will be pursuing a vision where, by making running a full node as simple and cheap as possible (and introducing data sharding), the blockchain could (hypothetically) be stored without archive nodes and instead reside on lots of non-archival nodes run by regular network users, each doing a bit of the work — but with no single node being relied on to store a given piece of data. We’ll discuss and evaluate this approach later on in the article.

To recap: full nodes and archive nodes store blockchain history. There are nontrivial costs that both types of node operators incur. Only archive nodes store the entire blockchain history in perpetuity — and this is a significantly more costly task. Every transaction, account balance or smart contract that has ever been created on your favourite blockchain exists in (hopefully!) multiple copies across a distributed network of archive nodes, who keep it constantly updated with every new block and store every piece of data ever put onto the chain.

Part II — The Problems

Now that we’ve covered the basics of what blockchain infrastructure is, it’s time to start discussing the problems surrounding it.

The Volunteer / Infrastructure Problem

Most of the problems which follow here derive from just one fundamental problem — which is that Proof of Work and Proof of Stake blockchains pay for mining and staking, but nothing else. The entire p2p network, comprised of archive and full nodes, is provided essentially in its entirety by volunteers. Although it can be stated simply, this fact has some very serious implications for the blockchain space.

The dominant consensus mechanisms we have come up with — Proof of Work and Proof of Stake — pay block producers for producing a valid block that is appended to the blockchain. Block producers are paid with some combination of A) transaction fees inside of the block, and B) newly minted coins. No form of work other than mining and staking is paid for. I’ll also mention here (though I’ll cover it later) that most cryptocurrencies which claim not to be ‘inflationary’, including Bitcoin, have not yet reached their supply caps, and thus rely on inflation and fees to fund network security.

Again: our chains only pay for mining and staking, not any of the other network functions which are needed to keep the chain alive and functioning the way we want. Those are provided for by volunteers.

  • The critical work of collecting, verifying and sharing transactions & blocks (essential for security and network openness) is not paid for
  • The critical work of data storage and making historical blocks available is not paid for
  • The p2p network providing a bulwark against miner centralisation of power (fork resistance through enforcing consensus) is not paid for.
  • The provision of user-facing network services and access points is not paid for
  • Other security benefits such as helping to resist selfish mining attacks by providing a shared view of consensus data are not paid for

Upon first hearing this, an instinctive reaction is: “well why don’t we just pay nodes, as well as miners/stakers?” Several problems immediately emerge. How do you price the work of data storage which nodes perform? When an archive node stores data, it stores it forever and so incurs a perpetual cost. How much does data which needs to be stored forever cost? Maybe it needs to pay rent? How can we price the rent without a central planner? Full nodes also store relatively large chunks of data indefinitely, and are not compensated for this. Furthermore, how much is a fair amount to pay nodes for the routing work of collecting, verifying and propagating transactions? All of these are very challenging questions — and only the tip of the iceberg.

If you don’t understand why centrally planning prices is a fatal error, I suggest reading into it
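To get a feel for why pricing perpetual storage is so awkward, here is one back-of-the-envelope approach in Python: assume storage gets cheaper at a constant annual rate and fund a one-off 'endowment' equal to the resulting geometric series. Every number, and the constant-decline assumption itself, is invented, which is precisely the problem: a protocol would have to bake in some such guess or find a way to price it endogenously.

```python
# Back-of-the-envelope only. Assumption: storing 1 GB for a year costs some
# amount today, and that cost falls by a fixed fraction every year. Then the
# one-off endowment needed to store 1 GB "forever" is a geometric series:
#   cost * (1 + (1 - r) + (1 - r)^2 + ...) = cost / r
# Both numbers below are invented, and the constant-decline assumption is
# exactly the kind of guess a protocol would have to hard-code.

cost_per_gb_year = 0.02   # assumed: $0.02 to store 1 GB for one year
annual_decline   = 0.10   # assumed: storage gets 10% cheaper every year

endowment_per_gb = cost_per_gb_year / annual_decline
print(f"Upfront endowment to store 1 GB forever: ${endowment_per_gb:.2f}")
# If costs stop falling, the endowment is underfunded and volunteers (or
# whoever is left) end up holding the perpetual bill.
```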

In addition to these highly challenging problems, an even more problematic question emerges. How would you pay nodes without taking money away from mining/staking? A blockchain only has a certain amount of money to spend on its infrastructure (total collectable transaction fees + newly mintable coins). Money spent on the peer to peer network would be money directly taken away from miners/stakers. If you take money away from the mining/staking reward to pay p2p nodes, it means that there is less money to incentivise mining/staking. If there is less money to collect from mining, then fewer people will mine, securing 51% of the hashpower will be easier, and you would have defunded the security mechanism of your blockchain. The security mechanism is important because it makes attacking the network costly (which we want, because there’s no point in using a chain which gets attacked). In economics, we call this situation a budget constraint problem. In the blockchain space, we can call it the foundation of the blockchain trilemma.
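A toy illustration of that budget constraint, with invented figures (only the 6.25-coin subsidy is Bitcoin-like):

```python
# Invented figures (only the 6.25-coin subsidy is Bitcoin-like). The point:
# whatever you pay p2p nodes comes straight out of the block producers' reward.

block_subsidy = 6.25    # newly minted coins per block
fees          = 0.25    # total transaction fees in the block (made up)
budget        = block_subsidy + fees

node_payment  = 2.0     # hypothetical payment diverted to p2p nodes
miner_reward  = budget - node_payment

# If miner revenue roughly determines how much hashpower (or stake) the chain
# can sustain, cutting the reward cuts the cost of a 51% attack with it.
print(f"Security budget falls from {budget} to {miner_reward} coins per block "
      f"({miner_reward / budget:.0%} of what it was)")
```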

So, non-mining nodes are volunteers. The peer to peer network exists as it does today because regular individuals decide to set up nodes, voluntarily. So why do people go through the effort and cost of running nodes at all? Well, nodes can broadcast and verify their own transactions — and this is a reason why someone might set up a node (to use the network trustlessly). But in reality, most blockchain users don’t actually care about doing this, so are happy to just free ride off the volunteer p2p network without contributing to it themselves. The main reason people run a node is because they want to contribute to the decentralisation (and subsequent security) of the network — because they want the chain to succeed for moral, political, financial, personal or other reasons. The dominant strategy for creating a strong p2p network in our current paradigm is to cultivate a culture of voluntary node running. To date, only Bitcoin has really succeeded in doing this, for reasons we will discuss shortly.

We can sum up the volunteer problem very succinctly:

Blockchains don’t pay for critical network infrastructure, only mining/staking. Critical infrastructure (such as collecting, verifying and sharing transactions; storing the blockchain history) is provided by volunteers.

The volunteer problem can also be summed up as ‘the free rider’ problem of blockchains, as it is essentially network participants free riding on the work done by volunteers. Most of the subsequent problems we will discuss are a result of this fundamental issue — although they are not usually portrayed as such.

The Scalability Problem

Two key parameters in a blockchain are the ‘block size’ and the ‘block time’. The block time is how often a block is produced and added to the chain. The block size is the amount of data which can be put into a block. Together, these two parameters determine the rate of growth of the chain — the rate at which the size of the blockchain grows.

  • Rate of growth of chain (MB per second) = block size (MB) / block time (seconds)

In simple terms, the ‘size’ of a blockchain is how much space it takes up on a hard drive — and this depends on both how fast blocks are added to the chain, and how much data each block holds.
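A quick worked example of that relationship, comparing a Bitcoin-like configuration with a hypothetical big-block chain (all figures illustrative):

```python
# Worked example of the growth-rate relationship above, with round numbers:
# a Bitcoin-like chain (1 MB blocks every 10 minutes) versus a hypothetical
# big-block chain (32 MB blocks every 2 minutes).

def chain_growth_gb_per_year(block_size_mb, block_time_s):
    seconds_per_year = 365 * 24 * 60 * 60
    blocks_per_year = seconds_per_year / block_time_s
    return blocks_per_year * block_size_mb / 1000  # GB added per year

print(chain_growth_gb_per_year(1, 600))   # ~53 GB/year: volunteers can keep up
print(chain_growth_gb_per_year(32, 120))  # ~8,400 GB/year: volunteers priced out
```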

Remember when I said that blockchains which process more data (which are more scalable) will have greater storage requirements for full nodes? — Well, here’s the problem with that. Blockchains which make their chain’s rate of growth too high (e.g. through increasing the block size) end up growing the chain size to an unsustainable level, where it becomes unfeasible for volunteers to run nodes because data storage costs grow too high. This leads to a deterioration of the peer to peer network and, with it, all of the valuable and network-critical functions which it performs. In short: scaling too aggressively destroys the chain’s infrastructure. Since we do not pay for data storage, scaling a blockchain too much will lead to the chain buckling under its own weight, taking the p2p network and all of its related functions down with it. This is called ‘blockchain bloat’.

The basic problem is this: attempting to scale beyond the capacities of what volunteers can support will lead to volunteers dropping off the network. Volunteers are nice people, but even they have their limits. When costs to volunteers become too great, they will stop supporting the network. Their costs come in the form of hardware requirements:

“The decentralization of a blockchain network is determined by the ability of the weakest node in the network to verify the blockchain and hold its state. Therefore, the costs to run a node (hardware, bandwidth, and storage) should be kept as low as possible to enable as many individuals as possible to become permissionless participants in the trustless network.”

- Starkware, Redefining Scalability

This is where the decentralisation — scalability tradeoff in Vitalik’s ‘blockchain trilemma’ comes from. And is likely the reason why back in 2010, Satoshi somewhat covertly introduced a hard-cap to Bitcoin’s blocksize of 1mb.

The Blockchain Trilemma, as proposed by Vitalik Buterin. Pick two, at the cost of the third.

While we’re here, let’s talk about a little thing that happened called the blocksize wars. Back in 2015–17, there was a raging debate which tore up the Bitcoin community. The debate was about what the block size should be for Bitcoin. Should it increase its block size to have large blocks so it can be a highly scalable network, or should it have small blocks and be a highly decentralised and secure network — and leave scaling to off-chain solutions?

Remember when I said I’d explain later why Bitcoin (BTC) was the only chain able to really create a strong culture of node-running? This is because at the end of the blocksize wars, only a minor block size increase was enacted (from 1mb to 2–4mb with segwit), and BTC went down the path of small blocks. It severely constrained its scalability in favour of creating conditions favourable to node-runners, to ensure the chain’s long-term sustainability. Hard forks like BCH and BSV were (are?) attempts to steer the Bitcoin protocol down the big-block path. But the small block size of BTC (an intentional scalability cap) makes running a Bitcoin node vastly cheaper than running a node on other chains. This block size cap, coupled with Bitcoin’s rich history, is what has led to the strongest culture of user node-running in the space.

There’s a gold-mine (pun intended) of topics to discuss concerning the blocksize wars, but ultimately it comes down to the simple tradeoff between scalability and the peer-to-peer network.

‘Why is Bitcoin so slow?’, so many complain. There is nothing about Bitcoin that necessitates it being slow. The only reason that Bitcoin can’t scale is because scalability would undermine the volunteer peer to peer network by increasing data storage costs to the point where volunteer nodes would be unable to continue running the network infrastructure. Bitcoin doesn’t scale because it chooses not to scale — not because there is something wrong with its protocol design.

Volunteer networks can only achieve volunteer scale. The Red Cross has been providing volunteer-powered healthcare for over 100 years. But it is not able to scale to the level that it can provide an entire healthcare system for society. As we will circle back to later, in order to provide (produce) something at a scale greater than what volunteers can manage, the market must be rewarded for producing it. Incentives steer the course of production.

“I suspect we need a better incentive for users to run nodes instead of relying solely on altruism.” — Satoshi Nakamoto

*Whether or not one believes that the above quote is from Satoshi comes down to whether one believes Satoshi’s email was hacked or not. Obviously, this is hotly debated — as both sides on the blocksize debate want Satoshi on their side. In spite of all this, I think the point raised in the quote still deserves attention.

The Data Storage/Pricing Problem

At the root of the scaling problem is the fact that blockchains don’t pay for data storage incurred by volunteer nodes. In every blockchain, users pay money today to put data onto the chain forever. They pay a one-off fee to miners, which fluctuates depending on network traffic. The intersection of the supply of and demand for blockspace on that chain determines the fee. Not factored into this fee is the cost of storing the data inside that transaction which archive nodes will have to incur in perpetuity, and which non-archive nodes will have to store for a good while. Not that the fee goes to nodes, anyway (it goes to miners/stakers). This is the case for every transaction on the network. Data storage costs simply aren’t accounted for in the fee pricing mechanisms of blockchains. Nor are the bandwidth costs associated with collecting and sharing transactions, mind you. And as discussed earlier, there is also a significant challenge in actually determining a market price for data storage endogenously (within the network), even if there was a way to pay for data storage without defunding consensus.

Over time, more and more data gets added to your favourite chain without its storage being paid for. The blockchain grows in length. Even on a chain which limits its scalability. This puts increasing amounts of strain on archive and full nodes, who incur the cost of storing network data without direct monetary compensation. The hardware and maintenance costs related to data storage simply snowball over time.

Because there is no compensation for storing blockchain data in consensus, only those who can find a viable business model outside of the blockchain (or those who are extremely militant maxis with sufficient resources) end up running archive nodes for networks. Typically, this is done by a small number of large private (centralised) companies like cryptocurrency exchanges, as well as those who run block explorers who require archive nodes to provide their service. That said, one only needs to look at BSV to realise that nodes run by private companies might not even be profitable if chain scalability is too high, because the chain doesn’t compensate them for data storage costs.

In my opinion, we would all prefer a network which endogenously pays for everything it needs to survive rather than relying on the existence of external markets to keep it alive, so that we have a completely self-sufficient system.

The Scalability Problem revealed that we cannot scale the network beyond whatever volunteers can manage without sacrificing decentralisation.

The Data Storage/Pricing Problem reveals why: there is a fundamental issue with regards to not paying for data storage on blockchains, which is a problem no matter what the throughput of the underlying blockchain is. In economics, this unpriced impact is called an ‘externality’. The data storage/pricing problem is what causes the scalability problem, so the two may be lumped together as the same problem under either name. It was only for the purposes of education (and because it will be useful later) that I separated the two.

The inevitable outcome of the data storage problem, where an ever-increasing chain size puts cost-pressures on nodes who store the data, is sometimes referred to as ‘blockchain bloat’. Blockchain bloat will be a problem on any chain after enough time (because blockchains only ever grow in size) — though chains which have higher scalability will face it more severely, more quickly. Economists call a situation like this — where the overuse of a public resource (the blockchain) today degrades it for everybody else tomorrow — a tragedy of the commons.

It is worth noting something about Bitcoin here. The Bitcoin (BTC) blocksize is small enough that the Bitcoin chain grows so slowly, there’s a case to be made that increases in data storage costs for nodes might be offset by decreases in the price of hardware as technology improves over time. I’m not aware of any other chain that can reasonably make this claim. So one could consider Bitcoin exempt from the Data Storage Problem (we could also debate this). But Bitcoin is only exempt from the data storage problem precisely because it has decided not to be able to scale its base layer. So a fundamental trade-off has been made. For Bitcoin, the price of avoiding blockchain bloat is 7 transactions per second. So while one problem is solved, another becomes worse.

Bitcoin’s blockchain is small and growing at a steady pace. (Source: YCharts)

Above is a chart of the size of Bitcoin’s full blockchain (in GB) over the past five years. You can see that it is not exponentially increasing. It is currently 417GB. A regular person could feasibly run an archival node. But as you move further away from Bitcoin, it becomes apparent that this is not the norm in the space.

Because of its higher throughput than Bitcoin (more transactions per second, more data per transaction, larger state storage requirements), Ethereum’s chain size is growing at an extremely rapid and unsustainable rate. An archive node on Ethereum took up 4 terabytes of space on April 8th, 2020. That is a LOT. More than a normal person is willing and able to run. Approximately two years later, at the time of writing this article (July 22nd, 2022), an Ethereum archive node takes up 11.3 terabytes of space. That is nearly three times as much. And remember, the blockchain is only going to get bigger.

Ethereum archive node storage requirements, Jan 2019–July 2022

That’s archive nodes, who store much-valued historical data. But what about full nodes, which make up the bulk of the p2p network? Instead of the full history of the blockchain, full nodes (also referred to as pruned full nodes) only store the current state of the chain, including enough historical data to process any new transactions. They verify all new network transactions, but prune (delete) older states from the chain which are not needed to determine the current state (determined by a largely subjective valuation of how much is not risky to delete).

The storage requirements for Ethereum’s two major full node clients over the past four years are shown below. The sudden drop in storage requirements in January of 2022 for the Geth node client was due to pruning more data from the chain, likely because storage costs were rising too much, too fast. This is not a systematic way to handle data storage. The trend of data storage on Ethereum is clearly up, and at an increasing rate. In the past four years, the storage space of a full Ethereum node has increased from 163GB to 740GB — an approximately 4.5x increase.

Ethereum full node storage requirements, Jan 2019-June 2022

I think it is fair to say that without any changes to the protocol, if Ethereum’s blockchain continues to grow at this rate, the average user would certainly be unable to run a full node within the decade. Node running would be left to those private companies which are able to profit from running nodes (like blockchain explorers), if there are any, and the hardcore maximalists who can afford to do so. The long-term decentralisation of the network would be threatened by the data storage problem.

If you’re an Ethereum maxi, I can already hear you seething. Don’t worry! This article will also cover Ethereum’s plans to tackle the issue of increasing blockchain bloat.

And I’m not trying to make Ethereum look bad or anything, because it’s well known that Ethereum is vastly more decentralised (and scaling less rapidly) than most of its competitors. Ethereum just made different tradeoffs to Bitcoin. Chains like Solana are growing at even faster rates, and are now offloading the burden of data storage costs to projects like Arweave. In 2021 Solana was growing at 2TB per year, and in 2022 it is now growing at 30TB per year. Its strategy is to have full nodes only store 2 epochs (2 days) worth of recent data, and for archive nodes to use Arweave to store the rest of the chain. This is all just classic blockchain trilemma tradeoff stuff. — And don’t worry, we’ll discuss data storage platforms like Arweave and Filecoin a bit later on in the article.

In short: blockchains exhibit a tragedy of the commons situation with regards to the storing of data. Everybody adds data to the chain, but nobody pays for it — and the cost burden is borne by the volunteer p2p network. Over time, this degrades the security of the blockchain as decentralisation is eroded.

The Network Closure Problem (Background)

This next problem is a scarcely discussed, and often misunderstood one in the space because it is an economic problem, rather than a technical one. It is one that Moxie Marlinspike’s viral blog post, My First Impressions of Web3 indirectly touched on. That article will be useful pre-reading if you don’t understand what something like Infura is, or how a web-wallet like Metamask interacts with the underlying blockchain.

If the concept of the ‘networking layer’ of a blockchain is new to you, and you want to learn how it works on a deeper level but don’t know where to start, then you should read Preethi Kasireddy’s beautiful overview, The Architecture of a Web 3.0 application. Feel free to keep on reading here if you are already familiar with these things — or if you just want the ‘short’ version. But I highly recommend you revisit those articles at some point if you want to learn more.

I will mention that when I say ‘networking layer’, I am referring to the p2p network which connects users to the blockchain.

In order to get to the heart of this problem, we are going to learn some economics — and what Satoshi really invented in 2008. At first you might think that what I’m about to discuss isn’t relevant, but it’s actually the basis of understanding the entire value proposition of blockchains and cryptocurrencies — so strap in and enjoy the ride.

This painting is actually not relevant, but is really cool

Public Versus Private Goods
In economics, a public good is something that has two properties:

  1. It is non-excludable: nobody can stop somebody else from using it
  2. It is non-rivalrous: one person using it doesn’t stop other people from using it

A public park provided by the government is a good example of a public good. Nobody can stop you using it, and using it does not stop other people from also using it.

A private good is the opposite of a public good — it is something that is excludable and rivalrous. Something like a burrito is an example of a private good — its owner can stop others using it, and its usage or consumption stops others from using or consuming it.

A common point of confusion is that ‘public’, in the context of economics, means ‘government’. A ‘publicly produced good’ would refer to a good produced by the government — whereas a ‘privately produced good’ is something produced by the private sector (individuals, businesses). A ‘public good’, however, can technically be produced by the private or public (government) sector. It just so happens that, for reasons we will now discuss, inducing the private sector to provide public goods is very difficult — and so the government is basically always the provider and producer of public goods.

An Economics Lesson on Public Goods Provision
Normally, the government pays for public goods using money collected through taxation or monetary debasement. Why? Because there is no economic incentive for the private sector (you and me) to produce them.

I’m going to explain exactly what I mean by this using the example of trying to provide a public park.

Remember to go outside

Would you (the private sector) voluntarily pay the cost of building and perpetually maintaining a park that is open for everybody to use? It would be a constant money drain, and you would not get compensated for it. Meaning, if you didn’t have some way to make enough money to pay for it, you’d eventually go bankrupt.

Maybe if the park was small enough, you might volunteer to produce it for your community if you had the money. But in this situation, the park’s provision would not be ensured into the future. If you died, it would pile up with litter and damages with nobody being paid to constantly keep it clean or maintain it.

What if the public park was supported by the altruistic donations and efforts of the community who would use it? Well — first, as we learned earlier, volunteers can only provide volunteer scale. If we rely on volunteers, then the scale of the public good will be constrained by what volunteers — who lose money and effort to fund and care for the public good — are able to handle.

Secondly, we have an incentive problem on our hands. Suppose the (small) park is paid or cared for by a community of 20 volunteers. If I’m in the community, and I know that 19 other people are also paying or caring for the park voluntarily, then I’ll just stop paying for it and let them pay for it. They can do the work, and I’ll still get the benefit. You might object: “but that would be mean / wrong / unethical / bad!” —Oh well. I’m not going to pay for it. And because it’s a public good, I can still use it just as much as anyone else, and nobody can stop me. This is called free riding, and it is something we observe all the time in the real world. A classic example is free riding on the work produced by a hard-working member of a group assignment and doing nothing yourself. Or not paying for your bus ticket, while everybody else does.

I probably won’t be the only one with this free-riding idea either. One by one, other volunteers will catch on and also stop paying or caring for the park — because they don’t need to pay for it to use it, and it is still getting taken care of by other people. This is prototypical game theory — see here for more. Individuals act — not groups. And individuals do what is in their personal best interests. So the incentives on the individual layer determine the outcome we get. Even if everybody wants the park in theory. This is the essence of the Prisoner’s Dilemma in game theory.
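If you like seeing the game theory in numbers, here is a toy payoff sketch with invented values for the park example: one volunteer deciding whether to keep contributing, given how many of the other 19 still do.

```python
# Invented payoff numbers for one volunteer in the park example.

PARK_VALUE    = 10   # benefit each person gets if the park stays maintained
SHARE_OF_COST = 3    # what it costs one person to keep contributing

def payoff(i_contribute, others_contributing):
    # Assume the park only stays maintained if at least 15 of 20 contribute.
    maintained = others_contributing + (1 if i_contribute else 0) >= 15
    benefit = PARK_VALUE if maintained else 0
    return benefit - (SHARE_OF_COST if i_contribute else 0)

for others in (19, 15, 10):
    print(f"{others} others contribute -> "
          f"contribute: {payoff(True, others)}, free ride: {payoff(False, others)}")
# In every case free riding pays better, so each individual defects; one by
# one the contributors drop out and the park stops being maintained.
```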

You might say: “but we can stop this from happening — the volunteers who are paying or caring for the park can just ban those who aren’t from entering the park, or make non-contributors pay to use it!” These are pretty good ideas, but what you’ll get if you implement them is no longer a public good, because this introduces privatisation and therefore excludability. Those paying would have a monopoly on who could or couldn’t access the park. Note here, that introducing privatisation and subsequently excludability (being able to stop people using the park) actually enables a business model where you could allow park access to those who paid you for it. This would be a perfectly viable business-model which would ensure the sustainability of the park’s provision (assuming there was market demand for it). But we no longer have a public good.

Okay. We can’t ban people or make them pay, because then it wouldn’t be a public park. But “surely”, you might say, “because people value the park, and having the park gives them some kind of value, there will be people who are willing to pay for it. Even if they are volunteers, they will still get ‘compensated’ — just not with money!” Well, like we said before, at a small enough scale altruistic and self-sacrificing volunteers might be willing to take the economic hit to provide the public good which they value. But this is not a scalable approach. How can I prove it’s not scalable? Because we don’t get the private sector providing healthcare, education, roads, bridges, parks and infrastructure for free at nationwide or global scale for non-monetary (e.g. moral) utility! And even if there was a willing volunteer who had a LOT of money and could afford to produce a public good at a societal scale (insert your favourite company/billionaire here), they would not be able to do so sustainably. Eventually, the perpetual costs of providing the public good would simply bankrupt them. And even if it was magically only a one-off cost to create and maintain the public good, there would still be no monetary return on the investment — which, compared to alternate uses of the money, would seem wasteful.

People not familiar with market failures often reason: “if [public good X] is really valuable to a lot of people, then someone will pay for it.” But that isn’t how markets work. The market will only provide something through the introduction of closure. By now it should be clear that the free market will not provide public goods unless it can privatise the resource in question, so that the production costs can be justified through profit. That’s the only way we can incentivise the private sector to produce the good for us. But privatisation leads us to lose the property of non-excludability — which is why we can’t all just go and relax in a private golf club’s park. It is not a public good.

This situation, often deemed a ‘market failure’, is why we try to get governments to provide public goods for us using taxation (whether directly or through inflation). Notwithstanding the point that government failures are also a very real possibility, both market and government solutions to the public goods problem introduce a trusted third party for provision.

What Type of Good is a Blockchain?
The private sector will not produce a public good unless they are able to remove openness (non-excludability) and introduce closure (excludability) through privatisation.

But what if there was a special kind of public good which actually paid the private sector to produce itself, while maintaining its core properties? What? How is that possible?

Blockchains are not ‘owned’ by anyone. It’s one of the weirdest things about them. If they are built right and the network becomes sufficiently decentralised (if they are good blockchains), no single party owns or can unilaterally control them. They are non-excludable, because anybody can use them — and they are also non-rival because one person using them does not stop others from being able to use them (your transaction will get onto the chain). They are public goods, in the purest sense of the term.

In fact, nearly everything desirable about something like Bitcoin comes from the fact that it is a public good. Censorship resistance is the inability for anyone to halt or tamper with your transactions on the network. Openness, that anyone can use the network and that no-one can be excluded from it, is another highly desirable property of blockchains. Blockchains (when built right) are public goods, and this is exactly why we like them!

Except, unlike basically every other public good, blockchains are not provided by the government. They are provided by the private sector.

Blockchains are privately produced public goods. A good blockchain is a public good which incentivises its own provision.

This is possible because every blockchain is powered by a cryptocurrency which can be used on its network. If you just asked the private sector (you, me, companies) to maintain a blockchain — a distributed ledger — they would say no, because they have no incentive to do that. Just like with a public park. But if you told the private sector that if they provided the blockchain, they would get paid with a currency that could be used on the blockchain— then they would happily provide the necessary network infrastructure, supposing they thought the currency would have sufficient value (utility) to others and the network would grow over time.

This was part of the genius of Satoshi. He made a network that pays for its own existence. A network which funds itself. A network which is self-sufficient. And it is not only self-sufficient; it is a public good which is self-sufficient. Something open that nobody can be excluded from using or participating in, and which my using does not stop you from using. This was one of the key revolutionary steps of Bitcoin. It was a self-funding public good that did not rely on a closed set of participants to provide it. It is not enough that a blockchain does what we want it to now. We need it to be able to survive into the future — indefinitely. Otherwise what we have is merely a temporary chain, doomed to inevitably lose its core properties in an attempt to stay alive.

As we saw before, the private sector can only viably provide public goods when they introduce closure — when they take ownership of the resource or good in question. This was the standard theory, anyway. Yet, with Bitcoin, this was not the case. Nobody owns Bitcoin. It is a public network.

Blockchains being public networks is the reason why ‘attacking the network’ means exerting control over the network.

An important point needs to be made here. Why did I say that blockchains are only public goods “when they are built right”, or that only “good blockchains” are public goods? The reason is that slapping the word ‘blockchain’ onto some code does not mean you have created anything of value. When Satoshi created Bitcoin, he created something new. Something fundamentally different to most of what passes for a ‘blockchain’ today. He created a self-sustaining and secure public good.

The key properties which make a blockchain valuable in the first place are:

  • Openness (non-excludability from both consensus and usage)
  • Self-sufficiency (incentivises the provision of its own infrastructure)
  • Censorship resistance (prohibitively expensive to tamper with or cheat)

I’m going to call these three properties The Satoshi Properties — as they constitute the core innovations made by Satoshi Nakamoto. Without them, blockchains are worthless. Trustlessness and the disintermediation of third parties would not be possible. All other properties (e.g. decentralisation) are only valued insofar as they help to attain the above properties. If they did not contribute to the attainment or preservation of the above properties, we wouldn’t really want them. If you don’t believe this is the case — try to imagine a blockchain without these properties, and you’ll quickly find that whatever it offers doesn’t require a blockchain to exist.

What really makes a blockchain desirable in the first place? Self-Sufficiency, Openness and Censorship Resistance. No matter HOW decentralised or scalable, you would not want to USE a blockchain without these properties. The Satoshi Properties are the real blockchain trilemma.

The Network Closure Problem

As we have learned, the networking layer (the p2p network) is a key part of a blockchain. It is crucial to its proper functioning and for the long-term maintenance of the Satoshi Properties.

But the networking layers of blockchains are not paid for by proof of work and proof of stake consensus mechanisms. Which means that only part of the public good of the blockchain is being self-funded. Which means that it is left to volunteers or private entities to provision the network layer.

From our discussion on public goods, we now know that this leaves only two possible options. Either:

  1. Don’t scale the public good, in this case the networking layer, so that volunteers can provide it for free — OR
  2. Let the private sector come in and fund the provision of the good, at the cost of privatisation and the loss of non-excludability

And this is exactly what the crypto space’s solutions to the problem are! Bitcoin decided not to scale (7tps) and let volunteers run the p2p network — and other chains decide to scale more, but let private companies come in and fund the network infrastructure.

In the case of Bitcoin, we don’t get commercial scale (volunteers only provide volunteer scale). In the case of other chains, what we see is that the network layer begins to get monopolised or cartelised by private companies who occupy positions of tremendous power on the network. Moreover, these private companies themselves run on totally centralised, privately owned platforms like AWS. Also note that in addition to centralisation posing a censorship risk, it also poses the risk of being more easily hacked or disrupted. This all brings us to the network closure problem.

Let’s talk about the prototypical private company which has emerged to replace a blockchain’s volunteer peer to peer network.

Infura is a node-providing company for the Ethereum blockchain. Remember that name. They are a private company which sets up nodes for people like developers, to connect end-users (you) to applications on the underlying blockchain — and route transactions they collect across the p2p network.

Here’s a quick overview of what Infura does from their FAQ’s:

“Blockchain applications need connections to peer-to-peer networks which can require long initialization times. It can take hours or days to sync a node with the Ethereum blockchain and can use more bandwidth and storage than you had planned.

It can get expensive to store the full Ethereum blockchain and these costs will scale as you add more nodes to expand your infrastructure. As your infrastructure becomes more complex, you may need full-time site reliability engineers and DevOps teams to help you maintain it.

Infura solves all of these problems by providing infrastructure and tools that make it quick, easy and cost-effective for developers to connect to Ethereum and IPFS and start building awesome decentralized applications.”

Essentially, the busier a dapp, the more transactions need to be processed, the more and stronger nodes are needed to manage the increased data flows for that dapp. By providing and running node infrastructure for Ethereum developers, Infura covers a massive cost (money and time), processing up to 100,000 requests per day for free.

All this ‘free’ or ‘free-market provided’ infrastructure sounds great in theory, but it actually comes at a massive cost.

Firstly, there is a centralisation risk. Since Infura exists solely to spin up Ethereum nodes, they enjoy the benefits of economies of scale, meaning it is cheaper for them to run nodes than it is for smaller parties. Obviously, this has led to massive centralising pressures. As Google and Amazon (web2 analogues to Infura) show us, IT industries tend towards monopoly. And what do we see with Infura? Infura nodes process 80–90% of all transaction flows on the Ethereum network. This is an extremely centralised system. Infura’s parent company, ConsenSys, also owns and runs Metamask, which is how basically everybody connects to the Ethereum blockchain.

Infura runs so much of the Ethereum network that when they forgot to update a bunch of their nodes in 2020, they unintentionally hard forked Ethereum (!) and crashed Metamask. What’s more, in 2022, Infura accidentally blocked the entire country of Venezuela and regions of Ukraine from using Ethereum while trying to comply with government sanctions (see here and here), also causing massive network slowdowns. More recently, Infura (and their largest competitor, Alchemy) blocked access to Tornado Cash to comply with government regulations. Since Infura is a monopoly and the provisioning of an open network is not regulated by consensus, there is no penalty for this censorship. In classic web2 fashion, Infura also collects your data, too.

I’ll also mention that Infura itself (the AWS of Ethereum) runs on Amazon’s AWS — so we have a centralised company running the blockchain by using another centralised company. In 2019, 25% of Ethereum’s nodes already ran through Amazon (including Infura). Oh, and even Amazon themselves want a piece of the game of running Ethereum nodes now. All this centralisation also strips the network of redundancy. Locating so much of the network infrastructure in one place increases the network’s vulnerability to hacks, political pressure, regulation, natural disasters, power outages and other disruptive pressures.

(Creator Unknown)

The reason the networking layer of Ethereum is mostly run by large businesses subject to regulatory capture is that volunteers cannot afford to sustain the scale the blockchain is trying to process. This means that only for-profit firms can afford to run the network. Companies like Infura and Alchemy monopolising the network layer are merely symptoms of the volunteer problem — the fact that blockchains do not pay for network-layer infrastructure out of consensus.

Due to the volunteer problem, a mere handful of privately owned and run companies are running the second largest crypto network on the planet — and there is nothing stopping them from abusing their position of power. Hoping that the private companies who decide to run blockchain infrastructure will not bend to political whims or exert their own control over the network undermines the virtue of trustlessness which the entire space was founded upon.

There are some competitors to Infura — such as Alchemy. But if Infura suddenly broke or got hacked, Alchemy would not have the capacity to handle all of the network traffic which Infura is processing, because Infura is so much larger. However, even if Alchemy did have this capacity, it would not fix the problems created by allowing the private sector to run your networking layer.

There is a second, still deeper problem here.

The REAL Network Closure problem is not merely about centralisation. Put simply, the network closure problem is that the business models needed for the private sector to provide the p2p network undermine consensus openness (universal participation on equal terms).

Because the private sector cannot afford to run nodes for everybody at a loss indefinitely, they have to monetise the transactions they collect in order to sustain their business. Monetising transaction flows just means using the transactions they collect to make money — normally by taking part of their fees. An example of how this would happen is if Infura either set up their own staking/mining nodes and only routed transactions to themselves — or if they partnered with existing miners/stakers and agreed to route transactions exclusively to them for a portion of the transaction fees.

Here’s the problem though: if user transactions represent profit for private infrastructure providers like Infura, then these companies will have no incentive to share them with others on the network — which puts them in a position to determine who can participate in the network and at what rate of profitability. They will have an incentive to collect the transactions, because they can make money off them, but a dis-incentive to share the transactions freely with all network participants, because otherwise somebody else could get the fees without giving them a cut. There would be no recourse or cost to a company like Infura should they decide to do this, or engage in any other exclusionary behaviours.

We don’t want those with power to exclude on our network layers

Although there are certainly issues regarding economies of scale and single points of failure, network closure is ultimately a distinct matter from centralisation. Whether one firm or ten firms provide the networking layer of the blockchain, since they are not paid out of consensus they all must monetise transactions, and so they all must introduce closure around network data flows. It is necessary so that they do not go bankrupt.

This forces these companies into de-facto positions of power to determine who gets those transactions (who makes money off of them) and who doesn’t. They will either forward transactions they collect to their own mining/staking nodes, or miners/stakers who they are partnered with. That way, they can take a cut. Consequently, if a network decides to scale and leave network infrastructure to the free market, as private firms fill the void left by volunteers they will come to determine who is able to participate in consensus. The property of openness gets destroyed.

At no point do Infura-like businesses have an incentive to share transactions they collect with the rest of the miners or stakers on the network, because they would not get paid for doing so. If they did share transactions freely, they would be doing costly work for free, and the rest of the network (including their competitors) would free-ride off of their work. This is not a viable strategy for a private company. Like the volunteers before them, free-riding pressures would build and the company which decided not to monetise transactions would be unable to bear their infrastructure costs at scale. Hence, relying on external markets to provide the networking layer (a public good) at scale necessarily leads to closure around data flows and the loss of public-good properties — because closure is the only way businesses can be incentivised to provide the good.
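To make the incentive concrete, here is a minimal payoff sketch comparing a routing firm that shares the transactions it collects with one that hoards them. Every number and name in it is a hypothetical assumption, not a measurement of any real provider.

```python
# Hypothetical payoff sketch: why a routing firm hoards transactions.
# All numbers are illustrative assumptions, not real provider economics.

def firm_profit(shares_freely: bool, fees_collected: float,
                infra_cost: float, n_producers: int, fee_cut: float) -> float:
    """Expected profit of a firm that collects `fees_collected` in user fees."""
    if shares_freely:
        # Transactions are broadcast to everyone, so the firm only earns
        # its cut when its own/partnered producer happens to win the block.
        expected_revenue = fees_collected * fee_cut * (1 / n_producers)
    else:
        # Transactions are routed only to the firm's own or partnered
        # producers, so it captures its cut of every fee it collects.
        expected_revenue = fees_collected * fee_cut
    return expected_revenue - infra_cost

fees, cost, producers, cut = 1_000_000, 200_000, 20, 0.5
print(firm_profit(False, fees, cost, producers, cut))  # hoard:  300000.0
print(firm_profit(True,  fees, cost, producers, cut))  # share: -175000.0
```

Under more or less any numbers of this shape, hoarding dominates sharing. That is the closure pressure described above.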

*Note: The blockchain trilemma is not a fact in the same way that the number of protons in a gold atom is a fact. The blockchain trilemma is just a feature of the way we have designed blockchains so far. Furthermore, one could imagine adding a ‘self-sufficiency’ axis to the above diagram.

“Businesses will process transactions in exchange for a part of the transaction fee, which requires adding closure to data flows, which cannibalizes the openness of consensus-layer data-flows. This kills the Network”

“The existence of private firms does not necessarily lead to monopolies (although it seems prevalent with high-tech businesses, so this isn’t necessarily wrong). The inevitability of closure is because private firms are now collecting money for the consensus layer — which is another public good.

Consensus requires permissionless access to the payment faucet to remain open/non-excludable/egalitarian. Firms on the network level can be free-ridden on by other firms on the network level [if they share transactions], based merely on their altruistic support for the underlying consensus layer.

And firms maximize income by cannibalizing those revenue flows. So transaction monetization becomes the norm, as with TAAL. The interior consensus-layer public good poisons and infects the incentive structure of the exterior market unless the external firms respond by closing access to the value they provide it.”

— David Lancashire

If Infura collects 90% of network transactions, then unless they want to run at a loss, they will decide who gets the fees (who has the right to make money) and will inevitably push participants not privy to their fee-flow off the network over time. This is a closed system. Like the banking system. Which is the opposite of what Satoshi invented. In the banking system, banks take your transaction fees and provide you with transaction-processing services — but they do not share your transactions with other banks for free. Instead, they introduce closure so that they can make a profit and continue providing the service. It’s not that Infura is evil — they’re not at all. It’s just that they need a business model to survive, and this business model contradicts the value of openness (that anyone can participate on equal terms) in the blockchain space.

The problem here is very simple: if we wanted privatisation of data flows on the internet, we would just use web2. Companies like Google already do this for us, and the cost is closure: them having the powers of censorship and excludability.

This is the exact same reason Amazon, Google or Facebook are not open. They need to pay for the infrastructure, so have to monetise and take ownership of data flows. Which puts them in a position of power to exclude others from those data flows. Amazon, Google and Facebook are not, however, attempting to provide public goods. It all comes down to the fact that providing a non-excludable good can only be done by volunteers at a loss, or by the private sector introducing closure (excluding others).

The genius of Satoshi was figuring out how to endogenously pay for network security and induce the private sector to provide something with public good properties in an open manner (in a way where everyone can participate equally). But even Satoshi was not able to figure out how to pay the networking layer of cryptocurrencies, which is why Bitcoin’s network is run by a culture of volunteers. For chains which do not constrain their scalability, the abandoning of Section 5.1 of the Bitcoin whitepaper is inevitable when private firms who need to monetise transactions to cover costs step in to provide the networking layer. The ability to limit who can participate on the network profitably can even be seen as a type of discouragement attack.

Note: Some people suggest that lite clients will fix the Infura problem, but they forget that lite clients ultimately still rely on full nodes run by companies like Infura (the only ones who can afford to run full nodes at the scale the chain requires) to process their transactions. More lite clients also mean higher costs (e.g. SPV proof generation costs) for the remaining full nodes who need to service them.

I’ve used Ethereum as my main example here, but it needs to be understood that this is not just an Ethereum problem. Like the data storage problem, every chain which decides to scale has this problem, or will have it. The network closure problem is probably the most overlooked, ignored and hand-waved-away problem in the entire crypto space. But maybe that’s just because nobody knows how to fix it.

Block Subsidies and Network Security

Blockchains pay miners and stakers using some combination of fees and newly minted coins (monetary inflation). Bitcoin, for example, uses inflation and transaction fees to pay for its security. Because it has a hard cap of 21 million Bitcoins, eventually it will stop using inflation and pay miners solely with transaction fees. This means that Bitcoin’s security budget will eventually stop being subsidised by newly minted Bitcoins, and will shrink to just the size of the fee market.

If the price of Bitcoin does not appreciate enough for transaction fees to compensate for the block rewards lost at each halving, many miners will be pushed into economic losses and leave the network. Fewer miners means the total hashrate falls. Since Bitcoin’s security (its cost of attack) at any given point is a function of the total hashpower of the network, attacking the network becomes cheaper. In Proof of Work, the cost of attacking the network is the cost of acquiring 51% of the total hashpower. (Likewise, in Proof of Stake it is the cost of acquiring 51% of the stake.)

Compared to the newly minted coins in each block (currently 6.25 BTC per block), transaction fees make up only a minuscule portion of the block reward which goes to miners. In fact, due to increased usage of the Lightning Network, Bitcoin fees are not scaling with adoption — and the main incentive for mining is still, without a doubt, the block subsidy. This poses a long-term problem for Bitcoin’s security, which will eventually be funded by fees alone. And the same goes for any hard-capped crypto asset.
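As a rough illustration of the shrinking subsidy, here is a back-of-the-envelope sketch. The 6.25 BTC figure comes from the paragraph above; the average fees per block is a made-up placeholder, not live data.

```python
# Rough sketch of Bitcoin's per-block miner revenue across future halvings.
# The 6.25 BTC starting subsidy is from the text; avg_fees_btc is a
# hypothetical placeholder, not a measurement.

subsidy_btc = 6.25     # block subsidy in the current (2020-2024) epoch
avg_fees_btc = 0.15    # assumed average fees per block (placeholder)

for halvings in range(6):
    subsidy = subsidy_btc / (2 ** halvings)
    total = subsidy + avg_fees_btc
    fee_share = avg_fees_btc / total
    print(f"after {halvings} more halvings: "
          f"subsidy={subsidy:.4f} BTC, fees are {fee_share:.0%} of revenue")
```

Unless fees (or the coin price) grow to fill the gap, the budget that pays for hashpower keeps roughly halving every four years.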

To learn more about this issue from a variety of perspectives, I recommend reading the following:

One solution to the problem of the disappearing block subsidy is to make the chain’s native currency permanently inflationary — to commit to relying on the block subsidy indefinitely. The problem with this is that it destroys the monetary premium associated with the asset, which itself can pose security risks.

On ‘economic security’

Ethereum’s solution is to make Ether inflationary, but also burn enough of every transaction fee that it outweighs the inflation, and ETH becomes net deflationary. This manages to preserve the deflationary aspect of the currency while not relying on fees for security — but (to the best of my knowledge) at the cost of higher fees than if we didn’t have to burn part of each fee to achieve deflation.

For now, we can leave this problem at: the fixed supply monetary policy which Bitcoin champions — and which this space was founded upon — poses a threat to the security and long-run self-sufficiency of any chains which employ it.

Majoritarian Attacks

Majoritarian attacks, also known as 51% attacks, are a fundamental attack vector on Proof of Work and Proof of Stake networks. How they work is very simple.

The blockchain is a connected list of sequential blocks. At any given moment, block producers compete to add a new block to the ‘tip’ (the front, the newest end) of the chain. As mentioned earlier in the article, the tip of the chain can temporarily fork when two block producers create a block at the same time and both propagate their new block around the network. In this situation, the network will have two copies of the blockchain with different blocks at the tip, and doesn’t know how to decide which one to accept as reality.

The way the network resolves this split is by allowing participants to continue producing blocks freely on either of the chains, and selecting whichever chain becomes the longest as the true blockchain. At some point, a miner on one of the chains will produce a block before the miners on the other chain, propagate it around the network, and their newly extended chain, now the longest, will be accepted. The block on the alternate chain gets discarded (an orphaned block), and everybody adopts the longest chain. Transactions in the orphaned block which did not make it onto the longest chain return to mempools. The odds of the two chains producing blocks at the same time again and again, so that the fork is never resolved and two equal-length chains are extended together forever, are next to zero.

This fork-resolution mechanism, this consensus rule of always adopting the longest chain, is called Nakamoto Consensus. It was another brilliant invention of Satoshi’s, and most blockchains use it to this day.
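For intuition, here is a minimal sketch of the rule in code. Real clients compare total accumulated proof of work rather than raw block count; this simplification just follows the “longest chain” description above, and all of the names in it are illustrative.

```python
# Minimal sketch of Nakamoto-style fork choice.
# Real clients compare total accumulated work, not just block count;
# this simplification follows the "longest chain" description above.

def fork_choice(chains: list[list[str]]) -> list[str]:
    """Given competing chains (lists of block ids sharing a common
    ancestor), adopt the one with the most blocks."""
    return max(chains, key=len)

chain_a = ["genesis", "b1", "b2", "b3a"]
chain_b = ["genesis", "b1", "b2", "b3b", "b4b"]   # extended first
print(fork_choice([chain_a, chain_b]))  # chain_b wins; b3a is orphaned
```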

The problem which arises as a result of Nakamoto Consensus is that if you can acquire a majority of the work used to produce blocks (hash or stake), you will be able to consistently create the longest chain and inevitably orphan every block on the network which you did not produce.

Even if someone produced a block before you and the network added it to the chain, you could just ignore it and keep mining on your own copy of the chain. Since you have 51% of the work function and produce blocks at a faster rate than the rest of the network, eventually you will produce a longer chain and be able to orphan any other blocks the network produces. This means that you would collect the entirety of the network revenue, fees and block subsidy, in perpetuity. You could then use this money to fund your attack and continue collecting revenue (a fee-recycling/discouragement attack). With the ability to consistently override any alternative chains, you would have the power to decide every block that goes onto the chain. This means you could exclude transactions from ever making it onto the chain and censor transactions, killing the blockchain. This is a 51% attack.
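To see why a majority of the work function is decisive, here is a small Monte-Carlo sketch. It assumes each block is won by the attacker with probability equal to their share of hashpower, and the trial counts are arbitrary: the attacker simply mines privately until their chain is longer than the honest one.

```python
# Monte-Carlo sketch: an attacker with a majority of hashpower will,
# given enough time, always pull ahead of the honest chain.
import random

def blocks_until_attacker_leads(attacker_share: float, seed: int) -> int:
    random.seed(seed)
    attacker, honest = 0, 0
    rounds = 0
    while attacker <= honest:
        rounds += 1
        if random.random() < attacker_share:
            attacker += 1
        else:
            honest += 1
    return rounds

trials = [blocks_until_attacker_leads(0.51, s) for s in range(1000)]
print(sum(trials) / len(trials))  # roughly 50 blocks on average at 51%
```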

Although 51% attacks would devastate any blockchain, I will note that in Proof of Work blockchains, past blocks are prohibitively costly to orphan because an attacker would have to rehash every subsequent block on the chain after the target block (including ones created during the attack) to create the longest chain.

Proof of Stake tries to achieve a similar property with ‘finality’ and slashing, but this comes at the cost of using closed validator sets to establish finality (a pre-Satoshi solution to the Byzantine Generals Problem which does not provide openness) — whereas in Bitcoin confirmation is not final but probabilistic, relying on proof of work instead of closed voting rings to maintain the cost of attack. The finality methodology can also have problems at scale due to the need for constant communication between a larger group of network participants to establish finality — and it can be subjected to halting attacks.

Later in this article we will discuss a potential solution for making the 51% attack unviable.

Discouragement attacks

Above, I briefly mentioned an example of a discouragement attack. In a discouragement attack, an attacker tries to prevent other network participants from receiving income to drive them off the network (discourage them from staying around). This can involve the attacker incurring a loss or lower than average ROI themselves for a period of time to render other actors unprofitable, so that ultimately others leave and they end up with increased power over the network. This would put them in a position to claim a greater share of network revenue — and could even set them up to perform a 51% attack. For example, spending money on hash to attain 51% of the hash to produce the longest chain, orphan all other miners, and then drive others off the network is a discouragement attack — even if initially it costs the attacker to pay for hash.

Although it is an unconventional way of looking at them — discouragement (and arguably, 51%) attacks arise from the fact that blockchains pay for block production. Since the creation of a block is the activity which gets paid, producing a majority of blocks will put you in a position to earn the most money, which you can then use to buy more hash/stake than the rest of the network. We’ll discuss this more later…
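To illustrate the logic with numbers, here is a hypothetical worked example; every figure in it is invented for the sake of the arithmetic, not drawn from any real network.

```python
# Hypothetical discouragement-attack arithmetic. Every number here is an
# invented assumption purely to illustrate the logic described above.

network_revenue_per_month = 100.0   # fees + subsidy paid out each month
attack_hash_cost_per_month = 110.0  # renting/running 51% of hashpower
attack_months = 6                   # time to drive other miners away
capture_months = 12                 # months of captured revenue afterwards

# During the attack the attacker orphans everyone else, so they already
# collect all revenue, but their hash spend slightly exceeds it.
loss_during_attack = (attack_hash_cost_per_month
                      - network_revenue_per_month) * attack_months

# Afterwards, with rivals gone, the cost of dominating block production
# falls, and the attacker keeps collecting the full network revenue.
post_attack_cost_per_month = 40.0
profit_after = (network_revenue_per_month
                - post_attack_cost_per_month) * capture_months

print(profit_after - loss_during_attack)  # 660.0: the attack pays for itself
```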

Commodification of the work function on secondary markets

Majoritarian and discouragement attacks are made even more problematic by the fact that the forms of ‘work’ used in Proof of Work and Proof of Stake (hash and stake respectively) inevitably become commodified on secondary markets.

I’ll explain this in simple terms:

  • PoW and PoS require miners/stakers to do work (mine/stake) in order to produce a block
  • Individuals can mine and stake freely
  • Eventually, someone gets the idea that they can rent hashpower or stake to individuals/parties in the broader economy outside of the blockchain (a secondary market)
  • Hashpower and Stake become commodified, meaning, treated like commodities. Markets form around buying, selling and renting them.

What is the problem with this? The problem is that if I own 30% of the hashpower of Bitcoin, I can rent another 20%, bear the cost of that for a while, and perform a 51% discouragement attack and eventually make my money back and more. 51% attacks using rented hash have already proven effective on networks with weaker security.

Similarly for stake, I could just rent stake and use that to attack the network. This excellent article details the rise of staking derivatives on Ethereum and how this secondary market for stake poses security risks from concentration of stake. I highly recommend reading it sometime, as it is a serious issue unto itself.

The worst part about the commodification of the work function is that nobody can stop people from renting hash or stake to others, because the renting occurs outside of the blockchain. It is a secondary market activity which the blockchain has no way of penalising, and therefore a seemingly insoluble problem. It’s as if the problem is happening in a higher dimension which the blockchain can’t access.

Credit: Interstellar (IMDB)

Miscellaneous Incentive-Based Problems:

There are some other incentive-based problems I wanted to briefly touch on, which I can devote more time to exploring in a future article. Different chains are afflicted with them to different degrees.

One of them is called header-first mining, and we see it happening on Bitcoin. It is a significant problem because it shows that even the only people who are paid anything at all out of consensus cut corners on validating the chain’s data. It is a classic example of an incentive misalignment problem.

Note: a block header is a unique, signature-like summary which each block has, composed of multiple components. It is deemed valid when its hash meets the proof of work difficulty requirement, which requires brute-force mining (spending electricity) to achieve. A block header can be valid even if there are invalid transactions inside the block.

Recall that Proof of Work and Proof of Stake do not pay for transaction validation. They only pay for mining and staking. This generates the problem of header-first mining, where, to save time and bandwidth, miners validate only block headers rather than full blocks (the transactions inside them). This weaker form of validation can lead to network-dysfunctional situations where miners build on top of a block containing an invalid transaction (see link for more). Header-first mining once led to a six-block-long invalid Bitcoin fork. Again, this problem results from the fact that blockchains don’t pay for transaction validation — only mining/staking.
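The incentive gap is easiest to see in what each kind of validation actually checks. Below is a toy contrast, not real Bitcoin code: the difficulty check and the transaction check are crude stand-ins.

```python
# Toy contrast between header-only and full validation (not real Bitcoin
# code; the hash and difficulty check are heavily simplified stand-ins).
import hashlib

DIFFICULTY_PREFIX = "0000"  # stand-in for the real difficulty target

def header_is_valid(header: str) -> bool:
    """Header-first check: only proves that work was done on the header."""
    return hashlib.sha256(header.encode()).hexdigest().startswith(DIFFICULTY_PREFIX)

def block_is_valid(header: str, txs: list[dict], utxos: dict) -> bool:
    """Full check: the work AND every transaction inside the block."""
    if not header_is_valid(header):
        return False
    for tx in txs:
        if utxos.get(tx["from"], 0) < tx["amount"]:   # e.g. an overspend
            return False
    return True

# A miner doing header-first mining calls only header_is_valid() and can
# happily extend a chain whose latest block contains an invalid payment.
```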

Another good example of how incentives can drive the behaviour of network participants in directions we don’t want is selfish mining. Again, this is actually a nontrivial security issue which arises due to incentive misalignment. To learn about selfish mining and a couple of other related problems like delayed block propagation clearly and quickly, I recommend this short and extremely sharp article by Aaron Van Wirdum.

Overview of Problems — A Graphic

The following graphic maps out all of the problems discussed in this article.

Part III — The Solutions: Current and Planned

In this section, I will give an overview and evaluation of the key proposed and implemented solutions to some of the above problems in the space today. A full discussion of each of these proposed solutions and their nuances would be a mega-article unto itself — and there are plenty of others who have already covered these topics in greater depth than I could hope to manage. So forgive me in advance if I leave things out.

Ethereum

Because I can’t cover every blockchain on Earth, I’m going to (mostly) go through solutions on the Ethereum roadmap. Ethereum has an extremely comprehensive roadmap which really tries to address many of the problems outlined, and many other blockchains are also pursuing these same solutions — so it should more than suffice for a broad overview of the solution space. If you’re an Ethereum maxi, please go easy on me if I leave things out!

A reminder that some of the key problems to fix are:

  • People will stop running nodes as chain size grows & storage costs increase, so chain scalability is constrained by the need to preserve the p2p network (tragedy of the commons problem)
  • All non-mining/staking nodes are volunteers, and the work they perform (routing work, data storage, transaction validation) is not economically incentivised, leading to cutting corners and not performing essential tasks (free rider problem)

Let’s dive into it.

Statelessness

To learn more about statelessness, see: Vitalik Article 1, Vitalik Article 2, Polynya Article 1

The Disintegration of the Persistence of Memory — Salvador Dali

“State refers to information that a node must hold in order to be able to process new incoming blocks and transactions. It is typically contrasted with history, information about past events which can be held for later rebroadcasting and archiving purposes, but is not strictly needed to continue to process the chain.”

- Vitalik Buterin [Article 1]

Statelessness is the idea that we can get people to validate blocks using light clients that don’t store state. This way, the argument goes, we can have more people validating transactions and subsequently improve decentralisation in spite of a growing chain size.

One version of statelessness is called weak statelessness where only block producers are required to store the chain state (be full nodes), and they provide proofs called ‘witnesses’ with each block which light (stateless) clients can then verify. When full nodes come across an invalid block, they will send what are called fraud proofs to light clients, giving only enough information to reject the invalid block. The argument for weak statelessness is that this fraud-proof mechanism only requires one honest full node to send out fraud proofs (packages of data) to light clients in order to have the network reject invalid blocks. Though I will note that in this hypothetical scenario, the bandwidth costs on that single honest node broadcasting the fraud proof to all other nodes would be significant — and this desired behaviour is not something the full node is paid for doing. So we would be trusting altruistic behaviour in the face of adverse incentives (as header-first mining in Bitcoin has shown, this is not a confidence-inspiring strategy).
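To make the division of labour concrete, here is a heavily simplified sketch of the weak-statelessness flow just described. The Block, witness and fraud-proof structures are invented stand-ins rather than Ethereum’s actual formats (which would involve Merkle or Verkle proofs).

```python
# Heavily simplified weak-statelessness flow. The data structures are
# invented stand-ins, not Ethereum's real witness or fraud-proof formats.
from dataclasses import dataclass

@dataclass
class Block:
    txs: list          # transactions in the block, as (sender, receiver, amount)
    witness: dict      # state fragments needed to re-execute the txs

def stateless_client_verify(block: Block) -> bool:
    """A light client re-executes the block using only the witness,
    without storing any state of its own."""
    state = dict(block.witness)           # the only state it ever sees
    for sender, receiver, amount in block.txs:
        if state.get(sender, 0) < amount:
            return False                  # witness shows the tx is invalid
        state[sender] -= amount
        state[receiver] = state.get(receiver, 0) + amount
    return True

def full_node_fraud_proof(block: Block, full_state: dict) -> dict | None:
    """A (volunteer) full node with the real state can flag a block whose
    witness lies about balances -- but nothing pays it to do so."""
    for sender, *_ in block.txs:
        if block.witness.get(sender) != full_state.get(sender):
            return {"bad_account": sender,
                    "claimed": block.witness.get(sender),
                    "actual": full_state.get(sender)}
    return None
```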

The second version of statelessness is called strong statelessness, where no nodes store the full blockchain state and transaction senders are responsible for storing state that is needed to generate proofs for accounts relevant to them. Strong statelessness has its difficulties, however — and would shift a lot of responsibility to the users, rather than have the network solve the problem. Whether this is a good or bad thing seems to reduce down to a subjective value judgement.

“Strong statelessness is a very “elegant” solution in that it completely moves responsibility to the users, though to maintain a good user experience in practice there would need to be some kind of protocol created to help maintain state for users that are not personally running nodes or that need to interact with some unexpected account. The challenges with making such protocols are significant. Additionally, all statelessness increases the data bandwidth requirements of the network, and strong statelessness requires transactions to declare which accounts and storage keys they are interacting with (this notion is called access lists).”

Vitalik Buterin [Article 1]

It is important to note that neither of these approaches solve the underlying problem of incentivising people to actually run lightweight or non-block-producing full nodes and verify transactions for the network. The goal here was to bolster the p2p network by reducing data storage requirements for nodes, but even if statelessness makes this technically possible — it doesn’t make it incentivised behaviour. Remember game theory and the free rider problem — “if everybody else is doing it, then I don’t need to” is exactly how markets fail to provide and maintain public goods.

Yes, it would be good if people ran non-block-producing nodes. But “you should do this thing because it’s good, except we won’t pay you to do it and nothing bad happens to you if you don’t do it” is the same reason people don’t go around picking up other people’s rubbish. It’s the *exact* same reason people download torrents but don’t seed them to others, a phenomenon called ‘leeching’ [see examples 1, 2, 3, 4]. It’s the same reason that there’s always someone who doesn’t do any work in the group assignment. It’s still ultimately a volunteer based solution to conserving the p2p network at scale.

It only takes basic game theory or an understanding of human nature to see why this is unideal.

Weak statelessness also fails to solve the problem that somebody, somewhere still has to incur the costs of storing the growing chain state, and is not compensated for doing so. Fraud proofs still need to come from someone who actually stores the whole chain state and can fully verify blocks instead of just doing header-only validation. This ‘someone’ is full nodes (whether they produce blocks or not). Because the chain state just keeps on growing, so too will data storage and bandwidth costs for full nodes over time. Furthermore, the need to send out fraud proofs to light clients only increases bandwidth-costs for full nodes. Volunteers will still drop off the network over time, and we will lose decentralisation. Even if one ran a staking node, the consensus-collected staking payout has nothing to do with data storage and bandwidth costs. There is nothing which makes staking rewards necessarily cover these costs, and should the token price fall sufficiently (or not appreciate fast enough) and render block-producing full nodes unprofitable — what then?

The basic fact remains. Over time, the blockchain state continues growing — which means that node operating costs will rise and the p2p network is still being degraded.

I have some additional unanswered questions about spam attacks and sybil resistance related to weak statelessness (e.g. what happens if light nodes are spammed with valid fraud proofs, or if attackers set up sybil light nodes) — though I’m sure smarter people than I have figured those things out. There are also debates about the extent to which light nodes actually contribute to the p2p network more broadly, given that they rely wholly on full nodes to function. This is similar to debates about weak subjectivity, and how non-archive nodes (most full nodes on every non Bitcoin chain) in the first place require more trust than archive nodes.

Under strong statelessness, the chain state is stored in a more distributed way by users — meaning users incur the cost of storing relevant data. This is a more viable solution, but is not without its own issues. Bandwidth costs rise for users, and data availability/redundancy issues would increase. There also becomes a need to create a protocol which helps store and manage relevant state for users — which is part of what we wanted the blockchain to do in the first place. Funding the creation and updating of such services for users may also run us into another public goods problem. Yet, to be honest, one of the biggest problems I see with users storing all the data relevant to them at any given time is that, from the user’s perspective, it feels like a massive downgrade from the web2 model. One could also easily imagine a world where, to overcome this, the average person outsources data-storage responsibilities to centralised companies which introduce closure around data flows in order to provide their services. I mean, we did it with storing our money in banks.

Whether we want a relatively large, distributed and decentralised set of archive nodes so that we can access historical data (which we inevitably will, for a multitude of reasons), or we just want the ever-growing state of the blockchain to be stored by nodes, people are going to need to incur the costs of providing those services. Making the average node not need to store state or history doesn’t change this fact, or incentivise the provision of infrastructure to achieve those ends. It just shifts the burden of doing so onto other network participants. In weak statelessness, full nodes bear all the costs. In strong statelessness, it is users and whoever builds the infrastructure to help users store relevant data. Both situations are less than ideal — and in both cases, archive nodes are not incentivised whatsoever.

If the chain’s data is going to exist, parties still need to bear the costs of storing it — and so long as they are not paid for this, expenses will grow at scale. Private businesses will only provide data services if they can monetise data flows and introduce closure. We haven’t gotten very far.

“In reality, “stateless” does not really mean “no state”! What it actually means is that you made state someone else’s problem…”

Ben Edgington, ‘What’s new in Eth2’

State Expiry

Don’t allow yourself to forget. Credit: Memento

In a similar vein to statelessness, we have state expiry. State expiry is when blockchain state that has not been accessed for a certain period of time is allowed to be dropped by full nodes (no longer counted as part of the blockchain’s ‘state’).

“There are many choices for the exact mechanic for how state can be renewed (eg. pre-paying a “rent” fee, or simply touching the account), but the general principle is that unless a state object is renewed explicitly, it is inactivated in some way. Hence, any action that creates new state objects (or refreshes existing ones) only burdens other nodes for a limited period of time, and not as is currently the case forever.”

Vitalik Buterin [Article 1]

State which is deemed expired can still be resurrected, however! It can be resurrected using the same sort of methodology used in statelessness — by somebody providing a proof (a ‘witness’) showing that the data is part of the inactive state. However, as Vitalik mentions: “In order to be able to generate such proofs, users themselves would need to store and maintain at least the part of the inactive state that corresponds to inactive state objects that they care about.”
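Here is a minimal sketch of the general mechanic. The epoch length, the expiry rule and the ‘witness’ check are invented placeholders, not any actual EIP design.

```python
# Minimal sketch of state expiry with revival. The epoch length, expiry
# rule and "witness" check are invented placeholders, not an actual EIP.

EXPIRY_EPOCHS = 4

active_state = {"alice": {"balance": 10, "last_touched": 0}}
expired_state = {}

def touch(account: str, epoch: int):
    active_state[account]["last_touched"] = epoch

def expire(epoch: int):
    """Full nodes drop anything not touched for EXPIRY_EPOCHS epochs."""
    for acct in list(active_state):
        if epoch - active_state[acct]["last_touched"] >= EXPIRY_EPOCHS:
            expired_state[acct] = active_state.pop(acct)  # someone must still store this!

def revive(account: str, witness: dict, epoch: int):
    """The user supplies the expired object plus a proof that it really
    was part of the inactive state (real proof checking elided here)."""
    assert witness == expired_state[account]   # stand-in for a real witness proof
    active_state[account] = witness
    touch(account, epoch)

expire(epoch=5)                                # alice drops out of active state
revive("alice", {"balance": 10, "last_touched": 0}, epoch=6)
```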

In my opinion, state expiry seems like one of the solutions to state bloat which would actually impact the core problem in a desirable way. It’s a move in the right direction, because it actually serves as a counteracting force (though not necessarily enough of a counteracting force) to the growing chain size. I have two major problems with state expiry, however.

First, is that existing state-expiry mechanisms are merely attempts at centrally planning blockchain economies. In my opinion, this approach is not only doomed to fail — but subverts the basic principles of blockchain.

Secondly, just like with statelessness, state expiry does not suddenly incentivise the provision and maintenance of archive nodes which store historical state (this would still remain a problem) and full nodes on the p2p network which store the current state of the blockchain. Further, asking users to start storing their own data and to manually reactivate expired state is not really the most elegant solution to the problem of state bloat.

Alternatively, some suggest that we should just offload expired state to data storage blockchains like Arweave and Filecoin.

“Users can revive expired data by providing a witness proof and paying gas to have the corresponding data reappended to the active tree. What about expired state? […] This will almost certainly be very, very expensive, so we’ll need some sort of infrastructure for expired state. I believe Solana is exploring using Arweave for similar state rent schemes, though I wasn’t able to find any details. IPFS, BitTorrent, Filecoin and others are all options.”

Polynya, ‘Statelessness + State Expiry’

There are still deeper, incentive-level problems here which technical solutions are not fixing. Just because state gets expired, it doesn’t mean that the amount of state stored by full nodes suddenly becomes an amount volunteers are willing to bear the costs of storing. What if state grows faster than it expires, so that the cost of running a full node still continually increases? Ethereum’s state is already increasing at an accelerating rate — why wouldn’t this trend continue as adoption and usage grow? And even if we were to expire so much state that the blockchain size remained constant (an idea we can discuss later), we would still have the problem that because the entire peer to peer network consists of volunteers, the chain can’t scale.

Ultimately, full and archive nodes are still volunteers and are still not paid for data storage and bandwidth — and that is why we are trying to reduce the state size in the first place. And even though users could pay a rent-style fee when old state gets revived, there would be no fee-pricing mechanism which accounts for the real-world data storage costs reviving state imposes on full nodes. And the fee would not go to the peer to peer network storing the data, but to miners and stakers, anyway! If expired state is being revived for a certain fee, that fee should bear relevance to the cost of storing and retrieving that data — and actually be paid to those incurring it. The volunteer problem, scalability problem and data storage/pricing problem are not solved here.

In addition to all this, there is also a LOT of complexity in the implementation and integration of state expiry (and statelessness) into the Ethereum blockchain — see here for more. It gets really complicated, and a lot of decisions and trade-offs will need to be made.

Sharding

One of the most popular methods for achieving scalability without moving transactions (data) off-chain is sharding. Sharding is (essentially) when you split the blockchain into multiple parallel blockchains (‘shards’), and they each contribute data to blocks that go on the chain. Each shard can process as much (or more) data as the entire blockchain could previously, leading to higher throughput (scalability).

To quote Vitalik:

“Sharding fundamentally gets around the above limitations, because it decouples the data contained on a blockchain from the data that a single node needs to process and store. Instead of nodes verifying blocks by personally downloading and executing them, they use advanced mathematical and cryptographic techniques to verify blocks indirectly.

As a result, sharded blockchains can safely have very high levels of transaction throughput that non-sharded blockchains cannot. This does require a lot of cryptographic cleverness in creating efficient substitutes for naive full validation that successfully reject invalid blocks, but it can be done: the theory is well-established and proof-of-concepts based on draft specifications are already being worked on.”

Vitalik Buterin, ‘The Limits to Blockchain Scalability’

This sounds great! Scalability on the base layer! In fact, sharding is such an appealing solution to scaling that it is also being implemented by Ethereum, Near, Elrond, Cardano, Tezos, Polkadot, Zilliqa and many other blockchains — albeit with differences in type and implementation.

Except, as economist Thomas Sowell wisely said — and as you should know by now: “there are no solutions, only tradeoffs.” So what’s the catch?

In the same article, Vitalik cites two major limits to scalability through sharding — ‘minimum user count’ and ‘history retrievability’. He explains it more simply and concisely than I ever could, so I’m just going to quote these parts of his article directly:

‘Minimum User Count’ — Vitalik Buterin, ‘The Limits to Blockchain Scalability’
‘History Retrievability’ — Vitalik Buterin, ‘The Limits to Blockchain Scalability’

In short:

  • When you shard, you split up the node set across shards. This makes each shard equivalent to a blockchain with a smaller total set of nodes. If you do this too much, then problems start to arise.
  • Data sharding (each shard’s nodes only storing their own shard’s data) means that unless the total node count of the network is increased enough to offset the effective node reduction from sharding, we could run into critical issues with data storage, data availability and processing the entire blockchain — where certain data just isn’t stored, and certain shards fail to process all of their transactions.
  • Splitting up the validator set also reduces decentralisation per shard, lowering shard security by making it easier to attain 51% of a single shard (see the sketch after this list). This, and other problems, can only be prevented by increasing the total network node count so that the number of nodes in each shard equals the total number of nodes prior to sharding the blockchain.
  • Sharding increases chain throughput, putting pressure on archive nodes to store a blockchain history that grows in proportion to the scalability gains. (Archive nodes are still not paid for this ever more expensive work.)
The nature of trade-offs is that every ‘solution’ in one place creates problems somewhere else.
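Here is the back-of-the-envelope dilution referred to in the list above. The node count and the per-node cost of attack are invented, and this ignores mitigations like randomly sampling validators into committees.

```python
# Back-of-the-envelope sketch of how sharding dilutes per-shard security.
# total_nodes and cost_to_corrupt_one_node are invented placeholders.

total_nodes = 6000
shards = 64
cost_to_corrupt_one_node = 1.0   # arbitrary unit of hash/stake

nodes_per_shard = total_nodes / shards
attack_cost_unsharded = 0.51 * total_nodes * cost_to_corrupt_one_node
attack_cost_one_shard = 0.51 * nodes_per_shard * cost_to_corrupt_one_node

print(nodes_per_shard)                                 # ~94 nodes per shard
print(attack_cost_one_shard / attack_cost_unsharded)   # 1/64 of the cost
```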

The ultimate problem here is that simply ‘increasing’ the number of nodes to counteract the strain sharding puts on the chain is not an easy task. These nodes are volunteers, and as we know, there are massive incentive issues at play. There are no economic incentives to run a light node and do the work which light nodes do, and there is a prisoner’s dilemma-style situation where people have an individual incentive to stop running a node and free-ride on the work of other nodes, however lightweight and cheap those nodes may be to run.

In his article, Vitalik describes the fundamental solution to making sharding (and the blockchain) sustainable — a solution to increasing the minimum node count of the chain:

“For a blockchain to be decentralized, it’s crucially important for regular users to be able to run a node, and to have a culture where running nodes is a common activity.” — Vitalik Buterin

This is what solving the trilemma comes down to, in the end. Culture.

The blockchain can’t shard (scale) too much because it needs more nodes? Then just develop a culture of node running. This touches on the heart of the volunteer problem and the scalability problem. While a culture of righteously self-sacrificial altruistic nodes would, of course, increase the scalability potential of any blockchain, we have already thoroughly outlined that:

  1. ‘Culture’ is not a commercially scalable approach (volunteers = volunteer scale)
    AND
  2. It is not an economically viable approach on an incentive level (the game theory of the prisoner’s dilemma and the free rider effect kicks in and there is a market failure to provide the needed infrastructure).

Sharding does not fix the volunteer problem, and only improves blockchain scalability at the cost of security. The fundamental trilemma trade-off still exists. If increasing the minimum node count to retain decentralisation is needed to scale the chain securely under sharding — well, that was already the case without it.

Moving Computation Off-Chain — Rollups

In this section, I’m going to explain (roughly) what rollups are and how they work. This understanding will help us evaluate their effectiveness in solving the problems of blockchains we’ve outlined.

If you’d like to skip right to the evaluation, scroll down until you reach the bit titled: ***Rollups Summary/TLDR***

A key part of the current blockchain scaling solutions is the idea of ‘blockchain departmentalisation’. The idea is that we can unbundle the consensus, execution and data availability layers of the blockchain should we so wish, and split these tasks up across multiple chains (which means, off-chain). So-called ‘monolithic chains’, which try and do everything on layer 1, have been deemed inadequate. Blockchain departmentalisation is generally done with the main goal of addressing the scalability problem — not any of the others which we have discussed.

I will defer to an article by rollup aficionado Polynya to describe what these different blockchain ‘layers’ do:

“Execution: Execute transactions as fast as possible. This is where we’ll see most innovation, and we could see large-scale applications build their dedicated execution chains optimized for specific usecases. Example: Reddit building their own rollup.

Consensus: Provide security, coordination and store transaction proofs and “metadata”. Perhaps it’d be better to call these “Security chains” going forward as the other two types may also have consensus mechanisms, albeit much simpler ones.

Data availability: Provide data availability and store compressed data for transactions”

There are different ways we could mix and match these layers. Rollups are simply off-chain execution layers. Validiums are when consensus, execution and data availability are all on different chains. And Volitions can be rollups or validiums for a given consensus layer (e.g. Ethereum, Proof of Stake) depending on what the user chooses.

“Rollups only do execution, while relying on a different chain for security and data availability. Execution: chain 1; consensus and data availability: chain 0.

Validiums are rollups which use a secondary source for data availability (which makes them not rollups, but validiums). Execution: chain 1; consensus: chain 0; data availability: chain 2.

Volition builds on this by offering users the choice of rollup and validium within the same state. Execution: chain 1; consensus: chain 0; data availability: chain 0 (rollup mode), chain 2 (validium mode).”

It is worth noting that there are such things as ‘data availability chains’, which Polynya describes as follows:

Data Availability Chain — A new breed of L1s with no execution layers, focused entirely on data availability for other chains. They have their own consensus mechanisms purely for data. It’s important to note these chains are useless by themselves, and need other execution chains — whether monolithic or validiums — to leverage them.”

Ethereum’s strategy is to achieve scalability by only using the base layer for consensus and data availability, and to move all the heavy computational lifting off-chain to rollups (an approach detailed by Polynya here). In this way, they hope to increase chain throughput whilst retaining decentralisation on the base layer.

Let’s learn a bit about rollups so that we can examine this strategy.

What rollups do is execute a whole heap of transactions off-chain — and then only submit the final state to be posted to the main chain. This way, nodes don’t need to verify as many transactions as if every single transaction took place on-chain — instead just verifying a ‘rollup’ or ‘batch’ of all of the transactions which took place off-chain.

Rollups handle the heavy computation of transactions off chain, only submitting final ‘batches’ to the chain

The reason that rollups can process so many more transactions than the main chain is because they are not consensus-based systems, and instead rely on centralised block producers called ‘sequencers’, who have very powerful and expensive hardware. Centralisation enables the processing of vastly more transactions, because the single ‘node’ does not need to repeatedly achieve consensus with other nodes every time it processes a transaction.

Rollup users must deposit funds into one of these smart contracts before getting an equivalent amount unlocked on the rollup. A third party, known as a sequencer, credits the user with funds on the rollup after receiving proof of the latter’s deposit in the rollup contract

- Alchemy

But wait just a minute here — we can’t just go around trusting centralised sequencers to not do something shady. So how are they kept in check? There are two main methods. One is using fraud proofs (described by Vitalik here), which involves individuals checking that rollup transactions are valid and signalling a red alert when they find some misbehaviour, so that the main chain rejects the invalid transactions. This requires sequencers to share transaction data (make the data available) so that people can verify it to submit fraud proofs. This innocent until proven guilty method is employed by optimistic rollups.

The other method is validity proofs (see link for explanation), which are submitted with every batch sequencers post to the main chain — and definitively proves that the entire batch is valid. The validity proof is verified by full nodes on the consensus layer, and confirms the validity of the batch. This method is employed by ZK-rollups. Those who compute the ZK proofs are called ‘Provers’. It is important to note that ZK-Rollup sequencers must still make the data available. If they didn’t, then users would not be able to see their balances and interact with the rollup.

“With rollups, we do the transaction processing off-chain, but we post transaction data on-chain. The amount of data we post on-chain is the minimum amount required to locally validate the rollups transaction. By putting data on-chain, anyone can detect fraud, initiate withdrawals, or personally start producing transaction batches.”

Preethi Kasireddy, A Normie’s Guide To Rollups
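As a rough mental model of the optimistic (fraud-proof) flow described above, and not any particular rollup’s implementation, here is a sketch in which the state-transition function, the claimed state root and the challenge window are all invented stand-ins.

```python
# Rough mental model of an optimistic rollup. Not any real rollup's code;
# the state-transition function, root and challenge window are stand-ins.

CHALLENGE_WINDOW_BLOCKS = 100

def apply_batch(state: dict, batch: list) -> dict:
    """The rollup's state-transition function (simplified to payments)."""
    state = dict(state)
    for sender, receiver, amount in batch:
        assert state.get(sender, 0) >= amount, "invalid tx in batch"
        state[sender] -= amount
        state[receiver] = state.get(receiver, 0) + amount
    return state

def sequencer_post(l1_chain: list, batch: list, claimed_root) -> None:
    """Sequencer posts the batch data plus a claimed post-state root."""
    l1_chain.append({"batch": batch,
                     "claimed_root": claimed_root,
                     "challenge_deadline": len(l1_chain) + CHALLENGE_WINDOW_BLOCKS})

def challenger_check(posted: dict, prev_state: dict) -> bool:
    """Anyone holding the batch data can recompute the result and submit
    a fraud proof within the window if the claimed root doesn't match."""
    actual = apply_batch(prev_state, posted["batch"])
    return hash(frozenset(actual.items())) == posted["claimed_root"]  # toy "root"
```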

If you’d like to know more about what’s going on under the hood and how these proofs work, these three articles describe them in greater depth, and in varying levels of complexity: (simple, intermediate, technical; video).

An Artist’s Depiction of Cryptographers Implementing ZK Moon Math — ‘Celestial Pablum’ by Remedios Varo Uranga

However, a key takeaway is that regardless of the method of proof used, data availability is essential to making this whole rollup thing work.

If the data availability solution/layer used by a rollup is unable to keep up with the amount of data the rollup’s sequencer wants to dump on it, then the sequencer (and the rollup) can’t process more transactions even if it wanted to, leading to higher gas fees like we see in Ethereum today.

This is exactly why data availability is extremely important — guaranteeing data availability allows us to ensure rollup sequencers behave, and maximizing the data space throughput of a data availability solution/layer is crucial if rollups are to maximize their transaction throughput.

Yuan Han Li, ‘WTF is Data Availability?’, (emphasis added)

So data availability is a fundamental constraint on scaling with rollups. Now, the Data Availability Problem is that we need to make sure that sequencers make data available on the data availability layer (in this case, the Ethereum L1), for the above reasons. We don’t want sequencers withholding data from the network, but since they exist outside of consensus we can’t easily compel them to give it to us.

“The obvious solution to the data availability problem would just be to force the full nodes to download all the data dumped by the sequencer onto the data availability layer/solution — but we know this gets us nowhere since it would require the full nodes to keep up with the sequencer’s rate of transaction computation, thereby raising the hardware requirements of running a full node and worsening decentralization.”

Yuan Han Li, ‘WTF is Data Availability?’

Because full nodes can’t download all the data at once, but still need to be able to download any given piece of data when they need to verify specific things (and because users need data availability too), there needs to be a method of determining whether sequencers are making data available (on the data availability layer) at all times. This method is called ‘data availability proofs’.

One strategy, called ‘data availability sampling’ is to have full nodes request random pieces of data from sequencers. If the sequencer doesn’t have the specific pieces of data that are requested, then the network node will send out a red alert. With many full nodes doing random data sampling, no one node has to store all the data the sequencer holds — but the network can still test that the sequencer is holding most of the data. Sequencers don’t know what data is going to be requested, so if they are trying to withhold data and make data unavailable, they will likely be caught out. This can be combined with something like data sharding, where nodes in a given shard only store their own shard’s data, and use sampling to check data availability for other shards.

Data Availability Sampling. Credit: Quentin Tarantino

However, a malicious sequencer could still get away with not storing a very small portion of the data — a single transaction, for example — because the only way to ensure that every piece of data is available is to actually download every piece of data. Data availability sampling only gives us probabilistic guarantees. So the plan is to also throw in a very clever technique called erasure coding, as well as fraud proofs (or validity proofs!) and anything else we can come up with to make things as secure as possible. See here for a thorough academic article on this (co-authored by Vitalik Buterin).
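The probabilistic guarantee is easy to quantify. Here is a quick sketch, with arbitrary numbers, of the chance that a withholding sequencer survives k independent random samples undetected.

```python
# Probability that a sequencer withholding a fraction of the data
# survives k independent random samples undetected. Numbers are arbitrary.

def survival_probability(fraction_withheld: float, samples: int) -> float:
    return (1 - fraction_withheld) ** samples

print(survival_probability(0.25, 30))    # ~0.018%: caught almost surely
print(survival_probability(0.0001, 30))  # ~99.7%: withholding one tx in
                                         # ten thousand is rarely caught
```

This is exactly the gap erasure coding closes: it expands the data so that making any single piece unrecoverable requires withholding a large fraction of the encoded data, which sampling then catches with high probability.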

***Rollups Summary/TLDR***

Okay. Let’s say we can sort out the details, and make all this work. Where do rollups get us? Well, they don’t solve the volunteer problem of paying people to run nodes in the p2p network. They partially assist in slowing the data storage problem by moving computation off-chain. They do not price and pay for data storage on the network (despite adding data to the chain). They do not solve the network closure problem either. So they aren’t a fix-all. But what rollups certainly do is give us a template for achieving scalability beyond whatever the base-layer capacity of a blockchain is — and if you count off-chain scaling as scaling, then you can (generously) say that they at least solve the scalability problem. Their ability to use honest-minority assumptions for block production is a significant advantage — allowing them to inherit the security of the main chain.

What rollups need, above all, is the strongest and most secure settlement layer they can possibly get. Bitcoin would be an obvious choice, but it is not rollup-compatible (and its reliance on a block subsidy for security does leave something to be desired). Ethereum is a promising second candidate, with lots of adoption and solid decentralisation. Ethereum is also positioning itself to be a data availability and consensus layer for rollups to settle on. You could just as well put rollups on *insert your favourite chain* here.

Problems and Challenges With Rollups

In my opinion, there are also some pretty serious problems to still be figured out with rollups. And I’m not talking about things like working with the complexity of ZK Rollups, secure cross-rollup communication, or figuring out inter-rollup composability. I think those things are achievable.

For rollups, there is the quite significant problem of centralisation as an attack vector. With just one sequencer per rollup, there is a risk of the sequencer being attacked and the users who rely on that rollup being disrupted. Whether it is a hack, a physical attack, government coercion, or downtime due to a power outage or an earthquake, central points of failure are never ideal and can be massively disruptive. This is not to mention the fact that since rollups are off-chain entities and cannot be regulated by consensus, there is nothing stopping sequencers from not putting certain transactions onto the chain if they so choose, for whatever reason. We’ll discuss this more in a second.

In the face of all this, it has been suggested that we should ‘decentralise’ rollups, by creating for each rollup its own Proof of Work / Proof of Stake consensus layer! Others have suggested that we shard rollups. Maybe it’s turtles all the way down… some certainly think so. But just how ‘decentralised’ these rollups will really be (and should be) is another question entirely.

It turns out that fundamental trade-offs exist whether you’re on L2 or L1

Is a small ring of people staking to ‘elect’ a sequencer from a pool of two or three sequencers decentralised? Do the ‘stakers’ get paid a portion of the sequencer’s profits (transaction fees) — or some token which gives them a discount in the rollup? Such a token would still eat into the sequencer’s profits. There needs to be an economic incentive to participate in this secondary consensus layer, or people won’t do it at a meaningful scale. And why exactly would sequencers and those building the rollups decide to share their profits with stakers? With leading rollup companies like Starkware being valued at $8 Billion, why would they decentralise and take a profit cut? It’s not like end users would notice the difference. If I was a sequencer-prover which a rollup depended on, my incentive is to run my own centralised rollup. No consensus layer needed.

Furthermore, calling a rollup decentralised and thinking we’ve solved our problems just because we introduced a closed voting ring to elect who produces blocks is a stretch — because not only would there be only a handful of block producers, but the rollup staking ring itself (which has its own security budget) could be compromised and 51% attacked. It costs money to stake, and asking people to spend money to ‘decentralise’ a rollup at little benefit to themselves is a stretch. Furthermore, creating a consensus layer at L2 splits the staking budget between the main chain and the new consensus layer that has been built. With multiple rollups, this problem gets amplified.

In other proposed rollup ‘decentralisation’ models, multiple sequencers will be the ones staking, competing to produce their own blocks for the rollup (just like a L1 consensus layer). But sequencers need to be able to run and maintain expensive hardware so they can achieve computational scale — which is clearly at odds with the rollup’s capacity for decentralisation. How decentralised can a rollup be if there are only a handful of people actually able to participate and be sequencers per rollup? This is not decentralisation — it’s just contending with the age old decentralisation-scalability dilemma off-chain, and then opting for centralised but scalable block production instead of decentralisation and low scalability.

Even if we could leverage all of the nodes participating in the main chain’s (in this case, Ethereum’s) consensus mechanism — and we had statelessness implemented so there were as many nodes as possible — it doesn’t mean a wide range of sequencers/provers suddenly appears for us to elect. The pool of block producers is constrained by the need for high throughput capacity, so we are still fundamentally limited by the scalability-decentralisation trade-off. If we then decided to lower the scalability standards of rollups to have a more decentralised range of block producers, we would lose out on scale. And if we lowered them even more, to make our rollups really decentralised and secure, they would fail to scale and we would have just recreated the main chain, with all its trade-offs, on layer 2.

Of course, it is worth mentioning that nobody NEEDS to use rollups, and the failure of any given rollup does not impact the more decentralised and secure base layer of a chain like Ethereum. But rollups failing would be tantamount to the base layer failing, in the sense that the scalability problem (which is the only problem rollups attempt to fix) was never solved. Ultimately, what all this shows is that the layer 2 solution is itself struggling with the same inescapable trade-offs as the base layer.

In reality, there are no concrete plans for how to decentralise rollups, only promises. So while the scalability vs decentralisation trade-off can be massively reduced by disaggregating the execution layer from the consensus layer and only centralising the execution layer (the rollup), we are still just making trade-offs.

“So what’s the result? Block production is centralized, block validation is trustless and highly decentralized, and censorship is still prevented.” — Vitalik Buterin

For a short list of some other problems with such a centralised block production system, see this section of Vitalik’s older writeup on sharding (the link will scroll you to the correct point in the article).

There's another matter relating to rollups cannibalising the main chain's security. The main chain's native token, in this case ETH, derives its value from people demanding it so that they can use it to pay fees (gas) and interact with the base-layer Ethereum blockchain. If most transactions occur off-chain at negligible fees which get paid to sequencers (as is the great promise of rollups), this should cannibalise the economic value which would otherwise be flowing to the ETH currency. As we have learned before, the economics of a blockchain's native token are important because that is what is used to incentivise the network's security. A counterargument to this line of thinking is that rollup transactions are a bonus to what is already a fully saturated base-chain; but if people could just use rollups for cheap, I don't see why they would stay on the base layer. I won't venture any further into this debate here.

Finally, there is the subject of rollup profitability. What is the incentive for companies like Optimism, Arbitrum, StarkWare and Zksync to bring rollups to Ethereum? They obviously wouldn't spend so much money unless they were going to make it back, plus some. I've already hinted at the answer above, and the answer is fees. In order to fund the whole operation of designing, building and running the rollup (none of these is an easy task), companies like Starkware will collect fees from the transactions they process. Just how large a fee they collect would depend on their specific business model, but they'll work something like this. Given that the goal is to process as many transactions as possible for the main chain, I imagine they will go for a low-fee, high-volume business model. Others will try to implement a token whose supply they control a portion of, but this is just fees with extra steps. I am not sure exactly how much volume would be required to cover all the costs of building and running the rollup, but one would hope that they'd break even and that the whole process would be a competitive use of the capital.

While I have no problem with fees in themselves (they're essential to making a blockchain run), if you'll recall the previous section on The Network Closure Problem, you might see how they can pose a problem here. A given rollup operator will receive transactions, process them, take a fee, and put the batch on the chain. Whether they are fully centralised or running on Proof of Stake, they have a dis-incentive to share their transactions, because for them transactions represent profit, and there is no volunteer p2p network to propagate transactions around between rollups (though perhaps some thought should be given to this idea). Just like Infura, the business model of rollup operators is transaction monetisation. And just like Infura, a rollup operator could simply exclude your transaction from going onto the chain if they wanted to. Section 5.1 of the Bitcoin whitepaper is discarded once more. With no p2p network and a narrow range of block producers, this is even more problematic. However centralised or decentralised a rollup might be would not change the fact that operators are not incentivised to share transactions, and not punished for withholding them.

In terms of getting around block producers potentially excluding certain transactions, I have seen discussion of ‘secondary transaction channels’ and ‘light unassisted exits’ for rollups (and these might be the same thing, I’m not sure) as solutions. To the best of my (limited) understanding, it seems like a decentralised rollup with stakers can have stakers “submit lists of transactions which the next block must include.” This seems like it could work, assuming the rollup PoS had sufficient security (cost of attack) and decentralisation. Though, again, I don’t see why sequencers would even subject themselves to governance by other parties when they could just opt for a centralised rollup.

Data Storage Blockchains — Arweave and Filecoin

Let’s go back to the Data Storage Problem — which, if you remember, was never actually solved. Blockchains grow and grow and grow, and nobody even pays for the storage of data — it is handled by volunteers in the p2p network and archive nodes. Often, the p2p network only stores the current state (which itself grows indefinitely and at an increasing rate) because the full blockchain history is too costly to store. If we optimistically assume that the chain size does not grow to the point where the cost of storing it exceeds the profits of all business models relying on it, there may be at least some small number of parties willing to perform archival functions and store the entire history of a blockchain.

Data storage blockchains are blockchains which are designed to … store data. Like all blockchains, they are powered by a cryptocurrency — which is used to incentivise network participants to do the work outlined by consensus (which in this case, is … storing data). This is cool. Normal blockchains don’t do this.

Some suggest that the data storage problem of blockchains can be solved by offloading data onto data storage chains. Users pay to have data stored, and to retrieve it too, if the model has retrieval costs. They could also act as data availability layers for those interested in disaggregating blockchains. Recall that Solana (a chain which is scaling extremely fast) ‘solved’ its data storage problem by archiving its blockchain state to Arweave and requiring full nodes to only store the past two days’ worth of data. Here, I’m going to be discussing two major players in the space: Arweave and Filecoin.

Filecoin and Arweave have different models, so let's first go through the basic functioning of both before discussing their viability.

Filecoin Model

Note: sentences and paragraphs from this section may be quoted or paraphrased from the Filecoin whitepaper.

In Filecoin, “clients [users] spend FIL tokens for storing and retrieving data, and miners earn tokens by storing and serving data.”

“The Filecoin network employs Proof-of-Spacetime and Proof-of-Replication to guarantee that miners have correctly stored the data they committed to store.”

“Proof-of-Replication is a novel Proof-of-Storage which allows a server to convince a user that some data has been replicated to its own uniquely dedicated physical storage.”

Proof-of-Spacetime “enables an efficient prover to convince a verifier that they are storing some data for some period of time.” “The Filecoin protocol employs Proof-of-Spacetime to audit the storage offered by miners.”

The Filecoin blockchain supports two decentralised exchanges which pair miners and users: one for data storage (the 'Storage Market') and one for data retrieval (the 'Retrieval Market'). “In brief, clients and miners set the prices for the services they request or provide by submitting orders to the respective markets.” This is a free-market pricing system, where price is determined by the intersection of suppliers' willingness to sell data storage/retrieval and buyers' willingness to pay for it. This self-determined pricing system helps ensure that miners do not incur economic losses which would make the provision of the service unsustainable.

Filecoin’s network of full nodes “guarantees that miners are rewarded and clients are charged if the service requested has been successfully provided.”

“Clients can store their data by paying Storage Miners in Filecoin tokens.”

“A client submits a bid order to the on-chain Storage Market orderbook. When a matching ask order from miners is found, the client sends the piece to the miner. Both parties sign a deal order and submit it to the Storage Market orderbook. Clients should be able to decide the amount of physical replicas of their pieces either by submitting multiple orders (or specifying a replication factor in the order). Higher redundancy results in a higher tolerance of storage faults”

“Clients can retrieve data by paying Retrieval Miners in Filecoin tokens.”

“They submit a bid order to the Retrieval Market orderbook by gossiping their order to the network. When a matching ask order from miners is found, the client receives the piece from the miner. When received, both parties sign a deal order and submit it to the blockchain to confirm that the exchange succeeded”

Storage Miners must also ‘pledge’ (basically, stake) to provide storage to the Network. “Storage Miners pledge their storage to the network by depositing collateral via a pledge transaction [basically, staking] in the blockchain. The collateral is deposited for the time intended to provide the service, and it is returned if the miner generates proofs of storage for the data they commit to store. If some proofs of storage fail, a proportional amount of collateral is lost.”

Although there’s more (forgive me Filecoin maxis), that’s the essence of Filecoin. See here or here if you want some videos to start going a little deeper into the project.
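Before moving on, here is a toy sketch of the bid/ask matching at the heart of the Storage and Retrieval Markets described above. The data structures, names and matching rule are my own illustrative assumptions, not Filecoin's actual implementation; the point is simply that prices emerge from what clients will pay and what miners will accept.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Ask:
    """A miner's offer: the minimum price (in FIL) it will accept per GB per period."""
    miner: str
    min_price: float

@dataclass
class Bid:
    """A client's request: the maximum price (in FIL) it is willing to pay per GB per period."""
    client: str
    max_price: float

def match_order(bid: Bid, asks: list[Ask]) -> Optional[Ask]:
    """Return the cheapest ask the client can afford, or None if nothing matches."""
    affordable = [a for a in asks if a.min_price <= bid.max_price]
    return min(affordable, key=lambda a: a.min_price) if affordable else None

# The client will pay up to 0.5 FIL, so the 0.4 FIL ask wins and a 'deal order' would be signed.
asks = [Ask("minerA", 0.6), Ask("minerB", 0.4)]
print(match_order(Bid("client1", 0.5), asks))   # Ask(miner='minerB', min_price=0.4)
```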

Arweave Model

Note: sentences and paragraphs from this section may be quoted or paraphrased from the Arweave whitepaper and yellowpaper.

Instead of a blockchain, Arweave uses something called a ‘blockweave’. In Arweave, “each block is linked to two prior blocks: the previous block in the ‘chain’ (as with traditional blockchain protocols), and a block from the previous history of the blockchain (the ‘recall block’).” This ‘recall block’ associated with every block on the chain is what differentiates the blockweave from the blockchain.

Every time miners want to start mining on a new block (to make money), a recall block (a previous block on the chain) will be selected at random for the block, and miners will need to prove that they have it in order to start mining. This incentivises miners to store past data. This mechanism is called ‘Proof of Access’, or ‘PoA’, and is reminiscent of data availability sampling solutions discussed earlier in the article.

A major difference between Arweave and Filecoin is described in the Yellowpaper:

“PoA takes a probabilistic and incentive-driven approach to maximising the number of redundant copies of any individual piece of data in the network. By contrast, other decentralised storage networks specify an exact number of redundant copies that should be provided for a given piece of data, and mediate this using a system of ‘contracts’”

So, how does Arweave actually pay for data storage? Similar to Filecoin, when users want to add data to Arweave, they need to pay miners in the chain's native token, AR. But unlike Filecoin, the price of data storage is not set by market forces. Here's how it works:

“Transaction pricing in the Arweave network comes in two components: a highly conservative estimate of the perpetual storage cost, and an instantly released transaction reward to incentivise a miner to accept new transactions into the new block.” — Arweave Yellowpaper

Essentially, what the Arweave team has done is:

  • Looked at the historical trend of data storage costs (which decline over time as technology improves).
  • Conservatively extrapolated this trend forward, to get an approximate prediction of how much data storage will likely cost in the future.
  • Made users pay some AR (the Arweave native token) when they add transactions to a block.
  • Arranged for part of each fee to go to miners immediately, while “most of the transaction fee goes to a storage endowment”. The endowment releases the fees to miners over time, and is meant to cover the cost of storing that data forever, based on the conservative cost-of-storage estimates baked into the consensus mechanism.
  • Added a (temporary) block subsidy for miners, paid until the AR supply cap is reached, at which point the network will rely solely on fees.
  • Specified that the network does not release endowment funds if, by the protocol's own estimates, miners will be sufficiently compensated by fees plus the block reward. In these instances, it saves endowment funds to release in future periods where fees plus the block reward do not cover mining costs.

So when you ‘deposit’ data on Arweave, you pay an initial cost for perpetual storage based off of the conservative cost-estimates made by the chain, and this gets paid out to miners like rent over time. The goal of this mechanism is to allow “the network to distribute appropriate quantities of tokens to miners over time, in order to sustainably incentivise the perpetual storage of arbitrary quantities of data.” By arbitrary, they mean any amount of data put onto the chain. Arweave’s promise of perpetual data storage also lies in stark contrast to Filecoin, which has users store data for a period of time.
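As a rough illustration of the pricing logic described above, here is a toy calculation of a one-off 'perpetual storage' fee built on a conservatively extrapolated decline in storage costs. The starting cost, decline rate and horizon are hypothetical placeholders, not Arweave's actual parameters.

```python
def perpetual_storage_cost(gb: float,
                           cost_per_gb_year: float = 0.005,  # hypothetical current cost, in USD
                           annual_decline: float = 0.10,     # assume costs fall 10% per year
                           years: int = 200) -> float:
    """Sum the (declining) yearly cost of storing `gb` gigabytes far into the future.

    With a constant decline rate the yearly costs form a geometric series, so the
    total converges to a finite number even over an effectively infinite horizon.
    """
    total = 0.0
    yearly = cost_per_gb_year * gb
    for _ in range(years):
        total += yearly
        yearly *= (1 - annual_decline)
    return total

# 1 GB at $0.005/GB/year, declining 10% a year, converges towards ~$0.05 in total.
print(round(perpetual_storage_cost(1.0), 4))   # ~0.05
```

Because the yearly cost shrinks geometrically under the assumed decline rate, the total converges, which is what makes charging a single upfront fee for 'forever' storage arithmetically possible, so long as the assumptions hold.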

The Library of Alexandria, Artistic Depiction

Discussion of Data Storage Blockchains

Data storage chains can be used by any user to store data — but also might be able to be used by other blockchains to store historical chain data. Let’s critically evaluate some potential problems:

***Modelling Risk***

Some people take issue with the assumptions about data storage costs made by Arweave. Anyone who's done financial modelling, valuations or investing before may have heard that past results are not necessarily indicative of future results. While it is true that the future is inherently uncertain, for the sake of discussion we can grant Arweave the benefit of the doubt that their conservative estimates will play out and not be disrupted by black-swan events.

***Paying for Bandwidth Costs***

Retrieving data from the blockchain has a cost. For example, it is not costless to send entire movies to somebody else — it costs bandwidth. If you’re a miner on Filecoin or Arweave, you’re going to be serving up a lot of data to a lot of people and frequently incurring the cost of doing so.

‘The Creation of Adam’ — Michelangelo

Now, Filecoin has built into it a pricing mechanism for the private retrieval of data. This is an intelligent move, as it ensures the bandwidth cost of miners is actually compensated for, meaning the network pays for this critical function and achieves self-sufficiency. Arweave has no such mechanism: its payment model does not incentivise miners to perpetually serve data to users. This is problematic, because we want our blockchains to pay for the infrastructure necessary for them to survive with certainty. If miners are not sufficiently compensated for serving data, and doing so would put them into economic losses, then they either won't serve data, or will be forced to leave the network.

However, neither network pays nodes for the bandwidth needed to make data freely available to the public, which makes both unsuitable for IPFS-style casual data access. On our regular blockchains, which are public ledgers, everybody can access data freely (though since those chains only pay for mining and staking, they don't pay for bandwidth either).

***Up-Only Risk***

To function properly, these blockchains must pay nodes more than it costs to store data. If they don’t do this, then nobody will run a node and store data for the people.

But remember, nodes get paid in the native currency of their chain (FIL/AR), whose market price fluctuates due to market forces. So put another way, the coins earned from being a miner need to be sold at prices which compensate miners for the costs of storing data (likely denominated in USD or some other currency) — or nodes will be running at a loss, which is unsustainable and will lead to network collapse.

Such a model works ‘on the way up’, when the token price is increasing and it is profitable to mine on the network. But if the price of the network’s coin falls low enough, it will be unprofitable for miners to continue storing data they have already agreed to store at past prices — at which point they would be incentivised to delete the data and leave the network. If this is confusing, try considering the limit case (which applies to any blockchain): if the price of AR/FIL went to zero, miners would have no incentive to continue spending money storing and serving data they have previously accepted.
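A toy way to see the limit case: a rational miner keeps storing previously accepted data only while the tokens it expects to earn are worth more than the real-world cost of storage. The numbers and names below are hypothetical, purely to illustrate the break-even logic.

```python
def keep_storing(tokens_earned_per_month: float,
                 token_price_usd: float,
                 storage_cost_usd_per_month: float) -> bool:
    """A rational miner keeps storing old data only while expected revenue covers the real cost."""
    return tokens_earned_per_month * token_price_usd >= storage_cost_usd_per_month

# At $2.00 per token the commitment is profitable; after a crash to $0.20 it is not,
# and the miner is incentivised to delete the data and leave the network.
print(keep_storing(100, 2.00, 150))   # True
print(keep_storing(100, 0.20, 150))   # False
```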

Indeed, the Arweave yellowpaper describes that the profitability of miners is “naturally satisfied by consistent release of tokens from the endowment” only when “assuming a stable token price in fiat terms” and accurate cost-modelling predictions. It’s a good thing the prices of cryptocurrencies aren’t volatile or anything… In my opinion, the assumption of a stable or always-increasing token price is suspect. Whether due to aggressive speculation, global macroeconomic events, or black swan events (e.g. war) — the prices of cryptocurrencies are never guaranteed.

It should be noted that Arweave suffers from this problem more than Filecoin, because it attempts to cover the costs of perpetual data storage with one-off upfront payments at the point of ‘depositing’ data. Users pay a certain amount of AR tokens to miners today to store data forever—but the price of these AR tokens fluctuates and miners may not actually be compensated for the data they are being required to store.

In the event of a downwards price spiral or prolonged price decline, both Filecoin and Arweave miners would suffer economic losses. In Arweave, the losses would last until the token price appreciated enough to cover the cost of storing all of the data that has ever been put on Arweave. In Filecoin, the losses would at some point stop when the previously agreed upon storage contract expires. Whether in either case miners could (and would) actually afford to incur losses for an uncertain period of time is questionable. At scale, the amount of losses would be greater — and the problem exacerbated.

To me, this is reminiscent of the Luna downward price spiral, where the model’s stability relied on the LUNA token price continually going up.

The Arweave whitepaper also states that “the incentive to maintain the weave also increases as the network and documents will reinforce the value of the tokens”. I think what they are getting at with this comment is that as increasingly important and needed data is stored on the network, people will be ‘incentivised’ to pay for it — and cover the potential losses of miners should the above scenario happen. However, economics teaches us that another possible outcome is that you could just get a market failure to provide the service instead, because 1) people don’t provide massive services at scale for free, 2) even if they tried to, incurring economic losses would not be sustainable, and 3) there may be a prisoners-dilemma scenario, where people reason: “if someone is going to pay for it, then I don’t need to.”

How about new data? If the token price collapses for whatever reason, does the acceptance of new data onto either network also put miners into losses?

Filecoin has a free-market price for data storage for a fixed period of time that is set between miners and users. This means that in the event of a downward price spiral, miners would not take on data unless they thought that the USD value of the FIL currency they were being paid in would exceed the costs of data storage for the storage period. So they would be able to either refrain from accepting new data until the FIL token price stabilised, or they could agree to store data on conservative estimates of the FIL price.

Arweave does not use a market pricing system for its data pricing — so how does it know what the price of the AR token is in order to adjust how much to pay miners for new data with endowment funds? Instead of markets or oracles, Arweave uses a mechanism outlined here to estimate its token price.

I won’t delve too much into it here, but I believe that this ‘proxy’ approach is strictly inferior to a real-world market pricing approach. One of the assumptions is that: “as inflation rewards decrease, the price of AR should increase in order to keep up with the costs of mining” (emphasis added). Furthermore, it relies on using increasing mining difficulty as a proxy for price going up — but there are often significant divergences between mining difficulty and price on other chains which could lead to serious mispricing of data.

The map does not equal the territory.

For some examples of how difficulty is not always a good estimate for price (due to significant directional divergence AND the fact that it increases at different rates to price), see Litecoin, Ethereum, Bitcoin, Dogecoin and Monero. These divergences between mining difficulty and token price are frequent enough and last long enough to make estimating your (token denominated) profits on the basis of mining difficulty a bad strategy. Implied approximations are not equivalent to real price, and this has economic consequences. Imagine trying to run a profitable shoe business by approximating the price of leather using the price of cows instead of using the real price of leather in your calculations. There’s a good chance you’d run into economic losses and be unprofitable. This is all notwithstanding the fact that the Arweave Consensus Mechanism does not pay for the serving of data to users either.

To summarise: data storage blockchains pay their miners in cryptocurrency, but miners have real-world hashing/data storage/data retrieval costs denominated in alternative currencies like dollars. The price of the native currency of the chain which nodes get paid in (FIL/AR) fluctuates freely due to market forces. Both protocols will induce people to put data on the chain. But if the token price falls enough, the historical payment to store the data (including any endowment funds) may no longer cover the costs of storing the data. In such a scenario, miners are incentivised to leave the network, and the chain would likely stop functioning.

***The Security Problem***

Reliance on an alternative chain for any network function poses security risks. Since Filecoin and Arweave run on completely different consensus mechanisms to any chain that would be leveraging them for data storage (e.g. Solana, Ethereum), they could be targeted as a vector for long-range attacks on the chains that rely on them.

Attacking a data storage chain directly impacts any chains relying on it. If a chain like Ethereum relies on a data storage blockchain for a given function (e.g. archiving historical data), then Ethereum reduces its security threshold for that particular network function to whatever that data storage chain's level of security is. This also means that incidental attacks on Filecoin or Arweave which have nothing to do with the chains relying on them would still threaten those chains' stability.

Also, I am not sure if there is some bridging risk when trying to get a blockchain to communicate with a data storage chain — but the history of bridges in crypto essentially consists of a history of catastrophic failures. So there’s that.

***Arweave Wildfire***

On a completely different note — Arweave claims to solve “the problem of data sharing in a decentralised network by making the rapid fulfilment of data requests on the network a necessary part of participation” with a system called ‘Wildfire’. This is a direct attempt to solve part of the volunteer (or ‘free rider’) problem of blockchains, where the collection and sharing of transactions is not an economically incentivised behaviour, and is handled by a volunteer network.

What Wildfire does is create a ranking system for each node based on how quickly it responds to requests and accepts data from others. If your ranking is low, new blocks and transactions are distributed to you more slowly, which eats into your mining profits. If your ranking is very bad, you can get blacklisted from the network entirely. This is intended to create a financial incentive for nodes to share data.

I think Wildfire is a very good attempt to address a fundamental problem, and one which other chains would do well to take notice of. Though I am not really sold on how effective it will be. It adds an incentive to keep data flowing around the network (you don't lose profits), but it fails to truly dis-incentivise the undesired behaviour of transaction hoarding, which is still clearly profitable. This means there is still a direct profit incentive for people to undertake the undesirable behaviour, and if that profit outweighs the dis-incentive Wildfire introduces, the undesirable outcome will still occur. Because of this, people will still try to figure out ways to game the Wildfire system. Furthermore, if large and network-valuable miners are blacklisted because of Wildfire, it would undermine security by lowering the hashrate, which is probably not a worthwhile trade-off.

Wildfire only incentivises miners to perform a basic network function to help maintain openness — it does not economically incentivise a p2p network to exist. So a key aspect of the volunteer problem remains unsolved. However, I am very impressed with this innovation and attempt to solve a fundamental problem. They see the problem, and are attempting to remedy it.

Concluding Thoughts on Data Storage Blockchains

I have been relatively critical here, but that’s kind of the point of this part of the article. Ultimately, I am very glad that pure data storage blockchains exist (all blockchains store data), and I think there are some really cool ideas being tested and explored in both Filecoin and Arweave. Unlike other chains, they actually take the data storage/pricing problem seriously and attempt to address it in innovative ways. I want the experiments to continue, and am interested to see how it all plays out. But at least for the purpose of solving the data storage problem for other chains, I do not feel confident in them — owing to questions regarding their long-term viability, and also due to issues regarding bandwidth and open, self-sustainable public data availability.

That being said, for chains which have elected to scale more rapidly (e.g. Solana), using a data storage chain for the archiving of data is probably necessary to preserve the p2p network, and preferable to nothing.

Part IV — A New Set of Solutions

It’s time to wake up from the dream.

‘Dream caused by the flight of a bee around a pomegranate a second before awakening’ — Salvador Dali, 1944

Snapping Back to Reality

Let us recall our key problems. We have the volunteer problem, which creates on the one hand the blockchain trilemma (due to the data storage and scalability problems) — and on the other hand, the network closure problem.

In addition to all of this, we have the problems of majoritarian attacks, discouragement attacks, block subsidy reliance, the commodification of the work functions of PoW and PoS chains, and other incentive-based problems such as header-first mining and delayed block propagation.

In response to this stack of problems, we have come up with a myriad of impressive and innovative technical solutions to try and squeeze the most we can out of the trade-offs which constrain us. Statelessness, state expiry, sharding, off-chain scaling, off-chain data storage — we’ve covered them all. However, these technical solutions either fail to address the fundamental issues at hand, or further sacrifice the Satoshi Properties in some way.

In discussing so many technical ideas, we fall deeper into the dream and forget what the problems really are. What are they again?

  1. The p2p network is comprised of volunteers, and we free-ride on the work they do and the critical network infrastructure they provide. This work is not paid for — and so it is underprovided. This problem gets worse at scale.
  2. We also dump data on the chain today which nodes bear the cost of in the future, leading to blockchain bloat and an eventual collapse of the network infrastructure which the p2p network provides.

These problems mean that the self-sufficiency of every network which has decided to scale is in question. The benefits of scalability today don't matter if the chain isn't going to exist tomorrow.

As a refresher, let's go back to Bitcoin. Bitcoin (BTC) doesn't fix these problems. Bitcoin accepts these problems, and then decides not to scale in response to them. Remember why Bitcoin can't scale? Because scaling would increase costs to unsustainable levels for the volunteer p2p network, leading to a crumbling of essential network infrastructure and a loss of openness and security.

And why are transaction collection, validation and propagation, the provisioning of user-facing infrastructure, and the storage of data not paid for to begin with? Because Proof of Work and Proof of Stake blockchains pay only for mining and staking.

The fundamental problems here are not technical problems. They are incentive problems. When all is said and done, no amount of technical tinkering will have solved the core problems which create the blockchain trilemma.

In the face of these problems, I am going to discuss a new set of solutions proposed by the innovative new blockchain Saito, which I believe solve the fundamental incentive issues at the heart of our current blockchain paradigm. I am going to discuss the blockchain design, and then the consensus mechanism design. These two solutions go hand-in-hand, so I suggest reserving all judgements until the whole thing has been covered.

Just to dispel any misconceptions before I begin — Saito is designed as a layer 1 blockchain unto itself. It can play a supporting role for other blockchains (which we will discuss later), but to avoid any confusion it should definitely be understood as its own self-sufficient blockchain. It provides a basis for a sound monetary ledger (like Bitcoin) as well as being a ledger to support web3 applications (like Ethereum). I’ll say it now: Saito is different. But keep an open mind, and you might just find some very interesting treasures in the unknown.

Original artwork by Alex Gray

A New Kind of Blockchain

Let’s start by looking at how Saito addresses the data storage and pricing problems with a new kind of blockchain design.

In normal blockchains, we add data to the chain today and it stays there forever. We start with block 1, and then add block 2 on top of that — and then blocks 3, 4, 5, …, 742686. The blockchain grows block by block, indefinitely. As we know, the problem with this is that the costs of storing the infinitely growing amount of data forever are not paid for — and volunteers bear the burden. Over time, we get blockchain bloat and network collapse. This is a serious and unsolved problem.

To fix this, Saito first establishes for its blockchain a unit of time called an 'epoch'. An epoch lasts a certain number of blocks (e.g. 100 blocks). The epoch length is specified by consensus code.

Saito then specifies that when a block falls out of the current epoch, the transactions within it become unspendable. For example, imagine a Saito chain with an epoch length of 100 blocks. When the 101st block gets added to the chain and a new epoch begins, all of the transactions in block 1 are made unspendable.

From: ‘How to Build a Blockchain that Will Never Collapse’ (Video)

However, any transactions in the old block (block 1) which have enough tokens to pay a special fee called a 'rebroadcasting fee' must be included by the block producer in the new block being added to the tip of the chain. So whoever creates the new 101st block is forced by consensus to include in block 101 all transactions from block 1 that have enough money to pay this special fee. Whoever creates the 102nd block will need to rebroadcast all transactions in block 2 with enough to pay the fee. This system is called 'Automatic Transaction Rebroadcasting' (ATR).

A neat side effect of the ATR system is that block-producers are forced to store a full copy of the chain, as they are not allowed to produce blocks unless they have all the historical transactions to rebroadcast into their current block. This is nice, because now we no longer need to worry about security issues like header-first validation, or problems like the provisioning of archive nodes. Not only this — they are paid for it.

From: ‘Fixing the Tragedy of the Commons in Blockchain’ (Video)

After two epochs pass, block producers are allowed to delete an old block’s data. So in the example with a 100-block epoch, nodes will delete block 1’s data once block 200 is produced. Two epochs must pass before block 1 can get deleted: the epoch covering blocks 1–100 AND the epoch covering blocks 101–200. Similarly, block 2’s data will be deleted when block 201 is produced.
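Here is a minimal sketch of the bookkeeping described above, assuming a 100-block epoch: when block N is produced, the eligible transactions from block N - 100 are rebroadcast into it, and any block older than two epochs can have its data deleted. The structure and names are illustrative, and `atr_fee_for` is a placeholder that the pricing discussion below fills in.

```python
EPOCH_LENGTH = 100  # blocks per epoch (illustrative)

def produce_block(chain: list, new_txs: list, atr_fee_for) -> dict:
    """Build block N: new transactions plus the eligible rebroadcasts from block N - EPOCH_LENGTH."""
    n = len(chain) + 1                                   # height of the block being produced
    rebroadcast = []
    if n > EPOCH_LENGTH:
        expiring_block = chain[n - EPOCH_LENGTH - 1]     # the block now falling out of the epoch
        for tx in expiring_block["txs"]:
            fee = atr_fee_for(tx)
            if tx["balance"] >= fee:                     # it can pay rent, so consensus forces it back in
                rebroadcast.append({**tx, "balance": tx["balance"] - fee})
            # transactions that cannot pay the ATR fee simply fall off the chain

    block = {"height": n, "txs": new_txs + rebroadcast}
    chain.append(block)

    # Blocks older than two epochs may now have their data deleted (e.g. block 1 once block 200 exists).
    for old in chain[: max(0, n - 2 * EPOCH_LENGTH + 1)]:
        old["txs"] = []                                  # prune the payload, keep the header
    return block
```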

This whole mechanism gives us some interesting properties. What we have here is not a blockchain where the number of blocks grows forever. Instead, it is a blockchain of a fixed length of whatever the epoch length is. It is best conceived as a looping, circular chain.

Note that the epoch length could be made dynamic (fluctuating), but this is beside the point here.

The blocksize can be set to be any fixed value, or allowed to float freely, where block producers can create blocks of whatever size they want. This is purely a consensus decision. We’ll get back to this later.

From the outset, what the introduction of 'epochs' into the blockchain has done is create a rent-style system which forces data to pay to stay on chain at fixed intervals. If you transact and put data on the chain, you will need to pay a rebroadcasting fee to have it stay on the chain (since you are responsible for the chain size growing). This is just like the real world, where if you use cloud storage providers like Google Drive, Microsoft OneDrive or Dropbox, you pay something like a monthly fee to compensate for the provision of the service. Since blockchains are essentially giant databases, this comparison makes sense. One might object: "but on chain X you don't need to pay to keep data on chain, so why would you do it on Saito?" However, other chains are only the way they are because the cost of growing the chain (which the user who adds a transaction is responsible for) is offloaded to volunteers. The real burden of proof is on chains that don't pay for their own data storage costs to explain why they'll be able to get away with doing so in the long term without facing catastrophic infrastructural problems as the pressure on volunteers grows worse.

One of the biggest lies the crypto space has convinced itself of is that data storage on blockchains is somehow able to be free just because it’s on a blockchain, and can remain free forever without the chain collapsing. That *somebody* is going to pay for it, so you don’t need to worry about it.

A question arises here. How is this automatic transaction rebroadcasting fee actually determined? We don’t want central planning, because that gives us no guarantee we’ll set the right price. But we also don’t know how much the ATR fee should be to sufficiently cover the costs of storing the data for nodes. How can we solve the data pricing problem with this system?

The solution to the data pricing problem is that the ATR fee which transactions pay to stay on the chain is made to equal a positive multiple of the (smoothed) average fee paid by transactions over the most recent epoch. For example, the ATR fee could be set equal to 2x, 1.5x or 1.1x the average per-byte fee in the last epoch. But it must be a positive multiple, because fees lower than what nodes are currently accepting would mean fees below what is profitable for them to accept.
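Filling in the placeholder from the earlier sketch: the ATR fee is just a consensus-chosen multiple of the previous epoch's (smoothed) average fee per byte, scaled by the transaction's size. The multiplier value and the unit names here are illustrative.

```python
def make_atr_fee_fn(avg_fee_per_byte_last_epoch: float, multiplier: float = 1.5):
    """Build the `atr_fee_for` callback used in the block-production sketch above."""
    def atr_fee_for(tx: dict) -> float:
        # Rent owed per epoch = positive multiple of last epoch's average fee per byte, times tx size.
        return multiplier * avg_fee_per_byte_last_epoch * tx["size_bytes"]
    return atr_fee_for

# e.g. if the last epoch averaged 2 fee-units per byte, a 250-byte transaction
# owes 1.5 * 2 * 250 = 750 fee-units of rent to stay on chain for another epoch.
atr_fee_for = make_atr_fee_fn(avg_fee_per_byte_last_epoch=2.0)
print(atr_fee_for({"size_bytes": 250}))   # 750.0
```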

Like in Bitcoin and Ethereum, the fees of new transactions in Saito are set by the supply of and demand for blockspace, and block producers will not accept new transactions whose fees do not cover the marginal cost of processing them. Meaning, they will charge fees for new transactions which at minimum cover the relevant costs associated with them (e.g. processing, bandwidth). But unlike Bitcoin and Ethereum, in Saito block producers must pay an upfront cost of storing the entire previous epoch's worth of data (the entire chain) before they can start producing new blocks. Because of this, in Saito block producers will not take on new transactions unless their fees are high enough to cover both the cost of processing the transaction and the data storage costs they have incurred. Data storage costs are factored into the marginal cost of new transactions.

Cash in hand mate.

In chains like Bitcoin and Ethereum, block producers get paid today to process transactions which they don’t need to store forever, but which stay on the chain forever. They can leave the network, or pass these costs onto volunteers in the p2p network or other miners/stakers, and continue on with their header-first validation. Saito, however, makes chain storage a necessary cost of block production — incentivising block producers to charge a fee which also covers the data storage costs.

Since the ATR fee is not set by block producers, block producers can only raise fees on new transactions that come onto the chain. However, the amount which old transactions have to pay in ATR fees is directly determined by the cost of new transactions (it is necessarily a positive multiple of the recent epoch's average fee), so in reality block producers can also influence the ATR fee which old transactions pay, ensuring that it covers all relevant costs. (Note that, for reasons we can discuss later, Saito's fees will still be much lower than other chains'.)

Let’s summarise. Block producers will charge whatever fees recuperate their operational costs. Saito makes them pay upfront costs for the thing we want them to do (store the whole chain) in advance, so they are forced to raise fees on new transactions (which raises ATR fees paid by old transactions) to cover these costs. In contrast with other blockchains, which infinitely add data to the chain without pricing or paying for it, and offload more and more of this data to volunteers who eventually won’t be able to store it — we now have a blockchain which pays for its own data costs. In Saito, block producers aren’t paid upfront and just expected to stay around and cover future costs — future costs pay for themselves, because old transactions pay ATR fees.

An incredible consequence of the fact that individual transactions fall off the chain if they don’t pay a fee to stay on the chain is that the blockchain could never collapse from growing too large in size, even if we allowed it to scale. Suppose we were to remove the blocksize cap and allow block producers to post blocks of any size — scaling the chain as fast as possible. Suppose next that the blockchain grew so large in size that data processing and per-epoch storage costs were growing so high as to make nodes unprofitable. (There is a marginal cost to processing transactions which rises as throughput does, so there is a level of fees producers cannot accept without accepting losses). In this case, the fees charged on new transactions would rise to cover costs, which would increase ATR fees even more (they are a positive multiple of regular fees). This would continue right up until the point where enough data was pruned off the chain (for not being able to pay the higher ATR fee) that costs would go down and it would no longer be unprofitable to accept new data onto the chain. In this way, the ATR mechanism would allow the chain to self-regulate its own size based on what network participants can afford to store.

Put another way: if the per-byte storage and processing cost of a transaction is covered by its fee, then there is an incentive to accept the fee and grow the chain. But if the marginal (additional) amount of data put on the chain is unprofitable to accept, then the chain will not grow. In fact, if the chain was so large that maintaining it was unprofitable, it would prune off data and shrink in size until the marginal transaction was no longer unprofitable to process. This system with a variable blocksize would give us an equilibrium chain size determined by free market forces, where the amount of data coming into the chain is equal to the amount of data which goes out.

In this thought experiment, with a freely fluctuating blocksize and massive demand to meet it, only those with the hardware to store a high-throughput blockchain would be able to participate. However, we are not in the same situation as traditional blockchains. Because Saito actually compensates nodes for the costs of running this hardware, rather than hoping volunteers just continue to bear the burden, the amount of ‘decentralisation’ lost by scaling would be significantly less than volunteer-based systems.

Obviously, one could still impose a blocksize cap on a Saito-type chain if they wanted to ensure a minimum level of decentralisation. But keep in mind we're only talking about fixing the data storage/pricing problem here, not the blockchain trilemma. The solution we have discussed introduces a pruning mechanism which prices and pays for data for chains of any level of scalability, ensuring that even a blockchain like Solana would never collapse. However, this doesn't remove the trade-off between scale and decentralisation. We'll get to how Saito dismantles the blockchain trilemma later.

‘Three Worlds’ — M.C. Escher

The introduction of fixed length epochs and ATR fees gives the blockchain a way to create a market price for data. In regular blockchains, we can’t price the data storage costs of transactions because they are on the chain forever. These unpaid costs are borne as losses by volunteer nodes, who will eventually drop off the network as costs rise over time. In a Saito based blockchain, the transaction pays rent (an ATR fee) every epoch (fixed period of time) to stay on the chain. This periodic fee creates a pricing and payment mechanism for on-chain data. It can be considered an advanced form of state expiry, with state automatically paying rent at prices set by market forces rather than by a central planner. For some simple videos explaining this mechanism, see here, here and here.

At this point I would remind you that the only possible coin which has a chance (but no guarantee) of being able to handle infinitely increasing chain size is Bitcoin, under the hope that hardware storage costs decline at a sufficiently high rate to lower or keep flat storage costs over time. Offloading data to Arweave relies on questionable assumptions, as discussed — and data sharding requires an ever-increasing (but never incentivised) node count.

In non-Saito blockchains, mining/staking nodes often prune (delete) data because they have already been paid — even though they are not supposed to. The data has already paid its fee, and its deletion is incentivised because it won’t pay anything else in the future, despite the blockchain’s promise of perpetual storage. Even in Bitcoin, miners are guilty of blockchain pruning. A Saito-style chain design forces nodes not to prune, because you can’t make new blocks if you prune old data. All the work of storing data and validating blocks happens before getting paid. Nodes are forced to store data forever as long as there is money to pay for it to be stored. Moreover, a market-based price is used for data storage. After transactions stop paying ATR fees, then the network allows them to fall off the chain. In my opinion, this is preferable to both indiscriminate data pruning and degrading the p2p network over time.

Overview of Chain Design by Co-Founder of Saito

What we have discussed here is a new kind of blockchain design that, for a given level of throughput, solves the data storage and pricing problem. This means we have a blockchain which is not subject to blockchain bloat and collapse, helping to achieve the Satoshi Property of self-sufficiency/self-funding/chain sustainability on a level beyond security.

But we are only just getting started with Saito.

A New Kind of Consensus Mechanism

Saito brings with its novel chain design a new kind of consensus mechanism, as an alternative to Proof of Work and Proof of Stake. It’s got a bit too many moving parts for a ‘Proof of X’ style name to be valid, which is why it’s instead named ‘Saito Consensus’. If we were to go along with naming convention, the least reductive name (one I’ve seen suggested by a community member) would be ‘Proof of Value’, as what Saito Consensus does is incentivise nodes to deliver value to users, as defined by what users are willing to pay for.

Without any further hesitation, let’s explore Saito Consensus.

Saito Consensus Part 1: Routing Work & Block Production

Let’s quickly review how PoW/PoS consensus mechanisms work.

  1. Users interact with the network. Their transactions are received by a node, which then shares them across the network with other nodes. Those nodes then propagate the transactions even further across the network. Without a volunteer p2p network, or at scale, we run into network closure, as there is no economic incentive for block producers to broadcast transactions.
  2. Miners/Stakers bundle transactions they have received into blocks, and compete to put them on the chain. They expend work (hash/stake) to produce blocks, and get paid when they propose a valid block.

In Saito, things work differently. It is the peer network which produces blocks. To produce a block, a node is required to have accumulated a certain amount of 'routing work'. Routing work is a measurement of the efficient collection and sharing of fees.

A user’s transaction comes into the blockchain, being first received by node A on the p2p network. Node A broadcasts it to peer nodes B, C and D — who then further broadcast it to other nodes E, F, G — and so on. Each time the transaction (or, fee) is broadcast to another node, we say it ‘hops’ deeper into the network. In this example, the first hop was from the user to node A. The second hop was from node A to nodes B, C and D — and the third hop was from nodes B, C and D to nodes E, F and G.

Note that the one fee has multiple ‘routing paths’ through the network. Routing paths trace the different ways a transaction hops across network nodes. They are kept track of by every node attaching cryptographic ‘routing signatures’ to a transaction before rebroadcasting it. Keep in mind that even if a node has a transaction, it does not automatically mean that it is on every routing path a transaction has. As below, node B and node C both have the same transaction given to them by node A — but are on different routing paths (blue and orange).

One transaction, three routing paths (see bottom left of image)

I said that in Saito nodes need a certain amount of ‘routing work’ to produce a block, and that routing work is ‘a measurement of the efficient collection and sharing of fees’. How exactly does this measurement work?

Each fee generates a certain amount of routing work ‘points’ for a node depending on how many hops deep it is into the network. Every hop deeper a transaction is into the network, the amount of routing work it gives to the node collecting it halves. The first node to collect a fee from a user gets the most routing work points out of it. The second node gets half as many routing work points as the first node did. The third node gets half as many routing work points as the second node. And so on. This means that the deeper you are in the network, the more inbound fee-flow you need to produce blocks at a competitive pace.

Routing work points are derived from the transaction fee. Every hop, the routing work a fee provides gets cut in half.

As the above diagram shows, a single transaction can generate a bunch of routing work for different nodes as it passes through the network. The earlier nodes to collect the transaction get more routing work from it.

When a transaction is put into a block by a node, it contributes 'routing work' to that block. The amount of routing work a transaction contributes to a block is equal to the amount of routing work it generates for the node putting it into the block. When a node accumulates enough total routing work, meaning it meets a routing work threshold, it can publish a block on the chain.
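A minimal sketch of the halving rule and the production threshold described above, assuming the first router earns the full fee value as routing work. The names and the threshold are illustrative, not Saito's actual parameters.

```python
def routing_work(fee: float, hop: int) -> float:
    """Routing work a node earns from a fee it was the `hop`-th node to handle (hop = 1 for the first router)."""
    return fee / (2 ** (hop - 1))

# A 100-unit fee: the first-hop node earns 100, the second 50, the third 25, and so on.
print([routing_work(100, h) for h in (1, 2, 3, 4)])   # [100.0, 50.0, 25.0, 12.5]

def can_produce_block(accumulated_routing_work: float, threshold: float) -> bool:
    """A node may publish a block once the routing work across its collected fees meets the threshold."""
    return accumulated_routing_work >= threshold
```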

Let’s run through this mechanism:

First, a transaction enters the network. Then, it gets broadcast around the network by different nodes, creating multiple routing paths. A transaction is worth more to nodes who are quicker to collect it, because nodes earlier in the transaction's routing path gain exponentially more routing work from collecting it. Eventually, one of the nodes on one of the transaction's many routing paths accumulates enough total routing work (from collecting other fees as well) to publish a block containing that transaction (among others in its mempool) onto the blockchain.

Here’s a key detail. The payout mechanism in Saito is probabilistic.

Every block contains transactions, and every transaction in a block has a unique routing path with a total amount of routing work associated with it. The total amount of routing work for any given transaction is equal to the sum of routing work points it generated for all the nodes in its routing path. Every block therefore has a ‘total’ amount of routing work, equal to the sum of total routing work for all of the transactions inside it.

The probability of getting paid is the amount of routing work a node contributed to a block divided by the total routing work in the block.

If you contributed 10% of routing work to a block, you have a 10% chance at receiving that block’s reward. If you contributed 90% of the block’s routing work, you have a 90% chance at being paid the block reward. This means that on average, nodes get paid proportionally to the amount of routing work they contribute to any published block. Even if somebody else publishes the block.
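Here is a sketch of the proportional lottery described above. The weighted draw uses a local random number generator purely for illustration; in the real protocol the randomness would have to come from consensus data so that every node agrees on the winner.

```python
import random

def pick_paid_router(contributions: dict) -> str:
    """Select one routing node, with probability equal to its share of the block's total routing work."""
    nodes = list(contributions)
    weights = [contributions[n] for n in nodes]
    return random.choices(nodes, weights=weights, k=1)[0]

# A node that contributed 90 of the block's 100 units of routing work wins ~90% of the time.
print(pick_paid_router({"nodeA": 90.0, "nodeB": 10.0}))
```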

Example of payout chance with one-transaction which has made 4 hops into the network. It is not necessarily the node which produces a block which gets paid. Earlier-hop nodes have a higher chance of receiving the block reward.

This, combined with the fact that the amount of routing work nodes get halves per hop, has several implications.

The first is that if you are not the only node with a transaction, and do not have enough routing work to publish a block, you will be racing to get that transaction to as many other nodes as possible — in the hopes that they will then put it on the chain so that you can get a chance at being paid. If the same transaction with an alternate routing path that you are not on gets put on the chain, then you are not eligible to get paid.

The second, and hugely important implication, is that it means that the nodes which are facing and closest to users will get paid more.

Nodes on Saito are incentivised to contribute value to the network, which requires operating public infrastructure. If you are a third-hop node, you have an incentive to become a second or first hop node. This means positioning yourself closer to users, which means improving your efficiency at servicing users — becoming a better ISP. Network-layer efficiency is not something other blockchains create competition for.

It also means attracting more direct fee flow so that you can become a first hop node. This could be achieved by providing anything that users value (whether you service lite clients, charge lower fees, or donate some of your profits to charity). Since users can decide who their payment is broadcast to, nodes will do whatever it is that users value and are willing to pay for.

If users want faster transaction confirmation, a node can advertise this and route transactions throughout the network. If users don't care about confirmation speed and want low fees, a node could instead not forward transactions, reducing its bandwidth costs (and therefore its transaction fees), and market itself on that basis. If users want to fund the development of software or applications on the network, then they can! A dapp developer can code up their dapp to have transactions route through their node first, to provide a revenue stream (which, interestingly, allows for tokenless dapps). Or if a Saito application relies on off-chain data, then nodes will be incentivised to run infrastructure to have that data so that they can collect the transaction fees relevant to that app. This could even mean running nodes for other blockchains. Users could even route fees to a multisig address representing a consortium of interests (e.g. an animal welfare group), should they wish. A routing-work-based system such as this pays for whatever users want by putting nodes in competition to collect user transactions first; in other words, to service users.

This leads to a network which dynamically reconfigures to provide value to users — as defined by their terms.

With this innovation, the Volunteer / Free-Rider Problem and the Network Closure Problem are solved. The network now self-optimises to provide whatever infrastructure it needs — rather than simply only paying for mining or staking.

“Where other blockchains explicitly define which activities have value, Saito lets the users signal what services provide value through fee-pricing, while the network infers who deserves payment.” — Saito Whitepaper

But what we have covered so far is only half of Saito Consensus. We still need to make this entire thing at least as secure as PoW and PoS. Without security, nothing has actually been solved at all. Maybe we could even do better than Proof of Work and Proof of Stake security?

Saito Consensus Part 2: Payment and Security

In Proof of Work and Proof of Stake consensus mechanisms, miners and stakers get paid for producing valid blocks and adding them to the chain. It is expensive to produce blocks (requiring hash or stake), and after producing a block you get paid the fees inside of it, plus a block subsidy if there is one. The p2p network (apart from those nodes on it who are also mining and staking) consists of volunteers.

In Saito Consensus, the p2p network produces blocks and gets paid proportionally to the routing work it contributes to each block. This incentivises the provisioning of network infrastructure using the hop-halving routing work mechanism outlined above. But it only takes routing work (derived from the efficient collection of fees) to produce a block, which means that block production is not a very costly affair compared to PoW/PoS. On its own, this does not afford us much security.

“Saito cannot simply give the fees directly to block producers: that would allow attackers to use the income from one block to generate the routing work needed to produce the next.

Dividing up the payment between different nodes is preferable, but as long as block-producers have any influence over who gets paid a savvy attacker can sybil the network or conduct grinding attacks that target the token-issuing mechanism.”

— Saito Whitepaper

Notice I have said that in Saito Consensus, nodes must accumulate routing work in order to produce a block. I have also said that nodes get paid proportionally to the amount of routing work they contribute to blocks. But nowhere have I said that Saito nodes get paid for producing blocks.

One of the crucial differences between Saito Consensus and other consensus mechanisms is that Saito Consensus does not pay for block production. Producing a block and getting paid are not the same event in a Saito-style blockchain. Instead, the production of blocks is merely a precondition for getting paid.

So how does Saito do it then? How does the chain achieve security? The answer is: instead of making block production the expensive activity, Saito Consensus makes getting paid the expensive activity.

Saito pays nodes with the fees inside of blocks. There is no block subsidy. Once a node produces a block, all of the fees inside it are burned (destroyed). To resurrect the fees so that they can be paid out, someone has to do some work. Each block has attached to it a hashing puzzle, called a ‘golden ticket’. Miners compete (by spending money) to find the golden ticket solution for the newly produced block, and if they find it and broadcast it around the network, the fees from the block (the block reward) are released. Only one solution may be included in any block, and the solution must be included in the very next block to be considered valid.

Every block has a ‘golden ticket’ mining puzzle whose solution must be included in the next block to unlock the block’s fees.

This hashing process is similar to Bitcoin, where miners compete to find a unique mining solution for every block — but the main difference is that in Saito Consensus, miners are not block producers. The p2p network produces blocks, and miners unlock the funds from blocks. The difficulty of mining adjusts to target a certain number of golden ticket solutions per block (in Classic Saito Consensus, 1).
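For intuition, the golden ticket lottery described above can be sketched as a simple hash-threshold puzzle keyed to the block being unlocked. This is a simplified stand-in, not Saito’s actual puzzle or validation code: the hashing scheme, difficulty encoding and names here are assumptions made for illustration.

```python
# Simplified stand-in for the golden ticket lottery. The real puzzle,
# difficulty encoding and validation rules in Saito differ; this only
# shows the shape of the mechanism: hash against the newly produced block
# until a solution falls under a target, and the solution only counts if
# it is included in the very next block.

import hashlib
import os

DIFFICULTY_TARGET = 2 ** 240  # lower target = harder puzzle (example value)

def ticket_value(block_hash: bytes, nonce: bytes) -> int:
    """Hash a candidate solution against the block it is trying to unlock."""
    return int.from_bytes(hashlib.sha256(block_hash + nonce).digest(), "big")

def mine_golden_ticket(block_hash: bytes, max_attempts: int = 1_000_000):
    """Spend hashing work (electricity) searching for a valid solution."""
    for _ in range(max_attempts):
        nonce = os.urandom(32)
        if ticket_value(block_hash, nonce) < DIFFICULTY_TARGET:
            return nonce   # broadcast this; it must appear in the NEXT block
    return None            # no ticket found: that block's fees stay burned

def ticket_is_valid(prev_block_hash: bytes, nonce: bytes) -> bool:
    """Consensus check: a ticket included in block N+1 must solve block N."""
    return ticket_value(prev_block_hash, nonce) < DIFFICULTY_TARGET
```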

So while producing a block is a prerequisite for receiving payment in Saito Consensus, it is not sufficient to receive payment. Funds must be unlocked through hashing work, which costs money. This gives us security properties comparable to PoW (for now), along with some interesting new ones.

After a miner finds the golden ticket solution for a block and broadcasts it out to the network, the block’s fees are released. But who do they go to? We want to pay routing nodes, who did the routing work we want to incentivise — but we also want to pay mining nodes, who help to secure the blockchain. So we pay both. The block reward is split between routing nodes and mining nodes, according to a consensus-determined ratio called ‘Paysplit’. Discussion of the paysplit ventures into more advanced territory — so for now (and this is the default in Saito) let’s just say 50% of the block reward goes to a random routing node on the network who contributed routing work to that block, and 50% goes to the mining node who solved the mining puzzle for that block.

I say random, but in actuality the probability of a routing node receiving payment from a block is proportional to its contribution to the total routing work in that block. By random, I just mean that all nodes who contributed routing work to the block are eligible to receive the block reward. Remember that this is a statistical payout. Only one routing node gets paid each block. It may not be the node who did the most routing work for that block, but on average it will be. This means that over time, though not necessarily for any individual block, nodes get paid according to how much work they contribute to the network.
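The payout step can be sketched in a few lines as well. Assuming the default 50/50 paysplit just described, half of the unlocked fees go to the miner who found the golden ticket and the other half goes to a single routing node, drawn with probability proportional to the routing work it contributed to the block. In a real chain the ‘randomness’ would have to be derived deterministically from consensus data (for example, the golden ticket hash) rather than a local random number generator; this sketch only shows the statistics.

```python
# Illustrative payout sketch: 50/50 paysplit, with the routing half awarded
# to one node chosen in proportion to its routing work in the block. The
# function names and the use of a local RNG are simplifications, not
# Saito's implementation.

import random

PAYSPLIT = 0.5  # share of the block reward paid to the routing network

def pay_block(block_fees: float, routing_work_by_node: dict[str, float],
              miner: str, rng: random.Random) -> dict[str, float]:
    """Split a block's (resurrected) fees between the miner and one router."""
    router_share = block_fees * PAYSPLIT
    miner_share = block_fees - router_share

    nodes = list(routing_work_by_node)
    weights = [routing_work_by_node[n] for n in nodes]
    winner = rng.choices(nodes, weights=weights, k=1)[0]

    return {miner: miner_share, winner: router_share}

# Example: a $100 block where Alice contributed twice as much routing work
# as Bob. Over many blocks Alice wins the routing payout roughly 2/3 of the
# time, so average income tracks routing work contributed.
rng = random.Random(42)
print(pay_block(100.0, {"alice": 2.0, "bob": 1.0}, miner="carol", rng=rng))
```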

Let’s reiterate what’s happening in Saito Consensus:

  1. Nodes produce blocks by accumulating routing work (collecting fees).
  2. When a block is produced, the fees inside it get burned.
  3. A costly golden ticket hash-lottery solution must be found to resurrect the fees inside the block and release them for payment.
  4. The block fees are paid out to the miner who found the block’s golden ticket and to one routing node on the network, with the routing node selected probabilistically based on how much routing work it contributed to the block. On average, a given routing node earns money proportional to its share of the total routing work in a block, and a given miner earns money proportional to its share of the network’s total hashpower.
Saito Consensus Visualised (remember, it’s not necessarily the routing node which produced the block that gets paid)

The outcome of this whole process is to gain Proof of Work style security out of a system which incentivises the p2p network to efficiently provide value to users — as well as critical network infrastructure.

“This system has several major advantages over proof-of-work and proof-of-stake mechanisms. The most important is that Saito explicitly distributes fees to the nodes that service users, collect transactions and produce blocks, and does so in proportion to the amount of value that these actors provide to the overall network.

Network nodes compete for access to lucrative inbound transaction flow, and will happily fund whatever development activities are needed to get users on the network. Of note, the services provided by edge nodes to attract Saito usage can include public-facing infrastructure needed by other blockchains.”

— Saito Whitepaper

“Reducing Saito to “pays for the p2p network” is missing the point. That isn’t the problem. The issue is scaling an open consensus layer.” — David Lancashire, co-founder of Saito

Saito Consensus Overview (Credit: Saito Network)

Saito Consensus Part 3: Majoritarian (51%) Attacks Eliminated?

I said that Saito Consensus gives comparable security to PoW systems like Bitcoin. But since miners are only getting paid half (instead of all) of each block’s fees, the chain’s mining budget is halved — so shouldn’t Saito’s security only be half as much as PoW?

Something else is going on here. Something that actually makes Saito Consensus more secure than Proof of Work and Proof of Stake. And it has to do with two key facts:

  1. That all of the routing work accumulated by each node on the network does not suddenly disappear (get orphaned) when any other given node produces a block.
  2. That producing a block and being paid are no longer tightly bound together. Instead, being paid depends on how much routing work you have contributed to a given produced block. (Note: a block’s producer always contributes some routing work to its block, but not necessarily enough to guarantee that it gets paid.)

Recall that attacking the blockchain requires producing the longest chain. In PoW, this requires 51% of mining power. In PoS, it’s 51% of stake. Once you get 51%, it’s game over. You will continually produce the longest chain and override (orphan) any blocks produced by others — pushing other participants off the network and taking all the network’s income in the process. Majoritarian attacks and discouragement attacks are profitable, and therefore incentivised.

In Saito Consensus, to produce a block you need to collect fee-paying transactions. This means that the only way to produce the longest chain is to collect 51% of inbound fee flow from honest users, OR fill blocks with your own fee-paying transactions to get 51% of total network fees.

But unlike in Proof of Work and Proof of Stake, in Saito Consensus if you are producing blocks and have 51% or more of the work function (in this case, fee collection), you cannot profitably produce your own chain and orphan blocks produced by smaller players. Meaning, you cannot profitably 51% attack the chain. In fact, you cannot profitably attack the chain, period.

This is a big claim to make. Really big. No other blockchain can make this claim. Even Satoshi didn’t manage this with Bitcoin. So let’s walk through some attack scenarios to see how it’s done. If at any point the following gets too deep into the weeds for you, just skip past the attack scenarios and you’ll eventually find a TLDR.

*Blockchain fork joke*

ATTACK SCENARIO I: SPENDING MONEY TO ATTACK THE CHAIN

Let’s examine what happens if you try to fill blocks with your own fee-paying transactions so that you can get 51% of routing work. This is the equivalent of paying for 51% of hash or stake in a discouragement attack.

There are two possible outcomes:

1) You spend your own money (in the form of transaction fees which you route to yourself) to produce a block, and then successfully mine on that block to unlock the fees.

In this case, the block reward (which is the fees you filled the block with) gets split between the miner (you) and the only node who contributed to the routing work of the block — also you. So upon finding the golden ticket solution you end up with all of the fees in the block MINUS the cost of electricity you spent hashing. But since all the fees in the block were yours to begin with, this means you will end up with less money than you started with. Attacking the network comes at a cost, and if you continue to do it you will lose more money.

2) You spend your own money (fees) to produce the block, and SOMEONE ELSE successfully finds your block’s golden ticket. Either you refused to mine, or they just outcompeted you.

In the case where you did no mining at all and left the mining to dedicated network miners, when a golden ticket is found for the block you would at most only get back 50% of the block fees you put in. You would only get back the routing-node portion of the block’s fees (50% of them), and the miner would take the other 50% of block fees due to the paysplit. So you lose 50% of your money per block.

If you did spend money mining, and then some other miner on the network found the block’s golden ticket solution, you would only get back the routing-node portion of the block fees MINUS the money you spent on hashing. So you would lose more than 50% of the fees you put in to create every block.

Unless you are participating in the network honestly, producing the longest chain requires spending your own money to fill blocks with fees and accumulate routing work. To recover the funds you put into each block, you either have to spend your own money mining, or lose half the block’s fees to somebody else. This means you would be losing money, and the attack would therefore be impossible to sustain over time. Attackers would eventually go bankrupt. This contrasts with PoW and PoS, where an attacker can spend money on hash or stake rental attacks and turn a profit, which can then be used to fund the attack further. This is a huge advance.
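The arithmetic behind these outcomes is simple enough to lay out explicitly. Here is a rough sketch, assuming an attacker fills a block entirely with its own fees F under the default 50/50 paysplit (the function and the numbers are illustrative only):

```python
# Back-of-the-envelope outcomes for an attacker who fills a block with its
# own fees (F) under a 50/50 paysplit. "hashing_cost" is whatever the
# attacker spends trying to find the golden ticket themselves.

def attacker_net(F: float, hashing_cost: float, attacker_finds_ticket: bool) -> float:
    """Net change in the attacker's balance for one self-funded block."""
    if attacker_finds_ticket:
        # The attacker is both the only router and the winning miner, so it
        # gets the whole reward back, minus what it spent hashing.
        return (0.5 * F + 0.5 * F) - F - hashing_cost   # = -hashing_cost
    # Someone else finds the ticket and takes the miner half of the reward.
    return 0.5 * F - F - hashing_cost                   # = -(0.5 * F + hashing_cost)

print(attacker_net(F=250, hashing_cost=20, attacker_finds_ticket=True))   # -20.0
print(attacker_net(F=250, hashing_cost=0, attacker_finds_ticket=False))   # -125.0
print(attacker_net(F=250, hashing_cost=20, attacker_finds_ticket=False))  # -145.0
```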

Furthermore, if an attacker was ever filling blocks with transactions routed to them by other network nodes — then there’s a chance they wouldn’t even get the routing node payment for any given block! The routing-node portion of the block reward goes to a random node who contributed routing work to it — not necessarily the block producer.

That covers a heap of attack situations. But let’s do a thought-experiment and set up the most favourable attack conditions possible (however unlikely they might actually be) and see how an attacker fares.

What would happen if an honest block producer happened to find themselves with 51% of the work in Saito? Meaning, they have convinced 51% of users to route their transactions directly to them. In Proof of Work/Stake, this would again be a game-over situation, like any situation where 51% of the work function is compromised. 51% of the work function means you can produce the longest chain, which means you can build a chain which outpaces the rest of the network — and subsequently orphan all the blocks produced by others, driving them off the network. A situation where an honest node comes across 51% of the work function — maybe due to centralising pressures around block production — is also equivalent to the top two or three mining/staking pools on the network colluding to attain 51% of work and produce the longest chain.

So how does Saito fare under these conditions?

ATTACK SCENARIO II: 51% NODE ATTACKS NETWORK

Suppose there is a Saito network made up of honest participants, but one of the honest routing nodes is so large (I’ll call it a meganode) that it consistently takes in 51% of all the fee inflow into the network directly from users. So it is a first-hop routing node, and is collecting 51% of network fees on a consistent basis. This means it is able to produce 51% of blocks on the network only using fees it collects from users directly. This could be a giant Infura-like monopoly, or several top routing nodes colluding together.

To make this attack extra difficult to defend against, let’s also assume this node doesn’t forward any transactions that it collects to other nodes in the network. This means its blocks consist entirely of its own first-hop fees, giving it a 100% chance of receiving the routing node payout for every block it produces. This is already slightly unrealistic, because transactions which are not shared take longer to get into blocks than ones which are, and a node collecting 51% of fee flow would presumably have attracted that flow by offering superior transaction confirmation times in the first place.

Now imagine that this large routing node gets cursed by Hera and, in a fit of madness, decides to ‘attack’ the network. It keeps adding blocks to the tip of the chain as normal, but only its own blocks, ignoring all blocks produced by the rest of the network. It is trying to perform a 51% attack.

To understand what this means, pretend the chain is up to block 100. An honest node proposes block 101. The honest network nodes accept it, and start working on block 102, 103 and so on. The attacker, however, ignores it and builds on their own chain instead — finding its own block 101, 102 and so on, using the fees which it collects directly from users to do so. The attacker has more routing work than the rest of the network (51%), so is producing blocks at a faster rate. This means eventually (e.g. 20 or so blocks down the line) the attacker’s chain will outpace the honest chain in length, and will be a valid longest chain. The attacker can continue ignoring all blocks produced by the honest network, orphan everybody’s work, and eventually push everybody off of the network by denying them income.

Except — they can’t do this in Saito Consensus. It’s true that in PoW/PoS, the work you did to produce a block disappears once a new block is created (by anybody) and added to the chain. If you were mining on a block of your own and then somebody else produced a valid block, you’d have to throw your block away and start from zero work again. Your work gets orphaned. But in Saito, ‘work’ is associated with transactions instead of blocks. This means when somebody adds a block on the chain, you still have all the routing work you have done for the transactions you have collected which were not in that new block. No matter who produces the block. (And if you were on the routing path of any of the transactions which WERE in the newly produced block, you would have a shot at getting paid the routing-node portion of that block’s fees.)

Recall that the same transaction can be in the mempools of multiple nodes. If the network produces a block which contains transactions that are also in your mempool, then those transactions get deleted from your mempool because they have already made their way onto the chain. This ensures transactions which multiple nodes collect (e.g. ‘pay Socrates $100’) are not published on the chain twice.

However, nodes are still left with all the transactions they have accumulated which did not go into the newest block. In Saito, this means that they still have all of the routing work they accumulated which did not go into that block — so their routing work is not orphaned! In fact, they even have a head start on producing the next block, since the most recent block producer just depleted their routing work to make the newest block.

But how does this prevent an attacker who can produce the longest chain from ignoring blocks proposed by the rest of the network? What good is the rest of the network keeping their routing work and being able to produce blocks if the attacker still has more routing work in total — so can still produce a longer chain and ignore their blocks?

The answer is that if the rest of the network does not lose their routing work and the attacker is spending all of their routing work to make blocks, the rest of the network will end up with more routing work than the attacker and be able to produce the longest chain.

A saver always ends up with more than a spender. Once the honest network accumulates more routing work than the attacker, it can start building on the attacker’s chain and create the longest chain.

Let’s say that in this attack scenario we have outlined, the total amount of fee-flow coming into the blockchain is $100 per minute. So the honest network is collecting 49% of that ($49) per minute, and the attacker has 51% of that ($51) per minute. Let’s also say that the amount of routing work it takes to produce a block (called the ‘burnfee’) is equal to $250 (any number would do).

Right up until the point of the attack, the meganode is doing exactly what the network wants it to do. It is collecting transactions from honest users, putting them into blocks, and then posting those blocks on the chain. It is also allowing the rest of the network to post their blocks on the chain. The honest network collects $49 per minute, and the meganode collects $51 per minute. They produce blocks and add them to the chain every time they accumulate $250 worth of fees. In this instance, the block production schedule looks something like the following table.

Pre-Attack Block Schedule — Tabulated

The meganode has enough fees to make a block after five minutes of collecting fees — so they do. The honest network has enough fees to make a block at six minutes — so they do. And so on. Everybody is building on the same chain. All is good. The blockchain would look like this:

Pre-Attack Block Schedule — Visualised

But now, let’s see what differs when the attack takes place.

In the attack, the attacker will only build upon their own blocks — ignoring any blocks the rest of the network produces. When the honest network proposes a block (at minute 6), the attacker will ignore it until it has two blocks (made at minutes 5 and 10), then add blocks 5 and 10 to the chain simultaneously in place of the honest network’s single block — creating a longer chain and thereby excluding the honest network’s block. The attacker can get two blocks on the chain before the honest network can, because controlling 51% of the block-producing work allows them to produce blocks at a faster rate. This outcome is shown in the graphic below:

The attacker produces 2 blocks before the honest network can, so can orphan blocks the honest network produces. In PoW/PoS, at the point of having their block orphaned, the honest network would need to start producing a new block from scratch. The situation would repeat again.

If this was a PoW/PoS system, the work which had been put into that ignored block would be unsalvageable (orphaned). After adopting the attacker’s new longest chain, the honest network would need to start building up work from scratch — at the same time as the attacker — to produce a new block. So they’d be in the same situation as before; they would be in a losing battle to add two blocks to the chain before the attacker does. The attacker would be able to repeat this same strategy indefinitely because they have more work and are producing blocks at a faster rate than the honest network. They could forever build the longest chain on their own and orphan all other blocks.

However, as we have discussed, in Saito the honest network’s work is NOT orphaned, even if their block is excluded from the chain. Work is associated with transactions, not blocks. This means that the honest network can accumulate routing work while the attacker is performing the attack, and will stockpile enough routing work to eventually produce more than one block on top of the next block the attacker posts. That is more routing work than the attacker can match, and it puts the honest network on the longest chain.

By accumulating multiple blocks’ worth of work, the honest network is able to temporarily outpace the attacker in block production speed. It uses its surplus routing work to build multiple consecutive blocks on the attacker’s chain. The attacker’s own chain is now shorter than the chain containing the consecutive blocks posted by the honest network. If the attacker does not acquiesce and build on top of those honest blocks, it will no longer be operating on the longest chain. This outcome is shown in the following table:

The honest network accumulating two blocks’ worth of routing work and adding them to the tip of the attacker’s chain. This puts the honest network on a longer chain than the attacker. The attacker builds on the honest blocks instead of operating on the shorter chain.

In the table above, the honest network accumulates routing work while its blocks are being ignored (in minutes 6–10). Unlike in PoW/PoS, the non-attackers can actually accumulate work over time, and do not lose any block-production work from having their blocks excluded from the chain.

Having accumulated some additional routing work, the honest network then posts two blocks consecutively at minutes 11 and 12 on top of the attacker’s block produced at minute 10 (shown above). The honest network then continues building on that chain.

If the attacker tried to continue building on its own chain with the most recent block it produced (at minute 10), it would then be operating on a shorter chain than the honest network. The following graphic illustrates how when work is not orphaned, the honest network being censored can accumulate enough work to build the longest chain:

Ignored blocks can be used by the honest network to create a longer chain. If the attacker kept trying to build on their own blocks, they would be on the shorter chain. The honest network’s work is never orphanable and is always growing.

Since the attacker is producing blocks at a faster rate than the honest network, if they continued building on their own shorter chain they would eventually create the longest chain again — but this would take a long time, as blocks are still being added to the new, longer chain by the honest network. In the above scenario, where the honest network withholds one block (from time 6–10) and then posts two blocks at once (at times 11 and 12), the attacker’s chain of its own blocks would be shorter by one block at minute 15, and the attacker would need to produce 51 more blocks to eventually outpace the honest network’s ever-growing chain. This would be at time 251 minutes.

At that point, however, the honest network would have produced 48 total blocks, which, again, is work that cannot be orphaned. So if the attacker was stubborn and kept building on their own blocks, eventually producing 51 more blocks and reclaiming the longest chain, the honest network would immediately have 48 blocks’ worth of work to dump on the tip of the chain. Outpacing that would take exponentially longer than it already took to outpace a mere one-block lead.

As you can see, the inability to orphan work means that it quickly becomes impossible to sustain the longest chain, because you are competing against an exponentially growing pile of accumulated work. This holds whether the attacker has 51%, 68% or any amount up to 99% of the total network fee flow. Because the honest network can stockpile work and use past work in the future, attackers are not just competing against the current percentage of fee flow the rest of the network controls, but against that amount multiplied by however long they try to sustain the attack. This renders 51% attacks impossible.

If in the beginning the honest network had let the attacker add three consecutive blocks on the tip of the chain before posting any blocks, it would have accumulated enough routing work to itself post three consecutive blocks (at times 16, 17, 18 minutes) on top of the attacker’s third block. This is shown in the table below. If this occurred, an attacker building only on their own blocks would need to produce 111 blocks before they would eventually recapture the longest chain (by which point they would find a mountain of routing work immediately dumped on the tip of the chain that they would have to overcome). So stockpiling two blocks made the attacker spend 251 minutes reclaiming the longest chain, and stockpiling three blocks made the attacker spend 545 minutes.

The honest network accumulating three blocks’ worth of routing work before adding them to the tip of the attacker’s chain. The attacker would need to produce 111 blocks on top of its block at time 15 to outpace the honest network’s 3-block lead.

The time it takes for an attacker to reclaim the longest chain gets exponentially longer the more consecutive blocks the honest network posts. So in the first place, the honest network can just let the attacker post ten or so blocks in a row, and then dump ten blocks of their own to stall the attacker for a prohibitive amount of time. This stops the attack dead in its tracks. The attacker would find it easier to simply add their own blocks on top of the newly-added honest blocks, rather than spend an ungodly amount of time (what would become years) trying to outpace the new longest chain only to have it continually re-extended with all the un-orphanable work it accumulated.
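To see the ‘saver versus spender’ dynamic in numbers, here is a toy minute-by-minute model using the figures from the example above ($100 per minute of total fees, a 51/49 split, $250 of routing work per block). It is illustrative only: the rounding conventions differ slightly from the tables above, and it tracks nothing but unspent routing work. The point it makes is that the honest side’s stockpile never resets, no matter how many blocks the attacker publishes.

```python
# Toy model of the attack: the attacker spends its routing work every time
# it publishes a block, while the honest network (whose blocks are being
# ignored) simply keeps accumulating work. Figures follow the worked
# example above; exact rounding differs from the article's tables.

BURNFEE = 250.0          # routing work needed per block
ATTACKER_PER_MIN = 51.0  # attacker's first-hop fee inflow ($/min)
HONEST_PER_MIN = 49.0    # honest network's fee inflow ($/min)

attacker_work = 0.0
honest_work = 0.0
attacker_blocks = 0

for minute in range(1, 61):
    attacker_work += ATTACKER_PER_MIN
    honest_work += HONEST_PER_MIN
    if attacker_work >= BURNFEE:   # attacker publishes another block...
        attacker_work -= BURNFEE   # ...and spends that routing work doing so
        attacker_blocks += 1
    # Nothing the attacker publishes resets honest_work: routing work lives
    # with transactions, not blocks, so the honest stockpile only grows.
    if minute % 20 == 0:
        dumpable = int(honest_work // BURNFEE)
        print(f"minute {minute}: attacker has published {attacker_blocks} blocks; "
              f"honest network is sitting on ${honest_work:.0f} of routing work "
              f"(~{dumpable} consecutive blocks it can post at any moment)")
```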

The only way attackers can avoid letting honest blocks onto the chain is to spend their own money to produce as many blocks as the honest network posts (plus one) so that they retain the longest chain. However, we have already shown that this is not a viable long-term strategy, as it results in the attacker constantly bleeding money until they eventually run out (nobody has infinite money).

To summarise all this:

Imagine an attacker produces 99 blocks per minute, and the entire honest network only produces 1 block per minute. A ‘99% attack’ should be possible. Yet, because the honest network can stockpile and accumulate its work over any amount of time and then add it on top of the attacker’s chain, maintaining the longest chain would require the attacker to produce more blocks in 1 minute than the entire honest network can accumulate over an arbitrary amount of time. They can only do this by spending their own money — which means losing money.

In short: the 51% attack has been eliminated.

In the discussion of this attack, I have been leaving out a key detail. I have actually been understating how secure Saito Consensus is. Producing your own longest chain and orphaning the honest network’s blocks is not merely prohibitively time-consuming, as demonstrated above — it is itself a costly process. It actively costs money to orphan other people’s work and try to produce your own chain. So the more blocks the honest network forces an attacker to make to catch up to the longest chain, the more money the attacker has to spend to continue the attack.

Just like in the previous attack scenario where attackers spend their own money to attack the chain, attacking a Saito network is not a profitable activity — meaning attacks are unsustainable without an infinite amount of money. To go through how this works, let’s look at the situation in a little more depth.

The image below shows that before the 51% attack, the blockchain was taking in $100 per minute — or $400 in four minutes. After the attack, however, all honest network blocks are excluded from the chain. This means that the blockchain’s income (the fees it collects) per unit of time is cut in half.

Total network fee inflow before and after the attack, visualised

If the attacker tries to produce blocks at the same rate as the whole network did before, they would need to include half the fees in every block (because the amount of fees it collects from users is unchanged). Alternatively, the attacker could keep its blocks the same fee size as before the attack — but the blockchain would only produce blocks at half the speed as it did before the attack. No matter which way you cut it, the attacker only brings $200 into the network over a four minute span of time, so after excluding the honest network’s blocks, the total amount of fees the chain brings in is halved.

Now, consider the fact that in Saito Classic, the difficulty of mining adjusts to target one golden ticket solution per block. Meaning that for a given total hashrate (mining power of the network), the time that it takes a miner to find a golden ticket solution for a block will adjust so that on average, only one is found per block.

The mining adjustment process works like this:

  • If a bunch of blocks are made and mining difficulty is so high that the entire network of miners is not able to produce a single golden ticket for any of those blocks, then mining difficulty (and the associated cost of mining) will slowly adjust downwards until the network can produce one golden ticket per block. Blocks have to go by without any golden tickets being found for them in order for difficulty to start adjusting downwards.
  • Conversely, if the mining difficulty is so low that everybody and their mother’s cat is finding a golden ticket solution for each block, then the difficulty (and the associated cost of mining) will increase up until the point where only one golden ticket is produced by the network’s miners every block.

Remember that in either case the adjustment of mining difficulty is a slow process which takes time.
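A toy version of that adjustment loop is sketched below. The step size and the evaluation window are invented for illustration; only the direction of adjustment mirrors the behaviour described above.

```python
# Toy difficulty controller: nudge difficulty toward a target number of
# golden tickets per block. Step size and window are invented; only the
# direction of the adjustment follows the description above.

TARGET_TICKETS_PER_BLOCK = 1.0  # Saito Classic targets one ticket per block
ADJUSTMENT_STEP = 0.01          # 1% nudge per evaluation window (illustrative)

def adjust_difficulty(difficulty: float, tickets_found: int, blocks_elapsed: int) -> float:
    """Raise difficulty if tickets are too plentiful, lower it if blocks
    keep going by without any ticket being found."""
    rate = tickets_found / blocks_elapsed
    if rate > TARGET_TICKETS_PER_BLOCK:
        return difficulty * (1 + ADJUSTMENT_STEP)  # mining is too cheap
    if rate < TARGET_TICKETS_PER_BLOCK:
        return difficulty * (1 - ADJUSTMENT_STEP)  # blocks are going unsolved
    return difficulty

# After an attack halves fee flow, miners leave, tickets go unfound, and
# repeated evaluations like these slowly walk the difficulty (and therefore
# the cost of mining) down.
difficulty = 1_000_000.0
for _ in range(10):
    difficulty = adjust_difficulty(difficulty, tickets_found=6, blocks_elapsed=10)
print(round(difficulty))  # ~904382
```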

Now, let’s say that the pre-attack mining difficulty (which is just some number, like 4359353) creates real world hashing costs to miners of $X per block to find a golden ticket.

Once the attack is launched, the amount of network fees is halved. However, mining difficulty (and therefore mining cost) remains the same. It still costs $X. This means that for a given period of time, the same mining spend returns half as much money as it used to. Mining profitability is cut massively after the attack. This is shown in the table below.

Mining difficulty is set to target n golden tickets per block (here n=1), and during time 1–8 results in $X of costs per block. t = units of time (e.g. minutes, hours).

Reduced mining profitability pushes many miners off the network. This makes the total hashrate of the network fall. Because total hashrate has fallen, not every block produced has an associated golden ticket found anymore. Remember that difficulty is adjusted to target one golden ticket per block at the previous network hashrate.

The difficulty adjustment process takes some time, so mining difficulty remains at its previous level for now. Eventually, difficulty will adjust downwards so that for the new network hashrate, one golden ticket can be produced per block. But for now, less than one golden ticket per block is able to be produced. This means that there are blocks which are being produced for which there is no reward.

From the perspective of the attacker, who is ignoring the rest of the network’s blocks, they are now the only one producing blocks for the blockchain. The attacker is continually producing blocks, trying to maintain the longest chain. But not all of their blocks have golden tickets found now. And it is a costly affair to produce blocks (collect transactions and run the network infrastructure) in the first place. This means that an attacker producing blocks without golden tickets (for no reward) is producing blocks at a loss.

In this case, the cost of the attack would be the cost of slowly reducing mining difficulty by producing blocks without golden tickets: that is, the cost of running massive network infrastructure (to take in 51% of fee flow) while waiting for difficulty to adjust downwards. The larger the total fee throughput of the chain, the greater this cost will be.

Alternatively, the attacker could try and spend money hashing to unlock the fees from blocks. But they would need to match the hashrate of the WHOLE network pre-attack in order to guarantee that they produce a golden ticket for any given block, because that’s what the difficulty is adjusted for. But if they did that, then the difficulty would never adjust downwards. So they would just be burning money hashing.

In addition to all of this, there are other reasons why the attack would quickly fail. Consider that most of the time, the attacker is spending time trying to build on a shorter, invalid chain. It takes a long time for them to recapture the longest chain — and they only do so for an extremely brief amount of time.

This means that users have sent them transactions, and those transactions are not getting onto the chain for long periods of time. If a user has to wait ten minutes for their transaction to get onto the chain, they won’t tolerate it. They’ll just rebroadcast their transaction to another node on the network (not the attacker). Fee flow will shift towards the honest network which has the longest chain, and the attacker would have lost its 51% of fees — and a lot of money in the process of trying to attack the network.

Where attacks are not eliminated in their entirety, Saito Consensus makes it expensive to attack the network, even when the attacker has 51% or more of the work.

In Proof of Work and Proof of Stake systems, if the top miners or stakers collude to gain 51% of the work function, there is no cost to them if they decide to attack the chain. In Saito Consensus, it always costs money to attack the network. Even if you have 99% of the network’s routing work. This means that even if a chain running Saito Consensus only had one single node (e.g. Google) and was entirely centralised, it would still cost them to attack the network. In a system which always punishes attackers, the properties of decentralisation and censorship resistance are decoupled. Granted, there are other reasons why decentralisation might be valuable — but we can get to those later.

“The math works out so the cost of attacking the network is always greater than 100 percent of the fee throughput of the honest chain. This is double the security of POW and POS, where the cost-of-attack is capped at 50% of inbound fee flow.” — Introduction to Saito Consensus

I have not explained everything in the fullest depth here, and there is more to the math behind this for those who want to look deeper. But this doesn’t stop one from realising that up until now, consensus designers have been digging in the wrong place in their efforts to improve what Satoshi built. It might help to look at how majoritarian attacks can be solved in Proof of Work systems like Bitcoin to get a sense for what’s going on here. A short and to-the-point overview of the Saito Consensus design can be found here.

The ELI5/TLDR is that Saito Consensus gives us a system where it is always profitable to do what the network wants you to do, and always unprofitable to attack the network. If you collect transactions from honest users, verify them and put them into blocks, you are doing what the network wants, whether you have 5%, 51% or 80% of the work. What’s crucial is that if you try anything malicious at any point (whether or not you control a majority of the work), you are guaranteed to lose money for as long as you keep trying.

The below Twitter thread has aggregated many resources (created by others) explaining Saito Consensus. I suggest using it as a reference point to learn more, as I don’t think Saito Consensus is going away.

Advanced Saito Security — Increasing Cost of Attack Above 100% of Fee Flow

On Self-Overcoming

There is a pretty simple way to make Saito Consensus even more secure. There are two parts to this solution.

1) Targeting Golden Tickets Every Two Blocks

Instead of adjusting difficulty to target one golden ticket every block, we can make it target one golden ticket every two blocks. This will lead to network difficulty being twice as high, all else remaining equal.

On average, every second block will therefore lack a golden ticket, and the fees in those blocks do not get paid out.

As before, difficulty will automatically increase or decrease if there are too many or too few tickets produced over any length of the blockchain.

2) Staking

To avoid rampant deflation (which in the long-run would eventually burn the entire token supply), the money in every second block which no longer has a golden ticket needs to go somewhere. But we can’t just pay it out to routing nodes and mining nodes — because that would put us back in the same situation we were in before, where they collect every block’s fees.

So in order to avoid deflation, and to divert money away from potential attackers trying to recapture their money, a staking mechanism is introduced.

Users can ‘stake’ their Saito. If a block does not have a golden ticket (which, on average, will be every second block), then the money in that block goes to stakers instead of being burned. The payout is proportional to the amount staked.

This system funnels even more money away from attackers, increasing cost of attack above 100% of network fee flow. Paying out every second block’s fees to stakers effectively halves the block reward again under all conditions, making attacking even more unprofitable. It also requires the attacker to commit new money to staking (on top of what they spend mining + collecting fees) in order to recapture funds they spend attacking the network.
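A sketch of how the per-block payout changes under this scheme, with golden tickets targeted every second block: blocks with a ticket pay the miner/router split as before, while blocks without one pay their fees to stakers pro rata. As with the earlier sketches, the names and structure are assumptions for illustration, not Saito’s implementation.

```python
# Illustrative payout under the modified scheme: golden tickets are targeted
# every second block, and the fees in ticketless blocks are redistributed to
# stakers in proportion to their stake rather than being paid to the
# miner/router pair.

def distribute_block(block_fees: float, has_golden_ticket: bool,
                     stakes: dict[str, float]) -> dict[str, float]:
    if has_golden_ticket:
        # Same 50/50 paysplit as before: half to the miner, half to a router.
        return {"miner": block_fees * 0.5, "routing_node": block_fees * 0.5}
    # No golden ticket: the whole block reward goes to the staking table.
    total_stake = sum(stakes.values())
    return {staker: block_fees * amount / total_stake
            for staker, amount in stakes.items()}

stakes = {"alice": 300.0, "bob": 100.0}
print(distribute_block(250.0, has_golden_ticket=True, stakes=stakes))
print(distribute_block(250.0, has_golden_ticket=False, stakes=stakes))
# Averaged over both block types, miners now see only ~25% of total fees,
# which is why attacks that rely on recapturing income through mining
# become even more expensive to sustain.
```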

Splitting the money up makes it harder for attackers to capture

Recently, a proposal was drafted to leverage the ATR mechanism of Saito to implement staking. Under this system, staking payouts would be made to individual UTXOs which loop around the chain and pay ATR fees.

Implications of this modified approach to staking include:

  • “All transactions on the network are “auto-staked” immediately on their inclusion in the blockchain, with their payout issued in a predictable amount of time [one epoch].”
  • “The staking table is suddenly backed by 100% of the active token supply, further increasing the cost of attacks on the chain.”
  • “Perpetual storage is (suddenly) possible, if an ATR transaction contains a UTXO holding enough SAITO that its staking payout is larger than the rebroadcasting fee, that transaction will never fall off the chain. Since 25% of the average transaction fee is redistributed to the staking table on average, in equilibrium, this would suggest that transactions willing to lock-up 400% of the long-term equilibrium transaction fee can offset their rebroadcast costs with staking income in perpetuity.”

That last point is very interesting. If the staking reward one receives is high enough, it will automatically cover the ATR fees of the on-chain data/transaction, and permanent data storage becomes possible at no added cost. (Roughly, as the quote suggests: with about a quarter of the average fee flowing back to the staking table, a stake worth four times the equilibrium fee earns back approximately one full fee per epoch, covering its own rebroadcast cost.) This is an elegant way of achieving permanent storage on a transient ledger. While the market-priced, rent-style ATR option for data permanence is still available, staking gives users a way of reducing those costs, possibly eliminating them entirely, or even coming out in profit.

The modified mechanism of targeting a golden ticket every n blocks is referred to as the ‘powsplit’ in Saito. As n increases and more block fees are paid to stakers, so too does the cost of attack. To understand why, think through what happens when attackers spend their own money to produce blocks. In half the blocks, half of the fees go to miners — and in the other half, all of the fees go to stakers. Attackers bleed much more money. And in majoritarian attack conditions, where an attacker is using honest fee flow to build the longest chain, mining profitability is reduced even further — so the cost of attack increases there, too.

Example of Saito Staking Payout Mechanism

Commodification of the Work Function

We have discussed how Saito fixes a multitude of attacks — but have not touched on the fact that Saito does not suffer from the problem of an increasingly commodified work function.

As we have covered, in PoW/PoS there is nothing stopping the emergence of external markets for hash or stake where ‘work’ can be bought, sold or rented. This creates centralising risks and the threat of attackers purchasing work in real time to execute 51% attacks.

In Saito, however, this is not a possible outcome. Because the work function is the efficient collection of fees, it cannot be gamed.

Let’s say I have accumulated some routing work, and it is worth X to me to produce a block with it. If I share it with you, it will only be worth X/2 to you. Half as much — because the amount of routing work derived from a fee halves with every hop into the network. So buying work becomes extremely inefficient. I also won’t sell you that work for less than it can earn me if I share it with other network nodes.

Not only this, but even IF you bought routing work from me and used it to make blocks, I would still have a chance of winning that block’s reward proportional to my contribution to its total routing work. Since I was an earlier-hop node in the transactions’ routing paths, I am statistically more likely than you to be paid for the block (a 66% chance of payout, since routing weights of 1 and 1/2 normalise to 2/3 and 1/3).

In short, the hop-halving mechanism and the statistical payout of the block reward make commodifying the work function in Saito unfeasible.

Proof of Work? Proof of Stake? Saito Consensus.

Mining and Staking are both forms of ‘work’ which are used to produce blocks in Proof of Work and Proof of Stake systems.

Saito is neither PoW nor PoS, because it uses routing work to produce blocks. Mining plays a role, but that role is not block production — it is only to unlock the funds from blocks. ‘Staking’ still plays a role, but that role is not block production — it is only to divert money away from attackers.

In either case — mining or staking do not serve the same purpose as in Proof of Work/Stake. Saying that Saito uses “proof of work and proof of stake” is a misleading and inaccurate way of conceiving the consensus mechanism, because there are fundamental differences in the way mining and staking are leveraged in Saito.

The multifaceted approach outlined above — with routing work in the middle of it all — is why it has been named ‘Saito Consensus’.

“A great way to understand Saito Consensus is to think about what blockchain users need. They don’t need hashing or staking. They need nodes to take their transactions, get them into blocks and to share those blocks to other users.

Saito consensus rewards this. It says that we should not be taking users’ money and giving it to miners and stakers, we should be giving it to the nodes that are helping them, that’s the P2P network.

How do we keep the chain secure and open while paying for nodes? The very simple answer is that transactions track how they got into blocks. When a block is created we can inspect it, figure out who did the work to get the transactions into the block and pay the network accordingly.

The transient blockchain is similar. Fees to use blockchains go up and up over time because the nodes running the chain have to store more and more data. Either the nodes will stop doing that and the chain will collapse, or users will pay more and more for the same service.

The transient chain just means that every transaction coming into the block is paying for a certain amount of time on the chain. When a transaction’s time is up, it has to pay for another period on the chain or drop off.

Automatic rebroadcasting is just a way of allowing for users to pay in advance for more time on chain, and making sure that consensus honours this.”

— Saito Network, Twitter

Revisiting Scaling and the Blocksize Debate

The Blockchain Trilemma

You remember the blockchain trilemma. That classic constraint defining blockchain systems, which is just about the only thing which Proof of Work and Proof of Stake maximalists agree on. As a refresher, let’s go through Vitalik Buterin’s summary of the trilemma from his article on sharding:

“The scalability trilemma says that there are three properties that a blockchain try to have, and that, if you stick to “simple” techniques, you can only get two of those three. The three properties are:

Scalability: the chain can process more transactions than a single regular node (think: a consumer laptop) can verify.

Decentralization: the chain can run without any trust dependencies on a small group of large centralized actors. This is typically interpreted to mean that there should not be any trust (or even honest-majority assumption) of a set of nodes that you cannot join with just a consumer laptop.

Security: the chain can resist a large percentage of participating nodes trying to attack it (ideally 50%; anything above 25% is fine, 5% is definitely not fine)”

Vitalik goes on to describe the three possible ways this trade-off can be made, which “only get two of the three” properties we want. (Emphasis added)

  • First, he says, there are “Traditional blockchains” like Bitcoin. “These rely on every participant running a full node that verifies every transaction, and so they have decentralization and security, but not scalability.”
  • Then there are “high-TPS chains”, which “rely on a small number of nodes (often 10–100) maintaining consensus among themselves, with users having to trust a majority of these nodes. This is scalable and secure (using the definitions above), but it is not decentralized.”
  • Finally, there are “multi-chain ecosystems”, which “have different applications live on different chains”, and use “cross-chain-communication protocols to talk between them. This is decentralized and scalable, but it is not secure, because an attacker need only get a consensus node majority in one of the many chains (so often <1% of the whole ecosystem) to break that chain and possibly cause ripple effects that cause great damage to applications in other chains.”
The Blockchain Trilemma (Credit: Vitalik Buterin)

Where does Saito fit into this picture? Well, Saito-class chains stand in a category of their own.

In the first place, Saito more efficiently allocates its resources, and has a different trade-off curve to other chains.

For a given level of scalability, Saito achieves greater decentralisation than Proof of Work and Proof of Stake chains. For example, if you compare Saito and Bitcoin without volunteers, Saito will have greater decentralisation because its peer to peer network is actually paid for; its provisioning is economically incentivised. If you compare Saito and Bitcoin with both having volunteers, Saito would still have more nodes in total, because in addition to volunteers it would have economically incentivised nodes. Additionally, the point at which nodes drop off the network over time due to rising chain-growth costs would come later in Saito than on a Bitcoin chain of the same blocksize, since nodes are actually paid.

Similarly, Saito offers greater security guarantees than PoW/PoS chains of the same scalability. A Saito class blockchain is always secured by greater than or equal to 100% of fee flow, versus 51% in PoW/PoS chains without block subsidies. Not only does this mean that security rises with fee throughput — but it means that Saito Consensus sustainably allows for non-inflationary money on the base layer. This means that once a Saito-class chain is secure enough (has enough fee flow), it will be able to support a greater monetary asset than other blockchains. A stronger monetary asset ensures the self-sustainability of the network into the future, as it creates a stronger incentive for node runners to join the network. If you don’t understand this — imagine a cryptocurrency which was massively inflationary, whose price kept going down over time. Few people would be incentivised to join that network as a miner or staker and be paid in a depreciating asset.

Not to mention, increasing scalability on a Saito-type chain results in a significantly smaller decline in decentralisation than on a PoW/PoS chain. The reason for this is simple: scale does not kill the peer to peer network (decentralisation) in Saito, because Saito pays the peer to peer network for the costs which come with increasing scale!

So to begin with, compared to like-for-like PoW/PoS chains, a blockchain running Saito Consensus gets more decentralisation and more security than PoW/PoS chains — as well as a better scalability-decentralisation trade-off curve.

“Anything you can do, I can do better.”

But in the second place, there is nothing stopping Saito from being decentralised, scalable and secure. The blockchain trilemma is founded on premises which do not apply in a Saito Consensus context.

The trilemma is created because in Proof of Work and Proof of Stake chains, you can’t pay for network scale without pulling money away from security (mining & staking). But in Saito Consensus, paying for the network is the security mechanism. So security scales by allocating funds to the network. Higher scale means more fee throughput, which means more security — because security is a function of fee flow.

On a chain like Bitcoin, the only way to increase security is to direct more money toward mining, money which has to come from user fees or inflation and which does nothing to pay for the p2p network. In Saito, the way you get security is by increasing the fee volume which the p2p network processes — and the p2p network is paid for this. There is no trade-off, because paying the network is the very mechanism which produces security. When routing work is the work function of the chain, greater fee flow (scalability) makes the network more secure.

This collapses the ‘security’ and ‘scalability’ corners of the triangle into the same category, leaving only a dilemma between scalability and decentralisation.

Now let us remind ourselves why we want decentralisation in the first place. There are two key reasons.

  1. Censorship resistance; preserving non-excludability. This is the primary reason decentralisation is valued, because in PoW and PoS systems centralisation enables costless censorship.
  2. Redundancy. Meaning, resistance to single points of failure which could go down for whatever reason.

But now recall that Saito Consensus has no honest majority assumptions — and does not rely on trusting any nodes on the network, even if they are running above consumer-grade hardware. There is no need to trust nodes at any level of centralisation, because there is an always-positive cost of attack. This means Saito would always be ‘decentralised enough’, by the standard Vitalik has outlined — able to run without any “trust dependencies on a small group of large, centralised actors” — regardless of its blocksize and throughput.

There is never a point where a large percentage of nodes trying to attack a Saito network will not face a positive cost of attack proportional to the total fee throughput of the network. This is exactly like the 51% guarantee PoW/PoS chains provide, but just stronger — working all the way up to 100% of work centralisation. Even a hypothetical 99% node could not profitably sustain an attack on a Saito Consensus chain.

In terms of redundancy, because Saito Consensus incentivises the production and optimisation of a user-facing network, even a Saito chain which attempted to process Google amounts of data would incentivise decentralisation, on the basis that nodes which are geographically close to users can serve them better. Sets of competing nodes would spring up across all major continents to capture that proximity advantage.

But remember — not only is there nothing stopping Saito from being as decentralised as any existing chain — Saito Consensus should by all rights create a more decentralised network than any Proof of Work or Proof of Stake network ever could for a given level of scale, since in Saito the network is paid for by consensus. It is not a volunteer network, so it does not crumble with scale.

It could also easily be argued that a chain which pays its network to exist will face fewer redundancy issues than chains which rely solely on cultural propaganda to convince people to set up network nodes in the face of rising costs. Remember that up until now, culture has been the best and only solution to ‘decentralisation’ in the space.

Redundancy

I find it important to mention here that for comparisons to other chains to be valid, we need to take into account the properties which Saito brings to the table and other chains may not. For example, at no scalability level does Saito sacrifice network openness. This is not the case with other chains that attempt to scale, whose consensus mechanisms do not internally pay for the networking layer. Secondly, due to the ATR mechanism, there is no level of scale at which Saito is not self-sustainable. PoW/PoS chains which attempt to scale fail to deal with the data storage and pricing problems, making their scale unsustainable in the long term. In Saito, nodes are paid to store the chain, so they do not get pushed off the network at scale. Comparing Saito with other chains which claim to have “achieved scalability” is therefore invalid unless network openness and self-sufficiency are also taken into consideration. One might also add network infrastructure self-optimisation and greater security guarantees to that list.

So there are three key things to note here regarding Saito and the blockchain trilemma.

  1. Saito aligns security with paying for the network, turning the trilemma into a dilemma. By paying for the p2p network, scalability is achieved without ever sacrificing security.
  2. Saito achieves the major goals of decentralisation (censorship resistance, non-excludability) at any level of decentralisation. ‘Decentralisation’ is revealed to be valued predominantly because it is the only means to those properties in PoW/PoS systems.
  3. Saito Consensus exhibits a superior trade-off curve and a more efficient use of resources than PoW/PoS systems — making it more decentralised and secure for any given level of scalability.

A hypothetical Saito Consensus maximalist would likely deny or dismiss the trilemma entirely. They would argue that the blockchain trilemma is a condition particular to blockchains of a specific kind: those running Proof of Work and Proof of Stake consensus mechanisms. The scale-decentralisation trade-off still remains, as it is an almost tautological problem. But that is all that remains. Decentralisation becomes irrelevant beyond the point of redundancy, as Saito’s honest-minority consensus mechanism decouples decentralisation from network control. For this reason, the blockchain trilemma is either irrelevant or wrong.

A New Way of Thinking

What really makes a blockchain desirable in the first place? Self-Sufficiency, Openness and Censorship Resistance. No matter HOW decentralised, secure or scalable, you would not want to USE a blockchain without these properties. The Satoshi Properties are the real blockchain trilemma.

Saito — with its novel chain design and consensus mechanism — solves a lot of problems.

The volunteer problem is solved. The data storage and pricing problem is solved. The scalability problem, the network closure problem, the block subsidy problem, majoritarian attacks, discouragement attacks, the commodification of the work function, header-first mining, delayed block propagation, and a host of incentive problems — are all solved.

And most importantly, in my opinion, The Satoshi Properties are achieved. We want a blockchain that doesn’t collapse, and a consensus mechanism which pays for the things we value today and into the future. We also want censorship resistance and the property of openness (non-excludability). As covered above, these are what make a blockchain desirable in the first place: Self-Sufficiency, Openness and Censorship Resistance. With Saito, we get them.

How do we get them? By fixing incentive problems, rather than technical problems.

It should be easy to see by now. Why can’t blockchains scale without losing openness, and without sacrificing their p2p networks? Because networks do not internally pay for data, bandwidth and infrastructure costs. Incentive problem. Why do majoritarian and discouragement attacks exist? Because when someone is in a position to execute a majoritarian attack in PoW/PoS, they are financially incentivised to do so. Incentive problem. The dream that we all bought into has been that the biggest problems in blockchains are technical, when in fact they are economic.

On other blockchains, core activities which are central to the chain’s long-term existence are not incentivised. On Saito, every activity required to keep the chain operational is incentivised. A Saito network can sustain itself indefinitely. This is the virtue of network self-sufficiency which the biggest blockchains on the market currently lack.

Maybe if we just wait long enough, our problems will go away?

Saito seems to mark a paradigm shift in the blockchain space. It is a project that I think Satoshi, if they were still around, would actually support.

In my opinion its most impressive aspect is not that it prices and pays for data storage, or that it incentivises the provisioning of much-needed network infrastructure instead of just mining and staking so that it can scale openly. The most impressive aspect of Saito is that it figured out how to make attacking the network costly under all circumstances — meaning that no matter what the network topology (whatever the level of decentralisation is), censorship resistance and openness are secured by a quantifiable cost. And the more it scales, the higher this cost grows.

This fact brings with it a new way of thinking. The reason we valued decentralisation in the first place was that it was essential for trustlessness. Without it, PoW and PoS networks lose openness and self-sufficiency. But Saito Consensus achieves trustlessness, openness and censorship resistance whether it has one node or one million. And the way it is designed, it can be more decentralised than any other chain on the market, pound for pound. Because Saito solves the fundamental public goods problem at the heart of the space, a ‘culture of nodes’ supporting the network is merely icing on the cake.

A classic introductory Saito video. Highly recommend watching.

With Saito, we have a blockchain which pays for whatever users want. Some are already speculating that Saito Consensus will eventually be used to fund open source software more generally. Beyond that, Saito might even be able to fund public goods outside of the digital world. One can quite easily conceive of a world where routing nodes compete to attract fee-flow by serving the values of users — donating a portion of their profits to charity X or funding business Y to attract more fee flow. If the public school which I went to as a child set up a Saito node — or if a conglomeration of schools pooled resources to run a node — I would gladly route my fees through them to help fund schools, for example.

“In networks like Saito, fee collection is the form of work that secures the network. This is preferable to mining/staking as it pays ISPs/Infura(s). So people will build apps just like miners design ASICs” — David Lancashire, Saito Co-Founder

What matters is understanding that long-standing problems facing all major blockchains, including many thought to be unsolvable, have been fixed. And what’s more, they have been fixed without sacrificing the Satoshi Properties which give blockchains their value. Bitcoin’s solution to the problems which threaten eventual blockchain collapse was to not scale, so that volunteers could support the network. Other networks choose to trade off long-term sustainability for scalability, and are trying their best to use technical optimisations to push out and delay the inevitable for as long as possible. All of these ‘solutions’ are merely trade-offs. By tackling problems at the incentive layer, Saito need not be subject to the same trade-offs which other blockchains face. It can provide greater security than Bitcoin while incentivising a resilient (rather than volunteer) network, allowing it to achieve scale without putting the long-term self-sustainability of the chain at risk.

To put it simply: crypto’s ticking time bomb has finally been defused.

Phew.

What Kind of Chain is Saito, Again?

I’ve spent a lot of time talking about Saito’s chain design and consensus mechanism. But what exactly is Saito?

Saito is a base-layer blockchain which supports its own native cryptocurrency. Like Bitcoin, Ethereum and alt Layer 1s, its native cryptocurrency (Saito) has the potential to be money more broadly. In this sense, the Saito ledger competes as a monetary base layer. It has been described as “a version of Bitcoin that forces attackers to do 2x the hashing, but which only pays them at most 1/3 of the fees, because routing nodes collect the rest. Double the security, while paying for the P2P network.”

However, Saito has ambitions to do more than just money. Its goal is to be the backbone for web3. The current web runs through centralised internet service providers (ISPs). It is not a trustless system. Saito envisions a world where the web runs through Saito nodes, and internet data travels through the Saito network, which provides guarantees of trustlessness, censorship resistance, peer-to-peer delivery and non-excludability.

The simple answer to the question ‘What is Saito?’ is: Saito is Bitcoin, but it handles data in addition to just money. Except that Saito’s network infrastructure is not only trustless, but also self-funding and scalable (and it retains openness at scale!).

I think the best way to conceptualise Saito is as an open data network. Unlike Bitcoin, Saito transactions can carry arbitrary data, allowing users to send and receive data in a permissionless, trustless and secure way. ‘Data’ could be anything: monetary transactions, messages, emails, videos, streams, poker hands, personal information, social media posts or IOT data. Especially those things which users would prefer not to run through centralised hands. Keep in mind that data can be transferred using whatever encryption methods satisfy a given user’s wants, and there needn’t be any worry about that data being tampered with, stolen or used for malicious purposes. This contrasts with the web2 world, where we rely on trusted parties not to misuse our data.

Everything is data.

Saito offers browser-native decentralised applications. There is no need for third party extensions like Metamask. By default, Metamask simply connects to Infura, which runs much of the node infrastructure that forwards user transactions to Ethereum’s validators. But on Saito, the p2p network is all that there is. Every node is doing what Infura does, and you connect to them directly.

In Saito, applications are modules which run inside user wallets. When you interface with a website like the Saito Arcade, the ‘website’ is actually a lite-client wallet which runs in the browser. The wallet shows you the application, and when new transactions come in, the wallet updates the application. When you transfer data to somebody using Saito, whether you are messaging or playing a game with them, you are operating fully peer to peer.
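To make the wallet-module relationship more concrete, here is a minimal sketch of what such a module could look like. The names (SaitoModule, Transaction, onConfirmation, render) are my own illustrative placeholders, not the actual Saito SDK API.

```typescript
// Hypothetical sketch of a wallet-hosted application module.
// All type and method names here are assumptions for illustration,
// not the real Saito client interfaces.

interface Transaction {
  from: string;     // sender's public key
  to: string;       // recipient's public key
  data: Uint8Array; // arbitrary application payload
}

abstract class SaitoModule {
  constructor(protected myPublicKey: string) {}
  // Called by the wallet whenever a new transaction is confirmed on-chain.
  abstract onConfirmation(tx: Transaction): void;
  // Called by the wallet to (re)draw the application in the browser.
  abstract render(): void;
}

class ChessModule extends SaitoModule {
  private moves: string[] = [];

  onConfirmation(tx: Transaction): void {
    // Only react to moves addressed to this player's wallet.
    if (tx.to !== this.myPublicKey) return;
    this.moves.push(new TextDecoder().decode(tx.data));
    this.render();
  }

  render(): void {
    console.log(`Board state after ${this.moves.length} moves`);
  }
}
```

The point is simply that the ‘application’ is nothing more than a client-side program reacting to transactions it sees on the chain.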

“There’s no Facebook, there’s no Google, and there’s no Apple deciding if you can run this game or that game. It’s fully open.”

Because Saito is a blockchain, all data which gets put on the chain (for any period of time) gets universally broadcast out to the entire network. This means party A on one side of the earth can put data on the chain, and party B can see it and act based on this data. Off-chain software can listen to the chain and react accordingly. Two parties can interact in a trustless, peer-to-peer manner.

For example, users can download a Twitter-like application in their wallet, and use the Saito network to transfer data in a peer-to-peer manner without trusting any third parties. Using public/private key infrastructure, they can ensure that only the intended party (or parties) sees and reads their messages. Similarly, the Saito team has recently developed an alpha version of a trustless peer-to-peer video chat application — like Zoom, but without the company running it.
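As a rough illustration of that public/private key pattern, here is a minimal sketch using the well-known tweetnacl library. Saito does not prescribe any particular scheme; the library choice and variable names are mine.

```typescript
// Encrypt a payload so only the intended recipient can read it, even though
// the transaction carrying it is broadcast to the whole network.
import nacl from "tweetnacl";

const alice = nacl.box.keyPair(); // sender's keypair
const bob = nacl.box.keyPair();   // intended recipient's keypair

const nonce = nacl.randomBytes(nacl.box.nonceLength);
const plaintext = new TextEncoder().encode("hello, only for Bob");
const ciphertext = nacl.box(plaintext, nonce, bob.publicKey, alice.secretKey);

// ciphertext + nonce travel in the Saito transaction's data field.
// Only Bob can open it:
const opened = nacl.box.open(ciphertext, nonce, alice.publicKey, bob.secretKey);
console.log(new TextDecoder().decode(opened!)); // "hello, only for Bob"
```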

Whether it’s the Apple app store or Steam taking a 30% cut of all app and game sales through their platform, or Facebook and Reddit making money from owning and selling your data — all instances of closure can be eliminated using open infrastructure which blockchains built right provide.

“[Blockchains with open infrastructure can be used for] anything that requires universal data distribution where you don’t want the network effects owned by a private firm or taxed by the state. App stores / Facebook / distributed applications built around keypairs.” — David Lancashire, Saito Co-Founder

Saito’s Richard Parris gives another cool example. Suppose you buy an IOT camera which records your home so that you can check on things while you’re out of the house. Currently, you need to go to a trusted provider who streams the recording to their data centre, and then makes it available to you upon request. You trust them not to use this data, which they control, for anything else. In a Saito world, you’d have a little LCD screen (a small, 10 cent piece of electronics) and a wallet generator on the camera. You’d press it to generate a new wallet, scan that into an app on your phone, and go on with your day. When you want to wake the camera up or access its feed, you could simply press a button on your phone which sends that address a couple of Saito, some cryptographic information and some location information over the internet. The camera would respond, you would establish a point-to-point encrypted channel between your phone and the camera, and the stream could then travel over the regular internet (because it is encrypted) directly to you. Now your camera is streaming to your phone, peer to peer and encrypted, and no trusted third party is in possession of your data at any point. This is real web3.

It’d be nice to know that nobody’s watching.
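To sketch the camera flow in code (with the caveat that every name here is hypothetical, and the on-chain send is only indicated in a comment): the camera mints its own keypair, the phone wakes it with a small transaction carrying the phone’s public key, and both sides then derive a shared secret for an encrypted stream over the ordinary internet.

```typescript
import nacl from "tweetnacl";

// 1. The camera generates its own wallet/keypair out of the box.
const camera = nacl.box.keyPair();

// 2. The phone (which scanned the camera's public key) wakes it up by sending
//    a small on-chain payment whose payload is the phone's public key, e.g.:
//    saito.sendTransaction({ to: cameraAddress, amount: 2, data: phone.publicKey });
const phone = nacl.box.keyPair();

// 3. Both sides derive the same shared secret and stream frames directly,
//    end-to-end encrypted, with no provider in the middle.
const cameraShared = nacl.box.before(phone.publicKey, camera.secretKey);
const phoneShared = nacl.box.before(camera.publicKey, phone.secretKey);

const nonce = nacl.randomBytes(nacl.box.nonceLength);
const frame = new TextEncoder().encode("jpeg bytes go here");
const encrypted = nacl.box.after(frame, nonce, cameraShared);
const decrypted = nacl.box.open.after(encrypted, nonce, phoneShared); // phone decodes the frame
```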

Saito does not have smart contracts. But remember that it does not need smart contracts for applications. It has no EVM on the base layer (though these can exist and be interfaced with on L2), and it is a UTXO based system. It is a network which can be used to send, receive and store data peer-to-peer, without trusted third parties. Because Saito is a UTXO system, network nodes do not need to execute every transaction. Instead, they just need to maintain the UTXO hash map: a data structure which tracks whether each UTXO has been spent or not. This creates massive efficiency gains and reduces strain on the p2p network, as nodes do not need to execute transactions.
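A minimal sketch of the idea behind such a hash map (field names are illustrative, not Saito’s actual data structures): the node only records whether each output has been spent, and never re-executes application logic.

```typescript
type UtxoKey = string; // e.g. a hash of (txid, output index, amount, recipient)

class UtxoSet {
  private unspent = new Set<UtxoKey>();

  // A confirmed transaction creates new unspent outputs...
  addOutput(key: UtxoKey): void {
    this.unspent.add(key);
  }

  // ...and consumes old ones. Spending an unknown or already-spent key fails.
  spend(key: UtxoKey): boolean {
    if (!this.unspent.has(key)) return false;
    this.unspent.delete(key);
    return true;
  }

  // A transaction is valid only if every input it references is still unspent.
  validate(inputs: UtxoKey[]): boolean {
    return inputs.every((k) => this.unspent.has(k));
  }
}
```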

In this system, one could use a chain like Ethereum as a layer 2 should they so wish. If this requires data storage, only a subset of nodes (those which service users) need to provide it, and they can fund that storage by serving the data to users in exchange for the L1 transaction flow that feeds transactions into the EVM.

“What you can do is have 5, or 10, or 100, or N computers on Saito (the user can decide) take transactions sent from a specific address, and treat them as instructions to execute on L2 EVM. So you could do smart contracts on layer 2. You can let the layer 2 people worry about their state, but the stuff that the blockchain is supposed to provide, we’re not slowing down because not everything needs to be an application.”

— David Lancashire, Saito Co-Founder, Interview
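The pattern Lancashire describes can be sketched as an off-chain listener that filters Saito transactions by address and replays their payloads into an L2 EVM. The interfaces below are hypothetical stand-ins, not real Saito or Ethereum client APIs.

```typescript
const ROLLUP_INBOX_ADDRESS = "saito-address-of-the-rollup"; // assumed address

interface SaitoTx {
  to: string;
  data: Uint8Array; // an encoded EVM transaction, by this L2's own convention
}

interface Evm {
  execute(rawTx: Uint8Array): void; // apply one transaction to the L2 state
}

function handleConfirmedTransaction(tx: SaitoTx, evm: Evm): void {
  // Ignore everything except traffic addressed to the rollup's inbox.
  if (tx.to !== ROLLUP_INBOX_ADDRESS) return;
  // Any of the N computers running this listener reaches the same L2 state,
  // because they all replay the same ordered on-chain inputs.
  evm.execute(tx.data);
}
```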

Belvedere, M. C. Escher

There are also interesting (currently hypothetical) models which would enable users to transact with the Saito network freely, should they so choose.

“You can run whatever applications you want, and if you want some free tokens — enough to use the network — why don’t you install this advertising module. Or that advertising module. And you decide if you want it. You don’t need it. But if you want a little bit of free tokens, then okay, do that. And maybe when you’re using the chain it shows you an ad or asks you to fill out a survey. The advertisers send users tokens, and the users can use those tokens to make transactions. It sounds a bit like Brave, but the difference is there’s no company in the middle — Brave doesn’t exist! There’s no company deciding “you get to advertise, we collect money and give you a little bit of the tokens.” It’s: you decide who you want to deal with because it’s entirely open.”

“The beautiful thing about routing work is that all of a sudden, we have a model where the old internet is ‘advertisers pay Facebook and you pay your ISP’. The new internet is ‘the advertisers pay you, and when you do whatever you want to do on the internet, your ISP collects the money, because serving you is how they get paid’. As Saito grows, the model grows, because the values of the fees and the values of the transactions grow. So what’s your ISP going to do to get you to use them? Maybe they’re gonna give you a bunch of free off-chain data. Maybe they’re gonna give you liquidity on the lightning network. Maybe they’re gonna give you a free mobile phone number, because they’re China Mobile and they’re competing with China Unicom.”

— David Lancashire, Saito Co-Founder, Interview

If all this is too much, the TLDR is: Saito is base layer money with web3 capabilities, browser-native dapps and true peer-to-peer. With scalability. Without sacrificing trustlessness.

Interoperability — Helping Other Chains

There is one other thing I forgot to mention. Saito is a layer 1 blockchain which by itself can create and sustain web3 — but it can also assist other blockchains in their missions too. In addition to being a self-sufficient and open Layer 1 protocol, Saito’s unique consensus mechanism can actually be used in a node-as-a-service (NAAS) style fashion to service other blockchains. In this way, it is able to alleviate their volunteer infrastructure problems.

“You’ve got a friend in me”

The Saito Network can run the p2p network for Bitcoin, Ethereum, or any other chain. Saito pays for fee collection, but does not specify where these fees need to come from. Some examples of how Saito can fund infrastructure external to the Saito network include:

  • A Saito node running lightning network channels on Bitcoin and paying for liquidity provision in those channels because Bitcoin users didn’t want to set up their own payment channels. Users will route their transactions to a specialised Saito node offering them the service they demand, and the node will charge fees which cover the costs of the infrastructure.
  • A Saito node running a node on the Ethereum blockchain to route transactions for a dapp like Uniswap (instead of developers using Infura, they — and anyone else — can use Saito to route transactions for their dapp)
  • A Saito node running a Polkadot parachain node because people want to play poker with a token on that parachain.

Saito can also reduce congestion on other chains by taking on data which nobody wants stored forever on a permanent ledger. In doing this, the data-storage load on other networks’ nodes, as well as network congestion and, in turn, fees, can be reduced. Saito Co-Founder Richard Parris elaborates:

“We do not need to compete head on with Ethereum. We can actually be very helpful by providing simple ways for developers to take load that needs the PKI, the public broadcast aspects of blockchain with cryptographic security, and move as much of the transient, transactional kind of stuff as possible onto Saito — and just have Ethereum do what it does well: hold the tokens, move the tokens. And that’s possible on Saito because our transaction size isn’t limited. So you can put a whole bunch of Ethereum transactions into one Saito transaction. We’ve actually built an early version prototype of this on a testnet to show that this is possible.”

“You can put as much data as you want in Saito transactions, you just have to pay for that space on chain. You just have to find a node willing to take that transaction from you, and pay the appropriate fee for that transaction size.”

“What it means as a developer is that you have access to these key features of blockchain that allow web3. […] Consider the example of an NFT auction. The current models are all web2 options. They’re off chain. You could do it on chain, on a cheaper chain. But do you really want to fill a blockchain that’s permanent with thousands of bids? And do people really want to pay $20,000 in fees between them just for an auction? What you could do is put all those transactions into one Saito transaction, and just have software put the biggest one on the Ethereum chain.”

“One of the huge things for developers is that you can suddenly use much more traditional methods to develop, because you can just put data into the transactions — and so you can work a bit like an API — I can just dump information for you, I can send moves in a game, I can send a business transaction, anything I like — but I’m getting proper web3 when I do that.”

“Our sense of how things will move forward is that EVM chains and things that need to be a permanent record will stay using the common models, but the transactional things will move onto blockchains like Saito to provide a web3 underpinning for those token networks.”

— Richard Parris, Saito Co-Founder, Interview

What’s the biggest takeaway? Saito has the capacity to inadvertently solve infrastructure provision on other blockchains (should users desire it) as a consequence of its routing-work based mechanism. It can also act as a supporting (open) data layer for other chains to help ease data costs and reduce network congestion. In Ethereum’s case, it can replace Infura.
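As a sketch of the auction pattern Parris describes above: the bids travel as cheap Saito data transactions, and only the winning bid is settled on Ethereum. The Bid shape and the submitToEthereum callback are placeholders of mine, not an existing API.

```typescript
interface Bid {
  bidder: string;    // Ethereum address of the bidder
  amountWei: bigint; // size of the bid
  signature: string; // bidder's signature over (auctionId, amountWei)
}

// All bids were broadcast and archived via Saito; only the highest bid
// needs to pay Ethereum gas.
function settleAuction(
  bids: Bid[],
  submitToEthereum: (winning: Bid) => Promise<string>,
): Promise<string> {
  if (bids.length === 0) throw new Error("no bids received");
  const winning = bids.reduce((a, b) => (b.amountWei > a.amountWei ? b : a));
  return submitToEthereum(winning);
}
```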

Advanced Scaling: Sharding and Layer 2s

In my opinion, the Ethereum roadmap of disaggregating consensus / execution / data availability is the best solution to the problems in the blockchain space in a pre-Saito paradigm. The solutions outlined and being developed by the Ethereum team are focused on preserving the core quality of trustlessness which blockchains exist to provide.

But by creating a self-sufficient, open and censorship resistant base layer which is not plagued by the myriad of problems found on other chains, and which is capable of being more decentralised, more secure and more scalable than all major chains today, I now believe that a scaled-up (secure) Saito chain would be a better base layer for L2 scaling than BTC and ETH. Saito’s consensus mechanism does one better than resisting attacks under ‘honest minority’ conditions: it resists dishonest-majority conditions up to (but not including) 100% of the network. Not only this, but it is free from block-subsidy security problems and the other critical problems covered ad nauseam in this article.

The fact is that PoW and PoS chains are wrestling with trade-offs that just do not exist in Saito Consensus. Every technical solution being devised to squeeze a little more optimisation out of pre-Saito systems can be applied more effectively on a Saito-Consensus style system not subject to those constraints.

If you wanted to do sharding, you could build a sharded Saito-type chain. If you want L2 and rollups, you could do that too. Remember how in an attempt to decentralise rollups, people are trying to build entire new proof of stake consensus mechanisms for individual rollups? Giving rollups their whole own blockchain trilemma? Fractal scaling? Well now, instead of PoS, this could be done with Saito Consensus — which does not suffer from profitable attacks at any level of centralisation, and also incentivises the efficient collection of user fees. The rollup method can still work as it has been outlined, but it now has a better and more secure base layer to build upon and model itself off. A base layer which pays for its own infrastructure costs and can guarantee that it won’t collapse due to scale or volunteer pressure over time.

On the Role of Mining in Saito Consensus

One of the biggest critiques levelled against Bitcoin is that mining is ‘wasteful’ and ‘inefficient’, as the sum total of mining for the Bitcoin network only produces a network capable of roughly 7 transactions per second. I am not going to comment here on the thorough discussions which have taken place regarding this critique. Instead, I will only focus on how Saito Consensus differs from Proof of Work in this regard.

In Saito, mining only receives half of the block reward. So by default, for a given network fee flow there will be half as much mining on a Saito Consensus chain as on a Proof of Work chain like Bitcoin. If we then target only one golden ticket every two blocks, Saito Consensus requires only 25% of the hashing of Proof of Work for a given fee flow. With further advancements in consensus, targeting one golden ticket per n > 2 blocks may be possible, and these gains could be pushed further. Saito Consensus is also twice as secure as PoW, being secured by 100% of fee flow rather than 51%. So there is a quarter as much hashing (energy consumption) for twice the gain, making Saito Consensus a vastly more energy-efficient system than traditional PoW.
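A back-of-the-envelope version of that arithmetic, taking the article’s figures at face value (and ignoring block subsidies and hardware costs for simplicity):

```typescript
const feeFlowPerBlock = 1.0; // normalised units

// PoW: miners compete away roughly the full reward on hashing,
// and a majority (~51%) of that hashpower suffices to attack.
const powHashingSpend = feeFlowPerBlock;
const powAttackCost = 0.51 * feeFlowPerBlock;

// Saito Consensus: half the reward goes to mining, and a golden ticket is
// targeted only every second block, while the cost of attack scales with
// 100% of fee flow.
const saitoHashingSpend = feeFlowPerBlock * 0.5 * 0.5;
const saitoAttackCost = 1.0 * feeFlowPerBlock;

console.log(saitoHashingSpend / powHashingSpend); // 0.25  -> a quarter of the hashing
console.log(saitoAttackCost / powAttackCost);     // ~1.96 -> roughly double the security
```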

Lightweight

On the matter of efficiency gains, the only Saito chain which currently exists has allowed its blocksize to float freely, regulated by market forces using the ATR mechanism rather than by fiat (decree). Blocks will never grow so big that they collapse the p2p network, because nodes will stop accepting transactions that are unprofitable for them to store (and storage is something they must do). Rising fees from blockspace demand will raise ATR fees further, pruning off old data. While this will lead to more centralisation than a puny 1 megabyte blocksize cap, it doesn’t really matter. First, by incentivising nodes to join the network, Saito will retain greater decentralisation than other chains. Secondly, the degree of decentralisation is entirely irrelevant past the point of redundancy (which will easily be attained), because Saito does not even require decentralisation (a specific network topology) to achieve security (cost of attack), censorship resistance and protection from nefarious actors.
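A simplified sketch of that market logic (numbers and field names are illustrative; the real ATR rules are set by consensus, not by each node): a transaction stays on the chain only while the fee it carries covers the cost of rebroadcasting it.

```typescript
interface StoredTx {
  remainingFee: number; // fee budget the transaction still carries, in SAITO
  sizeBytes: number;
}

function shouldRebroadcast(tx: StoredTx, atrFeePerByte: number): boolean {
  const rebroadcastCost = tx.sizeBytes * atrFeePerByte;
  // When demand for blockspace pushes atrFeePerByte up, old low-fee data
  // stops paying its way and is pruned from the chain.
  return tx.remainingFee >= rebroadcastCost;
}

// Example: rising fees prune a 1 kB transaction that budgeted 0.001 SAITO.
console.log(shouldRebroadcast({ remainingFee: 0.001, sizeBytes: 1024 }, 0.0000005)); // true
console.log(shouldRebroadcast({ remainingFee: 0.001, sizeBytes: 1024 }, 0.000005));  // false
```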

Given that Saito is able to scale orders of magnitude more than Bitcoin, its (smaller) electricity expenditure is used to secure vastly more network activity. So considering the blockchain ‘wasteful’ or ‘inefficient’ per unit of electricity spent compared to alternative means is no longer a valid critique. This is all notwithstanding the fact that the same electricity spend on hashing actually gives you even greater security on a high-scalability Saito Consensus chain processing a lot of fee flow, because for Saito the cost of attack is a function of fee flow.

From an economic perspective, the question of hashing is simple. If the network supports value for people (in the form of applications and usage) then hashing is not wasted electricity. Just like the internet currently. Or Christmas lights, or washing machines, or Facebook. All these things consume incredible amounts of energy, yet are produced because people demand them. Some might consider it ‘wasteful’, but that is only with reference to their own subjective value judgements. Clearly there are others who do not see it that way, and get value out of these things. I think coming after an energy-efficient blockchain for providing the means to be free from financial censorship, disintermediate mega-corporations and empower users to escape monopoly control is a bit disingenuous — and often really motivated by a fear of change. One only need consider the total electricity and bandwidth spend of humanity on video games, porn and mindless entertainment to see this point.

I can see how the argument that ‘it is not wasted electricity if it provides value to people’ is more compelling to industry outsiders when made for an application-capable platform like Saito, which has more pre-crypto analogues (e.g. app stores), than for a purely monetary network like Bitcoin, which can feel like a ponzi to someone who has never thought deeply about it. For an application-type blockchain, one can make the simple comparison: ‘electricity is used to pay for running apps and infrastructure for apps, just like with Apple or Amazon, except it requires hashing to remove the Apple or Amazon from the picture and leave the app free-standing as a public good.’

There is also a case to be made that consensus mechanisms with mining lead to a fairer token distribution over time, because of the token-selling pressure from miners who need to recover their costs. This is not insignificant when trying to design a fair and open protocol, even though token distribution has no impact on the underlying security of a chain running Saito Consensus.

Atlas Holding up the World (Credit: Flickr)

Whether the Saito that exists now will be the chain which takes the space, whether it gets forked and goes through its own darwinistic process of evolution like Bitcoin did, or whether some of its key ideas are implemented by other chains: it doesn’t really matter. The point of this article is to show that solutions exist to problems this space considers ‘unsolvable’, and that we should be discussing them and taking them very seriously. In my opinion, Saito Consensus is an improvement on PoW and PoS. If you hate hashing (despite it being several times more energy efficient in Saito), you can build a version of Saito Consensus which doesn’t rely on it, and instead uses random number generation to derive security. Whatever one’s stance on hashing, I think that the fundamental innovation of Saito Consensus, as well as the ATR-looping chain mechanism, will play a large role in the future of this space.

The Saito team is very vocal that they want discourse about both Saito Consensus and the chain design mechanism. There is more nuance than what I have discussed here — and with the bright minds of the space put together, there is no doubt room for further innovation which will benefit everybody. I am simply a community member trying to spread the message. For Saito as it exists today, I see the biggest challenge as increasing fee flow to get security. Real progress is currently being made towards this end. Bootstrapping network usage is not easy, but if it can be pulled off, the security gains for the chain will be massive and the network will have proven itself to the market at scale.

So What Now?

“Dreams feel real while we’re in them. It’s only when we wake up that we realize something was actually strange.”

— Inception

Proof of Work and Proof of Stake consensus mechanisms have intrinsic flaws which create the blockchain trilemma and majoritarian attacks. Many chains are growing in size at an unsustainable rate, and with enough time their networks will collapse under their own weight. Bitcoin’s network remains open, but at the cost of limiting scalability. The provisioning of critical network infrastructure beyond mining and staking is totally unaccounted for. Network openness and long-term sustainability are afterthoughts.

What’s more, technical solutions are endlessly thrown at what are actually economic problems requiring incentive reform. For a prototypical example, see Adam Back deflecting critiques about Bitcoin’s security model relying on the (temporary) block subsidy. What we see is a complete neglect of the existence of market failures to provide public goods.

Saito targets these problems on the economic level. It realigns incentives to induce the provisioning of infrastructure — and to make it impossible for network participants to attack (censor, exclude people from) the network without going bankrupt. No matter how decentralised Bitcoin is, if the top mining pools collude, they could profitably attack the chain if they so wished. The equivalent applies for any other chain.

I believe that if we care about the potential of our blockchains, Saito’s innovations should be taken seriously and grappled with by the users and builders of this industry. That is why I wrote this article. If you could find the little clapping icon👏 on the bottom of your screen and hold down your click button on it — you will be able to give this article up to 50 claps. I would appreciate you doing so, as it took a lot of time and effort to create.

Saito — before and after entering the world of the dream

In the movie Inception, there is a character named Saito. Saito enters the dream world with a purpose, but ends up getting trapped there, unable to escape. After many decades pass — with his mind confined to the illusory world — the young Saito grows into an old man who forgets that he is in a dream. Eventually, Saito is found in the world of the dream, and must be convinced that he is dreaming before he can finally wake up and return to reality.

There are some major problems with blockchains as we know them. And instead of solving them, we have simply forgotten about them. We have convinced ourselves that they are unsolvable — and have resigned ourselves to working around them instead. Now, we live in the world of the dream, and our ability to make progress is being hindered by the illusions we have come to believe. We are all Saito. You are Saito.

The question now is not “when moon?”

The question now is not “is Saito perfect?”

The real question is — knowing all of this — can you go back to the dream?

