Exploring the Fat Protocol Thesis

Avi Felman
Ledger Capital
Jun 26, 2018


This exploration is adapted from the original USV fat protocol thesis.

You know protocols.

SMTP for email, HTTP for hypertext, TCP/IP for data transfer. Every application you use is built on top of these protocols; Facebook, Google, and Amazon could not exist without them. Yet since the advent of the internet, the majority of value has accrued to the applications built on top of pre-existing protocols, while the protocols themselves have generated little wealth for the people who invested in them.

The underlying argument of the fat protocol thesis is that blockchain technology flips this dynamic, so that the protocol layer accrues more value than the application layer. The theory posits two main reasons why:

  1. Shared data layer
  2. Speculative token attachment

First: the shared data layer allows user data to be held in one common place (namely the protocol/blockchain) and shared equally among the applications built on top. Historically, data was siloed and barriers to entry were high. With a shared data layer, it's easier for companies to build on top and to work together. For example, it takes quite a bit of effort to transfer your assets from Robinhood to E-Trade (built on separate databases), but it's seamless to move them between Coinbase and Binance.
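To make the contrast concrete, here's a toy sketch in Python. The ledger structure, address, and exchange names are just illustrative labels, not real APIs; the point is that two applications reading the same shared state need no transfer step at all.

```python
# Toy sketch: a shared data layer means every app reads the SAME ledger,
# so an asset is automatically visible to all of them. All structures
# and values here are hypothetical.

shared_ledger = {"0xabc": {"BTC": 1.5}}  # one on-chain source of truth

def balance(app_view, address, asset):
    """Look up an asset balance through a given app's view of the data."""
    return app_view.get(address, {}).get(asset, 0)

# Two "exchanges" built on the shared layer are just two views over
# the same state; no migration or export/import is needed.
coinbase_view = shared_ledger
binance_view = shared_ledger

same = balance(coinbase_view, "0xabc", "BTC") == balance(binance_view, "0xabc", "BTC")
```

With siloed databases, each app would hold its own copy of the balance, and moving between them would require an explicit (and often painful) transfer.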

Second: the speculative token attachment encourages building on and speculating in early-stage protocols, since any application built on top of a protocol increases its value and creates a loop of value creation. As applications are built, the protocol accrues more value; as the protocol accrues value, people are incentivized to build more applications. Because the tokens are needed to access and use the protocol, they rise in value the more the protocol is used. It's a virtuous cycle.

Because the protocol gains value linearly with application usage, the theory posits that no application can accrue more value than its underlying protocol.
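The arithmetic behind that claim can be sketched with made-up numbers. Assuming each transaction consumes the protocol's token at a fixed rate, the protocol aggregates the usage of every application, so no single app's share can exceed the total:

```python
# Toy model of the fat-protocol claim (all numbers are illustrative):
# the protocol token is consumed by EVERY application built on top,
# so protocol value tracks total usage while each app only captures
# its own slice.

app_usage = {"app_a": 500, "app_b": 300, "app_c": 200}  # tx per day
value_per_tx = 2  # value captured per transaction, arbitrary units

protocol_value = sum(app_usage.values()) * value_per_tx   # 1000 tx -> 2000
app_values = {app: usage * value_per_tx for app, usage in app_usage.items()}

# No single application can exceed the protocol's aggregate value.
no_app_exceeds = all(v < protocol_value for v in app_values.values())
```

This is exactly where the objections below take aim: the model only holds if token consumption really does scale with usage and if applications can't capture value somewhere the protocol doesn't see.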

Issues with the thesis:

One: Applications often end up building their own application-specific protocols, and it's unclear whether the underlying protocol could adapt to absorb those needs. Protocols developed from an application-centered perspective are more focused, because you know exactly which features should be developed and how they should be developed.

Two: My own bloated protocol thesis. If every protocol had a native token, it would be cumbersome for them to interact with each other and transact seamlessly. I don't want to live in a world where I have to own 100 different coins for 100 different protocols. I would like protocols that handle basic functionality and applications that take care of complexity. If the underlying protocol cannot absorb all the complexity needed to run a full-scale application (which is likely), then new protocols must be developed, and with them would likely come additional tokens.

Three: Off-chain applications that accrue value could theoretically capture more value than the underlying protocol. Take, for example, Enigma, which operates as an off-chain addition to the Ethereum protocol. Simplifying a bit: you need the Enigma token to process private transactions, and the results of those transactions are posted to the blockchain.

The value of the Enigma token is correlated with the intensity of the computations performed on the Enigma layer. If the computations are incredibly intensive, they consume more Enigma tokens (and therefore drive the price of Enigma up). Regardless of the intensity, the same amount of Ethereum is used, because the only thing Ethereum is being used for is posting the result to the blockchain.

You can see that Enigma could theoretically surpass Ethereum in value: with the same number of transactions, Enigma tokens would be consumed faster whenever the computational intensity on the Enigma layer was higher.
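The asymmetry in this argument can be sketched numerically. The function and all rates below are hypothetical, chosen only to show the shape of the claim: Enigma consumption scales with compute intensity, while the on-chain cost per posted result stays flat.

```python
# Toy sketch of the off-chain argument (illustrative rates only):
# each private computation burns Enigma tokens in proportion to its
# intensity, but posts just ONE fixed-cost result to Ethereum.

def tokens_consumed(transactions, intensity, enigma_per_unit=1, eth_per_post=1):
    """Return (enigma_burned, eth_burned) for a batch of transactions.
    intensity = compute units per transaction; all rates are made up."""
    enigma = transactions * intensity * enigma_per_unit
    eth = transactions * eth_per_post  # flat per result, intensity-independent
    return enigma, eth

light = tokens_consumed(1000, intensity=1)   # (1000, 1000)
heavy = tokens_consumed(1000, intensity=50)  # (50000, 1000)
```

Same transaction count in both cases, yet the heavy workload burns 50x the Enigma while Ethereum usage is unchanged, which is how the off-chain layer's token demand could outgrow the base layer's.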

Four: The shared data view also has some issues. The shared data thesis only holds for very specific types of data. It would be unwise to keep a shared pool of private data, and it would definitely be unwise to keep a shared pool of all the data Facebook collects (syncing costs would be insane!).

If you enjoyed my article in any way, feel free to follow me on Twitter. I also write a 🔥 daily crypto newsletter where I discuss trading, my analysis of the news, and everything crypto.