Cryptoasset Valuation #3: The Winner-Take-All Myth and Skinnyfat Protocols

Michael Zochowski
Logos Network
Dec 19, 2018 · 12 min read

Which side are you on?

This is the third in a series of articles exploring the valuation of cryptoassets. This article assumes an understanding of the Equation of Exchange valuation model, which was explained in the first article.
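As a quick refresher (a sketch of the model as it is commonly stated; the first article is the authoritative treatment):

```latex
% Equation of Exchange, the core of the valuation model:
%   M = monetary base required to support the network (network value)
%   V = velocity of the token (average turnovers per period)
%   P = price of the digital resource being transacted
%   Q = quantity of that resource transacted per period
M V = P Q
% Rearranged for valuation:
M = \frac{P Q}{V}
% i.e. the network's required monetary base is its transaction
% volume divided by token velocity.
```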

Thin vs Fat Protocols

The past two years have seen an unprecedented explosion of new decentralized protocols, spurred by numerous technical advances, practical improvements, and increased investor interest.

The vast majority of the genuinely innovative protocols are what I call general-purpose smart contract platforms (GSCPs). These projects effectively aim to be a better, more scalable version of Ethereum, with full virtual machines and smart contract support. Their declared goals are to build the optimal platform for most, if not all, decentralized applications, from payments to computation to gaming to decentralized legal contracts. GSCPs include Dfinity, Algorand, EOS, Tezos, NEO, Hashgraph, IOTA, Ethereum, and many more.

This winner-take-all approach is commonly known as the Fat Protocol thesis, a term coined by Joel Monegro in his namesake USV blog post. The original post mostly argued that protocols will accrue more value than their applications,¹ but the thesis has since matured into the belief that the vast majority of value and usage will accrue to a single platform. Proponents argue that the winning protocol's network effect will overwhelm any limitations or deficiencies in specific use cases relative to its competitors.

A small minority of new protocols, including Logos, have much more specialized designs tailored to a specific use case. These protocols aim to be the optimal solution for that single area of application, but they do not have broader aspirations.

These projects align with the Thin Protocol thesis, most notably proposed by Teemu Paivinen in a Zeppelin blog post written as a direct response to Fat Protocols. Specialized protocols expect that their advantages in their target use cases will allow them to dominate their niches.

Teemu takes it one step further, arguing that the ability to cheaply fork networks and the rise of interoperability technologies will result in hyperspecialization where no single protocol accrues significant value.²

Clearly, these two approaches are incompatible. Prevailing wisdom has largely sided with the Fat Protocol thesis, resulting in the GSCPs grabbing most of the public's mindshare. Empirical evidence, however, has so far sided with the Thin Protocol camp, with a huge proliferation of derivative networks and forks offering only incremental improvements.

So which one is correct? This question has massive implications for cryptoasset valuation and investment — should you bet the farm on a single general platform winning out, or do you invest in many niche, specialized networks with compelling use cases?

For a number of reasons, I believe that neither the Fat nor the Thin Protocol thesis is entirely correct. There are real benefits of specialization that cannot be ignored, as well as numerous economic and technological reasons why no single network will become the dominant, fat protocol. Nevertheless, network effects are real and networks cannot always be arbitrarily forked, which calls into question the assumptions of Thin Protocols.

Instead, the most likely outcome balancing these effects is something in between, where there are many specialized DLT verticals, each of which is dominated by no more than a few networks. Sticking with the established jargon, and for lack of a better term, we can call these networks Skinnyfat Protocols.

Skinnyfat Protocols: not quite fat, not quite thin

Specialized Dominates General Purpose

Of the two leading theories, Fat Protocols is far less likely than Thin Protocols. The argument was well articulated in the original Thin Protocols article, so I will focus on some more concrete details. In short, specialized protocols can massively outperform general protocols in their target use case, and increasingly mature interoperability technologies erode any universal network effect.

No Free Lunch

By definition, a specialized protocol is tailor-made for a specific use case, while a general purpose network necessarily takes a jack-of-all-trades approach. The question is whether it is possible for a general purpose network to be equally well suited to, or at least competitive with, a specialized network in a specific use case. Most GSCP proponents, for example, operate under the assumption that their smart contract platform can be both the best payments network and the best decentralized computation network.

It has become increasingly clear that the answer to this question is an emphatic "No." This is easily seen empirically in the context of payments, once you cut through the highly contrived marketing benchmarks. Two of the most advanced smart contract platforms, Algorand³ and Dfinity⁴, claim less than 1,000 transactions per second (TPS) of capacity, far below what centralized payment networks handle today. By comparison, Logos, the leading payments-focused network, processes over 17,000 TPS on its initial testnet under realistic conditions and will have capacity for more than 150,000 TPS when fully featured. A similar gap between Logos on the one hand and Algorand and Dfinity on the other is observable in confirmation latency, and it holds consistently across advanced general protocols.

The fact that general protocols are not competitive with specialized protocols makes sense from first principles. At a high level, this can be deduced from a No Free Lunch principle — you can’t get something for nothing. After a few strict improvements (e.g. moving from proof-of-work to proof-of-stake), you very quickly get to a point where there are fundamental tradeoffs in DLT architecture that have a real impact.

For example, a general purpose network must include smart contracts — otherwise, it is not very general at all. Smart contracts impose significant overhead. There is no getting around this — it simply takes a lot more computational time and effort to validate a smart contract operation than a simple send.
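To make the overhead concrete, here is a toy Python sketch (illustrative only, not any real client's validation code) contrasting the work in a simple send against interpreting even a trivial contract in a stack-based VM, where every step must be fetched, decoded, and metered:

```python
# Toy illustration of validation overhead; not a real client implementation.

def validate_simple_send(balances, sender, receiver, amount):
    """A bare transfer: one balance check and two account updates."""
    if balances.get(sender, 0) < amount:
        return False
    balances[sender] -= amount
    balances[receiver] = balances.get(receiver, 0) + amount
    return True

def validate_contract_call(bytecode, gas_limit):
    """A contract call: every opcode is fetched, decoded, metered, and
    executed by a VM loop -- far more work per transaction."""
    stack, pc, gas = [], 0, 0
    while pc < len(bytecode):
        op, arg = bytecode[pc]
        gas += 1                      # metering on every single step
        if gas > gas_limit:
            raise RuntimeError("out of gas")
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop())
        elif op == "STORE":
            pass  # a real VM would also touch (and hash) state here
        pc += 1
    return stack

balances = {"alice": 100}
validate_simple_send(balances, "alice", "bob", 10)  # a handful of operations
validate_contract_call([("PUSH", 1), ("PUSH", 2), ("ADD", None)], 1000)
```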

As we have detailed in a previous article, off-chain solutions are not a silver bullet or even a good solution for many high performance use cases like payments, and a full, Turing-complete smart contract system will always be at a huge disadvantage to a specialized, limited network.

This performance gap is exacerbated by the fact that general networks need to optimize over all use cases, while a specialized network is able to tailor its design to a specific use case. This specialization is often suboptimal for other use cases but can result in a significant boost in performance for the use case the network cares about.

The performance gap has a real impact not only on the convenience and efficiency of using the protocol but also on its cost. In an economically optimal design, capacity is allocated by a free market, so lower capacity directly translates into higher fees. Even if there are "no fees," there is some economic deadweight loss: someone who valued their transaction did not get it through. Moreover, execution time has a real cost in itself. These costs put non-specialized networks at a significant competitive disadvantage.⁵
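A minimal sketch of that fee dynamic, with made-up numbers: treat block capacity as an auction in which only the highest-value transactions clear. Halving capacity raises the clearing fee and prices out willing payers, which is the deadweight loss:

```python
# Toy fee-market model with illustrative numbers, not real network data.

def clear_market(willingness_to_pay, capacity):
    """Fill limited capacity with the highest bidders; the marginal included
    bid sets the clearing fee, and everyone priced out is deadweight loss."""
    bids = sorted(willingness_to_pay, reverse=True)
    included, excluded = bids[:capacity], bids[capacity:]
    clearing_fee = included[-1] if included else 0.0
    deadweight_loss = sum(excluded)  # value of transactions that never clear
    return clearing_fee, deadweight_loss

# Ten users, each willing to pay a different fee (in dollars) per transaction.
demand = [1.00, 0.80, 0.60, 0.50, 0.40, 0.30, 0.20, 0.15, 0.10, 0.05]

for capacity in (8, 4):
    fee, loss = clear_market(demand, capacity)
    print(f"capacity={capacity}: clearing fee ${fee:.2f}, "
          f"unserved demand ${loss:.2f}")
# capacity=8: clearing fee $0.15, unserved demand $0.15
# capacity=4: clearing fee $0.50, unserved demand $1.20
```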

Unfortunately for general purpose networks, a jack of all blockchain trades is necessarily a master of none.

Interoperability Means Competitive Pressures

The general purpose performance gap would not be the Achilles' heel of the Fat Protocol thesis if blockchain were a normal market. In the payments space, for instance, many systems are outdated and inefficient but persist due to oligopolistic market conditions. A new centralized payment rail would need to jump through countless regulatory hoops and technical integrations before becoming useful, a process that takes many years and likely billions of dollars. These barriers to entry protect incumbents, insulating them from competitive pressures and stifling competition, thereby allowing them to accrue huge value.

The world of blockchain is vastly different. In particular, there are a growing number of increasingly mature interoperability technologies that ratchet up the competitiveness of potential markets.

Centralized exchanges like Coinbase or Binance are the primary means of interoperability currently, and they are increasingly accessible, usable, and compliant from a regulatory perspective. Decentralized exchanges like 0x are further democratizing and automating this exchange market. Atomic swaps, interchain transfers, sidechains, and several other technologies look increasingly promising at further decentralizing efficient exchange.

As it becomes easier to swap between networks, the competitive advantages of one protocol over another will be increasingly realizable. Since the difference between specialized and general protocols is likely to be very large, both in performance and in cost, users will have a commensurately large incentive to use a network tailored to the use case of interest rather than an inferior general network.

Interoperability between networks is only one side of the coin; the other side is software interoperability. Here, too, the blockchain space is highly conducive to competition. Once one cryptocurrency is integrated into a particular project, it is very easy to integrate additional cryptocurrencies. The open source nature of both protocols and decentralized infrastructure makes it virtually impossible for it to be any other way. As a result, if a merchant and consumer can save money by switching from a general purpose network to a far more efficient payment network, it will be fairly easy for them to do so.
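As a sketch of why that is (the interface and class names here are invented for illustration): once an application hides its payment rail behind a common abstraction, each additional network is a drop-in implementation rather than a rebuild:

```python
# Hypothetical merchant-side integration sketch; names are invented.
from abc import ABC, abstractmethod

class PaymentRail(ABC):
    """Whatever the merchant's checkout code needs from any network."""
    @abstractmethod
    def send(self, to_address: str, amount: float) -> str: ...

class GeneralNetworkRail(PaymentRail):
    def send(self, to_address, amount):
        return f"general-net tx to {to_address} for {amount}"

class SpecializedNetworkRail(PaymentRail):
    """Adding a more efficient network is just one more implementation."""
    def send(self, to_address, amount):
        return f"specialized-net tx to {to_address} for {amount}"

def checkout(rail: PaymentRail, to_address: str, amount: float) -> str:
    # Merchant logic is identical regardless of the underlying network.
    return rail.send(to_address, amount)

print(checkout(SpecializedNetworkRail(), "merchant-addr", 9.99))
```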

Given the magnitude of differences between general purpose and specialized networks, as well as unprecedented interoperability, a single fat protocol is highly unlikely.

There is a Network Effect — But It is Limited

So far, our arguments have closely followed the Thin Protocols canon. However, the Fat Protocol view does have some merit. Specifically, there certainly is a network effect that has an undeniable impact on the value of a network to prospective users.

Take payments, for example. A payment rail is definitely more useful if more merchants and consumers accept it. There is a reason why people favor Visa over Diners Club.

Moreover, the Thin Protocol argument has several faulty assumptions. First, forking a network does not result in a perfect economic substitute. There are real differences even if the codebase is entirely the same, particularly vis-à-vis network security, reliability, decentralization, and usefulness.

Bitcoin Gold is a good example — it forked Bitcoin without any meaningful changes for the end user,⁶ and it is a strictly inferior network from a usability standpoint.

Second, no swap or transaction is completely frictionless, and there is only so much you can automate. Exchanging between two networks always comes at some cost. At a minimum, there is inevitably bid-ask spread plus any slippage, which reflects the real risk associated with market making. There are additionally cognitive and implementation costs that may be even larger than the direct dollar costs. This can lead to some counterintuitive outcomes. For example, if you can save $0.01 by transacting on networks A and B for two closely related use cases instead of just using network C, but you need to spend $0.02 to swap between them, then network C will dominate.
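Spelled out with the toy numbers from the example above:

```python
# The counterintuitive outcome above, using the same toy numbers.
savings_from_specialization = 0.01   # $ saved by using networks A and B
swap_cost = 0.02                     # $ spread/slippage to move between them

net_benefit = savings_from_specialization - swap_cost
# net_benefit = -0.01: specialization loses once swap friction is counted,
# so the single general-enough network C dominates both niches.
use_specialized_pair = net_benefit > 0
print(f"net benefit of A+B over C: ${net_benefit:+.2f} -> "
      f"{'A+B' if use_specialized_pair else 'C'} wins")
```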

Third, there are components of networks that are not forkable or replicable at all. For example, network funding can have a big impact on the prospects of the network. Similarly, many services built on top of a network are not open source and may be resistant to supporting a fork. More broadly, this concept is reflected in the somewhat clichéd adage, "The best tech doesn't always win."

Together, these observations mean that interoperability can only do so much to attenuate network effects. Unless there is a real, substantive value difference for using one network over another that is greater than the cost of swapping, users will not have an incentive to move between networks. In general, this incentive will be present and impossible to ignore for a specialized network taking on a general network, but it will likely not exist for a specialized network that is only marginally better than another specialized network.

In other words, networks will become quite sticky once the marginal benefit of specialization no longer outweighs the marginal cost of additional swapping. This convergence will likely be quite rapid — once you get close to the limits of hardware for a particular use case, there isn’t much that can be improved.

Of course, I don’t think Thin Protocol proponents would say that networks are infinitely forkable and every minute advantage for each tiny niche use case will be eked out. But I think they greatly overestimate how much networks can specialize and underestimate the value that can accrue to a particular network.

Skinnyfat Protocols and Blockchain Verticals

The blockchain space at maturity, then, is unlikely to be dominated by a single fat protocol or spread thin across many small, niche protocols. Given the economic and technological phenomena described above, the much more likely outcome is the intermediate “Skinnyfat” Protocol, where there are one or two dominant but specialized protocols within each use case vertical.

The logic is pretty simple: a network specialized for a specific vertical will have a huge competitive advantage over general networks, but after a while, the incremental improvements will not be able to outweigh the real costs of using yet another network. Swapping between verticals will be relatively seamless and low frequency, such that the overall interoperability cost will be low.

In contrast to the Thin Protocols argument, these verticals will be relatively wide due to network stickiness. For example, a general decentralized data storage network will probably dominate a network focused on photograph storage and another focused on document storage, even if the hyper-specialized networks offer a slight advantage.

Exogenous, structural factors will define some verticals. In the data storage example, a network highly specialized around HIPAA compliance likely has room to exist due to regulatory rather than technological reasons.

Some verticals will be very broad as a result of a strictly dominant set of properties that a single network can offer, leaving no room for further specialization. In payments, for example, the quickest, cheapest, and most secure network would likely be as optimal for B2C payments as for P2P.

To the extent that a single vertical has multiple viable networks, we'd expect them to fall into some sort of Pareto distribution, where the largest network has the bulk of the market share and each successive network has only a fraction of the previous network's size. A likely cause of coexistence is geography: while DLT can erode borders, it probably will not erase them completely, at least not in any near future.
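As a sketch of what such a distribution could look like (a Zipf-style toy with an assumed exponent, not a forecast):

```python
# Toy Zipf/Pareto-style market-share distribution for one vertical;
# illustrative only, not a prediction of actual market structure.
def market_shares(num_networks: int, s: float = 1.5):
    """Rank-k network gets weight 1/k^s; normalize to shares summing to 1."""
    weights = [1 / (rank ** s) for rank in range(1, num_networks + 1)]
    total = sum(weights)
    return [w / total for w in weights]

for rank, share in enumerate(market_shares(4), start=1):
    print(f"network #{rank}: {share:.0%} of the vertical")
# network #1: 60%, #2: 21%, #3: 12%, #4: 7%
```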

Value Accrual in a Skinnyfat World

Unlike Thin Protocols, the Skinnyfat outcome suggests that the dominant specialized protocols will accrue significant value that will not be forked away to oblivion. This partly reflects the entrenched network effects. However, as discussed in a previous article, accrued value does not necessarily correspond to economic rent. More importantly, it reflects the genuinely large market that a single dominant network within a vertical can capture.

While the value that a general purpose network might capture is probably significantly less than its supporters hope, this is not to say it will capture no value. In fact, GSCPs are specialized in their own way: for smart contracts! Decentralized computation is itself likely several verticals, such as GPU-intensive versus CPU-intensive computation, so I would not be surprised if several smart contract networks are very successful.

Right now, we are far away from any state close to maturity. No protocol has a network effect since nothing has any meaningful adoption. Even Ethereum has essentially no network effect since the Ethereum Virtual Machine (or WASM), Solidity, and their developer ecosystem can easily be ported to another protocol. As a result, the race is still wide open. From a valuation perspective, then, it makes sense to focus on the highest value verticals and the best protocols within those verticals.

We designed Logos from the ground up according to this thesis. Payments is the most compelling specialized vertical, and Logos is built to be the optimal payments network.

[1] I will not elaborate on this point in this article, but I agree with Joel that the vast majority of value will accrue at the protocol rather than application layer in blockchain.

[2] From the post: “A lot of users/applications indicates a ‘fat’ protocol and likely means the protocol has a large market capitalisation, creating an incentive for competition either by forking or specialised network creation.”

[3] Algorand claims 125 times the capacity of Bitcoin, roughly 500 TPS.

[4] Dfinity claims it will run at roughly 50x the gas limit of Ethereum with 2x the block time, i.e. roughly 800–1,000 TPS.

[5] The Schelling effect oft cited by Bitcoin maximalists does not come into play in real-world applications, where there are informed participants and repeated games. The Lindy Effect may come into play in the form of network stickiness, which we discuss later.

[6] Bitcoin Gold’s only innovation — switching hashing algorithms — has virtually no impact from the perspective of an end user.

If you’d like to keep up with what we are doing:

Follow us: Twitter | Discord | Telegram | Reddit

Read: White paper | Logos website
