Cryptoasset Valuation #2:
Debunking the Velocity “Problem”

Michael Zochowski
Published in Logos Network
17 min read · Nov 13, 2018

Not every project was created equal when it comes to velocity.

The second in a series of articles exploring the valuation of cryptoassets. This article assumes an understanding of the Equation of Exchange valuation model, which was explained in the first article.


A common valuation question we get involves the so-called velocity problem. The velocity problem holds that functional cryptoassets will not accrue meaningful value because the velocity of these assets will be very high at equilibrium, as distributed ledger technology (DLT) reduces frictions and increases efficiencies. It is commonly used to argue that store of value is the only compelling use case for cryptoassets, since a store of value's worth is independent of the endogenous network economy.

While it is a real issue for pure utility networks like smart contract platforms, the velocity problem is not, despite popular belief, a problem for payments networks like Logos.

In this article, we will address some of the misconceptions involved in the arguments applying the velocity problem to payments use cases. Many of these misconceptions arise from the abstract nature of velocity, which is conceptually removed from our everyday experience. We will therefore also lay out a framework for concretely contextualizing velocities to make it easier to draw comparisons to real-world examples.

Equation of Exchange Model and Velocity

The velocity problem arises from the Equation of Exchange (EoE) valuation model, which we described in our first valuation article. We’ll restate the important parts here.

The EoE says that the value of a currency base M is

M = (P * Q) / V

where P*Q is total transaction volume denominated in the cryptoasset that serves as the currency for the network economy and V is the cryptoasset’s velocity.

The equation makes intuitive sense: if there are $200 worth of transactions on the network per year, and money changes hands 10 times per year, then there must be $20 in currency to facilitate those transactions. Note that the EoE gives the minimum network value and does not include potential for future growth. It does, however, identify the fundamental valuation drivers of a functional cryptoasset.
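The arithmetic in the example above is easy to check with a short script (the dollar figures and velocity are the article's illustrative numbers, not data):

```python
def network_value(transaction_volume: float, velocity: float) -> float:
    """Equation of Exchange: M = (P * Q) / V.

    transaction_volume: P*Q, total annual transaction value in the cryptoasset.
    velocity: V, times a unit of currency changes hands per year.
    """
    return transaction_volume / velocity

# $200 of annual transactions with money changing hands 10 times per year
# implies a minimum currency base of $20.
print(network_value(200, 10))  # 20.0
```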

Clearly, network value M decreases as velocity V increases. This is the heart of the velocity problem.

If a DLT payment network is a much better version of cash, then wouldn’t it have a much higher velocity, which, in turn, would significantly impair or erode network value?

This question is complicated because velocity is hard to wrap your head around. We can recast the EoE in terms of H = 1/V, the average holding period of the cryptoasset. This helps a bit, but holding period is still an inaccessible concept. The fundamental question remains: what is a reasonable level for velocity?

Arguments for the Velocity Problem

Proponents of the velocity problem thesis argue that velocity can be very high, even arbitrarily so (equivalently, holding period could be arbitrarily short). The velocity problem is widely espoused in different ways, but the various formulations have common threads.

Kyle Samani, for example, gives a basic overview of the velocity problem in a Coindesk post. By his argument, cryptocurrency transactions on most networks would involve converting the token to fiat (or some other stable base) on either end. This, he claims, would mean that there would be effectively no long-term holders of the tokens and that value would primarily accrue to the market makers. He suggests a number of mechanisms that could create a long-term holding base, including staking.

The most in-depth articulation of the velocity problem I have come across is John Pfeffer’s “An (Institutional) Investor’s Take on Cryptoassets”. He treats utility and payment functionalities separately but has fundamentally the same argument for both. In the context of a payments network, this argument can be boiled down to the following points:¹

(1) Equilibrium V can be very high in DLT networks due to frictionlessness, interoperability, and users’ economically rational desire to minimize working capital.

“Friction moving among cryptoassets is already low and will quickly disappear entirely with technologies like atomic swaps … The circulating portion of the tokens can circulate at the speed of computer processing and bandwidth — i.e., fast and accelerating. The implication is that average velocities can and are likely to be high, regardless of how many tokens are actually actively circulating for utility purposes to allocate network resources.”

(2) Interoperability and forkability of decentralized networks will necessitate efficient markets for network services that minimize costs.

“We’ve established that forks and competition in mining and among protocols lead us to an equilibrium outcome where PQ equals the aggregate cost of the computational resources (capital charge on or usage cost of processing and storage hardware, cost of bandwidth and energy) of maintaining the network … The implication is that network value of a utility protocol will converge on or near an equilibrium, where it is a fraction (denominator V) of the actual cost of the computing resources consumed to maintain the networks.”

(3) Staking will not lead to reduced velocity since undercutting by forked networks will minimize value staked (and thus capital cost of the network).

“The promoters of [staking] hope that it will … [create] an alchemic virtuous cycle wherein miners buy and lock up significant amounts of the native cryptoasset as an investment conveying them a right to a mining revenue stream, thereby reducing the velocity of the native cryptoasset and causing its value to rise to a level representing some multiple of their mining profits … This system operates a bit like a taxi medallion system … But here is where protocol-land is different from real-world taxi medallion schemes. Protocols are open source software and can be freely forked … Because the upstart taxi company didn’t have to pay for its taxi medallions, it and the other recipients of the new medallions can charge its passengers lower fares … If necessary, this process can be repeated indefinitely. The result is that the medallions have low values (as would the analogous native cryptoasset).”

(4) By this efficiency assumption, a payments network will have P*Q equal to the aggregate cost of the resources maintaining the network. Since V can be arbitrarily high, network value will be correspondingly low.

“At mature equilibrium, the network value of such a token would be M = PQ/V where PQ is just the aggregate cost of the computing resources to run the chain (which may be thought of as the annual IT budget of an equivalent-volume incumbent payment system multiplied by some coefficient to adjust for the relative computing inefficiency of decentralised vs. centralised architectures) and V is, of course, some (probably high) velocity … The value implied by the correct valuation framework of M = PQ/V is much, much lower than the enterprise value of the incumbents.”

(5) Second layers and other scalability features will massively increase V:

“The effect of something like Layer 2 transaction processing is to massively increase V (and reduce M) for the payment component of the sum-of-parts valuation of a cryptoasset.”

Why Velocity Cannot Be Arbitrarily High

There are several issues with this logical framework for velocity:

(1) Swapping between cryptoassets is not and never will be completely trivial, immediate, or frictionless, even with decentralized exchanges and atomic swaps. Even extremely deep, liquid, and heavily automated exchange markets, like Forex or E-minis, have costs. This is not rent extraction — it reflects the real cost and risk associated with market making. These costs may be de minimis for a single transaction, but the overall cost scales linearly with velocity. An arbitrarily high velocity leads to an arbitrarily high exchange cost, which, in turn, means higher friction. Beyond the hard costs, there are non-trivial cognitive, time, and other costs associated with each transaction, which will never be completely systematized (as anyone familiar with the complexity and intractability of even well-defined optimization problems will attest).

(2) Velocity can be as high as bandwidth and computer processing allow, but that doesn't mean it will be, even without any direct frictions. On the contrary, several very low friction analogues demonstrate the opposite. It is costless and easy to transact with other users on Venmo and similar applications, or to withdraw your balance, yet people maintain non-trivial balances for extended periods of time (weeks or even months).

These empirically observable holding periods are reflective of the real costs of transacting even in the absence of direct fees outlined in the previous point. Incorporating these costs into Pfeffer’s efficient equilibrium argument indicates that there is some non-trivial holding period (and equivalently, a bounded velocity) where these marginal costs of swapping the payment asset for fiat equal the costs of holding the cryptoasset.
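One way to make this equilibrium concrete is a break-even calculation: swapping out of the payment asset is only rational when the cost of holding it over the period exceeds the round-trip exchange cost, which bounds velocity from above. The cost figures below are illustrative assumptions of mine, not estimates from the article:

```python
def min_holding_period_years(swap_cost: float, holding_cost_rate: float) -> float:
    """Break-even holding period H (in years) below which a round trip into
    fiat costs more than simply holding the cryptoasset.

    swap_cost: round-trip exchange cost as a fraction of the balance
               (fees, spreads, slippage, cognitive cost).
    holding_cost_rate: annualized cost of holding (opportunity cost, risk).
    """
    return swap_cost / holding_cost_rate

# Illustrative: 10 bps round-trip swap cost, 5%/yr cost of holding.
h = min_holding_period_years(0.001, 0.05)  # 0.02 years, about a week
max_velocity = 1 / h                        # velocity is bounded at 50
print(h * 365, max_velocity)
```

Under these (hypothetical) costs, even a perfectly rational actor holds for about a week, capping V at 50 — far from arbitrarily high.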

(3) Considering the flip side, while rational economic actors will seek to minimize working capital, that does not mean it is eliminated completely. Despite news of its demise, Americans keep a non-trivial amount of cash on hand — an average of $59 in 2015, according to the SF Fed. Even more significantly, the median checking account balance in the US was $3,400 (average $10.5k), according to the 2016 Fed SCF, even though this capital is non-productive since checking account interest rates were either nonexistent or just a few basis points at that time. Similarly, companies also keep meaningful working capital.

While, theoretically, this capital could be moved in and out of productive assets as needed, this does not occur. Why not? There are multiple costs, both the trading costs outlined above (fees, spreads, cognitive cost, etc.) and other costs, like risk, that offset the benefit of this marginal income.

Furthermore, many economic actors are not completely rational: they optimize for an approximate maximum, but other components of their utility functions (leisure time, other obligations, etc.), lack of knowledge, or pure laziness mean they settle for a suboptimal economic outcome. Automating and systematizing this process can only do so much; the human factor will always be part of the equation.

(4) Staking and validator rewards have a real benefit that cannot be arbed away — securing the network and attracting high performance computation for transaction processing. A network that seeks to undercut another network by reducing validator reward is not a perfect economic substitute, even assuming it is able to reach the exact same user base (unlikely). The undercutting network will be fundamentally less secure (less value staked), lower performance (less competitive stakers), and higher cost (less performant validators).

(5) Rather than increasing velocity, second layers would likely decrease velocity by increasing average holding period. Second layers end up locking in capital to the second layer channels for extended periods of time, which is the equivalent of increasing working capital. Opening and closing channels to optimize for working capital defeats the purpose of the second layers. These channels are actually less capital efficient per transaction than their first layer counterparts, as the average amount of capital needed to facilitate a payment on a second layer is proportional to the number of hops the payment takes between the two participants. Even though channels reduce the friction and direct costs per transaction, they demand greater capital to be useful.

(6) Velocity is a complex process that is, in many ways, less driven by technology and frictions and more driven by consumer habits, which are only partially informed by tech. For instance, US M1 velocity has declined significantly since 2007, even though frictions have undeniably decreased with the advent of mobile banking, Venmo, Zelle, etc. In a dynamic, broad payments economy, there are many factors at play that are not fully understood.

USD M1 Velocity has fallen precipitously since the recession

The Key Misconception: What P*Q Represents

Notwithstanding the foregoing, let's assume that all of Pfeffer's assumptions are correct and consider the key to his valuation argument. Even under these worst-case (and unrealistic) conditions, P*Q does not equal the cost of maintaining the network; rather, network value M equals the maximum total transaction value (denominated in the cryptoasset) over the average holding period. This has massive implications for the value of payments networks, where, all else equal, total transaction value denominated in the base cryptoasset is much higher than in utility networks.

To show why, let's consider an extreme counterexample that grants all of Pfeffer's assumptions on unbounded velocity, frictionlessness, optimized working capital, and efficiency. We will further assume that the cost of validation is 0, which, according to Pfeffer's argument, means that P*Q and, by extension, network value should also be 0. This economy involves a continuous, frictionless stream of buyers and sellers of a single good that costs $100. By the efficient equilibrium assumption, the buyers and sellers of the underlying cryptotoken will perfectly match — otherwise, there would be a non-zero amount of non-productive working capital at various times.²

In other words, this efficient economy works as follows: consumer A buys $100 worth of tokens, sends them to merchant A in exchange for the good, who then sells them to consumer B, who sends them to merchant B in exchange for the good, who sells them to consumer C, and so on. Note that in this model, there is no working capital, no investors, no basis risk taken by participants (if transaction speed is fast enough; basis risk and other costs can more generally be absorbed into the cost of the good itself), and no other assumptions that conflict with the assumed equilibrium.

In this case, the network value is $100 — for each incremental P*Q, there is a corresponding increase in V. This network value is infinitely greater than the cost of validation (0), so the assumption is incorrect. More generally, the correct network value is the capital needed to facilitate the maximal cross-sectional snapshot of transactions at any time. This includes all transaction fees and other costs associated with the resources maintaining the network, as well as the transactions themselves.
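The pass-through economy above is easy to simulate: however many $100 purchases clear per year, the token base never needs to exceed $100, because velocity rises one-for-one with volume (the purchase counts below are illustrative):

```python
def toy_economy(price: float, purchases_per_year: int):
    """Every purchase reuses the same tokens: a consumer buys `price` worth
    of tokens, pays the merchant, who immediately sells them on to the next
    consumer. Returns (P*Q, V, implied network value M)."""
    pq = price * purchases_per_year  # total annual transaction value
    m = price                        # tokens in existence at any instant
    v = pq / m                       # each token turns over once per purchase
    return pq, v, m

for n in (10, 1_000, 1_000_000):
    pq, v, m = toy_economy(100, n)
    # M stays at $100 no matter how large P*Q grows, and is entirely
    # unrelated to the (zero) cost of validation.
    print(f"P*Q=${pq:,.0f}  V={v:,.0f}  M=${m:.0f}")
```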

This toy example, in fact, demonstrates the worst case for valuation; in reality, P*Q will be substantially higher. For this scenario to occur, supply (i.e., the people finishing transactions) must equal demand (i.e., the people initiating transactions) at all times, with no one actually holding anything. In reality, there will be several classes of real holders that will massively decrease V and correspondingly increase M relative to the example. These include:

  1. Validators and network nodes (return on staking)
  2. Retail (equivalent to checking account or Venmo)
  3. Companies (working capital)
  4. Investors anticipating future growth of the local economy
  5. Full economies in unstable regions (e.g. Venezuela)

So what does P*Q/V represent in a more reasonable state of the world?

There will be some non-trivial cryptoasset holding period (1/V), and P*Q is the total transaction value denominated in the cryptoasset. M = P*Q/V is then the maximum value transacted in the cryptoasset over that holding period. P*Q includes the transaction fees paid to facilitate those payments, but it also includes the value of the payments themselves.

There is an interesting second order dynamic at play as well. Let’s assume that we are looking at a long-run equilibrium, as Pfeffer does, where speculative demand no longer dominates the asset value.³ Then, the value of the cryptoasset should be quite stable relative to the base currency.

This means that the cost of holding that cryptoasset (rather than equivalent fiat) should be minimal (since there is no basis risk). This would then facilitate further transactions as well as investment. Consequently, network value should be strictly greater than this assumed equilibrium.

Lessons of the Velocity Problem

All of this is to say that the velocity problem for payment networks is built on largely faulty assumptions. Velocity will not be arbitrarily high, working capital holdings will be non-trivial, mechanisms like staking can reduce velocity (and increase network value) in addition to their many other benefits, and P*Q reflects total economic activity rather than just cost of validation.

This is not to say that these arguments are without merit. I think that Pfeffer's framework in particular is insightful in many ways and highlights some key issues to think about. For example, his arguments do generally hold for a utility token, where the equilibrium value should reflect the cost of computation, and this does dramatically dim those projects' potential valuations.

Additionally, while it won’t be unbounded, velocity will almost certainly be higher than current fiat velocities due to lower frictions and increased efficiencies. The question is how much higher? And with what implications for valuation? To answer these questions, we would ideally have a way of properly contextualizing velocity in comparison to real world analogues.

Payments and Velocity

Before we dive into our velocity framework, it is worth first considering the implications if our arguments are incorrect, namely that velocity can be unboundedly high. As we saw in the previous section, the implication is that M must be large enough to equal the cross-sectional value of simultaneous network transactions denominated in the cryptoasset.

These extreme assumptions highlight why payments are the most promising functional use case of DLT. First, as highlighted above, the value of the transaction itself is denominated in the cryptoasset, not just the cost of processing the transaction as in most utility tokens. This means that, all else equal, a payment token will have a much higher P*Q than a utility token.

But all else is not equal! The payments market is truly gargantuan, with tens of trillions of dollars in annual transaction volume and trillions of dollars in fees paid to intermediaries. It is a major pain point globally and an ideal technological application of DLT, and there is good reason to believe that decentralized networks can capture much of the market. If global payments are $35tn per year, then even if the average holding period is 1 hour (corresponding to an absurd velocity of almost 9,000), the value accrued to payments networks would be roughly $4bn. As outlined above, it is unrealistic to expect the holding period to dip below one day under any circumstances, and it will likely be significantly higher.

Even if a stablecoin denominates all transactions⁴ and processing fees are reduced 90%, payments processing would still represent $200bn of value annually, and rapidly growing (i.e., pure utility P*Q with no transaction component). Velocities of 100 or more would still yield billions of dollars of network value.
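Both scenarios follow directly from the Equation of Exchange; the figures below are the article's round numbers:

```python
def network_value(pq: float, velocity: float) -> float:
    """M = (P*Q) / V."""
    return pq / velocity

HOURS_PER_YEAR = 365 * 24  # 8,760: a 1-hour holding period implies V = 8,760

# Scenario 1: $35tn of annual payment volume at a 1-hour holding period.
m1 = network_value(35e12, HOURS_PER_YEAR)
print(f"${m1 / 1e9:.1f}bn")  # roughly $4bn even at an absurd velocity

# Scenario 2: stablecoins absorb the transactions themselves, leaving only
# ~$200bn of annual processing fees as P*Q, at a velocity of 100.
m2 = network_value(200e9, 100)
print(f"${m2 / 1e9:.1f}bn")  # $2bn of network value
```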

There are no other use cases that have this same upside profile. Niche subclasses of payments dwarf entire other use cases by orders of magnitude, even those that have garnered the most interest like decentralized computation and smart contracts. All of this is to say that even assuming the worst case scenario for velocities, payments-focused networks still can accrue huge value, while utility networks have more questionable risk-reward profiles.

So how can we contextualize this upside? Pfeffer rightly points out that it is not appropriate to compare the potential of a payment network to the enterprise values of Visa, Mastercard, and American Express. Enterprise value reflects recurring revenue, which is not analogous to DLT economics. The accurate comparison is to single year payments revenue, which reflects the aggregate cost to end users to facilitate transactions. As linked previously, global payments revenues are projected to hit $2tn by 2020, but this is an inefficient market. The vast majority of that revenue is earned by banks, with payment processors like Visa taking, on average, 15–25 basis points⁵ per transaction, which is relatively small.

So a plausible upside for a payments network's value M is the combined revenue of Visa and Mastercard,⁶ with their total transaction volume standing in for P*Q.

To be clear, this is not to claim that a payments network will replace these companies right away, or even eventually — the point is that these metrics are the apples-to-apples comparison to the real world.

This insight is the basis for our framework to contextualize velocity.

Velocity as Implied Transaction Fee and Holding Period

We previously pointed out that 1/V is equivalent to the average holding period of currency in the economy (in the same units of time used to measure P*Q).

1/V can alternatively be viewed as the effective “transaction fee” taken out of transactions processed on the network (not to be confused with the network transaction fee). This is equivalent to what Visa or Mastercard makes per transaction. Stated otherwise, Visa processes T dollars of transactions per year and charges fee F, earning total revenue R. Recasting P*Q = T, F = 1/V, and R = M, we recover the EoE model for a payment network: M = (P*Q)/V.

But the advantage Logos has is that 1/V is an implicit fee that isn’t actually taken out of a transaction, while Visa takes a real cut. That means that, for the same (implied) transaction fee, a decentralized network like Logos has a massive competitive edge over a centralized network.

This observation gives us a useful way to compare velocity and holding period to real world analogues and build an intuition for what ranges of values are reasonable.

Some points of comparison:

According to their most recent 10-K filings, Visa and Mastercard earned a combined $31bn in revenue in 2017 (excluding any revenue unrelated to payments processing), on average charging about 15–25 basis points in per-transaction fees.⁷ That translates to a V of around 500.

On the flip side, USD M1 velocity is currently 5.6, down from a peak of 10.6, which translates to a 10–18% implicit fee.

Bitcoin currently has an observed velocity of just over 2, but this includes tokens that are outside of the liquid float (“hodl’d”). Chris Burniske estimated the liquid Bitcoin velocity to be around 20. This corresponds to an implicit fee of 5%.

Stablecoins, which are probably the closest approximation for what a mature decentralized payments network might look like, have velocities of around 20–50. There is no reason to hodl a stablecoin, so these velocities give us the full picture. These velocities correspond to implicit fees of 2–5%.

The table below shows the correspondence between velocity, implied (or, in the case of Visa/Mastercard, real) fee, and holding period. It also shows what network value a given implied fee and velocity yield if a decentralized network were able to capture the same transaction volume as Visa and Mastercard.
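The correspondence in the table is mechanical, so it can be regenerated from V alone. The script assumes a combined Visa/Mastercard volume of roughly $15.5tn, which is what $31bn of revenue at ~20bps implies; that volume figure is my back-of-envelope assumption, not a number from the article:

```python
# Reconstructing the velocity / implied-fee / holding-period correspondence.
# Assumed annual Visa + Mastercard volume: $31bn revenue / 20 bps ≈ $15.5tn.
VOLUME = 31e9 / 0.0020

print(f"{'V':>6} {'implied fee':>12} {'holding period':>15} {'network value':>14}")
for v in (5, 10, 20, 50, 100, 500):
    fee = 1 / v             # implied per-transaction "fee" = 1/V
    holding_days = 365 / v  # average holding period in days
    m = VOLUME / v          # M = P*Q / V at Visa/MC-scale volume
    print(f"{v:>6} {fee:>11.2%} {holding_days:>13.1f}d ${m / 1e9:>11.1f}bn")
```

At V = 500 this reproduces the Visa/Mastercard row: a 0.20% implied fee, a holding period of about 0.7 days, and $31bn of network value.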

Visa and Mastercard are able to monetize $31bn in value at an implied velocity of 500, corresponding to an average holding period of less than one day. The implication is that, at any reasonable velocity, a payments network can still capture billions of dollars of value.

What does this mean for Logos?

The conclusion is that, even if V explodes to 100 or 500, a payment network like Logos can capture more value than a centralized network by means of a higher implied fee while not actually taking that fee. Seen from the other side, V is unlikely to actually reach these levels as it would imply a sub-day holding period, which is unrealistic, as discussed previously.

Logos has the additional benefits of lower direct fees, more addressable transactions and markets, and natively digital integration with no onboarding required compared to centralized networks. So Logos has the benefits of both worlds: it can capture more transactions than a centralized network at a lower cost to the end user, but monetize value from those transactions at a higher implicit fee.

The examples of Visa and Mastercard show that monetizing just a small part of each transaction can yield massive revenues or, equivalently, network value, all at a massively higher implied velocity than any ever observed in the real world.

When properly contextualized, then, the velocity “problem” is not really a problem at all for a payment network.

[1] We will defer discussion of Pfeffer’s thoughts on potential competition in the payments space for another article.

[2] Of course, such an equilibrium assumes that this is a sustainable, long-term economy where the repeated game played by buyers and sellers is stable. This deals more fundamentally with the usefulness of the specific cryptoasset than the velocity or value framework, so we’ll defer discussion of this point to the future.

[3] This is actually a non-trivial assumption for almost all cryptoassets, due to the volatility problem that we will address in a future post. Provided that a network is designed to break the volatility feedback loop (high volatility because there are no real economic transactions on the network, and there are no real economic transactions on the network because of high volatility), then such an equilibrium is what one would expect economically.

[4] While stablecoin use will likely dominate in the short and medium terms, stablecoins always have some cost (implicit or explicit) and a natural equilibrium would push transactions onto the main token as volatility moderates.

[5] 1 basis point = 0.01%

[6] American Express is not really comparable to Visa and Mastercard since it acts as both bank and processor, with an average fee of 2.5%.

[7] We averaged over fixed and variable fees by dividing revenue by total transaction volume.

If you’d like to keep up with what we are doing:

Follow us: Twitter | Discord | Telegram | Reddit

Read: White paper | Logos website
