On the Velocity Problem for Cryptoasset Value

Wilson Lau
Thoughtchains
Feb 24, 2018

There is an emerging consensus that velocity is a confounding problem in establishing cryptoasset value for single utility tokens.

The widely accepted M = PQ / V model popularized by Chris Burniske makes the effect of velocity on cryptoasset value plain: increasing velocity decreases the value of the asset base linearly, and therefore the price of utility tokens (given a fixed token supply).
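To make the arithmetic explicit, here is a minimal sketch of the model (the figures are illustrative, not from the article):

```python
def network_value(pq_annual: float, velocity: float) -> float:
    # M = PQ / V: the monetary base needed to support PQ of annual
    # on-chain transaction volume at a given velocity.
    return pq_annual / velocity

def token_price(pq_annual: float, velocity: float, token_supply: float) -> float:
    # With a fixed token supply, price is just the base divided by supply.
    return network_value(pq_annual, velocity) / token_supply

# Illustrative numbers: $1B of annual volume, 100M tokens.
# Doubling velocity halves the token price, all else equal.
print(token_price(1e9, velocity=10, token_supply=1e8))  # 1.0
print(token_price(1e9, velocity=20, token_supply=1e8))  # 0.5
```

The linearity is the whole problem: any error in the velocity assumption propagates one-for-one into the valuation.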

While there has been plenty of good discussion about why this model is effective for valuing this type of cryptoasset, and quite a few valuations have been built on it, we continue to ignore the elephant in the room: the impact of velocity assumptions on these valuations. Velocity is a variable that can throw all of these valuations out the door, because if we assume people simply will not hold these tokens for any length of time, we may very likely be underestimating velocity by several orders of magnitude.

There are quite a few people who agree that this is a problem for token value:

  • Vitalik Buterin in On Medium-of-Exchange Token Valuations: “…the market cap of an appcoin depends crucially on the holding time H [which is inversely related to velocity]. If someone creates a very efficient exchange, which allows users to purchase an appcoin in real time and then immediately use it in the application, then allowing sellers to immediately cash out, then the market cap would drop precipitously… Protocol tokens using this model may well be sustained for some time due to irrationality and temporary equilibria where the implicit cost of holding the [token] is zero, but it is the kind of model which always has an unavoidable risk of collapsing at any time.”
  • Ryan Selkis in 95 Crypto Theses for 2018: “Most utility tokens, then, will go to zero, regardless of team quality and execution. You simply don’t need to hold them but for momentum and greater fool investing. When the market lacks “higher order” investors for speculators to flip to, assets will unwind. Viciously.”
  • Kyle Samani in The Blockchain Token Velocity Problem: “In the case of a proprietary payment token that nobody wants to hold, velocity will grow linearly with transaction volume… Transaction volume could grow a million-fold and network value could remain constant. Almost all utility tokens suffer from this problem.”
  • Alex Evans in On Value, Velocity and Monetary Theory demonstrates this with an interesting ‘proof’ in which he represents velocity as a cost-minimization function for transaction costs. Most importantly, he demonstrates the likely relationship between lowered transaction costs and increased velocity, and the impact of that increased velocity on an example token.

In summary, velocity acts as a confounding variable for these token economies that decouples the network value from any individual token’s utility value. As transactional costs and friction decrease, especially as the purchase and sale of these tokens becomes basically instantaneous or concurrent, velocity increases significantly and collapses the utility value of the token.

For investors, the uncertainty around this confounding variable should destroy the speculative investment opportunity for the majority of single utility tokens, unless they are depending on sustained transactional friction, irrational HODLing, or the next greater fool in those token markets.

While there seems to be little dispute about this, I’m surprised to see so little concern and discourse about it amongst the community of investors. Instead of continuing to try to value assets that suffer from this problem, I think there are two more important questions we should turn our attention to:

1. Which tokens are exceptions to the velocity problem and why?

I initially posed this question to myself thinking I would argue that certain tokens like Filecoin and BAT would prove to be exceptions (just because I like them) because of certain dynamics like…

  • Attractive demand-side dynamics: Basically, that there are people who actually want to purchase these tokens for real money because the utility is valuable to them. (The number of projects that do not have this quality is appalling, but that is an article for another time)
  • Natural dynamics that reduce velocity: Filecoin requires people to continue to fund storage using Filecoin over an extended period of time, and will lock up tokens in escrow. BAT might have a natural dynamic where advertising dollars turn over in 30-day cycles, thereby reducing velocity naturally.
  • Artificial incentives to lower velocity: There are a few questionable examples of this like PROPS and STEEM, where users are incentivized to hold tokens for reputation or for other reasons related to the usage of the platform.

… but I ended up concluding that two fundamental forces continue to pose a problem for velocity even for tokens with the above qualities. I’m inclined to believe that no single utility token is immune to the problem.

A. The Effect of Speculation on Velocity
If we assume that there is a healthy market for any speculation at all, we should assume that there will be an impact on velocity that adds volatility to the token price.

Looking at blue-chip stocks (AAPL, IBM, etc.), approximately 1% of all shares change hands each day (average daily trading volume as a percentage of total shares). More speculative stocks (for example, ones that have IPO’d in the past few years, like APRN or SFIX) see 5–10% of their shares change hands every day. If we use this as a comparison, this factor alone pushes annual velocity to approximately 20–40 (5–10% of the token supply turning over daily, 365 days a year).
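The back-of-the-envelope estimate, made explicit (the turnover figures are the article’s; the function is mine, and it assumes token markets trade every day of the year, unlike equities):

```python
def annual_velocity(daily_turnover: float, trading_days: int = 365) -> float:
    # If daily_turnover of the supply changes hands each day, the average
    # token turns over daily_turnover * trading_days times per year.
    return daily_turnover * trading_days

print(annual_velocity(0.05))  # 18.25 (~20 in round numbers)
print(annual_velocity(0.10))  # 36.5  (~40 in round numbers)
```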

One could argue that the gradual maturation of these markets and the accompanying reduction in trading volume would then increase value as velocity falls (on the order of 5–10X returns), but I repeat: are investors really willing to speculate on propensity to hold, as opposed to fundamental value?

B. The Effect of Usage on Velocity
There also seems to be an idea circulating that a large percentage of people HODLing the token will reduce velocity.

However, having more people HODL the token simply reduces the liquid token supply; it does not actually reduce the total number of transactions that occur on the platform. It simply concentrates those transactions over a smaller liquid token base.

This is just a weighted average: as long as total transactions continue to increase and the token base stays the same, velocity still increases by the same amount. The only exception would be if the lack of liquidity actually decreased transaction volume, which would just mean it is impeding the growth of the network.

Therefore, increasing usage is directly correlated with increasing velocity (assuming a stable total token supply), which in turn decreases utility token value. And no, more HODLers is not a solution here.
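A sketch of the weighted-average point (the numbers are illustrative, mine): network-wide velocity depends only on total volume and total supply, so HODLing merely concentrates the same turnover on the liquid portion.

```python
def network_velocity(annual_volume: float, total_supply: float) -> float:
    # Velocity over the whole monetary base: V = PQ / M.
    return annual_volume / total_supply

def liquid_velocity(annual_volume: float, total_supply: float,
                    hodl_fraction: float) -> float:
    # The same transaction volume concentrated on the non-HODLed supply.
    return annual_volume / (total_supply * (1 - hodl_fraction))

# 1M tokens supporting 50M token-units of annual platform volume:
v = network_velocity(50_000_000, 1_000_000)             # 50, regardless of HODLing
v_liquid = liquid_velocity(50_000_000, 1_000_000, 0.9)  # ~500 on the liquid base
print(v, v_liquid)
```

Raising the HODL fraction from 0 to 0.9 changes `v_liquid` tenfold but leaves `v`, the number that enters M = PQ / V, untouched.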

Velocity will continue to be a problem for single utility tokens.
We need to change the underlying token model.

2. What token models can we propose as a solution to this problem?

I will start by pointing to one model that I find very interesting, the Gnosis / SpankChain model, but I’m hoping this discussion will point me in the direction of more novel ideas.

For Gnosis / SpankChain, there are two tokens: a ‘mint’ token and a ‘utility’ token. The ‘utility’ token is used in the marketplace, is burned to pay platform fees, is pegged to US$1, and has a variable supply based on token usage in the previous period. The ‘mint’ token is staked and generates utility tokens to meet the target supply. More details on how this model works here.

Assuming a somewhat stable platform fee percentage and steady-state usage of the platform, over a long holding period the tokens generated for ‘mint’ tokens are equivalent to the revenue associated with the platform fees, plus the increase in token supply. The problem is that in high-growth situations over short time periods, the change in token supply can affect returns significantly. For speculative, venture-capital-like investment purposes this should be acceptable because, nonetheless, returns are directly related to growth in platform usage with no confounding variables.

You can take a look at how this works in this Google Sheets model.

While imperfect, this two-token model gets us much closer to a cleaner, investable cryptoasset model where ownership of the mint tokens correlates well with market value growth, without any confounding variables. It even has cash flows and yield you can calculate, so you can apply a standard DCF to come to an opinion on value. We would no longer need to rely on the MV = PQ model.
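For example, a standard DCF on mint-token cash flows might look like this (the cash flows and the 30% discount rate are my assumptions for illustration, not figures from the model):

```python
def dcf_value(cash_flows, discount_rate):
    # Present value of a stream of end-of-period cash flows.
    return sum(cf / (1 + discount_rate) ** (t + 1)
               for t, cf in enumerate(cash_flows))

# Suppose staked mint tokens earn fee-linked generation worth $10M per
# year for 10 years, discounted at a venture-style 30% rate:
value = dcf_value([10_000_000] * 10, 0.30)
print(round(value))  # roughly $31M
```

The point is not the numbers but that the exercise is possible at all: the mint token has a yield to discount, which a single utility token does not.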

Ultimately, instead of trying to value single utility tokens, we should be looking for a token model that allows for investability where the value is directly correlated with network value with no confounding variables. These models will definitely resemble securities much more than a utility token — which is likely why a lot of these projects are running towards the default single token model — and we will have to contend with the SEC in the future, but that’s a discussion for another day.

A post-writing note: Kyle Samani presents two further models in New Models for Utility Tokens, which answer questions I posed here. However, the Factom ‘BME’ model differs from the Gnosis model in one important way:

  • In the Factom model, the mint token earns a predictable and stable supply of tokens based on some function determined on a project by project basis. The utility token is both used and speculated upon.
  • In the Gnosis model, speculation and utility value are separated. Speculation is done on the mint token, whereas there should be no speculation on the utility token as it is pegged to USD.

I believe that the Gnosis model allows for a cleaner investment vehicle that separates utility from speculative value. The instability of value in the Factom model still makes the token unattractive as a vehicle for those who want to use the platform but not speculate, which likely increases velocity of those tokens, which would then decrease value of the token.

Additions from further feedback:

1. Further explanations of the GNOSIS / SPANKCHAIN model:
It’s easiest to understand by looking at a spreadsheet:

https://docs.google.com/spreadsheets/d/1A8slg8F_53z4_TxVGL7fIjuyPMWIRirx8gA3ys6z7Lw/edit?usp=sharing

How much of the utility token is minted in any given period depends on two factors: 1) transaction fees in the last period multiplied by the target supply multiple (set at 20 for both), which sets the target supply; 2) current supply less the transaction fees burned in the period. The difference between the two is the amount minted (which can be 0).

In a stable, no-growth state, token generation is equal to fees, which formed much of the basis for my argument. I think having token generation equal to transaction fees (and a mint token to generate those tokens) is the cleanest token model, and I would imagine there’s a fairly obvious way of doing that.

The imperfections in this model appear when growth or decline is introduced. With the target supply multiple set at 20, SPANK staked during periods of growth is rewarded disproportionately (2X+) relative to transaction fees because of the growth in the target supply (see the difference between generation and fees in periods 6–20; and vice versa when transaction fees are in decline, periods 21–35). The degree of disproportionality is a function of the target supply multiple: if it were under 5, the problem would be much less pronounced.

More importantly, in aggregate over any given stretch, transaction fees are still approximately equal to token generation, with the balance being the difference between initial and final supply. (See the total line; of course, over shorter periods, or periods of high growth or decline, this difference can be significant.) This just means that over a longer time frame (20+ periods, because it’s all relative to supply), the disconnect between transaction fees and token generation is less pronounced: more tokens are generated over more time relative to the increase in token supply.
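The minting rule described above can be sketched as a small simulation. This is my reconstruction from the description (starting supply, growth rate, and fee figures are illustrative; the linked spreadsheet is the authoritative source):

```python
def simulate(fees_per_period, multiple=20):
    """One row per period of the mint/burn model described above."""
    # Assume we start at the steady-state supply implied by period-1 fees.
    supply = multiple * fees_per_period[0]
    prev_fees = fees_per_period[0]
    rows = []
    for fees in fees_per_period:
        target = multiple * prev_fees   # last period's fees x the multiple
        burned = fees                   # utility tokens burned as platform fees
        minted = max(0.0, target - (supply - burned))
        supply = supply - burned + minted
        rows.append({"fees": fees, "minted": minted, "supply": supply})
        prev_fees = fees
    return rows

# No growth: generation equals fees every period.
steady = simulate([100.0] * 10)

# 10% growth per period: generation runs well ahead of fees (2X+),
# and the cumulative gap is exactly the change in total supply.
growth = simulate([100.0 * 1.1 ** t for t in range(10)])
```

Running this reproduces both observations: in the flat scenario every period mints exactly the fees burned, while under growth the target supply keeps moving ahead of actual supply, so staked mint tokens capture the supply expansion on top of the fees.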

Overall, I do think the model is workable (and still the most investable model so far) if they tweak the target supply multiple down to reduce the magnitude of disproportionate returns for growth. It’s not going to be a perfect analogue of revenue, but all companies suffer from varying net profit margins or dividends during periods of growth and decline. This is no different.

2. Problems with the Peg
The risk related to breaking the peg is significant.

I think the company has two levers for controlling this: 1) they can adjust the target supply multiple to reach a more appropriate equilibrium (which I keep returning to as the most important variable, for multiple reasons); 2) they should retain control of a meaningful percentage of the mint tokens and sell the utility tokens at the peg price.

As long as supply is in the right range, I think there is not much incentive for service providers to sell for anything less than the peg. The company’s own supply, offered at the peg, should then cap the price and disincentivize selling at anything higher. I think it will also help a lot if they build great, simple mechanisms on the platform that make it very easy for both sides to buy and sell at the peg.

But all of these markets are a bit zany, and the honest answer is that we won’t really know what will happen; the peg breaking really screws with the model.
