Blockheight is not deterministic

Bitcoin Elf
4 min read · May 16, 2020


Abstract: If SF is deterministic, all of the search for cointegration is futile, and the only way to maintain the long-term connection between SF and MC is to argue (as phraudsta argues) that MC is also not following a random walk. This article argues that we could indeed view SF as non-deterministic, but that the distribution of the errors reduces the uncertainty to very little.

Contents

1 Introduction
2 Discussion
2.1 Blockheight as an AR(1) process with a unit root
2.2 Blockheight as a trend-stationary process
2.3 What’s driving the errors
3 Conclusion

Introduction

The determinism debate was triggered by Sebastian Kripfganz’s talk on the main stage of May 2020’s Value of Bitcoin Conference. He argued that the halvings could not be said to be stochastic jumps in SF, as they are pre-programmed. This article discusses (non-empirically) whether we should view the SF variable as a random variable or not.

Discussion

There are a few different definitions of SF. I give an overview of them in my article “The Ammousian SF model”.

For the a priori definition this is very clear: since Satoshi published his code, SF in that definition has been set in stone, a piecewise-linear function of time. There is no stochastic element to it, and consequently there cannot be cointegration.

For the other definitions one has to argue a little more. Let’s first think about the underlying processes: since the block reward is a deterministic function of blockheight, the only possible source of randomness is the number of blocks mined. Thus, I shouldn’t say ‘stock’ and ‘flow’, but ‘blockheight’ and ‘new blocks’. In both definitions, flow is a derivative of stock. That is, the only source of randomness is blockheight. If blockheight were truly a first-order-integrated, I(1), process, that would refute the critique of determinism.
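To make this concrete, here is a minimal Python sketch (my own illustration, not taken from the article or the SF literature) of how block reward, stock, flow and therefore SF all collapse to deterministic functions of blockheight; the helper names and the annualised-flow convention of 52,560 blocks per year are assumptions made purely for illustration.

HALVING_INTERVAL = 210_000
INITIAL_REWARD = 50.0  # BTC per block

def block_reward(height: int) -> float:
    # The block subsidy is fully determined by the block's height.
    return INITIAL_REWARD / 2 ** (height // HALVING_INTERVAL)

def stock(height: int) -> float:
    # Cumulative coins issued by the first `height` blocks (ignoring genesis quirks).
    total, h = 0.0, 0
    while h < height:
        epoch_end = min(height, (h // HALVING_INTERVAL + 1) * HALVING_INTERVAL)
        total += (epoch_end - h) * block_reward(h)
        h = epoch_end
    return total

def stock_to_flow(height: int, blocks_per_year: int = 52_560) -> float:
    # SF = stock / annualised flow; randomness can only enter through `height`.
    return stock(height) / (blocks_per_year * block_reward(height))

print(stock_to_flow(630_000))  # shortly after the May 2020 halving, roughly 56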

Blockheight as an AR(1) process with a unit root

In this view, blockheight tomorrow is blockheight today plus about 144 blocks plus some error. Formally speaking:

Hₜ = Hₜ₋₁ + εₜ

Eq. 1: Blockheight seen as an AR(1) unit-root process

with εₜ an ‘error’ term that has a mean of 144 and some variance, but whose distribution is such that it cannot go below 0. If this were the true data-generating process of blockheight, that would make it a true I(1) variable, as a random walk (as far as I know) does not require the error term to have zero mean or any specific distribution.
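As a quick illustration of Eq. 1, here is a small simulation sketch; the choice of a Poisson(144) distribution for εₜ is purely an assumption on my part, picked because it has mean 144 and cannot go below zero, not something implied by the protocol.

import numpy as np

rng = np.random.default_rng(42)

days = 365
eps = rng.poisson(lam=144, size=days)  # daily block counts: mean 144, never negative
height = np.cumsum(eps)                # Hₜ = Hₜ₋₁ + εₜ, an I(1) process

print(height[-1], 144 * days)          # realised height vs. the pure drift path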

Blockheight as a trend-stationary process

But are we not overlooking something here? The errors are not iid (independent and identically distributed). If we fall short of 144 blocks, the difficulty adjustment kicks in, adjusting difficulty downwards in order to get us back to 144 blocks per day. Wouldn’t a more fitting representation then be

Hₜ = H₀ + 144·t + εₜ

Eq. 2: Blockheight seen as a trend-stationary process

But actually, no. The difficulty adjustment acts in exactly the opposite direction: it does not adjust difficulty so that we get back to the original path, but only so that in the next period the block time is closer to 10 minutes than in the last. That means a deviation of εₜ from 144 stays in the time series in a non-decaying way. That resembles the random-walk representation of blockheight more closely. But when we write this representation as a random walk with drift and an error distributed around zero (Eq. 3), we see that the drift component (which is deterministic) soon dominates the whole series, which is kind of obvious: blockheight now is very close to what it was yesterday, and in two weeks I will not be very surprised by the blockheight.

Hₜ = Hₜ₋₁ + 144 + εₜ

Eq. 3: Blockheight as a random walk with drift and an error with zero mean
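To see why deviations do not decay under the difficulty adjustment, here is a stylised simulation sketch; the daily grid, the hashrate process and the retarget rule are simplifications of my own, not the exact consensus rules, but they capture the point that lost blocks are never made up, so blockheight minus 144·t wanders rather than reverting to zero.

import numpy as np

rng = np.random.default_rng(0)

days = 1_000
target = 144                              # blocks per day
difficulty = 1.0
hashrate = np.cumprod(1 + 0.02 * rng.standard_normal(days))  # assumed hashrate random walk

height, heights = 0, []
window_blocks, window_days = 0, 0
for t in range(days):
    expected = target * hashrate[t] / difficulty
    mined = rng.poisson(expected)         # blocks found on day t
    height += mined
    heights.append(height)
    window_blocks += mined
    window_days += 1
    if window_blocks >= 2016:             # stylised retarget every ~2016 blocks
        # Rescale difficulty so the *future* rate returns to 144 per day;
        # blocks already lost or gained are never compensated for.
        difficulty *= (window_blocks / window_days) / target
        window_blocks, window_days = 0, 0

deviation = np.array(heights) - target * np.arange(1, days + 1)
print(deviation[::200])                   # wanders like a random walk, no mean reversion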

What’s driving the errors?

It is useful to note that the errors εₜ are obviously driven by fluctuations in the hashrate, and those in turn are driven by miners’ cost-benefit analysis. Thus, when Bitcoin declines in value for a while, miners are forced to shut their rigs down; or perhaps, in a global economic downturn, energy becomes abundant, bringing miners with (comparatively) worse hardware back online.

If we look at daily data, the errors will be correlated with each other. But still, looking at blockheight this way makes it a true random variable that is integrated of first order. Further research could, thus equipped, strip away the deterministic part of the stock variable by looking at blockheight and subtracting the drift of 144 blocks per day.
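A detrended series of that kind could be constructed along the following lines; this is only a sketch of the idea, applied here to a simulated series rather than real chain data.

import numpy as np

def deviation_from_drift(height: np.ndarray, drift: float = 144.0) -> np.ndarray:
    # Dev(Blockheight)ₜ: blockheight minus the deterministic drift of 144 blocks per day.
    t = np.arange(len(height))
    return height - (height[0] + drift * t)

rng = np.random.default_rng(1)
simulated_height = np.cumsum(rng.poisson(144, size=365))  # stand-in for real daily heights
dev = deviation_from_drift(simulated_height)
print(dev[:3], dev[-3:])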

Conclusion

Blockheight, and thus SF, is clearly not a trend-stationary process; that is simply not how the difficulty adjustment works. But the randomness in the alternative random-walk-with-drift description of the process is quite small in comparison to the drift. Thus we might be inclined to view SF as a variable with only very little randomness. Future research could investigate the properties of a variable one might call Dev(Blockheight)ₜ, which takes the drift out of the blockheight time series and looks only at the random-walk part. Its performance in an SF model might be quite significant, but the result of reversed causality: the fact that MC is higher than usual triggers higher-than-usual mining and thus more blocks than usual. The relation is thus suspected to be a negative one. But that is further research.
