The Gamma Monstrosity & the Probability Deception

For some time now, I have allowed people to continue with this foolish and unscientific notion of gamma in the Selfish Mining fallacy. In this rather extended article, I intend to finally detail the key issues that surround this form of pseudo-mystic science, if we can call it science at all.

A colleague from nChain and I published a proof of the Small World nature of bitcoin [1]. This is important in demonstrating the propagation method used within bitcoin. The so-called Sybil attacks are modelled falsely on the notion of being able to inject additional hops within the bitcoin network. Unfortunately, the pseudo-academics behind this notion never bothered to test their limited hypothesis.

Gamma, the probability deception

What is Gamma?

The first problem people have is jumping on the bandwagon without understanding the terms that they are seeking to address. This is a common Proof of Social Media ploy. So, to begin, we shall look at the definitions of gamma proposed within the selfish mining fallacy:

With γ = 0, the honest miners always publish and propagate their block first, and the threshold is at 1/3. With γ = 1/2, the threshold is at 1/4. Figure 3 shows the threshold as a function of γ.
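For reference, the two figures quoted above follow from the threshold expression given in the selfish mining paper, under which the attack is claimed to pay once the selfish pool's share of hash power α exceeds (1 − γ)/(3 − 2γ). A minimal check of the quoted values, nothing more than the arithmetic:

```python
# Minimal check of the thresholds quoted above, using the expression
# alpha > (1 - gamma) / (3 - 2 * gamma) from the selfish mining paper.
def selfish_mining_threshold(gamma: float) -> float:
    """Claimed minimum hash-power share at which selfish mining pays off."""
    return (1 - gamma) / (3 - 2 * gamma)

print(selfish_mining_threshold(0.0))  # 0.333... -> the 1/3 threshold
print(selfish_mining_threshold(0.5))  # 0.25     -> the 1/4 threshold
```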

The first issue is that the hypothesis falsely assumes a mesh-based network structure with many hops. It also treats this as a probability calculation when it is in fact a likelihood model that can be negative. Negative probability is the probability, between zero and one, that the opposite will occur. Negative probabilities do exist, but they are generally discouraged, because the real meaning of a negative probability is an error in the model. Unfortunately, the authors of the erroneously deceptive selfish mining attack fail to understand the basics of how bitcoin functions.

The authors have failed to account for a likelihood function concerning loss. That is, the selfish miner has an economic and time cost in processing transactions and blocks. If this cost exceeds the gain from the ability to push their own block in time, gamma becomes negative in their model. A better way of putting this would be to simply state that the model is flawed.

In the misconstrued version of how bitcoin works, the authors treat bitcoin as a loose mesh. In an earlier version of this form of mathematical analysis of bitcoin [2], the authors acknowledged that bitcoin could be vulnerable if the network distance was greater than d = 3. The authors analysed bitcoin mathematically, reporting their assumptions and stating that bitcoin could be flawed if the network was a loose mesh. The inverse scenario leads to Bitcoin being secure if the network hop distance is under d = 3. This work [3] is scientific in nature. It provides a hypothesis and a means of testing, with the assumptions listed.

The selfish mining paper on the other hand ignores all these assumptions and treats bitcoin as a mesh without any validation.

We see the form of Gossip model assumed in the selfish mining proposition in the figure below. The authors of [3] investigated several propagation models in their research, and if Bitcoin were remotely like this form of network, then the concept of gamma could be argued to apply. This would occur as a discovery from one node would reach others sequentially.

Flooding attacks in loose mesh networks differ from Bitcoin

In the group of Sybils proposed within the selfish mining hypothesis, the authors see the formation of a giant node managed and constructed by the “dishonest” selfish miner. In this, any communication from an honest node would likely propagate to the selfish miner, who would be highly connected in the network form that the hypothesis assumes bitcoin takes.

The authors of the selfish mining hypothesis have in effect assumed that most bitcoin miners will act independently and outside of their rational economic interests, such that a system that is not highly connected evolves. Simultaneously, they assume a giant node formation created by the dishonest miner. In this, the selfish miner forms a network of Sybil nodes. This network of Sybil nodes is constructed in a small world topology. The reason for this is that the selfish mining pool will control access to all of these nodes and be highly connected to all of them. The proposed consequence is that honest nodes maintain a loose mesh structure and remain oblivious, inactive, and unresponsive as a giant node or small world cluster of selfish mining Sybil machines infects the network and acts to become the primary backbone.

The mesh that Bitcoin never was

Many pundits within the Bitcoin community seem to have a strong affinity for seeing Bitcoin as a mesh network. In such a mesh with many hops, the majority of nodes could be Sybil’d if a group of densely connected machines in a small world topology acted as a propagation backbone. This is, in effect, the scenario proposed in the small world hypothesis.

Unfortunately, no statement concerning the assumption of a mesh-based network structure is ever made, nor did the authors investigate what might happen if the network is not a loose mesh. As we have already demonstrated, Bitcoin forms a near-complete graph, and as the system becomes commercially more and more valuable, it moves closer to a complete graph.

At scale, Bitcoin is a densely connected system

Bitcoin forms into a complete graph because the miners are incentivised to do so. It is not enough for a miner to find a block; a miner is rewarded, in effect, only when all other miners build upon that miner’s block. This is ensured in bitcoin using a waiting period. On finding a block, the miner must then wait for a long period of time before the block reward can be spent. Consequently, miners are not interested in mere block discovery; they are interested primarily in blocks that other miners build on.

It is this requirement to have blocks that other miners accept that makes Bitcoin a system incentivised to scale through densely connected graph structures. The misconception that non-mining nodes add any importance to Bitcoin block propagation or security is a social media attack and is irrelevant to the structure of Bitcoin.
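To make the contrast concrete, here is a minimal sketch comparing worst-case propagation distance in a toy loose mesh against a near-complete graph. The node counts and edge probabilities are illustrative assumptions only, not measurements of the real network:

```python
# Toy comparison of propagation distance: a sparse "loose mesh" versus a
# near-complete graph. Node counts and edge probabilities are assumptions
# chosen purely for illustration.
import random
from collections import deque

def random_graph(n: int, p: float, seed: int = 1) -> dict[int, set[int]]:
    """Undirected random graph: each possible edge is present with probability p."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def worst_case_hops(adj: dict[int, set[int]], source: int = 0) -> int:
    """BFS from the block's origin: worst-case hop count to any reachable node."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values())

loose_mesh = random_graph(n=200, p=0.03)      # sparse: several hops to reach everyone
near_complete = random_graph(n=200, p=0.95)   # densely connected mining core

print("loose mesh, worst-case hops:", worst_case_hops(loose_mesh))
print("near-complete graph, worst-case hops:", worst_case_hops(near_complete))
```

In the near-complete case, almost every miner receives a new block in a single hop, which is the property the rest of the argument relies on.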

In a near-complete system, no Sybil machine adds value. If we take a thought experiment about the optimal selfish mining structure in bitcoin as a near-complete graph, we see the following results (a toy latency model follows the list):

1. An honest node discovers a block and propagates it to all connected neighbours,

2. In a single hop, all major mining nodes now have a copy of the honest block,

3. As the honest mining nodes start building on this block, the selfish mining Sybil system has to propagate the block, enact a verification scheme, and then react after all other nodes.

a. An important point to note here is that the Sybil network acts as a separate giant node structure. The process is:

i. Accept a candidate block from the honest miner,

ii. Validate this block, as any spy mining or other unvalidated release of a selfish block would invalidate the mining strategy for the selfish mining pool,

iii. Propagate the competing block following a validation delay.

4. In all instances, there is a marked validation delay for the selfish miner.
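A toy latency model of the steps above; the millisecond figures are assumptions chosen for illustration, not measurements:

```python
# Toy latency model of the thought experiment above. All figures are assumed
# for illustration: in a near-complete graph the honest block reaches every
# major miner in one hop, while the selfish pool must receive, validate, and
# then relay its competing block through its Sybil overlay.
ONE_HOP_MS = 50          # assumed single-hop propagation latency
VALIDATION_MS = 200      # assumed block validation time at the Sybil ingress
OVERLAY_HOPS = 2         # assumed extra hops through the Sybil overlay

honest_latency = ONE_HOP_MS
selfish_latency = ONE_HOP_MS + VALIDATION_MS + OVERLAY_HOPS * ONE_HOP_MS

print(f"honest block reaches the other miners after ~{honest_latency} ms")
print(f"selfish pool can react only after ~{selfish_latency} ms")
print(f"the selfish miner lags by ~{selfish_latency - honest_latency} ms")
```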

The authors of the selfish mining hypothesis have clearly never investigated the integration costs associated with managing a large botnet. Such systems are managed using command-and-control servers. The management of such systems requires constant interaction to prevent their discovery from being made public. This cost, of course, is ignored within the selfish mining hypothesis.

Even ignoring this cost and the economic constraints it imposes, the introduction of a second giant network structure as an overlay to bitcoin imposes delay.

In a complete graph, the introduction of Sybil nodes does not provide any benefit to the attacker. Even ignoring the cost of these nodes, what we see is the introduction of a delay, or lag, in the propagation of selfish mining blocks. This delay has a negative impact on the likelihood of the selfish miner’s block being accepted. Consequently, this is part of the flawed nature of the selfish mining hypothesis that was never tested.
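A rough way to put a number on that lag, assuming block discovery behaves as a Poisson process with a 600-second mean interval (an assumption consistent with Bitcoin's target block time; the delay values are illustrative):

```python
# Rough estimate: the probability that a rival block is found somewhere on the
# network during an extra delay of d seconds, assuming Poisson block arrivals
# with a mean interval of 600 seconds.
import math

def rival_block_probability(delay_s: float, mean_block_time_s: float = 600.0) -> float:
    return 1.0 - math.exp(-delay_s / mean_block_time_s)

for d in (0.5, 2.0, 10.0):
    print(f"extra delay {d:4.1f} s -> {rival_block_probability(d):.3%} chance of a rival block")
```

Every additional relay or validation step the selfish overlay introduces pushes this probability up, and it is a pure cost: the honest miners do not incur it.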

The Selfish miner lags the honest strategy

Models and Science

At this point, I shall digress to introduce the concept of science to a crowd that seems not to understand its basic premise: that you test a hypothesis. There is a vast distinction between science and the so-called mysticism that some people ascribe to, the pseudo-science that has continued into this modern time.

“so-called ‘refutations’ are not the hallmark of empirical failure, as Popper has preached, since all programmes grow in a permanent ocean of anomalies. What really counts are dramatic, unexpected, stunning predictions: a few of them are enough to tilt the balance; where theory lags behind the facts, we are dealing with miserable degenerating research programmes.” [4]

To be science, it is necessary that a hypothesis predicts an event that can be tested. To have science, you do not merely speculate; you set up a hypothesis to be tested and you gather evidence to support or refute it. This leads to a theory. Before such a system that may be empirically tested is created, there is not even a theory. Science is never proven. What occurs is that we create better models of the Universe.

Newton was not “wrong”. Even now, his theory of gravity is used for many calculations in preference to Relativity, even though we know that Relativity is a better and more accurate model.

Why?

The Newtonian calculations are simpler. Even though we can obtain more accurate results using the relativistic formulae, the Newtonian formula suffices for many calculations. We only need to use the more accurate (and more difficult) equations when it is warranted. For instance, in satellite deployment, the time drift from the velocity differential is enough to create vast errors in the GPS system. These need to be calculated relativistically. That stated, all that was used to place a man on the moon was Newtonian calculation.
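As a rough worked example, using commonly cited approximate figures: a GPS satellite clock runs slow by about 7 microseconds per day because of its orbital velocity, and fast by about 45 microseconds per day because it sits in a weaker gravitational field, for a net gain of roughly 38 microseconds per day. Left uncorrected, that drift alone corresponds to ranging errors on the order of ten kilometres per day:

```python
# Back-of-the-envelope GPS clock drift, using approximate textbook figures.
SPECIAL_RELATIVITY_US_PER_DAY = -7.0    # clock slows due to orbital velocity
GENERAL_RELATIVITY_US_PER_DAY = +45.0   # clock speeds up in weaker gravity
METRES_PER_MICROSECOND = 300.0          # light travels ~300 m per microsecond

net_drift_us = SPECIAL_RELATIVITY_US_PER_DAY + GENERAL_RELATIVITY_US_PER_DAY
range_error_km = net_drift_us * METRES_PER_MICROSECOND / 1000.0

print(f"net clock drift: ~{net_drift_us:.0f} microseconds per day")
print(f"uncorrected ranging error: ~{range_error_km:.1f} km per day")
```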

More, we often choose to use the lower grade, older model as it IS better. We know that Newton was approximately correct, and that Einstein was closer to the truth, but we cannot always measure the accuracy to a sufficient level to warrant the change. Sometimes, we cannot calculate all the variables in the model we know to represent the best model of truth as we know it, and the only option is to use the older model.

Science is a process of modelling the “truth”. This is not about who made the Universe. It is not about whether there is something “before” time [5]. It is about reality as we perceive it. Kurt Gödel, in 1931, with his incompleteness theorems, demonstrated mathematically that only the simplest of arithmetic calculations can be complete [6].

Science is a model of the world. We create better models over time, and we replace some models and keep others, with reminders of their inconsistencies, even though we know they are not “true”. The reason comes when they offer a solution. Science is an incomplete model. We do not solve it, and we cannot make a hypothesis scientifically about things we cannot test.

What science does offer is a means to see through mysticism and pseudo-scientific quackery.

Formally, Gödel’s theorem states, “To every ω-consistent recursive class κ of formulas, there correspond recursive class-signs τ such that neither (ν Gen τ) nor Neg(ν Gen τ) belongs to Flg(κ), where ν is the free variable of τ” (Gödel 1931).

Why do we use simple models at all?

We love to make simplified models. As I noted above, we still use Newtonian models, and there is reason: they work most of the time. Even this falls over, and we cannot calculate a generalised three-body problem of gravitational attraction, as put forth by Newton, even now. If we tried this using relativistic equations, we would not have the computational power, with all the computer systems on earth and a few lifespans, to do it. There is an economic cost to all calculation.

Back in 1887, the mathematicians Ernst Bruns and Henri Poincaré demonstrated an elegant generalised system that offered proof that there is no general analytical solution for the three-body problem when defined using algebraic expressions and integrals. This does not say that one could not exist, but that it cannot be completed using the mathematics we have at our disposal.

In this, they demonstrated that the motion of three bodies is generally non-repeating, except in special cases. Right now (as far as I last know), we have a total of 16 specific solutions to the three-body problem. The last 13 of these were found only in recent years (http://arxiv.org/abs/1303.0181).

These are great and have a wonderful purpose, but we need to remember the world is bigger and more complex than we can understand.
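In practice, then, the three-body problem is handled numerically, step by step, using the Newtonian model. A minimal sketch, with arbitrary toy masses, positions, and units rather than any of the configurations from the cited work:

```python
# Minimal numerical integration of the Newtonian three-body problem using a
# velocity-Verlet (leapfrog) scheme. Masses, positions, and units are toy
# values chosen only to illustrate the step-by-step approach.
import numpy as np

G = 1.0                                        # toy gravitational constant
m = np.array([1.0, 1.0, 1.0])                  # three equal masses
pos = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]])
vel = np.array([[0.0, -0.3], [0.0, 0.3], [0.3, 0.0]])

def accelerations(pos: np.ndarray) -> np.ndarray:
    """Newtonian gravitational acceleration on each body from the other two."""
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * m[j] * r / np.linalg.norm(r) ** 3
    return acc

dt = 0.001
acc = accelerations(pos)
for _ in range(10_000):
    pos += vel * dt + 0.5 * acc * dt ** 2
    new_acc = accelerations(pos)
    vel += 0.5 * (acc + new_acc) * dt
    acc = new_acc

print(pos)   # positions after the integration; the motion is generally non-repeating
```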

Models are just that. When we lose sight of this, we start to lose sight of what we can achieve. But, importantly, models need to be tested against the real system. When a paper comes out claiming a need to change a system (such as Bitcoin) and offers a model, we should not simply react. Before we even start to consider the model, we need to look for evidence that may support it.

Many models of reality are based on Euclidean space (geometry). The Friedmann–Lemaître–Robertson–Walker metric is an exact solution of Einstein’s field equations of general relativity. From it and the general relativistic formula, we find that space is only approximately flat. A good approximation for most purposes, but flat it is not. To really model the world, we have to start with CAT(k) spaces, Hadamard spaces, and constructs such as Hilbert spaces in the quantum mechanical world.

For the most part, the error rate is small, and the calculation cost is such that we use a classical model. This does start to fail in modern applications. For example, for the timing system on GPS, we need to use a relativistic calculation, as the time difference experienced is significantly affected by the differential velocity of the Earth relative to the satellite. With a classical model, the result would be a large error that continued to grow.

Science is all about models. We like to believe we can know it all, but unbounded knowledge is something that will always lie outside our grasp. Gödel proved that.

So, where does this leave us?

We must always test a hypothesis. And, it is important to note that there are always unintended consequences.

Whenever we start going down a path of implementing changes to a system, we need to think about the consequences. This is both the seen and the unseen.

It is perhaps most importantly the unseen. We cannot say that an intervention has achieved the best outcome, and that it is better than something else, when we have not actually compared it. This means that when we look at a risk reduction process in information technologies, or implement new economic policy, or for that matter any other intervention, we need to investigate it fully.

In business, good project managers and, more importantly, portfolio managers will investigate the various results and compare these against what is in effect a null hypothesis. That is, they will compare a sample project against the status quo. When doing this, we need to also contemplate all the alternatives. This is where many people fail.

More importantly, when we are doing correlational studies and investigations that do not allow for experimentally controlled trials, it is critical that we investigate causation in a rigorous process. Double-blind tests are the gold standard in science. These are not always feasible, and the costs and benefits should always be considered, but in all instances, change should not be conducted for change’s sake, and a hypothesis is never more than mere speculation without testing.

Austin Bradford Hill (1897–1991) was a British medical statistician whose great contribution to science was to leave us with a set of minimal conditions required to establish a causal relationship between two events. Hill’s criteria have become the basis of modern epidemiological research. They are one of the mainstay methodologies that I use in my research, both in evaluating economic effects and when investigating malicious software and other security controls. They are useful in evaluating the human aspects of information security and risk. It is not just epidemiology but other fields, such as economics, that can benefit from this approach.

Hill’s criteria are the basis of good scientific research where we are seeking to establish a causal relationship amongst social phenomena, and in particular ones where we cannot engage in controlled trials. In some instances, this is in fact better than a controlled trial, as the process of creating a controlled trial changes the environment and creates a bias in many of the results. Although it may be true that a controlled trial has provided the best answer to a problem, it is not always true that we are investigating the same problem. One example would be looking at studies of irrationality. University controlled trials testing the reactions of students have generally biased the results. In selecting risk trials, for instance, we take selective forms of risk that bias the results towards male or female risk takers in the study. Later studies have now shown that these original studies into irrationality were the result of poor methodology, with both women and men exhibiting similar levels of risk. What was demonstrated is that the forms of risk taking differ between men and women, but overall the levels of risk are similar.

If we are to make a claim that population growth results in poverty, or that capitalist governments cause poverty in developing nations, or even that Keynesian spending results in long-term growth, then to be scientific in our approach we need to demonstrate a causal relationship. Hill’s criteria provide one of the ways in which we can do this.

For example, as an economic investigation, we can formulate a strategy and hypothesis concerning welfare-based systems such as a guaranteed minimum wage.

It seems a good process, and we have created a safety net. Again, the issue is that we are not thinking of the unseen events, that is, what would occur if there were no welfare. In starting such a welfare system, the differential incentive to work decreases. This is not hard to explain.

Basically, it is not poverty but the differential between incomes that people see as the greatest disparity. Even in Western nations, where there is no need to be poor, and where in fact most of the poor are wealthier than the middle class of 100 years ago, what we see is a desire for more relative to others in society.

So, the result of such a policy is to have more people enter welfare. The result: those earning need to pay more for the increased welfare state. There are fewer incentives for productive work. The differential decreases further, making welfare more attractive and incentivising more people onto state support. The result: more welfare… A degrading cycle of creating more and more welfare.

This takes us back to Hill’s criteria. Here we have nine criteria to measure any correlation effects against. These are:

1. A temporal relationship, where the cause always precedes the outcome. If there is some factor that is believed to cause an event, this factor must always necessarily precede the event. This first criterion is the most critical and essential of all of Hill’s criteria. If it does not hold, then we have a correlation alone and no causal effect.

2. Next, we need to consider the strength of the relationship. This is a statistical measure of how strongly the factors are related. We can look at the Pearson correlation coefficient as a means of testing this value (see the sketch after this list).

3. Next, there is an effect-response relationship. This is a measure of input: as we increase the amount of one factor, the other must also increase. For instance, if we put more time into training people in security awareness, then naturally, for the relationship to be causal, we would have to see improved security. The improvement is not required to be linear, and we may find that each incremental expense returns less, but it must return something more than it would have if it were not there.

4. The fourth criterion is consistency. The results need to be replicable and repeatable. They should apply in different population groups and samples.

5. Next, we look at plausibility. The association that we are purporting to exist needs to be supported by a valid theoretical basis. There needs to be some phenomenon that can act in a manner that causes the result or event.

6. The sixth criterion is that we consider alternative explanations. Many so-called scientists fail here. They just assume that a relationship matches their understanding. It may be true that we can dismiss many arguments out of hand, as they have already been investigated and shown to be false, but this does not mean that we do not consider alternative explanations. We must always consider multiple hypotheses prior to making any conclusion about a causal relationship between the events we seek to explain and investigate.

7. Experimental evidence is also important. Even though we cannot expect to completely re-create an event, we should be able to implement an appropriate experimental regime that supports our causal argument.

8. Next, there is a requirement that the causal effect is specific. This is one of the weaker criteria, and we can demonstrate causal effects without it. The absence of specificity does not negate a causal relationship, but the existence of specificity between associations does add additional support to the existence of a causal relationship. Here it is important to always examine specific causal relationships within a larger systemic environment.

9. Lastly, we have coherence. Ideally, any association we are purporting to exist should fit within the body of existing theory and knowledge. There are, of course, ways to introduce new theory, and Thomas Kuhn referred to these changes to the accepted theoretical basis of science as a “paradigm shift”. To reject the existing theoretical basis of science, we need to have a particularly good and strong proof and evidence supporting our new claim of causality.
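A minimal sketch of the second and third criteria, using made-up numbers purely for illustration: toy data on hours of security-awareness training against incident counts, with the Pearson coefficient as the strength measure mentioned above:

```python
# Toy illustration of Hill's "strength" and "effect-response" criteria.
# The numbers are invented for illustration: more training, fewer incidents.
import numpy as np

training_hours = np.array([0, 2, 4, 6, 8, 10, 12])
incidents = np.array([30, 26, 21, 19, 15, 14, 11])

r = np.corrcoef(training_hours, incidents)[0, 1]
print(f"Pearson correlation: {r:.3f}")    # strongly negative association

# Effect-response: each increment of training should still buy some reduction,
# even if the marginal benefit shrinks.
print("incident change per 2-hour increment:", np.diff(incidents))
```

Passing these two checks says nothing about temporality, plausibility, or the other criteria; it is the full set, taken together, that supports a causal claim.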

The third of Hill’s criteria, the effect-response relationship, is one that seems to be missing in much of the so-called science we see. For instance, in carbon studies we should see a related increase in atmospheric CO2 leading to a corresponding increase in global temperatures (all other things being equal). We should also see a corresponding and commensurate decrease in global temperatures as atmospheric CO2 levels decline. This has rarely been investigated in the scientific literature, which still relegates much of climate study to pseudoscience.

In a couple of my publications, for instance, we looked at the effects of economic sanctions on criminal groups involved in cybercrime. Two of these papers are:

“Criminal Specialization as a Corollary of Rational Choice” (2012)

and

“Territorial Behavior and the Economics of Botnets” (2012)

In demonstrating the economic effects of a policy designed to reduce cybercrime, we need to investigate all of Hill’s criteria. In this instance, what we find is that cybercriminals are rational actors. Like most other people in society, when they are acting individually, they act in their own rational interest. When they are offered opportunities that provide better returns for low risk, they will take these over the alternative of a poor return or one with high risk.

Of course, the current answer to this, and the mainstay of many Keynesian economists that populate government circles, is to argue that what is rational for the individual may not be rational for society. They argue that collective irrationality comes of individual rationality.

Of course, what they are saying is not that the collective actions of the many are in fact irrational, but that people choose things that they did not desire. This is a typical political fallacy that is designed to appear scientific. Many economists make very poor scientists.

This is, of course, rational in itself. For when we see government favouring Keynesian-aligned big government economics, and rewarding those who support the idea of big government, we also see incentives given by government to those who promote big government. So, the individual behaviour of these rational economists is to support an irrational policy itself. It is, of course, rational to support a biased policy when you gain from it at the expense of others.

What could be termed irrational is a libertarian policy that exists with little support, and certainly none from government. As a libertarian, one must fight harder and do more. But here it again comes down to what is a subjective value. When comparing rationality, we need to look at the values of the individual. In my case, my greatest value is freedom, and that cannot be given through big government and subjugation. In this case, economic constraints and an acceptance of lower benefits come as the cost of upholding one’s values.

One further part of this rant is that we need to start accepting that it is not irrational to hold one’s values. In fact, the only rational choice is to uphold one’s values. In this we start to see where one’s values lie.

Proof of Social Media

Before we allow ourselves to be swayed by an unpublished paper on a topic without evidence, we should always stop and think through the scenario proposed.

Who benefits, and why?

References:

[1] Marco Alberto Javarone and Craig Steven Wright (2018) “From Bitcoin to Bitcoin Cash: a network analysis”, CryBlock’18: 1st Workshop on Cryptocurrencies and Blockchains for Distributed Systems, June 15, 2018, Munich, Germany, ACM.

[2] Moshe Babaioff, Shahar Dobzinski, Sigal Oren, and Aviv Zohar (2011) “On Bitcoin and Red Balloons”, https://arxiv.org/abs/1111.2626

[3] Shree Om and Mohammad Talib (2011) “Using Merkle Tree to Mitigate Cooperative Blackhole Attack in Wireless Mesh Networks”, https://www.semanticscholar.org/paper/Using-Merkle-Tree-to-Mitigate-Cooperative-Blackhole-Om-Talib/55f26a17d256ac510d61ac6676b714eddfd8092a

[4] Imre Lakatos, “Science and Pseudoscience” (transcript), http://www2.lse.ac.uk/philosophy/about/lakatos/scienceandpseudosciencetranscript.aspx

[5] By definition, there can be no “before” the start of the Universe, as time is a function of the Universe. If there is a prior to the universe, and something that we have “derived” from, it is not a function of time per se.

[6] Rebecca Goldstein, Incompleteness: The Proof and Paradox of Kurt Gödel, http://www.amazon.com/Incompleteness-Proof-Paradox-Godel-Discoveries/dp/0393327604


Craig Wright (Bitcoin SV is Bitcoin.)
