An Analytic Model of the Performance of a Forked Bitcoin Blockchain with Two Block Size Limits
tl121

I am sorry to inform you that your state table is not correct.

The assumptions do matter. The introduction of verification time and node behaviour changes this model significantly.

If you are modelling p and q with both large and small blocks, then the probability rates are wrong. It is not rate p; it is rate p conditional on the block being under the size limit that q accepts. As soon as p mines a block that q does not accept, the fork drops all future q blocks as well.

Modelling in epochs is an approximation. The differences from the negative (and standard) binomial are not large, but run over and over, they add up. That small difference in the graph below is acceptable for one or two blocks, but it soon makes a difference when compounded; the very small values in the figure below add up.

So, you need to use the negative binomial: what is the probability that m blocks from q occur before n blocks from p, where n < m? Sum over all such values. I have not done this, so I cannot say what the difference is.
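That sum is just the negative binomial tail. A minimal sketch, assuming each new block is an independent Bernoulli trial won by q with probability q and ignoring all timing effects (the function name and signature are mine, for illustration):

```python
from math import comb

def q_before_p(m: int, n: int, q: float) -> float:
    """Probability that q mines m blocks before p mines n blocks,
    treating each block as an independent trial won by q w.p. q.
    This is the negative binomial sum described above: the m-th
    q-block must arrive after at most n - 1 p-blocks."""
    p = 1.0 - q
    return sum(comb(m - 1 + j, j) * q**m * p**j for j in range(n))
```

For example, `q_before_p(6, 12, 1/3)` gives the chance a 1/3 minority finds 6 blocks before the majority finds 12, under those simplifying assumptions only.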

That is before you add the validation and propagation time. These are small, so most people dismiss them; the trouble is, the effect is statistically significant.

At p=3 the probability is incredibly low. Possible, but low. By p=12, q=6, with p=2/3 and q=1/3, we are talking black swans, and the race is more likely to run past the difficulty change and thus be safe. So you are also better off looking at how long it takes before the chain is effectively safe. For instance, take our q=1/3 system: it has 6 blocks and p has 12… at what rate will q manage to catch up to p?

At that point, you have a likelihood of 0.002058 of catching up if the two systems start at the block difficulty change. If you do this later, closer to the difficulty change, the figure changes again.
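For a rough sense of catch-up odds, the textbook gambler's-ruin approximation can be sketched as below. Note that it assumes constant difficulty, instant propagation, and independent trials, which is exactly what this comment argues against, so it will not reproduce the 0.002058 figure above; the function name is mine:

```python
def catch_up_probability(q: float, deficit: int) -> float:
    """Gambler's-ruin approximation: chance a minority with hash-rate
    share q (< 0.5) ever erases a lead of `deficit` blocks, assuming
    constant difficulty, instant propagation, and independent trials."""
    p = 1.0 - q
    return (q / p) ** deficit
```

`catch_up_probability(1/3, 6)` gives (1/2)^6 = 0.015625; the gap between that and figures such as 0.002058 illustrates how much the simplifying assumptions can matter.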

The other assumption, a log-linear increase in difficulty, also makes a difference that cannot be discounted.

So, it is a model, but not an accurate enough one. Bitcoin is similar to an SEIR-C epidemic model: the times do matter, and you cannot assume them away.

Most critically, network distribution matters.

The minority forms a pruned sub-graph.

Once the fork happens, the propagation rates change. The majority remains a giant component, or supernode, while the minority operates on a limited transaction graph that still connects to the other network but rejects its blocks.

A question… how often does the p=2/3 majority get 6 blocks in under 90 minutes? 150?

How often does the q=1/3 minority get 6 blocks in under 180 minutes? 150?
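These waiting-time questions have a closed form: with exponential inter-block times, the time to n blocks is Erlang-distributed. A sketch, assuming the full network averages one block per 10 minutes and each side mines in proportion to its hash-rate share (all names here are mine):

```python
from math import exp, factorial

def erlang_cdf(n_blocks: int, rate_per_min: float, t_min: float) -> float:
    """P(at least n_blocks arrive within t_min minutes) for a Poisson
    process with the given rate: 1 - P(Poisson(rate*t) <= n_blocks - 1)."""
    lam = rate_per_min * t_min
    return 1.0 - sum(exp(-lam) * lam**k / factorial(k) for k in range(n_blocks))

# Majority (2/3 of hash rate): 1/15 blocks per minute.
# Minority (1/3 of hash rate): 1/30 blocks per minute.
six_in_90 = erlang_cdf(6, 1/15, 90)    # majority, 6 blocks in 90 min
six_in_180 = erlang_cdf(6, 1/30, 180)  # minority, 6 blocks in 180 min
```

Both work out to roughly 0.55, since the expected count is 6 in each case; difficulty adjustment, which this sketch ignores, changes the minority's rate over time.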

They are also not independent queue models. I know we like to simplify these all the time, but they are time-censored Poisson models, and the maths for the simplified system does not equate to the actuality of it.
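One way to see the time-censoring point is a small Monte Carlo: race q to m blocks against p to n blocks, but cut the race off at a fixed horizon, as a difficulty epoch effectively does. The 10-minute mean and all names are assumptions for illustration:

```python
import random

def censored_race(q: float, m: int, n: int, horizon_min: float,
                  mean_block_min: float = 10.0,
                  trials: int = 100_000, seed: int = 7) -> float:
    """P(q reaches m blocks before p reaches n, within horizon_min
    minutes). Inter-block times for the whole network are exponential
    with the given mean; each block is won by q with probability q."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        t, qi, pi = 0.0, 0, 0
        while qi < m and pi < n:
            t += rng.expovariate(1.0 / mean_block_min)  # next block arrival
            if t > horizon_min:
                break  # epoch ends before the race is decided
            if rng.random() < q:
                qi += 1
            else:
                pi += 1
        wins += (qi >= m)
    return wins / trials
```

With the horizon pushed towards infinity this converges to the untimed negative binomial answer; with a finite horizon it is strictly smaller, and that gap is what the simplified maths leaves out.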

I am never a fan of simplified models …
