Merry Christmas! (My wife tells me I should call this “The Christmas Tree Rule”)

A method to reduce Rule 30 computational effort

Alex Parker
Dec 2 · 10 min read

“Does computing the nth cell of the center column require at least O(n) computational effort?” — Stephen Wolfram

This question is one of three challenges posed by Stephen Wolfram at https://rule30prize.org/. The problem domain is elementary cellular automata, of which there are 256 basic rules; the challenge specifically tests the computational effort required to generate values according to Rule 30.

Elementary cellular automata, broadly speaking, are rules that operate on a row of cells in discrete iterations. These rules are visualised as follows:

(source: https://writings.stephenwolfram.com/2019/10/announcing-the-rule-30-prizes/)

In each iteration, each cell’s two direct neighbours, along with its own value, are compared against the eight patterns in the rule in order to compute the next value of the cell.

(source: https://writings.stephenwolfram.com/2019/10/announcing-the-rule-30-prizes/)

When each iteration is computed in this fashion, the effort required is a function of the desired number of iterations and the number of comparisons needed per iteration, giving a total of O(n²) comparisons.
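To make the per-cell comparison concrete, here is a minimal sketch in JavaScript (an illustration, not the prototype discussed later) of a single Rule 30 iteration, where each output cell costs one pattern lookup:

```javascript
// A single Rule 30 iteration: one lookup per output cell, with cells outside
// the current row treated as 0.
const RULE_30 = { "111": 0, "110": 0, "101": 0, "100": 1,
                  "011": 1, "010": 1, "001": 1, "000": 0 };

function rule30Step(row) {
  const padded = [0, 0, ...row, 0, 0];          // implicit zeros beyond both edges
  const next = [];
  for (let i = 1; i < padded.length - 1; i++) {
    next.push(RULE_30[`${padded[i - 1]}${padded[i]}${padded[i + 1]}`]);
  }
  return next;                                   // grows by one cell on each side
}

// Starting from a single black cell:
let row = [1];
for (let i = 0; i < 5; i++) {
  console.log(row.join(""));                     // 1, 111, 11001, 1101111, ...
  row = rule30Step(row);
}
```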

The apparent irreducible nature of this rule has been approached from many angles, which are well summarised on the web page that accompanied the announcement of the challenge.

The method outlined herein takes an algorithmic approach to reducing the total effort of computing Rule 30.

Edit — This approach is not O(n)

If I graph the results for different configurations, while I can achieve a constant-factor reduction in effort, the growth trend is still quadratic:

That said, I will leave the post up: the algorithm still provides significant reductions in computational effort and can calculate row(n) for large numbers of iterations with significantly less effort than the standard and “rhombus” approaches.

Determining baseline effort

To generate row(n) in Rule 30 from the basic patterns, one must have already computed row(n-1). Each row n requires n * 2 + 1 comparisons against the previous row to compute its values.

Row 1 requires 3 comparisons, Row 2 requires 5, then subsequent rows require 7, 9, 11, etc.

To fully compute any target row n by matching the standard Rule 30 patterns, the number of comparisons is therefore n * (n + 2).

Row 10,000 would therefore require 100,020,000 comparisons in total against the standard ruleset.
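As a quick sanity check of these figures, a small hypothetical helper (not from the prototype) can sum the per-row costs directly:

```javascript
// Row k needs 2k + 1 comparisons, so rows 1..n need n * (n + 2) in total.
function baselineComparisons(n) {
  let total = 0;
  for (let k = 1; k <= n; k++) total += 2 * k + 1;
  return total;
}

console.log(baselineComparisons(10));      // 120  (= 10 * 12)
console.log(baselineComparisons(10000));   // 100020000, the figure quoted above
```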

Can we exploit patterns to reduce this effort?

Repeating triangles of all sizes appear throughout the data, starting at any point. Given an arbitrary sequence of cells of any length from any row, every subsequent recurrence of that pattern results in the same “triangle” extending downwards, with all cells within the highlighted area able to be safely and accurately calculated by repeated iterations of Rule 30, per Figure 1:

Figure 1 — Arbitrary sequences repeat downwards in a so-called “safe triangle”

While it is not yet clear whether it is possible to predict where specific patterns will appear, the contents of these triangles are predictable across subsequent iterations wherever the topmost input string appears.

It therefore follows that if we have a lookup table of all possible patterns of a given length, we can generate rows multiple steps ahead of the current iteration, skipping entire blocks of iterations at each step.
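As an illustration of the safe triangle (again a sketch, not the prototype), iterating Rule 30 on a seed string without padding yields exactly the cells determined by that seed; the seed from Figure 1 produces four further rows, ending in a single cell:

```javascript
// Iterating Rule 30 on a seed string *without* padding keeps only the cells
// that are fully determined by the seed, i.e. the "safe triangle".
const RULE_30 = { "111": "0", "110": "0", "101": "0", "100": "1",
                  "011": "1", "010": "1", "001": "1", "000": "0" };

function safeTriangle(seed) {
  const rows = [seed];
  let row = seed;
  while (row.length >= 3) {
    let next = "";
    for (let i = 1; i < row.length - 1; i++) next += RULE_30[row.slice(i - 1, i + 2)];
    rows.push(next);
    row = next;
  }
  return rows;
}

// The seed from Figure 1: wherever "001110011" occurs in any row, exactly this
// triangle follows beneath it.
console.log(safeTriangle("001110011").join("\n"));
```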

Cost savings of row skipping?

What if I could instead generate only every 5th row, from a special set of rules that guarantees the generated output matches the expected values at the equivalent row?

In steps of 5, the output would look like this:

Figure 2 — Generating stepwise rows of n+5

In that case, I’d only have calculated n = 5, 10, 15, 20, 25 and 30, with only 216 pattern matches needed.

We can figure out how much computation is needed with the simple formula:

Step Generation Effort = n * (stepCount + 1) + stepCount, where stepCount = n / Step Size
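To check this against the Figure 2 example, a small hypothetical helper can count one pattern match per cell of each generated row:

```javascript
// One pattern match per cell of each generated row: row k * stepSize has
// 2 * k * stepSize + 1 cells.
function stepwiseMatches(n, stepSize) {
  const stepCount = n / stepSize;          // assumes n divides evenly by stepSize
  let total = 0;
  for (let k = 1; k <= stepCount; k++) total += 2 * k * stepSize + 1;
  return total;                            // equals n * (stepCount + 1) + stepCount
}

console.log(stepwiseMatches(30, 5));       // 216, as in Figure 2
console.log(stepwiseMatches(10000, 5));    // 20012000, used in the results below
```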

Lookup Table Effort Calculation

By exploiting the downward triangular patterns we can calculate a lookup table that enables us to generate iteration n + Step Size directly from iteration n.

For each pattern in the lookup table, the data within the matching “safe triangle” can be calculated by applying Rule 30 to the seed string. Referring to Figure 1, to generate a lookup table value for the string “001110011” we iterate Rule 30 four times.

There are several obvious ways to approach creating a lookup table. The basic approach would be to pre-generate a fixed-size lookup table, which divides the overall generation effort by a significant constant factor, although the growth remains quadratic as n approaches infinity.

Another approach would be to dynamically grow the lookup table by orders of magnitude as more data is generated. This could yield a better-than-constant improvement over time; however, it would also require each generated row to be analysed for new entries, and complicated logic to back-fill and pre-fill cells.

For simplicity and proof of concept I decided to proceed with the basic pregenerated table.

A pregenerated lookup table can be constructed if we define our desired maximum step size (or use an algorithm to calculate the most efficient table size given the number of values we wish to generate).

As the step size increases, so does the number of neighbouring cells we must examine, and therefore the length of our pattern. For example, for steps of size 1, we need to examine 3 cells to obtain the subsequent row. For steps of size 2, we must examine 5 cells, then 7 for steps of size 3, and so on. This can be calculated as:

Lookup Pattern Length = Maximum Step Size * 2 + 1

Using this equation for a step size of 5, we would require a lookup pattern length of 11. There are 2¹¹ (2048) possible binary sequences, requiring us to generate a range of patterns from 00000000000b to 11111111111b mapped to their direct descendants and ultimate final value. This means we have to calculate patterns for all steps between n+1 and n+5, with an effort that can be determined using:

Lookup Table Effort = (2 ^ Lookup Pattern Length) * (Step Size * (Step Size + 2))

(this value is also equivalent to the memory cells required to store the lookup table patterns)
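The following is a rough sketch of how such a pre-generated table could be built (the helper names are hypothetical, not taken from the prototype): every pattern of length Step Size * 2 + 1 is iterated down to its descendants, the last of which is the single cell Step Size rows below the pattern’s centre:

```javascript
// Build a table mapping every (stepSize * 2 + 1)-cell pattern to its descendant
// rows, down to the single cell stepSize rows below the pattern's centre.
const RULE_30 = { "111": "0", "110": "0", "101": "0", "100": "1",
                  "011": "1", "010": "1", "001": "1", "000": "0" };

function rule30Inner(row) {                       // one unpadded iteration
  let next = "";
  for (let i = 1; i < row.length - 1; i++) next += RULE_30[row.slice(i - 1, i + 2)];
  return next;
}

function buildLookupTable(stepSize) {
  const patternLength = stepSize * 2 + 1;
  const table = new Map();
  for (let p = 0; p < 2 ** patternLength; p++) {
    const pattern = p.toString(2).padStart(patternLength, "0");
    const descendants = [];
    let row = pattern;
    for (let s = 0; s < stepSize; s++) {
      row = rule30Inner(row);
      descendants.push(row);                      // rows 1..stepSize below the pattern
    }
    table.set(pattern, descendants);              // last entry is a single cell
  }
  return table;
}

const table5 = buildLookupTable(5);               // 2^11 = 2048 entries
console.log(table5.get("00111001100"));           // descendants of one arbitrary pattern
```

Generating the full triangle beneath each of the 2,048 patterns is what the Lookup Table Effort formula above is counting.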

Infilling the centre column

Of course, the centre column is missing for the rows we skipped, and we need it if we’re to obtain a seamless string of Rule 30’s centre-column digits, so we would need to at least fill in those cells:

We can also skip infill for most steps and only calculate it when the cell we are interested in falls on a row that doesn’t line up with a step, such as when the desired iteration n is not cleanly divisible by the step size, per Figure 5 below.

The number of Infill Comparisons required for a step remains constant regardless of the current step or iteration and does not grow with the size of the iteration’s data.

To calculate the comparisons required to infill every step, multiply the per-step infill cost by the total step count (if it is known in advance) and add this to the total stepwise comparisons in order to determine the computational effort required prior to executing the generation sequence:

Infill Comparisons (All Steps) = (Step Size - 1) * stepCount
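A rough sketch of how infill could work, assuming a table like the one sketched earlier that keeps each pattern’s intermediate descendant rows (again, hypothetical names rather than the prototype’s):

```javascript
// Read the skipped centre-column cells from the descendants stored for the
// (2 * stepSize + 1)-cell window centred on the current row's middle cell.
// Assumes the row (a string of "0"/"1") is already wide enough for the window.
function infillCentreColumn(row, table, stepSize) {
  const mid = (row.length - 1) / 2;                         // index of the centre cell
  const window = row.slice(mid - stepSize, mid + stepSize + 1);
  const descendants = table.get(window);                    // rows 1..stepSize below
  const centreCells = [];
  for (let s = 0; s < stepSize - 1; s++) {                  // stepSize - 1 skipped rows
    const d = descendants[s];
    centreCells.push(d[(d.length - 1) / 2]);                // centre cell of each row
  }
  return centreCells;
}
```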

Total Combined Effort Calculation

Combining the effort (based on comparisons) for step generation, infill generation (assuming infill for all steps as a worst case) and lookup table generation, the following calculation can be used:

Total Combined Comparisons

= Step Generation Effort
+ Infill Effort
+ Lookup Effort

Or…

= (n * (stepCount + 1) + stepCount)
+ ((Step Size - 1) * stepCount)
+ ((2 ^ Lookup Pattern Length) * (Step Size * (Step Size + 2)))
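Plugging in the values used later in this post (n = 10,000, step size 5) gives a quick worked check of the predicted totals:

```javascript
// Predicted effort for n = 10,000 with a step size of 5:
const n = 10000, stepSize = 5;
const stepCount = n / stepSize;                                   // 2,000 steps
const patternLength = stepSize * 2 + 1;                           // 11

const stepEffort   = n * (stepCount + 1) + stepCount;             // 20,012,000
const infillEffort = (stepSize - 1) * stepCount;                  // 8,000
const lookupEffort = 2 ** patternLength * (stepSize * (stepSize + 2)); // 71,680

const total    = stepEffort + infillEffort + lookupEffort;        // 20,091,680
const baseline = n * (n + 2);                                     // 100,020,000
console.log(total, (total / baseline).toFixed(4));                // ratio ≈ 0.2009
```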

Predicting performance

The following tables predict effort as these values vary, demonstrating the effect of step size on the break-even point between lookup table generation and stepwise iteration when compared to the baseline standard Rule 30 approach.

For smaller iteration counts, smaller step sizes are more efficient, while for larger iteration counts the additional effort of generating a larger lookup table is repaid by a much greater reduction in overall effort.

Low iterations, small step size

Low iterations, large step size

Many iterations, small step size

Many iterations, large step size

Based on these tables, there are some significant correlations and relationships to take note of.

Under ideal circumstances, as n approaches infinity the stepwise method’s effort tends towards (100 / Step Size)% of the baseline.

For example, the effort (via comparison operations) for the stepwise approach tends toward 10% of the standard approach for a step size of 10, and 20% for a step size of 5.

As the step size increases, the memory and computation required for the lookup table increase exponentially, so in realistic terms memory limits will restrict the size of the lookup table (note that generating the lookup table is included in the effort calculations).

The prototype below is not well optimised, but in a like-for-like comparison it works well for the predefined step size of 5 over 10,000 iterations.

Prototype Testing Results

An initial prototype generated the following results, showing the calculated final generation after 10,000 iterations using the standard Rule 30 approach and 2,000 iterations using the stepwise approach (equivalent to 10,000 iterations with a step size of 5).

The final output row of the stepwise approach was identical to the standard Rule 30 iteration, and the combined effort of generating the lookup table and the stepwise iterations was significantly less than the effort recorded for the standard Rule 30 generation.

In terms of timings (in ms):

  • Lookup Table Time + Stepwise Iteration Time = 24.5049 + 4723.30 = 4,747.80ms
  • Standard Iterations Time = 157893.109ms
  • Ratio (stepwise / standard): 0.03

The stepwise approach took 3% as much time as the standard Rule 30 iterative approach.

In terms of measured “comparison” operations (effort):

  • Lookup Table Effort + Stepwise Iteration Effort = 44,584 + 20,012,000 = 20,056,584
  • Standard Iterations Effort = 100,020,000
  • Ratio (stepwise / standard): 0.2005 (predicted ~0.2)

The stepwise approach measured roughly 20% of the “effort” of the standard approach, which is in line with predictions.

Notes & limitations

  • The prototype does not yet support infilling; however, this is a minor fraction of the required effort for large numbers of rows (~4 lookups per row, or ~2% of the expected effort over 10,000 rows)
  • As infilling has not yet been implemented, only iterations that are directly divisible by the step size can be correctly calculated. This does not invalidate the underlying approach, and should be a reasonably simple improvement should the core concept prove valid upon review.
  • Some operations performed in code may be affected by the JavaScript engine’s JIT compiler and other optimisations, resulting in faster overall execution (page caching, etc.)
  • Some operations in code may not directly translate to the abstract concept of “effort” based on simple comparisons. From testing, however, these figures seem to be close enough approximations to outline general trends.

Execution Environment

All code was executed in Google Chrome via the JSFiddle website on a consumer-grade gaming laptop comprising an 8th-generation Intel Core i7-8750H CPU @ 2.20GHz, 15GB RAM, Windows 10 Home (64-bit) and an NVIDIA GeForce GTX 1060 (not used by the script).

Note finally the script is very processor intensive and may appear to hang while it performs its computations.

Links to sample code

The code is available on JSFiddle and GitHub at the following URLs:

Regarding the prototype in its current version

  • Graphical output is not rendered to the browser directly; the resulting calculations can be viewed in the browser console (F12 in Chrome)
  • The script is very processor intensive and may appear to hang while it performs its computations. Just let it keep running (make sure your console is open to capture output)
  • As it is computing rules by comparing text strings rather than binary data, there is some unnecessary overhead. The same algorithm can be adapted to operate directly on binary data with bitwise operations for faster performance (see the sketch below); the calculated effort and comparison counts should remain consistent.
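For illustration, here is a minimal sketch (not part of the prototype) of what the bitwise variant could look like, treating a row as the bits of a BigInt and applying Rule 30’s “left XOR (centre OR right)” to the whole row at once:

```javascript
// A row as a BigInt: each bit is a cell, and one Rule 30 step is a handful of
// bitwise operations on the whole row.
function rule30StepBits(row) {
  const r = row << 1n;                  // open up one new cell on the right edge
  return (r >> 1n) ^ (r | (r << 1n));   // left XOR (centre OR right), all cells at once
}

// Starting from a single black cell:
let row = 1n;
for (let i = 0; i < 5; i++) {
  console.log(row.toString(2));         // 1, 111, 11001, 1101111, 110010001
  row = rule30StepBits(row);
}
```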

Conclusion

The prototype script correctly calculates the output of Rule 30 where n = 10,000, far more quickly (in my testing, 3% of the time) and with far less effort (roughly 20% of the comparison operations for a step size of 5).

While there are still some refinements to make to the code, for example to introduce infilling and to operate directly on binary data rather than strings, the underlying algorithm performs in accordance with the projected calculations.

Based on the correlation between the calculated predictions and the prototype’s measured performance, the prototype demonstrates the approach and the efficiency improvements of calculating Rule 30 output by skipping rows.
