Conway: A Cellular Automaton Tribute

Douwe Osinga · The Startup · 10 min read · Oct 21, 2020

On April 11, 2020, John Conway died of Covid-19. Conway made contributions to many branches of mathematics, but is most famous for his invention of the Game of Life. Reading about the Game of Life in the Dutch popular science magazine “KIJK” made a big impact on me and was one of the factors that led me to pursue a career in computer science. So when I heard about Conway’s passing, I decided to create a little cellular automaton as a tribute. This post is about that tribute.

Game of Life

A cellular automaton consists of a regular grid of cells, each in one of a finite number of states. Each cell changes state based on a set of fixed rules and its neighborhood: the states of the cells nearby.

The brilliance of the Game of Life is that a handful of simple rules power configurations with highly complex behaviors. There are only two states: on and off. Only the eight direct neighbors are taken into account. As for the rules: a cell that is off remains off unless exactly three of its neighbors are on; a cell that is on remains on only when two or three of its neighbors are on. That’s it!

Take for example this configuration, where the blue cells are on, the white ones off:

The middle cell has exactly two neighbors that are on, so it will remain on. The two blue cells on the left and the right, though, have only one neighbor that is on, so they will switch off. The two white cells just above and below the middle cell both have three neighbors that are on, and so they will switch on themselves.

With the result:

Of course, now the roles are reversed: in the next generation the cells above and below will switch off and the cells to the left and right will switch on again.
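The blinker walkthrough above fits in a few lines of code. Here is a minimal sketch in TypeScript, assuming a plain 2D boolean array with dead cells beyond the edges (real implementations often wrap the board into a torus instead):

```typescript
// One generation of Conway's Game of Life.
// grid[y][x] is true when the cell is on; cells beyond the edge count as off.
function step(grid: boolean[][]): boolean[][] {
  const h = grid.length, w = grid[0].length;
  const next = grid.map(row => row.slice());
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      // Count the eight direct neighbors that are on.
      let on = 0;
      for (let dy = -1; dy <= 1; dy++) {
        for (let dx = -1; dx <= 1; dx++) {
          if (dy === 0 && dx === 0) continue;
          const ny = y + dy, nx = x + dx;
          if (ny >= 0 && ny < h && nx >= 0 && nx < w && grid[ny][nx]) on++;
        }
      }
      // Off cells switch on with exactly three live neighbors;
      // on cells survive with two or three.
      next[y][x] = grid[y][x] ? on === 2 || on === 3 : on === 3;
    }
  }
  return next;
}
```

Running this on the three-in-a-row configuration flips it between horizontal and vertical every generation, exactly as described.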

Using these primitives you can create all kinds of complex mechanisms. A popular building block is the glider, a group of cells that ‘moves’ in a certain direction by changing its shape a few times until it reappears unchanged, but one cell down and one cell to the right:

Glider in action

It doesn’t stop there; people have created guns that produce a new glider every so many generations, discovered or designed hundreds of spaceships and even built systems that behave like computers, which proves that the Game of Life is Turing complete: anything that can be calculated, can be calculated with the right Game of Life setup (though the boards might be very big indeed).

As always, XKCD says it best. The comic published on the day of Conway's death shows, using the rules of the game, the shape of a person that dissolves into a glider, which then moves off the screen:

1937–2020
RIP John Conway

Conserving Mass

One thing that always bothered me a little about these types of cellular automata is that they don’t obey the law of conservation of mass, energy, and I guess cells. A gun produces an infinite number of gliders, while the drawing of Conway dissolves into nothing. That’s probably not how our universe works. So I started wondering: what if we had rules that determined movement rather than switching cells on and off? A Game of Life without dying, if you will.

Introducing the conservation of mass isn’t all that complicated but, as it turns out, not that interesting either. With a rule that says cells are happiest when they have two or three neighbors, the cells quickly organize themselves into stringy groups that stabilize almost immediately: pleasing lines that look vaguely like alien writing:

We can change the rules and play with the number of neighbors our cells find ideal. If we make cells prefer four or even five neighbors, we end up with denser outcomes, while if they prefer two neighbors over three, everything becomes more evenly spread out. But nothing very unpredictable happens.

Things become a lot more interesting if we introduce different types of cells. We can then simplify the rules: instead of counting the number of neighbors, every color gets assigned a random attraction factor for every other color. So red might want to be close to green, while green really prefers to be next to other greens and orange just wants to be left alone.

The result is usually a playful dance that ends after a while in a stable configuration with some sort of structure. In the example below we see clusters of green forming cores surrounded by bits of red and purple, with yellow bridging between these clusters and blue and orange acting as free agents.

A stabilized multi-color simulation

This is pretty cute. The attraction matrix (the genetic information of the simulation) determines what the outcome looks like, but not the exact outcome; that also depends on the randomly chosen starting position. Each time the page above opens, it picks a set of random genes and therefore a random simulation. It encodes those genes in the URL (the bit after the #), so reloading that page produces the same sort of simulation, while clearing the bit of the URL after the # creates a new one. Give it a try!
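The repository's actual URL encoding isn't spelled out here, so the sketch below is purely illustrative: it assumes each gene is a small integer attraction score in [-5, 5] and writes it as one base-36 digit, giving a compact string of the kind you could put after the #:

```typescript
// Illustrative gene <-> URL-fragment round trip (not the app's real format).
// Each gene is assumed to be an integer in [-5, 5]; we shift it to [0, 10]
// and write one base-36 digit per gene.
function encodeGenes(genes: number[]): string {
  return genes.map(g => (g + 5).toString(36)).join("");
}

function decodeGenes(hash: string): number[] {
  return hash.split("").map(c => parseInt(c, 36) - 5);
}
```

Any reversible serialization works; the point is only that the fragment fully determines the genes, so the same link reproduces the same kind of simulation.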

We can reverse the process, too. Instead of letting the system pick a random set of genes and keeping the ones that look interesting, we can actively design the genes. The GitHub repository associated with this post contains a small Python script that does just that. The script has a bunch of simple methods that each create a gene matrix and then encode that matrix as a string. A gene matrix is a matrix of N rows and N + 1 columns, where N is the number of colors that will appear in the simulation.

Each row describes the likes and dislikes of one color. The first entry says how much this color likes to be next to an empty square; the subsequent entries say how much it likes to be next to a cell of each color. For example:

Attraction matrix for two colors

says that red cells want to be next to red cells, blue cells want to be next to blue cells, and both of them are OK with being next to empty cells but would rather not be next to the other color. The resulting simulation sorts the colors into medium-sized groups:

Two color self sort
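As a sketch of what such a matrix looks like in code (the concrete numbers below are made up; only their ordering matters for the behavior described above):

```typescript
// The two-color "self sort" genes as an N x (N + 1) matrix (N = 2).
// Row = one color's preferences; column 0 = empty square, column i + 1 = color i.
// Illustrative values: like your own color, tolerate empty, avoid the other color.
const SELF_SORT: number[][] = [
  // empty, red, blue
  [0, 2, -1], // red: likes red, OK with empty, dislikes blue
  [0, -1, 2], // blue: likes blue, OK with empty, dislikes red
];
```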

There are a number of other experiments in that script implementing a variety of ideas, and it should be easy to quickly test your own; let me know if you find something interesting! The no-love-5 experiment seems trivial, and it is. There is no love lost between any of the colors; everybody just tries to stay away from everybody else, leading predictably to a socially distanced result:

For most randomly picked sets of genes we see a similar pattern of movement over time. There’s a lot of movement at the beginning, when cells are still trying to find each other, and then somewhat suddenly things start to crystallize and the total amount of movement drops quickly. As more and more cells find a suitable spot, a few stragglers keep moving around until they also find a home. Plotting this relationship shows a nice sigmoid shape. It also raises the question: can we find a set of genes that does not easily stabilize?

There are probably many classes of genes that do this, but an interesting and straightforward group is the set of circular attractions. For example, with a circle of three, red wants to be next to blue, blue wants to be next to green and green wants to be next to red; no other pairing attracts. With just one cell of each color you can easily get into the following loop where all colors chase each other:

Circular attraction in action
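A circular attraction matrix is easy to generate for any number of colors. The sketch below is illustrative, using 1 for "attracted" and 0 for "indifferent":

```typescript
// Build an N x (N + 1) circular-attraction matrix: color i is attracted
// only to color (i + 1) mod N; everything else (including empty) is neutral.
function circularGenes(n: number): number[][] {
  const genes: number[][] = [];
  for (let i = 0; i < n; i++) {
    const row = new Array(n + 1).fill(0); // column 0 = empty square
    row[((i + 1) % n) + 1] = 1;           // chase the next color in the cycle
    genes.push(row);
  }
  return genes;
}
```

With n = 3 this produces exactly the red-chases-blue-chases-green-chases-red setup described above.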

This also works in a larger, more chaotic simulation; locally things sometimes seem to crystallize, but then a disturbance from elsewhere breaks up the stability and everything moves again. Something similar can be seen with a circle of five colors, but not with four. At first sight that seems expected: a “circle” of four on a grid is just a 2x2 block of colors that is stable by itself. But when you run a four-cycle simulation, those stable 2x2 building blocks almost never show up.

It goes to show that this simulated world we put together holds some interesting puzzles with no easy explanations. It is not nearly as brilliant or simple as the original Game of Life, but maybe good enough as a tribute to John Conway.

How it technically works

At the core of this sits a Processing.js app. Processing(.js) is a great option for showing animations in the browser if you don’t want to be bothered with the niceties of doing this natively. I’m using TypeScript, but plain JavaScript will do as well, of course. The processing app sets up a bunch of stuff, but the most important thing is the draw callback. This method gets called on every frame, and here we draw all the cells at their current position. Simplified:

p.draw = function () {
  p.background(51);
  p.noStroke();

  // Draw each cell as a filled circle at its (scaled) grid position.
  for (let cell of world.cells) {
    p.fill(world.colorForCell(cell));
    p.circle(x_offs + cell.x * cell_size,
             y_offs + cell.y * cell_size,
             cell_size * 0.7);
  }
};

The processing script also takes care of mapping cell types to colors, of scaling, and of handling cells that move off the screen. The left side of the screen is connected to the right side and the top to the bottom, so there are no hard boundaries. This means that if a cell sticks out on the right side, we also need to draw it coming in on the left side, and vice versa.
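Wrapping coordinates onto such a torus is a one-liner worth getting right, since JavaScript's % operator can return negative values. A small sketch:

```typescript
// Wrap a coordinate onto a torus of the given size; handles negatives too,
// because plain `v % size` in JavaScript yields -1 for v = -1.
function wrap(v: number, size: number): number {
  return ((v % size) + size) % size;
}
```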

The real work of the simulation is done in the World.ts file, which contains the eponymous World class. This class keeps track of the location and type of every cell you can see. The main work happens in the tick method. For each cell we look at each of the five moves it can make (four directions plus staying where it is), and for each move we fetch the neighbors of the spot where the cell would end up.

Remember, the genetic information is an N x (N + 1) matrix, so at this point we look up the row according to the color of the cell and the column according to the neighbor: 0 for an empty neighbor, otherwise the color of the neighbor plus one. Whichever of the five moves yields the highest score is the move the cell will make.
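A sketch of that scoring logic, with hypothetical names (the actual tick method in World.ts differs; this version ignores collisions between cells and the temperature discussed next):

```typescript
type Cell = { x: number; y: number; color: number };
type Occupant = (x: number, y: number) => number; // color at (x, y), or -1 if empty

const MOVES: [number, number][] =
  [[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1]]; // stay put + four directions

// Score one candidate position: sum the gene-matrix entries for the four
// direct neighbors of that spot. Column 0 of a gene row is "empty";
// color c maps to column c + 1.
function scoreMove(genes: number[][], color: number,
                   x: number, y: number, occupant: Occupant): number {
  let score = 0;
  for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
    const neighbor = occupant(x + dx, y + dy);
    score += genes[color][neighbor === -1 ? 0 : neighbor + 1];
  }
  return score;
}

// Pick whichever of the five moves yields the highest score.
function bestMove(genes: number[][], cell: Cell,
                  occupant: Occupant): [number, number] {
  let best: [number, number] = [0, 0];
  let bestScore = -Infinity;
  for (const [dx, dy] of MOVES) {
    const s = scoreMove(genes, cell.color, cell.x + dx, cell.y + dy, occupant);
    if (s > bestScore) { bestScore = s; best = [dx, dy]; }
  }
  return best;
}
```

With the two-color self-sort genes, a lone red cell next to another red cell will stay put or step toward it, which is how the groups in the earlier screenshots form.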

The actual implementation adds an element to this simulation that is glossed over in the article above: temperature. In machine learning and simulations, the amount of random noise added is often called temperature, and here we do something similar, but slightly more extensive:

  1. The speed of the simulation. When a cell moves from one field to another, we don’t make it jump to the new field at once. Instead we produce a number of interpolation frames. The higher the temperature, the fewer interpolation frames, and the faster the simulation runs.
  2. Random movement. On top of the deterministic score for each move, we add a small amount of random movement. This amount decreases as the temperature drops and when it hits zero, there is no more random movement and the simulation becomes deterministic.
  3. Number of neighbors considered. Mostly we consider the four direct neighbors of a square, but as long as the temperature is higher than zero, we also weigh neighbors one step further afield.

The simulation starts with a high temperature and slowly cools down to zero. The reason for this temperature component is to “shake” the simulation and give it a chance to find a more globally optimal solution before it settles into a local optimum.
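The article doesn't spell out the cooling schedule, so the sketch below is an assumption: a simple multiplicative decay, with the random component of a move's score scaled by the current temperature (rule 2 in the list above), so that at temperature zero the simulation becomes fully deterministic:

```typescript
// Deterministic score plus temperature-scaled noise. At temperature 0
// the noise term vanishes and moves are picked purely by score.
function noisyScore(score: number, temperature: number): number {
  return score + (Math.random() - 0.5) * temperature;
}

// Multiplicative cooling toward zero; the 0.98 decay rate is a made-up
// illustrative constant, not the repository's actual value. Tiny values
// are clamped to exactly 0 so the simulation can fully freeze.
function coolDown(temperature: number, decay = 0.98): number {
  const t = temperature * decay;
  return t < 0.001 ? 0 : t;
}
```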

Have a look at the source code to explore further and drop me a line if you have a question or better, an idea to try.


Entrepreneur, Coding enthusiast and co-founder of Neptyne, the programmable spreadsheet