In looking at all of these different applications, I continuously wonder: why are we not integrating biology into AI? I don't just mean observing or amplifying biology using artificial intelligence, but also structuring AI after biological phenomena!
A great example of this is the genetic algorithm. Genetic algorithms simulate gene pools and evolution to solve optimization problems. Basically, they use survival of the fittest to find the best possible answer to a question! If you want a more technical understanding of what a genetic algorithm is, I wrote a short research paper on it that you should definitely check out!
However, today, we're not going to be talking about genetic algorithms or evolution, but rather an algorithmic network that's completely different!
But first, we need to understand the concept itself.
Take Sam the Salamander:
I got carried away with some sharp scissors and chopped Sam's arm off, but I wasn't worried, and neither was he.
Because Sam can easily just regenerate his limbs!
Not to get too violent, but if I gutted Sam, and removed his organs (which would be horrible), he would still be able to regenerate his organs, flesh, and body parts to nearly full effect.
Now, though I'm not going to show you that, I am going to show and tell why my friend Sam is practically invincible (as long as you're like Thor and don't go for the head the first time).
So, let's look a little into Sam's wound before he just grows his arm back.
As you can see 👆🏾, what's different is that salamanders (which are amphibians) have evolved so that they don't just clot a wound and sew it up by regenerating skin the way we do with our platelets, coagulation, etc. In fact, they don't even rely on platelets, the small disc-like cell fragments that help clot our blood, in the same way; their healing response goes far beyond simply patching the hole. You can think of it like this:
We apply a biological band-aid, whereas some amphibians and reptiles generate a biological prosthetic. Their capability to regenerate is way more effective than ours.
The reason is that they leverage the regenerative potential of stem cells. At the site of the wound, you probably saw that there was a bundle of stem cells. Stem cells are special cells that can replicate and become any type of cell through a process called differentiation. Depending on what types of bio-substances they interact with, a stem cell will become something else, like a muscle or skin cell, for example.
Because Sam has a bunch of replicating (or proliferating, if you want to be scientifically worded and sound smarter) stem cells at the place where he was wounded, the cells are signaled to start turning into bone, muscle, and skin cells to regenerate the wound.
Eventually, enough cells are created to constitute a small arm that will continue to grow to its proper size over time.
And they can do this for any type of organ by leveraging different types of stem cells, like myosatellite stem cells for muscle, for example (you've probably heard a lot about these from me if you've been following my work).
However, there's something cooler that a salamander can do, and that's leaving its tail behind when it's being attacked by a predator so it can move faster.
This is a characteristic called autotomy, and it's really interesting. I can give you an example.
If you were to cut off Sam's tail and transplant it to another part of his body, say his head, the neural connection to his cellular composition would mean that the tail begins to take the shape of whatever body part should be there, and the transplanted tissue begins to regrow in the correct area and orientation. It's like the biological growth is self-aware.
So there, that was a little bit of a biology lesson 😎. But there are some sick applications of modeling this technology that I’ve been working on.
And it’s called…
Now, the genius Hungarian-American John von Neumann invented this computational model as part of automata theory.
Automata theory describes how self-tuning, self-functioning, and self-correcting machines could solve really hard problems. Because of the principles behind Sam, we can use math and neural networks to solve all sorts of problems and make self-classifying networks.
The whole idea is that they can correct themselves and reorient based on inputs. And this technology isn't limited to actual cell simulations. That gif up top is called Gosper's Glider Gun, and it lives inside a famous cellular automaton called the Game of Life (a.k.a. Life or Conway's Game of Life, it goes by many names). The glider gun uses the automaton's update rule to shoot out a perfect, endless stream of gliders. Notice how the gif loops perfectly!
Automata theory also means that a network can be self-understanding, meaning an image could potentially classify itself!
As exciting as my life would be if the algorithm I built could actually do this, my current use of cellular automata is still pretty exciting, and does something very similar (so that’s why you should keep reading 😉!).
Anyway, using this type of computational technique could mean that the thing being built could determine its current and topological (geometry and spatiality after being reshaped) structure at any point in time, just like Sam with his tail.
This would also mean that the objects generated could also reform and reshape when something was off!
Super cool 😎.
So how do cellular automata work?
Great question (if you actually asked it 😒😂)!
Cellular automata begin as a grid of cells, where each cell can exist in one of several states. In this case, we'll use the alive and dead states.
Then, as time progresses, a new generation of cells is created based on a predefined update rule that accounts for whether the cell is dead or alive, and whether or not said cell is surrounded by breathing or slaughtered comrades (not literally lol). The rule is the same for all cells, and all cells update at the same time (most of the time). The update rule is possible because each of the colors (which are states) is given a numerical representation. In this case, dead/blue = 0 and alive/red = 1. After it updates 👇🏾
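To make this concrete, here's a minimal runnable sketch of one such update rule, using the classic Game of Life rules (a cell is born with exactly 3 alive neighbors, and survives with 2 or 3). This is my own illustration, not code from the model discussed later:

```python
import numpy as np

def life_step(grid):
    """One synchronous update of Conway's Game of Life (dead=0, alive=1)."""
    # Count the 8 neighbors of every cell by summing shifted copies of the grid
    # (np.roll wraps around the edges, so the grid is effectively a torus).
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A cell is alive next step if it has exactly 3 neighbors,
    # or if it is alive now and has exactly 2.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# A "blinker": three alive cells in a row oscillate with period 2.
grid = np.zeros((5, 5), dtype=int)
grid[2, 1:4] = 1
print(life_step(grid))  # the horizontal bar flips to a vertical bar
```

Every cell checks the same rule against its neighborhood, and the whole grid updates at once, which is exactly the "same rule, synchronous update" idea described above.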
The program is then able to just continue updating indefinitely, and you get something cool, like this alive-dead cellular automaton cube!
Ok, but now for the fun part.
What if we apply this to machine learning?
What if we can get neural networks (AI modeled after the brain using linear algebra 🤯) to learn the update rule function?
Well, that’s how we get
Neural Cellular Automata
With neural cellular automata, we combine the computational principles of neural networks and cellular automata by having the neural network learn the update function that corresponds to a structure it's trying to build.
But, more importantly, how does this even work?
Well, here’s what the model looks like:
So, let’s break this down.
In normal cellular automata, there’s a grid of cells that function and regenerate based on the update rule.
However, neural cellular automata basically take an inverse approach where, as you can see in the diagram, the network learns the update rule based on what we want to generate (which in this case is Sam, and later a pretzel).
In the model, each pixel of the image represents a cell, and each cell's state is a vector of channels. The first three are the red, green, and blue color channels, the fourth is an alpha (transparency) channel, and the remaining twelve are hidden channels the network can use however it wants, which totals 16 channels for the picture of Sam.
From there, these channel values create a perception vector, which looks at the change in the state of nearby cells in the x and y directions around a given cell (a.k.a. it looks left and right (x), and up and down (y) around the cell).
Once you have the perception vector, it's fed into a neural network, which updates the cells so they can proceed to the next step!
The color intensity is coded within the channels, along with the ɑ (alpha) channel that codes whether a cell is 💀 or not.
🚨 Note: the other parts of the perception vector aren’t actually given any meaning. It’s really the 3 main color channels and then the alpha channel!
Also, another thing to keep in mind is that the update is stochastic, meaning it's applied randomly. This also means that the cells don't all update at the same time! The reason for this is to enhance the realistic nature of a self-organizing system, because everything won't and doesn't happen synchronously. Just think about it: do all of the cells in our body update at the same time? No.
Also, there's live-cell masking in this model, meaning the computational growth starts with a single cell that multiplies into the image, and any cell without a sufficiently alive neighbor gets zeroed out, so there are no hidden cells contributing to creating the photo.
This is all done through the following code:
```
def perceive(state_grid):
    # Sobel filters estimate the gradient of each channel in x and y
    sobel_x = [[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]]
    sobel_y = transpose(sobel_x)
    grad_x = conv2d(sobel_x, state_grid)
    grad_y = conv2d(sobel_y, state_grid)
    # Each cell perceives its own state plus the local gradients
    perception_grid = concat(state_grid, grad_x, grad_y, axis=2)
    return perception_grid

def update(perception_vector):
    x = dense(perception_vector, output_len=128)
    x = relu(x)
    # Zero-initialized weights mean the initial update is "do nothing"
    ds = dense(x, output_len=16, weights_init=0.0)
    return ds

def stochastic_update(state_grid, ds_grid):
    # Each cell applies its update with probability 0.5
    rand_mask = cast(random(64, 64) < 0.5, float32)
    ds_grid = ds_grid * rand_mask
    return state_grid + ds_grid

def alive_masking(state_grid):
    # A cell stays alive only if some cell in its 3x3 neighborhood
    # has an alpha (channel 3) value above 0.1
    alive = max_pool(state_grid[:, :, 3], (3, 3)) > 0.1
    state_grid = state_grid * cast(alive, float32)
    return state_grid
```
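If the pseudocode feels abstract, here's a rough, runnable NumPy translation of one full step (my own sketch under assumptions, not the original implementation; the dense layers use random and zero weights here, whereas in the real model they'd be learned):

```python
import numpy as np
from scipy.signal import convolve2d
from scipy.ndimage import maximum_filter

H, W, C = 64, 64, 16  # grid size; 16 channels = RGB + alpha + 12 hidden

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
SOBEL_Y = SOBEL_X.T

def perceive(state):
    # Each cell sees its own 16 channels plus the x/y gradients of every
    # channel in its neighborhood: 16 + 16 + 16 = 48 features per cell.
    grads = [convolve2d(state[:, :, c], k, mode="same")
             for k in (SOBEL_X, SOBEL_Y) for c in range(C)]
    return np.concatenate([state] + [g[:, :, None] for g in grads], axis=2)

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.1, (48, 128)).astype(np.float32)
W2 = np.zeros((128, C), dtype=np.float32)  # zero-init: untrained update is a no-op

def step(state):
    p = perceive(state)                 # (H, W, 48)
    x = np.maximum(p @ W1, 0.0)         # per-cell dense layer + ReLU
    ds = x @ W2                         # per-cell state update (all zeros here)
    mask = rng.random((H, W, 1)) < 0.5  # stochastic update: ~half the cells fire
    state = state + ds * mask
    # Alive masking: a cell survives only if some cell in its 3x3
    # neighborhood has alpha (channel 3) above 0.1.
    alive = maximum_filter(state[:, :, 3], size=3) > 0.1
    return state * alive[:, :, None]
```

With the zero-initialized output layer, repeatedly calling `step` leaves a seeded grid unchanged, which is exactly why that initialization is used: training can then gently move the update away from "do nothing."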
Now, let’s run it!
According to the model’s training regimen, the optimal generation is achieved.
However, look what happens after we do that:
After the loss is applied, the model goes nuts, and Sam dissolves, while all of his friends blow up or dissolve as well!
Accounting for Shape
The reason this happens is that we didn't teach the model well enough. We told it to reach the desired image, but what we didn't teach it was that it should stop when it reaches the image. This is why the model continuously generates pictures of Sam and his friends, even after it's done, which causes the images to distort and dissolve!
The way we solve this isn’t by telling the model to stop, but rather by turning it into an attractor. If you take or have taken chemistry, you could think of this as the ground state of a compound, where the compound is trying to stay at the energy level that uses the least energy.
In this case, the model tries to stay at the attractor (optimal image) instead of multiplying past it.
To do this, we can correct the error by introducing the model to its own errors. We can give it these images:
These images are obviously incomplete, but they are grounds for the model to work off of. So, these nonsense images serve as a starting point for the model to generate the proper image.
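The bookkeeping behind this trick can be sketched as a "sample pool": besides the single seed cell, we also train on states the model itself produced earlier (including overgrown or messy ones), so the finished image becomes a stable attractor. The names below are hypothetical illustrations, not the original implementation:

```python
import random

POOL_SIZE = 1024

def make_pool(seed_state):
    # Start with a pool full of copies of the seed.
    return [seed_state] * POOL_SIZE

def sample_batch(pool, seed_state, batch_size=8):
    # Draw a training batch from the pool, but always reset one entry to the
    # seed so the model never forgets how to grow from scratch.
    idx = random.sample(range(len(pool)), batch_size)
    batch = [pool[i] for i in idx]
    batch[0] = seed_state
    return idx, batch

# After training on `batch`, the grown outputs would be written back:
#   for i, out in zip(idx, outputs): pool[i] = out
```

Because most batch entries are states the model already grew, the loss now punishes it for drifting away from the finished image, not just for failing to reach it.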
So, we then get this:
The method works!
But beyond generation, we want the algorithm to regenerate.
We can train this by having the model regenerate a missing piece of the image, which, on the base model, doesn't work so well.
The model isn’t doing so hot.
But, if we give it something to work off of with the same method we did before, then the algorithm starts to get it!
It’s not perfect by any means, but it is MUCH better than the iteration before. In fact, I personally think Sam the Salamander is doing pretty awesome!
Ok, but what if we want it to orient spatially as well?
Accounting for topology and space
The way we can get the generated image to rotate is by changing the perception.
If you recall, before, we based the perception of a cell on the other cells around it. We then estimated the alive-ness or dead-ness gradient (or overall change) for each of the cells around it through something called Sobel filters.
Basically, we edit the filters (kernels) to reflect the angle by which the image is being rotated, which allows the cells to proliferate into the image at any orientation. Though in the real world it's unlikely we'll know exactly how many degrees the object has been rotated by, computers make this easy 🙃.
Because of how easy computers make life, we can generate images like these, where Sam looks like he’s climbing a tree, walking forward, just awkwardly staring, and more.
Though this method technically makes a hefty assumption (as it's unlikely we'll know exactly how much we want a desired image to rotate by), using an exact measurement to rotate about the axes is a pretty good fix for now!
And there you have it! By combining these methods in our model, we can generate Sam and his friends no problem! 👇🏾
But remember how I said we never used the remaining channels for anything?
Well, if we want to make numbers that classify themselves, that’s exactly what we’re going to need to use.
Conscious MNIST Digits
I’ll get this out of the way right now:
- MNIST digit: MNIST stands for Modified National Institute of Standards and Technology; MNIST digits are just the most common handwritten number digits, used for all things machine learning and CNNs when it comes to numbers and classification :)
Ok. Moving on.
So we had 10 whole channels that we didn't use on Sam, and Sam likes numbers, so he wanted us to save the digits to be used on his friends (named 0 through 9).
We use these 10 channels that were unused to create labels for all 10 MNIST Digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. The labels represent each of the 10 numbers that the generated digit could be.
And then, as the model trains, the label with the highest predicted probability will be used to label each of the cells!
However, this time we don't have predefined convolutional weights (fixed filters to work off of), and we instead allow the network to learn its perception weights on its own (independence 💪🏾).
Another difference is that in the last model, both the dead and alive cells were doing work, but this model is more realistic: the dead cells are actually dead and don't perform computation. Therefore, when updating, we only consider the alive cells.
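To make the labeling concrete, here's a tiny sketch (shapes and names are my own illustration) of how each alive cell could read its prediction off the 10 label channels, while dead cells stay out of the computation:

```python
import numpy as np

H, W, LABELS = 28, 28, 10       # MNIST-sized grid, 10 label channels per cell
state = np.zeros((H, W, LABELS))
alive = np.zeros((H, W), dtype=bool)

# Pretend the digit's stroke covers these cells, and the network has pushed
# up label channel 3 for them.
alive[10:18, 12:16] = True
state[alive, 3] = 5.0

# Each alive cell's prediction is its strongest label channel; dead cells do
# no computation, so they get a placeholder label of -1.
predictions = np.where(alive, state.argmax(axis=-1), -1)
```

In the real model, of course, the channel values come from the learned update rule passing messages between neighboring stroke cells until they agree.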
So, after training, the model gives us this:
Isn’t it just so fun to see our numbers pass the boundaries of mutation and undergo a midlife crisis?
I think so.
Anyway, some of the 1s are trying to be 2s and the 3s are trying to be 8s, so we need to quickly optimize this model.
The reason this is happening is that when the model predicts the value of each cell, it generates a probability inside of the last 10 channels of the cell's state, which helps to label the cell.
However, the cross-entropy loss function being used doesn't do so hot here, because it keeps rewarding the model for pushing label values ever higher, so two neighboring cells with the same state can end up with very different probability numbers. This means that some of the cells can't stabilize into their predicted identity, and the midlife crisis ensues. If you want to, you can look at the graph and see for yourself!
So, the obvious solution to resolve this disagreement would be to just swap out the loss function for something better, right?
So by changing the loss function to an L2 loss, this disorder error no longer occurs, and we get a more stabilized result like this one:
The cells finally converge into a label, and (for the most part), decide on who they are!
What the L2 loss actually did is bound how large the states can get, minimizing the updates as the cells start to agree on an MNIST label.
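Here's a toy comparison showing my reading of why the swap helps (a hedged sketch, not the model's actual training code): with an L2 loss against a one-hot target, the best possible output is bounded (exactly the one-hot vector), whereas cross-entropy keeps rewarding ever-larger label values:

```python
import numpy as np

target = np.eye(10)[3]  # one-hot label "3"

def l2_loss(logits):
    # Minimized exactly at the one-hot target, so outputs stay bounded.
    return np.sum((logits - target) ** 2)

def cross_entropy(logits):
    # Softmax + negative log-likelihood of the true class.
    p = np.exp(logits) / np.exp(logits).sum()
    return -np.log(p[3])

small, big = target * 1.0, target * 100.0
# L2 is already perfect at the one-hot vector; scaling it up makes things worse.
# Cross-entropy keeps going down as the logit grows, so cells never settle.
```

Under cross-entropy, two neighboring cells can keep inflating their label values indefinitely and disagree wildly; under L2, both are pulled toward the same bounded one-hot target.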
By visualizing these internal values, we can actually view this process in action, and watch the dynamical (changing) system stabilize in real time!
And with that, we slap on some nice HTML, and we get an interactive MNIST Cellular Automata Network! Check it out!
Pretty sweet that I can go from 1 ↔ 2 ↔ 3 ↔ 4 ↔ 5 ↔ 6 ↔ 7 ↔ 8 ↔ 9…
Why is this useful?
So aside from making me jump up and down with joy when my model works, what can cellular automata and self-organizing neural networks actually even do?
Well, of course they have huge implications in simulating biology, and understanding how regeneration works, and we can even create apps using cellular automata.
Aside from that, they'll allow us to run even more complex simulations regarding biology, which I'm currently working on.
It involves previous work I was doing with myocytes…
It’ll be some interesting research!
What would you do if you could simulate stem cells?
Leave your comments in the reply section and smash 50 likes on this article if you liked it!
Before you go…
My name is Okezue Bell, and I'm investing my time in research and development in the super interesting applied biology space! Be sure to contact me if you want to collaborate, invite me somewhere, have an opportunity, or just talk more (or any other engagement with me you can think of!):
Personal Website: https://www.okezuebell.com