Growing and Evolving Soft Robots to Walk in New Environments

It’s official. Humans are going to Mars.

Hannah Le
Jan 31, 2019 · 9 min read

Humans have been dreaming about setting foot on the mysterious “Red Planet” since decades before we first set foot on the Moon.

The good news is, your next vacation might not be to the sunny Maldives or the snowy Rockies. It might, in fact, be to Mars!

Recently, NASA issued its most detailed plan to date for reaching the Red Planet, laying out five phases along the road to Mars. Humans are expected to reach Mars by 2033, and SpaceX aspires to cut that timeline even shorter, to 2024. This is insane!

The futuristic vision of what life on Mars may look like

But before you get excited about booking a rocket ticket to Mars one day, you probably want to ask yourself: how the hell are we going to survive in Mars’ environment?

And by that I mean an atmosphere that is 95% CO2 (the gas you must exhale to survive on Earth) and an average temperature of -81 degrees Fahrenheit (far colder than your freezer). Imagine living in the Arctic… but 100 times worse. Did I also mention the two dangerous sources of radiation that would literally change your DNA, potentially causing diseases like cancer?

…except this is what you may look like after spending a few months chilling on Mars. Yikes.

Astronauts on Mars would have to deal with the Sun, which releases streams of solar particles, occasional bursts, and giant explosions. On top of this, energetic particles from galactic cosmic rays, often known as GCRs, accelerate to near the speed of light and shoot into our solar system from other stars in the Milky Way or even other galaxies.

The results?

Atoms in the metal walls of spacecraft, habitats, and vehicles get knocked apart. Even human DNA runs the risk of being damaged, leading to fatal conditions and diseases.

What this means is that for humans to live on Mars one day, it is incredibly important that we tackle these problems.

But how are we even supposed to conduct years-long scientific breakthroughs on Mars if it is so deadly to be on the planet surface, even for a short period of time? 🤔

Scientists have come up with a variety of fascinating approaches, one of which is developing soft robots for space exploration. As opposed to hard robots, which have rigid structures and often only operate well in pre-programmed environments, soft robots have a flexibility that makes them well-suited for moving over rough terrain in extra-planetary environments. Depending on the specific application, the best shape for such a robot’s motion can vary substantially.

An example of a multigait soft robot undulating under a glass obstacle

Having dreamt of going to Mars ever since I was a kid, I was really inspired by this problem and decided to dive deeper to see how we can use evolutionary algorithms in machine learning, specifically CPPN-NEAT, to evolve a soft robot’s shape so it can walk efficiently on land. After spending numerous hours digging into two research papers, Evolving Neural Networks through Augmenting Topologies by Stanley et al. and Evolution of Soft Robots by Novelty Search by Georgios Methenitis, I was able to reproduce some pretty cool results!

In the short video below, you can see the evolution of a soft robot happening right in front of your eyes. In Generation 0, the soft robot didn’t really do anything; it simply bounced up and down like jello. However, after 250 generations, the robot “evolved” four legs and started walking. (Woohooooo! I was literally jumping up and down on my bed when I saw this happen haha 😆)

The Evolution of a Soft Robot: Cracking the Space Exploration Problem 🚀

If an engineer were to design a soft robot to walk in a completely new environment like space, it would probably take them days or even weeks. The problem is that soft robots have effectively infinite degrees of freedom, so it is hard to predict and control their motion.

There are too many possible shapes for soft robots. We can’t just copy a design featured on TechCrunch, build an octopus, and hope that it would work in space. Life doesn’t work that way.

Many experiments need to occur before the engineer can finally obtain the optimal shape for the robot, and the cycle repeats every time we change the environment.

With a genetic algorithm, specifically NEAT, we can evolve the robot to have the best shape to walk in various environments a whole lot more efficiently. It’s like asking Mother Nature to figure out the solution to your optimization problem!

At a high level, NEAT, or NeuroEvolution of Augmenting Topologies, was originally inspired by Charles Darwin’s theory of natural selection.

The algorithm mimics the process of natural selection, where the fittest individuals are selected to produce the offspring of the next generation. Instead of 100 short-necked giraffes competing with one another, we now have a pool of artificial neural networks (ANNs). These networks start off very minimal, while their weights and architecture (i.e. the number of nodes and connections) change throughout many generations to become more complex.

This evolution of neural networks is called NEUROEVOLUTION. As you can imagine, in contrast to an ANN with a fixed topology, if we can simultaneously change both the architecture and the weights of the neural network, we can potentially come up with radical solutions to new problems.

The NEAT algorithm works in four neat steps (roughly sketched in code right after this list):

  1. Genetic encoding
  2. Crossover
  3. Mutation
  4. Speciation
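Before digging into each step, here’s a rough sketch of how the four pieces fit together in one generation. The function names (evaluate_fitness, speciate, crossover, mutate) are placeholders I’m using for illustration, not the exact API of any particular NEAT library:

```python
import random

def evolve(population, generations, evaluate_fitness, speciate, crossover, mutate):
    """A minimal, hypothetical NEAT-style generation loop (not any library's exact API)."""
    for gen in range(generations):
        # Evaluate every individual, e.g. simulate the soft robot and measure how far it walks
        fitnesses = {ind: evaluate_fitness(ind) for ind in population}

        # Speciation: group similar genomes so brand-new structures aren't wiped out immediately
        species = speciate(population)

        next_population = []
        for members in species:
            # The fittest half of each species gets to reproduce
            members = sorted(members, key=lambda ind: fitnesses[ind], reverse=True)
            parents = members[: max(1, len(members) // 2)]

            # Crossover + mutation produce this species' share of the next generation
            for _ in range(len(members)):
                if len(parents) > 1:
                    mom, dad = random.sample(parents, 2)
                else:
                    mom = dad = parents[0]
                next_population.append(mutate(crossover(mom, dad)))

        population = next_population
    return population
```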

Genetic Encoding: The Key that Encodes our Soft Robot

In genetic encoding, each genome includes a list of connection genes, each of which refers to two node genes being connected.

Each connection gene specifies the in-node, the out-node, the weight of the connection, whether or not the connection gene is expressed (an enable bit), and an innovation number.
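To make that encoding concrete, here’s a minimal sketch of what a connection gene could look like as a Python dataclass. The field names mirror the description above, but they’re my own choices rather than the reference NEAT implementation:

```python
from dataclasses import dataclass

@dataclass
class ConnectionGene:
    in_node: int      # id of the node the connection starts from
    out_node: int     # id of the node the connection feeds into
    weight: float     # the weight of the connection
    enabled: bool     # the "enable bit": is this gene expressed in the phenotype?
    innovation: int   # historical marking, used later to align genomes during crossover

# A genome is then essentially a list of connection genes (plus the node genes they reference)
genome = [
    ConnectionGene(in_node=0, out_node=2, weight=0.7, enabled=True, innovation=1),
    ConnectionGene(in_node=1, out_node=2, weight=-1.3, enabled=True, innovation=2),
]
```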

To work with these genomes in simulation, we define a class of softbots, sketched below. orig_size_xyz refers to a tuple containing three values (x, y, z), which defines the original dimensions of the cube of voxels corresponding to possible network outputs.
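Apart from orig_size_xyz, the attribute names in this sketch (genotype, fitness) are illustrative assumptions on my part rather than the exact code:

```python
class SoftBot:
    """A sketch of one individual: a CPPN-based genotype plus a voxel-grid body."""

    def __init__(self, orig_size_xyz=(6, 6, 6)):
        # (x, y, z) dimensions of the cube of voxels that the networks' outputs will fill in
        self.orig_size_xyz = orig_size_xyz
        self.genotype = []       # one or more CPPN networks (e.g. for material and stiffness)
        self.fitness = None      # distance walked in simulation, filled in after evaluation

    def voxel_coordinates(self):
        """Every (x, y, z) coordinate in the workspace; these become the CPPNs' inputs."""
        x, y, z = self.orig_size_xyz
        return [(i, j, k) for i in range(x) for j in range(y) for k in range(z)]
```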

Mutation: Mix and match ANNs to find the optimal one

Mutation in NEAT can change both connection weights and network structure: connections can be added or deleted. As the structure of the network changes, its genetic code also changes.

Structural mutations, which expand the genome, happen in two ways. In the add connection mutation, a single new connection gene is added, connecting two previously unconnected nodes. In the add node mutation, an existing connection is split and the new node is placed where the old connection used to be. The old connection is disabled and two new connections are added to the genome. This method of adding nodes was chosen in order to integrate new nodes immediately into the network.
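Reusing the ConnectionGene sketch from earlier, the add node mutation might look roughly like this (the innovation counter is a hypothetical global, here only for illustration):

```python
import random
import itertools

innovation_counter = itertools.count(start=100)  # hypothetical source of fresh innovation numbers

def add_node_mutation(genome, new_node_id):
    """Split a randomly chosen connection: disable it and bridge it with a brand-new node."""
    old = random.choice([g for g in genome if g.enabled])
    old.enabled = False  # the old connection is disabled...

    # ...and replaced by two new connections passing through the new node,
    # so the new node is integrated into the network immediately
    genome.append(ConnectionGene(old.in_node, new_node_id, 1.0, True, next(innovation_counter)))
    genome.append(ConnectionGene(new_node_id, old.out_node, old.weight, True, next(innovation_counter)))
```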

We also need a method in the soft robot’s class called mutate (sketched after the list below), which takes in the following arguments:

  • num_random_node_adds = 5: The number of nodes added at random
  • num_random_node_removals = 5: The number of nodes removed at random
  • num_random_link_adds = 5: The number of links added at random
  • num_random_link_removals = 5: The number of links removed at random
  • num_random_activation_functions = 100: The number of random activation function changes
  • num_random_weight_changes = 100: The number of random weight changes
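Put together, a sketch of that method might look like the following; the helper calls are the functions mentioned right after, which we still have to write:

```python
class SoftBot:
    # ... __init__ as sketched earlier ...

    def mutate(self,
               num_random_node_adds=5,
               num_random_node_removals=5,
               num_random_link_adds=5,
               num_random_link_removals=5,
               num_random_activation_functions=100,
               num_random_weight_changes=100):
        """Apply a batch of random structural and weight mutations to each network in the genotype."""
        for network in self.genotype:
            for _ in range(num_random_node_adds):
                add_node(network)            # helper functions assumed here,
            for _ in range(num_random_node_removals):
                remove_node(network)         # defined separately (see below)
            for _ in range(num_random_link_adds):
                add_link(network)
            for _ in range(num_random_link_removals):
                remove_link(network)
            for _ in range(num_random_activation_functions):
                mutate_function(network)
            for _ in range(num_random_weight_changes):
                mutate_weight(network)
```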

As the code implies, we would also need to write the helper functions add_node, remove_node, add_link, remove_link, mutate_function, and mutate_weight.
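For example, two of those helpers might look roughly like this, assuming (my assumption, not the original code) that each network stores its connection genes in network.connections and its node genes in network.nodes:

```python
import random

ACTIVATION_FUNCTIONS = ["sigmoid", "gaussian", "sin", "abs"]  # the CPPN function set (more on this below)

def mutate_weight(network, sigma=0.5):
    """Nudge one randomly chosen connection weight with Gaussian noise."""
    gene = random.choice(network.connections)
    gene.weight += random.gauss(0.0, sigma)

def mutate_function(network):
    """Swap the activation function of one randomly chosen node."""
    node = random.choice(network.nodes)
    node.activation = random.choice(ACTIVATION_FUNCTIONS)
```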

Crossover: Combining the fittest parents

When two random neural networks are crossed over, it’s easy to lose important information. For example, say you want to cross over the hidden nodes of two networks, each of which contains 3 nodes.

For example, [A, B, C] x [C, B, A]. The number of possible orderings is 3! = 1 × 2 × 3 = 6. Now, imagine you have N hidden nodes: the number of permutations becomes N!, which can be a very large number. This is not to mention that when a combination is chosen at random to be an offspring, such as [A, B, A], you don’t know if your best “gene” (say C) is included.

Imagine yourself trying to create all orderings of 100 books: you would have 100! possibilities. This could take months or even years, and you don’t even know which ordering contains most of your favourite books.


This problem of potentially losing key information is known as the competing convention problem. The problem can be further complicated with differing conventions, i.e., [A, B, C] and [D, B, E], which share functional interdependence on B.

Stanley and his research team solved this problem through something called historical marking. Basically, whenever a new gene is added, it is given an innovation number. Then, when two individuals are crossed over, their genotypes are aligned in such a way that the corresponding innovation numbers match and only the different elements are exchanged.
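As a sketch, that alignment might look like the following, using the innovation numbers on the ConnectionGene from earlier and assuming (as NEAT does) that the first parent is the fitter one:

```python
import random
from copy import deepcopy

def crossover(fit_parent_genes, other_parent_genes):
    """Align genes by innovation number; matching genes are inherited at random, the rest from the fitter parent."""
    other = {g.innovation: g for g in other_parent_genes}
    child = []
    for gene in fit_parent_genes:
        match = other.get(gene.innovation)
        if match is not None:
            # same innovation number in both parents: pick either version at random
            child.append(deepcopy(random.choice([gene, match])))
        else:
            # disjoint or excess gene: inherited from the fitter parent
            child.append(deepcopy(gene))
    return child
```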

Protecting Innovation through Speciation

Adding a new structure to a network usually initially reduces fitness. However, NEAT speciates the population, so that individuals compete primarily within their own niches instead of with the population at large. This way, topological innovations are protected and have time to optimize their structure before they have to compete with other niches in the population.

The more disjoint two genomes are, the less evolutionary history they share, and thus the less compatible they are. So we can measure how similar two networks, or species, are by computing a compatibility distance: a linear combination of the number of excess and disjoint genes, plus the average weight difference of the matching genes. Networks that are similar to each other likely have similar weights.
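In code, that compatibility distance might look like the sketch below, following the formula from the NEAT paper (c1, c2, c3 are the coefficients of the linear combination; N is the size of the larger genome):

```python
def compatibility_distance(genes1, genes2, c1=1.0, c2=1.0, c3=0.4):
    """delta = c1 * E / N + c2 * D / N + c3 * W_bar, as in the NEAT paper."""
    by_innov1 = {g.innovation: g for g in genes1}
    by_innov2 = {g.innovation: g for g in genes2}

    matching = by_innov1.keys() & by_innov2.keys()
    # W_bar: average weight difference of the matching genes
    w_bar = sum(abs(by_innov1[i].weight - by_innov2[i].weight) for i in matching) / max(1, len(matching))

    # Genes past the other genome's last innovation number are "excess" (E); the rest are "disjoint" (D)
    cutoff = min(max(by_innov1), max(by_innov2))
    non_matching = by_innov1.keys() ^ by_innov2.keys()
    excess = sum(1 for i in non_matching if i > cutoff)
    disjoint = len(non_matching) - excess

    n = max(len(genes1), len(genes2))
    return c1 * excess / n + c2 * disjoint / n + c3 * w_bar
```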

CPPN: A variation of NEAT

CPPN, or Compositional Pattern-Producing Network, is an extension of the NEAT algorithm. While networks in the original NEAT only include hidden nodes with sigmoid activation functions, node genes in CPPN-NEAT include a field specifying the activation function.

When a new node is created, it is assigned a random activation function from a set that includes Gaussian, sigmoid, and periodic functions. In this case, our compatibility distance includes an additional term that counts how many activation functions differ between the two individuals.
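For instance, creating a new CPPN node might look roughly like this; the exact function set and node representation vary between implementations, so treat the names as illustrative:

```python
import random
import numpy as np

# A typical CPPN function set: smooth, radial, and periodic shapes
CPPN_FUNCTIONS = {
    "sigmoid": lambda x: 1.0 / (1.0 + np.exp(-x)),
    "gaussian": lambda x: np.exp(-x ** 2),
    "sin": np.sin,
    "abs": np.abs,
}

def new_cppn_node(node_id):
    """Create a hidden node and assign it a random activation function from the set."""
    name = random.choice(list(CPPN_FUNCTIONS))
    return {"id": node_id, "activation_name": name, "activation": CPPN_FUNCTIONS[name]}
```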

Thanks to this diverse set of activation functions, we can increase the diversity of our population and the probability of finding the best-fit individual!

Genetic algorithms like CPPN-NEAT can help us efficiently evolve robot morphologies, allowing robots to adapt to new environments like space. Unlike Wall-E, who was left lonely on Earth, soft robots can lead the next generation of space exploration in the coming 10–20 years and move humans forward toward living in space. As the technology improves and we understand Mars much, much better, maybe one day we will finally send humans there, and I will be able to live my lifelong dream of going to Mars.

If you like my article, don’t forget to 👏 and share it with your network.

Feel free to subscribe to my monthly newsletter to know what I’m up to 😉 I promise it’s going to be good! Follow me and be prepared to snack on knowledge in A.I., longevity, and more. You’ll likely explore things that you would never have encountered in your daily life!
