We began here.
This single dot alone in space, a cell surrounded by the primordial oceans of the past.
Over billions of years that cell blossomed into all living things on this planet, including us.
What brought us from that primitive state was Mother Nature's greatest engineering achievement: evolution.
Evolution is nature's way of perfecting the art of creation itself. Its iterative design forms a robust yet elegant way to craft living things. Throughout history, this beauty has attracted scientists such as Charles Darwin, who built the pillars of our understanding of how living things came to be. But recently it has attracted attention of another kind: computer programmers.
Evolution is simple by nature and highly robust in practice. What's most important is that, at its core, it is a way of solving an optimization problem.
The Darwinian evolutionary process is now being used to build complex computer systems, namely genetic algorithms for machine learning, which draw their power from the iterative processes of evolution.
Why should we care?
You see, there's a small problem with our classical approach to machine-learning optimization, and the problem lies with the programmers themselves. Hyperparameters such as the learning rate, the neural connections, and the number of layers are all hard-coded by the programmer. What if those values aren't optimal? Backpropagation may then never truly converge to the global minimum of the loss function, so we can't maximize the potential of our learning models.
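To make this concrete, here is a hypothetical hand-tuned configuration; every name and value below is illustrative, not from the article's code:

```javascript
// Every value here is a guess hard-coded by the programmer,
// not something the learning process itself optimized.
// (Hypothetical values for illustration.)
const config = {
  learningRate: 0.01,  // too high and training diverges; too low and it stalls
  hiddenLayers: 2,     // why 2? often just convention
  neuronsPerLayer: 16, // another guess
};
```

If any of these guesses is off, gradient descent may settle into a poor local minimum, and the architecture itself is never part of what gets optimized.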
That’s a problem.
Evolving The Brain.
What researchers found was a potentially groundbreaking idea: using evolution to train models instead of backpropagation. What if the computer could decide, through evolution, how complex its own neural architecture should be? Introducing NEAT (NeuroEvolution of Augmenting Topologies).
On a high level, NEAT mirrors the basic machinery of evolution, broken into four main stages: genesis, evaluation/natural selection, crossbreeding, and mutation.
Initially, each of the agents gets its own randomly generated neural network. This is called the genesis stage.
Evaluation + Selection
Then each neural network is tested, and those that fail the test are killed off. AKA natural selection.
Then the winners cross their genes with each other. This can be done by mixing the parents' weights, for example by copying each of the child's weights at random from one parent or the other.
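A minimal sketch of that crossover step, assuming each agent's genome is a flat array of weights (function name and representation are illustrative, not the article's exact code):

```javascript
// Uniform crossover: each weight of the child is copied at random
// from one of the two parents. (Illustrative sketch.)
function crossover(parentA, parentB) {
  return parentA.map((w, i) => (Math.random() < 0.5 ? w : parentB[i]));
}

const child = crossover([0.1, 0.2, 0.3], [0.9, 0.8, 0.7]);
```

The child always has the same number of weights as its parents, and every weight comes from one parent or the other.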
We then introduce randomness through mutations; these can help the agent or hurt it, just like in real life.
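Mutation can be sketched as a small random jitter applied to each weight with some probability (the rate and strength below are assumed values, not tuned constants from the article's code):

```javascript
// Mutation: with probability `rate`, jitter a weight by up to
// `strength` in either direction. (Illustrative sketch;
// rate and strength are assumptions.)
function mutate(weights, rate = 0.1, strength = 0.5) {
  return weights.map((w) =>
    Math.random() < rate ? w + (Math.random() * 2 - 1) * strength : w
  );
}
```

With `rate = 0` the genome passes through unchanged; with `rate = 1` every weight moves, but never by more than `strength`.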
The cycle repeats for multiple generations (cycles) until a model is deemed good enough.
Throughout this entire process, the agent itself decides how complex its neural network (aka its brain) becomes. This is what classical backpropagation couldn't accomplish.
Initialize population. We begin with 50 agents, each with its own attributes. These attributes drive the agent's decisions in the game, so we loop through all 50 agents and assign each one random attributes.
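The genesis step might look roughly like this (the attribute names are guesses at Tetris-style heuristics, not the repo's exact fields):

```javascript
// Genesis: create 50 agents, each with random attributes in [-1, 1].
// Attribute names are illustrative Tetris heuristics.
function randomAgent() {
  const r = () => Math.random() * 2 - 1;
  return { rowsCleared: r(), holes: r(), height: r(), fitness: 0 };
}

const population = Array.from({ length: 50 }, randomAgent);
```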
Evaluation, aka natural selection, tests each agent in the game and logs how it performs. The makeNextMove() function plays the game.
The evaluateNextGenome() function finds the best-performing agents and lets them breed.
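A sketch of that selection step, assuming each agent has already been scored with a fitness value (the keep fraction is an assumption, not the article's constant):

```javascript
// Rank agents by fitness (highest first) and keep the top fraction
// as parents for breeding. Keep at least two so crossover is possible.
// (Illustrative sketch.)
function selectParents(population, keepFraction = 0.25) {
  const ranked = [...population].sort((a, b) => b.fitness - a.fitness);
  const keep = Math.max(2, Math.floor(ranked.length * keepFraction));
  return ranked.slice(0, keep);
}

const parents = selectParents(
  [{ fitness: 1 }, { fitness: 4 }, { fitness: 3 }, { fitness: 2 }],
  0.5
);
```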
Then we crossbreed the top agents with each other, choosing random attributes from each parent to create the new agents. This is the makeChild() function. Notice how we also add mutation, so this step combines crossbreeding with variation.
Then we implement the evolve() function, where all of the components come together. The code is super readable.
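Put together, one generation of the loop looks roughly like this self-contained sketch (the fitness function and constants are stand-ins for illustration, not the article's Tetris scoring):

```javascript
// One generation: score -> select the top quarter -> breed with
// uniform crossover plus occasional mutation. (Illustrative sketch.)
function evolve(population, fitnessFn) {
  const scored = population
    .map((genes) => ({ genes, fitness: fitnessFn(genes) }))
    .sort((a, b) => b.fitness - a.fitness);
  const parents = scored.slice(0, Math.ceil(scored.length / 4));
  const pick = () => parents[Math.floor(Math.random() * parents.length)].genes;
  return population.map(() => {
    const a = pick();
    const b = pick();
    return a.map((w, i) =>
      (Math.random() < 0.5 ? w : b[i]) +              // crossover
      (Math.random() < 0.1 ? Math.random() - 0.5 : 0) // mutation
    );
  });
}

// Toy run: evolve 20 five-gene genomes toward a larger gene sum.
const sum = (g) => g.reduce((s, x) => s + x, 0);
let pop = Array.from({ length: 20 }, () =>
  Array.from({ length: 5 }, () => Math.random() - 0.5)
);
for (let gen = 0; gen < 30; gen++) pop = evolve(pop, sum);
```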
The rest of the code for the Tetris game is all graphics related. You can check it out in my GitHub.
You can see it in action on my website :)! Click here!
- Backprop doesn't always converge
- Genetic algorithms can also optimize neural networks
- Genetic algorithms can do the job of backprop by replicating nature.
Before You Go
Connect with me on LinkedIn
Feel free to reach out to e-mail me with any questions: email@example.com
And clap the article if you enjoyed 😊