Neural Evolutionary Algorithms — NEAT

Peter Ma
Feb 26, 2019 · 4 min read
Life begins here.

We began here.

This single dot alone in space, a cell surrounded by the primordial oceans of the past.

Over billions of years that cell blossomed into all living things on this planet, including us.

What brought us from that primitive state was mother nature’s greatest engineering achievement: evolution.

Evolution is nature’s way of perfecting the art of creation itself. Its iterative design forms a robust yet elegant way to craft living things. Throughout history, this beauty has attracted scientists such as Charles Darwin, who built the pillars of our understanding of how living things came to be. But recently it has attracted the attention of another kind: computer programmers.

Evolution is simple by nature and highly robust in practice. What’s most important is its ability to solve an inherent optimization problem.

The Darwinian evolutionary process is now being used to develop complex computer systems, namely genetic algorithms for machine learning, which draw their power from the iterative processes of evolution.

Why should we care?

You see, there’s a small problem with our classical approach to machine learning optimization, and the problem lies with the programmers themselves. Hyperparameters such as the learning rate, the neural connections, and the number of layers are all hard-coded by the programmer. What if those values aren’t optimal? Then backpropagation will never truly converge to a global minimum of the loss function, and we can’t maximize the potential of our learning models.

That’s a problem.

Evolving The Brain.

NEAT (NeuroEvolution of Augmenting Topologies)

Flow Diagram of an Evolutionary Approach

  • Genesis
  • Evaluation + Selection
  • Crossbreeding
  • Mutation

The cycle repeats through multiple generations until a sufficiently good model emerges.

Throughout this entire process, each agent can decide how complex its neural network (a.k.a. its brain) is. This is something classical back-propagation can’t accomplish.

My code

Initialize the population. We begin with 50 agents, each with its own attributes. These attributes drive the agent’s decisions in the game, so we loop through all 50 agents and assign them random attributes.
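The genesis step can be sketched in Python as follows (the original project lives on the author’s GitHub; the attribute names here, such as heuristic weights for a Tetris board, are illustrative assumptions, not the real code’s names):

```python
import random

POPULATION_SIZE = 50

def random_agent():
    # Each agent's "attributes" are random weights for game heuristics.
    # These keys are illustrative stand-ins for the real project's set.
    return {
        "rows_cleared": random.uniform(-1, 1),
        "weighted_height": random.uniform(-1, 1),
        "holes": random.uniform(-1, 1),
    }

# Genesis: a population of 50 agents with random attributes.
population = [random_agent() for _ in range(POPULATION_SIZE)]
```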

Evaluation, a.k.a. natural selection, tests each agent in the game and logs how it performs. The make next move() function plays the game.
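The evaluation step might look like this sketch, where play_game is a hypothetical stand-in for the actual Tetris loop driven by the make-next-move logic:

```python
def play_game(agent):
    # Stand-in for the real game loop: in the actual project, the
    # make-next-move function scores every possible piece placement
    # with the agent's weights and plays the best one. Here we just
    # return a dummy fitness so the sketch is runnable.
    return sum(agent.values())  # placeholder fitness

def evaluate(population):
    # Natural selection: log how each agent performs.
    return [(agent, play_game(agent)) for agent in population]
```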

The evaluate-next-genome function finds the BEST PERFORMING agents and lets them breed.
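Picking the breeders can be as simple as sorting the scored agents by fitness and keeping the top slice (this helper and its name are assumptions for illustration):

```python
def select_fittest(scored, keep=10):
    # scored is a list of (agent, fitness) pairs; rank by fitness
    # and keep only the best performers for breeding.
    ranked = sorted(scored, key=lambda pair: pair[1], reverse=True)
    return [agent for agent, fitness in ranked[:keep]]
```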

Then we crossbreed the top agents with one another, choosing random attributes from each parent to create the new agents. This is the make child function. Notice that we also add mutation, so the process mixes crossbreeding with variation.
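A minimal sketch of the make-child idea, with the mutation rate and nudge size as assumed constants:

```python
import random

MUTATION_RATE = 0.05  # assumed value, for illustration

def make_child(mom, dad):
    # Crossbreeding: each attribute is picked at random from one parent.
    child = {key: random.choice([mom[key], dad[key]]) for key in mom}
    # Mutation: occasionally nudge an attribute to keep variation alive.
    for key in child:
        if random.random() < MUTATION_RATE:
            child[key] += random.uniform(-0.2, 0.2)
    return child
```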

Then we implement the evolve function, where all of the components come together. The code is super readable.
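Putting the pieces together, one generation of the evolve loop could look like this self-contained sketch (the fitness function and mutation constants are illustrative stand-ins for the real Tetris scoring):

```python
import random

def evolve(population, keep=10):
    # One generation: evaluate -> select -> crossbreed + mutate.
    # sum(agent.values()) is a dummy fitness standing in for the real
    # Tetris score; the actual project plays full games here.
    ranked = sorted(population, key=lambda a: sum(a.values()), reverse=True)
    parents = ranked[:keep]        # natural selection
    next_gen = list(parents)       # survivors carry over unchanged
    while len(next_gen) < len(population):
        mom, dad = random.sample(parents, 2)
        # Crossbreeding: each attribute comes from a random parent.
        child = {k: random.choice([mom[k], dad[k]]) for k in mom}
        if random.random() < 0.05:  # occasional mutation
            k = random.choice(list(child))
            child[k] += random.uniform(-0.2, 0.2)
        next_gen.append(child)
    return next_gen

# Run a few generations over a population of 50 random agents.
population = [{"a": random.uniform(-1, 1), "b": random.uniform(-1, 1)}
              for _ in range(50)]
for _ in range(5):
    population = evolve(population)
```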

The rest of the code for the Tetris game is all graphics-related. You can check it out on my GitHub.

You can see it in action on my website :)!

Key takeaways

  • Genetic algorithms can also help optimize neural networks.
  • Genetic algorithms can do the job of backprop by replicating nature.

Data Driven Investor

from confusion to clarity not insanity
