On Genetic Algorithms: Why Novelty Search is important

Niranjhana Narayanan
6 min read · Jan 25, 2018


Before discussing Novelty Search, let us take a quick look at what Genetic Algorithms are and how they currently work.

Novel? The addition of an anti-freeze fish gene to strawberries gives them this blue hue

Heuristics and Evolutionary Algorithms

Heuristics are techniques that use shortcuts to solve complex, time-constrained problems quickly, yielding a good-enough solution that may not be optimal. In computer science and related fields, this means the algorithm trades completeness, accuracy, or precision for speed.

Metaheuristics are higher-level frameworks or procedures designed to find, develop, or select a heuristic for a given optimization problem. Genetic/Evolutionary Algorithms are metaheuristics: they are strategies that guide the heuristic search process, inspired by natural processes like natural selection.

A typical Genetic Algorithm implementation would require —

(i) a genetic description of the solution space, and

(ii) a fitness function to evaluate it.

The initial population is our working solution space. The genetic description involves representing each member of the population as a set of parameters encoded in a binary string — the string is usually called a chromosome or genome, and each parameter a gene.
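As a minimal sketch in Python (the sizes here are illustrative, not prescribed by anything above):

```python
import random

# Illustrative encoding: 6 parameters, 3 bits each (values 0-7), so one
# member is an 18-bit string such as '010111001000110101'.
NUM_PARAMS = 6
BITS_PER_PARAM = 3

def random_member():
    """Create one member of the initial population as a binary string."""
    return ''.join(random.choice('01') for _ in range(NUM_PARAMS * BITS_PER_PARAM))

def decode(member):
    """Recover the integer parameters (genes) from the bit string."""
    return [int(member[i:i + BITS_PER_PARAM], 2)
            for i in range(0, len(member), BITS_PER_PARAM)]

population = [random_member() for _ in range(60)]
```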

The fitness function is applied to each member of the solution space to rank how close that member comes to solving the original problem.

Members are selected based on their fitness (the fitter the member, the higher its probability of being selected), their parameters or genes are crossed over to generate an offspring, and this next-generation member is added to the solution space.
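A sketch of that loop, continuing the bit-string encoding above (the fitness function here is a stand-in, and real implementations usually also mutate the offspring):

```python
import random

population = [''.join(random.choice('01') for _ in range(18))
              for _ in range(60)]  # bit-string members, as encoded above

def fitness(member):
    # Stand-in scoring: count the 1-bits. A real fitness function would
    # measure how close the decoded parameters come to solving the problem.
    return member.count('1')

def select_parents(population):
    """Fitness-proportionate (roulette-wheel) selection: the fitter a
    member, the more likely it is to be picked as a parent."""
    weights = [fitness(m) for m in population]
    return random.choices(population, weights=weights, k=2)

def crossover(parent_a, parent_b):
    """One-point crossover: splice the parents' genes at a random cut."""
    cut = random.randrange(1, len(parent_a))
    return parent_a[:cut] + parent_b[cut:]

parent_a, parent_b = select_parents(population)
population.append(crossover(parent_a, parent_b))  # add the offspring
```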

How is this a heuristic? Where is the shortcut?

Okay, suppose evaluating the fitness function of a member takes one minute. If we had 6 parameters and each parameter had 6 possible configurations, brute force would take 6⁶ minutes (about 32 days) to compute all of the combinations the members could have. But if we only had to evaluate the best 60 members of each generation, and say there were 10 generations, then our computation would take 600 minutes, or 10 hours. This is a rough estimate — the best 60 are not always chosen, some random members from the first generation may be bred repeatedly, and so on. Nevertheless, you can see the computational cost that is being avoided. The accuracy of the results from a genetic algorithm depends on the fitness function, the number of generations, the defined parameters, and so on — but good-enough results have been obtained often enough to call this a successful heuristic.
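The arithmetic, spelled out:

```python
brute_force_minutes = 6 ** 6            # 46,656 evaluations at 1 min each
print(brute_force_minutes / (60 * 24))  # ~32.4 days

ga_minutes = 60 * 10                    # 60 members x 10 generations
print(ga_minutes / 60)                  # 10.0 hours
```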

So the algorithm is run with a pre-defined fitness function and a set number of generations to iterate through. The population size of each generation is also specified, so the less fit members are killed off from the solution space with each passing generation. Sounds cruel? Hmm... but this is how we get to the best possible solutions in less time, right? Not always.

The importance of Novelty Search

Novelty Search is not something new or unseen, but it is not widely adopted. Here is why it should be.

From approaches in Machine Learning to the way we define the fitness functions of GAs, one can identify something fundamental: all of them are objective-based. It matters to be more fit according to a bunch of parameters specified by a certain objective. It matters to come up with efficient learning algorithms that improve upon a defined objective. And while this seems to work best, questioning this approach and saying we shouldn't have an objective at all seems almost profane.

Kenneth Stanley, a pioneer of Novelty Search, gives a solid, example-backed explanation of why objective-based search will doom us in the long run: https://www.youtube.com/watch?v=dXQPL9GooyI

Novelty search does not reward progress as defined by objectives or performance; rather, it rewards being different. It is evolution without objectives. The idea is that complexity growth in natural evolution is not a byproduct of optimizing fitness, but a consequence of nature's open-ended propensity to discover various ways of meeting the challenges of life. And this has led to the perpetual discovery of novelty, and an increase in complexity.
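Concretely, the metric in Lehman and Stanley's formulation scores an individual by the sparseness of its behavior: the average distance to its nearest neighbors in behavior space, measured against the current population and an archive of past novel behaviors. A minimal sketch, with placeholder values for k and the archive threshold:

```python
import numpy as np

def novelty(behavior, neighbors, k=15):
    """Average distance to the k nearest neighbors in behavior space
    (current population plus archive, excluding the individual itself).
    Sparse regions score high, so being different is rewarded."""
    distances = np.linalg.norm(np.array(neighbors) - np.array(behavior), axis=1)
    return np.sort(distances)[:k].mean()

# A behavior might be, say, the 2-D point where a biped ends up. Individuals
# whose novelty exceeds a threshold join the archive, so already-seen
# behaviors stay unrewarding for all later generations.
archive = []
THRESHOLD = 1.0  # placeholder value

def maybe_archive(behavior, score):
    if score > THRESHOLD:
        archive.append(behavior)
```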

Consider this comparison on biped locomotion:

Fitness-best may eliminate bending in the first generation as an undesired trait, but novelty-best would keep rewarding this difference in each subsequent generation, until stability with the right amount of bending is achieved.

Novelty search mitigates deception — the failure mode in which seemingly promising candidates fall into local maxima and never attain the global maximum, causing evolutionary algorithms to fail. Deception is unavoidable with objective-based search.
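A toy illustration (the landscape below is invented for the example): hill-climbing on fitness settles on a broad, deceptive local peak and never crosses the low-fitness valley to the better peak beyond it.

```python
import math
import random

# Invented deceptive landscape: a broad local peak at x=2 hides a taller,
# narrow global peak at x=8 behind a fitness valley.
def fitness(x):
    return math.exp(-(x - 2) ** 2) + 2 * math.exp(-10 * (x - 8) ** 2)

x = 0.0
for _ in range(10_000):
    candidate = x + random.gauss(0, 0.1)
    if fitness(candidate) > fitness(x):  # greedy, objective-based step
        x = candidate

print(round(x, 2))  # about 2.0: stuck on the local peak; the global one is at 8
```

A novelty-driven search would keep rewarding end positions unlike those already seen, so the valley is no deterrent.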

It is important to note that novelty search is not a random-walk kind of situation — novelty rewards diverging behaviors, creating a constant pressure to do something new. One might argue this is just replacing one objective function with another, one that maximizes novelty instead of fitness. But look closer, because they are conceptually different:

Fitness creates a gradient towards the objective — maximizing fitness is done with the intent of bringing the search towards a goal.

Novelty creates a gradient of behavioral difference — maximizing it is done without any intended direction or termination point for the search.

And since there is no point of aim, there is no point of comparison; it would not make sense, for example, to compare the novel characteristics of different generations and say one is better. Each is different, and that's that.
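Mechanically, the two selection schemes can look almost identical: only the scoring function changes, which is exactly why the conceptual difference above is easy to miss. A schematic sketch (the function names are mine):

```python
def select_by_fitness(population, fitness, k):
    """Objective-based: keep whoever scores best against the goal."""
    return sorted(population, key=fitness, reverse=True)[:k]

def select_by_novelty(population, novelty, k):
    """Non-objective: keep whoever behaves least like anything seen so far.
    The ranking implies no direction toward a goal and no stopping point."""
    return sorted(population, key=novelty, reverse=True)[:k]
```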

You can read Ken’s full paper here — https://pdfs.semanticscholar.org/e49d/1ee1bddea0922faca358f3fd42474baad300.pdf

You can even use a few implementations from the NEAT software catalog here — http://eplex.cs.ucf.edu/neat_software/#novelty

Different kinds of Novelty Search algorithms

There have been a number of extensions to Novelty Search since its inception in 2008 by Joel Lehman and Kenneth Stanley. Two of them are Minimal Criteria Novelty Search (MCNS) and Novelty Search with Local Competition (NSLC), both of which maximize novelty through different approaches.
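In MCNS, an individual that fails a domain-specific minimal criterion is simply assigned zero novelty, so the search diverges only among viable solutions; NSLC instead pairs novelty with competition against behaviorally similar neighbors. A rough sketch of the MCNS gating (the criterion and names are placeholders, reusing the novelty function sketched earlier):

```python
def mcns_score(individual, behavior, neighbors):
    """Minimal Criteria Novelty Search: novelty is awarded only to
    individuals that satisfy a domain-specific minimal criterion."""
    if not meets_minimal_criterion(individual):
        return 0.0
    return novelty(behavior, neighbors)  # the metric sketched earlier

def meets_minimal_criterion(individual):
    # Placeholder: a real criterion is domain-specific, e.g. a maze
    # navigator must end up inside the maze, a biped must stay upright.
    return True
```

I can think of one more way to go about implementing these algorithms; I call it the Novelty Network approach: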

The catch is that defining what is novel, and what kind of differences you are looking for in your solution space, can get quite tricky. And this isn't a drawback — it is desirable. You shouldn't want to look for anything in particular; in fact, divergent differences lead to creativity. And there's proof of that all around us — natural evolution, human innovation, and so on. Vacuum tubes led to computers — but if we had told people back then that we were looking for computers, we would have neither vacuum tubes nor computers.

So instead of focusing on defining novelty, another way to maximize novelty would be to reward undefined novelty. Putting it in more concrete terms: suppose we have run our novelty search algorithm on a sample space, and over a few generations we point out novel characteristics and reward them, encouraging later selections to diverge in similar ways. After a few iterations, one would be able to say which members count as novelty-best in different situations. Now imagine we had a network — connections between all these situations and the amount of novelty observed in each. Based on the context, and using this information gathered across generations, we would be able to identify which gene or parameter to modify for more novel behavior, and what kind of novelty to expect.

This approach still resides within non-objective-based search, but with the help of an enabler function in the form of a network.
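Purely as a hypothetical sketch of that idea (every name below is invented for illustration):

```python
from collections import defaultdict

# Hypothetical 'Novelty Network': for each situation (context), record which
# gene modifications produced how much novelty, across generations.
novelty_network = defaultdict(list)

def record(situation, gene, novelty_gain):
    """Log, after each generation, how much novelty a gene change produced."""
    novelty_network[situation].append((gene, novelty_gain))

def most_promising_gene(situation):
    """Given a context, suggest the gene whose modification has historically
    led to the most novel behavior."""
    history = novelty_network[situation]
    return max(history, key=lambda rec: rec[1])[0] if history else None
```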

Social Network of Genes — a tool called 'Mashup' identifies relationships between genes

Mashup is a gene-analysis tool that uses a similar network approach to highlight important relationships between genes, offering insights into disease, treatment, and gene analogs across species.

Well, if you made it this far without getting bored, here’s something cool to watch — https://www.youtube.com/watch?v=pgaEE27nsQw&t=8s&index=8&list=PLVtr2p50ylCfUKjmMJQ8z6vGPJ3z2m1qN

“Abandoning objectives is often the only way to outperform the direct search for the objective.” — Kenneth Stanley
