# Optimization of the Mutation on Genetic Algorithms

## Study based on the example of the Knapsack optimization problem

The goal of genetic algorithms is to quickly find a good solution to a complex problem. **In this article we’ll study how you can optimize the mutation rate**.

If you don’t remember how this machine learning algorithm works, you can check my previous tutorial:

One of the trickiest parts is to set the constants that optimize the algorithm. Those constants are the following:

- *population size*: number of individuals in a generation
- *number of generations*: number of cycles
- *mutation rate*: the chance that an individual’s DNA mutates

The computation time depends on the first two parameters. Thus we have to limit them to speed up our genetic algorithm.

We created mutations in order to decrease the population size. Limited population size means a limited number of DNA samples. Without mutations, a new allele can’t appear by itself. So if it doesn’t exist in a population, it can’t appear in the next. The mutation processes modify alleles at each new generation. So, if an allele doesn’t exist in a population, it can appear later. We, therefore, reduce the risk of missing a good allele.

**In this study, we are going to test this hypothesis.** We are also going to find a way to optimize the mutation rate in the algorithm.

If you want to follow the explanation with the code, you can check my git repository. It’s in the file *‘KnapsackProblem_Mutation’*.

# The Knapsack problem

The example we will study is the Knapsack problem. In this problem, there is a set of boxes; each box has a value and a weight. On the other hand, you have a bag. It’s **your** bag. It has a limited weight capacity. Of course, you want to stuff it with as much value as possible. To sum it up, the problem is: *which boxes should we choose to maximize the total value of the backpack?*

**The genetic algorithm:**

- Item set:

This is the collection of boxes: an array of boxes, where each box has two parameters, a weight and a value.

- Individuals:

Each individual is an array of booleans, the same size as the item set. For each item, the corresponding value is true if the item is in the bag, and false otherwise.
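As a minimal sketch of this representation (names such as `NUM_ITEMS` and `random_individual` are my own, not necessarily those of the repository):

```python
import random

random.seed(0)
NUM_ITEMS = 500  # number of boxes, as in the study below

# Item set: each box is a (weight, value) pair with arbitrary small values.
item_set = [(random.randint(1, 20), random.randint(1, 20))
            for _ in range(NUM_ITEMS)]

def random_individual(n_items=NUM_ITEMS):
    """An individual: one boolean per box, True meaning 'in the bag'."""
    return [random.random() < 0.5 for _ in range(n_items)]
```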

- Selection and reproduction:

For each population, we select the best individuals along with some random individuals. After the selection, each couple of breeders creates 5 children. Each child’s DNA is built gene by gene, drawn randomly from the parents’ DNA.
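A sketch of this selection and reproduction scheme; the breeder counts (`n_best`, `n_random`) are assumptions for illustration, not the repository’s actual values:

```python
import random

def select_breeders(population, fitness, n_best=10, n_random=5):
    """Keep the fittest individuals plus a few random ones."""
    ranked = sorted(population, key=fitness, reverse=True)
    breeders = ranked[:n_best] + random.sample(ranked[n_best:], n_random)
    random.shuffle(breeders)  # so couples are formed at random
    return breeders

def crossover(parent_a, parent_b):
    """Each child gene is copied at random from one of the two parents."""
    return [random.choice(genes) for genes in zip(parent_a, parent_b)]

def reproduce(breeders, children_per_couple=5):
    """Pair up consecutive breeders; each couple produces 5 children."""
    children = []
    for i in range(0, len(breeders) - 1, 2):
        for _ in range(children_per_couple):
            children.append(crossover(breeders[i], breeders[i + 1]))
    return children
```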

- Fitness:

For the fitness function, there are 2 cases. If the individual’s total weight exceeds the backpack capacity, the fitness is zero. Otherwise, it’s equal to 2 times the total value minus the total weight of the selected items. Empirically, this is the best formula I’ve found.
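The two cases above translate directly into code (a sketch; the repository’s function may be organized differently):

```python
def fitness(individual, item_set, capacity):
    """0 if the selection is overweight, else 2 * value - weight.
    `item_set` is a list of (weight, value) pairs; `individual` a
    boolean list saying which boxes are in the bag."""
    weight = sum(w for (w, _), kept in zip(item_set, individual) if kept)
    value = sum(v for (_, v), kept in zip(item_set, individual) if kept)
    return 0 if weight > capacity else 2 * value - weight
```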

- Mutation: intragenic

After each reproduction, each individual has a probability of *“mutation rate”* to mutate. When an individual mutates, the value of one of its genes is flipped.
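The intragenic mutation can be sketched like this (one dice roll per individual, at most one gene flipped):

```python
import random

def mutate_intragenic(individual, mutation_rate):
    """With probability `mutation_rate`, flip exactly one random gene."""
    mutant = individual.copy()
    if random.random() < mutation_rate:
        i = random.randrange(len(mutant))
        mutant[i] = not mutant[i]
    return mutant
```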

# The bond: “population size” and “mutation rate”

## Study of the trade-off

Thus, we will try different values of *“mutation rate”* and *“population size”*, then compare their performances. What is the performance of a genetic algorithm? It is its ability to find a solution that is as optimized as possible within a given time. So we are going to give the algorithm a limited time and look at the solution found. To visualize the results, we will draw a 3D map using **mplot3d**.
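A minimal mplot3d sketch of such a map; the `score` function here is a hypothetical stand-in, where the real study would plug in “best fitness found by the GA within the time limit” for each (rate, size) pair:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import numpy as np
from matplotlib import pyplot as plt

def score(rate, size):
    """Placeholder benchmark: replace with the GA's measured performance."""
    return -(size - 40.0) ** 2 / 100.0 - 10.0 * rate

rates = np.linspace(0.0, 1.0, 20)    # mutation rate axis
sizes = np.linspace(5.0, 100.0, 20)  # population size axis
R, S = np.meshgrid(rates, sizes)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")  # mplot3d surface plot
ax.plot_surface(R, S, score(R, S))
ax.set_xlabel("mutation rate")
ax.set_ylabel("population size")
ax.set_zlabel("performance")
fig.savefig("tradeoff_3d.png")
```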

# Results

## First step: *influence of the population size*

Let’s understand the influence of *“population size”* on the efficiency of the whole algorithm. It’s important to understand that because the goal of the mutation is to decrease the population size. So let’s see the performance without mutations first. We will then compare the results with the mutation. *The constants of this first study are the following:*

- number of items: 500
- mutation rate: 0%
- number of repetitions for each value: 50
- time limit: 0.05 s

The result is striking: performance isn’t positively correlated with the population size. On the contrary, we can see that there is an optimal population size, and the results fall off roughly quadratically on both sides of this extremum.

How can we explain that? A high “population size” implies a lot of computation at each generation. So the higher the population size, the fewer generations fit within the time limit. A big population size means fewer generations, and that limits the efficiency of the algorithm.
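The trade-off can be demonstrated with a small timing sketch (the names and the fake generation below are illustrative; the real `step` would run one full generation of selection, reproduction, and evaluation):

```python
import time

def generations_within_budget(step, budget_s=0.05):
    """Count how many generations complete within a fixed time budget."""
    start = time.perf_counter()
    generations = 0
    while time.perf_counter() - start < budget_s:
        step()
        generations += 1
    return generations

def fake_generation(pop_size):
    """A stand-in whose cost grows linearly with population size."""
    total = 0
    for _ in range(pop_size * 100):
        total += 1
    return total

# A small population fits many more generations in the same budget:
small_pop = generations_within_budget(lambda: fake_generation(10))
large_pop = generations_within_budget(lambda: fake_generation(1000))
```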

In order to verify this result, I increased the time limit. If our hypothesis is correct, it shouldn’t change the overall shape of the graph, but it should shift the graph and move the optimal population size.

This experiment tends to confirm our theory. There is an optimal “population size” for the problem without mutations, and this optimum depends on the time limit: the more time available to the algorithm, the larger the ideal population size.

## Second step: influence of mutation rate and population size

Now, we are going to vary both parameters and try to find their joint influence on the strength of the algorithm.

Here is the algorithm used to draw the 3D graph. Beware, there are lots of imports; here is the documentation used to install *matplotlib*.

*Constants:*

- number of items: 500
- mutation rate: from 0 to 100
- population size: from 5 to 100
- number of repetitions for each value: 50
- time limit: 0.05 s

Those graphs are quite startling. On the one hand, mutation seems to dampen the influence of the population size. On the other hand, mutation seems to have a negative impact on the efficiency of the algorithm.

There is a potential explanation: we are using intragenic mutation, which means at most one gene can be modified per individual at each generation. But each individual is composed of 500 genes. This disproportion might explain the disappointing result.

In order to test this hypothesis, we have to modify the implementation of the mutation. We are going to try intergenic mutation.

## Last step: influence of intergenic mutation and population size

Intergenic mutation differs from intragenic mutation in a simple way. In intragenic mutation, for each specimen, we roll the dice once to decide whether it mutates; if so, we change the value of exactly one of its genes. In intergenic mutation, by contrast, we consider each gene of the individual separately, and every one of those genes can mutate. The potential number of mutated genes is much higher in the intergenic case. Let’s hope it gives more influence to the mutation. *Moreover, this is how mutation works in real genetics.*
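The intergenic variant changes only one thing in the sketch: the dice are rolled per gene instead of per individual.

```python
import random

def mutate_intergenic(individual, mutation_rate):
    """Every gene mutates independently with probability `mutation_rate`."""
    return [(not g) if random.random() < mutation_rate else g
            for g in individual]
```

With 500 genes, even a 1% rate flips about 5 genes per individual per generation, versus at most 1 in the intragenic case.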

*Constants:*

- number of items: 500
- mutation rate: from 0 to 100
- population size: from 5 to 100
- number of repetitions for each value: 50
- time limit: 0.05 s

This graph isn’t easy to read, so I’ll plot several slices of it to highlight the results.

So, mutation still has a negative impact on the algorithm; apparently, it is even worse.

# Conclusion

Apparently, in the specific case of the Knapsack problem, mutation isn’t useful for the algorithm, so it shouldn’t be kept. (I’ll leave it in my git repo so you can see how it was implemented.)

**Caution:** the results of this study are not directly reusable. You can’t start a new problem assuming that mutation has a negative influence on your algorithm; that might not be the case for your problem.

The values optimized in the context of the Knapsack problem aren’t optimized in another context. The couple “*mutation rate*” / “*population size*” must be tuned for your own algorithm. This article only aims to show you how to choose these two values. *I’m really surprised by my results; if you have any counter-examples or explanations, I would be glad to receive them.*

**Did you like this article? Feel free to comment, or contact me.**