Optimisation: Chromosomes Versus Flies

“You’re in a pub on an island and your boss, who is strangely in a good mood, comes up with an idea: he’ll give a raise to anyone who can solve the following problem:
Each one of you is given 10 cards, numbered 1–10. You’re asked to divide the cards into two groups so that the sum of the numbers on the cards in the first group is as close as possible to 36 and the product of the numbers in the second group is as close as possible to 360.”

Above is the optimisation problem to be solved in this week’s lab using genetic algorithms (GA) and dispersive flies optimisation (DFO). Both algorithms stem from the field of natural computing; however, GA derives from an evolutionary approach and DFO from a swarm intelligence approach.

For this problem, the candidate solution for each chromosome or fly is encoded as a binary representation, making it easy to determine which numbered card belongs to which group. Group one holds the addition sum (represented by 0) and group two holds the multiplication product (represented by 1).

The candidate solution is an array of 10 binary values, each position corresponding to the card of the same number. An example candidate solution is shown below.

 1  2  3  4  5  6  7  8  9  10
[0, 1, 1, 0, 0, 1, 0, 0, 1, 1]
Group 1: 1+4+5+7+8 = 25
Group 2: 2*3*6*9*10 = 3240

Using a binary representation creates a search space of size 2¹⁰, i.e. 1024 possible combinations of the ten values.
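
To make the encoding concrete, below is a minimal sketch in Python of how a candidate might be generated and scored. The fitness function is an assumption on my part — it simply sums each group’s distance from its target (36 and 360), so lower values are better — and the names are my own rather than the lab’s.

import random

TARGET_SUM, TARGET_PRODUCT = 36, 360
CARDS = list(range(1, 11))

def random_candidate():
    # One bit per card: 0 -> group one (sum), 1 -> group two (product)
    return [random.randint(0, 1) for _ in CARDS]

def fitness(candidate):
    # Lower is better: combined distance of each group from its target (assumed error measure)
    group_sum = sum(card for card, bit in zip(CARDS, candidate) if bit == 0)
    group_product = 1
    for card, bit in zip(CARDS, candidate):
        if bit == 1:
            group_product *= card
    return abs(group_sum - TARGET_SUM) + abs(group_product - TARGET_PRODUCT)

print(fitness([0, 1, 1, 0, 0, 1, 0, 0, 1, 1]))  # |25 - 36| + |3240 - 360| = 2891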

Genetic Algorithm

A more detailed explanation of GAs can be found in my previous blog post, “Genetic Algorithms and The Travelling Salesman Problem”.

The binary representation of this problem serves as the genotype: the representation of the solution that the program can easily store and manipulate. The phenotype is the real-world depiction of the solution; in this scenario, that is the set of cards numbered one to ten divided into two groups so as to satisfy each group’s condition.

Upon initialisation, a population of size N is created, with each chromosome assigned a randomised candidate solution in the genotype representation. Roulette wheel selection is then used to bias the choice of parents towards fitter chromosomes when creating offspring, and a mutation rate of 0.15 gives each offspring a chance of having two binary values in its solution swapped.
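
As a rough sketch of the selection and mutation steps just described (the helper names are mine; I assume fitness is being minimised, so the wheel weights use the inverse of fitness, and single-point crossover is also an assumption):

import random

MUTATION_RATE = 0.15

def roulette_select(population, fitnesses):
    # Fitter (lower-fitness) chromosomes get proportionally larger slices of the wheel
    weights = [1.0 / (1.0 + f) for f in fitnesses]
    return random.choices(population, weights=weights, k=2)

def crossover(parent_a, parent_b):
    # Single-point crossover (assumed operator)
    point = random.randint(1, len(parent_a) - 1)
    return parent_a[:point] + parent_b[point:]

def mutate(chromosome):
    # With probability MUTATION_RATE, swap two randomly chosen binary values
    child = chromosome[:]
    if random.random() < MUTATION_RATE:
        i, j = random.sample(range(len(child)), 2)
        child[i], child[j] = child[j], child[i]
    return child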

I have compared two modalities of GA, steady-state and generational. As mentioned in my previous blog post, steady-state incorporates an elitist approach in which the best chromosome of the population is carried over to the next generation. The generational variation does not feature this mechanism and so typically converges to a solution at a much slower rate than steady-state. A population of 25 is used with a mutation rate of 0.15 for both modalities.
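
Tying the pieces together, here is a condensed sketch of the main loop, with a flag toggling the elitist carry-over that separates the two modalities. It reuses the random_candidate, fitness, roulette_select, crossover, and mutate helpers sketched above, and the zero-fitness stopping condition reflects my assumed fitness function rather than the lab’s exact code.

def run_ga(pop_size=25, steady_state=True, max_iterations=10_000):
    population = [random_candidate() for _ in range(pop_size)]
    for iteration in range(max_iterations):
        fitnesses = [fitness(c) for c in population]
        best = min(population, key=fitness)
        if fitness(best) == 0:           # both targets hit exactly (assumed stopping rule)
            return best, iteration
        offspring = []
        if steady_state:
            offspring.append(best[:])    # elitism: carry the best chromosome over unchanged
        while len(offspring) < pop_size:
            parent_a, parent_b = roulette_select(population, fitnesses)
            offspring.append(mutate(crossover(parent_a, parent_b)))
        population = offspring
    return min(population, key=fitness), max_iterations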

The graphs above demonstrate the fitness of the best chromosome in the population over the course of one test. The generational run took a total of 583 iterations to reach the solution; the steady-state run took 543. The visualisation shows a drastic difference in how the best chromosome behaves in each variation. Steady-state ensures that the best candidate solution found so far is never lost, as it is carried over to successive generations until it is surpassed by a stronger solution. This often means that the best chromosome stays the same for long stretches of the algorithm’s run-time. As this feature is not present in the generational modality, finding the solution relies solely on progressive genetic change through successive generations. The crossover and mutation phases may therefore turn a candidate solution that was once classified as the fittest into a much weaker one, which causes the unpredictability and rapid change demonstrated above.

To improve the performance of the generational modality, the No Free Lunch theorem is worth bearing in mind: no single crossover operator is best for every problem, so the operator should be tailored to the specific problem being optimised. Designing an operator that encourages offspring to carry progressively stronger candidate solutions would help to avoid the rapid and uncertain changes in the best chromosome.

Dispersive Flies Optimisation

A more detailed explanation of DFO can be found in my blog post, ‘Dispersive Flies Optimisation’. For this problem, each fly’s feature vector is composed of 10 continuous values in the range 0 to 1, and a constraint ensures that each feature remains within this range. Since the problem itself lies in a discrete search space, each feature value is mapped to a binary equivalent in the same manner as in the genetic algorithm, where 0 corresponds to group one and 1 to group two.
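
As a small illustration of the constraint and the decoding, below is one way the mapping might look; the 0.5 rounding threshold is my assumption, as the post does not state the exact rule.

def clamp(position):
    # Keep every feature within the [0, 1] range
    return [min(max(x, 0.0), 1.0) for x in position]

def decode(position):
    # Round each continuous feature to 0 or 1 (assumed 0.5 threshold),
    # recovering the same binary grouping used by the GA
    return [0 if x < 0.5 else 1 for x in position]

fly = [0.12, 0.91, 0.67, 0.03, 0.44, 0.88, 0.29, 0.31, 0.95, 0.72]
print(decode(clamp(fly)))  # [0, 1, 1, 0, 0, 1, 0, 0, 1, 1]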

I have tested two variations of DFO: the elitist approach and, conversely, a more egalitarian approach. For ease of writing I will describe the latter as the egalitarian approach, although this is not a formal definition in swarm intelligence in the way the elitist approach is. In the elitist approach, the update of each fly’s feature vector is determined by both that fly’s best neighbour and the best fly in the swarm. This centralised form of communication is not present in swarms or flocks in nature, so this variation strays from the conventional method of communication in swarm intelligence algorithms. However, communication in DFO can be restricted to neighbouring flies only, disregarding the best fly in the swarm, to form the egalitarian approach.
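
A sketch of the position update for a single fly is shown below, based on my reading of the canonical DFO update rule (follow the best neighbour with a pull towards the swarm’s best fly, and a disturbance threshold that restarts individual components). The egalitarian branch, which drops the swarm-best term, is one plausible formulation rather than a definitive one, and clamp is the helper from the previous sketch.

import random

DISTURBANCE = 0.15  # disturbance threshold, matching the GA's mutation rate

def update_fly(i, swarm, fitnesses, swarm_best, elitist=True):
    # Ring topology: compare the two immediate neighbours and keep the fitter one
    n = len(swarm)
    left, right = (i - 1) % n, (i + 1) % n
    nbest = swarm[left] if fitnesses[left] < fitnesses[right] else swarm[right]
    new_position = []
    for d in range(len(swarm[i])):
        if random.random() < DISTURBANCE:
            # Disturbance: restart this component at a random value in [0, 1]
            new_position.append(random.random())
        elif elitist:
            # Elitist update: follow the best neighbour, pulled towards the swarm's best fly
            new_position.append(nbest[d] + random.random() * (swarm_best[d] - swarm[i][d]))
        else:
            # 'Egalitarian' update: use only the best neighbour, ignoring the swarm's best
            new_position.append(nbest[d] + random.random() * (nbest[d] - swarm[i][d]))
    return clamp(new_position)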

Elitist Approach (Left) and Egalitarian Approach (Right)

The graphs above demonstrate the fitness of the best fly upon each iteration. In contrast to the GA comparison, there is no drastic difference between the two variations. This suggests that the two approaches perform similarly on this problem, and that perhaps the egalitarian approach is only advantageous when trying to locate multiple optima, as discussed in previous lectures.


The chart below shows the minimum, maximum, and average number of iterations taken to find the global optimum across 50 tests of each variation mentioned above. Each algorithm uses a population size of 25 and a mutation rate or disturbance threshold of 0.15, so that each is subject to the same level of stochastic interference on every iteration.
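
For reference, the kind of harness used to gather those figures might look like the sketch below; run_ga is from the earlier GA sketch, while run_dfo would be a hypothetical counterpart assumed to return the iteration count at which the optimum was found.

from statistics import mean

def benchmark(run_fn, trials=50, **kwargs):
    # Run one algorithm variant repeatedly and summarise the iterations taken to find the optimum
    iterations = [run_fn(**kwargs)[1] for _ in range(trials)]
    return min(iterations), max(iterations), mean(iterations)

# e.g. benchmark(run_ga, steady_state=True) versus benchmark(run_ga, steady_state=False)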

As shown, DFO clearly outperforms GA on this problem. The difference in the maximum and average number of iterations needed to reach the solution is especially notable. However, due to the relatively small size of the search space, there is a chance for an individual in either algorithm to generate a candidate solution extremely close to the global optimum upon initialisation.

DFO’s outperformance may be due to the way the flies communicate, compared with the crossover component of the GA. Each fly updates its features based on those of its best neighbour and/or the best fly in the swarm. As a result, flies always move in the direction of a perceived optimum, global or local, which assumes that every new iteration will offer each fly a position stronger than its last. Crossover in GA, however, despite biasing selection towards the fittest chromosomes, may still select weaker chromosomes due to the probabilistic nature of roulette wheel selection. This suggests that GA does not consistently push the population towards a more optimal state, in contrast to DFO’s behaviour of constant, gradual progression towards stronger solutions. Crossover itself takes a portion of the first parent’s solution and a portion of the second’s, so even if the best chromosomes are selected as parents, there is no guarantee that the offspring will inherit stronger solutions.