Particle Swarm Optimization (PSO) — Improved Algorithm

ibrahim kaya
Published in The Startup · 7 min read · Jul 6, 2020

Particle Swarm Optimization (PSO) is a heuristic optimization method inspired by the swarming behavior of birds and insects in nature. The main idea is to follow the leading member (or leading group) of the swarm — the particles closest to the swarm's goal, most probably the food. I will not give detailed information about PSO here; plenty can be found in the open literature, including the excellent studies listed in the references at the end of this post. The one thing I should add is that the PSO presented here is an improved version with additional parameters, whose coefficients were found with a Genetic Algorithm. This version outperforms the standard PSO on almost all of the universal test functions used here. More detail is given in the upcoming sections.

The very first thing to do is to import the necessary modules into our Python environment. The pygame module is used to visualize what is happening through the iterations; it can be turned off if not needed.

Import Necessary Modules, at first
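A minimal import block might look like the following. This is a sketch, not the exact gist from the article: I am assuming numpy for the vector math, pygame for the optional visualization, and matplotlib for the final convergence plot.

```python
import math
import random

import numpy as np                 # vector math for positions and velocities
import matplotlib
matplotlib.use("Agg")              # headless backend; swap for an interactive one locally
import matplotlib.pyplot as plt    # convergence plot at the end

try:
    import pygame                  # optional: live visualization of the swarm
except ImportError:
    pygame = None                  # PSO itself runs fine without the display
```

Guarding the pygame import keeps the optimizer usable on machines without a display.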

Swarm Class

The swarm class given below consists of the sub-routines needed for PSO.

The init function is the main body of our class, where we define the basic features of the swarming particle objects. We pass in the function to be optimized, together with the obligatory lower and upper bounds. Some of the universal test functions (“egg” and “griewank”) defined in the “Universal Test Functions” section below are predefined inside the init function with their lower and upper boundaries; in real-world cases, however, these arguments should be supplied by the user along with the function itself.

The pygame screen, where the user can visualize what is happening inside the swarm, is off by default but can be turned on by setting the “display” argument to “True”.

Migration is on by default but can be disabled by setting the “migrationexists” argument to “False”. The migration probability is 0.15 by default and can be changed if necessary. The default number of particles in the swarm is 50 but can also be altered.

The functions “coefficients”, “evaluate_fitness”, “distance”, “migration” and “optimize” run the main PSO algorithm; the user does not need to touch them. The values in the “coefficients” function were obtained from MoGenA (Multi Objective Genetic Algorithm), which I published in my GitHub repo a few months ago and will also cover in a future Medium post. They should be kept unchanged for the best performance, which is why a user calling the swarm class will not see them in normal usage. Migration occurs with a probability of 15%, replacing the worst 20% of the swarm with new particles. Some of the new particles are placed in the domain at random, while the rest are placed by a predefined algorithm that promotes the best particles. This migration scheme increases the rate of finding the global optimum by reducing the chance of getting stuck at a local minimum.
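The migration step described above can be sketched as a stand-alone function. This is a minimal illustration under stated assumptions — the parameter names and the even split between random and near-best replacements are my guesses, not the exact logic from the repo:

```python
import numpy as np

def migrate(positions, fitness, lower, upper, rng,
            migration_probability=0.15, worst_fraction=0.20):
    """With probability 0.15, replace the worst 20% of particles.

    Illustrative split: even-indexed replacements land uniformly at
    random in the domain; the rest are scattered around the current
    best particle (the "promoting the best" idea from the article).
    """
    if rng.random() > migration_probability:
        return positions                      # no migration this iteration

    n, dim = positions.shape
    n_replace = max(1, int(worst_fraction * n))
    worst = np.argsort(fitness)[-n_replace:]  # minimization: largest fitness = worst
    best = positions[np.argmin(fitness)]

    for k, idx in enumerate(worst):
        if k % 2 == 0:                        # fully random newcomer
            positions[idx] = rng.uniform(lower, upper, size=dim)
        else:                                 # newcomer promoted toward the best
            jitter = rng.normal(0.0, 0.05 * (upper - lower), size=dim)
            positions[idx] = np.clip(best + jitter, lower, upper)
    return positions
```

Replacing only the worst fraction keeps the swarm's accumulated knowledge intact while injecting fresh diversity.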

The iterations are run by calling the “update” function, where the default number of iterations is 50.
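At the heart of any such update loop is the standard PSO velocity-and-position rule. The sketch below uses textbook default coefficients (w, c1, c2) — not the GA-tuned values from the article, which live in the “coefficients” function:

```python
import numpy as np

def pso_step(pos, vel, pbest_pos, gbest_pos, rng, w=0.7, c1=1.5, c2=1.5):
    """One standard PSO update: inertia + cognitive + social pull."""
    r1 = rng.uniform(size=pos.shape)
    r2 = rng.uniform(size=pos.shape)
    vel = w * vel + c1 * r1 * (pbest_pos - pos) + c2 * r2 * (gbest_pos - pos)
    return pos + vel, vel

def optimize(f, lower, upper, n=50, iters=50, seed=0):
    """Minimize f over a box with plain PSO (no migration, no display)."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(lower, upper, size=(n, len(lower)))
    vel = np.zeros_like(pos)
    fit = np.apply_along_axis(f, 1, pos)
    pbest_pos, pbest_val = pos.copy(), fit.copy()
    for _ in range(iters):
        gbest_pos = pbest_pos[np.argmin(pbest_val)]
        pos, vel = pso_step(pos, vel, pbest_pos, gbest_pos, rng)
        pos = np.clip(pos, lower, upper)      # keep particles inside the domain
        fit = np.apply_along_axis(f, 1, pos)
        improved = fit < pbest_val            # update personal bests
        pbest_pos[improved], pbest_val[improved] = pos[improved], fit[improved]
    return pbest_pos[np.argmin(pbest_val)], pbest_val.min()
```

On an easy convex function such as the sphere, this plain loop converges in well under 100 iterations; the tuned coefficients and migration are what help on the rugged test functions below.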

Swarm Class

Universal Test Functions

Universal test functions are used to evaluate and compare the performance of optimization methods. They are generally very tough to optimize, with many local minima that lie very close to the global minimum. The most commonly used ones are listed at the link here, and some of them are given in the next section, below.

Some of the Universal Test Functions
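As one example, the Griewank function — one of the two predefined in the class — has the standard form f(x) = 1 + Σᵢ xᵢ²/4000 − Πᵢ cos(xᵢ/√i), with a global minimum of 0 at the origin:

```python
import numpy as np

def griewank(x):
    """Griewank test function: global minimum f(0, ..., 0) = 0.

    Many shallow, regularly spaced local minima (from the cosine
    product) make it a classic stress test for global optimizers.
    """
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return 1.0 + np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))
```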

Generate Particles

The first thing to do is to generate the swarming particles randomly inside the boundaries of the function's domain. The example below uses the “egg” function predefined inside the swarm class, with its lower [-512, -512] and upper [512, 512] boundaries. The number of particles is set to 100, which means our swarming group contains 100 particles.

The default display mode is off, but since we want to visualize what is happening, the display mode is changed to “True”. There will be migration in the swarm, and its probability parameter is kept at the default.
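Generating the initial population is simply uniform sampling inside the box. A sketch (the function and attribute names here are illustrative; the actual class does this inside init):

```python
import numpy as np

def generate_particles(n_particles, lower, upper, seed=None):
    """Scatter n_particles uniformly inside the [lower, upper] box."""
    rng = np.random.default_rng(seed)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    return rng.uniform(lower, upper, size=(n_particles, lower.size))

# 100 particles over the "egg" function's domain
positions = generate_particles(100, [-512, -512], [512, 512], seed=42)
```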

The “egg” function is given as follows.

The global optimum point is at (512,404.2319) with a value of -959.6407.
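In its standard 2-D form, the Eggholder function is f(x₁, x₂) = −(x₂ + 47)·sin(√|x₁/2 + x₂ + 47|) − x₁·sin(√|x₁ − (x₂ + 47)|); a direct transcription:

```python
import numpy as np

def egg(x):
    """2-D Eggholder function.

    Global minimum f(512, 404.2319) ≈ -959.6407, sitting right on the
    edge of the [-512, 512] x [-512, 512] domain, which is part of
    what makes this function hard for optimizers.
    """
    x1, x2 = float(x[0]), float(x[1])
    a = -(x2 + 47.0) * np.sin(np.sqrt(abs(x1 / 2.0 + x2 + 47.0)))
    b = -x1 * np.sin(np.sqrt(abs(x1 - (x2 + 47.0))))
    return a + b
```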

Eggholder Function
Generating a Swarming Group with 100 Particles
Iterate

The output below shows the last 10 of a total of 310 iterations. Each iteration prints a brief summary: the best value and position found so far, as well as the best at that iteration.

iteration no : 301
migration : False
best_particle : 65
best_position : {'0': 484.55488893931, '1': 435.47531421515777}
best_value : -951.633848665597
best_particle_ever : 55
best_position_ever : {'0': 512, '1': 404.3716709393827}
best_value_ever : -959.6184029391127
best_value_obtained at iteration no: 101
--------------------------------------------------------
iteration no : 302
migration : False
best_particle : 86
best_position : {'0': 512, '1': 403.88478570058084}
best_value : -959.5040834781366
best_particle_ever : 55
best_position_ever : {'0': 512, '1': 404.3716709393827}
best_value_ever : -959.6184029391127
best_value_obtained at iteration no: 101
--------------------------------------------------------
iteration no : 303
migration : True
best_particle : 55
best_position : {'0': 512, '1': 404.3716709393827}
best_value : -955.6986659036597
best_particle_ever : 55
best_position_ever : {'0': 512, '1': 404.3716709393827}
best_value_ever : -959.6184029391127
best_value_obtained at iteration no: 101
--------------------------------------------------------
iteration no : 304
migration : False
best_particle : 12
best_position : {'0': 439.7348848078938, '1': 451.8887247292044}
best_value : -893.5687522656307
best_particle_ever : 55
best_position_ever : {'0': 512, '1': 404.3716709393827}
best_value_ever : -959.6184029391127
best_value_obtained at iteration no: 101
--------------------------------------------------------
iteration no : 305
migration : False
best_particle : 17
best_position : {'0': 512.0, '1': 405.4521743065799}
best_value : -957.9344736771322
best_particle_ever : 55
best_position_ever : {'0': 512, '1': 404.3716709393827}
best_value_ever : -959.6184029391127
best_value_obtained at iteration no: 101
--------------------------------------------------------
iteration no : 306
migration : True
best_particle : 78
best_position : {'0': 512, '1': 403.6452268849142}
best_value : -936.2364185513111
best_particle_ever : 55
best_position_ever : {'0': 512, '1': 404.3716709393827}
best_value_ever : -959.6184029391127
best_value_obtained at iteration no: 101
--------------------------------------------------------
iteration no : 307
migration : False
best_particle : 32
best_position : {'0': 475.53568091537545, '1': 426.0044490589951}
best_value : -937.0178639753331
best_particle_ever : 55
best_position_ever : {'0': 512, '1': 404.3716709393827}
best_value_ever : -959.6184029391127
best_value_obtained at iteration no: 101
--------------------------------------------------------
iteration no : 308
migration : True
best_particle : 29
best_position : {'0': 512, '1': 401.8206558721925}
best_value : -951.5497677541362
best_particle_ever : 55
best_position_ever : {'0': 512, '1': 404.3716709393827}
best_value_ever : -959.6184029391127
best_value_obtained at iteration no: 101
--------------------------------------------------------
iteration no : 309
migration : True
best_particle : 73
best_position : {'0': 512, '1': 403.94636462938774}
best_value : -926.525509762136
best_particle_ever : 55
best_position_ever : {'0': 512, '1': 404.3716709393827}
best_value_ever : -959.6184029391127
best_value_obtained at iteration no: 101
--------------------------------------------------------
iteration no : 310
migration : False
best_particle : 3
best_position : {'0': 512, '1': 404.9632235948374}
best_value : -957.9416518266312
best_particle_ever : 55
best_position_ever : {'0': 512, '1': 404.3716709393827}
best_value_ever : -959.6184029391127
best_value_obtained at iteration no: 101
--------------------------------------------------------

The plot below shows the global best value obtained over all iterations. The convergence rate is quite fast, meaning the global optimum was found at a very early stage of the iterations.

Optimization Plot

The Results

When the results are investigated, the following can be inferred:

  • The best result is obtained at iteration 101, which is not a meaningful improvement over the result obtained around iteration 30.
  • Almost the best result is obtained in just 30 iterations, although the total number of iterations is 310. This shows that the rate of finding the global optimum is quite high, which is crucial in engineering applications.

The detailed code can be found in my GitHub repo linked below; anyone is welcome to use it.

References

  1. “Particle swarm optimization”, Wikipedia. https://en.wikipedia.org/wiki/Particle_swarm_optimization

  2. M. Clerc. Particle Swarm Optimization.

  3. E. Bonabeau, M. Dorigo, and G. Theraulaz. Swarm Intelligence: From Natural to Artificial Systems. Oxford University Press, New York, 1999.

  4. J. Kennedy, R. C. Eberhart, and Y. Shi. Swarm Intelligence. Morgan Kaufmann, San Francisco, CA, 2001.

