Search Algorithm Series: PSO

Particle Swarm Optimisation

Terry Clark
5 min read · Dec 18, 2017

Introduction

PSO is a branch of meta-heuristic search optimisation. This means the algorithm makes few assumptions about the search space, instead building its answer from lots of smaller, simpler processes.

This algorithm uses a combination of individual and environmental factors to find the overall best solution. It does this by keeping a working memory of each particle's personal best result along with the global best. The personal bests act like anchor points between which the swarm flows and collapses, while the global best is, as the name suggests, the best solution found so far.

Since PSO was proposed back in 1995, there have been many variations which adjust the topology of the particles along with how the velocity is calculated. One example uses inertia weights: a scalar that reduces the overall impact of the velocity, preventing the phenomenon of particle explosion! This is when the particles move so quickly that the swarm never settles on the global best solution.

Another class of variations takes advantage of the communication between the particles. The Ring Topology connects the last particle in the array back to the first, so each particle only talks to its immediate neighbours. The Fully Connected Topology connects every particle to every other particle. And finally the Adaptive Topology connects each particle to a number of neighbours, with which it shares its personal best. All of these variations share the current particle's personal best result along with the stored global best. This reminds me somewhat of how SDS performs when disseminating information between its fellow miners.
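The neighbourhood lookups behind the first two topologies can be sketched in a few lines. This is an illustrative sketch only; the function names and the swarm-size parameter `n` are my own, not from the post.

```python
def ring_neighbours(i, n):
    """Ring topology: particle i only communicates with the particle
    before and after it, with the ends of the array wrapped together."""
    return [(i - 1) % n, (i + 1) % n]

def fully_connected_neighbours(i, n):
    """Fully connected topology: every other particle is a neighbour."""
    return [j for j in range(n) if j != i]
```

With a ring, information about a good position spreads slowly around the swarm, which tends to preserve diversity; with full connectivity everyone hears about the global best immediately.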

A known issue with vanilla PSO is that once the particles collapse towards the global best, the velocity tends towards 0. For a continuously changing search space this means the algorithm, as it is, wouldn't perform very well: it becomes static once an optimal point has been found.

To get around this I have modified the algorithm slightly by adding a behaviour much like DFO's, where a disturbance threshold effectively sends out probes to check other areas of the search space that could be optimal. This stops the particles from getting trapped in local optima.

Algo Recipe

For this experiment I will use a similar example to my other blogs: finding the brightest pixel within a continuously changing search space (a video).

1. Randomly Initialise

Of course we will begin with a randomly initialised population of particles to capture a range of the search space.
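A minimal initialisation might look like the following sketch. The dictionary layout, the `init_swarm` name, and the zero starting velocity are my own assumptions for illustration.

```python
import random

def init_swarm(n, width, height):
    """Scatter n particles uniformly over a 2-D search space
    (e.g. the pixel coordinates of a video frame)."""
    swarm = []
    for _ in range(n):
        pos = [random.uniform(0, width), random.uniform(0, height)]
        swarm.append({
            "pos": pos,
            "vel": [0.0, 0.0],            # start at rest
            "pBest": list(pos),           # personal best starts where the particle starts
            "pBestFitness": float("inf"), # minimisation: worst possible until evaluated
        })
    return swarm
```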

2. Evaluate fitness and personal best positions.

We assume this is a minimisation problem, so we aim for a fitness of 0 once we find the solution. We also evaluate the fitness at each personal best position, ready for the comparisons that follow.
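As a stand-in for sampling pixel brightness from the video, a minimisation fitness can be sketched as distance to a target point, so that 0 means the solution has been found. The `target` argument is a placeholder of mine, not part of the original demo.

```python
def fitness(pos, target):
    """Euclidean distance from a particle to the bright spot:
    0 exactly when the particle sits on the target (minimisation)."""
    dx = pos[0] - target[0]
    dy = pos[1] - target[1]
    return (dx * dx + dy * dy) ** 0.5
```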

3. Compare

Although it does not help oneself to compare against others in life, in the case of PSO we need to check whether the current fitness is better than the personal best. If it is, the current position is stored as the new personal best; if not, the personal best stays where it is.

We also need to check whether the personal best is better than the global best; if so, the global best position is updated to the personal best position.
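The two comparisons can be sketched as one function, assuming the particle dictionaries from earlier and a shared `best` record (both layouts are my own illustration).

```python
def update_bests(p, current_fitness, best):
    """Compare step: refresh the personal best, then promote it to the
    global best if it improves on the swarm's best (lower is better)."""
    if current_fitness < p["pBestFitness"]:
        p["pBestFitness"] = current_fitness
        p["pBest"] = list(p["pos"])
    if p["pBestFitness"] < best["fitness"]:
        best["fitness"] = p["pBestFitness"]
        best["pos"] = list(p["pBest"])
```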

4. Next comes the all-important update method

The update I am using comes from the inertia-weighted variation described above.

The coded version:

// Recommended values
X = 0.72984;
c1 = X * (4.1 / 2);
c2 = X * (4.1 / 2);

// Inertia-weighted velocity update; R(0, 1) is a fresh uniform random number
p.vel = X * p.vel + c1 * R(0, 1) * (pBest - pos) + c2 * R(0, 1) * (gBest - pos);

Then update the position with pos += vel

c1 and c2 are usually chosen so that together they sum to a maximum of about 4; this is based on observations made over time with the algorithm. c1 gives more importance to the personal best result, which is what pulls the particles towards the blue dots in the video below. c2 gives importance to the global best, the social influence, where the particles gravitate towards the best in the swarm.
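Putting the recommended values and the velocity formula together, the whole update step can be sketched in Python. The particle dictionary layout and function name are my own; the constants follow the values quoted above.

```python
import random

# Recommended values from the post: X is the inertia/constriction weight,
# c1 and c2 split an acceleration budget of 4.1 equally.
X = 0.72984
c1 = X * (4.1 / 2)
c2 = X * (4.1 / 2)

def update_particle(p, g_best):
    """Inertia-weighted PSO update, applied per dimension, then the
    position is moved by the new velocity (pos += vel)."""
    for d in range(len(p["pos"])):
        r1, r2 = random.random(), random.random()
        p["vel"][d] = (X * p["vel"][d]
                       + c1 * r1 * (p["pBest"][d] - p["pos"][d])
                       + c2 * r2 * (g_best[d] - p["pos"][d]))
        p["pos"][d] += p["vel"][d]
```

Note that when a particle sits exactly on both its personal best and the global best, only the inertia term survives, so the velocity shrinks by a factor of X each step — this is the collapse towards stillness described earlier.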

PSO overall felt very familiar to DFO in that it can either take the elitist approach, following the best in the swarm, or a best-neighbour approach, making use of the more social side. Here too I added a disturbance threshold, much like DFO's, in order to create a working PSO algorithm for a continuously changing search space (a light moving across the screen). Without it the algorithm loses diversity: lacking any ongoing stochastic process, the swarm becomes stuck in a local optimum.

I found that when reducing the c1 & c2 values together equally, the particles would hover around an area; then, when the disturbance kicked in, a particle would send out a probe to check whether this was a personal best, and the full swarm would gravitate towards that point.
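The disturbance mechanism itself can be sketched as a small per-particle chance of being re-seeded at a random point each iteration. The threshold value, function name, and reset-to-rest behaviour are my assumptions about one reasonable way to do it, not the post's exact code.

```python
import random

def maybe_disturb(p, width, height, threshold=0.01):
    """DFO-style disturbance: with probability `threshold`, throw the
    particle to a random point in the space so the swarm keeps probing
    after it has converged. Returns True when a disturbance happened."""
    if random.random() < threshold:
        p["pos"] = [random.uniform(0, width), random.uniform(0, height)]
        p["vel"] = [0.0, 0.0]
        return True
    return False
```

Running this once per particle per iteration keeps a trickle of probes exploring the frame, which is what lets the swarm re-acquire the light when it moves.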

My example of this is below:

Natural Camp Activities:

What is the best thing(s) you have learnt in the course?

  • I learned to read academic papers and to find interest in hard-to-grasp topics (this has always been a life fail of mine that I wanted to overcome…), deciphered the notation used for swarm intelligence algorithms, and feel more confident thinking conceptually about the code.
  • A majority of problems can be broken down into smaller sections and then combined later to find a solution (also a life motto…)
  • I felt that this course was unexpected in the content it covered, and maybe I was a little unprepared for the uphill struggle I experienced. That said, by the end of the course I felt more confident overall with the basics of swarm intelligence.
