Particle Swarm Optimization


There are many optimization methods in the literature. Genetic Algorithms (GA) form a population-based evolutionary algorithm modeled on the genetic inheritance of living things; in each generation, the population aims to move toward the best solution. The Differential Evolution Algorithm (DEA) was developed around a genetic crossover process based on the differences between individuals, and this crossover is very effective at reaching the best solution.

Particle Swarm Optimization, on the other hand, is based on social information sharing between individuals. As in genetic algorithms, the search proceeds over a number of generations. Each individual is called a particle, and the population of particles is called a swarm. Each particle adjusts its position toward the best position in the swarm, using its own previous experience. In essence, PSO works by moving the individuals in the swarm toward the best-positioned individual in the swarm.
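
Concretely, in the standard (global-best) form of the algorithm, each particle i keeps a position x_i and a velocity v_i, and at every step updates them as

v_i = w·v_i + c1·r1·(pbest_i − x_i) + c2·r2·(gbest − x_i)
x_i = x_i + v_i

where pbest_i is the best position particle i has seen so far, gbest is the best position found by the whole swarm, r1 and r2 are random numbers in [0, 1], w is the inertia weight, and c1 and c2 are the cognitive and social coefficients. These are the same update rules implemented in the code further below.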

It is a heuristic search algorithm used to find an optimal solution to a given problem. For example, the Traveling Salesman Problem (TSP) involves finding the shortest route that visits a set of cities and returns to the starting city. As the number of cities increases, solving TSP with traditional methods like exhaustive search or dynamic programming becomes computationally infeasible and may take years. This is where Particle Swarm Optimization (PSO), with its heuristic approach, can provide a potent solution.

Fig 1. PSO steps

There are many kinds of PSO algorithms: Standard PSO, Constriction Factor PSO, Adaptive PSO, Differential Evolution PSO (DEPSO), and more. One of them is sketched briefly below.
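
To make one of these variants concrete: Constriction Factor PSO (Clerc and Kennedy's formulation) replaces the inertia weight with a constriction coefficient χ that multiplies the entire velocity update,

v_i = χ·(v_i + c1·r1·(pbest_i − x_i) + c2·r2·(gbest − x_i)), where χ = 2 / |2 − φ − √(φ² − 4φ)| and φ = c1 + c2 > 4.

The commonly used choice c1 = c2 = 2.05 gives χ ≈ 0.7298, which keeps the velocities bounded without an explicit velocity clamp. The example below sticks to the standard inertia-weight form.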

Let’s see a basic example.

Consider a mathematical function, such as the Rosenbrock function:

f(x, y) = (a − x)² + b·(y − x²)²

Fig 2. Rosenbrock Function

Assume that a = 1 and b = 100 in the function.

Fig 3. The Rosenbrock Function Instance to Optimize

This is the function to optimize, with the goal of finding the values of x and y that minimize it.
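
One handy fact for checking the optimizer’s output later: with a = 1 and b = 100, the Rosenbrock function has its global minimum at (x, y) = (1, 1), where the value is exactly 0. A quick sanity check in Python:

def rosenbrock(x, y):
    return (1 - x)**2 + 100 * (y - x**2)**2

print(rosenbrock(1.0, 1.0))  # 0.0, the global minimum
print(rosenbrock(0.0, 0.0))  # 1.0, so even the origin is worse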

PSO Steps:

  1. Initialize a swarm of particles with random positions and velocities within a defined search space.
  2. Evaluate the fitness (objective value) of each particle based on the Rosenbrock function.
  3. Update each particle’s personal best if the current fitness is better than its previous best.
  4. Update the global best based on the best fitness value among all particles.
  5. Update the particle velocities and positions using the PSO equations, incorporating personal and global best information.
  6. Repeat steps 2–5 for a specified number of iterations or until a convergence criterion is met.

And let’s apply these steps in Python:

import numpy as np

# Our Rosenbrock function
def rosenbrock(x, y):
    return (1 - x)**2 + 100 * (y - x**2)**2

# PSO parameters
num_particles = 30    # swarm size; imagine these as ants that will walk down the function's hills
num_dimensions = 2    # the function has two variables, x and y; these are the dimensions of the search space
max_iterations = 100  # number of passes through the iteration loop (Fig 1.)
c1 = 1.5
c2 = 1.5
w = 0.7
"""
The variables c1, c2, and w are parameters that control the behavior and convergence
of the algorithm. They are typically referred to as the cognitive coefficient (c1),
the social coefficient (c2), and the inertia weight (w). Adjusting these parameters
influences how the particles explore and exploit the search space during optimization.
"""

# Initialize particles and velocities
particles = np.random.uniform(-5, 5, (num_particles, num_dimensions))   # particles start at random positions
velocities = np.random.uniform(-1, 1, (num_particles, num_dimensions))  # and with random velocities
personal_best_positions = particles.copy()  # at initialization, each particle's current position is its personal best
personal_best_values = np.array([rosenbrock(x, y) for x, y in particles])  # fitness of each personal best

global_best_index = np.argmin(personal_best_values)  # index of the best particle in the swarm
global_best_position = personal_best_positions[global_best_index].copy()  # position of the global best (copied so later updates don't overwrite it)
global_best_value = personal_best_values[global_best_index]  # and the value of the global best

# PSO optimization loop. Here we begin
for iteration in range(max_iterations):
    for i in range(num_particles):
        # Update the velocity
        r1 = np.random.rand()
        r2 = np.random.rand()
        # The cognitive component pulls the particle toward its own personal best position
        cognitive_component = c1 * r1 * (personal_best_positions[i] - particles[i])

        # The social component represents the influence of the global best position
        # (the best position found by any particle in the entire swarm) on this particle's movement
        social_component = c2 * r2 * (global_best_position - particles[i])
        velocities[i] = w * velocities[i] + cognitive_component + social_component

        # Update the particle's position
        particles[i] += velocities[i]

        # Evaluate the new position
        new_value = rosenbrock(particles[i][0], particles[i][1])

        # Update the personal best if necessary
        if new_value < personal_best_values[i]:
            personal_best_values[i] = new_value
            personal_best_positions[i] = particles[i]

        # Update the global best if necessary
        if new_value < global_best_value:
            global_best_value = new_value
            global_best_position = particles[i].copy()  # copy, so the particle's later moves don't change the stored best

    print(f"Iteration {iteration+1}: Global Best Value = {global_best_value:.6f}")

print("\nOptimization complete.")
print(f"Global Best Position: {global_best_position}")
print(f"Global Best Value: {global_best_value:.6f}")

(For a closer look, there are more PSO optimization examples in my GitHub repository; see the link below.)

Of course, this was a basic example of how PSO is used. It can also be applied to Machine Learning, Neural Network Training, Multi-objective Optimization, and Scheduling and Resource Allocation.
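
For real projects you also do not have to hand-roll the loop above. As one option, the third-party pyswarms package (not used in this article’s code, so treat the following as a sketch based on its documented API) ships a ready-made global-best optimizer. It expects the objective to take the whole swarm at once, as an (n_particles, n_dimensions) array, and to return one cost per particle:

import pyswarms as ps

# Batch version of the Rosenbrock function, in the shape pyswarms expects
def rosenbrock_batch(swarm):
    x, y = swarm[:, 0], swarm[:, 1]
    return (1 - x)**2 + 100 * (y - x**2)**2

options = {'c1': 1.5, 'c2': 1.5, 'w': 0.7}  # same coefficients as in the code above
optimizer = ps.single.GlobalBestPSO(n_particles=30, dimensions=2, options=options)
best_cost, best_pos = optimizer.optimize(rosenbrock_batch, iters=100)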

A summary of the PSO variant used in the code:

  • Variant: Global Best PSO
  • Velocity Update: Each particle’s velocity is updated using both its personal best position and the global best position.
  • Particle Movement: Particles are influenced by their historical best (personal best) and the best solution found by any particle in the swarm (global best).

GitHub Repository:

References

1. James Kennedy and Russell Eberhart, “Particle Swarm Optimization”

2. “Particle Swarm Optimization in Python”
