Take a step back in history with the archives of PragPub magazine. The Pragmatic Programmers hope you’ll find that learning about the past can help you make better decisions for the future.

FROM THE ARCHIVES OF PRAGPUB MAGAZINE, OCTOBER 2019

Swarm Algorithms: Programming Your Way Out of a Paper Bag

by Frances Buontempo


Imagine that you have some particles in a paper bag…

I wrote a book about genetic algorithms and machine learning.

In addition to genetic algorithms and other aspects of machine learning, it includes some swarm algorithms. I thought PragPub readers might be interested in what these are and when you would use them.

While a genetic algorithm mixes up potential solutions by merging some together and periodically mutating some values, swarm algorithms can be regarded as individual agents collaborating, each representing a solution to a problem. They can collaborate in various ways, giving rise to a variety of swarm algorithms.


The so-called particle swarm algorithm can be used to find optimal solutions to problems. It’s commonly referred to as particle swarm optimization, or PSO for short. PSO is often claimed to be based on the flocking behavior of birds, and indeed, if you get the parameters right, you might see something similar to a flock of birds. PSO is similar to colony algorithms, which are also nature-inspired and likewise have agents collaborating to solve a problem. So let’s take a look at a particle swarm algorithm.

You’re not much of a programmer if you can’t program your way out of a paper bag, so that’s my go-to problem: getting out of a paper bag.

Imagine that you have some particles in a paper bag, say somewhere near the bottom. If they move about at random, some might get out of the bag eventually. If they follow each other, they might escape, but more likely than not, they’ll hang round together in a gang. But if you provide them with encouragement in the form of a fitness function, they can learn — for some definition of learn. Each particle can assess where it is, and remember the better places. The whole swarm will have a global best too. To escape a paper bag, we want the particles to go up. Inspecting the current (x, y) position, we can let the fitness score be the y-value. The bigger, the better. For real-world problems, there can be many more than two dimensions, and the fitness function will require some thought.
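To make that concrete, here is a minimal Python sketch (my own representation, not necessarily the book’s): a particle carries a position, a velocity, and the best position it has seen so far, and the fitness function just reads off the y-value:

    import random

    class Particle:
        def __init__(self, x, y):
            self.position = (x, y)
            self.velocity = (random.uniform(-1, 1), random.uniform(-1, 1))
            self.best = (x, y)  # best position this particle has seen so far

    def fitness(position):
        # Higher is better: to escape the bag, we reward height.
        return position[1]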

The algorithm is as follows:

Choose n
Initialize n particles randomly
For a while:
    Update best global position
    Move particles
    Update each particle’s best position and velocity
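In Python, that outline might look like the following sketch, where the particle count, bag coordinates, and step count are arbitrary choices of mine, and the move and update functions are sketched after the equations below:

    def particle_swarm(n=20, steps=100):
        # Start the particles near the bottom of a notional paper bag.
        swarm = [Particle(random.uniform(-1, 1), random.uniform(0.0, 0.5))
                 for _ in range(n)]
        for _ in range(steps):
            # Update best global position.
            global_best = max((p.best for p in swarm), key=fitness)
            for p in swarm:
                move(p)                 # sketched below
                update(p, global_best)  # sketched below
        return max((p.best for p in swarm), key=fitness)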

The particles’ personal bests and the overall global best give the whole swarm a memory, of sorts. Initially, each particle’s personal best is just its starting position. In addition to the current position, each particle has a velocity, initialized with random numbers. Since we’re doing this in two dimensions, the velocity has an x component and a y component. To move a particle, update each of these by adding the velocity, v, in that direction to the current position:
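    x = x + vx
    y = y + vy

In the sketch above, a move function captures this step:

    def move(p):
        # Step the particle along its current velocity.
        x, y = p.position
        vx, vy = p.velocity
        p.position = (x + vx, y + vy)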

Since the velocity starts at random, the particles move in various directions to begin with. The trick comes in when we update the velocity. There are several ways to do this. The standard way adds a fraction of the distance to the particle’s personal best position, a fraction of the distance to the global best position, and a proportion of the current velocity, kinda remembering where it was heading. This gives each particle momentum along a trajectory, making it veer towards somewhere between its best spot and the global best spot. You’ll need to pick the fractions. Using w, for weight, since we’re doing a weighted sum, and c1 and c2 for the other proportions, we have:
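    vx = w*vx + c1*r1*(bestx - x) + c2*r2*(globalx - x)
    vy = w*vy + c1*r1*(besty - y) + c2*r2*(globaly - y)

Here (bestx, besty) is the particle’s personal best, (globalx, globaly) is the swarm’s global best, and r1 and r2 are fresh random numbers between 0 and 1 drawn for each update, which is what stops the swarm moving in lockstep. An update function for the sketch above might look like this, where the values of w, c1, and c2 are illustrative choices of mine rather than canonical ones:

    def update(p, global_best):
        # Remember the personal best, then steer the velocity towards
        # the personal and global bests.
        w, c1, c2 = 0.5, 1.5, 1.5
        if fitness(p.position) > fitness(p.best):
            p.best = p.position
        x, y = p.position
        vx, vy = p.velocity
        r1, r2 = random.random(), random.random()
        vx = w * vx + c1 * r1 * (p.best[0] - x) + c2 * r2 * (global_best[0] - x)
        vy = w * vy + c1 * r1 * (p.best[1] - y) + c2 * r2 * (global_best[1] - y)
        p.velocity = (vx, vy)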

If you draw the particles moving around, you will see them swarm, in this case out of the paper bag.
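To watch the swarm escape, here is a rough matplotlib sketch; the bag is just three sides of a rectangle I chose, and you would call draw from inside the main loop after moving the particles:

    import matplotlib.pyplot as plt

    def draw(swarm, step):
        plt.clf()
        # The paper bag: three sides of a rectangle, open at the top.
        plt.plot([-1, -1, 1, 1], [3, 0, 0, 3], color='brown')
        plt.scatter([p.position[0] for p in swarm],
                    [p.position[1] for p in swarm])
        plt.title(f"Step {step}")
        plt.pause(0.05)  # brief pause so each frame renders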

This is one of many ways to code your way out of a paper bag with genetic algorithms and machine learning. When a particle solves the problem, which here means being out of the bag, inspecting its x and y values gives a solution. PSO can be used for a variety of numerical problems. It’s usually described as a stochastic optimization algorithm. That means it does something random (stochastic) to find the best (optimal) solution to a problem.

About Frances

Frances Buontempo is the editor of ACCU’s Overload magazine. She has published articles and given talks centered on technology and machine learning. With a Ph.D. in data mining, she has been programming professionally since the 1990s. During her career as a programmer, she has championed unit testing, mentored newer developers, deleted quite a bit of code, and fixed a variety of bugs.


The Pragmatic Programmers bring you archives from PragPub, a magazine on web and mobile development (by editor Michael Swaine, of Dr. Dobb’s Journal fame).