# Two (very different) Algorithms for Generating Permutations

As I wrote a couple weeks ago, I really appreciate the many ways that you can iterate with JavaScript. In this article I’m going to review two different algorithms that use very different iteration strategies for generating all of the permutations of a given array or string. A permutation of a set of values (or characters) is one possible way of arranging them. In the most basic cases, you can only arrange a set containing one element one way, `[1]`, and two elements two ways: `[1, 2]` and `[2, 1]`. While this seems pretty reasonable so far, the number of possible permutations grows factorially with the number of elements in the set. By the time you have 10 elements, there are more than 3.5 **million** permutations!

## Complexity

When discussing algorithms, it is common to discuss how fast they are using “Big-O” notation, where you use a simple algebraic function to describe how the amount of work changes as the data set grows. This is also known as “time complexity”. Algorithms that take the same amount of time no matter the size of the data set are said to operate in “constant time”, which is noted as `O(1)` and displayed as the magenta line at the bottom of this graph. If the amount of work grows at the same rate as the amount of data, the algorithm operates in “linear time”, `O(n)`, which is represented by the green line. As you may have guessed, algorithms that grow factorially are `O(n!)` and are represented by the black, nearly vertical line at the left side of the graph. When experimenting with factorial-time algorithms, you will quickly discover that your computer is unable to compute more than the first dozen or so cases in any reasonable amount of time.
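To get a feel for that growth, here is a quick sketch (the helper name is my own) that tallies `n!` for a few set sizes:

```javascript
// Illustrating factorial growth: a set of n elements has n! permutations.
function factorial(n) {
  let result = 1;
  for (let i = 2; i <= n; i++) {
    result *= i;
  }
  return result;
}

for (const n of [1, 2, 5, 10, 12]) {
  console.log(`${n} elements -> ${factorial(n)} permutations`);
}
// 10 elements -> 3628800 permutations
// 12 elements -> 479001600 permutations
```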

## Brute Force!

For my first attempt at a permutations algorithm, I thought I would try to use a simple recursive algorithm to construct the permutations. At least I *thought* it would be simple when I was pseudocoding it. That is perhaps an ill-deserved reputation for recursion generally. It can be difficult to reason about and understand if you’re not used to it, though the core idea is quite simple: a function that calls itself. The basic structure of a recursive function is a base case that will end the recursion, and another case that will call the function on some subset (or an altered version) of the input. While recursive algorithms can be more complicated, even this simple version is certainly sufficient to get us into trouble.

So what was my plan? I would write a function that accepts an array, recursively generates its permutations, and returns them as an array of arrays. The base case is an input array containing fewer than two elements; it is immediately returned, wrapped in another array. If passed an array containing two or more elements, we start by iterating over those elements. For each element, we call our function on the sub-array containing all the *other* elements. Since our function returns an array of arrays, we then loop over that returned outer array and add the current element to the beginning of *each* of the returned sub-arrays. We then take each newly minted permutation and add it to an output array that we have initialized outside the outer loop, and continue with the next element in the input array. Once we have finished with our outer loop, we return that output array. See, I told you it would be simple! Here’s the code:
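A minimal sketch of that recursive approach, reconstructed from the description above (the function name is my choice):

```javascript
// Return an array containing every permutation of `arr`.
function permutations(arr) {
  // Base case: fewer than two elements means exactly one arrangement.
  if (arr.length < 2) return [arr];

  const output = [];
  for (let i = 0; i < arr.length; i++) {
    // The sub-array of all the *other* elements.
    const rest = [...arr.slice(0, i), ...arr.slice(i + 1)];
    // Prepend the current element to each permutation of the rest.
    for (const perm of permutations(rest)) {
      output.push([arr[i], ...perm]);
    }
  }
  return output;
}

console.log(permutations([1, 2, 3]));
// [[1,2,3],[1,3,2],[2,1,3],[2,3,1],[3,1,2],[3,2,1]]
```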

Ta-daaa! If you’re struggling to understand how this works, you should try it out for yourself. If you check out this instrumented version, you’ll see how the function breaks the array down into smaller and smaller pieces until it reaches a single element. Then it starts to assemble the permutations, always returning nested arrays, which are then concatenated to build the final permutations.

## A Little Cleverer

While my first attempt was certainly fun to put together, it is a bit of a slouch. Python’s `itertools.permutations` computes 10-element permutations in about a second and 11-element permutations in about 25 seconds on my (admittedly aging) computer. In comparison, my recursive algorithm takes more than a second to compute 9-element permutations, and about 15 seconds for 10 elements. I haven’t been able to compute the permutations for 11 elements with it, as it runs out of memory… after about 20 minutes. Can we do better?

Another technique is to generate all of the permutations in lexicographical order. For example, if you have an array of numbers, you first order them from least to greatest. Then you generate the next lexicographical permutation by changing the order so that you increase the number (if you squished them together into one number) as little as possible. You just need to repeat this step until you have created the highest number possible with the set you have, at which point you will have created all of the permutations. Since you can sort characters (by their character codes, for example), this technique will work with strings or character arrays as well. Note that if there are duplicate elements in your input, this technique will not return duplicate permutations for the different orderings of the identical elements.

So how do you implement this? To find the next lexicographical permutation of a given sequence, you first need to find the longest non-increasing suffix. You iterate over the elements from the tail end until you reach an element that is smaller than the one after it; every element after that one (the “pivot”) is considered the suffix. Next, you iterate over the elements from the tail again, this time stopping at the smallest element that is larger than the pivot. You then swap those two elements. Finally, you reverse the order of the elements in the suffix. While this may seem like an arbitrary sequence of actions, it does indeed create the next permutation.

Say you have the sequence `1,2,5,3,0`. Walking backwards from the end, the first element smaller than its successor is `2`. Again walking backwards, the first element larger than `2` is `3`. You swap them, `1,3,5,2,0`, and then reverse the suffix, giving `1,3,0,2,5`. 13025 is the next largest number after 12530 that you can make with those digits.
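That walkthrough can be checked in code. This is a minimal sketch of a single step (the helper name is mine):

```javascript
// Compute the next lexicographic permutation of `arr`, or return null
// if `arr` is already the last (entirely non-increasing) permutation.
function nextPermutation(arr) {
  const next = [...arr];
  // Step 1: find the pivot, the last element smaller than its successor.
  let i = next.length - 2;
  while (i >= 0 && next[i] >= next[i + 1]) i--;
  if (i < 0) return null;
  // Step 2: from the tail, find the first element larger than the pivot.
  let j = next.length - 1;
  while (next[j] <= next[i]) j--;
  // Step 3: swap them.
  [next[i], next[j]] = [next[j], next[i]];
  // Step 4: reverse the suffix after the pivot.
  for (let lo = i + 1, hi = next.length - 1; lo < hi; lo++, hi--) {
    [next[lo], next[hi]] = [next[hi], next[lo]];
  }
  return next;
}

console.log(nextPermutation([1, 2, 5, 3, 0])); // [1, 3, 0, 2, 5]
console.log(nextPermutation([3, 2, 1]));       // null (last permutation)
```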

I implemented this algorithm iteratively, using a series of `while` loops. Though it produces the same results as the recursive version above, it looks quite different:
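I don’t have the original listing here, so this is a self-contained sketch of the iterative approach (the names are mine, and the single-step logic follows the procedure described above):

```javascript
// Generate every permutation of `input` in lexicographic order.
function lexicographicPermutations(input) {
  // Sort first so we begin with the lowest permutation.
  let current = [...input].sort((a, b) => a - b);
  const output = [];
  while (current !== null) {
    output.push(current);
    current = nextLexPermutation(current);
  }
  return output;
}

// One step: find the pivot, swap it with the smallest larger element in
// the suffix, then reverse the suffix. Returns null after the last one.
function nextLexPermutation(arr) {
  const next = [...arr];
  let i = next.length - 2;
  while (i >= 0 && next[i] >= next[i + 1]) i--; // find the pivot
  if (i < 0) return null; // entirely non-increasing: last permutation
  let j = next.length - 1;
  while (next[j] <= next[i]) j--; // smallest suffix element > pivot
  [next[i], next[j]] = [next[j], next[i]]; // swap
  for (let lo = i + 1, hi = next.length - 1; lo < hi; lo++, hi--) {
    [next[lo], next[hi]] = [next[hi], next[lo]]; // reverse the suffix
  }
  return next;
}

console.log(lexicographicPermutations([2, 1, 3]));
// [[1,2,3],[1,3,2],[2,1,3],[2,3,1],[3,1,2],[3,2,1]]
```

Note that, as mentioned above, duplicate input elements do not produce duplicate output: `[1, 1, 2]` yields only three permutations.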

Even though this algorithm involves a lot of iterating, it is still significantly faster than the recursive version. It will calculate 10-element permutations in about 2.6 seconds and runs out of memory trying to calculate 11-element permutations after only a couple of minutes… small victories. It is also significant to note that this version must sort its input (which is a somewhat computationally expensive operation at complexity `O(n log n)` in good implementations). However, since the input size is small and the sort only needs to happen once, it has a negligible impact on performance, especially considering that generating the permutations is `O(n!)`.

## Other Techniques

The two methods that I have outlined here are certainly not the only ways you can go about generating permutations. If you’re curious, you should try out the Steinhaus–Johnson–Trotter algorithm, which implements a pattern that 17th-century bell-ringers used. A more modern take, Heap’s algorithm, was introduced in 1963 and is super speedy thanks to its emphasis on changing the array as little as possible during each step. In any case, I urge you to try out a permutations algorithm. They can be surprisingly tricky to get working, but I find the challenge quite satisfying!