Randomness is hard: learning about the Fisher-Yates shuffle algorithm & random number generation


This post & its related materials were prepared for a July 26, 2018 lightning talk event co-hosted by Enigma and Women Who Code NYC!

Slides from the lightning talk are available here.

I’ve often heard about how random number generation was a “hard problem” but never dug particularly deeply into why — I figured, I don’t do crypto stuff, so is understanding randomness that relevant to me?

Almost 3k likes on a Twitter thread about avoiding common programming mistakes even experienced people make? I’m listening…

But then someone shared this Twitter thread by Colm MacCárthaigh — it’s long and wide-ranging, touching on random number generation, shuffling, weighted sets & selection, and tons of stuff that’s still way over my head — and it sent me straight down the rabbit hole. As a frontend engineer, I’ve often encountered interview questions about implementing games (rolling a die, or setting up dominoes or blackjack). I’d never thought about what “good” vs. “bad” shuffling looks like, beyond a vague sense picked up somewhere that Math.random isn’t that random after all (more on this later!), so it was exciting to realize there was actually a ton to learn and a bunch of terms I could start looking up.

For shuffling sets of things, MacCárthaigh outlines a few different approaches and their pitfalls, and concludes with:

“Fisher-Yates shuffle”? “Super easy”? Guess I should look it up…
Alice falling down the Wikipedia hole — bye gurl

I’ve been trying to come up with hobby projects to help me learn things (most people learn better by doing, right?) so I decided to combine a few of my interests by building a tarot card app to generate three-card spreads using the Fisher-Yates shuffle, React, and Glitch, and to write about what I learned.

What is the Fisher-Yates shuffle algorithm, and why is it better?

If asked to shuffle a deck of cards, I’d probably come up with something like this: iterate through the deck and swap each card with a different, randomly selected card from the deck. Seems straightforward enough, right?
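In Javascript, that naive approach looks something like this (a minimal sketch, not necessarily the exact code behind the charts below):

```js
// Naive shuffle: swap every card with a card chosen at random from
// anywhere in the deck (this is the biased version!).
function naiveShuffle(deck) {
  for (let i = 0; i < deck.length; i++) {
    const j = Math.floor(Math.random() * deck.length); // any index, 0 to length - 1
    [deck[i], deck[j]] = [deck[j], deck[i]];
  }
  return deck;
}
```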

But when you run this code a bunch of times, something strange happens!

Naive shuffle, 300k tries on a 4-item set

You’d expect a “good” shuffling algorithm to generate each permutation in the deck about the same number of times, but that’s clearly not what’s happening here. Why?

In “The Danger of Naïveté,” Jeff Atwood explains that the naive shuffle algorithm is biased (and fundamentally broken) because it overshuffles the cards in the deck, selecting each card’s swap partner from the entire deck every time. This means that some cards end up getting moved multiple times!

In a deck of four cards, we know that the total number of possible permutations is 4!, or 4 * 3 * 2 * 1 (24 total). But in the naive algorithm, each of the 4 cards can swap with any of the 4 positions in the deck, generating 4⁴ (256) equally likely sequences of swaps. And since 24 doesn’t divide evenly into 256, some permutations have to show up more often than others, which explains the lumpiness of the output. This bias only gets worse the larger the deck is: for a deck of 5 cards there would be 5⁵ (3125) outcomes divided amongst 5! (120) possible permutations, and so on.

If the naive shuffle algorithm can be this biased with a deck of only four cards, just imagine the bias that it would introduce if you were trying to implement a game using playing cards (52 cards in a deck), tarot cards (78 cards in a deck), or even larger sets of things (Scrabble tiles? dominoes?).

Enter the Fisher-Yates shuffle algorithm (also sometimes known as the Knuth shuffle, or the Fisher-Yates-Knuth shuffle):

Iterating through each card (starting from the end of the array and going backwards), it picks a random card from among the positions (indexes) that have not been shuffled yet and swaps it into the current position. All swaps happen in place, and once a card has been “shuffled” into the growing section at the end of the array, it can’t move again. This limits the number of possible outcomes to exactly the number of possible permutations — in a 4-card deck, we start by potentially swapping any of the 4 cards, then as we move through the deck/array, go from 3 to 2 to 1 remaining swappable cards (so 4 * 3 * 2 * 1 = 24).
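Here’s roughly what that looks like in Javascript (a sketch of the idea, not production-hardened code):

```js
// Fisher-Yates shuffle: walk the array from back to front, swapping each
// position with a randomly chosen index from the still-unshuffled portion.
function fisherYatesShuffle(deck) {
  for (let i = deck.length - 1; i > 0; i--) {
    // Random index between 0 and i, inclusive: only unshuffled positions.
    const j = Math.floor(Math.random() * (i + 1));
    [deck[i], deck[j]] = [deck[j], deck[i]];
  }
  return deck;
}
```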

Why do most implementations of Fisher-Yates shuffle the array from back to front when you could do it from front to back? 🤔

My best guess is that it’s because the iterator can then do double duty as both the index of the current item and an indicator of how many unshuffled items remain in the array. On each pass through the for loop, you need to generate a random number within the window of indices that haven’t been shuffled yet. Starting at the end of the array, that means generating a random number between 0 and i (inclusive). Starting at the beginning means you’d have to generate a number between i and the last index of the array, which is more verbose and harder to reason about.
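For comparison, here’s a sketch of what the front-to-back version might look like; the random-index math is just a little fussier:

```js
// Front-to-back variant: the random index has to land in the window
// [i, array.length - 1] instead of the tidier [0, i].
function shuffleForward(array) {
  for (let i = 0; i < array.length - 1; i++) {
    const j = i + Math.floor(Math.random() * (array.length - i));
    [array[i], array[j]] = [array[j], array[i]];
  }
  return array;
}
```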

I’m also willing to bet that back-to-front was how Knuth described it in The Art of Computer Programming.

Check out the distribution of the output of Fisher-Yates compared to the naive shuffle. Way better, right?

Naive shuffle vs. Fisher-Yates shuffle, 300k tries on a 4-item set

Here’s a visual of how much better the Fisher-Yates shuffle works on a larger set. While a 6-item set might not sound much larger than the 4 items we were just looking at, keep in mind that the number of possible permutations grows faster than exponentially as a set gets bigger (6! is 720, versus just 24 for a 4-item set).

Naive shuffle vs. Fisher-Yates shuffle, 1 million tries on a 6-item set

Reasons why the Fisher-Yates shuffle is rad

  • It’s unbiased* (with a caveat: we will dig more into this next!)
  • It uses constant, or O(1) space, because it performs its swaps in place
  • It uses linear, or O(N) time, because it only needs to shuffle each item in the set once
  • Bonus: if you use lodash’s shuffle function, you’re already using the Fisher-Yates shuffle!
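If you want to see that last point in action, here’s a quick lodash example (assuming lodash is installed). Note that _.shuffle returns a new shuffled array rather than mutating the original:

```js
const _ = require('lodash');

const deck = ['The Fool', 'The Magician', 'The High Priestess', 'The Empress'];
const shuffled = _.shuffle(deck);

console.log(shuffled); // a new, shuffled copy of the deck
console.log(deck);     // the original array is left untouched
```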

For a cool visualization of what the Fisher-Yates shuffle is doing (as well as a visual comparison to other, less-efficient shuffling algorithms that are still “better” than the naive one described above), check out Mike Bostock’s explainer on Fisher-Yates!

Feel free to play around with the code I used to generate these charts:
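The general idea is to run each shuffle a few hundred thousand times on a small set and count how often each permutation turns up. A minimal sketch (reusing the naiveShuffle and fisherYatesShuffle functions above) looks something like this:

```js
// Tally how often each permutation appears over many runs of a shuffle.
function tallyShuffles(shuffleFn, items, tries) {
  const counts = {};
  for (let t = 0; t < tries; t++) {
    const key = shuffleFn([...items]).join('');
    counts[key] = (counts[key] || 0) + 1;
  }
  return counts;
}

console.log(tallyShuffles(naiveShuffle, ['a', 'b', 'c', 'd'], 300000));
console.log(tallyShuffles(fisherYatesShuffle, ['a', 'b', 'c', 'd'], 300000));
```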

What’s the catch?

Remember earlier when I mentioned that Math.random might not be so random after all? As it turns out, while the Fisher-Yates shuffle itself is unbiased, bias can still be introduced into its output depending on what you use to randomly select the card you’re swapping with.

Pseudo and true random number generation

There are two types of random number generation: pseudorandom number generation (PRNG), or what you’re using when you use a language feature like Javascript’s Math.random, and true random number generation (TRNG).

Cloudflare’s wall of encryption lava lamps (Photo credit: Atlas Obscura)

The reason that pseudorandom number generation is called “pseudo” is that, while a PRNG should generate numbers that are statistically close enough to random for most purposes, those numbers are generated from an initial “seed state,” and the same seed will always produce the same sequence. As you can imagine, there are some scenarios (cryptographic security, or maybe you’re running a casino or something) where this would be a huge security flaw: once someone figured out how your randomness was being generated, they’d be able to predict what comes next and game your system! 😱
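To make that concrete, here’s mulberry32, a well-known tiny PRNG: seed it with the same number twice and you get the exact same “random” sequence both times.

```js
// mulberry32: a small 32-bit PRNG. Same seed in, same sequence out.
function mulberry32(seed) {
  return function () {
    let t = (seed += 0x6D2B79F5);
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // a number in [0, 1)
  };
}

const randA = mulberry32(42);
const randB = mulberry32(42);
console.log(randA(), randA(), randA()); // three random-looking numbers...
console.log(randB(), randB(), randB()); // ...and exactly the same three again
```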

So when pseudorandom isn’t random enough, we use true random number generation, which relies on natural sources of entropy to be ~tRuLy unPreDicTabLE~.

Here are some fun real-world examples:

Okay, so what's up with Math.random?

So we’ve established that Math.random is an example of a PRNG, but how does it work exactly?

Daniel Simmons explains that, surprisingly, there is no single hard-coded PRNG algorithm that ships with Javascript. The ECMAScript spec describes how Math.random should behave but leaves the algorithm itself up to the engineering teams who build web browsers, so long as their implementation conforms to what’s described in the spec:

Math.random ( )

Returns a Number value with positive sign, greater than or equal to 0 but less than 1, chosen randomly or pseudo randomly with approximately uniform distribution over that range, using an implementation-dependent algorithm or strategy. This function takes no arguments.

Each Math.random function created for distinct code Realms must produce a distinct sequence of values from successive calls.

The TL;DR is that since 2015, basically every web browser now uses the PRNG algorithm xorshift128+ (which, FYI, is not considered to be cryptographically secure!). But before 2015, they all used different PRNG algorithms, with very different results — this is probably why we’ve all heard at some point that Math.random was broken.
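Just to show how small the core of xorshift128+ is, here’s a sketch that uses BigInt for the 64-bit math Javascript numbers can’t do on their own (V8’s real implementation is in C++ and maps its output to [0, 1) differently, so treat this purely as an illustration):

```js
// xorshift128+ core: two 64-bit words of state, a few shifts and XORs per step.
const MASK64 = (1n << 64n) - 1n;

function xorshift128plus(seedA, seedB) {
  let s0 = BigInt(seedA) & MASK64;
  let s1 = BigInt(seedB) & MASK64;
  return function next() {
    let x = s0;
    const y = s1;
    s0 = y;
    x ^= (x << 23n) & MASK64;
    s1 = x ^ y ^ (x >> 17n) ^ (y >> 26n);
    return Number((s1 + y) & MASK64) / 2 ** 64; // rough mapping to [0, 1)
  };
}

const nextRandom = xorshift128plus(123456789, 362436069); // seeds must not both be 0
console.log(nextRandom(), nextRandom(), nextRandom());
```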

Randomly (or “randomly,” I guess) generated noise before and after Chrome V8 switched from using MWC1616 to xorshift128+ (Image credit: Hackaday)

If you’re interested in reading more, this blog post from Betable Engineering provides an awesome deep (emphasis on deep) dive on just how broken things were in a pre-xorshift128+ world, and this Hackaday article explains a bit about how it got fixed in Chrome V8.

TL;DR

  1. Shuffling things is harder than it might first appear! Naive shuffling algorithms, in addition to being inefficient, can often be totally broken, and it can be hard to see this unless you’re working with large sets or repeating experiments many times.
  2. The Fisher-Yates shuffle is only very subtly different from a naive one, but results in optimal outcomes (time/space efficiency, as well as lack of bias)
  3. Random number generation is super complicated and deserves its own deep dive, but Javascript’s Math.random has come a long way!

The next time you get asked to implement a card game in an interview setting, I hope you’re able to score some points by dropping a little Fisher-Yates and PRNG knowledge! 🤓

Learning by doing

If you want to play around with the toy app I made while working on this project, check it (and the source code) out at tarot.glitch.me!

Why a tarot card app?

Lately I’ve been super interested in horoscopes and tarot readings as a helpful framework/lens to spur personal reflection. I don’t think I’m alone in this!

As mentioned earlier, the naive shuffle algorithm becomes more and more biased as the set that it’s shuffling grows in size. A standard tarot card deck is 78 cards, so it felt like the perfect example to explore — plus, there’s something deeply satisfying about the idea of exploring fortune-telling/divination via computer-based randomness. (It also felt way easier to implement a shuffle + three-card draw than a card game with more rules. There’s only so many hours in the day!)
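The heart of the app is really just the fisherYatesShuffle sketch from earlier plus a slice, something like this (with a placeholder deck standing in for all 78 cards):

```js
// Placeholder deck: the real app uses all 78 tarot cards.
const tarotDeck = ['The Fool', 'The Magician', 'The High Priestess' /* ...and 75 more */];

// Shuffle a copy of the deck, then deal the top three cards as the spread.
const threeCardSpread = fisherYatesShuffle([...tarotDeck]).slice(0, 3);
console.log(threeCardSpread);
```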

Finally, I really admire sailor mercury’s efforts to make computer science and programming more accessible while also embracing the joy of things that are cute, fun, and femme, and I hope to embody similar energy as I pursue working on my own side projects. 🌸
