Intermediate Sorting Algorithms in JavaScript

Merge, Quick, and Radix Sort

Gianfranco Nuschese
7 min read · Jan 21, 2020

Recently I walked through some simple sorting algorithms with JavaScript. I use the term “simple” because those algorithms are a bit easier to understand and implement, but the downside is their time complexity is high — O(n²) at worst. In this blog, we’re going to walk through some more complicated algorithms and aim to shorten our sorting time. (If you want to see Ruby implementations, I’ve included them at the bottom of this post.)

It’s okay Avril, we’re here to make it less complicated.

Recursion

Merge and Quick Sort use a concept called recursion, which is when a function calls itself. Every recursive function needs a base case, a condition where it returns without recursing, so that it doesn’t get stuck in an infinite loop calling itself.

Yes, Commander Data, it appears we’re stuck in an infinite loop.

We’ll write a quick recursive function in order to explore how recursion works.

Implementation of reversing a string using recursion
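A minimal sketch of the idea, assuming we peel off the first character and reverse the rest (the function name is my own):

```javascript
// Recursively reverse a string by reversing the tail, then
// appending the head.
function reverse(str) {
  // Base case: strings of length 0 or 1 are their own reverse.
  if (str.length <= 1) return str;
  return reverse(str.slice(1)) + str[0];
}
```

Calling reverse("bar") recurses down to reverse("r"), then builds the answer back up as the calls return.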

I recommend throwing this into your favorite testing environment so that you can follow the call stack as the function works — remember you can always throw it into a snippet in Chrome and use the “debugger” line to step through the function. Let’s follow what happens if we want to reverse the string “bar”.

Returns broken down for reversing a string recursively.

Merge Sort

Now that we’ve covered recursion, let’s start working on implementing our sorting algorithms. Both Merge and Quick sort take advantage of the fact that arrays of length 0 or 1 are always sorted. Merge Sort works by breaking down an array of elements in halves until left with “sorted” arrays of length 0 or 1, and then merges those sorted arrays together until left with the final sorted array. Let’s break it down.

The most important part of the sort is implementing the function that merges two arrays. This function assumes that both input arrays are already sorted — which is guaranteed, since we build up from arrays of length 0 or 1.

Implementation of array merging for Merge Sort
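A sketch of how such a merge helper might look, assuming both inputs arrive already sorted (names are my own):

```javascript
// Merge two sorted arrays into one sorted array.
function merge(left, right) {
  const result = [];
  let i = 0;
  let j = 0;
  // Repeatedly take the smaller front element from either array.
  while (i < left.length && j < right.length) {
    if (left[i] <= right[j]) {
      result.push(left[i++]);
    } else {
      result.push(right[j++]);
    }
  }
  // One array is exhausted; append whatever remains of the other.
  while (i < left.length) result.push(left[i++]);
  while (j < right.length) result.push(right[j++]);
  return result;
}
```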

Now that we’ve built the merging function, we can use it to build the sorting function.
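One way the sorting function might look, with the merge helper repeated so the sketch stands on its own:

```javascript
// Merge two sorted arrays into one sorted array (same idea as the
// merging helper described above).
function merge(left, right) {
  const result = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    result.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return result.concat(left.slice(i), right.slice(j));
}

function mergeSort(arr) {
  // Base case: arrays of length 0 or 1 are already sorted.
  if (arr.length <= 1) return arr;
  const mid = Math.floor(arr.length / 2);
  // Sort each half recursively, then merge the sorted halves.
  return merge(mergeSort(arr.slice(0, mid)), mergeSort(arr.slice(mid)));
}
```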

It seems too simple, but let’s think about our solution: our base case is an array of length 0 or 1, which is already sorted, so we return it. Otherwise, we split the array in half, merge sort each half, and merge the two sorted halves together. Here’s a visual breakdown:

A visual breakdown of Merge Sort

Quick Sort

Quick Sort splits arrays just like Merge Sort, but instead of finding a midpoint, it chooses a pivot point. Once a pivot is chosen, we shift all the values less than the pivot to its left and all the values greater to its right. At that step the pivot is “sorted”. The function becomes recursive when we do the same to each side of the pivot point until our array is sorted.

First pass of Quick Sort

The visual aid and our implementation always pick the first item in the array as the pivot point. The current pivot is yellow. Red marks the array value we’re currently iterating over. Purple means the value is greater than the pivot, and green means less than. Whenever we find an item that is less than the pivot value, we swap it with the current iteration index, placing it at the beginning of the unsorted portion of the array. Once we reach the end, we swap the pivot with the last value we swapped, placing it in front of all the values less than it, making it sorted and designated by the color orange.

Quick Sort Left Side

We then do the same sort on the left side of the pivot point, eventually reaching our base case — (seen above for 5 and 15) — two arrays of length 1 that are sorted. We’ll then do the same on the right side and end up with a sorted array. Take note that we’re swapping all of these numbers in place — as opposed to Merge Sort where we’re splitting arrays and merging them back together.

Quick Sort Right Side

Let’s get started with our helper methods. We’ll need two this time: a swapping helper method and a pivot helper method. Our pivot method will need to return a pivot index so that when we recursively call it, we know where to start our new pivot.

Pivot Helper Method Implementation
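Sketches of the two helpers, assuming we always pivot on the first element of the section we’re partitioning:

```javascript
// Swap two elements of an array in place.
function swap(arr, i, j) {
  [arr[i], arr[j]] = [arr[j], arr[i]];
}

// Partition arr[start..end] around arr[start] and return the index
// where the pivot ends up (its final, sorted position).
function pivot(arr, start = 0, end = arr.length - 1) {
  const pivotValue = arr[start];
  let swapIdx = start; // boundary of the "less than pivot" region
  for (let i = start + 1; i <= end; i++) {
    if (arr[i] < pivotValue) {
      swapIdx++;
      swap(arr, swapIdx, i); // move the smaller value behind the boundary
    }
  }
  swap(arr, start, swapIdx); // drop the pivot in front of the smaller values
  return swapIdx;
}
```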

We’ll need to establish default arguments for our Quick Sort method as well, since we’ll be passing in different arguments on every pivot. You should notice that our base case acts a little differently than normal — we’re only calling pivot if our array is unsorted, and simply returning the original array at the end of the function. This is because we’re mutating the array in place, so as long as we pass in the proper numbers to pivot, we’ll eventually return the same array but sorted.
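Put together, the sort might look like this (pivot and swap repeated from above so the sketch runs standalone):

```javascript
function swap(arr, i, j) {
  [arr[i], arr[j]] = [arr[j], arr[i]];
}

// Partition helper: returns the pivot's final index.
function pivot(arr, start = 0, end = arr.length - 1) {
  const pivotValue = arr[start];
  let swapIdx = start;
  for (let i = start + 1; i <= end; i++) {
    if (arr[i] < pivotValue) {
      swapIdx++;
      swap(arr, swapIdx, i);
    }
  }
  swap(arr, start, swapIdx);
  return swapIdx;
}

// Default arguments cover the whole array on the first call; later
// recursive calls pass narrower left/right bounds.
function quickSort(arr, left = 0, right = arr.length - 1) {
  if (left < right) {
    const pivotIndex = pivot(arr, left, right);
    quickSort(arr, left, pivotIndex - 1);  // sort the values below the pivot
    quickSort(arr, pivotIndex + 1, right); // sort the values above the pivot
  }
  // The array was mutated in place, so we just hand it back.
  return arr;
}
```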

Big O of Merge and Quick Sort

Since we’re splitting arrays up with these sorts, we can expect at least O(log n). Remember that a logarithm answers “how many of one number do we multiply to get another number?” — since we’re splitting arrays down to one or zero elements, it stands that we’d have to double an element log n times to get back to the length of our original array. Since we’re doing comparisons in both sorts, we’ll be comparing every element in the array, which means we’ll be doing n comparisons log n times. Therefore, at worst our sorts have a time complexity of O(n log n). However, Quick Sort is a bit different.

The efficiency of our sort actually depends on our pivot point. In the best case, our pivot would be roughly the median of the numbers in the array, leading to an even distribution of splits on either side. Our implementation picks the first element in the array every time. If we had the worst case of an already sorted array, we’d end up splitting the array n times and comparing numbers n times, giving us O(n²). If you remember, this is no better than our simple sorting algorithms. We could get around this by always picking a random element in the array, but on the chance that we picked the highest or lowest number in the array, we’d still end up with O(n²).

Radix Sort

Every sorting algorithm I’ve covered so far, including the simple ones, has been based on comparing values. Radix Sort is unique in that it doesn’t compare values but sorts them into “buckets” based on their digit placement. Technically, Radix Sort only works with numbers, but it can be used with strings and images since they can be converted to binary.

Radix Sort First Pass

We’re going to be working with base-10 numbers in our implementation, but the sort works with other bases as well. The numbers get sorted into buckets based on the digit at the current place value. The digit highlighted in red is the one the sort is currently based on. The sort then puts the numbers back in the list starting with the 0 bucket until all buckets are empty. The process needs to be repeated k times, where k is the number of digits in the largest number in the sort.

Radix Sort Final Passes

This sort will employ three helper functions — getDigit, digitCount, and mostDigits. We’ll use getDigit to find the digit we need to sort by in our Radix Sort implementation. digitCount will be used in the mostDigits function to find the max number of digits we need to iterate over. These functions can be written in different ways — like converting numbers to strings and back — but I’ve included the math-heavy routes since they add to the understanding of why Radix Sort works.

Radix Sort Helper Method Implementation
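The math-heavy versions might look something like this, assuming base 10 and non-negative inputs:

```javascript
// Digit of num at the given place (0 = ones, 1 = tens, ...).
function getDigit(num, place) {
  return Math.floor(Math.abs(num) / Math.pow(10, place)) % 10;
}

// How many digits num has; log10 tells us the highest power of 10
// that fits into num.
function digitCount(num) {
  if (num === 0) return 1;
  return Math.floor(Math.log10(Math.abs(num))) + 1;
}

// Largest digit count in the array, i.e. how many passes we need.
function mostDigits(nums) {
  let max = 0;
  for (const num of nums) {
    max = Math.max(max, digitCount(num));
  }
  return max;
}
```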

Now that we’ve got our helpers, our sort implementation is a little more involved than the ones above. Take note of line 18: after everything is placed into buckets, we reassign “nums” to a new array that uses “.concat” and the spread operator to flatten the arrays in the buckets — isn’t that cool!?
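A sketch of the full sort, with the helpers repeated so it runs on its own (non-negative integers assumed):

```javascript
function getDigit(num, place) {
  return Math.floor(Math.abs(num) / Math.pow(10, place)) % 10;
}

function digitCount(num) {
  if (num === 0) return 1;
  return Math.floor(Math.log10(Math.abs(num))) + 1;
}

function mostDigits(nums) {
  let max = 0;
  for (const num of nums) max = Math.max(max, digitCount(num));
  return max;
}

function radixSort(nums) {
  const maxDigits = mostDigits(nums);
  // One pass per digit place in the largest number.
  for (let place = 0; place < maxDigits; place++) {
    // Ten buckets, one for each possible digit 0-9.
    const buckets = Array.from({ length: 10 }, () => []);
    for (const num of nums) {
      buckets[getDigit(num, place)].push(num);
    }
    // Flatten the buckets back into one array with concat + spread.
    nums = [].concat(...buckets);
  }
  return nums;
}
```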

Big O of Radix Sort

We’re iterating over n numbers k times, therefore the worst-case time complexity is O(nk), where n is the length of the array and k is the number of digits in the largest number. That’s linear in n when k is fixed, which is faster than Merge and Quick Sort, so it’s a popular choice for large data sets.

Ruby Implementation of Sorts

Ruby Implementation of Sorting
