There Are (Not?) Thirty-Six Ways to Sort an Array

Comparison-Based Sorting Algorithms

Alice Wang
The Startup
7 min read · Oct 10, 2020


Photo by Sophie Elvis on Unsplash

Introduction

Some interview questions might ask you to sort an array. But other interview questions might ask you to solve a less straightforward problem, with an answer that tests your knowledge of the performance and implementation of different array sorting methods.

This article is designed to give the reader an overview of what different array sorting algorithms are like, and why you would use them.

The implementation examples are written in JavaScript and are borrowed from App Academy with my own modifications and commenting. All mistakes are my own.

Source: Big O Cheatsheet

Comparison-Based Sorting Algorithms

These straightforward algorithms operate by comparing pairs of elements and changing their order as needed. They are best used on relatively small arrays, where the time complexity is nearly negligible, or when the arrays are guaranteed to be nearly sorted.

The highlights of these algorithms are discussed below. The implementation examples reference a simple swap helper function.

Note: In ES6 JavaScript, swapping without a temporary variable is made possible by destructuring assignment (e.g. [arr[2], arr[1]] = [arr[1], arr[2]]).
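A minimal sketch of such a swap helper might look like this (the function name and signature are my own assumption, since the original snippet isn't shown):

```javascript
// Swaps the elements at indices i and j in place using a temporary variable.
function swap(arr, i, j) {
  const temp = arr[i];
  arr[i] = arr[j];
  arr[j] = temp;
  return arr;
}

// The ES6 destructuring equivalent, with no temporary variable:
// [arr[j], arr[i]] = [arr[i], arr[j]];
```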

Four of these algorithms have a worst-case time complexity of O(n²): bubble sort, selection sort, insertion sort, and quick sort.

Merge sort's worst case is O(n log n). Randomized quick sort technically still has an O(n²) worst case, but its expected running time is O(n log n) regardless of the input ordering.

Bubble Sort

Image Source (Originally from Visualgo)

The inner loop compares each element with its neighbor and swaps them if they are out of order, while the outer loop keeps the inner loop running until it completes a full pass through the array without making any swaps.
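Those two loops might be sketched like this (a minimal version in the spirit of the original, not the exact App Academy code):

```javascript
// Repeatedly passes through the array, swapping adjacent out-of-order
// pairs, until a pass completes with no swaps.
function bubbleSort(arr) {
  let swapped = true;
  while (swapped) {                             // outer loop: repeat until a clean pass
    swapped = false;
    for (let i = 0; i < arr.length - 1; i++) {  // inner loop: compare neighbors
      if (arr[i] > arr[i + 1]) {
        [arr[i], arr[i + 1]] = [arr[i + 1], arr[i]];
        swapped = true;
      }
    }
  }
  return arr;
}
```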

Selection Sort

Image Source (originally from Visualgo)

The outer loop steps through the array one element at a time while the inner loop finds the smallest unsorted element (i.e. the smallest element at or to the right of i). When the inner loop finishes, that minimum is swapped into position i, and the outer loop begins another pass at (i + 1).
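A sketch of that nested-loop structure (again my own minimal version, not the original snippet):

```javascript
// For each position i, finds the smallest element at or to its right
// and swaps it into place.
function selectionSort(arr) {
  for (let i = 0; i < arr.length; i++) {        // outer loop: next unsorted slot
    let minIndex = i;
    for (let j = i + 1; j < arr.length; j++) {  // inner loop: locate the minimum
      if (arr[j] < arr[minIndex]) minIndex = j;
    }
    if (minIndex !== i) {
      [arr[i], arr[minIndex]] = [arr[minIndex], arr[i]];
    }
  }
  return arr;
}
```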

Insertion Sort

Insertion sort is the most similar to the sorting technique I used at one of my first jobs: alphabetizing books on library shelves by author and title. Like selection sort, and unlike bubble sort, it works from left to right, building up a sorted section at the front of the array and moving elements by more than one place at a time. Unlike selection sort, however, the inner loop walks through the sorted portion of the array to find a place for the element the outer loop is currently evaluating. The outer loop moves left to right while the inner loop goes the other direction, shifting each larger element one slot to the right until it finds an element that is not greater than the one being placed. The element being sorted is then written into the freed slot.
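The shelf-sorting procedure above might be sketched like so (my own minimal version):

```javascript
// Grows a sorted section on the left; each new element is shifted
// leftward until it lands in its proper place.
function insertionSort(arr) {
  for (let i = 1; i < arr.length; i++) {  // outer loop: element being placed
    const current = arr[i];
    let j = i - 1;
    // Inner loop: shift larger sorted elements one slot to the right.
    while (j >= 0 && arr[j] > current) {
      arr[j + 1] = arr[j];
      j--;
    }
    arr[j + 1] = current;  // drop the element into the freed slot
  }
  return arr;
}
```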

Merge Sort

Merge sort is a divide-and-conquer recursive algorithm. For a basic example of recursion, you can check out my article on solving the Fibonacci problem.

Divide-and-conquer means solving multiple sub-problems whose results combine to solve the original, larger problem. You break the input down into its smallest components: for example, splitting a non-empty array until each piece contains only one element. Here we call the recursive function on the input until it reaches that base case, then return so that the recursion “unwinds” and produces a sorted array.

The merge helper function takes the two sorted arrays and combines them by comparing their first elements one by one.
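The recursion and the merge helper might be sketched like this (a minimal version under the same splitting scheme, not the exact original code):

```javascript
// Merges two sorted arrays by repeatedly comparing their first elements.
function merge(left, right) {
  const merged = [];
  while (left.length && right.length) {
    merged.push(left[0] <= right[0] ? left.shift() : right.shift());
  }
  return [...merged, ...left, ...right];  // one side may have leftovers
}

// Recursively halves the array down to single-element (base) cases,
// then merges the sorted halves on the way back up.
function mergeSort(arr) {
  if (arr.length <= 1) return arr;  // base case: already sorted
  const mid = Math.floor(arr.length / 2);
  return merge(mergeSort(arr.slice(0, mid)), mergeSort(arr.slice(mid)));
}
```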

According to Visualgo:

Contrary to what many other CS printed textbooks usually show (as textbooks are static), the actual execution of Merge Sort does not split to two subarrays level by level, but it will recursively sort the left subarray first before dealing with the right subarray.

That’s it, on the example array [7, 2, 6, 3, 8, 4, 5], it will recurse to [7, 2, 6, 3], then [7, 2], then [7] (a single element, sorted by default), backtrack, recurse to [2] (sorted), backtrack, then finally merge [7, 2] into [2, 7], before it continue processing [6, 3] and so on.

Merge sort runs in O(n log n): the length of the input times the number of recursive halvings needed to reach single elements. Unlike the other sorting algorithms discussed above, which all use O(1) extra space, merge sort is not memory efficient. Its space complexity is O(n) because it allocates new subarrays as it splits and merges.

Quick Sort

Quick sort, like merge sort, is a divide-and-conquer algorithm that splits an input array into subarrays, sorts them, and combines the results. The difference in implementation is that quick sort doesn’t split the array in half by index but by each element’s value relative to a pivot.

Elements less than the pivot go into the left array, and elements greater than the pivot go into the right.

The sorted subarrays are then combined and returned. In the worst case, when every pivot you pick is the smallest or largest remaining element, the runtime is O(n²): the recursion goes n levels deep, and each level does an O(n) pass over the elements (the two filter calls).
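A filter-based sketch of this scheme, using the first element as the pivot (a minimal version, not the exact App Academy code):

```javascript
// Takes the first element as the pivot and uses filter() to partition
// the rest into smaller and greater-or-equal subarrays.
function quickSort(arr) {
  if (arr.length <= 1) return arr;  // base case
  const [pivot, ...rest] = arr;
  const left = rest.filter((el) => el < pivot);    // less than pivot
  const right = rest.filter((el) => el >= pivot);  // greater than or equal
  return [...quickSort(left), pivot, ...quickSort(right)];
}
```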

This filter-based implementation of quick sort has O(n) space complexity, because like merge sort we are creating new arrays for the elements of the input; if you instead swap elements in place, the space complexity drops to O(log n) for the recursion stack.

Randomized Quick Sort

If you know your input arrays are already mostly sorted, running them through the quick sort implementation we defined above is not a good idea. The first element will always be one of the smallest, so each partition puts nearly everything into the right subarray and you hit the O(n²) worst case. Picking a random pivot solves this problem in expectation.
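The only change from the sketch above is how the pivot is chosen (the function name is my own):

```javascript
// Same filter-based partition, but with a pivot chosen at random,
// which makes the O(n^2) worst case unlikely on any input ordering.
function quickSortRandom(arr) {
  if (arr.length <= 1) return arr;  // base case
  const pivotIndex = Math.floor(Math.random() * arr.length);
  const pivot = arr[pivotIndex];
  // Everything except the pivot element itself.
  const rest = [...arr.slice(0, pivotIndex), ...arr.slice(pivotIndex + 1)];
  const left = rest.filter((el) => el < pivot);
  const right = rest.filter((el) => el >= pivot);
  return [...quickSortRandom(left), pivot, ...quickSortRandom(right)];
}
```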

And that’s not all…

Non-comparison-based sorting algorithms will be covered in a future article. For suitable inputs they can be faster than comparison-based algorithms: the worst-case time complexity is O(n + k) for counting sort and O(w * (n + k)) for radix sort.

Further Resources
