Sorting Out The Basics Behind Sorting Algorithms

Vaidehi Joshi · Published in basecs · May 8, 2017 · 12 min read
All of the sorting algorithms, without any of the debilitating fear!

We’ve covered a whole array (pun totally intended) of data structures in this series thus far, and I’ve really enjoyed exploring all the different ways that programmers and the software that they build lean on those structures to help them solve interesting problems. But today’s post is special, because we’re finally going to get into some of the more science-heavy stuff (if we can even call it that) that makes up the world of computer science.

Okay, I won’t subject you to a big build up; let’s get right to it. We’re going to finally get into algorithms — hooray! Over the next few weeks, I’ll be diving into some of the most common and widely-referenced algorithms that are used in computer science.

If you don’t come from a computer science background (or if you’re fairly new to programming), the prospect of an algorithm deep-dive might sound pretty bad. Or perhaps outright terrifying. But fear not — we’ll get through this together, and come out of the whole experience as algorithm experts! So let’s start with the very basic stuff first. Namely: what even is an algorithm? We’re going to learn a lot about them, so we should probably have that definition straight in our heads, right?

Well, as it turns out, “algorithm” is a really fancy name with a bad rap. Algorithms aren’t nearly as scary as they sound. An algorithm is just a fancy term for a set of instructions for what a program should do, and how it should do it. In other words: it’s nothing more than a manual for your code. That’s it. (Really!) Some of the most-referenced algorithms in the world of software are sorting algorithms, or algorithms that provide a set of instructions for how a program or system should go about organizing a set of data.

Why sort oneself out?

Before we can get into the different ways that algorithms can sort data, it’s important for us to have a strong grasp on what sorting is, exactly, in the context of computer science, and why it’s so important.

We’ve all had to deal with some variation of sorting different items in our own, everyday lives. For example, I moved recently, and even though I think I did a pretty good job of packing all my stuff up, there were a lot of things that were fairly disorganized. I had a bunch of papers and writing in the form of a manuscript that I had tossed into a moving box in a hurry. Lo and behold, when it came time to unpack, all of the pages were a complete, disorganized mess, and I had to sort them by page number manually as I unpacked them. If you read enough about sorting algorithms, you’ll notice that sorting a deck of cards, sorting books, or sorting a collection of numbers are all commonplace examples of sorting algorithm implementations.

Sorting is particularly useful for two reasons

These very examples are extensions of what happens in our code and our applications all of the time! Sorting is particularly helpful in the context of computer science for two reasons:

  1. From a strictly human-friendly perspective, it makes a single dataset a whole lot easier to read.
  2. It makes it easier to implement search algorithms in order to find or retrieve an item from the entire dataset.

In my example, without having sorted all the pages in my manuscript, you can imagine how painful of an experience it would be to try and find a single page in a mixed-up pile of 200 pages.

So, I guess sorting makes sense on a practical, human level. But why do computers care about sorting? They can do things pretty fast, right? They don’t need things to be easy to read the way that people do. What good does sorting do for them?

Well, from a computer science perspective, sorting plays a particularly important role because the set of data that a computer, a program, or an application has to search through can oftentimes be massive. Like, beyond what most human beings can really fathom searching through kind of massive! We’ll come back to this in a moment.

First, let’s figure out what exactly we mean when we talk about sorting in the context of computer science. Sorting is the organizing of a set of similar items in a collection by some property. There are two important things to note here:

Sorted lists are permutations of their original lists
  • We can sort a collection of items in either increasing or decreasing order by any one property, and that property can really be anything: size, lexicographical (alphabetical) order, numerical order, date, time, you name it!
  • We can only sort a dataset where the items are homogeneous, or are of the same type. In other words, we couldn’t sort a dataset with both words and numbers, because that dataset doesn’t have a shared property that we could actually sort by.

In the example above, there are two homogeneous lists that we have sorted: list_one and list_two. All of the items are of similar types, and we’re sorting them in increasing order. It’s also worth mentioning that the sorted versions of these lists (list_one_sorted and list_two_sorted, respectively) are permutations, or reorderings, of their original, unsorted lists.
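Since the original lists live in an illustration, here’s a minimal Python sketch of the same idea, with made-up values standing in for the ones in the image:

```python
# Two homogeneous lists: one of numbers, one of strings (the values are illustrative).
list_one = [13, 4, 22, 9, 1]
list_two = ["pear", "apple", "mango", "kiwi"]

# sorted() returns a new list in increasing order; the originals are left untouched.
list_one_sorted = sorted(list_one)   # [1, 4, 9, 13, 22]
list_two_sorted = sorted(list_two)   # ['apple', 'kiwi', 'mango', 'pear']

# Each sorted list is a permutation (a reordering) of its original list:
# same elements, same length, different order.
assert sorted(list_one_sorted) == sorted(list_one)
assert len(list_one_sorted) == len(list_one)
```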

Okay, now that we have the definitions and rules out of the way, let’s come to the most important question: how do computers leverage sorting? Why are sorted things good? Similar to how sorted data is more readable for humans, sorted data also makes things a lot simpler for machines, too.

Let’s take a look at what life would look like without sorting. Spoiler alert: it seems pretty bad.

Life without sorting?

In the example above, we have a tiny list of unsorted numbers. Imagine that we wanted to find the number 33, but didn’t know where it was. Or, in an even more terrible scenario: imagine we wanted to find a number that isn’t even in the list! In the worst case, we’d have to look through every single one of the numbers, get all the way to the end, and only then realize that the number wasn’t in the list at all. In other words, it would take linear time. Using our example dataset, the amount of time it would take us to look through all of the numbers would be pretty small.
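Here’s a small sketch of what that worst case looks like in Python; the numbers (aside from 33) are made up for illustration:

```python
def linear_search(items, target):
    """Check every element, one at a time, until we find the target or run out: O(n) time."""
    for index, value in enumerate(items):
        if value == target:
            return index
    return None  # we had to look at every single number just to learn it's missing

unsorted_numbers = [18, 5, 33, 42, 7, 91, 26]    # illustrative, unsorted values
print(linear_search(unsorted_numbers, 33))       # 2
print(linear_search(unsorted_numbers, 100))      # None, but only after checking all 7 numbers
```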

But what if it was 100 million numbers? Or 1 billion?

Okay, don’t panic! We’re not going to try to sort through 100 million numbers, don’t worry. Let’s look at what finding one number in that same dataset would look like when it was sorted.

Life with sorting!

If the example above looks familiar to you, that’s because it’s our good old friend, binary search! Finding one number in the exact same example dataset (even if the number doesn’t even exist in the list) never takes more than logarithmic time. Hopefully, you remember that this is all because binary search is implemented by eliminating half of the possible search results with each comparison.

This is exactly how our programs leverage sorting! Binary search is a great way to find the position of an item in a large dataset, which is exactly what we need in order to read, retrieve, insert, or delete information. And even if the dataset held 100 million items, this would be easy for our code to search through, as long as the list was sorted.
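And here’s what that looks like in code: a minimal binary search over the same illustrative values from the sketch above, sorted this time:

```python
def binary_search(sorted_items, target):
    """Eliminate half of the remaining candidates with each comparison: O(log n) time."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        middle = (low + high) // 2
        if sorted_items[middle] == target:
            return middle
        elif sorted_items[middle] < target:
            low = middle + 1     # the target can only be in the upper half
        else:
            high = middle - 1    # the target can only be in the lower half
    return None                  # the target isn't in the list

sorted_numbers = [5, 7, 18, 26, 33, 42, 91]
print(binary_search(sorted_numbers, 33))     # 4
print(binary_search(sorted_numbers, 100))    # None, after only 3 comparisons
```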

All sorts of classifications

Now that we’re all on board with sorting as a concept, it’s time to break down the different types of sorting algorithms that are commonly used.

Different ways to classify a sorting algorithm

There are many ways to classify a sorting algorithm, but we’ll focus on just six of them: time complexity, space complexity (or memory usage), stability, internal vs. external, recursive vs. non-recursive, and comparison vs. non-comparison. These classifications end up being important factors for programmers when they are writing a sorting algorithm or choosing which one to implement.

It all comes back to a theme that we’re already familiar with at this point in the series: different tools each have their own benefits and drawbacks. As we delve into some common algorithms over the next few weeks, we’ll use these classifications to help us decide what those benefits and drawbacks are for any given sorting algorithm, and when that algorithm might be helpful (or maybe not so helpful!).

Let’s take a look at each of these classifications in more detail:

1. Time complexity

The easiest way to classify an algorithm is by time complexity, or by how the relative time it takes to run the algorithm grows as the size of its input grows.

Time complexity

Since we’re already pretty familiar with the concept of time complexity (or, Big O as we often refer to it), this is how we’ll break down all of our algorithms as we start learning about them in more detail.

2. Space complexity/memory usage

Different algorithms require different amounts of space, depending on their space complexity. Another way of thinking about space complexity in the context of sorting algorithms is by answering the question: How much memory will this algorithm need to run?

Space complexity

There are two types of classifications for the space complexity of an algorithm: in-place or out-of-place.

An in-place algorithm is one that operates directly on the inputted data. The danger with this is that the original dataset gets modified in the process of sorting it, which means that the original, unsorted ordering is effectively destroyed! However, it is more space-efficient, because the algorithm only needs a tiny bit of extra space in memory (usually a constant amount of space, or O(1)), which can be helpful if you don’t have enough memory to spare.

In contrast, out-of-place algorithms don’t operate directly on the original dataset; instead, they make a new copy and perform the sorting on the copied data. This can be safer, but the drawback is that the algorithm’s memory usage grows with the size of the input.
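Python’s built-in sorting functions give a loose illustration of the difference in interface (loose because the built-in sort does use some auxiliary memory under the hood, so this is about whether the original list survives rather than a strict O(1) guarantee):

```python
numbers = [22, 4, 13, 1, 9]          # illustrative values

# Out-of-place style: sorted() builds and returns a brand-new list,
# so the memory it needs grows with the size of the input...
new_list = sorted(numbers)
print(numbers)    # [22, 4, 13, 1, 9] (the original ordering survives)
print(new_list)   # [1, 4, 9, 13, 22]

# In-place style: list.sort() rearranges the original list itself,
# so the unsorted ordering is gone for good afterwards.
numbers.sort()
print(numbers)    # [1, 4, 9, 13, 22]
```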

3. Stability

Oftentimes, a single dataset will have more than one element that has the same “sort key”; in other words, multiple items in a list could be considered equal in the way that they could be sorted.

Stability

In the example shown here, there are two number 4’s: one of which is green, and the other of which is red.

There are two ways that this list could be sorted, given the fact that there are two elements that could be sorted “equally”: the green 4 could come first, the way that it did in the original list, or the red 4 could be sorted first.

This is exactly what defines the stability of a sorting algorithm.

A stable algorithm is one that preserves the relative order of the elements; if the keys are the same, we can guarantee that the elements will be ordered in the same way in the sorted list as they appeared before they were sorted. On the other hand, an unstable algorithm is one where there is no guarantee that, if two items are found to have the same sort key, their relative order will be preserved.
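Python’s built-in sort happens to be stable, so we can see this guarantee in action with a small sketch that mirrors the green 4 / red 4 example:

```python
# Cards with a color and a number; two of them share the number 4.
cards = [("red", 7), ("green", 4), ("blue", 2), ("red", 4)]

# Because the sort is stable, cards that share the same number keep
# the relative order they had in the original list.
by_number = sorted(cards, key=lambda card: card[1])
print(by_number)
# [('blue', 2), ('green', 4), ('red', 4), ('red', 7)]
#               ^ the green 4 still comes before the red 4
```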

4. Internal vs. external

Because our machines can sort through large datasets fairly easily, it’s common to have some applications that have to sort through huge collections of data. In some cases, this can actually amount to more data than can be maintained in the machine’s main memory (or RAM).

Internal vs. external

The way that an algorithm has to store data while it’s sorting through records is yet another way that we can classify sorting algorithms.

If all of the data that needs to be sorted can be kept in main memory, the algorithm is an internal sorting algorithm. However, if the records have to be stored outside of main memory (in other words, in external memory, such as on a disk or a tape), the algorithm is referred to as an external sorting algorithm.
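To make that a little more concrete, here’s a toy sketch of an external merge sort in Python: each small chunk is sorted in memory (an internal sort), spilled to a temporary file, and then the sorted files are streamed back together. The tiny chunk size is purely for demonstration.

```python
import heapq
import tempfile

def external_sort(numbers, chunk_size=3):
    """Toy external merge sort: sort small chunks in memory, write each sorted
    chunk to disk, then lazily merge the sorted chunk files back together."""
    chunk_files = []
    for start in range(0, len(numbers), chunk_size):
        chunk = sorted(numbers[start:start + chunk_size])   # internal sort of one chunk
        spill = tempfile.TemporaryFile(mode="w+")
        spill.writelines(f"{n}\n" for n in chunk)
        spill.seek(0)
        chunk_files.append(spill)
    # heapq.merge streams the already-sorted files without loading everything at once.
    streams = [(int(line) for line in f) for f in chunk_files]
    return list(heapq.merge(*streams))

print(external_sort([33, 5, 91, 7, 42, 18, 26]))   # [5, 7, 18, 26, 33, 42, 91]
```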

5. Recursive or non-recursive

Some algorithms do their sorting by dividing and conquering: that is to say, they split up a large dataset into smaller inputs, and then recursively call a sort function upon all of those smaller inputs. This is referred to as a recursive sorting algorithm.

Recursive vs. non-recursive

In contrast, a non-recursive sorting algorithm is (you guessed it) one that doesn’t implement recursion! That is to say, it doesn’t sort smaller subsets by calling the same function upon those smaller inputs.

It’s worth mentioning that most algorithms don’t have to be implemented recursively; they can also be written and implemented iteratively. But whether or not a sorting algorithm is recursive ends up being an easy way of distinguishing one set of algorithms from another.
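One classic example of a recursive, divide-and-conquer sort is merge sort; here’s a compact sketch of how it calls itself on smaller and smaller inputs:

```python
def merge_sort(items):
    """A recursive (divide-and-conquer) sort: split the list, sort each half, merge."""
    if len(items) <= 1:
        return items                       # a list of 0 or 1 items is already sorted
    middle = len(items) // 2
    left = merge_sort(items[:middle])      # recursively sort each half...
    right = merge_sort(items[middle:])
    return merge(left, right)              # ...then merge the two sorted halves

def merge(left, right):
    """Combine two already-sorted lists into one sorted list."""
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([33, 5, 91, 7, 42]))      # [5, 7, 33, 42, 91]
```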

6. Comparison sort

Finally, it’s possible to classify a sorting algorithm based on how it actually does the job of sorting elements.

Comparison sort

Any sorting algorithm that compares two items (a pair) at a time in the process of sorting through a larger dataset is referred to as a comparison sort. This subset of algorithms uses some type of comparator (for example: >= or <=) to determine which of any two elements should be sorted first.

Sorting algorithms that do not use any type of comparators to do their sorting are referred to as non-comparison sorts.

For our purposes, nearly all of the algorithms we’ll be looking at in this series will be classified as comparison sorts. However, it’s important to remember that not all sorting algorithms require a comparator to define the ordering of a dataset!
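Counting sort is one well-known example of a non-comparison sort: it works for small, non-negative integer keys by tallying how many times each value appears, and it never asks whether one element is greater or less than another. Here’s a minimal sketch:

```python
def counting_sort(numbers, max_value):
    """A non-comparison sort: tally how many times each value occurs,
    then rebuild the list from the tallies. No >= or <= between elements."""
    counts = [0] * (max_value + 1)
    for n in numbers:
        counts[n] += 1          # just counting occurrences, not comparing elements
    result = []
    for value, count in enumerate(counts):
        result.extend([value] * count)
    return result

print(counting_sort([4, 1, 3, 4, 2, 1], max_value=4))   # [1, 1, 2, 3, 4, 4]
```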

Sorting, sorting, everywhere you look!

Okay, we’ve exhausted our list of classifications. Hopefully, by now, you can see that sorting algorithms are a big deal in computer science! People have spent a huge portion of their careers researching and writing about sorting algorithms! And there’s a good reason for that.

Sorted data powers (almost) everything!

Sorted data powers (almost) everything. Think about it: whenever you interact with a web or mobile application, you can sort a large set of data by some property. Databases also have to sort and search through huge sets of data, using structures called B-trees. They sort through so much data that we, as humans, can’t even wrap our heads around it!

We already know about binary search and how often it is used, and how powerful it is. Guess what? Sorted data is a prerequisite for that!

But on a broader level, sorting algorithms are also used in statistics, for example to determine the median of a dataset, or to find statistical outliers! Sorting algorithms are also used for load balancing on your own individual machine, and are particularly helpful when it comes to processing large amounts of data on multiple processors.
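The statistics use case is a nice, tiny example of sorted data doing the heavy lifting: once the data is in order, the median falls right out of the middle of the list.

```python
def median(numbers):
    """Sort first; the median is the middle element (or the average of the
    two middle elements when the count is even)."""
    ordered = sorted(numbers)
    middle = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[middle]
    return (ordered[middle - 1] + ordered[middle]) / 2

print(median([33, 5, 91, 7, 42]))   # 33
```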

Over the next few weeks, we’ll start to see more of just how crucial sorting algorithms are for computer science. There are quite a few different sorting algorithms, but we’ll cover some of the most common ones:

Common sorting algorithms that we’ll be learning about!

Next week, we’ll try our hand at actually applying a couple of these sorting algorithms to a real-life problem and start exploring a few of these sorts in more depth. Maybe by the time we get to the end of this algorithm deep dive, you’ll change the way that you sort things in your own life. Or perhaps you’ll just never want to hear about sorting anything, ever again. I’m going to go out on a limb and say that there’s a 50/50 chance of either one of those happening.

Resources

There are tons of papers and entire PhD dissertations out there in the wild about sorting algorithms: which ones are best, which ones are efficient, and how to rank and classify them by different categories. You could probably spend years simply reading and learning about them all. Or, you can also just check out these resources if you’re looking for an easy place to start!

  1. Non-Comparison Based Sorting Algorithms, Professor Ananda Gunawardena
  2. Data Structure — Sorting Techniques, TutorialsPoint
  3. Sorting Algorithms, Professor Victor S. Adamchik
  4. Sorting Algorithms — Stability, University of Illinois at Chicago, Department of Mathematics, Statistics, and Computer Science
  5. Sorting Applications, Professors Robert Sedgewick & Kevin Wayne
  6. Introduction to sorting algorithms, mycodeschool
