Elucidating the complexity of the Big O

Tre Ammatuna
5 min read · Jan 4, 2017


In today’s world of impatience, it’s more important than ever to write efficient code in our applications. The time complexity of the algorithms we develop is a concept that must be mastered to be taken seriously as a developer. So how do we analyze and measure it? Simple… with Big O…

So what is Big O? It’s a notation standard that describes how the runtime (or efficiency) of an algorithm grows relative to its input size.

The term Big O stems from academia, where there are actually three different notations: Big O, Big Theta, and Big Omega. Strangely enough, while the coding world talks about Big O, our version is closer to a combination of academia’s Big O and Big Omega (what they call Big Theta). However, we are not going to dive into all those differences here, as we aren’t here to confuse, but to elucidate.

So let’s back up for a minute and make sure we have an understanding of what it is we are actually notating with Big O.

What could this look like in real-world terms?

My favorite real-world example of this comes from Gayle Laakmann McDowell in “Cracking the Coding Interview” (highly recommend this book to any and everyone looking to get a job in this industry!).

Imagine you have a file on a hard drive that you need to get to your friend across the country. How would you go about this?

Most people, including myself, would gravitate towards email/Dropbox/etc., which are reasonable assumptions. However, there’s a catch you have to think about: how big is that file you are sending? If it’s under 100 MB, you’re probably right in your assumption. With today’s internet speeds you could have it uploaded in a reasonable amount of time and sent off to them in an instant after. But… what if that file, or folder of files, took up terabytes of data?

At that point, it could be faster to book a flight, get to the airport, fly across the country, deliver the files personally, and fly back. It might cost you a lot more money, but that’s not what we’re talking about here; we’re talking about time!

What if you couldn’t get a flight and had to drive across the country? Well, if the files were big enough, even that could be faster than trying to upload them.

In the Big O world, you could say that uploading the files would be O(n), or linear time, based on how large the file is, while flying the files over would be O(1), or constant time, because no matter how large the files are, you will still get there in the same amount of time.

But wait, what are those O(n) and O(1) things again?!? Let’s dive into that a bit more.

Big O Notations for Common Complexities

While there are many time complexities out there, many of which can be combined in different scenarios, the most common Big O notations you will run across are:

O(1) — Constant Time

O(log n) — Logarithmic Time

O(n) — Linear Time

O(n²) — Quadratic Time

O(2^n) — Exponential Time

O(n!) — Factorial — Not actually going to get into this one. I look at it as the “O(SHIT!)” of time complexities… Stay away if you possibly can!

So let’s dive into each of these in a little more detail with some explanations and examples.

O(1) — Constant Time

Constant time essentially means that no matter the size of the input, the time to complete the function is not affected.

Examples — Accessing an array index, Inserting a node on a linked list, Push and pop on a stack
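
As a quick illustration, here is a minimal Python sketch of a couple of constant-time operations (the names here are just illustrative, not from any particular library):

```python
def get_first_item(items):
    # List indexing takes the same amount of time no matter how long the list is.
    return items[0]

stack = []
stack.append("task")  # push onto a stack: O(1)
top = stack.pop()     # pop off a stack: O(1)
```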

O(log n) — Logarithmic Time

Logarithmic time is the trickiest time complexity that I will explain here. As the input size (n) increases, the runtime grows more and more slowly. The most common example of this is binary search. Essentially, what happens in a binary search is that with each “step” you rule out half of the remaining input values. Therefore, this algorithm produces a growth curve that rises steeply at the very beginning and flattens out as the size of the input data set increases.

Examples — Binary search, Many divide-and-conquer algorithms
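
To show where that halving comes from, here is a minimal Python sketch of binary search (the function name is my own choice, not from the article):

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if it isn't there."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2          # look at the middle of the remaining range
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1                # rule out the lower half
        else:
            high = mid - 1               # rule out the upper half
    return -1

# Each pass discards half of what's left, so the number of passes grows
# with log2(n) rather than with n.
```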

O(n) — Linear Time

With a linear operation, the time an algorithm takes to complete directly correlates to the input size, for lack of a better term, linearly. Many will say they exhibit a 1:1 ratio, and while that is a very common case, it isn’t necessarily the only one. While constant time is what most strive for, linear time is the best possible time complexity you can achieve when you have to iterate through every element of your input data set.

Examples — Traversing an array or linked list, Linear search, Comparing two strings, Bucket sort
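
For example, here is a minimal Python sketch of a linear search (again, the name is just illustrative):

```python
def linear_search(items, target):
    # In the worst case we look at every element once, so time grows linearly with n.
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1
```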

O(n²) — Quadratic Time

An algorithm with quadratic time will exhibit a complexity that is proportional to the square of the size of the input data set (n). In other words, the time to complete the algorithm will grow in an n*n, or n², fashion.

Examples — Bubble sort, Nested for loops, Insertion sort, Selection sort
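
Here is a minimal Python sketch of bubble sort, where the nested loops are what push it into quadratic territory (this is a bare-bones version, not an optimized one):

```python
def bubble_sort(items):
    n = len(items)
    for i in range(n):                   # outer pass over the list
        for j in range(n - i - 1):       # inner pass compares neighbors
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

# Roughly n passes of roughly n comparisons each: about n * n work in total.
```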

O(2^n) — Exponential Time

With exponential time, every additional element in your data set roughly doubles the runtime. It is a lot like the opposite of logarithmic time: it starts off shallow, but ramps up very quickly as the data set gets larger.

Example — Tower of Hanoi
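
Here is a minimal Python sketch of the Tower of Hanoi to show why it is exponential (the function name and arguments are my own choices):

```python
def hanoi(n, source, target, spare):
    # Moving n disks means moving n - 1 disks twice plus one extra move,
    # which works out to 2**n - 1 moves: each extra disk doubles the work.
    if n == 0:
        return
    hanoi(n - 1, source, spare, target)
    print(f"move disk {n} from {source} to {target}")
    hanoi(n - 1, spare, target, source)

hanoi(3, "A", "C", "B")  # prints the 7 moves needed for three disks
```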

What do they all look like together?

Plotted together, you can see the big differences between the complexities: the constant and logarithmic curves stay nearly flat, the linear curve grows steadily, and the quadratic and exponential curves shoot upward as n gets large.

Wrapping up

So in this post we’ve gone over the common time complexities you will run into, and you should now have a basic understanding of how to evaluate your algorithms. One great thing to do is think about the complexity of things you do in your day-to-day activities. What’s the complexity of searching through a dictionary for a word? What about reading a book? Cooking a recipe for n people?
