A story of Big O

Aleks Shineleva
6 min read · May 7, 2018

--

To start from the beginning, Big O notation is used to describe the performance of a given algorithm, i.e. how its running time grows as the size of its input grows. In most cases, when we are looking at the Big O notation of a given algorithm, we are looking at the WORST possible case scenario of the execution. If you enjoy nested FOR loops, or fancy methods like .map(), .forEach(), .indexOf(), etc., which under the hood just abstract away the actual FOR loop, you are about to have a bad time… (get it? bad time… no? that’s ok). Big O helps us determine which algorithm could be best suited for solving a problem. What is Big O though? Big O stands for Order (Ordnung). It was first used by the German number theorist Paul Bachmann to represent an “order of”. n generally represents the size of the input data.
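For instance, here’s a tiny (made-up) illustration in JavaScript of how one of those fancy methods can hide a loop inside your loop:

```javascript
// .indexOf() looks like one step, but under the hood it scans the
// whole array. Calling it once per element of `a` means up to
// n * m comparisons: quadratic when the arrays are similar in size.
const a = [3, 1, 4, 1, 5];
const b = [5, 3, 1];

// Which elements of `a` also appear in `b`?
const shared = a.filter(x => b.indexOf(x) !== -1); // hidden nested loop
console.log(shared); // [3, 1, 1, 5]
```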

Big-O Cheat Sheet has some more wonderful charts and graphs for your viewing pleasure

Let’s start with O(n!). The performance of an algorithm in this class is pretty bad, also known as terrible. Obviously it is all relative to the problem we are trying to solve; in some cases the worst is the best and only option. A common example used to explain this kind of performance is known as the Traveling Salesman Problem. Any algorithm that requires you to go through every single option/permutation is an O(n!) algorithm. It runs in factorial time, which means (brace yourself for the Wikipedia explanation): “the factorial of a non-negative integer n, denoted by n!, is the product of all positive integers less than or equal to n”. Imagine trying to calculate the best possible way to get to your friend’s house, who lives across the country. The only way to know it, using an O(n!) algorithm, is to check every single possible combination of driving directions ever. Not ideal.
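A minimal sketch of what “checking every single possible combination” costs, assuming we just generate every ordering of the stops along the way:

```javascript
// Generate every possible ordering (permutation) of a list of stops.
// There are n choices for the first stop, n - 1 for the second, and
// so on: n! orderings in total.
function permutations(stops) {
  if (stops.length <= 1) return [stops];
  const result = [];
  for (let i = 0; i < stops.length; i++) {
    const rest = [...stops.slice(0, i), ...stops.slice(i + 1)];
    for (const perm of permutations(rest)) {
      result.push([stops[i], ...perm]);
    }
  }
  return result;
}

console.log(permutations(["A", "B", "C"]).length); // 3! = 6
// 10 stops -> 3,628,800 routes; 20 stops -> ~2.4 * 10^18. Not ideal.
```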

Your friend will most likely die waiting for you

O(2^n), also known as an exponential growth algorithm, means that for each additional element added to our data set, we double the amount of time it takes the algorithm to execute. In a small example it might not look bad, but as soon as we start working with larger inputs, the time we have to spend executing that algorithm skyrockets and shoots straight into another dimension. A common example is a recursive solution to finding Fibonacci numbers. I don’t know why people love Fibonacci so much, but you will see it EVERYWHERE! Essentially an O(2^n) algorithm has similar performance to O(n!), just a little bit better, but still pretty horrible, again, relative to the problem we are trying to solve.
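Here is the usual suspect, sketched out:

```javascript
// Naive recursive Fibonacci: every call spawns two more calls, so the
// call tree roughly doubles at each level -- exponential, ~O(2^n).
function fib(n) {
  if (n < 2) return n; // base cases: fib(0) = 0, fib(1) = 1
  return fib(n - 1) + fib(n - 2); // two recursive calls per call
}

console.log(fib(10)); // 55, instant
// fib(50) triggers on the order of 2^50 calls. Bring a book.
```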

The O(n²) algorithm might look like it could have better performance than O(n!) and O(2^n), but in the end it also flies off into another dimension with its execution time. In the words of Rob Bell: “O(N²) represents an algorithm whose performance is directly proportional to the square of the size of the input data set. This is common with algorithms that involve nested iterations over the data set. Deeper nested iterations will result in O(N³), O(N⁴) etc.” This algorithm is also known as a quadratic algorithm. Bubble sort is a good example of an O(n²) algorithm:
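Here’s a quick sketch of it in JavaScript, just to make the nested loops visible:

```javascript
// Bubble sort: the nested loops over the same array are what make it
// O(n²) -- roughly n passes, each doing up to n comparisons.
function bubbleSort(array) {
  const a = [...array]; // copy so the input isn't mutated
  for (let i = 0; i < a.length - 1; i++) {
    for (let j = 0; j < a.length - 1 - i; j++) {
      if (a[j] > a[j + 1]) {
        [a[j], a[j + 1]] = [a[j + 1], a[j]]; // swap adjacent pair
      }
    }
  }
  return a;
}

console.log(bubbleSort([5, 1, 4, 2, 8])); // [1, 2, 4, 5, 8]
```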

If you are not familiar with a bubble sort: “Bubble sort is a simple sorting algorithm that repeatedly steps through the list to be sorted, compares each pair of adjacent items and swaps them if they are in the wrong order. The pass through the list is repeated until no swaps are needed, which indicates that the list is sorted” (courtesy of Wikipedia).

Please enjoy a demonstration of it by Hungarian folk dancers.

O(n log n) is a linearithmic time algorithm, which means the running time is the result of performing an O(log n) operation n times. Whhhat? For example, binary tree sort creates a binary tree (have you seen one of those in the wild? They are beautiful) by inserting each element of the n-sized array one by one. Since the insert operation on a self-balancing binary search tree takes O(log n) time, the entire algorithm takes O(n log n) time.
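A minimal sketch of tree sort, with the caveat that for brevity it uses a plain (non-self-balancing) binary search tree, so its inserts are only O(log n) on average rather than guaranteed:

```javascript
// Tree sort: insert every element into a binary search tree, then read
// the tree back in order. With a self-balancing tree each insert is
// O(log n), so n inserts cost O(n log n) total. This plain BST only
// achieves that on average (a sorted input would degrade it to O(n²)).
function insert(node, value) {
  if (node === null) return { value, left: null, right: null };
  if (value < node.value) node.left = insert(node.left, value);
  else node.right = insert(node.right, value);
  return node;
}

function inOrder(node, out = []) {
  if (node === null) return out;
  inOrder(node.left, out);
  out.push(node.value);
  inOrder(node.right, out);
  return out;
}

function treeSort(array) {
  let root = null;
  for (const value of array) root = insert(root, value); // n inserts
  return inOrder(root); // O(n) walk to read the sorted result
}

console.log(treeSort([21, 5, 34, 1, 13])); // [1, 5, 13, 21, 34]
```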

O(n) is a linear time algorithm. What that means is that if we imagine n is the number of items in an array, and we were to print each number in the array, we would have to do that n times. If the number is 20, that seems pretty manageable, but if the number is 2000+, we will still have to print 2000+ times. A visual always helps:

A linear search, step by step (source: https://www.tutorialspoint.com/data_structures_algorithms/linear_search_algorithm.htm)
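In code, that picture looks something like this:

```javascript
// Linear search: in the worst case (target at the end, or missing),
// we check all n elements, so the work grows in a straight line with n.
function linearSearch(array, target) {
  for (let i = 0; i < array.length; i++) {
    if (array[i] === target) return i; // found it at index i
  }
  return -1; // looked at all n elements, no luck
}

console.log(linearSearch([10, 14, 19, 26, 27, 31, 33, 35, 42, 44], 33)); // 6
```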

Enter O(log n) and O(1). If we look at O(log n), think Binary Search. The point is to never work with the entire data set. What we want to do instead is reduce the amount of data we look at on every iteration, allowing us to cut down on time dramatically. If you follow through this lovely animation, in a binary search tree with a root of 21, we want to find 27. Instead of going through each node (which we will not do!), on each iteration we evaluate whether our target number is smaller or greater than the node we are currently at, which sends us either left or right. This in turn allows us to cut the number of moves (operations) we have to make in half on each iteration. Pretty neat!

Thank you JUNAIDSHAHID for such a great visual!
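The same halving idea, sketched over a sorted array instead of a tree (the mechanics are identical: compare once, throw away half):

```javascript
// Binary search on a sorted array: each comparison discards half of
// the remaining elements, so we make at most ~log2(n) comparisons.
function binarySearch(sorted, target) {
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (sorted[mid] === target) return mid; // found it
    if (sorted[mid] < target) lo = mid + 1; // target is bigger: go right
    else hi = mid - 1; // target is smaller: go left
  }
  return -1; // not in the array
}

console.log(binarySearch([1, 5, 13, 21, 27, 34], 27)); // 4
```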

Now, let’s talk about O(1), or constant time algorithms. The time (or space) will remain the same (constant), no matter the size of the data. Be it 5 or 5 million. Same time. Always. Forever. Dictionaries and Hash Tables are good examples of O(1). For example, if you are working with an object or an array, knowing the key or the index you are looking for will offer you constant time. One step → constant time. A real life example: you are going to meet a person by the name of Susan at a party with 2000 people. The cool thing about constant time is that the number of people is irrelevant; the only person you are looking for is Susan. No matter if the party is a rager, we don’t care, all we care about is hanging out with Susan. Susan, don’t be late, I hate waiting.
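A tiny (entirely hypothetical) party, in code:

```javascript
// Constant time: whether the object holds 5 entries or 5 million,
// looking up a known key is one step. The guest list is made up.
const guests = { susan: "by the snack table", bob: "on the dance floor" };
console.log(guests.susan); // "by the snack table" -- one step

// Same idea with an array, when you already know the index:
const numbers = [4, 8, 15, 16, 23, 42];
console.log(numbers[3]); // 16 -- one step, no matter the array's length
```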

To summarize:

  • O(1) - Constant time complexity
  • O(log n) - Logarithmic time complexity
  • O(n) - Linear time complexity
  • O(n log n) - Linearithmic time complexity
  • O(n²) - Quadratic time complexity
  • O(2^n) - Exponential time complexity
  • O(n!) - Factorial time complexity
