What is Big O Notation?

Nemo Cee
6 min read · Jun 25, 2018


In a nutshell, it is a way of classifying how long an algorithm will take to compute something. Note that I stated classifying and not calculating or computing.

Think about going to a party, a parade, or a concert at the park. If I want to know how big the event was, I’m not looking for the exact number of 457 participants, or 3,278 people but rather the order of magnitude - was it in the single digits, in the tens (between 10 and 99), hundreds (100–999), thousands, ten-thousands, etc. Another way of writing all of this would be to say the event was in the order of 1, 10, 100, 1000, 10000, etc.

Back to algorithms and programming. Depending on what your program does, certain algorithms get to the answer faster than others. Big O notation is a way to classify algorithms based on how “fast” they can perform.

O(N) ~ Linear


Suppose you have a list of items (not sorted) and your program wants to find a particular element in this list. For the purposes of classification, let’s assume the worst-case scenario, that is, that the element we are looking for is the very last element on the list (we take the worst-case in order to get a full picture of how long an algorithm can take). In this case, then, the algorithm will take N tries, where N is the number of elements in the list, and we write that the algorithm is classified as being O(N) or belonging to the order of magnitude of O(N).

Being of magnitude O(N) means that as N (the list size) gets bigger, so does the time it takes, because the algorithm needs up to N checks. So, if the size of the list, N, is 1, then it will take 1 unit of time for the algorithm to find the answer. If N=2, time = 2, and so on. It grows at a constant one-to-one rate (if you increase N by 1, then time will increase by 1).
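
To make this concrete, here’s a minimal Python sketch of that linear scan (the list and the function name are just made up for illustration):

```python
def find_index(items, target):
    """Scan the list one element at a time: O(N) checks in the worst case."""
    for i, value in enumerate(items):
        if value == target:
            return i            # found it after i + 1 checks
    return -1                   # looked at all N elements and never found it

# Worst case: the target is the last element, so every one of the N items gets checked.
guest_list = [42, 7, 19, 88, 3, 61]
print(find_index(guest_list, 61))   # prints 5
```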

O(1) ~ Constant

Suppose now we have a number, N, of any size and we want to know if it is odd or even. Then all we have to do is divide N by two and check if the remainder is 0 or 1. So, it takes just one check to determine if the number is even or odd, regardless of the size of N. That is, it doesn’t matter how big N is: it could be small, or you can increase it to any size you want, and it will always take just one check to determine if it’s even or odd. Belonging to O(1) means that your computation will always take 1 unit of time, regardless of the size of N (unlike our previous example, where as you make N bigger, the time also gets bigger).
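
A quick sketch of that single check (the function name is my own):

```python
def is_even(n):
    """One division-with-remainder and one comparison, no matter how big n is: O(1)."""
    return n % 2 == 0

print(is_even(8))                       # True
print(is_even(123456789012345678901))   # False -- still just one check
```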

O(N²) ~ Quadratic

Suppose we want to multiply two numbers of N digits each. For example, suppose we have the number 47238 and the number 99361, both with N=5 digits. To multiply them we’d need to multiply each digit from the first number times each digit from the second one. For example, for 47238 (which is 40000 + 7000 + 200 + 30 + 8) we’d have: 8*1 + 30*1 + 200*1 + 7000*1 + 40000*1, where the 1 comes from 99361. Then we’d have to do the same process for the 6, 3, 9, and 9. In other words, we would need what we just computed but 5 total times, and you see that our sum contains 5 terms, so we would need 5*5 = 25 calculations, which is the same as N*N or N². So, being of O(N²) order means that whatever N is, the time needed to compute the calculations will be the square of that N.
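
Here’s a rough sketch that just counts those digit-by-digit products (the function is made up for illustration; real long multiplication also handles carries and place values, but that doesn’t change the N*N count):

```python
def count_digit_products(a, b):
    """Count the single-digit multiplications needed for grade-school
    multiplication: one per pair of digits, so N * N in total."""
    count = 0
    for _ in str(a):          # N digits in the first number...
        for _ in str(b):      # ...each paired with one of the N digits of the second
            count += 1
    return count

print(count_digit_products(47238, 99361))   # prints 25, i.e. 5 * 5
```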

O(2^N) ~ Exponential

Ok, step by step. Exponential functions are functions where the variable (the thing that gets bigger) is the actual exponent, not the base as it was above (in N², the N is the base and the 2 is fixed). Now the base is a constant like 2, but the exponent is what changes. So as N grows 1, 2, 3, 4, …, the time grows 2¹, 2², 2³, 2⁴, …


What of an example? Ok. The Fibonacci numbers are the numbers 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, …; you get the next one by adding the two previous ones. Easy, right? Suppose you want to calculate the n-th Fibonacci number Fₙ. Well, you’d need:

Fₙ = Fₙ₋₁ + Fₙ₋₂

= (Fₙ₋₂ + Fₙ₋₃) + (Fₙ₋₃ + Fₙ₋₄)

= so on and so forth

The first line gives you 2 calculations, the second line gives you 4, the 3rd line 8, the 4th 16, and so on. You see that at every successive line the work roughly doubles: 2, then 2*2, then 2*2*2, then 2*2*2*2, etc., so that by the very last line we would have 2*2*2*…*2, n times, or 2^n. The program would need on the order of this many calculations to find the value of Fₙ.
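
A sketch of that naive recursion (this is the straightforward translation of the definition, not an efficient way to compute Fibonacci numbers):

```python
def fib(n):
    """Naive recursion straight from the definition: every call spawns two
    more calls, so the amount of work roughly doubles at each level -- O(2^N)."""
    if n <= 2:
        return 1               # F1 = F2 = 1
    return fib(n - 1) + fib(n - 2)

print(fib(10))   # prints 55
```

Try something like fib(35) and you’ll feel it: each extra step roughly doubles the work.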

O(log N) ~ Logarithmic

Last one. There are other ‘buckets’ of order, but these are the more common ones to start familiarizing ourselves with. Suppose we have a large list of numbers (e.g. 1, 3, 7, 34, 56, 235, 4567, etc.), but this time it is ordered (let’s say in ascending order) and we are looking for a specific element. We could check each element one by one as we did in the first example, but a better approach is to divide this list into two buckets, right down the middle: if the number we are looking for is bigger than everything in the first bucket, then we know to ignore that first bucket and focus on the second one. We repeat this halving process with the second bucket, dividing it in two and doing the same check. In the worst-case scenario, our number won’t turn up until the very last halving, when we end up with only one number in the bucket.


So, how many times do we need to halve-and-check? Let’s say our list has 64 elements. Our first check will create buckets of size 32, then 16, 8, 4, 2, and finally 1. So that’s 6 halve-and-checks. For a list of 64 elements it took 6 units of time to complete. What’s the operation that gets you that? Well, if you raise 2 to the power of 6 you get 64, and this is what log₂(64) is: it answers “what is the exponent to which we raise 2 in order to get 64?” That’s 6.
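
Putting the halve-and-check idea into a minimal Python sketch (the list and names are just for illustration):

```python
def binary_search(sorted_items, target):
    """Halve the search range on every check: O(log N) checks in the worst case."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid                    # found it
        elif sorted_items[mid] < target:
            low = mid + 1                 # ignore the lower half
        else:
            high = mid - 1                # ignore the upper half
    return -1                             # not in the list

numbers = [1, 3, 7, 34, 56, 235, 4567]
print(binary_search(numbers, 235))        # prints 5
```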

Hope this helps. It’s worth graphing some of these to see their behavior as N gets very big. I suggest using the Desmos calculator.
