Understanding Big O & Time Complexity of Algorithms in 7 mins! — For Beginners

Yasser Shaikh
yasser.dev
Sep 4, 2022

Introduction

Recently I ended up on this 2012-dated question on Stack Overflow. I had posted that question while preparing for my interview at Directi — Media.net, a company known to be very focused on algorithms in its interviews (I did pass the interview and worked there for 5 long years before moving to Agoda, where I currently work).

It's been 10 years since I posted that question, and I want to re-attempt to answer it to see if I do anything differently this time, so here goes nothing 😎.

Photo by Markus Spiske on Unsplash

Big O

Big O notation is a metric used to measure how much time an algorithm takes to run (a.k.a. time complexity) or how much memory an algorithm consumes when it runs (a.k.a. space complexity).

Whenever we talk about Big O, we are interested in measuring how well an algorithm performs as the size of its input increases.

Big O helps us distinguish bad code from good code and good code from great code. — Andrei Neagoie

Big-O Complexity Chart from https://www.bigocheatsheet.com/

The most common time complexities you may have heard of or come across are O(n), O(n²), O(log n), O(n log n), etc. I will cover each of these in the following sections.

O(1) — Constant Time

An algorithm is said to run in constant time if it requires the same amount of time, regardless of the input size.

The following algorithm has a time complexity of O(1) —

public int GetFirstElement(int[] input)
{
    // A single array access, no matter how large the input is
    return input[0];
}

O(n) — Linear Time

An algorithm is said to have a time complexity of O(n) if its execution time is directly proportional to the input size. This is also referred to as "linear time".

The following algorithm has a time complexity of O(n), where n is the number of elements present in the array “input”.

public bool IsPresent(int[] input, int searchElement)
{
    // In the worst case, every element is visited once: n iterations
    for (int i = 0; i < input.Length; i++)
    {
        if (input[i] == searchElement) return true;
    }
    return false;
}

Big O Analysis — Best, Average & Worst cases

Before we continue to the more complex Big Os, now would be a good time to understand the different cases.

Best Case Analysis — Here we calculate the best or minimum time taken by an algorithm to finish.

Average Case Analysis — To calculate this, we need to be aware of all possible inputs our algorithm can receive. We then calculate the time taken for each of these inputs and divide the total by the number of inputs.

Worst Case Analysis — Here we calculate the worst or maximum time taken by an algorithm to finish.

Let's understand the differences using the example below. Here we have a snippet that searches for a player named 'kane' in a list of players. It returns true if the player is found, otherwise it returns false.

const players = ['ronaldo', 'messi', 'kane', 'bhutia', 'giggs', 'henry', 'zidane', 'drogba', 'rooney'];

function isKanePresent(list) {
    for (let i = 0; i < list.length; i++) {
        if (list[i] === 'kane') return true;
    }
    return false;
}

The best case here is that kane is at the very beginning (the first element), so the loop runs only once. The worst case is that kane, instead of being the 3rd element, is at the very end, or worse, not present in the list at all, in which case our program needs to iterate through every element in the list.

When talking about time complexity, we usually care only about the worst-case scenario. But why?! 😕

Say your program performs between ok and good on average, but for certain inputs it takes an unacceptable amount of time. As a programmer, it's very important to know about such scenarios and take steps to optimize them, or even change the algorithm completely. This is why during interviews, people mostly discuss and care about the worst-case complexity of your program.

O(n²)

In O(n²), as the number of elements increases, the time taken by the algorithm increases quadratically, i.e. in proportion to the square of the input size.

To understand this better, consider the following example, where you can see the computation time increasing at the pace of n²:

  • 1 item = 1 second
  • 10 items = 100 seconds
  • 100 items = 10,000 seconds

Typically (but not always), nested for loops have a time complexity of O(n²).

// len is the size of the input; the inner loop runs len times
// for each of the len outer iterations: len * len = n² operations
for (let i = 0; i < len; i++) {
    for (let j = 0; j < len; j++) {
        // something
    }
}

During interviews, it is very common for an O(n²) solution to be considered the brute-force approach, which often means there is a better solution in the form of O(n log n) or even O(n).
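As an illustration (my own sketch, not from the original post), here is a hypothetical brute-force O(n²) duplicate check next to an O(n) alternative that trades extra memory for speed using a HashSet:

using System.Collections.Generic;

// Brute force: compare every pair of elements -> O(n²) time
public bool HasDuplicateBruteForce(int[] input)
{
    for (int i = 0; i < input.Length; i++)
    {
        for (int j = i + 1; j < input.Length; j++)
        {
            if (input[i] == input[j]) return true;
        }
    }
    return false;
}

// Better: remember values already seen -> O(n) time, O(n) extra space
public bool HasDuplicateLinear(int[] input)
{
    var seen = new HashSet<int>();
    foreach (int value in input)
    {
        if (!seen.Add(value)) return true; // Add returns false if the value is already present
    }
    return false;
}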

O(log n)

To understand this, we will first look at the mathematics of logarithms and then see how they relate to complexity analysis.

Wikipedia has the following definition for Logarithm —

In mathematics, the logarithm is the inverse function to exponentiation.

A logarithmic function is the opposite of an exponential function.

That means the logarithm of a given number x is the exponent to which another fixed number, the base b, must be raised to produce that number x.

The binary logarithm uses base 2 (that is, b = 2) and is frequently used in computer science. (Why base 2? Because many algorithms, like binary search, repeatedly halve their input.)

Now, this definition might be a bit hard to digest at first 😅, so let's try to understand it with some examples —

For example, log(8) = 3 because 2³ = 8, and log(16) = 4 because 2⁴ = 16. (Throughout this post, log means the base-2 logarithm.)

Now that we know how to calculate log(N), let's keep increasing N and observe how slowly log(N) grows…

log(8) = 3
log(16) = 4
log(32) = 5
log(64) = 6
log(128) = 7
log(256) = 8
...
log(1024) = 10
log(~1M) = 20
log(~4B) = 32

One can clearly see here that when the input N doubles, log(N) increases only by 1. And in comparison to O(N), you can see how much better O(log N) performs.

The time complexity of O(log n) vs O(n)
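Another way to build this intuition (a small sketch of my own, not from the original post): floor(log(N)) is simply the number of times you can halve N before it reaches 1.

// Counts how many times n can be halved before reaching 1.
// The result is floor(log2(n)), e.g. Log2Floor(1024) returns 10.
public int Log2Floor(int n)
{
    int steps = 0;
    while (n > 1)
    {
        n /= 2; // integer division halves n each step
        steps++;
    }
    return steps;
}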

This time complexity is often found in "divide & conquer" algorithms, which recursively split the problem into smaller sub-problems (Merge Sort also merges the sub-results back in linear time, which is where its O(n log n) comes from). Examples: Binary Search (O(log n)), Merge Sort (O(n log n)), etc.
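Binary Search is the classic example: because the array is sorted, each comparison lets us discard half of the remaining range, so at most about log(n) + 1 comparisons are ever needed. Here is a minimal sketch (my illustration, not code from the original post):

// Returns the index of target in a *sorted* array, or -1 if not found.
public int BinarySearch(int[] sorted, int target)
{
    int low = 0;
    int high = sorted.Length - 1;
    while (low <= high)
    {
        int mid = low + (high - low) / 2; // midpoint, written to avoid integer overflow
        if (sorted[mid] == target) return mid;
        if (sorted[mid] < target) low = mid + 1;  // discard the left half
        else high = mid - 1;                      // discard the right half
    }
    return -1;
}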

Summary

Once you know how O(n), O(n²), and O(log n) work, it gets easier to understand the other time complexities that I have not covered in this post, like O(n³), O(n log n), O(n!), etc. Let's revisit the time complexity graph which I shared at the start of this post. I am guessing back then it wasn't clear why these lines are where they are, but by now I am hoping you are able to understand why 🙂

Big-O Complexity Chart from https://www.bigocheatsheet.com/

Feel free to drop in questions/comments. Thanks!
