Big O Notation — Keeping it simple.

Abhishek Dwivedi · Published in CodeX · 3 min read · Mar 24, 2022

The learning journey of Data Structures and Algorithms almost always starts with Big O notation. So what really is Big O, and why is it such a big deal?

Writing a solution to a given problem is not difficult; the difficult part is writing a scalable solution that continues to work well as the input size increases.

Big O helps us with just that: it tells us how good a solution is by measuring its scalability. Big O does not tell us how much time a solution will take; instead, it gives us an idea of how the number of operations grows as the input grows.

Let’s see various common Big O Notations —

  1. O(1) — Best.
  2. O(log n) — Better.
  3. O(n) — Good.
  4. O(n²) — Bad.

There are a few more, such as O(n!), which is horrible, but we rarely encounter those, so let us keep them out of scope for this article.

O(1) — Constant time.

In the code below, the number of operations is constant: no matter how big the input is, the code executes a fixed number of operations. If the array contains 5 elements, the code executes once; if the array grows to 50,000 elements, it still executes JUST ONCE. This is an example of O(1).
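The embedded snippet did not survive here, so here is a minimal sketch of a constant-time operation (the function name and values are illustrative):

```python
def get_first_item(items):
    # A single operation, no matter how large the list is: O(1).
    return items[0]

print(get_first_item([7, 8, 9]))            # one operation for 3 elements
print(get_first_item(list(range(50000))))   # still one operation for 50,000 elements
```

Indexing into an array is constant time because the computer can jump straight to the element's position without scanning the rest of the input.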

The point to note here is, if a function executes, say, 3 operations instead of just 1, you could write O(3). But Big O drops constant factors, so O(3) still simplifies to O(1): what matters is that the operation count does not depend on the input size.

The graph shows that the number of operations is CONSTANT regardless of the input size.

O(log n) — Logarithmic time.

O(log n) complexity comes into the picture when we use a divide-and-conquer style of solution. Binary search is a great example of O(log n) complexity.

In the binary search code below, we repeatedly halve the portion of the array that could contain the item we are searching for, until we have narrowed the possible locations down to just one.
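The original code block was lost in publishing, so here is a standard iterative binary search sketch (the input must already be sorted):

```python
def binary_search(sorted_items, target):
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2        # midpoint of the current range
        if sorted_items[mid] == target:
            return mid                 # found it
        elif sorted_items[mid] < target:
            low = mid + 1              # discard the left half
        else:
            high = mid - 1             # discard the right half
    return -1                          # target is not in the array

print(binary_search([1, 3, 5, 7, 9], 7))   # index 3
```

Each iteration discards half of the remaining range, so an array of 1,000,000 elements needs only about 20 comparisons in the worst case.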

The graph shows how the number of operations increases logarithmically as the input size grows.

O(n) — Linear time.

In the code below, the number of operations is linear: the number of operations increases in direct proportion to the input size. If the array contains 5 elements, the for-loop runs 5 times; if the array grows to 50,000 elements, the loop runs 50,000 times.
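Again the snippet is missing from the extracted article, so here is a simple sketch of a linear-time pass (summing is just an illustrative choice):

```python
def sum_items(items):
    total = 0
    for item in items:   # the loop body runs len(items) times: O(n)
        total += item
    return total

print(sum_items([1, 2, 3, 4, 5]))   # 5 elements -> 5 additions
```

Doubling the input doubles the work: the cost grows on a straight line with n.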

The graph shows how the number of operations increases linearly as the input size grows.

O(n²) — Quadratic time.

In the code below, the number of operations is quadratic: the number of operations grows with the square of the input size. There are two nested loops, so for each item in the input array, the inner loop runs over the whole array again. Hence, if the array size is 2, the total is 4 operations; if the array size is 3, the total is 9.
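The code did not survive extraction, so here is a minimal sketch of the nested-loop pattern (building all pairs is an illustrative example of quadratic work):

```python
def all_pairs(items):
    pairs = []
    for a in items:            # outer loop: n iterations
        for b in items:        # inner loop: n iterations per outer iteration
            pairs.append((a, b))
    return pairs               # n * n = n**2 pairs in total

print(len(all_pairs([1, 2])))      # 2 elements -> 4 operations
print(len(all_pairs([1, 2, 3])))   # 3 elements -> 9 operations
```

Quadratic growth is why nested loops over the same input become painful quickly: 10x more input means 100x more work.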

The graph shows how the number of operations increases quadratically as the input size grows.

Hope you found the article useful. Please post your questions and feedback in the comments, and I will be glad to respond. If you liked the article, consider following me on Medium and connecting on LinkedIn.


Sr. iOS Engineer by profession. Author and trainer by passion.