A Comprehensive Guide to Time Complexity and Big O Notation

Big O is the language we use to describe the complexity of an algorithm, that is, how fast or how slow it can be.

Rajesh Kumar
Published in The Startup
Jul 1, 2020

Not just time; we can also measure the space an algorithm uses. You might now be wondering what complexity is. When it comes to a technical interview, or to choosing an algorithm to solve a problem, complexity plays a major role.

As we know, the raw speed of an algorithm depends on the processor and CPU power. But Big O helps us find out how quickly the running time grows on any computer. With Big O we can express that runtime abstractly: we describe how the running time of an algorithm grows as the input gets larger. Big O is also known as the asymptotic upper bound. An upper bound is a value that is greater than or equal to every element of a set of data.

The complexity of an algorithm:

We measure complexity in two terms:

  1. Space Complexity: calculate the space used by an algorithm
  2. Time Complexity: calculate the running time of an algorithm

When we calculate the complexity of an algorithm, we talk about three cases (the small search sketch after the list makes them concrete):

  1. Best Case
  2. Average Case
  3. Worst Case
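
To make these three cases concrete, here is a minimal linear-search sketch (the linear_search function and the sample list are illustrative, not one of the article's original examples). Finding the target on the first step is the best case, finding it somewhere in the middle is the average case, and scanning every element without finding it is the worst case:

def linear_search(items, target):
    # Return the index of target in items, or -1 if it is absent.
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

numbers = [4, 8, 15, 16, 23, 42]
print(linear_search(numbers, 4))     # best case: found in 1 step
print(linear_search(numbers, 16))    # average case: about n/2 steps
print(linear_search(numbers, 99))    # worst case: all n steps, target absent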

So, let's see how to calculate Big O.

Calculating Big O

Classifying algorithms from best to worst (the short script after this list shows how quickly these step counts diverge):

  • Constant algorithm — O(1) — Runtime does not grow with n at all.
  • Logarithmic algorithm — O(log n) — Runtime grows logarithmically in proportion to n.
  • Linear algorithm — O(n) — Runtime grows directly in proportion to n.
  • Superlinear algorithm — O(n log n) — Runtime grows in proportion to n log n, slightly faster than linear.
  • Polynomial algorithm — O(nᶜ) — Runtime grows faster than all of the previous classes as n grows.
  • Exponential algorithm — O(cⁿ) — Runtime grows even faster than a polynomial algorithm as n grows.
  • Factorial algorithm — O(n!) — Runtime grows the fastest and quickly becomes unusable for even small values of n.
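
To get a feel for how quickly these classes pull apart, here is a small illustrative script (not one of the article's original examples) that prints rough step counts for a few input sizes:

import math

for n in [10, 100, 1000]:
    print(f"n={n}: log n ≈ {math.log2(n):.0f}, "
          f"n log n ≈ {n * math.log2(n):.0f}, "
          f"n^2 = {n**2}, "
          f"2^n has {int(n * math.log10(2)) + 1} digits")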

Let’s take some examples to calculate running time in Big O:

Constant Time Complexity O(1):

name = "Your Name"
print(name)

The above example runs in one step, so its complexity is O(1). O(1) is also called constant time complexity. In simple words, we can calculate time complexity by counting how many steps it takes to complete the task.
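
Another everyday constant-time operation is reading a list element by its index. This tiny sketch (the list is just illustrative data) fetches an element in one step no matter how many items the list holds:

names = ["Name1", "Name2", "Name3"]
print(names[0])    # one step regardless of the list's length: O(1)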

Linear Time Complexity O(n):

def print_names(name_list):
    for name in name_list:
        print(name)

The above example runs n times, so its time complexity is O(n). Here, n is the number of names in the list. In the best case the list holds just one name, so the best-case time complexity is O(1). O(n) is also called linear time complexity.

Polynomial Time Complexity O(nᶜ):

def print_first_last_name(name_list):
    for names in name_list:
        for name in names:
            print(name)

name_list = [["Name1", "Name2", "Name3"],
             ["Name1", "Name2", "Name3"],
             ["Name1", "Name2", "Name3"]]
print_first_last_name(name_list)

It's a dummy example: name_list could hold n nested lists with n names in each; the data depends on the given inputs.

In the above example we used nested loops, so the outer loop runs n times and, for each of those iterations, the inner loop runs n times. The time complexity is therefore n * n = n², and this function runs in O(n²) time.

And what if there are multiple loops or multiple single steps?

Let's solve this problem by taking an example:

name = "Something"for i in range(n):
print(i)
for i in range(n):
for j in range(n):
print(i, j)
for k in range(n):
print(k)

Calculating the complexity of the above example gives f(n) = n² + 2n + 1. When we convert to Big O, we drop the constants and lower-order terms and keep only the dominant term. For the above example, that is O(n²).

Exponential Time Complexity O(cⁿ):

def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

num = int(input("num: "))
print(fib(num))

When a program's running time increases as cⁿ, we say it has exponential time complexity. The above example is the recursive method for calculating the n-th Fibonacci number, and it takes exponential time, i.e., O(cⁿ). In a recursive program, a function calls itself. In the above example, every call makes two further calls, so its time complexity is O(2ⁿ).
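
We can check this growth empirically. Below is a small counting sketch (the fib_counted wrapper and its counter argument are illustrative, not part of the original example); it tallies how many calls the recursion makes for increasing n, and the totals grow exponentially:

def fib_counted(n, counter):
    counter[0] += 1    # count every recursive call
    if n <= 1:
        return n
    return fib_counted(n - 1, counter) + fib_counted(n - 2, counter)

for n in range(5, 25, 5):
    counter = [0]
    fib_counted(n, counter)
    print(n, counter[0])    # call counts: 15, 177, 1973, 21891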

Logarithmic Time Complexity O(log n):

def power_of_2(a):
    x = 0
    while a > 1:
        a = a // 2    # halve a on every pass, so the loop runs about log2(a) times
        x = x + 1
    return x

Mathematically, we say an algorithm takes logarithmic time when T(n) = O(log n). Many divide-and-conquer methods have logarithmic time complexity. Binary search is a classic example: the values are already sorted, so it can discard half of the remaining values after every step.
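
For reference, here is a minimal binary search sketch (illustrative, not one of the article's original examples). Because it throws away half of the remaining values on every step, it needs only about log₂(n) steps on a sorted list:

def binary_search(sorted_items, target):
    # Return the index of target in sorted_items, or -1 if it is absent.
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1    # discard the lower half
        else:
            high = mid - 1   # discard the upper half
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 11))    # prints 4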

When we talk about Big O, we usually mean the worst case.
