
The Big-O! Time complexity with examples

Varda Paropkari · May 22, 2020

The very first things a good developer considers when choosing between different algorithms are how much time the algorithm will take to run and how much space it will need. In this article we are going to talk about why time complexity matters, and what the most common time complexities are.

Why is running time so important?

Take the example of Google Maps: you want the shortest path from A to B as fast as possible. Or, in the case of data analysis, you want the analysis done as fast as possible. So, to get the desired results from an algorithm in an optimal amount of time, we take time complexity into consideration.

How to measure time complexity?

Do we measure absolute time?

No, we count the number of steps in the algorithm as a function of the input size.

Time complexity, then, is a simplified mathematical way of analyzing how long an algorithm with a given number of inputs (n) will take to complete its task. The input can be of any size, but we are usually interested in large inputs, so we make an approximation: we keep only the term in our expression that has the greatest impact as n increases. This is called asymptotic analysis.

There are three asymptotic notations commonly used to describe the running time of an algorithm:

1) Big-O

2) Big Omega

3) Big theta

Big Omega notation (Ω):

It describes the limiting behavior of a function as the argument tends towards a particular value or infinity. It gives the lower bound of an algorithm’s running time: the best case, or the least amount of time an algorithm can possibly take to complete.

For example: if an algorithm has Ω(n²) running time, then it is also true that the algorithm is Ω(n), Ω(log n) and Ω(1).

Big Theta notation (θ):

It describes the limiting behavior of a function as the argument tends towards a particular value or infinity. It gives both a lower bound and an upper bound on an algorithm’s running time: the running time is sandwiched between two constant multiples of the same function.

Big-O notation:

It describes the limiting behavior of a function as the argument tends towards a particular value or infinity. It gives the upper bound of an algorithm’s running time: the worst case, or the longest amount of time an algorithm can possibly take to complete.

For example: if an algorithm has O(n²) time complexity, then it is also true that the algorithm is O(n³), O(n⁴) and O(n⁵).
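For reference, the three notations have standard formal definitions (stated here in the usual textbook form, not taken from the examples above):

```latex
f(n) = O(g(n))      \iff \exists\, c > 0,\ n_0 : f(n) \le c \cdot g(n) \ \text{for all } n \ge n_0 \\
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : f(n) \ge c \cdot g(n) \ \text{for all } n \ge n_0 \\
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and} \ f(n) = \Omega(g(n))
```

In words: Big-O bounds the function from above, Big Omega from below, and Big Theta from both sides at once.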

We will be focusing on Big-O notation in this article. We have already discussed what Big-O notation is; now let us look at the common time complexities described in Big-O notation.

The most common time complexities, from fastest to slowest, are:

O(1) — constant
O(log n) — logarithmic
O(n) — linear
O(n log n) — quasilinear
O(n²) — quadratic
O(c^n) — exponential
O(n!) — factorial

Let’s go through each one of these common time complexities.

1) Constant Time [O(1)]:

When the algorithm’s running time doesn’t depend on the input size, it is said to have constant time complexity.

For example, when we have to swap two numbers:

def swap(a, b):
    temp = a     # hold on to the first value
    a = b
    b = temp
    return a, b  # return the swapped pair

Another example is determining whether a number is odd or even. In all these examples the time complexity is O(1), as it is independent of the input size.
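The parity check is just as easy to sketch (`is_even` is a name I am introducing for illustration, not from the examples above):

```python
def is_even(n):
    # One modulo operation and one comparison, no matter how large n is,
    # so the running time is O(1).
    return n % 2 == 0

print(is_even(10))  # True
print(is_even(7))   # False
```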

2) Logarithmic Time [O(log n)]:

When the size of the input is reduced at each step, the algorithm is said to have logarithmic time complexity. The classic example of logarithmic time complexity is binary search, which works on a sorted (ordered) list: because the values are in order, each comparison lets us discard half of the remaining list.

Let us take the example of binary search, where we need to find the position of an element in a sorted list.

def bin_search(data, value):
    n = len(data)                  # length of the list
    left = 0                       # left boundary of the search range
    right = n - 1                  # right boundary of the search range
    while left <= right:
        mid = (left + right) // 2  # middle of the current range
        if value < data[mid]:      # value must be in the left half
            right = mid - 1
        elif value > data[mid]:    # value must be in the right half
            left = mid + 1
        else:
            return mid             # found: return its position
    return -1                      # value is not in the list

if __name__ == '__main__':
    data = [10, 20, 30, 40, 50, 60, 70, 80, 90]
    print(bin_search(data, 8))     # -1, since 8 is not in the list

Here you can see in the code that we halve the search range at each step, hence we can conclude that the time complexity is O(log n).
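To see where the logarithm comes from, count how many times a range of 1024 elements can be halved before only one element remains (a small sketch of mine, separate from the search code):

```python
n = 1024      # size of the sorted list
steps = 0
while n > 1:
    n //= 2   # each binary-search comparison halves the range
    steps += 1
print(steps)  # 10, i.e. log2(1024)
```

So a list of about a thousand elements needs only around ten comparisons.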

3) Linear Time [O(n)]:

When the running time increases linearly with the input size, the algorithm is said to have linear time complexity. Consider an algorithm that processes a list of items one by one: if processing 10 items takes a certain amount of time, processing 20 items takes roughly twice as long, and 40 items roughly four times as long. The relation between input size and time taken is linear.

For example, consider an unsorted list in which we want to find the maximum number:

numbers = [10, 20, 300, 40.5, 50]
maximum = numbers[0]       # start with the first element
for num in numbers:
    if num > maximum:      # found a larger value
        maximum = num
print(maximum)             # 300

In this example we need to look at every value in the list and check whether it is greater than the largest value seen so far, which is stored in the variable ‘maximum’. Visiting each value once with the ‘for’ loop makes the loop O(n), and the comparison inside the loop is a single O(1) operation. Therefore, the overall time complexity is O(n).
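One way to convince yourself of the O(n) cost is to count the comparisons explicitly (`find_max` and its comparison counter are additions of mine, not part of the original example):

```python
def find_max(numbers):
    maximum = numbers[0]
    comparisons = 0
    for num in numbers[1:]:   # one comparison per remaining element
        comparisons += 1
        if num > maximum:
            maximum = num
    return maximum, comparisons

print(find_max([10, 20, 300, 40.5, 50]))  # (300, 4)
```

Doubling the length of the list doubles the number of comparisons, which is exactly what linear growth means.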

4) Quasilinear Time [O(n log n)]:

When the algorithm performs a logarithmic-time operation for each of the n inputs, it is said to have quasilinear time complexity.

You can compare this with linear time complexity: there, each of the n inputs required O(1) work, giving O(n) overall. Here, each of the n inputs requires O(log n) work, so the resulting time complexity is O(n log n).

Quasilinear time complexity is common in sorting algorithms such as merge sort, heapsort and (on average) quicksort.
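Merge sort is a good way to see where the O(n log n) comes from: the list is split in half O(log n) times, and each level of splitting does O(n) work merging the halves back together. A minimal sketch (an illustration, not production code):

```python
def merge_sort(data):
    if len(data) <= 1:             # base case: already sorted
        return data
    mid = len(data) // 2
    left = merge_sort(data[:mid])  # O(log n) levels of splitting
    right = merge_sort(data[mid:])
    merged = []                    # merging does O(n) work per level
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])        # append whatever remains
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```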

5) Quadratic Time [O(n²)]:

When the algorithm performs a linear, O(n) operation for each of the n values in the input data, it is said to have quadratic time complexity.

For example, whenever there is a nested ‘for’ loop over the input, the time complexity is quadratic: we iterate through all the values for each value in the list, giving O(n) * O(n), i.e. O(n²).

items = [1, 2, 3]
for i in items:
    for j in items:   # inner loop runs fully for every i
        print(i, j)

A few examples of quadratic time complexity are bubble sort, insertion sort, etc.
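Bubble sort shows the nested-loop pattern directly: the outer loop makes n passes and the inner loop makes up to n comparisons per pass, giving O(n²) in total. A minimal sketch:

```python
def bubble_sort(data):
    n = len(data)
    for i in range(n):              # outer loop: n passes
        for j in range(n - 1 - i):  # inner loop: up to n comparisons
            if data[j] > data[j + 1]:
                # swap neighbours that are out of order
                data[j], data[j + 1] = data[j + 1], data[j]
    return data

print(bubble_sort([4, 1, 3, 2]))  # [1, 2, 3, 4]
```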

6) Exponential Time [O(c^n)]:

Here ‘c’ is any constant; let’s take c = 2 for this article, so the time complexity becomes O(2^n).

When the time required by the algorithm doubles with every addition to the input size, it is said to have exponential time complexity.

Some examples of exponential time complexity are calculating Fibonacci numbers recursively, solving the traveling salesman problem with dynamic programming, etc.

To calculate Fibonacci numbers we can use a recursive function, which means the function calls itself. The time complexity then depends on how many times the function calls itself, as well as the work done in each call.

def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)  # the function calls itself twice
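You can watch the exponential blow-up by counting the calls (the counter argument is my addition for illustration; it is not needed to compute the answer):

```python
def fib_counted(n, counter):
    counter[0] += 1  # count this call
    if n <= 1:
        return n
    return fib_counted(n - 1, counter) + fib_counted(n - 2, counter)

counter = [0]
print(fib_counted(10, counter))  # 55
print(counter[0])                # 177 calls just for n = 10
```

Each call spawns two more, so the call count roughly doubles as n grows by one, which is where the O(2^n) behavior comes from.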

7) Factorial [O(n!)]:

When the number of operations grows factorially with the input size, we say that the algorithm has factorial time complexity.

A simple illustration is a recursive function that, for an input of size n, makes n recursive calls, each of which makes n-1 calls, and so on, giving on the order of n! calls in total. (Note that simply computing the value of n! is only O(n); it is the recursive call tree below that is factorial.)

def fact_time(n):
    if n <= 0:            # base case stops the recursion
        return
    for i in range(n):    # n recursive calls, each on input of size n-1
        fact_time(n - 1)
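A concrete algorithm with this cost is brute-force enumeration of every ordering of n items, for instance with Python’s itertools.permutations:

```python
from itertools import permutations

items = [1, 2, 3, 4]
orderings = list(permutations(items))  # all n! orderings of the list
print(len(orderings))                  # 24, i.e. 4!
```

Adding just one more item multiplies the work by n + 1, which is why factorial-time algorithms become impractical almost immediately.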

How to analyze algorithms using Big-O notation?

Now, while analyzing the time complexity of an algorithm we need to understand three cases: best case, worst case and average case.

To understand these cases, let us take a one-dimensional array of integers, [12, 6, 2, 8, -5, 22, 0], and search it for a specified number.

The best case in this example is when the number we are searching for is the first number in the array, i.e. 12. It is found in one iteration because it sits at index 0; this requires the least amount of time and gives the optimal time complexity of O(1).

The worst case in the above example is when the number to be searched for is at the end of the array, e.g. searching for the number 0.

Here zero is at index 6, so we have to traverse the whole array to find it. The algorithm therefore takes the longest time to search for a number in the array, and the time complexity becomes O(n).

There is another worst-case scenario: when the number to be searched for is not in the array at all.

The average case is when the number to be searched for is somewhere in the middle of the array, e.g. 8 in the example above. The algorithm then takes roughly n/2 steps, but since we are doing asymptotic analysis we drop the constant factor and consider the time complexity to be O(n).
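The three cases can be seen directly in a linear search over that array (`linear_search` is my sketch of the search being described, returning -1 when the value is absent):

```python
def linear_search(arr, target):
    for i, value in enumerate(arr):
        if value == target:
            return i  # found: return its index
    return -1         # not found: the other worst case

arr = [12, 6, 2, 8, -5, 22, 0]
print(linear_search(arr, 12))  # 0  -> best case: first element checked
print(linear_search(arr, 0))   # 6  -> worst case: whole array scanned
print(linear_search(arr, 99))  # -1 -> worst case: value absent
```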

Conclusion

From the above observations we can say that algorithms with time complexities such as O(1), O(log n) and O(n) are considered fast. O(n log n) can also be considered fast, but any time complexity above it, such as O(n²), O(c^n) and O(n!), is considered slow. Hence O(n log n) acts as a threshold: any time complexity above it is slower than the complexities below it.

Of course, when you try to solve complex problems you will come up with a hundred different ways to solve them. So the point here is not ‘right’ or ‘wrong’ but ‘better’ and ‘worse’. Hence, whenever you write code, keep time complexity in perspective; it will prove beneficial in the long run.

I hope you enjoyed the post and learned something from it. This is my first post. I wanted to start with this topic because even I struggled to understand time complexity during my bachelor’s, including how and where to apply it. Over the years, through practice, I have become quite confident with the concept, and I would encourage everyone to do the same.
