Introduction to Time complexity, Space complexity & Big-O Notation

Shahzeb Ali
9 min read · Aug 12, 2022


Time complexity, space complexity and Big-O notation often seem like hard topics to many people because of the mathematical equations involved, but I’m here to explain them in simple terms, and you’ll not get confused at all!

First, let’s start by understanding what a programming algorithm is. In computer programming terms, an algorithm is a set of well-defined instructions for solving a particular problem. It takes an input of some size (n) and produces the desired output.

Now let’s say we have to develop an algorithm to sort the numbers in a list. There could be a million algorithms that can do that: some would be faster and more efficient, while others would be slower and take more time to execute. The same algorithm also behaves differently on different inputs. Its fastest run is often referred to as the “best case scenario” and its slowest run as the “worst case scenario”, and you’ll come across these terms often in the programming world, especially when learning Data Structures and Algorithms.

Best case = fastest time to complete, with optimal inputs chosen. For example, the best case for a sorting algorithm would be data that’s already sorted. Worst case = slowest time to complete, with pessimal inputs chosen.

Let’s look at an example to understand this better:

image from https://logicmojo.com/sorting-algorithms

Now we have two unsorted lists of numbers, and our task is to sort them into the correct numerical order. Although there are many algorithms we could use to sort these two lists, for now we’ll look at just two of them.

  1. Bubble Sort is the most basic sorting algorithm. It works by repeatedly swapping neighbouring elements that are in the wrong order. Because of its average and worst-case time complexity, this approach is not suitable for huge data sets.
  2. Insertion Sort is a simple sorting algorithm that operates the same way you arrange cards in your hands. The array is divided into two sections: sorted and unsorted. Values from the unsorted part are picked and placed into the sorted part in the proper order.
Insertion sort characteristics (image from https://logicmojo.com/sorting-algorithms)

Now let’s first understand bubble sort. Bubble sort looks at each pair of neighbouring numbers as it passes over the array of integers. The lower number is placed on the left, towards the beginning of the array, while the larger number is placed on the right, near the end. This process is repeated, with bubble sort looping over the array until no swaps are made, resulting in a sorted array.

The raw algorithm of Bubble Sort:

bubbleSort(array, size)
  repeat (size - 1) times
    for each pair of neighbouring elements
      if the left element > the right element
        swap the two elements
    if no swaps were made in this pass, stop early
end bubbleSort

Algorithm implementation in Python:

list1 = [5, 3, 8, 4, 6]

# Traverse the list repeatedly; each pass bubbles the largest
# remaining element to the end of the unsorted part
for i in range(len(list1) - 1):
    swapped = False
    for j in range(len(list1) - 1 - i):
        # Swap neighbouring elements that are in the wrong order
        if list1[j] > list1[j + 1]:
            list1[j], list1[j + 1] = list1[j + 1], list1[j]
            swapped = True
    # If a full pass makes no swaps, the list is already sorted
    if not swapped:
        break

# Driver code to test the above
print("Sorted array")
for i in range(len(list1)):
    print("%d" % list1[i], end=" ")

Now let’s look at insertion sort. Insertion sort is a sorting algorithm in which the elements are transferred one at a time to their right position. In other words, insertion sort builds the final sorted list one item at a time, moving higher-ranked elements out of the way as needed.

The raw algorithm of Insertion Sort:

insertionSort(array)
  mark first element as sorted
  for each unsorted element X
    'extract' the element X
    for j <- lastSortedIndex down to 0
      if current element j > X
        shift sorted element to the right by one
      else
        break loop and insert X here
end insertionSort

Algorithm implementation in Python:

def insertionSort(array):
    for step in range(1, len(array)):
        key = array[step]
        j = step - 1

        # Compare the key to each element to the left of it until you
        # find one that is smaller.
        # For descending order, change key < array[j] to key > array[j].
        while j >= 0 and key < array[j]:
            array[j + 1] = array[j]
            j = j - 1

        # Place the key after the element that is smaller than it.
        array[j + 1] = key


data = [5, 3, 8, 4, 6]
insertionSort(data)
print('Sorted Array in Ascending Order:')
print(data)
image from https://pediaa.com/what-is-the-difference-between-bubble-sort-and-insertion-sort/

If I were to ask you which of these two algorithms saves more time, well, it’s insertion sort.

Now let’s look at why. The number of swaps is an important difference between bubble sort and insertion sort. Insertion sort makes fewer element moves than bubble sort, which makes it more efficient. Bubble sort checks neighbouring elements and swaps them accordingly, which drives the number of swaps up, while insertion sort transfers one element at a time into the partially sorted part of the array.
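
To make this concrete, here is a small sketch of my own (not part of the algorithms above) that counts the element writes each approach performs on the same list. A bubble sort swap writes two elements, while an insertion sort shift writes only one, so on a badly scrambled list insertion sort comes out ahead; the exact counts depend on the input order.

def bubble_sort_writes(arr):
    arr = arr[:]  # work on a copy
    writes = 0
    for i in range(len(arr) - 1):
        for j in range(len(arr) - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                writes += 2  # a swap writes two elements
    return writes

def insertion_sort_writes(arr):
    arr = arr[:]  # work on a copy
    writes = 0
    for step in range(1, len(arr)):
        key = arr[step]
        j = step - 1
        while j >= 0 and key < arr[j]:
            arr[j + 1] = arr[j]
            writes += 1  # a shift writes one element
            j -= 1
        arr[j + 1] = key
        writes += 1      # placing the key is one more write
    return writes

print(bubble_sort_writes([8, 6, 5, 4, 3]))     # 20 writes
print(insertion_sort_writes([8, 6, 5, 4, 3]))  # 14 writes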

These are just two of the many algorithms out there in the programming world, but they make useful examples. Now let’s move on to time complexity.

As we saw, different algorithms take different amounts of time to execute: some are faster and some are slower. But if we run the same bubble sort algorithm on an old, low-spec laptop versus a powerful one, the time taken may vary purely because of the computer’s specifications. That’s why we use the concept of time complexity.

Time complexity is defined as the amount of time taken by an algorithm to run, as a function of the length of the input. It measures the time taken to execute each statement of code in an algorithm. It does not examine the total execution time of an algorithm; rather, it tells us how the execution time varies (increases or decreases) as the number of operations in the algorithm increases or decreases. Yes, as the definition says, the amount of time taken is a function of the length of the input only.

Space complexity is simply the amount of memory space that an algorithm takes during the execution of a particular problem. It is calculated not only from the space used by the variables in the algorithm; it also includes the space taken by the input values.
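
As a small illustration of my own (not from the article), compare two ways of summing a list: the first uses a constant amount of extra memory no matter how long the list is, while the second builds an extra list that grows with the input.

def sum_constant_space(numbers):
    # O(1) auxiliary space: one extra variable, regardless of input size
    total = 0
    for n in numbers:
        total += n
    return total

def sum_with_running_totals(numbers):
    # O(n) auxiliary space: the running_totals list grows with the input
    running_totals = []
    total = 0
    for n in numbers:
        total += n
        running_totals.append(total)
    return total, running_totals

print(sum_constant_space([5, 3, 8, 4, 6]))       # 26
print(sum_with_running_totals([5, 3, 8, 4, 6]))  # (26, [5, 8, 16, 20, 26])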

Now that we know different factors can influence the outcome of an algorithm being executed, it is wise to understand how efficiently such programs perform the task. To gauge this, we need to evaluate both the space and time complexity of an algorithm.

By definition, the space complexity of an algorithm quantifies the amount of space or memory taken by an algorithm to run, as a function of the length of the input, while the time complexity quantifies the amount of time taken to run as a function of the length of the input. Now that we know why time complexity is so significant, it is time to understand how to evaluate it.

image from source https://www.mygreatlearning.com/blog/why-is-time-complexity-essential

The first algorithm prints the statement only once, and the time taken to execute that code is shown as 0 nanoseconds. The second algorithm prints the same statement, but this time it runs inside a FOR loop 10 times. There, the time taken to execute both lines of code (the FOR loop and the print statement) is 2 milliseconds. And the time taken increases as the value of N increases, since the statement gets executed N times. Note that the exact time may differ from computer to computer, since some machines are very powerful and some are slow.
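
You can observe this yourself with a rough timing sketch of my own, using Python’s time module. The absolute numbers will differ from machine to machine, which is exactly why we reason about growth rather than raw seconds:

import time

def run_statements(n):
    # Executes a simple statement n times
    total = 0
    for _ in range(n):
        total += 1
    return total

# The measured times depend entirely on your machine,
# but how they grow with n is what time complexity captures
for n in [1, 10, 100_000, 10_000_000]:
    start = time.perf_counter()
    run_statements(n)
    elapsed = time.perf_counter() - start
    print(f"n = {n:>10,}: {elapsed:.6f} seconds")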

In terms of time complexity, Big-O notation is used to describe how quickly the runtime grows as the size of an algorithm’s (or function’s) input grows. You could also say that Big-O notation describes how well your algorithm performs as the input data gets larger. With Big-O notation, we are able to measure the scalability of our algorithm: is it still going to perform well as the input grows larger?

In order to dive deep into Big-O notation, we first need to understand constants. A constant is a value or number that never changes in an expression; it’s always the same. It will not affect the shape of your graph when you plot the values. Don’t worry, you’ll understand this much better as we look further.
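
Here is a tiny example of my own to illustrate. A loop that does two operations per element performs about 2n operations in total, but the constant factor 2 only scales the line on the graph without changing its shape, so we drop it and still call the whole thing O(n):

def sum_and_count(numbers):
    # About 2n operations in total (two per element), but the constant
    # factor 2 is dropped in Big-O, so this is O(n)
    total = 0
    count = 0
    for n in numbers:
        total += n   # operation 1
        count += 1   # operation 2
    return total, count

print(sum_and_count([5, 3, 8, 4, 6]))  # (26, 5)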

image from source https://levelup.gitconnected.com/big-o-time-complexity-what-it-is-and-why-it-matters-for-your-code

O(1) = Constant Time

O(1) means that it takes a constant amount of time to run an algorithm, regardless of the size of the input. So whether the input is 100 or 1000, it’ll take the same amount of time. If you look at the picture above, the line in the graph for O(1) is a straight horizontal line that is not affected by the number of inputs.
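
For example, here is a sketch of my own: indexing into a Python list is a constant-time operation, taking the same single step whether the list holds five elements or a million:

def get_first(items):
    # O(1): a single indexing operation, no matter how long the list is
    return items[0]

print(get_first([5, 3, 8, 4, 6]))         # 5
print(get_first(list(range(1_000_000))))  # 0, still just one step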

O(n) = Linear Time

O(n) means that the run time, i.e. the time it takes for the algorithm to run, increases at the same pace as the input; n represents the number of inputs here. The run time increases linearly, meaning its graph is a straight line. If you look at the picture above, the line in the graph for O(n) is a straight line that rises as the number of inputs increases; you could also say the run time is directly proportional to the number of inputs.
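
As a quick sketch of my own, finding the largest number in a list is a classic O(n) operation, because every element has to be looked at exactly once:

def find_max(numbers):
    # O(n): visits each element once, so the work grows
    # in direct proportion to the input size
    largest = numbers[0]
    for n in numbers[1:]:
        if n > largest:
            largest = n
    return largest

print(find_max([5, 3, 8, 4, 6]))  # 8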

O(n²) = Quadratic Time

O(n²) means that the algorithm runs in quadratic time, which is the squared size of the input data. So if the input (n) is 100, that means 100 raised to the power of two, which is 10,000. If you look at the picture above, the line in the graph for O(n²) is a curve that bends upward.
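
A typical O(n²) pattern is a loop nested inside another loop. Here is a small sketch of my own that checks a list for duplicates by comparing every pair of elements:

def has_duplicate(numbers):
    # O(n^2): up to roughly n * n pair comparisons in the worst case,
    # so doubling the input quadruples the work
    for i in range(len(numbers)):
        for j in range(i + 1, len(numbers)):
            if numbers[i] == numbers[j]:
                return True
    return False

print(has_duplicate([5, 3, 8, 4, 6]))  # False
print(has_duplicate([5, 3, 8, 5, 6]))  # True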

O(log n) = Logarithmic Time

O(log n) means that the running time grows in proportion to the logarithm of the input size. This means the run time barely increases even as you exponentially increase the input.
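
The classic example is binary search on a sorted list; here is a minimal sketch of my own. Each comparison cuts the remaining search range in half, which is why a million elements need only about 20 steps:

def binary_search(sorted_items, target):
    # O(log n): the search range is halved on every iteration
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # target not found

print(binary_search([3, 4, 5, 6, 8], 6))  # 3 (the index of 6)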

Till now we have covered:

  • Programming Algorithms
  • Best Case and Worst Case
  • Bubble Sort
  • Insertion Sort
  • Time Complexity
  • Space Complexity
  • Big-O notation and its various cases

Now, to sum up the article: the best case for bubble sort is an array that is already sorted, in which case (with the early-exit check shown earlier) bubble sort performs only O(n) comparisons. In the worst-case scenario, the outer loop runs O(n) times, for O(n²) comparisons and swaps overall.
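
You can check this with a small pass-counting sketch of my own, based on the early-exit bubble sort shown earlier: an already-sorted list finishes in a single pass, while a reverse-sorted list needs the full n - 1 passes.

def bubble_sort_passes(arr):
    arr = arr[:]  # work on a copy
    passes = 0
    for i in range(len(arr) - 1):
        passes += 1
        swapped = False
        for j in range(len(arr) - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:
            break  # no swaps means the list is already sorted
    return passes

print(bubble_sort_passes([3, 4, 5, 6, 8]))  # 1 pass  (best case)
print(bubble_sort_passes([8, 6, 5, 4, 3]))  # 4 passes (worst case)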

Insertion sort’s worst-case (and average-case) time complexity is O(n²), meaning that, in the worst case, the time taken to sort a list is proportional to the square of the number of elements in the list. Its best-case time complexity is O(n).
