A Pragmatic Introduction to Big-O Notation and Time-Complexity

Michael Kawwa
Published in CodeX · Aug 28, 2021

It’s no secret that software is all around us: from whatever device you’re reading this on, to the plane 35,000 feet above your head, to the pacemakers and ventilators helping countless people stay alive. Computer science touches and affects nearly every other field, from medicine to engineering to astronomy. The fact that we interact with software at nearly all times of the day reinforces the need to write good, efficient code.

Big O notation, a major concept in computer science, is by far the most common way of describing time complexity — that is, how long an algorithm takes to execute relative to the size of its input. Big O does not necessarily measure the actual amount of time an algorithm takes to run, since that depends on many variables, including which computer it runs on and how many other programs are open. Instead, Big O measures how the time it takes to execute an algorithm grows as the input grows.
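To make that idea concrete, here is a minimal sketch (my own illustration, not from the original article) that times a hypothetical function which touches every element once, using Python’s built-in time.perf_counter. The exact numbers will differ from machine to machine; only the growth pattern as n increases is the point.

import time

def touch_every_element(items):
    # Hypothetical workload: a constant amount of work per element
    total = 0
    for item in items:
        total += item
    return total

for n in [1_000, 10_000, 100_000, 1_000_000]:
    data = list(range(n))
    start = time.perf_counter()
    touch_every_element(data)
    elapsed = time.perf_counter() - start
    print(f"n = {n:>9}: {elapsed:.6f} seconds")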

Linear Complexity: O(n)

Linear complexity, written O(n), is essentially what it sounds like: a linear relationship between the size of the input and the execution time. As the input grows, the execution time grows in step with it, tracing a straight line on a graph. You’ll likely remember linear relationships from high school algebra as the function y = mx + b; linear complexity is essentially the same thing. A simple analogy for linear time is watering plants. In our scenario the plants are the input n, and each plant takes 3 seconds to water, so with each new plant we add, the time to water them all (the time to execute our function) increases linearly.
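As a rough sketch of that analogy (the plant counts and the 3 seconds per plant are just illustrative numbers), the total watering time is simply 3 × n:

def watering_time(num_plants, seconds_per_plant=3):
    # Total time grows linearly with the number of plants: O(n)
    return num_plants * seconds_per_plant

print(watering_time(5))   # 15 seconds
print(watering_time(10))  # 30 seconds: double the plants, double the time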

When it comes to efficiency, linear time is very predictable and works well with smaller datasets; with larger datasets, however, linear time can become very inefficient.

Let’s look at an example of what linear complexity looks like in a Python function that simply iterates through each item in an array:

a = [1, 2, 3, 4, 5]

def iterate_over_array():
    # Visit every element exactly once: O(n)
    for i in a:
        print(i)

This is a simple function that goes through each item in an array. A function like this could be used to find a particular value in an array, as sketched below. While it works just fine with a small dataset like this one, once datasets start reaching into the thousands and beyond, linear complexity starts becoming inefficient.
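Here is a minimal linear-search sketch of that idea (the function name and the convention of returning the index, or -1 when the value is absent, are my own choices, not from the original article):

def linear_search(arr, value):
    # Check each element in turn: O(n) in the worst case
    for index, item in enumerate(arr):
        if item == value:
            return index
    return -1

print(linear_search([1, 2, 3, 4, 5], 4))  # 3
print(linear_search([1, 2, 3, 4, 5], 9))  # -1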

Constant Time Complexity: O(1)

Although constant time complexity is the most basic of all the complexities, I chose not to start with it because in this case the time it takes to run the function does not change with the size of the input — hence the name constant time. This is why the graph of constant time is always a straight horizontal line: the execution time stays the same no matter what the value of n is.

Many functions run in constant time: from a simple “Hello World” program to a function that gets the first item of a list as such:

a = [1, 2, 3, 4, 5]

def get_first_item():
    # Indexing the first element takes the same time regardless of list length: O(1)
    print("the first element is: " + str(a[0]))

It does not matter whether the list a has one item or a hundred; it always takes the same amount of time to get just the first item of the array.
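As a quick sketch (the list sizes here are arbitrary), indexing works the same way whether the list has five elements or five million:

small = list(range(5))
large = list(range(5_000_000))

# Both lookups are a single index operation: O(1), independent of list length
print(small[0])
print(large[0])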

Quadratic Time Complexity: O(n²)

With quadratic time complexity (not to be confused with exponential time complexity, O(2^n)) the execution time grows proportionally to the square of the input size. So if, for example, the input grows from 3 to 4, the number of steps grows from 3² to 4², that is, from 9 to 16. The jumps between successive values only get bigger as the input grows larger, so the graph for quadratic time complexity curves upward ever more steeply.

Due to this rapid growth, quadratic time often isn’t an efficient solution to most problems and should only really be used as a last resort.

A common example of an algorithm running in quadratic time is a pair of nested for-loops, as such:

for x in range(5):
    for y in range(5):
        print(x, y)

This is quadratic time because for every element we visit in the outer loop over ‘x’, we loop through every element in the inner loop over ‘y’. With n = 5 that works out to 5 × 5 = 25 steps, and if we increased n to 6 it would similarly become 6 × 6 = 36 steps, growing quadratically.
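For a slightly more realistic sketch of the same pattern (the function name and the duplicate-finding task are my own illustration, not from the original article), here is a function that compares every pair of elements, which also takes O(n²) steps in the worst case:

def has_duplicate(arr):
    # Compare every element with every later element: O(n^2)
    for i in range(len(arr)):
        for j in range(i + 1, len(arr)):
            if arr[i] == arr[j]:
                return True
    return False

print(has_duplicate([1, 2, 3, 4, 5]))  # False
print(has_duplicate([1, 2, 3, 2, 5]))  # True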

Logarithmic Time Complexity: O(log n)

This one may seem scary to those unfamiliar with logarithms, or still haunted by them since high school (as am I). However, all we need to know for Big O is that as the input size gets larger, the time grows more and more slowly: doubling the input adds only a roughly constant amount of extra work. The execution time of this complexity is proportional to the logarithm of the input size.
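A quick sketch of how slowly that grows (using Python’s math.log2; the input sizes are arbitrary):

import math

for n in [10, 100, 1_000, 1_000_000]:
    # log2(n) barely moves even as n grows by orders of magnitude
    print(f"n = {n:>9}, log2(n) is about {math.log2(n):.1f}")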

Logarithmic time is often extremely efficient and frequently does better than linear time for many tasks and larger datasets. Many search algorithms rely on this complexity to efficiently sift through large datasets. One such example is the binary search algorithm, which in many cases is much more optimal than something like linear search thanks to its logarithmic time complexity. Binary search looks for a value by repeatedly cutting the search interval in half until the value is found, meaning the amount of data left to examine after every iteration is halved. The only caveat is that the list must first be sorted before binary search can be applied. An implementation in Python looks like this:

def binary_search(sorted_list, value):
    # Assumes sorted_list is sorted in ascending order
    left_index = 0
    right_index = len(sorted_list) - 1
    while left_index <= right_index:
        # Integer division keeps the index a whole number
        mid_index = (left_index + right_index) // 2
        if sorted_list[mid_index] == value:
            print("found the value: ", sorted_list[mid_index])
            return sorted_list[mid_index]
        if sorted_list[mid_index] < value:
            # The value must be in the right half
            left_index = mid_index + 1
        else:
            # The value must be in the left half
            right_index = mid_index - 1
    return None  # the value is not in the list

We first set the left index to the first element in the array and the right index to the last element, and the middle index is, well, the middle element. We then adjust the left or right index based on whether the middle element is smaller or larger than the value we’re looking for; if the middle element is the value itself, we simply return it right away. By adjusting the indices we cut the search interval in half each time, making our dataset smaller and thus easier and faster to go through.
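A quick usage sketch (the sample list and target values are just illustrative), keeping in mind that the input must already be sorted:

sorted_numbers = [2, 5, 8, 12, 16, 23, 38]
binary_search(sorted_numbers, 23)   # prints: found the value:  23
binary_search(sorted_numbers, 7)    # returns None; 7 is not in the list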

In this article we went over four complexities: linear, constant, quadratic, and logarithmic. While there are many more, these are some of the most common and useful complexities to be familiar with. After reading this article you will be ready to delve deeper into time complexity and to figure out the complexity of your own algorithms and programs.
