The Big O: What is Computational Complexity?

Gabriel Tonini Lopes Gramicelli
Feb 12, 2024


Computational complexity is a fundamental area of Computer Science, but to understand why, we must first understand how computers operate.

All computers are built around the idea that they will be used to perform operations, which in computer science we call computations; they are the basis of any computer. You can read this article right now because your phone or computer can run sophisticated computations that allow it to connect to the internet.

However, all of the computations being done by your device and by the server on which this article is hosted happen automatically, without either of us having to tell the computer which computations to perform. This ability is the basis of an algorithm. An algorithm is a sequence of computations that produces certain outputs from certain inputs. Algorithms are at the essence of all computer science, as they are what we create to solve a specific class of problems. For example, if we want to determine the number of prime and non-prime numbers up to a positive integer n, we can use an algorithm that will compute the result.

An Example

Let's create this algorithm together! The first thing we must do is define what our inputs and outputs for the algorithm will be.

  • Since we wish to know information about all the integers leading up to a positive integer n, we can define our input as n, for some positive integer value: n ∈ ℕ⁺
  • Next, to define our outputs, we must think of all of the possible values for the number of prime (p) and non-prime (q) numbers. Since the smallest value n can take is 1, and 1 is not commonly defined as a prime number, we can say that the smallest value for p will be 0, and the smallest value for q will be 1. Therefore the outputs will be two integers p, q where p ≥ 0, q ≥ 1.

We can write this algorithm in just about any programming language. However, for this article, we will use Python. Then, the algorithm f, which determines the number of prime (p) and non-prime (q) numbers up to a positive integer n will be the following:

def f(n):
    primes = []
    for num in range(2, n + 1):
        # Trial division: check every candidate divisor below num
        is_prime = True
        for i in range(2, num):
            if num % i == 0:
                is_prime = False
                break
        # For num == 2 the inner loop is empty, so is_prime stays True
        if is_prime:
            primes.append(num)
    p = len(primes)
    q = n - p
    return p, q

This function does exactly what we want. Here’s an example:

print(f(50)) # returns (15, 35), meaning 15 are prime and 35 are not

Where computational complexity comes in, is when we increase n. I will write a simple time counter on this function to see how much time it will take to run for different values of n using a simple function:

import time

def time_test(n):
    start = time.time()
    print(f(n))
    end = time.time()
    print(f"Time taken: {end - start}")

If we run this function we will get:

time_test(1) # Time taken: 2.7179718017578125e-05
time_test(10) # Time taken: 1.3828277587890625e-05
time_test(100) # Time taken: 5.507469177246094e-05
time_test(1000) # Time taken: 0.002808094024658203
time_test(10000) # Time taken: 0.22690796852111816
time_test(20000) # Time taken: 0.865760326385498
time_test(30000) # Time taken: 1.8458709716796875
time_test(50000) # Time taken: 4.95271110534668
time_test(80000) # Time taken: 12.876786470413208

By increasing n, the time taken for the function to run increases too. To see at what rate it increases, we can take a sample of values and plot them on a graph:

The time complexity for f(n)

I ran this code on my computer for values from 10,000 to 150,000 as shown above. From this, we can see that as values of n increase, the time taken increases at a non-linear rate, maybe quadratic or even exponential.
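The sampling behind the graph can be reproduced with a short loop; here is a minimal sketch using smaller values of n so it runs quickly (the exact times will of course vary by machine):

```python
import time

def f(n):
    # Same trial-division prime counter as above
    primes = []
    for num in range(2, n + 1):
        is_prime = True
        for i in range(2, num):
            if num % i == 0:
                is_prime = False
                break
        if is_prime:
            primes.append(num)
    p = len(primes)
    return p, n - p

# Collect (n, seconds) samples that could then be plotted
samples = []
for n in [1000, 2000, 4000, 8000]:
    start = time.time()
    f(n)
    samples.append((n, time.time() - start))

for n, t in samples:
    print(f"n = {n:>5}: {t:.4f} s")
```

Plotting `samples` with any charting library produces a curve like the one above.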

This is an example of what computational complexity describes: as the size of the input n increases, the time taken for the function to run increases. However, it is not quite that simple.

Big O Notation

Instead of describing how the time taken to run the function increases as n increases, we describe how the number of operations performed by the function grows as the input size grows. For this specific function, that growth can be simplified to O(n²).

Here, the capital O, sometimes referred to as "Big O", indicates that the expression inside the parentheses describes how the number of operations performed by the function grows as the input size increases.
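One way to see this concretely is to count operations instead of measuring wall-clock time. The sketch below instruments a copy of our trial-division loop with a counter (count_ops is a name introduced here for illustration); doubling n should roughly quadruple the work, consistent with O(n²):

```python
def count_ops(n):
    # Same trial-division algorithm as f, but counting how many
    # divisibility checks the inner loop performs in total.
    ops = 0
    for num in range(2, n + 1):
        for i in range(2, num):
            ops += 1
            if num % i == 0:
                break
    return ops

a, b = count_ops(1000), count_ops(2000)
# The ratio b / a comes out a little under 4, as expected for
# roughly quadratic growth when the input doubles.
print(a, b, b / a)
```

Counting operations removes the noise of machine speed and background load, which is exactly why Big O is stated in terms of operations rather than seconds.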

As you may have noticed, our function f seems to grow fairly rapidly: for an input of n = 150,000, it took the function 49 seconds. From the Big O notation, I can extrapolate that if we doubled the input to n = 300,000, it could take roughly 196 seconds (four times as long) to run. That may not seem like too big of a time, but functions like this that involve evaluating prime numbers are used with inputs of far larger magnitude, such as in RSA encryption, where the primes involved have around 300 digits (2).

If our original function had an input of n = 1,000,000,000 (a billion), which has a mere 10 digits (compared to 300-digit primes), the same extrapolation gives (10⁹ / 150,000)² × 49 ≈ 2.2 billion seconds, or roughly 69 years (3). Now can you see why Big O matters?
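The quadratic extrapolation used above can be written out directly; this is plain arithmetic under an assumed O(n²) model, not a measurement:

```python
def extrapolate_quadratic(t_old, n_old, n_new):
    # Under an O(n^2) model, time scales with the square of the input ratio
    return t_old * (n_new / n_old) ** 2

# Doubling the input from 150,000 to 300,000 quadruples the estimate
print(extrapolate_quadratic(49, 150_000, 300_000))  # 196.0

# A billion-sized input gives about 2.2 billion seconds
print(extrapolate_quadratic(49, 150_000, 1_000_000_000))
```

Constant factors and lower-order terms make such estimates rough, but for order-of-magnitude questions ("hours or years?") they are usually good enough.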

The image below gives you an idea of commonly used big O expressions, and how big they grow with time:

A graph of common complexities (1)

Some of these functions may seem arbitrary, but they are very insightful to understand the complexity of algorithms (4):

  • O(1): this is a function that will take the same amount of operations independently of the input size (E.g. first item of a list of size n).
  • O(log n): this is a function that will grow its number of operations at a slower rate than the input size grows (E.g. Binary Search).
  • O(n): this is a function that will grow its number of operations at the same rate as the input size (E.g. Unsorted List Search).
  • O(n log n): this function will have to perform operations of log n complexity for each input (E.g. Quicksort for most cases).
  • O(n²): this function will have to perform operations of n complexity for each input (E.g. the function we used earlier).
  • O(n³), O(n⁴), … : these are all similar to O(n²); they have to perform operations for each input element, except those operations are of n² complexity for O(n³), of n³ complexity for O(n⁴), and so on.
  • O(2^n): the number of operations doubles with each additional input element (E.g. brute-force subset enumeration, naive recursive Fibonacci).
  • O(n!): the number of operations is multiplied by a growing factor with each additional input element (E.g. brute-force Traveling Salesman, which tries every possible ordering).
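To make one of these concrete, here is a minimal binary search sketch, the classic O(log n) example from the list above:

```python
def binary_search(sorted_list, target):
    # Each iteration halves the remaining search range, so the number
    # of iterations grows logarithmically with the list size: O(log n)
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```

Compare this with an unsorted list search, which must look at every element in the worst case and is therefore O(n): sorting the data first buys an exponentially smaller number of lookups.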

The Bigger Picture

The end goal of studying computational complexity is to improve the way we write code; we must ask ourselves how we can reduce the complexity of the algorithms in our code to minimize the time taken to perform tasks.
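As an illustration of reducing complexity, our trial-division function from earlier can be replaced with a Sieve of Eratosthenes, which counts primes in roughly O(n log log n) operations instead of O(n²). This is a sketch of the idea, not the article's original implementation:

```python
def f_sieve(n):
    # Count prime (p) and non-prime (q) numbers up to n
    # using a Sieve of Eratosthenes.
    if n < 2:
        return 0, n
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if is_prime[i]:
            # Mark all multiples of i, starting at i*i, as composite
            for multiple in range(i * i, n + 1, i):
                is_prime[multiple] = False
    p = sum(is_prime)
    return p, n - p

print(f_sieve(50))  # (15, 35), matching f(50) from earlier
```

Same inputs, same outputs, drastically fewer operations for large n: that is what "reducing the complexity of the algorithms in our code" means in practice.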

A great example of how computational complexity has been extremely optimized is your maps app. To give you an example, I opened Google Maps and selected a car route from Lisbon to Luxembourg; it found the fastest path between the two in 3.21 seconds:

The shortest possible route from Lisbon to Luxembourg on Google Maps (5)

How long would it take you to find this route? You'd have to consider a significant fraction of the highways in 4 different countries, all of their curves and speed limits, and all of the historical local traffic variations, and still be certain that there couldn't be a better option.

P vs NP — at the essence of problem-solving

One of the most famous unsolved problems in computer science is P vs NP. This problem stems from the complexity of certain classes of problems.

The P class of problems is essentially every problem that can be solved in "polynomial time", meaning that the Big O notation for an algorithm that solves the problem is a polynomial, such as O(n²). Algorithms that run in polynomial time are considered "easy", as they are relatively fast for computers to execute. Of course, having a polynomial-time solution by no means makes a problem trivial, but it is generally viable to run the majority of polynomial-time algorithms on today's computers (5).
The NP class is composed of problems whose solutions can be verified in polynomial time. The "N" stands for "nondeterministic": for these problems, no efficient rule is known for constructing a solution directly, so in the worst case candidate solutions must be guessed and then checked. An example of an NP problem is creating a school calendar for all classes, students, and teachers such that all class requirements are met, and no student or teacher is required to attend 2 classes at once (5).
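The "easy to verify" part can be made concrete with subset sum, a classic NP problem: given a set of numbers, is there a subset that adds up to a target? Finding such a subset may require trying many candidates, but checking a proposed one is a single pass (the helper name below is my own, for illustration):

```python
def verify_subset_sum(numbers, target, candidate):
    # Verification is polynomial: confirm every element of the
    # candidate comes from the set, and that the sum matches.
    return all(x in numbers for x in candidate) and sum(candidate) == target

numbers = [3, 34, 4, 12, 5, 2]
# Someone hands us a proposed solution; verifying it is fast
print(verify_subset_sum(numbers, 9, [4, 5]))   # True
print(verify_subset_sum(numbers, 9, [3, 34]))  # False
```

The asymmetry is the point: verifying takes a handful of operations, while the number of possible subsets to search grows as 2^n.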

The P vs NP problem asks a very fundamental question in problem-solving: can every easy-to-check solution also be found easily? The problem is still unsolved and is considered one of the most important open problems in Computer Science. As a way to encourage thinkers around the world to try to solve it, the Clay Mathematics Institute has set an award of 1 million dollars for whoever does (6).

Sources

  1. Huang, Shen. “What Is Big O Notation Explained: Space and Time Complexity.” FreeCodeCamp.org, freeCodeCamp.org, 16 Jan. 2020, www.freecodecamp.org/news/big-o-notation-why-it-matters-and-why-it-doesnt-1674cfa8a23c/. Accessed 11 Feb. 2024.
  2. “How to Generate Large Prime Numbers for RSA Algorithm.” GeeksforGeeks, GeeksforGeeks, 26 June 2020, www.geeksforgeeks.org/how-to-generate-large-prime-numbers-for-rsa-algorithm/. Accessed 11 Feb. 2024.
  3. “Seconds to HH:MM:SS Time Calculator.” Inch Calculator, Inch Calculator, 11 Dec. 2023, www.inchcalculator.com/seconds-to-time-calculator/. Accessed 11 Feb. 2024.
  4. DataDaft. “Algorithms Explained: Computational Complexity.” YouTube, 12 Mar. 2021, www.youtube.com/watch?v=47GRtdHOKMg&ab_channel=DataDaft. Accessed 11 Feb. 2024.
  5. “P versus NP Problem | Complexity Theory & Algorithmic Solutions | Britannica.” Encyclopædia Britannica, 2024, www.britannica.com/science/P-versus-NP-problem. Accessed 12 Feb. 2024.
  6. “The Millennium Prize Problems — Clay Mathematics Institute.” Clay Mathematics Institute, 27 May 2022, www.claymath.org/millennium-problems/. Accessed 12 Feb. 2024.
