Complexity Analysis

Natasha Ferguson
4 min read · Oct 31, 2022

Just as data structures are the foundational knowledge that you need to tackle coding interviews, complexity analysis is the foundational knowledge that you need to better understand data structures.

When you tackle a coding problem, it is very common for it to have multiple solutions. For example, there are several different ways to sort a list. When several algorithms solve the same problem, how do you know which one is best? Is it the simplest? The fastest? The smallest? How do we analyze algorithms and compare them against each other?

For example, suppose you and your coworker are each tasked with writing a function that sums up the numbers in a list. You each write a solution.

A list of numbers:

numbers = [22, 10, 371, 50, 7, 92, 11, 12, 600, 3, 52]

Solution 1

def sum_numbers(numbers):
    num_sum = 0
    for i in range(len(numbers)):
        num_sum += numbers[i]
    return num_sum

%timeit sum_numbers(numbers)

Solution 2

%timeit sum(numbers)

How would you compare these functions? How would you know which one is better? One approach is to measure how long each takes to run, using the built-in %timeit magic function in Jupyter.

Solution 1

614 ns ± 22.1 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)

Solution 2

113 ns ± 2.65 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
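If you are not working in a Jupyter notebook, you can make a similar comparison with Python's standard timeit module. A minimal sketch (the exact numbers will differ on your machine):

import timeit

numbers = [22, 10, 371, 50, 7, 92, 11, 12, 600, 3, 52]

def sum_numbers(numbers):
    num_sum = 0
    for i in range(len(numbers)):
        num_sum += numbers[i]
    return num_sum

# Time each solution over 1,000,000 calls; the results are total seconds.
print(timeit.timeit(lambda: sum_numbers(numbers), number=1_000_000))
print(timeit.timeit(lambda: sum(numbers), number=1_000_000))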

We can see that the second solution, which uses Python's built-in sum() function, runs much faster and is therefore more efficient. The problem is that run time is not an objective measurement: it depends on the speed of the computer itself and its hardware capabilities. We want a hardware-independent way to measure how much work each function does, and that's where complexity analysis and Big O notation come in.

Often in a coding interview, you will be asked whether you can do better and find a more efficient solution to the problem. This is where complexity comes into play: what makes one solution better than another is whether it has better complexity.

What is complexity analysis? It's the process of determining how efficient an algorithm is by looking at the resources it uses. The most common resources considered are runtime and memory usage (time and space complexity). Complexity analysis helps you identify and avoid algorithms with long runtimes or high memory usage.

Time complexity — a measure of how fast an algorithm runs.

Because an algorithm's run time (the amount of time it takes your computer to execute it) is affected by several variables, such as your computer's processing power and the programming language, run time is not an effective way to compare two algorithms. Instead, computer scientists compare algorithms by looking at the number of steps they require and at how an algorithm performs as the input size (N) gets bigger.

An algorithm’s runtime complexity is a function, T(N), that represents the number of constant time operations performed by the algorithm on an input of size N.
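To make this concrete, consider the loop-based sum_numbers function above. If we (somewhat arbitrarily) treat the initialization, the return, and each index lookup and addition as one constant-time operation each, the function performs roughly T(N) = 2N + 2 operations for a list of N numbers. A small sketch that counts the operations explicitly, under those assumptions:

def sum_numbers_counted(numbers):
    ops = 0
    num_sum = 0                   # 1 operation: initialize the accumulator
    ops += 1
    for i in range(len(numbers)):
        num_sum += numbers[i]     # 2 operations per iteration: index lookup + addition
        ops += 2
    ops += 1                      # 1 operation: return the result
    return num_sum, ops

total, ops = sum_numbers_counted(list(range(10)))
print(ops)  # 22, i.e. 2 * 10 + 2

Exactly what counts as "one operation" is a modeling choice; what matters is that the count grows in proportion to N.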

Because an algorithm's runtime may vary significantly based on the input data, a common approach is to identify the best and worst-case scenarios. An algorithm's best case is the scenario where the algorithm does the minimum possible number of operations; its worst case is the scenario where it does the maximum possible number of operations. Both are typically expressed using Big O notation.
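A classic illustration is linear search. In the best case the target is the very first element and the loop does a single comparison; in the worst case the target is not in the list at all and the loop compares against all N elements. A minimal sketch:

def linear_search(numbers, target):
    # Best case: target is numbers[0], so only one comparison is made.
    # Worst case: target is absent, so all N elements are compared.
    for i, value in enumerate(numbers):
        if value == target:
            return i
    return -1

numbers = [22, 10, 371, 50, 7]
print(linear_search(numbers, 22))   # best case: found immediately at index 0
print(linear_search(numbers, 999))  # worst case: -1 after checking every element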

Space complexity — a measure of how much auxiliary memory an algorithm takes up.

Computers have finite resources such as memory, so in addition to thinking about an algorithm's time complexity, you should consider its memory usage. An algorithm's space complexity is a function, S(N), that represents the number of fixed-size memory units used by the algorithm for an input of size N. It includes fixed space (the memory your program itself requires), data structure space (the memory your program needs to store the data set), and temporary space (memory used for intermediate processing). It's also expressed using Big O notation.
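For example, the loop-based sum above uses O(1) auxiliary space: no matter how long the list is, it only needs a single accumulator and a loop index. By contrast, a hypothetical variant that also stores every running total in a new list needs auxiliary space proportional to N. A rough sketch of the difference:

def sum_constant_space(numbers):
    # O(1) auxiliary space: one accumulator, regardless of len(numbers).
    num_sum = 0
    for value in numbers:
        num_sum += value
    return num_sum

def sum_with_running_totals(numbers):
    # O(N) auxiliary space: the running_totals list grows with the input.
    num_sum = 0
    running_totals = []
    for value in numbers:
        num_sum += value
        running_totals.append(num_sum)
    return num_sum, running_totals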

Big O notation is a mathematical notation that describes how an algorithm's time and space requirements grow as the input size N increases. We use Big O notation to generalize the space-time complexity of an algorithm as a function of its input size. We will discuss this topic in detail next.
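As a quick preview: the loop-based sum performs roughly 2N + 2 operations (using the rough count from the sketch above), and in Big O notation we keep only the dominant term and drop the constants, so its time complexity is O(N). Doubling the input size roughly doubles the work:

# Rough operation count T(N) = 2N + 2 for the loop-based sum;
# dropping constants and lower-order terms gives O(N).
for n in (10, 100, 1_000, 10_000):
    print(n, 2 * n + 2)  # the count grows linearly with N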

Vocabulary

Algorithm: a sequence of steps that solves a problem.

Run time: the amount of time it takes your computer to execute an algorithm written in a programming language.

Big O notation: a mathematical notation that describes how an algorithm's time and space requirements grow as the input size N increases.

Time complexity: the number of steps an algorithm takes to complete as N gets larger.

Space complexity: the amount of memory an algorithm needs.

Best-case complexity: how an algorithm performs with ideal input.

Worst-case complexity: how an algorithm performs in the worst possible scenario for it.
