Computer Science Fundamentals: Complexity

Time vs. space complexity and more

Marc Fichtel
Feb 21 · 6 min read

Almost everyone who writes code has to, in one way or another, think about performance. But what does it mean for an algorithm to be performant? In this piece, I want to introduce you to some basic principles of computational complexity that computer scientists commonly use to reason about their programs and systems. Thinking about performance in these terms might help you write better code and, more generally, get better at problem-solving.

Small disclaimer: I am by no means an expert in this area. My goal here is simply to write a short introduction. If you spot errors or something is unclear, I’d very much appreciate you speaking up!


Time vs. Space

Computer scientists typically talk about time complexity and space complexity. The former refers to how long an algorithm takes to run (or more accurately, how many steps an algorithm requires to solve a problem since the real-time duration of those steps depends on the computer hardware used), while the latter describes how much memory is taken up in the process.

To talk about these, computer science uses descriptors such as Big-O notation, which, in layman’s terms, describes the worst possible case. For example, to check whether an unsorted array contains a specific element, the worst case is the element sitting in the last place checked (or not being there at all), so this checking algorithm has a time complexity of O(n), where n is the number of elements. There are other notations (Little-o, Big-Omega, Little-omega) that describe slightly different kinds of bounds, but Big-O (usually abbreviated as just O) is by far the most widespread one.
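To make this concrete, here’s a minimal sketch of such a containment check (the Contains name and the plain int array are my own choices for illustration):

boolean Contains(int[] array, int target) {
  for (int element : array) {
    if (element == target) return true; // best case: found on the very first step
  }
  return false; // worst case: all n elements were checked
}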

An excellent problem to illustrate this is the Fibonacci sequence. In case you’re not familiar, a number in the Fibonacci sequence is calculated like so:

  • The first number is 0.
  • The second number is 1.
  • Every following number is the sum of its two predecessors. For example, the third number is the first plus the second (0+1=1), the fourth number is the second plus the third (1+1=2), and so on. Following this pattern, we get a sequence that looks like this: 0, 1, 1, 2, 3, 5, 8, 13, 21, etc.

This is fairly easy to put into code using a little recursion, which most closely resembles how the problem was stated or how a human might think about the problem. I’ll be using Java-like pseudo-code for any examples, but feel free to try it out in any language of your choice.

int Fib(int i) {
  if (i < 2) return i;            // base case: the first two numbers are 0 and 1
  return Fib(i - 1) + Fib(i - 2); // recursive case: sum of the two predecessors
}

Observe what happens to the number of steps needed as the input number increases:

  • If i is 0 or 1, we return just that in a single step. This is the base case (something very important for recursive functions).
  • If i is 2, the first conditional check fails and the next line makes two Fib calls, each of which takes one step to return. That’s four steps in total.
  • If i is 3, two calls (Fib(1) and Fib(2)) are made. The latter itself leads to two calls (Fib(0) and Fib(1)). With the conditional checks, that makes nine steps. Note that the call to Fib(1) was duplicated. We will use this observation later to optimize the algorithm.

This naive Fibonacci implementation has a time complexity of O(2^n). That is, it takes on the order of 2 to the power of i instructions to complete, because for each number ≤ input (apart from the base cases), two Fib calls are made. To compute the tenth number, this already takes on the order of a thousand instructions (2^10 = 1024). While computers are getting faster and faster, this is not a very good runtime: whatever language you are using, the runtime quickly climbs from seconds to minutes to hours as the input grows, regardless of the computer’s hardware. As far as space complexity is concerned, this implementation is O(n), due to each recursive call to Fib allocating an additional call stack frame; the stack grows as deep as the input before the first calls return. In general, I’d wager that time complexity is more often prioritized over space complexity, as memory is becoming cheaper and cheaper. Unless you are coding with specific limitations or constraints (such as on embedded or IoT devices), speed is usually the limiting factor.
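If you’d like to see this blow-up for yourself, here’s a rough sketch (the call counter and the FibCounted name are mine, not part of the original example) that tallies how often the function is invoked:

long calls = 0;

int FibCounted(int i) {
  calls++; // tally every invocation
  if (i < 2) return i;
  return FibCounted(i - 1) + FibCounted(i - 2);
}

Running FibCounted(10) tallies 177 calls; FibCounted(20) tallies 21,891. Every +10 on the input multiplies the work by roughly 120x.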

The precise complexity of an algorithm might be something like O(3n), but constants such as the 3 in this example are conventionally excluded, as the point of all this is to classify an algorithm’s complexity rather than to define its exact runtime: O(3n) and O(n) describe the same class. Another thing to note is that constant time or space complexity (i.e. the complexity doesn’t change regardless of input size) is denoted as O(1). An example of this is array lookup. No matter how many elements the array has, accessing its first, second, or 100th element only takes one (high-level) step.
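As a quick illustration (the Lookup helper is hypothetical, just to make the point):

int Lookup(int[] numbers, int index) {
  // One step regardless of whether numbers holds ten or ten million
  // elements: the address is computed directly from the index.
  return numbers[index]; // O(1)
}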


Gotta Go Fast

There are multiple approaches to addressing the above function’s performance issues. One popular way to reduce time complexity at the cost of extra memory is a technique called memoization. It saves time by storing (i.e. memorizing) previously calculated numbers, much the same way a human might write down the numbers they have computed. In the case of our Fibonacci function, we can track Fib results in a map and only compute a new one if it’s not already in the map.

Map<Integer, Integer> memory = new HashMap<>(Map.of(0, 0, 1, 1)); // seed the base cases

int FibMem(int i) {
  Integer result = memory.get(i);          // have we computed this one before?
  if (result == null) {
    result = FibMem(i - 1) + FibMem(i - 2);
    memory.put(i, result);                 // memorize it so it’s never recomputed
  }
  return result;
}

With a very small amount of additional code, we have achieved a time complexity of O(n): each number in the sequence is computed exactly once and then reused. Computing the nth Fibonacci number now takes on the order of n steps as opposed to 2^n steps. The trade-off is that we are now using more memory, since the map holding previous results grows linearly with the input, giving a space complexity of O(n).

Another implicit trade-off is that this function is arguably more complicated than the one above. Premature optimization is the root of all evil, or so the saying goes. While analyzing computational complexity, one should never forget the human factor. The trade-off between performance and readability is not one to make lightly, but it’s beyond the scope of this piece.

We can take this one step further and bring the space complexity down to O(1) as well. To do this, rather than using recursion, we use a simple for loop and a few local variables. Since we’re only updating local variables and not making recursive calls, the memory consumption stays constant no matter how large the input is.

int FibOptimized(int i) {
  if (i < 2) return i; // base cases
  int a = 0;           // holds Fib(j) after j loop iterations
  int b = 1;           // seeded so the first iteration produces Fib(1)
  for (int j = 0; j < i; j++) {
    int temp = b;
    b = a;
    a = a + temp;      // step forward: new a = old a + old b
  }
  return a;
}

And there you have it: a Fibonacci implementation with O(n) time complexity and O(1) space complexity. One more time, I want to point out that this is, in my opinion, significantly less readable than the naive implementation. There are implementations that reduce the time complexity even further, and yes, those do become even more complicated.
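For the curious, here’s a sketch of one such implementation (my own, using the well-known “fast doubling” identities rather than anything from this article) that computes the nth number in O(log n) time:

// Returns {Fib(n), Fib(n+1)} using the identities
// Fib(2k)   = Fib(k) * (2*Fib(k+1) - Fib(k))
// Fib(2k+1) = Fib(k)^2 + Fib(k+1)^2
long[] FibFast(long n) {
  if (n == 0) return new long[] { 0, 1 };
  long[] half = FibFast(n / 2);
  long a = half[0];         // Fib(k)
  long b = half[1];         // Fib(k+1)
  long c = a * (2 * b - a); // Fib(2k)
  long d = a * a + b * b;   // Fib(2k+1)
  return (n % 2 == 0) ? new long[] { c, d } : new long[] { d, c + d };
}

Each call halves n, so only about log₂(n) levels of recursion are needed (though a long overflows past roughly n = 90).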

As an exercise, I’ll leave you with the following question: Given an array [a, b, c, …] of integers, write a function with O(n) time complexity that returns an array [b*c*…, a*c*…, a*b*…] such that each element of the output array is the product of all elements of the input array except the one at the current index. For example, the input [1, 2, 3, 4] would produce the output [24, 12, 8, 6] (i.e. [2*3*4, 1*3*4, 1*2*4, 1*2*3]). An O(n²) solution is fairly straightforward, but an O(n) solution can be achieved.
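If you want a starting point, here’s the straightforward O(n²) version (a sketch with names of my own choosing; the O(n) solution I’ll leave to you):

int[] ProductExceptSelf(int[] input) {
  int[] output = new int[input.length];
  for (int i = 0; i < input.length; i++) {
    int product = 1;
    for (int j = 0; j < input.length; j++) {
      if (j != i) product *= input[j]; // multiply every element except input[i]
    }
    output[i] = product; // the inner loop runs n times for each of the n indices
  }
  return output;
}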


Beyond Big-O

Computational complexity is a vast field with many, many more things to learn than the proverbial tip of the iceberg highlighted here. Things you may want to look up are dynamic programming, the travelling salesman problem, the divide and conquer approach, greedy algorithm design and other heuristics, and the P vs. NP question. Some of these may be more academic in nature than others, and you may never be asked to write a formal proof for your code at work, but all are hugely important for algorithm optimization and computer science as a whole.
