The 5 Fundamental Running Times in Computer Science

James Le
Sep 7, 2018 · 5 min read

As a software engineer, you have many important things to take care of: code readability, modularity, security, maintainability, and so on. Why worry about performance? Because performance is the currency with which we buy all of those other things. For example, given 2 algorithms for a task, how can you find out which one is better? A naive way is to implement both and run them on different inputs to see which takes less time. However, this approach has problems:

  • For some inputs, one of the two algorithms performs better than the other.
  • One algorithm may perform better than the other on one machine, while the reverse holds on a different machine, depending on the inputs.

Asymptotic analysis is the idea that handles these issues. In an asymptotic analysis, we evaluate the performance of an algorithm in terms of input size rather than measuring the actual running time: we calculate how the time (or space) taken by an algorithm grows as the input size grows. For example, a single pass over n items performs on the order of n operations, so doubling n roughly doubles the work, no matter which machine runs the code.

An algorithmic analysis is performed by finding and proving asymptotic bounds on the rate of growth in the number of operations used and the memory consumed. The operations and memory usage correspond to the analysis of the running time and space, respectively. Here are the 3 types of bounds most common in computer science:

  1. Asymptotic Upper Bound (aka Big-Oh) — Definition: f(n) = O(g(n)) if there exist constants c > 0 and n₀ such that for every n >= n₀ we have f(n) <= c * g(n).
  2. Asymptotic Lower Bound (aka Big-Omega) — Definition: f(n) = Omega(g(n)) if there exist constants c > 0 and n₀ such that for every n >= n₀ we have f(n) >= c * g(n).
  3. Asymptotically Tight Bound (aka Big-Theta) — Definition: f(n) = Theta(g(n)) if there exist constants c₁, c₂ > 0 and n₀ such that for every n >= n₀ we have c₁ * g(n) <= f(n) <= c₂ * g(n) (see the worked example below).
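
To make these definitions concrete, take f(n) = 3n² + 5n + 2 and g(n) = n². Since 3n² + 5n + 2 <= 4n² for every n >= 6, we have f(n) = O(n²) with c = 4 and n₀ = 6. Since 3n² + 5n + 2 >= 3n² for every n >= 1, we also have f(n) = Omega(n²) with c = 3 and n₀ = 1. Combining the two, f(n) = Theta(n²) with c₁ = 3, c₂ = 4, and n₀ = 6.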

Here are the 2 major properties of these asymptotic growth rates:

  1. Transitivity: if a function f is asymptotically upper-bounded by a function g, and if g in turn is asymptotically upper-bounded by a function h, then f is asymptotically upper-bounded by h. A similar property holds for lower bounds.
  2. Sums of Functions: if we have an asymptotic upper bound that applies to each of 2 functions f and g, then it applies to their sum, as the example below shows.
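
For instance, if f(n) <= c * h(n) for n >= n₀ and g(n) <= c' * h(n) for n >= n₀', then f(n) + g(n) <= (c + c') * h(n) for every n beyond the larger of n₀ and n₀', so f + g = O(h). This is exactly why we may drop lower-order terms: n² + 5n = O(n²), because both n² and 5n are O(n²).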

Out of these 3 bounds, computer scientists focus mostly on Big-Oh notation, which gives an upper bound on an algorithm's growth rate and is typically used to describe the worst-case execution time or space required. It enables a software engineer to compare how efficient different approaches to solving a problem are. Let's look at the most common time complexities expressed in Big-Oh notation:

1 — O(1) Constant Time

An O(1) algorithm performs the same number of operations regardless of the input size. Accessing a single element of a vector is a typical example:

void printFirstItem(const vector<int>& array) {
   // One indexed access and one print; the work done
   // does not depend on array.size().
   cout << array[0] << endl;
}

2 — O(n) Linear Time

An O(n) algorithm does work proportional to the input size: if the array doubles in length, the loop runs twice as many times.

void printAllItems(const vector<int>& array) {
   // One print per element: n iterations for n items.
   for (size_t i = 0; i < array.size(); i++) {
      cout << array[i] << endl;
   }
}

3 — O(log n) Logarithmic Time

An O(log n) algorithm cuts the problem size by a constant factor at every step. Binary search halves the remaining range on each iteration, so searching a sorted array of n elements takes at most on the order of log n comparisons:

int BinarySearch(const vector<int>& array, int targetValue) {
   int minIndex = 0;
   int maxIndex = static_cast<int>(array.size()) - 1;
   while (minIndex <= maxIndex) {
      // Overflow-safe midpoint of the remaining range.
      int currentIndex = minIndex + (maxIndex - minIndex) / 2;
      int currentElement = array[currentIndex];
      if (currentElement < targetValue) {
         minIndex = currentIndex + 1;   // Target lies in the upper half.
      } else if (currentElement > targetValue) {
         maxIndex = currentIndex - 1;   // Target lies in the lower half.
      } else {
         return currentIndex;           // Target found.
      }
   }
   return -1; // The target is not in the array.
}
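
As a quick usage sketch (a hypothetical snippet, not from the original post; it assumes the BinarySearch function above is in scope), note that binary search only works on sorted input:

#include <iostream>
#include <vector>
using namespace std;

int main() {
   vector<int> sorted = {2, 3, 5, 7, 11, 13};
   cout << BinarySearch(sorted, 7) << endl;  // prints 3 (the index of 7)
   cout << BinarySearch(sorted, 4) << endl;  // prints -1 (not present)
   return 0;
}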

4 — O(n²) Quadratic Time

A quadratic running time typically arises from one loop nested inside another. When both loops depend on the input size n, the inner body executes n * n = n² times:

void printAllPairs(int n) {
   // The inner loop runs n times for each of the n outer iterations,
   // so the body executes n * n = n² times.
   for (int i = 0; i < n; i++) {
      for (int j = 0; j < n; j++) {
         cout << i << ", " << j << endl;
      }
   }
}

5 — O(2^n) Exponential Time

An O(2^n) algorithm's running time doubles with every increase in the input. The naive recursive Fibonacci is the classic example: each call spawns two more recursive calls, so the number of calls grows exponentially in the input:

int Fibonacci(int number) {
   if (number <= 1) return number;
   // Two recursive calls per invocation: the call tree
   // grows exponentially with `number`.
   return Fibonacci(number - 2) + Fibonacci(number - 1);
}
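
For contrast, here is a hypothetical memoized variant (the FibonacciMemo function and its cache parameter are illustrative additions, not part of the original post). Caching each computed value turns the same recurrence into an O(n) algorithm, because every value from 0 to n is computed only once:

int FibonacciMemo(int number, vector<int>& cache) {
   if (number <= 1) return number;
   if (cache[number] != -1) return cache[number];  // Reuse the stored result.
   cache[number] = FibonacciMemo(number - 2, cache) + FibonacciMemo(number - 1, cache);
   return cache[number];
}

// Usage: vector<int> cache(n + 1, -1);  // -1 marks "not yet computed"
//        FibonacciMemo(n, cache);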

You should make a habit of thinking about the time complexity of algorithms as you design them. Asymptotic analysis is a powerful tool, but use it wisely: optimizing runtime can come at the cost of readability or implementation time. Whether you like it or not, an effective engineer knows how to strike the right balance between runtime, space, implementation time, maintainability, and readability.

— —

If you enjoyed this piece, I’d love it if you hit the clap button 👏 so others might stumble upon it. You can find my own code on GitHub, and more of my writing and projects at https://jameskle.com/. You can also follow me on Twitter, email me directly or find me on LinkedIn.
