Algorithm Series Part I: Big O Notation

Jacob Kagon · Geek Culture · Apr 2, 2021

An important concept in computer science when you're trying to understand algorithms is big O notation. Big O notation describes how an algorithm's running time grows as the size of its input changes.

Big O was introduced by the German mathematicians Paul Bachmann and Edmund Landau and is an asymptotic notation, most often used to describe the worst-case running time of an algorithm. When thinking about the run-time of a program, we can break it down into three categories: best-case, worst-case, and average-case. We are mostly interested in identifying either the worst-case or the average-case, since that gives us a reliable baseline for the computational speed of our program. This is where understanding big O comes in handy.
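As a rough illustration of the three cases (the function name linearSearch and the sample data are my own, not from the original post), consider searching an array one element at a time:

// A hypothetical linear search, used only to illustrate best vs. worst case.
function linearSearch(array, target) {
  for (let i = 0; i < array.length; i++) {
    if (array[i] === target) {
      return i; // found it
    }
  }
  return -1; // not found
}

linearSearch([4, 8, 15, 16], 4);  // best case: target is first, 1 comparison
linearSearch([4, 8, 15, 16], 99); // worst case: target is absent, n comparisons

The best case finishes after a single comparison, while the worst case has to look at every element, which is why we usually describe this kind of search by its worst-case behavior.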

There are a handful of big O notations computer scientists use to describe the worst-case runtime of a program. For the sake of brevity, we’ll only discuss a few of them here.

I. Constant Time Complexity

The first big O notation worth mentioning is O(1), also known as constant time complexity. The O in front of the parentheses denotes big O, and the 1 denotes constant time.

In constant time, as the input size grows, the execution time stays the same. Here's an example of a constant-time algorithm in code:

function evenOrOdd(number) {
  if (number % 2 === 0) {
    return "even";
  } else {
    return "odd";
  }
}

To determine the time complexity of this algorithm, we have to look at how many times each expression is executed. Here’s the algorithm again:

function evenOrOdd(number) {
  if (number % 2 === 0) {   // executed once
    return "even";          // executed once (when number is even)
  } else {
    return "odd";           // executed once (when number is odd)
  }
}

evenOrOdd(4) // => "even"
evenOrOdd(7) // => "odd"

In this example, we can see that regardless of what number we pass in as an argument, each expression still executes only once. Because of that, we can determine that the time complexity of this algorithm is O(1), or constant time complexity.

Algorithms that run in constant time are very fast, because their running time is not affected by the size of the input. As we discuss more big O notations, you'll start to see how an algorithm's input size can really slow a program down.
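Here is one more constant-time operation (my own example, not from the original post): reading an element of an array by index takes the same amount of work no matter how long the array is.

// Indexing into an array is O(1): one step regardless of array length.
function firstElement(array) {
  return array[0]; // executed once, no matter how long the array is
}

firstElement([1, 2, 3]);          // => 1
firstElement(new Array(1000000)); // => undefined (empty slots), still one step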

II. Linear Time Complexity

Another big O notation worth mentioning is O(n), also known as linear time complexity. We've already discussed the meaning of O, but n denotes the size of the algorithm's input. Below, we can see an example of an algorithm that runs in O(n) time. Try to figure out why that is.

function repeatStr(n, s) {
  let string = '';
  for (let i = 0; i < n; i++) {
    string += s;
  }
  return string;
}

Hopefully, you were able to count the number of times each expression ran. This was sort of a trick question as I never explained what happens when we don’t know how many times an expression will run. Let’s take another look.

function repeatStr(n, s) {
  let string = '';              // executed once
  for (let i = 0; i < n; i++) { // executed once
    string += s;                // executed n times
  }
  return string;                // executed once
}

repeatStr(3, "hello") // => "hellohellohello"
// O(n)

When looking at this algorithm, we can see that there is a difference compared with the constant-time algorithm: mainly, the introduction of the loop. The number of times the loop body runs depends on the size of n. Because the runtime increases proportionally with the input size, this algorithm runs in linear time, O(n).

III. Quadratic Time Complexity

Quadratic time complexity, or O(n²), is a common time complexity where the running time grows with the square of the input size. This makes for a less than ideal runtime, but it is worth mentioning because it is fairly common, especially in some sorting algorithms.

function bubbleSort(array) {
  let isSorted = false;
  let counter = 0;
  while (!isSorted) {
    isSorted = true;
    for (let i = 0; i < array.length - 1 - counter; i++) {
      if (array[i] > array[i + 1]) {
        swap(i, i + 1, array);
        isSorted = false;
      }
    }
    counter++;
  }
  return array;
}

function swap(i, j, array) {
  let temp = array[j];
  array[j] = array[i];
  array[i] = temp;
}

The above code is a sorting algorithm known as bubble sort. It shares some characteristics with the algorithms shown earlier, but with one major change: there are two loops, and one loop is inside the other. When we go back to counting how many times each expression runs, we can see that in the worst case the expressions inside the if-statement run on the order of n² times, because they are nested inside two loops. If the if-statement were inside only one loop, the time complexity would be O(n), because the inner expressions would execute n times. The fact that the loops are nested means those expressions execute on the order of n² times, so the algorithm is O(n²).
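To make that counting concrete, here is a stripped-down sketch (my own example, not part of the bubble sort above) where the innermost line clearly runs n × n times:

// A bare nested loop: the inner statement runs n * n times, so this is O(n²).
function printAllPairs(array) {
  for (let i = 0; i < array.length; i++) {   // runs n times
    for (let j = 0; j < array.length; j++) { // runs n times for each i
      console.log(array[i], array[j]);       // runs n * n times in total
    }
  }
}

printAllPairs([1, 2, 3]); // logs 9 pairs for an input of length 3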

An important note, however: if an algorithm has more than one loop, but the loops are not nested, the run-time is typically still O(n), because the expressions inside each loop execute only n times. So if we had 1n + 1n = 2n, we drop the coefficient and are left with just n.
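A quick sketch of that idea (again my own example, with a made-up minAndMax function): two loops that run one after another rather than nested.

// Two separate loops over the same input: n + n = 2n steps, which simplifies to O(n).
function minAndMax(array) {
  let min = Infinity;
  for (let i = 0; i < array.length; i++) { // runs n times
    if (array[i] < min) min = array[i];
  }
  let max = -Infinity;
  for (let i = 0; i < array.length; i++) { // runs n times again
    if (array[i] > max) max = array[i];
  }
  return [min, max];
}

minAndMax([3, 7, 1, 9]); // => [1, 9], still linear time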

Summary

I hope you've learned a bit more about algorithmic run-time. I've only touched on the basics, but this should give you a solid foundation for finding the time complexity of an algorithm. Just remember to focus on the number of times expressions execute, and on common patterns for identifying big O quickly, such as whether the code uses a single loop or nested loops.
