Confused by Big O Notation? A Newbie’s Guide to Understanding It Once and For All.

Yuri Bett
7 min read · Sep 8, 2023


Image from https://www.bigocheatsheet.com/

Big O notation is a theoretical concept used in computer science to describe the performance of an algorithm, specifically its time complexity (how the running time of an algorithm grows as the size of the input increases) and sometimes its space complexity (how the amount of memory used grows as the size of the input increases). At its core, Big O notation gives an upper bound on how quickly the running time can grow, usually analyzed for the worst-case scenario.

Let’s break it down with some real-world and coding examples!

Real-World Analogy:

Imagine you’re looking for a word in a book.

  1. If you know the exact page and location of the word, you’ll just flip to the page and find it immediately. This is the fastest method and is analogous to O(1), often called “constant time.”
  2. If you don’t know where the word is, you might start reading the book page by page until you find it. If the word is on the last page or isn’t there at all, you’ll have gone through the entire book. In the worst-case scenario, if the book has 100 pages, you’ve read through all 100 pages. This is analogous to O(n), known as “linear time.”
  3. Now, let’s say that for every page you read, you also compare it against every other page in the book, perhaps to look for repeated sentences. For a 100-page book, that is roughly 100 × 100 comparisons, so the amount of work grows with the square of the number of pages. This is analogous to O(n²), or “quadratic time.”

Here are some examples of each of the main Big O notations:

Constant Time — O(1)

function getFirstItem(array) {
  return array[0];
}

No matter how big the input array is, this function simply retrieves the first element, making its performance constant.

Linear Time — O(n)

function findItem(array, item) {
  for (let i = 0; i < array.length; i++) {
    if (array[i] === item) {
      return true;
    }
  }
  return false;
}

In the worst case, this function might have to check every element of the array to find the item.

Quadratic Time — O(n²)

function containsDuplicates(array) {
  for (let i = 0; i < array.length; i++) {
    for (let j = i + 1; j < array.length; j++) {
      if (array[i] === array[j]) {
        return true;
      }
    }
  }
  return false;
}

This function checks each pair of elements in the array to see if there are any duplicates. As the array grows, the number of pairs we have to check grows quadratically.

Okay, those were the most common complexities and the ones we deal with most on a daily basis. The next ones are important too, especially when you are reasoning about worst cases. Toward the end of this article, I will explain how we can trade memory for speed to reach some of the faster complexities.

Logarithmic Time — O(log n)

Often found in algorithms that cut the remaining problem in half (or by some other constant factor) with each step, most notably binary search. Keep in mind that binary search only works on a sorted array.

Example: Binary Search

function binarySearch(array, item) {
  let low = 0;
  let high = array.length - 1;

  while (low <= high) {
    // Look at the middle of the remaining range
    let mid = Math.floor((low + high) / 2);
    let guess = array[mid];

    if (guess === item) {
      return mid;
    }

    // Discard the half that cannot contain the item
    if (guess < item) {
      low = mid + 1;
    } else {
      high = mid - 1;
    }
  }

  return null; // Item not found
}
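To make the behavior concrete, here is a quick usage sketch of my own (not part of the original example). The array must already be sorted for the halving to work:

const sortedNumbers = [1, 3, 5, 7, 9, 11];

console.log(binarySearch(sortedNumbers, 7)); // 3 (the index of 7)
console.log(binarySearch(sortedNumbers, 4)); // null (not in the array)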

Linearithmic Time — O(n log n)

Common in efficient comparison-based sorting algorithms such as mergesort and heapsort, and in quicksort for the average case.

Example: Simple Merge Sort

function mergeSort(array) {
  if (array.length <= 1) {
    return array;
  }

  // Split the array in half and sort each half recursively
  const middle = Math.floor(array.length / 2);
  const left = mergeSort(array.slice(0, middle));
  const right = mergeSort(array.slice(middle));

  return merge(left, right);
}

function merge(left, right) {
  let result = [];
  let leftIndex = 0;
  let rightIndex = 0;

  // Repeatedly take the smaller of the two front elements
  while (leftIndex < left.length && rightIndex < right.length) {
    if (left[leftIndex] < right[rightIndex]) {
      result.push(left[leftIndex]);
      leftIndex++;
    } else {
      result.push(right[rightIndex]);
      rightIndex++;
    }
  }

  // Append whatever is left over from either half
  return result.concat(left.slice(leftIndex)).concat(right.slice(rightIndex));
}
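A quick usage sketch, added here for illustration:

console.log(mergeSort([5, 2, 9, 1])); // [1, 2, 5, 9]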

Exponential Time — O(2^n)

Often found in recursive algorithms that solve a problem of size n by recursively solving two (or more) slightly smaller subproblems, redoing a lot of the same work along the way.

Example: Recursive Fibonacci

function fibonacci(n) {
  if (n <= 1) {
    return n;
  }
  return fibonacci(n - 1) + fibonacci(n - 2);
}
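To see why this blows up, you can count the calls. The helper below is a hypothetical addition of mine (not part of the original example) that mirrors the recursion and tallies how many times fibonacci would be invoked:

function countFibonacciCalls(n) {
  if (n <= 1) {
    return 1; // a single call, no recursion
  }
  // one call at this level, plus everything its two children trigger
  return 1 + countFibonacciCalls(n - 1) + countFibonacciCalls(n - 2);
}

console.log(countFibonacciCalls(10)); // 177
console.log(countFibonacciCalls(20)); // 21,891
console.log(countFibonacciCalls(30)); // 2,692,537

The call count multiplies by a constant factor every time n goes up by one, which is exactly the kind of growth that exponential notation describes.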

Factorial Time — O(n!)

Found in algorithms that generate every possible ordering of a set, like brute-force solutions to the infamous Traveling Salesman Problem. These algorithms become impractically slow very quickly as the input size increases.

Example: Generating All Permutations of a String

function getAllPermutations(string) {
  let results = [];

  if (string.length === 1) {
    results.push(string);
    return results;
  }

  for (let i = 0; i < string.length; i++) {
    // Fix one character and permute whatever remains
    let firstChar = string[i];
    let charsLeft = string.substring(0, i) + string.substring(i + 1);
    let innerPermutations = getAllPermutations(charsLeft);

    for (let j = 0; j < innerPermutations.length; j++) {
      results.push(firstChar + innerPermutations[j]);
    }
  }

  return results;
}
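A short usage sketch, added for illustration, shows the factorial blow-up: a string of n distinct characters yields n! permutations.

console.log(getAllPermutations("abc").length);      // 6 (3!)
console.log(getAllPermutations("abcd").length);     // 24 (4!)
console.log(getAllPermutations("abcdefgh").length); // 40320 (8!)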

Memory Trade-offs in Algorithms: Using Space to Save Time

In computer science, performance optimization often comes down to a balancing act between time and space — that is, between the speed of an algorithm (how fast it computes) and the memory it uses. Sometimes, by using a bit more memory, we can dramatically speed up computations. This trade-off can be essential, especially when dealing with large datasets or time-sensitive operations.

Why Use More Memory?

Memory is relatively cheap nowadays. If using a bit more memory can make our algorithm significantly faster, especially in scenarios where speed is of the essence, it’s often a worthwhile trade-off.

For example, caching is a technique where we store the results of expensive function calls and return the cached result when the same inputs occur again. This technique sacrifices memory to improve speed.
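As a minimal sketch of this idea, here is a memoized version of the Fibonacci function from the exponential-time section above (the cache parameter is my own addition for illustration):

function fibonacciMemo(n, cache = {}) {
  if (n <= 1) {
    return n;
  }
  if (cache[n] !== undefined) {
    return cache[n]; // reuse a result we already computed
  }
  cache[n] = fibonacciMemo(n - 1, cache) + fibonacciMemo(n - 2, cache);
  return cache[n];
}

console.log(fibonacciMemo(40)); // 102334155, computed almost instantly

By spending O(n) extra memory on the cache, the running time drops from exponential O(2^n) to linear O(n), because each value is computed only once.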

The Trade-offs of Using More Memory

While using additional memory can lead to faster algorithms, it’s crucial to be aware of potential pitfalls:

  1. Hardware Limitations: Not all devices have abundant memory. An algorithm that runs well on a powerful server might not run as efficiently on a mobile device or an IoT gadget with limited RAM.
  2. Memory Leaks: Using more memory might make your program more susceptible to memory leaks, especially if you’re not managing memory properly.
  3. Cost: In cloud environments, while memory might be “cheap,” it’s not free. Using excessive memory can increase costs.

Decreasing Time Complexity with Memory

Using more memory can sometimes decrease the time complexity of an algorithm. Here’s a practical example:

Problem: Imagine you have a list of 10,000 products, and you want to quickly check if a product exists in the list.

Solution Without Extra Memory (O(n)):

function productExists(products, product) {
  for (let i = 0; i < products.length; i++) {
    if (products[i] === product) {
      return true;
    }
  }
  return false;
}

This solution checks each product in the list one by one. In the worst-case scenario, it has a linear time complexity, O(n).

Solution With Extra Memory (O(1)):

// `products` is the array of 10,000 product names from the problem above
let productSet = new Set(products);

function productExistsFast(product) {
  return productSet.has(product);
}

By using a JavaScript Set, which is backed by a hashing mechanism, we can check whether a product exists in constant time on average, O(1). The trade-off is that we have used additional memory to store the Set.
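A quick usage sketch, assuming `products` is the same array from the problem above and using a hypothetical product name for illustration:

console.log(productExists(products, "wireless-mouse")); // O(n): scans the array
console.log(productExistsFast("wireless-mouse"));       // O(1) on average: hash lookup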

Optimizing algorithms is as much an art as it is science. While Big O notation provides a lens to view and measure the time efficiency of our solutions, the practical aspects, like memory usage, bring another layer of complexity. As developers, our goal isn’t just to write code that works, but to sculpt solutions that strike the right balance between speed and space, always keeping in mind the specific needs and constraints of our applications.

In the Grand Tapestry of Computation: Why Big O Matters

As we journey through the vast realms of coding and algorithms, we often hear the echoing question: How does my code stand against the tides of time and data? The answer, more often than not, lies in the poetic language of Big O notation.

Big O, in its essence, is a compass. It doesn’t just tell us where we are, but it provides insight into the journey ahead — the pitfalls of inefficiencies and the rewards of optimized paths. By understanding these complexities, from the swift O(1) to the daunting O(n!), we’re not just becoming better programmers; we’re evolving into computational storytellers. We gain the capability to weave narratives of code that don’t just solve problems, but do so with grace, efficiency, and flair.

In a world brimming with data, where the stakes of efficiency rise every day, the mastery of Big O becomes a beacon. It’s more than just a measure; it’s a philosophy, a mindset, a critical tool in the coder’s arsenal. The difference between a solution and an elegant solution often resides in its complexity.

So the next time you pen down an algorithm or choose a method to solve a problem, let Big O be your guiding star. Remember, in the vast sea of computation, it’s not just about reaching the destination, but also about the voyage — and how gracefully you sail through it!

If you want a comprehensive cheat sheet for every possibility, don't hesitate to check this website: https://www.bigocheatsheet.com/. I find myself sometimes going back to it and using it as a reference.


Yuri Bett

Senior Software Engineer | Technical Lead | Technical Writer - I love everything about Javascript, React.js, Next.js, and Node.js