# Coding Interview: Big O Notation in a Nutshell

**Big O** notation describes the **complexity** of an algorithm as a function of the size of an input. Complexity can be about performance (runtime complexity) or space (amount of memory used).

**Big O** is an asymptotic notation for the **worst case**, or the ceiling of growth, of a given function. It is not about finding an exact value but about describing the worst-case scenario as closely as possible. For instance,

**printing all the elements of an array with N elements** needs **O(N)** time, and **searching linearly for an element in an array with N elements** is also **O(N)**.

Why? Because the worst-case scenario is to make N comparisons when searching, reviewing each element one time, just as we did when printing.
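The linear search just described can be sketched as follows (a minimal sketch of my own; the method and class names are not from the text):

```java
public class LinearSearchDemo {
    // Worst case: the key is absent or in the last position,
    // forcing n comparisons -> O(n).
    static int linearSearch(int[] arr, int key) {
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] == key) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] data = {4, 8, 15, 16, 23, 42};
        System.out.println(linearSearch(data, 42)); // last element: n comparisons
        System.out.println(linearSearch(data, 7));  // absent: still n comparisons
    }
}
```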

Typical values for Big O are shown in Figure 1 for input sizes 0 to 100, just as an example. As can be seen, there are three evident zones: Big O values that stay low even for large input sizes, Big O values that are high even for small input sizes, and Big O values that grow *almost* linearly with the input size.

Now, the question is how to calculate the Big O given the source code of an algorithm. Let us focus on runtime complexity, i.e., performance. The following sections describe the diverse possibilities.

## 1. Big O for Instructions and Conditions

First, **Big O** for an instruction, including conditionals, is a constant **O(1)**. The algorithm below (example 1) has a complexity of **O(1)**.

```java
// Example 1
void sayHello() {
    System.out.println("Hello!");
}
```

The same rule applies to conditions. The method below (example 2) has constant complexity. For practicality, even though we have two instructions, the complexity is **O(1)**.

```java
// Example 2
void isEven(int number) {
    if (number % 2 == 0) {
        System.out.println("It is even!");
    }
}
```

Key idea: **O(1)** is typical for simple instructions, including conditions.

## 2. Big O for Loops

The Big O for a loop answers the question of how many times the loop body will run. A simple loop running n times, such as the algorithm below (example 3), has a complexity of **O(n)**.

```java
// Example 3
void printNumbers(int n) {
    for (int i = 0; i < n; i++) {
        System.out.println(i);
    }
}
```

Key idea: **O(n)** is typical for simple single loops.

## 3. Runtime Drop Constants and Non-Dominant Terms

What about an algorithm with a combination of instructions and diverse independent loops such as the one below (example 4)?

```java
// Example 4
void doThis(int input, int n) {
    // single instruction
    System.out.println("Hello World!");

    // condition
    if (input % 2 == 0) {
        System.out.println("This is even!");
    }

    // loop 1
    for (int i = 0; i < n; i++) {
        System.out.println(i);
    }

    // loop 2
    for (int i = 0; i < n; i++) {
        System.out.println(i);
    }
}
```

If you are thinking of an addition, you are right. The equation that describes the runtime could be something like (1 + 2 + n + n) or (3 + 2n). However, when expressing Big O, keep these key ideas in mind:


- **Drop the constants.** For instance, O(2n) is O(n).
- **Drop the non-dominant terms.** For instance, O(1 + n) becomes O(n), O(n² + n) becomes O(n²), O(n + log n) becomes O(n), and so on.

Therefore, our code (example 4) is just **O(n).**
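To make the (3 + 2n) count concrete, here is a small instrumented sketch (my own, not from the text) that tallies the operations of example 4, assuming the worst case in which the condition's body also runs:

```java
public class OperationCounter {
    // Tallies the operations in example 4 assuming the worst case
    // (the condition is true, so its body executes too).
    static int countOperations(int n) {
        int count = 0;
        count++; // single instruction
        count++; // condition check
        count++; // print inside the condition (worst case)
        for (int i = 0; i < n; i++) count++; // loop 1 body
        for (int i = 0; i < n; i++) count++; // loop 2 body
        return count; // 3 + 2n
    }

    public static void main(String[] args) {
        System.out.println(countOperations(100)); // 203 = 3 + 2 * 100
    }
}
```

The exact constant does not matter: after dropping constants and non-dominant terms, only the 2n part survives as O(n).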

As another example, consider a loop such as the following (example 5) that does not increment by 1:

```java
// Example 5
void doThis(int n) {
    for (int i = 0; i < n; i = i + 2) {
        System.out.println(i);
    }
}
```

It runs n/2 times, so its complexity would be O(n/2) or O(0.5n); but after applying our rule (**drop the constants**), we have a complexity of **O(n)** for the code given above (example 5).

Key idea: use **addition** to combine the complexities of the diverse sections of the algorithm, but **drop constants** and **non-dominant terms**.

## 4. Runtime Big O for Loops that Change the Problem Space

What about Big O for a searching algorithm such as the binary search (example 6)?

```java
// Example 6
int binarySearch(int[] sortedArray, int key, int low, int high) {
    int index = Integer.MAX_VALUE;
    while (low <= high) {
        int mid = (low + high) / 2;
        if (sortedArray[mid] < key) {
            low = mid + 1;
        } else if (sortedArray[mid] > key) {
            high = mid - 1;
        } else if (sortedArray[mid] == key) {
            index = mid;
            break;
        }
    }
    return index;
}
```

It is just a loop, but remember that the number of elements in the problem space gets halved in every iteration. How many times can a number be halved? The answer is **log n**. For instance, an array of size 10 starts comparing the element in position 5, then 2, then 1; that is a loop running 3 times. Thus, **O(log n)**. We are talking about logarithm base 2. However, the base of the logarithm does not really matter, because the difference between a logarithm in base *a* and a logarithm in base *b* is a constant factor. Yes, logₐn = K logᵦn, and we drop the constants when expressing Big O.

Math Note:

logᵦn = logₐn / logₐb
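To see the log n behavior concretely, this small sketch (mine, not part of the original) counts how many times n can be halved before reaching 1:

```java
public class HalvingDemo {
    // Halves n until it reaches 1; the count is floor(log2(n)).
    static int halvings(int n) {
        int count = 0;
        while (n > 1) {
            n = n / 2;
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(halvings(8));    // 3, since log2(8) = 3
        System.out.println(halvings(1024)); // 10, since log2(1024) = 10
    }
}
```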

A similar case to consider is shown below (example 7). Notice how the variable changes in the loop:

- if n=5, it will print 2, then stop
- if n=10, it will print 2, 3, then stop
- if n=20, it will print 2, 3, 4, then stop
- if n=30, it will print 2, 3, 4, 5, then stop

And so on. It runs as many times as there are numbers *x* such that *x² ≤ n*. That sounds like the definition of the square root! Correct: this is **O(√n)**.

```java
// Example 7
public void method(int n) {
    for (int x = 2; x * x <= n; x++) {
        System.out.println(x);
    }
}
```
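One way to convince yourself of the √n growth is to count the iterations of example 7's loop (a quick sketch of my own):

```java
public class SqrtLoopCounter {
    // Counts iterations of the loop from example 7: it runs while x * x <= n.
    static int countIterations(int n) {
        int count = 0;
        for (int x = 2; x * x <= n; x++) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countIterations(100));   // 9: x runs from 2 to 10
        System.out.println(countIterations(10000)); // 99: x runs from 2 to 100
    }
}
```

Multiplying n by 100 only multiplied the iteration count by about 10, exactly the √n behavior.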

Notice: **O(log n)** is typical for algorithms that halve the problem space in every iteration. Review the stop conditions in loops carefully; they are not always O(n), and you could identify other values, including **O(√n)**.

## 5. Runtime Big O for Recursive Algorithms

Big O for a recursive implementation is similar to Big O for loops: it answers the question of how many times the body will be executed. Simple recursive functions have the same Big O as their loop equivalents. For instance, the recursive factorial shown below (example 8) will run n times; therefore, it is **O(n).**

```java
// Example 8
public long factorial(int n) {
    if (n <= 1) {
        return 1;
    }
    return n * factorial(n - 1);
}
```

What about having more than one recursive call, such as in the classical recursive Fibonacci number calculation shown below (example 9)?

```java
// Example 9
int fibonacci(int n) {
    if (n <= 1)
        return n;
    return fibonacci(n - 1) + fibonacci(n - 2);
}
```

It will create a tree. One call will make two, and each of those two will make its own two, and so on, creating a tree as shown in figure 2.

Hence, each node in the tree represents one execution of the function, so the total number of nodes in the tree is the total number of times the function runs. That number of nodes is 1 + 2 + 4 + 8 + … + 2ᵈ⁻¹, where *d* is the depth of the tree. It is a geometric progression, and there is an equation to solve it.

Math Note:

B⁰ + B¹ + B² + B³ + … + Bᵈ⁻¹ = (Bᵈ − 1) / (B − 1)

Therefore, the complexity is (Bᵈ − 1) / (B − 1). However, we drop constants and non-dominant terms; in consequence, for a recursive function that makes multiple calls, the Big O is usually something like **O(Bᵈ)**, where B is the number of branches per call and d is the depth of the recursive tree. In the example above (example 9), each call creates two branches, and the depth of the tree is n. Thus, our recursive Fibonacci is **O(2ⁿ)**. Remember, we are expressing the worst-case scenario, thus the case in which the tree has all its nodes.
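A quick way to verify the exponential growth is to instrument the Fibonacci function from example 9 with a call counter (my own sketch, not from the text):

```java
public class FibonacciCallCounter {
    static long calls = 0;

    // Same recursion as example 9, but counting invocations.
    static int fibonacci(int n) {
        calls++;
        if (n <= 1)
            return n;
        return fibonacci(n - 1) + fibonacci(n - 2);
    }

    public static void main(String[] args) {
        calls = 0;
        fibonacci(10);
        System.out.println(calls); // 177 calls for n = 10; the count roughly doubles per +1
    }
}
```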

What about a runtime Big O for the code below (example 10)?

```java
// Example 10
int sum(Node node) {
    if (node == null)
        return 0;
    return sum(node.left) + node.value + sum(node.right);
}
```

A little tricky. There are 2 branches per call; therefore, *B = 2*. What about the depth? It looks like a method for a tree, so the question is: having a balanced tree with n nodes, what is its depth? The answer is *d = log n*. Therefore, Bᵈ = 2^(log n) = n, and our algorithm is **O(n)**. Wait, what if every node is inserted at a different level, so they form a line? That is also **O(n)**, since the function still visits each node exactly once.

Notice: **O(Bᵈ)** is common when you have a recursive function that makes multiple calls (branches) each time.

## 6. Runtime Big O for Nested Loops

Nested loops are interesting. For instance, what is the Big O for the Bubble Sort implementation shown below (example 11)? For an array with n elements, the outer loop runs n times, and the inner loop runs up to n times for each iteration of the outer loop, so **O(n*n)**, or **O(n²)**.

```java
// Example 11
void bubbleSort(int[] arr) {
    for (int k = 0; k < arr.length - 1; k++) {
        for (int i = 0; i < arr.length - 1 - k; i++) {
            if (arr[i] > arr[i + 1]) {
                swap(arr, i, i + 1);
            }
        }
    }
}
```

You may have noticed the -k and thought that the inner loop does not run n times. The first time it will run n-1 times, then n-2, n-3, n-4, etc. Consequently, the total number of times the code executes is not n * n but (n-1) + (n-2) + (n-3) + … + 1.

Math Note:

(n-1) + (n-2) + (n-3) + … + 1 = n * (n-1) / 2

And by dropping constants and non-dominant terms, that results in **O(n²).**
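We can check the n * (n-1) / 2 count by tallying the comparisons of example 11's loops (an instrumented sketch of my own):

```java
public class BubbleSortCounter {
    // Counts the comparisons the bubble sort in example 11 performs
    // on an array of n elements: (n-1) + (n-2) + ... + 1 = n * (n-1) / 2.
    static int countComparisons(int n) {
        int count = 0;
        for (int k = 0; k < n - 1; k++) {
            for (int i = 0; i < n - 1 - k; i++) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countComparisons(10));  // 45 = 10 * 9 / 2
        System.out.println(countComparisons(100)); // 4950 = 100 * 99 / 2
    }
}
```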

What if we have two inputs instead of one array, such as in the typical matrix multiplication code shown below (example 12)?

```java
// Example 12
public static int[][] multiplication(int[][] A, int[][] B) {
    int[][] C = new int[A.length][B[0].length];
    for (int i = 0; i < A.length; i++) {
        for (int j = 0; j < B[0].length; j++) {
            C[i][j] = 0;
        }
    }
    for (int i = 0; i < A.length; i++) {
        for (int j = 0; j < B[0].length; j++) {
            for (int k = 0; k < A[0].length; k++) {
                C[i][j] += A[i][k] * B[k][j];
            }
        }
    }
    return C;
}
```

There are two parts. First, the resulting matrix is initialized to zeros; that part executes as many times as rows in matrix A times columns in matrix B. Next comes the multiplication part, which runs as many times as rows in matrix A times columns in matrix B times columns in matrix A. Figure 3 highlights the matrix multiplication steps for reference.

Thus, O(aᵣ*b𝒸 + aᵣ*b𝒸*a𝒸). Nevertheless, since we drop the non-dominant terms, it is **O(aᵣ*b𝒸*a𝒸)**. A key idea: we can have more than one input, and therefore more than one variable controlling the Big O equation. Be aware that this is NOT n² or n³.
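The multi-variable count can be verified by tallying the multiplications in example 12's triple loop (a sketch of mine; the dimension parameter names are my own):

```java
public class MatrixOpsCounter {
    // Counts multiply-add steps in example 12's triple loop:
    // rows of A (aRows) * columns of B (bCols) * columns of A (aCols).
    static int countMultiplications(int aRows, int aCols, int bCols) {
        int count = 0;
        for (int i = 0; i < aRows; i++) {
            for (int j = 0; j < bCols; j++) {
                for (int k = 0; k < aCols; k++) {
                    count++;
                }
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countMultiplications(2, 3, 4)); // 24 = 2 * 4 * 3
    }
}
```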

Notice: When dealing with nested loops, be aware of the inputs. You could have a case of O(n*n), which is O(n²), or a case with multiple inputs such as O(n*m). They are not the same.

## 7. Runtime Big O for Loops and Recursion

What if we put together loops and recursion? The complexity is not so different to evaluate than before. However, it is important to keep an eye on the stop conditions. Let us consider the example below (example 13).

```java
// Example 13
private static void permute(String str, String permutedStr) {
    if (str.length() == 0) {
        System.out.println(permutedStr);
    } else {
        for (int i = 0; i < str.length(); i++) {
            String newStr = str.substring(0, i)
                    + str.substring(i + 1, str.length());
            permute(newStr, permutedStr + str.charAt(i));
        }
    }
}
```

You can identify the problem: printing all permutations of the characters in a string. What is the Big O for this?

It looks like something we saw before, something like **O(Bᵈ)**, but it is not: the number of branches is not constant. At every level, we make fewer branches. We have:

- 1 node at the top
- n nodes in the second level
- n*(n-1) nodes in the next level
- n*(n-1)*(n-2) nodes in the next level
- and so on, until n*(n-1)*(n-2)*(n-3)… * 1

This can be expressed as:

1 + n + n*(n-1) + n*(n-1)*(n-2) + …. +n*(n-1)*(n-2)*(n-3)… * 1

And, with some mathematical magic, that is the same as:

n!/n! + n!/(n-1)! + n!/(n-2)! + n!/(n-3)! + … + n!/(n-n)!

Which is the same as:

n! * (1/n! + 1/(n-1)! + 1/(n-2)! + … + 1/2! + 1/1! + 1/0!)

And that is approximately

n! * e

Math Note:

e ≈ 1/0! + 1/1! + 1/2! + 1/3! + … + 1/n!, with the approximation improving as n grows.

Euler’s number (e) is a mathematical constant approximately equal to 2.71828. So, our Big O is O(n! * e), and since we drop constants, we get **O(n!)**.

Well, one last thing: what about those **substring** calls inside the for loop? They are O(n), and we perform them before every recursive call, so our runtime complexity is **O(n * n!)**.
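To check the n! count, here is example 13 instrumented to count the completed permutations instead of printing them (my own variation):

```java
public class PermutationCounter {
    static int count = 0;

    // Same recursion as example 13, but counting completed permutations.
    static void permute(String str, String permutedStr) {
        if (str.length() == 0) {
            count++;
        } else {
            for (int i = 0; i < str.length(); i++) {
                String newStr = str.substring(0, i)
                        + str.substring(i + 1, str.length());
                permute(newStr, permutedStr + str.charAt(i));
            }
        }
    }

    public static void main(String[] args) {
        count = 0;
        permute("abcd", "");
        System.out.println(count); // 24 = 4! permutations of "abcd"
    }
}
```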

Notice: Be aware of **O(n!)** for permutations.

## Big O for Common Algorithms

As a piece of advice, keep in mind the Big O values for the diverse versions of sorting and searching algorithms. I mentioned linear and binary search before:

- Linear search is **O(n)**
- Binary search is **O(log n)**

Also, in the examples above, we reviewed the bubble sort algorithm and explained that its Big O is **O(n²)**. What about quick sort or merge sort?

- Quicksort is **O(n²)** in the worst case (O(n log n) on average)
- Mergesort is **O(n log n)**

Now, test yourself. It is time to practice.

- Remember the implementation using recursion for traversing a binary tree (pre-order, post-order, in-order). What is the Big O for these?
- What about common data structure operations, such as insert a new element in a Binary Search tree? What about inserting a new element in an AVL tree? What about inserting a new element in a Queue or Stack?
- What is the Big O for Dijkstra’s algorithm?