**Getting to know Time Complexities and Big-O Notation**

If two algorithms output the same result, what makes one algorithm superior over the other?

Performance.

As a junior developer, I initially focused on simply producing the correct result and passing the specs, without much concern for how I got there, thinking: *the logic worked, right? It should be fine.*

But no!

It was only when I learned the concept of **time complexity** that I understood not only that there are multiple ways to solve a problem, but that the subtleties of each method greatly affect its performance, making some more efficient than others.

Understanding this concept is vital to optimizing the performance of your algorithms. This post will therefore go through some basic definitions and examples, and highlight some tips for identifying which category your algorithm falls into.

Side note: when optimizing algorithm performance, **space complexity** is also taken into consideration. This is a measure of the **amount of memory space** taken up by your algorithm; the less used, the better. While it is an important factor, this post will focus mainly on time complexity.

So, what is **time complexity**?

The basic concept of time complexity is to approximate the **worst-case scenario** for the **runtime** of your algorithm as the size of its input grows.

The measure of time complexity is broken up into different categories. Of these, there are 5 main categories that will be addressed in this post:

- Constant: O(c) or O(1)
- Logarithmic: O(log n)
- Linear: O(n)
- Quadratic: O(n²)
- Exponential: O(c^n)

You’ll notice that alongside these categories is what looks like an invocation of some function, ‘O’. This is termed **Big-O notation**. It is a convenient convention for referring to each category, and it mathematically describes how an algorithm will perform for different values of n, where n refers to the size (usually the length) of your input.

It is important to note that when exploring time complexities and Big-O, we are only concerned with how these perform at large values of n, especially when comparing them to one another.
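To see why only large values of n matter, here is a quick sketch (the counts are illustrative, not exact runtimes) comparing rough operation counts for each category at different input sizes:

```javascript
// Rough operation counts for each complexity class at a given input size n.
// The absolute numbers are illustrative; what matters is how fast each grows.
function operationCounts(n) {
  return {
    constant: 1,
    logarithmic: Math.ceil(Math.log2(n)),
    linear: n,
    quadratic: n * n,
  };
}

console.log(operationCounts(10));      // at small n, the classes look similar
console.log(operationCounts(1000000)); // at large n, the gaps become enormous
```

At n = 10 the categories are all within a couple of orders of magnitude of each other; at n = 1,000,000 the quadratic count is a trillion while the logarithmic count is only about 20.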

So, let’s start with best performance and work our way up to the worst.

**Constant Time**

*Big-O Notation: O(c) or O(1)*

This is the case where the runtime of an algorithm is not dependent on the length of the input, and is therefore **constant**. This is the **most ideal scenario** as your algorithm will perform consistently no matter the size of your input.

**Examples**

- A function that takes an array as its input and only references the first item in the array:

```javascript
function constantTime(array) {
  return array[0];
}
```

- A for-loop that only ever logs the first 3 elements of an array, setting a constant limit that does not depend on the length of the input array:

```javascript
var array = [0, 1, 2, 3, 4];

for (var i = 0; i < 3; i++) {
  console.log(array[i]);
}
```

**Logarithmic**

*Big-O Notation: O(log n)*

Here, the runtime increases at a **decreasing rate** as the input size is increased.

A good example of this is searching for a specific value on a **binary tree**.

Due to the nature of how binary trees work, at every node we want to check the following things:

- Is the current node value **equal** to the value I’m searching for?
- If not, is the value I’m searching for **greater than** the current node value? If so, we move to the **right** child node.
- If not, then it must be **less than** the current node value, and so we move to the **left** child node.

Each time either step 2 or 3 is taken, provided the tree is balanced, half of the remaining options are eliminated and will never be searched. The worst-case scenario here is in fact the height of the tree, thereby avoiding traversal of the whole tree and producing a logarithmic time complexity.

**Example**

```javascript
function searchBinaryTree(target, currentNode) {
  // ---------------------- STEP 1 ----------------------
  // check if the current node holds the target value
  if (currentNode.value === target) {
    return true;
  } else if (target > currentNode.value) {
    // -------------------- STEP 2 --------------------
    // the target value is greater than the current value,
    // so check that a right child node exists
    if (currentNode.right !== null) {
      // if so, move to the right child node and search again
      return searchBinaryTree(target, currentNode.right);
    } else {
      // if not, the node with the closest value has been reached,
      // so the tree does not contain the value
      return false;
    }
  } else {
    // -------------------- STEP 3 --------------------
    // the target value is less than the current value,
    // so check that a left child node exists
    if (currentNode.left !== null) {
      // if so, move to the left child node and search again
      return searchBinaryTree(target, currentNode.left);
    } else {
      // if not, the node with the closest value has been reached,
      // so the tree does not contain the value
      return false;
    }
  }
}
```
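The same halving idea applies outside of trees. As a sketch of another common O(log n) case (not part of the tree example above), a binary search on a sorted array discards half of the remaining range on every comparison:

```javascript
// Binary search on a sorted array: O(log n) worst case,
// because each comparison halves the remaining search range.
function binarySearch(sortedArray, target) {
  var low = 0;
  var high = sortedArray.length - 1;
  while (low <= high) {
    var mid = Math.floor((low + high) / 2);
    if (sortedArray[mid] === target) {
      return mid;      // found: return the index
    } else if (sortedArray[mid] < target) {
      low = mid + 1;   // target must be in the upper half
    } else {
      high = mid - 1;  // target must be in the lower half
    }
  }
  return -1;           // target is not in the array
}

console.log(binarySearch([1, 3, 5, 7, 9], 7)); // 3
```

Doubling the array length adds only one extra comparison in the worst case, which is exactly the "decreasing rate" of growth described above.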

**Linear**

*Big-O Notation: O(n)*

This is the case when the runtime is **directly proportional** to the size of the input.

The most likely scenario where you will encounter a linear time complexity is in **loops**. These traverse the length of your input, and so as the length of your input grows, so too does the runtime.

**Examples**

- Traversing an array using a for-loop:

```javascript
var array = [1, 2, 3, 4, 5];

for (var i = 0; i < array.length; i++) {
  console.log(array[i]);
}
```

- A function that searches an array for a specific value:

```javascript
var array = [1, 2, 3, 4, 5];

function search(n) {
  for (var i = 0; i < array.length; i++) {
    if (array[i] === n) {
      console.log(array[i]);
    }
  }
}
```

Even though the target value may be found at an earlier index, e.g. search(3), the worst-case scenario is that the target value is the last element in the array, e.g. search(5), which would require traversal of the whole array.

**Quadratic**

*Big-O Notation: O(n²)*

The runtime for these algorithms is **proportional to the square of the input size**, and therefore increases at an increasing rate with the size.

**Nested loops** are a common example of this.

```javascript
var array = [[1, 2], [3, 4], [5, 6]];

for (var i = 0; i < array.length; i++) {
  for (var j = 0; j < array[i].length; j++) {
    console.log(array[i][j]);
  }
}
```

As with linear time complexity before, each for-loop on its own has a linear time complexity, but because the loops are nested, we multiply their time complexities together: n x n = n².
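One way to convince yourself of the n x n multiplication, as a quick sketch: count how many times the inner body of a nested loop actually runs.

```javascript
// Count how many times the innermost statement of an n-by-n
// nested loop executes. The result is always n * n, i.e. O(n²).
function countNestedIterations(n) {
  var count = 0;
  for (var i = 0; i < n; i++) {
    for (var j = 0; j < n; j++) {
      count++;
    }
  }
  return count;
}

console.log(countNestedIterations(10));  // 100
console.log(countNestedIterations(100)); // 10000
```

Multiplying the input size by 10 multiplies the work by 100, which is the "increasing rate" of growth that defines quadratic time.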

**Exponential**

*Big-O Notation: O(c^n)*

This is the last category that will be covered, and it has the worst performance of the bunch. Here, the runtime **increases exponentially** with the input length.

Imagine an algorithm that must guess a password by brute force, where each character in the password has the constraint of being a lowercase letter in order to avoid special characters.

Each character in the password therefore has only 26 possibilities.

So for a password of character length 3, we would imagine 3 nested for-loops each looping over the 26 letters of the alphabet. The time complexity would be 26 x 26 x 26 = 26³.

And thus for a password of length **n**, where each character has **c** possible values, the time complexity would be **c^n**.
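The brute-force idea above can be sketched in code. This is a toy illustration, not a real cracking tool: the `isCorrect` callback and tiny alphabet are assumptions made for the example, and the recursion tries one character position at a time instead of literal nested loops.

```javascript
// Brute-force guesser: tries every combination of `alphabet` characters
// of length `length`. The worst case is alphabet.length ** length
// guesses — exponential in the password length, i.e. O(c^n).
function bruteForce(alphabet, length, isCorrect, prefix) {
  prefix = prefix || '';
  // a full-length guess has been built: test it
  if (prefix.length === length) {
    return isCorrect(prefix) ? prefix : null;
  }
  // otherwise, try every possible character at the next position
  for (var i = 0; i < alphabet.length; i++) {
    var found = bruteForce(alphabet, length, isCorrect, prefix + alphabet[i]);
    if (found !== null) {
      return found;
    }
  }
  return null; // no combination matched
}

// Toy usage: a 3-character "password" over a 3-letter alphabet.
var secret = 'cab';
console.log(bruteForce('abc', 3, function (guess) { return guess === secret; })); // 'cab'
```

With the full 26-letter lowercase alphabet and length 3, the worst case is 26³ guesses; each extra character multiplies that by another 26.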

You made it! Hopefully, this post has given you:

- A brief overview of Time Complexities and Big-O Notation,
- An explanation of the 5 main categories that you may encounter in your algorithm, and
- Some tips for easily identifying them

As I said at the beginning of this post, understanding these concepts when writing algorithms is vital to giving yourself the best chance of optimizing their performance.

Good luck and happy coding!