Big O Notation in JavaScript

“What is Big O Notation?” is a very common job interview question for developers. In short, it is a mathematical expression of how long an algorithm takes to run depending on the size of its input, usually describing the worst-case scenario.

In practice, we use Big O Notation to classify algorithms by how they respond to changes in input size, so algorithms with the same growth rate are represented with the same Big O Notation. The letter O is used because the rate of growth of a function is also called order of the function.

Why is it important to know this? Understanding Big O helps developers reason about the efficiency of their algorithms so they can build applications with good performance.

It is usually acceptable for the first version of an algorithm to be the fastest to code rather than the most efficient solution, assuming it will be replaced by a more efficient one in future versions. This is a very common practice in agile development, especially in test-driven development.

Sometimes we want to focus on using less memory instead of (or in addition to) using less time (fewer operations), and there is usually a trade-off between saving time and saving space. You can also see this kind of balance between saving read time and saving disk space, or when comparing relational (normalized) and non-relational (denormalized) databases.
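A small sketch of that time/space trade-off is memoization: spending extra memory on a cache to avoid repeating work. The names below are illustrative, not from the article:

```javascript
// Memoization trades space for time: the cache uses extra memory
// so repeated calls with the same input skip the computation.
const memoize = (fn) => {
  const cache = new Map(); // extra space...
  return (n) => {
    if (!cache.has(n)) cache.set(n, fn(n));
    return cache.get(n);   // ...buys faster repeated calls
  };
};

// Stand-in for an expensive computation.
const slowSquare = (n) => n * n;
const fastSquare = memoize(slowSquare);

fastSquare(4); // computed once
fastSquare(4); // served from the cache
```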

Let’s now explore the most common types of Big O Notation. We will use JavaScript (ES6) as our reference language, but the same principles apply to any other.

Constant-Time Algorithm

O(1) — “Order 1”

In this order, regardless of the input size (number of items), the time (number of operations) is constant.

You will see this in algorithms that return an element at an already known position of an array, regardless of its kind or length.
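A minimal constant-time example might look like this (the function name is my own, for illustration):

```javascript
// O(1): accessing a known index is a single step, no matter
// whether the array holds 3 items or 3 million.
const getFirstElement = (arr) => arr[0];

getFirstElement([9, 8, 7]); // → 9
```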


Linear-Time Algorithm

O(N) — “Order N”

In this order, the worst-case time (iterations) grows linearly with the number of items.

In other words, for N elements we will require N iterations.
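A simple sketch of a linear-time operation is a scan that may have to visit every element; the function name here is illustrative, not from the article:

```javascript
// O(N): in the worst case (target not present, or present at the
// very end) the loop runs once per element, N iterations in total.
const includesValue = (arr, target) => {
  for (const item of arr) {
    if (item === target) return true;
  }
  return false; // worst case: we checked all N elements
};
```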


Quadratic-Time Algorithm

O(N²) — “Order N squared”

For this kind of order, the worst-case time (iterations) is the square of the number of inputs: the time grows quadratically with the input size. You will typically see it in algorithms with nested loops over the same collection.
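A typical quadratic-time sketch is a naive duplicate check with nested loops (an illustrative example, not from the article):

```javascript
// O(N²): the outer loop runs N times and the inner loop runs N
// times for each of them, roughly N * N comparisons in the worst case.
const hasDuplicates = (arr) => {
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) {
      if (i !== j && arr[i] === arr[j]) return true;
    }
  }
  return false; // worst case: no duplicates, all pairs compared
};
```

Note how doubling the input quadruples the work, which is why quadratic algorithms become painful on large collections.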


Linearithmic-Time Algorithm

O(n log n) — “Order N log N”

These are the holy grail of search and sort algorithms, and they are usually the most efficient approach when dealing with large collections. Instead of examining the elements one by one, they split the data into chunks and discard a large portion on every iteration, usually half, hence the logarithm base 2.

Assuming log base 2, a logarithmic search could ideally find a specific element in a collection of one million elements using fewer than 20 iterations; if we scale the collection to a billion, we would still require fewer than 30.
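The search described above corresponds to binary search over a sorted collection, which on its own is O(log N) (the O(N log N) cost applies to sorting the collection first). A minimal sketch:

```javascript
// Binary search: each iteration halves the remaining range, so one
// million sorted elements need at most ~20 iterations
// (log2(1,000,000) ≈ 19.93).
const binarySearch = (sorted, target) => {
  let low = 0;
  let high = sorted.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sorted[mid] === target) return mid; // found: return index
    if (sorted[mid] < target) low = mid + 1; // discard lower half
    else high = mid - 1;                     // discard upper half
  }
  return -1; // not found
};
```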

With big data becoming more common every day, it’s easy to see the advantage of this kind of algorithm: the larger the collection, the greater the relative efficiency it provides.

The most popular of these algorithms is Quicksort, which can sort a list very efficiently (and whose Quickselect variant can find a specific element). Another popular example of this order is Merge-Sort. We will explore these algorithms in future articles.


That covers the most common orders of algorithms. I invite you to read more about search and sorting algorithms.

Don’t forget to follow Cesar’s Tech Insights to learn more about software development and technology. Feel free to share your thoughts, comments, requests and suggestions at the bottom.

Happy coding!

Cesar’s Tech Insights

Tutorials, Articles and Reviews about Software Development & Technology

Written by César Antón Dorantes

Google Developer Expert | Lead Software Engineer
