Understanding Big O Asymptotic Analysis

Nora Mensah
mPharma Product & Tech Blog
6 min read · Oct 8, 2021

As Software Engineers, our job is to solve some of the most difficult problems. With deadlines fast approaching, we tend to prioritise delivering products on time and often trade writing good code for that speed. Understanding the concept of Big O goes a long way to help you, as an Engineer, understand how well the code you ship performs as the number of your users increases.

What makes good code?

Good code is code that is readable, reusable, maintainable and scalable. Today, we will shine more light on the scalability aspect of writing good code. What does it mean for code to be scalable?

There are two things that define scalability:

  • Speed and
  • Memory

Time Complexity

In measuring code scalability, time is one of the things we look out for. How long does it take your code to execute — the runtime? How about when your input size keeps growing? Is your code going to execute much slower?

Space Complexity

Another thing that we look out for is how much additional memory your code uses in order to execute. As the input size increases, how much memory will your code take to execute?
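
For example (this snippet is mine, not from the original post), a function that only keeps a running total uses a constant amount of extra memory, while one that builds a new array uses extra memory that grows with the input:

// O(1) space: a fixed number of extra variables, no matter the input size
function sum(numbers) {
  let total = 0;
  for (let i = 0; i < numbers.length; i++) {
    total += numbers[i];
  }
  return total;
}

// O(n) space: the new array grows with the input size
function double(numbers) {
  const doubled = [];
  for (let i = 0; i < numbers.length; i++) {
    doubled.push(numbers[i] * 2);
  }
  return doubled;
}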

Usually, we have to make a trade-off between choosing speed over memory or memory over speed when analysing code. To measure the scalability of our code, we use a concept called Big O Notation, also known as Big O asymptotic analysis.

Calculating Big O

There are two main things to look out for when calculating the Big O of an algorithm:

  1. Number of operations
  2. Number of elements or input size

Linear Time — O(n)

Linear Time is one of the most common complexities you are going to come across. As the name suggests, we say an algorithm has linear time complexity or O(n) (pronounced Big O of n) when the time is directly proportional to the input size. That is, the time to execute the code grows as the input size grows. To explain further, take a look at the code below.
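
A minimal sketch of that snippet (the array's variable name is my own choice) would look something like this:

const nemo = ['nemo']; // an array containing only one item, a string

function findNemo(array) {
  for (let i = 0; i < array.length; i++) {
    if (array[i] === 'nemo') {
      console.log('found nemo!!');
    }
  }
}

findNemo(nemo); // found nemo!!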

From the code above, we have an array that contains only one item (a string), and a findNemo function that takes an array as input. A for loop goes through all the items in the array, checks if each item is strictly equal to the string “nemo”, and logs ‘found nemo!!’ to the console.

This means that no matter the size of the array, we go through each item and that will cost us time. Therefore, the time to execute this is completely dependent on the size of the input. To further explain this concept, consider copying and executing the code snippet below:
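
A sketch of that timing snippet, using the browser's built-in performance.now() to measure the runtime (the exact variable names here are my own):

const nemo = ['nemo'];

function findNemo(array) {
  const t0 = performance.now(); // time before the loop runs
  for (let i = 0; i < array.length; i++) {
    if (array[i] === 'nemo') {
      console.log('found nemo!!');
    }
  }
  const t1 = performance.now(); // time after the loop runs
  console.log('Call to findNemo took ' + (t1 - t0) + ' milliseconds');
}

findNemo(nemo);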

We are using JavaScript's built-in performance object to track the time it takes to execute this code. The expected output is somewhere around 0.09999996423721313 milliseconds. The kind of computer you use and how fast your CPU is may affect this answer, but generally, you should expect somewhere around 0.x milliseconds. Now let's try to increase the size of the input passed to the findNemo function.
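
For example, something along these lines (the variable name is my own), reusing the findNemo function from the previous snippet:

// an array of 100 copies of 'nemo' instead of just one
const everyone = new Array(100).fill('nemo');

findNemo(everyone); // logs 'found nemo!!' 100 times and reports a much longer runtime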

Increasing the size of the input from 1 to 100, we get somewhere between 4.700000047683716 and 9.200000047683716 milliseconds; a significant change in execution time. This is because the for loop runs on every item, so the number of operations, and with it the execution time, increases as the input size increases.

Constant Time — O(1)

We say an algorithm runs in constant time when the number of operations has no correlation to the input size. In other words, no matter the input size, the number of operations does not change.

For example, given the code below:
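
A minimal sketch of that example (the array's contents are my own choice):

const numbers = [1, 2, 3, 4, 5];

function findFirstItem(array) {
  console.log(array[0]); // one operation, regardless of the array's size
}

findFirstItem(numbers); // 1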

We have an array of numbers and a function called findFirstItem which logs the first item in the array to the console.

We realise that no matter the size of the array, there’s no increase in the number of operations. All we do is get the first item and log it to the console.

Therefore, operations that do not depend on the input size are said to run in constant time or O(1).

Quadratic Time — O(n²)

We say that an algorithm runs in quadratic time when the number of operations grows with the square of the input size. As the input grows, the computation time grows in proportion to the square of the input size: doubling the input roughly quadruples the work.

This usually occurs when you have nested loops. For instance, if we want to print a pair of numbers as shown in the code snippet below:
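
A sketch of that snippet (the array's contents are my own; the printPair function comes up again later in the post):

const numbers = [1, 2, 3];

function printPair(array) {
  for (let i = 0; i < array.length; i++) {
    for (let j = 0; j < array.length; j++) {
      console.log(array[i], array[j]); // runs array.length * array.length times
    }
  }
}

printPair(numbers); // 1 1, 1 2, 1 3, 2 1, 2 2, 2 3, 3 1, 3 2, 3 3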

To print out the pairs, we loop through each item, and for every item in the outer loop, we loop through the entire array again in the inner loop.

If you still don’t understand how this works, try running the code in your console to see its output.

We have already dealt with O(n), and now we have two O(n)’s nested within each other, therefore we multiply O(n) by O(n):

O(n) * O(n) = O(n²)

Rules for computing Big O

Having gone through the main Big O complexities, there are certain rules you have to take note of when calculating, so that you can give the right answers when asked about Big O during technical interviews.

Rule 1

Look at the worst-case scenario.

When calculating, we consider the worst thing that could happen to your code, such as your user base growing to about a billion. Will your code still scale?
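
As an illustration (this snippet is my own, not from the original post), even if we modify findNemo to stop as soon as it finds ‘nemo’, Big O still describes the worst case, where ‘nemo’ is the very last item:

function findNemo(array) {
  for (let i = 0; i < array.length; i++) {
    if (array[i] === 'nemo') {
      console.log('found nemo!!');
      break; // best case: 'nemo' is first and we stop after one operation
    }
  }
}

// worst case: 'nemo' is last, so we still touch all n items -> O(n)
findNemo(['dory', 'bruce', 'marlin', 'nemo']);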

Rule 2

Remove constants

After considering the worst-case scenario, we look at the grand scheme of things, where constants are negligible, and drop them.

Consider this:
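
A sketch of what that snippet describes:

function printPair(array) {
  console.log('this function has been called'); // O(1)

  for (let i = 0; i < array.length; i++) {      // O(n)
    for (let j = 0; j < array.length; j++) {    // O(n)
      console.log(array[i], array[j]);
    }
  }
}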

When our function gets called, we first log ‘this function has been called’ to the console. Then we have our function print its pairs.

The conclusion will be:

O(1) + O(n) * O(n) = O(1 + n²)

When asked to compute the Big O value of this printPair function, we neglect the constant, which in this case is O(1), so the final answer is O(n²). In the grand scheme of things, the O(1) will not have any significant impact on the speed of our code.

Rule 3

Treat inputs differently

One thing that many overlook when calculating Big O is that different inputs should not be treated as if they were the same input.

For example:
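
A sketch of that kind of function, looping over two unrelated arrays (the names boxes and toys are my own):

function printItems(boxes, toys) {
  for (let i = 0; i < boxes.length; i++) { // O(a), where a = boxes.length
    console.log(boxes[i]);
  }
  for (let j = 0; j < toys.length; j++) {  // O(b), where b = toys.length
    console.log(toys[j]);
  }
}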

Instead of the final answer being:

O(n) + O(n) = O(2n)

It should rather be:

O(a) + O(b) = O(a + b)

Two separate inputs should have two separate variables. Don’t worry too much about not having n in this answer; the point is that each independent input gets its own variable.

Rule 4

Drop non-dominants

Just like we dropped constants in our second rule because we look at the bigger picture, we drop non-dominants here too for the same reason.

So if we have a calculation summing up to O(n + n²), we drop the O(n). Thus, our final answer is O(n²). Therefore, make sure that your final answer always reflects the dominant term.
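
As a quick illustration (this snippet is my own), a single loop followed by a nested loop sums to O(n + n²), which simplifies to O(n²):

function printThenPair(array) {
  for (let i = 0; i < array.length; i++) {   // O(n)
    console.log(array[i]);
  }

  for (let i = 0; i < array.length; i++) {   // O(n) ...
    for (let j = 0; j < array.length; j++) { // ... * O(n) = O(n²)
      console.log(array[i], array[j]);
    }
  }
}
// O(n + n²) -> drop the non-dominant O(n) -> O(n²)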

Conclusion

In conclusion, Big O is not just theory but a tool we can leverage to build not just applications that work, but applications that are as efficient as they are useful. So, the next time you write code, ask yourself, “how well will this code scale when I have about 7.6 billion people using my app?”

mPharma: Good health across Africa

At mPharma, we believe in the principle that access to safe and affordable healthcare is a universal human right. We won’t cease until our vision is a reality.

Nora Mensah
mPharma Product & Tech Blog

I love to share what I learn from time to time, usually about frontend development: JS, React and whatever I find interesting in the tech space.