A Brief Glimpse Into the Big O

Kendall Stephens · Published in The Startup · Nov 3, 2020 · 4 min read

In computer programming, there are many solutions to any given problem. So how do you know which solution is the best and how can you compare one algorithm to another?

This is where the Big O comes into play! The O stands for Order of Complexity, and it’s used to describe the worst-case scenario for an algorithm’s run time.

The mathematical definition is…

f(n) = O(g(n)) means there exist positive constants c and k such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ k. The values of c and k must be fixed for the function f and must not depend on n.
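For example, f(n) = 3n + 5 is O(n): choosing c = 4 and k = 5, we have 0 ≤ 3n + 5 ≤ 4n for all n ≥ 5.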

Hmm… can’t we just time each algorithm to see which is the fastest!? Well, it’s not quite that simple.

There are a few problems you will find with timing, as the sketch after this list illustrates:

  • Different computers will record different times
  • The same computer will record different times across repeated runs
  • When dealing with very fast algorithms, speed measurements may not be precise enough
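As a quick illustration of that second point, here is a minimal sketch, assuming an environment where performance.now() is available (a modern browser or Node.js) and a hypothetical sumArray function:

    // A hypothetical function to time: sums the numbers in an array.
    function sumArray(arr) {
      let total = 0;
      for (const num of arr) {
        total += num;
      }
      return total;
    }

    const data = Array.from({ length: 1000000 }, (_, i) => i);

    // Time the exact same call twice; the readings will usually differ.
    for (let run = 1; run <= 2; run++) {
      const start = performance.now();
      sumArray(data);
      const end = performance.now();
      console.log(`Run ${run}: ${end - start} ms`);
    }

Run-to-run variance like this is exactly why Big O reasons about growth rates rather than stopwatch readings.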

So how can we measure an algorithm’s efficiency? The Big O uses two complexity factors to do this:

Time Complexity

  • performance: speed & efficiency

Space Complexity

  • memory use: extra variables and data structures

We measure algorithms not in absolute terms, but in relative terms: relative to the size of the input. Think of the input as an argument to a function. If you double the size of the input, the time and space requirements scale in proportion.

Let’s take a look at a simple example: an array and a loop that will console.log each element in the array, one by one.
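A minimal sketch of such a loop might look like this (the array values are arbitrary placeholders):

    const items = ['a', 'b', 'c', 'd'];

    // Visit every element exactly once: n elements means n console.log calls.
    for (let i = 0; i < items.length; i++) {
      console.log(items[i]);
    }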

Doubling the input size of this array will also double the amount of time the algorithm takes. An exactly proportional increase like this is called linear time. In Big O notation, linear time is represented as O(n).

Every line of code has its own time complexity. To understand each line’s complexity, simply ask yourself: how does the input size affect this specific line? If it has no direct impact, then by default that line is considered constant time, because it has no relationship to the input.
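Applying that question to the loop above, line by line (same placeholder array):

    const items = ['a', 'b', 'c', 'd'];  // runs once regardless of input size: O(1)

    for (let i = 0; i < items.length; i++) {
      console.log(items[i]);             // runs once per element: O(n)
    }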

That takes care of the time complexity, but what about space? How does this algorithm’s memory usage grow as the input size scales? Our space complexity here is constant: we are not using any additional space, because we are not creating anything inside the loop, and the input itself does not count as additional space either. We start and end with no extra space in this example, so the space complexity has a Big O notation of O(1).

Ok, so we now have a time complexity of O(n) and a space complexity of O(1). So what is the final notation for this algorithm? Well, when using the Big O, we keep only the highest-order terms. In this example, because O(n) will ultimately have the biggest effect on the run time, it will be our final notation. Again, the Big O is used to evaluate the worst-case scenario.
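To see why only the highest-order term survives, consider a sketch of a hypothetical function that mixes constant-time setup with a linear loop:

    function sumWithSetup(arr) {
      let total = 0;            // O(1) setup
      let count = 0;            // O(1) setup

      for (const num of arr) {  // loop body runs n times: O(n)
        total += num;
        count++;
      }

      return total;             // O(1)
    }
    // Total work: O(1) + O(n) + O(1), which simplifies to O(n).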

Here is a quick view of the common orders of complexity, from most efficient to least:

  • O(1): constant
  • O(log n): logarithmic
  • O(n): linear
  • O(n log n): linearithmic
  • O(n²): quadratic
  • O(2ⁿ): exponential
  • O(n!): factorial

What would happen if we wanted to create a brand new array in this algorithm using the data inside our original array?
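A minimal sketch of that idea (the uppercase transformation is just an arbitrary example):

    const items = ['a', 'b', 'c', 'd'];
    const upperItems = [];

    // Every element is copied into a brand new array, so the new array
    // grows to the same length as the input.
    for (let i = 0; i < items.length; i++) {
      upperItems.push(items[i].toUpperCase());
    }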

Does this change our space complexity? The answer is yes. Anytime you create new data, you are using additional space. The space complexity here would be O(n), because the amount of space needed grows linearly with the size of the original array.

What I’ve shared here is just a small glance into the complexities of the Big O and its evaluation of an algorithm’s efficiency. To visualize the scale of complexities, I found this chart especially helpful!

https://www.bigocheatsheet.com/

I hope this sheds a bit of light on the Big O for you, thank you for reading!
