One of the reasons I think there are so many blogs about Big O notation is that no mainstream bootcamp (including the one I attended) teaches it. That makes it the perfect topic to blog about, because writing helps reinforce your own knowledge. I believe it’s overlooked because people assume it won’t be useful at small companies, it won’t come up in a junior developer job interview, or maybe (most likely) there’s just not enough time to cram it into a 15-week bootcamp schedule. But, as you’ll see, even knowing the basics (and this is basic) will help set you apart from the competition and fundamentally change the way you look at algorithms.
To begin, take a look at this chart (you’ve probably seen it countless times):
This chart shows how efficient an algorithm is as it scales up. For example, a basic for loop is fine if you’re searching through 20 elements, but what about 10,000,000 elements? We need a way to discuss the time and space our code uses. Since everyone has different computers, internet speeds, etc., measuring the actual time or space (i.e. 40ms, 3kb) of an algorithm would do us little good. Big O notation gives us a uniform way to measure the time and space complexity of our code.
We’re only going to discuss Constant Time — O(1), Linear Time — O(n), and Quadratic Time — O(n²); I believe these will be the most helpful for junior software developer interviews (if you’re applying to Apple, Amazon, etc., your knowledge will need to go much deeper).
Constant Time O(1)
An algorithm with O(1) time will return in a fixed amount of time, no matter the size of the input. For example:
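A minimal sketch of what such a function might look like (the function name and animal data are my own, since the original snippet isn’t shown):

```javascript
// O(1): accessing an array element by index takes the same amount of
// time no matter how many elements the array holds.
function getThirdAnimal(animals) {
  return `The animal at index 2 is: ${animals[2]}`;
}

getThirdAnimal(['cat', 'dog', 'tiger', 'bear']); // constant time with 4 elements...
getThirdAnimal(new Array(1000).fill('tiger'));   // ...and with 1,000
```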
This function returns a string containing the element at index 2 of the input array. As you can see, it doesn’t matter whether the array has 4 elements or 1,000; the return time is always the same — constant. This is the fastest Big O time.
Linear Time O(n)
An algorithm with O(n) time has its speed determined by the size of the input (n). One very important rule when measuring in Big O is that we always take the worst-case scenario as our Big O notation. For example:
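A sketch of a linear search (the original code isn’t shown, so the function name and data are my assumptions; the next paragraph refers to an array where “tiger” appears early):

```javascript
// O(n): in the worst case, we have to step through every element
// before finding a match (or finding nothing at all).
function findTiger(animals) {
  for (let i = 0; i < animals.length; i++) {
    if (animals[i] === 'tiger') {
      return `Found tiger at index ${i}`;
    }
  }
  return 'No tiger here';
}

findTiger(['tiger', 'cat', 'dog']); // best case: found on the very first step
```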
This function finds “tiger” rather fast, but what if our array had 1,000 elements and “tiger” was at the end? That is what we mean by taking the worst-case scenario. So for every element (n) that is added, our time increases linearly: O(n). Think of it this way: every step through the array adds to the running time. How about this example:
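A sketch of what the example might look like — two loops over the same array, one after the other, not nested (the function name and greeting strings are my own):

```javascript
// Two separate (non-nested) loops: O(n) + O(n) = O(2n), which we
// simplify to O(n) because Big O drops constant factors.
function doubleGreetings(animals) {
  const greetings = [];
  for (let i = 0; i < animals.length; i++) { // first pass: O(n)
    greetings.push(`Hello, ${animals[i]}!`);
  }
  for (let i = 0; i < animals.length; i++) { // second pass: another O(n)
    greetings.push(`Goodbye, ${animals[i]}!`);
  }
  return greetings;
}
```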
Each for loop has a time of O(n), so this function’s time is O(2n). However, in Big O notation we drop the constant and still just call this O(n).
A note about arrays and time:
Accessing an element in an array by index will always be constant time, O(1). The same goes for adding or removing an element at the end (push, pop). However, inserting, deleting, shifting, unshifting, and splicing all take O(n) time. This is because the array’s indexes have to be shifted when these operations are performed — which means the time to complete them grows with every element (n) in the array. Even using slice to make a copy of the array takes O(n) time, because every element has to be walked over to make the copy. Array methods like .forEach, .map, .find, .filter, and .reduce are all O(n) linear time as well.
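To make those costs concrete, here’s a quick sketch (the complexities in the comments are conceptual, not measured timings):

```javascript
const arr = ['cat', 'dog', 'tiger'];

arr[1];                  // O(1): index access, no shifting
arr.push('bear');        // O(1): add to the end, no reindexing needed
arr.pop();               // O(1): remove from the end

arr.unshift('fox');      // O(n): every existing index shifts up by one
arr.shift();             // O(n): every remaining index shifts back down
arr.splice(1, 0, 'owl'); // O(n): elements after index 1 must shift

const copy = arr.slice(); // O(n): every element is copied over
```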
Quadratic Time O(n²)
As we move on, our time gets worse and worse as inputs increase. Quadratic Time veers into the unacceptable, and you should be wary of giving a solution to interview questions with a time complexity this high. However, it is a great place to start when coming up with your first, brute-force solution to the problem; it shows you can at least solve the problem and gives you an opportunity to talk about Big O and why Quadratic Time is (usually) unacceptable. The most common example of this is a nested loop:
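The original example isn’t shown, so here’s a sketch of a common interview version: checking an array for duplicate values with a nested loop (problem choice and function name are my assumptions):

```javascript
// O(n²): for each element, we loop over every other element,
// so doubling the input quadruples the work.
function hasDuplicates(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) {
      if (i !== j && arr[i] === arr[j]) {
        return true;
      }
    }
  }
  return false;
}
```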
If your answer involves nested loops, there is almost always a faster way to solve it, and it most likely involves an object or “hash table”. Let’s look at a better solution:
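One possible version of the hash-table approach, sticking with the duplicate-check example (a sketch; the object and function names are my own):

```javascript
// O(n): two sequential loops instead of nested ones.
function hasDuplicatesFast(arr) {
  const seenCounts = {};
  // First loop: count how many times each value appears — O(n)
  for (let i = 0; i < arr.length; i++) {
    seenCounts[arr[i]] = (seenCounts[arr[i]] || 0) + 1;
  }
  // Second loop: check whether any value appeared more than once — O(n)
  for (let i = 0; i < arr.length; i++) {
    if (seenCounts[arr[i]] > 1) {
      return true;
    }
  }
  return false;
}
```

Note that the `seenCounts` object is the extra variable discussed below: it can grow as large as the input, which is where the O(n) space cost comes from.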
Using an object allows us to solve the problem with two for loops that aren’t nested. As noted above under Linear Time, multiple sequential for loops still give us O(n) time, so this is a much better answer.
The above solution introduces space complexity since there is a variable that can take up memory (space). Space complexity works logically the same way as time complexity. Is it true that our variable could increase in size (space, memory) as our input (n) increases? Yes it is, so our space complexity here is O(n).
There is a good chance no one is going to ask you about Big O in your interview — this is because they are waiting to see if you bring it up when you’re talking through your coding process. This is your time to stand out from the rest of the bootcamp grads who all made the same puppy-liker app and never learned anything new on their own! Big O-yeah! lol.