Big O for Noobs - Part II
This is part 2 of Big O for Noobs. If you’d like to read part 1, you can do so here.
In the last post, we concluded that the time complexities for the operations performed on an array were:
Read — O(1)
Insertion — O(n) worst case, O(1) best case if inserted at the end
Search — O(n)
Deletion — O(n) worst case, O(1) best case if deleted from the end
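As a quick refresher, here’s a small sketch of those best and worst cases (the data is made up purely for illustration):

```typescript
// Hypothetical example: inserting anywhere but the end of an array
// forces every later element to shift over by one slot.
const letters: string[] = ["a", "b", "d", "e"];

letters.splice(2, 0, "c"); // insert "c" at index 2: "d" and "e" shift right, O(n)
letters.push("f");         // insert at the end: nothing shifts, O(1)

letters.splice(0, 1);      // delete from the front: everything shifts left, O(n)
letters.pop();             // delete from the end: O(1)
```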
When we apply the same logic to hashes, we will get:
Read — O(1)
Insertion — O(1)
Search — O(n) (searching for a value; looking up by key is the O(1) read above)
Deletion — O(1)
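Here’s a similar sketch using a hash (a Map in this hypothetical example). Because the hash jumps straight to the key, position never matters:

```typescript
// Hypothetical example: insert, read, and delete all go straight to the key.
const ages = new Map<string, number>();

ages.set("alice", 31);  // insertion: O(1)
ages.set("bob", 27);    // insertion: O(1)
ages.get("alice");      // read: O(1)
ages.delete("bob");     // deletion: O(1)

// Searching for a value (rather than a key) still means checking every entry: O(n).
const anyone27 = [...ages.values()].includes(27);
```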
What’s the takeaway from this? If you’re going to be doing a lot of inserting and deleting, and it won’t be at the end of an array, a hash may be the better way to go. If order matters, an array might be the better choice. All else being equal, they’re pretty close in performance.
Now let’s look at how we can apply these concepts to problem solving. Let’s say we want to add up all the values between n and m. There are plenty of ways to do this; one of them might look like this:
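A sketch of a loop-based version (the function name here is my own; the exact code may differ):

```typescript
// Loop-based sum: visits every number from n to m one at a time,
// so the work grows linearly with the size of the range: O(n).
function sumNToMLoop(n: number, m: number): number {
  let sum = 0;
  for (let i = n; i <= m; i++) {
    sum += i;
  }
  return sum;
}

sumNToMLoop(1, 1_000_000); // 500000500000, after a million additions
```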
As you can tell, this solution has a time complexity of O(n), since it has to cycle through every number between n and m. What if n were 1 and m were 1,000,000? It would have to go through 1,000,000 numbers and add them all up. Now let’s take a look at a different solution.
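The usual constant-time approach is the arithmetic series formula, sum = (m - n + 1)(n + m) / 2. A sketch of sumNToM along those lines (again, the exact original code may differ):

```typescript
// Formula-based sum: (how many numbers there are) * (first + last) / 2.
// A handful of arithmetic operations, no loop: O(1) no matter how big the range is.
function sumNToM(n: number, m: number): number {
  return ((m - n + 1) * (n + m)) / 2;
}

sumNToM(1, 1_000_000); // 500000500000, the same answer in a single step
```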
In this case, the time complexity for sumNToM is O(1). No matter what the numbers are, it will only take one step. 1,000,000 numbers? No problem, 1 step!
I’m in no way implying that being a math genius is a requirement for being a developer; I’m just using these examples to illustrate that being cognizant of time complexity matters, especially when working with a lot of data. In the next part, we’ll look at O(n²), why we want to avoid it as much as possible, and O(log n). Stay tuned!