# How to Think About Big-O Notation

Time/Space complexity is a paradoxical concept.

On the one hand, it’s something you mostly need to know for whiteboarding interviews, and it rarely comes up for most engineers unless you’re building something very performance-sensitive. And even then, you’re usually working with tools other people have built that abstract away a lot of those details for you.

But on the other hand, it’s also very useful to have a deep intuition about it. It gives you a theoretical framework for analyzing complex systems and allows you to predict where the pain-points will be when scaling. It tells you which parts of your application are resource-intensive and which ones will remain performant under much higher loads.

Over the years, I keep coming back to this topic with new insights. It’s one I’ve seen trip up a lot of engineers, especially relatively inexperienced ones. Many engineers learn the solutions to a number of algorithm problems, but still feel shaky about evaluating the time or space complexity of problems they haven’t seen before. I think this is mainly a consequence of how it’s taught, because the more I understand it, the more intuitive it seems.

My philosophy has always been that it pays to have a lot of different ways of looking at problems. For me, it helped a ton to have a good visual understanding, and also to keep a few, let’s call them “prototypical”, problems on hand. But that’s just what worked for me. An important meta-skill is coming up with your own system for learning.

One thing worth mentioning is that this article isn’t a substitute for doing lots of practice problems. When learning almost any programming concept, there is some degree of memorization and then pattern matching before you can start to see the Matrix.

The goal with this series of posts is to help engineers build better intuition about some of the most common time and space complexities, and ultimately to have more confidence when they talk about them in interviews, and on the job if it ever comes up.

# Some Useful, General Tips

Where there are multiple possible **configurations** for inputs of the **same size**, we generally think about the **“worst-case” scenarios** for those inputs. This might mean searching through a really unbalanced binary search tree, or scanning through an array for a number that’s in the last spot.
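As a minimal sketch of that last example (the function and step counter here are my own illustration), a linear search does the most work when the target sits in the very last slot:

```python
def linear_search(items, target):
    """Return (index, steps taken); index is -1 if target is absent."""
    steps = 0
    for i, value in enumerate(items):
        steps += 1  # count each comparison
        if value == target:
            return i, steps
    return -1, steps

# Best case: target is the first element -> 1 step.
print(linear_search([3, 1, 4, 1, 5], 3))  # (0, 1)
# Worst case: target is the last element -> n steps.
print(linear_search([3, 1, 4, 1, 5], 5))  # (4, 5)
```

Both inputs have the same size (five elements); only the configuration differs, which is why worst-case analysis looks at *where* the target can be, not just how many elements there are.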

One way of looking at complexity is thinking about **what happens to runtime, or memory needed**, as the input scales up to **really large values**. Sometimes you’ll hear it taught as going to “**infinity**” but I think it’s easier to grapple with more tangible values, especially at first.

There are different ways we can measure the size of the input, depending on what we are doing with it. Typically, we’re talking about **numeric value** for numbers, **number of characters** for strings, **number of key-value pairs** in maps, **number of elements** in arrays, and **number of nodes and/or connections** in a data structure. There’s some nuance to this, but we’ll talk about that more in each section.

Another useful question to consider is, **“what happens if I INCREASE my input size by 1?”** To me, this is one of the most powerful tools, because it lets you use **inductive reasoning** to figure out what happens at very large values. If I **add one element** to my input array, how much more work does my algorithm have to perform? Is it one additional step? Is it twice as many steps? Is it more than twice as many steps?
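To make the “add one element” question concrete, here’s a hypothetical function that compares every pair of elements. Each new element doesn’t add just one step; it adds one comparison against every element already there:

```python
def count_pair_checks(items):
    """Count the comparisons made when checking every pair of elements."""
    steps = 0
    n = len(items)
    for i in range(n):
        for j in range(i + 1, n):  # each (i, j) pair is checked once
            steps += 1
    return steps

# Grow the input one element at a time and watch the step count:
for n in range(2, 6):
    print(n, count_pair_checks(list(range(n))))
# n=2 -> 1, n=3 -> 3, n=4 -> 6, n=5 -> 10
```

Each +1 to the input size adds n−1 extra steps, and that growing increment is exactly the inductive clue that the algorithm is quadratic rather than linear.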

Another good trick is asking, if I **double or halve** my input size, what happens? Sometimes the amount of work doubles, other times it only adds on an extra step. It depends.

And don’t just add one or double once: try **adding one or doubling several times in a row**. So try the numbers 2, 3, 4, 5… or an array with 2, 4, 8, 16 elements.

And whether you’re whiteboarding or coding, always write out your **results neatly in a table** somewhere you can refer back to. Try to see if you can spot any **patterns** or **trends** in your results.
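Here’s a quick sketch of that table-building habit. The step counts are for a hypothetical worst-case linear scan versus a worst-case binary search on a sorted array, printed side by side so the trend is easy to spot:

```python
import math

def linear_steps(n):
    """Worst-case comparisons for a linear scan: check every element."""
    return n

def binary_steps(n):
    """Worst-case comparisons for binary search on n sorted elements."""
    return math.floor(math.log2(n)) + 1

print(f"{'n':>4} {'linear':>7} {'binary':>7}")
for n in [2, 4, 8, 16, 32, 64]:
    print(f"{n:>4} {linear_steps(n):>7} {binary_steps(n):>7}")
```

Reading down the columns, doubling n doubles the linear count but adds only a single step to the binary count, and spotting that kind of trend is the whole point of keeping the table.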

Just these few hacks alone help tremendously when you’re working toward a solution. This advice also lines up nicely with the steps laid out in The Algorithm of an Algorithm, specifically in that you want to help the unconscious, pattern-matching part of your brain do its magic.

So with that preface, let’s jump in. I’ll be going from the best to the worst time complexities: