# Big O No!

## Understanding Big O notation

We’ve all been there: you’re in an interview, and you’ve just finished solving some particularly challenging interview question. The sweat is beginning to evaporate from your forehead, your heart is beating normally again, and then your interviewer says, “So what’s the complexity of your solution?”

For new developers, especially those of us who don’t come from a computer science background, Big O notation can be extremely intimidating. It’s a weird combination of specificity and abstraction that feels like some sort of secret handshake that everyone else knows.

#### What Is It?

Big O notation is just a way of documenting the efficiency of some code. More specifically, it describes how a change in the size of the input of a function (or algorithm, or whatever) affects its running time. Big O is also used to describe space complexity, but we are going to focus on time because I think that is the more difficult one to get a handle on. An excellent quick reference for the different complexities is Big-O Cheat Sheet.

For me, the best way to get a handle on each of them is to understand what they are describing and then walk through some examples.

Before we get started, it’s important to remember that the O in Big O stands for **order of**, so we are really only concerned with changes in the order of magnitude. This means we don’t have to worry about constants. It also means that O(n), O(2n), and O(10n) are all just O(n).

#### Constant Time — O(1)

This is the dream! It means that no matter how big our input is, our algorithm will only take a fixed number of steps. An example of this would be accessing an element in an array by its index. It doesn’t matter how long the array is because we don’t have to traverse it in any way; we can just grab the item directly. You’re probably not going to be writing a lot of O(1) functions, but it’s good to recognize when an action happens in constant time because it can be a handy way to improve the performance of a solution you are writing.

Also, it bears repeating that Big O notation isn’t interested in what the constant is, only that it is constant. The algorithm could take 1 step, 5 steps, or 1000 steps — it is still constant and still O(1) because we can count on it to always take the same amount of time no matter how our input changes.
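
To make that concrete, here’s a tiny (made-up) helper that runs in constant time, no matter how big the array gets:

```javascript
// Constant time: one direct index access, regardless of array length.
function firstItem(arr) {
  return arr[0]; // O(1): no traversal, just a lookup
}

// Takes the same number of steps for 3 items or 3 million.
firstItem([7, 8, 9]); // → 7
```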

#### Logarithmic Time — O(log n)

Logarithmic time basically means that when you double the size of your input, the amount of work required increases, but it doesn’t double; it only grows by a constant amount. This one comes into play a lot in search algorithms, but outside of that it probably won’t pop up too much in general coding problems.
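
The classic example is binary search on a sorted array. Each comparison rules out half of the remaining elements, so an array of n items takes about log₂(n) steps. A minimal sketch for illustration:

```javascript
// Binary search: each pass halves the remaining search space,
// so the number of steps grows like log2(n), not n.
function binarySearch(sorted, target) {
  let low = 0;
  let high = sorted.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sorted[mid] === target) return mid; // found it
    if (sorted[mid] < target) low = mid + 1; // discard the left half
    else high = mid - 1; // discard the right half
  }
  return -1; // not found
}

binarySearch([1, 3, 5, 7, 9], 7); // → 3
```

Doubling the array from 1,000 to 2,000 items only adds one extra pass through the loop, which is exactly the O(log n) behavior described above.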

#### Linear Time — O(n)

Linear time means that for every new element we add to our input, our algorithm runs one more time, so the running time grows in a linear relationship to the size of our input. A great example of this is a basic for loop. If we use a for loop to iterate over an array, the loop will run an extra time for each item we add to the array.

In fact, when figuring out the run time of a function that you have written, a good thing to look at is how you are iterating over your data. If you are using some sort of looping structure to move over a collection and perform an action, it is likely that you are in Linear Time.

There are a couple of scenarios to look out for that can make linear time slightly confusing. The first is multiple for loops. For example, let’s look at the following function:

```javascript
function reverseAndBack(str) {
  let reversed = ''
  let unreversed = ''
  for (let i = str.length - 1; i >= 0; i--) {
    reversed += str[i]
  }
  for (let j = reversed.length - 1; j >= 0; j--) {
    unreversed += reversed[j]
  }
  return unreversed
}
```

Other than being completely pointless, this function has two loops, and each one increases in linear time in relation to the input. If each loop is O(n), then the speed of this function is O(n) + O(n) which is O(2n), but remember that we don’t care about constants, so we still classify this as O(n) because it’s all about orders of magnitude.

But what if we are running loops over two different pieces of data? For example:

```javascript
function combineArrays(arr1, arr2) {
  const combined = []
  for (let i = 0; i < arr1.length; i++) {
    combined.push(arr1[i])
  }
  for (let j = 0; j < arr2.length; j++) {
    combined.push(arr2[j])
  }
  return combined
}
```

In this case, we are looping over two different arrays. Both loops still happen in linear time, but we would write this as O(n+m) to denote that the runtime is dependent on two different inputs.

#### Quasilinear Time — O(n log n)

Quasilinear Time means that doubling the size of your input increases the amount of work you need to do by slightly more than double. This is a common runtime for sorting algorithms.
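
Merge sort is a common example: the input is split in half about log₂(n) times, and each level of splitting requires O(n) work to merge back together, giving O(n log n) overall. A compact sketch, just for illustration:

```javascript
// Merge sort: ~log2(n) levels of splitting (the "log n" part),
// with O(n) merging work at each level, for O(n log n) total.
function mergeSort(arr) {
  if (arr.length <= 1) return arr; // base case: already sorted
  const mid = Math.floor(arr.length / 2);
  const left = mergeSort(arr.slice(0, mid));
  const right = mergeSort(arr.slice(mid));

  // Merge the two sorted halves in linear time.
  const merged = [];
  let i = 0;
  let j = 0;
  while (i < left.length && j < right.length) {
    if (left[i] <= right[j]) merged.push(left[i++]);
    else merged.push(right[j++]);
  }
  return merged.concat(left.slice(i)).concat(right.slice(j));
}

mergeSort([5, 2, 4, 1, 3]); // → [1, 2, 3, 4, 5]
```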

#### Quadratic Time — O(n²)

Quadratic time, as its notation suggests, means that the number of steps grows with the square of the input size: doubling the input quadruples the work. A great problem for visualizing this runtime is the steps problem, which is a common interview question.

In this problem, the goal is to take in a number *n* and log a series of # signs to the console that form steps, the bottom row of which is length *n*.

```javascript
function steps(n) {
  for (let i = 0; i < n; i++) {
    let line = '';
    for (let j = 0; j < n; j++) {
      if (j <= i) {
        line += '#';
      } else {
        line += '_';
      }
    }
    console.log(line);
  }
}
```

so calling `steps(5)` would log:

```
#____
##___
###__
####_
#####
```

The first warning sign here should be the nested for loops. In this example, the outer for loop is responsible for the rows, and the inner for loop is responsible for the columns of #’s and underscores. This means that every time we increase the value of *n*, we have to run the inner for loop an entire extra time. Therefore the amount of work we are doing grows quadratically. If this is still a little hard to visualize, picture the output as a grid: increasing *n* by one adds both a new row and a new column, so the increase happens in two dimensions at once.

Just because you have a nested for loop doesn’t always mean your runtime is O(n²). If both loops aren’t iterating over the same input, then it could be a different runtime, like O(n*m) or something else, so make sure you take a moment to look at what is actually happening in your code.
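
For example, here’s a hypothetical helper (not from the problem above) where the nested loops walk two *different* arrays, so the runtime is O(n*m) rather than O(n²):

```javascript
// O(n*m): the inner loop runs arr2.length times
// for every single item in arr1.
function sharedItems(arr1, arr2) {
  const shared = [];
  for (let i = 0; i < arr1.length; i++) {
    for (let j = 0; j < arr2.length; j++) {
      if (arr1[i] === arr2[j]) shared.push(arr1[i]);
    }
  }
  return shared;
}

sharedItems([1, 2, 3], [2, 3, 4]); // → [2, 3]
```

If the two arrays happen to be the same length, this collapses back to O(n²), which is why the distinction is easy to miss.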

#### Exponential Time — O(2^n)

Exponential time means that adding a single element to your input roughly doubles the processing time. Yikes! This can start to creep in when you are solving things recursively, where every new element in your input requires multiple new function calls.
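
The classic illustration is the naive recursive Fibonacci function: every call above the base case spawns two more calls, so the call tree roughly doubles each time *n* goes up by one.

```javascript
// Naive recursive Fibonacci: each call branches into two more,
// so the total number of calls grows on the order of 2^n.
function fib(n) {
  if (n < 2) return n; // base cases: fib(0) = 0, fib(1) = 1
  return fib(n - 1) + fib(n - 2);
}

fib(10); // → 55
```

Try timing `fib(30)` versus `fib(35)`; the jump makes the exponential growth very tangible.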

#### Combining Run Times

Another thing to bear in mind is that complicated problems can have combinations of these runtimes. To evaluate your function, the best thing to do is look at each step and evaluate its runtime. Then you can combine those runtimes and remove any unnecessary constants. This should leave you with the runtime of your entire function.
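
A small made-up example of that bookkeeping: one plain loop followed by a nested loop gives O(n) + O(n²), and since the quadratic term dominates as n grows, the whole function is O(n²).

```javascript
// O(n) for the sum, plus O(n^2) for the pairs.
// O(n) + O(n^2) simplifies to O(n^2): the biggest term wins.
function sumAndPairs(arr) {
  let sum = 0;
  for (let i = 0; i < arr.length; i++) { // O(n)
    sum += arr[i];
  }

  const pairs = [];
  for (let i = 0; i < arr.length; i++) { // O(n^2)
    for (let j = 0; j < arr.length; j++) {
      pairs.push([arr[i], arr[j]]);
    }
  }
  return { sum, pairs };
}

sumAndPairs([1, 2]); // sum is 3, pairs has 2 * 2 = 4 entries
```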

So next time Big O comes up in one of your interviews, I hope it won’t seem so daunting and that it will be a little bit easier to figure out the complexity of your code.

If you’d like to read more about Big O, I’ve included some links below to resources that I found helpful.

* **The Coding Interview Bootcamp: Algorithms + Data Structures** (www.udemy.com)
* **Cracking the Coding Interview** (www.crackingthecodinginterview.com)