# Algorithms & Time Complexity (Talking Big O like the Pros)

Picture yourself standing in front of the whiteboard. You have your coding problem, and you’ve gone over the typical questions — inputs, outputs, and edge cases. Are there negative numbers? Is the data sorted? What if the parameters are null? You have a solution in hand and you’re ready to begin pseudo-coding, when your interviewer asks, “What is the time complexity of that?”

There is a language unto itself for talking about Big O notation, or the time complexity of an algorithm. We hear things like, “Oh, that’s quadratic…”, “Can you do better than linear?”, and so on.

But what does it freakin’ mean?? And how can you use these terms to not only identify the time complexity of your answers, but also know how a particular solution stacks up against the others?

Since algorithm solutions usually go from less efficient to more efficient, I will go through these in the order they might come to you when solving a real problem.

#### Quadratic Time — O(n²)

An easy first-pass solution for many algorithms is to use the fabled nested for-loop. It’s the quick-and-dirty, brute-force approach that can give you an answer as well as a foundation to move on to more efficient solutions.

The first loop will go through each item in the array, and the second, inner loop will go through the array again, comparing the current value to every other value.

• You have a nested for-loop
• For each item, you pass through the array again, so you make roughly n² comparisons in total

Problems: time consuming; the running time grows quadratically as the array gets larger and larger.
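As a concrete sketch of the brute-force approach (the function name `hasDuplicate` is just illustrative), a nested for-loop duplicate check might look like this:

```javascript
// Brute-force duplicate search: for each item, scan the rest of the
// array for a match. Two nested loops over n items => O(n^2) time.
function hasDuplicate(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] === arr[j]) {
        return true; // found a matching pair
      }
    }
  }
  return false; // no pair of equal items
}
```

On a 100-item array this can make on the order of 100 × 100 comparisons in the worst case, which is exactly the cost the interviewer is asking you to improve.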

“Can you make it linear?”

#### Linear Time — O(n)

Single-pass solutions are linear because the running time grows in proportion to the length of the data. So if you had 100 items in the array, in the worst case you would go through all 100 items once.

Examples: finding first duplicated value, finding min/max in array, etc.

Problems: Worst case, you have to touch every item in the array, even when a smarter approach could skip most of them. If the data set is large, it would be better to find a way to cut the problem in half or break it into subproblems.
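The min/max example above can be sketched in a single pass (the `findMin` name is hypothetical):

```javascript
// One pass over the array, tracking the smallest value seen so far.
// Each of the n items is visited exactly once => O(n) time.
function findMin(arr) {
  let min = arr[0];
  for (let i = 1; i < arr.length; i++) {
    if (arr[i] < min) {
      min = arr[i]; // new smallest value found
    }
  }
  return min;
}
```

One loop, no inner loop: doubling the array length roughly doubles the work, which is the defining property of linear time.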

“Is this logarithmic?”

#### Logarithmic Time — O(log n)

Binary Search lets us repeatedly cut a sorted array in half: start at the middle, compare the middle element to the value you’re looking for, and discard the half that can’t contain it, repeating until the value is found.

Examples: binary search — any time you don’t have to touch every element of the array because you can cut it in half and use a reference value to decide which side to keep searching.

Merits: Much faster than a linear scan on large inputs.

Demerits: Still slower than constant time, and the list has to be sorted first (sorting itself costs extra time if the data doesn’t arrive sorted).
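A classic iterative binary search, written as a sketch here, makes the halving concrete:

```javascript
// Binary search on a SORTED array. Each comparison discards half of
// the remaining range => O(log n) comparisons in the worst case.
function binarySearch(arr, target) {
  let lo = 0;
  let hi = arr.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (arr[mid] === target) return mid; // found it
    if (arr[mid] < target) {
      lo = mid + 1; // target must be in the right half
    } else {
      hi = mid - 1; // target must be in the left half
    }
  }
  return -1; // not present
}
```

For a million sorted items, this needs at most about 20 comparisons — that is the power of cutting the problem in half each step.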

“What is constant time?”

#### Constant Time — O(1)

A good example of constant time is an array lookup like `arr[0]`. No matter how big the data set is, the lookup takes the same amount of time. Hash lookups are also constant (on average).

Merits: Fastest. Takes the same time no matter how large the input is.

Demerits: It’s pretty rare to find a full solution that runs in constant time. You can, however, often add an if/else statement at the beginning of a function to return early when certain conditions are met. For example:

```javascript
if (arr.length === 1) {
  return arr[0];
}
```
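To illustrate the hash-lookup case, here is a small sketch using a JavaScript `Map` (the `prices` data and `getPrice` helper are hypothetical):

```javascript
// A Map gives average O(1) lookups: the time to fetch a value does
// not grow with the number of entries stored.
const prices = new Map([
  ["apple", 1],
  ["banana", 2],
]);

function getPrice(name) {
  return prices.get(name); // constant-time lookup on average
}
```

Whether `prices` holds two entries or two million, `getPrice` does roughly the same amount of work.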

### Demo

For a really great example of this type of thinking/language used in action, check out this mock technical interview posted by Google.

Notice how he analyzes the time complexity every step of the way and uses this exact language to communicate well with his mock interviewer.

### Conclusion

There are many more concepts in Big O notation, but these are the most common ones. It is good to have a mental model of each and to know the terminology that is used.

When considering your solutions to algorithm problems, keep in mind this list, from highest time complexity (slowest) to lowest (fastest):

• Quadratic (nested loops)
• Quasi-linear (e.g. sorting first, then a linear pass)
• Linear (one pass)
• Logarithmic (binary search)
• Constant

I didn’t talk about recursive solutions. I will cover that in a different post.

### Resources

Here are some more resources to continue your studies on cracking the coding interview, as it were: