Big O Notation

Reilly Claro
3 min read · Feb 3, 2019


As I prepare for technical interviews for coding jobs, one thing keeps coming up: Big O. I’m always like, “Yea… complexity… time… space… I know it.” So… I set out to really learn it.

Big O notation is a BIG topic. I realized pretty quickly that I’m not going to be able to learn ALL there is to know about it, but I want to understand the most important things, so that I can have a basic foundational knowledge to build on when Big O comes up.

So, what is Big O notation?

Big O notation describes the complexity of an algorithm: how its runtime or memory usage grows as the input grows. Basically, how efficient your algorithm is.

Why does it matter?

Big O indicates how your program will scale when the input increases. The efficiency of your algorithms can dramatically change the speed and user experience of your program, so you need to be able to predict how the runtime will change given different inputs.

O(1)

is a constant time algorithm. This means it always executes in the same time and space regardless of the size of the input. The runtime stays consistent whether the input is 1 or 1,000. More input, same time.

ex. grabbing an item from an array at a specific index
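Here's a quick Python sketch of that example (my own illustration):

```python
def get_item(items, index):
    # Indexing into a list is O(1): one lookup,
    # whether the list holds 3 items or 3 million.
    return items[index]

print(get_item([7, 8, 9], 1))  # 8
```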

O(N)

is a linear time algorithm. As the input size increases, the time it takes to execute increases. More input, more time.

ex. traversing an array
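A simple sketch of a linear traversal (my own example, not from the original):

```python
def contains(items, target):
    # Linear scan: in the worst case we touch every element once,
    # so the work grows in step with the input size -> O(N).
    for item in items:
        if item == target:
            return True
    return False

print(contains([4, 2, 7], 7))  # True
```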

O(N²)

is a quadratic time algorithm. For each element, you do something with every other element — which is exactly what nested loops over the same input do. Double the input, quadruple the work.

ex. bubble sort
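Here's a minimal bubble sort sketch showing those nested loops in Python (my own version):

```python
def bubble_sort(nums):
    # Two nested loops over the same input: roughly N passes,
    # each doing up to N comparisons -> O(N^2).
    n = len(nums)
    for i in range(n):
        for j in range(n - i - 1):
            if nums[j] > nums[j + 1]:
                # Swap adjacent items that are out of order.
                nums[j], nums[j + 1] = nums[j + 1], nums[j]
    return nums

print(bubble_sort([5, 1, 4, 2]))  # [1, 2, 4, 5]
```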

O(2^N)

is an exponential time algorithm. The runtime doubles with each additional element in the input — an exponential growth curve.

ex. a naive recursive function that calls itself twice per input, like computing Fibonacci numbers without memoization
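A sketch of that naive Fibonacci (my own example):

```python
def fib(n):
    # Each call spawns two more calls, so the call tree roughly
    # doubles in size for every increase in n -> O(2^N).
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```

Even `fib(40)` takes noticeably long to run, which is the exponential curve biting.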

O(log N)

is a logarithmic time algorithm. Very efficient. Its growth curve rises quickly at first, then flattens out — doubling the input only adds one more step. So, more input barely changes your runtime.

ex. binary search
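A quick binary search sketch in Python (my own illustration — it requires a sorted input):

```python
def binary_search(sorted_items, target):
    # Each comparison throws away half the remaining range,
    # so the number of steps grows like log2(N) -> O(log N).
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```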



Reilly Claro

software engineer with a passion for developing assistive technology