# A Quick Overview of Big O Notation

# Introduction

If you’re studying computer science or software engineering, you’re very likely to hear about Big O notation whenever algorithms come up. But what is it, and why do you need to know it? Let’s discuss.

Big O notation is simply a measure of the time complexity of an algorithm. In other words, if we increase the amount of data that we input into an algorithm, how does that affect the speed of computation? For example, if we have an algorithm that processes *x* amount of data in *y* amount of time, we want to know, roughly, the rate at which *y* would increase if we introduced *2x*, *3x*, etc. amounts of data. This is the purpose that Big O notation serves: to estimate how the speed of an algorithm’s computation is affected relative to the size of its input. It’s important to note that Big O is not used to calculate how long a computation will take in absolute terms. It is only concerned with the relative speed of the computation. This post is meant to serve as an overview of the four most often discussed time complexities.

# O(1) — Constant

A Big O notation of **O(1)** means that the time complexity of the algorithm is constant. In other words, the size of the input data does not have an effect on the speed of the algorithm. An example of *O(1)*, or constant time complexity, would be accessing an element of an array. The time it takes to execute does not depend on the size of the input data.
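As a rough sketch in Python (the post itself contains no code, and the function name here is my own), indexing into a list touches a single element, so it takes the same time whether the list holds ten items or ten million:

```python
def get_first(items):
    # Accessing an element by index is a single step,
    # regardless of how many elements the list contains: O(1).
    return items[0]
```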

# O(n) — Linear

A Big O notation of **O(n)** means that the time complexity of the algorithm is linear. This means that the time it takes for this algorithm to execute is proportional to the size of the input data. As a rough example, if the computation takes 3 milliseconds when n is 3, we could expect it to take about 5 milliseconds when n is 5. An example of *O(n)*, or linear time complexity, would be a simple for loop.

# O(log (n)) — Logarithmic

A Big O notation of **O(log (n))** means that the time complexity of the algorithm is logarithmic, meaning that the time complexity is proportional to the logarithm of n. If you need a refresher on what a logarithm is, Khan Academy has a couple of videos discussing logarithms. An algorithm with logarithmic time complexity typically takes a divide-and-conquer approach, such as the famous phone book example from Harvard’s CS50 course. An example of *O(log (n))*, or logarithmic time complexity, would be binary search.
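A sketch of binary search in Python (a standard textbook version; the post does not show an implementation). Each comparison halves the remaining search range, which is where the logarithm comes from:

```python
def binary_search(sorted_items, target):
    # Each pass discards half of the remaining range, so at most
    # about log2(n) comparisons are needed: O(log n).
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # target not present
```

Note that binary search only works on data that is already sorted, just as the phone book example relies on names being in alphabetical order.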

# O(n^2) — Quadratic

A Big O notation of **O(n^2)** means that the time complexity of the algorithm is quadratic, meaning that the time complexity is proportional to the square of n. An example of *O(n^2)*, or quadratic time complexity, would be a nested for loop. This can actually go deeper if there are even more nested loops: *n^3* and so on, depending on the number of nested loops.
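As an illustration in Python (generating all pairs is my example, not the post’s), a nested for loop over the same input runs the inner body n times for each of the n outer iterations:

```python
def all_pairs(items):
    # For each of the n elements, the inner loop runs n times,
    # giving n * n iterations in total: O(n^2).
    pairs = []
    for a in items:
        for b in items:
            pairs.append((a, b))
    return pairs
```

With 3 elements this produces 9 pairs; with 10 elements, 100 pairs. Adding a third nested loop over the same input would make it O(n^3).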

To better understand these time complexities, it’s often helpful to see them on a graph. Here’s a graph that demonstrates how various time complexities perform based on the number of input elements. This graph has the four time complexities discussed in this post as well as a few others.

# Recap

To quickly recap the four most common types of time complexity:

- O(1) is a constant time complexity
- O(n) is a linear time complexity
- O(log (n)) is a logarithmic time complexity
- O(n^2) is a quadratic time complexity