Big-O Notation Explained in Plain English

A quick primer on asymptotic notation and algorithmic complexity

Natasha Ferguson
7 min read · Nov 1, 2022

What is Big O Notation?

Big O notation is used in computer science to describe the performance or complexity of an algorithm. It is the most fundamental tool to measure the cost of an algorithm. It describes the time complexity and space complexity of the code using algebraic terms.

Algorithmic complexity is a measure of how long an algorithm will take to complete given the size of the input, commonly called n, where n represents the number of elements.

Variables used in Big O notation denote the size of inputs to algorithms. For example, O(n) might be the time complexity of an algorithm that traverses through an array of length n; similarly, O(n + m) might be the time complexity of an algorithm that traverses through an array of length n and through a string of length m.

The common complexities, ordered from fastest to slowest, are: O(1) constant, O(log n) logarithmic, O(n) linear, O(n log n) log-linear, O(n²) quadratic, O(n³) cubic, O(2^n) exponential, and O(n!) factorial. Each is covered below.

O(1) — Constant Time

The most efficient order of magnitude is called constant time complexity. An algorithm runs in constant time when it requires the same number of steps regardless of the problem size. The big O notation of constant complexity is O(1).

Example:

nums = [1,2,3,4,5]

Accessing a value in the list by index, nums[2], is a good example of constant time. The execution time is independent of the size of the input. Other examples include the append() and pop() (from the end) operations on a list.
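For instance (a minimal sketch; each operation below takes the same single step whether nums holds five elements or five million):

value = nums[2]   # index access: one step, regardless of list length
nums.append(6)    # appending to the end of a list is constant time
nums.pop()        # popping from the end is constant time too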

Your algorithm requires one step no matter the size of the nums list. Whether the list holds 5 numbers, 1,000 numbers, or 100,000 numbers, accessing a value at a specific index takes a single step. The number of steps your algorithm takes does not grow as the problem size increases, which makes this the most efficient kind of algorithm you can write.

O(log n) — Logarithmic Time

Logarithmic time is the second most efficient time complexity. An algorithm takes logarithmic time when its run time grows in proportion to the logarithm of the input size.

A logarithm is a mathematical concept that’s widely used in Computer Science. In the context of coding interviews, its usage implies a logarithm of base 2.

log(n) = y if and only if 2^y = n

If an algorithm has logarithmic time complexity, then whenever the algorithm’s input size doubles (whenever n doubles), the number of operations needed to complete the algorithm increases by only one unit. For example, an algorithm with logarithmic time complexity and an input size of 1000 takes roughly 10 operations to complete, since 2^10 ≈ 1000.

You see this time complexity in algorithms such as binary search that can discard many values at each iteration. You express a logarithmic algorithm in Big O notation as O(log n).
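For example, here is a minimal binary search sketch over a sorted list; each iteration halves the remaining search range, which is exactly the doubling relationship described above:

def binary_search(sorted_nums, target):
    low, high = 0, len(sorted_nums) - 1
    while low <= high:
        mid = (low + high) // 2           # middle of the current range
        if sorted_nums[mid] == target:
            return mid
        elif sorted_nums[mid] < target:
            low = mid + 1                 # discard the lower half
        else:
            high = mid - 1                # discard the upper half
    return -1                             # target not found

binary_search([1, 3, 5, 7, 9, 11], 7)     # returns 3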

O(n) — Linear Time

The next most efficient type of algorithm is one that runs in linear time. An algorithm that runs in linear time grows at the same rate as the size of the problem: the time to execute is directly proportional to the input size n, so the running time increases in step with n. You express a linear algorithm as O(n).

For example, if we used a for loop to print out the values in the list:

team_a = ['Sam', 'Harry', 'Jenny', 'Chris', 'Britney', 'Alexey']
for name in team_a:
    print(name)

When your team list contains 6 items, your program takes 6 steps to complete. For a list of 60 items, your program requires 60 steps, for 100 items, 100 steps, and so on. In a linear algorithm, as n gets bigger, the number of steps your algorithm takes increases by however much n increases.

O(n log(n)) — Log-linear Time

An algorithm that runs in log-linear time grows as a combination (multiplication) of logarithmic and linear time complexities. For example, a log-linear algorithm might evaluate an O(log n) operation n times. In Big O notation, you can express a log-linear algorithm as O(n log n).

Log-linear algorithms often divide a data set into smaller parts and process each piece independently. Efficient sorting algorithms such as Merge Sort (always O(n log n)) and Quicksort (O(n log n) on average) are examples of log-linear time.
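Here is a minimal Merge Sort sketch; the list is halved O(log n) times, and each level of halving does O(n) work to merge the sorted halves back together:

def merge_sort(nums):
    if len(nums) <= 1:                 # a list of 0 or 1 items is already sorted
        return nums
    mid = len(nums) // 2
    left = merge_sort(nums[:mid])      # sort each half independently
    right = merge_sort(nums[mid:])
    merged = []                        # merging the sorted halves is O(n) work
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])            # append whatever remains of either half
    merged.extend(right[j:])
    return merged

merge_sort([5, 2, 9, 1, 7])            # returns [1, 2, 5, 7, 9]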

Log-linear complexity is not as efficient as linear time. The effort grows slightly faster than linear because the linear component is multiplied by a logarithmic one. For clarification, you can also insert a multiplication sign: O(n × log n). However, its complexity does not grow nearly as quickly as quadratic, which we will cover next.

O(n²) — Quadratic Time

After log-linear, the next most efficient time complexity is quadratic time. An algorithm runs in quadratic time when its performance is directly proportional to the square of the problem’s size. In Big O notation, you express a quadratic algorithm as O(n²).

You will encounter quadratic time complexity in algorithms involving nested iterations, such as nested for loops; deeper levels of nesting result in O(n³), O(n⁴), and so on. For example, the function below uses a nested loop to sum the elements of an n × n grid:

list_elements = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

def sum_list_elements(param_list):
    total = 0                      # renamed from sum to avoid shadowing the built-in
    for row in param_list:         # outer loop: one pass per row
        for ele in row:            # inner loop: one pass per element in the row
            total += ele           # for an n x n grid this runs n * n = n^2 times
    return total

sum_list_elements(list_elements)   # returns 45

Other examples of quadratic time are simple sorting algorithms like Insertion Sort, Selection Sort, and Bubble Sort.
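As a sketch, here is a simple Bubble Sort; the nested loops compare neighboring pairs over repeated passes, which is where the O(n²) behavior comes from:

def bubble_sort(nums):
    n = len(nums)
    for i in range(n):                    # outer loop: up to n passes
        for j in range(n - i - 1):        # inner loop: compare neighbors
            if nums[j] > nums[j + 1]:
                nums[j], nums[j + 1] = nums[j + 1], nums[j]  # swap out-of-order pair
    return nums

bubble_sort([5, 2, 9, 1, 7])              # returns [1, 2, 5, 7, 9]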

O(n³) — Cubic Time

After quadratic time comes cubic time complexity. An algorithm runs in cubic time when its performance is directly proportional to the problem’s size cubed. It’s similar to quadratic, except n is raised to the third power instead of the second. You will most likely encounter cubic time complexity if your work involves data science or statistics.
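A classic illustration from that domain (an assumed example, not one from the earlier code) is naive matrix multiplication, where three nested loops over n × n matrices give O(n³) steps:

def multiply_matrices(a, b):
    n = len(a)                            # assumes two n x n matrices
    result = [[0] * n for _ in range(n)]
    for i in range(n):                    # three nested loops: O(n^3) steps
        for j in range(n):
            for k in range(n):
                result[i][j] += a[i][k] * b[k][j]
    return result

multiply_matrices([[1, 2], [3, 4]], [[5, 6], [7, 8]])  # returns [[19, 22], [43, 50]]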

Both quadratic and cubic time complexities are special cases of a larger family of polynomial time complexities. An algorithm that runs in polynomial time scales as O(n^a), where a = 2 for quadratic time and a = 3 for cubic time. When designing your algorithms, you generally want to avoid polynomial scaling when possible, because such algorithms can get very slow as n gets larger.

Other Complexities

Further complexity classes are O(2^n), exponential time, and O(n!), factorial time. These are the worst time complexities. In an exponential time algorithm, the number of operations doubles with each addition to the input size n; such algorithms often iterate through all subsets of the input elements. While exponential functions grow by multiplying by a constant amount, factorial functions grow by multiplying by an increasing amount. Examples of O(n!) factorial runtime algorithms include generating all permutations of a string and solving the traveling salesman problem with a brute-force search. As you might guess, you want to stay away, if possible, from algorithms with these running times.
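As quick sketches of both classes: a naive recursive Fibonacci roughly doubles its work each time n grows by one, while itertools.permutations generates all n! orderings of a string:

from itertools import permutations

def fib(n):
    # two recursive calls per step, so the call tree roughly doubles
    # each time n grows by one: exponential time
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

fib(10)                          # returns 55, but already makes 177 recursive calls

len(list(permutations('abc')))   # 3! = 6 orderings of a 3-character string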

Let’s summarize the common algorithm complexities with examples:

O(1), constant: accessing a list element by index; append() and pop() at the end of a list
O(log n), logarithmic: binary search
O(n), linear: a single loop over a list
O(n log n), log-linear: Merge Sort, Quicksort
O(n²), quadratic: nested loops; Insertion Sort, Selection Sort, Bubble Sort
O(n³), cubic: triply nested loops, such as naive matrix multiplication
O(2^n), exponential: iterating through all subsets of the input
O(n!), factorial: generating all permutations; brute-force traveling salesman

Rules of Big O Simplified

When analyzing your code’s time and space complexity, keep these four rules in mind.

1. With Big O, we typically look at the worst-case scenario

2. Different inputs — different variables: O(n + m) denotes two different inputs.

3. We drop constants: O(2n) simplifies to O(n).

4. We drop non-dominant (less significant) terms: O(n² + n) simplifies to O(n²).

Example:

a = [...]

func1(a) => 1 + a[0]                            O(1) constant

func2(a) => sum(a)                              O(n) linear

func3(a) => pair(a)                             O(n^2) quadratic

func4(a) => { func1(a), func2(a), func3(a) }    O(n^2 + n + 1), which becomes O(n^2)

If the number of steps stays the same no matter how large n is, then it’s constant O(1) time.

If you go through an n-long list (linearly, say with a loop), then you’ll take at most n steps: O(n).

If you go through an n-long list and then do n things each time (e.g., a loop inside a loop), then you’ll take at most n^2 steps: O(n^2).

If you do the first thing and then the second, n + n^2 steps, we call this O(n^2).

As n gets big, n + n^2 will be very close to n^2, so we only worry about the significant part. For the same reason, the notation is always simplified to ignore differences that become insignificant as n gets large.
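As a concrete sketch of these rules in Python (pair from the pseudocode above is hypothetical; here it is implemented as a simple all-pairs loop):

def func2(a):
    total = 0
    for x in a:                   # one pass over the list: O(n)
        total += x
    return total

def func3(a):
    pairs = []
    for x in a:                   # nested loops over the same list: O(n^2)
        for y in a:
            pairs.append((x, y))
    return pairs

def func4(a):
    first = 1 + a[0]              # O(1)
    total = func2(a)              # O(n)
    pairs = func3(a)              # O(n^2)
    return first, total, pairs    # O(n^2 + n + 1) simplifies to O(n^2)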

To summarize, the Big-O complexities are best illustrated by the growth-rate graph on the Big O Cheat Sheet webpage created by Eric Rowell, which covers the space and time Big-O complexities of common algorithms used in Computer Science.

Why Is This Important?

As a software engineer, you need to understand different orders of magnitude to optimize your code. When you are trying to improve an algorithm, focus on changing its order of magnitude rather than micro-optimizing it. For example, if you have an O(n²) algorithm that uses two nested for loops, instead of optimizing what happens inside the loops, see if you can rewrite it so that it doesn’t need the nested loops and thus has a smaller order of magnitude, such as O(n). Reducing the order of magnitude makes a massive difference in performance. The decisions you make about algorithms can have big consequences in the real world: faster page loads, quicker product checkout, and social feeds that populate promptly keep your users happy instead of losing them before the request completes.
