Common Complexities | Complexity Analysis

Rafael
6 min read · Jul 14, 2024


There’s a handful of complexities that are really common, and you’ll run into them again and again on your journey.

They are, from best to worst:

  • constant: O(1)
  • logarithmic: O(log n)
  • linear: O(n)
  • loglinear or linearithmic: O(n log n)
  • polynomial: O(n^x), where x > 1
  • exponential: O(x^n), where x > 1
  • factorial: O(n!)
Chart illustrating the different complexities: https://www.bigocheatsheet.com

We will be using the Big O notation throughout this article to represent complexities. But keep in mind that other notations, such as Omega (Ω) or Theta (Θ), can also be used.

The Big O notation describes how the performance of an algorithm scales as the input size grows, in terms of either space or time complexity. This notation is written as O(x), where x is an expression representing the growth rate of an algorithm’s performance in relation to an input of size n.

Constant Complexity: O(1)

O(1) is known as constant complexity. It implies that the amount of time or memory required does not increase or decrease as n (the input size) changes. For time complexity, it means that there’s no iteration or recursion that depends on n.

Algorithms with constant time complexity execute a fixed number of statements regardless of what n is. For instance, the following code just doubles a value and returns the result.

Function that returns the double of a number.
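A minimal sketch of such a function in TypeScript (the name `double` is just illustrative):

```typescript
// One multiplication and one return, no loops or recursion:
// the work done never depends on the size of the input.
function double(n: number): number {
  return n * 2;
}
```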

For space, it means that the sizes of the data structures used in your algorithm, if any, do not change as n increases or decreases. For example, the following code logs the double of each member in an array and has constant space complexity because no data structures are created whose size scales with the input.

Function to print out all numbers of an array times 2.
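A sketch of that idea (again, names are illustrative):

```typescript
// Logs each element times 2. The loop's time grows with the array,
// but no new data structure is allocated, so the extra space stays O(1).
function printDoubles(numbers: number[]): void {
  for (const n of numbers) {
    console.log(n * 2);
  }
}
```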

Logarithmic Complexity: O(log n)

O(log n) is known as logarithmic complexity. The logarithm in O(log n) usually has a base of two. The best way to understand this is by thinking about the concept of halving (dividing by two): if an algorithm halves the problem size on each iteration or recursive call, it is said to have logarithmic time complexity. This essentially means that when the input size n doubles, the time or space required increases by only a constant amount.

One of the most famous algorithms with a time complexity of O(log n) is the binary search algorithm:

Binary Search Algorithm
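One possible iterative sketch of binary search over a sorted array:

```typescript
// Each iteration halves the remaining search range, so at most about
// log2(n) + 1 iterations run before the loop ends.
function binarySearch(sorted: number[], target: number): number {
  let low = 0;
  let high = sorted.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sorted[mid] === target) return mid; // found the target
    if (sorted[mid] < target) low = mid + 1; // discard the left half
    else high = mid - 1; // discard the right half
  }
  return -1; // target is not in the array
}
```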

For example, suppose that the array passed to the binary search algorithm initially has a length of 4, so it takes log₂ 4 = 2 steps in the worst case to find the target value. If the array length increases to 8, it will take log₂ 8 = 3 steps in the worst case. Similarly, with an array length of 16, it takes log₂ 16 = 4 steps. Therefore, every time n doubles, the time complexity increases by a constant amount.

Space complexities of O(log n) are rare and typically happen in recursive algorithms. When a recursive call is made, the current local variables are pushed onto the stack and new ones are created. If the space usage increases logarithmically, meaning the space requirements increase by a constant amount when n doubles, then the space complexity is said to be O(log n). For instance, the Quick Sort algorithm, when implemented recursively, has an average space complexity of O(log n) because of its recursion depth:

Quick Sort Algorithm
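One common recursive sketch (Lomuto partitioning; real implementations vary):

```typescript
// On average the recursion goes about log2(n) levels deep, and each level
// keeps only a few local variables on the call stack: O(log n) extra space.
// (In the worst case the recursion can reach n levels.)
function quickSort(arr: number[], low = 0, high = arr.length - 1): void {
  if (low >= high) return;
  const pivot = arr[high];
  let i = low;
  for (let j = low; j < high; j++) {
    if (arr[j] < pivot) {
      [arr[i], arr[j]] = [arr[j], arr[i]]; // move smaller values left
      i++;
    }
  }
  [arr[i], arr[high]] = [arr[high], arr[i]]; // put the pivot in place
  quickSort(arr, low, i - 1); // sort the left partition
  quickSort(arr, i + 1, high); // sort the right partition
}
```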

Linear Complexity: O(n)

O(n) indicates linear complexity. It means that the time or space required grows 1:1 with the input size; that is, the cost of the algorithm increases in direct proportion to the size of the input n.

For time complexity, it means that when n increases by one, one more operation is needed, as shown by the following code:

Function to print out all numbers of an array times 2.
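This is the same sketch from the constant-space example, now read for its time cost:

```typescript
// One console.log per element: if the array grows by one, exactly one
// more iteration runs, so time scales linearly with n.
function printDoubles(numbers: number[]): void {
  for (const n of numbers) {
    console.log(n * 2);
  }
}
```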

In this code, if the length of the array increases by one, a new iteration is needed.

For space complexity, O(n) means that the algorithm’s data structures need a corresponding amount of additional memory each time n increases by one, as shown by the following algorithm:

Function to copy the elements of an array.
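A sketch:

```typescript
// The copy holds one element per input element, so the extra
// space used grows 1:1 with n.
function copyArray(source: number[]): number[] {
  const copy: number[] = [];
  for (const value of source) {
    copy.push(value);
  }
  return copy;
}
```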

In this code, if n increases by one, one more element is added to the array.

Loglinear or Linearithmic Complexity: O(n log n)

O(n log n) is called loglinear or linearithmic complexity. This complexity indicates that an algorithm performs n operations log n times. It’s usually found in the time complexity of recursive sorting algorithms.

The Quick Sort algorithm shown in the previous section is a good example: each level of recursion does O(n) work partitioning the array, and on average there are about log n levels, for O(n log n) time in total.

Space complexities of O(n log n) are extremely rare, to the point that you don’t even need to care about them. If an algorithm uses O(n log n) space, it’s awkward and impractical enough that you’ll likely have to use another approach in most cases.

Polynomial Complexity: O(n^x)

O(n^x) is known as polynomial complexity and includes a wide range of complexities depending on the value of x. It means that the algorithm makes x nested passes over the input, so the work grows as n^x. Polynomial complexity is very inefficient. While it is sometimes the only way to implement an algorithm, it is a warning sign that the code may need refactoring. This holds true for all the following complexities covered in this article.

This complexity happens especially when there are nested loops in an algorithm. In most cases, x will be equal to the number of nested loops, except when working with matrices or multi-dimensional arrays: there, nested loops are needed just to traverse the data structure once, and they do not, by themselves, represent polynomial time. For example:

Function that runs in O(n) despite nested loops, where n = size of the matrix.
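A sketch of that idea, taking n to be the total number of cells:

```typescript
// Two nested loops, yet still O(n): every cell of the matrix is
// visited exactly once.
function printMatrix(matrix: number[][]): void {
  for (const row of matrix) {
    for (const cell of row) {
      console.log(cell);
    }
  }
}
```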

When the time complexity is polynomial, the number of nested loops equals x · d, where d is the number of dimensions the data structure has (so a quadratic algorithm over a plain array needs two nested loops). Additionally, all of the loops must be iterating over the input.

Function for finding two equal values in an array.
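A sketch of that quadratic pattern:

```typescript
// Compares every pair of elements: roughly n * (n - 1) / 2 comparisons,
// which is O(n^2) time.
function hasDuplicate(arr: number[]): boolean {
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] === arr[j]) return true; // found two equal values
    }
  }
  return false;
}
```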

Polynomial space complexity will generally happen when creating matrices or multidimensional arrays in a function, for example:

Function that creates a multiplication table of array elements using O(n²) space to create the table.
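A sketch:

```typescript
// Builds an n-by-n table where cell [i][j] holds arr[i] * arr[j].
// The table stores n * n numbers, so the space used is O(n^2).
function multiplicationTable(arr: number[]): number[][] {
  const table: number[][] = [];
  for (const a of arr) {
    const row: number[] = [];
    for (const b of arr) {
      row.push(a * b);
    }
    table.push(row);
  }
  return table;
}
```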

Exponential Complexity: O(x^n)

O(x^n) is known as exponential complexity. It means that the time or space required grows as some constant x raised to the power of n. This is extremely inefficient and should be avoided whenever possible. O(x^n) often results from recursive functions that make multiple recursive calls.

The naive recursive Fibonacci function is a classic example of exponential time:

Function to find the nth element in a Fibonacci sequence in O(2^n) time.
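A sketch of that function:

```typescript
// Every call past the base cases spawns two more calls, so the call
// tree roughly doubles in size with each increment of n: O(2^n) time.
function fib(n: number): number {
  if (n <= 1) return n; // base cases: fib(0) = 0, fib(1) = 1
  return fib(n - 1) + fib(n - 2);
}
```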

O(2^n), as in the algorithm above, means that every time n increases by 1, the number of operations roughly doubles.

Algorithms with a space complexity of O(x^n) are extremely rare in standard software engineering. If you run into one, it’s likely because something is really, really wrong with the algorithm you’re using.

Factorial Complexity: O(n!)

Factorial complexity is the “worst” complexity on this list. To illustrate how quickly factorials grow: a standard deck has 52 cards, with 52! possible orderings. This number is astronomically large, greater than the number of atoms on Earth! Factorial complexity is unimaginably inefficient.

A classic example of an algorithm with factorial time complexity is one that prints all permutations of an array:

Function to print all orderings of an array in O(n!) time.
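One way to sketch it, recursively fixing one element at a time:

```typescript
// There are n! possible orderings, and the recursion reaches every one
// of them, so the time complexity is O(n!).
function printPermutations(arr: number[], prefix: number[] = []): void {
  if (arr.length === 0) {
    console.log(prefix); // one complete ordering
    return;
  }
  for (let i = 0; i < arr.length; i++) {
    // Fix arr[i] as the next element and permute the rest.
    const rest = [...arr.slice(0, i), ...arr.slice(i + 1)];
    printPermutations(rest, [...prefix, arr[i]]);
  }
}
```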

The above function prints all possible permutations of the array, leading to O(n!) time complexity.

Algorithms with O(n!) space complexity are very rare; they’re usually found in robotics applications involving the security of RFID tags, where the schemes are intentionally designed to be complex so that computing a solution is hard.

Conclusion

The first four complexities discussed, O(1), O(log n), O(n), and O(n log n), plus occasionally the fifth, O(n^x), are the ones you’ll encounter most often. The final two are extremely rare, but they’re important for understanding the concept of complexity.

Donate

Did you like this article? If you did, please consider buying me a coffee.

