The Ultimate Guide to Asymptotic Analysis

A Guide to Understanding the Hidden World of Algorithm Analysis!

CS Dagger · Jan 25, 2023

What is Asymptotic Analysis?

Asymptotic analysis solves the problem of analyzing the performance of algorithms: we investigate how the time or space requirements of an algorithm grow, taking into account only the input size. It is important to note that we’re not measuring the actual running time!

This way of analysis has some benefits:

  1. The efficiency we measure does not depend on the computer hardware or architecture the algorithm will run on, which means that this type of analysis is machine-independent.
  2. We operate only with the input size, which means that this type of analysis is simple.

In simple words, asymptotic analysis studies how algorithms behave as the size of our data grows very large, or even goes to infinity. This analysis allows us to compare algorithms and determine which one will be more efficient for a given input size.
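To make this concrete, here is a minimal sketch in Python (an illustration, not from the article; the function names and the step-counting scheme are assumptions) that counts abstract steps, rather than seconds, for a linear and a quadratic algorithm:

```python
# A minimal sketch: we count abstract "steps" instead of measuring wall-clock
# time, which keeps the comparison machine-independent.

def linear_scan_steps(n: int) -> int:
    # One pass over n items: about n steps.
    steps = 0
    for _ in range(n):
        steps += 1
    return steps

def pairwise_compare_steps(n: int) -> int:
    # Comparing every pair among n items: about n * n steps.
    steps = 0
    for _ in range(n):
        for _ in range(n):
            steps += 1
    return steps

for n in (10, 100, 1000):
    print(n, linear_scan_steps(n), pairwise_compare_steps(n))
# Each time n grows 10x, the linear count grows 10x,
# while the quadratic count grows 100x.
```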

What is asymptotic notation, and why do we need it?

Asymptotic notations are mathematical notations used to describe the running time of an algorithm as the input tends towards a particular value or a limiting value (often infinity).

But what is mathematical notation? Mathematical notation consists of symbols, unspecified numbers, relations, and other mathematical objects, assembled into expressions and formulas. In other words, it is a tool for representation.

Given the above, we can say that asymptotic notations are the tools used in asymptotic analysis for the mathematical representation of the time complexity of an algorithm.

The main asymptotic notations to know

There are three main asymptotic notations:

  1. Big-O notation
  2. Omega notation
  3. Theta notation

Let’s take each one in turn!

Big-O notation

Big-O notation is used to represent the upper bound of a function, in our case the upper bound of the running time of an algorithm. Therefore, it gives us the worst-case complexity of an algorithm. This is why Big-O is the most widely used notation: we almost always want to know the worst time our algorithm will take.

In the real world, we will write it like this: the algorithm’s complexity is O(n), which reads as “big O of n”.

As Big-O is a mathematical tool, we will describe it mathematically.

[Figure: a plot showing T(n) staying below C * f(n) for all n ≥ N]

What does it mean when we say that T(n) = O(f(n))?

Formally, we can say that O(f(n)) is the set of all functions T(n) that satisfy the following: there exist positive constants C and N such that for all n ≥ N, T(n) ≤ C * f(n).

So, for every n ≥ N, the running time of the algorithm does not exceed the bound C * f(n).
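As a quick sanity check of the definition, here is a minimal sketch (the function T(n) = 3n + 5 and the witnesses C = 4, N = 5 are illustrative assumptions, not from the article):

```python
# Checking that T(n) = 3n + 5 is O(n): with C = 4 and N = 5,
# T(n) <= C * f(n) holds for every n >= N (verified here up to 10,000).
def T(n):
    return 3 * n + 5

def f(n):
    return n

C, N = 4, 5
assert all(T(n) <= C * f(n) for n in range(N, 10_000))
print("3n + 5 is O(n) with witnesses C = 4, N = 5")
```

The definition only requires that some pair of witnesses exists, so any larger C or N would work just as well.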

Table of important Big-O sets (from smallest to largest), where ⊆ means “is a subset of”:


╔══════════════╦═══════════════════════════╗
║ Function     ║ Common name               ║
╠══════════════╬═══════════════════════════╣
║ O(1)         ║ constant                  ║
║ ⊆ O(log n)   ║ logarithmic               ║
║ ⊆ O(log² n)  ║ log-squared               ║
║ ⊆ O(√n)      ║ root-n                    ║
║ ⊆ O(n)       ║ linear                    ║
║ ⊆ O(n log n) ║ n log n                   ║
║ ⊆ O(n²)      ║ quadratic                 ║
║ ⊆ O(n³)      ║ cubic                     ║
║ ⊆ O(n⁴)      ║ quartic                   ║
║ ⊆ O(2ⁿ)      ║ exponential               ║
║ ⊆ O(eⁿ)      ║ exponential (but more so) ║
║ ⊆ O(n!)      ║ factorial                 ║
║ ⊆ O(nⁿ)      ║ n to the n                ║
╚══════════════╩═══════════════════════════╝

Algorithms that run in O(n log n) time OR FASTER are considered to be EFFICIENT.

Algorithms that run in n⁷ time OR SLOWER are considered to be USELESS.

NOTE: Big-O notation doesn’t always tell the whole story. Suppose we have 2 algorithms: T(n) = n*log2(n) and U(n) = 100n. We say that T(n) dominates U(n) asymptotically, so U(n) looks better on paper. But log2(n) only exceeds 100 when n > 2¹⁰⁰, and such an input can’t fit in any computer (with 64-bit architecture, we can address at most 2⁶⁴ memory addresses). So IN PRACTICE n ≤ 2⁵⁰ or so, which means log2(n) < 50 < 100, and the n*log2(n) algorithm will be the better choice.
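To see this crossover numerically, here is a minimal sketch (the sample input sizes are illustrative assumptions):

```python
import math

# Comparing T(n) = n * log2(n) with U(n) = 100 * n at various input sizes.
# T(n) only overtakes U(n) once log2(n) > 100, i.e. n > 2**100.
for exp in (10, 20, 50, 100):
    n = 2 ** exp
    print(f"n = 2^{exp}: n*log2(n) = {n * math.log2(n):.3e}, 100n = {100 * n:.3e}")
```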

Omega notation

Omega notation is used to represent the lower bound of a function, in our case the lower bound of the running time of an algorithm. Therefore, it gives us the best-case complexity of an algorithm.

In the real world, we will write it like this: the algorithm’s complexity is Ω(n), which reads as “omega of n”.

[Figure: a plot showing T(n) staying above C * f(n) for all n ≥ N]

What does it mean when we say that T(n) = Ω(f(n))?

Formally, we can say that Ω(f(n)) is the set of all functions T(n) that satisfy the following: there exist positive constants C and N such that for all n ≥ N, T(n) ≥ C * f(n).

So, for every n ≥ N, the algorithm requires at least C * f(n) time; Ω(f(n)) gives the minimum running time.
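Mirroring the Big-O check above, here is a minimal sketch (again with the illustrative T(n) = 3n + 5, this time with the witnesses C = 3 and N = 1):

```python
# Checking that T(n) = 3n + 5 is Omega(n): with C = 3 and N = 1,
# T(n) >= C * f(n) holds for every n >= N (verified here up to 10,000).
def T(n):
    return 3 * n + 5

def f(n):
    return n

C, N = 3, 1
assert all(T(n) >= C * f(n) for n in range(N, 10_000))
print("3n + 5 is Omega(n) with witnesses C = 3, N = 1")
```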

Theta notation

Theta notation encloses the function from above and below, which means it represents both the upper and the lower bound of the running time of an algorithm. Therefore, it gives us a tight bound on an algorithm’s complexity.

In the real world, we will write it like this: the algorithm’s complexity is Θ(n), which reads as “theta of n”.

[Figure: a plot showing T(n) enclosed between C1 * f(n) and C2 * f(n) for all n ≥ N]

What does it mean when we say that T(n) = Θ(f(n))?

Formally, we can say that Θ(f(n)) is the set of all functions T(n) that satisfy the following: there exist positive constants C1, C2, and N such that for all n ≥ N, C1 * f(n) ≤ T(n) ≤ C2 * f(n).

In this situation, we can say that f(n) is an asymptotically tight bound for T(n).

We use Theta notation when a function lies in both O(f(n)) and Ω(f(n)). So, if the worst case of an algorithm is O(n²) and its best case is Ω(n²), then we can say that its running time is Θ(n²).
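Combining the two previous checks, here is a minimal sketch (with the illustrative T(n) = 3n² + 2n and the witnesses C1 = 3, C2 = 4, N = 2):

```python
# Checking that T(n) = 3n^2 + 2n is Theta(n^2): with C1 = 3, C2 = 4, N = 2,
# C1 * f(n) <= T(n) <= C2 * f(n) holds for every n >= N (verified up to 10,000).
def T(n):
    return 3 * n * n + 2 * n

def f(n):
    return n * n

C1, C2, N = 3, 4, 2
assert all(C1 * f(n) <= T(n) <= C2 * f(n) for n in range(N, 10_000))
print("3n^2 + 2n is Theta(n^2) with witnesses C1 = 3, C2 = 4, N = 2")
```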

Conclusion

Asymptotic notations allow us to compare the efficiency of different algorithms as the input size grows.

Asymptotic notation is nothing more than a way of expressing the relationship between two functions. These notations don’t say what the functions actually are, which means that we’re not measuring the actual running time.

In the next chapter, you can read about asymptotic notation calculations.

