A Guide to Understanding Big-O, Big-Ω and Big-ϴ Notation

Alejandro Ito Aramendia
5 min read · Oct 1, 2023


Time complexity is critical in programming. It is a key factor in whether your app, website or game loads in the blink of an eye. Ignoring it while coding would be a grave mistake and lead to suboptimal run times. The effect may be minuscule for small programs, but at scale it can be devastating!

Time complexity can be described with three notations: Big-Oh, Big-Omega and Big-Theta. Perhaps you are only familiar with Big-Oh, or maybe with none of them. Whichever it may be, by the end of this article you will have a rock-solid grasp of time complexity and its notations.

Coming into this article, I will assume that you have at least a basic understanding of Time Complexity. If that is not the case and you are unfamiliar with this concept, please have a quick read of my article on Time Complexity (it’ll only take a few minutes) and come back.

Table of Contents

  • Big-Omega (Ω) Notation
  • Big-Oh (O) Notation
  • Big-Theta (ϴ) Notation

Big-Omega (Ω) Notation

Big-Omega notation is used to describe an asymptotic lower bound on an algorithm's running time; it is often associated with the best-case scenario.

An algorithm with a running time of f(n) is Ω(g(n)) if and only if there exist positive constants c and n0 such that the expression below is satisfied.

0 ≤ c·g(n) ≤ f(n)   for all n ≥ n0

To gain a better understanding, let’s look at this graph.

[Figure: a time complexity graph with two curves, f(n) and c·g(n), with f(n) above c·g(n) for n ≥ n0.]

Big-Omega is an asymptotic notation and therefore only describes the rate of growth at large input sizes. For inputs above n0, c·g(n) is a lower bound on f(n), and we say f(n) has a time complexity of Ω(g(n)).

Let’s run through an example.

Example

Find the lower bound of the function f(n) = 4n² - 3n + 2.

We look for a constant c such that 4n² - 3n + 2 ≥ c·n². Taking c = 3, the inequality becomes 4n² - 3n + 2 ≥ 3n², i.e. n² - 3n + 2 ≥ 0, which factors as (n - 1)(n - 2) ≥ 0 and holds for all n ≥ 2. The next step, finding n0, therefore gives n0 = 2. With this we have all the necessary information to describe the asymptotic lower bound of the function: f(n) = Ω(n²).
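The inequality can be spot-checked numerically. Below is a minimal Python sketch; checking integer inputs up to an arbitrary cutoff illustrates (but does not prove) the asymptotic claim, and the constants c = 3, n0 = 2 are one valid choice.

```python
# Spot-check the Big-Omega bound for f(n) = 4n^2 - 3n + 2.
# c = 3 and n0 = 2 are one valid choice of constants; a finite
# range check illustrates, but does not prove, the bound.

def f(n):
    return 4 * n**2 - 3 * n + 2

def g(n):
    return n**2

c, n0 = 3, 2

# f(n) >= c * g(n) must hold for every n >= n0.
assert all(f(n) >= c * g(n) for n in range(n0, 10_000))
print("f(n) = Omega(n^2) holds on the checked range")
```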

Big-Oh (O) Notation

Conversely, Big-Oh notation is used to describe an asymptotic upper bound on an algorithm's running time; it is often associated with the worst-case scenario.

An algorithm with a running time of f(n) is O(g(n)) if and only if there exist positive constants c and n0 such that the expression below is satisfied.

0 ≤ f(n) ≤ c·g(n)   for all n ≥ n0

Equivalently, the graph of this upper bound is shown below.

[Figure: a time complexity graph with two curves, f(n) and c·g(n), with f(n) below c·g(n) for n ≥ n0.]

Just like Big-Omega, Big-Oh is an asymptotic notation: we are only interested in large values of n, as it is for these values that the run time affects the computer's performance. Any inconsistencies below n0 are therefore ignored, and we look for a function c·g(n) that bounds f(n) from above.

Let’s see an example.

Example

Find the upper bound of the function f(n) = 3n + log(n + 2).

Our function f(n) has two terms, 3n and log(n + 2), and the larger term dictates the growth rate, so g(n) = n. Since log(n + 2) ≤ n for all n ≥ 2 (taking the log base 2), we have 3n + log(n + 2) ≤ 3n + n = 4n, which gives c = 4 and n0 = 2. The full time complexity of the function's upper bound can now be described with these constants: f(n) = O(n).
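This bound can also be spot-checked numerically. A small Python sketch, assuming the article's log is base 2 (a different base only changes the constants):

```python
import math

# Spot-check the Big-Oh bound for f(n) = 3n + log2(n + 2).
# With c = 4 and n0 = 2, f(n) <= c*n should hold for every n >= n0,
# because log2(n + 2) <= n once n >= 2.

def f(n):
    return 3 * n + math.log2(n + 2)

c, n0 = 4, 2
assert all(f(n) <= c * n for n in range(n0, 10_000))
print("f(n) = O(n) holds on the checked range")
```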

Big-Theta (ϴ) Notation

Now that we’ve explored Big-Oh and Big-Omega notations, let’s dive into Big-Theta notation.

Big-Theta notation works a little differently to the others. It describes BOTH the upper and lower bound of a function at once. It is sometimes loosely referred to as the average time complexity, although strictly speaking it is a tight bound, not an average over inputs.

The purpose of this notation is to provide a single tight, consistent bound that captures both the lower and upper bound of a function (algorithm). If a function has a time complexity of ϴ(g(n)), it means the function is both Ω(g(n)) and O(g(n)).

Can you now see how Big-Theta provides more information?

However, it must be noted that Big-Theta can only be used when the lower and upper bound have the same time complexity. Otherwise, there is no single tight bound that describes the algorithm.
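To see why matching bounds are required, consider a hypothetical function (my own illustration, not from the article) that alternates between linear and quadratic growth. Its tightest upper bound is O(n²) while its tightest lower bound is Ω(n), so no single g(n) can sandwich it:

```python
def f(n):
    # Alternates between linear growth (even n) and quadratic (odd n).
    return n if n % 2 == 0 else n * n

# f is Omega(n) and O(n^2): n <= f(n) <= n^2 for every n >= 1.
assert all(n <= f(n) <= n * n for n in range(1, 10_000))

# But f is not Theta(n): on odd inputs f(n)/n = n grows without bound.
assert f(9_999) / 9_999 == 9_999

# And f is not Theta(n^2): on even inputs f(n)/n^2 = 1/n shrinks toward 0,
# so no positive constant c keeps c*n^2 below f(n).
assert f(9_998) / 9_998**2 == 1 / 9_998
```

Because the two bounds disagree, there is no g(n) for which this f(n) is ϴ(g(n)).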

Big-Theta notation is described mathematically as follows (a combination of the previous notations).

0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n)   for all n ≥ n0

This can be visualised in the graph below.

[Figure: a graph with three curves, showing f(n) sandwiched between the lower bound c1·g(n) and the upper bound c2·g(n).]

Let’s go through an example to understand this better.

Example

Find the ϴ bound of f(n) = 3n² - n.

We proceed exactly as in the Big-Oh and Big-Omega examples, once for each bound. For the lower bound, 3n² - n ≥ 2n² holds whenever n² ≥ n, i.e. for all n ≥ 1, giving c1 = 2. For the upper bound, 3n² - n ≤ 3n² holds for all n ≥ 1, giving c2 = 3. As the lower and upper bound share the same time complexity, Big-Theta notation can be used: f(n) = ϴ(n²) with c1 = 2, c2 = 3 and n0 = 1.
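As before, the sandwich inequality can be spot-checked numerically with one valid choice of constants (c1 = 2, c2 = 3, n0 = 1):

```python
# Spot-check the Big-Theta bound for f(n) = 3n^2 - n.
# Lower bound: 3n^2 - n >= 2n^2 whenever n^2 >= n, i.e. n >= 1.
# Upper bound: 3n^2 - n <= 3n^2 for all n >= 0.

def f(n):
    return 3 * n**2 - n

c1, c2, n0 = 2, 3, 1
assert all(c1 * n**2 <= f(n) <= c2 * n**2 for n in range(n0, 10_000))
print("f(n) = Theta(n^2) holds on the checked range")
```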

It’s finished! You’ve learnt it all!

In this article we have journeyed through the essential notations that shape our understanding of time complexity. While they may seem like mere symbols from afar, they are in fact the key to efficient algorithms, speedy apps and lightning-fast loading times.

As we conclude, let’s take a moment to recap:

  • Big-Oh (O) Notation describes an upper bound, associated with the worst-case run time.
  • Big-Omega (Ω) Notation describes a lower bound, associated with the best-case run time.
  • Big-Theta (ϴ) Notation encapsulates both extremes in a single tight, consistent bound, but is confined to cases where the two bounds share the same time complexity.

That is all from me, keep exploring!
