4. Algorithmic Thinking Series — What makes an algorithm good enough?

The First Principles Lover
4 min read · Feb 27, 2024


Comparing algorithms demands a quantitative approach, but how do we define the metric by which we decide which algorithm is better?

First of all, we need to ensure that the algorithm is correct. Then we look at how efficient it is. Consider the task of multiplying two numbers. One can use repeated addition, or the method we all learned in school: multiply one of the numbers by each digit of the other, shift, and add. Intuitively, the second method seems better, but why? It requires far fewer elementary operations, which translates to less running time if we charge a unit of time per operation, and it eases the cognitive load of memorizing intermediate values, as the short sketch below makes concrete. Hence, we want algorithms that optimize time as well as memory requirements. But how do we quantify these?
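As a rough sketch (the function names and the way steps are counted are illustrative assumptions, not a precise cost model), the two approaches can be compared by counting elementary steps:

```python
def multiply_by_repeated_addition(a: int, b: int) -> tuple[int, int]:
    """Add a to itself b times; returns (product, number of additions)."""
    result, ops = 0, 0
    for _ in range(b):
        result += a
        ops += 1          # one addition per unit of b: ~b steps
    return result, ops


def multiply_grade_school(a: int, b: int) -> tuple[int, int]:
    """Multiply a by each decimal digit of b, shift, and add.

    Each loop iteration forms one shifted partial product, so the step
    count grows with the number of digits of b, not with its value.
    """
    result, ops, shift = 0, 0, 0
    for digit in reversed(str(b)):
        result += a * int(digit) * 10 ** shift
        shift += 1
        ops += 1
    return result, ops


print(multiply_by_repeated_addition(1234, 5678))  # (7006652, 5678): steps grow with the value
print(multiply_grade_school(1234, 5678))          # (7006652, 4):    steps grow with the digit count
```

The first method's work grows with the value of an operand; the grade-school method's work grows only with the number of digits, which is why it feels so much cheaper by hand.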

Enter complexity theory to provide a mathematical framework for analyzing algorithmic efficiency.

  1. Big Oh Notation (O):
    - A function f(n) is said to be O(g(n)) if there exist positive constants c and n_0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n_0.
    - Symbolically, f(n) = O(g(n)), read as “Big Oh of g(n)”, says that g(n) is an upper bound on f(n); the bound need not be tight.
  2. Little Oh Notation (o):
    - A function f(n) is said to be o(g(n)) if for all positive constants c, there exists n_0 such that 0 ≤ f(n) < cg(n) for all n ≥ n_0.
    - Symbolically, f(n) = o(g(n)), read as “Little Oh of g(n)”, says that g(n) is a strict upper bound on f(n): f(n) grows strictly slower than g(n).
  3. Big Omega Notation (Ω):
    - A function f(n) is said to be Ω(g(n)) if there exist positive constants c and n_0 such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n_0.
    - Symbolically, f(n) = Ω(g(n)), read as “Big Omega of g(n)”, says that g(n) is a lower bound on f(n); the bound need not be tight.
  4. Little Omega Notation (ω):
    - A function f(n) is said to be ω(g(n)) if for all positive constants c, there exists n_0 such that 0 ≤ cg(n) < f(n) for all n ≥ n_0.
    - Symbolically, f(n) = ω(g(n)), read as “Little Omega of g(n)”, says that g(n) is a strict lower bound on f(n): f(n) grows strictly faster than g(n).
  5. Big Theta Notation (Θ):
    - A function f(n) is said to be Θ(g(n)) if it is both O(g(n)) and Ω(g(n)).
    - Symbolically, f(n) = Θ(g(n)), read as “Big Theta of g(n)”. This is a tight/exact bound.

Notice that there is no “little theta” notation for complexities that are both ω(g(n)) and o(g(n)): no function can grow strictly faster and strictly slower than g(n) at the same time, so no such algorithm exists. In other words, the intersection of the set of functions with complexity ω(g(n)) and the set with complexity o(g(n)) is empty.

Mathematically, the definitions can be summarized as in the slide at https://slideplayer.com/slide/6652675/.
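A compact set-form restatement of the five definitions above:

```latex
\begin{aligned}
O(g(n))      &= \{\, f(n) : \exists\, c > 0,\ n_0 > 0 \ \text{such that}\ 0 \le f(n) \le c\,g(n)\ \ \forall\, n \ge n_0 \,\}\\
o(g(n))      &= \{\, f(n) : \forall\, c > 0\ \exists\, n_0 > 0 \ \text{such that}\ 0 \le f(n) < c\,g(n)\ \ \forall\, n \ge n_0 \,\}\\
\Omega(g(n)) &= \{\, f(n) : \exists\, c > 0,\ n_0 > 0 \ \text{such that}\ 0 \le c\,g(n) \le f(n)\ \ \forall\, n \ge n_0 \,\}\\
\omega(g(n)) &= \{\, f(n) : \forall\, c > 0\ \exists\, n_0 > 0 \ \text{such that}\ 0 \le c\,g(n) < f(n)\ \ \forall\, n \ge n_0 \,\}\\
\Theta(g(n)) &= O(g(n)) \cap \Omega(g(n)), \qquad o(g(n)) \cap \omega(g(n)) = \varnothing
\end{aligned}
```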

Intuitively, if your algorithm is of complexity

O(n), it means that your algorithm is upper bounded by some linear function in the long run. The bound need not be tight.

o(n), it means that your algorithm grows strictly slower / has strictly better complexity than any linear function in the long run; the growth rate of an o(n) algorithm is strictly less than linear. This is a strict bound.

It is easy to see that an o(n) function cannot itself be linear, while an O(n) function can be. Also, notice that Little Oh implies Big Oh, but not vice versa. E.g., log(n) is o(n) as well as O(n), but n is only O(n), not o(n).
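When the limit exists (and g(n) is eventually positive), a convenient way to check Little Oh is via the ratio of the two functions:

```latex
\lim_{n \to \infty} \frac{f(n)}{g(n)} = 0 \;\Longrightarrow\; f(n) = o(g(n)).
```

Here, log(n)/n → 0, so log(n) = o(n) (and hence also O(n)), whereas n/n → 1 ≠ 0, so n is O(n) but not o(n).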

Ω(n), it means that your algorithm is lower bounded by some linear function in the long run. The bound need not be tight.

ω(n), it means that your algorithm grows strictly faster / has strictly worse complexity than any linear function in the long run; the growth rate of an ω(n) algorithm is strictly greater than linear. This is a strict bound.

It is easy to see that an ω(n) function cannot itself be linear, while an Ω(n) function can be. Here also, Little Omega implies Big Omega, but not vice versa. E.g., n² is ω(n) as well as Ω(n), but n is only Ω(n), not ω(n).
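As a quick numerical sanity check of both examples (an illustrative sketch; the chosen values of n are arbitrary), one can watch the ratios shrink towards 0 or blow up as n grows:

```python
import math

# log(n)/n -> 0        : log n grows strictly slower than n, i.e. log n = o(n)
# n^2/n = n -> infinity: n^2 grows strictly faster than n,  i.e. n^2   = omega(n)
for n in (10, 1_000, 100_000, 10_000_000):
    print(f"n = {n:>10,}   log(n)/n = {math.log(n)/n:.2e}   n^2/n = {n**2 / n:.2e}")
```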

Θ(n), it means that your algorithm has linear complexity in the long run: it is bounded above and below by linear functions, up to constant factors.
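A small worked example: take f(n) = 3n + 5. Then

```latex
n \;\le\; 3n + 5 \;\le\; 4n \qquad \text{for all } n \ge 5,
```

so 3n + 5 is Ω(n) (with c = 1) and O(n) (with c = 4, n_0 = 5), and therefore Θ(n).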

These notations are commonly used in the analysis of algorithmic time complexity to describe the bounds of how an algorithm’s performance scales with input size n.

Next Article: https://medium.com/@the.first.principle.lover/algorithmic-thinking-series-greediness-is-all-it-takes
