Deriving every asymptotic analysis relation: Big O, Big Omega, Big Theta

Scott Sauers
10 min read · Apr 10, 2024

--

We’ll walk through the derivations for every limit test result in asymptotic analysis, and finish with a table showing all possible test results.

Were you taught rules about what the limit of f(n)/g(n) as n approaches infinity tells you about the Big O, Big Omega, and/or Big Theta relationships between two functions, but you’re not exactly sure why they work? This post will walk through exactly how to arrive at those rules: why, for example, f(n) is in Ω(g(n)) if the limit is a constant between zero and infinity, why f(n) is still in Ω(g(n)) if the limit is infinity, but is not in Ω(g(n)) if the limit is zero instead. We will find out why Big O has the exact opposite set of rules, and why Big Theta has the unique property that if the limit of f(n)/g(n) as n approaches infinity is either zero or infinity, f(n) is not in Θ(g(n)) either way!

L’Hôpital’s Rule

L’Hôpital’s rule applies in either of two situations:

  1. Both f(n) and g(n) approach 0 as n → ∞ (the 0/0 indeterminate form), or
  2. Both f(n) and g(n) approach infinity as n → ∞ (the ∞/∞ indeterminate form).

If either condition is met and some additional criteria are satisfied (g′(n) is non-zero for large n, and the limit of f′(n)/g′(n) exists), the limit of the ratio f(n)/g(n) as n approaches infinity equals the limit of the ratio of the derivatives f′(n)/g′(n). In other words, differentiating f and g does not alter the limit of their quotient.
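As a quick sanity check, we can ask sympy to evaluate a limit of the ∞/∞ form and compare it against the ratio of the derivatives (this snippet assumes sympy is installed; the example functions are chosen arbitrarily for illustration):

```python
import sympy as sp

n = sp.symbols('n', positive=True)

# Both n and e^n approach infinity, so L'Hopital's rule applies:
# lim n/e^n should equal lim (d/dn n)/(d/dn e^n) = lim 1/e^n.
print(sp.limit(n / sp.exp(n), n, sp.oo))  # 0
print(sp.limit(1 / sp.exp(n), n, sp.oo))  # 0 (the ratio of the derivatives)
```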

Definition of Big O

f(n) = O(g(n)) signifies the existence of a positive constant C such that for all n beyond a specific threshold, f(n) ≤ Cg(n). This describes a scenario where f(n) grows no faster than g(n) after a certain point. You can imagine this as an upper bound on f with respect to g.
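For instance, f(n) = 3n + 5 is O(n): the witnesses C = 4 and threshold n₀ = 5 (values chosen here purely for illustration) satisfy the definition, which we can spot-check numerically:

```python
# Spot-check the Big O definition for f(n) = 3n + 5 and g(n) = n,
# using the illustrative witnesses C = 4 and n0 = 5.
f = lambda n: 3 * n + 5
g = lambda n: n
C, n0 = 4, 5
assert all(f(n) <= C * g(n) for n in range(n0, 100_000))  # holds for all n >= 5
```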

Definition of Big Omega

f(n) = Ω(g(n)) occurs when a positive constant C ensures f(n) ≥ Cg(n) beyond a particular n value. This asserts that f(n)’s growth rate is at least as fast as g(n)’s, effectively setting a lower growth boundary for f.

Statement 1

From these definitions, it follows that f(n) = Ω(g(n)) is true if and only if g(n) = O(f(n)).
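This is just a rearrangement of the inequality in the definition: dividing both sides by the positive constant C swaps the roles of the two functions, with 1/C witnessing g(n) = O(f(n)):

```latex
f(n) \ge C \cdot g(n) \quad\Longleftrightarrow\quad g(n) \le \frac{1}{C} \cdot f(n)
```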

Definition of Big Theta

f(n) = Θ(g(n)) is true if f(n) is both O(g(n)) and Ω(g(n)). This dual condition means that f(n) grows neither significantly faster nor significantly slower than g(n) over the long term, effectively bounding f within a “tight” range around g.

Consequence of the Definition of Big Theta

If f(n) = Θ(g(n)), then necessarily f(n) = O(g(n)) and f(n) = Ω(g(n)). This establishes that f(n)’s growth rate is tightly coupled with that of g(n): constant multiples of g bound f both from above and from below.

Fact 1

f(n) = Θ(g(n)) when positive constants a and b exist such that beyond a certain point, f(n) does not exceed a⋅g(n) nor fall below b⋅g(n). This is a stringent equivalence in growth rates, with f(n) squeezed between two proportional bounds relative to g(n).

Alternative Definition of Theta

Given f(n) = Θ(g(n)), there are positive constants a and b making a⋅g(n) ≤ f(n) ≤ b⋅g(n) for all n beyond some large value n₀. From this, we see f(n) has constrained variability in relation to g(n).
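As a concrete case, f(n) = 2n² + 3n is Θ(n²): the witnesses a = 2, b = 3, and n₀ = 3 (again chosen purely for illustration) satisfy the sandwich, which we can spot-check:

```python
# Spot-check the two-sided Theta bound for f(n) = 2n^2 + 3n and g(n) = n^2,
# using the illustrative witnesses a = 2, b = 3, and n0 = 3.
f = lambda n: 2 * n**2 + 3 * n
g = lambda n: n**2
a, b, n0 = 2, 3, 3
assert all(a * g(n) <= f(n) <= b * g(n) for n in range(n0, 100_000))
```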

Statement 2

If f(n) = Θ(g(n)) holds, then g(n) = Θ(f(n)) also holds true. The relationship between f and g concerning Θ is symmetric.
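The symmetry follows by dividing the sandwich from the Alternative Definition of Theta through by the constants:

```latex
a \cdot g(n) \le f(n) \le b \cdot g(n)
\quad\Longrightarrow\quad
\frac{1}{b} \cdot f(n) \le g(n) \le \frac{1}{a} \cdot f(n)
```

so g(n) is sandwiched between the constant multiples 1/b and 1/a of f(n), which is exactly g(n) = Θ(f(n)).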

Statement 2a

If f(n) is not in O(g(n)), meaning no constant a can ensure f(n) ≤ a⋅g(n) beyond some n, then f(n) cannot be in Θ(g(n)). This absence from O(g(n)) directly implies f cannot meet the requirements for Θ, lacking the necessary upper bound relation to g.

Statement 2b

Similarly, if f(n) is not in Ω(g(n)), indicating no constant b exists for which f(n) ≥ b⋅g(n) after a certain n, then f(n) is not in Θ(g(n)). The lack of a lower bound relationship with g precludes f from fitting within the tight growth rate equivalence specified by Θ.

When analyzing the behavior of two functions f(n) and g(n) as n approaches infinity, the relationship between their limits can reveal a lot about their comparative growth rates.

If the limit of (f(n) / g(n)) as n → ∞ is exactly 1, it indicates that f and g grow at the same rate. This equality in limits reflects a direct proportionality between f and g: neither function outpaces the other as n grows indefinitely.

In cases where this limit approaches a non-zero constant C, f and g differ asymptotically only by a constant factor. Specifically, if the limit of (f(n) / g(n)) is C, then for large n, f(n) behaves like C times g(n), or conversely, g(n) behaves like f(n) scaled by 1/C. This relationship shows that while f and g may not be exactly equivalent, their growth trajectories are fundamentally aligned, differing only in scale.

Further, suppose the exact proportionality C⋅f(n) = g(n) holds. Rearranging gives f(n) = g(n) / C, and evaluating the limit of (f(n) / (g(n) / C)) as n → ∞ simplifies to the limit of (f(n) / f(n)) = 1. This transformation confirms that once we adjust for the scaling factor C, f and g exhibit the same growth pattern: the ratio of their values tends to 1.

Hence, if the limit as n approaches infinity of (f(n) / g(n)) equals C, where C is a non-zero constant, it follows that there exist constants a and b such that for sufficiently large n, f(n) is always bounded by a⋅g(n) from above and b⋅g(n) from below. The existence of such constants is deducible from the definition of a limit: for any ε > 0 there is a threshold beyond which C − ε < f(n) / g(n) < C + ε, so taking ε = C/2 gives (C/2)⋅g(n) ≤ f(n) ≤ (3C/2)⋅g(n) for sufficiently large n. In other words, a = 3C/2 and b = C/2 work: f and g maintain a consistent relative growth pattern, modulated only by the scaling constant C.

Statement 3

When the limit of (f(n) / g(n)) as n → ∞ equals a non-zero constant C, it establishes a definitive relationship between the growth rates of f(n) and g(n). From Statement 1, Statement 2, and particularly the Definition of Big Theta (Θ), we can deduce:

  • f(n) = Θ(g(n)): f and g grow at the same rate, adjusted by a constant factor.
  • g(n) = Θ(f(n)): By the symmetry in the definition of Θ, the growth rate of g is also tightly bound to f.
  • f(n) = Ω(g(n)): f grows at least as fast as g, considering the non-zero constant C.
  • g(n) = Ω(f(n)): Similarly, g grows at least as fast as f when adjusted for C.
  • f(n) = O(g(n)): f does not grow faster than g beyond a constant factor.
  • g(n) = O(f(n)): g is bound by f’s growth, modulo a constant.

We know these because the argument above establishes the first point, Statement 2 allows us to “flip” g(n) and f(n) to write the second point, and the definition of Θ along with the first and second points gives us points 3, 4, 5, and 6.
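For example, with f(n) = 3n² + n and g(n) = n² (functions chosen arbitrarily), the limit of the ratio is the non-zero constant 3, so all six relations above hold:

```python
import sympy as sp

n = sp.symbols('n', positive=True)

# The ratio tends to the non-zero constant 3, so f = Theta(g) (and the rest).
print(sp.limit((3 * n**2 + n) / n**2, n, sp.oo))  # 3
# By symmetry, the reciprocal ratio tends to 1/3, also a non-zero constant.
print(sp.limit(n**2 / (3 * n**2 + n), n, sp.oo))  # 1/3
```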

Statement 4

The limit as n approaches infinity of (f(n) / g(n)) being 0 tells us that f(n) is O(g(n)).

If the limit of (f(n) / g(n)) as n → ∞ is 0, it categorically implies that f(n) is O(g(n)). f’s growth is negligible compared to g’s as n becomes very large: beyond some point, f(n) increases slower than any constant multiple of g(n). Concretely, by the definition of a limit, taking ε = 1 gives a threshold beyond which f(n) ≤ 1⋅g(n), which satisfies the Big O condition with C = 1.

Now consider a separate case: suppose f(n) and g(n) differ by the factor of a non-zero constant C, so that f(n) = C⋅g(n). In that case:

f(n) / g(n) = (C⋅g(n)) / g(n). Then,

The limit of (C⋅g(n))/g(n) as n → ∞ is C⋅(the limit of g(n)/g(n) as n → ∞)

= C⋅(1) = C.

Therefore:

Fact 2

If f(n) and g(n) differ only by a scaling factor of a non-zero constant C, then the limit of (f(n) / g(n)) as n approaches infinity will be that constant C. This is derived from setting f(n) = C⋅g(n), leading to the simplification of (f(n) / g(n)) = C as n → ∞. The calculation follows:

f(n) / g(n) = (Cg(n)) / g(n) = C,

where the limit of g(n) / g(n) as n approaches infinity is 1, hence the limit of the original ratio is C. A constant ratio between f and g means proportional growth rates, and C indicates the scale of difference.

Fact 3

When f(n) and g(n) possess identical leading terms but have differing coefficients, the limit of (f(n) / g(n)) as n approaches infinity results in a non-zero constant. This follows from the constant-multiple law for limits, the same property used in the derivation under Statement 4.

Let f(n) be represented as a·n^p + …, with p being the highest power of f, and g(n) as b·n^q + …, where q is the highest power of g, and both a and b are positive constants. The subsequent terms, indicated by + …, become negligible as n approaches infinity (so we can drop them), leaving only the leading terms to dictate the limit of (f(n) / g(n)):

The limit of (f(n) / g(n)) as n approaches infinity

= the limit of (a·n^p / b·n^q) as n approaches infinity

= the limit of (a/b) · (n^p / n^q) as n approaches infinity

= (a/b) · (the limit of n^(p-q) as n approaches infinity).

If p = q, then n^(p-q) = n⁰ = 1, and the limit is the non-zero constant a/b, exactly as Fact 3 claims. When p and q are distinct, two possible outcomes can emerge:

  1. If p > q, the expression equals (a/b) · (the limit of n^(p-q) as n approaches infinity) = infinity. Here, (C·n^(positive number)) grows without bound.
  2. If p < q, it results in (a/b) · (the limit of n^(p-q) as n approaches infinity) = 0, indicating that (C·n^(negative number)) inevitably trends towards 0.

Therefore:

Fact 4

If f(n) and g(n) possess differing highest powers of n, then the limit of (f(n) / g(n)) as n approaches infinity will not be a non-zero constant, irrespective of the coefficients attached to the leading terms. Dividing every term of f(n) and of g(n) by the highest power of n that appears, each non-leading term takes the form c·n^(q-p) with q < p, and such terms approach zero as n grows. This type of behavior can generalize to certain non-polynomial functions as well. We can conclude:

Fact 4a

Only the leading terms determine the limit of (f(n) / g(n)) as n approaches infinity, provided f and g are polynomials.

Fact 4b

Only leading terms are responsible for determining time complexity O, Ω, and Θ. Lower order terms become less significant as n increases, and constant factors on the leading term only change C.
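We can see Facts 3 and 4 in action with a few arbitrarily chosen polynomials:

```python
import sympy as sp

n = sp.symbols('n', positive=True)

# Same leading power: the limit is the ratio of the leading coefficients (Fact 3).
print(sp.limit((3 * n**3 + n) / (5 * n**3 + 7), n, sp.oo))  # 3/5
# Different leading powers: the limit is 0 or infinity, never a non-zero constant (Fact 4).
print(sp.limit(n**2 / n**3, n, sp.oo))  # 0
print(sp.limit(n**3 / n**2, n, sp.oo))  # oo
```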

Squeeze theorem

If a function is bounded by two other functions, and the bounding functions have the same limit, then the bounded function must also have that limit. Now, suppose there is a positive number C such that, above a certain value of n, f(n) is always greater than or equal to C·g(n) (that is, f(n) = Ω(g(n))). Then f(n) may be less than g(n) at arbitrarily large values of n only if f(n) falls below g(n) by no more than the factor of a non-zero constant (which will be less than one in this instance).

That is, when f(n) is less than g(n), it is possible for f(n) = Ω(g(n)) only when f(n) and g(n) differ by the factor of a non-zero constant. If f(n) and g(n) do differ by the factor of a non-zero constant, then the limit of f(n) / g(n) as n approaches infinity is a non-zero constant.

If the limit of f(n) / g(n) as n approaches infinity is 0, then we can conclude:

  1. f(n) and g(n) do not differ by the factor of a non-zero constant, and
  2. f(n) increases slower than g(n) at arbitrarily large values of n. Therefore, f(n) is not in Ω(g(n)), as there is no C such that, above a certain value of n, f(n) is always greater than or equal to C·g(n).

We can conclude:

Statement 5

The limit as n approaches infinity of (f(n) / g(n)) being 0 tells us that f(n) is not Ω(g(n)).
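A classic instance: the limit of log(n)/n as n → ∞ is 0, so log(n) is O(n) by Statement 4, but not Ω(n) by Statement 5, and therefore not Θ(n) by Statement 2b:

```python
import sympy as sp

n = sp.symbols('n', positive=True)

# log(n) eventually falls below every constant multiple of n.
print(sp.limit(sp.log(n) / n, n, sp.oo))  # 0
```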

Statement 6

Likewise to Statement 4, the limit as n approaches infinity of (g(n) / f(n)) being 0 tells us that f(n) is Ω(g(n)): applying Statement 4 with the roles of the functions swapped gives g(n) = O(f(n)), which by Statement 1 is equivalent to f(n) = Ω(g(n)). The limit of (g(n) / f(n)) being 0 means that at an arbitrarily large value of n, f(n) increases faster (faster than any constant multiple) than g(n). (Similarly, though this is redundant with Statement 3, we can see that if the limit as n approaches infinity of (g(n) / f(n)) is a non-zero constant, then f(n) = O(g(n)) and f(n) = Ω(g(n)), because you can simply scale either f(n) or g(n) by some constant C so that the rate of change is the same at an arbitrarily large n.)

Statement 7

If the limit as n approaches infinity of (f(n) / g(n)) is infinity, f(n) grows faster than g(n) (by more than a constant multiple) at arbitrarily large values of n, and f(n) = Ω(g(n)). Now suppose a positive number C exists such that, above a certain value of n, f(n) is always less than or equal to C·g(n). Then f(n) may be more than g(n) at arbitrarily large values of n only if f(n) exceeds g(n) by a non-zero constant multiple.

That is, when f(n) is more than g(n), it is possible for f(n) = O(g(n)) only when f(n) and g(n) differ by the factor of a non-zero constant (which will be greater than one in this case). If f(n) and g(n) do differ by the factor of a non-zero constant, then the limit of f(n) / g(n) as n approaches infinity will be a non-zero constant.

If the limit of f(n) / g(n) as n approaches infinity is infinity, then we can conclude:

  1. f(n) and g(n) do not differ by the factor of a non-zero constant, and
  2. f(n) increases faster than g(n) at arbitrarily large values of n. Therefore, f(n) is not in O(g(n)), as there is no C such that, above a certain value of n, f(n) is always less than or equal to C·g(n).

(This should feel reminiscent of the squeeze theorem.)

We can conclude:

Statement 8

If the limit as n approaches infinity of (f(n) / g(n)) is infinity, f(n) is not in O(g(n)).
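For instance, the limit of 2ⁿ/n² as n → ∞ is infinity, so 2ⁿ is Ω(n²) by Statement 7 but not O(n²) by Statement 8, and therefore not Θ(n²) by Statement 2a:

```python
import sympy as sp

n = sp.symbols('n', positive=True)

# 2^n eventually exceeds every constant multiple of n^2.
print(sp.limit(2**n / n**2, n, sp.oo))  # oo
```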

By the definitions of O, Ω, and Θ and by evaluating the limit of f(n)/g(n) as n approaches infinity, we can now prove or disprove any of the following:

  • f(n) = Θ(g(n))
  • f(n) = Ω(g(n))
  • f(n) = O(g(n))

Let’s summarize what we’ve found so far. We can also include the reasons or derivations for the results for reference:

| Limit of f(n)/g(n) as n → ∞ | f(n) = O(g(n))? | f(n) = Ω(g(n))? | f(n) = Θ(g(n))? | Why |
| --- | --- | --- | --- | --- |
| 0 | Yes | No | No | Statements 4 and 5; Statement 2b |
| A non-zero constant C | Yes | Yes | Yes | Statement 3 |
| Infinity | No | Yes | No | Statements 7 and 8; Statement 2a |
| Undefined | Unknown | Unknown | Unknown | The test gives no information (see below) |

It is possible for the limit of f(n)/g(n) as n approaches infinity to be undefined. In this case, all, none, or some of the three statements (f(n) = Θ(g(n)), f(n) = Ω(g(n)), and f(n) = O(g(n))) may be true or false, and the test doesn’t give us any information. For example, if f(n) = (2 + sin n)·n and g(n) = n, the ratio oscillates between 1 and 3 and never converges, yet f(n) = Θ(g(n)) with witnesses a = 3 and b = 1.
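Putting it all together, here is a minimal sketch of the whole limit test as a Python function using sympy. The helper name limit_test is my own, and the handling of non-convergent limits is simplified (sympy can return objects like AccumBounds for oscillating ratios, and this sketch lumps anything unrecognized into the inconclusive case):

```python
import sympy as sp

def limit_test(f, g, n):
    """Classify f versus g using the limit of f/g as n approaches infinity.

    Returns a dict stating whether f = O(g), f = Omega(g), and f = Theta(g),
    or None when the test is inconclusive (limit undefined or unrecognized).
    """
    L = sp.limit(f / g, n, sp.oo)
    if L == 0:                          # Statements 4, 5, and 2b
        return {"O": True, "Omega": False, "Theta": False}
    if L == sp.oo:                      # Statements 7, 8, and 2a
        return {"O": False, "Omega": True, "Theta": False}
    if L.is_number and L.is_positive:   # Statement 3 (a positive finite constant)
        return {"O": True, "Omega": True, "Theta": True}
    return None                         # undefined limit: no information

n = sp.symbols('n', positive=True)
print(limit_test(sp.log(n), n, n))        # {'O': True, 'Omega': False, 'Theta': False}
print(limit_test(3 * n**2 + n, n**2, n))  # {'O': True, 'Omega': True, 'Theta': True}
print(limit_test(2**n, n**2, n))          # {'O': False, 'Omega': True, 'Theta': False}
```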
