Demystifying the Master Theorem

Aman Jain
4 min read · Mar 14, 2018


“A group of people brainstorming over a laptop and sheets of paper” by Štefan Štefančík on Unsplash

When you implement any algorithm, the first question you are asked is: what is its run-time complexity?

Refresher: Run-time complexity is usually expressed using Big O notation. Big O notation is used in computer science to describe the performance or complexity of an algorithm. Big O specifically describes the worst-case scenario, and can be used to describe the execution time required by an algorithm.

Calculating Big O notation for non-recursive programs is a bit simpler than for recursive programs. It becomes even more difficult when you have divide-and-conquer algorithms such as Binary Search, Merge Sort or Quick Sort.

This is where the Master Theorem comes in handy.

Consider a problem that can be solved using a recursive algorithm like the following:

function T(n: size of the problem) defined as:
if n < 1 then return // base case

Do work of amount f(n) // the work done outside the recursive calls

T(n/b)
T(n/b)
T(n/b)
..... repeat this for a total of a recursive calls
end

In the above algorithm, we divide the input of size n into sub-problems, each of size n/b, and make a recursive calls. The number of sub-problems grows exponentially with the depth of the recursion. So in the case of merge sort, the first call to the function creates 2 sub-problems (as we divide the input into 2), each of those creates 2 more, giving 2² = 4 sub-problems at the next level, and so on.

Algorithms such as the one above can be represented by the recurrence: T(n) = a T(n/b) + f(n)
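To make the pattern concrete, here is a minimal merge sort sketch in Python (just an illustration of the recurrence, not code from the original post): it makes a = 2 recursive calls on halves of size n/2, and the merge step is the f(n) = O(n) work done outside the recursion.

def merge_sort(xs):
    # Base case: a list with fewer than 2 elements is already sorted.
    if len(xs) < 2:
        return xs
    mid = len(xs) // 2
    left = merge_sort(xs[:mid])   # first recursive call, size n/2
    right = merge_sort(xs[mid:])  # second recursive call, size n/2
    # f(n): merging two sorted halves takes O(n) work.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 8, 1, 9, 3, 7, 4]))  # [1, 2, 3, 4, 5, 7, 8, 9]

So for merge sort, a = 2, b = 2 and f(n) = O(n).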

The Master Theorem allows us to easily calculate the running time of these algorithms without expanding the expression above by hand.

Applying the Master Theorem helps in finding the run time of some popular algorithms, such as:

Binary Search: T(n) = T(n/2) + O(1), which gives O(log n)
Merge Sort: T(n) = 2T(n/2) + O(n), which gives O(n log n)
Quick Sort (best case): T(n) = 2T(n/2) + O(n), which gives O(n log n)

Let’s simplify this a bit. Like many people, I get confused by mathematical terms and would like a simple explanation.

Consider the above program: if we were to draw it in a tree format, the root would be the original problem of size n, every node would branch into a sub-problems, and the problem size would shrink by a factor of b at each level, until we hit the base case at depth log_b(n).

Refresher: log₂(8) = 3

To simplify this further, let’s take the example of merge sort where you are sorting 8 numbers. The root of the tree works on all 8 numbers, the next level on two halves of 4, then four pieces of 2, and finally eight pieces of size 1, giving log₂(8) = 3 levels of splitting below the root.
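Since the tree diagram does not survive in text form, here is a tiny Python sketch (show_tree is a hypothetical helper, just for illustration) that prints the merge sort recursion tree for 8 elements:

def show_tree(n, level=0):
    # Print one node of the recursion tree, indented by its depth,
    # then recurse on the two halves until size 1 is reached.
    print("  " * level + f"size {n}")
    if n > 1:
        show_tree(n // 2, level + 1)
        show_tree(n // 2, level + 1)

show_tree(8)  # prints a size-8 root, then 4s, 2s and 1s: 3 levels of splitting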

Now, the amount of work done at each level can be written out level by level:

Level 0: 1 problem of size n, doing O(n^d) work
Level 1: a problems of size n/b, each doing O((n/b)^d) work
Level 2: a² problems of size n/b², each doing O((n/b²)^d) work
Level i: a^i problems of size n/b^i, each doing O((n/b^i)^d) work

It’s important to understand how we derived the total work done here.

The work done at level 0 is the function f(n), which we assume takes O(n^d) time. This per-problem effort keeps reducing at every level, as our problem size also reduces: at level i each sub-problem has size n/b^i and therefore does O((n/b^i)^d) work. Based on the number of sub-problems that we create, we also multiply by a factor of a at each level, and that factor keeps increasing as our sub-problems keep multiplying: level i has a^i of them.
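Putting the two observations together (a quick sketch of the standard expansion, using the same a, b and d as above):

Work at level i = a^i · O((n/b^i)^d) = O(n^d) · (a/b^d)^i

Total work = O(n^d) · [ (a/b^d)^0 + (a/b^d)^1 + … + (a/b^d)^(log_b n) ]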

Keep this total-work formula aside for a few minutes (just don’t forget how we got here.. :)).

Let’s look at another formula, this time for geometric sequences. An example of a geometric sequence would be 1, 2, 4, 8, 16, 32, 64, …, where the numbers keep increasing by a constant factor (here, 2).

We can calculate the number at any position within the geometric progression by using the formula:

x(n) = a · r^(n−1)

where a is the first term, r is the common factor and n is the position in the series.

Go ahead, give it a try. For the above series: x(4) = 1 · 2³ = 8
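A quick sanity check in Python (x is a hypothetical helper named after the formula above):

def x(n, a=1, r=2):
    # n-th term of a geometric sequence with first term a and common factor r
    return a * r ** (n - 1)

print(x(4))  # -> 8, matching the series 1, 2, 4, 8, 16, ...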

Now, if we add up a geometric sequence, the sum can be written as:

S(n) = a + ar + ar² + … + ar^(n−1) = a · (r^n − 1) / (r − 1), for r ≠ 1

If r < 1, the biggest value in the sequence will be ar^0 = a and the rest can be discarded (as with Big O notation, we discard the lower-order terms), so the sum is O(a).

If r = 1, every term equals a, so the sum is simply a · n.

If r > 1, the biggest value in the sequence will be ar^(n−1), so the sum is O(ar^(n−1)).
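A quick numeric illustration of the three regimes (geo_sum is a hypothetical helper):

def geo_sum(a, r, n):
    # Add up a + a*r + a*r^2 + ... + a*r^(n-1), term by term.
    return sum(a * r ** i for i in range(n))

print(geo_sum(1, 0.5, 10))  # r < 1: stays close to the first term (about 2)
print(geo_sum(1, 1, 10))    # r = 1: grows linearly with n (10)
print(geo_sum(1, 2, 10))    # r > 1: dominated by the last term (1023, ~2^9)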

Let’s bring back the formula that we calculated for the total work done in the recursion:

Total work = O(n^d) · [ (a/b^d)^0 + (a/b^d)^1 + … + (a/b^d)^(log_b n) ]

This too can be expressed as the sum of a geometric sequence: our first term is O(n^d) and the common ratio is r = a/b^d, so the i-th term is O(n^d) · (a/b^d)^i.

Thus, for Case A (r < 1, i.e. a < b^d, equivalently d > log_b(a)): the sum is dominated by its first term, so T(n) = O(n^d).

Case B (r = 1, i.e. a = b^d, equivalently d = log_b(a)): all log_b(n) levels do the same O(n^d) amount of work, so T(n) = O(n^d · log n).

Case C (r > 1, i.e. a > b^d, equivalently d < log_b(a)): the sum is dominated by its last term, which works out to O(n^(log_b a)), so T(n) = O(n^(log_b a)).

If we refer to the recurrence relations of different algorithms, plugging in the values of a, b and d can now easily give us the run-time complexity. Try it out :).
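To close the loop, here is a small Python helper (master_theorem is my own hypothetical sketch; it assumes the simplified form T(n) = a·T(n/b) + O(n^d) used throughout this post) that picks the right case:

import math

def master_theorem(a, b, d):
    # Classify T(n) = a*T(n/b) + O(n^d) into one of the three cases.
    if a < b ** d:               # Case A: r < 1, the top level dominates
        return f"O(n^{d})"
    if a == b ** d:              # Case B: r = 1, all levels contribute equally
        return f"O(n^{d} log n)"
    exponent = math.log(a, b)    # Case C: r > 1, the leaves dominate
    return f"O(n^{exponent:g})"

print(master_theorem(2, 2, 1))  # Merge Sort:    O(n^1 log n), i.e. O(n log n)
print(master_theorem(1, 2, 0))  # Binary Search: O(n^0 log n), i.e. O(log n)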
