Amortized Algorithm Analysis

Tomáš Bouda
4 min read · May 3, 2017

In my 100 days of algorithms challenge, I usually implement an algorithm and write a few paragraphs to make a point, or at least to motivate it. The text sometimes contains phrases like “it can be shown that …” or “it can be proved that …”, or there’s just a bare statement: the algorithm runs in O(n).

Even though I either search for a reliable resource or do the calculations on my own for each such claim, that’s still the part I don’t like. This article is going to be the exact opposite.

This article is about programming without implementation. This is about programming with mathematics.

Algorithm

Given a binary number, increment it by 1.
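For concreteness, here is a minimal sketch in Python; the bit-list representation (least significant bit first) and the function name are illustrative choices of mine, not code from the challenge.

```python
def increment(bits):
    """Add 1 to a binary number stored as a list of bits,
    least significant bit first; return the number of bit flips."""
    flips = 0
    i = 0
    # all trailing 1s turn into 0s
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0
        flips += 1
        i += 1
    # the first 0 turns into 1; grow the number if needed
    if i == len(bits):
        bits.append(1)
    else:
        bits[i] = 1
    return flips + 1
```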

What is the time complexity? In other words, how many bits do we need to flip to add 1?

As a side note, this problem is closely related to day 40 where I claim that the amortized time to store 1-bit is O(1).

For a binary number n, we need log n bits of memory. In the worst case all the bits are 1s, and hence the algorithm runs in O(log n) time.

Let’s check whether the result follows intuition. You know, when dealing with math, intuition is way too often wrong and you should trust your figures, but let’s give it a try anyway.

We make an increment a few times and see what happens.
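A quick throwaway check in Python (not the notebook code), using the fact that x ^ (x + 1) has a 1 exactly at the positions that flip during the increment:

```python
# show the first few increments and the number of flips each needs
for x in range(8):
    flips = bin(x ^ (x + 1)).count('1')
    print(f'{x:04b} -> {x + 1:04b}   flips: {flips}')
```

One flip for 0 → 1, two flips for 1 → 10, one flip for 10 → 11, three flips for 11 → 100, and so on.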

Isn’t that weird? Half of the numbers end with 0, and only a single bit gets flipped. But just one number out of n has all its bits set to 1. Is it correct to generalize the worst case in this way?

Would it be correct to say that n increments need at worst O(n log n) time?

Instead of the worst case, we will focus on amortized analysis. Since an analysis of a single increment may be biased, we let the algorithm do a lot of increments and find the total number of flips.

Start with the number 0 and make an increment n times.
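In Python, the experiment might look like this, again with the XOR trick standing in for an explicit flip counter:

```python
def total_flips(n):
    # total number of bit flips over n increments, starting from 0
    return sum(bin(x ^ (x + 1)).count('1') for x in range(n))

for n in (10, 100, 1000, 10 ** 6):
    print(n, total_flips(n), total_flips(n) / n)
```

For every n, the total turns out to be a little under 2n, and the analysis below shows why.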

The first thing to notice is that half of the numbers are even. All even binary numbers end with 0, so only a single flip is made. Also note that a quarter of all the numbers end with 01, hence two flips are made. This concept can be generalized.
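Counting one flip for the half ending with 0, two flips for the quarter ending with 01, three for the eighth ending with 011, and so on (writing flips(n) for the total):

$$\text{flips}(n) \approx \frac{n}{2}\cdot 1 + \frac{n}{4}\cdot 2 + \frac{n}{8}\cdot 3 + \dots = \sum_{k=1}^{\lfloor\log_2 n\rfloor + 1} \frac{k\,n}{2^k}$$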

The expression above describes the number of flips after n increments; it’s the same pattern we just described.

To be a little more precise, and since we do not know the exact value of n, we can create an upper bound.
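Letting the sum run to infinity only overestimates, and it removes the dependency on the number of bits:

$$\text{flips}(n) \;\le\; n \sum_{k=1}^{\infty} \frac{k}{2^k}$$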

Now I make a claim that has to be proved later.
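The claim is that the infinite sum converges to 2, and therefore:

$$\sum_{k=1}^{\infty} \frac{k}{2^k} = 2 \qquad\Longrightarrow\qquad \text{flips}(n) \;\le\; 2n$$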

If n increments require only 2n flips, we need just two flips per increment on average. The worst-case bound is way off, even though it still holds: the n-th increment may still need O(log n) flips, but that cost is hidden inside the preceding increments.

To prove my claim, we will continue with the infinite sum. In the first step, we need to formalize the expression.
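Call the sum S (the name is mine, for convenience):

$$S \;=\; \sum_{k=1}^{\infty} \frac{k}{2^k}$$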

Rewriting the expression as a limit of partial sums and taking the denominator out of the sum lets us find the value intuitively. The sum can be arranged in a table.
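$$S \;=\; \lim_{m\to\infty} \sum_{k=1}^{m} \frac{k}{2^k} \;=\; \lim_{m\to\infty} \frac{1}{2^m} \sum_{k=1}^{m} k\,2^{m-k}$$

For m = 4, say, the terms k·2^(m−k) fill a triangular table, with column k holding k copies of 2^(m−k):

$$\begin{array}{cccc}
8 & 4 & 2 & 1 \\
  & 4 & 2 & 1 \\
  &   & 2 & 1 \\
  &   &   & 1
\end{array}$$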

Notice that each row of the table sums to a number of the form 2^j − 1. And with just a little algebra, the row sums themselves add up to 2^(m+1) − m − 2. That gives us a closed formula for the partial sums.
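Row j runs from 2^(m−j) down to 1 and sums to 2^(m−j+1) − 1, so:

$$\sum_{k=1}^{m} k\,2^{m-k} \;=\; \sum_{j=1}^{m} \left(2^{m-j+1} - 1\right) \;=\; \left(2^{m+1} - 2\right) - m \;=\; 2^{m+1} - m - 2$$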

The limit now becomes just a trivial case you have seen countless times.
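$$S \;=\; \lim_{m\to\infty} \frac{2^{m+1} - m - 2}{2^m} \;=\; \lim_{m\to\infty} \left(2 - \frac{m+2}{2^m}\right) \;=\; 2$$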

Sadly, the table is far from a rigorous proof. But the good part is that even though we’re not finished yet, the table gave us a conjecture that can finally be proved by induction.

The basis is simple.
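For m = 1, both sides equal 1:

$$\sum_{k=1}^{1} k\,2^{1-k} \;=\; 1 \;=\; 2^{2} - 1 - 2$$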

And the inductive step is just playing with plus and minus ones.
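Splitting off the last term and doubling the rest:

$$\sum_{k=1}^{m+1} k\,2^{m+1-k} \;=\; 2\sum_{k=1}^{m} k\,2^{m-k} + (m+1) \;=\; 2\left(2^{m+1} - m - 2\right) + m + 1 \;=\; 2^{m+2} - (m+1) - 2$$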

We have finally proved that
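$$\sum_{k=1}^{\infty} \frac{k}{2^k} \;=\; \lim_{m\to\infty} \frac{2^{m+1} - m - 2}{2^m} \;=\; 2,$$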

which means that the upper bound of 2n flips for n increments is correct.

If you made it all the way here, thanks and congrats. Together, we have just programmed and proved a bug-free binary adder, without writing a single line of code.
