Recursion, Memoization and Dynamic Programming

I wanted to quickly show the differences between these three closely related computer science concepts that often get mixed up.

For that, I will work through a well-known problem: the Fibonacci series.

The Fibonacci series looks something like: 0, 1, 1, 2, 3, 5, 8, 13, 21 … and so on. Any person with sufficient analytical skills would quickly notice the pattern.

f(n) = f(n-1) + f(n-2)

There are many ways to tackle this problem; the most intuitive one is recursion.

def fib(n):
    if n < 2:
        return n
    return fib(n-2) + fib(n-1)

Timing this algorithm gives us a wall time of about 257 ms.

%time fib(30)

CPU times: user 266 ms, sys: 7.17 ms, total: 273 ms
Wall time: 257 ms

Now, the main problem of this algorithm is that we are computing some of the subproblems more than once. For instance, to compute fib(4) we would compute fib(3) and fib(2). However, to compute fib(3) we also have to compute fib(2). Say hello to memoization.
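To see how badly this blows up, here is a small instrumented copy of the naive function (fib_counted and the calls counter are my own helpers, just for illustration):

```python
# Count how many calls the naive recursion makes in total.
calls = 0

def fib_counted(n):
    global calls
    calls += 1
    if n < 2:
        return n
    return fib_counted(n-2) + fib_counted(n-1)

fib_counted(30)
print(calls)  # 2692537 calls just for n=30
```

Almost 2.7 million calls to compute a single value, most of them recomputing results we already had.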

In memoization we are doing nothing but caching the results of previously computed subproblems.

m = {}
def fibm(n):
    if n in m:
        return m[n]
    m[n] = n if n < 2 else fibm(n-2) + fibm(n-1)
    return m[n]

This algorithm is much faster since it avoids redundant computations, at the cost of a little extra memory.

%time fibm(30)

CPU times: user 38 µs, sys: 0 ns, total: 38 µs
Wall time: 27.2 µs
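As an aside, Python's standard library already provides this caching pattern via functools.lru_cache, so you don't have to manage the dictionary yourself (fibc is my own name for this variant):

```python
from functools import lru_cache

# lru_cache memoizes results keyed by the arguments;
# maxsize=None means the cache grows without bound.
@lru_cache(maxsize=None)
def fibc(n):
    if n < 2:
        return n
    return fibc(n-2) + fibc(n-1)

print(fibc(30))  # 832040
```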

But the question is: can we do better than this? The cache dictionary is helpful, but when calculating very large numbers, or in memory-constrained environments, it might not be desirable. This is where dynamic programming fits the bill.

In DP we take a bottom-up approach: we compute each Fibonacci number in order, using only the values we already have.

def fibdp(n):
    if n == 0:
        return 0
    prev, curr = 0, 1
    for i in range(2, n+1):
        newf = prev + curr
        prev = curr
        curr = newf
    return curr

In this form, we need neither recursion nor the memory-hungry cache dictionary, which adds up to even better performance.

%time fibdp(30)

CPU times: user 10 µs, sys: 0 ns, total: 10 µs
Wall time: 10 µs

Pretty cool, isn’t it?

There is a lot of good information around the web to help you understand these methods better. I hope, however, that this post served you well.

Keep hacking!