Using Fibonacci to Exemplify Recursion, Big O, and Memoization

Fibonacci numbers are present in nature, and nowadays they’re often used in schools and interviews to test recursion. They can, however, give us greater insight into core computer science topics. I’m most fascinated by how they illustrate a powerful principle: we can make recursive functions run in almost the same time as their iterative counterparts.

Don’t believe me? Let’s attempt to solve the following problem:

A Fibonacci number is the sum of its two previous neighbors, starting at 0:

0 1 1 2 3 5 8 13 21 34 ... where fib(n-2) + fib(n-1) = fib(n)

Given an integer n, write a function to compute the nth Fibonacci number, and determine the Big O time complexity of your solution.
Expected Output:
fibonacci(1) //=> 1
fibonacci(4) //=> 3
fibonacci(12) //=> 144
fibonacci(40) //=> 102334155

And here is the naive solution:
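(The original code embed is missing from this export; a minimal sketch of the naive recursion, using the `fibonacciRecursive` name from the walkthrough that follows, looks like this.)

```javascript
// Naive recursive Fibonacci: a direct translation of the definition
// fib(n) = fib(n - 1) + fib(n - 2), with base cases fib(0) = 0 and fib(1) = 1.
function fibonacciRecursive(n) {
  if (n < 2) return n; // fib(0) = 0, fib(1) = 1
  return fibonacciRecursive(n - 1) + fibonacciRecursive(n - 2);
}

console.log(fibonacciRecursive(4));  //=> 3
console.log(fibonacciRecursive(12)); //=> 144
```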

To find the 4th Fibonacci number (input of 4), the function expands into a chain of recursive calls.

```
fibonacciRecursive(4) = fibonacciRecursive(3) + fibonacciRecursive(2)
fibonacciRecursive(3) = fibonacciRecursive(2) + fibonacciRecursive(1)
fibonacciRecursive(2) = fibonacciRecursive(1) + fibonacciRecursive(0) // we know fib(0) and fib(1) are 0 and 1 respectively
```

Popping these calls off the recursive call stack:

```
fibonacciRecursive(2) = 1 + 0 = 1
fibonacciRecursive(3) = 1 + 1 = 2
fibonacciRecursive(4) = 1 + 2 = 3
```

The result is 3.

This gets computationally expensive very quickly. Note how we are being redundant; we compute fibonacci(2) twice (and fibonacci(1) three times) just for an input of 4. Imagine if we input larger numbers — our program will slow to a crawl.

Our time complexity satisfies the recurrence T(n) = T(n-1) + T(n-2) + O(1), because each call spawns two more calls plus a constant amount of work. This recurrence is bounded by O(2^n), since each recursive call creates a binary tree of calls. See below if you don’t believe me.

```
               fib(4)
            /          \
         fib(3)       fib(2)
        /    \         /   \
    fib(2)  fib(1)  fib(1) fib(0)
```

At large n, we obtain O(2^n), which is exponential time (slow as molasses).

Memoization Using Array

We can use a technique called memoization to store solved results. This runs in O(n), which is a dramatic improvement for only a few extra lines of code. Visually, this allows us to remove redundancies and “cordon off” entire sections of the binary tree that we don’t have to traverse anymore.
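(The original embed is missing; a sketch of an array-backed memoized version, with names of my own choosing, might look like this.)

```javascript
// Memoized Fibonacci using an array as the cache.
// memo[n] stores fib(n) once computed, so each value is computed only once: O(n).
const memo = [0, 1]; // base cases: fib(0) = 0, fib(1) = 1

function fibonacciMemoArray(n) {
  if (memo[n] !== undefined) return memo[n]; // cache hit: skip the whole subtree
  memo[n] = fibonacciMemoArray(n - 1) + fibonacciMemoArray(n - 2);
  return memo[n];
}

console.log(fibonacciMemoArray(40)); //=> 102334155
```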

We can also use JavaScript’s default parameters instead of declaring a global variable.
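(Such a default-parameter version — again a sketch, with illustrative names — could look like this; the cache is passed down through the recursive calls so they all share it.)

```javascript
// Same memoization idea, but the cache travels as a default parameter
// instead of living in a global variable.
function fibonacciDefault(n, memo = [0, 1]) {
  if (memo[n] !== undefined) return memo[n];
  memo[n] = fibonacciDefault(n - 1, memo) + fibonacciDefault(n - 2, memo);
  return memo[n];
}

console.log(fibonacciDefault(40)); //=> 102334155
```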

Memoization Using Object

Or, we can avoid redundant recursive calls by storing their results in an object and retrieving them when we need them. This also runs in O(n).
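(The original snippet is missing here; an object-backed version, sketched with my own names, might be:)

```javascript
// Memoized Fibonacci using a plain object as the cache.
// Keys are the inputs n; values are the computed fib(n).
const cache = { 0: 0, 1: 1 }; // base cases

function fibonacciMemoObject(n) {
  if (n in cache) return cache[n]; // already computed: return it immediately
  cache[n] = fibonacciMemoObject(n - 1) + fibonacciMemoObject(n - 2);
  return cache[n];
}

console.log(fibonacciMemoObject(40)); //=> 102334155
```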

Instead of making a global variable or passing the object as a default parameter, we can store it as a property on an instance of a class, so that the recursive method is kept separate from its cache. Think of this as separating the methods defined on instances of the class from the properties and methods defined on the class constructor — the two are very different. This also runs in O(n).
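(A class-based sketch of this idea, with names I’ve chosen for illustration:)

```javascript
// The cache lives as a property on each instance, keeping the
// recursive method separate from its stored results.
class Fibonacci {
  constructor() {
    this.cache = { 0: 0, 1: 1 }; // instance property holding memoized results
  }

  compute(n) {
    if (n in this.cache) return this.cache[n];
    this.cache[n] = this.compute(n - 1) + this.compute(n - 2);
    return this.cache[n];
  }
}

const fib = new Fibonacci();
console.log(fib.compute(40)); //=> 102334155
```

Each instance gets its own cache, so two `Fibonacci` objects never share memoized state.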

Iterative

Since everything recursive can be expressed in iterative terms, here is the corresponding implementation. This is as fast as the memoized versions, at O(n).
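(A sketch of the iterative version; it keeps only the last two values, so it is O(n) time and O(1) extra space.)

```javascript
// Iterative Fibonacci: walk forward from the base cases,
// carrying only the previous two values.
function fibonacciIterative(n) {
  let prev = 0; // fib(0)
  let curr = 1; // fib(1)
  for (let i = 0; i < n; i++) {
    [prev, curr] = [curr, prev + curr]; // shift the window forward one step
  }
  return prev;
}

console.log(fibonacciIterative(4));  //=> 3
console.log(fibonacciIterative(40)); //=> 102334155
```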

Benchmarking

Here are our tests and benchmarking results. The naive implementation is indeed slow, and the others outperform it in O(n) time. As we can see, the memoized recursive solutions run about as fast as the iterative solution.
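(The original benchmark code is not reproduced in this export; a rough harness along these lines — exact timings vary by machine, so none are shown — could be used to compare two of the implementations.)

```javascript
// Rough timing harness comparing the naive and iterative versions.
// The other implementations above could be dropped into the same loop.
function fibonacciNaive(n) {
  return n < 2 ? n : fibonacciNaive(n - 1) + fibonacciNaive(n - 2);
}

function fibonacciIterative(n) {
  let prev = 0, curr = 1;
  for (let i = 0; i < n; i++) [prev, curr] = [curr, prev + curr];
  return prev;
}

const candidates = [
  ['naive', fibonacciNaive],
  ['iterative', fibonacciIterative],
];

for (const [name, fn] of candidates) {
  console.time(name);
  fn(30); // an input large enough for the exponential/linear gap to show
  console.timeEnd(name); // prints e.g. "naive: <elapsed> ms"
}
```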


Originally published at www.edwinyung.com on January 26, 2018.