**On memoization**

---

**Summary:**

Using “open” recursion:

- We can separate memoization logic from function logic.
- We can memoize functions without modifying the syntax of the function.
- We can make functions generic across multiple types of memoization.
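To preview what "open" recursion looks like: the recursive step calls back through a parameter instead of calling itself directly, so the recursion is left "open" for a wrapper to close. A minimal sketch — the names `fib_open` and `fib` here are illustrative, not from the article:

```rust
// A sketch of "open" recursion: the recursive step calls back
// through the `f` parameter rather than naming itself.
fn fib_open(f: &dyn Fn(u32) -> u32, arg: u32) -> u32 {
    match arg {
        0 => 0,
        1 => 1,
        n => f(n - 1) + f(n - 2),
    }
}

// "Tie the knot" with no memoization: pass the function to itself.
fn fib(arg: u32) -> u32 {
    fib_open(&fib, arg)
}

fn main() {
    assert_eq!(fib(20), 6765);
}
```

Because `fib_open` never names itself, a wrapper is free to interpose a cache lookup before each recursive call — which is what makes the separation described above possible.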

# Memoization in Rust

Consider a recursive function. The most basic example is the Fibonacci sequence. Here is a naive implementation:

```rust
fn fib_naive(arg: u32) -> u32 {
    match arg {
        0 => 0,
        1 => 1,
        n => fib_naive(n - 1) + fib_naive(n - 2),
    }
}

fn main() {
    assert_eq!(fib_naive(40), 102334155);
}
```

Calculating `fib_naive(40)` on my computer takes almost a second. If we look at this function, we can reason about its performance. The result of `fib_naive` is going to be `0`, `1`, or the result of two calls to `fib_naive` added together. If we only have `0`, `1`, and addition at our disposal, then it will take at least `fib_naive(n)` additions to calculate `fib_naive(n)`. This is unacceptable performance for anything but the smallest values of `n`.
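To make that bound concrete, here is a hypothetical instrumented variant (`fib_counted` is my name, not from the article) that tallies the additions performed. The tally itself grows like the Fibonacci numbers:

```rust
// Like fib_naive, but counts every addition performed.
fn fib_counted(arg: u32, additions: &mut u64) -> u32 {
    match arg {
        0 => 0,
        1 => 1,
        n => {
            *additions += 1;
            fib_counted(n - 1, additions) + fib_counted(n - 2, additions)
        }
    }
}

fn main() {
    let mut additions = 0;
    let result = fib_counted(30, &mut additions);
    // fib(30) = 832040, computed with 1346268 additions.
    println!("fib(30) = {result} computed with {additions} additions");
}
```

The count of additions satisfies the same recurrence as Fibonacci itself (plus one per inner node), so it comes out to `fib(n + 1) - 1` — well above the `fib(n)` lower bound claimed above.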

# Memoizing the results

One way to improve performance is to *memoize* the results. That means that when we call our function, we record the result in some kind of cache. If the function is called again with the same arguments, instead of recalculating the result, we look it up in the cache.

The algorithm is a simple modification to the function above: whenever we call our function, we check whether the result is already in the cache. If it is, we return it; if not, we calculate it, save the result in the cache, and return the value:

```rust
use std::collections::HashMap;

fn fib_memo(cache: &mut HashMap<u32, u32>, arg: u32) -> u32 {
    match cache.get(&arg).map(|entry| entry.clone()) {
        Some(result) => result,
        None => {
            let result = match arg {
                0 => 0,
                1 => 1,
                n => fib_memo(cache, n - 1) + fib_memo(cache, n - 2),
            };
            cache.insert(arg, result.clone());
            result
        }
    }
}

fn main() {
    assert_eq!(fib_memo(&mut HashMap::new(), 40), 102334155);
}
```
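A nice consequence of threading the cache through as a parameter is that it can outlive a single call: once one call has populated it, any later call for a smaller argument is a single lookup. A small hypothetical usage sketch (repeating `fib_memo` from above so the snippet is self-contained):

```rust
use std::collections::HashMap;

// fib_memo exactly as defined above.
fn fib_memo(cache: &mut HashMap<u32, u32>, arg: u32) -> u32 {
    match cache.get(&arg).map(|entry| entry.clone()) {
        Some(result) => result,
        None => {
            let result = match arg {
                0 => 0,
                1 => 1,
                n => fib_memo(cache, n - 1) + fib_memo(cache, n - 2),
            };
            cache.insert(arg, result.clone());
            result
        }
    }
}

fn main() {
    let mut cache = HashMap::new();
    // The first call populates the cache with every value up to 40.
    assert_eq!(fib_memo(&mut cache, 40), 102334155);
    assert_eq!(cache.len(), 41); // one entry for each of 0..=40
    // A later call with any n <= 40 is now a single cache lookup.
    assert_eq!(fib_memo(&mut cache, 35), 9227465);
}
```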