Avoid writing expensive code in Python with Memoization

credits: renatocandido.org

Expensive code? Yes, you read that right. When a function is called repeatedly with the same parameters, it performs the same computation over and over, wasting both CPU work and time. In this article I'm going to show a way to speed up Python programs and make expensive code cheaper with the help of a powerful caching technique called memoization. Using memoization, we can avoid unnecessary computation.

Memoization is an effective technique for speeding up computer programs. It memorizes, or caches, the result a function returns for a given input. When the same input is passed to the function again, the cached result is returned instead of performing the whole computation from scratch. It is a simple yet effective technique.

The Flow of a Memoization Implementation

  • Create a data structure to store the cached results
  • Each time the function is called with new arguments, compute the result and store it in the data structure

For a repeated input, we return the cached result instead of re-computing it. Here, I will write the memoization technique from scratch with the help of a decorator. A decorator is a function that takes another function as a parameter and returns a function as output. If you are not familiar with decorators, this might be a little confusing at first; I would recommend learning a bit about decorators first.

def memoize(func):
    remember = {}

    def memoize_helper(*args):
        if args not in remember:
            remember[args] = func(*args)
        return remember[args]

    return memoize_helper

Here, I am using a dict as the data structure for storing the cached results. Looking up a value in a dictionary is fast, which makes it effective for caching. When the decorated function is called, it checks the cache for the given input. If the result is found, it is returned directly; otherwise, it is computed, stored in the cache, and then returned.
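To see the decorator in action before moving to Fibonacci, here is a minimal, self-contained sketch (the square function and the calls list are just illustrative, to show that the wrapped function runs only once per distinct input):

```python
def memoize(func):
    remember = {}  # cache mapping argument tuples to results

    def memoize_helper(*args):
        if args not in remember:
            remember[args] = func(*args)  # compute and store on a cache miss
        return remember[args]

    return memoize_helper

calls = []  # records each time the underlying function actually executes

@memoize
def square(n):
    calls.append(n)
    return n * n

square(4)
square(4)  # second call is served from the cache; square's body does not run
```

After the two calls, the underlying function has executed exactly once, which is the whole point of the technique.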

I will implement memoization on the Fibonacci function. Let's first define a function that calculates the nth Fibonacci number.

def fibo(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    return fibo(n - 1) + fibo(n - 2)

Here, the cost of the naive Fibonacci computation grows at an exponential rate, so it becomes pricey as n grows. We will measure the execution time with the help of the timeit module in Python.

Computing the 10th number in the Fibonacci series took around 5 seconds on my machine. Now, let's memoize this function and compare. The first call with a given input still takes time, but every subsequent call with the same input returns the cached result in a fraction of the time.
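A sketch of how such a measurement might look with timeit (the exact numbers will vary by machine; the memoize decorator from above is repeated here so the snippet runs on its own, and the iteration count is an arbitrary choice for illustration):

```python
import timeit

def memoize(func):
    remember = {}

    def memoize_helper(*args):
        if args not in remember:
            remember[args] = func(*args)
        return remember[args]

    return memoize_helper

def fibo(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    return fibo(n - 1) + fibo(n - 2)

# Time the plain, exponential version
plain_time = timeit.timeit("fibo(10)", globals=globals(), number=10_000)

# Rebind the name: recursive calls now go through the cache too
fibo = memoize(fibo)
memo_time = timeit.timeit("fibo(10)", globals=globals(), number=10_000)

print(f"plain: {plain_time:.4f}s, memoized: {memo_time:.4f}s")
```

Note that rebinding the global name fibo matters: the recursive calls inside the original function look up fibo by name, so after the rebind they also hit the cache.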

Let's see how the results are stored in the cache with the help of __closure__.
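The cached results live in the remember dict that the inner function captures in its closure. One way to peek at it is to scan the cells of __closure__ for the dict (the memoize decorator is repeated here so the snippet is self-contained):

```python
def memoize(func):
    remember = {}

    def memoize_helper(*args):
        if args not in remember:
            remember[args] = func(*args)
        return remember[args]

    return memoize_helper

@memoize
def fibo(n):
    if n < 2:
        return n
    return fibo(n - 1) + fibo(n - 2)

fibo(5)

# Both func and remember are free variables of the inner function,
# so __closure__ holds two cells; pick out the dict.
cache = next(cell.cell_contents for cell in fibo.__closure__
             if isinstance(cell.cell_contents, dict))
print(cache)  # {(1,): 1, (0,): 0, (2,): 1, (3,): 2, (4,): 3, (5,): 5}
```

Each key is the tuple of arguments the function was called with, which is why the decorator uses *args and `args not in remember`.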

WARNING: The cache here has an unbounded size, so it will keep consuming memory as new inputs arrive. This is not a good idea, as it can lead to out-of-memory errors.

Inbuilt Function

In the scenario above, we wrote the decorator function from scratch. Now, let us implement memoization using an inbuilt function, which is better tested and more feature-rich than our manual version.

We will use the functools.lru_cache decorator to implement memoization. lru_cache resides in the inbuilt functools module. Let's reimplement our Fibonacci function using functools.lru_cache. With this decorator, we can also bound the cache size with the help of the maxsize parameter.

import functools

@functools.lru_cache(maxsize=128)
def fibo(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    return fibo(n - 1) + fibo(n - 2)

Let's run it and see the results. The first run takes a bit more time because the cache is empty. On the second run, we can see the expected speed-up, and the results are self-explanatory.
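The maxsize parameter mentioned above bounds the cache: once it is full, the least recently used entry is evicted to make room. A small sketch of that behavior (the double function and the tiny maxsize=2 are deliberately artificial, just to make the eviction visible):

```python
from functools import lru_cache

@lru_cache(maxsize=2)  # keep at most 2 cached results
def double(n):
    return n * 2

double(1)
double(2)
double(3)  # cache is full, so the least recently used entry (for 1) is evicted

info = double.cache_info()
print(info)  # CacheInfo(hits=0, misses=3, maxsize=2, currsize=2)
```

All three calls were cache misses, but only two results remain cached; calling double(1) again would be a fresh miss, while double(2) or double(3) would be hits.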

To check how the cache is being used by functools.lru_cache, and to clear it, run the commands below.

# Cache info
fibo.cache_info()

# Clear cache
fibo.cache_clear()

This is one of the ways to avoid writing expensive code. Keep sharing. Stay tuned.




From Applied Machine Learning to Data Science in the real world; From Modeling to ML Ops; Covering collective experiences reflecting AI, ML and Data Engineering solutions and Operationalising them. Data Engineers and Scientists @Srijan (Srijan Technologies)

Srce Cde

AWS Community Builder | YouTuber: https://youtube.com/srcecde
