Python Code Profiling and Accelerating Your Calculations With Numba

Leszek · Published in The Startup · 3 min read · Sep 4, 2020


Just in case you can’t read it here, you can also find this post on my personal blog.

You wrote up your excellent idea as a Python program or module, but you’re unsatisfied with its performance. Chances are high that most of us have been there at least once. I was there last week.

I found an excellent method for outlier detection (Extended Isolation Forest, eIF). eIF was initially written in Python and later optimised in Cython (using C++). The C++ version is ~40x faster than the vanilla Python version, but it lacks the ability to save the model (which is crucial for my project). Since adding model saving to the C++ version is a rather complicated business, I decided to optimise the Python code instead. Initially, I hoped for a ~5–10x speed improvement. The final effect surprised me: the rewritten Python code was ~40x faster than the initial version, matching the C++ version’s performance!
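This is exactly why a pure-Python model is so convenient: any plain Python object can be serialised with the standard pickle module. Below is a minimal sketch; forest is just a hypothetical stand-in for a fitted eIF model object.

import pickle

# hypothetical placeholder standing in for a fitted Python eIF model
forest = {"ntrees": 100, "trees": []}

# save the model to disk...
with open("eif_model.pkl", "wb") as out:
    pickle.dump(forest, out)

# ...and load it back later
with open("eif_model.pkl", "rb") as handle:
    restored = pickle.load(handle)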

Code profiling

Traditionally, I’ve relied on %timeit, which reports precise execution times for Python expressions.

%timeit F3.fit(X)
# 1.25 s ± 792 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)

As awesome as %timeit is, it won’t tell you which parts of your code are time-consuming, at least not directly. For that, you’ll need something more advanced.

Code profiling becomes much easier thanks to line_profiler. You can install it, load it and use it in a Jupyter notebook as follows:

# install line_profiler in your system
!pip install line_profiler
# load the module into current Jupyter notebook
%load_ext line_profiler

# profile the populate_nodes method line by line while running F3.fit(X)
%lprun -f F3.populate_nodes F3.fit(X)

In my case, the profiler showed that although line 134 took just 11.7 µs per single execution, it accounted for 42.5% of the total execution time, as it was executed over 32,000 times. Starting the optimisation from this single line could therefore have a dramatic effect on the overall execution time.
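By the way, line_profiler works outside Jupyter too: decorate the function of interest with @profile and run the script through kernprof. A minimal sketch, with a made-up slow_function as the profiling target:

# save as example.py and run:  kernprof -l -v example.py
@profile  # injected by kernprof at runtime, no import required
def slow_function(n):
    total = 0
    for i in range(n):
        total += i ** 2  # hit counts and per-line times get reported here
    return total

slow_function(100_000)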

Code optimisation

The first thing I noticed in the original Python code was that, in order to calculate the outlier scores, individual samples were streamed one by one through the individual trees of the iForest.

for i in range(len(X_in)):
    h_temp = 0
    for j in range(self.ntrees):
        h_temp += PathFactor(X_in[i], self.Trees[j]).path * 1.0  # compute path length for each point
    Eh = h_temp / self.ntrees  # average path length travelled by the point in all trees
    S[i] = 2.0 ** (-Eh / self.c)  # anomaly score
return S

Since these are operations on arrays, a lot of time can be saved if either all samples are processed by each tree at once, or each sample is processed by all trees at once. Implementing this wasn’t difficult and, combined with cleaning unnecessary variables and classes out of the code, resulted in a ~6–7x speed-up. A minimal sketch of the idea follows.
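This is not the actual eif_new.py code, just an illustration of the inverted loop; paths_in_tree is a hypothetical helper that would compute path lengths for all samples in a single tree at once:

import numpy as np

def anomaly_scores(X_in, trees, c):
    """Sketch: accumulate path lengths tree by tree, for all samples at once."""
    Eh = np.zeros(len(X_in))
    for tree in trees:
        # paths_in_tree is hypothetical: path lengths of ALL samples in one tree,
        # replacing one PathFactor call per sample per tree
        Eh += paths_in_tree(X_in, tree)
    Eh /= len(trees)         # average path length per sample
    return 2.0 ** (-Eh / c)  # anomaly scores for the whole array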

Speeding up array operations with numba

Further improvements were much more modest and required detailed code profiling. As mentioned above, a single line took 42% of the overall execution time. Upon closer inspection, I realised that the calls to X.min(axis=0) and X.max(axis=0) were really time-consuming.

import numpy as np

x = np.random.random(size=(256, 12))
%timeit x.min(axis=0), x.max(axis=0)
# 15.6 µs ± 43.7 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

Such Python code can be optimised with numba. For example, calculating min and max simultaneously using the numba just-in-time compiler results in over 7x faster execution!

from numba import jit

@jit
def minmax(x):
    """np.min(x, axis=0), np.max(x, axis=0) for a 2D array, but faster"""
    m, n = len(x), len(x[0])
    mi, ma = np.empty(n), np.empty(n)
    mi[:] = ma[:] = x[0]  # initialise both extremes with the first row
    for i in range(1, m):
        for j in range(n):
            if x[i, j] > ma[j]:
                ma[j] = x[i, j]
            elif x[i, j] < mi[j]:
                mi[j] = x[i, j]
    return mi, ma

%timeit minmax(x)
# 2.19 µs ± 4.61 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

# make sure the results are the same
np.allclose(minmax(x), (x.min(axis=0), x.max(axis=0)))
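One caveat worth knowing: numba compiles the function on its first call, so the first call is slow, and %timeit only measures the fast, already-compiled runs. It’s also worth considering nopython mode (njit), which raises an error instead of silently falling back to slower object mode. A tiny sketch with a toy column-sum function:

from numba import njit  # shorthand for jit(nopython=True)

@njit
def colsum(x):
    """Toy example: per-column sums, compiled strictly in nopython mode."""
    out = np.zeros(x.shape[1])
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[j] += x[i, j]
    return out

colsum(x)  # first call triggers compilation; subsequent calls run at native speed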

Apart from that, there were several other parts that could be optimised with numba. You can have a look at eif_new.py and compare it with the older Python and C++ versions using this notebook. If you want to know the details, just leave a comment below and I’ll be more than happy to discuss them!

If you’re looking for ways to speed up array operations, definitely check out numexpr besides numba. The eIF case didn’t really need numexpr optimisations, but it’s a really impressive project, and I can imagine many people could benefit from it. So spread the word!
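For a small taste of what numexpr does (an illustrative snippet, not something from the eIF code): it compiles an expression string and evaluates it in chunks across multiple threads, avoiding the large temporary arrays that plain numpy would allocate.

import numexpr as ne
import numpy as np

a = np.random.random(1_000_000)
b = np.random.random(1_000_000)

# one fused, multi-threaded pass instead of several numpy temporaries
result = ne.evaluate("2*a + 3*b**2")

assert np.allclose(result, 2*a + 3*b**2)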
