How I improved my Python code performance by 371%!

Shekhar Verma
Jun 19, 2023

From 29.3s runtime to 6.3s without any external library!

Introduction

Before getting started, let's discuss the problem statement at hand. I wanted to analyze some data stored in a text file. Each row contained four numerical values delimited by a space, for a total of 46.66M rows. The file is around 1.11 GB, and I am attaching a small screenshot of the data below so that you get an idea of what it looks like.

I needed to extract only the rows with a given value in the third column (3100.10 in the image above). The first thing I tried was simply numpy.genfromtxt(), but it raised a memory error because the data is too big to handle at once.

I tried segmenting the data into smaller chunks and doing the same, but it was painfully slow 😫, so I tried various things to get the job done as fast as possible. Below I will show you the code along with the concepts I used to optimise it.
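For reference, a chunked numpy approach might be sketched like this (the helper name and chunk size are illustrative, not from the original code):

```python
import numpy as np
from itertools import islice

def filter_chunked(lines, target, chunk_size=1000):
    """Parse lines in fixed-size chunks and keep rows whose 3rd column == target."""
    it = iter(lines)
    kept = []
    while True:
        chunk = list(islice(it, chunk_size))  # read only chunk_size lines at a time
        if not chunk:
            break
        arr = np.loadtxt(chunk, ndmin=2)      # parse just this chunk
        kept.append(arr[arr[:, 2] == target])
    return np.vstack(kept) if kept else np.empty((0, 4))

# Tiny illustrative sample in the same 4-column format as the real file.
rows = [
    "0 3098 3100.10 56188",
    "1 3099 2999.50 56190",
    "2 3100 3100.10 56191",
]
out = filter_chunked(rows, 3100.10, chunk_size=2)
```

Memory stays bounded because only one chunk is materialized as an array at a time, but per-chunk parsing overhead is what made this route slow on 46.66M rows.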

def Function1():
    output = "result.txt"
    output_file = open(output, "w")
    value = 3100.10
    with open(file, "r") as f:
        for line in f:
            if line != "\n":
                if len(line.split(" ")) == 4:
                    try:
                        if int(float(line.split(" ")[2])) == int(value):
                            output_file.write(line)
                    except ValueError:
                        continue
    output_file.close()

Starting Point

This is the most basic approach to solving the problem: iterate through the entire file line by line, check whether the line (row) contains the value, and if it does, append the row to a new file.

The code took 29.3 s ± 56.7 ms per loop (mean ± std. dev. of 3 runs, 1 loop each)
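The timing format above matches IPython's %timeit output; with only the standard library, a comparable measurement can be sketched as follows (the workload function is a stand-in, not the original benchmark):

```python
import timeit

def work():
    # Stand-in for Function1(); any callable being benchmarked goes here.
    return sum(i * i for i in range(100_000))

# 3 runs of 1 loop each, matching the "3 runs, 1 loop each" format above.
times = timeit.repeat(work, repeat=3, number=1)
print(f"best of 3: {min(times):.4f} s")
```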

1. Loop Invariance

The first step of optimisation is to look at the code and check whether we are doing unnecessary work. In the loop where I iterate through the lines, I call int(value) on every iteration to do the comparison, even though value never changes. This can be avoided by converting the value to an int once, before the loop, and reusing the result. Such a computation is called a loop invariant: something we redo inside the loop again and again even though its result never changes, and which can therefore be hoisted out.


Here is the code!

def Function2():
    output = "result.txt"
    output_file = open(output, "w")
    value = int(3100.10)  # hoisted out of the loop: computed once
    with open(file, "r") as f:
        for line in f:
            if line != "\n":
                if len(line.split(" ")) == 4:
                    try:
                        if int(float(line.split(" ")[2])) == value:
                            output_file.write(line)
                    except ValueError:
                        continue
    output_file.close()

The code took 27.5 s ± 264 ms per loop (mean ± std. dev. of 3 runs, 1 loop each)
By changing a single line, the code gained a 6.5% performance boost over the previous version. It's not much, but this is a very common mistake that programmers make.

2. Memory Mapping the File

Memory mapping is a technique in which the file is mapped into the process's address space, so its contents can be accessed as if they were already in memory (RAM), which is much faster than conventional file IO in most cases. Conventional IO uses multiple system calls to read the data from disk and hand it back to the program through intermediate data buffers. Memory mapping skips these steps, leading to improved performance (in most cases).

Python has a module named "mmap" that is used for this purpose.
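A minimal, self-contained demonstration of the mmap API used below (the sample file and path are fabricated for the example):

```python
import mmap
import os
import tempfile

# Write a tiny sample file so the example is self-contained.
path = os.path.join(tempfile.gettempdir(), "mmap_demo.txt")
with open(path, "wb") as f:
    f.write(b"0 3098 3100.10 56188\n1 3099 2999.50 56190\n")

with open(path, "rb") as f:
    # Length 0 maps the whole file; ACCESS_READ makes the mapping read-only.
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    first = mm.readline()  # reads a line straight from the mapping, as bytes
    mm.close()
```

Note that a mapped file yields bytes, not str, which is why the later versions of the code compare against byte strings like b"\n".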

Here is the code!

def Function3():
    output = "result.txt"
    output_file = open(output, "wb")
    value = int(3100.10)
    with open(file, "r+b") as f:
        mmap_file = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        for line in iter(mmap_file.readline, b""):
            if line != b"\n":
                if len(line.split(b" ")) == 4:
                    try:
                        if int(float(line.split(b" ")[2])) == value:
                            output_file.write(line)
                    except ValueError:
                        continue
        mmap_file.close()  # read-only mapping: nothing to flush, just close
    output_file.close()

The code took 22.8 s ± 124 ms per loop (mean ± std. dev. of 3 runs, 1 loop each)
That's a 20% performance increase over the previous version.

3. Using slicing instead of data type conversion

In the line int(float(line.split(b" ")[2])) == value, I split the row to get the third element, then convert the bytes to a float and then to an int to do the comparison.

The chain goes "0 3098 3100.10 56188" -> "3100.10" -> 3100.10 -> 3100.

Instead of using float and then int to convert a string with a decimal point to an integer, I used slicing to drop the decimal part, which resulted in a huge performance gain, as string operations are faster than data-type conversions.
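The two routes side by side (note the assumption that every value in the column has exactly two decimal places, so the last three bytes are always ".xx"):

```python
field = b"3100.10"

# Two-step numeric conversion: bytes -> float -> int.
via_conversion = int(float(field))

# Slicing off the last three bytes (".10") leaves b"3100",
# so a single int() call on a shorter byte string suffices.
via_slicing = int(field[:-3])

assert via_conversion == via_slicing == 3100
```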

Here is the code!

def Function4():
    output = "result.txt"
    output_file = open(output, "wb")
    value = int(3100.10)
    with open(file, "r+b") as f:
        mmap_file = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        for line in iter(mmap_file.readline, b""):
            if line != b"\n":
                if len(line.split(b" ")) == 4:
                    try:
                        # slice off the ".10" instead of float() + int()
                        if int(line.split(b" ")[2][:-3]) == value:
                            output_file.write(line)
                    except ValueError:
                        continue
        mmap_file.close()
    output_file.close()

This time the code took 20 s ± 171 ms per loop (mean ± std. dev. of 3 runs, 1 loop each)
That's a 14% performance increase over the previous version, again from changing a single line of code.

4. Using the find operation

Now for the final nail in the coffin. Up until this point I was iterating through the lines, extracting the third column value, and comparing it. This time I use the find operation to look for the desired value directly in each line. And it is surprisingly fast!
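The core trick in isolation: bytes.find returns the byte offset of the first match (or -1), and because the columns in this file have a predictable width, a position check keeps matches confined to the third column:

```python
line = b"0 3098 3100.10 56188\n"
needle = b"3100."  # trailing dot avoids matching a bare 3100 in another column

pos = line.find(needle)  # byte offset of the first occurrence, or -1 if absent
# In this file the third column always starts between offsets 7 and 11,
# so restricting the match position filters out hits in other columns.
is_match = 7 <= pos <= 11
```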

Here is the code!

def Function5():
    output = "result.txt"
    output_file = open(output, "wb")
    value = int(3100.10)
    # search for b"3100." so a bare 3100 in another column cannot match
    value = (str(value) + ".").encode()
    with open(file, "r+b") as f:
        mmap_file = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        for line in iter(mmap_file.readline, b""):
            find = line.find(value)
            # the third column always starts between offsets 7 and 11
            if find >= 7 and find <= 11:
                output_file.write(line)
        mmap_file.close()
    output_file.close()

This time the code took 6.22 s ± 55.8 ms per loop (mean ± std. dev. of 3 runs, 1 loop each)
It’s 221.5% performance increase from the previous code.
This is almost 4.7 times increase in performance from where we started.

Hardware used:
-> Legion 5 15ACH6
-> AMD 5800H
-> 16 GB RAM

Below you can see the comparison between Windows 10 and Ubuntu 22.04 LTS (runtimes in seconds):

Optimisation Level   Windows   Linux
0                    29.3      18.2
1                    27.5      17.3
2                    22.8      19.3
3                    20        18.9
4                    6.22      8.88

Thank you for reading! 😄


I write about whatever I find interesting in the domains of embedded systems and Python programming.