Python 3.9 StatsProfile — My first OSS Contribution to cPython

Daniel Olshansky · Published in Analytics Vidhya · 4 min read · Feb 17, 2020

You can try out all of the code in this article yourself using this Google Colaboratory notebook.

If you’ve ever tried to debug and optimize your Python application, it’s likely that you stumbled upon the Python profilers to understand where most of the execution time is being spent. You enable the profiler at the beginning of the code segment you’re interested in profiling with pr.enable(), and call pr.create_stats() at the end.
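As a rough sketch (slow_function here is just a placeholder workload of my own, not something from the original snippet):

```python
import cProfile
import time

def slow_function():
    time.sleep(0.1)

pr = cProfile.Profile()
pr.enable()             # start collecting profiling data

for _ in range(5):
    slow_function()     # the code segment we're interested in profiling

pr.create_stats()       # stop profiling and snapshot the collected stats
```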

Afterwards, you can create a Stats object and print the results in a human-readable format with ps.print_stats().
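Continuing the sketch above:

```python
import pstats

ps = pstats.Stats(pr)                     # wrap the profile collected above
ps.sort_stats(pstats.SortKey.CUMULATIVE)  # optional: sort by cumulative time
ps.print_stats()                          # print the familiar human-readable table
```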

The output above is quite useful and can take you a long way. However, what if you don’t know what kind of data inputs cause a bottleneck in your application? What if you’re interested in aggregating and evaluating profiling data over some period of time? What if you want to profile your application while your team is dogfooding it? I found that there isn’t an easy way to use this data in an ETL pipeline where you’d be able to do further offline analysis over a larger dataset.

I recently made my first open source cPython contribution, which adds a dataclass called StatsProfile to address this. After you’ve created your Stats object, you can retrieve all of the information from the print_stats() output above by calling ps.get_stats_profile() and analyze it programmatically.
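For example, on Python 3.9+ (continuing from the ps object above):

```python
stats_profile = ps.get_stats_profile()   # Python 3.9+: returns a StatsProfile dataclass

print(stats_profile.total_tt)            # total time across the whole profile

# func_profiles maps each function name to a FunctionProfile with fields such as
# ncalls, tottime, percall_tottime, cumtime, percall_cumtime, file_name and line_number.
for func_name, func_profile in stats_profile.func_profiles.items():
    print(func_name, func_profile.ncalls, func_profile.cumtime)
```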

If you’re not on Python 3.9 yet, the following code snippet is a slightly modified version of the code in the pull request that you can start using today by importing it directly into your project.
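The original gist isn’t embedded here, but a standalone backport along the lines of the PR might look roughly like this (the function name and the rounding are my own approximation, not the exact code from the pull request):

```python
from dataclasses import dataclass
from typing import Dict

import pstats


@dataclass(unsafe_hash=True)
class FunctionProfile:
    ncalls: str
    tottime: float
    percall_tottime: float
    cumtime: float
    percall_cumtime: float
    file_name: str
    line_number: int


@dataclass(unsafe_hash=True)
class StatsProfile:
    """Total time plus a per-function breakdown of a profile."""
    total_tt: float
    func_profiles: Dict[str, FunctionProfile]


def get_stats_profile(stats: pstats.Stats) -> StatsProfile:
    """Convert a pstats.Stats object into a StatsProfile dataclass."""
    func_list = stats.fcn_list[:] if stats.fcn_list else list(stats.stats.keys())
    if not func_list:
        return StatsProfile(0, {})

    total_tt = round(stats.total_tt, 3)
    func_profiles = {}

    for func in func_list:
        cc, nc, tt, ct, _callers = stats.stats[func]
        file_name, line_number, func_name = func
        ncalls = str(nc) if nc == cc else f"{nc}/{cc}"
        func_profiles[func_name] = FunctionProfile(
            ncalls=ncalls,
            tottime=round(tt, 3),
            percall_tottime=-1 if nc == 0 else round(tt / nc, 3),
            cumtime=round(ct, 3),
            percall_cumtime=-1 if cc == 0 else round(ct / cc, 3),
            file_name=file_name,
            line_number=line_number,
        )

    return StatsProfile(total_tt, func_profiles)
```

With this in your project, stats_profile = get_stats_profile(pstats.Stats(pr)) gives you the same dataclass on older Python versions.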

Now, rather than inspecting the profile of a single execution of our code snippet, we can aggregate and analyze the profiles over several different iterations. In a real production service, depending on which logging tool you use, you would likely need to format and stringify the StatsProfile dataclass before logging it, but for the purposes of this example, everything is stored in memory.

To simulate timestamped logging, (timestamp, stats_profile) tuples are appended to a timestamped_stats_profile list with every execution of the loop.
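A sketch of that loop, reusing the slow_function placeholder from earlier (the one-second sleep between iterations is just there to spread the samples across timeslices):

```python
import cProfile
import pstats
import time
from datetime import datetime

timestamped_stats_profile = []    # stands in for a real logging pipeline

for _ in range(30):
    pr = cProfile.Profile()
    pr.enable()
    slow_function()               # the code segment being profiled on each iteration
    pr.create_stats()

    stats_profile = pstats.Stats(pr).get_stats_profile()
    timestamped_stats_profile.append((datetime.now(), stats_profile))
    time.sleep(1)                 # spread iterations across multiple 10-second timeslices
```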

After the data is logged, it needs to be aggregated over a certain timeslice. Most logging/visualization platforms have their own functions for processing timeseries data, so this step is platform-specific. Sumo Logic has the timeslice operator, Elasticsearch has examples of how to do date histogram aggregation, Datadog has an aggregate-across-time dropdown, etc.

For the purposes of this example, I’m doing the aggregation manually in Python. I bucket all of the logged (i.e. saved) StatsProfile objects into 10-second intervals, aggregate the cumulative execution time (cumtime) per function call, and store the resulting counters in time_slices_counters. If you’re interested in the number of calls to certain functions rather than the cumulative execution time spent in them, you would simply change which FunctionProfile attribute is aggregated (e.g. ncalls instead of cumtime) in the code snippet below.
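Here is roughly how that bucketing could look — the 10-second timeslice and the time_slices_counters name follow the description above, but the exact code in the original snippet may differ:

```python
from collections import Counter, defaultdict

TIMESLICE_SECONDS = 10

# Map the start of each 10-second bucket to a Counter of {function name: summed cumtime}.
time_slices_counters = defaultdict(Counter)

for timestamp, stats_profile in timestamped_stats_profile:
    # Truncate the timestamp down to the start of its 10-second bucket.
    time_slice = int(timestamp.timestamp()) // TIMESLICE_SECONDS * TIMESLICE_SECONDS
    for func_name, func_profile in stats_profile.func_profiles.items():
        # To aggregate call counts instead, parse func_profile.ncalls here
        # (note it's a string, e.g. "5" or "5/3" for recursive calls) rather than cumtime.
        time_slices_counters[time_slice][func_name] += func_profile.cumtime
```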

In my opinion, a stacked bar graph is a great way to visualize and easily interpret this data. Using the following code snippet:
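Here is one way to build that with matplotlib (this is my own sketch, not necessarily the snippet used for the original article):

```python
import matplotlib.pyplot as plt

time_slices = sorted(time_slices_counters.keys())
func_names = sorted({name for counter in time_slices_counters.values() for name in counter})

bottom = [0.0] * len(time_slices)
for func_name in func_names:
    # One stacked segment per function, one bar per 10-second timeslice.
    cumtimes = [time_slices_counters[ts].get(func_name, 0.0) for ts in time_slices]
    plt.bar(range(len(time_slices)), cumtimes, bottom=bottom, label=func_name)
    bottom = [b + c for b, c in zip(bottom, cumtimes)]

plt.xticks(range(len(time_slices)), time_slices, rotation=45)
plt.xlabel("10 second timeslice (unix time)")
plt.ylabel("Aggregated cumtime (seconds)")
plt.legend()
plt.tight_layout()
plt.show()
```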

We can generate a stacked bar graph with one bar per 10-second timeslice, split into the cumulative time spent in each function.

The results aren’t very interesting or surprising given the simplicity of a script calling sleep a bunch of times, but hopefully this approach will be more useful in more complex applications.

It’s important to note that you probably should not be doing this in production. It could be useful in your local or development environments, and might be worth enabling in a single canary, but could have adverse effects in prod. You would be polluting your logs with large StatsProfile structures, and I have not investigated whether running cProfile in prod could degrade your service’s performance.

As a side note, though there is some overhead and a small learning curve, I was very pleased with how easy it is to contribute to cPython. Aside from publishing the actual PR, you have to sign the PSF Contributor Agreement, open an issue on bugs.python.org, and nudge a few people to get your code looked at. There is a great developer guide on how to run things locally and execute tests. I also recently came across this doc, which is a good starting point if you’ve never contributed to cPython before.

Huge thanks to Gregory P. Smith for reviewing and approving my cPython PR! Also, thank you to Сергей Яркин for proofreading my article, and a special shoutout to Manuel Dell'Elce, who built a really nice Chrome extension that made embedding code snippets in this Medium article a breeze.
