The toolbox of the Herpetologist Social Scientist

Giulio Gabrieli
The Herpetologist Social Scientist
3 min read · Feb 23, 2020

As I introduced in a previous post, Python is great for analyzing and visualizing your data. But is it really good on its own? Well, it is, if you want to code every single function yourself; but if you want to speed up your work, you can rely on a range of tools and packages.

In this post, I’ll introduce you to my favorite tools and packages, the ones I use almost daily in my research work.

What are the tools you need? Photo by Fleur on Unsplash

Where should I write my code?

When it comes to my coding environment, I have no doubts: my setup is built around Jupyter Notebooks and Spyder.

Jupyter Notebooks

Jupyter Notebooks are a great tool for running your data analysis. This open-source application lets you create real notebooks where you can mix code and markdown, producing clean pages that can be read even by those who don’t know how to code. Moreover, it allows you to export everything as a PDF file that you can share with your colleagues.

Spyder

Spyder has become, de facto, my favorite Python editor. With an interface similar to MATLAB and RStudio, Spyder is an open-source editor with a focus on scientific computation. It comes with a variable explorer, a file browser, support for notebooks, code analysis, and much, much more. It’s extremely lightweight compared to other editors, such as Atom or VSCode, and it’s available as a standalone tool or through Anaconda. If you try a single Python editor in your life, try Spyder.

Packages, packages and more packages

Packages are an essential part of a Python environment, and new packages are released daily for us to download and use. As I write this, the Python Package Index (PyPI.org) counts more than 200,000 packages, but which ones should you use? Take a look at the list below.

Numpy: numpy lets you do everything related to math, and it also supports matrices and vectors. A must-have in your setup.
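To give you a taste, here is a minimal sketch of the kind of array math numpy handles; the numbers are made up for illustration:

```python
import numpy as np

# An array of made-up reaction times, purely for illustration
reaction_times = np.array([0.52, 0.47, 0.61, 0.55])

print(reaction_times.mean())  # average of the values
print(reaction_times.std())   # standard deviation

# Matrices and vectors work just as easily
weights = np.array([[1, 0], [0, 2]])
vector = np.array([3, 4])
print(weights @ vector)       # matrix-vector product -> [3 8]
```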

Scipy: a close friend of numpy, scipy extends your Python environment with scientific and statistical routines, from simple models to more advanced tests that check for significant differences between your groups.
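As an example of the group comparisons I mentioned, here is a hedged sketch of an independent-samples t-test with scipy.stats; the group values are invented for illustration:

```python
from scipy import stats

# Two made-up groups of scores, purely for illustration
group_a = [4.1, 3.9, 4.5, 4.2, 3.8]
group_b = [3.2, 3.5, 3.1, 3.6, 3.0]

# Independent-samples t-test: is the difference between the groups significant?
t_statistic, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_statistic:.2f}, p = {p_value:.3f}")
```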

Pandas: we love reptiles, but we love cute and fluffy pandas too. This library lets you create data frames for your data, simplifying your comparisons and statistical analyses. Not only that, it also makes it easy to import data from CSV and other tabular formats.
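A minimal sketch of the CSV-to-data-frame workflow; the file name and the column names ("condition", "score") are just placeholders for your own data:

```python
import pandas as pd

# Load a tabular file into a data frame (placeholder file name)
df = pd.read_csv("experiment_results.csv")

# Quick overview of the data
print(df.head())      # first rows
print(df.describe())  # descriptive statistics for numeric columns

# Compare groups, assuming your file has 'condition' and 'score' columns
print(df.groupby("condition")["score"].mean())
```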

Matplotlib: the most used package for data visualization. Matplotlib supports a variety of chart types, making it the perfect companion for producing high-quality figures.
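A small sketch of a publication-ready figure; the data are invented and the output file name is a placeholder:

```python
import matplotlib.pyplot as plt

# Made-up data, purely for illustration
trials = [1, 2, 3, 4, 5]
accuracy = [0.61, 0.68, 0.74, 0.79, 0.83]

plt.plot(trials, accuracy, marker="o")
plt.xlabel("Trial block")
plt.ylabel("Accuracy")
plt.title("Learning curve")
plt.tight_layout()
plt.savefig("learning_curve.png", dpi=300)  # high-resolution export
plt.show()
```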

Pysiology: my go-to for physiological signal processing. Not only because the guy who designed this library is really cool, but also because it works great! With Pysiology you can process your ECG, EMG, and EDA signals in three lines of code.
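Roughly, those three lines look like the sketch below. I am quoting the module and function names (electrocardiography, analyzeECG) from memory of the PySiology README, and the file name is a placeholder, so double-check everything against the official documentation:

```python
import pickle
import pysiology

# Load a previously recorded ECG signal (placeholder file name);
# any list or array of raw samples works the same way
with open("ecg_recording.pkl", "rb") as f:
    ecg_signal = pickle.load(f)

samplerate = 1000  # sampling rate of the recording, in Hz

# Module and function names as I recall them; verify against the PySiology docs
results = pysiology.electrocardiography.analyzeECG(ecg_signal, samplerate)
print(results)
```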

pyphysio: similar to pysiology, this is a more advanced tool for physiological signal processing. Pyphysio supports multiple signals and gives you fine-grained control over your analysis pipeline.

mne: mne is a great tool for the analysis of EEG and MEG signals. The learning curve is a bit steep, but the results are great.
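A minimal hedged sketch of a typical first pass in MNE; the file name is a placeholder and the filter band is just an example:

```python
import mne

# Load a raw recording (placeholder file name); preload=True keeps the data in memory
raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)

# Band-pass filter, e.g. 1-40 Hz, a common first cleaning step
raw.filter(l_freq=1.0, h_freq=40.0)

# Inspect the signals interactively
raw.plot()
```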

PsychoPy: Python also works great when you need to implement experimental paradigms for data collection. PsychoPy is a perfect tool for designing your data collection stages: it supports the presentation of different media and the synchronization of multiple machines, and it also packs a visual editor for designing your studies.
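To show the flavor of it, here is a minimal sketch of a PsychoPy script that puts a message on screen for two seconds; a real paradigm would of course add trials, responses, and logging:

```python
from psychopy import visual, core

# Open an experiment window (windowed here; use fullscr=True for real data collection)
win = visual.Window(size=(800, 600), color="black")

# Draw a simple text stimulus and show it for two seconds
message = visual.TextStim(win, text="Welcome to the experiment!")
message.draw()
win.flip()
core.wait(2.0)

win.close()
core.quit()
```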

This is the list of my favorite packages for collecting, analyzing, and visualizing data with Python. For each of them, I will write a post with examples and use cases. Make sure to follow “The Herpetologist Social Scientist” to receive our new stories straight to your inbox. Any package I missed in the list above? Let me know in the comments!
