Poutyne: A Simplified Framework for Deep Learning in PyTorch

Frédérik Paradis
Jan 29 · 7 min read

Authors: Frédérik Paradis, David Beauchemin, Mathieu Godbout, Jean-Samuel Leboeuf, François Laviolette


Poutyne is an open-source Python framework that simplifies the development of neural networks with the PyTorch library. This framework eliminates much of the boilerplate code required when training a neural network without sacrificing the flexibility of PyTorch. Poutyne is part of the PyTorch ecosystem and allows researchers as well as developers in the industry to quickly iterate and focus on their respective objectives.

In this article, we show how you can develop and train neural networks efficiently using Poutyne. We go through (1) the basics of Poutyne, (2) the management of metrics, and finally we discuss (3) how you can easily customize your training and act on the values of your metrics.

Lightweight Framework for Faster Prototyping

Big portions of a PyTorch project’s code are the same no matter what project you are working on (e.g. backpropagation loop, data transfer to correct device, logging, etc.). Poutyne lets you hit the ground running when starting a new project by taking care of this boilerplate code for you so you can focus on what your project is truly about. Whether you are introducing a new architecture, proposing a new data augmentation procedure or even designing a new optimizer, Poutyne lets you use PyTorch in a cleaner, more efficient way.

In order to do so, Poutyne proposes a two-level abstraction which decouples the training loop from functional features. The following figure presents a schematic view of this decomposition.

Overview of the Poutyne artifacts

At the core of the Poutyne framework lies the Model class (in light blue), which encapsulates a PyTorch network coupled with its optimizer and loss function. By pairing those items together, the Model class now allows you to fit the network with a data loader or make predictions with a single line of code. Moreover, the Poutyne Model is fully compatible with NumPy: all input data and output predictions can be NumPy arrays.

To avoid the loss of flexibility such an encapsulation can incur, Poutyne relies on a system of plug-in objects named Callbacks (in light purple). These objects, which can be added and removed at will, are called at the appropriate moments in the training loop and can be customized to fit your needs. For example, callbacks can be used to checkpoint the model after each epoch or to adjust the learning rate. Callbacks are discussed in more detail later.

Finally, to further reduce boilerplate code, Poutyne introduces a second abstraction layer with its Experiment class (in light orange), whose purpose is to automate the portions of the prototyping pipeline not solely related to the training loop. It automatically handles the metrics to be monitored, sets the model on the correct device and initializes multiple useful callbacks out of the box. For instance, training can automatically be resumed from the best model checkpoint. Moreover, all the added functionalities of Experiment only require you to provide a working folder path.

Let’s look at how one could train a basic PyTorch network on the MNIST dataset using (1) pure PyTorch, (2) the Poutyne Model class and (3) the Poutyne Experiment class. Here we omit code that is shared by all implementations and only focus on how the training loop differs.

(1) Implementation in pure PyTorch (See fully functional code)
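A minimal sketch of the pure-PyTorch version, with a small synthetic dataset standing in for MNIST so the snippet is self-contained (the network and hyperparameters are illustrative):

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-in for MNIST: 28x28 grayscale images, 10 classes.
train_dataset = TensorDataset(torch.randn(256, 1, 28, 28),
                              torch.randint(0, 10, (256,)))
train_loader = DataLoader(train_dataset, batch_size=32)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
network = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)).to(device)
optimizer = optim.SGD(network.parameters(), lr=0.01)
loss_function = nn.CrossEntropyLoss()

for epoch in range(5):
    network.train()
    epoch_loss = 0.0
    for x, y in train_loader:
        x, y = x.to(device), y.to(device)  # manual device transfer
        optimizer.zero_grad()
        loss = loss_function(network(x), y)
        loss.backward()                    # manual backpropagation
        optimizer.step()
        epoch_loss += loss.item() * len(x)
    print(f'Epoch {epoch + 1}: loss {epoch_loss / len(train_dataset):.4f}')
```

Note how the device transfer, the gradient bookkeeping and the loss averaging are all written by hand; this is exactly the boilerplate Poutyne removes.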
(2) Implementation using the Poutyne Model class (See fully functional code)
(3) Implementation using the Poutyne Experiment class (See fully functional code) with automatic logging and checkpointing in addition.

Moreover, Poutyne comes with the fun side effect of automatically logging training progress to the console.

While all these code snippets accomplish the same thing, the ones using Poutyne do it in a much leaner way. It also goes without saying that this is only a toy example: any real project would include other artifacts such as metrics logging or model checkpointing, and the pure PyTorch version would quickly become a mess.

As you can see, using Poutyne only requires minimal effort and allows for much sleeker code, while providing many more functionalities out of the box. This is what Poutyne is all about: faster prototyping without any sacrifice on flexibility.

Managing Metrics

One aspect of machine learning that no good practitioner neglects is metrics monitoring. In a hand-crafted training loop, adding a new metric often amounts to (1) finding where, in your code, metrics are computed, (2) adding your new metric with all its specificities, (3) finding everywhere metrics are referred to and (4) deciding whether your new metric belongs there or not. These steps are easy to overlook and costly when forgotten.

Poutyne simplifies the usage of metrics in a declarative way: you define your metrics, give them to Poutyne and it takes care of computing and using them at the right moments. Indeed, as you will see in the next section, the callbacks in Poutyne are able to perform actions based on the values of the metrics. For now, let us look at how metrics are defined and used in Poutyne.

Poutyne offers two kinds of metrics: batch metrics and epoch metrics. The main difference between them is that batch metrics are computed at each batch, whereas epoch metrics are computed only at the end of the epoch. This distinction is useful because not all metrics can be computed for each batch and then averaged at the end of an epoch to obtain the aggregated value for the whole epoch.

For instance, in a classification problem, you can compute the accuracy of each batch and average over the batches of an epoch to obtain the accuracy for the whole epoch. However, this trick does not work with metrics such as the F1-score, which is computed from a combination of the numbers of true/false positives and negatives. Thus, in Poutyne, the accuracy is a batch metric whereas the F1-score is an epoch metric.

It is in the Model and Experiment classes discussed in the previous section that you tell Poutyne which metrics you desire. Here is an example with the accuracy batch metric and, while we're at it, the F1-score epoch metric.

Example with pre-defined metrics (See fully functional code)

As you can see, you easily get the value of the loss, the accuracy and F1-score on the test set. The example uses predefined strings but it is possible to define your own metrics via the interface proposed by Poutyne:

Example with custom metrics (See this fully functional example)

All PyTorch losses are available as predefined strings in addition to some commonly used metrics. Notably, an epoch metric for scikit-learn metrics is available. You can look at the documentation for more details on metrics.

Inserting Intermediate Steps in the Training Loop

So far, we’ve managed to quickly start a deep learning project and easily set our standard metrics for monitoring a model, but what about customizing the training loop? In your usual development process, you incrementally add features such as checkpointing and monitoring, and it rapidly becomes unmanageable and difficult to maintain. Poutyne introduces callbacks which are designed to solve this problem. Let’s use an example to explain the concept of callback.

Say you want to train a large model for days on a server and would like to monitor it without constantly looking at it to know when training is done. You come up with a ‘simple’ solution: sending yourself an e-mail message after any given epoch. You try to manually add code into your training loop to send that message, but you are not sure whether it belongs there or not and, even worse, you return to your code a month later and do not even remember what that line was for. Your intentions were good, but the execution was painful. All you wanted was an alert at the start and end of the training and every N steps.

Poutyne callbacks are classes that are called at a specific moment during training, such as the start or end of an epoch. This means you can focus on defining the logic you need instead of finding all the places it has to be called in your code; Poutyne will handle that for you. Callbacks make Poutyne more flexible without the burden of complexifying your code. They also let you access the values of the metrics at a given time (e.g. after each training batch), letting you define logic with that information (e.g. early stopping). Let’s take a look at what a training alert callback would look like:

Example with a custom callback (See fully functional code)

But one does not have to implement all the needed callbacks; Poutyne offers numerous callbacks such as checkpointing, logging, gradient clipping, early stopping, gradient tracking, and many more. You can look at the documentation for a more detailed list.

Conclusion

We presented some of the numerous features of Poutyne that allow you to write better and more efficient PyTorch code. For complete examples, take a look at Poutyne’s website.

Poutyne is used by hundreds of people around the world. We are actively improving Poutyne and adding more features. Since it is an open-source project, anyone who wishes to contribute is welcome to join us on our public GitHub repository.

PyTorch

An open source machine learning framework that accelerates the path from research prototyping to production deployment

Frédérik Paradis

Written by

Applied AI Consultant and Researcher at Baseline (baseline.quebec) | Lead developer of Poutyne (poutyne.org) | PhD Student in Deep Learning at Université Laval

