Track and organize fastai experimentation process in Neptune
The fastai library is quickly becoming the go-to tool for training deep learning models. With a few lines of code, you can train top-quality models for vision, text and structured-data problems. Wouldn’t it be great if, by adding just 4 more lines, you could track your entire experimentation process? This is what the NeptuneMonitor callback, which we have just open-sourced, is all about.

tl;dr
Add Neptune callback:
Go to Neptune:
Open https://neptune.ml/ and have your runs:
- hosted,
- backed-up,
- organized,
- easy to share and discuss with others.


Check the example versioned fastai project here.
Step by step
How you do things now
Let’s take a slightly modified mnist example from fastai. As per usual, with the fastai library, it only takes a few lines of code to train your model.
However, typically you train more than one version of your model before you are finished with the task. You may try different architectures, various training schedules or preprocessing methods in the hopes of optimizing the problem metric.
With every idea that you develop and experiment with, comes a set of additional knowledge that ideally you would like to store. This includes the code, hyperparameters, data versions, learning curves, prediction distributions charts, model weights and more. If you are not careful, all this knowledge could be lost.
There are many ways of tracking your experiments, but I would recommend you go with Neptune. It is open, integrates with any framework or library, and lets you share your work with anyone. I know, my inner sleazy salesman got the best of me, but it doesn’t make it any less true.
We built a simple callback that makes things super easy. Let me guide you through it.
Before we start
Well, if you want to use Neptune, you need to register and install the library first. On a good day it takes 1 minute, on a bad day it takes 3, so my best prediction is that it will be pretty quick.
- Create a Neptune account at https://neptune.ml/. Don’t worry, it is absolutely free.
- Create a project, named for example YOUR_COOL_PROJECT. Read how to do it here.
- Install the neptune-client and neptune-contrib libraries. Simply run:

pip install neptune-client neptune-contrib

Add a monitoring callback
You need to tell Neptune what to look at, and the Learner’s callbacks argument is the perfect place to do so. Instantiate our NeptuneMonitor callback and pass it to your fastai learner. Remember to install the neptune-client and neptune-contrib libraries first.
It will log your code, learning curves and metrics automatically to Neptune.
I suggest that you play around a little bit: change the number of epochs from 2 to 10, the learning rate from 1e-2 to 1e-3, or the architecture from resnet18 to resnet50.
See your project in Neptune.
Now your experiment metadata is safely stored and backed-up in Neptune. Go to your project on https://neptune.ml/ or check the example versioned fastai project here.

You can add comments or tags and organize your runs in whichever way you like.
Explore experiments.
You can see the charts:

or code:

Share and discuss it with friends.
What is important is that you can share your work with others and discuss it. You can point to a specific chart or piece of code with no problems and ask a colleague for feedback!

You may want to invite people to talk with you first :)

Sales pitch… kind of
If you liked what you saw, register for Neptune at https://neptune.ml/ and use the community version absolutely free*!
Also, check the neptune.ml documentation and neptune-contrib documentation pages to see what we are about.
* No gotcha here, seriously, it is free, no strings attached.
If you liked this, you can find more posts like this on our Neptune blog.
You can also find me tweeting @NeptuneML or posting on LinkedIn about ML and Data Science stuff.
Originally published at medium.com on February 15, 2019.

