Intro to Quantyca and fast.ai

Our approach and some of the tasks we have solved with the package

Jun 11, 2019

At the beginning of the year, together with Quantyca colleagues who are part of the analytics community of practice, we decided to invest part of our time to deepen our knowledge of fast.ai, a rising deep learning library developed on top of PyTorch.

We therefore formed a study group, open also to colleagues from other competence centers (SRE, software engineering, data engineering), and the group was an immediate success. As a result, we found ourselves managing a group that is very heterogeneous in backgrounds and skills: this is obviously a great plus, but at the same time it is not easy to build a learning path that fits everybody perfectly!

In this introductory post we describe our experience with the library, and in the next posts we will show you some interesting applications of it.

Fast.ai

First of all, we should clarify that fast.ai is not only a library but also a research lab founded by Jeremy Howard and Rachel Thomas, both faculty at the University of San Francisco.

Fast.ai's mission is to make deep learning easier to use by people from all backgrounds; in fact, its ambitious motto is:

"Making neural nets uncool again"

In particular, they developed a library and a course that aim to make Deep Learning (and Machine Learning too) as accessible as possible in terms of skills and computing power required.

Fast.ai is built on top of PyTorch, an open-source deep learning platform that provides flexibility and scalability and that can be seen as a competitor of the more popular TensorFlow. The former is developed by Facebook and the latter by Google; both are Python-based and use graphs to represent the flow of data and operations.

Fast.ai is to PyTorch as Keras is to TensorFlow: both provide high-level APIs that simplify and speed up the development of deep learning models.

The library is open source and you can install it from conda, pip, or GitHub. The documentation is available here, and one of the authors, Jeremy Howard, is publishing three MOOCs in which he uses and explains the library:

The mix of achievable performance, best practices, and clear explanations convinced us to use it, so next we will show you what we have done.

Let’s Dive into the Course!

The teaching approach is clearly top-down, and we found it a perfect fit for our calendars, which were quite busy with client project deliverables.

We organized our learning path by scheduling a meeting after every couple of lessons (there are 7 lessons overall).

The first two lessons grab your attention with practical examples of deep learning in the computer vision field.

Each of us developed a model for image classification (single-label and multi-label) or image segmentation. The use cases were quite varied: some were built on images downloaded from Google Images and some on public datasets, including Kaggle ones.

To give you some examples related to image classification, we tried

and, related to image segmentation,

  • building localization from satellite images
  • salt identification from Kaggle
  • newspaper area recognition
  • people recognition using the COCO dataset

reaching good levels of accuracy (and of other metrics) relative to the time spent on development. We are progressively publishing some of our work in this GitHub repo. Check it out if you are curious.
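To give a flavour of how little code such a classifier takes in fastai v1 (the library version current at the time of writing), here is a minimal sketch of the pattern our classification notebooks follow. The folder layout, hyperparameters, and the `build_image_classifier` wrapper are illustrative assumptions, not our exact code:

```python
def build_image_classifier(data_path):
    """Minimal fastai v1 image-classification sketch.

    Expects data_path/train and data_path/valid folders,
    with one subfolder per class.
    """
    # Import inside the function so the sketch can be read
    # (and this file imported) without fastai installed.
    from fastai.vision import (ImageDataBunch, cnn_learner,
                               get_transforms, models, accuracy)

    # Build a DataBunch with the default data augmentation.
    data = ImageDataBunch.from_folder(
        data_path, train='train', valid='valid',
        ds_tfms=get_transforms(), size=224)

    # Transfer learning: start from a ResNet-34 pretrained on ImageNet.
    learn = cnn_learner(data, models.resnet34, metrics=accuracy)
    learn.fit_one_cycle(4)        # train the new head for 4 epochs
    learn.export('export.pkl')    # serialize for CPU-only inference
    return learn
```

The exported `export.pkl` is what a web app later loads for inference, which is why serving does not need a GPU.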

To speed up the prototyping phase we used Google Colab, an environment that offers Python notebooks (like Jupyter) that you can save on Google Drive and that run on machines with a GPU or TPU, without the need to provision machines in a cloud environment.

A snapshot of a Google Colab notebook for image classification

As described in Pietro's post, we also developed the serving side, building web apps for some of the previous examples (for model inference, which doesn't need GPUs).

Going on with the lessons, other topics are covered, such as natural language processing. Pietro and Francesco developed a model for predicting the next word in an Italian sentence and used it to classify public reviews of restaurants.
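This is the two-stage, ULMFiT-style recipe that the fastai v1 course teaches: fine-tune a language model on your raw text, then reuse its encoder in a classifier. The sketch below shows the shape of that recipe; the file names and the `build_review_classifier` wrapper are illustrative assumptions, and note that fastai's pretrained AWD-LSTM weights are English, so an Italian language model needs its own pretraining corpus:

```python
def build_review_classifier(csv_path):
    """Sketch of fastai v1's two-stage text recipe:
    language model first, classifier second."""
    # Import inside the function so the sketch stays readable
    # without fastai installed.
    from fastai.text import (TextLMDataBunch, TextClasDataBunch,
                             language_model_learner,
                             text_classifier_learner, AWD_LSTM)

    # Stage 1: fine-tune an AWD-LSTM language model on the raw
    # review text (it learns to predict the next word).
    data_lm = TextLMDataBunch.from_csv(csv_path, 'reviews.csv')
    lm = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.5)
    lm.fit_one_cycle(1)
    lm.save_encoder('ft_enc')     # keep the fine-tuned encoder

    # Stage 2: reuse that encoder in a review classifier,
    # sharing the language model's vocabulary.
    data_clas = TextClasDataBunch.from_csv(
        csv_path, 'reviews.csv', vocab=data_lm.train_ds.vocab)
    clf = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
    clf.load_encoder('ft_enc')
    clf.fit_one_cycle(1)
    return clf
```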

The lessons go on with tabular data and GANs, and once all the fields are covered, more theoretical insights are given. We addressed this part by each of us going deeper into a different topic.

Conclusions

The takeaway is that transfer learning is one of the keys to good results in both computer vision and NLP, together with choosing the right architecture and integrating the best practices from the latest scientific papers on regularization, optimization, and training performance. All of these are ready to use in the library (although some tuning is obviously necessary).
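One concrete example of those ready-to-use training best practices is the one-cycle learning-rate policy behind fastai's fit_one_cycle, used in the examples above: the learning rate warms up to a peak and then anneals down, which lets you train with much higher rates than a flat schedule. Here is a dependency-free, back-of-the-envelope sketch of the idea (fastai's actual defaults and shapes differ slightly; this is a simplification):

```python
import math

def one_cycle_lr(step, total_steps, lr_max,
                 pct_start=0.3, div=25.0, final_div=1e4):
    """Learning rate at `step` under a simplified one-cycle schedule.

    Warms up from lr_max/div to lr_max over the first pct_start of
    training, then anneals down to lr_max/final_div, both phases
    using a cosine-shaped interpolation.
    """
    warm_steps = int(total_steps * pct_start)
    if step < warm_steps:
        frac = step / max(1, warm_steps)
        start, end = lr_max / div, lr_max
    else:
        frac = (step - warm_steps) / max(1, total_steps - warm_steps)
        start, end = lr_max, lr_max / final_div
    # cosine interpolation from `start` (frac=0) to `end` (frac=1)
    return end + (start - end) * (1 + math.cos(math.pi * frac)) / 2

# e.g. a 100-step schedule peaking at lr = 0.1 after 30 steps
schedule = [one_cycle_lr(s, 100, 0.1) for s in range(100)]
```

In the library all of this is a single call (`learn.fit_one_cycle(n)`), which is precisely the point: the best practice is the default.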

We have reached the end of this post, having given some examples of what we have done; if it sounds as interesting as we hope, check out the next posts, which go deeper into the applications!
