Photo by Louis Reed on Unsplash

transformers go brum brum

Hi guys! Today we are going to implement Training data-efficient image transformers & distillation through attention, a new method to perform knowledge distillation on Vision Transformers, called DeiT.

You will soon see how elegant and simple this new approach is.
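To give a first taste before we dive in: the student transformer gets an extra distillation token and learns both from the ground-truth labels and from the hard predictions of a teacher network. Here is a minimal sketch of that hard-label distillation objective from the paper (hypothetical function and tensor names, not the article's code):

import torch
import torch.nn.functional as F

def hard_distillation_loss(cls_logits, dist_logits, teacher_logits, labels):
    # the classification head learns from the ground-truth labels
    ce_labels = F.cross_entropy(cls_logits, labels)
    # the distillation head learns from the teacher's hard predictions
    teacher_labels = teacher_logits.argmax(dim=-1)
    ce_teacher = F.cross_entropy(dist_logits, teacher_labels)
    # the paper simply averages the two terms
    return 0.5 * ce_labels + 0.5 * ce_teacher

# toy usage with random logits and labels
s_cls, s_dist, t = torch.randn(4, 10), torch.randn(4, 10), torch.randn(4, 10)
print(hard_distillation_loss(s_cls, s_dist, t, torch.randint(0, 10, (4,))))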

Code is here; an interactive version of this article can be downloaded from here.

DeiT is available in my new computer vision library called glasses

Before starting, I highly recommend you first have a look at Vision Transformers.

Introduction

Let’s introduce the DeiT model family by having a look at their performance.


Photo by eberhard grossgasteiger on Unsplash

Hi guys, happy new year! Today we are going to implement the famous Vi(sion) T(ransformer) proposed in AN IMAGE IS WORTH 16X16 WORDS: TRANSFORMERS FOR IMAGE RECOGNITION AT SCALE.
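The paper title already gives away the core idea: an image is split into fixed-size patches, and each patch is treated as a "word" fed to a standard transformer. As a minimal sketch of this patchification step (toy sizes, not the article's code), using einops:

import torch
from einops import rearrange

# a toy batch of 8 RGB images of size 224x224
imgs = torch.randn(8, 3, 224, 224)

# split every image into 16x16 patches and flatten each patch into a vector
patches = rearrange(imgs, "b c (h p1) (w p2) -> b (h w) (p1 p2 c)", p1=16, p2=16)
print(patches.shape)  # torch.Size([8, 196, 768]): 196 "words" of size 768

# a linear projection maps every patch to the transformer's embedding size
embeddings = torch.nn.Linear(16 * 16 * 3, 768)(patches)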

Code is here; an interactive version of this article can be downloaded from here.

ViT is available in my new computer vision library called glasses

This is a technical tutorial, not your normal Medium post where you find out about the top 5 secret pandas functions to make you rich.

So, before beginning, I highly recommend that you:

  • have a look at the amazing The Illustrated Transformer website
  • watch…


Photo by Eric TERRADE on Unsplash. The most famous face in the world!

A deep learning approach

All the code can be found here. An interactive version of this article can be downloaded from here.

Today we are going to use deep learning to create a face unlock algorithm. To complete our puzzle, we need three main pieces.

  • an algorithm to find faces
  • a way to embed the faces in a vector space
  • a function to compare the encoded faces (a minimal sketch follows this list)
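As a preview of the last piece, comparing two encoded faces usually means measuring the distance between their embedding vectors, for example with cosine similarity. A minimal sketch (hypothetical threshold and embedding size, not the article's code):

import torch
import torch.nn.functional as F

def same_person(emb_a, emb_b, threshold=0.7):
    # the embeddings are the vectors produced by the face encoder
    similarity = F.cosine_similarity(emb_a.unsqueeze(0), emb_b.unsqueeze(0)).item()
    # the threshold is a made-up value and must be tuned on real data
    return similarity > threshold

# toy usage with random 512-dimensional embeddings
print(same_person(torch.randn(512), torch.randn(512)))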

Find Faces

First of all, we need a way to find a face inside an image. We can use an end-to-end approach called MTCNN (Multi-task Cascaded Convolutional Networks).

Just a little bit of technical background: it is called Cascaded because it…


Photo by SpaceX on Unsplash

torchserve to the rescue!

All the code used in this article is here

Recently, PyTorch has introduced its new production framework to properly serve models, called torchserve. So, without further ado, let’s present today’s roadmap:

  1. Installation with Docker
  2. Export your model
  3. Define a handler
  4. Serve our model

To showcase torchserve, we will serve a fully trained ResNet34 to perform image classification.

Installation with Docker

Official doc here

The best way to install torchserve is with docker. You just need to pull the image.

You can use the following command to pull the latest image.

docker pull pytorch/torchserve:latest

All the tags are available here
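Once the image is pulled, you can start the container to check that everything works. A minimal invocation (assuming the default ports, 8080 for inference and 8081 for management) could look like this:

docker run --rm -it -p 8080:8080 -p 8081:8081 pytorch/torchserve:latest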

More about docker and torchserve…


A semantic browser using deep learning and Elasticsearch to search COVID papers

Today we are going to build a semantic browser that uses deep learning to search through more than 50k papers about the recent COVID-19 disease.

All the code is on my GitHub repo, while a live version of this article is here.

The key idea is to encode each paper in a vector representing its semantic content and then search using cosine similarity between a query and all the encoded documents. This is the same process used by image browsers (e.g. Google Images) to search for similar images.
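To make that concrete, here is a minimal sketch of the search step (made-up sizes and names; the embedding model itself is not shown): given an encoded query and a matrix of encoded papers, we rank the papers by cosine similarity.

import numpy as np

def top_k_papers(query_vec, paper_vecs, k=5):
    # normalise so that the dot product becomes cosine similarity
    query = query_vec / np.linalg.norm(query_vec)
    papers = paper_vecs / np.linalg.norm(paper_vecs, axis=1, keepdims=True)
    scores = papers @ query
    # indices of the k most similar papers, best match first
    return np.argsort(-scores)[:k]

# toy usage: 50k fake 768-dimensional paper embeddings
papers = np.random.randn(50_000, 768)
query = np.random.randn(768)
print(top_k_papers(query, papers))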

So, our puzzle is composed of three pieces: data, a mapping from papers…


A clean and simple template to kick-start your next deep learning project 🚀🚀

The template is here

In this article, we present a deep learning template based on PyTorch. This template aims to make it easier for you to start a new deep learning computer vision project with PyTorch. The main features are:

  • modularity: we split each logical piece into a different Python submodule
  • data augmentation: we included imgaug
  • ready to go: by using poutyne, a Keras-like framework, you don’t have to write any training loop
  • torchsummary to show a summary of your models
  • reduce the learning rate on a plateau (see the sketch after this list)
  • auto-saving of the best model
  • experiment tracking with comet
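The learning-rate and model-saving features are handled by ready-made callbacks in the template; in plain PyTorch the same two ideas boil down to something like this sketch (toy model and made-up values, not the template's actual code):

import torch

model = torch.nn.Linear(10, 2)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# lower the learning rate when the validation loss stops improving
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="min", factor=0.1, patience=3)

best_loss = float("inf")
for epoch in range(10):
    val_loss = float(torch.rand(1))  # placeholder: use your real validation loss here
    scheduler.step(val_loss)
    # auto-save the weights of the best epoch seen so far
    if val_loss < best_loss:
        best_loss = val_loss
        torch.save(model.state_dict(), "best_model.pt")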

Motivation

Let’s face it, usually…


Today we are going to implement the famous ResNet from Kaiming He et al. (Microsoft Research) in PyTorch. It won 1st place in the ILSVRC 2015 classification task.

ResNet and all its variants have been implemented in my library glasses

Code is here; an interactive version of this article can be downloaded here. The original paper can be read here (it is very easy to follow) and additional material can be found in this Quora answer.

Introduction

This is not a technical article and I am not smart enough to explain residual connections better than the original authors. …
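That said, the core trick fits in one line: a residual block computes some function F(x) and adds its input back, so the output is F(x) + x. A minimal sketch of the idea (a toy block, not the article's implementation):

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # a toy body; the real ResNet blocks use convolutions, batch norm and ReLU
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # the residual (shortcut) connection: add the input back to the output
        return self.body(x) + x

x = torch.randn(1, 64, 56, 56)
print(ResidualBlock(64)(x).shape)  # torch.Size([1, 64, 56, 56])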


Photo by Ricardo Rocha on Unsplash

There is one famous urban legend about computer vision. Back in the 80s, the US military wanted to use neural networks to automatically detect camouflaged enemy tanks. They took a number of pictures of trees without tanks and then pictures of the same trees with tanks behind them. The results were impressive. So impressive that the army wanted to be sure the net had correctly generalized. They took new pictures of woods with and without tanks and showed them to the network again. This time, the model performed terribly; it was not able to discriminate between pictures with tanks behind…


Photo by Markus Spiske on Unsplash

Updated to PyTorch 1.7

You can find the code here

PyTorch is an open-source deep learning framework that provides a smart way to create ML models. Even though the documentation is well made, I still find that most people write messy and poorly organized PyTorch code.

Today, we are going to see how to use the three main building blocks of PyTorch: Module, Sequential and ModuleList. We are going to start with an example and iteratively we will make it better.

All of these classes are contained in torch.nn
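To set the stage, here is the same tiny classifier written twice, once by subclassing Module and once with Sequential (a toy example, not the article's running one):

import torch
import torch.nn as nn

# subclassing Module: you declare the layers and write forward yourself
class MyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(28 * 28, 64)
        self.fc2 = nn.Linear(64, 10)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

# Sequential: the same network, the layers are simply chained in order
sequential_classifier = nn.Sequential(
    nn.Linear(28 * 28, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

x = torch.randn(4, 28 * 28)
print(MyClassifier()(x).shape, sequential_classifier(x).shape)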

Module: the main building block

The Module is the main…


Photo by Vincentiu Solomon on Unsplash

Three different ways

You can find the Jupyter notebook for this article here

Today we are going to see how to create word embeddings using TensorFlow.

Updated to tf 1.9

Word embedding is a way to represent words by creating a high-dimensional vector space in which similar words are close to each other.

Long story short, neural networks work with numbers, so you can’t just throw words at them. You could one-hot encode all the words, but you would lose the notion of similarity between them.

Usually, almost always, you place your embedding layer in front of your neural network.
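As a minimal illustration (TF 1.x style, matching the version above, with made-up sizes), an embedding layer is just a trainable matrix from which we look up one row per word id:

import tensorflow as tf

vocab_size, embed_dim = 10000, 64  # made-up sizes

# trainable embedding matrix: one dense vector per word id
embeddings = tf.Variable(tf.random_uniform([vocab_size, embed_dim], -1.0, 1.0))

# word ids for a toy batch of two already-tokenised sentences
word_ids = tf.constant([[1, 42, 7], [3, 3, 99]], dtype=tf.int32)

# look up the dense vector of every id: shape (2, 3, embed_dim)
embedded = tf.nn.embedding_lookup(embeddings, word_ids)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(embedded).shape)  # (2, 3, 64)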

Preprocessing

Usually, you…

Francesco Zuppichini

“quam minimum credula postero” https://francescozuppichini.carrd.co/
