Introduction To Deep Learning đŸ€– - Chapter 1

satyabrata pal
Published in ML and Automation
Sep 26, 2020 · 10 min read

My Learnings from the fastbook chapter 1

Image courtesy: https://pixabay.com/images/id-3501528/

👉 Try out the code at Kaggle

About The Work Related To This Course

This work is based on the draft of the fastbook, "Deep Learning for Coders with fastai and PyTorch", and the fastai course-v4.

This is a condensed version of the fastbook draft and it also contains some of my own thoughts on the original material.

Deep Learning Myths

  • Myth 1 — You need big data to do deep learning — Not true. Using certain techniques, deep learning can be done with fewer than 50 data points.
  • Myth 2 — You need to know advanced maths to practice deep learning — Not true. High school maths is sufficient.
  • Myth 3 — You need a PhD to practice deep learning — Not true. Many people in this field don’t have a PhD.
  • Myth 4 — Expensive hardware is needed — Not true. State-of-the-art hardware is available for free at Kaggle or Google Colab.

How It All Began

Well, it all began before the time of the internet, in 1943 to be precise.

In 1943 Warren McCulloch, a neurophysiologist, and Walter Pitts, a logician, teamed up to develop a mathematical model of an artificial neuron.

McCulloch and Pitts realized that a simplified model of a real neuron could be represented using simple addition and thresholding, as the little sketch after the figure notes below shows.

  • In the figure below, the top image is an actual brain neuron.
  • The bottom image is a simple artificial neuron, otherwise known as a Perceptron.
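
To make "simple addition and thresholding" concrete, here is a minimal sketch of such a neuron in Python. The inputs, weights, and thresholds are made up purely for illustration:

# A minimal artificial neuron: a weighted sum of the inputs (simple addition),
# followed by a threshold that decides whether the neuron "fires".
def neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

print(neuron([1, 0, 1], [0.5, 0.9, 0.4], threshold=0.8))   # 1, enough weighted input
print(neuron([0, 1, 0], [0.5, 0.9, 0.4], threshold=0.95))  # 0, stays silent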

As the years passed and more researchers poured their intellect into neural network research, it was found that adding more neurons, arranged in layers, increases the performance of the algorithm.

It was shown as far back as 30 years ago that more neurons equal more performance, but also that this would need more powerful hardware.

As time passed and more and more powerful hardware became available at low cost, the field of deep learning took off, marking the beginning of the golden age of deep learning. One that we are currently in.

The Traditional Way Of Learning Deep Learning

Many courses and lectures in deep learning focus on the theory and math first: you learn the theory, then the math, and maybe translate the math into code. Yet it is very far down the line when you actually start building a neural network, and even then on a very small toy dataset.

In all the fast.ai courses and in the fastbook things are not done in the traditional way. Let’s see what kind of teaching philosophy is followed by fastai.

Teaching Philosophy

Code first, peel the layers later — Fastai always follows the code-first approach, i.e. start with a working example of a state-of-the-art deep learning network solving a practical problem, then peel the layers one by one to peek under the hood.

Learning by example — Use examples for every concept.

Simplify as much as possible — so that the knowledge sharing happens without any barriers.

The Tools Of The Trade

We will use the following tools.

  • Python — Programming language of choice for this course.
  • PyTorch — Deep learning library developed by Facebook.
  • Fastai — Deep learning library built on top of PyTorch, with the intention of making it easy to build deep neural networks in fewer lines of code.

From Traditional Programming To Machine Learning

A traditional computer program has an input which is fed to the program and then it spits out some output.

If we create a flow chart of this, it would look like: input → program → results.
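
In code, such a traditional program is just fixed rules written down by the programmer; a trivial, made-up example:

# Traditional programming: the programmer spells out every step explicitly.
def fahrenheit_to_celsius(f):
    return (f - 32) * 5 / 9  # input -> fixed rules -> output

print(fahrenheit_to_celsius(98.6))  # 37.0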

This is all fine when you are writing code for day-to-day simple or complex tasks, but what about recognizing a cat in an image?

What do you see when you look at the picture below?

Well, you see a cat. How would you write a computer program to recognize the cat in this image?

One way would be to translate into code the steps your brain takes to recognize a cat. The thing is, we don’t know how the brain does it. So how do we build a program to recognize a cat?

That is a good question and this brings us to someone known as “Arthur Samuel”.

The Computer That Learns

In 1949 Arthur Samuel proposed that instead of teaching a computer the detailed steps of solving a problem, show it examples from which it can learn.

In 1962 he wrote an essay “Artificial Intelligence: A Frontier of Automation” where he summarized the following concepts-

  • The idea of a “weight assignment”
  • Every weight assignment has some “actual performance”
  • The performance testing needs to be automated.
  • There should be an automatic means to improve the performance.

Now, if we change our earlier idea of a computer program using these concepts, then it would look like this: inputs and weights go into the model, results come out, the performance of those results is measured automatically, and the weights are updated to improve that performance.
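
To make Samuel’s concepts concrete, here is a toy sketch of that loop in Python. Everything here is made up for illustration (real training uses gradients, not random search), but all four concepts appear: a weight assignment, its actual performance, automated testing, and an automatic means of improvement:

import random

# A made-up performance measure: secretly best at weight = 3.0.
def performance(weight):
    return -(weight - 3.0) ** 2

best_w, best_p = 0.0, performance(0.0)  # initial weight assignment
for _ in range(1000):
    candidate = best_w + random.uniform(-0.5, 0.5)  # try a new weight assignment
    p = performance(candidate)                      # test its performance automatically
    if p > best_p:                                  # automatic means of improvement
        best_w, best_p = candidate, p

print(round(best_w, 2))  # close to 3.0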

A 50,000-Foot Look At How Image Recognition Works

Let’s see what building a quick image recognition model looks like from 50,000 feet, and how you can build a neural network in as few lines of code as possible.

We will first import the necessary modules.

from fastai.vision.all import *

This gives us access to all the functions we need from fastai’s vision module.

Fastai provides some ready-to-use datasets which can be used as examples for different machine learning tasks. This makes it easier to get started, as you don’t have to search the web for data for your very first deep learning project. One such dataset is the “Pets” dataset, which contains images of đŸ± and đŸ¶ of different breeds.

This and other datasets can be downloaded using the following code.

path = untar_data(URLs.PETS)/'images'

This downloads the Pets dataset from its URL (which is stored inside the fastai library) and extracts it; appending 'images' then gives us the complete path of the folder holding the image files.

Next, we need something to tell the model how to recognize a cat. To do this, we will first see how the files are named, using the following code.

path.ls()

See how path.ls() displays the list of file paths from our data path? For now we will consider the above code to be some magic which returns the file paths.

Next, we will create a little function to fetch the labels from the file names.

We will do this with the following code.

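# In the Pets dataset, cat breeds have file names starting with an uppercase letter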
def is_cat(x): return x[0].isupper()
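
For example, in the Pets dataset cat images have names like Birman_1.jpg while dog images have names like beagle_1.jpg, so:

print(is_cat('Birman_1.jpg'))  # True, a cat breed, capitalized
print(is_cat('beagle_1.jpg'))  # False, a dog breed, lowercase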

We plug this into the code below, which tells fastai how the data is structured.

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))

The above lines tell fastai the following →

  • The path where the data is located.
  • Which files to use; here, the image files found by get_image_files.
  • How to divide the data between training and validation sets; valid_pct=0.2 holds out 20% for validation.
  • How to extract the labels; this is where we use the is_cat function.
  • Finally, how we want to transform our data; here we resize every image to a 224×224 square.
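
As a quick sanity check (a sketch, assuming the dls built above), fastai lets you peek at a labelled batch and at the size of the split:

dls.show_batch(max_n=9)  # displays a grid of images with their labels
print(len(dls.train_ds), len(dls.valid_ds))  # roughly an 80/20 split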

Before we proceed further I would like to mention a quick note on the importance of dividing the data into train and validation sets.

Even if you decide not to divide your data into train and validation sets, your model will still seem to perform well.

The question is: what is the definition of good? How do you know how good your model is? How do you know it is not just memorizing all the data points instead of generalizing over key features? How does it perform on unseen data?

To answer all these questions you have got to have a validation set, which serves as unseen data on which you can test your model.
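
Conceptually, the split is nothing more than holding some examples back; here is a minimal, made-up sketch of what valid_pct=0.2 and seed=42 do:

import random

files = list(range(100))       # stand-ins for our image files
random.seed(42)                # a fixed seed makes the split reproducible
random.shuffle(files)
valid = files[:20]             # 20% held out, never shown during training
train = files[20:]             # 80% used to fit the model
print(len(train), len(valid))  # 80 20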

Next, we create a learner which tells fastai the following →

  • Which architecture to use.
  • What data to train on.
  • What metric to use.

learn = cnn_learner(dls, resnet34, metrics=error_rate)

What is resnet34 in the above line? It is a neural network architecture which was already trained on some other dataset by someone else and then made available to the public for further use. We use this pre-trained model to provide our learner with the knowledge of recognizing images.

Why do we do this? Well, training a neural network from scratch is fun, but it’s time consuming and needs lots of data. Taking knowledge from another neural network which already has a related set of knowledge is usually better.

We will dive into how this technique works later, but for now just know that it works.

We also need to know how our model performs. For this we use the error_rate metric. It tells us how often our model goofed up; this is the report card for our model.
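
Conceptually (this is not fastai’s actual implementation, just a sketch of the idea), error_rate is the fraction of validation predictions the model gets wrong:

def error_rate_sketch(preds, targets):
    wrong = sum(p != t for p, t in zip(preds, targets))
    return wrong / len(targets)

print(error_rate_sketch(['cat', 'dog', 'cat', 'cat'],
                        ['cat', 'cat', 'cat', 'dog']))  # 0.5, two of four are wrong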

Next, we use the following code to start training the learner.

learn.fine_tune(1)

The fine_tune method is a bit of fastai magic which trains the model, displays a pretty table of results, and does other “important things”. But before we look into what these “important things” are, let me tell you what goes on in a neural network when we use a pre-trained model like resnet34.

Consider this. On a very high level a neural network can be considered to be composed of a body and a head like this →

When we get a pre-trained model then the knowledge is collected in the body of the neural network. The head is from where we get the predictions.

In our case we need a new head for our set of predictions, and thus we need to train the head (i.e. our new network’s head) from scratch. We keep the body of the pre-trained network because we want to utilize the knowledge collected there.

This is where fastai’s fine_tune comes into the picture. We will dig into the internals of this function at a later point, but on a very high level it tells fastai to look through the images once and fit those parts of the model which are required to work correctly with your dataset (the new head), and after that to use the number of passes requested to update the entire model.

By the way, fine_tune updates the head faster than the earlier layers.
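
In fastai terms, learn.fine_tune(1) is roughly equivalent to the following (a simplification of what the library actually does, which also adjusts the learning rates along the way):

learn.freeze()          # train only the new, randomly initialized head for one pass
learn.fit_one_cycle(1)
learn.unfreeze()        # then unfreeze and train the whole model
learn.fit_one_cycle(1)  # fine_tune also lowers the learning rate for this stage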

All the above steps combined help us use fastai to build and train a model that recognizes which image is đŸ¶ and which is đŸ±.
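
Once trained, using the model takes one line. For example (cat.jpg here is a hypothetical image file of your own):

img = PILImage.create('cat.jpg')            # load an image the model has never seen
is_cat_pred, _, probs = learn.predict(img)  # returns label, label index, probabilities
print(f"Is this a cat?: {is_cat_pred}; probability: {probs[1].item():.4f}")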

Behind The Scenes

To understand what goes on behind the scenes when a neural network is learning, we will take a look at the work done by Matt Zeiler and Rob Fergus in their 2013 paper “Visualizing and Understanding Convolutional Networks”.

In this paper they created visualizations of what a neural network sees.

The model they used was the one which won the 2012 ImageNet competition.

The following are the results from the paper.

First layer

The images on the right are reconstructions of a subset of the weights in the first layer, and the images on the left are the actual image patches which most closely match the reconstructed ones. The “reconstructed images” are what the network has learned to recognize. As you can see, the network has learned to recognize some sorts of diagonal lines.

Second Layer

The second layer has started to recognize vertical lines, some horizontal lines, and the edges of circles. The images which most closely match these include fractals, sunsets, etc.

Third Layer

Now, the third layer can detect faint edges of hexagonal shapes, flower edges, human facial features etc.

Fourth Layer and Fifth Layer

The fourth and fifth layers recognize some high-level features from the images.

This is how our image recognizer learns to recognize images. The only difference is that modern neural networks have millions of parameters and are able to recognize far more intricate features.

Conclusion

That is all for this chapter. So far we have just scratched the surface of what can be done with deep learning, and we haven’t yet peeled back the layers; that is what we are going to do in the upcoming chapters. In future discussions we will also see the other areas where deep learning can be used.

How To Show Your Support To The Publication

Creating content requires a lot of research, planning, writing and rewriting. This is important because I want to deliver practical content to you without any fluff.

If you like my content and want to support me, then the following are the ways to show your support →

  • If you like my work, then click on this link to Buy me a coffee.
  • Buy my deep learning course at Udemy. Just click on the course link in the show notes and get an awesome discount on my deep learning course.
  • Subscribe to my publication and share it across so that more people can discover it.
  • Subscribe to and share my podcast “SimpleAI” on Google Podcasts or any other podcast player of your choice. Don’t forget to give it a 5-star rating.
  • Subscribe to my newsletter.


satyabrata pal
ML and Automation

A QA engineer by profession, ML enthusiast by interest, photography enthusiast by passion, and fitness freak by nature.