Deep Learning 2: Part 1 Lesson 1

Hiromi Suenaga
Jan 12, 2018

Lesson 1

Getting started [0:00]

Introduction to Jupyter Notebook and Dogs vs. Cats [12:39]

%reload_ext autoreload
%autoreload 2
%matplotlib inline
# This file contains all the main external libs we'll use
from fastai.imports import *
from fastai.transforms import *
from fastai.conv_learner import *
from fastai.model import *
from fastai.dataset import *
from fastai.sgdr import *
from fastai.plots import *
PATH = "data/dogscats/"
sz=224
!ls {PATH}
models  sample  test1  tmp  train  valid

!ls {PATH}valid
cats  dogs

files = !ls {PATH}valid/cats | head
files
['cat.10016.jpg',
'cat.1001.jpg',
'cat.10026.jpg',
'cat.10048.jpg',
'cat.10050.jpg',
'cat.10064.jpg',
'cat.10071.jpg',
'cat.10091.jpg',
'cat.10103.jpg',
'cat.10104.jpg']
img = plt.imread(f'{PATH}valid/cats/{files[0]}')
plt.imshow(img);
img.shape
(198, 179, 3)

img[:4,:4]
array([[[ 29,  20,  23],
        [ 31,  22,  25],
        [ 34,  25,  28],
        [ 37,  28,  31]],

       [[ 60,  51,  54],
        [ 58,  49,  52],
        [ 56,  47,  50],
        [ 55,  46,  49]],

       [[ 93,  84,  87],
        [ 89,  80,  83],
        [ 85,  76,  79],
        [ 81,  72,  75]],

       [[104,  95,  98],
        [103,  94,  97],
        [102,  93,  96],
        [102,  93,  96]]], dtype=uint8)
# Use a pretrained resnet34 with the transforms that model expects;
# precompute=True caches the activations of the frozen pretrained layers
data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(resnet34, sz))
learn = ConvLearner.pretrained(resnet34, data, precompute=True)
learn.fit(0.01, 3)   # learning rate 0.01, 3 epochs
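# output columns: [epoch, training loss, validation loss, validation accuracy]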
[ 0.       0.04955  0.02605  0.98975]
[ 1.       0.03977  0.02916  0.99219]
[ 2.       0.03372  0.02929  0.98975]

Fast.ai Library [22:24]

Analyzing results [24:21]

data.val_y
array([0, 0, 0, ..., 1, 1, 1])

data.classes
['cats', 'dogs']
log_preds = learn.predict()
log_preds.shape
(2000, 2)

log_preds[:10]
array([[ -0.00002, -11.07446],
       [ -0.00138,  -6.58385],
       [ -0.00083,  -7.09025],
       [ -0.00029,  -8.13645],
       [ -0.00035,  -7.9663 ],
       [ -0.00029,  -8.15125],
       [ -0.00002, -10.82139],
       [ -0.00003, -10.33846],
       [ -0.00323,  -5.73731],
       [ -0.0001 ,  -9.21326]], dtype=float32)
preds = np.argmax(log_preds, axis=1)  # from log probabilities to 0 or 1
probs = np.exp(log_preds[:,1]) # pr(dog)
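
The helpers used in the next few cells (rand_by_correct, most_by_correct, plot_val_with_title) come from the lesson notebook and are not reproduced in this excerpt. A minimal sketch of what they might look like, assuming the preds, probs, data, and PATH objects above, the notebook's numpy/matplotlib imports, and that data.val_ds.fnames holds image paths relative to PATH:

def rand_by_mask(mask):
    # 4 random validation indices satisfying the boolean mask
    return np.random.choice(np.where(mask)[0], 4, replace=False)

def rand_by_correct(is_correct):
    # indices where the prediction agrees (or disagrees) with the true label
    return rand_by_mask((preds == data.val_y) == is_correct)

def most_by_correct(y, is_correct):
    # the 4 images of class y classified most (in)correctly, ranked by pr(dog)
    mult = -1 if (y == 1) == is_correct else 1
    idxs = np.where(((preds == data.val_y) == is_correct) & (data.val_y == y))[0]
    return idxs[np.argsort(mult * probs[idxs])[:4]]

def plot_val_with_title(idxs, title):
    # show the chosen validation images, titled with their pr(dog)
    print(title)
    f = plt.figure(figsize=(12, 6))
    for i, idx in enumerate(idxs):
        sp = f.add_subplot(1, len(idxs), i + 1)
        sp.axis('off')
        sp.set_title(f'{probs[idx]:.4f}', fontsize=14)
        sp.imshow(plt.imread(f'{PATH}{data.val_ds.fnames[idx]}'))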
# 1. A few correct labels at random
plot_val_with_title(rand_by_correct(True), "Correctly classified")
# 2. A few incorrect labels at random
plot_val_with_title(rand_by_correct(False), "Incorrectly classified")
plot_val_with_title(most_by_correct(0, True), "Most correct cats")
plot_val_with_title(most_by_correct(1, True), "Most correct dogs")
plot_val_with_title(most_by_correct(0, False), "Most incorrect cats")
plot_val_with_title(most_by_correct(1, False), "Most incorrect dogs")
most_uncertain = np.argsort(np.abs(probs - 0.5))[:4]
plot_val_with_title(most_uncertain, "Most uncertain predictions")

Top-down vs Bottom-up [30:52]

Course Structure [33:53]

Image Classifier Examples:

Deep Learning ≠ Machine Learning [44:26]

A better way [47:35]

Infinitely flexible function: Neural Network [48:43]

All purpose parameter fitting: Gradient Descent [49:39]
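
The point here is that gradient descent is a general-purpose way to fit parameters. As a toy sketch (mine, not from the lesson), fitting a and b in y = a*x + b by repeatedly stepping downhill along the gradient of the squared error:

import numpy as np

x = np.linspace(0, 1, 100)
y = 3 * x + 2 + np.random.randn(100) * 0.1   # data generated with a=3, b=2 plus noise

a, b, lr = 0.0, 0.0, 0.1
for _ in range(1000):
    y_hat = a * x + b
    # gradients of the mean squared error with respect to a and b
    grad_a = (2 * (y_hat - y) * x).mean()
    grad_b = (2 * (y_hat - y)).mean()
    a -= lr * grad_a                          # step downhill
    b -= lr * grad_b

print(a, b)   # should end up close to 3 and 2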

Fast and scalable: GPU [51:05]

Putting it all together [53:40]

Diagnosing lung cancer [56:55]

Convolutional Neural Network [59:13]

Linear Layer

Nonlinear Layer [01:02:12]

Sigmoid and ReLU
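
To make these terms concrete, here is a small sketch (mine, not from the lesson) of a linear layer followed by the two nonlinearities named above:

import numpy as np

def linear(x, W, b):
    # a linear layer: matrix multiply plus bias
    return x @ W + b

def sigmoid(z):
    # squashes values into (0, 1)
    return 1 / (1 + np.exp(-z))

def relu(z):
    # keeps positive values, zeroes out negatives
    return np.maximum(z, 0)

x = np.random.randn(1, 4)          # one input with 4 features
W = np.random.randn(4, 3)          # weights for a layer with 3 outputs
b = np.zeros(3)
print(relu(linear(x, W, b)))       # linear layer followed by a ReLU
print(sigmoid(linear(x, W, b)))    # the same layer followed by a sigmoid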

How to set these parameters to solve problems [01:04:25]

Visualizing and Understanding Convolutional Networks [01:08:27]

Dog vs. Cat Revisited — Choosing a learning rate [01:11:41]

learn.fit(0.01, 3)   # 0.01 is the learning rate; 3 is the number of epochs

# Find a good learning rate with the learning rate finder
arch = resnet34      # the same pretrained architecture used above
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.lr_find()              # increase the learning rate each mini-batch, recording the loss
learn.sched.plot_lr()        # learning rate vs. iteration
learn.sched.plot()           # loss vs. learning rate: pick a rate where the loss is still clearly falling

Choosing number of epochs [1:18:49]

[ 0.       0.04955  0.02605  0.98975]
[ 1.       0.03977  0.02916  0.99219]
[ 2.       0.03372  0.02929  0.98975]

Tips and Tricks [1:21:40]

