Almost any Image Classification Problem using PyTorch

Prakash Jay
Dec 20, 2017 · 5 min read
PyTorch logo

This is an experimental setup to build a code base for PyTorch. Its main aim is to enable faster experimentation with transfer learning on all of the available pre-trained models. We will be using the Plant Seedlings Classification dataset for this blog post; it was hosted as a playground competition on Kaggle. More details here.

The following pre-trained models are available in PyTorch (torchvision):

  • squeezenet1_0, squeezenet1_1
  • alexnet
  • resnet18, resnet34, resnet50, resnet101, resnet152
  • inception_v3
  • densenet121, densenet169, densenet201
  • vgg11, vgg13, vgg16, vgg19, vgg11_bn, vgg13_bn, vgg16_bn, vgg19_bn

The three cases in Transfer Learning and how to solve them using PyTorch

  1. Freezing all the layers except the final one
  2. Freezing the first few layers
  3. Fine-tuning the entire network.
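
Before going architecture by architecture, here is a minimal sketch (my own addition, using resnet50 and an assumed n_class variable) of what each case boils down to; cases 1 and 2 are shown in detail in the sections below.

import torchvision
import torch.nn as nn

model = torchvision.models.resnet50(pretrained=True)

## Case 1: freeze everything, then replace (and train) only the final layer
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, n_class)  # the new layer defaults to requires_grad=True

## Case 2: freeze only the first few children (a full loop is shown later in this post)

## Case 3: fine-tune the entire network - leave requires_grad=True everywhere
## and pass all of model.parameters() to the optimizer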

This is very straightforward in PyTorch once you know how the models are structured and wrapped. The models listed above are all written differently: some wrap many layers in Sequential containers, while others expose the layers directly. So it is important to check how each model is defined in PyTorch.

ResNet and Inception_V3

import torchvision
import torch.nn as nn

if resnet:
    model_conv = torchvision.models.resnet50(pretrained=True)
if inception:
    model_conv = torchvision.models.inception_v3(pretrained=True)

## Change the last layer
num_ftrs = model_conv.fc.in_features
model_conv.fc = nn.Linear(num_ftrs, n_class)
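
As a quick sanity check (my own addition, not from the original post), you can run a dummy batch through the modified network; the example assumes resnet50 with 224x224 inputs, whereas inception_v3 expects 299x299. Note that torchvision's inception_v3 also has an auxiliary classifier (model_conv.AuxLogits.fc) that can be replaced the same way.

import torch

## Quick check that the new head produces n_class outputs (assumes resnet50, 224x224 inputs)
model_conv.eval()
with torch.no_grad():
    out = model_conv(torch.randn(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, n_class])

## inception_v3 additionally has an auxiliary head that can be replaced too:
## model_conv.AuxLogits.fc = nn.Linear(model_conv.AuxLogits.fc.in_features, n_class)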

Let's check what this model_conv contains. In PyTorch a model has children (containers), and each child in turn has its own children (layers). Below is an example for resnet50:

for name, child in model_conv.named_children():
    for name2, params in child.named_parameters():
        print(name, name2)

## A long list of params is printed; a few of them are shown below
conv1 weight
bn1 weight
bn1 bias
....
fc weight
fc bias
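
It also helps to look only at the top-level children and their positions, since the freezing loop below counts children rather than individual layers. A small sketch of my own (the child names are those of torchvision's resnet50):

for i, (name, child) in enumerate(model_conv.named_children()):
    print(i, name)
## 0 conv1, 1 bn1, 2 relu, 3 maxpool, 4 layer1,
## 5 layer2, 6 layer3, 7 layer4, 8 avgpool, 9 fc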

Now, if we want to freeze a few layers before training, we can do so with the following snippets:

## Freezing all layers
for params in model_conv.parameters():
    params.requires_grad = False

## Freezing the first few layers. Here I am freezing the first 6 children of the network
ct = 0
for name, child in model_conv.named_children():
    ct += 1
    if ct < 7:
        for name2, params in child.named_parameters():
            params.requires_grad = False
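
One thing to keep in mind (my addition, not from the original post): when some parameters are frozen, hand only the trainable ones to the optimizer, since older PyTorch versions raise an error when asked to optimize parameters that don't require gradients. A quick check and the usual filter idiom:

## How many parameters are still trainable after freezing?
trainable = sum(p.numel() for p in model_conv.parameters() if p.requires_grad)
total = sum(p.numel() for p in model_conv.parameters())
print(trainable, "/", total, "parameters will be updated")

## Pass only the trainable parameters to the optimizer, e.g.
## optim.SGD(filter(lambda p: p.requires_grad, model_conv.parameters()), lr=0.01, momentum=0.9)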

Changing the last layer to fit our new data is a bit tricky, and we need to carefully check how the underlying layers are represented. We have already seen this for ResNet and Inception_V3. Let's check the other networks.

SqueezeNet

model_conv = torchvision.models.squeezenet1_1(pretrained=True)

for name, child in model_conv.named_children():
    print(name)
'''
features
classifier
'''
## How many in_channels does the conv layer in the classifier have?
in_ftrs = model_conv.classifier[1].in_channels
## How many out_channels does the conv layer in the classifier have?
out_ftrs = model_conv.classifier[1].out_channels
## Converting the sequential container to a list of layers
features = list(model_conv.classifier.children())
## Changing the conv layer to the required number of output channels
features[1] = nn.Conv2d(in_ftrs, n_class, kernel_size=1, stride=1)
## Changing the pooling layer as per the architecture output
features[3] = nn.AvgPool2d(12, stride=1)
## Making a container that holds all the layers
model_conv.classifier = nn.Sequential(*features)
## Mentioning the number of output classes
model_conv.num_classes = n_class
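
One caveat worth checking (my own note): the AvgPool2d size has to match the spatial size of the feature map that comes out of model_conv.features for your input resolution. With the default 224x224 input, squeezenet1_1 produces a 13x13 map, so print the shape and set the pooling size accordingly:

import torch

## Check the spatial size going into the classifier (assumes 224x224 inputs)
with torch.no_grad():
    fmap = model_conv.features(torch.randn(1, 3, 224, 224))
print(fmap.shape)  # e.g. torch.Size([1, 512, 13, 13])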

DenseNet

model_conv = torchvision.models.densenet121(pretrained=True)
num_ftrs = model_conv.classifier.in_features
model_conv.classifier = nn.Linear(num_ftrs, n_class)

VGG and AlexNet

model_conv = torchvision.models.vgg19(pretrained=True)

## Number of filters in the bottleneck layer
num_ftrs = model_conv.classifier[6].in_features
## Convert all the layers to a list and remove the last one
features = list(model_conv.classifier.children())[:-1]
## Add the last layer based on the number of classes in our dataset
features.extend([nn.Linear(num_ftrs, n_class)])
## Convert it into a container and add it to our model class
model_conv.classifier = nn.Sequential(*features)
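
To confirm the swap, printing the new classifier head should show the final Linear layer mapping 4096 features to n_class (a quick check of my own, not from the original post). The same recipe applies to AlexNet, whose classifier also ends in a Linear layer at index 6.

print(model_conv.classifier[6])
## Linear(in_features=4096, out_features=<n_class>, bias=True)

## AlexNet works the same way:
## model_conv = torchvision.models.alexnet(pretrained=True)
## num_ftrs = model_conv.classifier[6].in_features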

We have seen how to freeze the required layers and change the last layer for different networks. Now let's train the network using one of these nets. I am not going to cover this here in detail, as it is already available in my GitHub repo.

Base code

  • Define a network
  • Load pre-trained weights if available
  • Freeze the layers which you don’t want to train (the frozen layers act as a feature extractor)
  • Mention the loss
  • Choose the optimizer for training
  • Train the network until your defined criterion is met.

Now let's look at how this is done for inception_v3 in PyTorch. We will freeze the first few layers and train the network using an SGD optimizer with momentum and cross-entropy loss.
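
The snippet below is a minimal sketch of that training loop, not the exact script from the repo. It assumes model_conv is the inception_v3 prepared above (first few children frozen, fc replaced), that a hypothetical train_loader yields batches of 299x299 images with integer labels, and that n_epochs is a placeholder setting.

import torch
import torch.nn as nn
import torch.optim as optim

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_conv = model_conv.to(device)

criterion = nn.CrossEntropyLoss()
## Only the parameters that are still trainable go to the optimizer
optimizer = optim.SGD(filter(lambda p: p.requires_grad, model_conv.parameters()),
                      lr=0.01, momentum=0.9)

n_epochs = 10
for epoch in range(n_epochs):
    model_conv.train()
    running_loss = 0.0
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = model_conv(images)
        ## In training mode inception_v3 returns (logits, aux_logits); keep the main logits
        if isinstance(outputs, tuple):
            outputs = outputs[0]
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    print("epoch {}: loss {:.4f}".format(epoch + 1, running_loss / len(train_loader)))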

Dataset Used

Metrics:

Update-1

The following additional pre-trained models are now supported:

  • resnext101_64x4d
  • resnext101_32x4d
  • nasnetalarge
  • inceptionresnetv2
  • inceptionv4

I am facing issues with bninception and vggm; I will update soon.

TO DO:

  1. Ensembling model outputs
  2. Model stacking
  3. Extracting bottleneck features and using ML to train the model
  4. Visualization using t-SNE
  5. Solve issue with bninception(Model is not training)
  6. Train Vggm network
  7. SE-Net implementation and training.

Final Submission Results:

GitHub Repo and stuff

This blog post is not yet complete. I will add more to it; stay tuned.

Update-2

Co-Contributors: Vikas Challa and Sachin Chandra
