Training an ML classifier model

Prakharpandey · Published in DeepKlarity · Oct 8, 2020

This blog is part of the series Clean or Messy Classifier. Check out the entire series for a detailed walkthrough of deploying a machine learning classifier into a web app using various frameworks.

In this article, we discuss how two different libraries, Fastai V2 and Keras, can be used to train an image classification model.

Fastai V2

Dataset:
The first step in training an ML model is collecting a dataset. For this project, we wanted images of both clean and messy surroundings. You can refer to this link for creating a good-quality dataset of images. Once the dataset was ready, we had to split it into training and validation sets, which is handled by the data-loading code below.

Loading the images:
With the help of ImageDataLoaders.from_name_func, we load all the images from a specified path and resize them at the same time.

from fastai.vision.all import *

# 'path' points to the folder of downloaded clean/messy images
dls = ImageDataLoaders.from_name_func(
    '', get_image_files(path), valid_pct=0.15, seed=42,
    label_func=is_clean, item_tfms=Resize(224))

is_clean is a helper function, imported from the utils file, that tells the model whether an image is clean or messy. Our dataset contained two kinds of images: clean images were named like ‘Clean_1.jpg’ (first letter uppercase) and messy images like ‘messy_1.jpg’ (first letter lowercase). This naming convention is how the model distinguishes a clean image from a messy one. The is_clean function is given below:

def is_clean(x):
    # Filenames starting with an uppercase letter ('Clean_1.jpg') are clean images
    return x[0].isupper()

Making a model learn from labelled data:

model = cnn_learner(dls, resnet18, metrics=error_rate)

ResNet-18 is a CNN-based architecture that is 18 layers deep.
Once the model was fitted, we used it to predict labels for the images in the validation set.
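
The fitting step itself is not shown above; below is a minimal sketch of how it is typically done with a fastai Learner. The number of epochs is an illustrative choice, not necessarily the value used in the original project.

model.fine_tune(4)  # fine-tune the pretrained ResNet-18; 4 epochs is illustrative

# Inspect predictions on the validation split created by valid_pct above
interp = ClassificationInterpretation.from_learner(model)
interp.plot_confusion_matrix()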

Testing the model on a test dataset:
After getting results on the validation set, it was important to test the model on a held-out test set for an unbiased evaluation.
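
A minimal sketch of predicting on a single held-out test image with the trained Learner (the file path is purely illustrative):

# Learner.predict returns the decoded label, its index, and the class probabilities
pred_class, pred_idx, probs = model.predict('test_images/room_1.jpg')
print(pred_class, probs[pred_idx])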

Saving the model:

import os

os.makedirs("models", exist_ok=True)
model.export('models/model_v0.pkl')
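
As a quick check that the export worked (separate from the web-app deployment covered at the end), the pickled Learner can be loaded back with load_learner. Note that is_clean must be importable in the loading environment, since it was used as the label function.

from fastai.vision.all import load_learner

restored = load_learner('models/model_v0.pkl')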

Keras

For working with Keras, we used a transfer-learning approach to train the model. Transfer learning is very popular in deep learning because it can train deep neural networks with relatively little data.
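
The post does not spell out the exact pretrained base network here; the sketch below assumes MobileNetV2 as the base and a single sigmoid output for the binary clean/messy task, just to illustrate the transfer-learning setup.

from tensorflow import keras

IMG_SIZE = (224, 224)  # illustrative input size

# Pretrained base (assumed: MobileNetV2 on ImageNet), frozen so only the head is trained
base_model = keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights='imagenet')
base_model.trainable = False

# Small classification head on top of the frozen base
model = keras.Sequential([
    base_model,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])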

Data Augmentation:

Data augmentation means making the training data “greater”, i.e. artificially increasing it. In Keras this is done with the ImageDataGenerator class. It accepts a batch of images, applies a series of random transformations to each image in the batch (random rotation, resizing, shearing, etc.), and then replaces the original batch with the new, randomly transformed batch.
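
A sketch of how such a train_generator is typically built; the augmentation parameters and directory layout below are illustrative assumptions, not the exact values from the project.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Random transformations applied on the fly to each training batch
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=20,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

# Assumes images are organised as data/train/clean and data/train/messy
train_generator = train_datagen.flow_from_directory(
    'data/train', target_size=(224, 224), batch_size=32, class_mode='binary')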

A validation_generator is made in the same way.

Training the model:
The model is trained using the following piece of code:

# EPOCHS is defined earlier in the training script
model.fit(
    train_generator,
    epochs=EPOCHS,
    validation_data=validation_generator)

Evaluating the model:
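
A minimal sketch of this step using Keras's evaluate API on the validation generator defined above:

# Returns the loss and the metrics passed to model.compile (here: accuracy)
val_loss, val_acc = model.evaluate(validation_generator)
print(f"validation loss: {val_loss:.4f}, accuracy: {val_acc:.4f}")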

Testing the model:
As with the Fastai model, we test on a held-out test set to get an unbiased evaluation.
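
A sketch of running the model on a single test image. The path is illustrative; with flow_from_directory, class indices follow the alphabetical order of the folder names, so 0 would correspond to 'clean' and 1 to 'messy'.

import numpy as np
from tensorflow.keras.preprocessing import image

img = image.load_img('data/test/room_1.jpg', target_size=(224, 224))
x = np.expand_dims(image.img_to_array(img) / 255.0, axis=0)
prob_messy = float(model.predict(x)[0][0])  # sigmoid output = probability of class 1
print('messy' if prob_messy >= 0.5 else 'clean', prob_messy)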

Saving the model:

keras.models.save_model(model, "models/keras_model.h5")
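
For a quick local check (again, separate from the deployment approaches below), the saved HDF5 file can be reloaded with:

restored = keras.models.load_model("models/keras_model.h5")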

Loading the model

There are various approaches to loading an ML model in an end-user product. To understand how to load an ML model, you can refer to the links given below; the models loaded there are the same ones discussed in this blog.
This link shows how to load a model in a backend API and return an appropriate response.
This link shows how to load a model in the frontend of a web app using TensorFlow.js.

Refer to this link to download the model.

That’s it for this article. Please comment and share if you face any issues or errors using this approach. If you have used an alternate approach, please share it as well.
