Week 4: Blood Cell Classification

Tolga Furkan Güler · Published in bbm406f19 · 4 min read · Dec 22, 2019

Team Members: Emre Tunç, Muhammed Sezer Berker, Tolga Furkan Güler

Hello everyone, this is the fourth article in the series on our Machine Learning Course Project about Blood Cell Classification. As a reminder, our purpose is to classify blood cell images and predict the possible disease according to the blood cells we detect.

Leukemia patients typically have far more lymphocytes in their blood.

Previously, after reviewing our dataset, we decided which method to use and applied some preprocessing techniques to improve the performance of our model.

This week, we surveyed some related works and classification techniques. Based on these related works, we discussed which techniques to apply and tried to determine the optimal hyperparameter values. We then trained our model and looked at the results, planning to perform various optimization steps afterwards if necessary.

Related Works

VGG16 — Convolutional Network for Classification

When we examined other studies, we found that VGG16 performs better on datasets like ours than other models. Therefore, we used the VGG16 model with all of its pre-trained weights ready to be used.

This type of study, which uses a pretrained model, is called transfer learning. In short, transfer learning differs from traditional machine learning in that it reuses models pre-trained on one task to jump-start development on a new task or problem. Since real-world problems generally do not come with millions of labeled data points, it makes sense to use this approach in training.
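As a concrete illustration, here is a minimal sketch of loading a pre-trained VGG16. This assumes a PyTorch/torchvision setup; the post does not name a framework, so that choice is our assumption:

```python
import torch
from torchvision import models

# Download VGG16 with weights pre-trained on ImageNet
# (assumption: PyTorch/torchvision; the same idea exists in Keras, etc.)
vgg16 = models.vgg16(pretrained=True)
print(vgg16)  # inspect the convolutional "features" and "classifier" parts
```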

VGG16 Architecture

This architecture comes from the VGG group at Oxford. It improves over AlexNet by replacing large kernel-sized filters (11×11 and 5×5 in the first and second convolutional layers, respectively) with multiple 3×3 kernel-sized filters stacked one after another. For a given receptive field (the effective area of the input image on which an output depends), multiple stacked smaller kernels are better than one larger kernel, because the extra non-linear layers increase the depth of the network, enabling it to learn more complex features, and at a lower computational cost.
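To make the "lower cost" claim concrete, here is the standard back-of-the-envelope parameter count from the original VGG paper, for C input and C output channels (biases ignored):

```latex
% one 5x5 conv vs. two stacked 3x3 convs (same 5x5 receptive field):
5^2 C^2 = 25C^2 \quad \text{vs.} \quad 2 \cdot 3^2 C^2 = 18C^2
% one 7x7 conv vs. three stacked 3x3 convs (same 7x7 receptive field):
7^2 C^2 = 49C^2 \quad \text{vs.} \quad 3 \cdot 3^2 C^2 = 27C^2
```

The stacked 3×3 design covers the same input area with fewer parameters and adds extra non-linearities along the way.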

Built using:

  • Convolutional layers (3×3 kernels)
  • Max pooling layers (2×2)
  • Fully connected layers at the end

VGG16 has 13 convolutional and 3 fully connected layers, 16 weight layers in total, and takes an input image of size 224 × 224 × 3 (an RGB image).
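Because VGG16 expects 224 × 224 RGB inputs normalized with ImageNet statistics, blood cell images have to be resized accordingly. A minimal sketch, again assuming torchvision (the exact preprocessing used in the project is not stated here):

```python
from torchvision import transforms

# Resize blood cell images to VGG16's expected 224x224 input and
# normalize with the ImageNet mean/std the pre-trained weights assume.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```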

Experiment Result

We use the VGG16 model with all of its pre-trained weights ready to be used.

Training a model from scratch requires a very large dataset and takes a long time. So we decided to reuse the feature-extraction part of a pre-trained model, since the low-level features extracted from images (edges, circles, lines, etc.) are similar across tasks.
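A minimal sketch of reusing the feature-extraction part in PyTorch: freeze the convolutional layers and replace the final classification layer. The class count of 4 is our assumption for illustration (e.g. eosinophil, lymphocyte, monocyte, neutrophil), not something stated in this post:

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # assumption: four blood cell types

model = models.vgg16(pretrained=True)

# Freeze the convolutional feature extractor so only the new head is trained.
for param in model.features.parameters():
    param.requires_grad = False

# Replace the last fully connected layer (1000 ImageNet classes) with our own.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_CLASSES)
```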

We will observe the effects of hyperparameters such as batch size, number of epochs, and learning rate, and determine the best values for these parameters. We use the Adam optimizer and cross-entropy loss in this step.
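One simple way to observe these effects is a small grid search over candidate values. A hypothetical sketch, where both the grid values and the `train_and_evaluate` helper are illustrative placeholders rather than the exact procedure we ran:

```python
import itertools

# Hypothetical candidate values; the exact grid we searched is not listed here.
learning_rates = [1e-3, 1e-4, 1e-5]
batch_sizes = [4, 8, 16]
epoch_counts = [5, 10]

for lr, bs, epochs in itertools.product(learning_rates, batch_sizes, epoch_counts):
    # train_and_evaluate is a placeholder for a full train/validation run
    accuracy = train_and_evaluate(lr=lr, batch_size=bs, epochs=epochs)
    print(f"lr={lr}, batch_size={bs}, epochs={epochs} -> val acc {accuracy:.3f}")
```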

Adam is an optimization algorithm that can be used instead of the classical stochastic gradient descent procedure to update network weights iteratively based on training data. Rather than using one fixed learning rate for everything, Adam adapts the effective step size for each parameter using running estimates of the first and second moments of the gradients.
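For reference, the standard Adam update for a gradient g_t at step t (Kingma & Ba, 2015), with decay rates β₁, β₂, learning rate α, and a small ε:

```latex
m_t = \beta_1 m_{t-1} + (1-\beta_1)\, g_t \qquad v_t = \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2
\hat{m}_t = \frac{m_t}{1-\beta_1^t} \qquad \hat{v}_t = \frac{v_t}{1-\beta_2^t} \qquad
\theta_t = \theta_{t-1} - \alpha\, \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}
```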

Cross-entropy loss measures the performance of a classification model whose output is a probability value between 0 and 1; the loss grows as the predicted probability diverges from the true label, and for a one-hot label it reduces to the negative log of the probability assigned to the correct class.
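Putting the optimizer and loss together, a minimal training loop sketch. The names here are hypothetical: `model` is the VGG16 from the sketches above, `train_loader` is an assumed DataLoader, and the learning rate is an assumed value, not one reported in this post:

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()  # combines log-softmax with negative log-likelihood
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad),  # only the unfrozen head
    lr=1e-4,                                             # assumed value
)

model.train()
for epoch in range(10):                    # 10 epochs, as in our experiment
    for images, labels in train_loader:    # hypothetical DataLoader, batch_size=4
        optimizer.zero_grad()              # reset gradients from the previous step
        loss = criterion(model(images), labels)
        loss.backward()                    # backpropagate through the unfrozen layers
        optimizer.step()                   # Adam weight update
```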

While training our model, we set the batch size to 4 and trained for 10 epochs, and our results were as follows:

This week we examined some related works and drew some conclusions from them. We then trained our model and obtained initial results. In the coming weeks, we will work on improving the performance of our model. Thank you for reading and for your time. See you next week.
