Week 5: Blood Cell Classification

Tolga Furkan Güler
Published in bbm406f19 · Dec 29, 2019

Team Members: Emre Tunç, Muhammed Sezer Berker, Tolga Furkan Güler

Hello everyone, this is the fifth article in the series about our Machine Learning Course Project on Blood Cell Classification. To recap, our purpose is to classify blood cell images and predict the possible disease according to the blood cell type we have detected. Some conditions associated with blood cells are: anemia, leukopenia, leukocytosis, and platelet disorders.

Last week, we trained our data with VGG-16, a CNN model, and reported its results. The accuracy and loss values we obtained were not bad, but since we thought other models might represent our data better, we wanted to try other CNN architectures and see their results. We will compare all of these models in several ways and choose the one that is most suitable for us.

Last week we tried the VGG-16 and the results were:

VGG-16

This week we used ResNet-50, a model we observed in a related work, and AlexNet.

ResNet-50

ResNet-50 is a deep residual network; the “50” refers to the number of layers it has. It belongs to the family of convolutional neural networks, and ResNet variants are among the most popular architectures for image classification.
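The key idea behind residual networks is the skip connection: each block learns a residual function and adds the input back to its output, which makes very deep networks trainable. A minimal sketch of such a block in PyTorch (our own illustration, not the exact ResNet-50 bottleneck block):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Minimal residual block: output = ReLU(F(x) + x)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        residual = x                       # skip connection keeps the input
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + residual)   # add the input back before the activation

block = ResidualBlock(16)
y = block(torch.randn(1, 16, 32, 32))
print(y.shape)  # torch.Size([1, 16, 32, 32])
```

Because the output has the same shape as the input, dozens of such blocks can be stacked, which is exactly how ResNet-50 reaches its depth.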

ResNet-50 Architecture

We freeze the weights in the feature-extraction layers of the ResNet-50 architecture and train only the fully connected layers — the transfer learning method. The results we get are as follows:

AlexNet

AlexNet was much larger than previous CNNs used for computer vision tasks (e.g., Yann LeCun’s LeNet, published in 1998). It has 60 million parameters and 650,000 neurons and took five to six days to train on two GTX 580 3GB GPUs.

AlexNet consists of 5 Convolutional Layers and 3 Fully Connected Layers.

Multiple convolutional kernels (a.k.a. filters) extract interesting features from an image. In a single convolutional layer, there are usually many kernels of the same size. For example, the first convolutional layer of AlexNet contains 96 kernels of size 11x11x3. Note that the width and height of a kernel are usually the same, and its depth matches the number of input channels.
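The spatial size of a convolutional layer's output follows from the standard formula (W − K + 2P) / S + 1. A quick check for AlexNet's first layer, using the commonly cited 227x227x3 input and stride 4 (standard AlexNet figures, not from our own experiments):

```python
def conv_output_size(w, k, s=1, p=0):
    """Output width/height of a conv layer: (W - K + 2P) / S + 1."""
    return (w - k + 2 * p) // s + 1

# AlexNet conv1: 227x227 input, 11x11 kernel, stride 4 -> 55x55 spatially;
# with 96 kernels this gives a 55x55x96 output volume.
print(conv_output_size(227, 11, s=4))  # 55
```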

AlexNet Architecture

As with ResNet-50 and VGG-16, we freeze the weights in the feature-extraction layers of the AlexNet architecture and train only the fully connected layers with the transfer learning method. The results we get are as follows:

CNN architecture used in Related Work

The architecture consists of 7 layers: the first 5 perform feature extraction, and the last 2 are fully connected.

In the first convolutional layer: kernel size = 5, stride = 1, padding = 0, number of filters = 16. Max pooling is then applied with kernel = 2, stride = 2.

In the second convolutional layer: kernel size = 3, stride = 1, padding = 0, number of filters = 32. Max pooling is then applied with kernel = 2, stride = 2.

In the third convolutional layer: kernel size = 3, stride = 1, padding = 0, number of filters = 64.

ReLU is used as the activation function in each convolutional layer.

Finally, Softmax is used as the classifier in the fully connected layer.

The related work used a different number of classes, but our project has 4 classes, and in order to use this model, we resized the images to 100x100.
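Putting the layer specification above together in PyTorch gives the following sketch (our own reconstruction: the 21x21x64 flattened size follows from the 100x100 input and the conv/pool parameters above, while the hidden width of 128 in the first fully connected layer is an assumption, since the related work's value is not stated here):

```python
import torch
import torch.nn as nn

class RelatedWorkCNN(nn.Module):
    """Reconstruction of the 7-layer related-work architecture."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=1, padding=0),   # 100 -> 96
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),                  # 96 -> 48
            nn.Conv2d(16, 32, kernel_size=3, stride=1, padding=0),  # 48 -> 46
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),                  # 46 -> 23
            nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=0),  # 23 -> 21
            nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(21 * 21 * 64, 128),  # hidden width 128 is our assumption
            nn.ReLU(),
            nn.Linear(128, num_classes),   # softmax is applied by the loss function
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = RelatedWorkCNN()
out = model(torch.randn(1, 3, 100, 100))
print(out.shape)  # torch.Size([1, 4])
```

Note that the final softmax is typically folded into `nn.CrossEntropyLoss` during training rather than added as a layer.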

After using this architecture and training our data set, the results are as follows:

After building our models, we tested their success with our test dataset. The results are shown in the table below.

This week we trained our dataset with different CNN models and compared their test results. We found that the most suitable model for us is VGG-16. Thank you for reading and for your time. See you next week.
