Week 4: Histopathological Cancer Detection

Tugay Çalyan · Published in bbm406f19 · Dec 30, 2019

Hello everyone, we are Tugay Calyan, Anil Aydingun and Denizcan Bagdatlioglu. In this week’s blog post, we will introduce VGG-16, one of the pre-trained models we use in our project. Our blog posts from previous weeks:

VGG-16

VGG-16 is a deep convolutional network that the University of Oxford Visual Geometry Group used to achieve strong results in the ILSVRC-2014 competition. It contains 13 convolutional layers and 3 fully connected layers; counting the max pooling, ReLU, dropout and softmax layers as well, there are 41 layers in total. The image fed to the input layer is 224x224x3, and the last layer is the classification layer. On the ImageNet database, the model achieved 89% accuracy. Paper: https://arxiv.org/abs/1409.1556

[Image from Neurohive]

It is a simple network model whose convolution layers are stacked in groups of 2 or 3. The 7x7x512 output of the final pooling stage is flattened into a feature vector and passed through fully connected layers of 4096 neurons each; a softmax over the 1000 classes is computed after the two FC layers. In total, approximately 138 million parameters are learned. As in other models, the height and width of the feature maps decrease from input to output, while the depth (number of channels) increases.

From Neurohive
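For readers who want to inspect this architecture themselves, the pre-trained model can be loaded in one line from Keras Applications. This is a generic sketch assuming a TensorFlow/Keras environment, not our exact project code:

```python
from tensorflow.keras.applications import VGG16

# Load VGG-16 with its ImageNet weights; the input is fixed at 224x224x3.
model = VGG16(weights="imagenet", include_top=True)

model.summary()            # lists the layer stack and the ~138M parameters
print(model.output_shape)  # (None, 1000): softmax over the 1000 ImageNet classes
```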

Filters with different weights are learned at each convolution layer of the model, and as the number of layers increases, the features these filters capture become more abstract; this is what gives ‘depth’ to the image representation.

The main features of VGG are listed below (a short code sketch after the figure makes them concrete):

  • In VGG, every convolution layer uses 3x3 filters.
  • Input feature maps are padded by 1 pixel, so convolving with a 3x3 filter produces an output of the same spatial size.
  • The “stride” parameter is set to 1.
  • Max Pooling layers use a 2x2 window with a “stride” of 2.
[Image from Analyticsvidhya]
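To make these rules concrete, here is a minimal sketch of one VGG-style block in Keras; the function name and arguments are illustrative, not taken from the official implementation:

```python
from tensorflow.keras import layers, models

def vgg_block(num_convs, filters):
    """One VGG-style block: 3x3 convolutions (stride 1, 1-pixel pad) + 2x2 max pool."""
    block = models.Sequential()
    for _ in range(num_convs):
        # padding="same" plays the role of the 1-pixel pad,
        # so each 3x3 convolution keeps the spatial size unchanged.
        block.add(layers.Conv2D(filters, kernel_size=3, strides=1,
                                padding="same", activation="relu"))
    # 2x2 max pooling with stride 2 halves the height and width.
    block.add(layers.MaxPooling2D(pool_size=2, strides=2))
    return block
```

Stacking five such blocks with (2, 2, 3, 3, 3) convolutions and (64, 128, 256, 512, 512) filters gives exactly the 13 convolutional layers of VGG-16.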

Conclusion

In the experimental part, we reduced our dataset to a subset of 10,000 images and worked on Google Colab. We ran these experiments by fine-tuning the pre-trained VGG-16 model, and as an extra step we applied data normalization. However, when we look at previous studies, we see that the full set of 220,000 training images was further enlarged with data augmentation. In other words, our study ran into underfitting because the dataset was too small, so we do not have good results for now.
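For completeness, here is a minimal sketch of the kind of fine-tuning setup described above. We assume a recent TensorFlow/Keras, 96x96 input patches (the patch size of the Histopathologic Cancer Detection dataset) and a binary tumor / no-tumor label; the head layers, normalization and dataset objects are illustrative assumptions, not our exact project code:

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Convolutional base only; the ImageNet weights stay frozen,
# so only the newly added classification head is trained.
base = VGG16(weights="imagenet", include_top=False, input_shape=(96, 96, 3))
base.trainable = False

model = models.Sequential([
    layers.Rescaling(1.0 / 255),            # simple data normalization to [0, 1]
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),   # head size is an assumption
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # binary output: tumor vs. no tumor
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=...)  # datasets assumed
```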
