Week 6 — Histopathologic Cancer Detection

Furkan Kaya
Published in bbm406f19
Jan 6, 2020 · 4 min read

Hello everyone! Today we share the final post in the series on our Machine Learning Course Project, Cancer Detection with Histopathological Data. This week we have come to the end of the project, and we will walk you through the details of our network and its performance. Let's see what happened at the end of this six-week journey!

Week 1 — Histopathologic Cancer Detection
Week 2 — Histopathologic Cancer Detection
Week 3 — Histopathologic Cancer Detection
Week 4 — Histopathologic Cancer Detection
Week 5 — Histopathologic Cancer Detection

Our CNN model

Readers who followed my previous blog posts already know this, but let us summarize for those meeting us for the first time: in this project we use a convolutional neural network (CNN) to detect cancer in histopathological data. Our model is visualized in the figure above. Now let's examine its details and performance together!

Our model consists of 6 main blocks. The first block contains 4 sequential layers:

· Conv

Our first convolution layer uses 32 filters with a 3x3 filter size.

· BatchNorm

For better results, we applied Batch Normalization after the convolution: it normalizes the layer's activations to zero mean and unit variance, which stabilizes and speeds up training.

· ReLU (activation)

We added a nonlinear activation after every convolutional layer so that the network's output is not just a linear combination of its inputs. We chose the ReLU activation function because it is cheap to compute and generally works well in deep networks.

· MaxPooling

After the activation, we reduced the size of the representation with Max Pooling, which cuts the computation in the following layers. When the pooling filter is applied, the maximum value is taken over each non-overlapping sub-region, and a new, smaller matrix is built from these values. We use a 2x2 pooling size, which halves the spatial dimensions: our 96x96 input becomes 48x48 after the first block. A tiny worked example is shown below.
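To make the pooling operation concrete, here is a tiny NumPy sketch (not from our actual code) of 2x2 max pooling on a 4x4 matrix:

```python
import numpy as np

# 2x2 max pooling on a toy 4x4 feature map: each non-overlapping
# 2x2 sub-region is replaced by its maximum value.
fmap = np.array([[1, 3, 2, 0],
                 [4, 2, 1, 5],
                 [0, 1, 3, 2],
                 [2, 6, 0, 1]])

# Split into 2x2 blocks, then take the max of each block.
pooled = fmap.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)
# [[4 5]
#  [6 3]]
```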

We repeated the Conv — BatchNorm — ReLU — MaxPool sequence for blocks 2, 3, 4, and 5, just as in the first block.

Our 6th block, the Fully Connected layer, is very important for a CNN. Once the convolution and pooling layers are done, it takes the features produced by the previous layers and uses them to label the image. Finally, we used sigmoid, an activation function well suited to binary classification, to output the probability that a patch contains tumor tissue.
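To show how the six blocks fit together, here is a minimal Keras sketch of an architecture like ours. Treat it as an illustration rather than our exact training code: the post only fixes the first block's 32 filters, so the filter counts of the later blocks (and the doubling pattern) are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(x, filters):
    # One block: Conv -> BatchNorm -> ReLU -> MaxPool
    x = layers.Conv2D(filters, (3, 3), padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    return layers.MaxPooling2D((2, 2))(x)

inputs = layers.Input(shape=(96, 96, 3))  # 96x96 RGB patches
x = inputs
# Blocks 1-5; only the first filter count (32) comes from the post,
# the rest are assumed for illustration.
for filters in (32, 64, 128, 256, 512):
    x = conv_block(x, filters)

# Block 6: fully connected head with a sigmoid for binary output.
x = layers.Flatten()(x)
outputs = layers.Dense(1, activation="sigmoid")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
```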

And at last, our model is ready! For the results, read on!

Experimental Results

As we mentioned last week, we used the confusion matrix, AUC, and ROC curve metrics to measure the success of our results.

Confusion Matrix:

Below is the confusion matrix of our model. The recall and precision values can be calculated from this matrix.

Confusion matrix of our model
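If you want to reproduce this step, a confusion matrix like ours can be computed with scikit-learn. The labels and probabilities below are toy placeholders, not our validation data; thresholding the sigmoid output at 0.5 turns probabilities into hard predictions:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                  # toy ground truth
y_prob = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3])  # toy model scores
y_pred = (y_prob >= 0.5).astype(int)                          # threshold at 0.5

# Rows = true class, columns = predicted class:
# [[TN, FP],
#  [FN, TP]]
print(confusion_matrix(y_true, y_pred))
```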

Assuming you remember the definitions and formulas we mentioned last week, the recall and precision calculations are as follows; a small sanity-check snippet follows the list.

· Recall: Of all the samples that are actually positive, how many did we predict correctly?

Our Recall = 97.71%

· Precision: Of all the samples we predicted as positive, how many are actually positive?

Our Precision = 94.9%
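As a sanity check, here is the arithmetic behind those two numbers. The counts below are illustrative, chosen only so that they reproduce the reported percentages; they are not read from our actual matrix:

```python
# Hypothetical confusion-matrix counts that reproduce the reported metrics.
tp, fp, fn = 9771, 525, 229

recall = tp / (tp + fn)       # 9771 / 10000  = 0.9771
precision = tp / (tp + fp)    # 9771 / 10296 ~= 0.949
print(f"recall = {recall:.2%}, precision = {precision:.2%}")
```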

AUC — ROC Curve:

Validation AUC/Epochs

At the end of 30 epochs, the AUC value obtained from the ROC curve rises to about 97%. The higher the AUC, the better the model's ability to separate the classes. The value we reached is not bad at all, what do you say? :)
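For completeness, this is how a validation AUC and the ROC curve points can be obtained with scikit-learn; the arrays are toy stand-ins for the real validation labels and the model's predicted probabilities:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

y_val = np.array([0, 0, 1, 1, 0, 1, 1, 0])                   # toy labels
y_prob = np.array([0.1, 0.4, 0.8, 0.9, 0.3, 0.6, 0.7, 0.2])  # toy scores

auc = roc_auc_score(y_val, y_prob)       # area under the ROC curve
fpr, tpr, _ = roc_curve(y_val, y_prob)   # points of the curve itself
print(f"AUC = {auc:.3f}")
```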

Quantitative Results

We’ve been working for weeks, so a little competition is our right! Let’s see how the other studies we follow compare and where we stand; the excitement is at its peak!

Comparison table of networks

Even if the difference between the results is not large, we are still ahead! You, who have been traveling with us for weeks, deserve to celebrate! We have come to the end of this study, but we hope to meet you again in future work. Until then, take care of yourselves :)
