Brain Tumor Detection Using Convolutional Neural Networks

Mohamed Ali Habib
Aug 31, 2019


Introduction:

In this blog post, you will see an example of a brain tumor detector built with a convolutional neural network.

Domain-related Background:

A brain tumor is a mass or growth of abnormal cells in the brain. Brain tumors can be cancerous (malignant) or noncancerous (benign).

One of the tests used to diagnose brain tumors is magnetic resonance imaging (MRI).

The Dataset:

The dataset is a collection of brain MRI images found on Kaggle. You can find it here.

The dataset contains 2 folders, yes and no, which together hold 253 brain MRI images. The folder yes contains 155 images of tumorous brains and the folder no contains 98 images of non-tumorous brains.

This means that 61% of the data (155 images) are positive examples and 39% (98 images) are negative.

Data Preparation:

Data Augmentation:

Data augmentation is a strategy that enables practitioners to significantly increase the diversity of data available for training models, without actually collecting new data.

Since this is a very small dataset, there weren't enough examples to train the neural network well. Data augmentation was also useful in addressing the class imbalance issue.

Before data augmentation, the dataset consisted of:

155 positive and 98 negative examples, resulting in 253 example images.

After data augmentation, now the dataset consists of:

1,085 positive examples (53%) and 980 negative examples (47%), resulting in 2,065 example images.
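If you're wondering how such augmented images can be generated, here is a minimal sketch using Keras' ImageDataGenerator; the transformation ranges and the folder names (data/, augmented/) are illustrative assumptions, not necessarily the exact settings used here:

```python
from keras.preprocessing.image import ImageDataGenerator

# The transformation ranges below are illustrative choices.
datagen = ImageDataGenerator(
    rotation_range=10,
    width_shift_range=0.1,
    height_shift_range=0.1,
    shear_range=0.1,
    horizontal_flip=True,
    vertical_flip=True,
    fill_mode='nearest',
)

# With save_to_dir set, every image yielded by the generator is also
# written to disk; 'data/' (containing yes/ and no/ subfolders) and
# 'augmented/' are hypothetical folders ('augmented/' must already exist).
gen = datagen.flow_from_directory(
    'data/',
    target_size=(240, 240),
    batch_size=32,
    save_to_dir='augmented/',
    save_format='jpg',
)

for _ in range(20):  # each iteration writes one augmented batch to disk
    next(gen)
```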

Data Preprocessing:

For every image, the following preprocessing steps were applied:

  1. Crop the part of the image that contains only the brain (the most important part of the image). You can read more about the cropping technique used to find the extreme top, bottom, left, and right points of the brain in this blog: Finding extreme points in contours with OpenCV.
  2. Resize the image to a shape of (240, 240, 3) = (image_width, image_height, number_of_channels). Images in the dataset come in different sizes, so all of them must be resized to the same shape before being fed to the neural network.
  3. Apply normalization to scale pixel values to the range 0–1 (a sketch of all three steps follows this list).
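Here is a minimal sketch of these three steps, assuming OpenCV and imutils are available; the blur kernel and threshold value are illustrative choices:

```python
import cv2
import imutils
import numpy as np

def preprocess(image):
    """Crop to the brain region, resize to (240, 240), and scale to [0, 1]."""
    # Find the largest contour on a blurred, thresholded grayscale copy.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    thresh = cv2.threshold(gray, 45, 255, cv2.THRESH_BINARY)[1]
    thresh = cv2.erode(thresh, None, iterations=2)
    thresh = cv2.dilate(thresh, None, iterations=2)
    cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    c = max(imutils.grab_contours(cnts), key=cv2.contourArea)

    # The extreme points of that contour define the crop box.
    left = tuple(c[c[:, :, 0].argmin()][0])
    right = tuple(c[c[:, :, 0].argmax()][0])
    top = tuple(c[c[:, :, 1].argmin()][0])
    bottom = tuple(c[c[:, :, 1].argmax()][0])
    cropped = image[top[1]:bottom[1], left[0]:right[0]]

    # Resize to the network's input shape and normalize to 0-1.
    resized = cv2.resize(cropped, (240, 240))
    return resized.astype(np.float32) / 255.0
```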

Data Split:

The data was split in the following way:

70% of the data for training.

15% of the data for validation (development).

15% of the data for testing.
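One common way to realize a 70/15/15 split is to apply scikit-learn's train_test_split twice; the stratify and random_state arguments below are my own additions:

```python
from sklearn.model_selection import train_test_split

# X: array of preprocessed images, y: binary labels (1 = tumorous).
# First hold out 30% of the data, then split that 30% evenly into
# validation and test sets, giving a 70/15/15 split overall.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.50, stratify=y_rest, random_state=42)
```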

Initial Experiments:

Since this is a small dataset (and working with small datasets is common in computer vision problems), I thought that transfer learning would be a good choice to start with.

Firstly, I applied transfer learning using ResNet50 and VGG-16.

I replaced the last layer with a sigmoid output unit to represent the output of our binary problem, and I froze the parameters of all the other layers.
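Here is a minimal sketch of that setup using the Keras VGG16 application (the ResNet50 variant is analogous); the Flatten-then-Dense head and the optimizer are assumptions on my part:

```python
from keras.applications import VGG16
from keras.layers import Dense, Flatten
from keras.models import Model

# Load the convolutional base pretrained on ImageNet, without its classifier head.
base = VGG16(weights='imagenet', include_top=False, input_shape=(240, 240, 3))
for layer in base.layers:
    layer.trainable = False  # freeze all pretrained parameters

# Replace the top with a single sigmoid unit for this binary problem.
x = Flatten()(base.output)
out = Dense(1, activation='sigmoid')(x)
model = Model(inputs=base.input, outputs=out)

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```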

But these models were too complex for the size of the data and were overfitting. Of course, you may get good results applying transfer learning with these models using data augmentation. But I was training on a computer with a 6th-generation Intel i7 CPU and 8 GB of memory, so I had to take the computational complexity and memory limitations into consideration.

So why not try a simpler architecture and train it from scratch? And it worked :)

The Neural Network Architecture:

Understanding the architecture:

Each input x (image) has a shape of (240, 240, 3) and is fed into the neural network. It then goes through the following layers (a Keras sketch of the whole architecture follows the list):

  1. A Zero Padding layer with a padding of (2, 2).
  2. A convolutional layer with 32 filters, a filter size of (7, 7), and a stride of 1.
  3. A batch normalization layer to normalize the activations and speed up training.
  4. A ReLU activation layer.
  5. A Max Pooling layer with f=4 and s=4 (pool size 4, stride 4).
  6. Another Max Pooling layer with f=4 and s=4, same as before, added to reduce the computational cost.
  7. A Flatten layer to flatten the 3-dimensional volume into a one-dimensional vector.
  8. A Dense (output unit) fully connected layer with one neuron and a sigmoid activation (since this is a binary classification task).
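Putting the list together, here is a minimal sketch of this architecture in Keras; the optimizer in the compile step is an assumption (binary cross-entropy is the natural loss for a sigmoid output):

```python
from keras.models import Sequential
from keras.layers import (ZeroPadding2D, Conv2D, BatchNormalization,
                          Activation, MaxPooling2D, Flatten, Dense)

model = Sequential([
    ZeroPadding2D((2, 2), input_shape=(240, 240, 3)),  # 1. zero padding
    Conv2D(32, (7, 7), strides=(1, 1)),                # 2. 32 filters of size 7x7, stride 1
    BatchNormalization(),                              # 3. normalize activations
    Activation('relu'),                                # 4. ReLU
    MaxPooling2D(pool_size=(4, 4), strides=(4, 4)),    # 5. first max pooling
    MaxPooling2D(pool_size=(4, 4), strides=(4, 4)),    # 6. second max pooling
    Flatten(),                                         # 7. 3D volume -> 1D vector
    Dense(1, activation='sigmoid'),                    # 8. binary output
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```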

Training The Model:

The model was trained for 24 epochs and these are the loss & accuracy plots:

Figure: the training and validation loss across the epochs.
Figure: the training and validation accuracy across the epochs.

As shown in the figure, the model with the best validation accuracy (which is 91%) was achieved on the 23rd epoch.
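One way to capture that best-epoch model during training is a Keras ModelCheckpoint callback; the filename and batch size below are assumptions on my part:

```python
from keras.callbacks import ModelCheckpoint

# Save weights only when validation accuracy improves, so the
# 23rd-epoch model is what survives the full 24-epoch run.
checkpoint = ModelCheckpoint(
    'best_model.h5',         # hypothetical filename
    monitor='val_accuracy',  # use 'val_acc' on older Keras versions
    save_best_only=True,
    mode='max',
)

history = model.fit(X_train, y_train,
                    validation_data=(X_val, y_val),
                    epochs=24, batch_size=32,
                    callbacks=[checkpoint])
```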

Results:

Now, the best model (the one with the best validation accuracy) detects brain tumors with:

88.7% accuracy on the test set.

0.88 F1 score on the test set.

As we see, the results are reasonable.
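For reference, this is how such metrics can be computed with scikit-learn; best_model, X_test, and y_test carry over from the earlier sketches, and the 0.5 threshold is the standard assumption for a sigmoid output:

```python
from sklearn.metrics import accuracy_score, f1_score

# Threshold the sigmoid outputs at 0.5 to get hard labels,
# then score the best model on the held-out test set.
probs = best_model.predict(X_test)
preds = (probs > 0.5).astype(int).ravel()

print('Accuracy:', accuracy_score(y_test, preds))  # ~0.887 reported above
print('F1 score:', f1_score(y_test, preds))        # ~0.88 reported above
```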

Performance table of the best model:

Metric      Validation set   Test set
Accuracy    91%              88.7%
F1 score    —                0.88

Conclusion:

You can find the code in this GitHub repo. Contributions are welcome!

I hope you have found this useful.

If you liked the blog, clap for it ;)

And, let me know if you have any questions down in the comments.

Till next time!
