Brain Tumor Detector part 5

Nelson Punch
Software-Dev-Explore
2 min read · Nov 9, 2023
Photo by Dustin Bowdige on Unsplash

Introduction

Fine-tuning a pre-trained model can lead to even better performance, though not always.

To fine-tune a pre-trained model, I need to unfreeze a number of layers in the model so that those layers’ weights are updated on each training iteration.

The usual approach is to unfreeze a number of the top layers of a pre-trained model and then unfreeze more layers depending on performance. Most of the bottom layers in a pre-trained model are kept frozen because they learned to identify basic shapes, unlike the top layers, which learned to identify more complex patterns. This is also known as transfer learning.

Code

Notebook with code

Fine-Tuning

Unfreeze layers

Here I freeze only the bottom 100 layers and unfreeze the rest.
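The freezing step above follows a common pattern. Here is a minimal sketch of it using a stand-in `Layer` class so it runs without TensorFlow; the layer count of 154 is illustrative, not taken from the notebook. In Keras the same loop works directly on `model.layers` via the `trainable` attribute.

```python
# Stand-in for a Keras layer; in Keras, each layer already has a
# `trainable` attribute that works exactly like this.
class Layer:
    def __init__(self, name):
        self.name = name
        self.trainable = True  # layers start out trainable

# Illustrative backbone with 154 layers (count is an assumption).
layers = [Layer(f"layer_{i}") for i in range(154)]

# Freeze the bottom 100 layers (generic, low-level features)
# and leave the rest trainable for fine-tuning.
for layer in layers[:100]:
    layer.trainable = False

trainable = sum(layer.trainable for layer in layers)
print(trainable)  # 54 layers remain trainable
```

Freezing the bottom layers preserves the generic features learned on the original dataset, while the unfrozen top layers adapt to the brain-tumor images.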

Training

The optimizer’s learning rate is reduced by a factor of 10 for fine-tuning, and training resumes from the last epoch for 100 more epochs.
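The schedule above can be sketched as follows; the base learning rate and initial epoch count are assumptions for illustration, not values from the notebook.

```python
# Sketch of the fine-tuning schedule: divide the learning rate by 10
# and continue training where the first run stopped.
base_learning_rate = 1e-4                 # assumed rate of the initial run
fine_tune_lr = base_learning_rate / 10    # reduced by a factor of 10

initial_epochs = 50                       # assumed length of the first run
fine_tune_epochs = 100                    # "100 more" epochs from the text
total_epochs = initial_epochs + fine_tune_epochs

# In Keras this maps to recompiling with the smaller rate and resuming:
#   model.compile(optimizer=keras.optimizers.Adam(fine_tune_lr), ...)
#   model.fit(..., epochs=total_epochs, initial_epoch=initial_epochs)
print(fine_tune_lr, total_epochs)
```

A smaller learning rate matters here: the unfrozen layers already hold useful weights, and large updates would destroy them rather than refine them.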

I can see that validation accuracy reaches 99% to 100%.

Conclusion

By combining transfer learning and fine-tuning, I can improve the model’s performance even further in this case. Some layers at the bottom of the pre-trained model are frozen while some top layers are unfrozen.

This combination of frozen and unfrozen layers lets the pre-trained model learn from the dataset better.

Next

Now I have a model ready for real-life prediction, but before that I need to measure how well the model performs in more detail.

part 6
