Brain Tumor Detector part 6

Nelson Punch
Software-Dev-Explore
3 min read · Nov 9, 2023
Photo by Andrea De Santis on Unsplash

Introduction

Metrics allow me to measure a model’s performance before the model is put into use.

I am going to measure the performance of the model I just created with various metrics.

Code

Notebook with code

Metrics

Evaluation

I can quickly evaluate the performance of the model by using a method built into the model.

The model is evaluated on the test dataset.
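The evaluation step might look like the sketch below. The names `model`, `x_test` and `y_test` are hypothetical stand-ins for the fine-tuned model and test data from the earlier parts; here a tiny model and random data are used so the snippet runs on its own.

```python
import numpy as np
import tensorflow as tf

# Stand-in for the fine-tuned classifier (4 tumor classes).
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(32, 32, 3)),
    tf.keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Stand-in for the test dataset.
x_test = np.random.rand(16, 32, 32, 3).astype("float32")
y_test = np.random.randint(0, 4, size=16)

# evaluate() returns one value per compiled metric: [loss, accuracy].
loss, accuracy = model.evaluate(x_test, y_test, verbose=0)
print(f"test accuracy: {accuracy:.2%}")
```

With the real model and test dataset, `accuracy` is the figure reported below.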

It tells me the model reaches 98% accuracy on the unseen test dataset, slightly below the 99% to 100% I saw during fine tuning.

Visualize training history

Here I am going to look at the loss and accuracy during training and fine tuning.

For training and validation loss

Training and validation loss

For training and validation accuracy.

Training and validation accuracy

For fine tuning and validation loss.

Fine tuning and validation loss

For fine tuning and validation accuracy.

Fine tuning and validation accuracy

Precision, Recall and F1-Score

I would like to have an overview of the model’s performance on each class. In addition, I would like to know how much time this model takes to make a prediction.

I use the classification_report method from Scikit-Learn to produce a report. The code also calculates the average time the model takes to make predictions on the entire test dataset.
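A minimal sketch of this step, assuming the class names from this project; `y_true` and the `predict` function are hypothetical stand-ins for the test labels and `model.predict()`:

```python
import time
import numpy as np
from sklearn.metrics import classification_report

class_names = ["glioma", "meningioma", "notumor", "pituitary"]

# Stand-ins: 100 random test labels and a fake predictor returning
# class probabilities, shaped like model.predict() output.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 4, size=100)

def predict(n_samples):
    return rng.random((n_samples, 4))

# Time the prediction over the whole test set.
start = time.perf_counter()
probs = predict(len(y_true))
elapsed = time.perf_counter() - start
y_pred = probs.argmax(axis=1)  # probabilities -> predicted class index

print(f"prediction time on entire test set: {elapsed:.4f}s")
report = classification_report(y_true, y_pred, target_names=class_names)
print(report)
```

`classification_report` prints precision, recall, F1-score and support for each of the four classes.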

Report

I can see it took 0.06 seconds on average to make predictions on the entire test dataset, which is fast enough.

For the precision, recall and f1-score:

  • Precision: how many of the positive predictions made are correct
  • Recall: how many of the positive cases in the data the classifier correctly predicted
  • F1-Score: a measure combining both precision and recall
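The three definitions above can be checked by hand on a toy binary example (the labels below are made up for illustration):

```python
# Hypothetical true labels and predictions for a binary problem.
y_true = [1, 1, 1, 0, 0, 1]
y_pred = [1, 0, 1, 1, 0, 1]

# Count true positives, false positives and false negatives.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # 3
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # 1
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # 1

precision = tp / (tp + fp)  # correct positives / all positive predictions
recall = tp / (tp + fn)     # correct positives / all actual positives
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean

print(precision, recall, f1)  # -> 0.75 0.75 0.75
```

scikit-learn computes exactly these quantities per class in the report.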

The index to classes

{0: 'glioma', 1: 'meningioma', 2: 'notumor', 3: 'pituitary'}

The report tells me the model does well at identifying notumor, while its precision on meningioma is a bit low. Overall the model’s performance is good.

The model’s accuracy might be lower on real-life samples. We can re-train the model with newly collected data when its performance gradually degrades.

Conclusion

Metrics give me an overview of a model’s performance and its details before the model is used in production.

Next

Save the model and clean up.

part 7
