A Close Look at Modelling Blood Cells

BCCD Dataset, IceVision Framework and YOLOv5

Maria L Rodriguez
7 min read · Sep 4, 2021

To recognize a diseased state, one must first know the healthy state. Before AI applications tackle disease identification, it would be a good practice to first establish a reference.

In this blog, we will use the Blood Cell Count and Detection (BCCD) dataset, which contains 364 microscopic images of blood components. The annotations cover the three basic blood cell types: red blood cells (RBC), white blood cells (WBC) and platelets.

The dataset will be cloned from GitHub. The IceVision framework will be used for parsing, transforming and modelling. The model types to be applied are Faster R-CNN, YOLOv5, RetinaNet and EfficientDet.

We will follow this Outline:

A. Set-up

B. Dataset retrieval

C. Establish directories

D. Parsing

E. Transformations

F. Modelling

F.1. Faster R-CNN

F.2. YOLOv5

F.3. RetinaNet

F.4. EfficientDet

F.5. Final Model

G. Visualize Results

H. Save Model

* Image courtesy of Jonathan Larson on Unsplash

Open your notebook and let the code flow!

A. Set-up

I used Colab Pro, on GPU runtime and standard RAM settings.

!wget https://raw.githubusercontent.com/airctic/icevision/master/install_colab.sh
!bash install_colab.sh
from icevision.all import *

B. Dataset retrieval

The BCCD_Dataset is a reformatted VOC version of the all_CELL_data dataset. A personal comparison of the available versions showed that the BCCD_Dataset format resulted in better annotation processing.

!git clone https://github.com/Shenggan/BCCD_Dataset.git

C. Establish directories

Although not strictly necessary, a preliminary exploration of the folders and files directly on GitHub gives an idea of the data structure.

!ls                  # output: BCCD_Dataset (among others)
%cd BCCD_Dataset
!ls                  # output: BCCD (among others)
Path('/content/BCCD_Dataset/BCCD').ls()

Through this series of path explorations, we identified the route to the folder that contains the images and annotations.

data_dir = Path('/content/BCCD_Dataset/BCCD')
images_dir = data_dir/ 'JPEGImages'
annotations_dir = data_dir/'Annotations'

The annotations are in xml files with the following configuration:

* screenshot from https://github.com/Shenggan/BCCD_Dataset/blob/master/BCCD/Annotations/BloodImage_00000.xml

Note the filename with the .jpg extension, the image width and height dimensions, the object name (label), and the bounding box coordinates in xmin, ymin, xmax, ymax format.

A single image may contain more than one object, thus a single xml file may contain more than one annotation.
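As a quick sanity check independent of IceVision, a single annotation file can be read with Python's standard library. This is a minimal sketch, assuming the VOC field names shown in the screenshot above:

import xml.etree.ElementTree as ET

def read_voc_annotation(xml_path):
    # Read one VOC-style annotation: image filename, image size and all labelled boxes.
    root = ET.parse(xml_path).getroot()
    filename = root.find('filename').text              # e.g. BloodImage_00000.jpg
    size = root.find('size')
    width = int(size.find('width').text)
    height = int(size.find('height').text)
    objects = []
    for obj in root.findall('object'):
        box = obj.find('bndbox')
        objects.append({
            'label': obj.find('name').text,             # RBC, WBC or Platelets
            'xmin': int(box.find('xmin').text),
            'ymin': int(box.find('ymin').text),
            'xmax': int(box.find('xmax').text),
            'ymax': int(box.find('ymax').text),
        })
    return filename, (width, height), objects

read_voc_annotation(annotations_dir / 'BloodImage_00000.xml')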

D. Parsing

parser = parsers.VOCBBoxParser(annotations_dir=annotations_dir,
                               images_dir=images_dir)

The VOCBBoxParser class automatically organizes the image, label and bbox information, and fixes illogical coordinates along the way. For an intro/recap on parsing, refer to Section E in Custom Parser.
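Conceptually, an "illogical" coordinate is a box limit that falls outside the image or a min/max pair that is reversed. The following framework-agnostic sketch (not IceVision's actual code) illustrates the kind of fix involved:

def autofix_bbox(xmin, ymin, xmax, ymax, img_w, img_h):
    # Clip coordinates to the image and reorder them if min/max are swapped.
    xmin, xmax = sorted((max(0, min(xmin, img_w)), max(0, min(xmax, img_w))))
    ymin, ymax = sorted((max(0, min(ymin, img_h)), max(0, min(ymax, img_h))))
    return xmin, ymin, xmax, ymax

autofix_bbox(-4, 10, 700, 300, img_w=640, img_h=480)   # -> (0, 10, 640, 300)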

train_records, valid_records = parser.parse()
parser.class_map
show_record(train_records[1], figsize=(10, 10),
            font_size=16, label_color='#ff6000')
train_records[1]
show_records(train_records[5:8], ncols=3,
             font_size=25, label_color='#ff6000')

E. Transformations

presize = 512
size = 384
train_tfms = tfms.A.Adapter(
    [*tfms.A.aug_tfms(size=size, presize=presize), tfms.A.Normalize()])
valid_tfms = tfms.A.Adapter(
    [*tfms.A.resize_and_pad(size=size), tfms.A.Normalize()])
train_ds = Dataset(train_records, train_tfms)
valid_ds = Dataset(valid_records, valid_tfms)
samples = [train_ds[0] for _ in range(3)]
show_samples(samples, denormalize_fn=denormalize_imagenet, ncols=3,
             display_label=False)
* Transformed versions from a single raw image

Despite the small number of initial images, the effective number of training samples increases through the various transformations.

F. Modelling

import matplotlib.pyplot as plt

def plot_metrics(learn, title):
    # Plot the recorded train loss, valid loss and COCOMetric per epoch.
    plt.plot(L(learn.recorder.values).itemgot())
    plt.xlabel('epoch')
    plt.ylabel('mAP (green), Loss (blue, orange)')
    plt.title(title)
    plt.text(0, -0.2,
             'Legend: mAP (green), train_loss (blue), valid_loss (orange)')
    plt.ylim(0, 1);

For a discussion on the features of Faster R-CNN, YOLOv5, RetinaNet and EfficientDet, refer to Different Models for Object Detection.

metrics = [COCOMetric(metric_type=COCOMetricType.bbox)]

F.1. Faster R-CNN

model_type_frcnn = models.torchvision.faster_rcnn
model_frcnn = model_type_frcnn.model(
    num_classes=len(parser.class_map))
train_dl_frcnn = model_type_frcnn.train_dl(train_ds,
    batch_size=16, num_workers=4, shuffle=True)
valid_dl_frcnn = model_type_frcnn.valid_dl(valid_ds,
    batch_size=16, num_workers=4, shuffle=False)
learn_frcnn = model_type_frcnn.fastai.learner(
    dls=[train_dl_frcnn, valid_dl_frcnn],
    model=model_frcnn, metrics=metrics)
learn_frcnn.lr_find()   # output: lr_min 0.0001737800776027143
learn_frcnn.fine_tune(10, 2e-4, freeze_epochs=1)
plot_metrics(learn_frcnn,
             'Mean Average Precision and Losses for Faster_rcnn')

Faster R-CNN reached a mean Average Precision (mAP) of 0.573 after 10 epochs at a learning rate (LR) of 2e-4. The mAP is increasing and both losses are decreasing. All curves have started to plateau.

F.2. YOLOv5

model_type_yolo = models.ultralytics.yolov5
backbone_yolo = model_type_yolo.backbones.small
model_yolo = model_type_yolo.model(
    backbone=backbone_yolo(pretrained=True),
    num_classes=len(parser.class_map), img_size=size)
train_dl_yolo = model_type_yolo.train_dl(train_ds,
    batch_size=16, num_workers=4, shuffle=True)
valid_dl_yolo = model_type_yolo.valid_dl(valid_ds,
    batch_size=16, num_workers=4, shuffle=False)

learn_yolo = model_type_yolo.fastai.learner(
    dls=[train_dl_yolo, valid_dl_yolo],
    model=model_yolo, metrics=metrics)
learn_yolo.lr_find()   # output: lr_min 0.010000000149011612
learn_yolo.fine_tune(10, 1e-2, freeze_epochs=1)
plot_metrics(learn_yolo,
             'Mean Average Precision and Losses for YOLOv5')

YOLOv5 reached a mAP of 0.598 after 10 epochs at an LR of 1e-2. The mAP is increasing and both losses are decreasing. The validation loss started high but decreased significantly by epoch 4.

F.3. RetinaNet

model_type_ret = models.mmdet.retinanet
backbone_r50 = model_type_ret.backbones.resnet50_fpn_1x
model_ret = model_type_ret.model(
    backbone=backbone_r50(pretrained=True),
    num_classes=len(parser.class_map))
train_dl_ret = model_type_ret.train_dl(
    train_ds, batch_size=16, num_workers=4, shuffle=True)
valid_dl_ret = model_type_ret.valid_dl(
    valid_ds, batch_size=16, num_workers=4, shuffle=False)
learn_ret = model_type_ret.fastai.learner(
    dls=[train_dl_ret, valid_dl_ret],
    model=model_ret, metrics=metrics)
learn_ret.lr_find()   # output: lr_min 8.317637839354575e-05
learn_ret.fine_tune(10, 8e-05, freeze_epochs=1)
plot_metrics(learn_ret,
             'Mean Average Precision and Losses for Retinanet/Resnet50')

RetinaNet reached a mAP of 0.523 after 10 epochs at an LR of 8e-5. The mAP had an increasing trend and both losses had a decreasing trend, but all curves have started to plateau.

F.4. EfficientDet

model_type_eff = models.ross.efficientdet
backbone_eff = model_type_eff.backbones.tf_lite0
model_eff = model_type_eff.model(
    backbone=backbone_eff(pretrained=True),
    num_classes=len(parser.class_map), img_size=size)
train_dl_eff = model_type_eff.train_dl(
    train_ds, batch_size=16, num_workers=4, shuffle=True)
valid_dl_eff = model_type_eff.valid_dl(
    valid_ds, batch_size=16, num_workers=4, shuffle=False)

learn_eff = model_type_eff.fastai.learner(
    dls=[train_dl_eff, valid_dl_eff],
    model=model_eff, metrics=metrics)
learn_eff.lr_find()   # output: lr_min 0.010000000149011612
learn_eff.fine_tune(10, 1e-2, freeze_epochs=1)
plot_metrics(learn_eff,
             'Mean Average Precision and Losses for EfficientDet')

EfficientDet reached a mAP of 0.495 after 10 epochs at an LR of 1e-2. The mAP is increasing and both losses are decreasing.

F.5. Final Model

The final model chosen was YOLOv5: it achieved the highest mAP, showed acceptable loss trends, and had the additional benefit of a fast training run.

The training will be extended to 50 epochs, and a callback will be used to retain the best model.

model_type = models.ultralytics.yolov5
backbone = model_type.backbones.small
model = model_type.model(
    backbone=backbone(pretrained=True),
    num_classes=len(parser.class_map), img_size=size)
train_dl = model_type.train_dl(
    train_ds, batch_size=16, num_workers=4, shuffle=True)
valid_dl = model_type.valid_dl(
    valid_ds, batch_size=16, num_workers=4, shuffle=False)

learn = model_type.fastai.learner(
    dls=[train_dl, valid_dl], model=model, metrics=metrics)
model_type.show_batch(first(valid_dl), ncols=4)
# learn.lr_find()
from fastai.callback.tracker import SaveModelCallback
fname = 'bccd-faster-rcnn-best'
learn.fine_tune(50, 1e-02, freeze_epochs=1,
                cbs=SaveModelCallback(monitor='COCOMetric', fname=fname))

Using the single-stage YOLOv5 model with the small backbone, the highest mAP reached was 0.633 within 50 epochs of training, at roughly 8 seconds per epoch.

G. Visualize Results

The model is able to reliably detect the three individual blood components. Further improvement may be done with better annotation of the data to encompass overlapping blood cells.
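The prediction overlays can be regenerated with a call along these lines, using IceVision's show_results helper on the validation set (the 0.5 detection threshold is an assumption, not taken from the original run):

model_type.show_results(model, valid_ds, detection_threshold=0.5)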

H. Save Model

Copy the best model generated by the training run to Google Drive.

from google.colab import drive
drive.mount('/content/gdrive', force_remount=True)
root_dir = Path('/content/gdrive/My Drive/')
!ls models           # output: bccd-faster-rcnn-best.pth
!cp models/bccd-faster-rcnn-best.pth /content/gdrive/"My Drive"/models/bccd-faster-rcnn-best.pth
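To reuse the trained detector later (for example in a fresh Colab session), a minimal sketch is shown below. It assumes the YOLOv5 model is rebuilt exactly as in F.5 and that the checkpoint still sits at the Drive path used above; SaveModelCallback stores a plain state dict by default, which fastai's load_model handles:

from fastai.learner import load_model

# Rebuild `model` as in F.5, then restore the best weights saved during training.
checkpoint = '/content/gdrive/My Drive/models/bccd-faster-rcnn-best.pth'
load_model(checkpoint, model, opt=None, with_opt=False, device='cpu')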

Summary:

A relatively well-annotated, basic dataset of blood cell components was retrieved, parsed and used to train four model types. Final modelling with YOLOv5 reached a mAP of 0.63 with good detection of the different cells.

Recommendation:

The model should be run on a dataset with more detailed annotation of overlapping objects to better establish a baseline reference prior to tackling diseased states.

I hope you enjoyed viewing the code! :)

Maria

LinkedIn: https://www.linkedin.com/in/rodriguez-maria/

GitHub repo base for this blog: IV BCCD new in IceVision_miniprojects

Twitter: https://twitter.com/Maria_Rod_Data

Photo by Billy Huynh on Unsplash
