Week 5 — DASM — Enlarging The Dataset and ConvNet Classification

Nisa Hopcu
Published in bbm406f19
Dec 30, 2019

Hi everybody, let’s look at the latest news from our Damage Assessment System for Imagery Data (DASM) project. This week, we implemented CNN classification, and we downloaded and tagged more images to expand our dataset. Unfortunately, we could not use the expanded dataset when training the convolutional neural network classification models.

Number of Images According to Class

We split our dataset into three subsets: training (65%), validation (15%), and test (20%), and then trained the CNN classification models.
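As a rough illustration, here is a minimal sketch of such a 65/15/20 split. The post does not name the framework, so this assumes PyTorch/torchvision with an ImageFolder-style directory; the folder name and random seed are placeholders.

```python
import torch
from torchvision import datasets, transforms

# AlexNet and VGG16 expect 224x224 RGB inputs.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# "damage_images/" is a hypothetical folder with one subdirectory per damage class.
full_dataset = datasets.ImageFolder("damage_images/", transform=transform)

n_total = len(full_dataset)
n_train = int(0.65 * n_total)
n_val = int(0.15 * n_total)
n_test = n_total - n_train - n_val  # remaining ~20%

train_set, val_set, test_set = torch.utils.data.random_split(
    full_dataset,
    [n_train, n_val, n_test],
    generator=torch.Generator().manual_seed(42),  # fixed seed for a reproducible split
)
```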

Experimental Results of the CNN Classification Algorithm:

1. First, we used a pre-trained AlexNet as the CNN architecture, with learning rate 0.005, 32 epochs, batch size 64, and Adam as the optimizer. The result of this run is:
Results of Pre-trained AlexNet

2. Second, we used a pre-trained VGG16 (also called OxfordNet) as the CNN architecture, with the same settings: learning rate 0.005, 32 epochs, batch size 64, and Adam as the optimizer (a minimal fine-tuning sketch for both runs follows the results below). The result of this run is:

Results of Pre-trained VGG16
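For readers who want to reproduce these two runs, here is a minimal fine-tuning sketch. It assumes PyTorch/torchvision (the post does not state the framework), uses the hyperparameters given above (Adam, learning rate 0.005, batch size 64, 32 epochs), and treats the class count and the `train_set` from the split sketch as placeholders.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models

def build_model(name: str, num_classes: int) -> nn.Module:
    """Load a pre-trained backbone and replace its final layer."""
    model = models.alexnet(pretrained=True) if name == "alexnet" else models.vgg16(pretrained=True)
    # Both AlexNet and VGG16 end with a Linear layer at classifier[6];
    # swap it so the output size matches the number of damage classes.
    model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)
    return model

num_classes = 4  # assumption: set this to the actual number of damage classes
model = build_model("vgg16", num_classes)  # or "alexnet" for the first run
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

train_loader = DataLoader(train_set, batch_size=64, shuffle=True)  # train_set from the split sketch
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.005)

model.train()
for epoch in range(32):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Replacing only the final classifier layer is one common way to adapt an ImageNet-pretrained network to a new, smaller dataset; whether the earlier layers were frozen in the original experiments is not stated in the post.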

According to these results, VGG16 gave better results than AlexNet under the same conditions: all of its accuracy values (train, validation, and test) are higher than the corresponding values for AlexNet.

We plan to improve our CNN classification results by trying different models, optimizers, learning rates, epoch counts, batch sizes, and the expanded dataset.

Thank you so much for reading, and stay tuned for the coming blogs!

Hope to see you next week!
