Week 5 — Waste Classification

Hasan Akalp
Published in bbm406f19 · Dec 31, 2019

Hi everyone. We are Hasan Akalp, Umut Piri, and Dilara İşeri. For our machine learning course project, we are building a waste classification system, and this is the fifth week of our work. Read on to learn about our progress over the past week and the current status of the project.

In the meantime, if you have not read our previous posts, you can find them at the links below.

Week 1: https://medium.com/bbm406f19/week-1-waste-classification-dde0aaf12ccb?

Week 2: https://medium.com/bbm406f19/week-2-waste-classification-5a79b37e2b75

Week 3: https://medium.com/bbm406f19/week-3-waste-classification-9aa89d173bc0

Week 4: https://medium.com/bbm406f19/week-4-waste-classification-817206392f33

Last week we talked about the pre-trained models we could use. In this week's post, we describe the models we trained for waste classification and the results of our experiments with them. Our aim was to determine which model is the best fit for this application. Read on to find out which model gives the highest accuracy and how well each of them works for a waste classification system.

Before moving on to our experiments, it is useful to describe the dataset we used. We used Gary Thung’s TrashNet dataset, which spans six classes: glass, paper, cardboard, plastic, metal, and trash. It consists of 2,527 images: 501 glass, 594 paper, 403 cardboard, 482 plastic, 410 metal, and 137 trash. The images were taken on a white background in sunlight, and each image was resized to 512 x 384. We split the dataset into 50% training, 25% validation, and 25% testing.
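For reference, here is a minimal sketch of how such a 50/25/25 per-class split can be produced from the TrashNet folder layout. The directory names and the fixed random seed are illustrative assumptions, not our exact script.

```python
import random
import shutil
from pathlib import Path

SRC = Path("dataset-resized")   # TrashNet folders: glass/, paper/, cardboard/, ...
DST = Path("data")              # output: data/train, data/valid, data/test
random.seed(42)                 # assumed seed, for a reproducible split

for class_dir in sorted(p for p in SRC.iterdir() if p.is_dir()):
    images = sorted(class_dir.glob("*.jpg"))
    random.shuffle(images)
    n_train = int(0.50 * len(images))
    n_valid = int(0.25 * len(images))
    parts = {
        "train": images[:n_train],
        "valid": images[n_train:n_train + n_valid],
        "test":  images[n_train + n_valid:],
    }
    for split, files in parts.items():
        out = DST / split / class_dir.name
        out.mkdir(parents=True, exist_ok=True)
        for f in files:
            shutil.copy(f, out / f.name)
```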

We used the fastai library, which is built on top of PyTorch, to create our models. fastai was founded by Rachel Thomas, a professor at the University of San Francisco, and Jeremy Howard, a research scientist there. It provides implementations of many of the important building blocks for image classification and natural language tasks.

Using the fastai library, we were able to implement many CNN models quickly and pick the one best suited to our project. Now let’s look at the models we created and how each of them performed.
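As a rough sketch of what this looks like with fastai v1 (the version current at the time), training one backbone goes roughly as follows. The folder layout, image size, batch size, and learning rate here are assumptions for illustration, not our exact training script.

```python
from fastai.vision import *   # fastai v1 style import

# Assumed layout: data/train, data/valid, data/test, one subfolder per class.
data = ImageDataBunch.from_folder(
    "data", train="train", valid="valid", test="test",
    ds_tfms=get_transforms(), size=224, bs=32,
).normalize(imagenet_stats)

# Swapping the backbone (models.resnet18/34/50/152, densenet121, vgg16_bn,
# alexnet, squeezenet1_0, ...) is how different architectures can be compared.
learn = cnn_learner(data, models.resnet50, metrics=accuracy)

learn.lr_find()          # learning-rate range test
learn.recorder.plot()    # the "Learning Rate" plot shown for each model
learn.fit_one_cycle(10, max_lr=3e-3)
```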

We trained 9 different models for this project. For each model, we show the same set of training plots: the learning-rate finder curve, train vs. validation loss, the top losses, and the confusion matrix, followed by the accuracy we obtained.
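These plots come from fastai’s recorder and interpretation utilities; a minimal sketch, assuming the `learn` object from the snippet above, looks like this (figure sizes are arbitrary assumptions).

```python
from fastai.vision import ClassificationInterpretation

learn.recorder.plot_losses()                   # "Train Loss vs Validation Loss"

interp = ClassificationInterpretation.from_learner(learn)
interp.plot_top_losses(9, figsize=(10, 10))    # "Top Losses": the worst predictions
interp.plot_confusion_matrix(figsize=(6, 6))   # "Confusion Matrix"

print(learn.validate())                        # [validation loss, accuracy]
```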

ResNet152

[Figures: learning-rate finder, train vs. validation loss, top losses, confusion matrix]

The accuracy we obtained: 0.9339

ResNet18

[Figures: learning-rate finder, train vs. validation loss, top losses, confusion matrix]

The accuracy we obtained: 0.9039

DenseNet121

[Figures: learning-rate finder, train vs. validation loss, top losses, confusion matrix]

The accuracy we obtained: 0.9354

GoogLeNet

[Figures: learning-rate finder, train vs. validation loss, top losses, confusion matrix]

The accuracy we obtained: 0.9165

ResNet34

[Figures: learning-rate finder, train vs. validation loss, top losses, confusion matrix]

The accuracy we obtained: 0.9386

ResNet50

[Figures: learning-rate finder, train vs. validation loss, top losses, confusion matrix]

The accuracy we obtained: 0.9417

SqueezeNet

[Figures: learning-rate finder, train vs. validation loss, top losses, confusion matrix]

The accuracy we obtained: 0.8110

VGG16

[Figures: learning-rate finder, train vs. validation loss, top losses, confusion matrix]

The accuracy we obtained: 0.9244

AlexNet

[Figures: learning-rate finder, train vs. validation loss, top losses, confusion matrix]

The accuracy we obtained: 0.7858

Conclusion

As the results above show, the best accuracies came from ResNet50 (0.9417), ResNet34 (0.9386), and DenseNet121 (0.9354).

In general, our models have difficulty separating glass, metal, and plastic. We believe we need to improve our data to sharpen this distinction; one possible direction is sketched below. In addition, we can tune our models further.
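One concrete way to "improve our data" would be heavier augmentation, so that glass, metal, and plastic are seen under more varied lighting and orientations. Here is a hedged sketch using fastai v1’s get_transforms; every parameter value is an illustrative assumption, not something we have validated yet.

```python
from fastai.vision import get_transforms

# Stronger augmentation than the fastai defaults; all values are assumptions.
tfms = get_transforms(
    flip_vert=True,       # waste items have no canonical "up" direction
    max_rotate=25.0,
    max_zoom=1.2,
    max_lighting=0.4,     # heavier brightness/contrast jitter
    max_warp=0.2,
)
# Then pass it when building the data:
# ImageDataBunch.from_folder(..., ds_tfms=tfms, size=224)
```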

Thank you for reading! See you next week!
