[Week 7 — SeeFood]

Gökberk Şahin
bbm406f18
Jan 14, 2019

Theme: Food Calorie Estimation

Team Members: Okan ALAN, Gökberk Şahin, Emre Yazıcı

https://gph.is/2VU17mV

This week we wrapped up our work and focused on the final touches and the report. We used some feature engineering magic to increase the accuracy of our random forest model: the mean error is now down to 15.6. We also implemented a k-NN model, which gives a mean error between 17.6 and 25.3 with k=6. The randomness in the k-NN results can be eliminated by fixing random_state=42, but when we do this we get a mean error of 20.91. Since this is reproducible, we will use the same random state across all of our models.

Features were combined with each other to mimic the formula in our baseline work, which increased the accuracy dramatically. We started by combining simple features like height and width, then used those combinations while building others. For instance, we combined this area feature with the ratio of the coin on top, which is again calculated from the formula of our baseline work. We kept combining features until our accuracy stabilized and no additional feature could increase the test accuracy.
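The pipeline above can be sketched roughly as follows. This is a minimal illustration, not our actual code: the feature names (height, width, coin ratio), the synthetic data, and the way the combined features are formed are all assumptions standing in for the real baseline formula. Note that scikit-learn's k-NN regressor itself takes no random_state; the randomness we fix lives in the train/test split.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical raw features: bounding-box height/width of the food item
# and the apparent size ratio of the reference coin (names illustrative).
rng = np.random.default_rng(42)
n = 200
height = rng.uniform(50, 300, n)
width = rng.uniform(50, 300, n)
coin_ratio = rng.uniform(0.05, 0.3, n)

# Combined features mimicking a baseline-style formula:
# start with a simple combination (area), then combine it further.
area = height * width
scaled_area = area * coin_ratio  # area relative to the coin reference

X = np.column_stack([height, width, coin_ratio, area, scaled_area])
y = scaled_area * 0.01 + rng.normal(0, 5, n)  # stand-in calorie target

# Fixing random_state here makes every model's error reproducible.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=42
)

rf = RandomForestRegressor(n_estimators=100, random_state=42).fit(X_tr, y_tr)
knn = KNeighborsRegressor(n_neighbors=6).fit(X_tr, y_tr)

print("RF mean error:  ", mean_absolute_error(y_te, rf.predict(X_te)))
print("k-NN mean error:", mean_absolute_error(y_te, knn.predict(X_te)))
```

With both the data generation and the split seeded, rerunning the script reproduces the same mean errors every time, which is the property we wanted from a uniform random state.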

We also tried a neural network model, but it performed so badly that we won't even bother discussing it. Since we have a relatively small set of features, it's no surprise that the neural network performed poorly.

Lastly, comparing our random forest model to our baseline work, in which volumes are estimated with formulas, our model slightly outperforms the baseline: the random forest's mean error was 15.6, whereas the baseline's mean error was above 18.

We'll conclude our work by plotting graphs, preparing tables, and commenting on the results in our paper.

See you all on the presentation day!
