Counting calories with food pics

It’s January again, so I’m thinking about weight loss. I bet you are too. I bet you’ve even convinced yourself that this year is finally gonna be your year. Not only are you going to drop the “3” (admit it, 5) lbs you gained gorging yourself on turkey and grandma’s Christmas choconutter cookies, you’re gonna keep going until you’re as buff as post-Parks and Rec Chris Pratt. You can do it. You know you can.

If you’re anything like me, you’ve completely bypassed the workout plan part and gone straight to researching what technology will work best for the newer, better you. At the top of my list is a Fitbit (complete with accompanying scale) and some sort of calorie counting/food tracking app.

It’s the app part where I’ve reached my first stumbling block. I mean, can’t there be a better way to record my food consumption? There are plenty of apps out there that help track calorie intake: Calorie Count, MyFitnessPal, Noom Coach, FatSecret, etc. But they’re all time-intensive and demand constant attention.

People are already keen on taking photos of their food; it’s a behaviour everyone is familiar with.

Lately, I’ve been dreaming about a calorie-counting app that’s as simple as a point-and-shoot camera. I mean, point, shoot, magic of computing, presto: calorie count. Sounds like science fiction? Well, what if I told you I’d been doing some research, and it turns out this technology (almost) already exists.


So why aren’t food recognition apps on the consumer market yet?

An article from Popular Science reports that Google research scientist Kevin P. Murphy has been working on an app called Im2Calories. The app recognizes food and counts calories just by looking at still photographs. It sounds promising, but according to another article from the CBC, the app so far only works around 30% of the time.

Diagram of Im2Calories from Murphy’s Google Research presentation. This is the best image I could find.

Google has gone as far as patenting an algorithm that recognizes food type and estimates mass. It compares the photo against the images it has already learned from, and once it classifies the food type it can assign a typical density. Depth and volume are estimated from the shadows the food casts. From volume and density it derives the mass, and from the mass and the recognized food type it assigns a calorie count.
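To make that arithmetic concrete, here’s a rough TypeScript sketch of the pipeline. The names and numbers are my own ballpark figures, not Google’s, and the classification and volume-estimation steps (the hard computer vision work) are left as plain inputs.

```typescript
// Toy model of the photo-to-calories pipeline: classify, look up density,
// estimate volume, derive mass, convert to calories.
// All names and numbers here are illustrative, not Google's actual system.

interface FoodProfile {
  densityGramsPerCm3: number; // typical density for this food type
  caloriesPerGram: number;    // kcal per gram
}

// A tiny lookup table standing in for what the trained classifier "knows".
const FOOD_PROFILES: Record<string, FoodProfile> = {
  apple: { densityGramsPerCm3: 0.75, caloriesPerGram: 0.52 },
  rice:  { densityGramsPerCm3: 0.80, caloriesPerGram: 1.30 },
};

// In the real system the food type would come from image classification and
// the volume from depth estimated via shadows; here they arrive as inputs.
function estimateCalories(foodType: string, volumeCm3: number): number {
  const profile = FOOD_PROFILES[foodType];
  if (!profile) {
    throw new Error(`Unrecognized food type: ${foodType}`);
  }
  const massGrams = volumeCm3 * profile.densityGramsPerCm3; // volume × density
  return Math.round(massGrams * profile.caloriesPerGram);   // mass × kcal/g
}

// A medium apple is roughly 250 cm³:
console.log(estimateCalories("apple", 250)); // ≈ 98 kcal
```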

More players in the game

SRI International (formerly the Stanford Research Institute) has been working on a similar project. They, too, are patenting technology that can recognize and measure the volume of food on your plate.

Like Google’s, SRI’s technology both recognizes food and estimates volume. Unlike Google’s, it also draws on contextual clues and user profiles, for example pulling in menu data from the restaurant where the photo was taken.

This technology is also a work in progress. In a Time magazine article, SRI executive director Dror Oren admits that while you “probably can’t get an exact count, you can still get a fairly accurate range.”

It will likely take a while before we start seeing anything with similar capabilities in the Apple App Store or Google Play.


Since there are no food recognition apps available to the public, I decided to take a stab at prototyping a mobile application.


What would the consumer app look like if it existed?

I wanted the experience to be as simple as snapping a quick photo. Unlike Instagram, it’s not about getting a beautiful food shot, so users can feel free to go full-on Martha Stewart if they want, but there’s no need to cancel or reshoot. It’s not the looks that count in this scenario, it’s the content and the processing speed. An app like this would also offer more traditional ways of logging calorie intake, such as text input or a barcode scan.

Humour is good UX

Using humour can make loading times more bearable. That is why I included some tongue-in-cheek lines like “Looking at your food,” “Judging your eating habits,” and “Doing some math.”
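As a rough illustration, these lines could simply rotate while the recognition request is pending. The snippet below is a hypothetical sketch of that idea, not code from the prototype.

```typescript
// Cycle tongue-in-cheek status lines while waiting for the (slow) recognition call.
const LOADING_MESSAGES = [
  "Looking at your food",
  "Judging your eating habits",
  "Doing some math",
];

async function withPlayfulLoading<T>(
  task: Promise<T>,
  showMessage: (msg: string) => void,
  intervalMs = 1500
): Promise<T> {
  let i = 0;
  showMessage(LOADING_MESSAGES[i]);
  const timer = setInterval(() => {
    i = (i + 1) % LOADING_MESSAGES.length;
    showMessage(LOADING_MESSAGES[i]);
  }, intervalMs);

  try {
    return await task;      // resolve with the recognition result
  } finally {
    clearInterval(timer);   // stop cycling once a result (or an error) arrives
  }
}
```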

Simplicity

In favour of simplicity, I intentionally left out common computer vision GUI patterns, like confidence percentages or borders drawn around the recognized food items. Instead, the objects are marked with tags that are easy to remove if the recognition gets something wrong.
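Under the hood, a tag can be nothing more than an item in the result list that the user can dismiss, with the meal total recomputed from whatever remains. A hypothetical sketch:

```typescript
interface FoodTag {
  id: string;
  label: string;    // e.g. "scrambled eggs"
  calories: number;
}

// Removing a mistaken tag is a single tap: drop it from the list...
function removeTag(tags: FoodTag[], id: string): FoodTag[] {
  return tags.filter(tag => tag.id !== id);
}

// ...and the meal total is always just the sum of what's left.
function totalCalories(tags: FoodTag[]): number {
  return tags.reduce((sum, tag) => sum + tag.calories, 0);
}
```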

Creating a forgiving UX — for both users and AI

UI for AI is not perfect, but we can make it work by making it easier for people and machines to cooperate. For example, say the tech didn’t capture the right amount of food from your photo. If it’s easy for people to correct those numbers, the corrections can also feed back into the machine learning algorithm and help it make better decisions next time around.
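For instance, a correction could be captured as a small event that both updates the user’s food log and gets queued as a future training example. The shapes below are a hypothetical sketch of that idea, not part of any shipping system.

```typescript
// The model's original guess for one item in the photo.
interface Estimate {
  foodType: string;
  grams: number;
  calories: number;
}

// What the user says it actually was, paired with what the model guessed.
interface Correction {
  photoId: string;
  estimated: Estimate;
  corrected: Estimate;
  correctedAt: Date;
}

const trainingQueue: Correction[] = [];

// Apply the user's fix to their log and keep the before/after pair so it can
// later be used to improve the recognition model.
function recordCorrection(
  photoId: string,
  estimated: Estimate,
  corrected: Estimate
): Estimate {
  trainingQueue.push({ photoId, estimated, corrected, correctedAt: new Date() });
  return corrected; // the user's daily total now reflects the corrected value
}
```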

Final thought

While computer vision for food is still in its infancy, it’s worth thinking now about the next steps for its applications in the real world.


Thank you for reading.

Feel free to leave comments here or say hi on Twitter @TaulantSulko

If you have an Interaction Design project that you need help with, send me a note at taulant@sulko.co
