Mapping nutrition with “What did I eat?”

Cognitive Build project simplifies food logging

Daryl Pereira
IBM Data Science in Practice
4 min read · Nov 23, 2016


Visual recognition gives computers new eyes into the world. Whether it’s mapping the drought in California or maintaining telecom towers, cognitive systems are helping computers derive more meaning from images, providing more value to us.

“What did I eat?” is a finalist project in the Cognitive Build, IBM’s internal team innovation challenge. The project tackles one very specific use case: helping mobile devices make sense of what’s on our plates.

Oded Dubovsky, project lead from IBM Research in Haifa, Israel, points out that we’re increasingly interested in what we ingest. This could be a general interest in nutrition, but it also touches on deeper societal issues, such as the growth in type 2 diabetes.

Our interest in what we eat is reflected in the apps on our phones. About 58% of people use some form of health tracking, but here’s the rub: only 3% of us use these apps for more than a week. Actually inputting the different components of a meal is a real user-experience problem: with millions of foods out there, you could be hunting through multiple menus just to record what you had for brunch.

This is where “What did I eat?” comes in.

The app tackles this logging problem using the camera on your phone. Just snap a picture of your food and the app will identify it based on its similarity to images in its database, much as Facebook may identify you and your friends when you upload a picture. This is what the team presented at the prototype stage earlier this year:

Oded explains the solution and provides a demo:

One of the key points here is that this isn’t a standard data problem. A database of everything we eat is not feasible for a solution like this. And think of the food you cook at home: there are so many variations that you cannot categorize everything in advance.

This is where cognitive systems can really excel. You can train the app to understand the foods that matter to you, and this level of personalization is what makes the app useful. You train it on the foods you actually eat, so over time it gets better at understanding and spotting the common items in your diet. The app effectively uses computer vision to learn on the fly.
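The article doesn’t describe the team’s implementation, but the core idea of personalized, similarity-based recognition can be sketched in a few lines of Python. The embed() function below is a stand-in for whatever image model turns a photo into a feature vector; everything here is illustrative rather than the team’s actual code.

```python
import numpy as np

class PersonalFoodRecognizer:
    """Toy sketch of per-user, similarity-based food recognition."""

    def __init__(self, embed):
        self.embed = embed      # hypothetical image -> feature-vector function
        self.vectors = []       # feature vectors of the user's own food photos
        self.labels = []        # food name attached to each stored photo

    def add_example(self, image, label):
        # Store a labeled photo so future meals can be matched against it.
        self.vectors.append(self.embed(image))
        self.labels.append(label)

    def classify(self, image, top_k=3):
        # Rank the user's known foods by cosine similarity to the new photo.
        if not self.vectors:
            return []
        query = self.embed(image)
        matrix = np.stack(self.vectors)
        sims = matrix @ query / (
            np.linalg.norm(matrix, axis=1) * np.linalg.norm(query) + 1e-9
        )
        order = np.argsort(sims)[::-1][:top_k]
        return [(self.labels[i], float(sims[i])) for i in order]
```

Because every example the user adds goes straight into the pool of candidates, the “training” here is nothing more than collecting your own labeled photos, which is exactly the kind of on-the-fly learning the paragraph above describes.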

Interface design is also incredibly important here. The app needs to be able to confirm whether it guessed the food correctly, and if it didn’t, you can snap a few pictures of the dish to help it identify that food correctly in the future.
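Tying that feedback loop to the sketch above might look something like this; the photo variables and food name are just placeholders:

```python
# Continuing the sketch above; recognizer is a PersonalFoodRecognizer
# and the photo variables are placeholder image objects.
guesses = recognizer.classify(new_photo)
if not guesses or guesses[0][0] != "shakshuka":
    # The user taps "that's wrong" and snaps a couple more shots,
    # which are stored under the correct label for next time.
    for photo in (new_photo, extra_photo_1, extra_photo_2):
        recognizer.add_example(photo, "shakshuka")
```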

Oded and the team turn to the Internet of Things (IoT) to help you remember to use the app. A wearable such as a smartwatch or Fitbit can recognize the motions you make while holding a fork and prompt you to record your meal in the app.
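The article doesn’t say how the gesture detection works, but a crude version of the idea is easy to picture: watch the wrist accelerometer for a burst of lift-like spikes and then fire a reminder. The thresholds below are invented purely for illustration.

```python
def should_prompt(accel_magnitudes, window=60, lift_threshold=12.0, min_lifts=5):
    """Crude eating-gesture heuristic, purely illustrative.

    Treats `accel_magnitudes` as wrist-acceleration magnitudes (m/s^2)
    sampled roughly once per second. If enough lift-like spikes occur
    in the last `window` samples, remind the user to log the meal.
    """
    recent = accel_magnitudes[-window:]
    lifts = sum(1 for m in recent if m > lift_threshold)
    return lifts >= min_lifts

# Example: a quiet stretch followed by a burst of fork-to-mouth motions.
readings = [9.8] * 55 + [13.1, 9.9, 13.4, 12.8, 10.1, 13.0, 12.6]
print(should_prompt(readings))  # True -> nudge the user to open the app
```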

The app is a great demonstration of how Watson Services work in concert to simplify the user experience and provide new value.

With “What did I eat?”, the team sees the potential to develop a personal dietary coach that helps you understand more about the benefits of eating something and how it commonly affects your body. The team is also looking to cross-reference food intake during the day, so the app can recognize a dish with stronger confidence when it knows you are eating breakfast rather than dinner.
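One simple way to picture that cross-referencing: weight the classifier’s visual scores by a prior over what the user typically eats at each meal, then renormalize. The foods and numbers below are made up for illustration.

```python
# Illustrative only: the priors and food names are invented.
MEAL_PRIORS = {
    "breakfast": {"oatmeal": 0.5, "omelette": 0.4, "lasagna": 0.1},
    "dinner":    {"oatmeal": 0.1, "omelette": 0.2, "lasagna": 0.7},
}

def rerank_with_meal_prior(visual_scores, meal):
    """Weight visual-similarity scores by what the user usually eats
    at this meal, then renormalize into a probability-like ranking."""
    prior = MEAL_PRIORS[meal]
    weighted = {food: score * prior.get(food, 0.05)
                for food, score in visual_scores.items()}
    total = sum(weighted.values()) or 1.0
    return {food: w / total for food, w in weighted.items()}

# The same visual evidence leans toward "omelette" at breakfast
# but toward "lasagna" at dinner.
scores = {"oatmeal": 0.2, "omelette": 0.45, "lasagna": 0.35}
print(rerank_with_meal_prior(scores, "breakfast"))
print(rerank_with_meal_prior(scores, "dinner"))
```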

As Oded says,

“We’re putting Watson in your palm to help you make educated decisions about what you eat.”
