EML Blog #8: Design Thinking Empathy

This is the tale of the first phase of our Design Thinking project. Our topic is applications of computer vision. The EML characteristic I will be writing about is “creating value.”

Empathy Maps

The process of creating the empathy maps was fairly straightforward. My teammate and I each interviewed two people, and from this we identified two areas of interest:

  • Sports Analysis and Prediction
  • Object/Person Identification

My two interviewees both expressed interest in a product that could analyze the events of a sports game, either to power more advanced broadcast overlays or to help coaches assess their athletes’ performance. This performance analysis could also appeal to viewers at home who might be betting on the outcome of games.

My teammate’s interviewees were more interested in object identification in photographs. People tend to take a lot of pictures, and being able to accurately identify the objects and people in those pictures could be useful in two major ways. First, unknown objects can be identified by a camera which is able to recognize them. Second, the photos themselves can be sorted and easily accessed according to their contents.

Point of View

We decided to focus on this photo sorting and easy-access concept for our product, and we generated three POV cases to flesh out the potential customers:

  • Mother needs to find pictures of her child with her parents because she wants to create a photo calendar to give as a gift.
  • Professional photographer needs to mass-sort photos by subject because he takes hundreds of photos of various subjects every day and must identify and sort them by the people in each photo.
  • Young woman needs smart access to large photo library because she has tons of pictures of her and her friends and she wants to find pictures of her and specific friends.


This last POV seemed most compelling to our potential customer base. From this we imagined the persona of Amy Gillespie, a 20-year-old college student attempting to balance her busy school schedule with her equally busy social schedule. She takes a lot of pictures with her smartphone when she’s out with her friends, and spends a lot of time going through those photos for uploading to Facebook, tagging her friends, adding to albums, etc.

The Product

We imagine a product that quickly and easily automates Amy’s photo-sorting process. Using facial recognition, the software would identify everyone in her pictures and use this information to allow queries for pictures based on specific criteria.

Suppose Amy went to a party on Saturday and wants to find the pictures of her BFF Jill and Jill’s boyfriend in order to get prints for the gift scrapbook she’s making. She could go through, by hand, the dozens or hundreds of pictures she and her friends took the night of the party to find the ones of her friend and her boyfriend, or she could make a simple request of this app (“SELECT PHOTO IF Jill AND Jack IN PHOTO”).
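To make the idea concrete, here is a minimal sketch of that query in Python. It assumes the facial-recognition step has already run and tagged each photo with the names it found; the photo records and names below are hypothetical illustrations, not real app data.

```python
# Hypothetical photo library, already tagged by a recognition step.
photos = [
    {"file": "party_001.jpg", "people": {"Amy", "Jill", "Jack"}},
    {"file": "party_002.jpg", "people": {"Amy", "Jill"}},
    {"file": "party_003.jpg", "people": {"Jill", "Jack"}},
]

def photos_with(photos, *names):
    """Return the files of photos containing all of the given people."""
    required = set(names)
    return [p["file"] for p in photos if required <= p["people"]]

# The "SELECT PHOTO IF Jill AND Jack IN PHOTO" request becomes:
print(photos_with(photos, "Jill", "Jack"))
```

The subset test (`required <= p["people"]`) is all the “AND” logic needs once the tags exist; the hard part of the product is the recognition, not the query.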

The criteria could be as extensive as needed or desired. She could select photos containing only her, her and her friend, her and anyone except her friend, photos containing exactly three people, photos containing only women, photos containing only men, photos containing one man and between three and five women, etc. The possibilities are endless when the identity determination is automated and the user is presented with a scriptable query interface (perhaps hidden behind a GUI for less technically savvy people).
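A scriptable interface like that might boil down to accepting an arbitrary predicate over a photo’s tags. This sketch assumes the recognition step also emits a (hypothetical) gender tag per person; the data is illustrative only.

```python
# Hypothetical tagged library: each person is a (name, gender) pair.
photos = [
    {"file": "a.jpg", "people": [("Amy", "F"), ("Jill", "F")]},
    {"file": "b.jpg", "people": [("Amy", "F"), ("Jack", "M"), ("Jill", "F")]},
    {"file": "c.jpg", "people": [("Jack", "M")]},
]

def query(photos, predicate):
    """Return the files of photos whose tagged people satisfy the predicate."""
    return [p["file"] for p in photos if predicate(p["people"])]

# "Photos containing exactly three people":
three_people = query(photos, lambda ppl: len(ppl) == 3)

# "Photos containing only women" (at least one person, all tagged F):
only_women = query(photos, lambda ppl: bool(ppl) and all(g == "F" for _, g in ppl))
```

Any criterion from the list above, however elaborate, is just another one-line lambda against the tags, which is what makes the automated identity determination so valuable.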
