Empowering the Grocery List

A more pleasant way to cross off items on the go

This post marks the completion of my first semester at Georgia Tech as a Human-Computer Interaction master's student. Having pulled sleepless hours obtaining formal design training at an architecture school, and hustled at breakneck speed for maximum productivity at a non-profit startup, I figured I had seen and been through it all. But the work I have accomplished in the past four months has exceeded both my expectations and my imagination, in the best sense possible. Time and urgency took on a whole new level of meaning as I was continuously challenged to learn new information, integrate it with my existing knowledge, and apply it in a rapid, iterative fashion to arrive at convincing user experiences for every project I undertook.

Most notably, I am proud to present to you Rist™, an Apple Watch app prototype created with a user-centered design process, from user research and design iteration to rapid prototyping for usability tests. It is my favorite project thus far, as I not only got to work with an extremely talented team, but was also able to bring all of my previously acquired skills to bear in tandem: web technologies, visual design, and project management.

Here is Rist in action.

The Problem

A remarkable aspect of this project is that it seeks to deliver a fundamental value to our target users: parents who shop with young children. Grocery shopping with young kids can be very stressful. Just consider the time constraint of perishable goods and the likelihood of children wanting to interact with, pick up, or buy whatever catches their eye. Indeed, after conducting market research as a team with real shoppers in grocery stores, our data confirmed that parents have trouble keeping their children behaved while finding the right products in the store and completing checkout quickly.

Affinity mapping with post-it notes to make sense of our problem space. With an informed assumption, we arrived at our target age group of parents with children under the age of 7.

While most mobile applications are born out of opinions, my team started with present-day market research to validate the problem we had identified.

Our Process

With the knowledge that there are actual customers in need of help, we thought of three design alternatives before narrowing down to an appropriate concept for execution: a smart goggle, a smart cart, and a smart article of clothing. The smart goggle would be a pair of glasses with a heads-up display that lets a parent view optimized in-store pathways proposed by the system. The smart cart, the runner-up, is the age-old idea of letting the user, in this case the parent, check out faster by eliminating the hassle of waiting in a checkout line, achieved with a payment system embedded in the shopping cart. Lastly, the smart article of clothing would be an interactive toy for the parent's children that responds to movement or touch. The article would use music, lights, or tactile feedback to keep the child entertained or calm while the parent attends to his or her shopping duties.

Storyboard for the smart goggle. Courtesy of Katherine Kenna.

We chose to move forward with the smart goggle, since working with children would be an IRB nightmare within the short timeline of a semester, and the smart cart idea was a tad boring. With this move, we focused our efforts sharply on smoothing the shopping experience by tackling the very real issue of navigating grocery stores. Have you ever gotten lost in a grocery store because you wandered around for way too long in search of an item on your shopping list? That's right.

However, a few quick assessments with a Google Glass prototype revealed that an eye wearable was not ideal. Our users reported discomfort and “weirdness”, and literature reviews pointed us to studies on social acceptability issues that proved hard to overcome. As a result, we decided to pivot toward a wrist wearable, the Apple Watch, to leverage the fashionable and familiar comfort of wearing a watch or bracelet.


Screen shots of our alternative Google Glass prototype. Courtesy of Katherine Kenna and Reema Upadhyaya.
Brainstorming pivotal ideas through sketching. Each team member came up with as many ideas as possible, then we funneled them selectively for refinement as a team.
One of the many workflow sketches I made. The context of use begins with the need to put together a grocery list and ends once the user pays a visit to the grocery store and checks out.

At this point in our project, the goal of providing parents a smoother in-store experience with their children remained motivating: we knew they are likely to experience increased levels of stress from the child-related distractions that come with their responsibilities. Yet my team had little more than a month and a half left to make due progress on this goal, and everyone on the team was feeling loads of stress from other midterm exams and assignments piling up. Concretely, we had to craft our design in a timely manner, find time to recruit and test with users, iterate on our design according to the feedback obtained, and repeat as much as possible within the time crunch. Therefore, a significant amount of effort went into ensuring that our analysis and evaluation of a solution were conducted rapidly and efficiently.

Let's get prototyping.

Iteration 1

Our system is conceived as an interface worn on the wrist, where information can be displayed or hidden away from the user's awareness at a moment's notice. We first created a precise Cartesian map, similar to Google Maps, that would give users detailed instructions for navigation. For example, a typical instruction would be “walk 100 feet and turn right to reach aisle 1”. But we quickly realized the amount of information was unnecessarily complex for the user to absorb and digest on the go. Plus, nobody could really read the graphical cues we provided, as they appeared minuscule on the Apple Watch screen.

First version of our prototype.

In addition, through expert evaluations by multiple seniors from our program, we realized that the segmentation of our screen interactions had to be rethought. We had broken the manipulation of the grocery list down into granular options such as insert and delete for our rudimentary prototype, thinking it would be convenient for the user. In reality, it only served as a source of confusion, as the user had to spend extra mental effort learning how our system behaved. It was simply too much work.

It’s a start.

Iteration 2

Moving forward, we created a more abstract, relative navigational system and tidied up how our users would access the grocery list. Users are shown the position of the aisle in relation to their own position, represented by a pin. Grocery items are organized as sorted cards, with the intention of affording users direct manipulation of grocery items with a swipe or touch gesture. The navigational interface would serve as the primary screen, with the grocery cards in support as the secondary screen. However, without an iOS developer on the team, we were not able to get the new navigational system functioning at high fidelity. It became difficult to communicate our design intentions to our users.

How the map ideally works. The “Aisle 2” icon moves relative to your position (center mark) in store.
The second version of our prototype.
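The relative map boils down to a simple coordinate transform: translate the aisle's store-floor position by the user's position, then rotate by the user's heading so that "ahead of the user" points up the screen. Here is a hypothetical sketch of that idea (the function name and conventions are mine, not the team's actual implementation):

```python
import math

def relative_screen_position(user_xy, heading_deg, aisle_xy, scale=1.0):
    """Map an aisle's store-floor coordinates into watch-screen coordinates
    centered on the user, with the user's facing direction pointing up.

    heading_deg is measured clockwise from north (compass convention).
    """
    # Translate so the user sits at the screen's center mark.
    dx = aisle_xy[0] - user_xy[0]
    dy = aisle_xy[1] - user_xy[1]
    # Rotate the displacement so the facing direction maps to +y (up).
    theta = math.radians(heading_deg)
    sx = (dx * math.cos(theta) - dy * math.sin(theta)) * scale
    sy = (dx * math.sin(theta) + dy * math.cos(theta)) * scale
    return sx, sy

# Facing east (90° from north), an aisle directly east appears straight ahead.
print(relative_screen_position((0, 0), 90, (1, 0)))
```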

Even though we created a GIF to remedy the situation, the user experience was already lost between the static image on our Apple Watch prototype and the computer screen presenting the GIF. Moreover, on top of our motion design being hard to understand, we encountered new issues with this iteration. For instance, users wanted a readily available overview of their grocery items in conjunction with the app's navigational guidance. In hindsight, it was a no-brainer: shoppers spend as much if not more time checking off items as navigating. But at this stage of our prototype, the mental model we introduced to our users worked against that. We had mistakenly set up a dichotomy between the tasks of navigating and checking items.

More work needs to be done here.

Iteration 3

With our system riddled with usability issues, we remained steadfast in learning from our failures. Eventually, after a few more sessions of user testing and usability inspection, we arrived at a design that resolved most of the issues we were facing: aisle-based navigation. We integrated the tasks of navigating and item checking by providing only the aisle number under which a given grocery item is stored. In turn, we got rid of attempts to provide directions in our prototype, and instead presented the user with the most relevant aisle number in bright colors and big fonts.
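The aisle-based approach reduces the routing problem to grouping and sorting. As a minimal sketch (assuming item-to-aisle data, which the full system would fetch from a store's inventory database):

```python
from collections import defaultdict

def organize_by_aisle(items):
    """Group (name, aisle) pairs by aisle, visiting aisles in ascending order.

    In a typical sequentially numbered store layout, walking the aisles in
    ascending order approximates an efficient path through the store.
    """
    by_aisle = defaultdict(list)
    for name, aisle in items:
        by_aisle[aisle].append(name)
    return sorted(by_aisle.items())

groceries = [("milk", 7), ("cereal", 2), ("eggs", 7), ("bread", 2), ("apples", 1)]
print(organize_by_aisle(groceries))
# -> [(1, ['apples']), (2, ['cereal', 'bread']), (7, ['milk', 'eggs'])]
```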

Third version of our prototype.

We then tested this iteration with users in stores. We found that our users were reluctant to speak to our interface under the high noise levels of a public space, on top of general social acceptability concerns. Therefore, with the data we gathered, we chose to support both touch gestures and voice commands in our prototype, so speaking aloud is never the only option. Furthermore, to our delight, users began to report solid levels of satisfaction and high ratings of usefulness in ensuing sessions of user testing.

Almost there.

Final Deliverable

At this point, we had reached the end of our month-and-a-half home stretch to deliver an insightful solution. Indeed, we found that since aisles are adjacent and already organized for wayfinding in stores, our users only needed help knowing which aisle to move toward. Moreover, using Framer.js, we were able to demonstrate functional and consistent gestures and animations that support the system's learnability. We also further simplified the number of steps the user has to take to access voice commands.

A GIF is worth a thousand words.
Thanks for helping out Siri.

We publicly presented the newest version of our prototype to our fellow HCI students and faculty, inviting critical evaluations from experts with fresh eyes.

Warning, incoming HCI jargon.

Issues such as scalability (how to handle a large number of aisles), generalizability (can the app support non-aisle-based stores such as IKEA?), dialog initiative (deleting a grocery list item should require a confirm button), and predictability (many people wanted to tap a non-clickable item label and be taken to the associated aisle's list) were brought to light. Not to mention other miscellaneous details, such as the orientation of the Apple Watch screen relative to a user's arm position when pushing a grocery cart. My team took copious notes and planned to continue enlisting the help of our users to improve our application's interface.

End of the semester demo day for 1st year MS-HCI students at Georgia Tech.

Future Plans

In the future, we plan to develop a higher-fidelity prototype to include the additional features and address the new concerns. The complete system would require a companion iPhone app to import grocery lists and manage the Apple Watch app, as well as secure connections to the inventory databases of respective grocery stores. We look to achieve this through business partnerships with grocery stores, or through a crowdsourcing initiative. And of course, we plan to perform benchmark tests to measure the time saved by using our app. This will serve as further evidence for continuously enhancing our application's user experience.

Many thanks to my teammates, whose talented contributions made this project possible. Featured in the photo: Meeshu Agnihotri (second from right), Katherine Kenna (first from right), Reema Upadhyaya (first from left), me (second from left).

TL;DR

Rist™ is a hands-free smartwatch app that enhances your grocery list with an aisle-based wayfinding design. Items are added on a separate mobile application or by voice on the watch via the “+” icon, and the application automatically organizes items along the most efficient path. Additionally, the user can check or uncheck items and swipe to delete them permanently.

Our target users are parents who need to expedite their grocery shopping trips when accompanied by a young child. The problem addressed is finding the most efficient in-store navigational pathway to help a user complete his or her grocery errand. Our hands-free interface organizes items by aisle, viewable all at once or individually. This takes away the intrusiveness of the hand-held application interfaces currently dominating the market.


If you enjoyed this article, please recommend it to others or write me a comment to share your thoughts!
