Final Project

Concept

A recipe app that pairs with a motion-detecting speaker to enhance and optimize your cooking experience!

Our project features a mobile app that pairs with a motion-detecting device. It provides a more interactive cooking experience and gives users the opportunity to follow along step by step at their own pace. Users can sync their phone to the device, select a recipe, and have it read aloud. The device also features motion detection, so the user can easily swipe between steps and be prompted with the next directions when they are ready. This is helpful in cases where the user’s hands are messy and they don’t want to touch their phone directly.

Coo-Coo concept image

The speaker was created via model prototyping, and the mobile app was created by first making a wireframe prototype and then a high-fidelity prototype. We wanted to give our product some weight so that the model had a good physical representation, which we did in a variety of ways, such as adding buttons for the user to press. We wanted to give users an efficient cooking experience that would make it easy to follow along and navigate through directions.

Goals

We created our Coo-Coo prototype with the intention of evaluating the usability of our app and speaker by testing a way to sync recipes and efficiently follow recipe steps while cooking. Syncing across devices means that recipes can be bookmarked on the go and followed in the kitchen in real time, and we wanted to see how effectively these features would work.

Implementation

What Did We Implement?

Our project consisted of two main parts: a “speaker” built as a model prototype, and a mobile app created as an interactive, hi-fi prototype. To create the Coo-Coo speaker, we gathered materials such as felt, foam, glue, tissue paper, and an unused pencil container.

Model prototype, initial sketches and wireframes

What Does It Do?

As stated above, Coo-Coo is a recipe app that pairs with a motion-detecting speaker. The first component is the mobile app. The app allows users to search for recipes, favorite and save recipes for later use, and keep track of what their friends have made, giving it both a personal and a social component. By keeping track of what their friends have made, users can also view the ratings their friends gave each dish, which helps them decide on potential meals to make in the future.
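
To make the personal and social components more concrete, below is a rough sketch of the kind of data model an app like ours might keep. All of the class and field names are hypothetical illustrations; our actual prototype was a non-functional hi-fi mockup.

```python
# A hypothetical sketch of the data the app would track; our prototype was a
# non-functional hi-fi mockup, so these names are illustrations, not real code.
from dataclasses import dataclass, field


@dataclass
class Recipe:
    title: str
    steps: list[str]
    favorited: bool = False  # saved to the Recipe List for later use


@dataclass
class FriendActivity:
    friend: str
    recipe_title: str
    rating: int  # the 1-5 rating a friend gave the dish


@dataclass
class UserProfile:
    recipe_list: list[Recipe] = field(default_factory=list)
    friend_feed: list[FriendActivity] = field(default_factory=list)

    def meal_ideas(self, min_rating: int = 4) -> list[str]:
        """Dishes that friends rated highly: candidates for a future meal."""
        return [a.recipe_title for a in self.friend_feed if a.rating >= min_rating]
```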

How Does It Work?

Once a recipe is added to the Recipe List, the user can sync their phone to the Coo-Coo speaker and select the read-aloud option for that recipe. Since the speaker is motion activated, users can move through the recipe’s instructions by swiping their hand to the right or left (right advances to the next step, left returns to the previous one). When cooking, people are often left with messy hands that make it hard to scroll through a device; by incorporating a motion-detecting speaker, a user can go through a recipe without getting their phone or other electronic device dirty. The speaker starts reading the recipe when the user says ‘Coo-Coo, I’m ready’. Once the recipe is completed, the speaker says “All steps have been read”, so the user gets auditory feedback about where they are in the cooking process.
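
Because our speaker was a mockup, none of this behavior was actually coded, but the interaction it simulates boils down to a small state machine. The sketch below is purely illustrative; the class, method, and phrase-matching details are our own assumptions.

```python
# A purely illustrative sketch of the step-navigation logic the speaker
# simulates; none of this was implemented in the Wizard-of-Oz prototype.
class RecipeReader:
    READY_PHRASE = "coo-coo, i'm ready"

    def __init__(self, steps: list[str]):
        self.steps = steps
        self.index = -1       # no step has been read yet
        self.started = False

    def hear(self, phrase: str) -> str:
        """Begin reading when the user says the wake phrase."""
        if phrase.strip().lower() == self.READY_PHRASE:
            self.started = True
            self.index = 0
            return self.steps[0]
        return ""

    def swipe(self, direction: str) -> str:
        """A right swipe advances a step; a left swipe goes back one."""
        if not self.started:
            return ""
        if direction == "right":
            if self.index + 1 >= len(self.steps):
                return "All steps have been read"
            self.index += 1
        elif direction == "left":
            self.index = max(0, self.index - 1)
        return self.steps[self.index]


# Example walk-through of a two-step recipe:
reader = RecipeReader(["Step 1: Spread peanut butter", "Step 2: Swirl on jam"])
print(reader.hear("Coo-Coo, I'm ready"))  # Step 1: Spread peanut butter
print(reader.swipe("right"))              # Step 2: Swirl on jam
print(reader.swipe("right"))              # All steps have been read
```

Clamping the left swipe at the first step keeps the gesture mapping predictable, which ties into the hand-movement consistency issue we note in the analysis below.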

Demoing the Coo-Coo speaker and mobile app

This was our final prototype:

Evaluation

We conducted a usability test with a UW student, taking note of their feedback as they completed our task using our final prototype. We used the “Wizard of Oz” testing method, with one member acting as the “wizard” and providing the voice of the Coo-Coo speaker. Our evaluation was based on the likeability, intuitiveness, and effectiveness of our prototype. At the end of the test, the participant rated their experience on a 1-to-5 Likert scale. We also asked them to verbalize and explain their actions while interacting with the product via the “think-aloud” method so that we could take notes during the session.

During the test, we gave the participant the following task: using the Coo-Coo app and the Coo-Coo speaker, make a peanut butter & jelly sandwich according to the recipe “Best PB&J in the World.”

Our script was as follows:

  • Powering up: “Hello, I am the Coo-Coo speaker”
  • Syncing: “Coo-Coo speaker synced with ‘Ally’s phone’ ”
  • When the user pushes the button to read out the steps:
  • “When you’re ready to start, say ‘Coo-Coo, I’m ready’ ”
  • “Step 1: Spread a half teaspoon of butter on each slice of bread”
  • “Step 2: Spread one slice of bread with peanut butter”
  • “Step 3: Swirl jam onto peanut butter”
  • “Step 4: Cover with other slice of bread”
  • “All steps have been read”

The voice interaction with Coo-Coo was pre-recorded, and we used a Bluetooth speaker placed behind the model prototype to simulate the speech coming from it. The Bluetooth speaker was connected to a phone that held the pre-recorded clips, so that the correct response could be played depending on how the participant interacted with the mobile app and model prototype.
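
If we were to run this kind of session again, the wizard’s job could be automated with a small “soundboard” script instead of manually tapping through clips on the phone. The sketch below is one possible approach, assuming the third-party playsound package and hypothetical clip file names; it is not what we actually used.

```python
# One possible "soundboard" for the wizard: step through the pre-recorded
# clips in script order, playing the next one on a keypress. The playsound
# package and all file names here are assumptions, not what we actually used.
from playsound import playsound  # third-party: pip install playsound

CLIPS = [
    ("powering up",  "hello_coo_coo.mp3"),
    ("syncing",      "synced_allys_phone.mp3"),
    ("ready prompt", "say_im_ready.mp3"),
    ("step 1",       "step1_butter.mp3"),
    ("step 2",       "step2_peanut_butter.mp3"),
    ("step 3",       "step3_jam.mp3"),
    ("step 4",       "step4_cover.mp3"),
    ("done",         "all_steps_read.mp3"),
]

for label, filename in CLIPS:
    input(f"Press Enter when the participant is ready for: {label} ")
    playsound(filename)  # blocks until the clip finishes
```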

After the test, the user gave generally positive feedback; the feedback from this session can be seen in the next section.

Analysis

What Worked Well

  • The participant was able to successfully create a PB&J sandwich according to the recipe
  • The participant told us the PB&J tasted delicious!
  • During the 451 showcase, people thought this would be useful and liked the overall concept
  • Several people thought the chicken-like Coo-Coo logo was eye-catching and “cute”, which helped make our Coo-Coo app more memorable

What Needed Improvement

  • Responsiveness of voice (slight delay between interaction & voice)
  • Remembering to turn on the candle inside the Coo-Coo speaker (which we forgot to do for both the “Wizard of Oz” evaluation session and for the actual video itself)
  • Consistency with the direction of hand movements for the speaker (rightwards vs. leftwards)

What to Change + What to Do Next

Since the voice responses to user interactions were slightly delayed, we would implement some sort of subtle signal between proctors to make sure the commands were prompted on time. The recordings themselves also had pauses at the beginning of the audio, so editing those out would make responses feel more immediate. With more time, we would incorporate this audio editing and run additional user testing to gather more data and results.