Prototyping for Augmented Reality

Irene Alvarado
ARrange: Family Sleep (Team Aasma)
8 min read · Oct 25, 2016

How do you prototype when designing for augmented reality (AR)? This was an important question our team grappled with while ideating solutions to tackle sleep problems in families. We had been working on user research and studying the problem space of family sleep for a few weeks, but after sketching and storyboarding a few design ideas, we felt ready to start prototyping and exploring the affordances of AR in the context of our design problem.

What follows is a short discussion of how we approached different prototyping techniques and what we learned from each of them. The process wasn’t strictly linear: many of our prototypes were created in parallel, we didn’t move neatly from lo-fi to hi-fi, and the process looped back on itself at times. Throughout, we tried to keep the “What do Prototypes Prototype?” paper by Stephanie Houde and Charles Hill in mind. They describe a framework for evaluating the intended goal of a prototype based on four categories:

  • Role prototypes: explore what an artifact might do for a user
  • Implementation prototypes: explore the technical feasibility of a design idea
  • Look and feel prototypes: explore what it would be like to look at and interact with an artifact
  • Integration prototypes: represent the full user experience of an artifact
Four categories of prototypes. From “What do Prototypes Prototype?”

These categories were useful in refining what we were exploring through each prototype and keeping each experiment small and manageable.

Start with paper and simple objects

It should come as no surprise that we started much like any other design process: by picking tools that would allow us to prototype cheaply and quickly. Paper, scissors, tape, markers, and other simple objects were good enough for our early explorations.

Our very first paper prototypes were crude and simple, but that was the point. They were quick and cheap to make.

We wanted to achieve two things during this process: a) gain an understanding of the affordances and possibilities of AR and b) test which of our ideas merited further exploration through higher fidelity forms of prototyping. The former was important because none of us had worked with AR in the past. And apart from trying out games such as Pokémon Go, there weren’t any AR experiences we could point to in our daily lives for inspiration. The latter was important because we had a long list of needs and pain points to possibly address from our research, but very little intuition for which problems to prioritize — or which ones we could reliably address given our project’s time constraints.

One direction we wanted to explore involved the combination of augmented reality with tangible objects. In the early phases of the project we looked at various kinds of medical breathing devices and sleep gadgets (CPAP machines, special alarm clocks, air purifiers, sleep monitoring wearables) and asked whether their use could be improved or replaced with AR. Pasting paper onto objects, or tethering it to strings so we could suspend forms in midair, turned out to be a great way to simulate interactions and use cases. We used simple blocks of wood, cardboard, and other common household objects as stand-ins for devices we didn’t own (like a CPAP machine) to explore the role different tangible objects play in our lives.

Using simple objects to brainstorm design ideas

Bodystorming on site

Our paper prototypes and toys became especially useful once we moved them into the environments we planned to design for: bedrooms, kitchens, a car, living rooms. It sounds obvious, and that’s the beauty of bodystorming. Situating our design activities outside the studio and in the right context had important consequences for how we thought of AR spatially. And again, this gets back to the importance of prototyping beyond 2D.

For example, we acted out the process of fetching a container with food from a refrigerator then heating it up in a microwave. These simple actions made us realize we’d have to consider how a user might come to know that an object is embedded with AR in the first place. If only two out of five similar containers in the fridge were tagged with AR (assuming the AR can be triggered in some way and isn’t always visible) — how would the user know which one is which?

The use of the microwave, on the other hand, was just an afterthought that came about while naturally acting out the scenario, but the interaction led us to realize that AR might offer a better way to deliver the “metal warning” message on a microwave. Again we were exploring the role this technology could play in our day-to-day lives. The takeaway was that well-positioned, context-aware AR messages might serve as useful nudges toward good behavior and away from bad behavior.

Moving on to video prototypes

Were there any well-accepted interaction techniques for AR we could leverage? We read through some of the academic literature (most notably from the MIT Media Lab and the HITLab) and found fantastic technical examples and vision-driven propositions for combining AR with tangible objects. But many of the interaction techniques used have yet to be tested on a wide variety of users within a commercial product. In the consumer world, most AR applications (AccuVein, Google Translate, Hyundai Virtual Guide) are screen-based. It’s difficult to know what might translate from that experience into one that does away with a phone or a tablet.

At this point, video prototypes came in handy for exploring role and look and feel. Though video ultimately remains a 2D medium, the process of making the videos required a form of bodystorming and an attention to spatial detail that surfaced real opportunities and problems with our design ideas.

As an example, we used video to explore how it might feel to have objects push out messages or warnings when they’re associated with different members of a household. We envisioned a scenario in which a sick family member drinks tea from a cup that then starts to emit an AR glow when other family members try to drink from it, the idea being that the glow would deter healthy family members from drinking from the “contaminated” cup.

“Visible Germs” video prototype

Shooting the video helped us think through exactly where and how an AR glow might appear around the cup. How should the ‘healthy’ actor hold the cup with respect to the glow? When should s/he start to perceive the glow and possibly react to the warning: when grabbing the cup, when walking up to it, when glancing at it? Who gets to see the AR warning: just the ‘healthy’ actor, or also the ‘sick’ actor who originally drank from the cup? Furthermore, adding the actual AR effects in post helped us think through what the glow should look like, whether it was more appropriate as text, some kind of overlay, or some other 3D form, and even question whether AR was the best medium to tackle the scenario.

The point was not just to explore whether the idea had value and the scenario was meaningful — we actually ditched both the scenario and the ‘smart contextual object’ idea in our final solution. Instead, the video helped us discover dozens of small details we hadn’t even realized we had to think through when designing for AR.

Another example video prototype exploring the use of physical blocks and gestures to trigger AR content

Exploring Gesture

Midway through our project we found ourselves wondering about gestural interactions and wanting to get a sense of their potential. Specifically, we wanted to explore what it might feel like to control or interact with AR via gestures, as well as the technical feasibility of doing so.

Our first prototype allowed a user to scale an AR hologram by pinching to zoom, a gesture familiar to most smartphone and trackpad users today. To build it we did delve into a little coding, but otherwise used relatively inexpensive and accessible hardware and software: a Leap Motion, a depth-based gesture controller; real-time interactive machine learning powered by the Wekinator library; and Unity. Using the Wekinator GUI and the Leap Motion, we trained a very basic neural network to produce a single continuous output representing how ‘closed’ or ‘open’ a hand’s pinch gesture was. The input from the Leap consisted of the x, y, and z coordinates of each finger in the hand. We then streamed the output to Unity in real time using Open Sound Control (OSC). A low value would reduce the size of a given digital model in Unity (in one test we were using a 3D model of a snail), whereas a high value would increase the model’s size.

Pinch-zoom-like gesture to control AR content size
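To make the pipeline concrete, here is a minimal sketch of the receiving end, written in Python with the python-osc package rather than in the Unity script we actually used. The /wek/outputs address and port 12000 follow Wekinator’s defaults, and the scale bounds are placeholders.

```python
# Sketch: receive Wekinator's continuous pinch output over OSC and map it to a
# model scale, mirroring what our Unity script did with the model's scale.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

MIN_SCALE, MAX_SCALE = 0.5, 3.0  # placeholder bounds for the snail model

def on_pinch(address, *values):
    # Wekinator sends one continuous output in [0, 1]: ~0 closed pinch, ~1 open.
    pinch = max(0.0, min(1.0, float(values[0])))
    scale = MIN_SCALE + pinch * (MAX_SCALE - MIN_SCALE)
    # In Unity this value would drive transform.localScale on the 3D model.
    print(f"pinch={pinch:.2f} -> model scale={scale:.2f}")

dispatcher = Dispatcher()
dispatcher.map("/wek/outputs", on_pinch)  # Wekinator's default output message

# Wekinator sends its outputs to port 12000 by default.
BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher).serve_forever()
```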

The prototype itself helped us explore whether haptics-free gestures might be a useful way of controlling and interacting with AR content. In combination with the next prototype, it convinced us to stick with direct interactions with physical objects.

Moving AR content in space via tangible objects

For our second technical prototype, we used similar tools to add AR content to individual blocks of wood so that movement of the physical artifact would generate movement in the digital content. We relied on Unity and Vuforia, a platform for creating AR markers, to paste two unique AR markers (in the form of image targets) onto two different blocks of wood.

Prototyping interactions with AR through physical objects
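Vuforia’s image-target tracking lives inside Unity, so as an illustration of the same idea, here is a rough sketch in Python that uses OpenCV’s ArUco markers as a stand-in for Vuforia image targets: a printed marker on each block is detected every frame and a label is anchored to it, so moving the block moves the digital content. It assumes a recent OpenCV (4.7 or later, with the aruco module) and a webcam; the marker ids and labels are made up.

```python
# Stand-in for the Vuforia setup: track printed ArUco markers on two blocks of
# wood and anchor a simple overlay to each one.
import cv2

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(aruco_dict, cv2.aruco.DetectorParameters())

# Hypothetical mapping from marker id to the content "pasted" on each block.
CONTENT = {0: "Block A: bedtime routine", 1: "Block B: sleep log"}

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    corners, ids, _ = detector.detectMarkers(frame)
    if ids is not None:
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
        for marker_corners, marker_id in zip(corners, ids.flatten()):
            # Anchor the overlay at the marker's center so it moves with the block.
            cx, cy = marker_corners[0].mean(axis=0)
            label = CONTENT.get(int(marker_id), f"marker {marker_id}")
            cv2.putText(frame, label, (int(cx), int(cy)),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("block tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```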

Though simple and technically crude, the experiment helped us experience what it would be like to natively combine AR content with specific physical objects. We got a sense of the AR content’s field of view while twisting and turning the blocks of wood, as well as of the kinds of behavior one might expect when bringing the blocks together (should the AR content disappear when stacking one block on top of another?).

This prototype also helped us compare direct touch manipulation to the gestural pinch-and-move experience of the Microsoft HoloLens. HoloLens lets users select or activate holograms using air-tap gestures and absolute movement of the hand in space. It was important to get a sense of how our interaction technique ideas stacked up against this first-generation commercial AR product.

Examples of gestures using the HoloLens

Conclusion

“Listen to the medium” is the advice we tried to keep in mind from James Tichenor and Joshua Walton, two lead interaction designers from the Microsoft HoloLens team who recently spoke at CMU. This is not to say that we were inspired solely by the technology: our prototypes were partly useful in gaining an intuition for the possibilities of the medium, and especially useful in discarding suboptimal ideas. More importantly, prototyping became a crucial way to communicate our design ideas to our academic advisor and client. The key difference when involving AR was that slides, sketches, and 2D mockups could only take us so far; we felt the need to build in 3D as much as possible to let our design ideas speak for themselves.

Note: for more thoughts on prototyping for AR, this is a good resource: http://www.slideshare.net/marknb00/rapid-prototyping-for-augmented-reality
