ARrange: Process Documentation

Carnegie Mellon MDes/MPS IxD Studio: Project 1

Allison Huang
ARrange: Family Sleep (Team Aasma)
Oct 24, 2016 · 23 min read

--

Project Overview

For our first Interaction Design Studio project, entitled “Screens & Beyond, a Philips Collaboration”, we were given a design brief about designing digital sleep solutions for family health. Most digital sleep solutions today focus on a very individualized sleep experience, but what could a solution look like that is designed specifically for a family’s sleep health? The prompt gave us the freedom to explore all sorts of questions about defining families, sleep, and health while also encouraging us to design with future-facing technologies in mind.

To that end, we designed arrange: a holistic digital platform for families that places embedded information at our fingertips. With arrange, family members can connect the information they find most meaningful to the objects that surround them in the contexts in which they want that information. It combines the intuitiveness of the physical world with the richness of the digital world. The arrange system consists of two components: a mobile application to tag objects with new information and access previous tags, and the physical objects that can reveal information upon physical interaction.

L: mobile splash screen. R: physical object to be tagged.

The power of arrange lies in its ability to embed information directly at our fingertips, in the contexts in which we want it. For families, this means parents can tag objects with information to bring their children’s data out of their phones and into the environment, where they can keep an eye on it. It also means that parents can leave reminders and warnings for their children when they need them and where they need them: don’t forget to brush your teeth for two minutes, lock the door behind you when you come in, don’t leave a spoon in your soup when you microwave it. In addition, it gives families the ability to tag personal and sentimental objects with nostalgic memories or little notes for each other, bringing in an opportunity for deeper meaning.

Exploratory Phase

We began by looking at family and sleep as a whole: doing secondary research and pulling out observations and takeaways from our own lives and those of the people around us. Three main questions quickly emerged as points of interest to our team:

  1. What factors cause sleep disruptions? How might those be alleviated through technology?
  2. How might different family situations (like living situations or levels of health) affect family sleep?
  3. What is an appropriate use of technology in this sensitive space?

To explore these questions, we challenged ourselves to each come up with scenarios (written or sketched) where a family’s sleep could be disturbed and our solution could support the family. We scoped down to a scenario where one member of the family was ill, although we purposefully didn’t specify the family situation or the type/level of illness to allow for further exploration. In this early phase, our scenarios ended up showing where a designed intervention could prevent or alleviate a sleep disruption in situations that ranged from an oncoming cold to a child in the hospital. Some ideas we continued to flesh out included interacting with medical professionals in the middle of the night, how diet affects sleep and vice versa, the bedroom environment, and monitoring/storing/communicating data.

We also began to think about how we could design for multiple family situations. We started to experiment with visualizing how sleep was affected by the previous day and vice versa in different families (single parents, children with chronic illnesses, etc.) so we could keep an open mind about what kind of family we were designing for.

After we finished going through our scenarios, we synthesized our ideas and interests into the following design questions:

  • How might we incorporate information about daily habits/routines (food, work) into sleep and vice versa?
  • How might we speed up recovery of a sick patient?
  • How might we use the sensory and physical aspects of the environment to promote good sleep?
  • How might we anticipate disruptors to sleep and routines?
  • How might we play to the benefits of a family?
  • How might we help family members support each other and be supported?

We also came up with these design principles:

  • Be appropriately urgent based on the hierarchy and urgency of information
  • Information should be immediately digestible
  • Move easily between levels of attention (related to the first)
  • Work even when technology fails (fail gracefully, have backups)
  • Keep in mind the 24/7 nature of how sleep affects our waking life and vice versa
  • Address a variety of family structures and situations
  • Instill confidence (help parents feel like good parents)
  • Keep family transitions in mind

Interviews

At this point in the project, we needed to speak with families to get a better understanding of our problem space and where we could situate our solution. We scheduled interviews with five participants: parents of different ages with children in different life stages and in different professions. In preparation for the interviews, we wrote an interview protocol that aimed to understand each family’s everyday, ideal, and disrupted sleep/routines and how those affected and were affected by waking life.

While each interview provided unique insights (see more in-depth summaries here), each of our interviewees said that communication around time, routines, and health issues deeply affected their family’s sleep.

One participant said that he’s much more time-conscious than his wife and kids and wants to help them learn how to be aware of time. Similarly, another participant saw how intensely time affected their entire family’s next day and said bedtime routines are “super critical, number one priority.” Communicating both with other adults and with children about time and routines, especially when it comes to bedtime and waking up, was a common priority.

Our interviewees had some pretty dramatic stories about how health-related issues affected their family’s sleep. Two spoke of respiratory issues that their children suffered from in their sleep. On a less dire note, some parents also talked about how seemingly innocuous sicknesses affected both their sleep and the sleep of their children: coughing can keep people awake, and stomach bugs are common sleep disruptors for everyone involved.

Visualizations of stories we heard from interviewees about sleep disruptions related to respiratory issues.

Four out of our five participants mentioned that either a family member or they themselves had experienced a potential sleep disorder. One research participant spoke of a potential sleep disorder that went overlooked, even after his wife spoke with him about it, until he saw his own sleep data. Because it seemed like an area with so much depth, we decided to look into this space some more.

Visualization of how data + wife noticing sleep deprivation led to sleep testing for one participant

Sleep Disorders

From some secondary research we did on sleep disorders, we learned a lot about how much they affect the lives of Americans (our main research was from .gov sites and American organizations).

1. A lot of people have sleep disorders: The CDC has said that about 50–70 million US adults have a sleep or wakefulness disorder, and 1 in 3 adults don’t get enough sleep (at least 7 hours a night).

2. Sleep disorders have complicated relationships with each other: There seems to be a relationship between sleep apnea and insomnia, but does insomnia cause sleep apnea or does sleep apnea cause insomnia?

3. Disorders often go undiagnosed: Sleep apnea often goes undiagnosed. To be diagnosed, you must visit a sleep clinic (there aren’t many) and stay the night; there is no blood test that can help diagnose the condition.

4. Misdiagnosis carries its own risks: When an undiagnosed disorder is confused with insomnia, sleeping pills become the default treatment, and they are not necessarily a permanent answer.

We also visited a store in the Pittsburgh area called REMWorks. Two main principles we gleaned from our discussion with the staff were to empower people with sleep disorders and to not make them feel like they’re sick. The store provides customers with more options than are typically available. The premise of a storefront rather than an overtly clinical setting can make people feel more comfortable and less like “patients” as well. The store’s staff also provides a service of regular check-ins after a fitting and purchase to help their customers feel comfortable with their treatment (and maintain compliance).

Design Territory Presentation

We closed out the exploratory research phase of this project with a presentation of our design territory. In the presentation, we introduced our findings from interviews and secondary research and discussed our potential user groups, scoped problem space, design principles, and technology ideas.

At this point in the project, we wanted to focus on designing for families with members with sleep disorders. We saw a powerful relevance to our design brief and a lot of reasons why this space is hard to design for. Sleeping disorders often fly under the radar but can deeply affect a person’s mood, state of mind, and even physical health.

“How Might We” questions, part 2

We took another look at our original “how might we” questions to formulate ones more relevant to the direction our interviews and further research were taking us. The ones we presented asked how we could…

  • Improve the sleep quality of patients and family members with sleeping disorders?
  • Help people be aware of their own sleep patterns and those of the people they live with?
  • Collect data, both quantitative and qualitative, that would aid in the diagnosis and treatment of sleep disorders in a way that is intuitive and non-disruptive?
  • Reduce the stress caused by health-related sleep disruptions?
  • Leverage the intuitiveness of the physical world along with the richness of the digital world?

Design Principles

We also came up with four design principles based on our research and early explorations we had already done in the problem space:

  • Holistic approach: Keep in mind the 24/7 nature of how sleep affects our waking life and vice versa
  • Design for failure: Technology should fail gracefully, especially when the stakes are high like with health. (Think about connected lights: what happens when you can’t access the digital controls?)
  • Calm technology: It should be intuitive and non-disruptive. Our system should be useful and usable even when users are sleepy.
  • Hierarchy of information: We want to design for different levels of attention and hierarchies of information.

Design principles

Technology

The last bit of our presentation focused on our ideas about technology that we could take forward into designing our final solution. We were already thinking about how to merge the digital and physical worlds that we have at our disposal. We worked with three main technologies:

  • Tangible: We were interested in bringing in the intuitiveness of physical objects through tangible technology. The naturalness of these interactions has a humane charm.
  • Internet of Things (IoT): We also took an IoT perspective, thinking about the environment and the availability of sensors to complement and facilitate interactions. Using IoT can render the solution almost invisible in space.
  • Augmented Reality (AR): We wanted to combine tangible technologies and IoT with augmented reality, something we hadn’t seen done before.

Generative Phase

Moving forward, we did some more research on what exists currently: both widely adopted technologies and novel technology or prototypes. We found some examples of technology to track and set reminders about sleep more holistically (sleep functionality is now integrated into Apple’s native iOS Health app). There are apps now that use your smartphone’s camera to read your heart rate. MIT had just released a way to detect emotions using wireless signals. We were also interested in exploring new ways to communicate data that aren’t restricted to two dimensions, specifically through augmented reality.

We referred often to work coming out of MIT Media Lab, particularly Hiroshi Ishii’s musicBottles and Sublimate and Valentin Heun’s Reality Editor. These projects brought tangible objects to life in a digital world, connecting the physicality of our environments to information that is usually hidden beneath two-dimensional screens. In addition to being inspired by their concepts, we took hints from their explorations, prototypes, and concept videos to help us think about how we could best communicate and prototype our own work.

Reality Editor concept video

How do you prototype when designing for augmented reality (AR)? This was an important question our team grappled with while ideating solutions to tackle the problem of sleep in families. It should come as no surprise that we started the process much like any other design process: by picking tools that would allow us to prototype cheaply and quickly. Paper, scissors, tape, markers, and other simple objects were good enough for our early explorations.

We wanted to achieve two things during this process: a) gain an understanding of the affordances and possibilities of AR and b) test which of our ideas merited further exploration through higher fidelity forms of prototyping. The former was important because none of us had worked with AR in the past. And apart from trying out games such as Pokémon Go, there weren’t any AR experiences we could point to in our daily lives for inspiration. The latter was important because we had a long list of needs and pain points to possibly address from our research, but very little intuition for which problems to prioritize — or which ones we could reliably address given our project’s time constraints.

In the early phases of the project we looked at various kinds of medical breathing devices and sleep gadgets (CPAP machines, special alarm clocks, air purifiers, sleep monitoring wearables) and whether their use could be improved or replaced with AR. Pasting paper onto objects, or tethering it to strings so we could hover forms in midair, were great ways to simulate interactions and use cases. We used simple blocks of wood, cardboard, and other common household objects as stand-ins for devices we didn’t own (like a CPAP machine) to explore the role of different tangible objects in our lives.

We kept bringing it back to the design brief, our research, and our design principles, asking questions like:

What would actually be helpful for families as a unit? What could be used to improve sleep quality, for the user themselves or the user’s family members? What would fade nicely into the background or be integrated effortlessly into people’s lives?

Shaping Our Concept

As we explored the space where technology and sleep meet, we continued to fall back on the four design principles we developed early on — holistic approach, design for failure, calm technology, and hierarchy of information. They were an integral part of developing our design, as we often found ourselves referencing back to these principles, sometimes creating self-imposed challenges. Balancing our far-reaching technological ideas with the needs of the families we were designing for was a constant give-and-take on our team, but our parallel processes of prototyping and writing relevant scenarios brought us to a final solution that is both compelling as a family health and communication solution and technically innovative.

Bodystorming and Video Prototypes, Round 1

Our paper prototypes and toys became especially useful once we moved them into the everyday environments we planned to design for: bedrooms, kitchens, a car, living rooms. It sounds obvious and that’s the beauty of bodystorming. Situating our design activities outside of the studio and in the right context had important consequences for how we thought of AR spatially. And again, this gets back to the importance of prototyping beyond 2D.

One of our first steps was to understand how physical interactions take place on various platforms. Since we were designing for new technologies which are not widely available commercially, we lacked knowledge of interaction patterns in such an ecosystem. We went to MacKenzie’s house to bodystorm with objects that would be found in the house, getting us out of the studio and into the real world. We worked with physical objects throughout the house, bringing our attention to how we interacted with a wide range of objects: from personal or sentimental to the mundane. As we bodystormed we found ourselves noting not only the affordances of each object, but the type of information they might present. Would the more nostalgic objects present important information or a message of a more sentimental nature? Would a toothbrush, being used multiple times a day, be an appropriate place for richer information such as the news or an important reminder?

Using physical objects to inform our interactions

Throughout this process, we wanted to know if there were any well-accepted interaction techniques for AR we could leverage. We read some academic literature (most notably from the MIT Media Lab and the HITLab) and found fantastic technical examples and vision-driven propositions for combining AR with tangible objects. However, many of the interaction techniques they use have yet to be tested on a wide variety of users within a commercial product. In the consumer world, most AR applications (AccuVein, Google Translate, Hyundai Virtual Guide) are screen-based. It’s difficult to know what might translate from that experience into one that does away with a phone or a tablet.

At this point, video prototypes exploring role and look/feel came in handy. Though video ultimately remains a 2D medium, the process of making the videos required a form of bodystorming and an attention to spatial detail that surfaced real opportunities and problems with our design ideas.

We created five video prototypes (edited in After Effects and Premiere) to explore some of our early ideas of how physical interactions might reveal different pieces of data in augmented reality. Three of them began to explore our idea of using an aura as a signifier, which ended up carrying all the way through to our final solution. Inspired by Hiroshi Ishii’s musicBottles project, we also created one prototype that augmented a meaningful object and leveraged a pre-existing affordance of that particular object: another idea that made it all the way through. Finally, the fifth video prototype showed the ability to use gestural interactions to move a hologram around, which did not make it into the final solution; after creating both video and working prototypes, purely gestural interactions no longer seemed compelling.

One of the five video prototypes we created.

While we were creating these first video prototypes, we brainstormed scenes for a concept video that would both capture the technology concept and how it’s relevant to family health and sleep. Bringing it back to our original idea of designing for a family with a sick family member, we grounded our technology ideas in a day-to-day scenario of two parents with a feverish child. The three main scenes we wanted to focus on were:

  • As Dad goes to his car after work, he is reminded to pick up the child’s medicine.
  • The parents tag the sick child’s status to a memento on the nightstand and go to bed. The child’s status appears in the night as a hologram when the object is flipped over.
  • When the parents wake up, they tag medication and chicken soup with directions. When the child wakes up later that morning, he takes the medicine and gets directions for reheating the soup.

Video Prototypes, Round 2

The next video prototype we created focused on the second scene. When we got together to film and edit it, we were inspired by wooden blocks and the innate interactions we’ve been using since childhood with similar toys. It was at this point that we decided to use a simple touch to activate AR holograms and to use the affordance of sliding two blocks together to create a unique, combined hologram revealing a new layer of information.

Making to Learn: Exploring Gesture

Throughout our project, we found ourselves wondering about gestural interactions and wanting to get a sense of their potential. Specifically, we wanted to explore the feeling of what it might be like to control or interact with AR via gestures as well as the technical feasibility of doing so.

Irene working with the Leap Motion prototype

Our first prototype allowed a user to scale an AR hologram by pinching to zoom, a gesture familiar to most smartphone and trackpad users today. To build it we did delve into a little coding, but otherwise used relatively inexpensive and accessible hardware and software: a Leap Motion depth-based gesture controller, real-time interactive machine learning powered by Wekinator, and Unity. Using the Wekinator GUI and the Leap Motion, we trained a very basic neural network to produce a single continuous output representing how ‘closed’ or ‘open’ a pinch gesture was. The input from the Leap consisted of the x, y, and z coordinates of each finger of the hand. We then streamed the output to Unity in real time using Open Sound Control (OSC). A low value would reduce the size of a given digital model in Unity (in one test we used a 3D model of a snail), whereas a high value would increase it.
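
To give a sense of the plumbing, here is a minimal sketch of the receiving end of that pipeline. Our actual receiver was a script inside Unity; this Python version is purely illustrative and assumes Wekinator’s default OSC output address (/wek/outputs) and port (12000), with made-up scale bounds.

```python
# Illustrative sketch only: maps Wekinator's continuous pinch output to a
# model scale, assuming the default /wek/outputs address and port 12000.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

MIN_SCALE, MAX_SCALE = 0.2, 3.0  # assumed bounds for the hologram's size

def on_wekinator_output(address, pinch):
    # Wekinator streams one continuous value: ~0 for a closed pinch,
    # ~1 for an open hand.
    pinch = max(0.0, min(1.0, pinch))
    scale = MIN_SCALE + pinch * (MAX_SCALE - MIN_SCALE)
    print(f"set model scale to {scale:.2f}")  # in Unity: transform.localScale

dispatcher = Dispatcher()
dispatcher.map("/wek/outputs", on_wekinator_output)
BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher).serve_forever()
```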

Final Concept Scoping

We let our ideas about the technology and our scoped-down problem space shape each other. The following how-might-we questions, along with the ability to tag discrete and personal objects already in the environment with unique AR, helped us scope the design of our solution: how might we…

  • Ease parental anxiety, especially when it comes to children’s sleep?
  • Facilitate contextual communication between family members?
  • Bring data and messages to the environments and objects we interact with already?

Refinement Phase

These questions led us to create arrange: a holistic digital platform for families that places embedded information at our fingertips — connecting the information we find most meaningful to the objects that surround us. It combines the intuitiveness of the physical world with the richness of the digital world. The arrange system consists of two components: a mobile application and pre-existing physical objects tagged with information in AR.

Mobile Tagging System

The mobile app enables users to photograph and tag objects with personalized AR information. After taking a photo, users name and assign holograms to their objects. They can choose from a variety of categories, which range from sleep and health to messaging and games. arrange also gives users the ability to assign tags to specific family members and to preview and confirm holograms. Once an object has been tagged, it is cataloged within the app — users can see the most recent objects they’ve tagged as well as edit and activate older tags.

AR Tags

arrange allows objects tagged with AR holograms to be discreet yet meaningful additions to the family ecosystem. Objects with active tags emit an aura in AR, signaling to the user that information is present. The viewer can use the intuitive affordances of touch and natural interaction to bring the AR to life, when and where they want it.

Interaction Design Specifics

Mobile App

The purpose of the arrange mobile platform is to tag objects with AR and to catalog a library of tags, active and inactive. When the user first opens the app, the home screen presents them with two options: create a new tag or choose one from the existing library. Selecting a new tag prompts the user to take a photo or upload an image from their camera roll. Once a photo is taken or uploaded, the user is able to name the object, as specifically or broadly as they want. This naming feature keeps with the personal nature of the system: by not imposing standardized names, we want to empower users to deepen their connection with their objects.

From this point, the user begins to set the AR features of the newly tagged object. From sleep to weather, from messages to games, the user chooses from a range of categories to assign AR to the object. The AR hologram options vary from category to category; however, we kept several settings the same across all of them: network member, AR timeout, and read receipt. These give users the ability to set who will see the AR, how long they want the AR to be present, and whether to be notified when the AR has been viewed. After selecting the options, users can look at the overview screen and confirm their settings. Once confirmed, the tag is automatically activated and cataloged in the library.

The tag library contains a user’s active and inactive tags. Once an inactive tag is selected, the user can simply toggle it back to active. The option to edit a newly activated tag is presented; if selected, the user can step through the settings options again. If the AR settings remain the same, the tag is simply added to the active tags.
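
To make the flow concrete, here is a small sketch (in Python, purely for illustration) of the record a single tag might carry. The field and category names are our own shorthand, not a real arrange schema.

```python
# Hypothetical data model for one arrange tag, based on the flow above.
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class Category(Enum):
    SLEEP = "sleep"
    HEALTH = "health"
    MESSAGING = "messaging"
    GAMES = "games"

@dataclass
class Tag:
    object_name: str                  # user-chosen; as specific or broad as they like
    photo_path: str                   # taken in-app or uploaded from the camera roll
    category: Category                # exactly one category per object
    visible_to: List[str] = field(default_factory=list)  # members of the user's network
    timeout_minutes: Optional[int] = None  # how long the AR stays up; None = until dismissed
    read_receipt: bool = False        # notify the tagger when the AR is viewed
    active: bool = True               # confirming a tag activates and catalogs it

def toggle(tag: Tag) -> None:
    # From the library, an inactive tag can simply be flipped back on.
    tag.active = not tag.active
```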

Final mobile app flow

Through multiple iterations in the wireframing phase, we were able to simplify the tagging process. In earlier iterations, users could select multiple categories and assign them to one object. This interaction became cumbersome, and ultimately we settled on one category per object. We also had many discussions about the interactions within the category settings: where to assign a network member, whether a user should be able to set a timeframe for when the AR could be activated and viewed, and whether to include a read receipt that would provide a level of assurance that the AR was received.

Wireframes, round 1. Can select multiple holograms
Wireframes, round 2. One hologram only. Slightly higher fidelity

Visual Design

We developed the user interface over the course of a week, discussing and designing some of the fundamental interactions. While this is something we hope to develop further in future iterations, the version we present here showcases the mobile function of the arrange system. The visual design for arrange plays to the holographic forms of the past, taking inspiration from the blue/green hues of holograms, keeping a simple color scheme, and sparingly using the gradient holographic cube form. We chose the cube shape to represent the three-dimensionality of physical objects as well as the AR forms we sought to embody.

It was important to show the physical affordances of objects, as well as the AR holograms they can produce, in a visual language that made sense; to do this, we used solid and dashed strokes to indicate the subtleties. It was also important that the mobile app be relatively clean and simple in order to highlight the objects being tagged. Photography only comes into play when the user adds objects, either by taking or uploading an image. All other imagery is iconography, drawing a clear distinction between the system and the user’s own objects.

Iterations of visual design

Finally, we created an interactive prototype in InVision to test interaction flows and demonstrate its intended use.

Note: some iconography created by Sumana Chamrunworakiat on the Noun Project

Physical Interactions

Once an object has been tagged, it emits an aura visible in a user’s AR. The aura’s glow is a subtle, peripheral signifier that gives the first level of information in our communication hierarchy: that there is an active tag nearby. Touching the object reveals the full hologram in AR, showing data relevant to the family and home or messages that family members leave for each other.

Objects with active tags can be combined to reveal new pertinent information. For example, if an object has been assigned to include the sleep quality of a loved one and another object has been assigned to monitor the room’s temperature, a user can bring the two objects together to combine the AR and reveal the loved one’s body temperature. After the AR has been viewed, the hologram can either time out, as specified by the user setting up the tag, or it can be double-tapped to turn it off.
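
Summarized as a tiny state machine (a Python sketch of our own; the state names are made up), the interaction looks like this:

```python
# Hypothetical state machine for one tagged object, per the description above.
from enum import Enum, auto

class TagState(Enum):
    AURA = auto()      # active tag: only the subtle peripheral glow is visible
    HOLOGRAM = auto()  # touched: the full hologram is revealed
    OFF = auto()       # dismissed by double-tap or timed out

def on_touch(state: TagState) -> TagState:
    return TagState.HOLOGRAM if state is TagState.AURA else state

def on_double_tap(state: TagState) -> TagState:
    return TagState.OFF if state is TagState.HOLOGRAM else state

def on_timeout(state: TagState) -> TagState:
    # Whether and when a hologram times out is set by whoever created the tag.
    return TagState.OFF if state is TagState.HOLOGRAM else state

def can_combine(a: TagState, b: TagState) -> bool:
    # Bringing two revealed objects together unlocks a combined hologram.
    return a is TagState.HOLOGRAM and b is TagState.HOLOGRAM
```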

Progression of interactions: 1. Tap to activate when aura is present; 2. Holograms reveal separate information; 3. Combined holograms reveal another level

Concept Video

For our concept video, we took our learnings from our prototypes and finalized scenarios that were both relevant to our design brief and killer applications of our system. We decided on the three main scenarios in the video (all around a family with a sick child) and then created holograms for each.

  • Scenario One: blocks overnight. Mom sets up the block AR and places the block on her nightstand. She goes back to tuck her child in. Overnight, she glances at the AR to check on the child.
  • Scenario Two: kitchen. Mom has tagged certain dishes in the fridge with instructions about how to heat them up. The kid chooses the soup and then tries to heat the soup up while it’s covered with metal. There’s a warning on the microwave, so he removes the cover and heats it up.
  • Scenario Three: Dad’s message. Dad can tag a message to an object from afar. He records a video message of himself and signs off with a handwritten note. The message is tagged to a book he knows sits on his child’s nightstand. As the child is getting ready for bed, he sees the aura on the book. He opens the book and gets the message, and closes the book to deactivate it.

Live Demo

For our live demo, we used similar tools to add AR content to individual blocks of wood so that movement of the physical artifact would generate movement in the digital content. This technical demo was relevant to some of our scenarios where objects with AR content were moved around in space. We relied on Unity and a platform for creating AR markers called Vuforia to paste two unique AR markers (in the form of image targets) onto two different blocks of wood.

Though simple and technically crude, the experiment helped us experience what it would be like to natively combine AR content with specific physical objects. We were able to get a sense of the field of view of the AR content while twisting or turning around the blocks of wood, as well as the kinds of things one might expect when bringing the blocks together (should the AR content disappear when stacking one block on top of another?).
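
The logic the demo exercised boils down to a distance check between the two tracked marker poses. The real demo was a Unity/Vuforia script; this Python sketch, with an assumed distance threshold, only illustrates the idea.

```python
# Illustrative sketch: decide what to render from two tracked marker poses.
import math

COMBINE_DISTANCE = 0.12  # meters; an assumed threshold, tuned by hand in practice

def render(a_pos, b_pos, a_tracked: bool, b_tracked: bool) -> str:
    # Vuforia only reports a pose while the camera actually sees a marker.
    if not (a_tracked and b_tracked):
        return "show individual content for whichever block is tracked"
    if math.dist(a_pos, b_pos) < COMBINE_DISTANCE:
        return "show the combined hologram"
    return "show each block's individual content"

print(render((0.0, 0.0, 0.5), (0.05, 0.0, 0.5), True, True))  # -> combined
```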

This prototype also helped us compare physical interaction to the gestural pinch and move experience of the Microsoft HoloLens. HoloLens lets users select or activate holograms using air-tap gestures and absolute movement of the hand in space. It was important to get a sense of how our interaction technique ideas stacked up to this first generation commercial AR product.

Future Work

One area of future exploration would focus on experimenting with the visual presentation of a hologram. Would an AR oven or microwave be more appropriately displayed in 3D or 2D form? Should it rotate or be placed at an angle to suggest its 3D shape? Should the ‘front’ of the object follow the user as they move around? On a related note, what does appropriate type look like in AR? In a scenario that makes use of arrange, there is presumably an advantage in being able to move an object around along with its AR content. Should type change color, then, depending on what kind of backdrop it is displayed against? Imagine the legibility problem if AR text in white is suddenly placed against a white wall. These are the types of questions we’ve grappled with and may continue to explore with future prototypes and user tests.
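
One heuristic we might test for the white-text-on-white-wall problem: sample the backdrop behind the type and flip between light and dark text based on its WCAG relative luminance. A sketch of the idea, not anything arrange implements today:

```python
# Hypothetical legibility heuristic: pick a text color from the backdrop's
# WCAG 2.x relative luminance (rgb channels given in [0, 1]).
def relative_luminance(rgb) -> float:
    def linearize(c: float) -> float:
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def pick_text_color(backdrop_rgb) -> str:
    # Light type over dark backdrops, dark type over bright ones (a white wall).
    return "white" if relative_luminance(backdrop_rgb) < 0.5 else "dark"

print(pick_text_color((0.95, 0.95, 0.95)))  # white wall -> "dark"
```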

As we explore further, we also look to address the interactions surrounding gaze control, specifically the layer of data presented as the user quickly glances over tagged objects. We anticipate holding user testing sessions that compare the utility of various pieces of metadata included within a tag. Some pieces of information we believe could be helpful in this secondary layer of interaction are who tagged the information, who the tag is intended for, when the tag was activated, and/or what category the hologram falls into.

We had a little fun as well… Sure, we worked hard, but we played hard too. Every meeting was filled with fun times and lots of laughter.

We’re friends now.
