Prototypes and next steps (2018.10.17)

Shengzhi WU
Shengzhi’s MDes Thesis
Oct 19, 2018

I made 4 prototypes to mock up interactions and explore different types of affordances in augmented reality. The prototypes are still at an early stage, and this article is meant simply to document them, some of the thinking behind them, and the next steps for iteration.

1. Turning an everyday object into a dial

The first prototype is a dial, which leverages a physical object’s affordance of turning to create an AR dial. It makes little sense to control a dial in AR through a phone, but the interaction is meant to be implemented on Head Mounted Display (HMD) AR, where turning an everyday object into an AR dial that a user can control with their hand becomes useful.

Thoughts:

  1. It can be used with any cylindrical object, e.g. a bottle, a cup, etc., leveraging its physical affordance of rotation.
  2. It can control an IoT system, e.g. speaker volume or smart light brightness, and it can also be used to select from a menu. (It can function the same way as the Microsoft Surface Studio Dial.)
Microsoft dial
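The core of this interaction is just a mapping from a tracked rotation angle to a bounded control value. A minimal sketch, assuming the AR SDK reports the object’s rotation in degrees (the function name and the 270° sweep are illustrative choices, not from the actual prototype):

```python
def dial_value(rotation_deg, min_val=0.0, max_val=100.0, sweep_deg=270.0):
    """Map a tracked object's rotation angle onto a bounded dial range.

    rotation_deg would come from the AR tracker; here we pass plain numbers.
    """
    # Clamp to the dial's sweep, then normalize to 0..1
    t = max(0.0, min(rotation_deg, sweep_deg)) / sweep_deg
    return min_val + t * (max_val - min_val)

print(dial_value(0))    # 0.0   (dial at its start position)
print(dial_value(135))  # 50.0  (halfway through the 270° sweep)
print(dial_value(300))  # 100.0 (past the sweep, clamped to max)
```

In practice the output would drive whatever the dial controls, e.g. speaker volume or light brightness, each frame.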

2. An AR slider utilizing a physical bookmark

This prototype also uses the physical affordance of a bookmark, which can slide in and out, as a control. When the bookmark is pulled out, a certain action or piece of information can be triggered.

Thoughts:

  1. This could potentially be used as a toggle switch or a slider, and since the affordance of sliding already exists, it’s intuitive and easy to understand.
  2. This type of interaction exists in many places in our daily lives, and any object that supports sliding in and out can function like this.
  3. I prototyped it using Vuforia (an image-marker-based SDK), but since it only reports position relative to the screen, rather than the absolute position in the environment, it doesn’t always work properly. This technical issue could be solved in the future by environmental-understanding capabilities.
  4. The information displayed for now is just a placeholder, and I am still considering what information would make the most sense to show.
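The underlying logic is similar to the dial: normalize how far the bookmark has been pulled along its travel, and optionally derive a discrete toggle from that value. A small sketch under assumed numbers (the 40 mm travel and 0.5 threshold are hypothetical, not measured from the prototype):

```python
def slider_state(pull_mm, travel_mm=40.0, toggle_threshold=0.5):
    """Normalize a bookmark's pull-out distance and derive a toggle state.

    pull_mm would come from tracking the bookmark's marker; travel_mm is the
    assumed full slide distance.
    Returns (normalized 0..1 value, toggled-on boolean).
    """
    t = max(0.0, min(pull_mm, travel_mm)) / travel_mm
    return t, t >= toggle_threshold

print(slider_state(0))   # (0.0, False) — bookmark fully in
print(slider_state(20))  # (0.5, True)  — half pulled out, toggle fires
print(slider_state(60))  # (1.0, True)  — clamped at full travel
```

Using it as a continuous slider means reading the first return value; using it as a switch means reading the second.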

3. Using shadow to make an object float

This concept is inspired by Dixon Ho’s thesis research “ShapeShift: Mediating User Interaction Through Augmented Shading and Shadow”.

I use a raycast to trigger the shadow effect. The raycasting is very similar to the hover interaction in desktop or web apps, which provides an affordance that an element can be clicked or pressed. The shadow effect also traces back to Google’s Material Design, which describes how to use shadow to convey Z-axis movement on a 2D screen.

Google Material Design

Thoughts:

  1. The shadow can convey many useful meanings, such as indicating a selected state, an emphasis to draw attention etc.
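The hover trigger itself is a standard ray–object intersection test: cast a ray from the camera along the gaze or cursor direction and check whether it hits the object’s bounds. A self-contained sketch using a sphere as the bounding volume (the geometry and values are illustrative; a real engine such as Unity provides this as a built-in physics raycast):

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Return True if a ray (origin + t * direction, t >= 0) hits a sphere."""
    ox, oy, oz = origin
    cx, cy, cz = center
    dx, dy, dz = direction
    # Normalize the ray direction
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    dx, dy, dz = dx / norm, dy / norm, dz / norm
    # Project the vector to the sphere center onto the ray
    t = (cx - ox) * dx + (cy - oy) * dy + (cz - oz) * dz
    if t < 0:
        return False  # sphere is behind the ray origin
    # Distance from the sphere center to the closest point on the ray
    px, py, pz = ox + t * dx, oy + t * dy, oz + t * dz
    dist2 = (px - cx) ** 2 + (py - cy) ** 2 + (pz - cz) ** 2
    return dist2 <= radius * radius

# Gaze ray from the camera straight ahead "hovers" the object → show its shadow
hovered = ray_hits_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 0.5)
print(hovered)  # True
```

When `hovered` flips to True, the shadow (offset and blur growing with the object’s apparent elevation, as in Material Design) would fade in; when it flips back, the shadow fades out.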

4. Using proximity to communicate levels of detail

Showing a lot of information in AR can be distracting and overwhelming. And we naturally tend to move closer to examine and inspect the details of an object, due to our eyes’ biological limitations. Even though proximity is not exactly a type of affordance, I still feel there is a connection between them.

Thoughts:

  1. This interaction would be very useful for searching for information among many items. We can use computer vision to recognize an object, then use a small “i” icon to indicate that recognized information is hidden inside; once the user approaches the object within a certain threshold, the information becomes screen-locked and displays on the user’s screen.
  2. This could be useful in shopping use cases, showing information about items, e.g. the price of a bag of instant noodles, the reviews of a book, etc.

Next Steps:

  1. Continue making and iterating on the prototypes; one of my main design methods will be design through prototyping.
  2. Conduct a brainstorming session with one of my classmates to think about what other types of affordances (maybe from other everyday objects) can be used, as well as the best use cases for those interaction patterns.
  3. Collect existing AR prototypes and demos made by other designers and AR creators, map them into categories, and try to get a holistic understanding of what has been made and what is missing.
  4. Start recruiting people for my diary study and user testing.
  5. Continue writing Medium posts to document and update my thought process, and think about how to eventually turn these posts into a research paper.


I am a UX designer, an artist, and a creative coder. I am currently pursuing my master’s degree @ CMU and interning @Google Daydream.