Thesis Update (Nov 2nd)

Shengzhi WU
Shengzhi’s MDes Thesis
Nov 3, 2018

Since last week, I have been considering whether I should put the research on AR affordances into a specific context, such as smart home control and IoT, or focus on general interaction patterns that could potentially be useful across all kinds of scenarios.

I realized that putting AR into an IoT scenario may not be so necessary, especially after reconsidering and reflecting on some AR and IoT projects; their use cases are still not so convincing to me. For instance, the Reality Editor is a well-known project from the MIT Media Lab from a couple of years ago, in which researchers created an AR application that lets smart devices connect to each other, so a user can, for example, drag a slider to control the light in the kitchen. I have to say the promised future is amazing, but on further examination, I think the need to keep connecting different things together actually arises quite rarely. A user may want to do it once and will seldom want to change how things connect to each other afterwards; otherwise it causes a lot of cognitive load to remember which controls which. Since connecting different things together in AR is an infrequently used feature, the convenience it provides becomes minimal. Moreover, I still haven't found a solid use case for combining AR with an IoT system.

As a result, I think it would be more meaningful to concentrate on the general interaction perspective of affordances in AR, rather than putting them into a certain context. I was also inspired by Microsoft's Fluent Design System, which categorises its design principles into five aspects: light, material, motion, depth, and scale. Since Microsoft's design system gives special thought to its Mixed Reality system, I found it quite useful for my project as well.

Therefore, I re-categorised the project into the following five aspects, which I believe cover most aspects of AR affordances: forms and shape, material and color, haptic, light, and motion.

Forms and shapes

Form and shape can be the most intuitive way of conveying the afforded interaction, and this has been explored extensively in industrial design, so I can inherit a lot from that field. In addition, I made some sketches that summarise possibly the most frequently used types of affordances for an AR user interface: rotate (spinning a dial), press (pressing a button), pull (pulling out a menu or information cards), slide (controlling a slider to adjust volume), plus one from Google Home, sliding on a touch surface to control the volume. There are definitely more types of affordances, and I will add more over time, especially after I conduct the diary study.
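
To keep these categories straight while prototyping, something as simple as an enum could work. This is only an illustrative sketch; the type and property names are ones I made up, not from any existing framework.

```swift
// Hypothetical sketch: one way to organise the affordance types above
// when building AR prototypes. Names are placeholders, not an existing API.
enum AffordanceType {
    case rotate   // spinning a dial
    case press    // pressing a button
    case pull     // pulling out a menu or information card
    case slide    // dragging a slider, or sliding on a touch surface (Google Home style)
}

// Each prototype widget declares which interaction it affords, so studies
// can compare the same cue (form, light, material, motion) across types.
struct PrototypeWidget {
    let name: String
    let affordance: AffordanceType
}

let volumeDial = PrototypeWidget(name: "volume dial", affordance: .rotate)
```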

Light

Lighting can highlight important information, and different directions of lighting may also convey different perceived affordances. So what type of lighting makes a button feel most pressable? When the lighting changes, the shadows will of course change accordingly, so how does that affect perception? Especially since I found it hard to judge depth in AR, would the direction of lighting make a difference? The lighting should also adapt to the environment: current ARCore and ARKit can estimate the environmental lighting, so how does that impact the perceived affordances?
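
As a rough illustration of that last point, here is a minimal sketch, assuming an ARKit + SceneKit setup, of reading the per-frame light estimate and applying it to a scene light so a virtual button is lit roughly like its surroundings. The delegate class and the light wiring are assumptions; only the ARKit calls are the real API.

```swift
import ARKit
import SceneKit

// Minimal sketch: read ARKit's per-frame light estimate and apply it to a
// SceneKit light so virtual UI elements are lit roughly like the room.
// The session wiring and the light itself are assumed to exist elsewhere.
final class LightingDelegate: NSObject, ARSessionDelegate {
    private let sceneLight: SCNLight

    init(sceneLight: SCNLight) {
        self.sceneLight = sceneLight
        super.init()
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let estimate = frame.lightEstimate else { return }
        // ARKit reports ~1000 lumens as "neutral"; temperature is in Kelvin.
        sceneLight.intensity = estimate.ambientIntensity
        sceneLight.temperature = estimate.ambientColorTemperature
    }
}
```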

Material and Color

Material determines how an object looks in AR. In most 2D user interfaces, material is mainly represented as color and transparency, but in AR and immersive computing, different materials can vary dramatically. For instance, an unlit material only shows its own color, without receiving any light or being affected by other objects in the surroundings, whereas a transparent or reflective material looks different depending on what the environment reflects into it. So what type of material works for an interactive UI element? Does it need to look real? Or should it be something purely unlit (showing only its own color)?
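
To make the comparison concrete, here is a small sketch of the two treatments in SceneKit: a constant-lit "unlit" material versus a physically based one that reacts to the scene's lighting. The colors, plane size, and transparency values are arbitrary placeholders, not values from my prototypes.

```swift
import SceneKit
import UIKit

// Minimal sketch of the two material treatments discussed above.
func makeButtonPlane(unlit: Bool) -> SCNNode {
    let plane = SCNPlane(width: 0.08, height: 0.08)
    let material = SCNMaterial()

    if unlit {
        // "Unlit": constant shading ignores scene lighting, so the element
        // always shows its own color, like a flat 2D UI layer.
        material.lightingModel = .constant
        material.diffuse.contents = UIColor.systemBlue
    } else {
        // Physically based: reacts to lights and reflections in the scene,
        // so it reads more like a real object sitting in the room.
        material.lightingModel = .physicallyBased
        material.diffuse.contents = UIColor.systemBlue
        material.metalness.contents = 0.1
        material.roughness.contents = 0.4
    }
    material.transparency = 0.9

    plane.materials = [material]
    return SCNNode(geometry: plane)
}
```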

Motion

Unlike a physical product, an AR interface is dynamic and can react to the user's behaviour and movement. Motion can also effectively reinforce perceived affordances and make them easier to understand. One example is a handle bar (or controller) that reacts to the user's hand position and snaps into the hand when the distance falls below a certain threshold, so the user understands the affordance much faster.

This is from Car Explore in VR with Leap Motion: the shape and size of the "button" change as the user's hand gets closer, emphasising the interactive affordance.
Text can change its orientation based on the user's point of view, so it is always readable.
The handle bar pops into the user's hand when the proximity reaches a certain threshold, making it clear that it is interactable (a rough code sketch of this follows below).
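
Here is a rough sketch of the snap behaviour and the view-facing text, assuming a SceneKit scene where the hand position comes from some external tracker (Leap Motion, a controller, etc.). The 15 cm threshold and the function names are my own guesses, not from the demos above.

```swift
import SceneKit
import simd

// Minimal sketch of the "snap to hand" behaviour described above.
let snapThreshold: Float = 0.15 // metres; an arbitrary guess

func updateHandle(_ handle: SCNNode, handPosition: simd_float3) {
    let distance = simd_distance(handle.simdWorldPosition, handPosition)
    if distance < snapThreshold {
        // Animate the handle into the hand so the affordance is obvious.
        SCNTransaction.begin()
        SCNTransaction.animationDuration = 0.15
        handle.simdWorldPosition = handPosition
        SCNTransaction.commit()
    }
}

// Labels can stay readable by always facing the camera, as in the caption
// above; SceneKit's billboard constraint handles this directly.
func makeBillboardedLabel(text: String) -> SCNNode {
    let node = SCNNode(geometry: SCNText(string: text, extrusionDepth: 0.0))
    node.constraints = [SCNBillboardConstraint()]
    return node
}
```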

Haptic

I think haptics in AR is something not many people have explored much yet, but I believe haptics can dramatically enhance perceived affordances in AR; after all, touch is one of our most important senses. Apple did a great job using haptics to convey meaning, and I plan to use a Particle Photon board and a vibration motor to add haptic feedback to my prototypes, so users can not only see the feedback but also feel it.

Apple’s Human Interface Guideline — Haptic feedback
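
For the phone-side analogue of this, Apple exposes feedback generators in UIKit, which is the kind of feedback the Human Interface Guidelines describe. The sketch below is only that analogue, not the Particle Photon firmware I plan to build, and the class and method names are my own.

```swift
import UIKit

// Minimal sketch of phone-side haptics using Apple's feedback generators.
final class HapticAffordance {
    private let impact = UIImpactFeedbackGenerator(style: .medium)
    private let selection = UISelectionFeedbackGenerator()

    // Call shortly before an expected interaction to reduce latency.
    func prepare() {
        impact.prepare()
        selection.prepare()
    }

    // A light tick as the hand hovers over an interactive element.
    func hoverChanged() {
        selection.selectionChanged()
    }

    // A firmer pulse when a virtual button is actually pressed.
    func buttonPressed() {
        impact.impactOccurred()
    }
}
```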
