Designing an interaction flow in Mixed Reality

Information Architecture and wireframe

Since we had no experience designing visual components for a mixed reality environment, we first researched guidelines and existing applications. By studying useful articles, design guidelines, and the HoloLens user interface, we created a rough flow as a starting point and iterated on it.

Articles & guidelines:
https://developer.microsoft.com/en-us/windows/mixed-reality/case_study_-_3_holostudio_ui_and_interaction_design_learnings

https://developer.microsoft.com/en-us/windows/mixed-reality/interaction_fundamentals

https://medium.com/microsoft-design/how-to-think-about-designing-3d-space-b88faf609df4

Then we sketched out the key interactions and built micro-interactions on top of them. We also considered the onboarding process for introducing these new ways of interacting.

Rough sketch of interaction flow

Since our “feed” updates continuously based on voice recognition, we tested out various visual styles using HoloSketch. It was challenging to imagine opacity and color in a 3D environment compared to a 2D one, so testing in HoloSketch helped us get a sense of what to do, and what not to do, in mixed reality.
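To make the feed's behavior concrete, here is a minimal sketch of the kind of logic we imagine behind it, assuming a transcript of the conversation is available from a speech-to-text service: keyword matches against node names pull those nodes into the feed. Every name and the matching rule here are illustrative assumptions, not our actual pipeline.

```python
def update_feed(transcript, nodes, feed):
    """Hypothetical sketch: surface nodes whose names appear in the
    latest speech transcript (e.g. from a speech-to-text service)."""
    words = set(transcript.lower().split())
    for node_name in nodes:
        if node_name.lower() in words and node_name not in feed:
            feed.append(node_name)  # bring the matching node into the feed
    return feed

# Example: saying "remember that beach trip?" surfaces the "beach" node.
print(update_feed("remember that beach trip", ["beach", "concert"], []))
```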

Based on our rough prototype, we settled on the overall look of the system, the number of visual components in the feed, key colors, and so on. This helped us finalize our interaction flow and create a wireframe.


Import:
For Moment, the user can input any form of data, such as pictures, videos, maps, and memos, into our system. We tried to make importing data as painless as possible, so we designed an embedded function within the mobile photo album. By tapping the Moment button, the user can choose which nodes to save a photo to; they can either create new nodes or simply add the photo to an existing one. We also considered plug-ins to import videos or other web content into our system.
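As a rough illustration of the import model described above, here is a minimal sketch: a media item can be attached to one or more nodes, which are created on the fly or reused. All class and function names (`Node`, `MomentLibrary`, `import_item`) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A named collection of saved moments (hypothetical model)."""
    name: str
    items: list = field(default_factory=list)

class MomentLibrary:
    """Sketch of the import flow: save a media item into new or existing nodes."""
    def __init__(self):
        self.nodes = {}

    def import_item(self, item, node_names):
        """Attach one media item (photo, video, map, memo, ...) to each
        named node, creating nodes that do not exist yet."""
        for name in node_names:
            node = self.nodes.setdefault(name, Node(name))
            node.items.append(item)

# Example: saving a photo into one new node and one existing node.
library = MomentLibrary()
library.import_item("beach_photo.jpg", ["Summer Trip", "Family"])
```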

Wireframe:
Using our key colors and shapes, we created the main system flow. When the user starts the system, Moment provides a video tutorial the user can follow to learn how to use it. We imagine the feed is already populated with input data such as 360° photos, videos, songs, 3D objects, and maps. Users can browse the nodes and bring specific ones into the feed to nurture their conversation. When the conversation ends, the system asks the users whether to save it, and they can respond with a hand gesture or by pressing the button on our microcontroller.
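The main flow can be read as a simple state machine: tutorial, conversation with the feed, then a save prompt answered by gesture or hardware button. The sketch below is a hypothetical outline of that flow; the state and event names are ours, not from an actual implementation.

```python
from enum import Enum, auto

class State(Enum):
    TUTORIAL = auto()
    CONVERSATION = auto()
    SAVE_PROMPT = auto()
    DONE = auto()

def run_session(events):
    """Walk through the main Moment flow. `events` is an iterable of
    input events such as 'tutorial_done', 'conversation_end',
    'gesture_confirm', or 'button_confirm' (all names hypothetical)."""
    state = State.TUTORIAL
    for event in events:
        if state is State.TUTORIAL and event == "tutorial_done":
            state = State.CONVERSATION
        elif state is State.CONVERSATION and event == "conversation_end":
            state = State.SAVE_PROMPT  # system asks whether to save
        elif state is State.SAVE_PROMPT and event in ("gesture_confirm", "button_confirm"):
            print("Conversation saved.")
            state = State.DONE
    return state

# Example: a full session confirmed with the microcontroller button.
run_session(["tutorial_done", "conversation_end", "button_confirm"])
```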
