Create a mixed reality experience that provides a tour for the School of Design.
Consider the specific capabilities of MR and create interactions that take advantage of them.
Who are the people who will take this tour?
When creating interactions, consider each persona, and branch the tour experience in specific portions where doing so communicates better to a given persona.
The Main Interaction of the Experience:
The main concern of my initial storyboard is instructions. In order to make the experience more seamless, there needs to be a balance between giving instructions and making the interactions clear enough to understand.
I want to provide instructions only when they benefit the experience as a whole, and plain voiceover and text instructions are not appealing to me.
Instead, I was encouraged to look towards how video games have approached this concept. I found the concept below particularly interesting.
The Legend of Zelda presents a ‘sidekick’ named Navi (navigation). Navi is more than just a ball of light that provides instructions. Navi contributes to the story by being a character in the storytelling.
I experimented with this concept in my interactions, specifically in the part of the tour where the user interacts with the plant project.
Marco Polo with Navi:
In the part of the tour where the user searches a studio full of plants growing from the ground for one they want to draw, I realized there is much more potential in this experience.
We are introduced to the narrator of the tour before entering the studio, but what if I gave the narrator form, like Navi?
Giving the narration a form would make instructions more fun while also giving the user a reason to explore the studio. This exploration is achieved through the game Marco Polo.
Instead of picking any plant from the ground, you are challenged to find one very specific plant. The process plays out like the game Marco Polo: you look around the room and use your sense of auditory distance to find the plant the narration is coming from.
Through this process, you will interact with parts of the room. The narrator will tell you about anything you run into.
“Polo! Also, what you see in front of you is a student desk. Each student is provided their own table to use throughout the school year.”
This way, the plants growing all over the room gamify the experience of looking around it (and you learn about the freshman studio along the way).
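The Marco Polo mechanic above hinges on audible distance: the narration gets louder as you close in on the right plant. A minimal sketch of that cue, assuming simple inverse-distance attenuation (the `narration_gain` function and its parameters are hypothetical illustrations, not part of any audio engine):

```python
import math

def narration_gain(listener_pos, plant_pos, ref_distance=1.0, min_gain=0.05):
    """Marco Polo cue: the narration's volume rises as the user
    approaches the narrating plant (inverse-distance attenuation).
    Full volume at or inside ref_distance; never fully silent, so
    the user can always hear which direction to search in."""
    dx = plant_pos[0] - listener_pos[0]
    dy = plant_pos[1] - listener_pos[1]
    dz = plant_pos[2] - listener_pos[2]
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    gain = ref_distance / max(distance, ref_distance)
    return max(gain, min_gain)

# Standing 4 m away the voice is quiet; at 1 m it is full volume.
far = narration_gain((0, 0, 0), (4, 0, 0))   # 0.25
near = narration_gain((0, 0, 0), (1, 0, 0))  # 1.0
```

In a real MR build this gain would come from the platform's spatial-audio system; the sketch only shows the distance-to-loudness relationship the game relies on.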
Once you find the narrating plant, it glows and turns into a ball of light, Planti, which becomes your guide.
When Planti moves, a trail of flowers follows in its wake, similar to the stroke a VR brush leaves behind.
This lets the user follow Planti even when they aren’t looking at it; they can simply follow the trail it leaves behind. The trail of flowers radiates waves of light in the direction Planti went, so you always follow the trail the right way.
This also compensates for a limitation of MR: because you aren’t looking at a single screen but at a 360° surround experience, it is easy to lose track of guidance.
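The flower trail can be sketched as a breadcrumb buffer: drop a flower whenever the guide has moved far enough since the last one, cap the trail length, and pulse the flowers oldest-first so the wave of light travels in the direction Planti went. This is an illustrative sketch only (`FlowerTrail` is a hypothetical name, not an engine API):

```python
from collections import deque

class FlowerTrail:
    """Breadcrumb trail left behind the guide: drop a flower every
    `spacing` meters of guide movement and keep only the most recent
    `max_flowers`, so the trail always leads back toward the guide."""

    def __init__(self, spacing=0.5, max_flowers=20):
        self.spacing = spacing
        self.flowers = deque(maxlen=max_flowers)  # oldest first

    def update(self, guide_pos):
        """Call each frame with the guide's current position."""
        if not self.flowers or self._dist(self.flowers[-1], guide_pos) >= self.spacing:
            self.flowers.append(guide_pos)

    def pulse_order(self):
        """Pulsing the flowers in this order (oldest first) makes the
        wave of light travel in the direction the guide moved."""
        return list(self.flowers)

    @staticmethod
    def _dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
```

Capping the deque means old flowers fade away behind the user, which keeps the scene uncluttered while the newest flowers always point toward Planti.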
Storyboarding in Oculus:
Storyboarding in VR has a learning curve. The tools are often collected into hard-to-understand categories, and drawing or writing with precision is difficult. What does it take to draw clear images in VR drawing apps?
Storyboarding is no longer linear in VR, so it can’t be created the way comic books are written. One potential solution would be to animate the scenes fading in, but that would hinder the storyboarding process. In my storyboard, I put large numbers next to each step, but I’m still concerned that my thoughts are hard to follow if you can’t find the right numbers.
This is the first storyboard I created using VR:
Post Critique: Reconsidering Planti + Turning Point?
Through our first critique, I learned that Planti isn’t the best representative of the school (and lives a surprisingly divisive existence). I considered the potential for instructions to be delivered to the user more intuitively. This would require transitions between interactions that employ symbols of communication. Each interaction will be designed to require fewer instructions, so the user experiences the least amount of confusion throughout the process.
Narrowing down the Interactions
There will be three main interactions, which lets me focus on each one and produce prototypes.
Interaction 1: Walking into the room
When the user walks into the room, they see plants growing all over the floor of the studio. A voice tells them to pick a plant. In the process of picking one, they move around the room and explore the space.
Interaction 2: Drawing your Plant
Through a transition, the plant you picked is carried to a desk, where you find a piece of paper and a Sharpie. You are told to draw the plant while an example of student work (in MR) rests next to your physical paper.
Interaction 3: Pin-up your plant and watch a small critique
You pick up your paper and physically pin it up. Once you do, past students’ work appears on the wall alongside yours, and an AR professor appears by the board to give a small critique.
Reality Composer: HiFi Prototyping Tool
Using Reality Composer for the first time was fluid and straightforward. Once I understood how to use the tool, I was able to prototype the first interaction.
Interaction 1-RC HiFi
Using Reality Composer allowed me to test my prototypes in physical space using AR. I was also able to quickly animate the objects in the scene:
Once you tap on the plant you would like to draw, the Action Sequence hides the rest of the plants by shrinking them until they disappear.
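In Reality Composer this is authored visually as an Action Sequence, but the hide-by-shrinking behavior can be sketched in code to show the logic. In this sketch the `scale`/`visible` dictionaries are hypothetical stand-ins for scene objects, not a Reality Composer API:

```python
def on_plant_tapped(tapped, all_plants, shrink_step=0.8, min_scale=0.01):
    """Run once per frame after a plant is tapped: every plant except
    the tapped one shrinks by shrink_step until it is small enough
    to hide, mimicking the Action Sequence in the prototype."""
    for plant in all_plants:
        if plant is tapped:
            continue  # the chosen plant stays at full size
        plant["scale"] *= shrink_step
        if plant["scale"] <= min_scale:
            plant["visible"] = False
```

With a shrink factor of 0.8 per frame, the untapped plants fall below 1% of their size (and disappear) in about 21 frames, a fraction of a second at typical frame rates.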
Video of application in physical space:
“Interaction 1” by Raymond Pai on Vimeo.
Gestures for Plant Selection:
Consider: How does the selection process happen technically?
The Xbox’s Kinect motion sensor was among the first consumer products to use hand gestures to control an interface:
EDIT: Timed Gazing
To clarify the selection process, looking at a specific plant for long enough highlights that plant. This is expressed through a circle that fills up gradually.
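A minimal, framework-agnostic sketch of the timed-gaze logic (the `GazeDwellSelector` class and its names are hypothetical): each frame you pass in whichever plant the gaze ray currently hits, and the returned fill value drives the circle.

```python
class GazeDwellSelector:
    """Timed-gaze selection: while the user keeps looking at the same
    plant, a progress circle fills; when it reaches 1.0 the plant is
    selected. Looking away (or at a different plant) resets it."""

    def __init__(self, dwell_seconds=2.0):
        self.dwell_seconds = dwell_seconds
        self.target = None
        self.elapsed = 0.0

    def update(self, gazed_plant, dt):
        """Call once per frame with the plant under the gaze ray
        (or None). Returns (circle_fill 0..1, selected_plant or None)."""
        if gazed_plant != self.target:
            # Gaze moved: restart the dwell timer on the new target.
            self.target = gazed_plant
            self.elapsed = 0.0
        if self.target is None:
            return 0.0, None
        self.elapsed += dt
        fill = min(self.elapsed / self.dwell_seconds, 1.0)
        selected = self.target if fill >= 1.0 else None
        return fill, selected
```

The dwell duration is the key tuning knob: too short and users select plants by accident while scanning the room; too long and the interaction feels unresponsive.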
This is a prototype:
The plant will anchor onto your hand for transportation.
Instructions are needed to guide the user to the next station. This is achieved through some sort of location pointer. The message uses color (orange) to communicate which table to go to (the noticeably orange one).
Google Maps pin style:
EDIT: This is vague. Where are we going? Does this direct me to the specific point at the end of the pin? Am I using the whole surface of the table or just part of it?
This edited version specifies the surface on which to place the plant. In addition, it asks the user to clear the space in case anything else is there (it is an in-use studio, after all).
The orange color communicates that this is the orange table to place your plant onto.
Interaction 2: Drawing the Plant
This interaction involves giving the user a blank piece of paper and having them draw the plant.
The image next to the blank sheet was a later addition. It’s a student example that offers guidance, helping the user understand how to draw the plant.
EDIT: This process should be more clear and supportive, not intimidating.
Addition: Offer the user the chance to try drawing the plant themselves, then give guidance on how to draw (have them feel that the school is here to teach them and help them grow, not to intimidate them).
Optional Guidance-Dotted Lines:
Some of the personas may not feel confident in drawing. This gives them confidence and prevents them from feeling lost or demoralized.
Example of Guidance-
Ask if they need help, so they avoid feeling patronized.
Overlay Mixed Reality dotted lines on the paper: