How to evaluate design in VR, AR, and MR

My opinions are my own and do not reflect those of my company.


Historically, when evaluating or providing feedback on designs, you would often hear, “Well, that is your opinion.” Unlike the subjective nature of art, the science of design has been maturing over the last decade, and with the rise of the User Research discipline it is now arguably possible to evaluate many designs objectively.

Here is a framework that can help you objectively evaluate the design of your next product:

  1. Who is the target audience, and what use case are we solving? Regardless of experience, designers sometimes fall into the trap of designing for themselves. Very early in product development, it is critical to create a Persona (a fictitious character who represents your target audience) and Use Cases (the specific problems, pain points, or activities that your product will solve or improve). At every step of the design and development process, if the product does not solve the pain points or improve the use cases for the Persona, you most likely need to revisit your fundamental design. The earlier you test your product with external users, the higher the chance your team can correctly identify fundamental design issues and build a successful product. If you are too far into development and have already created beautiful visuals, there is a risk that stakeholders will no longer be able to distinguish between a great design and beautiful visuals. Unfortunately, no matter how beautiful the visuals are, users will not use or enjoy your product if it does not solve a fundamental problem or improve their use cases.
  2. How do the Key Performance Indicators (KPIs) stack up across mediums? In modern design thinking, teams have to consider that hundreds of products across mediums are essentially competing for a user’s attention and time; if you are creating a computer game, you are not just competing against other computer games. You are also competing against other forms of entertainment such as mobile games, escape rooms, TV, movies, and concerts. Understanding what mindspace your product occupies in the user’s daily life will help your team identify the competitors across mediums to compare against. In virtual reality (VR), augmented reality (AR), and mixed reality (MR), a common mistake is to compare features or KPIs only within a single medium. For example, your product team might be developing a new text input method for virtual reality. A month later, they may have invented a design that lets users input text at 10 Words Per Minute (warning: completely fictional statistic). The team then compares text input methods across all known competitors in virtual reality and realizes they have beaten every known method by 20%; they are thrilled! However, the team failed to realize that users rarely compare within a single medium. 10 Words Per Minute in virtual reality is not impressive to users and can even feel like an awful experience when users average ~40 Words Per Minute on a PC and 25–44 Words Per Minute on a mobile phone. When evaluating your designs and KPIs, always remember that competitors across mediums may be raising the bar of what users expect; be wary of focusing only on your genre or medium!
  3. How were the interactions tested? When evaluating the design of your interactions in VR, AR, or MR, common variables to measure might be: (i) Accuracy (how often a user’s intentions match the outcome, or how precise a user is with a given interaction), (ii) Ergonomics (sometimes referred to as the “energy cost” or “energy expenditure” associated with the interaction, given the muscles being utilized), (iii) Learnability (how intuitive an interaction is to learn), and (iv) Time to Complete (the amount of time it takes to complete a particular action). Depending on your product, you might prioritize these variables differently; however, if any of these factors scores extremely poorly, you most likely have a poor design. In VR, AR, and MR, there are two nuances around testing interactions properly. First, many experiences currently do not stress test interactions for long enough. If your intended user experience is 1 hour, your product team should stress test the interactions for 1 hour; interactions that feel fine and have low energy expenditure in the first 5 minutes might be unusable by the 1 hour mark. Second, learnability varies greatly depending on the video game experience of the users. Studies have shown that users who frequently play video games tend to learn faster and improve at various cognitive tasks that map onto interactions used in VR, AR, and MR. Because of this, some product teams may accidentally conclude that their designs have excellent learnability when, in fact, they had a biased sample of video gamers in their user tests. For VR, AR, and MR, it is critical to run all user tests with randomly sampled users, mixing gamers and non-gamers, all with no prior context or experience with your product. As tempting as it is to run user tests internally, your co-workers will most likely have enough context and familiarity to bias the test; after all, any developer with experience building in VR, AR, or MR will most likely learn new interactions in these mediums faster than a user with none.
  4. How many features contribute to a sum greater than its parts? Great design will often weave together several different features that amplify each other’s effects. For example, multiplayer games will often give rewards or additional incentives for playing with friends; in addition, a Friends List feature might recommend potential friends to play with, and an eCommerce feature might give discounts when a group of friends buys a “Friend’s Bundle” together. Individually, these features will have positive KPIs for the product; together, they combine powerfully to create a rich, social experience. Conversely, if a product has numerous features with positive KPIs but few with strong synergy, you may want to re-scope your design to include more synergistic features and cut the standalone features that have the lowest value for the Persona and the Use Cases you are aiming to solve or improve.
  5. How are the Core Loops? A core loop is the series of repeatable actions in your product, and assessing it is one of the most important parts of evaluating a design. If you are creating a Match-3 puzzle game, the most frequent action your users will perform is shifting blocks or tiles around. How does that feel? Do users enjoy doing it? Are the sound effects satisfying? How often do users make a mistake when moving their tiles? Product teams typically spend the vast majority of their time iterating on and user testing their core loops. As I have mentioned before, for VR, AR, and MR, product teams must test their core loops repeatedly with the same external users over time. If you are currently creating a VR/AR/MR product, the novelty effect is so powerful that nearly every user who steps into your office to test it will give you glowing results; that feedback is misleading. For current VR/AR/MR products, it is critical to focus user testing on repeated sessions: get the same user to try your product an hour a day for multiple days, weeks, or months. For now, this is the only way to get accurate feedback for VR/AR/MR products.
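The cross-medium KPI check in point 2 can be sketched in a few lines of Python. The benchmark figures below reuse the article’s (partly fictional) words-per-minute numbers; the dictionary keys and the candidate value are assumptions for illustration, not real data.

```python
# Assumed benchmarks: the VR figure is fictional (roughly 20% below the
# candidate), the PC and mobile figures echo the averages cited above.
BENCHMARKS_WPM = {
    "VR competitors (best known)": 8.3,
    "PC keyboard": 40.0,
    "mobile phone": 25.0,
}

def cross_medium_report(candidate_wpm: float) -> list[str]:
    """List every medium the candidate still trails, not just its own."""
    return [name for name, wpm in BENCHMARKS_WPM.items() if candidate_wpm < wpm]

trailing = cross_medium_report(10.0)
# Beats the VR competitors, but still trails PC and mobile,
# which is the baseline users actually carry in their heads.
```

Comparing only against the first dictionary entry reproduces the team’s mistake in the example; the full report surfaces the mediums that set user expectations.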
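One way to operationalize the interaction variables in point 3 (accuracy, ergonomics, learnability, time to complete) is a weighted score with a hard floor, so that one extremely poor metric flags the design no matter how good the average looks. The 0–1 scale, the weights, and the floor value here are hypothetical choices, not a standard:

```python
from dataclasses import dataclass

@dataclass
class InteractionResult:
    accuracy: float          # 0-1: how often the user's intent matches the outcome
    ergonomics: float        # 0-1: inverse of energy cost over a full-length session
    learnability: float      # 0-1: how quickly first-time users succeed
    time_to_complete: float  # 0-1: normalized speed (1.0 = fastest observed)

FLOOR = 0.2  # any metric below this flags the design regardless of weights

def evaluate(result: InteractionResult, weights: dict[str, float]) -> tuple[float, bool]:
    """Return (weighted score, passes_floor); weights should sum to 1."""
    metrics = vars(result)
    score = sum(weights[name] * value for name, value in metrics.items())
    passes_floor = all(value >= FLOOR for value in metrics.values())
    return score, passes_floor

# A team might weight ergonomics highest for a long-session product:
weights = {"accuracy": 0.3, "ergonomics": 0.4, "learnability": 0.2, "time_to_complete": 0.1}
score, ok = evaluate(InteractionResult(0.9, 0.15, 0.8, 0.7), weights)
# ergonomics (0.15) sits below the floor, so ok is False even though
# the weighted score (0.56) looks respectable on its own.
```

The floor captures the article’s point that a design with one extremely poor factor is most likely a poor design, even if the other factors average it out.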
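The repeated-session testing in point 5 can be quantified by comparing a user’s early-session ratings against their later ones; a large gap suggests the early enthusiasm was the novelty effect rather than durable engagement. The rating scale, the data, and the two-session cutoff below are fictional assumptions:

```python
from statistics import mean

def novelty_gap(session_ratings: list[float], first_n: int = 2) -> float:
    """Average of the first `first_n` session ratings minus the average
    of all later sessions. A large positive gap suggests early ratings
    were inflated by novelty."""
    early = mean(session_ratings[:first_n])
    later = mean(session_ratings[first_n:])
    return early - later

# One user, an hour a day for a week (fictional 1-10 enjoyment scores):
ratings = [9.5, 9.0, 7.0, 6.0, 5.5, 5.0, 5.0]
gap = novelty_gap(ratings)
# early average 9.25 vs later average 5.7: a gap of ~3.55 says the
# glowing first impressions did not survive repeated sessions.
```

A single-session test would have reported only the 9.5, which is exactly the misleading signal the article warns about.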

Although the visual design and art style of a product can be subjective, hopefully this framework allows you to objectively evaluate the core design of a product and create something compelling for your users.

ABOUT THE AUTHOR:

Jeffrey Lin, Ph.D.

Dr. Jeffrey Lin is currently a Design Director at Magic Leap, leading design teams that are paving the way for the first generation of mixed reality content. He was a Lead Product Owner and Lead Designer of the award-winning PC game League of Legends at Riot Games, one of Fortune’s Best Companies to Work For. He was also a Research Scientist and User Researcher at Valve Software, makers of the award-winning PC game Portal 2, and creators of the Steam platform. He obtained his PhD in Cognitive Neuroscience from the University of Washington where he was funded by the Howard Hughes Medical Institute. His design work has been featured in Wired Magazine, MIT Tech Review, The Verge, Scientific American, Times Health & Science, and Re/code. His research has been featured in numerous peer reviewed journals, including Nature.

