Inclusive design for VR
As part of our UX class project, Microsoft gave us the following problem statement:
“Design a product, service, or solution to solve for exclusion in a deskless workplace. The prompt of this challenge is to integrate inclusive design to build a solution for people with disabilities working in deskless workplace.”
Inclusive design was a new concept to most of us, and we were excited about all the ideas we could come up with for an inclusive deskless workspace.
Since we are all designers in a program that involves artists of many kinds, we connected with the idea of solving for artists with limited hand mobility who want to create content in VR. Our professor, Dana Karwas, and Nick Katsivelos from Microsoft encouraged us to work in this area, since not much has been done in it.
Cherisha Agarwal, Joanna Yen, Pratik Jain, Raksha Ravimohan, Shimin Gu, Srishti Kush
Approaching this idea was not an easy task; it involved talking to a lot of designers at IDM and understanding their workflow. We spoke to them about how they use software and the potential problems that people with limited hand mobility would run into. We were fascinated by the idea of creating a multi-modal application that uses eye tracking and voice commands to enable people to draw in VR. To dive deep into the technology and understand its use, we read a few papers listed here and got great insights from professors at NYU like Dana Karwas, Todd Bryant, and Claire K Volpe, as well as experts in UX research like Serap Yigit Elliot from Google and Erica Wayne from Tobii. Every person we spoke to gave us more insight into tackling the challenge from a user experience, VR technology, and accessibility perspective. Here are some pictures:
Getting our hands dirty
After extensive research, we decided to get our hands dirty and build our very first prototype. How do you make a prototype for an art tool in VR? We figured the best way to convey our interaction was through a role-play video. We did some rapid prototyping with paper, Sharpies, and clips to bring our interface to life.
Watch the video here:
Next, we went out to meet potential users from the ADAPT community. ADAPT Community Network, formerly known as United Cerebral Palsy of New York City, is a leading non-profit organization providing cutting-edge programs and services that improve the quality of life for people with disabilities. There we got to meet Carmen, Eileen, and Chris, who were very interested in painting and loved the idea of drawing in VR. This kind of experience was new to them, and to our surprise we got good feedback from them. All three expressed interest in trying out a tool that enabled them to draw using eye movements and voice commands.
Once we had insights and a whole lot of motivation from our users, we decided to give our interface a structure. This time, we printed our tools on paper and stuck them together in an organized way. We used a laser pointer as the “eye tracker” and were ready for our second round of user testing. The whole process of prototyping for VR was very interesting, and it made us think clearly about the details of the interactions and potential issues.
Our second round of user testing revealed two kinds of scenarios based on prior experience. Users who had experience with design software could complete the tasks well and pointed out potential issues with the concept, such as figuring out the z-axis, feedback for interactions, and adding stamps. Users from the ADAPT community who did not have prior experience wanted the tools to be less ambiguous, and some of the tools felt unnecessary to them.
Taking cues from these responses, we went on to build the high-fidelity prototype using the Unreal Engine. Unreal is a good tool for building virtual reality content. To make a working prototype, there were several tasks we needed to accomplish, as explained below:
The final prototype was built in Unreal and supported drawing with head movements, changing the stroke, changing the environment, and teleporting.
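Drawing with head movements usually comes down to casting a ray from the headset's forward direction and intersecting it with a flat canvas in the scene. The sketch below is purely illustrative (a small Python version of that ray-plane intersection, not our actual Unreal Blueprint logic); all names and values are hypothetical:

```python
def gaze_to_canvas(head_pos, head_forward, plane_point, plane_normal):
    """Intersect the head's forward ray with a flat drawing canvas.

    All arguments are (x, y, z) tuples; head_forward is assumed to be
    a unit vector taken from the headset orientation. Returns the 3D
    hit point on the canvas, or None if the user is looking away.
    """
    dot = lambda a, b: a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

    denom = dot(head_forward, plane_normal)
    if abs(denom) < 1e-6:           # gaze ray is parallel to the canvas
        return None
    to_plane = tuple(p - h for p, h in zip(plane_point, head_pos))
    t = dot(to_plane, plane_normal) / denom
    if t < 0:                        # canvas is behind the user
        return None
    return tuple(h + t * f for h, f in zip(head_pos, head_forward))

# User at eye height 1.6 m looking straight ahead (+z) at a canvas 2 m away
hit = gaze_to_canvas(
    head_pos=(0.0, 1.6, 0.0),
    head_forward=(0.0, 0.0, 1.0),    # unit vector from head orientation
    plane_point=(0.0, 1.6, 2.0),     # any point on the canvas plane
    plane_normal=(0.0, 0.0, -1.0),   # canvas faces the user
)
print(hit)  # (0.0, 1.6, 2.0) — the center of the canvas
```

Sampling this hit point every frame and connecting consecutive points gives the painted stroke; in Unreal the same effect comes from a line trace along the camera's forward vector.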
The most basic function of an eye-tracking painting tool in virtual reality was realized in our prototype through head movements. Users could choose tools, paint, teleport, and change the environment with our final prototype, but it was still constrained by some technical limitations. A few functions, like erase, undo, and redo, could not be realized in Unreal for now, but we hope to make them work using other software and hardware. We also hope to look into eye-tracking technology and make it possible to select tools and draw using gaze movement alone. In our current prototype, the voice instructions are manually monitored; we would like to automate this as well to make a truly multi-modal solution for our users.
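Since the voice instructions in our prototype were monitored manually, a future version would need a small dispatcher that maps recognized phrases to tool actions. The sketch below shows one possible shape for that layer; the phrases, tool names, and structure are hypothetical and not taken from our prototype, and the input strings stand in for the output of a real speech recognition engine:

```python
def make_dispatcher():
    """Build a tiny voice-command dispatcher for a painting tool."""
    state = {"tool": "brush", "color": "black", "stroke": 1.0}

    # Map recognized phrases to actions that mutate the tool state.
    commands = {
        "select brush":  lambda: state.update(tool="brush"),
        "select eraser": lambda: state.update(tool="eraser"),
        "thicker":       lambda: state.update(stroke=state["stroke"] * 1.5),
        "thinner":       lambda: state.update(stroke=state["stroke"] / 1.5),
    }

    def dispatch(phrase):
        """Apply a spoken phrase; ignore anything unrecognized."""
        action = commands.get(phrase.lower().strip())
        if action is None:
            return state, False     # unrecognized speech must not crash the app
        action()
        return state, True

    return dispatch

dispatch = make_dispatcher()
dispatch("Select Eraser")
state, ok = dispatch("thicker")
print(state["tool"], state["stroke"])  # eraser 1.5
```

Keeping the speech layer separate from the drawing layer like this would let gaze and voice be developed and tested independently, which is the core of a multi-modal design.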
To view the complete details of this project, check out the project booklet here.