ColorBlind VR: A colorblindness simulator for Google Cardboard

Russell Alleen-Willems
5 min read · Oct 9, 2017

--

Project Description: ColorBlind VR is a Google Cardboard game set in a nature scene. The user is challenged to collect as many ripe fruits as possible, then repeats the task with colorblindness simulated so they can compare their scores and the subjective difficulty of the two rounds.

This project was developed in 2017 for the Teamworks group project, part of the Udacity VR Nanodegree program. The challenge for Teamworks projects was to include elements that fit the theme of “Color.”

As a team, we decided to create an experience that would demonstrate the effect of colorblindness on perception and educate users about the prevalence of colorblindness and the subjective experience of its most common form: Protanopia.

2D Gameplay Video of ColorBlind VR

Resources Used: Google Cardboard SDK, Unity 2017.1.1f1, the TextMesh Pro plugin, Blender, custom 3D models, and free sounds and music.

The game was developed in around 60 hours.

Team Art3mis Members

Please rate and provide any feedback on the team’s project through this Google Form.

Results

The final experience meets our original design goals: it educates users about the basics of colorblindness and gives them a glimpse of the additional challenges people with colorblindness face in their daily lives.

While playing, users experience the same environment with both normal vision and simulated “Red-Green” (Protanopia) colorblindness.

The change from normal vision to colorblindness is very pronounced in VR and gives users a sense of how people with colorblindness experience the world differently. Because the ripe and unripe fruit colors are similar in brightness, they look nearly identical once the red-green distinction is removed by the simulation. With only a small brightness difference to go on, users find it hard to tell ripe fruits from unripe ones and generally score lower than they do with normal vision.
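
To make the effect concrete, one common way to approximate protanopia in software is to multiply each RGB color by a 3×3 matrix; the values below are the widely cited Machado et al. (2009) full-severity protanopia matrix. This is only an illustrative sketch (the project itself used pre-made materials, described later), and the two fruit colors are hypothetical, but it shows why a red and a green of similar brightness collapse to nearly the same hue.

```csharp
using UnityEngine;

// Illustrative only: approximates protanopia by multiplying an RGB color with
// the widely cited Machado et al. (2009) full-severity protanopia matrix.
// The project itself used pre-authored materials, so treat this as a sketch.
public static class ProtanopiaSim
{
    // Rows are output R, G, B; columns are input R, G, B.
    static readonly float[,] M =
    {
        {  0.152286f,  1.052583f, -0.204868f },
        {  0.114503f,  0.786281f,  0.099216f },
        { -0.003882f, -0.048116f,  1.051998f },
    };

    public static Color Simulate(Color c)
    {
        return new Color(
            Mathf.Clamp01(M[0, 0] * c.r + M[0, 1] * c.g + M[0, 2] * c.b),
            Mathf.Clamp01(M[1, 0] * c.r + M[1, 1] * c.g + M[1, 2] * c.b),
            Mathf.Clamp01(M[2, 0] * c.r + M[2, 1] * c.g + M[2, 2] * c.b),
            c.a);
    }
}

// Hypothetical fruit colors of similar brightness:
//   ripe   red   (0.85, 0.35, 0.10)  ->  roughly (0.48, 0.38, 0.09)
//   unripe green (0.25, 0.45, 0.10)  ->  roughly (0.49, 0.39, 0.08)
// Both map to nearly the same muddy yellow-brown, which is why scores drop.
```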

Development Process

The initial concept was to simulate the difficulty people with colorblindness may have in completing simple tasks where color was a key element. The team discussed multiple methods for communicating this difficulty and decided on a simple collection mini-game where users were tasked with collecting only ripe (red-colored) fruits.

Work Plan:

We divided our work into manageable chunks and met weekly so that each team member could take charge of particular tasks and report on their progress during the next meeting.

Sara wrote code to randomly generate and place the fruit GameObjects and created the majority of the 3D assets, including the trees, fruits, and the score and timer signs. Russell used Blender to create the park sign model and wrote code to display text about colorblindness and the game controls on the park sign and to move the player through the different stages of the application. Gineton wrote the core of the game logic and worked on lighting and general optimizations for mobile devices. Diego collected and implemented the background music and game audio and baked the light sources into lightmaps for higher-quality, better-performing illumination and shadows. Sara and Diego also worked together to review and improve the code Russell and Gineton wrote for the game logic and user interface.
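
As an illustration of the spawning step, the sketch below shows one way random fruit placement might look in Unity. The component, its field names (ripePrefab, unripePrefab, spawnArea), and the default values are assumptions for this example, not the project's actual script.

```csharp
using UnityEngine;

// Hypothetical sketch of random fruit placement; names and values are
// illustrative, not taken from the project's actual code.
public class FruitSpawner : MonoBehaviour
{
    public GameObject ripePrefab;    // red, collectible fruit
    public GameObject unripePrefab;  // green, non-scoring fruit
    public BoxCollider spawnArea;    // volume around the trees to scatter fruit in
    public int fruitCount = 40;
    [Range(0f, 1f)] public float ripeChance = 0.5f;

    void Start()
    {
        Bounds b = spawnArea.bounds;
        for (int i = 0; i < fruitCount; i++)
        {
            Vector3 pos = new Vector3(
                Random.Range(b.min.x, b.max.x),
                Random.Range(b.min.y, b.max.y),
                Random.Range(b.min.z, b.max.z));

            GameObject prefab = Random.value < ripeChance ? ripePrefab : unripePrefab;
            Instantiate(prefab, pos, Random.rotation, transform);
        }
    }
}
```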

Russell quickly prototyped the game interactions using the “Low-Poly Park” asset from the Unity Asset Store and evaluated two colorblindness assets: “Colorblind Effect” and “Color Blindness Simulator”. Feedback on this early prototype showed, early in development, that the game needed to clearly explain its goals and controls to users unfamiliar with VR.

A prototype of game interactions was built early in the development process.
The user interface was prototyped using images on in-game UI canvases before the 3D assets were created.
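
For readers unfamiliar with how gaze selection is usually wired up on Cardboard, the sketch below shows the common pattern: the Google VR SDK's GvrPointerInputModule and a GvrPointerPhysicsRaycaster on the camera forward standard Unity EventSystem events to whatever collider the reticle rests on. The component and its field names are illustrative assumptions, not the project's prototype code.

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Sketch of a gaze-selectable fruit. Assumes the scene has an EventSystem with
// GvrPointerInputModule and a GvrPointerPhysicsRaycaster on the main camera,
// so hover/click events arrive when the reticle rests on this collider and the
// Cardboard button is pressed. Names are illustrative.
[RequireComponent(typeof(Collider))]
public class CollectibleFruit : MonoBehaviour,
    IPointerEnterHandler, IPointerExitHandler, IPointerClickHandler
{
    public bool isRipe;
    public Material highlightMaterial;

    Renderer rend;
    Material normalMaterial;

    void Awake()
    {
        rend = GetComponent<Renderer>();
        normalMaterial = rend.material;
    }

    public void OnPointerEnter(PointerEventData eventData)
    {
        rend.material = highlightMaterial;   // highlight while gazed at
    }

    public void OnPointerExit(PointerEventData eventData)
    {
        rend.material = normalMaterial;
    }

    public void OnPointerClick(PointerEventData eventData)
    {
        // A hypothetical score manager would award points only for ripe fruit:
        // ScoreManager.Instance.AddFruit(isRipe);
        Destroy(gameObject);
    }
}
```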

Testing and Iteration

User feedback on the initial prototype indicated that the game controls needed more explanation, so the team added text signs explaining how to play using gaze and the Cardboard button. Diego also suggested using the TextMesh Pro plugin to provide text that is clearer and easier to read than Unity’s default UI text.
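
As a hedged example of driving TextMesh Pro text from script (the component and its fields are hypothetical, not the project's code):

```csharp
using TMPro;
using UnityEngine;

// Illustrative sketch: updating a TextMesh Pro label on the park sign.
// Assumes a TextMeshProUGUI component on a world-space canvas attached to the sign.
public class SignText : MonoBehaviour
{
    public TextMeshProUGUI label;

    public void ShowInstructions()
    {
        label.text = "Gaze at a ripe (red) fruit and press the Cardboard button to collect it.";
    }

    public void ShowScore(int score)
    {
        label.text = "You collected " + score + " ripe fruits.";
    }
}
```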

The Colorblind Effect asset worked well when running the game in Unity on a PC, but performed very poorly on mobile devices. Gineton identified that Colorblind Effect required multi-pass rendering, which caused the slowdown on mobile devices (6–7 FPS on some of the devices we tested). Diego circumvented this limitation by creating new materials that duplicated Colorblind Effect’s protanopia setting, letting the team use single-pass rendering so the application runs at 60 FPS on mobile devices.
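
The workaround can be pictured as swapping each renderer’s material for a pre-authored protanopia-tinted duplicate when the colorblind round begins, rather than running a full-screen, multi-pass image effect. The sketch below illustrates that idea under assumed names; it is not Diego’s actual implementation.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hedged sketch of the single-pass workaround: instead of a full-screen image
// effect, each renderer gets a pre-authored "protanopia" duplicate of its
// material. Pairs are wired up in the Inspector; names are illustrative.
public class ColorblindMaterialSwapper : MonoBehaviour
{
    [System.Serializable]
    public class MaterialPair
    {
        public Material normal;
        public Material protanopia;  // same material with colors pre-converted
    }

    public MaterialPair[] pairs;

    public void SetColorblind(bool enabled)
    {
        // Build a lookup from the currently assigned material to its replacement.
        var lookup = new Dictionary<Material, Material>();
        foreach (var p in pairs)
            lookup[enabled ? p.normal : p.protanopia] = enabled ? p.protanopia : p.normal;

        foreach (var rend in FindObjectsOfType<Renderer>())
        {
            Material replacement;
            if (lookup.TryGetValue(rend.sharedMaterial, out replacement))
                rend.sharedMaterial = replacement;
        }
    }
}
```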

Breakdown of the Final Piece

When the application starts, the user is presented with a sign that welcomes them to ColorBlind VR and explains the prevalence of colorblindness (e.g. up to 8% of men and 0.5% of women have some form of colorblindness). The user then presses a UI button on the sign to play the game with normal vision. The park sign is disabled and the score and timer signs appear. The user can highlight red fruits with their gaze and collect them using the Google Cardboard button. At the end of the normal-vision round, the park sign reappears and shows the user their score. Next, the user presses a button to play the collection game again with Protanopia colorblindness simulated. At the end of the colorblind round, the user is shown their scores for both rounds and presented with links to more information on colorblindness from the National Eye Institute.
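
That flow amounts to a small state machine. The sketch below walks through the same stages with placeholder object names (parkSign, scoreSign, timerSign); it reuses the ColorblindMaterialSwapper sketch from the previous section and is not the project’s actual code.

```csharp
using UnityEngine;

// Hedged sketch of the stage flow described above; object names are
// placeholders, not the project's actual scene objects.
public class GameFlow : MonoBehaviour
{
    public enum Stage { Welcome, NormalRound, NormalScore, ColorblindRound, FinalScore }

    public GameObject parkSign;   // welcome text, scores, and info links
    public GameObject scoreSign;  // in-round score display
    public GameObject timerSign;  // in-round countdown
    public ColorblindMaterialSwapper swapper;  // from the earlier sketch

    Stage stage = Stage.Welcome;

    // Called by the UI button on the park sign.
    public void StartNextRound()
    {
        bool colorblind = (stage == Stage.NormalScore);
        stage = colorblind ? Stage.ColorblindRound : Stage.NormalRound;

        swapper.SetColorblind(colorblind);
        parkSign.SetActive(false);
        scoreSign.SetActive(true);
        timerSign.SetActive(true);
    }

    // Called when the round timer runs out.
    public void EndRound()
    {
        bool wasColorblind = (stage == Stage.ColorblindRound);
        stage = wasColorblind ? Stage.FinalScore : Stage.NormalScore;

        scoreSign.SetActive(false);
        timerSign.SetActive(false);
        parkSign.SetActive(true);  // shows the score(s) and, at the end, info links
    }
}
```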

The team created a sketch showing the different stages of the application and the user flow between them.

Possible Future Improvements

While the project was fun to create and play, some possible areas to improve are:

  1. Allow users to modify game settings such as the time per round and the type of colorblindness simulated (e.g. Deuteranopia); a minimal settings sketch follows this list.
  2. Add other task types, such as following traffic lights or reading text on a low-contrast background.
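
As a minimal sketch of the first idea, assuming a Unity ScriptableObject would hold the settings (none of this exists in the current project):

```csharp
using UnityEngine;

// Hypothetical settings asset for the improvements listed above; nothing here
// is part of the current project.
public enum ColorblindnessType { Protanopia, Deuteranopia, Tritanopia }

[CreateAssetMenu(menuName = "ColorBlindVR/Game Settings")]
public class GameSettings : ScriptableObject
{
    public float secondsPerRound = 60f;
    public ColorblindnessType simulatedType = ColorblindnessType.Protanopia;
}
```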
