This summer I packed up and moved to San Francisco for a 14-week project at Google. I joined the Material Design team as an Interaction Designer and was also part of the accessibility team, a perfect fit for connecting my interests in design and cognitive accessibility. I was tasked with exploring web accessibility for users with Autism Spectrum Disorder (ASD) and Attention Deficit Hyperactivity Disorder (ADHD) using the Material Design system. Before starting any design work, I spent some time reading through the Material Design spec to better understand the philosophy behind the system, then conducted some initial user research. From that research I crafted a set of personas, which greatly informed the entire design process.
After presenting some initial prototype explorations to the team, I took the feedback I received and decided to focus on one target persona. Using the look and feel of a mock news application, I started building out some of my ideas in a high-fidelity prototype. I also found it very useful to set goals and metrics for the project ahead of time using the HEART framework, focusing on happiness and engagement. This provided structure, along with clear goals and metrics for measuring success.
The prototype went through many variations before arriving at the version I used for testing. The goal was to find actual users who matched the persona to test the prototype; however, due to project constraints, I had to recruit internally. Fortunately, many people who work at Google are on the autism spectrum and/or have ADHD, so I drafted and sent a survey to invite participants. Within a week, I received over 26 replies to the survey and was able to set up in-person testing sessions with 7 people.
Before testing the prototype, I spent the first 10–15 minutes of each session interviewing the participant to gather insights into how they were currently using technology, what struggles they had, and what methods they were using to deal with those struggles. I knew from my previous research that many people were using ad blockers, but it was very interesting to learn how they were using them. Almost every participant talked about the importance of eliminating distraction and the difficulty of refocusing after a distraction occurs. One participant walked me through the way they use an ad blocker to assist: they turn off any element on the page that they find distracting, to help focus on the content they want to see. The lack of these tools on mobile is a major roadblock, so these users primarily read and browse content on a computer.
“The computer is preferred when available due to adblocker set up and other useful tools already configured.”
The prototype testing covered 5 different settings or features aimed at improving the experience for users with ASD and/or ADHD. To begin, I gave participants the test device and asked them to open the prototype, give their initial impressions, and locate the preferences. Before even seeing any of the preferences, every participant immediately commented that the images were very distracting.
“Pictures give no useful information.”
After locating the preferences section, most participants were immediately drawn to the one labeled “Image Preference,” which allowed users to turn off all images in the UI. This seemed to be the favorite of all the settings: 6 of 7 participants said they would use it all the time, and the seventh most of the time. They also wanted more control over the size and placement of images.
The next task was to locate the “Iconography” setting and apply it to the UI. Participants generally preferred this setting applied for non-system-level icons, as they were already familiar with the hamburger and overflow menus (possibly because they were Google employees). Most mentioned that they would use it along with the “images turned off” option. In addition, some users requested the ability to turn icons off completely.
“Most icons are effectively meaningless but we just know them now, just like the floppy disk.”
The “Skim Mode” setting allowed users to convert an article to a shorter, skimmed view. Participants were generally unsure how the content was being summarized, and reactions were mixed. Most preferred the text-only version, in which the article detail was converted to a bullet-point summary.
“There’s a lot of fluff in articles, would be nice to get rid of it.”
The “Focus Mode” feature attempted to reduce distraction by allowing users to focus on one thing at a time, highlighting each item as they scrolled. It received mixed reactions, but there was a lot of interest in the idea. Participants favored using this feature with images turned off and also expressed a desire to block out other distracting elements, similar to the way they use ad blockers on desktop.
The last feature I tested was more speculative, though I did verify its feasibility with a machine learning specialist. The “Image Interpretation” setting would allow a user to long-press on an image to get an interpretation of the possible emotions, feelings, or meaning behind it. Participants were skeptical and nervous about where the “interpretation” was coming from, but were open to using it if they could trust it. They wanted factual information drawn from images, rather than stylistic imagery, which they generally found useless and distracting. One participant mentioned that he preferred to browse with images off, but wanted to be able to see an image that was factually relevant to the article.
“Having someone tell you how to interpret an image is manipulative.”
Although the project timeline left little time to iterate on the design, a few ideas arose from the testing. I heard a clear need for more control over, and customization of, the amount of information shown at one time. Perhaps a single setting called “Information Preference” could allow users to customize this through a segmented slider.
Participants also expressed a desire to skim content at the article-detail level, rather than through a global setting. This would be more conducive to skimming and diving in, giving users the ability to toggle back and forth.
I learned a lot from this study and the people that I tested with were very excited by the work. I have included a few of their closing thoughts:
“I’m happy that Google is considering the needs of people with cognitive disabilities.”
“Everything in the prototype was more useful than I expected it to be.”
“I’m thinking about how I can use these ideas for our app!”
“This is great! Can I get on the beta for this?”
Thoughts for the design community:
- Consider how cognitive accessibility fits into your current projects and initiatives
- Be aware of information density and how it affects people with ASD and ADHD
- Think about how we can improve the experience on mobile
- Get involved in accessibility initiatives and conversations
- Include ASD and ADHD into the greater accessibility conversation
I hope to continue exploring this topic as I decide on a direction for my thesis work. I am very interested in working on something to make communication easier for nonverbal individuals with autism; I believe there is a real need for human-centered design in that area right now. I’m interested in exploring how a communication application’s interface could change or adapt in complexity for users with severe ASD. Ultimately, I would like to help my autistic brother, who is nonverbal, and I am hopeful that I will find the area where I can make the greatest impact.