Multimodal interfaces for all

Opinion on Accessibility

Uma Anupindi
3 min read · Feb 9, 2024

Interface design has come a long way on accessibility, but true inclusion requires a mindset shift. As my professor Dr. Amy Ko notes, “Fundamentally, universal user interface designs are ones that can be operated via any input and output modality.” Wobbrock, in his work on Ability-Based Design, outlined thoughtful guidelines for personalizing interactions around users’ abilities. Building on this, I think the real wave of the future is multimodal interfaces.

Rather than retrofitting for disabilities, I imagine multimodal interfaces giving information to multiple senses simultaneously. Picture a 3D tactile rendering of a webpage, with true skeuomorphic buttons; wouldn’t that be fun? This may sound outlandish, but research on multisensory inclusive design with sensory substitution shows that congruent visual and tactile stimuli can increase the perceived quality of touch interfaces (Hoggan, Kaaresoja, Laitinen, & Brewster, 2008). This is an example of the brain’s multisensory integration and combination processes; it’s fascinating how the brain binds sensory information using spatial and temporal coincidence. By virtue of brain plasticity, it is even possible to acquire auditory information through vibro-tactile cues (Butts, 2015; Eagleman, Novich, Goodman, Sahoo, & Perotta, 2017). From an applied perspective, the goal is inclusive cross-modal interfaces that deliver the same information through different senses.

The same researchers note, “There are various avenues to leverage interface modes to facilitate sensory substitutions, not just visual to auditory conversions.” Visual-to-tactile techniques use intuitive, analogical cross-modal pairings: a circle, for example, can be conveyed directly on the skin via 2D tactile cues, as the sketch below illustrates. Many more novel interfaces have leveraged the brain’s ability to naturally learn cross-modal associations.
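
To make the circle example concrete, here is a minimal sketch of an analogical visual-to-tactile mapping: it rasterizes a circle’s outline onto a small grid of vibrotactile actuator intensities, the kind of 2D tactile cue described above. The grid size, intensity scale, and falloff function are my own illustrative assumptions, not anything specified in the cited research.

```typescript
// Hypothetical sketch: rasterize a circle's outline onto a small grid
// of vibrotactile actuators (say, a back- or wrist-mounted array).
// Grid size, intensity scale, and falloff are illustrative assumptions.

type ActuatorGrid = number[][]; // drive intensity per actuator, 0..1

function circleToTactileGrid(size: number, radius: number): ActuatorGrid {
  const center = (size - 1) / 2;
  const grid: ActuatorGrid = Array.from({ length: size }, () =>
    new Array<number>(size).fill(0)
  );
  for (let row = 0; row < size; row++) {
    for (let col = 0; col < size; col++) {
      // Distance of this actuator from the grid's center.
      const d = Math.hypot(row - center, col - center);
      // Drive actuators near the circle's outline hardest; intensity
      // falls off linearly with distance from the ideal radius.
      grid[row][col] = Math.max(0, 1 - Math.abs(d - radius));
    }
  }
  return grid;
}

// Example: an 8x8 array rendering a circle of radius 3 actuator-widths.
const pattern = circleToTactileGrid(8, 3);
console.log(
  pattern
    .map((row) => row.map((v) => (v > 0.5 ? "#" : ".")).join(""))
    .join("\n")
);
```

Printed as ASCII, the high-intensity actuators trace a ring: the shape on the screen becomes, quite literally, the shape on the skin.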

The other component is that information should adapt fluidly to context and granularity needs, using graphics, text, audio, touch, or whatever modes suit the user. Don’t you love that there are now so many ways of “reading” a book? You can get the hard copy, listen to the audiobook, read a summary, or watch a short video. Sometimes the video is more effective for my context; sometimes print satisfies my senses. Shouldn’t it be just as normal to consume the web in visual, auditory, or tactile form, at whatever level of granularity you desire? A sketch of what that content model could look like follows.
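
As a thought experiment, here is a hedged sketch of that “one piece of content, many renditions” idea: the same information object carries renditions in several modalities and granularities, and the interface picks whichever the user prefers. All of the type and field names are hypothetical; this is not an existing standard or API.

```typescript
// Hypothetical content model: one information object, many renditions.
// None of these type or field names come from an existing standard.

type Modality = "visual" | "auditory" | "tactile";
type Granularity = "summary" | "full";

interface Rendition {
  modality: Modality;
  granularity: Granularity;
  payload: string; // e.g. HTML, an audio URL, or a tactile pattern id
}

interface ContentItem {
  id: string;
  renditions: Rendition[];
}

function pickRendition(
  item: ContentItem,
  preferred: Modality[],
  granularity: Granularity
): Rendition | undefined {
  // Walk the user's modality preference order and return the first
  // match; fall back to any rendition at the requested granularity.
  for (const modality of preferred) {
    const hit = item.renditions.find(
      (r) => r.modality === modality && r.granularity === granularity
    );
    if (hit) return hit;
  }
  return item.renditions.find((r) => r.granularity === granularity);
}

// Example: a user who prefers audio, then touch, asking for a summary.
const article: ContentItem = {
  id: "multimodal-intro",
  renditions: [
    { modality: "visual", granularity: "full", payload: "<article>...</article>" },
    { modality: "auditory", granularity: "summary", payload: "audio/summary.mp3" },
  ],
};
console.log(pickRendition(article, ["auditory", "tactile"], "summary"));
```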

I imagine that multimodal design will also distribute information across multiple senses wherever possible (subject to ability, of course). Spreading the load prevents over-reliance on any one mode, helping avoid occupational problems like eyestrain or carpal tunnel syndrome, and it promotes better cognition by engaging more of the brain. I say let’s not put all our eggs in one sensory basket!

Most of all, I imagine multimodal interfaces working with users through life changes, whether long-term or situational. As abilities shift, the interface seamlessly routes information to the more available senses, not by removing functionality but by adjusting toward the optimal experience for each stage of life. The small sketch below hints at how such re-routing might be scored.
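
Here is a speculative sketch of that re-routing, assuming a simple availability score per sense (reflecting a long-term ability or a situational limit like “eyes busy”): the interface ranks modalities by availability and sends output down the best channel. The scoring model is entirely my own illustration.

```typescript
// Speculative sketch: score each modality by how available the relevant
// sense is right now (a long-term ability or a situational limit), then
// route output to the best channel. The scoring model is an assumption.

type Modality = "visual" | "auditory" | "tactile";

interface AbilityProfile {
  // 0 = this sense is unavailable right now, 1 = fully available.
  availability: Record<Modality, number>;
}

function rankModalities(profile: AbilityProfile): Modality[] {
  return (Object.keys(profile.availability) as Modality[]).sort(
    (a, b) => profile.availability[b] - profile.availability[a]
  );
}

// Example: a sighted user whose eyes are busy, e.g. while walking.
const eyesBusy: AbilityProfile = {
  availability: { visual: 0.2, auditory: 0.9, tactile: 0.7 },
};
console.log(rankModalities(eyesBusy)); // ["auditory", "tactile", "visual"]
```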

Some may say multimodal design is overkill. But by making universal function the default, not an accessibility add-on, we promote inclusion. As tech becomes more embedded in life and AI opens new opportunities to hyper-personalize, now is the time to make flexible, multimodal design the standard. The benefits for health, cognition, and independence make this an urgent need, not just a nice-to-have.

References:

Ko, A. J. Accessibility. In User Interface Software and Technology. https://faculty.washington.edu/ajko/books/user-interface-software-and-technology/accessibility

Wobbrock, J. O., Gajos, K. Z., Kane, S. K., & Vanderheiden, G. C. (2018). Ability-based design. Communications of the ACM. https://dl.acm.org/doi/10.1145/3148051

Lloyd-Esenkaya, T., Lloyd-Esenkaya, V., O’Neill, E., et al. (2020). Multisensory inclusive design with sensory substitution. Cognitive Research: Principles and Implications, 5, 37. https://doi.org/10.1186/s41235-020-00240-7


Uma Anupindi

Pursuing Human-Computer Interaction Design at the University of Washington, Seattle