Design for Everyone: How Universal Design Makes Technology Better for All of Us

Image from Microsoft’s Inclusive Design Toolkit (pdf download).

All of us have channels of perception that have been underutilized. In our visually dominated world, it is time to give more room to the senses that have extra bandwidth.

Creating technology that can be used one-handed, without visual feedback, without auditory feedback, without touch, or without focused attention enables your product to be used more widely, in more circumstances, and by more people across the spectrum of humanity. This principle is known as universal design: the idea that products, buildings and environments ought to be accessible to all people, regardless of age, ability or other individuating features.

At times, the intelligent use of the senses means adding lights in place of tones, adding haptics instead of lights, or adding tones in place of spoken language — the key is to develop a fine sensibility for how the interface will work most successfully. The ultimate goal is to allow users to easily develop fluency in interactions with your technology, whoever they are.

Touch typing uses muscle memory, stored in various regions of the brain, to perform the task without consciously attending to it.

The three senses that are used most commonly in our technology are sight, hearing and touch. Smell, taste, and proprioception are far less commonly used, although proprioception may be gaining steam with the use of virtual reality. Muscle memory can be exceptionally productive to use in some cases — touch typing, for example — because it is the strongest, most durable form of memory, so tasks learned this way can be carried out without conscious attention.

When translating between senses, the ideal is not to simply map from one sense into another, but to create the equivalent experience.

The performance artist Christine Sun Kim created a Face Opera where prelingually deaf performers, including herself at times, create a range of emotive facial expressions in unison, as a chorus: an interpretation of an opera for those who cannot hear music. This is one of the rare creative translations that achieves an analogous experience in a different sense.

One of the performances of Christine Sun Kim’s Face Opera.

Curiously, music does not evoke all the emotions we experience in daily life, and this is perhaps the most unusual aspect of the experience of music. Although music can evoke anger, it cannot evoke either contempt or jealousy. There have been many attempts to explain how music evokes emotions, relating them to pitch, tempo, timbre, major and minor chords and other aspects of music. Yet only one observation seems to hold universally: the limited range of emotions that is born out of music. This is perhaps the strongest argument for the idea that some emotions are not, in fact, natural to human beings, and that one day we may move past them.

This is an experience, regrettably, that the Face Opera cannot capture. In Musicophilia, Oliver Sacks describes patients with musical afflictions, from musical auras in epilepsy to a continuous flow of spontaneous musical inspiration after a near-death experience. Perhaps someday we will be able to directly induce the spontaneous experience of music in our brains, and the deaf will be able to experience it. One patient in Musicophilia notes:

I do have fragments of poetry and sudden phrases darting into my mind, but with nothing like the richness and range of my spontaneous musical imagery. Perhaps it is not just the nervous system, but music itself that has something very peculiar about it — its beat, its melodic contours, so different from those of speech, and its peculiarly direct connection to the emotions. It really is a very odd business that all of us, to varying degrees, have music in our heads. (p. 40)

Perhaps the crowd at a live concert feels so connected because music creates a unique state where we can feel bonded to one another without the core emotions that divide us. “People must learn to hate,” Nelson Mandela said, “and if they can learn to hate, they can be taught to love. For love comes more naturally to the human heart than its opposite.” This may explain why people fall in love with music and follow it throughout their lives.

There is a place for direct mapping between the senses as well, particularly when a crafted experience is not possible. For instance, text-to-speech applications are key for opening up access to the web for those with vision impairment and can be accomplished in as little as a single line of code.[1] Because navigating takes longer when using text-to-speech, access to a simplified navigation menu is still needed. A quick solution is to simply provide desktop access to the mobile version of the site, as this version likely offers simplified navigation already.
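
As a rough illustration of that “single line,” most modern browsers expose the Web Speech API, which can read the text of a page aloud in essentially one call. The sketch below is a minimal example, not a production screen reader, and the readAloud helper name is ours.

```typescript
// Minimal sketch, assuming a browser that supports the Web Speech API
// (window.speechSynthesis). Not a substitute for a full screen reader.
function readAloud(text: string): void {
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.rate = 1.0; // normal speaking rate
  window.speechSynthesis.speak(utterance);
}

// The "single line" version: read the visible text of the current page.
window.speechSynthesis.speak(new SpeechSynthesisUtterance(document.body.innerText));
```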

The first stage of design requires determining whether a given function is best carried out by a visual, auditory or haptic interface. Better yet, a design can allow the user to modify their experience and switch between the three. Being able to convert functions fluently between these three senses is an essential skill for the design of interactive technology, spaces and everyday objects.

Audio to Visual

Myles de Bastion is a deaf musician and sound designer who develops technology and art installations that enable sound to be experienced as light and vibration. His work has appeared on Jimmy Kimmel Live!, and he has built large-format installations for music festivals and for Grammy Award-winning jazz artist Esperanza Spalding.

In 2012 Myles founded CymaSpace, a non-profit that facilitates arts & cultural events that are inclusive of the Deaf & Hard-of-Hearing, and in 2015 he founded Audiolux Devices, a technology company that now produces professional products pairing light with sound. They also create lighting for emergency response, such as LED strips that provide visual directions in an emergency, lighting up the most important information and directing people along the most efficient path.

Because sound is an excellent way to maintain ambient awareness, some groups are making initial efforts to translate sound into visuals for those with hearing impairment, often including icons to indicate the sound’s likely source — a human male or female, a child, a car alarm, and so on.

Such efforts are genuinely challenging. Music, for example, can feature not only a melody, but also harmonic and inharmonic overtones, percussive sounds, horns and trumpets, piano played softly or forte, the human voice, and so on. Our ears translate one singular complex waveform — picked up by our eardrum — to represent many different sounds, instruments and sources: “In everyday life, we are usually surrounded by many sound sources… the ear is able to disentangle these vibrations so faithfully that we are not aware of the fact that they were ever mixed.” How can any visualization convey as much information? Through the open-source community, Audiolux is attempting to create more refined and expressive systems.

Most visualizations add a pleasant visual experience to music, but do not genuinely translate auditory information into an equivalent visual — or visual and haptic — experience. The experience of music and sound is emotive; even if all the information were present in visual and haptic forms, it would be difficult to create the same emotional experience. This is why the Face Opera is such an ingenious approach: it directly cues into the viewer’s feelings and creates an analogous experience, just through other means.

There are promising avenues for using visualizations to convey sound even without capturing all of the information. The most direct way to visualize sound is the Schlieren method, which uses a particular arrangement of mirrors and lenses to capture the compression of air molecules by sound or heat. Like minuscule ripples in water, sound and heat waves appear as light and dark wavelets emanating from the source. Real-time visualizations based on this aesthetic, like the linear algebra used to create computer-generated fire or smoke in movies, could be both interesting and informative.

For those who are deaf, bursts of colors on a smartphone paired with haptics could indicate the type of notification, much like distinctive sounds indicate the nature of a notification for those with hearing. Phones could offer an optional “visualized sound” setting that translates audio notifications into visuals for those who want it.
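
As a rough sketch of what such a setting might look like on the web, each notification type could map to a color burst paired with a vibration pattern via the Vibration API (where supported). The type names, colors and patterns below are purely illustrative assumptions, not part of any existing phone feature.

```typescript
// Hypothetical sketch of "visualized sound": each notification type maps to
// a color burst plus a paired vibration pattern. Names and values are illustrative.
type NotificationKind = "message" | "alarm" | "doorbell";

const cues: Record<NotificationKind, { color: string; vibrationMs: number[] }> = {
  message:  { color: "#4da6ff", vibrationMs: [80] },            // short blue pulse
  alarm:    { color: "#ff4d4d", vibrationMs: [200, 100, 200] }, // urgent red double buzz
  doorbell: { color: "#ffd24d", vibrationMs: [120, 60, 120] },  // warm yellow pattern
};

function visualizeNotification(kind: NotificationKind): void {
  const { color, vibrationMs } = cues[kind];
  document.body.style.backgroundColor = color;                   // brief color burst
  setTimeout(() => { document.body.style.backgroundColor = ""; }, 400);
  if ("vibrate" in navigator) navigator.vibrate(vibrationMs);    // paired haptic, where supported
}
```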

Visual to Audio

Partnering with an inclusive design team, the University of Colorado is working to develop a purely auditory version of its online lessons in physics. The purpose is not simply to read the text of the lessons aloud, but to convey an understanding of the concept in a separate, unique fashion, solely with sound. Instead of learning about the Bohr radius through an image, what would it be like to hear it in sound?

Consider what translating visual into auditory could mean — not simply for people who are blind — but for everyone out there who prefers auditory learning. Sound is far more stimulating to the imagination than visuals. “Radio is like television,” some have quipped, “except the pictures are better.”

Podcasts are an increasingly important avenue for people to receive information.

Roughly 25%-30% of the population state a preference for auditory learning, with about 30% reporting a preference for mixed auditory, visual and kinesthetic stimulus. Combined, this represents the majority of the population.

While there is debate over whether a preferred learning style actually translates into better retention when material is presented in that medium, researchers find ample evidence in the literature that people will express preferences for one medium over another, and that such preferences persist in individuals over time. Because preferences often determine the medium an individual will voluntarily seek out, they are important in directing the development of content. The availability of auditory lessons would allow students to engage with material in different ways, perhaps boosting repetition and thus retention. The popularity of podcasts may underscore a widespread interest in auditory learning, or a natural spillover of information into a channel of perception that is less overburdened. In 2018, about 44% of Americans reported listening to podcasts at one time or another. Over a quarter of the population, and a full third of those between the ages of 24 and 55, listened to podcasts monthly.

Where visuals close off possibility, sound opens it up. Sound is not determinative and concrete in the way visuals are. Foley artists create sounds that mimic rain or rusty doors, and rarely are these sounds made by the organic processes that normally produce them. Most people cannot hear the difference because many processes produce similar sounds. In the real world, we often don’t know what made a sound until we gather more information. Was that a marble hitting the floor or was it an almond? Upon visual inspection, it was, in fact, an almond, although it sounded very much like a marble.

Podcasts have been pioneering auditory teaching and representation of data (sonification) through sound effects. Conveying quantities and proportions is a particularly important challenge because visual formats are quite effective at conveying such information. A radio segment on automated stock trading represents the time it takes to deliver a “sell” order from Chicago (“bup… … …bup”) instead of sending it from the building next to the New York Stock Exchange (“bup-bup”) to explain why an arms race of proximity led to new rules to create equality among different trading firms. Podcasts are rich sources of creative ideas for representing information with sound.

Text-to-speech for websites can use similar methods. Using long tones to represent increments of ten and short tones to represent increments of one, a listener can “hear” the proportions of different elements of a pie chart or a graph.
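
A minimal sketch of that scheme, using the browser’s Web Audio API: long tones count the tens, short tones count the ones. The helper names below are illustrative assumptions, not part of any existing screen-reader feature.

```typescript
// Sketch of sonifying a value (e.g., a pie-chart percentage) with the Web Audio API.
// Long tones stand for increments of ten, short tones for increments of one.
const audioCtx = new AudioContext();

function playTone(startTime: number, durationSec: number, freqHz = 440): void {
  const osc = audioCtx.createOscillator();
  osc.frequency.value = freqHz;
  osc.connect(audioCtx.destination);
  osc.start(startTime);
  osc.stop(startTime + durationSec);
}

// Sonify a value such as 37 (%): three long tones, then seven short tones.
function sonifyValue(value: number): void {
  const tens = Math.floor(value / 10);
  const ones = value % 10;
  let t = audioCtx.currentTime;
  for (let i = 0; i < tens; i++) { playTone(t, 0.4); t += 0.55; }  // long tones = tens
  for (let i = 0; i < ones; i++) { playTone(t, 0.12); t += 0.25; } // short tones = ones
}

sonifyValue(37);
```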

More difficult is translating rich visual information to sound. Soundscape is a smartphone app that uses binaural audio combined with local information to guide the blind around their neighborhoods while walking. It can identify landmarks, streets, and guide users to locations using sound cues. Might this be modified for tourists exploring new cities where a verbal guide is reassuring?

Micro-speakers, like these by JBL, open up new vistas for the creation of sound.

We now have access to high-quality sound through micro-speakers that could be placed in complex patterns on a wall or ceiling. Would it be possible to “paint with sound” to convey an abstract image in greater detail? Could we really begin to “feel” the shape and texture of the art if we expressed the sound in enough individual detail?

What about translating language to sound and music? We make cashiers memorize dozens of numerical codes, when few would report this as a skill they excel at. Music, on the other hand, is something we memorize almost without effort when it is distinctive and heard regularly. What if, instead of making announcements over a loudspeaker, we assigned musical phrases to common communications between employees, much like the African talking drum, which uses a set of tonal drum beats to communicate known phrases?

A new system of musical sounds would aim for something short of Morse code, with its near infinite combinations, but with at least a few dozen individual, unique messages.

Such a system of musical phrases with individual meanings was employed at a Toyota factory in Japan, although with a relatively limited palette of songs that were already well known to employees (for example, the birthday song). When the Andon cord was pulled to halt production, the musical phrase would indicate where in the line work was halted so employees could address the issue more swiftly.

Visual and Sound to Haptic

Haptics are versatile yet underutilized. It is worthwhile to consider more sophisticated ways to pair haptic stimuli with sound or visuals, using rhythm and a wider array of haptic sensations. For electronic devices, experiment with a wider array of haptic motors to create distinctive vibration patterns. When paired with audio and visuals, these subtle haptics can create a truly immersive experience.

Haptics are a promising avenue to explore as a more natural replacement for the ambient awareness created by sound. Unlike visuals, they do not require our primary attention or demand that we look in any particular direction.

The Teslasuit is a haptic suit for gamers that demonstrates the full range of our ability to create nuanced and diverse haptic stimulus.

The mesh of haptic elements in the Teslasuit, a wearable bodysuit developed for gamers, can convey not only sensations of touch, including the feel of wind or water, but also of hot and cold. If coded instead to convey sounds picked up in the environment, haptic suits could provide an intuitive ambient awareness for different senses. Haptic compass belts provide accessible wearable options, as they can be easily looped around the waist. The belts contain multiple buzzers that are programmed to buzz in the direction of North, or they can be programmed to send buzzes for walking directions (one short buzz on the left side of the hip for left, and two short buzzes for an immediate left).
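
The core mapping inside such a belt is simple enough to sketch: given the wearer’s heading and a ring of evenly spaced buzzers, pick the buzzer that currently points North. The function below is a hypothetical illustration under those assumptions, not the firmware of any particular belt.

```typescript
// Hypothetical sketch of the core mapping in a haptic compass belt:
// headingDegrees is the wearer's heading (degrees clockwise from North),
// buzzerCount is the number of evenly spaced buzzers, with buzzer 0 at the front.
// Returns the index of the buzzer that currently points North.
function buzzerFacingNorth(headingDegrees: number, buzzerCount: number): number {
  // Where North sits relative to the wearer's front, measured clockwise.
  const northRelative = ((360 - headingDegrees) % 360 + 360) % 360;
  const degreesPerBuzzer = 360 / buzzerCount;
  return Math.round(northRelative / degreesPerBuzzer) % buzzerCount;
}

// Example: with 8 buzzers and the wearer facing East (90°), North is to
// their left, so buzzer 6 (over the left hip) should fire.
console.log(buzzerFacingNorth(90, 8)); // -> 6
```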

Multimodal and Switching Between Senses

More and more, our devices will use all three of these senses, and the main task will be finding intelligent roles for each. Language can require too much concentrated attention at times, and sound can be intrusive. Spoken commands can often be converted to tones, and tones can often be converted to lights or haptics. Be creative. We may find that many of our user interfaces are more complex than they need to be, and that simpler solutions work better. Instead of driving directions conveyed exclusively in spoken language, perhaps tones could help indicate the distance of an approaching turn. The direction of a turn could be indicated by a vibration in your seat — a vibration on the left buttock meaning turn left, and a vibration on the right buttock meaning turn right. Although the street names would likely need to remain spoken, the direction of turns may be more seamlessly intuitive if integrated into physical sensations, and tones may be a more discreet reminder of the distance to an upcoming turn.
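
As a hedged sketch of how such a multimodal cue might be dispatched, the snippet below splits a turn instruction across tone, haptic and spoken channels. The device hooks are hypothetical stand-ins that simply log what they would do; nothing here reflects a real navigation API.

```typescript
// Hedged sketch: splitting a turn instruction across tone, haptic and spoken
// channels. The "device hooks" are hypothetical stand-ins that only log.
type Side = "left" | "right";
const playBeeps   = (count: number) => console.log(`tone x${count}`);       // distance cue
const vibrateSeat = (side: Side)    => console.log(`vibrate ${side} seat`);  // direction cue
const speak       = (text: string)  => console.log(`say: ${text}`);          // spoken street name

function announceTurn(direction: Side, streetName: string, distanceMeters: number): void {
  // Closer turns get more beeps: a quiet reminder of how soon the turn arrives.
  playBeeps(distanceMeters > 500 ? 1 : distanceMeters > 150 ? 2 : 3);
  // The direction itself arrives as a vibration on the matching side of the seat.
  vibrateSeat(direction);
  // The street name stays spoken, since it carries arbitrary language.
  speak(`turn ${direction} onto ${streetName}`);
}

announceTurn("left", "Alder Street", 120);
```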

Optionality is likely to be an increasing emphasis in future devices. Being able to convert notifications between senses allows us to customize our experience for a broader range of contexts. On websites, it would be advantageous if the display could be switched to themes adjusted for those with color blindness (for whom certain colors, like the red commonly used to make important elements visible, are not easily distinguishable), for those who need larger text, or for those who need higher contrast (useful when using a laptop outside in the glare of sunlight), or converted to audible text with sonification for data and images. With the game-changing Braille tablet by Blitab, it is increasingly likely that the internet will open up for the blind and visually impaired as never before.

Working with Limitations Can Stimulate Stalled-Out Design Processes

By widening our concept of who can be helped by technology, we can open up the work of design to conquer new challenges. When a survey about technology includes difficulties due to any cause, including arthritis, dyslexia, and any other limitation, only 21% of working age adults report being entirely free from challenges in working with technology. Our concepts of “Dis-Ability” and “Fully-Able” have been misleading us. Most of us struggle with technology in one way or another.

Luckily, difficulty exposes fruitful areas for new products and new approaches, and innovation that solves a niche problem often helps all of us. Pots and pans made for arthritic customers, designed to balance easily when lifted from the stove, also reduce the risk of spills for the “fully-able” by making them easier to handle proficiently. The Ferrari Enzo was designed with subtly roomier seats and entries to spare the knees of aging pensioners, and those changes similarly make it easier for the rest of us to get in and out without bumps and bruises, even if we are capable of navigating ordinary car doors most of the time. Wider tolerances simply mean the design works successfully more of the time.

Identifying the ideal group for testing can be a highly effective way of discovering how well your design works, even if that group does not represent the main population of intended users. Knives developed to meet the demands of the busiest chefs offer better longevity, a more precise grip and better balance for casual cooks who simply use them in their own kitchens. Designing the interface for self-driving cars with blind testers identifies points of friction that can then be refined and eliminated for all users.[2]

The blind are the perfect test subjects for refining the interface for self-driving cars, as they are able to pinpoint friction in the interface more accurately than other subjects.

Even if your product is not targeted to those with specific limitations, your products will be more innovative, more integrated, and more seamless if you develop projects in communication with a test audience with higher demands on your device or interface.

Creativity loves constraint. Paired with the freedom to explore new avenues, a well-chosen limitation is one of the most stimulating things a design process can have.

Exercises in designing to a limitation offer a tool for injecting creativity and variation into design processes that have stalled. In Drive: The Surprising Truth About What Motivates Us, Daniel Pink highlights research demonstrating that extrinsic motivators — such as money and approval — inhibit creative thinking, including problem-solving and other forms of knowledge work. Extrinsic motivators were quite effective for physical and procedural work — the kinds that made up the bulk of work for most of human history. Creative tasks, which are set to dominate the economy after automation comes of age, are far better driven by intrinsic motivation, such as a personal challenge, curiosity, a desire for knowledge, or an altruistic urge to create something. Variation is often in low supply, as corporate cultures tend to promote safe affirmations of convention over bold ventures in new directions.

In Grand Designs, an architect was hired to design a home for a veteran who had lost three limbs, both legs and one arm, with the aim of creating suitable accommodations. The result is simply a gentler and more accommodating space. It features lower rises on stairs; two shower heads, one with a raised platform appropriate for someone sitting, or for the owner when he is not wearing his prosthetic legs; and ample attention to the height of appliances, drawers, countertops and more. Similar adaptations and attention can make housing more comfortable for others as well. Designing for a specific person, or group, allows conventions to be reevaluated and variation introduced. This is essential to allow for selection based on preference, and this selection process is key for the evolution of technology.

In many ways, the ubiquity of limitation is good news for design. In his book Change by Design, Tim Brown points out that we’re frequently so accustomed to compensating for the deficiencies in the technology we already have that it is difficult for us to imagine a form of technology that would be better adapted to us. In the first five decades after their invention, tin cans needed to be hammered open to access the food inside. Bee Wilson writes:

Food in cans was invented long before it could easily be used. A patent for Nicolas Appert’s revolutionary new canning process was issued in 1812… But it would be another fifty years before anyone managed to devise a can opener. (Consider the Fork)

A thin tin can was finally invented in 1846, quickly followed by can openers and self-opening cans like those found on sardine tins. We cannot know that we are missing a technology until it is invented, and until then, we simply make do.

It is also important for technology to be appropriated into new, positive, creative uses, because this is another key stage in the evolution of technology. Colorblind artists can use the smartphone app Seeing AI to identify colors accurately, yet because stigma prevents information from being exchanged freely, many colorblind artists are unlikely to know it exists.

Companies can do good in the world and increase the size and creativity of their markets by helping to eradicate stigma against disability. One billion people worldwide experience some noticeable limitation to function, and this field is where much innovation is set to occur in the decades to come.

Universal design means designing to include the widest spectrum of humanity, which is simply good design. If we become more skilled at creating technology that works seamlessly with us — without so much compensation and adaptation — it will make all of our lives more effective.

Acknowledgements

Thank you to Jess Mitchell at the Inclusive Design Research Center for her advice and perspective on this article. Thank you also to Margaret Price, a principal design strategist at Microsoft, who gave us insight into Microsoft’s efforts in universal and inclusive design and shared important resources with us. Thank you to Aaron Day for inspiration and insight into sound design. This article was co-authored by Kellyn Yvonne Standley and Amber Case. The material is part of Chapter 8 in the upcoming book, Designing with Sound.

Designing with Sound, an upcoming book from Amber Case and Aaron Day, with structural and content editing from (me) Kellyn Yvonne Standley.

Designing with Sound

Sound is one of the most commonly overlooked components in product design, even though it’s often the first way people interact with many products. When designers don’t pay enough attention to sound elements, customers are frequently left with annoying and interruptive results.

This practical book covers several methods that product designers and managers can use to improve everyday interactions through an understanding and application of sound design. You can pre-order the book here.

Footnotes

[1] Medium does not have a direct way to add text-to-voice to articles, which is an irony not lost on the authors of this piece, who have contacted Medium with the recommendation of adding this functionality in the future.

[2] “Robot, Take me to the Pub! Sound Design for Future Electric/Autonomous Vehicles,” by Allman-Ward, Brüel & Kjær, Engineering Integrity Society, June 2018. Thank you to Aaron Day for bringing this article to my attention.

Written by Kellyn Yvonne Standley

An analytic editor, creative thinker, content creator and strategist with a background in medicine and philosophy. Tweets @kellynstandley.
