UCD Charrette — My First Process Blog

My first experience with the design process was very inspiring, and revealed to me how much I’m going to enjoy the kind of work I’ll be doing. I learned a lot about the different aspects and phases of the design process and am looking forward to whatever comes next.

What did I do?

I had my first experience with a charrette: a fast-paced run-through of the design process. I worked with several other people to identify possible clients who might have special needs when using a smart car interface, and then to design an interface for such a device. In my case, I worked on a smart car interface for a deaf client. Together with a few others, I designed an interface with features geared towards these users, including text-to-speech and speech-to-text functions, clearly visible notifications, and haptic feedback in the steering wheel as a way to aid navigation and instruction.

What issues did we address, and how?

During our process we outlined several potential problems that might be faced by someone who is deaf, including:

  • Difficulty communicating with others
  • Difficulty hearing auditory cues a driver might usually encounter, such as horns and sirens
  • Difficulty following instructions from a maps or GPS application, since those often rely heavily on sound

We then went about trying to address these issues. For the difficulty communicating with others, we discussed a few sub-scenarios. First, we discussed what might happen if the driver were pulled over by the authorities, and decided that a keyboard with text-to-speech output would be a helpful communication option (a rough sketch of this idea follows below). We also discussed what might happen if the driver receives a phone call while in the car, and debated the idea of converting the call’s audio to text in real time, but decided this would be too unreliable and distracting (and therefore potentially dangerous) for a driver.
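
To make the text-to-speech keyboard idea a little more concrete, here is a minimal sketch in Python. It assumes the off-the-shelf pyttsx3 library and a speak_typed_message helper I made up for illustration; this is just my own sketch of the concept, not something we actually built during the charrette.

```python
# Minimal sketch of the text-to-speech keyboard idea, using the
# off-the-shelf pyttsx3 library. A real in-car system would use the
# head unit's own speech engine instead.
import pyttsx3

def speak_typed_message(message: str) -> None:
    """Speak aloud a message the driver has typed on the in-dash keyboard."""
    engine = pyttsx3.init()
    engine.setProperty("rate", 150)  # a calm, intelligible speaking rate
    engine.say(message)
    engine.runAndWait()  # blocks until the message has been spoken

# e.g., a saved phrase the driver selects when pulled over
speak_typed_message("I am deaf. I will communicate through this screen.")
```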

Another issue we identified was the loss of regular auditory cues that drivers normally experience, such as car honks and sirens. We decided that it would be useful to have a ‘notification light’ above the display that would indicate different things based on sound. By placing microphones on the outside of the car, the system could pick up and classify sounds like honks and sirens, and then signal to the driver that, say, an emergency vehicle is approaching and they should probably get out of the way (a sketch of this mapping follows below).
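
As a rough illustration, here is how the sound-to-light mapping might look in code. The event labels, light colors, and the set_light stand-in are all my own hypothetical placeholders; a real system would sit on top of an actual audio-classification model.

```python
# Sketch of the 'notification light' idea: exterior microphones feed a
# sound classifier, and classified events drive a light above the display.
from enum import Enum

class SoundEvent(Enum):
    HORN = "horn"
    SIREN = "siren"
    UNKNOWN = "unknown"

# (color, pattern) the notification light shows per detected sound
LIGHT_SIGNALS = {
    SoundEvent.HORN: ("yellow", "double flash"),
    SoundEvent.SIREN: ("red", "fast pulse"),
}

def set_light(color: str, pattern: str) -> None:
    """Stand-in for the hardware call that drives the notification light."""
    print(f"Notification light: {color} ({pattern})")

def on_sound_detected(event: SoundEvent) -> None:
    """Route a classified exterior sound to a visible cue for the driver."""
    signal = LIGHT_SIGNALS.get(event)
    if signal is not None:  # ignore sounds we can't classify confidently
        set_light(*signal)

on_sound_detected(SoundEvent.SIREN)  # e.g., an approaching emergency vehicle
```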

Lastly, we addressed navigation. It has become standard for smart car interfaces to include some sort of navigation suite, and while these usually suit a deaf driver reasonably well, since they can be used in a more visual way, we determined there were definitely ways they could be improved for this audience. In many cases, notifications and instructions are delivered by voice, meaning the driver doesn’t need to check the screen every two seconds for a new instruction. We decided that while we certainly couldn’t make the driver ‘hear’ the instructions, we might be able to notify them when they need to check the display. I had the idea of including haptic feedback in the steering wheel as a way of delivering more information to the driver, and this really opened up further ideas about how it might be used. We debated whether different parts of the wheel could vibrate to indicate different directions, whether different types of vibration might indicate different types of notifications, and many other applications (a rough sketch of one such mapping follows below).
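
Here is a quick sketch of what that event-to-vibration mapping might look like. The zones, patterns, and the vibrate stand-in are hypothetical placeholders for whatever the car’s hardware would actually expose.

```python
# Sketch of the haptic-wheel idea: map navigation events to vibration
# zones and patterns on the steering wheel.
from enum import Enum

class NavEvent(Enum):
    TURN_LEFT = "turn_left"
    TURN_RIGHT = "turn_right"
    REROUTING = "rerouting"

# (zone on the wheel, vibration pattern) per navigation event
HAPTIC_MAP = {
    NavEvent.TURN_LEFT: ("left grip", "two short pulses"),
    NavEvent.TURN_RIGHT: ("right grip", "two short pulses"),
    NavEvent.REROUTING: ("both grips", "one long pulse"),
}

def vibrate(zone: str, pattern: str) -> None:
    """Stand-in for a hardware call that fires the wheel's haptic motors."""
    print(f"Vibrating {zone}: {pattern}")

def notify_driver(event: NavEvent) -> None:
    """Cue the driver to glance at the display for a new instruction."""
    zone, pattern = HAPTIC_MAP[event]
    vibrate(zone, pattern)

notify_driver(NavEvent.TURN_LEFT)  # e.g., an upcoming left turn
```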

All in all, I think our brainstorming brought up a lot of interesting ideas.

What about the actual interface?

Once we had generated all of these ideas, the time came to begin brainstorming our actual interface. We started by creating a scenario in which the functions of our application might be used, which can be seen below:

An example use of our smart-car system, displaying several of its functions.

This scenario helped us to further solidify some of the ideas we had, including the use of navigation, notifications, and the text-to-speech functions of our system.

Once we had created that, we began sketching the interface in earnest. We started by writing out the menus a user would have to navigate in its use, as a way of preparing for the sketches, and then made actual sketches of what navigating the ‘maps’ function might look like:

An example of the navigation in our application

So how did it all feel?

This process was incredibly interesting and satisfying. I really enjoyed putting myself in the shoes of a person with different needs from my own and looking at ways to address the challenges they might face. As a psychology major, I’ve always loved trying to understand how other people think and feel, and this is an area where I can really apply that experience. I also really enjoyed working with other people, as their ideas kept building on mine and giving me opportunities to bring more to the table. This was a truly collaborative process that I thoroughly enjoyed. Lastly, I loved being able to apply my love of tech, and my everyday interest in user interfaces and hardware, to an actual project. User-centered design, I have discovered, truly is the intersection of my love of technology and design and my love of psychology.
