CSE 590w #2: Evaluating Smartwatch-based Sound Feedback for Deaf and Hard-of-Hearing Users Across Contexts
Presented by Steven Goodman, PhD student in HCDE
This article is part of a series. Previous: 590w #1 Next: none yet
This week’s seminar was presented by Steven Goodman, one of my classmates who is a PhD student in HCDE. My notes are below.
I thought his presentation was quite interesting, but his pace was quick, so I didn’t get to jot as many notes as I would have liked for some sections. I think this project fits under a larger umbrella of “testing the potential of new technologies for purposes other than their original design goals.” I’m almost positive there’s a coined name for this somewhere, but I don’t know it. The field is especially open for technologies with operating systems of their own, since you can develop your own applications to layer on top of them. That was true here: the team explored using smartwatches equipped with technology that can sense sound in the environment and convert it into haptic feedback. My two biggest takeaways:
- Tactile/haptic feedback can be incredibly useful for simple, common sounds (knocking, footsteps, someone’s own name), since they can be interpreted instantly without looking at a screen and taking one’s eyes off the environment.
- One of the biggest barriers to adoption right now is handling ambient noise. What counts as background noise versus an important (or annoying) sound is entirely personal, and hearing people filter it naturally. Technology, however, has trouble distinguishing important from unimportant sounds in noisy areas like cafés, which remains a shortcoming and a problem to solve in the future.
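To make the pipeline concrete, here’s a minimal sketch of my own (not from Steven’s project — the sound labels, vibration patterns, and decibel threshold are all invented for illustration) showing how a detected sound might map to a distinct vibration pattern, with a crude loudness floor standing in for the ambient-noise filtering problem from the second takeaway:

```python
# Hypothetical sketch of a sense -> classify -> vibrate pipeline.
# All names and values here are my own assumptions, not the project's design.

# Each recognized sound gets a distinct vibration pattern
# (alternating vibrate/pause durations in milliseconds).
SOUND_PATTERNS = {
    "knock": [200, 100, 200],
    "name_called": [500],
    "footsteps": [100, 100, 100, 100, 100],
}

# Assumed loudness floor below which a sound is treated as background noise.
# A real system would need to adapt this per user and per context (e.g. a café).
AMBIENT_DB_FLOOR = 55

def haptic_pattern_for(label: str, loudness_db: float):
    """Return a vibration pattern for a detected sound, or None if the
    sound is likely ambient (too quiet) or has no pattern assigned."""
    if loudness_db < AMBIENT_DB_FLOOR:
        return None  # filter out as background noise
    return SOUND_PATTERNS.get(label)

print(haptic_pattern_for("knock", 70))    # [200, 100, 200]
print(haptic_pattern_for("knock", 40))    # None: below the ambient floor
print(haptic_pattern_for("chatter", 70))  # None: no pattern for this label
```

The appeal of the first takeaway shows up here: each pattern is distinguishable by feel alone, so the wearer never has to glance at the watch. The hard part, per the second takeaway, is that a fixed threshold like `AMBIENT_DB_FLOOR` is exactly what fails in noisy environments.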