90 Second Interview: Jerry Schnepp

MidwestUX
90 Second Interviews
2 min read · Jul 27, 2018

By Eric Bjerstedt, video editing by Anja Harris

Jerry Schnepp, professor at Bowling Green State University, is working on something really cool. The American Sign Language Synthesizer Project takes audio and automatically generates an ASL animation—something that, not surprisingly, is hard to do.

Transcript:
Hi, I’m Jerry Schnepp. I’m a professor at Bowling Green State University and I teach courses in interactive media. The ASL Synthesizer Project is a collaborative effort between DePaul University, Columbia College, and Bowling Green State University. It’s a group of researchers who all have expertise in computer science, ASL interpretation, or linguistics, working together to create a system that would take speech input and then generate animations of American Sign Language that are acceptable to a native signer. And that’s something that’s really difficult to do: if you listen to computer-generated speech, even the best systems are still laborious to use. Siri and Google Voice are both really great voice synthesis systems, but they’re still not perfect… and that same challenge applies to sign language. We’re constrained by a very realistic human form, and we can’t make it look cartoonish, or else it won’t serve its purpose.

We have worked with really skilled animators, people who study linguistics, and people who study sign language in particular to craft animations and movement that are realistic. What we typically do is our team validates their acceptability, and then we’ll do user testing with members of the deaf community. We are staying true to human-centered design, staying focused on the actual users and considering what people will find acceptable, what people will find usable, and what integrates into their lifestyles.
