Eyes and Ears Battling Traffic for the Magic of Conversation

This update on the science and the art of conversation is written in conjunction with the launch of my new podcast, ANY HELLO: Conversations with Strangers. These conversations lead to more conversations with strangers — and it seems that all of us have a story to share. Check out www.anyhello.com/project/rocket-woman/ for Episode 1.


Consider the title of this new paper from January’s Proceedings of the National Academy of Sciences:

“The eardrums move when the eyes move.”

Roughly, a group of researchers found evidence to suggest:

“That a vision-related process modulates the first stage of hearing. In particular, these eye movement-related eardrum oscillations may help the brain connect sights and sounds despite changes in the spatial relationship between the eyes and the ears.”

No one knows why these ear “wobbles” happen, according to coverage in The Atlantic. In fact, as The Atlantic reports:

“Until Jennifer Groh, from Duke University, discovered them, no one even knew that they happened at all.”


This research is so cool because humans and animals “can reliably perceive behaviorally relevant sounds in noisy and reverberant environment, yet the neural mechanisms behind this phenomenon are largely unknown.”

Put more simply: this process is how you can be standing at a busy city intersection and still be able to carry on a lively conversation with a friend by your side.

One of these papers suggests the human ear actually extracts “relevant sounds in diverse noisy environments.”

Pretty amazing.

There’s also a wide body of research about humans and animals NOT handling diverse noisy environments well, involving something called auditory processing disorder. It’s a “neurological defect that affects how the brain processes spoken languages,” which makes it hard for some children to “process verbal instructions or even to filter out background noise in the classroom.”

(Definition from the National Coalition of Auditory Processing Disorders.)


Untangling the complexities of speech could be the next frontier.

A team from Google will be talking about similar concepts in August, when they deliver a paper at the Association for Computing Machinery’s special interest group on computer graphics (SIGGRAPH 2018) conference.

They’re working on technology that mimics what humans do in noisy situations, isolating what’s important from what’s not important — what they call “a speaker-independent audio-visual model for speech separation.”

The uses? Better closed-captioning systems for the hearing impaired, or better device responses to voice commands.
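The core idea behind speech separation can be illustrated without any machine learning. A minimal sketch, assuming only SciPy: two “speakers” are stood in for by pure tones, the mixture is transformed into time-frequency cells with a short-time Fourier transform, and an ideal binary mask keeps only the cells where the target dominates. This is a toy stand-in for the general masking approach, not Google’s audio-visual model, which learns the mask from both audio and video of the speaker’s face.

```python
import numpy as np
from scipy.signal import stft, istft

# Toy "cocktail party": two speakers stood in for by tones at different pitches.
fs = 8000
t = np.arange(fs) / fs
target = np.sin(2 * np.pi * 440 * t)       # the voice we want to keep
interferer = np.sin(2 * np.pi * 1320 * t)  # the competing voice
mix = target + interferer

# Move everything into the time-frequency domain.
_, _, Zmix = stft(mix, fs=fs, nperseg=256)
_, _, Zt = stft(target, fs=fs, nperseg=256)
_, _, Zi = stft(interferer, fs=fs, nperseg=256)

# Ideal binary mask: keep each time-frequency cell only where the target
# is louder than the interferer (a learned model estimates this mask).
mask = np.abs(Zt) > np.abs(Zi)
_, estimate = istft(Zmix * mask, fs=fs, nperseg=256)
estimate = estimate[: len(target)]

# The masked estimate tracks the target far better than the raw mixture does.
corr_estimate = np.corrcoef(estimate, target)[0, 1]
corr_mix = np.corrcoef(mix, target)[0, 1]
```

In a real separation system the mask is the hard part: it must be predicted from the mixture alone (plus, in the Google work, visual cues), since the clean target is unavailable at inference time.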


This is all to say that there’s a lot going on when two people sitting on a park bench are having a conversation.

And, for me … there’s one more layer in play, an as-yet-unexplained feature of human DNA: