Seeking the ‘Whys’ and ‘Hows’ of Music
Dr. Aniruddh Patel researches the fundamental questions behind how the brain processes music, which has led him to explore everything from dancing cockatoos to whether a composer’s language shows up in their melodies.
Cognitive neuroscientist Dr. Aniruddh Patel will be at the Kennedy Center on June 3 as a part of Sound Health: Music and the Mind, where he will partner with jazz trio Mark G. Meadows & the Movement and the Different Strokes for Different Folks choir for this interactive presentation on creative aging that uses neuroscience to explain how the mind and body respond to the act of creating music.
In science, the area of basic research is in some ways about answering the kinds of questions that a curious child might ask about the world around them. Why? How? For any field, these are the building blocks that inspire more specific questions and help clinicians and others trying to figure out how to apply what we now know in a useful way.
“Like my other colleagues that do basic research, we’re driven by those basic questions,” said Aniruddh Patel. “Just like a physicist would be interested in basic questions of matter and energy and how the universe works, we have basic questions about how the brain works and how music is processed by the brain.”
Patel is a Professor of Psychology at Tufts University, and his work focuses on music cognition: the mental processes involved in making, perceiving, and responding to music.
“I got into the field through evolution, thinking about why are we musical and why do we have this behavior? Did it serve some function in evolution? This is an ancient question that intrigued Darwin.”
That curiosity has led him to explore music cognition down several paths. From an evolutionary perspective, Patel explains that rhythm seems to be a core aspect of music, and the ability to perceive and process rhythms is likely what first emerged in the musical human brain. That led him to wonder: if humans have the ability to process rhythm, do other closely related species have that ability as well?
“Surprisingly the answer currently seems to be ‘no,’” Patel said. “It looks like the ability to sense a beat and move in time with it draws on sophisticated brain mechanisms that we might not share with our closest primate relatives.”
In 2006 he hypothesized that if other animals could recognize and hold a beat, they would be vocal learning animals like songbirds and parrots, not primates. “Most animals just have an innate set of calls. They don’t learn them or modify them much as a function of experience, whereas songbirds and parrots and humans have to learn how to vocalize based on experience, which requires very tight circuits between the auditory and motor systems, that is, the hearing and movement centers of the brain,” Patel said.
In 2008, Patel was shown a YouTube video of a cockatoo named Snowball, bobbing along to “Everybody” by the Backstreet Boys. Patel and his team decided to put Snowball’s apparent sense of rhythm to the test by creating 11 versions of “Everybody” that were sped up or slowed down to various tempos. Snowball did remarkably well adjusting to the different tempos, synchronizing to the beat in nine out of the 11 tests.
This work built on Patel’s larger research program exploring cognitive connections between music and language. In 2008 he published Music, Language and the Brain, which is widely regarded as a leading resource on the topic and explores the idea that music and language share deep connections in how they are processed in the brain. In one early study, Patel explored whether the languages we speak might shape the music we create. He and his team used computer software to measure pitch and duration patterns in spoken British English and French, and in melodic themes written by turn-of-the-century composers from each country.
“The intuition that there are hidden connections between instrumental music and speech had been floating around the musicological world without evidence for over fifty years, and we were able to bring new quantitative measurement tools to this idea and show that there was something to it, which was very satisfying,” he said.
When Patel started in the field, there were no conferences or societies for music neuroscientists, unlike today. Still, it is a young field, and more basic research is needed to build our understanding of how the brain interacts with music. While such research doesn’t always have a clear connection to a real-world application, as the field has grown Patel has seen increasing interest among music therapists and clinicians in using basic research to inform, guide, and support their work.
“I’m primarily motivated by basic questions about how the brain processes rhythm or melody, but you don’t have to go far before these questions lead you to practical issues like ‘Does rhythmic training improve linguistic processing in children with language disorders?’ or ‘Can regular singing improve fluency in aphasic patients?’” Patel said. “These are questions that are interesting from an applied perspective, and they have very interesting basic research components, too, because they’re about fundamental links between music and language. I’m happy when I see that something that I work on has a possible application to a real world issue or problem.”