Music is everywhere. It has become so seamlessly interwoven in our lives that sometimes we don’t even notice its presence. You encounter or interact with music on a daily basis, but have you thought about what it takes to actually write a song? Probably not. In this episode of Wonky and Technical, I’m going to look at a familiar subject from some unfamiliar angles. Join me as I dive into music composition, the neurology of music, and how the technicalities that define both are delightfully strange.

Now you might be thinking: Music has been around forever! What’s left to not know about it?

And you’d be right—music is incredibly old. We’ve been studying the technical side of music since the dawn of math. We know more about the physics of sound and how it breaks into sequences and scales than I could ever summarize. But understanding how sound waves work is a far cry from grasping what the human brain undergoes during the songwriting process.

Songwriting is a bit of an enigma. We know, or at least assume, that it’s a hard and probably frustrating process. But we might also assume that it’s a creative process driven by emotion, that it’s mostly nontechnical.

And so this is the perfect place to begin.

In the audio portion of this episode, I talk to two people: Ahko, an American-born musician currently working in Japan as a teacher, and Sarah Storm, an actor and accent coach with a unique perspective on music. In these interviews, you’ll learn how they each work with music, their views on songwriting as a technical process, and why explaining songwriting is particularly difficult.

Settle in. This is about to get wonky.

The Surprisingly Technical Ears of a Composer

Simply put, the big draw of music is that it makes people feel things. It isn’t just patterns of sounds; music has leverage over us. It hits an emotional switch that can turn on or rev up different reactions in our brains. It affects our emotional regulation and recognition, our associative memory and recall, and our ability to recognize patterns via pathways that aren’t activated by other forms of art.

So, what is it that we’re really hearing?

Outside of humans, the ability to keep a beat seems limited to birds that can mimic speech and a sea lion named Ronan. It’s a neurologically complex task that requires synchronization between the sound-processing and movement-processing portions of the brain — as well as a fair bit of cognitive oomph — which in turn calls on a whole bunch of supporting functions from the rest of the brain.

Music, therefore, is a specific pattern that turns on all of those functions, and does so in a way that produces predictable, semireplicable results. Just as your executive functions regulate memory and emotions, your “pattern functions” regulate how you respond to and process sound, among other things.

To get an idea of the scale of this sound-driven regulation, check out Valorie Salimpoor’s research on the neurology of music listening. It’s a fascinating read and includes some illustrative music samples.

Salimpoor’s euphoric, mood-altering experience while listening to Brahms’ Hungarian Dance No. 5 is a great example of just how powerful our response to music can be. Her findings shed light on the role of the brain’s “pleasure center” and the chemical release that occurs when someone is enjoying a piece of music, as well as the importance of connectivity between the brain’s regions — all of which informs our understanding of the composition process.

You see, when a composer listens to a song, they don’t just hear sequences of notes; they hear the triggers for all these different kinds of brain activity.

The emotive and creative language artists use to describe their music isn’t subjective fluff. When an artist says that a song or a particular series of notes feels happy or sad, or that it has a certain direction or character, they aren’t just personifying it. They’re describing how their brains synthesize the patterns of music. The characteristics of a note that sounds “bright” or a melody that sounds “energetic” are fairly contextual, but they’re still driven by neurological factors that are replicable and predictable.

In fact, these emotive descriptors are accurate enough that you can classify them as technical terminology. It’s highly contextual terminology, but it’s still a theory-independent system of language that catalogs what a song does and what an artist wants it to do.

Music Theory: Loved, Hated, Useful?

Music theory is a strange, old beast. Essentially, it’s the study of music fundamentals: notation, composition methods, key and time signatures, rhythm, music history, and more. It uses dense language, and it’s a skill you have to use consistently in order to build on and retain. It’s really easy to learn the basics and neglect the rest if you aren’t interested in writing particularly complex music or diving deep into music history.

A surprising number of musicians even go so far as to celebrate the fact that they don’t study or practice music theory at all, with some saying that theory-driven musicians produce complex patterns that pretend to be music, instead of “actual music” that taps into people’s emotions.

Are you sensing a pattern here?

Individual notes, sequences of notes, scales, modes, and keys — they’re all mathematically quantifiable. We’ve even reached the point where neural networks — that is, computer systems loosely modeled on the brain — can produce inoffensive music. All that to say music theory is backed by some solid science. But there’s a fundamental difference between “solving for music” and writing a song that produces a strong emotional response in a specific audience.
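That quantifiability is easy to demonstrate. Here’s a minimal sketch — assuming standard twelve-tone equal temperament with A4 tuned to 440 Hz, and using the MIDI note-numbering convention; the function and variable names are mine, not from any particular library:

```python
# Twelve-tone equal temperament: each semitone multiplies frequency by 2**(1/12).
# MIDI convention: note number 69 corresponds to A4 = 440 Hz.

def note_frequency(midi_note: int, a4_hz: float = 440.0) -> float:
    """Return the frequency in Hz of a MIDI note number under equal temperament."""
    return a4_hz * 2 ** ((midi_note - 69) / 12)

# A major scale is a fixed pattern of semitone steps from the root.
# Starting at middle C (MIDI note 60), that pattern spells out C major.
major_scale_steps = [0, 2, 4, 5, 7, 9, 11, 12]
c_major = [round(note_frequency(60 + s), 2) for s in major_scale_steps]
print(c_major)  # middle C is ~261.63 Hz; the C an octave up is ~523.25 Hz
```

The whole scale falls out of one exponential formula and a list of interval offsets — which is exactly the sense in which keys and scales are “quantifiable,” and exactly why quantifiability alone doesn’t get you a song.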

This makes more sense when you look at music composition as a task that requires emotional intelligence, and at emotional intelligence as being deeper than simply HR-speak for not being an asshole.

The system of pattern recognition that drives our connection to music bridges the gap between interpreting sound and interpreting motion by connecting them to memories and emotions, right? It’s kind of like a prelingual system of emotional recognition. Let’s go back to Salimpoor’s article for a moment. We now know that the brain processes music using templates. We also know that we use these templates as cognitive shortcuts, interpolating and extrapolating from them to anticipate patterns in the music we listen to. It’s how we end up “in sync” with songs we haven’t heard before.

A consequence of this template-driven processing is that we perceive unfamiliar music as being either really cool or really blah based on our listening history and cultural background, both of which inform our tastes on a neurological level. Complex music that doesn’t build on familiar patterns (read: esoteric music that relies heavily on theory instead of feeling) is inherently inaccessible to many people. Our ability to emotionally engage with new music is limited by what we’re familiar with.

Another factor to consider is music’s evolutionary components, as discussed in Virginia Hughes’ article for National Geographic — specifically, how music relates to the way the brain processes and recalls movement. Hughes’ piece covers how we consistently connect both movement and sound to emotion and how that might affect our ability to write and perform music.

If music really is connected to our prelingual emotional intelligence and our general ability to synthesize emotional impressions from mixed sources, then the love/hate relationship that so many musicians seem to have with music theory makes a lot of sense. If songwriting leverages — but doesn’t necessarily require — emotional intelligence and an ability to synthesize emotion from sound, then a “powerful” vs. “complex” dichotomy is a natural result.

Summarizing What We’ve Learned: Songwriting Is Bloody Difficult

Alright, so we get that music does funny things to our brains and that the sensations produced by music we deem to be good aren’t just arbitrary tingles. We know that, for a skilled composer, a close listening can reveal a lot more than just patterns of notes, and that making “good” music is largely a process of understanding and translating previously successful patterns and sensations into something new.

But how the hell do you actually write a song?

The initial components of a song can be inspired by anything: A chord progression can be pulled from a childhood memory; a melody can grow out of a series of family photographs; someone’s probably built a drop out of the sensation of realizing they’ve gotten a parking ticket. Songwriters then build on these musical stubs by playing through their permutations, searching for connections between the different parts.

While the mood and tone of a song might change drastically throughout the process, the core inspiration usually stays the same. Songwriting is iterative, but it’s more reliant on iterative interpretation than iterative note sequences. It’s often a process of translation, too, as the pieces of a song are volleyed back and forth between people who can best interpret the emotions of the song and those who can best express those emotions through musical grammar.

Interestingly, the person who is often the best at translating these ideas during a song’s production isn’t always the artist. You’ll find that many of the artists interviewed on Hrishikesh Hirway’s music podcast, Song Exploder, “have a guy” for music theory — a producer, a fellow band member, or even just a friend — that they bring in at some point to help with the really sticky parts.

These “technically minded” musicians help the artist with the grammar and structure of the song, but they aren’t just cleaning it up—they also help the artist put ideas that drive specific note and chord choices into clearer contexts, giving them a framework to lean on. This kind of help can come into play surprisingly early in the songwriting process, and it can lead to some significant changes.

Musical talent isn’t just the ability to play an instrument; it’s the ability to access a specific neurological mode consistently enough to make a career out of it. It’s the strategic ability to use theory, structure, inspiration, and the support of friends and collaborators to build sensations out of sound.

And that’s what makes it a surprisingly wonky and technical process. It’s a hodgepodge of cognitively demanding pattern recognition, powerful emotional experiences, and the incredible persistence required to take a nugget of inspiration and hammer it into something that satisfies all of the above. It’s a unique art that’s rife with explicit technicalities, and yet also one that’s defined by the implicit technicalities of neurology and emotional intelligence. It’s distinctly human.

And what’s wonkier than that?