Can’t You Just Turn Up the Volume?

AMP Audio
Jan 6, 2015 · 7 min read

Written by Varun Srinivasan, in collaboration with the Amp team

“Why can’t you just turn up the volume?”

When Alex and I were pitching SoundFocus as an app that helped people with hearing loss, this was often the first question we got. We’ve decided to write up our answer to this question as a blog post, complete with cycling bears, to help anyone else who is interested in the true nature of hearing loss.

People often think of hearing loss as a simple loudness problem, one which can be fixed by hitting the “volume up” button on your remote enough times. That’s not quite how it works.

Not the solution. But still awesome.

If you know people who have hearing loss, you've probably noticed that they can't tolerate loud noises that you're fine with, yet they also can't hear some of the things that you can hear perfectly well.

This seems contradictory at first, but to understand it better we need to dive into the physics of sound and hearing.

Sound expresses itself in three dimensions: time (seconds), volume (decibels), and frequency (Hertz). Time and volume are self-explanatory: they define how long a sound lasts and how loud it sounds.

Frequency requires some explanation: it's the number of times per second that a sound wave vibrates. Heavier, bassy sounds, such as the rumble of an engine, are created primarily by low-frequency waves, which may vibrate around 100 times a second. In contrast, crisper sounds like cymbals are composed of high-frequency waves, which vibrate more in the range of 10,000 times per second.
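If you want to play with this yourself, here's a quick sketch in Python with NumPy (purely an illustration, not anything from our app) that generates those two kinds of waves as pure tones:

```python
import numpy as np

SAMPLE_RATE = 44_100  # samples per second, standard CD-quality audio

def sine_wave(freq_hz, duration_s=1.0, amplitude=1.0):
    """Generate a pure tone: a wave that vibrates freq_hz times per second."""
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

rumble = sine_wave(100)          # low and bassy: 100 vibrations per second
cymbal_ish = sine_wave(10_000)   # crisp and high: 10,000 vibrations per second
```

In one second, the low tone completes 100 full cycles while the high tone completes 10,000. Real engine rumbles and cymbal crashes are, of course, messy mixtures of many frequencies, not single pure tones.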

Every sound you hear, from the pop of a cork to a sneeze, is composed of a collection of frequencies that have a certain amount of volume for certain lengths of time. These three factors give each sound its unique characteristic.

Waveform of a hi-hat (drum set)

You're all probably familiar with a waveform (pictured above), which you've seen in a music player at some point. As you look at the waveform, the problem should become apparent. Sound is a 3-dimensional construct, but we can only represent 2 dimensions in a textbook or on a monitor. In the waveform representation, we see Time on the x-axis plotted against Volume on the y-axis. We don't see any data about which frequencies are creating the sound.

For hearing loss, frequency is an important factor. So we're going to use a different representation: the frequency graph. Let's take a fixed-length segment of the sound and apply a Fast Fourier Transform to it, which gives us the graph below.

Here, you have Frequency on the x-axis and Volume on the y-axis, which shows you which frequencies are creating the sound. You do not, however, know anything about the order of sounds or even the length of the sound. This could just as easily represent one second or one hour.
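If you'd like to build a frequency graph like this yourself, here's a rough sketch of the idea in Python with NumPy. The window and the dB conversion are conventional signal-processing choices, not anything specific to our app:

```python
import numpy as np

def frequency_graph(samples, sample_rate):
    """Return (frequencies in Hz, relative volume in dB) for a fixed-length chunk."""
    # A window tapers the edges of the chunk so they don't smear the spectrum.
    spectrum = np.fft.rfft(samples * np.hanning(len(samples)))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    # Convert magnitude to decibels; the tiny epsilon avoids log(0).
    volume_db = 20 * np.log10(np.abs(spectrum) + 1e-12)
    return freqs, volume_db

# A pure 440 Hz tone should show a single peak near 440 Hz on the graph.
sr = 44_100
t = np.arange(sr) / sr
freqs, volume_db = frequency_graph(np.sin(2 * np.pi * 440 * t), sr)
peak = float(freqs[np.argmax(volume_db)])
```

Note how the time dimension disappears here, exactly as described above: the same graph could come from one second of audio or one hour.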

The frequency plot is actually a representation of dynamic range, a concept that we’ll have to dive into in order to fully understand hearing loss.

Dynamic range is the difference between the loudest and quietest sound you can hear. The human ear can hear volumes from 0 dB (a pin drop) to 120 dB (a loud Metallica concert). Anything louder will cause a lot of pain, and anything quieter will be almost impossible to hear.
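A detail worth spelling out: decibels are logarithmic, so that 0-to-120 dB range is a far bigger spread than it looks. Under the standard sound-pressure convention (20 dB per tenfold increase in pressure), the math works out like this:

```python
def pressure_ratio(db):
    """Convert a decibel value to a sound-pressure ratio (20 dB = 10x pressure)."""
    return 10 ** (db / 20)

# The full 0-120 dB range of human hearing spans a millionfold pressure difference.
full_range = pressure_ratio(120)  # one million
```

That millionfold spread is the canvas that hearing loss eats into.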

Frequency range is the range of different frequencies that you can hear. For a typical human child, this ranges from 20 Hz to 20,000 Hz.

What’s interesting to note is that hearing loss begins to affect the dynamic range at particular frequencies, often at the higher end of the spectrum.

So what, then, is hearing loss?

Well, if you get hearing damage at a specific frequency, you’ll start to lose sensitivity to the quiet sounds at this frequency. However, your sensitivity to loud sounds remains the same. This is the subtlety that is often misunderstood.

Each person with hearing loss has a unique hearing pattern: a range of sensitivities across the entire frequency spectrum that varies from person to person. Two people can both suffer from hearing loss and still have completely different hearing patterns, which means they also require different audio processing to correct for their respective losses.

There’s a visual analogy I’m fond of using to explain this. Imagine a picture of oh, say, a bear on a tricycle. Like so:

Developing hearing loss is like not being able to see part of the image anymore. Pretend that you’ve lost about a third of your dynamic range across all frequencies. You’ll start seeing something like this:

Not quite the full picture, is it? Hearing part of a frequency range can be misleading in the same way. And turning up the volume is like trying to zoom in on this image. You'll hear certain parts more loudly (i.e., the big brown bear), but not the parts that have already been cut out (i.e., the tricycle).

What’s the solution? Multi-Band Compression (MBC), a technique that’s been used by the $6 billion hearing aid industry to solve this specific problem.

An MBC adapts to the listener instead of applying a one-size-fits-all fix. With the right data about your hearing pattern, it can mash the full sound into your audible range so that you get all the information you need. It can't give you a perfect copy of the sound, but it can make sure that you're not missing any important context or significant details.

What would this do to our friend, Bear on Tricycle? We’re glad you asked:

What’s happening is that the compressor is fitting the data into the user’s dynamic range without exceeding it. This is something that a simple equalizer or volume change can’t do.

The detail that’s most important to understand is that the loud parts of the sound haven’t gotten louder, but all the quiet parts have, which is exactly what a person with hearing loss needs.
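To make the principle concrete, here's a deliberately crude sketch of upward multi-band compression in Python with NumPy. Real compressors, including ours, track the signal's envelope over time and use carefully designed filter banks; this toy version applies per-sample gain and splits bands with an FFT, purely to show the core idea: quiet content comes up, loud content stays put.

```python
import numpy as np

SAMPLE_RATE = 44_100

def compress_band(band, threshold_db, ratio):
    """Upward-compress one band: raise quiet samples, leave loud ones alone."""
    level_db = 20 * np.log10(np.abs(band) + 1e-12)
    quiet = level_db < threshold_db
    # Quiet samples are pushed toward the threshold; loud samples pass through.
    gain_db = np.where(quiet, (threshold_db - level_db) * (1 - 1 / ratio), 0.0)
    return band * 10 ** (gain_db / 20)

def multiband_compress(samples, band_edges_hz, thresholds_db, ratios):
    """Split audio into frequency bands via FFT, compress each, and recombine."""
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE)
    out = np.zeros(len(samples))
    for (lo, hi), thr, ratio in zip(band_edges_hz, thresholds_db, ratios):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(np.where(mask, spectrum, 0), n=len(samples))
        out += compress_band(band, thr, ratio)
    return out

# A very quiet tone comes out noticeably louder, while a loud one would not.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
quiet_tone = 0.01 * np.sin(2 * np.pi * 440 * t)
boosted = multiband_compress(quiet_tone, [(0.0, 22_050.0)], [-20.0], [4.0])
```

The per-band thresholds and ratios are where a personal hearing pattern would plug in: a band where you've lost sensitivity gets a lower threshold and a stronger ratio than a band where your hearing is fine.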

But enough talking—let’s hear what multi-band compression can do to a sound. Check out the clips below. The first track is an uncompressed segment of a song; the second is that same segment compressed. Give it a listen:


Just by looking at the waveform representation (time vs. loudness), you can see where the song’s quiet parts have been made louder. You can also see how pitches that were once distinct now blend into the background of the compressed file.

It's important to note that slapping a multi-band compressor on everything won't cure the world of hearing loss. You need to understand a user's hearing-loss pattern accurately, which is a hard problem to solve. There's no easy way to determine it without interacting with the person, playing different sounds, and measuring their responses.

If you're aiming to adjust for someone's hearing loss, you also need to understand what type of correction to apply. For instance, if you notice that someone has a 30 dB loss at a particular frequency, simply boosting that band by 30 dB will actually make the sound worse for most people. This is the subtle art of fitting hearing algorithms to hearing patterns, which depends a great deal on psychoacoustics, as well as a person's preferences and specific type of loss.
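One classic illustration of this subtlety is the old "half-gain" rule of thumb from hearing-aid fitting: prescribe roughly half the measured loss as gain, not the full amount. (Modern prescription formulas are far more sophisticated; this is only the simplest possible example of "don't just add back the loss.")

```python
def half_gain_rule(loss_db):
    """Rough rule of thumb: prescribe about half the measured loss as gain."""
    return loss_db / 2

# A 30 dB loss at some frequency suggests roughly 15 dB of gain, not 30 dB.
suggested_gain = half_gain_rule(30)
```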

And that’s why we launched Amp, a case that can tune your iPhone to your hearing pattern. For someone with hearing loss, it’s simply the best way to take phone calls, watch movies or listen to music on your iPhone.

If you want to learn more about Amp or get in on the pre-order, check out

We’re here because we believe that sound should be easy, personal, and experiential, anywhere you need it. If you have a question about this post or suggestions for future topics you’d like us to explore, you can always write to us at We love hearing from you, and we promise: we’re really good listeners.