From Creation to Completion: The Bass Line

One of the problems with being a soprano is not being a bass. While this doesn’t seem like that big an issue, it limits you when you’re trying to create a lush vocal soundscape. Of course there are tricks of the trade, like filling in with close jazz harmony (tight sevenths and ninths), but I wanted a wider range for my latest music video, especially since I was planning to emphasize the rhythm section.

So I brought in the big guns: my friend and fellow musician, Jason Levine. Unless you’ve heard his version of “You’re a Mean One, Mr. Grinch,” you might not be aware that Jason possesses a deep, resonant bass range.

I hit a couple of problems: I didn’t want Jason to spend a ton of time recording a bass line I hadn’t established yet, and while Jason is an accomplished singer, he’d never sung bass in a vocal-band style, which is often highly percussive and resonant at the same time. Worse, the syllable I wanted him to sing (“Bē”) is often spoken very shallowly, with the tongue pushed forward, leaving less room in the mouth and, as a result, less resonance.

I ended up deciding to sequence the bass line, letting Jason focus on vocal production and, I hoped, vastly decreasing his time in the studio. The downside to this method is that it can make a vocal track sound less organic and more mechanical. I was willing to make this trade-off: I would get control over the rhythm, potentially receive better-quality recordings, and take up as little of Jason’s time as possible.

With that plan of action, I quickly switched gears into implementation.

Creating the bass click

As good a plan as this was, I needed a few things before I could actually provide a click for Jason to sing to: the pitches and the durations of the notes. But in order to get the set of pitches, I had to map out the entire song to make sure I asked Jason to record only what I needed.

I began by asking him what his vocal range was; I ended up writing the entire song around the bottom of that range. One of my rules of thumb is to make your most critical voice sound good, no matter how hard it is for everyone else. (I have, for instance, forced a pianist to play in a key with five or seven flats just because it flattered my voice.) In this case, I sacrificed my own comfort to best showcase the tonal half of my rhythm section.

As I mapped out the bass line, I tried to emphasize the spaces between notes to more easily sequence the final product, knowing that this strategy also allowed space for the percussion to fill in. This also happened to fit within a critical philosophy when it comes to bass: you have to provide space between the notes to give a song groove. Once I had a couple of musical themes notated, I skimmed the entire piece and wrote down the notes I needed.

At the same time, I fiddled with the playback speed in my notation software (MuseScore) to figure out what tempo would allow the piece to truly groove. Once I had that nailed down, I could create the click.

Example of the tonal click track I provided to Jason.

The clicks were simple: a single note spanning two measures for a tonal reference, then a series of quarter notes (with a quarter rest between each) to give Jason an idea of how fast to go. I ended up making about a dozen notes and combinations, each with a different pitch, exported them as separate WAV files, dumped them into a Creative Cloud directory, and shared that with Jason.
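For the curious, here’s roughly what one of those clicks looks like if you script it instead of exporting from notation software. This is just a sketch with numpy and scipy; the tempo, pitch, and file name are placeholders, not the actual values from my piece.

```python
import numpy as np
from scipy.io import wavfile

SR = 44100            # sample rate in Hz
TEMPO = 96            # beats per minute (placeholder; pick what grooves)
BEAT = 60.0 / TEMPO   # seconds per quarter note

def tone(freq, seconds, amp=0.4):
    """A sine tone with short fades so the edges don't click."""
    t = np.linspace(0, seconds, int(SR * seconds), endpoint=False)
    y = amp * np.sin(2 * np.pi * freq * t)
    fade = int(0.01 * SR)                 # 10 ms fade in and out
    y[:fade] *= np.linspace(0, 1, fade)
    y[-fade:] *= np.linspace(1, 0, fade)
    return y

def silence(seconds):
    return np.zeros(int(SR * seconds))

freq = 61.74  # B1, a plausible note near the bottom of a bass range

# Two 4/4 measures of reference tone, then four quarter notes,
# each followed by a quarter rest, to establish pitch and pace.
parts = [tone(freq, 8 * BEAT)]
for _ in range(4):
    parts += [tone(freq, BEAT), silence(BEAT)]

click = np.concatenate(parts)
wavfile.write("click_B1.wav", SR, (click * 32767).astype(np.int16))
```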

Bass note selection

I happened to be sitting at my computer when Jason synced his recorded tracks to my Creative Cloud folder. A sense of anticipatory pleasure snapped through me as I watched the sync notifications pop up on my desktop: the game was on! I launched Adobe Audition, loaded the files, and began evaluating each note separately.

To my complete surprise, Jason recorded it all: he sang the initial tone as well as each of the individual notes I had mapped out. This turned out to be providential, as I ended up using one of those notes for the end of the piece.

My next step was to choose one take of each note to use as my standard. I was listening primarily for the quality of the sung consonant and the quality of vocal production on each note. Was the consonant (the “b”) on pitch? Was the vowel sung with as much resonance as I wanted?

However, I kept any notes with minor pitch issues, as I knew I could use Audition’s manual pitch correction to make small changes while still maintaining the perception that a human had sung the note.
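If you want a feel for what that correction amounts to, here’s a small sketch using librosa (my choice for the sketch; Audition’s actual algorithm isn’t public). The file name and target note are hypothetical: measure how far off the take is in cents, then shift by that amount.

```python
import numpy as np
import librosa
import soundfile as sf

# Hypothetical file: one of the recorded takes for a single note.
y, sr = librosa.load("bass_note_take3.wav", sr=None)

# Estimate the sung fundamental with pYIN, using only voiced frames.
f0, voiced, _ = librosa.pyin(y, fmin=40, fmax=200, sr=sr)
sung_hz = np.nanmedian(f0[voiced])

target_hz = librosa.note_to_hz("B1")   # the pitch the click asked for
cents_off = 1200 * np.log2(sung_hz / target_hz)
print(f"take is {cents_off:+.1f} cents off")

# Shift by the measured offset (n_steps is in semitones). Keeping the
# correction small helps preserve the sense that a human sang the note.
tuned = librosa.effects.pitch_shift(y, sr=sr, n_steps=-cents_off / 100.0)
sf.write("bass_note_take3_tuned.wav", tuned, sr)
```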

Through a process of trial and error (and a couple of botched attempts at pitch correction), I had my final selections and was ready to sequence.

Visual sequencing

The next step was aligning the selected notes. I created a version of my click that only contained the bass line, and ensured that I selected a piano as the instrument. This gave me two things: a clean attack and a reference tone.

At this point, I was ready to begin working in Audition with a new multitrack file. I grabbed the bass click, dropped it into a channel, and zoomed in to begin.

The waveform on top was the bass and rhythm click.

Looking at the click, I could see exactly where to line up the vocal samples I copied and pasted from the original recorded tracks. After this exercise, I muted the click track and listened to the bass by itself to make sure it had the groove I wanted. With some final timing adjustments on both bass themes, I was ready for the next step.
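Arithmetically, that drag-and-align step boils down to dropping each sample at its beat position on a timeline. A minimal sketch, assuming mono 44.1 kHz files and a made-up two-bar pattern:

```python
import numpy as np
import soundfile as sf

SR = 44100
TEMPO = 96                  # placeholder tempo
BEAT = 60.0 / TEMPO         # seconds per quarter note

# A made-up two-bar pattern: (start beat, sample file).
pattern = [(0, "B1.wav"), (2, "B1.wav"), (4, "E2.wav"), (6, "Fs2.wav")]

timeline = np.zeros(int(SR * 8 * BEAT))   # two measures of 4/4
for beat, path in pattern:
    note, _ = sf.read(path)               # assumes mono files at 44.1 kHz
    start = int(beat * BEAT * SR)
    end = min(start + len(note), len(timeline))
    timeline[start:end] += note[: end - start]

sf.write("bass_theme.wav", timeline, SR)
```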

Level alignment

For the bass to sound as human as possible, it had to sit at about the same level across the board. By selecting the quieter sections and increasing their volume, I got it as close to level as possible, but didn’t sweat the little details, knowing it would sound even more even after some compression.

After a couple of adjustments, I cranked the waveforms to just shy of clipping, because I wanted to keep my options open later when mixing down.
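In code terms, that’s roughly an RMS match between slices followed by a peak normalization that leaves a little headroom. A sketch (the headroom figure is my assumption, not an Audition default):

```python
import numpy as np

def rms(x):
    return np.sqrt(np.mean(x ** 2))

def align_levels(slices, peak_dbfs=-0.5):
    """slices: a list of mono float arrays, one per placed bass note."""
    target = max(rms(s) for s in slices)          # loudest slice sets the bar
    leveled = [s * (target / rms(s)) for s in slices]
    track = np.concatenate(leveled)
    # Scale the whole track so its peak sits just below full scale,
    # leaving headroom for the mixdown.
    peak = np.max(np.abs(track))
    return track * (10 ** (peak_dbfs / 20) / peak)
```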

At this point, I took the opportunity to bounce my work down to a single track (Multitrack > Bounce to New Track) so that I wouldn’t accidentally bump any of the sliced sections out of sync.

EQ, compression, and reverb

The last three pieces of magic were equalization, compression, and reverb. I discovered later that I could have squished the first two steps together via compression; chalk that up to “I do things the way I would in live music,” which, as Jason explained to me after I’d finished the final mix, was only half the equation.

However, I started out with equalization (EQ), using Audition’s graphic equalizer on the track to dramatically cut the highs and boost the lows, rounding out the tone. This made the voice sound fuller and more like a jazz electric bass.

Equalization for Jason’s voice. Note that I bumped up the bass and pulled down the highs.
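For the DSP-minded, that curve is essentially a low-shelf boost plus a high-shelf cut. Here’s a sketch using the standard audio-EQ-cookbook biquads; the corner frequencies and gains are guesses at the shape of my settings, not Audition’s actual bands:

```python
import numpy as np
from scipy.signal import lfilter

def shelf(x, fs, f0, gain_db, kind="low"):
    """Apply a low- or high-shelf biquad (RBJ cookbook, shelf slope S = 1)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    cosw = np.cos(w0)
    two_sqrt_A_alpha = 2 * np.sqrt(A) * (np.sin(w0) / np.sqrt(2))
    if kind == "low":
        b = [A * ((A + 1) - (A - 1) * cosw + two_sqrt_A_alpha),
             2 * A * ((A - 1) - (A + 1) * cosw),
             A * ((A + 1) - (A - 1) * cosw - two_sqrt_A_alpha)]
        a = [(A + 1) + (A - 1) * cosw + two_sqrt_A_alpha,
             -2 * ((A - 1) + (A + 1) * cosw),
             (A + 1) + (A - 1) * cosw - two_sqrt_A_alpha]
    else:
        b = [A * ((A + 1) + (A - 1) * cosw + two_sqrt_A_alpha),
             -2 * A * ((A - 1) + (A + 1) * cosw),
             A * ((A + 1) + (A - 1) * cosw - two_sqrt_A_alpha)]
        a = [(A + 1) - (A - 1) * cosw + two_sqrt_A_alpha,
             2 * ((A - 1) - (A + 1) * cosw),
             (A + 1) - (A - 1) * cosw - two_sqrt_A_alpha]
    return lfilter(np.array(b) / a[0], np.array(a) / a[0], x)

def bass_eq(x, fs=44100):
    x = shelf(x, fs, 150, +6.0, kind="low")       # fill out the fundamentals
    return shelf(x, fs, 2500, -9.0, kind="high")  # pull down the highs
```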

I then applied a bit of compression overall to even out the levels a little more and reduce the differences between samples. This made the entire track sound more cohesive, as if Jason had sung the whole thing live.

A little touch of reverb to even things out.

Finally, I added a barely discernible reverb to warm up and even out the tone. I realized at the end that I probably should have left the track dry during the editing process and applied the reverb (and all other effects) during the mastering stage, purely for performance reasons, but I was far too excited to hear the bass line in its entirety.
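If you want a sense of what “barely discernible” means in signal terms, here’s one way to sketch it: convolve with a short burst of exponentially decaying noise and mix it in mostly dry. The decay time and wet level are my assumptions, not the plugin settings I used:

```python
import numpy as np
from scipy.signal import fftconvolve

def subtle_reverb(x, fs=44100, decay_s=0.4, wet=0.12, seed=0):
    """Convolve with a short decaying-noise impulse response, mostly dry."""
    rng = np.random.default_rng(seed)
    n = int(decay_s * fs)
    ir = rng.standard_normal(n) * np.exp(-6.0 * np.arange(n) / n)
    ir /= np.sqrt(np.sum(ir ** 2))        # keep the tail from overpowering
    tail = fftconvolve(x, ir)[: len(x)]
    return (1 - wet) * x + wet * tail
```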

I recently chatted with Jason, who pointed me to his recent online class on compression and EQ. What I had missed was an opportunity to control the punch of each note. With compression, I could have had finer control over the perceived attack and decay of each note, instead of relying solely on the brevity of the notes and the space in the arrangement. He recommended playing with a short attack (15–27 ms) and a fast release (59–99 ms) to get a more percussive punch that would be felt through subwoofers.
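To make that concrete, here’s a bare-bones feed-forward compressor sketch with settings inside the ranges Jason suggested. The threshold and ratio are my assumptions; the point is how a short attack clamps each note’s transient quickly while a fast release lets go before the next note:

```python
import numpy as np

def compress(x, fs=44100, thresh_db=-18.0, ratio=4.0,
             attack_ms=20.0, release_ms=80.0):
    """A feed-forward peak compressor; x is a mono float array."""
    atk = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    out = np.empty_like(x)
    for i, s in enumerate(x):
        level = abs(s)
        # Envelope follower: the attack coefficient governs how fast the
        # detector rises on a transient, the release how fast it falls.
        coeff = atk if level > env else rel
        env = coeff * env + (1.0 - coeff) * level
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over = max(level_db - thresh_db, 0.0)
        gain_db = -over * (1.0 - 1.0 / ratio)   # reduce only above threshold
        out[i] = s * 10 ** (gain_db / 20.0)
    return out
```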

I later discovered that Jason had done exactly this in a recent song, “Bess Tell Dee Groove” from This is Live Music. Throughout the song, and most noticeably in the second half, Jason’s voice is pushed through a compressor in just this manner for a punchier feel. In retrospect, that effect was more than I was looking for at the time, so I don’t feel terrible about not applying more aggressive compression to my mix.


My experiment with manual sequencing was surprisingly successful; the resulting bass line in my piece, “Beatboxer Meets Creative Cloud,” is both rich and percussive, and lays the foundation for a groovalicious track. Check out the track via one of the links in the first half of the article.

Like this type of content? Don’t forget to hit the heart below to let me know you’re interested in hearing more about this.


Many thanks to Jason Levine, Principal Worldwide Evangelist at Adobe, for his editorial and musical support.

Elaine is a product manager at Adobe. You can find her on Twitter at @elainecchao.
