Is Music Technology At War With Artistry?

Virtuosos HATE him! Tone deaf kid with a snapback becomes world renowned popstar with this one weird trick…

Science is going to destroy the world. I can assure you that’s true, because I read it on a website that also happened to inform me that the global economy is run by lizards. As a result, I no longer trust scientists, nor do I trust people who lick their own eyeballs.

It’s also an indisputable fact that science is going to destroy authenticity in music. I know this because I’ve seen it all over Facebook and Twitter, accompanied by some very strong opinions on Kanye West’s underwhelming performance of Bohemian Rhapsody at the Glastonbury festival.

Though it’s sometimes difficult to portray vicious levels of sarcasm through the printed word, you may have already guessed that the idea of tech killing music is not a sentiment I wholly agree with.

Scientific advancement is not innately a force for good or bad. Nuclear physicists have gifted us with the ability to create a superior source of long-lasting energy, but also the power to destroy large chunks of the planet in the blink of an eye. Likewise, the weapons of mass production that audio engineers now have available to them have vastly increased the fidelity of the music we listen to, yet also allow performers to manipulate their recordings to give the impression that they’re more proficient at their role than they actually are.

So can anyone now become the next big thing?

Well, first you need some marketable songs. You’re going to want a demo of these to get you some shows and send out to industry reps, so you enter the studio to record your first masterpiece.

You can’t hit many notes today because you were out drinking until 4 AM, and to be honest, you couldn’t hit many anyway. It’s all good though, because the producer who greets you has racks of expensive-looking hardware all around the room. He’s got the latest Mac Pro at his disposal and a tonne of microphones. It doesn’t matter how good or bad you are, because this wizard has a spell to remedy every imperfection. Except… he doesn’t.

That’s right, we audio engineers cannot turn Kanye’s Bohemian Rhapsody into Freddie’s, any more than a patisserie can bake a chocolate cake out of soil. When it comes to doctoring a performance, there are really only two “dishonest” tools at our disposal: quantization of rhythm and pitch. Quantization is the use of software to adjust natural imprecision in a performance that’s notably distracting or dissonant. In other words, we can put things in time and in tune. That, however, is pretty much where it starts and ends.
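If you’re curious what that actually boils down to, here’s a minimal Python sketch of rhythm quantization. The tempo, grid and strength values are illustrative, not lifted from any particular DAW, but the principle is the same: measure how far each note sits from the grid, then pull it some or all of the way there.

```python
def quantize_onsets(onsets, bpm=120.0, division=4, strength=1.0):
    """Snap note onset times toward the nearest grid line.

    onsets:   note start times in seconds
    bpm:      tempo in beats per minute
    division: grid lines per beat (4 = sixteenth notes)
    strength: 0.0 leaves the take alone, 1.0 snaps hard to the grid
    """
    grid = 60.0 / bpm / division          # seconds between grid lines
    quantized = []
    for t in onsets:
        nearest = round(t / grid) * grid  # closest grid position
        quantized.append(t + (nearest - t) * strength)
    return quantized

# A sloppy hi-hat pattern, pulled 60% of the way onto the grid:
print(quantize_onsets([0.02, 0.27, 0.49, 0.78], bpm=120, strength=0.6))
```

Pitch quantization works on the same logic, just vertically: find the nearest “correct” note instead of the nearest grid line.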

The majority of high-end editing software, such as the powerful iZotope RX series, is not aimed at mending the work of our clients, but at mending our own. In fact, there are far more tools out there to fix our shitty recordings than there are to fix your shitty takes. We can remove snaps, crackles and pops with spectral editors, wave goodbye to background hiss with noise reduction plugins and instantly remove comb filtering with ridiculous algorithms like Unfilter.

You know what we can’t do? Take an emotionless, bland performance and transform it into something which connects with people. We can make a technically lacking performance somewhat tidier, but our main way of getting results out of artists is the same as it’s been since the dawn of multi-track recording in the 1950s, and that’s choosing good takes.

Picking takes is now easier than ever, as in the digital domain you can drop in to record at any point and splice together an almost unlimited number of attempts. Perhaps this is something purists may take issue with, but the fact remains that all of the takes you’re putting together have come from the performer without any digital manipulation. Sure, they may need to up their game to pull it off live in the same manner, but a big element of live music’s appeal is the raw imperfection. In any case, creating a comp track by jumping between takes requires a level of consistency from the performer, or you’ll end up with highly unnatural-sounding results.

“All these artists just rely on auto-tune now…”

Auto-tune is a big issue for many of those taking the anti-tech stance. It’s a type of pitch correction, but whilst the aim of manual pitch correction is to adjust noticeably inaccurate notes in a way that sounds natural, auto-tune is often used primarily as an effect. One of the most “love it or hate it” incorporations of this is often referred to as the “T-Pain” effect. For those of you unfamiliar with the work of Mr Pain, this is where the retune-speed parameter on an auto-tune unit or plugin is set so fast that it instantly snaps every tiny imperfection of the performance to the nearest note. This produces what are known as “tuning artifacts” and defines the robotic character often associated with its usage.
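To make that concrete, here’s a toy Python model of the retune-speed idea. The frame size and time constants are my own illustrative numbers, not Antares’ actual algorithm, and a real unit would work in cents rather than raw Hertz, but the behaviour is the same: the faster the retune, the harder the snap.

```python
import math

A4 = 440.0

def nearest_semitone(freq_hz):
    """Snap a frequency to the closest equal-tempered note."""
    midi = round(69 + 12 * math.log2(freq_hz / A4))
    return A4 * 2 ** ((midi - 69) / 12)

def correct_pitch(pitch_track, retune_time=0.05, frame=0.001):
    """Apply a smoothed correction to a frame-by-frame pitch track (Hz).

    A retune_time near zero snaps every frame straight to the nearest
    note: the robotic "T-Pain" setting. Longer times ease notes into
    tune, letting vibrato and slides survive.
    """
    alpha = min(1.0, frame / max(retune_time, frame))
    offset, corrected = 0.0, []
    for f in pitch_track:
        desired = nearest_semitone(f) - f     # how far off the note we are
        offset += (desired - offset) * alpha  # chase the correction gradually
        corrected.append(f + offset)
    return corrected

# A wobbly stab at an A (440 Hz), snapped hard vs. eased into tune:
wobble = [446.0, 452.0, 438.0, 443.0]
print(correct_pitch(wobble, retune_time=0.0))   # instant "T-Pain" snap
print(correct_pitch(wobble, retune_time=0.05))  # gentler, more natural
```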

The science behind auto-tune is incredibly clever. Pitch and playback speed are naturally coupled: simply resampling a recording to raise its pitch also speeds it up, producing a “chipmunk” quality as you move the vocal higher, whilst lowering it drags things down into “demonic” territory. Auto-tune units use the mathematics of the Short-Time Fourier Transform to break the sound down into tiny overlapping frames, re-spacing these to match the duration of the original sound as they’re shifted to different frequencies and resampled. Thanks to this, we have a far more natural and usable result than auto-tune could otherwise provide.
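Here’s a rough sketch of that decoupling trick in Python using librosa, which is my choice of library for illustration only; commercial auto-tune units ship their own implementations, and the filename is a placeholder.

```python
import librosa

y, sr = librosa.load("vocal_take.wav", sr=None)  # placeholder file

semitones = 1.0               # shift the vocal up one semitone
rate = 2 ** (semitones / 12)  # the frequency ratio for that interval

# Step 1: phase-vocoder time stretch. STFT frames are re-spaced, so the
# take gets longer by `rate` while its pitch stays put.
stretched = librosa.effects.time_stretch(y, rate=1 / rate)

# Step 2: resample and play back at the original rate. That speeds the
# audio up by `rate`, raising the pitch and restoring the duration.
shifted = librosa.resample(stretched, orig_sr=sr * rate, target_sr=sr)

# librosa.effects.pitch_shift(y, sr=sr, n_steps=1) wraps these two steps.
```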

Nevertheless, the physics aren’t particularly relevant to whether you can stomach how it sounds. Disappointingly, many engineers have started opting for auto-tune over manual pitch correction in order to save time, rather than because it works contextually within the song, and this approach has sadly become synonymous with the technique. In this sense, you could argue that it is dumbing down authenticity. For example, the talent show The X Factor was caught using it to enhance the supposedly live performances of its contestants, a move that goes entirely against the show’s self-professed image as a competition revolving around vocal ability. You could say this is one of the few scenarios where it should objectively not be used, as it’s cheating.

In many sub-genres, however, auto-tune is a staple of the sound. A recording engineer who’d worked with Taio Cruz told me that, having delivered one of the best vocal performances the engineer had ever heard, Taio requested that auto-tune be applied to the take in T-Pain quantities. Likewise, while working as front-of-house engineer for Lake Komo, I’ve been told numerous times to stop applying auto-tune to Jay’s vocal, because “he’s already a good singer.” Now, whilst it would be nice if all venue desks came with this as an option (which of course, they don’t), I’ve had difficulty assuring frustrated audience members that it’s an intentional facet of the sound, and that it’s being added with a pedal onstage, not by me to cover for some imagined lack of ability on Jay’s part.

Why can’t we just keep it natural?

Music is subject to fashion and the whims of popular culture. Movements come and go, with new variations on genres incorporating their own production styles and defining sounds. We’ve had “natural” from the dawn of recording. In fact, we’ve had it since the dawn of music as a whole. Digital audio workstations have only become standard practice in the 21st century, with the first internal plugins in a DAW arriving in 1996 with Cubase VST. At the moment it’s fashionable to utilise a lot of the technology we have access to. The mass popularity of hip hop and EDM has brought with it an acceptance of synthetic and programmed instruments, and with this there has been a decline in the need for instrumental virtuosity in the studio. In the ’70s and ’80s, on the other hand, virtuosity became so self-indulgent that the punk movement gained huge momentum by offering an alternative. It’s likely that the same will happen with the current trends, and organic productions will make their way back to the forefront (though plenty are already there, contrary to popular belief).

My question, however, is this: is the ability to manipulate sound digitally, in the way we can today, not just as valid a tool of expression as shaping that sound manually as a player or vocalist before the signal ever enters the computer?

No act can choose takes live, but we’ve been doing that in the studio since recording began, with minimal complaint. Aside from uses of the technology that are noticeably out of context and go against the vibe of the song, is the augmentation of an already energetic and heartfelt performance really so degrading, as long as it’s carefully thought out?

Without advances in technology, an artist such as Diplo wouldn’t have a career, but does that make Diplo’s music any less credible? Every musical instrument and technique was an advancement at some point. If we all rejected this on principle, we’d still be banging on rocks with sticks.

Finally, back to Kanye and that Glastonbury fiasco…

Even the staunchest Kanye fan would have difficulty claiming Yeezus nailed his Bohemian Rhapsody performance. It was out of time and out of tune, regardless of whether he was trying to imitate the original or put his own spin on it. Despite this, that small segment of a set isn’t sufficient grounds to paint his entire career with the “talentless” brush, as many have.

Kanye built his career on his skill as a producer and writer for various artists, working with Roc-A-Fella Records and the likes of Jay-Z before he became a performer in his own right. Sure, an impressive CV doesn’t mean you have to like his music, but his use of auto-tune and modern production techniques has been an attempt to push hip-hop forward as a genre. It makes little sense to argue that a man with 57 Grammy nominations, 21 wins and a worldwide following is an inadequate artist when so many people enjoy his music, which often challenges mainstream trends. He may utilise tools in the studio to shore up his abilities as a vocalist, but whatever the process, he’s still connecting with people and providing them with enjoyment, which is surely what matters?

Perhaps the choice of cover at Glastonbury was misguided, but Kanye is known for his work within hip-hop, not as a Queen tribute act. If he’d forged a career on awful covers, then fine, be outraged that he’s somehow succeeded. The scathing response against him, however, should be reassuring to the naysayers. The majority are not blind to poor performances, and no matter how much technology is used to improve recordings, artists still need to be able to translate that to the live arena to remain credible.

Music technology probably won’t spell the death of authentic music and the role of the artist, but cynicism and rejection of change will narrow your perception of authenticity until nobody utilising new production techniques fits into your ideal.