The Vocal Aesthetic: How Digital Workstations Are Changing Music and the Human Sound

Chad Arle
Emergent Concepts in New Media Art 2019
Dec 22, 2019
A string ensemble piece made in Ableton Live by Davide Riva.

My brother recently sent me a recording of a piece he’d composed for three stringed instruments, similar to the one above. After listening, I commented on how authentic the instruments sounded, impressed by the quality generated from his usual notation platform, Finale. When he informed me that he’d actually sent me a live recording of his piece performed by three instrumentalists, not a computer-generated rendering, I was shocked. I have always been awed by the ability of modern DAWs (digital audio workstations), such as Logic Pro or Ableton Live, to generate remarkably authentic sound, but this was the first time I couldn’t tell the difference between the real and the synthetic. It hit me then just how far-reaching the implications of these platforms are for the music industry.

As with many other art forms, the music industry has gone digital. The line between ‘real’ instrumentation and computer-generated sound has continued to blur as DAW technology expands in capability and accessibility.

The recording studio has always been the home for creating high-quality recordings in efficient and revolutionary ways, but it was in the 1960s that the studio grew into a place for innovation and experimentation, a departure from the ‘live performance’; up until that point, the goal had been to capture a sound as close to the live one as possible (Dixon). As tape manipulation and overdubbing technologies developed (in which multiple tracks, recorded separately, could be played back simultaneously), carefully selecting the best takes, and adding in new sounds impossible to generate during a live performance, became standard practice (Dixon).
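At its core, overdubbing is just the summation of separately recorded waveforms. As an illustration of the concept only (not of how tape machines actually worked), here is a minimal sketch in Python with NumPy, where each ‘track’ is an array of samples and the mix is their weighted sum:

```python
import numpy as np

SR = 44100  # sample rate in Hz

def tone(freq_hz, seconds, sr=SR):
    """Generate a sine wave 'track' as a stand-in for a recorded take."""
    t = np.linspace(0.0, seconds, int(sr * seconds), endpoint=False)
    return np.sin(2 * np.pi * freq_hz * t)

# Three separately "recorded" takes: a bass note and two upper voices.
tracks = [tone(110, 2.0), tone(220, 2.0), tone(330, 2.0)]
gains = [0.5, 0.3, 0.2]  # the engineer's balance decisions

# The overdub: play every take simultaneously by summing the waveforms.
mix = sum(g * trk for g, trk in zip(gains, tracks))

# Normalize for headroom, as a real mixdown stage would.
mix /= max(1.0, np.abs(mix).max())
```

The point of the sketch is how little ‘magic’ the core operation involves; what the studio really added was the ability to audition, discard, and re-record each layer independently.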

Ryan McGinley messing around with Ableton Live, crafting a song on his laptop.

As these technologies develop, the recording studio remains the home for music creation, but the home has itself become the recording studio, with digital technologies (DAWs) allowing artists to experiment with dubbing and mixing on nothing but a laptop (Dixon). For instance, the songwriter and record producer Oak Felder, who has worked on songs for artists such as Nicki Minaj and Alessia Cara, works from his home studio and laptop, using DAWs and software such as Splice (Deahl).

Songwriter and record producer Oak Felder in his home studio.

While splicing and sampling took over as the streamlined manner of producing music all the way back in the 1970s (Deahl), the increased accessibility and efficiency of these technologies and samples have only widened the gap between hearing a live performance and a Spotify recording of the same song.

This changing dynamic is particularly notable for what is arguably the hardest instrument to replicate: the human voice. Modern vocoder and mixing technology has revolutionized the music industry, giving artists the opportunity to manipulate their voices, generate harmonies, and synthetically create unnatural but beautiful sounds.

“The Hymn of Acxiom” from artist Vienna Teng’s 2013 album Aims.
“Haze” from artist Amber Run’s 2017 album For a Moment, I Was Lost.

Live mixing and pitch correction are nothing new, with artists like Frank Ocean and Chance the Rapper using vocoders and prismizers to manipulate their voices in bizarre and robotic ways (Petrarca). In a world where music samples, which one could picture as a sort of ‘musical data point,’ are readily available for copying, manipulation, and distribution, artists consistently draw upon each other when making music; the increasing accessibility of these technologies makes these auto-tuned, unnatural sounds the standard. The result: a new vocal aesthetic begins to emerge, streamlined across popular music.
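The prismizer’s internals are proprietary, but its audible effect, a voice smeared across the notes of a chord, can be approximated by stacking pitch-shifted copies of a single vocal take on chord tones. A hedged sketch using librosa follows; unlike the real tool it runs offline, and the file name and intervals are illustrative assumptions:

```python
import librosa
import numpy as np
import soundfile as sf

# Load a dry vocal take (the path is a placeholder).
vocal, sr = librosa.load("vocal_take.wav", sr=None, mono=True)

# Chord tones as semitone offsets from the sung note:
# root, major third, fifth, octave.
intervals = [0, 4, 7, 12]

# Stack pitch-shifted copies of the same take, quieter above the lead.
layers = [
    librosa.effects.pitch_shift(vocal, sr=sr, n_steps=n) * (0.8 if n else 1.0)
    for n in intervals
]
stack = np.sum(layers, axis=0)
stack /= np.abs(stack).max()  # normalize to avoid clipping

sf.write("prism_stack.wav", stack, sr)
```

Because every layer comes from the same take, the harmony moves in perfect lockstep with the lead, which is a large part of the effect’s synthetic, choir-of-one character.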

For instance, Bon Iver and Kanye West collaborated with Francis and the Lights on the 2016 song ‘Friends,’ which made heavy use of the prismizer to give the vocals their dreamlike, synthetic sound. On closer listen, one notices that the prismizer was used not only to add flair to the voices, but to make West’s and Bon Iver’s voices practically indistinguishable from each other:

“Friends” by Francis and the Lights ft. Bon Iver and Kanye West.

Examples like ‘Friends’ emphasize that while DAW technologies originally expanded the possibilities of music production, their constant use has eroded the uniqueness of these sounds. In their book on another form of database art, the Twitterbot, Tony Veale and Mike Cook argue that “instead of worrying about how to always be different, it is just as productive to focus on ways to avoid becoming overly familiar” when it comes to database-driven art (Veale, Cook). The sound or appearance may be similar, but the distinction comes in achieving a unique meaning or perception. Veale and Cook argue that a piece’s distinguishability “depends on who is making it, what they are making it with, and what they are making it about” (Veale, Cook).

Bon Iver using his audio engineer Chris Messina’s “Messina” during a live performance.

Bon Iver achieves a level of uniqueness in terms of what he is making it with. His audio engineer Chris Messina changed the game with the invention of the “Messina,” which grants Iver the ability to manipulate and synthesize his voice during a live performance using a keyboard and some computer software. Messina describes the workings of his machine as follows:

“There’s a laptop running software, and then that software is run through a physical piece of hardware, that is then doing another thing. It’s many things working together and none of them are ours, but the product is. Basically, we used things the way they’re not normally intended, and we put them together. That’s how we get the sound.” — Messina
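Messina has never published the exact chain, but the general shape of a keyboard-driven harmonizer is straightforward to sketch: each held key selects a pitch relative to the note the singer is producing, and the shifted copies are mixed with the dry voice. The sketch below is offline and conceptual rather than real-time, and the file name, pitch range, and held notes are all assumptions:

```python
import librosa
import numpy as np

# A dry vocal take stands in for the live microphone signal.
voice, sr = librosa.load("vocal_take.wav", sr=None, mono=True)

# Estimate the sung pitch with pYIN and take the median voiced value.
f0, voiced, _ = librosa.pyin(voice, fmin=80, fmax=800, sr=sr)
sung_hz = np.nanmedian(f0)             # f0 is NaN where unvoiced
sung_midi = librosa.hz_to_midi(sung_hz)

# Notes "held on the keyboard" (hypothetical): a C major triad.
held_notes = [60, 64, 67]  # MIDI note numbers

# Shift the dry voice to each held note and mix it under the original.
out = voice.copy()
for note in held_notes:
    steps = float(note - sung_midi)  # semitones from sung pitch to key
    out = out + 0.6 * librosa.effects.pitch_shift(voice, sr=sr, n_steps=steps)

out /= np.abs(out).max()  # headroom
```

A real instrument like the Messina would have to do this per frame with low latency, tracking the voice continuously; the offline median used here is the simplification that keeps the sketch short.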

As vague as Messina’s description is, the result is groundbreaking: Iver can create beautiful harmonies and synthetic effects with his own voice, by himself, live. The new vocal aesthetic continues to trickle into every facet of musical art; it is no longer just a post-production achievement. The technology is on display in performances of one of Iver’s most iconic pieces, “Creeks”:

Bon Iver performing hit song ‘Creeks’ LIVE at the PNC Arena this year.

The synthetic vocal sounds are beautiful, haunting, and nonhuman in aesthetic. They are not naturally produced, but generated as a hybrid of the human voice and digital technology — a meeting ground of humanity and the increasingly digital world.

But as mixing technology increases in prominence, how necessary is the human voice at all? As mentioned earlier, DAWs continue to erode the distinction between the authentic instrument and the synthetic one, and as these technologies develop, the same is happening to the voice. Holly Herndon, a Berlin-based American artist, released an album this year, Proto, on which the A.I. “Spawn” generates songs using various pieces of Holly’s voice as data inputs.
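Spawn is reportedly a neural network trained on recordings of Herndon’s voice, and its architecture has not been published in detail. As a deliberately simple stand-in for the idea of ‘generating new sound from voice data,’ here is a toy concatenative-synthesis sketch, a different and much older technique: slice a recording into grains, describe each grain by its spectral features, and wander between similar-sounding grains to stitch together audio that is new but voice-like. The file name and every parameter are illustrative:

```python
import librosa
import numpy as np

# Source material: a recording of the voice (the path is a placeholder).
voice, sr = librosa.load("voice_material.wav", sr=None, mono=True)

GRAIN = 2048  # samples per grain (~46 ms at 44.1 kHz)
grains = [voice[i:i + GRAIN] for i in range(0, len(voice) - GRAIN, GRAIN)]

# Describe each grain by its mean MFCC vector so grains can be compared.
feats = np.array([
    librosa.feature.mfcc(y=g, sr=sr, n_mfcc=13).mean(axis=1) for g in grains
])

# "Generate" new audio: random-walk through feature space, always jumping
# to a nearby-sounding grain, so the output is novel but voice-like.
rng = np.random.default_rng(0)
idx = int(rng.integers(len(grains)))
out = []
for _ in range(200):
    out.append(grains[idx])
    dists = np.linalg.norm(feats - feats[idx], axis=1)
    idx = int(rng.choice(np.argsort(dists)[1:8]))  # one of the 7 nearest

result = np.concatenate(out)
```

The gap between this toy and Spawn is enormous, but it makes the underlying shift concrete: once a voice is data, ‘singing’ becomes a question of how that data is traversed.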

“Birth” from Proto.
“iMi” from i,i.

The sound is undoubtedly synthetic and robotic in nature, and one could argue that it is not hard to distinguish from a real human voice. And yet, studying the work of artists such as Bon Iver or Kanye West, one realizes that the traditional notion of a ‘real human voice’ is being continuously devalued in the contemporary aesthetic.

Instead, the allure of the synthesized, manipulated vocal sound continues to dominate, and if one compares the sound of Bon Iver on his most recent album, i,i, with that of Spawn on Proto, the distinction between the two is as hard to detect as that between a violin and a Logic Pro violin.

The emergence of the new vocal aesthetic, overtly robotic in nature, represents the continuing influence of the digital world on the human experience; as technology becomes ever more interlocked with the idea of the human, art forms like music become increasingly database-oriented. In her essay “Future Texts,” from Social Text 71, Alondra Nelson points to the vocoder in relation to R&B music and the question of identity in the posthuman era. She describes the vocoder as “an example of this particular conjunction of ‘man’ and machine [which] ‘renders the human voice robotic’” (Nelson).

And this should come as no surprise. The recording studio revolutionized music production back in the 1960s, allowing for sounds never producible before. DAWs have done the same in the 21st century, spawning a new aesthetic for contemporary music and the vocal sound. Music has always been an art form that reflects the human experience, and in a world dominated and run by the sound of machines and robots, it is only natural that music would follow suit. As the human experience becomes digital, the distinction between the two is continuously blurred.

Works Cited

Petrarca, Emilia. “The Sound Engineer Behind Bon Iver’s ’22, A Million’ Clears Up Any Confusion About Its Technical Creation.” W Magazine, 19 Dec. 2016, www.wmagazine.com/story/the-engineer-behind-bon-ivers-22-a-million-clears-up-any-confusion-about-its-high-tech-sound.

Deahl, Dani. “How This Grammy-Winning Producer Turns His Laptop into a Studio.” The Verge, The Verge, 28 Sept. 2018, www.theverge.com/2018/9/28/17874576/music-production-laptop-studio-producer.

Dixon, Alan. “How Has The Recording Studio Affected The Ways In Which Music Is Created?” Classic Album Sundays, 22 May 2018, classicalbumsundays.com/how-has-the-recording-studio-affected-the-ways-in-which-music-is-created/.

Veale, Tony, and Mike Cook. Twitterbots: Making Machines that Make Meaning. MIT Press, 2018. Ch. 3, “Make Something that Makes Something,” pp. 55–90.

Nelson, Alondra. “Future Texts.” Social Text, no. 71, Summer 2002, pp. 1–15.
