Synths, Rhythms and Algorithms

Christian Tronhjem
Published in The Sound of AI
Mar 21, 2019 · 9 min read

How technological leaps created new genres, and where AI could lead.

Photo by Patryk Grądys on Unsplash

While scanning your favourite music provider’s catalogue, it’s easy to take for granted the extensive variety of music readily available. In sixty short years music has undergone a thorough evolution, producing a wide range of genres, sub-genres and sub-cultures. How cutting-edge music technology has given rise to new genres is particularly fascinating for producers and composers alike. When you throw Artificial Intelligence (AI) into the mix, a technology already revolutionising industries across the board, it’s certainly within the realms of possibility that another new genre is on the horizon. Before tapping into where we might head, let’s return to the innovation that kicked it all off.

The dawn of synthetic instruments

One of the first commercially available synthesisers was the Moog, introduced in 1964. While the layman couldn’t afford one, those with access stood at the gateway to new forms of music-making. Back then, two prevailing philosophies split the synth world: the East Coast style, which included a keyboard and was intended to play ‘real notes’ and intervals, and the West Coast style, which was more experimental and microtonal in nature. For some, these innovations served a specific goal: to emulate ‘real’ musical instruments.

One of the most successful examples of electronic synthesiser music at this time was Switched-On Bach (1968) by American composer Wendy Carlos. Replicating acoustic instruments playing pieces by Bach, it sold over a million copies and peaked at number ten on the US Billboard 200 chart.

Shortly after the launch of the Moog, Don Buchla introduced his series of modules in the West Coast style. In 1967, Morton Subotnick, using Buchla’s modules, released Silver Apples of the Moon, an iconic piece of West Coast synthesis and a brilliant example of how the same kind of technology allowed for totally new music, in stark contrast to the brassy synthesised Bach. While Subotnick’s piece is highly experimental and nowhere near as popular as Carlos’s Bach, the strikingly different approaches are noteworthy. The Moog’s new timbres provided a refreshing take on Bach that could have been achieved by other means, whereas Subotnick’s piece used the new technology to create something no other instrument was capable of at the time.

Rise of the machines

Synthesisers soon took off, became cheaper and were adopted into music genres around the globe, and new machines started popping up everywhere. Fast forward to the ’80s, when Japanese manufacturer Roland took an unexpected risk, aiming to give studios easier access to drum and bass parts without hiring players. In 1980 they launched the TR-808 drum machine, followed in 1981 by the TB-303 bassline synthesiser and the TR-606 drum machine (T stands for Transistor, B for Bassline and R for Rhythm). Although used on a few records, the range wasn’t the massive success Roland had hoped for, and it was discontinued by 1984. The machines soon turned up cheap on the second-hand market, where they gained serious traction in burgeoning subcultures like hip-hop, house and techno in the urban environments of Chicago, Detroit and the UK.

A group called Phuture latched onto the trend with Acid Tracks (1987), one of the foundational songs of the iconic acid house genre. They took the 303 and ‘abused’ it to make that recognisable squelchy bassline sound — a sound gloriously far removed from the Japanese engineers’ original vision of a bass player.

It wasn’t until visionary people pushed these machines beyond their original purpose to create entirely new sonic experiences that the 303 became the iconic instrument permeating techno, house and hip-hop culture today.

The stolen sound

While synthesisers offered the potential for entirely original sounds, a new class of digital instruments arose that repurposed already-recorded sound: samplers. These made it possible to capture small snippets of audio, re-trigger them and pitch them up and down. The Fairlight of 1979 was one of the first to achieve this. A standalone device resembling a computer, it could record up to a second of sound and instantly play it back via a keyboard. In the following years samplers such as the Synclavier and the E-mu Emulator gave musicians the ability to store an entire ensemble of instruments in a box. A great example of the sampler as one-man band is Frank Zappa’s Jazz from Hell (1986), performed almost entirely on the Synclavier keyboard to conjure an array of instrument sounds.

In genres such as breakbeat, electric boogie and Golden Age hip-hop, producers had already been using found drum beats and old records as the backbone of new tracks. With the power of samplers, they could save small cuts from any source and play them back in new and creative ways. In the late 1980s Akai revolutionised the industry once again with the S900 and, later, the MPC60. The MPC60 was unique in that it focused mainly on drums, with sixteen rubber trigger pads, each assignable to a sampled sound. It could also record a sequence of played rhythms and play it back more loosely than the drum machines of the day, delivering a more ‘live’ feel than the usual rigid drum sequencer.

Akai MPC60
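
That looser, ‘live’ playback essentially comes down to swing and small timing imperfections applied to a quantised grid. Below is a minimal Python sketch of the idea, assuming a drum pattern stored as onset times on a straight 16th-note grid; the function and parameter names are purely illustrative and not taken from the actual hardware.

```python
import random

def apply_swing(step_times, step_duration, swing=0.56, humanize_ms=8):
    """Nudge a quantised 16th-note grid towards a looser, swung feel.

    step_times: onset times in seconds, on a straight 16th-note grid
    step_duration: length of one 16th note in seconds
    swing: share of each pair of 16ths given to the first note (0.5 = straight)
    humanize_ms: random timing jitter in milliseconds
    """
    loosened = []
    for t in step_times:
        step_index = round(t / step_duration)
        if step_index % 2 == 1:  # push every off-beat 16th slightly late
            t += (swing - 0.5) * 2 * step_duration
        t += random.uniform(-humanize_ms, humanize_ms) / 1000.0  # small human drift
        loosened.append(max(t, 0.0))
    return loosened

# One bar of straight 16ths at 120 BPM (a 16th note lasts 0.125 s)
sixteenth = 60 / 120 / 4
grid = [i * sixteenth for i in range(16)]
print(apply_swing(grid, sixteenth))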

With sampling and digital recording, producers had a new tool in their creative arsenal. They could tap into existing sounds and transform them by slowing them down or pitching them up, or combine existing music into a new arrangement. A struck piano chord plucked from a jazz record could be played as an entirely new instrument, pitched up and down to form new melodies. Breakbeats had previously been released on dedicated sample records for producers to use; now they could grab a little nugget of sonic gold from any record and build something from the ground up. The technology also made it possible to create very rich-sounding tracks for someone to rap or sing over, simply by ‘stealing’ sound from other songs with one or two machines. To say James Brown’s Funky Drummer is a popular drumbeat is an understatement: it has been sampled over 1,500 times. The title of most-used sample ever, however, probably goes to the ‘Amen break’, which later became a pillar of early drum ’n’ bass and jungle music in the UK.

The famous ‘Amen break’ from The Winstons’ ‘Amen Brother’.
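
The repitching described above can be approximated with nothing more than resampling: play a snippet back faster and it rises in pitch and gets shorter, much as early samplers did. Below is a minimal NumPy sketch of that idea; the equal-tempered semitone ratio is standard, but the function itself is only an illustration, not any particular machine’s algorithm.

```python
import numpy as np

def repitch(sample, semitones):
    """Naive sampler-style repitch: resample so pitch and length change together,
    as when a sample is simply played back faster or slower."""
    ratio = 2 ** (semitones / 12)           # frequency ratio for an equal-tempered shift
    old_idx = np.arange(len(sample))
    new_len = int(len(sample) / ratio)      # shifting up shortens the sample
    new_idx = np.linspace(0, len(sample) - 1, new_len)
    return np.interp(new_idx, old_idx, sample)

# A 440 Hz test tone at 44.1 kHz, repitched up a fourth (+5 semitones)
sr = 44100
t = np.linspace(0, 1.0, sr, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)
higher = repitch(tone, 5)   # now sounds at roughly 587 Hz and plays back shorter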

Rather than using samplers to store real instrumentation, producers began chopping up vocals from old soul records, using sound snippets as new instruments and re-using drum beats. These techniques became an essential component of the world’s current most popular genre, hip-hop, and an entirely new music-creation process in their own right. Artists such as Kanye West, Dr. Dre and A Tribe Called Quest achieved widespread success using them.

Today we still rely on these technologies, although computers have made everything much easier. Synthesisers come in all forms, digital and analogue, and are a central ingredient in many genres of music. Massive loop libraries mean you can piece together a potential smash hit from nothing in a matter of minutes, just by layering some ear-enticing loops. None of these technologies was used as its makers intended, yet each became quintessential to new music genres.

Are the robots coming?

The technological advancement on everyone’s lips right now is AI, which is affecting industries of all kinds, including music creation. Computers, phones and tablets have revolutionised and democratised music-making for the masses, but it’s not yet clear what leaps or opportunities AI will offer musicians and composers.

At the time of writing, AI is already used to sketch out compositions, clean up and mix recordings, and equip us with faster, easier-to-use tools. The question is: are we trying to make the horse-drawn carriage faster by adding more horses instead of inventing the car? As with the Moog and the Buchla systems, there’s a substantial difference between replicating existing possibilities with a new technology and applying it to achieve something genuinely new. But perhaps AI presents the opportunity for both.

The Buchla Series 100.

Ólafur Arnalds, an Icelandic neo-classical composer, recently gained some buzz with a piece for robot-controlled ‘AI pianos’, the title track of his album re:member.

While the details of how the AI is used aren’t publicly available, I imagine it’s somehow trained on Arnalds’ playing, compositions and/or taste in music. The pianos play along like band members, controlled from a MIDI keyboard yet performing live by themselves.

Of course, adopting an unusual approach to a familiar creative process is likely to spawn fresh ideas. The mere addition of several ‘self-playing’ pianos is like having a new collaborator in the room. But it also goes down the same path as playing Bach on a massive Moog modular or trying to replace drummers with a drum machine: you’re using a new tool to do what could have been achieved otherwise, just without hiring musicians. Although it may initially appear unnecessary to build a system that ultimately replicates what humans do, it’s refreshing to have a session musician that always plays by your rules, within the frame and parameters you set for it.

But you can also have an AI that generates its own music. Holly Herndon’s ‘Godmother’, a collaboration with Jlin, is a case in point: the composition was made entirely by her AI project Spawn. Trained on songs by both artists, Spawn created the track by synthesising voice-like timbres in a rhythmic fashion. Of Spawn, Herndon says: “…I am able to create music with my voice that far surpass the physical limitations of my body.”

This approach certainly allows for some inventive vocal-like timbres, though they’re perhaps less aurally pleasing to most people than Arnalds’ self-playing pianos. What’s fascinating about the approach, however, is the possibilities it opens up for automatic music remixing. Think of it as building new samples by recombining elements from varying sources into a new composition.

Last year marked the 20th anniversary of Massive Attack’s Mezzanine. To mark the occasion, the group launched an app called Fantom that remixes the original songs based on user inputs such as touch, facial expression, image recognition and the day-night cycle. While the alterations amount to glitching and pitch and tempo modulation, they add a new twist to some old favourites. Robert Del Naja, one of the group’s founding members, revealed they’re building a machine-learning program capable of synthesising sounds based on the album, adding: “The most interesting parts were the mistakes the AI made (…). You don’t want a perfect version of the original audio to come out the other end. You want it to combine the bass and the harpsichord somehow, or the drums and the vocal, to become one new sound, and that’s all about the mistakes.”

This captures the core idea from the earlier examples in music history: unconventional use of technology drives musical innovation. Using machine learning to explore new frontiers, such as combining the timbres of two instruments, could forge a path to sounds that otherwise wouldn’t have existed. A notable example of this is Google’s NSynth.
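
For context, NSynth learns latent representations of instrument notes and generates new audio from blends of those representations. The sketch below shows only the blending step in NumPy, with random vectors standing in for an encoder’s output and no decoder; it is a conceptual illustration, not the actual Magenta/NSynth API.

```python
import numpy as np

def blend_latents(z_a, z_b, mix):
    """Linearly interpolate between two latent codes.
    mix=0.0 returns the first timbre, mix=1.0 the second,
    and values in between describe a hybrid timbre."""
    return (1.0 - mix) * z_a + mix * z_b

# Hypothetical 16-dimensional embeddings standing in for, say, a bass and a harpsichord
rng = np.random.default_rng(0)
z_bass, z_harpsichord = rng.normal(size=16), rng.normal(size=16)
hybrid = blend_latents(z_bass, z_harpsichord, mix=0.5)
# In a real system this blended vector would be fed to a neural decoder to synthesise audio.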

Collaborative creation

From these contemporary examples it’s clear that AI is already making its mark on music production and audio: improving audio analysis, inspiring composers and enabling innovative sound combinations. Replacing humans hasn’t been the winning formula so far; the more promising results have come from collaboration. Perhaps we’re asking AI to solve problems we already have solutions for, just faster. In the examples above, though, it has mainly been a source of ingenuity, such as pulling sounds from sources nobody had previously considered. Technically, much of this might be achievable with other technologies or with human compositional techniques. In theory, techno or hip-hop could have been played by humans, as it sometimes is (see The Roots). But without the unique aesthetics that grew out of specific technological innovations, those genres might have remained disco and R&B.

Maybe we’ll one day discover entirely new musical genres by creatively misusing the latest technology. As with Massive Attack’s app, music as a linear format might slowly fade and blur into a more fluid, open form as computers become much faster at generating music in real time. Or AI may turn out to be nothing more than a tool for easier, faster analysis and recombination of audio. Only humans have the answer.

Christian Tronhjem
Sound designer, producer and composer with a special interest in adaptive audio.