I taught a robot how to make Gqom
And it doesn’t sound as bad as I thought it would.
Ever since I discovered my favourite music genre almost four years ago, life has never truly been the same for me and those around me. Now I am never quicker to spend a shiny penny on club entrances, festival tickets and, most recently, flights & hotel rooms than I am for a true, unbridled gqom experience.
As a self-styled connoisseur of the house sub-genre, I am happiest when I am sweat-drenched, the rattling bass almost-tangible as strobe lights create bodies out of darkness and the floor reaches for my feet at each beat turn. So why, you ask, would I want to place a robot on the other side of the amp? The answer to that question lies in the brief but complex histories of both the genre and artificial intelligence.
As Sihle Mthembu so beautifully puts it:
“Disenfranchised kids and a desire to dance gave birth to the broken beats of gqom” – Noisey, March 16 2017.
Hewn from Durban townships around 2010, the genre’s international success today betrays its raw, unpolished origins. Pioneers such as Naked Boyz, DJ LAG, Sbucardo da Deejay, and Griffit Vigo imagined broken beats on oftentimes rudimentary computers and basic versions of the Fruity Loops music production software. These young producers crafted deafening drum kicks and stitched them together with anything from tribal chants to WhatsApp audio clippings, creating a sound whose charm today still lies in its unprocessed, and sometimes fetishised, primality.
Machine Learning, the branch of artificial intelligence behind most of its recent advances, is a layer of intelligence that extends our own: software performing tasks that would normally require human involvement. While the public’s general perception of AI is that we’re a few Saturdays away from a robot-led apocalypse, the situation on the ground is far less exciting. For all the buzz around software programmes outwitting humans in their own domains and making decisions that affect the lives of people on the road, there’s still a way to go before we reach the singularity science fiction promised us. And looking at AI’s take on music composition, whether through DeepMind’s generative audio models or TensorFlow’s Magenta, the process still produces melodies that sound broken, poorly concatenated, and ultimately unrefined to the trained ear. What this means is that while AI will struggle in its attempts at a pop melody, it can succeed with a genre such as gqom, which contorts to and thrives on a distorted structure.
Making Autonomous Dance Music
Here’s how I used AI to make my favourite genre of music make itself:
Back in 2016 an Uber driver shared a folder of 200+ gqom songs with me; this became my training data. First, I separated the songs by sub-genre, taking care not to mix taxi gqom with commercial gqom or with songs that sounded like sgubhu (another similar genre). I then used a pool of 20 distinct but similar songs to avoid overfitting, which is what happens when a model fits too closely to a limited set of data points (in this case, gqom bangers).
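For the curious, the curation step boils down to something like the sketch below. The file names and sub-genre tags here are made up for illustration (in reality the labelling was done by ear, not from metadata), but the idea is the same: keep one sub-genre per pool, and cap the pool size.

```python
import random

# Hypothetical catalogue: in the real folder the labels came from
# listening, not from file names. Tags here are illustrative only.
catalogue = [
    ("track_01.mp3", "taxi"),
    ("track_02.mp3", "commercial"),
    ("track_03.mp3", "sgubhu"),
    ("track_04.mp3", "taxi"),
    # ... 200+ entries in the real folder
]

def build_training_pool(catalogue, subgenre, pool_size=20, seed=1):
    """Keep one sub-genre only, then cap the pool to limit overfitting."""
    candidates = [name for name, tag in catalogue if tag == subgenre]
    random.seed(seed)
    random.shuffle(candidates)
    return candidates[:pool_size]

pool = build_training_pool(catalogue, "taxi")
```

Keeping sgubhu and commercial gqom out of the pool matters: a model trained on a muddle of sub-genres averages their rhythms into something that belongs to none of them.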
I then trained the neural net on the music data, and it in turn outputted MIDI files: not audio recordings, but symbolic files whose contents are limited to musical notes, chords, and timing.
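To make the MIDI point concrete: a generated melody reduces to a list of note events, each a pitch, a start beat, and a duration. The sketch below (example values invented, not taken from my model's output) shows that representation, plus the kind of fixed-grid token sequence a simple note model trains on and emits.

```python
# A MIDI melody is symbolic: (pitch 0-127, start in beats, length in beats).
# These example notes are made up for illustration.
melody = [
    (48, 0.0, 0.5),   # C3
    (51, 0.5, 0.5),   # Eb3
    (55, 1.0, 1.0),   # G3
]

def to_token_sequence(melody, steps_per_beat=2):
    """Flatten note events onto a fixed time grid of pitch tokens
    (-1 = rest), the kind of sequence a simple note model works with."""
    total_beats = max(start + dur for _, start, dur in melody)
    grid = [-1] * int(total_beats * steps_per_beat)
    for pitch, start, dur in melody:
        for step in range(int(start * steps_per_beat),
                          int((start + dur) * steps_per_beat)):
            grid[step] = pitch
    return grid

tokens = to_token_sequence(melody)  # [48, 51, 55, 55]
```

Because the model only ever sees and produces this grid of note numbers, everything texture-related (the drum kits, the bass weight, the vocal chops) has to be added back by hand afterwards.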
I proceeded to set up instruments in Fruity Loops, choosing drum kits and percussion from artists such as DJ LAG and Distruction Boyz, and laid out a basic gqom track framework. I then loaded the melodies into the instruments, taking care to clean up outlier notes that the machine learning model had mistakenly translated from instrument sounds in the supplied data into musical notes (remember, the model can’t read layered instruments).
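The outlier cleanup can be sketched as a simple range filter on those note events. In practice I did this by ear inside Fruity Loops, and the pitch bounds below are assumptions, but the principle is the same: notes far outside the instrument's usable range are almost certainly mis-translated percussion or layered-texture artefacts, not melody.

```python
def clean_outliers(notes, low=36, high=84):
    """Drop generated notes outside a plausible melodic range.
    The bounds (C2-C6) are assumed, not taken from the actual session;
    the real cleanup was done by ear in Fruity Loops."""
    return [(p, s, d) for p, s, d in notes if low <= p <= high]

raw = [
    (48, 0.0, 0.5),    # plausible melody note
    (118, 0.5, 0.25),  # artefact: likely a mis-read hi-hat or shaker
    (51, 0.5, 0.5),    # plausible melody note
    (7, 1.0, 0.5),     # artefact: sub-audible rumble mis-read as a note
]
cleaned = clean_outliers(raw)  # keeps only the two plausible notes
```

Anything the filter can't catch (an in-range note that still sounds wrong) gets fixed the old-fashioned way, by listening.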
Finally, I garnished the track with vocal samples and transitions and pressed play. Here’s the song below:
Update — here’s a second song:
Is the robot worth a booking? Let me know your thoughts.