Buggy = Creative??

Quincy
2 min read · Mar 12, 2023


The field of AI-generated music is contentious in part because the idea that computers can model creativity is not broadly accepted. We don’t view composition as an academic or fundamentally technical pursuit, so it’s hard to conceive of something as procedural as a programming language generating something a listener would classify as “creative.” In fact, Music and AI has pushed me toward the belief that computers can’t be creative, because every time I try, the result sounds, well, computerized. People have started to recognize this in images too, with DALL·E 2 having created its own recognizable style of art.

A few weeks ago I was making a tool that let the user combine loopable beats. Genetic crossover inspired the idea that you should be able to mix components of two loops to create something more interesting. After programming vanilla hip-hop, reggaeton, and pop beats, I started listening to what they would sound like mixed. The first time I pressed run there was a “birth defect”: not only were timings recombined incorrectly, but different components of the drum kit had traded parts. The result is unquestionably the most musically interesting and engaging section of what I produced. (1:04 in the video; please excuse the sad screen recording of my beautiful OpenGL viz.)
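For concreteness, here is a minimal sketch of that kind of crossover, with loops represented as per-instrument step patterns. The loop dictionaries, the instrument names, and the index slip that lets parts trade instruments are all hypothetical, just to illustrate how a small bug in the recombination can scramble which part plays which pattern:

```python
import random

# A loop is a dict mapping drum-kit parts to 16-step patterns (1 = hit, 0 = rest).
# These names and patterns are illustrative, not the ones from my tool.
hiphop = {
    "kick":  [1,0,0,0, 0,0,1,0, 0,0,1,0, 0,0,0,0],
    "snare": [0,0,0,0, 1,0,0,0, 0,0,0,0, 1,0,0,0],
    "hat":   [1,0,1,0, 1,0,1,0, 1,0,1,0, 1,0,1,0],
}
reggaeton = {
    "kick":  [1,0,0,0, 1,0,0,0, 1,0,0,0, 1,0,0,0],
    "snare": [0,0,0,1, 0,0,1,0, 0,0,0,1, 0,0,1,0],
    "hat":   [1,1,1,1, 1,1,1,1, 1,1,1,1, 1,1,1,1],
}

def crossover(a, b):
    """Intended behaviour: for each part, take that part's pattern from one parent or the other."""
    return {part: random.choice([a[part], b[part]]) for part in a}

def buggy_crossover(a, b):
    """The 'birth defect': patterns are reassigned by position with an off-by-one shift,
    so e.g. the kick ends up playing the snare's pattern and the snare plays the hat's."""
    parts = list(a)
    patterns = [random.choice([a[p], b[p]]) for p in parts]
    return {part: patterns[(i + 1) % len(parts)] for i, part in enumerate(parts)}

print(crossover(hiphop, reggaeton))
print(buggy_crossover(hiphop, reggaeton))
```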

Obviously the moral of the story isn’t to write buggy code. At least for me, however, the only way I’ve been able to create something that sounds musical is through these happy accidents. I’m interested in understanding how to develop a programming methodology that encourages bugs or, more sustainably, unexpected functionality. The more I think about it, the more it seems like an oxymoron to be process-driven in an effort to break a process.

One approach to this is writing code between the hours of 3 and 5am (the bug incidence goes through the roof). Random number generation adds variety, but I don’t think it gets at the core of unexpected activity. Maybe the way to go is to look at some of the fundamental operations (like the algorithm to recombine beats) and modify them in ways that are not informed by intuition. As an example, if you are interested in combining two serialized beats, it would make sense to decode them and do something like take the kick from one and the hat from the other. The unexpected-activity approach might instead be to simply take the first half of one serialization and the second half of the other (a sketch follows below). We have no idea what this will sound like, which is a sure sign we’re on the right track.
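Here is a sketch of that contrast, assuming a made-up flat serialization format of “part=pattern” tokens (not anything my tool actually uses). The informed recombination decodes first and deliberately swaps whole parts; the uninformed one splices the raw strings at their midpoints and only then decodes, so step counts and part boundaries can land wherever the cut happens to fall:

```python
# Two hypothetical beats with different pattern lengths, so the blind splice
# lands mid-token and produces something nobody designed.
pop       = {"kick": [1,0,0,0,1,0,0,0], "snare": [0,0,1,0,0,0,1,0], "hat": [1,1,1,1,1,1,1,1]}
reggaeton = {"kick": [1,0,0,0]*4, "snare": [0,0,0,1,0,0,1,0]*2, "hat": [1,1,1,1]*4}

def serialize(loop):
    return ";".join(part + "=" + "".join(map(str, steps)) for part, steps in loop.items())

def deserialize(s):
    loop = {}
    for chunk in s.split(";"):
        if "=" in chunk:
            part, steps = chunk.split("=", 1)
            loop[part] = [int(c) for c in steps if c in "01"]
    return loop

def informed_recombine(a, b):
    """Decode both beats, then deliberately take the kick and snare from one and the hat from the other."""
    loop_a, loop_b = deserialize(a), deserialize(b)
    return {**loop_a, "hat": loop_b["hat"]}

def uninformed_recombine(a, b):
    """Splice the raw serializations at their midpoints and decode whatever comes out."""
    return deserialize(a[: len(a) // 2] + b[len(b) // 2 :])

s1, s2 = serialize(pop), serialize(reggaeton)
print(informed_recombine(s1, s2))    # predictable: pop's kick and snare, reggaeton's hat
print(uninformed_recombine(s1, s2))  # unpredictable: whatever the splice happens to produce
```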

As someone who has produced countless bugs, I can confidently say that most of them won’t lead to anything other than a feeling of inadequacy. But if I were to give advice to myself before coming into Music and AI, it would be to embrace the idea that you don’t need to be mechanical or practical or logical in your development of music generation algorithms. It’s a place where being playful or silly can lead to the most exciting outputs.
