Voices of Musicians Using Neutone (Part 2) — The RAVE bird model is a logical progression in the history of using birdsong in classical music (Darragh Kelly, composer)

tatsuyamori · Published in Qosmo Lab · Oct 25, 2023

This is the second installment of a series of interviews with musicians who actually use Neutone, an AI-based real-time timbre transfer plug-in developed by Qosmo.

We interviewed Darragh Kelly, a composer who makes extensive use of AI in his work, about the possibilities of AI in the context of classical music.

You have used AI in many of your compositions. I would like to know why you use AI in your work and how you came to use it.

About four years ago, I was doing a lot of experimental music stuff and conceptual musical work. And I felt like I had kind of reached a wall.

I’ve always been interested in the culture of cyborgism. I read Donna Haraway when I was a teenager and became fascinated. I was reading many theorists who said internal musical progression is over, so the progress has to come from outside. And I thought AI was that kind of progression. At that time, there were various composers and musicians using AI within the classical music scene, like Alexander Schubert, Jennifer Walshe, and many more.

So then I took an interest in using AI and decided, for a commission of mine, to collaborate with Dadabots. They are primarily a kind of hacker duo, quite DIY. I think they gained prominence when they published Britney Spears’ “Toxic” as if it were sung by Frank Sinatra. They’d done lots of other work, and they’ve collaborated with other composers and musicians.

I had been commissioned by an Irish ensemble called Crash Ensemble and could do whatever I wanted for eight minutes. So I got in touch with Dadabots, and for that work I decided to have them train the neural network on Crash Ensemble’s own work, CrashLands, an album of new works from Irish and international composers.

It was kind of a feedback loop in a sense. Crash Ensemble were collaborating with their AI counterparts in a live classical music piece that involved live electronics and live musicians.
It became a kind of strange piece about their collaboration with themselves and the augmentation of labor, including their own.
That piece was called Deep Model Worker.

So that was my entrance into using AI in music composition. It was a really fantastic project; it went very well and I was very pleased with it. That began my relationship with AI in music, and it’s continued since then.

Deep Model Worker / Darragh Kelly

AI is progressing very quickly these days, with ChatGPT and generative music AI, for example. What do you think about this era? How do you think it will affect music?

I guess it’s a funny time to be a composer and a musician working with this stuff because it has almost become the domain of the programmer and the technician.

And it’s interesting at the moment, because if you consider this a kind of musical instrument, then the instrument is no longer the medium between my idea and the work, as it once was with acoustic or electric instruments. It’s a kind of instrument, a kind of epistemic tool, that embodies the actual music in itself, the music theory.

So it’s quite unstable, I guess. Its progression is so fast that its instability characterizes its musicality, I think. And I think there’s a kind of material excess in regard to the theory of what AI music embodies. So it’s an interesting time, in which the medium, this tool, is kind of leading rather than people’s ideas or the concept of the musical work. It’s very technology-led, which is quite fascinating.

Is there any way of using AI in music that you think is interesting?

For me, the most interesting has always been raw audio. Plug-ins such as Neutone are one example, and another is the direct generation of the audio waveform itself by a neural network, as in RAVE. I’ve not been interested in symbolic stuff like notation or symbolic music generation. It doesn’t really speak to me as an artist.

But raw audio is the most interesting thing to me because this is sound synthesis, and it is kind of the most significant development in sound synthesis in decades and decades. Since the ’80s, maybe.

So my interest lies in the unique qualities of this digital production of audio, as opposed to the digital simulation of past technologies to which symbolic notation, and symbolic music generation, somewhat adhere.
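
For context on what this raw-audio approach looks like in practice, here is a minimal sketch using a pretrained RAVE export from the acids-ircam/RAVE project. The file names birds.ts and voice.wav are hypothetical placeholders, and the sketch assumes the model was exported as TorchScript.

```python
# Minimal sketch of raw-audio timbre transfer with an exported RAVE model.
# Assumes a TorchScript export from the acids-ircam/RAVE project;
# "birds.ts" and "voice.wav" are hypothetical placeholder files.
import torch
import torchaudio

model = torch.jit.load("birds.ts").eval()  # pretrained RAVE export

audio, sr = torchaudio.load("voice.wav")   # recording to transform
audio = audio.mean(0, keepdim=True)[None]  # mono, shape (1, 1, n_samples)
# In practice the input should first be resampled to the sample rate
# the model was trained at.

with torch.no_grad():
    z = model.encode(audio)  # waveform -> latent trajectory
    out = model.decode(z)    # latent -> waveform in the model's timbre

torchaudio.save("voice_as_birds.wav", out[0], sr)
```

The encode/decode round trip is the whole idea: the gestures of the input survive in the latent trajectory, while the output is rendered in whatever timbre the model was trained on, birdsong in this case.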

Do you think this is quite a big change, and do you think it will affect the music industry?

You know, I’m a somewhat niche DIY artist. So my own practice is quite different from the major industry changes going on.

But I guess we see this most potently with the issue of voice, now that someone’s voice can be timbre-transferred. Like the recent case of AI-generated versions of Drake’s voice being used in a track, where the question of his voice being his label’s intellectual property garnered attention.

There’s the notion of who it is that performs, who is the “I” that’s performing. I think a lot of other art forms have already had to deal with reconsidering notions of the self, expression, and communication, and with asking who we think we are and how we can speak meaningfully to one another. In those art forms, the performance of selfhood and identity continues to be central to practice and theory.

I think the question of how one’s self is manifested through musical performance has been somewhat neglected at a deeper level. Obviously, the person has always been central to the voice, but the notion of who that person is and what that means has been neglected, I think.

So I think artistic research is important for exploring how embodied performing voices contain multitudes, and what a voice is. It’s a combination of things. No one has one voice, and no one’s voice exists in a vacuum. So I think it’s worth exploring how voices can emerge from interactions between performers, composers, material, mediating technologies, and different performance contexts.

I think it’s drastically changing how we relate to the voice and to the self. And that’s setting aside the parallel progression in how we relate the idea of the self to people’s appearance.

So I think the voice and one’s appearance are both undergoing quite drastic changes that I play with in a slightly more DIY context.

I want to know more about the piece in which you used the RAVE bird model. Could you tell us about it?

The RAVE bird model got me thinking about the major upheaval in sound production. In classical music, there’s a long, storied tradition of imitating or incorporating birdsong.

I was looking back at artists like Olivier Messiaen, a French composer who wrote a lot of bird music and could be considered a master of birdsong in music. And that trend of birdsong in music has continued to the present day. It’s still very common in classical music for a piece to be inspired by birdsong or to borrow from it.

Looking at the RAVE model and then at that trend in classical music, I have been somewhat critical of it. I always shied away from using birdsong in music, because I thought it was kind of tired, actually.

Then with this, I saw it as kind of the next logical progression within the idea of birdsong in music. Timbre transfer. So it almost felt like a compulsion to do it.
And I kind of echoed Messiaen in some ways in the instrumentation. And it just became a quite repetitive, circular kind of call.

I had Michelle, who was the vocalist, vocalizing in the performance, and the RAVE bird model emerged in tandem with other bird-call-like motifs in the actual instruments. It was an interesting experiment. I felt that I had to do it once I saw the model.

Scene for a New Heritage: birdsong (Extract) / Darragh Kelly

Did you use any models other than the RAVE bird model with Neutone?

I have certainly done a lot of playing around. In an actual composition, not yet.
Though I am currently working with a programmer on training a RAVE model to be used in Neutone for my next work. We’re using the Neutone SDK with RAVE, and the model will be trained on lots of Irish traditional singing, a style called sean-nós.

So that will be the data set. And then I will use it live in Neutone, with Ableton and lots of other digital music-making tools, for a live performance. So that’s the next application.
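
As a sketch of what that next step might involve: Qosmo’s open-source Neutone SDK lets a trained RAVE export be wrapped so it loads inside the Neutone plugin. The version below follows the SDK’s documented wrapper pattern, but it is only a sketch; the file rave_sean_nos.ts, the metadata strings, and the 48 kHz sample rate are all assumptions, and exact method names may differ between SDK versions.

```python
# Sketch: wrapping a trained RAVE export for the Neutone plugin with
# Qosmo's open-source neutone_sdk. "rave_sean_nos.ts" is a hypothetical
# file and the metadata values are placeholders; the wrapper follows the
# SDK's documented example, but method names may vary between versions.
from pathlib import Path
from typing import Dict, List

import torch
from torch import Tensor
from neutone_sdk import NeutoneParameter, WaveformToWaveformBase
from neutone_sdk.utils import save_neutone_model


class SeanNosRaveWrapper(WaveformToWaveformBase):
    def get_model_name(self) -> str:
        return "rave.sean-nos"  # placeholder name

    def get_model_authors(self) -> List[str]:
        return ["Darragh Kelly"]

    def get_model_short_description(self) -> str:
        return "RAVE timbre transfer trained on sean-nós singing."

    def get_model_long_description(self) -> str:
        return "Transforms incoming audio into the timbre of Irish sean-nós song."

    def get_technical_description(self) -> str:
        return "RAVE model wrapped for real-time inference in Neutone."

    def get_technical_links(self) -> Dict[str, str]:
        return {"Code": "https://github.com/acids-ircam/RAVE"}

    def get_tags(self) -> List[str]:
        return ["RAVE", "timbre transfer", "voice"]

    def get_model_version(self) -> str:
        return "1.0.0"

    def is_experimental(self) -> bool:
        return True

    def get_neutone_parameters(self) -> List[NeutoneParameter]:
        return []  # no user-facing knobs in this sketch

    @torch.jit.export
    def is_input_mono(self) -> bool:
        return True

    @torch.jit.export
    def is_output_mono(self) -> bool:
        return True

    @torch.jit.export
    def get_native_sample_rates(self) -> List[int]:
        return [48000]  # assumption: model trained at 48 kHz

    @torch.jit.export
    def get_native_buffer_sizes(self) -> List[int]:
        return [2048]

    def do_forward_pass(self, x: Tensor, params: Dict[str, Tensor]) -> Tensor:
        # Encode the live buffer into the latent space, then decode it
        # through the sean-nós model's learned timbre.
        z = self.model.encode(x.unsqueeze(0))
        return self.model.decode(z).squeeze(0)


rave = torch.jit.load("rave_sean_nos.ts")  # hypothetical trained export
save_neutone_model(SeanNosRaveWrapper(rave), Path("neutone_export"))
```

The exported folder contains a model file that the Neutone plugin can then load inside a DAW such as Ableton Live, which matches the live setup described above.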

Compared to other AI-based plugins, what strengths do you think Neutone has?

I’m not a programmer, so I’ve always collaborated with people, especially when working with neural networks, which I can’t do myself.

Neutone is kind of the first AI plugin that I’ve used substantially. I guess it walks a good line between ease of use and malleability. But it’s also not completely prescriptive and generic, as so many other AI plugins are.

So I appreciate the attempt to make it easy to use, but also to keep it interesting and strange, essentially.

What contribution do you expect Neutone to make in the music scene as it develops?

I know Neutone’s applications are diverse, and lots of different types of musicians use it. But within my little corner of experimental classical music, I would say its significant contribution is the ease with which one can apply AI, particularly in a live music context.

I think Neutone and RAVE for Max/MSP are the two most significant tools for a composer’s or musician’s application of machine learning in a live context.

If you were to recommend Neutone to other people, how would you describe it?

For me, Neutone is probably the most accessible and interesting application one can use for machine learning in music. I think it appeals to someone who’s not fully trained in programming but who has an interest in genuine sound synthesis and in creating new sounds themselves, by collaborating with a tool, essentially, to create something original.

And as your technical skills progress, it appears you can get even more out of Neutone.

Text and Interview: Tatsuya Mori
