A year ago we recorded a mini album with lyrics written by a neural network that emulated the style of the cult Russian underground punk-rock singer Egor Letov. Here it is on Apple Music, Google Music, Spotify.
This year we thought it was high time to do something like that, but in English. “Let’s just choose a really cool musician, generate lyrics using the net architecture we already have, record some music, and we’re done!” How easy is that? And then one of us (not sure who exactly) said: “Dude! Kurt Cobain would be 50 this year!” From that moment we were obsessed with the idea. Over the next month we slept less and worked more, since everything that sounds easy turns out to be way harder than you thought once you actually start working on it.
To train an artificial neural network you need data, and the more the better. In this case we obviously needed poetry. Lyrics are naturally not that ‘heavy’ in terms of dataset size, so the roughly 200 MB we harvested across the internet turned out to be sufficient to cover a broad range of poets, from classical to modern. Unfortunately, it turned out that you cannot teach a neural network to create something that even remotely resembles English poetry from raw web-poetry data. Apparently a lot of people write poetry in English, but it seems at least 50% of them do not know basic English grammar. It looks like people start writing poetry in English and publish it online before they can write an e-mail without a mistake (and I am Russian, and we are not exactly known for our distinct knowledge of English spelling and syntax, you know ;) ). We worked days and nights (well, mostly nights, who am I kidding?) and filtered the data as best we could. The poor thing lost half of its size, yet it was still big and good enough to train a language model with sophisticated embeddings on top of it. These embeddings were:
- word2vec word embeddings
- LDA part-of-speech meta-information
- the state of a letter-by-letter LSTM
- the state of a bidirectional LSTM over the phonetic transcription
- document meta-information such as author, poem, sentiment, etc.
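As a purely illustrative sketch (not our actual code; all names and dimensions below are made up for the example), these per-token feature sources can be thought of as being concatenated into a single input vector that the language model consumes at each step:

```python
import numpy as np

# Toy dimensions -- illustrative only, not the ones we actually used.
WORD2VEC_DIM = 8    # word2vec word embedding
POS_DIM = 4         # LDA / part-of-speech meta-information
CHAR_LSTM_DIM = 6   # state of the letter-by-letter LSTM
PHON_LSTM_DIM = 6   # one direction of the phonetic-transcription LSTM
META_DIM = 3        # document meta-information (author, poem, sentiment)

def token_features(word_vec, pos_vec, char_state, phon_state, meta_vec):
    """Concatenate all per-token feature sources into one input vector."""
    return np.concatenate([word_vec, pos_vec, char_state, phon_state, meta_vec])

# Features for one fake token, drawn at random for the demo.
rng = np.random.default_rng(0)
features = token_features(
    rng.normal(size=WORD2VEC_DIM),
    rng.normal(size=POS_DIM),
    rng.normal(size=CHAR_LSTM_DIM),
    rng.normal(size=2 * PHON_LSTM_DIM),  # forward + backward halves
    rng.normal(size=META_DIM),
)
print(features.shape)  # the recurrent language model consumes vectors of this size
```

The real model, of course, learns most of these representations jointly rather than receiving them as fixed random vectors; the sketch only shows how heterogeneous per-token signals end up in one input.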
At the end of this long and hard road we had an artificial neural network that could generate lyrics in a style close to that of a given poet.
We generated a lot of different lyrics resembling different authors, and the algorithm captured some styles really well. For example, here are two verses of our neuro-Marley:
yeah i know that you can be
something that i can t do
you got me feelin crazy
i m giving everything to you
in this jungle i stand up
i wanna turn free
get the beat up to the top
and this is what i wanna b
And here is an example of neuro-Poe:
the rose she saw was beauty shining
a flower full of gold
of silence and a warning
that nothing told
The stylization worked well enough, so we generated four songs in the style of Kurt Cobain. The core part of the project was ready, but we still needed some music and, of course, vocals.
It is extremely hard to write something “that sounds like Nirvana”. First of all, because the guys were phenomenal. Second, because so many bands have already tried (and still try) to copy them. So from the start we decided we did not want to get too close to the original, so that it would sound not like a parody but rather like a tribute. Ivan did his best and we got four backing tracks for the lyrics (which we agreed were good enough), but finding a singer turned out to be even more challenging than writing the music.
At some point, after the demo with the fourth singer didn’t live up to our expectations, we almost lost hope. It seemed we would never find a singer who fit the project! Desperate, we simply searched for “sing like Kurt Cobain” on YouTube and started messaging random people from the videos whose vocals sounded interesting to us. Surprisingly, a talented New York based musician, Rob Carrol, answered and said that he would be happy to be on board. We sent him the backing tracks and lyrics, and Rob sent back vocal takes. We kept sending music and vocals back and forth over the pond, discussing intonation and melody, but we already knew this was it and we were finally going to make it.
Finally, it was ready. An EP of four songs, with lyrics written by a neural network trained to emulate Kurt Cobain… It was in our hands… And there was just one more thing left to do. We all listened to Nirvana on old tape players, so to give the whole project an additional occult touch, we took an original Nirvana tape and recorded the EP over it (yes, I know, we are monsters and have no souls).
We also thought that every EP needs a music video, so we made one.
Making it was loads of fun as well, but that is not the point of this story. The point is that the four songs written by the neural network really do resemble Nirvana lyrics a lot, and the whole EP brings back that feeling of hearing Nirvana songs for the first time on our old tape players back in the ’90s.
That’s all, folks! Listen, enjoy and spread the word!
P.S. We would like to thank Timur Bulgakov and Stanislav I. Spassky for their hard work, Vladimir Glazachev for help with the data, Ilya Edrenkin for his advice on the network architecture, and Pavel Gertman for the cover design and tape recording. Creative AI is absolutely fascinating to work on! Stay tuned and check out the other pet projects on our website.