IN CONVERSATION: NORAH LORWAY ON LIVE CODING AND ALGORAVES

Anthony T. Marasco
Dec 5, 2017

The following is an interview with Norah Lorway, Ph.D., which I conducted on December 3rd, 2017. Norah is an electroacoustic artist who works as a Lecturer in Creative Music Tech at Falmouth University in the UK. I’ve been a fan of Norah’s work and research for some time, and I was fortunate to be able to talk to her about her work as a developer of live-coding software tools for electroacoustic performance and her work in the burgeoning Algorave scene.

ATM: Can you talk a bit about how you got started with Live Coding as a performance tool?

Norah Lorway: So, I’m actually one of the early pioneers of the Algorave field. I mean, I write music, but my entire Ph.D. was all software I wrote for music, so I’m more kind of in the whole engineering side of things, and my post-doc was all engineering. I do write music still but it’s mostly a side thing at the moment. Mostly, I’m building [instruments] and coding. I haven’t really been a composer as such maybe since, like, 10 years ago. I’ve been involved in the Algorave scene since its early beginnings, when it started over here in the UK, so I can tell you all about that.

Can we talk about those two terms and their corresponding musical styles? Live Coding vs. Algorave: when I first heard those two terms, I wasn’t sure what the relationship between them was. Is there any crossover between musical traits or performance techniques that you’d consider to be from the Live Coding side of things and from the Algorave side of things?

Yeah, there’s definitely crossover. Algorave isn’t something that just got invented one day. It was mostly a term that we used for the 2012 SuperCollider conference, which is a conference that used to happen yearly. I’m pretty heavily involved in the SuperCollider developer world, which is why I came to the UK. Scott Wilson, who’s one of the main developers of SuperCollider, teaches at Birmingham, and also just because their program suited what I wanted to do. The SuperCollider world is where Algoraves grew out of, because it was meant to kind of be…well, they were already happening, but they weren’t necessarily live-coding focused before that time. That’s the other thing: Algoraves don’t have to include live-coding performances; it’s any music that is made with algorithms. So, you could run a Max[/MSP] patch and have that be in an Algorave, as long as it’s almost exclusively done with algorithms. A lot of people use live coding and incorporate those aspects. So, there’s this idea of always having your code broadcasting behind you, which is sort of a live-coding mandate, if you know what I’m talking about. There’s been a manifesto about it that you can look into.[1]

You bring up this unique trait of Live Coding performances, which is the act of showing the audience your code while you’re writing it. Why do you think that became such a common thing to do?

The whole idea behind that is live-ness, essentially. I don’t know if you’ve come across these issues in laptop performance, but nobody [in the audience] really knows what you’re doing behind your screen. In Europe, there were a lot of fringe laptop orchestras that developed in a sort of rebellion against the whole Princeton Laptop Orchestra scene. None of us liked the idea of having a conductor; it was really ridiculous, we thought. So we just wanted to have our own ensemble. There are quite a lot of these groups that have popped up and kept this tradition of showing their code. The whole “Showing Your Code” business is to have some transparency; no one needs to know what coding language you’re using or to know the syntax, but the fact that they see you doing something shows some correspondence between what you’re doing with your keyboard and what’s coming out of the speakers.

I worked a lot during my Ph.D. with BEAST — which is the Birmingham ElectroAcoustic Sound Theatre, a venue that has around 100 channels in it — and I did a lot of electroacoustic diffusions there, so moving the sound around the speakers and whatnot. SuperCollider was a big feature of that. So, we’ve done some interesting things with live coding where we’ve shown the code and also the speaker layout, so people are getting more of a multi-dimensional perspective of, like, “Okay, the sound is coming out of this speaker, and the code that’s making it do that is up on the screen”. It’s basically trying to get some kind of audience understanding of what’s happening. Going back to how that applies to Algoraves, that’s an element of things: if you’re going to live code, it’s really cool if people can see what’s happening, because otherwise, it’s just like going to another club with a DJ. DJs and live coders are not the same, and I think people do tend to compare them a lot.

It’s interesting you bring that up because even in laptop ensembles, you have to break through this barrier that often exists between you and the audience where they see you performing on the exact same device that they use to watch Netflix in bed.

Yeah, for sure.

Along those lines, do you think that there should be a push by performers to develop new coding languages or syntaxes that make it easier for the audience to understand more of what you’re doing on stage? Would it be beneficial to have the audience understand more about how the algorithms you’re creating result in the music they’re hearing?

I actually find it kind of irritating that we have to show the audience things. I understand why, but at this point in the game I’m sort of like “Just get over it, it’s not a big deal, just enjoy the music”. I don’t think it needs to be made easier for them, necessarily. I think at one point it was kind of necessary to do, but at this point, I don’t think the language needs to be simplified any more.

What are some of the elements of live coding that made you want to start using it more and more as a performance technique? What are some of the tools or coding environments that you like to use?

I prefer to live code things because you get the risk, and you also don’t always know what’s going to happen. I don’t like the preplanned nature of a lot of stuff; I’ve done a lot of laptop performances where everything’s pre-coded beforehand. I’m also working in SuperCollider, which is an extremely complex language. Some of the syntax is ridiculous. Something like Tidal is all sample-based; it’s very easy to use. You don’t have to know a lot about programming and you don’t have to know a lot about the way that computer music works, whereas something like SuperCollider is just crazy. Doing any kind of live coding in that, in general, you have to know what’s going on. At the same time, there are things like JITLib, which is the “Just In Time” library, so you can do everything in real time, which makes things easier. You can do everything on the fly, and you’re not going to run into any issues with having to restart the code. Compared to something like Max[/MSP], where you’d have to unlock your patch and disconnect some cables in order to fix things, that’s not an issue with SuperCollider. With live coding, you get more flexibility and you can change things around on the fly. If you don’t like something that you’re generating, you can quickly change it without having to go too deep into the source code of things, because you’re writing with these things called NDEFs (Node Proxy Definitions). You can basically build a synth and then loop it and keep it in the system so that it’s always there to access, whereas if you’re writing some kind of big program where you’re not able to manipulate the code in real time, you wouldn’t be able to be as flexible in performance.
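For readers who haven’t used JITLib, here is a minimal sketch of the Ndef workflow described above, assuming a booted SuperCollider server. The synth functions are illustrative placeholders, not code from any actual performance.

// Boot the server first, e.g. by evaluating s.boot;

// Define a node proxy and start it playing.
Ndef(\drone, {
    // two slightly detuned sine oscillators, drifting slowly
    var freq = 110 * LFNoise1.kr(0.1).range(0.98, 1.02);
    SinOsc.ar([freq, freq * 1.01], 0, 0.2)
}).play;

// Redefining the same proxy swaps in new code without stopping anything;
// fadeTime turns the change into a slow crossfade rather than a hard cut.
Ndef(\drone).fadeTime = 4;
Ndef(\drone, {
    var freq = 220 * LFNoise1.kr(0.2).range(0.95, 1.05);
    LPF.ar(Saw.ar([freq, freq * 1.005], 0.15), 800)
});

// Fade the proxy out over 8 seconds when finished.
Ndef(\drone).release(8);

Because the proxy stays alive on the server, re-evaluating a new definition simply crossfades into it, which is what makes this style of editing safe mid-performance.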

It’s also the idea of improv. I wrote a paper in the Computer Music Journal with my old research group, called BEER (Birmingham Ensemble for Electroacoustic Research), about free improv and how it compares to live coding.[2] We were coming from the school of thought where the European way of doing things, in comparison to the American way, is that we’re a lot more chill with the idea of failure and making mistakes, whereas a lot of the American artists who came over to perform and do some talks were very worried that they would crash and things like that while doing laptop performances. There’s a lot more acceptance of failure and glitch in terms of laptop performance over here. It’s an interesting difference culturally as well.

I think you need to embrace those problems. Live coding is very much entrenched in the hacker mentality, and that’s something I’ve been involved in since 2007 when I left my undergrad. It was a very structured, Classical music program and I hated it. When I moved to Calgary I got involved in this new Computational Media Design program, which allowed for more collaboration between computer science and music, so I got more into that. So the hacking became something I was interested in doing, hacking music as well. But failing, and using failure and crashing, all of these sorts of things that we’re not meant to do as pianists, or violinists, or composers…the way a composer is supposed to be like “Everything must be perfect” in some sense, and you need to justify why it isn’t perfect if it isn’t, and ask “What does perfection mean?”. I was kind of rebelling against those things. I think now live coding as a practice has sort of become a way to justify that stuff. It’s sort of catching on now, especially in America where it didn’t use to be a thing before. There are Algoraves in the States now, and that’s sort of great.

A live, improvisatory performance by Norah Lorway, Kiran Bhumber, and Nancy Lee at NIME 2016

That makes me wonder if we’re going to see a school or two here in North America that can eventually become centers for live coding, producing more students who focus their research primarily on live coding for performance.

Yeah, the people who are doing that in the US are people who are studying at places like Pratt, and other types of visual arts schools, and that’s the same case over here. I’m teaching at a university in Cornwall where I work as a Lecturer in Creative Music Tech. Most of the people who want to do live coding are coming from visual arts; they kind of like that idea of hacking and alternative music-making and art-making, whereas I find that music students here are interested in getting a job as a sound designer. They don’t care about the other stuff, and that’s fine, but maybe that’s where things are coming from. In the US, it’s hard to say really. I think in places like California, San Diego seems to be the place where that is happening. But in Canada? Not really. McMaster [in Ontario] because of David Ogborn, and then UBC [University of British Columbia]. But it’ll come, it’ll go.

Let’s talk about the concept of hacking and how it relates to live coding, because you brought that term up earlier. When I use hacking as a performance concept/tool, I think of it in the way that someone like Nicolas Collins would, where you take a device or instrument that is designed to work in one specific way, and then you force it to operate in ways that are against its natural order to produce sound. When you talk about hacking in the context of live coding, are you actually forcing a piece of software or device to do something it wasn’t meant to do, or are you using that term more metaphysically and talking about hacking the concepts and traditions of what it means to perform electroacoustic music?

No, no, definitely the first definition. I don’t really get into all that critical theory stuff. I’ve been building an instrument on the BeagleBone Black since probably 2014, and literally just hacking into…I’m dealing with building embedded software on it, basically, and using SuperCollider on it in weird ways. I’m trying to get sensors reading off of it, and these days I’m making 3D-printed wearable instruments with embedded speaker and sensor arrays built into them. So, taking the concept of live-ness and having the body be, not just a gestural controller, because I’m not really interested in that anymore, but mostly having your body be an instrument in some way. I’m also kind of interested in going beyond that and maybe ways of hacking the body. I’m interested in sound still, but also interested in ways of transferring movement into generating other types of sound, or visuals and whatnot.

Going back to improv for a second, what are some pieces of advice you would give to people who were interested in adopting the practice of live coding as a tool for improvisation?

Well, in live coding, you still need to pre-plan things. You need to pre-plan what it is that you’re going to do. I personally wouldn’t want to go out there without a plan, but I don’t stress about it before I do it. I do a lot of blank-slate live coding, which is like, you don’t start with anything before you go out there, you’re starting from scratch. That’s a method I’ve been using lately.

I’ve been part of a number of live coding groups — I’m still working with BEER — and what we use is this live-coding tool called Utopia, which we wrote a few years back. It’s a network sharing tool that allows us to share code in real time. It was based on one called Republic, which was from PowerBooks Unplugged, one of the first live coding groups from Germany. So, we’re using an improved, better version. We’re all connected through Utopia, which looks like a chat program, and we can send code around. Anytime you play some code, it’ll get sent to the whole group and someone can just grab that code and manipulate it slightly. So, learning how to use that with SuperCollider, starting by just changing some parameters slightly, nothing major, you can create an interesting group piece with just one small bit of code. It’s a really good learning tool for people. If you don’t know how to use [SuperCollider], you can see how other people are using it in real time. For solo stuff — like in Algoraves, for example — that tends to be dance music, so that’s a little more pressure. You have to make sure that there’s a beat going at all times. You’re maybe going to be using more pre-planned stuff for that, and it’s harder to do more blank-slate stuff. You need to plan out what you’re doing, in other words. But in the more free-improv stuff where you can do any style, maybe beats will be involved in it, but you don’t have to worry about people dancing to it.
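Utopia itself is a SuperCollider extension developed by the BEER group, and its actual classes are not reproduced here. As a rough, hypothetical illustration of the underlying idea of passing code strings between machines, the sketch below uses only SuperCollider’s built-in OSC classes; the IP address and the /sharedCode message path are placeholders.

// On every machine: listen for code strings arriving at the (made-up) path /sharedCode
// and print them, so a player can copy, tweak, and re-evaluate them by hand.
OSCdef(\codeListener, { |msg|
    var codeString = msg[1].asString;
    ("received code:\n" ++ codeString).postln;
    // codeString.interpret; // auto-evaluating incoming code is possible, but risky
}, '/sharedCode');

// To share a snippet, send it to a bandmate's machine
// (placeholder IP; 57120 is sclang's default language port).
~friend = NetAddr("192.168.0.12", 57120);
~friend.sendMsg('/sharedCode', "Ndef(\\beat, { Impulse.ar(2) * 0.3 }).play;");

As described in the interview, Utopia wraps this kind of plumbing in something that looks like a chat program: everyone is connected, and code that gets played is broadcast to the whole group for anyone to grab and manipulate.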

It’s funny that you mention the dance-music aspect of Algoraves, because as someone who hasn’t really done much live coding, I think I was melding the Live Coding and Algorave styles together in my mind just because most of the pieces I’ve seen that involved coding happened to heavily lean into beat-driven, pattern-based sequences. When I first heard some of your performances, however [like bline or drone bølge II], I was surprised by how often you don’t use 16-step sequences or drum-and-bass elements in your work; you tend to use these long, slowly evolving drones or odd-length, subtle loops.

Oh yeah, for sure! Before I got into Algoraves, I had beat-type things, but it wasn’t [dance-based], it was just drones and loops. My older music is like that, from my old days. You can really…any type of music can be live-coded, it doesn’t have to fit into the Algorave type of stuff.

Now that Algoraves and even just Live Coding as a performance tool are becoming more popular and well known, do you think that there is a danger of certain tropes forming a “mold”, like musical traits that people will start to inherently associate with Algorave music or Live Coded music in general?

Well, to me, that’s weird. Maybe because I’ve been involved in it for so long, I didn’t realize there was a mold to be broken [laughs]. When I was starting with BEER, I mean, none of our stuff was beat-driven as such. SuperCollider does have a commonly-used pattern generator and pattern library, and we used them, but to me there was no mold to Live Coding; anything you wanted to do, you could do, as long as it was done in this way. That’s really sort of interesting. I think that Algoraves have sort of shaped the way that live coding is, or is thought of. Something like Sonic Pi, when you teach it, is based on forming beats and utilizing beats, which is great because it enables people to look at music that they recognize — stuff that is maybe more popular — and shows them how they can make it with code, whereas if you were to introduce them to maybe some Drone music or some Ambient, they would probably not be as interested, unless they just like that stuff already. That’s a really good way of introducing people to [live coding], and it’s one of the reasons why Algoraves have been becoming more popular. The Guardian has been writing about them, Wired wrote about them ages ago. And it’s because it’s using a commonly-heard style of music and making it through code. That leads people to get into it more.

Before I started touring through England performing at Algoraves, we used to just get asked to play at Developers Conferences throughout Europe and stuff like that. Now, Algoraves are appearing in more mainstream festivals. That’s kind of cool, but I can see how this style of performance is starting to be associated with dance-based electronic music because of that.

Algorave at Access Space, Sheffield, UK

I want to ask you one last question, and I hope it’s not too “What Are Your Desert Island Discs?”-esque: We’re three years away from a new decade, and as we approach 2020, I was wondering if you could take a look at trends you’ve noticed (either in your field or others) in the electroacoustic music world over the past few years — or maybe tell me about your own research trends — and predict how you think they’re going to evolve over this next span of time, say, five years into the new decade.

I think I was sort of in the right place at the right time when Live Coding and Algoraves picked up, and I’m fortunate to be one of the people who sort-of started the trend, I guess. There are maybe ten of us [involved in the field], most of us in the UK, and that’s kind of exciting. You never know at the time if things you’re doing will ever really take off, so looking back on that, it was kind of cool to have been a part of it.

Right now, my own research is leading me into the medical world and medical tech, and I’m working on projects at the university that are dealing with health and well-being and tech. I’m moving more into that realm at the moment, which is why I’m focusing more on wearable technology and more utilitarian stuff for therapeutic uses of tech, not necessarily using sound, either. I’m very interested in the augmentation of the body, and I don’t just mean wearing a dress, but, like, turning you into a cyborg that can make some music. I’ve talked to some people in other places about the concept of transhumanism, which is a concept that I’m not entirely convinced by, but I think it’s interesting to see what will happen when we start augmenting the body more. The use of AI in music, too…I’m not entirely sure I’m 100-percent into the use of AI in music yet, but I was recently reading an article about health care and AI, with AI being used to diagnose people, so it’ll be interesting to see how AI starts to take over fields and how electronic music sort of adopts that. I’m definitely interested in using more of the body completely as an instrument, going beyond waving your hands in front of a Kinect (which we did at NIME in 2011, and it was kind of embarrassing! [laughs]).

Would you use any of these techniques to explore ways of generating blocks of code for use in Algorave/Live Coding performances instead of typing them out on a keyboard?

Yeah! That was the idea of my instrument when I went to UBC and was working with Bob Pritchard. My idea was to build a gestural controller that would produce code and to connect this flow between an instrumentalist and a live coder. So my instrument would take what she was playing and translate it into MIDI data, and we would grab it, turn it into NDEFs, and then live code with it in SuperCollider. With using the body, people like Atau Tanaka at Goldsmiths University do a lot of work with gestural control through bio-signals, and the sorts of ideas he’s using, with the body and heart-rate sensors translating data into code…there are a lot of possibilities, but there’s also the ethics of it that we need to consider. There’s a lot of technological fetishism lately, where we are just using tech because it looks cool, but there are questions we have to ask ourselves: what are we actually doing with this, and where are we going with this? Why are we doing all of this stuff, and where is it going to take us? That can get a bit depressing. I did a lot of thinking about this over the summer while researching at UBC: what’s going to happen in the future? Things to consider, I think. That probably made no sense, but…[laughs].
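A much-simplified sketch of the kind of routing described here, assuming a connected MIDI source: incoming notes from an instrumentalist steer a running Ndef, which the live coder is free to keep redefining. The parameter mapping is an invented example, not the actual UBC instrument.

// Connect all available MIDI sources to SuperCollider.
MIDIIn.connectAll;

// A proxy the live coder can keep redefining during the performance.
Ndef(\fromPlayer, {
    var freq = \freq.kr(220), amp = \amp.kr(0.1);
    SinOsc.ar(freq, 0, amp)
}).play;

// Each incoming note from the instrumentalist updates the proxy's parameters,
// so the player's gestures and the live coder's edits shape the same sound.
MIDIdef.noteOn(\playerNotes, { |vel, num|
    Ndef(\fromPlayer).set(\freq, num.midicps, \amp, vel / 127 * 0.3);
});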

[1] Written by the TOPLAP Organization, this can be found at https://toplap.org/wiki/ManifestoDraft. Relevant lines: “Obscurantism is dangerous. Show us your screens.”, and, “Code should be seen as well as heard, underlying algorithms viewed as well as their visual outcome.”

[2] “Free as in BEER: Some Explorations into Structured Improvisation Using Networked Live-Coding Systems,” Computer Music Journal, Vol. 38, No. 1 (2014)


Anthony T. Marasco

Composer, Sound Artist, Researcher and Educator. Currently pursuing a Ph.D. in Experimental Music & Digital Media at Louisiana State University.