Monica Lim on her piece Electromagnetic Room

sandris murins
Published in 25 composers
Apr 5, 2024


Read my interview with contemporary art music composer Monica Lim on her piece Electromagnetic Room. It is a participatory music piece in which the audience is invited to manipulate and change the acoustic sounds made by a performer on the electromagnetic piano via three interactive screens that tracked hand movement with AI pose-estimation technology. The overall sound became a product of the collective and the space. Monica is a sound artist whose work spans installations, performance art, contemporary dance and screen. The text version of this interview was created by Estere Bundzēna.

Can you give some history of the piece?

Electromagnetic Room was presented at the New Music Studio, a Melbourne University program that focuses on new music and music of the last 50 years. It encourages interdisciplinary works as well, and this was a work that I did with two collaborators: David Shea, who is a pianist and an improviser, and Patrick Telfer, who is a sound technologist. It came out of research that I have been doing into using hand gestures with computer vision to make changes to sound. The whole idea was to have an instrument that was made up of the whole room: not just the musicians, but also the audience and the room acoustics itself. A few years ago I made an electromagnetic piano, which has attachments that use electromagnets to play the strings of the piano so that notes can sustain indefinitely. We had a special piano which could be played traditionally but also in this augmented manner. Then we had three participant stations with computers, and anyone from the audience could come up to the stations and use different hand gestures to control audio effects applied to the sound captured by a microphone.

So, at any point, three of the audience could be part of it. One of the stations triggered the magnets on the piano and the two other stations controlled the audio effects. None of the participants could really control the entire sound, and you could not really predict what sound would come up, because it depended on everyone. The output went through a spatialized speaker system: we had six speakers, the acoustic piano and the affected sound in six channels.

Watch full interview:

How did each station impact the sound?

It was different for each movement. There were five movements in the work, so we roughly knew what sort of textures each movement would have. The movements were based on a book called The Decay of the Angel. It is about the stages by which a deva becomes human, and one of the signs of the deva’s decay is heavy sweating. For that movement we wanted sounds that had a lot of gliding and pitch shifting, so the movements of the audience would make everything pitch-shift and create reverbs. Other movements used granular synthesis: the sound of the piano goes through a granular synthesizer and gets lots of speckled, granular textures.
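Granular synthesis, as described above, chops a sound into tiny windowed grains and re-layers them into a speckled texture, and reading each grain at a different rate shifts its pitch. A minimal sketch in plain Python (this is not the actual Max/MSP processing used in the piece; the function and its parameters are illustrative):

```python
import math

def granulate(source, grain_size=441, hop=110, pitch=1.0):
    """Chop a mono sample list into short Hann-windowed grains and
    overlap-add them; reading each grain at `pitch` times the normal
    rate shifts its pitch while keeping the overall duration."""
    out = [0.0] * (len(source) + grain_size)
    pos = 0.0      # read position in the source
    out_pos = 0    # write position in the output
    while int(pos) + grain_size < len(source):
        for i in range(grain_size):
            src_i = int(pos + i * pitch)  # nearest-neighbour resampling
            if src_i >= len(source):
                break
            # Hann window fades each grain in and out to avoid clicks.
            w = 0.5 - 0.5 * math.cos(2 * math.pi * i / grain_size)
            out[out_pos + i] += source[src_i] * w
        pos += hop
        out_pos += hop
    return out

# A 440 Hz sine tone at 44.1 kHz, granulated with an upward pitch shift.
sr = 44100
tone = [math.sin(2 * math.pi * 440 * n / sr) for n in range(sr // 10)]
texture = granulate(tone, pitch=1.5)
```

With a short hop relative to the grain size, the grains overlap several times over, which is what produces the dense, speckled quality described above.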

What is the main idea of the piece?

I was brought up in the classical music tradition, but when you go to concerts with young people, for example, you go to a bar or a gig and you get involved in the music. It is a lot more immersive; you do not just sit down. I really wanted to bring that sense of the audience being part of the whole atmosphere into a more classical music setting. We had the acoustic piano, but I wanted something where you could think of everyone as being part of one instrument, for the whole room to be an instrument.

Watch the piece:

Does your piece have a special message?

Not really. It was an experimental piece with no special message other than an invitation for people to try something new and be part of something. We used hand gestures because they are quite natural. People understand them, and there is a cognitive sense: we think, ‘If you move this way, it should make this type of sound somehow.’ There is quite a lot of naturalness to it. It is also easier than using the whole body. We had to choose whether to get people to make sounds with their hands or with their whole body, but with audiences, if you use the whole body, they might feel very self-conscious, because we are not dealing with dancers. If you work with dancers, you use the whole body, but a regular audience is more self-conscious, so hands are a little bit easier.

How did you create the software for this piece?

The computer vision model itself is a Google model called MediaPipe. I created a program that would connect to that model on the local computers and then convert its output into the required MIDI based on which hand movement I was trying to track. The AI model just gives you coordinates for 21 points of the hand; from there, you choose what you want to track and how you want to map it. That was all coded by myself.
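The landmark-to-MIDI step above can be sketched roughly like this (this is not Monica Lim’s actual program; the mapping choice and the message layout are assumptions for the example). One hand’s normalized coordinates are collapsed into a single MIDI control-change value:

```python
def hand_to_cc(landmarks, cc_number=1):
    """Map one hand's normalized landmarks to a MIDI control-change message.

    `landmarks` is a list of (x, y) pairs in [0, 1], in the style of the
    coordinates a hand-tracking model such as MediaPipe reports. Here the
    hand's average vertical position drives the controller value: raising
    the hand (smaller y, since image y grows downward) raises the CC.
    """
    avg_y = sum(y for _, y in landmarks) / len(landmarks)
    value = round((1.0 - min(max(avg_y, 0.0), 1.0)) * 127)
    # A MIDI control-change message on channel 1: status 0xB0, controller, value.
    return bytes([0xB0, cc_number, value])

# A hand near the top of the frame (y around 0.1) yields a high controller value.
msg = hand_to_cc([(0.5, 0.1), (0.6, 0.12), (0.4, 0.08)], cc_number=74)
```

The interesting design work is exactly what this toy example glosses over: choosing which gesture feature (height, openness, speed) maps to which effect parameter, and how obviously.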

What was the listening experience you achieved?

It was varied. The piece ran over 45 minutes. We focused on the electromagnets of the piano. A piano that is triggered with electromagnets has no attack, because you do not strike it; the magnets just pull the string, so it is a very sustained sound, almost like an organ. So we started the piece with a sort of electromagnetic sustained sound. The end is quite dark, with people lying down on the floor to feel the vibrations from the bass.

Can you explain the 5 parts of the piece and their sequence?

As I mentioned, it was based on a book called The Decay of the Angel by Yukio Mishima. We used this idea from Buddhist philosophy that when a deva, a demigod, starts to become human, there is a decay process. There are five stages of that decay. The first is when the petals from their crown start to fall off. The second is that they start to sweat a lot. The third is when their clothes become really dirty. The fourth is dissatisfaction, and the fifth is that everything starts to become quite dark. We used these concepts, which have to do with the body. It was about the decay of the body as a metaphor for the decay of the spirit. We built the sounds around those ideas.

For the first part, decay, we used a lot of echoing: playing with the idea of something being echoed that changed, became shorter and morphed a bit. For the second part, there was lots of pitch shifting and very fluid, echoey reverb sounds. The third part was very granular, with speckled, more percussive textures. For the fourth part, we experimented with lots of distortion and pitch-shifting delays. For the last part, there were a lot of sounds in the lower register, really heavy sounds.

Did the pianist improvise his part?

It was mainly improvised based on the structure. We gave the structure, but there was no set graphic score, or any score at all. It was basically a set of inspirational notes and written ideas, and he used that.

What was the role of the audience participation?

It really changed the sound. The combined effect of whatever the participants did had a huge impact on the sound. They were not an addition but an integral part of the whole piece.

Can you imagine the same piece but without audience participation?

I do not think it would work. The audience participation was a key part of that work and without that, it would have been just an improvised piece. David Shea is a great pianist, a fantastic improviser, but that work revolved around the idea of the whole room being an instrument. I think it would have been a very different piece without the audience.

Was the audience very active?

Actually, we were worried about how involved the audience would get. When I designed the piece, I had to build contingencies in case there was no one at a station. However, in the actual performance, there was never a time when the stations were empty. The performers and the stations were in the middle and the audience was seated around them. In the beginning, they were all sitting down, and only halfway through the performance did they start to really move around. They sat on the floor and lay down on the floor, and that was when it got more interesting: the audience became part of the physical space where the performers were.

It is hard for me to imagine an outcome of a piece like this. How did you know what to expect?

There was a lot of faith that it would just happen. I think that is the beauty of having an improvised piece because David can respond to what he is hearing. It is like a feedback loop. The audience is making the sound and he is listening to the sound and responding to it. It is not something that you can plan very well, but at the same time, it is something that gives you a lot of freedom to compensate for the sound that you get. If the sound gets too dense, he can pull it back and vice versa.

Were you surprised as well?

I was not really surprised, I had a rough idea of the outcome. However, it made me think of the mapping I did. When you map gestures to sound, there is always a design issue. You have to decide how obvious you want to make it. Do you make it so that the gesture and sound mapping are really obvious, so that the audience understands that this specific gesture makes that effect on the sound, or do you make it not obvious at all? In retrospect, I would have made it more complex and less obvious. It is a hard one because sometimes if the audience cannot hear the difference that they are making, they also do not understand it and then they get disengaged.

How was the audience’s reflection after the piece?

The audience was made up of people interested in experimental music, so they were very receptive to it. For many of them, it was the first time they had seen something like that, and the novelty was really interesting. I think they felt more connected: not just the people who came up and participated, but also the people who did not necessarily participate but could see others participating, could walk around and sit near them. You feel like you are a part of it rather than watching something.

How did the instrument process the gestures?

It tracked the movement of the hands and then turned it into information that the digital sound processing could use. For that, I used Open Sound Control as the protocol and networked the three different computers to one main computer.
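Open Sound Control messages are simple binary packets, so the station-to-main-computer link described above can be sketched by hand. This packs a single float argument per the OSC 1.0 encoding (the address name here is made up for the example; in practice a library such as python-osc would typically build and send these packets to the Max/MSP computer):

```python
import struct

def osc_message(address, value):
    """Pack an OSC message carrying one float32 argument, per OSC 1.0:
    a null-terminated address padded to a multiple of 4 bytes, a ",f"
    type-tag string padded the same way, then a big-endian float32."""
    def pad(s):
        b = s.encode("ascii") + b"\x00"
        return b + b"\x00" * (-len(b) % 4)
    return pad(address) + pad(",f") + struct.pack(">f", value)

# A station could report a normalized gesture value to the main computer
# over UDP; the address "/station/1/pitch" is hypothetical.
packet = osc_message("/station/1/pitch", 0.75)
```

Each participant station would send packets like this over the network, and the central computer would unpack them and route the values to effect parameters.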

Can you imagine the piece without the software?

Absolutely. The idea is people being part of one instrument, and this idea of networked, distributed sound can definitely work in analogue. Computer vision was just meant to be a tool; there are many tools that can do the same thing.

What kind of hardware and software was used?

There were magnets that sat on top of the piano, microphones, speakers, and the computer stations running the computer-vision software in the browser. All the information then went to a central computer, which used Max/MSP for processing.

Can you imagine the piece with strictly pre-written scores?

Absolutely. You would have to control the participant input more, because you would not have the same ability to respond, but I think it is definitely possible. You could make really specific effects, for example, so that at a specific moment an effect happens, and that could actually be really effective. It has to be very planned and timed, but it could give you a lot more precise control over the sound.

Were there any kind of compositional principles?

It was mainly improvised, so I would not say so. It is not a traditional composition. Our first priority was texture; that was the main guiding principle, not just in the way the piano was played but also in the audio effects that the hand movements were making.

How did the participants affect each other and the piece?

It was based on movement. It was pretty much the same movement-to-sound mapping for a whole section of the piece. Of course, the participants would change, and the result depended on what two different people were doing at the same time. You have a chain of effects, and one person’s effect will change the sound of the effect from the other person.

How did you work with the pianist?

We had a few rehearsals. We have collaborated quite a few times, so we know each other’s language. We had a few sessions where we mocked up the audio effects, and I pretended to be the audience so he could hear what might happen.

What was the process of composing?

Actually, once we had the idea and the concept, it did not take too long. It took a while to get to a structure and to figure out the technology. Sometimes it takes a lot to decide what system to use, and there were a few decisions to be made. Computer tracking can be done through many platforms: in the browser or through different programs. Then we had to figure out what hardware and what CPU we had. Computer vision is quite processing-intensive, so if you track too many things and it gets too complex, it becomes really slow. There was a lot of fussing around with that. However, because of the improvised nature, it did not take too long. We probably had a couple of weeks before the performance where we rehearsed and settled on a structure with rhythmic and melodic elements, so that David roughly knew what he was going to do.

What would you suggest to other composers based on your experience with this piece?

I think there is a lot of participatory artwork now. It has become more and more popular over the last 10–20 years, not just in sound but also in visual arts and installations. People are a lot more used to it, but it can also end up as just a gimmick. Sometimes I see works where there is a whole composition, orchestral or chamber music, and then the audience adds some mobile-phone sounds. When the sound that the audience makes is not an integral part of the piece, it can be difficult to engage them, because people tend to just want to sit down and watch. You have to find a way to make it absolutely necessary to the work rather than something you just add on.


Source: screenshot from YouTube video of the piece

Monica Lim is interested in new cross-disciplinary genres and forms, as well as combinations of new technology with music. Her work has been presented at Arts House, Science Gallery Melbourne, AsiaTOPA, White Night, Liquid Architecture, Melbourne Fringe, Arts Centre Melbourne, Sydney Dance Company and WorldPride, as well as at international symposiums such as ISEA and NIME. Monica is currently undertaking her PhD in movement-led composition and new technologies at the Faculty of Fine Arts and Music, University of Melbourne. She is part of the research team at VCA Dance’s TrakLAB and the University of Melbourne’s Centre for Artificial Intelligence and Digital Ethics. Monica is a 2023 artist-in-residence at the Grainger Museum and Melbourne Electronic Sound Studio and is co-creator of the Electromagnetic Piano with Mirza Ceyzar and David Shea.