Hunjoo Jung on his piece “I Have the Right to [De]Story Myself”

Sandris Murins · Published in 25 composers · Nov 5, 2022

Read and watch my interview with Hunjoo Jung on his musical piece “I Have the Right to [De]Story Myself”. The piece is built on the psychologically interactive processes of the human being (outer action, inner action, interaction, and emotion) in response to the psychological representation of musical and non-musical content in spatialization. To realize this process, each musical and non-musical material is formalized in a complex audio + multimedia interactive system, involving real-time audio processing, computer-generated sound, a playback system of sounds pre-recorded on instruments, pre-recorded video, real-time video and visuals, lighting, lasers, and theatrical action in spatialization.

What is the short history of the piece?

So, I started composing this piece in 2013 and completed it in early 2016. It was premiered in 2016 in San Diego, commissioned by the American cellist Taylor Borden of the Mivos Quartet. He originally asked me to write for solo cello with electronics, but I suggested adding multimedia later on. It took almost four years to complete because I wanted to program my own complex audio + multimedia system, one that lets me manipulate many multimedia devices, such as live video, lighting, and a white laser, all controlled by an audio process in real time.

Watch the full interview:

What is the central message of the piece and how is this message communicated?

The title, “I Have the Right to Destroy Myself,” comes from a South Korean novel. That book created a huge sensation in South Korea at the time because it dealt with taboos that most Koreans do not want to talk about. Unfortunately, South Korea has one of the highest suicide rates in the world. The novel deals with group suicide, a phenomenon particular to South Korean culture, and with how hidden, extreme social pressure can push South Koreans over the edge. For my piece, however, I imagined the story of a person trying to deal with trauma, and how he struggles with memories that constantly intrude on his life amid the dehumanizing nature of modern society. Ultimately, I wanted to confront the audience with a savagery that increasingly reveals the darker side of human nature, highlighting not just our competitiveness but also our pettiness and greed. The piece may be raw, rough, and direct, but there is something beyond the extremely complicated and over-saturated emotional surface, something that cannot be controlled by rationality or logic.

Watch the piece “I Have the Right to [De]Story Myself”:

How is the sensorial or listening experience created in this piece?

This piece opens a psychological dialogue that invites the listener into a liminal space: between the worlds of inner and outer action, between inward emotion and outward engagement, and between what is personal and what is social. I believe this process of psychological interaction allows for a musical realization of images of mental states, with their inherent emotional conflict and ambivalence. In terms of the sound and multimedia layers, each multimedia component represents one part of the psyche: the conscious, the preconscious, and the unconscious. Conceptually, there is substantial contrast between these parts, which are intertwined, competing, and overlapping, in order to create a state of confusion that represents the conflicting aspects of the mind through the multimedia process.

What was the process of composing this piece?

Before I started composing this piece, I was a resident at the International Digital Exploration of Arts+Science in California. During the residency, I composed a theatrical piece involving lighting and live video. My original plan was to create an interactive process between lighting, sound, and video, but I realized that this is not easy and takes a lot of rehearsal time. After that project, I realized that I needed to invent a complex audio + multimedia system designed so that one main interface on one computer controls everything, the sound and the other multimedia devices, so that I could exchange control data between each multimedia and audio device interactively. I composed the piece first. Then I recorded sounds from the cello to make the tape parts, and I designed a Max/MSP patch for the electronic parts. After that, I had to research the multimedia devices one by one. Before composing this piece, quite frankly, I did not know much about multimedia devices, so I had to research them first and then find out how they could be connected to Max/MSP. I learned the live video process first: Max/MSP already has a live video tool called Jitter, which I used for the video parts. After that, I learned about lighting; fortunately, it is possible to connect a lighting device to Max/MSP through a DMX USB Pro interface. The last thing was the white laser system: I had to build my own laser machine and connect it to an Arduino. After that, I combined all of these devices and designed one interface for the whole system.
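As a rough illustration of the lighting link described above: the Enttec DMX USB Pro expects each DMX universe to be wrapped in a small serial frame (start byte, message label, length, data, end byte). The sketch below shows that framing in Python rather than as a Max patch, and the fixture channel layout in the usage line is purely hypothetical.

```python
def dmx_usb_pro_packet(channels: bytes) -> bytes:
    """Frame one DMX universe for the Enttec DMX USB Pro (label 6: send DMX)."""
    if len(channels) > 512:
        raise ValueError("a DMX universe carries at most 512 channels")
    payload = bytes([0x00]) + channels  # DMX start code 0, then channel data
    header = bytes([0x7E, 6, len(payload) & 0xFF, (len(payload) >> 8) & 0xFF])
    return header + payload + bytes([0xE7])  # 0x7E/0xE7 delimit the message

# Hypothetical fixture layout: ch1 = dimmer, ch2-4 = RGB
packet = dmx_usb_pro_packet(bytes([255, 200, 40, 10]))
```

In practice this byte string would be written to the interface's serial port; in the piece itself, the equivalent framing is handled inside the Max/MSP lighting interface.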

What role does technology play in this piece?

This piece is built on the psychologically interactive processes of the human being (outer action, inner action, interaction, and emotion) in response to a psychological representation of musical and non-musical content in spatialization. To realize this process, each musical and non-musical element is formalized in a complex audio + multimedia system involving real-time audio processing, computer-generated sound, a playback system of sounds pre-recorded on instruments, pre-recorded video, real-time video and visuals, pan/tilt lighting, lasers, and theatrical actions in spatialization. Each multimedia and audio process has a clear role in realizing this psychologically interactive process. For example, the pre-recorded video represents the memories of trauma: in the middle section, when the cellist hits the fingerboard of the cello, a fragment of pre-recorded video pops up on the screen. The live video process represents outer action, the state of one's current life. The pan/tilt lighting and the laser represent inner action, the ongoing process of revisiting what happened and how the trauma took hold in the past. The color of the lighting represents the emotional change between outer and inner action as the trauma is recalled. On stage, the cellist sits behind a screen. The projector is placed on the opposite side, angled toward the screen, and the lighting and an HD camera are placed behind the cellist. When the lighting and video are off, the audience is in the dark; when the lighting is on, the audience sees the cellist's silhouette through the screen; when the pre-recorded video and/or the live video process are on, the audience sees the images on the screen. The pre-recorded sound and video then interlock, together or independently, with the live video and visuals, the computer-generated sound, and the real-time audio process.
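One way to picture the audio-to-lighting coupling described above is an envelope follower whose output drives a color crossfade. The mapping below (quiet = blue, loud = red) is a hypothetical illustration of the idea, not the actual mapping used in the piece.

```python
def envelope(samples, alpha=0.1):
    """One-pole envelope follower over absolute sample values (0..1 input)."""
    env = 0.0
    for s in samples:
        env += alpha * (abs(s) - env)  # smooth toward the current magnitude
    return env

def amplitude_to_rgb(env):
    """Crossfade a hypothetical cool-to-warm palette with signal energy."""
    env = max(0.0, min(1.0, env))  # clamp to the 0..1 range
    return (int(255 * env), 0, int(255 * (1 - env)))  # loud -> red, quiet -> blue

print(amplitude_to_rgb(0.0))  # (0, 0, 255)
```

The resulting RGB triple could then be written into the DMX channel values sent to the fixture.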

What software/hardware have you used for creating multimedia parts?

As I mentioned, this complex audio + multimedia system is designed so that one main interface on one computer controls the sound together with the other multimedia devices. The playback system is built from protocols and runs in Max/MSP with spatialization. The pre-recorded video images are built and run in Jitter, which interlocks with data points from the audio process. The pan/tilt lighting and the white laser connect to Max/MSP through a DMX USB Pro interface and an Arduino, accompanied by actions staged behind the screen. As for the speakers, I used sixteen: eight were hanging, and the other eight stood on the ground. The hanging speakers were spatialized with VBAP, which is built into Max/MSP, and on top of that I programmed my own spatialization system in which the dynamics of the real-time audio process from the instrument control the direction of the sounds in real time. The data from this spatialization system also manipulates the color of the lighting and the video. The grounded speakers were spatialized with ambisonics, also built into Max/MSP, which controls the computer-generated sound and the sound playback system separately.
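For readers unfamiliar with VBAP (vector base amplitude panning), the core pairwise calculation can be sketched as follows: the source direction is expressed as a linear combination of the two nearest loudspeaker direction vectors, and the resulting gains are power-normalized. This is an illustration of the general algorithm, not the actual Max/MSP patch used in the piece.

```python
import math

def vbap_pair_gains(src_deg, spk1_deg, spk2_deg):
    """2-D VBAP: gains for a source panned between two loudspeakers.

    Solves L g = p for the pair's unit vectors, then power-normalizes g.
    """
    to_vec = lambda d: (math.cos(math.radians(d)), math.sin(math.radians(d)))
    (x1, y1), (x2, y2) = to_vec(spk1_deg), to_vec(spk2_deg)
    px, py = to_vec(src_deg)
    det = x1 * y2 - x2 * y1          # 2x2 determinant of the speaker matrix
    g1 = (px * y2 - py * x2) / det   # Cramer's rule for the two gains
    g2 = (py * x1 - px * y1) / det
    norm = math.hypot(g1, g2)        # constant-power normalization
    return g1 / norm, g2 / norm

# Source midway between speakers at -45 and +45 degrees -> equal gains
print(vbap_pair_gains(0, -45, 45))
```

A full system repeats this for each adjacent speaker pair and routes the source to whichever pair encloses its direction.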

Simply put, I used two pieces of software: Max/MSP for all of the audio processing and Jitter for all of the video and visual processing. For hardware, I used two microphones, for voice and cello, and one projector for the video. One HD camera for the live video process was connected directly to the computer. The pan/tilt lighting was connected directly to the computer through the DMX USB Pro interface, for which I programmed my own lighting interface in Max/MSP. Finally, I used a white laser connected to the main computer through an Arduino, but there was no data exchange between the Arduino and Max/MSP. My original plan was for the data of the real-time audio process to control the Arduino, but it sometimes crashed, so to be safe I did not do that for the premiere.

Can you imagine the same piece without multimedia (visual part)? What could it be?

This piece has been performed four times so far. Last year it was performed at ZKM Karlsruhe in Germany without multimedia. The two versions are slightly different. For example, in the version without multimedia I emphasized physicality and theatrical elements, and I cut and reduced the middle section, which has parts that only show pre-recorded video without sound; those parts can be boring without multimedia. I believe the two versions give the audience different perceptions. For the multimedia version, I wanted the audience to perceive the psychologically interactive process of the human being through the complex audio + multimedia system in multiple dimensions. For the version without multimedia, because the information carried by the multimedia is missing, I wanted to give the audience incomplete, ambiguous, perceptual elements in order to encourage participation in the piece, so that they can create their own perception. I believe this approach gives each performer and listener the context to create their own sonic environment. Basically, the multimedia version is more direct and personal and expresses energy inside myself, whereas the version without multimedia became much more abstract and presented energy outside of myself.

What are the basic compositional principles used in this piece?

I think the main principle of this piece is an experiment with a complex audio-multimedia system; however, the piece should not merely serve to test technologies. When I hear some pieces composed for multimedia, I sometimes have the sense that they show off the technologies used without critical artistic thinking. I used a very complex multimedia setup for this piece, but I attempted to control these technologies only to realize the artistic frame of the work. I set myself the mission that the piece must stand on its own sonically, regardless of the multimedia and high technology.

How did working with performers impact this piece?

For the last concert, I worked with Niklas Seidl from Ensemble Handwerk. There are many vocalizations in the piece, but I did not put any emotional words on the score. Instead, I asked him to understand the musical content itself and then simply execute it in his own way, with his own interpretation. Through this process, he and I created a sensational new musical realization of the content. Although his execution was still very close to what I wanted, the different route allowed us both to arrive at a new realization that remained true to each of us. I believe this working process is a way to see myself from outside my own composition and to find a new perceptual approach to composing. I also adopted various notation styles in this piece. In the past, I mostly notated everything exactly; for this piece, however, I notated in four different ways. First, I still use traditional notation where I want to control everything within that frame. Second, I use a lot of graphic notation where I control most of the content within an exact time frame while giving the performers some freedom over gestural configuration and accentuation. Third, I use graphic notation with senza tempo, where the performers control gestural configuration and accentuation within a flexible time frame. Fourth, there are improvisatory phrases, where the performers control almost everything within my guidelines and concept. So, depending on the notation, I try to balance the degree of control between composer and performers.

Photo source: Hunjoo Jung

Hunjoo Jung is a Germany-based composer of acoustic, electronic, and electroacoustic concert music as well as intermedia art. In recent years, besides focusing on acoustic music, Jung has been exploring multi-complex structural ways in which interactive visuals, live video and video mapping, lighting and lasers, sensors, actions, and/or sculptural forms of objects can be combined in a wide range of ways with acoustic, electroacoustic, and electronic music in spatialization. His most recent works will be or have been commissioned and performed by ensembles such as Distractfold [UK], Curious Chamber Players [Sweden], Talea [USA], KNM Berlin and Recherche [Germany], Mimitabu [Sweden], Interstring Project and Surplus [Germany], and Multilaterale [France], and by soloists such as Taylor J. Borden of the Mivos Quartet [USA], Niklas Seidl of Ensemble Handwerk and Ensemble Mosaic [Germany], Kevin Toksöz Fairbairn [USA], and Alexander Schimpf [Germany], among others.
