x-Music Lab 22 Spring

Makoto Nagase
Environment and Information Studies
x-Music Lab
Jul 29, 2022

This was my third term with the lab, and one of the first in which I could freely pursue and investigate whatever I wanted to find out. I enjoyed the freedom and the wide spread of areas that allowed us to study our interests deeply. I have always been keen to discover the relationships within communication through music. Non-verbal communication with music as a medium carries information such as culture and emotion. In the future, I would like to figure out why people are able to unite through musical communication.

This term, I tried to identify the critical factor in emotional communication through music as a medium. Specifically, my goal was to figure out the system and background behind how music and sound deliver emotions, and to use that formula to create a piece of music that could intentionally change the listener's emotions. I started my research by reading a paper by Patrik N. Juslin called "Emotional Communication in Music Performance: A Functionalist Perspective and Some Data". The paper laid the basis for the rest by first explaining that the abilities to encode and decode emotions in musical performance must have developed in parallel and simultaneously. This is because, from an evolutionary point of view, one cannot develop while the other is lacking: if only the encoding ability existed but not the decoding ability, the listener could not feel or understand what the encoder (the performer) is trying to express, so encoding would not develop further until decoding developed enough for the encoded message to have any meaning at all. This fundamental basis made me realize that if emotions can be delivered in a particular, reliable way, it is because humans evolved such a sense with a definite meaning.

With the fundamentals laid out, I started experimenting by picking out several musical factors, such as BPM and loudness (in dB), to find the correlation between the emotions delivered and these predetermined musical factors. In one experiment, I tried to encode emotions on the drum set and measured the predetermined musical characteristics to look for correlations. The results looked very similar to those of experiments reported in other papers. Specifically, the predetermined musical factors were BPM and loudness in dB, and the encoded emotions were the following: Happy, Sad, Angry, Scared, and No Emotion. With further research, however, I concluded that with most musical instruments that exist today, the musical factors change according to how the human voice changes under the encoded emotion. For example, the human voice gets louder and speaks faster when we are outraged. This was reflected with most musical instruments as well, since the BPM got faster and the playing got louder once the emotion "Angry" was encoded during the experiments.
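The two measurements above can be sketched in a few lines of Python/NumPy. This is only an illustration of what was metered, not the actual measurement pipeline; the function names and the offline, list-of-onsets approach are my own stand-ins:

```python
import numpy as np

def loudness_db(samples):
    """Root-mean-square level of an audio buffer, in dB relative to full scale."""
    rms = np.sqrt(np.mean(np.square(samples)))
    return 20 * np.log10(rms)

def bpm_from_onsets(onset_times):
    """Estimate tempo in BPM from a list of hit-onset times (in seconds)."""
    intervals = np.diff(onset_times)
    return 60.0 / np.mean(intervals)

# A full-scale sine wave has an RMS of 1/sqrt(2), i.e. about -3 dBFS.
t = np.linspace(0, 1, 44100, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)
print(round(loudness_db(tone), 1))  # -3.0

# Drum hits every 0.5 s correspond to 120 BPM.
print(bpm_from_onsets([0.0, 0.5, 1.0, 1.5]))  # 120.0
```

With these two numbers per take, one row per encoded emotion is enough to see the pattern described above (e.g. "Angry" takes scoring both faster and louder).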

After figuring out that the encoding and decoding of these musical factors correspond to how the human voice encodes and decodes emotions, I came up with a new research question: can humans encode and decode emotions in a way that does not correspond to the use of the human voice? Specifically, can humans deliver emotions through differences among multiple frequencies? This led to my final piece of work, presented on the last day of the lab. In the first model, I wanted to observe how people interacted and how they used frequencies to encode their emotions. I therefore asked a few people to sound a sine wave at a random fixed frequency, and a few others to sound a sine wave whose frequency they could change. Those who could change the frequency were assigned either "comfortable" or "uncomfortable", and they were to express that emotion using only the changing frequency. I decided to simplify emotion down to "comfortable" and "uncomfortable" because it is the simplest form of human emotion. The test did show the human tendency to react negatively to high-frequency sounds. However, it did not indicate whether a human can express a range of feelings from "comfortable" to "uncomfortable", since the experiment only allowed each person to express a single emotion throughout the test. Much of the feedback mentioned discomfort with the high frequencies themselves, which I wanted to eliminate in order to get a clear answer to my initial question: whether humans can deliver emotions through differences among multiple frequencies.
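A sine wave whose frequency changes over time, like the one the "comfortable"/"uncomfortable" participants controlled, can be sketched as follows. The phase-accumulation trick keeps the waveform continuous while the frequency moves; the particular sweeps and the emotion mappings at the bottom are purely illustrative, not signals from the experiment:

```python
import numpy as np

SR = 44100  # sample rate in Hz

def sweep(f_start, f_end, dur):
    """Sine wave whose frequency moves linearly from f_start to f_end (Hz).

    The phase is accumulated sample by sample, so the waveform stays
    continuous (no clicks) even while the frequency is changing.
    """
    n = int(SR * dur)
    freqs = np.linspace(f_start, f_end, n)
    phase = 2 * np.pi * np.cumsum(freqs) / SR
    return np.sin(phase)

# Hypothetical gestures: an "uncomfortable" one climbing toward a high
# frequency, a "comfortable" one staying low and nearly steady.
uncomfortable = sweep(440, 4000, 2.0)
comfortable = sweep(220, 260, 2.0)
```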

In the second and final model of the experiment, I programmed software myself that lets the operator/encoder use three buttons in total to control the moving frequencies of four sine waves (MF) and play them together with three static sine waves (PF). The changes applied to the MF were programmed differently for each of the four frequencies, which allowed the encoder to spread the sound across the spectrum. The experiment consisted of a one-minute musical "performance" in which the operator moves the MF to express a "comfortable" emotion for half of the time and an "uncomfortable" emotion for the other half.
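The structure of the patch, four differently moving sine waves mixed with three static ones, can be sketched in Python/NumPy as below. Every specific frequency and trajectory here is a stand-in of my own; the actual patch values are not reproduced:

```python
import numpy as np

SR = 44100  # sample rate in Hz

def mf_pf_mix(dur=1.0):
    """Mix of four moving frequencies (MF) and three static ones (PF).

    Each MF follows a differently shaped trajectory so the moving part
    spreads across the spectrum; the PF frequencies stay fixed. All
    frequencies and trajectories are illustrative stand-ins.
    """
    n = int(SR * dur)
    t = np.linspace(0, dur, n, endpoint=False)

    # Four MF trajectories, each modulated differently.
    mf_tracks = [
        400 + 200 * np.sin(2 * np.pi * 0.5 * t),     # slow sinusoidal wobble
        800 + 400 * t / dur,                          # linear rise
        1600 - 600 * t / dur,                         # linear fall
        300 + 100 * np.sign(np.sin(2 * np.pi * t)),   # stepped jumps
    ]
    # Accumulate phase so each moving oscillator stays click-free.
    mf = sum(np.sin(2 * np.pi * np.cumsum(f) / SR) for f in mf_tracks)

    # Three PF (static) sine waves.
    pf = sum(np.sin(2 * np.pi * f * t) for f in (220.0, 330.0, 550.0))

    mix = mf + pf
    return mix / np.max(np.abs(mix))  # normalize to avoid clipping

audio = mf_pf_mix(1.0)
```

In the real patch the three buttons, rather than fixed formulas, steer how the four MF trajectories evolve during the performance.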

[Figure: Moving Frequencies (MF) patch]

As for improvements, a lot needs to be done for the experiment to become effective. For example, the investigation lacks logical reasoning behind why the MF sine waves move in a particular way (there might be one specific speed of frequency movement that allows humans to encode and decode differently than other speeds) or why the PF are set to particular frequencies. In the reviews I received, a few mentioned differences in the sound's timbre: even if a sound moved and shifted its frequency in exactly the same way, the encoded message could change depending on the timbre the sound has.
In the next term, I would like to develop this model of emotional encoding and decoding through frequency shifts further, to explore other ways in which human beings can communicate with sound.
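The reviewers' point about timbre could be isolated by rendering the exact same frequency contour with different harmonic content. A minimal sketch, with harmonic weights chosen only for illustration:

```python
import numpy as np

SR = 44100  # sample rate in Hz

def render(freqs, harmonics):
    """Render a per-sample frequency trajectory with given (harmonic, amplitude) pairs.

    Two calls with the same `freqs` but different `harmonics` give an
    identical pitch contour with different timbres, which is the single
    variable the reviewers suggested testing.
    """
    phase = 2 * np.pi * np.cumsum(freqs) / SR
    out = sum(a * np.sin(k * phase) for k, a in harmonics)
    return out / np.max(np.abs(out))  # normalize to full scale

freqs = np.linspace(440, 880, SR)  # a one-second rising contour
pure = render(freqs, [(1, 1.0)])   # pure sine: a single partial
bright = render(freqs, [(1, 1.0), (2, 0.5), (3, 0.33)])  # richer timbre
```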
