Emotion Sonification

Agoston Nagy
3 min readJul 10, 2017


Sentiment analysis is an interesting method that can be applied to tasks ranging from automated review ranking and opinion mining to finding semantic clusters and emotions in large-scale corpora. A basic goal in sentiment analysis is classifying the polarity of a given text at the document, sentence, or feature/aspect level: whether the opinion expressed in a document, a sentence, or an entity feature/aspect is positive, negative, or neutral.

Twitter is an interesting source of live, continually evolving text in constant conversation, so I kept investigating live text analysis and representation methods based on the Twitter sphere.

Server / client based app structure

I continued on my regular path, which is to combine modular sound patching (libPd) with text-based app development (C++, more recently Node.js). Sidenote: I had a plan to use web technologies only, so I tried WebPd again, but it still lacks most audio objects, so it is not really useful for advanced audio processing in its current state. I also tried Heavy, which has many more objects implemented but completely lacks support for dynamic patching. So Pd is still not quite there for web publishing, and I did not want to dive into the various JavaScript web audio frameworks, because they (along with web standards) change so rapidly that my project might not run even a few months later. Compare that to Pd, whose syntax and concepts haven't changed in the last 20 years, and it's clear why I'll continue using libPd.

After the first few tests of accessing Twitter data, it became clear that working with APIs, using advanced learning modules, and maintaining persistent data storage (independent of the app's life cycle) is relatively easy with Node.js. The best part is combining its async nature with client applications through Socket.IO. Authenticating on Twitter, fetching tweets, and analyzing their content for sentiment can be done with a few callback functions that run independently from the main application thread on the client side. Using the sentiment module, each tweet gets a score that moves along a spectrum from negative to positive moods. Sentiment uses the AFINN-165 wordlist to perform sentiment analysis on arbitrary blocks of input text. Visualizing and sonifying these values can help track different phrases and their public acceptance in real time.
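The AFINN approach boils down to summing per-word valence scores. A minimal sketch of how such a scorer works (the wordlist below is a tiny hand-picked excerpt for illustration, not the real AFINN-165 list, and `scoreTweet` is my own name, not the sentiment module's API):

```javascript
// Tiny excerpt standing in for the AFINN-165 wordlist (word -> valence in [-5, 5]).
const AFINN_SAMPLE = {
  good: 3, great: 3, love: 3, happy: 3,
  bad: -3, sad: -2, hate: -3, awful: -3,
};

// Score a tweet by summing the valence of every known word in it.
function scoreTweet(text, wordlist = AFINN_SAMPLE) {
  const tokens = text.toLowerCase().match(/[a-z']+/g) || [];
  let score = 0;
  for (const t of tokens) {
    if (t in wordlist) score += wordlist[t];
  }
  // "comparative" normalizes the score by tweet length, so long and short
  // tweets can be compared on the same spectrum.
  return { score, comparative: tokens.length ? score / tokens.length : 0 };
}

console.log(scoreTweet("I love this, so happy").score); // 6
console.log(scoreTweet("awful and sad").score);         // -5
```

In the actual app this score would arrive through a Socket.IO event for each tweet in the stream, and the averaged value would drive the sound.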

Examples of mood-graphs from the Twitter corpus (realtime stream)

While visualizing the polarity of moods within tweets is trivial (vertical axis, color-coded), tweaking the sonic representation resulted in a more complex system. The overall soundscape consists of a drone-like sound linked to the average emotion of the incoming tweets.

The drone at the negative extreme consists of waves tuned to an averaged minor chord, combined with some bit crushing that leads to a dry, noisy result. The drone at the positive peak consists of waves tuned to an averaged major chord, combined with a biquad filter that amplifies overtones and natural harmonics, which leads to a pleasant, sweet sound. Each incoming tweet from the stream triggers a tiny percussive component: a kick hit for negative values, hi-hats for positive ones.
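The mapping above could be sketched as a pair of functions on the Node side, computing the parameters that would then be sent into the Pd patch via libPd. All names here (`crushBits`, `filterGain`, the clamping range) are my own illustrative assumptions, not values from the original patch:

```javascript
// Map an average sentiment value (clamped to [-1, 1]) to drone parameters:
// negative -> minor tuning plus bit crushing, positive -> major tuning
// plus a filter boost on the overtones.
function droneParams(avgSentiment) {
  const s = Math.max(-1, Math.min(1, avgSentiment));
  return {
    scale: s < 0 ? "minor" : "major",
    crushBits: s < 0 ? Math.round(16 + s * 12) : 16, // down to 4 bits at the negative extreme
    filterGain: s > 0 ? s * 12 : 0,                  // up to +12 dB overtone boost when positive
  };
}

// Each incoming tweet triggers one percussive hit:
// kick for negative scores, hi-hat for positive ones.
function percussionFor(tweetScore) {
  return tweetScore < 0 ? "kick" : "hihat";
}

console.log(droneParams(-1)); // { scale: 'minor', crushBits: 4, filterGain: 0 }
console.log(percussionFor(3)); // hihat
```

In a libPd setup, each of these values would be sent to a named receiver in the patch, so the JavaScript side only decides parameters while all synthesis stays in Pd.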

A third, unexpected piece of information became audible this way: the frequency of these events indicates how popular the current search phrase is within the Twitter sphere. If a word is mentioned a lot, we get a very quick, percussive soundscape. When a word is less popular, the result is a slow drone scape with some clicks and pops here and there.
