Unsettled Music: a series of audiovisual experiments on the web

Robin Jungers
Published in Qosmo Lab
5 min read · Sep 30, 2021

As the team behind Qosmo works on various projects, some focused on music making, others closer to data visualization, many small ideas and pieces of software are often left unused. The Unsettled Music series is an attempt to gather some of them in one place and share them on the internet as a continuous exploratory process. Works shared as part of the series aren't meant to explore their subject thoroughly; they are more like snapshots of an idea, with humble ambitions.

Unsettled Music takes inspiration from platforms like Google Arts & Experiments, which are wonderful places for discovering original perspectives. With this project too, we're hoping to collaborate with researchers, artists and designers from outside the team, open a dialog, and give them a framework to try out their ideas. Like in a gallery, each work has its own space and can explore themes around music and AI with a certain freedom.

For artists, it's easy to get lost in a sea of possibilities when confronted with a blank canvas. Media art is even tougher in this regard, because it also removes almost any material limitation. Deciding to work within time and technical constraints is a good way to address this issue and to stay focused over a longer period of time.

The Unsettled Music website

The Unsettled aspect in the title has two meanings here.
The first refers to the uncertain character of the music that we are presenting: as is often the case in generative art, the experiments are more about the development of a method than the outcome itself. This outcome isn't fixed; it changes over time, with every iteration no less meaningful than the previous one, and thus it never settles in one place.
The second meaning is that the results are often odd by design. Exploring the outskirts of AI techniques tends to produce unusual outputs, and that weirdness is a rich source of curiosity for us.

The first two experiments

The first work that we shared is a study of cyclical movements and continuity. Using a rhythm pattern generation model, collections of gradually evolving loops are played back one after the other; the audio engine itself is composed only of sounds synthesized in real time. The loops are visually transcribed in a continuous manner too, using knot equations to build their curves.
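As an illustration of the idea, rather than the exact parametrization used in Nested Cycles, a standard (p, q) torus knot can be sampled into a closed curve and turned into Three.js geometry:

```javascript
// Sketch: sampling a (p, q) torus knot into a closed Three.js curve.
// The actual knot family used in the experiment isn't specified here;
// this is the textbook torus-knot formula, for illustration only.
import * as THREE from 'three';

function torusKnotPoints(p = 2, q = 3, R = 2, r = 0.6, steps = 512) {
  const points = [];
  for (let i = 0; i <= steps; i++) {
    const t = (i / steps) * Math.PI * 2;
    const x = (R + r * Math.cos(q * t)) * Math.cos(p * t);
    const y = (R + r * Math.cos(q * t)) * Math.sin(p * t);
    const z = r * Math.sin(q * t);
    points.push(new THREE.Vector3(x, y, z));
  }
  return points;
}

// A closed curve that can be drawn as a line or extruded into a tube.
const curve = new THREE.CatmullRomCurve3(torusKnotPoints(), true);
const geometry = new THREE.TubeGeometry(curve, 512, 0.05, 8, true);
const knotMesh = new THREE.Mesh(geometry, new THREE.MeshNormalMaterial());
```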

Experiment #1: Nested Cycles

The second one takes a different direction. Rather than generating drum patterns, this experiment revolves around texture and noise, and it evolves as a continuous wave of ambient frequencies. As the base element for the sound, we used a drum generation model to synthesize low-quality samples, at the margin of what would be considered good. Those samples happen to have a more abstract character: as we play them back using granular synthesis methods, they overlap and blend together to form a dark wave of crackling rumble.
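Tone.js, which we come back to below, conveniently ships a GrainPlayer for this kind of granular playback. A minimal sketch, with a placeholder sample URL and arbitrary parameter values rather than the ones used in Broken Samples, could look like this:

```javascript
// Sketch: granular playback of a generated drum sample with Tone.js.
// The sample path and parameter values are placeholders.
import * as Tone from 'tone';

const player = new Tone.GrainPlayer({
  url: '/samples/generated-drum-01.wav', // hypothetical pre-generated sample
  grainSize: 0.2,     // seconds per grain
  overlap: 0.1,       // crossfade between grains, in seconds
  playbackRate: 0.5,  // slow the sample down to stretch its texture
  loop: true,
}).toDestination();

// Browsers require a user gesture before audio can start.
document.addEventListener('click', async () => {
  await Tone.start();
  await Tone.loaded(); // wait for the sample buffer to finish loading
  player.start();
}, { once: true });
```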
The visuals are directly inspired by printed graphics and typography, from the designer Hans-Rudolf Lutz or the artist Carl Andre. Here, the letters are broken down and turned into a textured surface that responds to the audio.

Experiment #2: Broken Samples

As we work on future experiments, we will try to explore aspects of machine learning in different directions. In fact, some of those topics may barely touch AI at all; they are meant to question marginal aspects of technology and software, and give them an audiovisual transcription.

Using the web as a platform for experimentation

In the past few years, the web browser has become a platform of choice for media art: not only does it provide a number of features that make modern technologies easy to access, it also, by nature, makes even the earliest prototypes available for everyone to try online.

The first technical component of this project is the visual rendering engine. Web browsers have supported hardware-accelerated graphics for many years now through WebGL; Three.js is the most famous library built on top of it, and the one we're using for this project.
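For reference, the Three.js boilerplate behind such an engine is only a handful of lines: a scene, a camera, a WebGL renderer and a render loop.

```javascript
// Minimal Three.js setup: scene, camera, renderer, render loop.
import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 0.1, 100);
camera.position.z = 5;

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

// The animation loop is where audio-reactive updates would happen.
renderer.setAnimationLoop(() => {
  renderer.render(scene, camera);
});
```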

The second component is the audio framework. Here again, on top of the well-stocked Web Audio API, libraries like Tone.js allow developers to design sound performances using natural musical approaches.
Other libraries like Essentia also make use of WebAssembly to implement advanced analysis algorithms with great efficiency.
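To give an idea of what that musical approach looks like in Tone.js, here is a small sketch that schedules a synth on a musical grid; the notes and tempo are arbitrary, chosen only to show the style of the API.

```javascript
// Sketch: a repeating pattern scheduled in musical time with Tone.js.
import * as Tone from 'tone';

const synth = new Tone.Synth().toDestination();

// Fire one note every eighth note, quantized to the Transport clock.
const notes = ['C4', 'E4', 'G4', 'B4'];
let step = 0;
new Tone.Loop((time) => {
  synth.triggerAttackRelease(notes[step % notes.length], '8n', time);
  step++;
}, '8n').start(0);

// Start on a user gesture, as required by browsers.
document.addEventListener('click', async () => {
  await Tone.start();
  Tone.Transport.bpm.value = 90;
  Tone.Transport.start();
}, { once: true });
```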

The final and most challenging component is machine learning. With the help of hardware acceleration, and the TensorFlow.js library in particular, JavaScript has become a perfectly viable option for developing real-time AI systems that run entirely in the browser.
Some compatibility and performance issues remain, though: while possible on most modern devices, running a model isn't consistently fast on every machine, which makes it hard to have reliable expectations about the timing of events. The size of the models involved is often problematic too. A few megabytes may not be much on a hard drive, but a poor internet connection can lead to several minutes of loading time.
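One common precaution, sketched here as a general pattern rather than what any particular experiment does, is to warm the model up once after loading and measure how long inference actually takes on the visitor's machine. The model URL and input shape below are placeholders.

```javascript
// Sketch: loading a TensorFlow.js model, warming it up and timing inference.
import * as tf from '@tensorflow/tfjs';

async function loadAndBenchmark() {
  await tf.ready(); // make sure a backend (WebGL, WASM, ...) is selected
  const model = await tf.loadGraphModel('/models/rhythm/model.json');

  // Warm-up pass: the first call compiles shaders and is always slow.
  const dummy = tf.zeros([1, 32, 9]);
  tf.dispose(model.predict(dummy));

  // Steady-state timing helps decide how much work to do per frame.
  const start = performance.now();
  tf.dispose(model.predict(dummy));
  const ms = performance.now() - start;
  dummy.dispose();

  console.log(`inference takes ~${ms.toFixed(1)} ms on this device`);
  return model;
}
```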

For this reason, the machine-learning-based experiments in this project have to be optimized, with some of the data pre-generated, in order to remain accessible to most visitors. Working with JavaScript actually allows the same code to run offline seamlessly in a Node.js environment, which conveniently keeps everything in one place.
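As a sketch of that offline step, assuming a hypothetical pattern model and file layout, pre-generation with @tensorflow/tfjs-node could look like this; the browser then only fetches a small JSON file instead of the full model.

```javascript
// Sketch: pre-generating patterns offline in Node.js so the browser can
// fetch a small JSON file instead of the full model. Paths, shapes and
// the model itself are hypothetical.
const tf = require('@tensorflow/tfjs-node');
const fs = require('fs');

async function pregenerate(count = 64) {
  const model = await tf.loadGraphModel('file://models/rhythm/model.json');
  const patterns = [];

  for (let i = 0; i < count; i++) {
    const z = tf.randomNormal([1, 32]);          // random latent vector
    const out = model.predict(z);
    patterns.push(Array.from(await out.data())); // flatten to plain numbers
    tf.dispose([z, out]);
  }

  fs.writeFileSync('public/data/patterns.json', JSON.stringify(patterns));
}

pregenerate();
```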

While we’re working on some new experiments, have a look at the Unsettled Music website and let us know your thoughts!

We are Qosmo!

Thank you for reading to the end. We are Qosmo, Inc., a collective of Artists, Designers, Engineers and Researchers. Read other Medium articles from Qosmo Lab, and if you are intrigued enough to find out more, get in touch with us here. We are actively searching for new members, collaborators and clients who are passionate about pushing the boundaries of AI and creativity. Ciao!
