On live audiovisual performance #1: “Aztoratuta dabiltza”
As I am interested in real-time, improvisatory audiovisual performance, I love trying different setups and both feeling and observing how the systems and I work while using them. This is one of those setups.
For the video “Aztoratuta Dabiltza” (watch below or on Vimeo), the first of an upcoming series that will shape a future show, I used a setup where the video and the audio are manipulated at the same time, using the same controls, so it is actually impossible to change a parameter of the audio without changing one of the video. There is also something else new in this real-time setup (for me): most of the time I am used to creating, triggering, editing, composing, manipulating speed and direction, or applying effects to the medium, but this time it is only about applying effects and manipulating it. It is about creating new material purely through the manipulation of a single original.
Setup, tools and workflow
I’ll give a short explanation of the tools and workflow to clarify the context and function of this setup:
- The music and overall control are made on an iPad: synth on Oscillator and drums on DM1. Both are sent through Audiobus to the Turnado multi-effect, and then to the output, which is wired to the computer.
- Turnado has four X/Y pads to manage eight different effects for the audio, while at the same time sending MIDI signals to the computer via Wi-Fi.
- The video is made in Quartz Composer. There are two almost identical videos transformed by the “v002 Glitch analog” patch, the “datamosh/bangnoise” patch and a “pixelate” filter. There is also a “MIDI Controller Receiver”, so all the parameters of those nodes are controlled live via MIDI from the Turnado X/Y pads, and an “Audio Input” that controls a couple of parameters of “v002 Glitch analog”.
- For recording purposes the image is sent via Syphon to Syphon Recorder, which takes the audio from the line in.
One of the main features of this setup is the use of a single control for both a parameter of the video and a parameter of the audio. So we have eight controls driving eight audio effect parameters and eight video effect parameters (sometimes more, as I could map more than one video parameter to the same control).
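The shared-control idea can be sketched in a few lines of code. This is only an illustration of the routing principle, not the actual Turnado or Quartz Composer configuration: every parameter name and CC number below is a made-up stand-in. One incoming MIDI CC value (0–127) from a pad axis is normalized once and then fanned out to one audio parameter and one or more video parameters.

```python
# Sketch of the shared-control mapping: one incoming MIDI CC value
# drives one audio parameter and one or more video parameters at once.
# All names and CC numbers here are illustrative assumptions, not the
# actual Turnado or Quartz Composer parameter names.

def normalize_cc(value):
    """Map a 7-bit MIDI CC value (0-127) to the 0.0-1.0 range."""
    return value / 127.0

# One entry per pad axis: each CC number fans out to an audio
# parameter and a list of video parameters (hypothetical names).
MAPPING = {
    12: {"audio": "ring_mod_depth", "video": ["glitch_amount"]},
    13: {"audio": "filter_cutoff", "video": ["pixelate_size", "datamosh_rate"]},
}

def route_cc(cc_number, cc_value, state):
    """Apply one CC message to every parameter bound to it."""
    level = normalize_cc(cc_value)
    target = MAPPING.get(cc_number)
    if target is None:
        return state  # unmapped control: ignore
    state[target["audio"]] = level
    for video_param in target["video"]:
        state[video_param] = level
    return state

params = {}
route_cc(13, 64, params)
# filter_cutoff, pixelate_size and datamosh_rate now share one value,
# so the audio cannot change without the video changing too.
```

Because both destinations read the same normalized value, the coupling described above falls out of the data structure itself: there is simply no way to send a value to the audio side without the video side receiving it.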
The other important feature is that there is only one kind of controller: four X/Y touch pads to control everything. This gives the author (artist, driver, performer, leader, conductor, call it whatever) a very particular body position, with both hands using four fingers on the screen to manipulate everything just by sliding the fingers up and down and left to right, each finger in its own way.
The performance feeling
An interesting point here is that, with music and video controlled at the same time, your movements are guided by both: if what you are doing works for the audio but not for the video, you have to move on, and vice versa.
In the same way, if you want to do a certain thing for the video, it will affect the audio, and the other way around. So it is always about finding a balance between music and video, knowing that any movement will inevitably affect both.
This crossed conditioning is a very interesting restriction that guides and dynamizes the improvisation process, carrying the performer from one audiovisual moment to another. In improvised performances, knowing which possibilities the system or setup offers is as important as the capacity to react to the resulting output. In this case, due to the chained control of audio and video, the author is moving all the time while exploring the system. When moving to one point (equally an aesthetic/audiovisual point or a physical point on the pads), there could be a great moment for the video but not so great a one for the audio, or the other way around, so much of the time you are looking for a moment that is great in both mediums, to stay there a bit and have a little hype momentum. While moving between those good moments, interesting transitions arise, because manipulation and effects work best when the values are changing rather than staying fixed.
Effects, Glitch and Datamoshing
Apart from the performance and improvisation process itself, the interest of this project lies in applying different effects and manipulations to mediums (audio and video) with different structures (waves or pixels).
On the video side, it is about exploring how those glitch and datamoshing processes affect the distinct nature of the pictures at the pixel level. In this first video the images are of one concrete kind: black and white, night shots, almost all-black backgrounds, high contrast… Those images have a concrete pixel-level structure, which evolves in a certain way when those effects are applied. Thus each kind of image reacts best to one way or another of applying those processes, and you have to trial-and-error with each one to understand the best way to perform with each kind. We will see the difference in the future pieces of this series.
On the audio side, it is about finding the effects that best suit the sound we are using with the different images, but also pairing, on each control of the X/Y pads, the sound effects that work best with the video effects. It is also important to experiment with the order in which the effects are chained and applied on top of one another, which also leads to a different order for the video effects.
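The point about chain order can be made concrete with a toy example. These two “effects” are arbitrary stand-ins invented for illustration, not anything from the actual setup: chaining the same pair of operations in opposite orders generally yields different results, which is why the order of the effect chain matters for both audio and video.

```python
# Toy illustration that effect order matters: the same two operations
# chained in opposite orders give different results. These "effects"
# are arbitrary stand-ins, not real DSP from the setup described above.

def distort(x):
    """Hard-clip the signal at +/-0.5 (stand-in for a distortion)."""
    return max(-0.5, min(0.5, x))

def gain(x):
    """Double the signal level (stand-in for a gain stage)."""
    return x * 2.0

sample = 0.4
a = gain(distort(sample))  # distort first, then gain -> 0.8
b = distort(gain(sample))  # gain first, then distort -> 0.5
```

Since effect processing is generally not commutative, every reordering of the audio chain opens a different sonic space, and mirroring that reordering on the video side opens a different visual one.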
We’ll see where all this leads, but I guess it will at least be fun to perform, which is always important.
Originally published at elurmaluta.wordpress.com on April 30, 2016.