Rainmaking on Mars WIP blog
This is a summary of the research blog contents originally posted on my WordPress blog, documenting the work in progress on the Rainmaking on Mars instrument and its outcomes, including the exhibition at Ars Electronica Festival 2017. The project also included critical sessions with IBM iX designers Rania Svoronou and Ricci Janus. There is another blog entry documenting the collaborative research component that preceded this project.
This summary follows the project's timeline in reverse. The final outcome is available on my website.
Ars Electronica Festival 2017
“Artificial Intelligence — the Other I”, or the connection between AI and humans was the main topic at the 2017 Ars Electronica Festival, which turned POSTCITY Linz into a hotspot of media arts for the third time in a row this year. There were over 100.000 visits to over 600 different events, conferences and lectures, exhibitions and projects, concerts and performances, animations and award ceremonies, guided tours and workshops between September 7 and 11, 2017. More than 1000 artists presented their works for five days straight at 12 different festival locations — a festival which was really filled to the very brim with media arts. -Ars Electronica
The long-awaited exhibition at Ars Electronica was a really good platform for showing Rainmaking on Mars, as well as a chance for me to gain insight into the top designers and artists working in the field. Through watching people interact with my project, I've learned a few things about how it's perceived by the public and how to continue it in future shows.
Feedback notes:
- I started to change the sounds each day to test different sounds and their sustainability throughout the day, mainly because some sounds became really high-pitched by the end of the day, but also to see how different sounds work with the piece and how people react to it.
- The sounds are a bit indistinguishable, so changing some of them or mixing them more during interaction is important for immediate feedback.
- Squeeze sounds react with a delay.
- When the background rain sound was clear and affected by touching the leaves, people really enjoyed it.
- When no one was interacting, I turned up the volume of the background rain sound to attract attention.
- Not many people could create a steady rhythm or music yet; it could be more interesting if they could, and maybe they would also stay longer.
- Many people didn’t know to squeeze the blue squishy part and press the 2 sensors together. More affordance needed.
- Pressing the 2 sensors together to make background ambient music is enjoyable as a movement and as a sound. Make this into a performance.
- Have performers play with the piece at certain times. People can watch and then know more about how to play with it after.
- The piece sometimes started conversations amongst strangers; I really like this aspect.
- Try having multiple balls hanging in a room so that the piece is experienced throughout as you move around.
- People will sometimes swing, hit, or pick up the ball rather than handle it with delicate care. Turn this into feedback; for example, the sound changes when the ball is picked up.
- The first thing people touch tends to be the 2 hanging leaves, which don't work without squeezing the blue ball. This sometimes made people walk away, as there was no immediate feedback.
June 2017 — late prototyping and planning for exhibition
Rania and Ricci joined us from IBM iX for the final crit; they were also there for the first presentation of this project in April. The crit was mainly an informal peer session, so I got notes from different colleagues and tutors.
The feedback I got was mainly around the interaction/affordance and the sounds of my object. Here are a few points:
- Work on the haptic experience
- Work with the sound to be more effective, noticeable
- The haptic touch points need to be more self-explanatory; maybe change the squeeze interaction so a single squeeze keeps the sound on for 5 min?
- Next, focus on the cybernetics, learnability, and machine learning for future development
- Need to prototype the exhibition with the object hanging in the air; add haptics to the top and bottom of the object
- Work on the aesthetics of how it looks and is experienced. Will it be clear or covered? Add lights?
- Rainmaking sounds are calming and the background story from Alison is interesting
- Maybe add instructions for the order of interactions
- Make the audience feel the isolation, perhaps in a small space or blindfolded
These will help me to evolve the project for Ars Electronica.
Given the limitations of bringing this project to Ars Electronica, I have to think about size, weight, material, and how to wire it all up. Various materials I thought of: metal, wood, grass-covered, rope, etc.
Case Study: Binaural Listening
Through this unit, I've been researching more into sound and music making. I found Suzanne Ciani's work on spatial sounds and 3D listening, which led me to learning about and hearing binaural recordings. A well-known example is the virtual haircut.
Binaural recording is a method of recording sound that uses two microphones, arranged with the intent to create a 3-D stereo sound sensation for the listener of actually being in the room with the performers or instruments.
Although I do not have the budget to buy a binaural microphone, I am trying to simulate the effect in my exhibition of Rainmaking on Mars. What if I put the speakers in the corners of the room and connect each sound in my MAX/MSP patch to only the left or right inlet of the audio output, so that certain sounds come out of only one of the speakers? It could create a simulated spatial rain environment.
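To make the routing idea concrete, here is a minimal sketch in plain C++ of hard-panning mono sources into an interleaved stereo buffer. The real routing would live in the MAX/MSP patch; this is only an illustration of the concept, and the buffers and names in it are made up.

```cpp
#include <cstddef>
#include <vector>

// Which speaker a mono source should come out of.
enum class Channel { Left, Right };

// Mix a mono source entirely into one side of an interleaved stereo
// buffer laid out as [L0, R0, L1, R1, ...]. The other side stays
// silent: the "hard pan" that pins each sound to one corner speaker.
void mixHardPanned(const std::vector<float>& mono, Channel side,
                   std::vector<float>& stereo) {
    const std::size_t offset = (side == Channel::Left) ? 0 : 1;
    for (std::size_t i = 0; i < mono.size(); ++i) {
        stereo[2 * i + offset] += mono[i];
    }
}

int main() {
    std::vector<float> rain(1024, 0.1f);     // placeholder rain samples
    std::vector<float> thunder(1024, 0.3f);  // placeholder thunder samples
    std::vector<float> out(2 * 1024, 0.0f);  // interleaved stereo mix

    mixHardPanned(rain, Channel::Left, out);      // rain only from the left
    mixHardPanned(thunder, Channel::Right, out);  // thunder only from the right
}
```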
From testing with the sounds and the final prototype, I also found that I need to change some of the sounds being used to make the outcome more effective. For this, I'm taking inspiration from binaural rain sounds and seeing what other types of sounds I could use, e.g. thunder, river, etc.
Gestures and sound
After the last prototype and talking to Chase from MA Sound today, I decided it's a good idea to go back to designing the interaction a bit more around the object. The only things you can do right now are touch a point to turn the sound on/off or use the proximity sensor to control volume. But I want to design for bigger movements and more variety.
Chase mentioned that in instrument playing, the musician often acts like a dancer in the way they move to make sound. Something that requires a bang or a soft sliding motion can create different forms of interaction and positions of the body. We imagined a long, elastic instrument that 2 people would need to pull around to make sounds, and we might give this a try.
- Let the object carry its own essence of sound
- Try different haptic surfaces to provoke different modes of movement, pressure, and gestures.
- Try to control not only turning the sound on/off but also speed, pitch, etc. along with different gestures.
Update: I worked on more interaction possibilities with Gareth and Nicolas, and now I'm using a data stream from the Arduino to MAX/MSP for the touch board so that the pins on the MPR121 can sense a range of proximity and touch, and that range can control volume and pitch; see the sketch below. I would still like to add the loop part from the last patch.
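As a rough sketch of that data stream, assuming the Adafruit MPR121 Arduino library, something like the following could stream a per-pin proximity value over serial as "pin value" lines for MAX/MSP to read:

```cpp
// Streams a per-pin proximity value from the MPR121 over serial so a
// MAX/MSP patch can map the range onto volume or pitch.
#include <Wire.h>
#include "Adafruit_MPR121.h"

Adafruit_MPR121 cap = Adafruit_MPR121();

void setup() {
  Serial.begin(9600);
  if (!cap.begin(0x5A)) {   // 0x5A is the MPR121's default I2C address
    while (1);              // sensor not found: halt
  }
}

void loop() {
  for (uint8_t pin = 0; pin < 12; pin++) {
    // The filtered reading drops below the baseline as a hand gets
    // closer to the electrode, so the difference gives a continuous
    // proximity range rather than a simple on/off touch.
    int16_t proximity = cap.baselineData(pin) - cap.filteredData(pin);
    Serial.print(pin);
    Serial.print(" ");
    Serial.println(proximity);
  }
  delay(50);                // roughly 20 updates per second
}
```

On the Max side, the [serial] object can read this stream and, after parsing, a [scale] object could map each pin's range onto volume or pitch.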
May 2017 — prototyping
Below are prototyping videos, ranging from physical interaction to processing sounds in MAX/MSP and using the sphere prototype with the MPR121.
The problem I have while testing is that I lose track of the pins and forget which sound came from which pin; the sketch below shows one possible fix. Also, when a sound is stopped and restarted by touching a pin, it always plays from the beginning; it would be nice to make it resume from where it last stopped (in MAX/MSP, sfplay~'s pause and resume messages might be one way to do this).
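One possible fix for losing track of the pins, again assuming the Adafruit MPR121 library: keep a lookup table on the Arduino of which sample each pin triggers (the names below are invented) and print a labelled message whenever a pin's touch state changes.

```cpp
// Prints a labelled message on every touch/release so it's always
// clear which sound belongs to which MPR121 pin while testing.
#include <Wire.h>
#include "Adafruit_MPR121.h"

Adafruit_MPR121 cap = Adafruit_MPR121();
uint16_t lastTouched = 0;

// Hypothetical pin-to-sound mapping, just for illustration.
const char* soundNames[12] = {
  "rain-light", "rain-heavy", "thunder", "river", "drip",  "wind",
  "leaves",     "ambient",    "chime",   "drone", "pulse", "loop"
};

void setup() {
  Serial.begin(9600);
  cap.begin(0x5A);
}

void loop() {
  uint16_t touched = cap.touched();   // 12-bit mask of touched pins
  for (uint8_t pin = 0; pin < 12; pin++) {
    bool now    = touched     & (1 << pin);
    bool before = lastTouched & (1 << pin);
    if (now != before) {
      Serial.print("pin ");
      Serial.print(pin);
      Serial.print(now ? " touched: " : " released: ");
      Serial.println(soundNames[pin]);
    }
  }
  lastTouched = touched;
}
```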
I also wonder if I've lost the modular or multiplayer part of this, as one person can technically play it alone.
Workshop: how does the future sound?
This week, I attended PoL #34 How Does the Future Sound?, a workshop run by Mark Peter Wright to imagine the sounds of the future.
This is a question I have to think about for the Mars One mission. In my prototyping process, I have decided that the sounds are not going to come from traditional instruments already developed through centuries of history on Earth. Instead, the sounds may be a jam of the environments of Earth. This possibility of new sound arts could generate a different environment of exploration and be less intimidating, as none of the instruments I create will have a way you are "supposed" to play them.
I was inspired by “musique concrète” — the experimental technique of musical composition using recorded sounds as raw material. The principle uses the assemblage of various natural sounds to produce an aural montage.
Colonizing another planet is a whole new step in human progress; the sounds and movements could be completely different and yet familiar to those of Earth. My project will facilitate interactivity, learnability, and creativity amongst the crew.
Alison Rigby, scientist and candidate for Mars One, loves rain. But you cannot have rain in space, nor does a rainstick make sound in the absence of gravity.
April 2017 — research and early prototyping
I wanted to work with different sounds and materials from everyday life. But one big constraint I found is that there will be no gravity, which means some of the instruments I made wouldn't work normally.
The few instruments I wanted to mimic because of their simplicity and therapeutic quality: wind chime, hang drum, rainstick, monochord, sea drum.
Using discarded objects from the mission would be a good consideration, as there is limited space on Mars. I want to mimic more sounds from nature on Earth, or the crew can broadcast sounds from Mars to match Earth environments.
Defining your space (IDEO brainstorming): learnability for beginners, interactive/multiplayer, modular
Sound healing
For as long as humanity has existed, people have made sounds, what we today call music. In ancient times, mystical instruments of all kinds were used to remedy illnesses and revive the spirit, and music therapy has been used by aboriginal tribes to heal physical and emotional ailments. Here's a list of instruments that would improve the state of mind through sound vibrations. The sound cradle would require 2 people: 1 to experience the therapy and another to perform the "sound massage".
In addition to this, I've also thought that maybe the instruments can mimic nature sounds from Earth, as I've found most of these therapeutic instruments mimic natural sounds, rain for example. Then, the crew can broadcast or try to record/make sounds from Mars back to Earth as a way of sharing with each other.
NASA's Mars 2020 mission, landing in 2021, plans to carry microphones that will let us hear what Mars actually sounds like (an earlier attempt, the Mars Microphone on the 1999 Mars Polar Lander, was lost when the lander crashed). This is what a song will sound like, according to a test of the microphone in an air-evacuation chamber at simulated Martian air pressure (6 millibars). A video from Cody's Lab shows him putting a sound source and a camera inside a vacuum chamber and turning the pressure down to mimic conditions on Mars.
Virtual vacuum
In order for sound to travel, there has to be something with molecules for it to travel through. On Earth, sound travels to your ears by vibrating air molecules. In deep space, the large empty areas between stars and planets, there are no molecules to vibrate.
Listen to space on NASA Space Sounds: what happens when spacecraft are used to record radio emissions, which are then converted to sound waves.
Chris Hadfield played Space Oddity (David Bowie), filmed on the International Space Station. It seems that astronauts can, indeed, play instruments in the spaceship. It's only when they go out into space that sound cannot be made.
Sound waves need air to travel through, so even though the string of a violin would vibrate properly in the near-vacuum of space, it wouldn't produce any sound. Brass instruments would also fall silent: they produce sound because of air vibrating inside their metal bodies, so without air, there would be no sound!