Rainmaking on Mars WIP blog

Betty Zhang
10 min read · Sep 12, 2017


This is a summary of the research blog posts originally published on my WordPress blog documenting the work in progress of the Rainmaking on Mars instrument and its outcomes, including the exhibition at Ars Electronica Festival 2017. The project also included critical sessions with IBM iX designers Rania Svoronou and Ricci Janus. There is another blog entry documenting the collaborative research component that preceded this project.

This summary follows the project in reverse chronological order. The final outcome is available on my website.

Rainmaking on Mars final prototype

Ars Electronica Festival 2017

Visitors interacting at Ars Electronica Festival 2017

“Artificial Intelligence — the Other I”, or the connection between AI and humans was the main topic at the 2017 Ars Electronica Festival, which turned POSTCITY Linz into a hotspot of media arts for the third time in a row this year. There were over 100.000 visits to over 600 different events, conferences and lectures, exhibitions and projects, concerts and performances, animations and award ceremonies, guided tours and workshops between September 7 and 11, 2017. More than 1000 artists presented their works for five days straight at 12 different festival locations — a festival which was really filled to the very brim with media arts. -Ars Electronica

The long-awaited exhibition at Ars Electronica was a great platform for showing Rainmaking on Mars, and it gave me insight into the top designers and artists working in the field. By watching people interact with my project, I learned a few things about how it is perceived by the public and how to develop it for future shows.

Feedback notes:

  • I changed the sounds each day, partly to test how different sounds held up over a full day of use (some became very high-pitched by the end of the day) and partly to see how different sounds worked with the piece and how people reacted to them.
  • The sounds are a bit hard to tell apart, so changing some of them or mixing them more distinctly during interaction is important for immediate feedback.
  • The squeeze sounds respond with a noticeable delay.
  • When the background rain sound was clear and responded to touching the leaves, people really enjoyed it.
  • When no one was interacting, I turned up the volume of the background rain sound to attract attention.
  • Not many people could create a steady rhythm or melody yet; it could be more interesting if they could, and maybe they would also stay longer.
  • Many people didn’t know to squeeze the blue squishy part or to press the two sensors together. More affordance is needed.
  • Pressing the two sensors together to make the background ambient music is enjoyable both as a movement and as a sound. This could become a performance.
  • Have performers play the piece at certain times, so people can watch and then know how to play with it themselves.
  • The piece sometimes started conversations amongst strangers; I really like this aspect.
  • Try hanging multiple balls around a room, so the piece can be experienced as you move through the space.
  • People sometimes swing, hit, or pick up the ball rather than handling it with delicate care. Turn this into feedback, for example a sound that changes when the ball is picked up.
  • The first thing people touch tends to be the two hanging leaves, which don’t work without squeezing the blue ball. The lack of immediate feedback sometimes made people walk away.

June 2017 — late prototyping and planning for exhibition

Notes from final crit June 13, 2017

Rania and Ricci joined us from IBM iX for the final crit; they had also been there for the first presentation of this project in April. The crit was mainly an informal peer session, so I got notes from various colleagues and tutors.

The feedback I got was mainly around the interaction/affordance and the sounds of my object. Here are a few points:

  • Work on the haptic experience.
  • Make the sound more effective and noticeable.
  • The haptic touch points need to be more self-explanatory; maybe change the squeeze interaction so a single squeeze keeps the sound on for five minutes (see the sketch after this list)?
  • Next, focus on cybernetics, learnability, and machine learning for future development.
  • Prototype the exhibition setup with the object hanging in the air, and add haptics to the top and bottom of the object.
  • Work on the aesthetics of how it looks and is experienced. Will it be clear or covered? Add lights?
  • The rainmaking sounds are calming, and the background story from Alison is interesting.
  • Maybe add instructions for the order of interactions.
  • Make the audience feel the isolation, perhaps in a small space or blindfolded.
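
On the five-minute squeeze idea: one way to read that suggestion is that a squeeze would latch the sound on with a timer instead of needing constant pressure. A minimal sketch of how the Arduino side could do that, assuming a squeeze or force sensor on analog pin A0 and a MAX patch listening for “on”/“off” messages over serial (the pin and threshold are illustrative, not my actual setup):

```cpp
// A squeeze latches the sound on for a fixed window instead of
// requiring constant pressure. Pin and threshold are illustrative.
const int SQUEEZE_PIN = A0;
const int THRESHOLD = 500;                          // analogRead level that counts as a squeeze
const unsigned long HOLD_MS = 5UL * 60UL * 1000UL;  // stay on for 5 minutes

unsigned long onSince = 0;
bool soundOn = false;

void setup() {
  Serial.begin(9600);
}

void loop() {
  if (analogRead(SQUEEZE_PIN) > THRESHOLD) {
    onSince = millis();          // re-arm the 5 minute window
    if (!soundOn) {
      soundOn = true;
      Serial.println("on");     // MAX opens the sound
    }
  }
  if (soundOn && millis() - onSince > HOLD_MS) {
    soundOn = false;
    Serial.println("off");      // window expired, close the sound
  }
  delay(20);
}
```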

These will help me to evolve the project for Ars Electronica.

The drawn plan for exhibiting the project at Ars Electronica, and an image of how the floating ball looks at this moment. Some of you may recognize that the ball is actually a large hamster ball. I think it’s always good to find quick and easy solutions, especially in prototyping, and since it was a ball with all the slots and holes I needed, it was perfect.

Given the limitations of bringing this project to Ars Electronica, I have to think about size, weight, materials, and how to wire it all up. Materials I considered include metal, wood, grass covering, and rope.

Now that I have the main sensors I want, I’ve been playing with what kinds of sounds to use. I also added a stretch sensor to control volume, which could replace the volume touch sensor. It would be nice to control playback speed with the stretch sensor as well. Through this, I found that I need to curate the sounds a bit more in MAX so that they sound melodic together.
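
The stretch sensor is essentially a variable resistor, so reading it only takes a voltage divider on an analog pin and a map to a volume range. A minimal sketch of the Arduino side, with the pin, divider range, and update rate as illustrative assumptions rather than my exact wiring:

```cpp
// Read a stretch sensor wired as a voltage divider on A0 and
// send a 0-127 volume value over serial for MAX/MSP to pick up.
const int STRETCH_PIN = A0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int raw = analogRead(STRETCH_PIN);  // 0-1023
  // Map the resting-to-fully-stretched range onto a MIDI-style volume.
  int volume = map(constrain(raw, 200, 900), 200, 900, 0, 127);
  Serial.println(volume);
  delay(50);                          // ~20 updates per second
}
```

In MAX, the serial object can pick these numbers up and scale them onto the gain.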

Case Study: Binaural Listening

Through this unit, I’ve been researching more into sound and music making. I found Suzanne Ciani’s work on spatial sound and 3D listening, which led me to learn about and hear binaural recordings. A well-known example is the virtual haircut.

Binaural recording is a method of recording sound that uses two microphones, arranged with the intent to create a 3-D stereo sound sensation for the listener of actually being in the room with the performers or instruments.

Although I do not have the budget to buy a binaural microphone, I am trying to simulate this in my exhibition of Rainmaking on Mars. What if I put the speakers in the corners of the room and route each sound in my MAX/MSP patch to only the left or right inlet, so that certain sounds come out of only one of the speakers? It could create a simulated spatial rain environment.

From testing with the sounds and the final prototype, I also found that I need to change some of the sounds to make the outcome more effective. For this, I’m taking inspiration from binaural rain recordings and seeing what other types of sounds I could use, e.g. thunder, rivers, etc.

I have been working in MAX/MSP with the 12 soundtracks of rain: eight from Hildegard Westerkamp and four from Vande Gorne’s album, which Nicolas sent me. Left: the second patch, focused more on interaction through the capacitive sensor. It receives a number from 0 to 11 over serial and turns on the corresponding one of the 12 soundtracks. This was interesting but limited in control range. Right: the first MAX patch I worked in. I first got a sound to play by pressing a specific button, then linked in the Arduino ultrasonic sensor and capacitive sensors. I also added two record-and-loop patches to bring voice interaction into the object.
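
For anyone curious about the Arduino side of that serial link: the sketch only needs to watch the MPR121’s twelve electrodes and print the index of each new touch, so MAX receives exactly one trigger per touch. Roughly something like this, using Adafruit’s MPR121 library (my actual sketch differs in the details):

```cpp
#include <Wire.h>
#include <Adafruit_MPR121.h>

// One MPR121 gives 12 capacitive electrodes, matching the 12 rain tracks.
Adafruit_MPR121 cap = Adafruit_MPR121();
uint16_t lastTouched = 0;

void setup() {
  Serial.begin(9600);
  if (!cap.begin(0x5A)) {   // default I2C address
    while (1);              // MPR121 not found, halt
  }
}

void loop() {
  uint16_t touched = cap.touched();
  for (uint8_t i = 0; i < 12; i++) {
    // Print the pin number only on a new touch, not continuously,
    // so MAX gets a single 0-11 trigger per touch.
    if ((touched & (1 << i)) && !(lastTouched & (1 << i))) {
      Serial.println(i);
    }
  }
  lastTouched = touched;
  delay(20);
}
```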

Gestures and sound

After the last prototype and talking to Chase from MA Sound today, I decided it’s a good idea to go back and design the interaction more around the object. The only things you can do right now are touch a point to turn the sound on or off and use the proximity sensor to control volume. But I want to design for bigger and more varied movements.

Chase mentioned that in instrumental playing, the musician often acts like a dancer in the way they move to make sound. Something that requires a bang or a soft sliding motion can create different forms of interaction and positions of the body. We imagined a long elastic-like instrument that two people would need to pull around to make sounds, and we might give this a try.

  • Let the object carry its own essence of sound.
  • Try different haptic surfaces to provoke different modes of movement, pressure, and gesture.
  • Control not only turning the sound on and off but also speed, pitch, etc., with different gestures.

Update: I worked on more interaction possibilities with Gareth and Nicolas. I’m now streaming data from the Arduino to MAX over the touch board connection, so the MPR121 pins can sense a range of proximity and touch, and that range can control volume and pitch. I would still like to add the loop part from the last patch.
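
The useful detail here is that the MPR121 exposes a filtered capacitance reading per electrode, not just a touched/untouched bit, and that reading falls as a hand approaches. Streaming the difference from the slowly-tracking baseline gives MAX a continuous value to map onto volume or pitch. A rough sketch, again with Adafruit’s library and illustrative scaling:

```cpp
#include <Wire.h>
#include <Adafruit_MPR121.h>

Adafruit_MPR121 cap = Adafruit_MPR121();

void setup() {
  Serial.begin(115200);
  if (!cap.begin(0x5A)) {
    while (1);  // MPR121 not found, halt
  }
}

void loop() {
  // The filtered reading drops as a hand approaches an electrode,
  // so (baseline - filtered) grows with proximity and peaks on touch.
  for (uint8_t i = 0; i < 12; i++) {
    int range = cap.baselineData(i) - cap.filteredData(i);
    // Send "pin value" pairs that MAX can split apart per electrode.
    Serial.print(i);
    Serial.print(" ");
    Serial.println(max(range, 0));
  }
  delay(30);
}
```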

May 2017 — prototyping

I made a prototype out of papier-mâché and attached capacitive MPR121 sensors.

Below are prototyping videos, from physical interaction to processing sounds in MAX/MSP and using the sphere prototype with the MPR121.

After I spoke to Jacob from the Media and Arts Technology program at Queen Mary University of London, he suggested that I could put conductive material on both sides so that when certain parts touch, they create different sounds. I like this idea very much and am going to continue with it.
This video shows the programming side I’m working on in MAX, where I have different input options for creating different sounds. So far this includes a loop function that records and then loops, the proximity sensor I had before, and key-press input. Now I have to think about how to join these together, or which to include for further development. The looping also isn’t working as well as I had hoped yet.
I had great difficulty with the MPR121, but once I figured it out, it does exactly what I want. I also had to create a new MAX patch that takes input numbers from the Arduino to trigger different sounds.

The problem I have while testing is that I lose track of the pins and forget which sound came from which pin. Also, when a sound is stopped and restarted by touching a pin, it always plays from the beginning; it would be nice to make it resume from where it last stopped.

I also wonder if I’ve lost the modular or multiplayer part of this, as one person can technically play it alone.

Workshop: how does the future sound?

This week, I attended PoL #34 How Does the Future Sound?, a workshop on imagining the sounds of the future, run by Mark Peter Wright.

This is a question I have to think about for the Mars One mission. In my prototyping process, I have decided that the sounds are not going to come from traditional instruments developed through centuries of history on Earth; instead, they may be a jam of Earth’s environments. This possibility of new sound art could generate a different environment of exploration and be less intimidating, as none of the instruments I create will have a way you are “supposed” to play them.

Imagine astronauts floating in space with this object, the movement of their limbs and bodies changing the volume of the sound. The images above show several forms these instruments could take and ways they could be interacted with. I imagine them to be completely different from traditional instruments.

I was inspired by “musique concrète” — the experimental technique of musical composition using recorded sounds as raw material. The principle uses the assemblage of various natural sounds to produce an aural montage.

Colonizing another planet is a whole new step in human progress; the sounds and movements could be completely different from, and yet familiar to, those of Earth. My project will facilitate interactivity, learnability, and creativity amongst the crew.

Using an ultrasonic sensor and a piezo to create the effect of a theremin, with multiple controls added for multiplayer modes. Although the piezo sound is not so pleasant, the method can be applied to other sounds.
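
The core of that prototype is a simple loop: ping the ultrasonic sensor, convert the echo time to distance, and map distance to pitch. A minimal sketch, assuming an HC-SR04-style sensor and illustrative pin numbers:

```cpp
// Theremin-like effect: hand distance from an ultrasonic sensor
// sets the pitch of a piezo buzzer.
const int TRIG_PIN = 9;
const int ECHO_PIN = 10;
const int PIEZO_PIN = 8;

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
}

void loop() {
  // Fire a 10 microsecond ping and time the echo.
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long duration = pulseIn(ECHO_PIN, HIGH, 30000);  // timeout at ~5 m range

  if (duration > 0) {
    long distanceCm = duration / 58;  // microseconds to centimeters
    // Closer hand = higher pitch, spanning roughly two octaves.
    int pitch = map(constrain(distanceCm, 5, 60), 5, 60, 880, 220);
    tone(PIEZO_PIN, pitch);
  } else {
    noTone(PIEZO_PIN);                // no hand in range: silence
  }
  delay(30);
}
```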

Alison Rigby, scientist and candidate for Mars One, loves rain. But you cannot have rain in space, nor does a rainstick make sound, due to the lack of gravity.

I got inspiration from looking at sensory boards usually made for babies and the objects on them. This could be a direction to take for the physical modular instrument. I also referenced toys with modular or multiplayer components like Bop It! and fidget cubes.

April 2017 — research and early prototyping

I wanted to work with different sounds and materials from everyday life. But one big constraint I found is that there will be no gravity, which means some of the instruments I made wouldn’t work normally.

The few instruments I wanted to mimic because of their simplicity and therapeutic quality: wind chime, hang drum, rainstick, monochord, and sea drum.

Using discarded objects from the mission would be worth considering, as there is limited space on Mars. I want to mimic more sounds from nature on Earth, or the crew could broadcast sounds from Mars to match Earth environments.

Defining your space (IDEO brainstorming): learnability for beginners, interactive/multiplayer, modular

Ideas: Touch and sound: a board of different textures such as grass, sand, and other substances from Earth that make sounds of nature when you touch them. This came from Alison, who likes to take long walks and may not be able to on Mars. It could also help monitor blood pressure, and in turn stress levels. Theremin: since people are going to be floating, it would be interesting for their movements and interactions to make sound. Ref: Human Theremin (Lucy Sansom), Odd Harmonics (Francois Chambard)

Sound healing

For as long as humanity has existed, people have made sounds, what we today call music. Thousands of years ago, mystical instruments of all kinds were used to remedy illnesses and revive the spirit, and Aboriginal tribes have used music therapy to heal physical and emotional ailments. Here’s a list of instruments that improve the state of mind through sound vibrations. The sound cradle requires two people: one to experience the therapy and another to perform the “sound massage”. This family is clearly interacting and enjoying some music together.

In addition, I’ve thought the instruments could mimic nature sounds from Earth, since I’ve found that most of these therapeutic instruments mimic natural sounds, rain for example. The crew could then broadcast, record, or make sounds from Mars to send back to Earth as a way of sharing with each other.

Watch Mark Applebaum play his self-made instrument and see how the sound can be transformed by electronics.

In 2021, NASA plans to land a Mars Microphone and let us hear what Mars actually sounds like (a first Mars Microphone flew on the Mars Polar Lander, which was lost in 1999). This is what a song will sound like, according to a test of the microphone in an air-evacuation chamber at simulated Martian air pressure (6 millibars). A video from Cody’s Lab shows him putting a camera inside a vacuum chamber and turning down the pressure to mimic what it’s like on Mars.
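
For scale, it helps to put that 6 millibar figure next to Earth’s sea-level pressure of about 1013 millibars:

$$\frac{6\ \text{mbar}}{1013\ \text{mbar}} \approx 0.006 \approx 0.6\%$$

So the Martian atmosphere at the surface has well under one percent of Earth’s pressure, which is why sound carries so poorly there and why the recorded song comes out faint and thin.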

Virtual vacuum

Initial sketches for a modular instrument

In order for sound to travel, there has to be something with molecules for it to travel through. On Earth, sound travels to your ears by vibrating air molecules. In deep space, the large empty areas between stars and planets, there are no molecules to vibrate.

However, sound does exist in space in the form of electromagnetic vibrations that pulsate at similar wavelengths.

Listen to space on NASA Space Sounds: what happens when spacecraft are used to record radio emissions, which are then converted to sound waves.

Chris Hadfield played Space Oddity (David Bowie), filmed on the International Space Station. It seems that astronauts can, indeed, play instruments inside the spaceship; it’s only when they get out into space that sound cannot be made.

Sound waves need air to travel through, so even though the string of a violin would vibrate properly in the near-vacuum of space, it wouldn’t produce any sound. Brass instruments would also fall silent: they produce sound because of air vibrating inside their metal bodies, so without air, there would be no sound! More here

--

Written by Betty Zhang

Strategist, designer, and researcher working in digital innovation, strategic foresight, and customer experience.
