Soundscapes: The Poor Man’s Teleportation Device

Varun (wambam) · Apr 2, 2018

[Header image source: Slate]
For this project, I decided to expand on the concept of acoustic ecology and tackle some of the barriers presented in "An Introduction to Acoustic Ecology" by Kendall Wrightson. [1] I created a soundscape of my recent trip to Mexico by composing different aural modalities: clips of various audio recordings layered over an instrumental song. I believe this form uses the unique affordances of audio and creates a shareable experience that doesn't fall subject to the "eye culture" norms that dominate social technology today.

Eye Culture

Eye culture describes the dominance of the visual modality in society. R. Murray Schafer, who coined the term "acoustic ecology," noted that children's ability to listen was deteriorating. [1] Recent advances in technology, particularly in communication and social media, have arguably exacerbated this. A quick scan of the most popular social media apps (Facebook, Instagram, Snapchat, etc.) reveals the bias toward the visual modality: on every platform, visual media are what users predominantly use and what the products are designed around.

One reason for this development is likely the limitations of technology and how they shaped what devices like smartphones could afford users. Of all the senses, only sound and sight are reasonably emulated by today's devices. Smell and taste are far from being produced by portable hardware, and touch is limited to taking input on a screen and providing haptic feedback. Sound and vision, however, are well covered: devices can record and reproduce sound with microphones and speakers, and visuals with cameras and screens. We can do a lot with those two modalities on our phones today. Why, then, did the features built around sight surpass those built around sound?

My guess (an educated one, as a computer science major) is that the more recent breakthroughs in sensory technology revolved far more around visual data than aural data. There has been a tremendous amount of research in computer vision, to the point that much of it is already embedded in the everyday apps we use to communicate. Google Photos can automatically perform image recognition and tagging, which lets you search your photos by what's in them. Pardon my brief exit from this otherwise mostly academic essay, but it's really fucking cool. Facebook has augmented reality technology that lets you interact with virtual snowflakes falling on you while video chatting. Snapchat's augmented reality let a dancing hot dog perform on surfaces in your real, physical surroundings.

[Image of Snapchat's dancing hot dog. Caption: "You think you're sooo cool, huh"]

Meanwhile, although there has been interesting and practical research on audio that has led to creations like Shazam, which can detect what song is playing, the nature of audio as data makes it harder to develop with. As a result, the tech companies that build the devices and the common software and apps most of us use today focused far more of their effort on advancing visual technology. Users in turn became better versed in visual technology, used it socially, and raised their expectations for further advances, pushing research and development deeper into the field.

Tackling Eye Culture

Of course, audio-related technology didn't simply stall behind visual technology. Rather than being developed for novel audio use cases, though, the majority of it has been allocated to music. It has gotten to the point that streaming music is a widespread expectation, and sharing music online has become social.

Although Schafer claimed that eye culture has caused people's ability to listen to deteriorate, the affordances of faster internet, portable headphones, and streaming services like Spotify have actually greatly increased the number of people listening to music. I hoped to use this as an advantage when creating my soundscape, rather than being set back by it as a limitation. This is reflected in how I uploaded my soundscape to SoundCloud so that I could embed it in this article.

My soundscape is a composition of audio clips from my trip to Mexico pieced together over instrumental music. I believe it tackles the eye culture present in most social media because it includes no visual components and is purely aural. When someone in the 21st century wants to share a trip, they usually show pictures or videos of where they went. I decided to skip the visual modality entirely and focus on using only the aural modality to convey how my trip went, and perhaps even share it socially.

Since the social aspect of audio revolves largely around music, I decided that music had to be a key component of my project. The clips in the soundscape are in chronological order, but they change environments, and I didn't want to simply splice these bits of audio together on their own. Instead, I wanted a constant that ties them together and keeps the piece moving forward. The song I used as the background was actually procedurally generated, meaning it was created with code and math and all that fun stuff. It deserves its own article, so I won't say much more about it here.
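The article doesn't describe the actual generation method, but to give a flavor of the idea, here is a minimal illustrative sketch in Python with NumPy (not the author's code, and every function name here — `tone`, `procedural_track`, `overlay` — is invented for illustration): a backing track is generated as a random walk over a small scale, and "field recording" clips are laid end to end on top of it.

```python
import numpy as np

SR = 44100  # sample rate (Hz)

def tone(freq, dur, amp=0.3):
    """Render a sine tone with short fades at each end to avoid clicks."""
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    wave = amp * np.sin(2 * np.pi * freq * t)
    fade = min(len(wave) // 10, 2000)
    env = np.ones_like(wave)
    env[:fade] = np.linspace(0, 1, fade)
    env[-fade:] = np.linspace(1, 0, fade)
    return wave * env

def procedural_track(total_dur, seed=0):
    """Generate a simple backing track: a random walk over a 5-note scale."""
    rng = np.random.default_rng(seed)
    scale = [220.0, 247.5, 277.2, 330.0, 371.25]  # rough A pentatonic
    notes, idx, elapsed = [], 0, 0.0
    while elapsed < total_dur:
        dur = rng.choice([0.25, 0.5, 1.0])
        notes.append(tone(scale[idx], min(dur, total_dur - elapsed)))
        idx = int(np.clip(idx + rng.integers(-1, 2), 0, len(scale) - 1))
        elapsed += dur
    return np.concatenate(notes)[: int(SR * total_dur)]

def overlay(clips, background, clip_gain=1.0, bg_gain=0.4):
    """Place clips back to back over the backing track, then mix and limit."""
    sequence = np.concatenate(clips)
    out = background[: len(sequence)] * bg_gain
    out = out + sequence[: len(out)] * clip_gain
    return np.clip(out, -1.0, 1.0)  # keep the sum inside [-1, 1]
```

Writing the mixed array to a WAV file (for example with the standard-library `wave` module at 16-bit PCM) would yield a playable track; in practice the clips would of course come from real recordings rather than synthesized placeholders.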

Semiotics and Affective Potential

Wrightson writes: "Perhaps when listening to a 'soundscape' — sound heard in a real or 'virtual' environment — you have been transported to another time, another place. Conversely, maybe you have experienced the-here-and-now even more acutely as a result of listening intently." [1] I hope that my soundscape can also "transport" listeners to the kind of environment I was in. The distinct lack of any visual accompaniment plays a role in this form of experience sharing: instead of showing people exactly what I saw, I let them imagine for themselves what the scene might look like based on what they hear. It is this aspect that uniquely makes use of the affordances of sound, and that tackles the prominence of eye culture in social contexts.

I also believe that this form of experience sharing paves the way to greater freedom in its semiotics. Because the audio activates only one of the listener's senses, the rest are left to interpretation. Listeners can decide what each sound clip means to them, and perhaps what the clips mean in the order they are presented throughout the "song." This affords the listener the affective potential of the audio.

It also follows the notion of atmospheric attunement. Stewart explains the concept as:

an intimate, compositional process of dwelling in spaces that bears, gestures, gestates, worlds. Here, things matter not because of how they are represented but because they have qualities, rhythms, forces, relations, and movements… It is not an effect of other forces but a lived affect — a capacity to affect and to be affected that pushes a present into a composition, an expressivity, the sense of potentiality and event. It is an attunement of the senses, of labors, and imaginaries to potential ways of living in or living through things. [2]

Although the language is rather flowery, I believe the underlying relation between my composition and the concept of atmospheric attunement is this: my composition uses only the modality of sound to share an experience that affects the listener by transporting them to the environment I was in. Yet instead of presenting everything as it is and having the listener live vicariously through my experience, it lets them recreate their own sense of the atmosphere and derive their own meaning.

References

[1] Kendall Wrightson, "An Introduction to Acoustic Ecology."

[2] Kathleen Stewart, "Atmospheric Attunements."
