CCRMA 256A Reading Response 2

tess
Oct 4, 2021

Design inside-out

Before realizing that we didn’t have to do the Design Etude for this chapter, I lay awake for maybe 45 minutes just thinking of different possible inside-out designs. This led me to look up a web app that my friend Sanjay made last year, a laptop accordion, only to realize that the original application was designed years ago for this class!!! Honestly, I should’ve known.

In designing inside out, the first step is to begin with a medium and consider its constraints. In the context of the assignment, it seems like the medium should have some way of ingesting and transforming (human?) input. In the case of Ocarina/Sonic Lighter, that input was the air pressure of breath blowing into the mic. With Sanjay’s laptop accordion, it was keypresses and the angle between the two halves of a laptop. Of course, with complete control over a toy’s design, anything could become your input (light, touch, smells, scrapes on a pane of glass, knocks on a door, tongue licks), but working backwards from existing technology, I mostly thought about devices I could write software for: computers and phones. Here are some thoughts on possible inputs:

Computer:

  • webcam footage
  • angle between halves of a laptop
  • keypresses
  • sound into a mic

Phone:

  • sound into a mic
  • photos/video
  • taps on a screen
  • button presses

I was particularly interested in possible uses for laptop cameras, and my first thought was whether it was possible to create a theremin using computer vision. Of course, it turns out this has already been done, and it looks like Cornell even has a class where you create a theremin using a camera and a Raspberry Pi for an assignment! Still, I do think the theremin idea could be expanded on… perhaps bringing objects of certain colors into frame could create certain sounds, and keypresses could further modulate/filter the sine waves of the emitted sound.
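
To make that concrete, here’s a minimal sketch of what the color-tracking theremin could look like, assuming Python with OpenCV for the webcam and the sounddevice library for audio output. The HSV color bounds, pitch range, and pixel-count threshold are placeholder guesses that would need tuning, and sharing freq/amp between the video loop and the audio callback without locking is deliberately naive:

    import cv2
    import numpy as np
    import sounddevice as sd

    SAMPLE_RATE = 44100
    phase = 0.0
    freq, amp = 440.0, 0.0  # written by the video loop, read by the audio callback

    def audio_callback(outdata, frames, time, status):
        # Phase-continuous sine so pitch changes between callbacks don't click.
        global phase
        inc = 2 * np.pi * freq / SAMPLE_RATE
        outdata[:, 0] = amp * np.sin(phase + inc * np.arange(frames))
        phase = (phase + inc * frames) % (2 * np.pi)

    stream = sd.OutputStream(channels=1, samplerate=SAMPLE_RATE,
                             callback=audio_callback)
    stream.start()

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # Placeholder HSV bounds for a green-ish object; needs tuning.
        mask = cv2.inRange(hsv, (40, 80, 80), (80, 255, 255))
        ys, xs = np.nonzero(mask)
        if len(xs) > 500:  # enough pixels to trust the detection
            h, w = mask.shape
            # Like a theremin's two antennas: horizontal position sets
            # pitch (220 to 880 Hz), vertical position sets loudness.
            freq = 220.0 + (xs.mean() / w) * 660.0
            amp = 0.3 * (1.0 - ys.mean() / h)
        else:
            amp = 0.0
        cv2.imshow("cv-theremin", mask)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    stream.stop()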

The next idea I had after that was some way to read mouth movements and emit vocaloid sounds using computer vision. Given the shape of a person’s mouth, you could play a vocaloid sound that interpolates between different vowel sounds. Think of making a nice O shape with your mouth and hearing an “ooooo”, or pushing your teeth closer together and getting an “eeee”. You could open your mouth wider or narrower to modulate pitch. I also thought of using a laptop like a big clamshell mouth, using the angle between the laptop halves similarly to how Sanjay made his accordion. Perhaps you could have some pre-written sentence (e.g. “Hi, I’m a laptop monster”) and open and close your laptop to read the syllables out.
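
Here’s a similarly rough sketch of the vowel-morphing idea, assuming Python with NumPy/SciPy. It fakes the camera input: instead of reading mouth shape from a face tracker, a single openness parameter sweeps from 0 to 1 over two seconds, interpolating between approximate textbook formants for “ee” and “oo”, and it writes a WAV file rather than running live:

    import numpy as np
    from scipy.signal import butter, lfilter
    from scipy.io import wavfile

    SR = 22050
    PITCH = 150.0  # fixed fundamental; mouth size could modulate this too

    # Rough textbook formants (F1, F2) in Hz for the two target vowels.
    EE = (300.0, 2300.0)
    OO = (400.0, 800.0)

    def vowel_block(openness, t):
        """Render one block; openness=0 sounds like 'ee', 1 like 'oo'."""
        # Buzzy harmonic-rich source, a crude stand-in for a glottal pulse.
        src = sum(np.sin(2 * np.pi * PITCH * k * t) / k for k in range(1, 30))
        out = np.zeros_like(src)
        for f_ee, f_oo in zip(EE, OO):
            f = (1 - openness) * f_ee + openness * f_oo
            # Resonant bandpass around each interpolated formant.
            b, a = butter(2, [0.8 * f / (SR / 2), 1.2 * f / (SR / 2)],
                          btype="band")
            out += lfilter(b, a, src)
        return out

    # Sweep openness 0 -> 1: "eeee" morphing into "ooooo". Filter state
    # resets each block, so there are small clicks; a real version would
    # carry filter state across blocks.
    n_block, n_total = 1024, int(SR * 2.0)
    blocks = []
    for start in range(0, n_total, n_block):
        t = (start + np.arange(n_block)) / SR
        blocks.append(vowel_block(start / n_total, t))
    audio = np.concatenate(blocks)
    audio /= np.max(np.abs(audio))
    wavfile.write("vowel_morph.wav", SR, (audio * 32767).astype(np.int16))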

Further reflections on the reading

I’d like to respond to the relationship Ge draws between artful design and music, and the idea that music and technology have always co-evolved. I appreciate that music is acknowledged as a way to change one’s environment, literally transforming some input into some other output (e.g. keypresses, strumming, breath, etc. into sound) for our own enjoyment… I’m struggling a bit, given my current time crunch, to say anything new here, but I will say that this idea also helps argue against the thought that computer-based instruments will eventually replace real physical instruments. Despite the fact that so many new instruments have been created as technology evolves, we continue to play really old instruments! It seems much more likely that we will just keep creating more ways to manipulate sound.

Also, I am ~90% sure that I played with the lighter app that Smule made circa 2008(?)! Wild.
