Ultrasonic Selfies

Calebjhammel
Published in RE: Write
5 min read · Nov 6, 2020

No one to take your photo? Time to harness the limitless potential of high-frequency, cross-device audio interpretation.

Photo by Steve Gale on Unsplash

The inspiration for this project came from the aptly named podcast Twenty Thousand Hertz. The podcast focuses on the auditory world and derives its name from the highest frequency a human can hear. I was visiting my parents in July 2020, just before starting design school, and listened to an episode that has captivated me ever since. The episode was all about ultrasonic tracking and the sneaky way advertisers embed audio signatures into video content. These signatures are too high for you and me to hear, but can be easily detected by cell phones. The tech can be used to sonically transfer data or trigger actions without any connection between devices. Advertisers then partner with third-party mobile applications to recognize when a user has seen their ad. The podcast also brought up Lisnr, a service design company that utilizes ultrasonic data transfer for good. I began obsessing over this technology and knew it was the perfect starting point for the final project of my Critical Making class.

My intended creation will exist at the interactive art gallery Wonder Wonder, here in Boulder, CO. “The gallery consists of 18 immersive rooms that guests can explore and take pictures in. Most of these photos are self-portraits or selfies,” says Natalia Vinueza, the gallery manager. Guests are encouraged to take their time and to take photos. What happens, though, when people want a group photo? Who takes a group’s picture in front of a beautiful mural when no one else is around?

A photo I took on a recent visit to Wonder Wonder

Who takes your photo if you are alone? I believe we can utilize ultrasonic technology to solve these problems while requiring minimal input from users and operating on any device with an internet connection. Remote-controlled selfie sticks exist, but each requires a connection specific to the mobile device. Photo timers could help take one photo, but they force users to walk back to their device each time they want another shot. In comes the ultrasonic selfie.

The easiest way to explain this concept is to simply break down a user’s interaction with it. Upon entering the gallery, users will be encouraged to open wwcamera.com. Although this step does require user input, it is the last device action they must take for the duration of their visit. A user will then walk into one of the gallery’s rooms and place their phone in the designated mobile device holder. Next to the holder will be a speaker connected to an Arduino controller. Users can now go pose for their photo. User observation will be done prior to installation of this device to determine where most people pose for photos in each room. Next to this pose location will be a button. Once a user is ready for their photo, they simply press the button. The button press triggers the speaker to emit an audio tone at 19,990 Hz. This sound, which is inaudible to the human ear and roughly 10,000 Hz higher than any other audio source in the room, is then heard by the web page open on the mobile device and triggers the JavaScript within to take a photo. Users can press the button as many times as they’d like.
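As a rough sketch of how the page might recognize that tone: with a browser FFT (the Web Audio API's AnalyserNode, or p5.sound's FFT), each frequency bin spans sampleRate / fftSize Hz, so the page can check whether the bin nearest 19,990 Hz rises above a noise threshold. The helper names and threshold below are illustrative, not my actual code:

```javascript
// Map a target frequency (Hz) to its FFT bin index.
// Each bin spans sampleRate / fftSize Hz.
function freqToBin(freqHz, sampleRate, fftSize) {
  return Math.round(freqHz / (sampleRate / fftSize));
}

// Decide whether the ultrasonic trigger tone is present: the energy in
// the bin nearest the target frequency must exceed a noise threshold.
// Threshold of 0.5 is a placeholder to be tuned against the room's noise.
function tonePresent(spectrum, sampleRate, fftSize, targetHz = 19990, threshold = 0.5) {
  const bin = freqToBin(targetHz, sampleRate, fftSize);
  return spectrum[bin] !== undefined && spectrum[bin] > threshold;
}
```

At a typical 44,100 Hz sample rate with a 2,048-point FFT, the 19,990 Hz tone lands near bin 928, comfortably inside the spectrum and far from voices and music.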

an early prototype of the Arduino controller

While I have not finished a full prototype of this device yet, I have created enough working parts individually to see what challenges I will need to overcome. Firstly, the frame rate of my draw() loop is roughly ten frames per second, so if a tone plays for one second, ten images are captured. This would of course leave far too many images on a user’s device and will need to be addressed. Secondly, I have not yet tried to analyze audio frequencies with the microphone of a mobile device. Although this is a crucial and untested aspect of my project, I have done enough research to believe it will not be an issue. The third issue, and an unfortunately unavoidable one, regards camera access from a web browser on mobile devices. Without creating a native app, I am unable to take photos with a user’s camera itself; instead, I am actually taking a screen grab of the web page. Although this is not ideal, it is the only way to generate such images without forcing users to download an app. Perhaps a native app would become an acceptable option if usage of the device were to grow.
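One possible fix for the ten-captures-per-second problem is a simple cooldown gate inside the draw() loop: capture once, then ignore the tone until an interval has passed. A minimal sketch, with the function names and two-second cooldown as placeholders:

```javascript
// Returns a function that answers "should we capture right now?".
// After it allows one capture, it refuses further captures until
// cooldownMs milliseconds have elapsed, so a sustained one-second
// tone produces a single photo instead of ten.
function makeCaptureGate(cooldownMs = 2000) {
  let lastCapture = -Infinity;
  return function shouldCapture(nowMs) {
    if (nowMs - lastCapture >= cooldownMs) {
      lastCapture = nowMs;
      return true;
    }
    return false;
  };
}
```

Inside draw(), the page would call the gate with the current time (e.g. p5's millis()) whenever the tone is detected, and only save an image when the gate returns true.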

early prototype of the webpage — analyzed audio frequencies can be seen on the right — console.log(‘lil’) is set to trigger when audio goes above 16,000 Hz

A wide range of tools will be used to accomplish my goal. The core functionality of the webpage will be written in JavaScript using the p5.js library. The Arduino controller itself will require physical wiring and a C/C++ sketch loaded onto it. I will use 3D printing to create the mobile device holders and am working with Wonder Wonder to have them fit the theme of each room.

Although this project does not have any direct connection to my other classes, its potential for integration is enormous. To me, the greatest form of user experience design is service design, and this technology has countless possibilities within it, because it can be manipulated to act as auditory code. If one broke the chirps down into two frequencies and timed them out, one could essentially transmit binary code without any other form of connection between devices. I could list dozens of ideas that conversations about this tech have spawned. One could place audio signatures inside streaming video content to trigger an action on a mobile device, creating a more immersive viewing experience. Hotels could use this technology to know the second a guest walks into the lobby, allowing them to have room keys ready by the time the guest reaches the front desk. The possibilities of ultrasonic audio are endless, as is my excitement around it.
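To make the two-frequency binary idea concrete, here is a toy encoding: each character of a message becomes eight tones, one of two ultrasonic frequencies per bit. The specific frequencies and the byte-per-character framing are invented for illustration; a real protocol (like Lisnr's) would add error correction and synchronization:

```javascript
// Hypothetical tone frequencies for the two bit values, in Hz.
const FREQ_ZERO = 18500; // represents bit 0
const FREQ_ONE = 19500;  // represents bit 1

// Turn a text message into a sequence of tone frequencies,
// eight tones (one byte) per character.
function encodeToTones(text) {
  const tones = [];
  for (const ch of text) {
    const bits = ch.charCodeAt(0).toString(2).padStart(8, '0');
    for (const b of bits) {
      tones.push(b === '1' ? FREQ_ONE : FREQ_ZERO);
    }
  }
  return tones;
}

// Reverse the process: read tones back into characters, eight at a time.
function decodeFromTones(tones) {
  let out = '';
  for (let i = 0; i < tones.length; i += 8) {
    const byte = tones
      .slice(i, i + 8)
      .map((f) => (f === FREQ_ONE ? '1' : '0'))
      .join('');
    out += String.fromCharCode(parseInt(byte, 2));
  }
  return out;
}
```

A speaker would play the tone sequence while a listening page runs the same FFT-based detection per time slice, recovering the message with no pairing between the devices.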
