BoseAR: Developing a Spatial BoseAR Experience

Developing a geolocation-based spatial audio experience

Akshansh Chaudhary
The Stories Within
7 min read · May 27, 2020


This post is part of The Stories Within series, a project by students of Parsons School of Design for the Spatial Computing Challenge, under the guidance of the R&D team at The New York Times and NYC Media Lab. The project is introduced in The Future of News series!

What is BoseAR?

BoseAR, or Bose Augmented Reality, is an audio AR technology from Bose. It allows the user to “interact” with audio. Generally, audio is head-locked to the listener: if you move your head, the audio moves with you. The BoseAR platform, however, allows interactivity with spatial sounds by placing them in physical space so that they are not head-locked. AR is usually considered a visual medium, but BoseAR augments the physical world with audio.
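
To make that distinction concrete, here is a minimal Unity C# sketch of a world-locked source. This is not the SDK's actual code; GetHeadRotation() is a hypothetical stand-in for the orientation the Frames report. Only the listener rotates, so Unity's spatializer re-pans the sound and it stays anchored in the room:

```csharp
using UnityEngine;

// Sketch of head-locked vs. world-locked audio.
// GetHeadRotation() is a hypothetical stand-in for the rotation
// the Bose Frames' IMU reports each frame.
public class WorldLockedAudio : MonoBehaviour
{
    public Transform head;        // carries the AudioListener
    public AudioSource source;    // fixed world position, spatialBlend = 1 (3D)

    void Update()
    {
        // Head-locked audio would parent the source to the head, so it
        // never moves relative to the listener. World-locked audio leaves
        // the source where it is and rotates only the listener: as the
        // head turns, the sound is re-panned and appears fixed in space.
        head.rotation = GetHeadRotation();
    }

    Quaternion GetHeadRotation()
    {
        // Placeholder: in the real app this comes from the Bose Wearable SDK.
        return Quaternion.identity;
    }
}
```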

Getting Started with BoseAR

We were early adopters of this platform and realized that it had the potential for creating innovative audio experiences. We became BoseAR developers, installed the SDK for Unity, and customized the audio outputs to make them interactive. This post is a getting-started guide to our findings from the SDK and to customizing it to create interactive audio experiences.

Note: COVID-19 has led to changes in the development efforts towards BoseAR. Currently, the platform is closed to new developers, but it is expected to open again soon.

The following steps explain the flow of onboarding on BoseAR:

  1. Register as a developer on BoseAR
  2. Download and install the SDK for Unity
  3. Customize the demo scenes
  4. Export the app for Android/iOS
  5. Experience audio AR on a BoseAR compatible headset

Customizing BoseAR SDK Inside Unity

Once you import the Unity package for BoseAR or download the project from the GitHub link (Spatial Audio Bose AR), your Unity project will have a folder called “Bose”. This folder contains all the demo scenes, scripts, and other models (prefabs) that allow easy onboarding and customization. Everything you need is in the “Wearable” folder.

Navigate to Bose > Wearable > Example App > Scenes > Root. Root is the starting scene of your application. As soon as you connect a BoseAR-compatible device (in our case, the Bose Frames Alto) and run the Root scene, the Game view should show the Main Menu, which detects the BoseAR device and calibrates the gyroscope. This has to be done every time, because the device's initial orientation defines its sense of direction. If you are facing a window when you start the calibration, then “north” is toward that window; the device does not use true north based on cardinal directions.
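
In code, that calibration amounts to storing a reference orientation at startup. A minimal sketch, assuming a hypothetical GetRawDeviceRotation() in place of the SDK's actual sensor reading:

```csharp
using UnityEngine;

// Sketch of the calibration step: whatever direction the Frames face
// when calibration runs becomes "forward" for the rest of the experience.
public class HeadingCalibrator : MonoBehaviour
{
    private Quaternion reference = Quaternion.identity;

    public void Calibrate()
    {
        // Store the current device orientation; everything after this is
        // measured relative to it, not to true cardinal north.
        reference = GetRawDeviceRotation();
    }

    public Quaternion GetRelativeRotation()
    {
        return Quaternion.Inverse(reference) * GetRawDeviceRotation();
    }

    Quaternion GetRawDeviceRotation()
    {
        return Quaternion.identity; // placeholder for the SDK's sensor value
    }
}
```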

The BoseAR package in Unity is organized as a set of interconnected scenes. Root is the main scene, and it links to “MainMenu”, “BasicDemo”, “AdvancedDemo”, “GestureDemo”, and “DebugDemo”. You can see this when you open the Build Settings: all the scenes must be included, because none of them works independently.
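
The sketch below illustrates the pattern with our own method names (not the SDK's): Root loads the linked scenes by name at runtime, and Unity's SceneManager can only load scenes that were added to Build Settings, which is why they all have to be there.

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch of scene interconnectivity: Root switches to the demo scenes
// by name, so every scene it references must appear in Build Settings.
public class RootMenu : MonoBehaviour
{
    public void OpenAdvancedDemo()
    {
        SceneManager.LoadScene("AdvancedDemo");
    }

    public void BackToMenu()
    {
        SceneManager.LoadScene("MainMenu");
    }
}
```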

Now, to customize individual scenes, navigate to Bose > Wearable > Example App > Demos > Advanced > Scenes > AdvancedDemo. Our audio experience for “The Stories Within” relies primarily on the instantiation of audio in 3D space at different times during the story. So, we customized the AdvancedDemo scene with multiple audio tracks and adjusted the scripts accordingly.

When you play the scene, you see a translucent sphere with a blue torch in the center. The torch represents the position of your head (it's actually the position of the Bose Frames). There is a glowing yellow target that grows as you look towards it; this yellow target is the audio source. The targets instantiate on the periphery of the sphere based on the adjusted settings. You can move your head and listen to different sounds as the targets appear and disappear from view.
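
That growing effect can be approximated in a few lines of Unity C#. This is a sketch of the idea rather than the SDK's actual script; the head transform is assumed to be driven by the Frames' rotation:

```csharp
using UnityEngine;

// Sketch: scale a target up as the wearer's gaze aligns with it.
public class GazeScale : MonoBehaviour
{
    public Transform head;        // driven by the Frames' rotation
    public float minScale = 0.2f;
    public float maxScale = 1f;

    void Update()
    {
        Vector3 toTarget = (transform.position - head.position).normalized;
        // Dot product is 1 when looking straight at the target, 0 at 90 degrees.
        float focus = Mathf.Clamp01(Vector3.Dot(head.forward, toTarget));
        transform.localScale = Vector3.one * Mathf.Lerp(minScale, maxScale, focus);
    }
}
```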

But how is this soundscape interactive? That's the innovation! Interactivity with an audio source is based on the gaze of the headset. Put simply, a sound's volume depends on how far your gaze is from its target. If you look directly at a target, its sound grows louder; if you look away, it gets quieter. This is controlled through the “Close”, “Middle”, and “Far” settings for each audio clip. Essentially, every target is assigned the same audio file three times, once each for Close, Middle, and Far, and the volume is adjusted for each layer: louder for Close, quieter for Far.
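
Here is a sketch of that three-layer crossfade in Unity C#. The angle threshold and volume curves are illustrative assumptions, not the SDK's actual values:

```csharp
using UnityEngine;

// Sketch of the Close/Middle/Far idea: the same clip plays on three
// AudioSources, and their volumes are crossfaded by the angle between
// the wearer's gaze and the direction to the target.
public class GazeVolume : MonoBehaviour
{
    public Transform head;                 // driven by the Frames' rotation
    public AudioSource close, middle, far; // same clip, three volume layers

    void Update()
    {
        Vector3 toTarget = (transform.position - head.position).normalized;
        float angle = Vector3.Angle(head.forward, toTarget); // 0 = looking at it

        // Map the angle to a 0..1 "focus" value: 1 when looking straight
        // at the target, 0 once the gaze is 90 degrees or more away.
        float focus = Mathf.Clamp01(1f - angle / 90f);

        close.volume  = focus;                // loudest when gazed at
        middle.volume = 0.25f + 0.5f * focus; // always somewhat audible
        far.volume    = 0.5f * (1f - focus);  // dominates when looking away
    }
}
```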

All the audio clips are attached to the yellow targets in the scene. So, to change the sounds, you have to change the audio source in the Target prefabs. Simply drag and drop the required audio clip onto the SFX Loop elements, and the audio levels will adjust automatically. These targets are located in the “Shapes” folder. Navigate to … Demos > Advanced > Prefabs > Shapes and double-click a target to open it.

Inside the Shapes folder, you will find six targets (Target, Target 1, … Target 5). That's because we tweaked the instantiation (appearance) of the targets to six specific spots: North, South, East, West, Top, and Bottom. Each target can carry a different audio clip and can play at a specific time during the story. The customization lives in the scripts, and the settings for the audio targets can be updated in the “Game Controller”. The main script responsible for that customization is the AdvancedDemoController; you can find it under “Scripts” in the “Advanced” folder.
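
A simplified sketch of that timed, six-spot instantiation; the prefab list, radius, and cue times here are placeholders for the values configured in the Game Controller and AdvancedDemoController:

```csharp
using System.Collections;
using UnityEngine;

// Sketch: spawn each of the six targets at a fixed direction on the
// sphere's periphery at a specific time during the story.
public class TimedTargetSpawner : MonoBehaviour
{
    public GameObject[] targetPrefabs; // Target, Target 1, ... Target 5
    public float radius = 2f;          // matches the translucent sphere
    public float[] cueTimes = { 0f, 10f, 25f, 40f, 55f, 70f }; // seconds

    private static readonly Vector3[] spots =
    {
        Vector3.forward, Vector3.back,  // North, South
        Vector3.right,   Vector3.left,  // East, West
        Vector3.up,      Vector3.down   // Top, Bottom
    };

    IEnumerator Start()
    {
        float elapsed = 0f;
        for (int i = 0; i < spots.Length; i++)
        {
            // Wait until the story reaches this target's cue time.
            yield return new WaitForSeconds(cueTimes[i] - elapsed);
            elapsed = cueTimes[i];
            Instantiate(targetPrefabs[i], spots[i] * radius, Quaternion.identity);
        }
    }
}
```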

That concludes this post. Our process with BoseAR involved experimenting with the medium to create an interactive audio experience. We were able to customize the package to fit our needs and exported the app to Android to play the experience. The BoseAR platform is still evolving and getting better. Currently, the experience works as its own app, but it could be integrated with other apps to enhance the listening experience for the user.

About the Team

Akshansh Chaudhary is an XR Experience Designer. In his projects, he focuses on social and world issues like privacy and local news. His design approach is to create immersive and playful experiences to spread awareness. Follow his work on akshansh.net.

Karen Asmar is a design technologist working at the intersection of the built environment, society and human-computer interaction. Her work focuses on exploring the impact of emerging technologies on ways we interact in space, with space and with data in space. Follow her work on karenasmar.com.

Ponvishal Chidambaranathan is a fervent immersive storyteller and digital producer with a strong inclination towards innovative, philosophically charged content in unconventional storytelling and interactive media. Follow his work on ponvishal.com.

Yashwanth Iragattapu is a creative technologist and interaction designer. He creates products that encompass human and spatial interactions through emerging technologies like augmented reality and virtual reality. Follow his work on yashwanthiragattapu.com.

Debra Anderson is an entrepreneur and educator specializing in XR. She is recognized for designing data-driven, research-based approaches to immersive experiences, with a focus on how data and emerging technologies can be used for civic engagement and social impact.
