Navigating the Blind with Microsoft’s HoloLens

Microsoft HoloLens (image: Microsoft)

Microsoft’s HoloLens maps your environment.

That’s incredible! Well, it definitely was in 2015. But besides games, what other practical things can we do with real-time mapping?

We can help navigate the blind!

Paul, a blind man, walks by my apartment almost every day. I’ve seen low tree branches hit him in the face. I’ve seen him smack straight into the wall of a parking garage. One day I even saw him walk straight out into the middle of an intersection when he missed the bumps in the sidewalk.

In an age of autonomous cars, why can’t we build something that at least lets the blind navigate from point A to point B safely?

This seemed like a great application for the HoloLens. Because of its real-time mapping capabilities, the HoloLens could let the Pauls of the world know when they were about to run into something their white cane had missed.

Beyond just avoiding obstacles, however, the HoloLens could also tell Paul how to get from point A to point B.

That’s where the idea really began to form.

A couple weekends later my friends and I hacked together an initial version of the idea using an Xbox Kinect and some headphones.

After seeing that initial, purely obstacle-avoidance version working and getting some feedback from early users, we decided to move from the Kinect to the HoloLens.

Originally we chose music to alert users to objects in different directions. Each region was tied to a different instrument: guitar for objects dead ahead, saxophone for objects to the right, and waterphone for objects to the left. The idea was that the instruments would create a sort of orchestra of sound, and from the changes in that orchestra the user could guide his or her way through the area. Each object was pinged with its instrument playing as a 3D sound effect, giving the user information such as distance and location in 3D space. After testing with users, however, we found that three regions did not give the user enough information about where exactly the object or objects were.
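To make that concrete, here’s a minimal Python sketch of the region-to-instrument idea. The real app was not written in Python, and every value here (the region angles, the instrument assignments, the ping timing) is an illustrative assumption rather than what we actually shipped.

```python
# Sketch of the original three-region scheme (all values assumed).

REGIONS = [
    ("left",  -90.0, -20.0, "waterphone"),
    ("ahead", -20.0,  20.0, "guitar"),
    ("right",  20.0,  90.0, "saxophone"),
]

def instrument_for(bearing_deg):
    """Pick an instrument from an obstacle's horizontal bearing
    (degrees relative to the user's gaze, 0 = dead ahead)."""
    for _, lo, hi, instrument in REGIONS:
        if lo <= bearing_deg < hi:
            return instrument
    return None  # behind the user: no alert

def ping_interval_s(distance_m):
    """Closer obstacles ping faster, clamped to a comfortable range."""
    return min(2.0, max(0.2, 0.5 * distance_m))
```

Each detected obstacle would then play its instrument as a spatialized 3D sound from the obstacle’s position, repeating on its ping interval.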

We decided to add more regions and more instruments to increase fidelity. The three initial regions blossomed into eight, with a host of orchestral instruments from violin to bass drum, but it became even more difficult for the user to discern where the objects were. The 3D effect was lost in the cacophony as objects in overlapping regions pinged over one another, and walls, such as those lining a hallway, demanded more attention than they deserved.

Seeing that the region-based approach was not working as anticipated, we refined it by adding more sounds and by identifying solid planes as walls and giving them their own sound. Instead of instruments, we switched to gentler sounds for static objects such as walls and floors. Since these were always ringing in the background, we reduced their volume so that objects the user was about to hit took clear precedence over the background noise, and the user knew what to focus on immediately. These refinements produced a much calmer experience, but the fidelity was still quite low: knowing an object was there was good, but users now had trouble finding a way around it.
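In sketch form, the precedence rule looked something like the following; the specific volume levels are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    distance_m: float
    is_static_plane: bool      # detected as a wall or floor plane
    on_collision_course: bool  # lies along the user's current heading

def volume_for(obs: Obstacle) -> float:
    """Static planes hum quietly in the background; anything the user is
    about to hit plays at full volume so it stands out (levels assumed)."""
    if obs.on_collision_course:
        return 1.0
    if obs.is_static_plane:
        return 0.15
    return 0.4
```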

Not wanting to add more regions, we went with a sort of radar approach instead. A “ball” would sweep back and forth across the HoloLens map and ping any objects it found, such as walls (see video below), with the urgency of the pinging corresponding to the object’s proximity. Because the user only had to focus on one noise at a time, this radar approach let them build a much more meaningful mental map of what the space actually looked like.
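As a rough Python sketch of the sweep logic (the sweep width, period, and ping rates are all assumptions, and the raycast against the HoloLens spatial map is left out):

```python
SWEEP_HALF_ANGLE_DEG = 60.0  # how far the ball swings left/right (assumed)
SWEEP_PERIOD_S = 2.0         # one full left-right-left cycle (assumed)

def sweep_angle_deg(t_s):
    """Triangle wave: the ball glides smoothly between -60 and +60 degrees."""
    phase = (t_s % SWEEP_PERIOD_S) / SWEEP_PERIOD_S  # 0..1
    return (4.0 * abs(phase - 0.5) - 1.0) * SWEEP_HALF_ANGLE_DEG

def ping_rate_hz(distance_m):
    """Pinging gets more urgent as the surface the ball found gets closer."""
    return min(8.0, max(0.5, 8.0 / max(distance_m, 0.25)))
```

On each frame the app would cast a ray along the current sweep angle into the spatial map and, on a hit, play the ping as a 3D sound from the hit point at the rate above.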

This radar approach turned out to work extremely well with further refinement and practice. After about 30 minutes to an hour of practice for a blind user, and about two hours for a sighted user, they were able to navigate a novel space at about 70% of the speed of a sighted control. We found that the biggest part of the practice was not learning to understand the sound or to build a mental map of the room, but learning to trust the information being provided. Building confidence in the device turned out to be the biggest hurdle; once users gained that confidence, they were able to move at just shy of a regular walking pace.

We then added a couple more sounds to give the user more information. We made the collision-alert “cone” broader so it would catch an object that was about to be hit at the feet as well as at the head. We also added a mapping sound to tell the user that the HoloLens was still mapping the area and that they should wait a moment, perhaps looking around to help the process along.
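The broadened cone amounts to a check like this (the angle, range, and heights are illustrative):

```python
CONE_HALF_ANGLE_DEG = 25.0  # horizontal half-width of the alert cone (assumed)
ALERT_RANGE_M = 2.0         # only warn about imminent hits (assumed)

def in_collision_cone(bearing_deg, distance_m, height_m, user_height_m=1.8):
    """True if an obstacle is close, roughly ahead, and anywhere between
    the user's feet and head, so curbs and branches both trigger alerts."""
    ahead = abs(bearing_deg) <= CONE_HALF_ANGLE_DEG and distance_m <= ALERT_RANGE_M
    feet_to_head = 0.0 <= height_m <= user_height_m
    return ahead and feet_to_head
```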

Beyond creating a map, the HoloLens also ships with Microsoft’s Cortana voice assistant, which can open apps for the user. Thanks to Cortana, a blind user never needs to see the HoloLens interface: press the power button to turn on the device, say “Hey Cortana, open SoundSense,” and the app opens, ready to go. The volume buttons on the side then let the user adjust the volume, or mute it if they do not need the sound.

There is still so much to do on this project. It would be great to tether a phone to the HoloLens so the phone could supply GPS data for outdoor guidance. We would also like to add voice-based alerts such as “Watch out: stairs going down, 20 steps away” or “Hallway ends in 30 steps; go left or right at 90 degrees.” This is more along the lines of GPS guidance, but having it indoors would give the blind and visually impaired far more freedom to navigate indoor environments without assistance.