Blind navigation with a wearable range camera and vibrotactile helmet
Steve Mann, Jason Huang, Ryan Janzen, Raymond Lo, Valmiki Rampersad, Alexander Chen, Taqveer Doha. MM ‘11 Proceedings of the 19th ACM international conference on Multimedia.
Date: April 1, 2011
Publication Source: Blind navigation with a wearable range camera and vibrotactile helmet.pdf
This project was inspired by Professor Steve Mann while I was taking the course ECE516 Intelligent Image Processing in my 3rd year of undergraduate studies at the University of Toronto. Near the end of the winter 2011 semester, Professor Mann gave us the idea of 'seeing through Kinect' and using it as a wearable computer. We thought the idea was really cool, so I joined up with Jason & Taqveer to hack out the first prototype in one night.
This project uses the Microsoft Kinect 3D sensor to detect obstacles and signal the user through actuators. It is meant to help blind or visually impaired individuals navigate better in indoor environments.
The Microsoft Kinect is used to replace the function of the human eye. With the Kinect's depth map, we can detect the indoor obstacles we encounter in our daily lives. In real time, the algorithm determines the location of each obstacle and signals the user with vibrators indicating the direction of the obstacle relative to the user.
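One simple way to sketch this idea (not the paper's exact algorithm) is to split each depth frame into vertical zones, one per vibrator, and report the nearest valid obstacle in each zone. The zone count, range limit, and function name below are illustrative assumptions:

```python
import numpy as np

def obstacle_directions(depth_mm, n_zones=3, max_range_mm=4000):
    """Split a depth frame into vertical zones and return the nearest
    obstacle distance (in mm) per zone. Zone count and range limit are
    hypothetical parameters, not taken from the paper."""
    h, w = depth_mm.shape
    distances = []
    for i in range(n_zones):
        zone = depth_mm[:, i * w // n_zones:(i + 1) * w // n_zones]
        # Kinect reports 0 for pixels with no depth reading; ignore them.
        valid = zone[(zone > 0) & (zone < max_range_mm)]
        distances.append(valid.min() if valid.size else max_range_mm)
    return distances  # e.g. [left, center, right] nearest distances
```

Each zone's distance would then drive the vibration strength of the corresponding actuator on the helmet.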
Actuator Distance Sensing
The actuators (vibration motors) are the key communication devices between the machine and the user. The user senses the distance and direction of obstacles through the skin on their forehead, and uses this information to find an optimal path through the environment without running into anything.
The main challenge with the actuators was finding the best signal strength to represent different distances between the user and the obstacles. In our first few attempts, we mapped distance to vibration strength with a linear function. As a result, the user couldn't really tell how far away an object was, because the skin cannot distinguish small differences in vibration intensity, so the user moved slowly among obstacles. In later prototypes, we replaced the linear function with non-linear transfer functions whose output changes dramatically as the distance changes.
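A minimal sketch of such a non-linear transfer function is a power-law curve that stays near zero at long range and rises sharply as an obstacle gets close, making nearby distance changes easy to feel. The cutoff distances and exponent below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def vibration_strength(distance_mm, d_min=500.0, d_max=4000.0, gamma=3.0):
    """Map obstacle distance to vibration strength in [0, 1].
    Power-law (non-linear) curve: nearly flat at long range, steep
    near the user. d_min, d_max, and gamma are hypothetical values."""
    d = np.clip(distance_mm, d_min, d_max)
    x = (d_max - d) / (d_max - d_min)  # 1.0 at closest range, 0.0 at max range
    return x ** gamma  # gamma > 1 expands resolution near the user
```

With gamma = 1 this degenerates to the linear mapping we started with; raising gamma concentrates the perceivable range of vibration where it matters most.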
2D Gaussian Mapping
We applied a 2D Gaussian weight map to the Kinect's depth map. As a result, obstacles at the corners of the depth map produce weaker vibration than obstacles at the center. Since the center is normally the general direction a person is heading, this weighting helps filter out noisy vibration signals from off-center objects.
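The weighting above can be sketched as a separable 2D Gaussian that peaks at 1 in the frame center and decays toward the edges; multiplying it element-wise against a proximity map attenuates off-center obstacles. The sigma fraction and function name are illustrative assumptions:

```python
import numpy as np

def gaussian_weight_map(h, w, sigma_frac=0.35):
    """Build an h-by-w Gaussian weight map peaking at the frame center,
    so obstacles near the corners contribute weaker vibration.
    sigma_frac is a hypothetical tuning parameter, not from the paper."""
    ys = np.arange(h) - (h - 1) / 2.0  # row offsets from center
    xs = np.arange(w) - (w - 1) / 2.0  # column offsets from center
    gy = np.exp(-(ys ** 2) / (2 * (sigma_frac * h) ** 2))
    gx = np.exp(-(xs ** 2) / (2 * (sigma_frac * w) ** 2))
    return np.outer(gy, gx)  # values in (0, 1], maximum at the center

# Usage sketch: attenuate a proximity map before driving the actuators.
# weighted = gaussian_weight_map(*proximity.shape) * proximity
```

Because the map is separable, it can be precomputed once per frame size rather than per frame, which keeps the per-frame cost at a single element-wise multiply.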