During my freshman year, I met a graphics professor who got me hooked on the idea of live virtual avatars. We were both dissatisfied with the existing ways of controlling player characters in immersive games, and we eventually decided that avatar locomotion and expression needed a better solution. With the help of vision-based technologies such as the Microsoft Kinect and facial landmark recognition, we set out to build our own motion-capture system. This article, inspired by my final research paper, shares my implementation of that system.
Researcher @ Cornell University