Machine learning and gestures

Off and on, I’ve been attending the excellent Hawaii iOS developer meetup, and the recent topic has been using TensorFlow in iOS apps. For over a year I’d been tossing around the idea of incorporating machine learning into Multitouch, but it always seemed a little too far away to actually get around to implementing. Seeing a physical object recognition demo app for TensorFlow in action got me pretty excited about giving it a go.

As you can imagine, gesture recognition could be an excellent application of machine learning. One of the hardest problems in gesture creation is accidental gesture execution. Each person has a unique “touch signature”: it might be easy for me to accidentally rest two fingers and tap a third, while another person might never trigger that gesture by accident. That’s why the main built-in Apple gestures are nearly all swipes. It’s surprisingly difficult to develop very specific gestures that can handle all “touch signatures”, and letting the user tweak every aspect of a gesture would be far too overwhelming. Machine learning could not only help create those tricky recognition algorithms, but also adapt them to each user in a non-invasive way.
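To make that a little more concrete, here’s a minimal sketch of how touch data might be turned into something a model could learn from. Everything here (the `TouchSample` struct, the fixed window length, the feature layout) is hypothetical and not how Multitouch actually works; it’s just one way a gesture could be flattened into a fixed-size feature vector for a per-user classifier.

```swift
import Foundation

// Hypothetical representation of a single touch sample on the trackpad.
struct TouchSample {
    var x: Float        // normalized 0...1 across the trackpad width
    var y: Float        // normalized 0...1 across the trackpad height
    var pressure: Float // normalized touch pressure
}

// Flatten a fixed-length window of touch samples into a feature vector
// a classifier could consume. Shorter gestures are zero-padded so every
// gesture produces an input of the same size.
func featureVector(from samples: [TouchSample], windowLength: Int = 16) -> [Float] {
    var features: [Float] = []
    for i in 0..<windowLength {
        if i < samples.count {
            let s = samples[i]
            features.append(contentsOf: [s.x, s.y, s.pressure])
        } else {
            features.append(contentsOf: [0, 0, 0])
        }
    }
    return features
}
```

A model trained on vectors like these, labeled with whether the user meant to perform the gesture, could in principle learn each person’s “touch signature” without them having to hand-tune thresholds.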

Swift for TensorFlow was announced two months ago, and unfortunately there’s still a disclaimer that it is not yet ready for general use by machine learning developers. On top of that, when I tried to set it up I ran into a number of issues getting the sample TensorFlow code compiling in Xcode. Outside of Xcode, it actually seems to work great. Nonetheless, there are some pretty cool aspects to it, including being able to call Python libraries directly from within Swift.
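The Python interop looks roughly like this. This is just a small sketch based on the Swift for TensorFlow toolchain’s `Python` module, and it assumes NumPy is installed in the Python environment it picks up:

```swift
import Python

// Import a Python library and call into it directly from Swift.
let np = Python.import("numpy")

let values = np.array([1.0, 2.0, 3.0, 4.0])
print(values.mean()) // prints 2.5
```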

I think I might have to wait a bit until Swift for TensorFlow is a little further along, but it’s definitely up there on my list of priorities.