Gestures and Notifications as the Interactive Operating System

One of the things I'm noticing as I set up my new (to me) Google Nexus 5 is that I'm not relying on the usual analogy of folders and icons on home screens to navigate the device. I've actually resorted to taking all of the icons off of the main interface area. The goal: get around the device quickly, and be notified only when necessary, when there is something that needs my attention.

It started when I added a program called All Gestures so I could get around the device much like I did with my Nokia N9. Now I'm noticing I've gone even further: information that would normally come through an application, such as weather or updates from specific websites, I've made show up only in the notification tray (using IFTTT). If you will, I'm taking information that comes to my mobile device and putting it in a more temporary place than an application.

I'm not only doing this on the Nexus 5; I've also done it on my iPad. I set up one custom action in IFTTT so that when I put a scheduled reminder into the iPad (I usually use Siri to do this), it sends a notification about that reminder to my Nexus. While I don't like the idea of syncing tasks across devices (calendars, sure; tasks, not so much), doing this at least lets me take advantage of the productivity work I do on my iPad and the personal-assistant context I'm pursuing on the Nexus.
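For readers curious about the plumbing, here is a minimal sketch of how a similar "send it to the notification tray" trigger could be wired up programmatically through IFTTT's Webhooks (Maker) service. This is purely illustrative: my actual recipes are ordinary IFTTT channels chained together, with no code involved, and the event name, key, and helper function below are placeholders rather than anything from my setup.

```python
# Hypothetical sketch: fire an IFTTT Webhooks (Maker) event that a companion
# applet turns into a phone notification. The event name and key are
# placeholders; swap in your own from https://ifttt.com/maker_webhooks.
import requests

IFTTT_KEY = "YOUR_WEBHOOKS_KEY"   # personal key from the Webhooks service page
EVENT_NAME = "reminder_added"     # arbitrary event name chosen when building the applet


def send_reminder_notification(title: str, due: str) -> None:
    """POST the event; the paired applet posts it to the notification tray."""
    url = f"https://maker.ifttt.com/trigger/{EVENT_NAME}/with/key/{IFTTT_KEY}"
    payload = {"value1": title, "value2": due}
    response = requests.post(url, json=payload, timeout=10)
    response.raise_for_status()


if __name__ == "__main__":
    send_reminder_notification("Call the dentist", "Tomorrow 9:00 AM")
```

The point of the pattern is the same either way: the payload lands in the notification shade, gets glanced at, and then goes away.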

It was on the iPad that I actually thought of this kind of interaction. But I don't take full credit for it; I know I read about it previously. The article went something like, "what if the notification tray became the central hub for how you interact with your device?" I think it talked about some of the new chat applications and how they work entirely inside the notification shade. The advantage being that the conversation stays somewhere you can find it, but it doesn't have to hang around as long as an application icon or badge.

Along with this idea of sending things to the notification tray, I'm working around the fact that my Nexus 5 doesn't have the same always-on standby screen as the N9. I'm going to use an application (Vibify) that gives me a vibration whenever I pick up the phone and there's a notification waiting for me. That way I'm not turning the screen on just to see if a notification is there. The application only gives me this vibration for specific apps. I'm less likely to take the phone out of my pocket every few seconds if I know that when I do, it'll vibrate to let me know there's something that needs my attention (no more phantom phone vibrations).

My thinking here: if I'm going to use devices that claim to be smart, why not take away the perception of what made them smart (touch, lots of apps, connectivity), and start adding intelligence with gestures, voice, and context to augment the equation of a "smart" mobile device? As I wrote in an article a long time ago: if our devices are so smart, why are we adapting to them instead of them adapting to us?

I often quote Bill Buxton at the beginning of my presentations, and his words really do make the most sense here: "what if we paid more attention to interaction than to presentation?" What if we really interacted with the content on mobile devices? What would our relationship to our devices, apps, and services be when that kind of interface happens?
