My Moto 360 Sport survived a Spartan Race

Why can’t apps just die?

How context-aware services are replacing apps on connected things.

Ubiquitous computing for health and fitness [Part 2/2]

75% of American smartphone owners download at least 1 app/month

In the first part of this two-part post, I reviewed my experience with the Moto 360 Sport smartwatch during an obstacle course race. In this second part, I'll take that further with an opinionated take on where smartwatches, and ubiquitous computing as it pertains to health and fitness, are going.

As you know by now, the main difference between the Moto 360 Sport and its first-generation counterpart is the built-in GPS and offline capabilities. This means I can go on a run without a phone and still track my route, listen to music, and get stats afterwards. I hate running with a phone, and the freedom of enjoying music and tracking my fitness while untethered is life-changing.

I work out with my own app, LynxFit, but when I run I use Strava. Strava has an impressive Android Wear app with smart features like activity recognition: when I start running, the timer and recording start; when I stop, Strava knows to pause my activity for me. This makes running so much more seamless, because technology and screen interaction stay out of the way during the activity.

Today, for me, running feels futuristic. I don't need to interact with an app; the watch gathers sensor data and uses context to provide useful, semantic information about my kinesthetic activity. It's like I'm in the future: I pair my Bluetooth headphones with the watch, tap start, and go; no cords, no phone, just sweat and pain.

Strava Offline Mode on Google Glass
I really believe context will eat apps, so let’s start leaving the apps behind

On phones, apps will eventually die as well, but in the immediate future, connected things like smartwatches, cars, and home and enterprise devices won't need apps except for setup, management, and unique, non-repetitive use cases.

Virtual assistants, the ‘un-app’ applications

Currently, the 'not-quite-baked' feature in Android Wear is full Google Now voice-action integration. When paired with an Android phone, the full power of Google Now voice actions means I can control activities entirely through voice. With voice actions, I can pause, play, and skip tracks, along with a host of other actions, just by speaking a command to Google's AI-powered personal assistant, Google Now. This isn't quite live for wearables at the moment, but I'm hopeful Android Wear 2.0 introduces more Google Now voice functionality.

As "personal assistants" like Alexa, Cortana, and Google Assistant get smarter and more mature, their functionality will permeate physical hardware. Coupled with technology like Google's Fence API, launching an app and manually managing its state will become a thing of the past. Fence extends the concept of geofencing to cover where you are, what you are doing, device-specific information, and surrounding sensor conditions, providing context and assistive actions to the user, often hands-free.
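The core idea behind a fence is simple: small context signals (current activity, headphone state, location) combine into a boolean condition that fires an action without anyone opening an app. Here's a minimal toy sketch of that idea in Python; the names (`Context`, `and_fence`, etc.) are mine and this is not the real Android Awareness API, just an illustration of how composable context predicates work:

```python
# Toy model of "context fences": predicates over a snapshot of sensor
# state that can be composed, mimicking the Fence API's AND semantics.
from dataclasses import dataclass


@dataclass
class Context:
    activity: str           # e.g. "running", "walking", "still"
    headphones_plugged: bool


def activity_is(name):
    """Fence that is true while the detected activity matches `name`."""
    return lambda ctx: ctx.activity == name


def headphones_in():
    """Fence that is true while headphones are connected."""
    return lambda ctx: ctx.headphones_plugged


def and_fence(*fences):
    """Composite fence: true only when every sub-fence is true."""
    return lambda ctx: all(fence(ctx) for fence in fences)


# Trigger: start recording a run when the user is running with
# headphones in -- no app launch, no manual state management.
start_run_fence = and_fence(activity_is("running"), headphones_in())

print(start_run_fence(Context("running", True)))   # True: start recording
print(start_run_fence(Context("still", True)))     # False: do nothing
```

In the real Awareness API the fence is registered once with a callback, and the system evaluates it continuously in the background; the app itself never has to be in the foreground.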

We see this now in software, where Google leverages its massive traffic dataset to advise you to leave 15 minutes earlier to make it to your destination. Imagine that norm applied to hardware. You could have a smart home truly connected to you: a home that knows to dim the lights upstairs and shift their hue to warm colors, because it knows you were in an office all day, and those blue wavelengths from the fluorescent lights sucked away all the melatonin you need to sleep.

This is all part of what Google is calling the Awareness API, and I encourage you to check it out.

Today, most Android Wear applications outside of health and fitness are simply repurposed phone apps. The apps available today don't leverage context particularly well; they aren't built from the ground up for the "connected thing" medium. But there is promise, from Apple to Amazon, Microsoft to Google, and maybe some startup you've never heard of.

I am personally longing for the day the touchscreen takes a backseat to the wiser sensors and things around me. Let’s get rid of apps and replace them with contextual triggers.