Why Wearables Will Be Successful Even If They Fail, Part 2

This is a continuation of thoughts I had around wearables. You can check out the first part here.

As I stated in the first part of this essay, the definition of success for wearables should not focus on how many units sell, but on how they change the way we view computing. Beyond that change in perception, another clear measure of success is the impact wearables have on new interactions. Mass-market adoption of new technologies is consistently a series of baby steps. Initially, we apply the cognitive models of previous technologies to new ones. These ‘training wheels’ increase the speed at which we can accept and understand new technologies. Over time this changes, and we learn new interaction methods that become part of our shared understanding of how technology works. Then it flips, and we wonder why these interactions won’t work on previous technologies (everyone has a story about a kid who touches any device with a screen and is confused when it doesn’t react to gestures). These new affordances, methods for interacting with data, and cognitive models that are fleshed out through new technologies are where the real ‘magic’ lies. Whether wearables as a product, category, or technology will be successful depends greatly on which new interactions emerge and are embraced.

The difference in use case is the most important factor driving this. An interface slightly larger than a coin, continually attached to us, changes our goals and expectations. The goal is no longer to take what we do on a desktop (or even a smartphone, for that matter) and ‘shoe-horn’ it into a different screen size. Who wants to edit a spreadsheet on their watch anyway? A timely spending alert while at the store, however, would be very convenient, especially if I can swipe to dismiss it. This means less time drilling into hierarchies and fiddling one-handed with tiny buttons, and more contextual, personalized, immediate interaction. Pushed content requiring limited interaction (acknowledgement, validation, peripheral notice) is pretty much the extent of GUI interaction a wearable comfortably allows.

From touchscreen UI we have learned gestures, which improve interaction within limited screen sizes and allow us to navigate in more dimensions. One inherent limitation is that while gestures work well for things like ‘swiping’ and ‘scaling’, they are poorly suited to fine adjustments. A gesture is good for stepping between set options in a fixed order (think pre-set stations on streaming radio), but rather clumsy when you want to increase the volume by 20%. Likewise, no one wants to ‘thumb’ through the alphabet repeatedly in order to write a text (hence the inclusion of Swype-style keyboards in the newest Android Wear announcement). Even so, this is a poor interaction and not very convenient on a watch.
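The discrete-versus-continuous distinction above can be made concrete with a minimal sketch. All names and step sizes here are illustrative assumptions, not any real wearable API:

```python
# Swipe gestures map naturally onto discrete steps, but poorly onto fine tuning.
PRESET_STATIONS = ["news", "jazz", "rock", "classical"]

def next_station(current_index: int, swipe_direction: int) -> int:
    """One swipe moves one step through a fixed list -- a natural fit for gestures."""
    return (current_index + swipe_direction) % len(PRESET_STATIONS)

def swipes_for_volume_change(percent_change: int, percent_per_swipe: int = 2) -> int:
    """The same gesture is clumsy for fine control: a 20% change needs many swipes."""
    return abs(percent_change) // percent_per_swipe
```

With these hypothetical numbers, a single swipe changes the station, while a 20% volume change at 2% per swipe costs ten separate gestures, which is the clumsiness the paragraph above describes.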

To increase accuracy and comfort, we need interaction methods that rely less on touch. Voice input is increasingly reliable and, depending on context, can be much more direct. While Echo, Siri, Cortana, and Google Now aren’t quite on the level of ‘Computer’ in Star Trek, the quality and specificity of voice interaction is rising rapidly. For the very short, quick interactions on a wearable device, voice input lends itself perfectly to sending a quick message, running a basic search, or fine-tuning a setting within an open app. Matched with the visual output of the GUI, location awareness, and knowledge of the time of day, your schedule, and how you are moving, this can become a powerful way to manage notifications and messaging.

Having said that, we still need to figure out a number of elements for voice. How do you activate voice control? What are the accepted social customs and good keywords (only Michael Knight looks badass talking to his watch)? What is the best way to receive feedback from the device? As FJ van Wingerde recently put it: ‘Use it in a bus some time, then get back to me’. Fair point, but probably a red herring in the long run: we got over the ‘Bluetooth headset drones’, and we accept people walking through cities without looking up from their smartphones. A solid pair of Bluetooth headphones with a mic probably resolves this issue.

There are other interactions that hold a lot of promise as well. Contextual triggers have been a popular idea for a while, and they make a lot of sense for wearables. When your ability to input and interact with data is limited by the screen itself, having contextually aware information provided to you becomes even more important. Google Now/Assistant and push notifications are the best consumer examples of this (so far). With different data sources, we can see how this becomes actionable in the wearables space: turn-by-turn directions, evaluating distance travelled for fitness, surfacing items from a grocery list when you walk near a grocery store, or alerts based on weather and time of day, to name a few. After all, having your boarding pass pop up on your watch as you walk up to the security gate is a lot easier than digging your phone out of your pocket. While there is a trade-off in privacy and data, the ease of these pushed interactions on always-on wearables is immense. As we demand ever more targeted interaction, the understanding of context will rely more and more on triggers in the world. This could give the somewhat lacklustre beacon technology new purpose as a way for third parties to push information to users. The first experiments and iterations between the device and the environment will lay the foundation for the future of this type of interaction.
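At its core, a contextual trigger is just a mapping from the current context to a pushed notification. The following sketch, with entirely invented place names and rules, shows the shape of the idea using the grocery-list and boarding-pass examples from above:

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    location: str                 # nearest known place, e.g. from a beacon or GPS
    hour: int                     # local hour of day
    grocery_list: list = field(default_factory=list)

def contextual_alerts(ctx: Context) -> list:
    """Map the current context to the notifications a wearable might push."""
    alerts = []
    if ctx.location == "grocery_store" and ctx.grocery_list:
        alerts.append("Nearby: you still need " + ", ".join(ctx.grocery_list))
    if ctx.location == "airport_security":
        alerts.append("Boarding pass ready on your wrist")
    if ctx.location == "office" and ctx.hour >= 17:
        alerts.append("Check the weather before heading home")
    return alerts
```

A real system would learn these rules rather than hard-code them, but even this toy version shows why richer context (location, time, your lists and schedule) directly buys more useful pushes.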

In all, the viable methods break down into three distinct areas:

  • Interacting with the device as a window to the cloud of data
  • Beacons, NFC, location-based, and temporal triggers
  • Text-to-speech, dictation, and voice search
It is in the latter two areas that wearables provide the greatest opportunity for growth. While Apple and Google have provided only limited access to these APIs so far, the platforms and the creativity of developers offer a springboard to break away from screen input. Most likely it will be a mix of interaction methods that grow together: a user gets a push notification, actions it via voice, receives haptic feedback confirming the device captured the information, and gestures to close the application, or some similar flow. Even if wearables are deemed a failure by Wall Street, it is the innovation, standardization, and growth in interaction that will lay the groundwork for future success.
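That mixed flow is essentially a tiny state machine. A toy sketch, with invented event and state names, of the push → voice → haptic → gesture sequence:

```python
# Toy state machine for the combined interaction flow:
# push notification -> voice reply -> haptic confirmation -> gesture dismissal.
TRANSITIONS = {
    ("pushed", "voice_reply"): "captured",
    ("captured", "haptic_ack"): "confirmed",
    ("confirmed", "swipe_dismiss"): "closed",
}

def run_interaction(events):
    """Walk the events through the flow; out-of-order events are simply ignored."""
    state = "pushed"
    for event in events:
        state = TRANSITIONS.get((state, event), state)
    return state
```

The point of the sketch is that no single input modality carries the whole exchange: each step hands off to the one best suited to it, which is exactly the mix of methods argued for above.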

This is simply an opinion, and I certainly appreciate any feedback or thoughts. If you’re so inclined, a recommendation is always awesome. Image courtesy of Pixabay.
