Me Too Wearables

Why today’s wearable devices are ultimately boring. And why they don’t have to be. 


As a UX designer, I’ve spent a significant portion of my career trying to squeeze, distill and massage information and functionality into meaningful, intuitive and hopefully delightful user journeys. This usually requires taking business drivers into one hand, user expectations into the other, and attempting to mould the two into a holistic experience that takes into account the limitations of the target platform — or in the best cases, one that turns those limitations into unique advantages.

This is why the wearable devices market, said to be worth $1.4bn this year and $19bn by 2018, should be such an exciting space for new UX: a target platform that is not a single screen, or even a number of them, but a mixed ecosystem of interconnected devices and technologies that are centered so intimately around their users as to almost be a part of them. Yet, it would seem that this exciting opportunity is currently being sidelined by an industry that is unwilling to look further than the creation of new smartphone accessories.

So why is the current crop of wearables seemingly aiming so low? And what should we as UX designers, developers and industrial designers really be aiming for?


Cross-device user experiences are not new. The convergence of tablets, smartphones and smart TVs has opened up many opportunities for the creation of user experiences that seamlessly hop from one screen to another. With the Internet of Things becoming one of 2013's biggest technology buzz phrases, and a whole range of connected appliances joining the connected device fray, those opportunities are only going to multiply.

Wearables, spanning the gamut from single-purpose fitness trackers to smart rings, watches and even underwear, right through to smart glasses such as Google Glass, have the potential to play a central role in the Internet of Things. However, the design community is only just beginning to appreciate the challenge of defining and fine-tuning the right interplay of IoT devices, and those very challenges are only exacerbated for technologies that are intrinsically centered around the user’s person.

So, with all the challenge and opportunity presented by the IoT revolution, what are the key use cases for wearables today? And perhaps more interestingly, what should we demand of our wearable technology going forward?


“Wearables today are conceptually exciting but practically boring.”

Once the novelty wears off, any unbiased observer of today’s wearable devices market can only rationally come to one conclusion: what we’re now seeing are the first tentative steps of a technology trend which holds great promise for the future, but which is still clearly grasping at the low-hanging fruit. Ultimately, wearables today are conceptually exciting but practically boring. To illustrate this point, we only have to look at the three use cases that are dominating the current crop of wearables:

Different shapes, same purpose

The Quantified Self is the one use case around which a disproportionate majority of the effort in the wearables market seems to have clustered. The market is increasingly flooded with wearables whose key promise is to let you keep track of your daily activities, and whose main aspiration is to motivate you to become a better version of yourself: fitter, slimmer, healthier, more aware of your own body, and bursting with energy and motivation. From FitBit, to Nike, to BodyMedia and Jawbone, the number of companies offering wearables with slightly different takes on this same principle is likely reaching a saturation point.

Unfortunately, the sort of ‘deferred biofeedback’ these devices provide is only as useful as the wearers’ sustained motivation to change their habits. Sadly, as we all know from those seldom-used gym subscriptions, the baseline behaviour for most of us is to eventually retreat to our old routines, once the allure of the numbers, graphs and achievement badges finally wears off. For those with the willpower to keep it up, it is debatable how substantial an impact the added information provided by an activity tracker has on their sustained performance.

Notifications run a close second in terms of wearables use cases. A significant amount of investment in technology and design has been sunk into the seemingly trivial pursuit of reducing the number of occasions on which the user is forced to retrieve their smartphone from their pocket.

The main challenge with this use case is that in a substantial number of instances, the wearable device’s form factor imposes limitations on the amount of information that can be displayed. These limitations in turn create a disparity between the wearable device’s ‘light’ rendition of a notification and the smartphone’s ‘full fat’ version. Additionally, many types of notifications (such as incoming phone calls and texts) require a follow-up action that cannot be completed on the wearable device itself.

In these instances, the wearable device’s role is suddenly diminished from gatekeeper to mere intermediary, trivially and imperfectly passing on a message only to cause the user to reach into their pocket for their trusty smartphone like it was still 2012. The mere fact that this is seen as a ‘killer’ use case for a whole category of smart watches and other wearables is a strong indication of how much work needs to be done before the promise of wearable computing is realized.

For something a bit less mundane, we have to look into the domain of Smart Glasses and their potential to not only provide us with on-demand information and timely notifications, but to enhance our experience of the everyday physical world through the promise of Augmented Reality.

Lumus DK-40, Google Glass, Recon Instruments’ Jet

Augmented Reality has long been trying to find a solid foothold in the smartphone user experience, but with the exception of a few single-purpose apps, often well executed but of narrow usefulness, AR has failed to grow much beyond its original incarnation as a marker-less alternative to a QR code or a way to display information on what’s nearby without resorting to a map.

The problem for AR is no longer a technical one (current leading AR platforms use incredibly sophisticated computer vision algorithms to accurately recognize and track anything from logos to 3D objects) but one intrinsically linked to AR’s physical user experience: in order to effectively use AR the user first needs to know that AR content is available for a given object, image or location. They then have to point their phone in the right direction, with the right app loaded for that particular content, and hope the effort was worthwhile. Often, it is not — and AR gets forgotten in the back of the ‘other’ apps folder only to be dusted off when we want to show a friend something cool over drinks.

But what if AR content could come to you instead, seamlessly, discreetly, and in a timely manner? That’s the promise held by the next generation of Smart Glasses from Google, Vuzix, Lumus and Meta: an augmented reality experience that turns the laborious, user-driven experience of AR on the smartphone into a passive, always-on, content-driven experience where the right content is shown in the right place and at the right time. If used in the right context, AR could be a powerful tool helping to realize the new categories of use cases I feel we should be aiming for in the wearable devices of tomorrow.


“Wearable devices show the potential to be the closest consumer technology has come to becoming a true extension of the self.”

One thing wearable manufacturers are beginning to realize is that, in order to succeed, their product must increasingly justify its presence on the wearer at the expense of something else: a fitness tracker for a bracelet, a smart watch for a fashionable timepiece, a smart ring for a piece of jewelry and one day soon, smart glasses for bifocals. There are only so many viable ‘attach points’ for technology to live on a human body, and before long these new wearables will be competing against each other for a very limited piece of premium real estate. In order to stake their place, wearable devices will need to prove their mettle.


To claim their full potential as a technology category, wearables will need to outgrow their current role of smartphone accessories. Their role in the IoT ecosystem should increasingly be one of master, not slave. Wearables should empower their wearers to take control of their proximal device ecosystems, not just meekly relay messages for other devices smarter than themselves.

It is with this in mind that I’d like to put forward three areas which I believe hold wearables’ true promise:


Control Projection: wearable devices have the potential to provide natural, seamless ways of controlling IoT devices — inertial gesture control, voice-activated commands, and even electromyography are all within the realm of technical capability for even the current generation of wearable technologies. The most significant challenges in this space will lie not necessarily with technology, but with the willingness of IoT manufacturers to agree on a common standard for communication and interaction, and a common language for how to perform certain interactions no matter what device is being addressed. After all, ‘On’ and ‘Off’ should apply to a light switch as well as a TV, and ‘Volume Up’ should be the same regardless of whether you’re addressing your digital radio or your games console.

We’re already seeing some of these use cases being put to the test. Thalmic Labs’ Myo loftily promises to do for gesture control what the Harmony remote did for living room appliances. Google Glass hackers have already worked on a simple IR mod that allows them to control their home electronics. Last but not least, a small startup called Bionym is promising a secure way to authenticate to your devices using the unique signature of your heartbeat. In the end, only time will tell just how big a role wearable devices can play as controllers of the IoT, but the potential is considerable.
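To make the idea of a shared command vocabulary a little more concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the Command enum, the ConnectedDevice classes and the gesture mappings are invented for illustration and do not correspond to any existing IoT standard or vendor API.

```python
# Hypothetical sketch of a device-agnostic command vocabulary for
# "control projection" -- not based on any real IoT standard.
from enum import Enum, auto


class Command(Enum):
    """A minimal shared vocabulary every device is expected to understand."""
    POWER_ON = auto()
    POWER_OFF = auto()
    VOLUME_UP = auto()
    VOLUME_DOWN = auto()


class ConnectedDevice:
    """Base class: each appliance maps shared commands to its own behaviour."""

    def __init__(self, name: str):
        self.name = name

    def handle(self, command: Command) -> None:
        print(f"{self.name}: no handler for {command.name}")


class LightSwitch(ConnectedDevice):
    def handle(self, command: Command) -> None:
        if command in (Command.POWER_ON, Command.POWER_OFF):
            state = "on" if command is Command.POWER_ON else "off"
            print(f"{self.name}: light {state}")
        else:
            super().handle(command)


class DigitalRadio(ConnectedDevice):
    def handle(self, command: Command) -> None:
        if command is Command.VOLUME_UP:
            print(f"{self.name}: volume up")
        elif command is Command.VOLUME_DOWN:
            print(f"{self.name}: volume down")
        elif command in (Command.POWER_ON, Command.POWER_OFF):
            print(f"{self.name}: power toggled")
        else:
            super().handle(command)


# A wearable (e.g. a gesture band) only needs to translate a gesture into
# one shared Command and address whichever device the user is pointing at.
def on_gesture(gesture: str, target: ConnectedDevice) -> None:
    gesture_to_command = {
        "swipe_up": Command.VOLUME_UP,
        "swipe_down": Command.VOLUME_DOWN,
        "fist": Command.POWER_OFF,
        "open_hand": Command.POWER_ON,
    }
    command = gesture_to_command.get(gesture)
    if command is not None:
        target.handle(command)


if __name__ == "__main__":
    radio = DigitalRadio("Kitchen radio")
    lamp = LightSwitch("Desk lamp")
    on_gesture("swipe_up", radio)   # Kitchen radio: volume up
    on_gesture("open_hand", lamp)   # Desk lamp: light on
```

The point is simply that ‘Volume Up’ becomes a single intent that any device can interpret for itself; as argued above, the hard part is getting manufacturers to agree on that shared vocabulary in the first place.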

Context Awareness: in the last few years, context-aware features have increasingly become a standard part of the smartphone user experience. Apple’s Siri and Google Now both attempt to pre-empt the user’s needs by analyzing data such as current and frequently visited locations, relationships between contacts and, in Google Now’s case, recent web searches, to present relevant information to the user in a timely fashion and with minimal user input.

If we extrapolate current context-aware functionality to the added sensor capabilities provided by wearable devices, from data about the user’s activity to their heart rate, stress levels and even mood, we can imagine how the sphere of user-centric context-aware computing is likely to expand as a result of the growth in the wearable devices market. One day soon, we may have our wearable devices instructing our home electronics to set the right ambiance to match our state of mind, or have news content formatted to match our current cognitive load.
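As a rough illustration of how such context awareness might be composed, the sketch below maps a handful of wearable sensor readings onto a coarse ‘state of mind’ and a matching ambiance preset. The signals, thresholds and categories are invented for the example and are not drawn from any real product or study.

```python
# Hypothetical sketch: deriving a coarse "state of mind" from wearable
# sensor data and suggesting an ambiance for connected home devices.
# All thresholds and categories are illustrative only.
from dataclasses import dataclass


@dataclass
class WearableReading:
    heart_rate_bpm: float      # from an optical heart-rate sensor
    steps_last_hour: int       # from an accelerometer-based step counter
    skin_conductance: float    # rough proxy for stress/arousal, 0..1


def infer_context(reading: WearableReading) -> str:
    """Map raw readings onto a coarse context label."""
    if reading.steps_last_hour > 2000 or reading.heart_rate_bpm > 120:
        return "active"
    if reading.skin_conductance > 0.7 and reading.heart_rate_bpm > 90:
        return "stressed"
    return "relaxed"


def suggest_ambiance(context: str) -> dict:
    """Translate a context label into settings for connected home devices."""
    presets = {
        "active":   {"lights": "bright", "music": "up-tempo", "news": "headlines only"},
        "stressed": {"lights": "warm dim", "music": "calm", "news": "hold notifications"},
        "relaxed":  {"lights": "neutral", "music": "ambient", "news": "full briefing"},
    }
    return presets[context]


if __name__ == "__main__":
    evening = WearableReading(heart_rate_bpm=95, steps_last_hour=300, skin_conductance=0.8)
    print(suggest_ambiance(infer_context(evening)))
    # {'lights': 'warm dim', 'music': 'calm', 'news': 'hold notifications'}
```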

Cognitive Extension: taking the above two use cases together, I believe the next logical step will be to create user experiences that enhance the user’s ability to process and respond to their environment, and that provide the right information at precisely the right time without requiring the user to disengage from their physical surroundings for the time it takes to perform a web search on a smartphone.

Today, Google Glass’s search-based functionality hints at such a future — ask a question, and receive an immediate answer readily available in your peripheral vision. Tomorrow, we may not even need to ask — a combination of location awareness, face and image recognition, and audio fingerprinting may well whisper information relevant to our situation into our ear, or display it in the corner of our vision. We may never forget a face or a conversation again — or the name of that song that’s playing in the background. We will be able to translate the written and spoken word in real time. One day soon, the guy wearing the goggles may always end up looking like the smartest (and potentially most annoying) person in the room.


Even from the relatively mundane examples we’re seeing today, wearable devices show the potential to be the closest consumer technology has come to becoming a true extension of the self.

Predictably, the evolution of wearables will face, and even amplify, many of the issues faced by connected technologies today — how to protect increasingly personal data, how to guarantee today’s activity sensor logs won’t be mined tomorrow to infer new (and potentially embarrassing) information, and how to devise a universal language to promote natural cross-device interactions. Wearable devices’ intrinsically personal nature will also demand some adjustments in the rules of social conduct and fashion, and new expectations will have to be set in the way these technologies are used in public.

Yet, as they overcome these challenges to claim their rightful place in the IoT ecosystem, wearable technologies may well become the primary means by which we make sense of, and operate within, the sea of connected devices that increasingly surrounds us.

We’re clearly not there yet, but the promise is undeniable.



About the Author: Giuliano Maciocci is a User Experience professional in the R&D sector, with a strong personal interest in Wearable Technologies, Augmented Reality and Virtual Reality. He runs a blog tracking these technologies at Augmentl.io, and can usually be found tweeting about this stuff @augmentl.

The views expressed in this article are the Author’s and his alone. These views are not endorsed by any third parties, nor are they themselves an endorsement of any of the companies or products mentioned herein.