Trends in Consumer Digital Technology for 2019
The smartphone decline, the rise of smart assistants, the second wave of voice, the post-mobile platform, and more.
For the past few years I’ve got into the habit of starting the new year with an article consolidating my thoughts on where we’re at with consumer digital technology; looking at the landscape, and at what the biggest players are doing — my focus is mostly (but not exclusively) on Amazon, Apple, Facebook, and Google. I want to tease out a few trends to help orient myself in my role for the year ahead. I try not to make predictions, but perhaps play out some possibilities.
There are two big declines at the core of this year’s trends, which I think set the tone for where consumer tech might head in 2019. They are the smartphone decline, and the Facebook decline.
The Smartphone Decline
The smartphone market is in recession. To be clear, hundreds of millions of new smartphones are still sold every year, but that number seems to have peaked (it’s 6% down from last year in the last reported financial quarter, according to IDC). There are many reasons for this, but the fundamental truth is that most of the ‘easy’ growth is gone; it’s harder to sell to new customers, and people who already own smartphones are upgrading less frequently.
Despite what Apple say in each of their annual new iPhone presentations, hardware has pretty much reached a plateau; there’s very little differentiation year-on-year, so people are hanging on to their phones for longer. Also, Chinese brands like Oppo, Huawei and Xiaomi are making mid-range phones with excellent technical specifications that are taking sales from flagship phones. It’s only the first week of 2019 and already Apple and Samsung have reported lower than expected sales.
There are a number of noticeable effects of this. Firstly, and most obviously, brand-loyal fans are being charged more for new flagship smartphones; for example, the ‘budget’ iPhone XR is $50/£50 more expensive than last year’s iPhone 8, while the iPhone X and this year’s XS start at $999/£999. Secondly, phone makers are ramping up their service offerings: from storage subscriptions such as iCloud, Google One, and Samsung Cloud, to media stores such as iTunes, Apple Music, and YouTube Music, the companies want to make more money from their customers through software sales after the hardware purchase.
The third noticeable effect is that there’s a lot of experimentation with smartphone form and features as manufacturers look for ways to stand out from the competition. From sub-screen fingerprint scanners to in-screen speakers to foldable screens, there are plenty of new innovations (or gimmicks, depending on your view). But the main arena for differentiation right now is the camera.
While some manufacturers compete on camera hardware — like Huawei’s Mate Pro with its three rear-facing Leica cameras — most are making a show of how good computational photography is becoming. It was notable that Apple spent a long chunk of its last iPhone presentation highlighting new computational camera features, and Google’s Pixel 3 was sold almost entirely on its photography software (and it’s probably the best single-lens smartphone camera available).
As well as improving photographs through features like Smart HDR and Night Sight, there are also early steps in making the camera a key input to the device, with services like Google Lens, Bixby Vision, and Pinterest Lens. Google’s camera recognises plants, fabrics, text, phone numbers and website addresses, and a claimed one billion products, four times more than at launch. More categories are set to be added in 2019 and I expect that Google will allow brands and services to add custom training to Lens to make their own products and promotional materials searchable and, eventually, shoppable.
Computational photography also drives the adoption of on-device neural engine chips, for the complex calculations required for image processing. Apple’s A12 Bionic, Google’s Visual Core and Huawei’s Kirin 980 are all examples of dedicated mobile chipsets for this task. These chips, with pre-trained on-device machine learning models, also enable other machine-learning powered functions and will be increasingly at the heart of future mobile and wearable devices.
The fourth effect of the smartphone plateau is that phone makers are diversifying their wearable hardware ranges. It’s interesting to note that at this year’s Consumer Electronics Show (CES) some phone manufacturers chose to display wearables directly alongside phones, not as a separate category. The first wave of wearables is on the wrist (such as Apple’s Watch and Google’s renewed support for WearOS), and in the ears — with headphones, earphones and earbuds featuring built-in support for smart assistants, such as Apple’s AirPods with Siri, Google’s Pixel Buds with Assistant.
These hearables (to use the neologism) are an interesting category. Last year chip-maker Qualcomm released a new system-on-a-chip (SoC) that’s specifically designed for wireless earbuds with third-party smart assistant integration, making it much easier for electronics manufacturers to make their own smart earphones. Later in the year they teamed up with Amazon to provide a ‘reference design’ for Alexa-enabled hearables, giving manufacturers further incentive to choose Amazon’s platform.
An emerging category to watch out for is glasses/headwear. Some head-mounted devices have already launched, such as Microsoft’s HoloLens, the Magic Leap One, North Focals, and Vuzix Blade. But there are still some fundamental issues to overcome before they hit the consumer mainstream: they need good battery life and high quality sensors, both without making the device too heavy to be comfortable; the cost needs to drop significantly; they need to look like something you’d be happy to wear in public (unlike the Magic Leap One or HoloLens); and, crucially, there are issues with the lenses still to be worked out (more about this later in this article).
A couple of other emerging technologies will boost wearables in the future. eSIM puts SIM card functionality onto hardware chips, so devices can directly send and receive data over mobile networks. This opens the door to autonomous wearables that don’t require your phone as a data hub, such as Apple Watch 3 & 4 and a smattering of Wear OS devices. eSIM technology could also work for hearables, making them into independently-operating portable smart speakers. And the roll-out of 5G networks will make it possible to stream high-quality data to future smart glasses, providing the 3D assets that make rich augmented reality experiences possible.
Smarter wearables indicate a move away from the phone screen and towards an atomic computer, where actions are invoked from different devices as appropriate; for example, seeing messaging notifications on your wrist, or translating a phrase through your ear.
When the computer is fragmented across many different surfaces you’ll need a system to manage your memory, identity and history across them. And for surfaces where there’s no screen available, like earphones, the primary access to internet services will be through voice commands. The solution to both of these is the smart assistant.
The Rise of Smart Assistants
One of the biggest contests in consumer tech in 2018 was between Amazon and Google; more specifically, the contest to make their respective smart assistants, Alexa and Google Assistant, dominate as a new commerce platform. The first piece of the prize is your home.
An estimated 23%-32% of US households (and some 18% in the UK) now own a smart speaker; Alexa powers most of them, with Google Assistant in second place. While Google may have sold fewer devices in the home market, they still have greater reach because Assistant is also available on most new Android phones (outside China) — while Amazon are obviously delighted with 100 million Alexa-powered devices sold, Google say that by the end of this month Assistant will be available on one billion devices (although it must be pointed out that one billion devices doesn’t equal one billion active users).
Amazon have more at stake in the home because, unlike Google with Android, they don’t have a platform of their own; Alexa provides them with one. Creating this market is so important to Amazon that they have a reported 10,000 staff working on Alexa.
Sometimes markets need to be made, not simply seized.
Google’s mission is to prevent Amazon from running away with the market: if Alexa becomes the primary interface in your home, its usefulness then extends into your phone, and anything that stops Google from being your first port of call for finding things out is an existential threat to the company. (This is similar to the threat posed by the launch of the iPhone and Apple’s control over the nascent mobile web, to which Google’s response was Android.)
Just four years after Amazon announced the first Echo as a standalone home device, an increasing number of household objects have become controllable by, or enhanced by, smart speakers. Bret Kinsella of Voicebot describes this as the second phase of voice assistants:
Voice is quickly becoming an expectation. For several product categories it is now a minimum market requirement.
Using Bluetooth and WiFi, smart speakers are able to directly control new devices in your home even if they’re not ‘smart’ devices. Google’s freshly-announced Assistant Connect program is for exactly this purpose, while Amazon are a step ahead with their preview late last year of their Alexa-enhanced microwave oven and wall clock, which gain extra powers when connected to an Echo speaker by Bluetooth.
Not every device incorporates a microphone and voice-processing functions: greater support for programming interfaces and the ability to relay commands from speech-activated devices such as smart speakers brings voice control to otherwise “deaf” products.
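The relay pattern works something like the sketch below: the smart speaker owns the microphone and the speech processing, and forwards structured commands over a local link to a paired device that has no voice capability of its own. The device names, command format, and trivial parser here are my own illustrative assumptions, not any real Alexa or Assistant API:

```python
# Sketch of the voice-relay pattern: a smart speaker does the listening and
# language processing, then forwards a structured command to a "deaf" device
# over a local link (Bluetooth/WiFi). All names and commands are invented.

class DeafDevice:
    """A device with no microphone; it only understands structured commands."""
    def __init__(self, name):
        self.name = name
        self.state = {}

    def handle(self, command, value):
        self.state[command] = value
        return f"{self.name}: {command} -> {value}"

class SmartSpeaker:
    """Owns the microphone and NLU; relays parsed commands to paired devices."""
    def __init__(self):
        self.paired = {}

    def pair(self, device):
        self.paired[device.name] = device

    def hear(self, utterance):
        # Trivial parser standing in for real speech recognition and NLU,
        # e.g. "set microwave power 50" -> (target, command, value).
        _, target, command, value = utterance.split()
        return self.paired[target].handle(command, value)

speaker = SmartSpeaker()
speaker.pair(DeafDevice("microwave"))
print(speaker.hear("set microwave power 50"))  # microwave: power -> 50
```

The point of the design is that all the expensive parts (microphone array, speech processing, network connection to the assistant platform) live in one hub device, so the appliance itself can stay cheap and simple.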
The problem with voice-only devices — especially smart speakers — is that, while they’re great for giving simple answers, for controlling media playback, or for certain heavily-prescribed functions such as setting alarms and timers, a lot of their capability is hidden. A usability study by the Nielsen Norman Group found users were very unaware of what their smart assistants could do for them.
Considering that 62% of the needs could be fully or partially solved by today’s intelligent assistants, users employed their current assistants only in one of the 9 times when they could have used them with some success.
This continues to be a problem for Amazon and Google, who want to get more brands onto their platforms in order to make those platforms more valuable; but there’s still no way for users to find out about new branded applications without some other form of promotion (both Alexa and Assistant have catalogues of third-party actions, and regular emails telling users about new features). I talked about this in a little more detail in my previous article, Thinking Out Loud: Understanding Voice UI, and How To Build for It.
One approach to solving this discoverability problem is with intent-based discovery; rather than the user having to know how to say “Hey Google, talk to Nike Coach”, they might ask “Hey Google, how can I start running?”, to which Assistant might suggest Nike Coach as an option. This means third-party assistant applications, especially from brands, should very clearly be designed to meet a user’s intent; and, further, to meet the very different contexts of (for example) using an assistant on a speaker in your home, and using an assistant in your ear while out of home.
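As a thought experiment, intent-based discovery could be sketched as a matcher that maps a free-form request onto intents that third-party actions have registered for, rather than requiring an exact invocation name. The action catalogue, intent names, and keyword-spotting ‘classifier’ below are all invented for illustration; a real platform would use a proper NLU model:

```python
# Hypothetical sketch of intent-based discovery: instead of requiring an
# exact invocation ("talk to Nike Coach"), the assistant matches a free-form
# request against intents that third-party actions have registered to handle.
from dataclasses import dataclass, field

@dataclass
class ThirdPartyAction:
    name: str
    intents: set = field(default_factory=set)

# Illustrative catalogue; these registrations are invented for the example.
CATALOGUE = [
    ThirdPartyAction("Nike Coach", {"start_running", "training_plan"}),
    ThirdPartyAction("Headspace", {"meditate", "sleep_sounds"}),
]

# Toy intent classifier: keyword spotting standing in for a real NLU model.
KEYWORD_INTENTS = {
    "running": "start_running",
    "train": "training_plan",
    "meditate": "meditate",
}

def suggest_actions(utterance: str) -> list:
    """Return names of actions whose registered intents match the utterance."""
    words = utterance.lower().split()
    matched = {KEYWORD_INTENTS[w] for w in words if w in KEYWORD_INTENTS}
    return [a.name for a in CATALOGUE if a.intents & matched]

print(suggest_actions("how can I start running"))  # ['Nike Coach']
```

Even in this toy form, the design point is visible: the brand registers for an intent (“help me start running”) rather than a name, so the assistant can surface it at exactly the moment of need.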
Another potential solution lies in the move to add screens to voice devices, whether through smart home hubs (Amazon Echo Show, Google Home Hub), the TV (Amazon FireTV Stick with Alexa, Google Home + Chromecast), phones, tablets, and laptops, or by turning those phones and tablets into smart home hubs using docking stands (Amazon’s Fire HD Show Mode Dock, Google’s Pixel Stand). A smart assistant with a screen makes it easier to present more options to a user wanting to find third-party applications, and opens up the possibility of suggesting more branded applications ambiently.
While Amazon and Google dominate the smart speaker market, that’s not the end of the story. Apple’s HomePod is unlikely to take much market share (although it will make them lots of money), but Siri is doing very well on phones, home media (Apple TV), and AirPods and Beats earphones. It was also notable to see many new devices at CES come with support for Apple’s AirPlay 2 protocol.
As I’ve said frequently throughout 2018: Amazon’s end goal isn’t an Echo in every home, it’s Alexa in every thing. And the same goes for Google with Assistant, Apple with Siri, and every other player in the market. The ultimate prize to be won by any company is to have their smart assistant be the meta-operating-system across all the surfaces of your interactions with the internet — from your home to your car to the wearable tech on your body. Ecosystem lock-in will be much stronger when an assistant is integrated into everything you own. I covered this in more detail in a previous article, Why Is Every Company Making a Digital Assistant?.
As for other players, Samsung’s Bixby hasn’t made much of an impact this year, but the recently-released Bixby 2.0 is apparently much improved, and Samsung have already declared their intention to make the 500 million home appliances they ship annually Bixby-compatible by 2020. Chinese makers Alibaba, Baidu and Xiaomi are making an impact on the global market, and could launch their smart speakers and assistants outside Asia soon.
Microsoft’s Cortana is… well, I’ve no idea what their strategy is as their partnership with Amazon puts Alexa on Xbox and Windows and so Cortana doesn’t seem to have much of a role anymore. Finally there’s Facebook, who launched their video-calling device / smart speaker, Portal, late in 2018; I’ve no idea how sales have gone but they’re going to have an uphill struggle after their very bad year.
The Facebook Decline
Facebook was rarely out of the news in 2018 for a series of scandals, from privacy to fake news to bias to being an “enabling environment” for the genocide of Rohingya in Myanmar. They have a further, perhaps existential, problem as can be seen from two charts: the first shows how growth of time spent on Facebook’s feed is in a slight decline, as digital video consumption increases.
The second shows how, while globally Facebook’s growth remains strong, it’s less impressive when considering users in Europe and North America — the most valuable users in terms of monetisation.
Together these charts indicate that Facebook’s not acquiring new high-value users — as with smartphones, the ‘easy’ growth, which saw it bloom to over two billion users in around ten years, is over — and the users they have are spending less time engaging with the news feed (which is, more precisely, where the decline is happening, not in the company as a whole). A decline in attention to the news feed means a decline in advertising revenue.
A lot of the public, permanent sharing on the news feed is moving instead to private, ephemeral sharing through messaging and Stories. The good news for Facebook is that they’re well-placed to benefit from that shift with their own Messenger and their acquisitions of Instagram and WhatsApp.
This is the future. People want to share in ways that don’t stick around permanently, and I want to be sure that we fully embrace this.
The Stories format is the great breakout hit of the latter days of the smartphone age. From its beginning in Snapchat it was — ahem — adopted by Instagram, WhatsApp, and Messenger, and usage on those platforms now eclipses that of Snapchat. Subsequently we’ve seen the format taken up (with varying degrees of success) by a great number of other apps — most notably, Chinese behemoth WeChat’s latest major update was entirely Stories-focused (albeit called Time Capsules), and Google are getting behind the web-based AMP Stories format in search results pages.
Messaging in general, after the bot gold rush a couple of years ago, seems to be coalescing around customer support. There should be an opportunity there for brands to turn customer support into further engagement, especially post-sales — starting from utility rather than campaign (which should always have been the approach).
A second wave of messaging applications could come through WhatsApp’s Business API, Google’s local business messaging, and the Google-backed Rich Communication Service (RCS) — although the success of RCS relies on every mobile network carrier getting on board, and Apple supporting the protocol in iMessage, which currently seems unlikely.
But both Stories and messaging are harder to monetise than the news feed, and the question is whether these properties can increase monetisation without alienating users. Facebook are trying to diversify their revenue streams with tools for small businesses (especially fashion) to become more shoppable on Instagram, while the WhatsApp Business API aims to charge businesses for access to customers, as they try to reduce their dependency on advertising.
The Advertising Shitshow
The ad tech ecosystem needs to be burned to the ground. Until that happens everything that’s wrong with the internet will continue to just get worse, because ad tech creates the incentive to make it worse.
Billions of pounds are being spent on digital advertising, and billions are being wasted. Ad companies themselves are doing very well, but publishing businesses that rely on ads for revenue are struggling. One of Nieman Lab’s predictions for journalism in 2019 is “goodbye attention economy”. I really hope so.
The attention economy is toxic. It’s responsible for garbage content, fake news, and the excessive power of the giant social-media platforms. Competing for money forces media to think about how to give their users long-term value instead of short-term gratification.
The consumer backlash isn’t against advertising per se; it’s that ads are too prolific, annoying, irrelevant, intrusive, and cumbersome. Annoying overlays, autoplaying video, Bitcoin scams… little wonder that a survey early last year found that over 40% of users said they’d used an ad blocker in the past month.
Ad-frustration, whether from annoyance with ads or a feeling that they’re excessive, is the most popular motivation to block ads in all age groups.
People hate digital advertising, and digital advertising hates people. I don’t know what’s going to happen in this field in 2019, but it feels like it’s reaching crisis point.
I worry about the long-term relevance of the web because browsing, especially on phones, involves running a gauntlet of popups and overlays and buttons for granting or refusing permission to be tracked. It’s a combination of the advertising shitshow and some well-intentioned but misguided GDPR and cookie regulations from the EU, and it’s a terrible experience and potentially counterproductive to preserving privacy.
Websites throw up pop-ups and overlays that no one reads, or ban entire continents, not because their users care but because a regulator said so.
Unlike many of my peers, I hope the AMP project is successful, because it imposes strict rules about page loading speed and advertising which essentially benefit users. I fully understand concerns about it being a ‘land grab’ for the web by Google, but its new governance model, including independent advisory and leadership boards, hopefully indicates that this should no longer be a concern.
XR: the Post-mobile Platform
XR is a spectrum of the interaction between our senses and the digital world. At one end of the spectrum is cold, hard reality; at the other, our senses are fully captured by a digital layer — this is virtual reality (VR). In between we have augmented reality (AR), where a digital layer is overlaid on (our sense of) the physical world but without any awareness of its space; and mixed reality (MR), where the digital layer interacts with the physical space of reality. XR is generally understood as an acronym for eXtended Reality, or sometimes the X is understood to be a variable, like the 𝑥 in algebra; a placeholder for ‘your reality here’. The XR spectrum is sometimes called the immersive spectrum.
I still don’t believe that VR is going to break into the home consumer mainstream. It has a small but dedicated market in gaming, in business applications, and perhaps experiential destinations (although IMAX closing their VR centres isn’t a good sign), but it’s just too isolating to be used regularly — at least with current hardware devices.
The virtual reality market is fundamentally constrained by its very nature: because it is about the temporary exit from real life, not the addition to it, there simply isn’t nearly as much room for virtual reality as there is for any number of other tech products.
I think VR might stand a chance on notional future hardware which is capable of displaying the full XR spectrum, from lightly-enhanced AR to full VR, where VR is simply a mode that’s toggled on and off without requiring dedicated specialist hardware.
I believe that XR is going to be the post-mobile platform (or, at least, a post-mobile platform), but there’s no existing hardware solution that’s sufficient to take us there yet. Smartphone-based AR is great for augmented selfies, as enhanced ‘mirrors’ (especially for cosmetics brands), for AR stickers in messages, and perhaps for passing the phone over to show a friend an object such as a piece of furniture; but it’s not comfortable or convenient enough for extended use, which eliminates a lot of potential use cases.
XR will really hit its stride when someone makes a breakthrough head-mounted device — probably glasses. I mentioned earlier some of the issues that still need to be worked out, including sensors, battery life, and appearance. But many of these are already being worked out through phones and wearables, and I don’t see them as insurmountable — a new headset by startup Nreal looks to have made some promising steps forward this year already, especially in appearance.
The biggest blocking problem right now is the lens. Some early AR headsets, such as those from North and Vuzix, use a heads-up display: information projected onto the lens, in a similar way to Google Glass a few years ago. What you see is a flat image overlaid on your vision, like a smart watch screen being dangled in front of your eyes. It adds information to your vision, but there’s no sense of it interacting with the physical world. This interaction, or mixed reality, requires the illusion of dimensionality, which is provided by sensors building a picture of the world around you and feeding it to a visual display. There are currently two approaches to this display.
With video see-through (VST) lenses, a camera records what’s in front of you, adds digital information to the signal, and passes it to LED screens, so your eyes see a captured vision of the world; you’re never seeing the world directly, only a mediated image. VST lenses have issues of social awareness, as they hide your eyes from people looking at you, and of focus, as everything is rendered on a flat plane a few centimetres from your pupils, which tires your eyes.
With optical see-through (OST) lenses, light containing digital information is projected onto a darkened glass lens while you look through it, so digital objects appear to render directly in the scene. The major drawback with OST is that it can only add light to a scene, not subtract it, meaning it doesn’t work well in bright light. The mixed reality headsets HoloLens and Magic Leap currently use OST, but it will be interesting to see if that’s where the market eventually lands.
Regardless of the technology approach, I believe that XR offers new opportunities in visual user interfaces; it could free computing from the confines of a 15cm rectangular screen, and could be the dominant platform of the next ten years — if the problems can be overcome. I don’t foresee the breakthrough device coming this year, but I hope to be proved wrong.
One interesting cultural trend to watch will be the development of digital identity in XR. Apple’s animated Emoji, Snapchat’s Bitmoji, Samsung’s AR Emoji, even (to a degree) Google’s Gboard Minis, are all developing our online visual identities from static avatars to animated characters based on our physical attributes. Machine learning can create a model of our speech patterns, as in Gmail’s Smart Compose and Smart Replies. Extrapolating from this, we could have digital selves that create entirely virtual identities to augment our actual selves. I don’t know where this is going yet, but it’s interesting.
2019 vs 2018
Looking back at last year’s equivalent trends piece, I’m struck by the similarities to this year’s. For all that it sometimes feels like we’re moving at a whirlwind pace, technological progress is mostly quite stable, with true breakthroughs like the smartphone happening more rarely than we might think. Rather than constant revolutions, we have more of a punctuated equilibrium model; occasional bursts of rapid transformation followed by periods of relative stability.
Originally published at Peter Gasston.