Technology adoption, or the frequent lack thereof

We’re often made to believe that the tech industry releases the cutting-edge products of the future. While to a certain extent this may be true, consumers are sometimes less willing to adopt such technological change at first.

The recent events at Mobile World Congress in Barcelona serve as a reminder that technologies of yesteryear can be reintroduced into the mainstream thanks to a confluence of factors.

Virtual reality typifies this, with the technology first reaching the “mainstream” in the 1990s through video games. Sega developed the Sega VR headset in 1991; the technology later appeared as the Sega VR-1 motion simulators prevalent in 1990s arcades. Today, virtual reality is heralded as the next frontier of immersive technology, with growing potential for shared experiences, learning and enhanced visuals. Most current VR headsets still require a powerful CPU and GPU or a games console to drive the content, which keeps them a relatively niche product. Expect its killer app to be a few years away, arriving once the technology proliferates through smartphone usage.

Mobile video calls were initially popularised in Japan in the late 90s and early 00s, and later spread to the West with the advent of 3G/UMTS networks and increasing connection speeds. Adoption was slow to take off, much like the jittery wireless connections the services ran on, hampered by pricing, patchy network coverage and poor usability. The prevalence of Wi-Fi, 4G and smartphones has since greatly aided apps such as Skype, Apple’s FaceTime and Google Hangouts. Today, usage extends beyond the personal and professional, with a growing number of startups leveraging video to offer healthcare, banking services and customer care, amongst others.

Artificial intelligence has been evolving since the early days of Alan Turing, with the founding figures of the field gathering at Dartmouth in 1956. Since then, research and development has weathered several AI winters, with no fewer than five periods of developmental solitude. AI has perennially been 10–20 years away from breakthroughs orders of magnitude larger than anything previously realised, though evident progress has been made in machine learning, deep learning and neural networks. The goalposts have also moved consistently: Deep Blue’s victory over Garry Kasparov was once seen as the pinnacle of computing, while DeepMind’s more recent victory at Go is now regarded as just one of many milestones on the way to the consumerisation of this complex field.

Today, AI has become a buzzword, with a slew of startups promising enhanced user experiences through complex network computations: everything from predictive keyboards, personal assistants and trading to mass medical analysis. Though the technology is prevalent once more, it’s unclear how much more weak AI, as opposed to complete AI, we will have to endure.

[Figure: Gartner 2015 Hype Cycle]

The Internet of Things describes the increasing connectivity of network-enabled devices embedded within physical objects. The philosophical mutterings around the technology have been around for decades, with the spark ignited in the early 2000s through the use of RFID and NFC in devices. IoT experienced a malaise in the subsequent years, though today it is increasingly common in households through devices such as Amazon Alexa, Chromecast, Apple TV and Nest. Much like the early days of the Web, IoT may struggle to achieve the oft-discussed lift-off until there is a unified protocol that allows cross-device interconnection; we continue to wait.

An associated technology, wearables, has been hugely popularised in the last few years by fitness-tracker companies such as Pebble, Jawbone and Fitbit. Pebble were the first to tap into latent consumer demand for watch-based fitness and activity trackers. More recently, Apple have thrown their hat into the ring with the Apple Watch, positioned as a complement to the iPhone and priced accordingly. For all their popularity, wearables still lack a killer use case, hamstrung by small screens (if any), imprecise data measurements and an inherent dependency on a smartphone for extended functionality.

Smartphones have heralded the ubiquity of the computing age, with approximately 2 billion handsets in use globally, a figure forecast to rise to 6.1 billion by 2020. Users now have more computing power in the palm of their hand than many PCs of the mid-to-late 1990s. Though we now take this platform of connectivity and processing power for granted, the smartphone’s history saw a number of incarnations. Its evolution really started with the advent of the PDA in the mid 90s, which led to connected handsets such as the Nokia 9000 and later the Sony Ericsson R380, though these remained the preserve of higher-end users. The early 2000s continued to see a general lack of mass adoption, with Palm OS dominating the PDA/smartphone market. It was not until the 2007 launch of the iPhone, with its increased screen size and quality, processing power and price point, that user adoption began to take off. This was accelerated by the launch of the (Apple) App Store in 2008, which brought increased functionality to handsets. Today, Android and iOS are the dominant smartphone operating systems, having rapidly taken market share from Palm OS, BlackBerry, Symbian and Windows Mobile.

Apple is somewhat known for re-releasing existing product categories and popularising them as its own: the App Store, tablets, FaceTime, the Apple Watch and wearables, and voice activation with Siri, among others. It releases products with significantly uprated internal specifications within a supportive ecosystem of hardware, and a post-millennium Apple has proved consistent in bringing technologies to consumers at precisely the right time.

The venture capital industry often gets a bad rap for purportedly funding repackaged technologies instead of innovation, summarised by Peter Thiel’s famously pessimistic quote: “We wanted flying cars, instead we got 140 characters.” Arguably, though, his frustration is with how technology is used and the inherent good it often fails to create. As Clayton Christensen, the popular theorist on disruptive innovation and author of The Innovator’s Dilemma, argues:

“The technological changes that damage established companies are usually not radically new or difficult from a technological point of view. They do, however, have two important characteristics: First, they typically present a different package of performance attributes — ones that, at least at the outset, are not valued by existing customers. Second, the performance attributes that existing customers do value improve at such a rapid rate that the new technology can later invade those established markets.”

Consumer attitudes also have to shift in parallel with developments in technology. Would companies such as Uber and Airbnb have thrived in the early noughties? The thought of renting your apartment to a stranger from the internet would have seemed somewhat perverse. Running alongside all of this is the ongoing, and often contentious, change in local and national regulation around certain technologies (the sharing economy, drones, privacy and encryption, etc.).

Conclusion

Developing the right technology is not only about the product itself but also about timing, consumer attitudes and the general market environment. Sometimes the consumer isn’t ready to experience the future quite yet, and market timing can be as hard as product development itself.
