The hype bubbles of the 2010s: Part 5 of Tech That Reshaped our Lives in the Last Decade

Dominik Lukes
Published in TechCzech
12 min read · Jan 9, 2020


This is part 5 of a 6-part series. Read Part 1 for Background. Go back to Part 4 on Media consumption trends. Next: Predictions for the 2020s.

Photo by Lanju Fotografie on Unsplash

With reflections on the end of any period come not only summaries of the greatest successes but also the flops and failures. And the turn of this decade proved no exception. The Verge listed the 84 biggest flops across tech and Audrey Watters came up with 100 debacles for ed tech alone. While such schadenfreude may be entertaining or even occasionally edifying, that is not my purpose here.

I want to take a motte-and-bailey approach to the hype balloons here. Scott Aaronson did something similar for quantum computing (an area so far outside my knowledge that I’m going to skip it entirely). What I want to look at is technologies and transformations that didn’t fail as such. Some of them might even be considered a success. But their success is much more constrained and mundane than it might appear from the hype.

Ignorance of what is possible is an integral part of innovation, so we cannot blame the hype merchants too much. Many of the changes on the list of transformations were also hyped up — and who could have known in 2010 which bubbles would burst into colourful rainbows of joy and which would produce a dull mist quietly falling to the ground?

The choice of which items to include in this section is very much subjective and depends on how we judge the transformative nature of what was achieved against the magnitude of the hype surrounding it — and on how strongly we feel the effects ourselves and in the people we know. Many would argue that fitness trackers and smartwatches are much less transformative than I give them credit for, and that things like the smart home and tablets are more so. On a different day, I might make different choices as well. But while the category may be in dispute, I hope that the underlying reality is much less controversial.

Social graph and personalisation

For better or worse, social networks have defined the 2010s. The roller coaster of Twitter and Facebook going from saviours of democracy to its mortal threats may have left many people dizzy. I was always skeptical of the first claim and remain unconvinced by the second. But the fact remains that in the 2010s, mass social networks became inextricably interwoven into the fabric of our society. On balance, their direct impact on most people’s lives was positive — they reconnected them with those they might otherwise have lost touch with and, most importantly, made it easier to stay connected to those they are closest to. Places like Reddit allowed people with niche interests to find like-minded souls. I could follow this with a litany of bad things that happened to somebody on social media — bullying, doxing, FOMO, burnout, etc. — but it is not clear to me that this would be anything other than availability bias. Bad things happen.

But what has definitely failed to live up to its promise is the idea that the social graph was going to completely revolutionise everything. I remember listening with incredulity to pundits in the early 2010s claiming that Facebook was threatening Google because of its social graph — people would no longer have to search for anything, like where to find a plumber, because the social graph would provide the answer. It is true that Twitter and social media “killed” RSS — or rather Google did when it discontinued the much-lamented Google Reader. RSS, of course, keeps binding together much of the internet (including podcasts), and Feedly amply took up the mantle of Google Reader.
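
In fact, every podcast app still runs on RSS under the hood: a podcast is just an RSS feed whose items carry audio enclosures. As a rough illustration, here is a minimal sketch in Python using the third-party feedparser library (the feed URL is a made-up placeholder):

```python
import feedparser  # third-party: pip install feedparser

# Hypothetical feed URL -- substitute any real podcast feed here.
feed = feedparser.parse("https://example.com/podcast.xml")

print(feed.feed.get("title", "untitled feed"))
for entry in feed.entries[:5]:
    # Each episode is an RSS <item>; the audio file is referenced by its <enclosure>.
    enclosures = entry.get("enclosures", [])
    audio = enclosures[0].get("href", "?") if enclosures else "no audio enclosure"
    print("-", entry.get("title", "untitled"), "->", audio)
```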

The social graph did not (and never really could) replace search, but it took Google probably a billion dollars to find that out by the simple expedient of starting and killing a social network, all within about eight years. Yes, I’m talking about the failure that was Google Plus. Google Plus failed not because it was bad or useless but because Google went into it with the wrong expectations and motivations. Google wanted Facebook’s and Twitter’s data on how people are connected to each other, and it thought it could just get it directly from the source. For a while, Google Plus was at the heart of everything Google did — it even tied employee compensation to its performance. Google Plus was a nice alternative to Facebook and Twitter — sitting somewhere between them in terms of what it could be used for. But it didn’t create the social graph Google was looking for, and now it is dead.

What the social graph is useful for is targeting advertising and, of course, finding people who share some connection to you. This made microtargeting a much easier proposition — though it is not necessary for it. The fear is that, combined with other data available about us, a marketing or election campaign can target messages to groups as small as 20 people and convince them more effectively of this or that. The hope is that we can get all sorts of things personalised to our own unique snowflake needs. Personalised advertising, personalised medicine, personalised education — the dream of the perfect prediction. But so far, it has been mostly a failure. Ads for things we already bought follow us around the internet like faithful puppies, medicine still works at the population level, and children are taught in large classes. It is very easy to convince somebody that something was individually tailored to them, but in fact personalisation only works if you belong to a big enough group with sufficiently similar interests or needs.

Google News is a perfect example of this. Many people swear it knows them. But most people’s news interests are very similar, so Google just needs to do a better job of sorting them into the right groups. I only want important political and international news — and local news about events and places — no scandals, no crime, no celebrities, no human-interest stories, definitely no sports. I may occasionally be interested in some of those things, but I want to seek them out — I don’t want them forced on me. For that reason, I also don’t want any tech news in my general news feed. So I set about training Google News to see if it could learn my personal preferences. It was a failure. Google News shows me local news from places I don’t live anywhere near and randomly inserts celebrity news. The reason for this is clear — I just don’t generate enough signals to make my preferences clear. And that is the case with most personalisation efforts — they work as long as you fit into a big enough bucket.
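
To make the bucket point concrete, here is a toy sketch of how group-based personalisation tends to behave (all the topic names and weights are invented for illustration): plenty of signals land you in the right bucket, sparse signals land you in the wrong one.

```python
# Toy illustration of bucket-based personalisation: the system does not model
# you individually, it assigns you to the nearest pre-built interest group.
BUCKETS = {
    "politics":  {"politics": 0.9, "world": 0.8, "sport": 0.1, "celebrity": 0.1},
    "sport":     {"politics": 0.1, "world": 0.2, "sport": 0.9, "celebrity": 0.3},
    "celebrity": {"politics": 0.1, "world": 0.1, "sport": 0.3, "celebrity": 0.9},
}

def assign_bucket(user_signals: dict) -> str:
    """Pick the bucket whose profile best matches the user's click counts."""
    def score(profile: dict) -> float:
        return sum(profile[topic] * count for topic, count in user_signals.items())
    return max(BUCKETS, key=lambda name: score(BUCKETS[name]))

# A heavy reader generates enough signal to land in the right group...
print(assign_bucket({"politics": 12, "world": 8}))   # -> "politics"

# ...but a light reader's couple of stray celebrity clicks outweigh
# everything else, and they get filed as a celebrity-news fan.
print(assign_bucket({"world": 1, "celebrity": 2}))   # -> "celebrity"
```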

Ereaders and ebooks / E-textbooks and tablets

“I read it on my Kindle” is now a phrase that will hardly raise anybody’s eyebrows. And reading on a phone or a tablet is no stranger now than reading a newspaper was 10 years ago. But despite all the promise, ebooks have not made printed books obsolete.

The Kindle has revolutionised my life. But then, I used to travel with two or three books everywhere, and choosing what to bring with me to read was an agony. Now, all I have to do is pack my Kindle: half my library is available on it, and I can add to it anytime I’m within reach of WiFi. On short trips, I may not even bother and just read on my phone. Almost any new book is available as an ebook (although, shockingly, there are still exceptions) and, like audiobooks, ebooks have opened the world of reading to people who could not access print in the old days. I can borrow ebooks from a library (albeit from a limited selection) and, if I’m tired of all that, I can access a whole universe of reading in online fiction, fanfiction and out-of-copyright works.

So far, so transformative. Why, then, do I have ebooks, e-readers and tablets in the section on overpromise and underdelivery? Well, for one, ebooks have not only not displaced printed books, they have not even overtaken them in sales. Not even close! Ebook sales are not even growing and, if anything, have fallen in some years. This is mostly due to publishers’ DRM schemes — no one really buys an ebook, only a licence to read it — and also horrendous quality control. Ebooks are not helped in the EU by having higher VAT applied to them. New ebooks cost as much as a print book, so why buy something you cannot share, give as a gift, or pass down to your grandchildren? Paperbacks are often cheaper a few years in, used books are easy to find, and libraries still offer more print books than ebooks.

I hate reading print books when it comes to fiction and non-fiction, but for non-sequential reading, such as a textbook or a magazine, the ereader or tablet is not as good an experience as its paper counterpart. Which is why, despite their potential, digital textbooks have mostly been a failure. This is partly exemplified by the monumental non-event that was Apple’s vaunted iBooks Author. What was once expected to be the future of textbook creation never gathered steam and was completely abandoned by Apple. Almost nobody knows how to actually make an ebook, let alone an e-textbook. Least of all, it sometimes seems, the publishers, who are often content to release a PDF of a textbook as their digital offering. But a PDF cannot be read on a Kindle, and even the tablet reading experience is inferior. Despite their potential, most school tablet projects fail.

The developments in the technology powering reading devices have also stalled. Colour e-ink that would refresh fast enough for complex pages is a promise perpetually just a step away, and there hasn’t been a step change in ereader design in years — even the Kindle Oasis was a marginal improvement at best.

Smart home and internet of things

Smart thermostats, doorbells and lightbulbs are a thing. The smart home is a dream. Many devices exist but, apart from smart speakers, which truly reshaped the world, they play a very small role in people’s lives. The promises of the fully connected and automated home — such as we know from the imaginings of past futurists — are still far away. What is more, the reality is that a home will have one thing that is sort of automated. I have smart lights, but only here and there, and nothing works together. And if there is one scenario more familiar than the fully connected home, it is a TV set that only one person really knows how to turn on. Every now and again, we hear a pronouncement about the once and future interoperability standard that will make the automated blinds work with the connected lights and microwave. But so far, what is more likely to become a reality is a security breach of a connected baby monitor or a smart doorbell.

Yet, this is a category that will likely reshape our world in the future. It just hasn’t done that in the 2010s.

Reality will not be televised: The failure of 3D, VR, AR

Avatar smashed box-office records in 2009 and, for the next five years, 3D was going to be the Future: 3D movies, 3D TVs, 3D gaming consoles. 3D movies did catch on, if only in a limited way, but nothing else 3D did. It was just too flaky, too hard to use, did not work for everyone, and did not improve the experience enough. People are forever dreaming about holographic displays but, despite 2Pac at Coachella, nothing is just around the corner.

Next came VR and AR, and there is no doubt that the technology is here and it works. But it is far too niche and limited to have made an impact that could be called transformative. Yes, there are VR games, and Pokémon Go showed that AR of sorts is possible. There are credible ways to use VR to teach people, and AR headsets like Microsoft HoloLens can be used in medical or engineering training. Google Glass is also a success in certain industries.

But none of the big transformations once predicted happened. VR in education and gaming is still more of a gimmick, and it is not clear how much it adds to the experience over plain video. Google’s and Apple’s once-grandiose plans in Augmented Reality have been scaled down greatly. Having said that, there are some clear wins in AR — using AR when navigating with Google Maps is very handy. Apple’s ability to put people in photos is also great. These technologies are not failures — but their unarguable usefulness is very much limited and constrained. And I don’t expect that to change much in the next 10 years.

AI as a category

AI seems to be on everyone’s lips. New AI initiatives are springing up left and right, so how could AI have a place in the category of bursting bubbles? Yes, research in the AI field has produced some of the most transformative products of the last decade — speech recognition, image labelling, machine translation, text prediction, computational photography, etc. Those are things that are here to stay and will continue getting better.

But the attitude of “we have a problem — let’s slap some AI on it” is just not helpful. For every great advance, there’s a failure. IBM’s Watson, which kicked off the decade by beating Jeopardy! champions, has been an almost complete failure in medicine, education and business. Chatbots are now very common but are almost completely useless and require manual setup. Summarising and synthesising complex issues is still far out of reach of any automated system. And what’s worse, there’s not much on the horizon that will fix these issues. The great fears of the majority of human jobs being automated away have so far come to nothing.
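
To see what “manual setup” means in practice, here is a deliberately crude sketch (all the intents and canned replies are invented) of the kind of pattern-matching that still sits behind many commercial chatbots:

```python
# Behind the conversational veneer, many chatbots are a hand-written table of
# patterns and canned replies -- someone has to author and maintain every row.
import re

INTENTS = [
    (re.compile(r"\b(opening|hours|open)\b", re.I), "We are open 9am-5pm, Monday to Friday."),
    (re.compile(r"\b(refund|return)\b", re.I),      "To request a refund, reply with your order number."),
    (re.compile(r"\b(human|agent|person)\b", re.I), "Connecting you to a human agent..."),
]

def reply(message: str) -> str:
    for pattern, canned_reply in INTENTS:
        if pattern.search(message):
            return canned_reply
    # Anything the designers did not anticipate falls through to this.
    return "Sorry, I didn't understand that. Could you rephrase?"

print(reply("What are your opening hours?"))  # matches a hand-written rule
print(reply("My parcel arrived damaged."))    # no rule -> the familiar dead end
```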

Self-driving cars, the poster child of what can be done with AI, have completely failed to live up to their promise. The dreams of fully automated driving will most likely not be realised even in the next decade.

Many are now fearing the second (or third, depending on how you count) AI winter. The previous ones were the result of overpromise and underdelivery: in the 1970s and again in the 1980s, funding for AI research dried up as project after project ended in failure. I don’t think it will happen now. The AI projects of the past came up with useful algorithmic procedures we’re still using, but they were pretty much useless as products. Today’s research has resulted in the release of useful products and will continue to do so. That is why making ‘machine learning’ the centre of its platform is a smart bet for a company like Google (as it is for Amazon, Apple, Baidu, Microsoft and others). But the current scramble of ‘let’s make sure we’re using some AI so that we don’t fall behind’ is likely to end with a whimper.

Blockchain / bitcoin

When Bitcoin briefly became worth $20,000, the world went crazy. And the ‘blockchain’ that Bitcoin is built on became the next big thing. Everybody started talking about ‘distributed ledgers’ and the future of unbreakable accountability, self-executing contracts, and the death of centralised databases. Companies started issuing ICOs (initial coin offerings) and the future of money was clear — the blockchain. And then, nothing happened. The hullabaloo died down, and people researching and innovating with blockchain technologies got on with their work.

Blockchain is real, it works, it can be made robust, and it can be useful. It is a true innovation in how data is managed and shared. But its real applications will be boring, done by companies nobody has ever heard of or ever will. Yes, it will make its way under the hood of our financial vehicles, but most likely in the same way most automotive innovation happens — slowly, invisibly and without much outward notice.
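
For the curious, the core of that innovation fits in a few lines. Here is a minimal sketch of a hash-chained ledger in Python — a toy, not anything production-grade — where each block records the hash of the one before it, so tampering with any past entry breaks every later link:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash a block's canonical JSON form so the digest is reproducible.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    # Each new block commits to the hash of the previous one.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def is_valid(chain: list) -> bool:
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger: list = []
add_block(ledger, "Alice pays Bob 5")
add_block(ledger, "Bob pays Carol 2")
print(is_valid(ledger))                   # True

ledger[0]["data"] = "Alice pays Bob 500"  # tamper with history...
print(is_valid(ledger))                   # False: the chain of hashes breaks
```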

Robotics

While AI has at least produced something useful, robotics has the Roomba and its imitators. There have been advances, but the Roomba of 2002 and the Roomba of today do not produce markedly different results, nor do they look all that different. We are probably over a decade away from a useful robot that can truly clean the house and then take out the trash. The current self-emptying Roomba just docks and leaves the dust for a human to take out. Robotic hands, robots walking, robots doing almost anything useful — it is all simply much harder to achieve than most people thought. I’d go with Rodney Brooks, the father of modern robotics, on this.

3D printing

For a while, 3D printing was all the rage. For a few years, going to an education tradeshow was like walking into a 3D-printing lab. And if you are a manufacturer prototyping new products, or a producer of small runs of certain simple widgets like smartphone cases, 3D printing has entirely changed your world. Any engineering student will also have been able to do much more with their trainee designs than ever before.

But the desktop 3D printers in every home that would save us going to the hardware store to buy a bracket or a sprocket are still not here, nor are they on the horizon. Cory Doctorow’s dream in Makers remains what it was when he put pen to paper: science fiction.

USB C

Oh, the promise of a single standard plug design for all our data and charging needs. And what if that plug were compact, symmetrical and could be inserted without looking? If we went by all the ports we see on Apple’s devices (other than the iPhone), we’d think that USB C is the present rather than the future. But buying a USB C cable is still a bit of a lottery; which USB C port will work with my monitor and which with my hard drive is not a trivial thing to know — and that’s before I even try to buy a USB C hub or a USB C memory stick. USB C is here, it works, and it is better than the other USB standards. But it has added to rather than replaced the other USB designs. And exploring the warrens of my cables in 2020 is no happier an experience than it was in 2010.
