The Cloud of Smart Things
The Future of Technology is Intelligent, Invisible, and Connected
As we head into 2017, I reflect on the last 10 years of advances in technology and look forward to the next 10 years — and imagine what they might look like.
From Hundreds of Thousands to Billions in Five Years
The opportunity smartphone apps represent for developers and entrepreneurs is obvious today, but it was less so in 2009 when my presentations boasted to potential Android developers of the 12,000 apps then available in the Android Market.
10 years ago there were no real smartphones. The very first Android handset was released in 2008; the first iPhone in 2007. I joined Google in March of 2009, just in time to help launch Android 1.5 Cupcake (API level 3 — Widgets! Live folders! 3rd party keyboards!).
According to Wikipedia, in 2009 Android accounted for 2.8% of new smartphones, and smartphones represented less than 14% of new phone sales. Wikipedia now suggests that today Android runs on 85% of new smartphones — which in turn represent 74% of all new phones.
So what comes next?
As with everything I post here on Medium, what follows is my own, personal opinion. It does not necessarily represent the views of Google, Alphabet, or any of the people who work there. I have chosen a 10 year timeframe to avoid any suggestion that these guesses are based on any knowledge I may have because I’m a Google employee; they are not. Further, they should not be taken as an indication of anything Google is, has, or will be doing now or in the future. I’m just making this shit up, people — please read accordingly.
The Post Smartphone World
Smartphones and laptops aren’t going anywhere anytime soon, but that doesn’t mean things aren’t changing.
I believe a new industrial revolution is underway; driven by machine intelligence that’s increasingly enabling autonomous cloud-connected devices with which we interact in natural ways.
Tech giants like Google, Amazon, and Microsoft are progressively externalizing their infrastructure — both hardware and software. This makes it possible for us as developers to build products and services that take advantage of decades of progress and tens of billions of dollars of investment in machine intelligence, data center design, big data processing, and site reliability.
The smartphone represents the ultimate evolution of the general-purpose computer. From room-sized machines, to mini-computers, desktop PCs, laptops, and finally hand-held devices, we’ve shrunk it enough that we can take it with us.
The next step is making it disappear entirely. The technological advances that make a $5 Raspberry Pi possible allow us to integrate computer control into everything from our thermostats, to our lights, sprinklers, curtains, and mailboxes. Machine Intelligence will allow these devices to be autonomous, or controllable using conversational voice commands.
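As a toy illustration of that idea (all device and command names here are hypothetical — a real build would drive actual hardware, e.g. GPIO pins on a Raspberry Pi, and receive commands from a speech-recognition service), a connected device can reduce to a small state machine that maps parsed commands to state changes:

```python
# Toy sketch of a voice-controllable smart device (names are illustrative).
# A real implementation would receive transcribed commands from a speech
# service and toggle physical outputs instead of in-memory state.

class SmartLight:
    def __init__(self, room):
        self.room = room
        self.on = False
        self.brightness = 0  # percent

    def handle(self, command):
        """Apply a simple parsed command like 'on', 'off', or 'dim 40'."""
        parts = command.lower().split()
        if parts[0] == "on":
            self.on, self.brightness = True, 100
        elif parts[0] == "off":
            self.on, self.brightness = False, 0
        elif parts[0] == "dim" and len(parts) > 1:
            self.on = True
            self.brightness = max(0, min(100, int(parts[1])))
        return f"{self.room} light: {'on' if self.on else 'off'} at {self.brightness}%"

light = SmartLight("kitchen")
print(light.handle("on"))      # kitchen light: on at 100%
print(light.handle("dim 40"))  # kitchen light: on at 40%
```

The interesting work, of course, happens upstream of `handle()` — turning free-form speech into that structured command is where the machine intelligence comes in.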
In 10 years it won’t be weird to talk to the things in your house to control them. You’ll talk conversationally and your home will recognize you (and your family members) individually and personally. Anything with a switch will be updated, and your home, car, office, and hotel room will learn your habits and preferences, acting autonomously to control everything from the thermostat, to windows, curtains, and lighting.
In 20 years: Computers become cheap enough to be disposable, allowing us to put a computer in everything. You won’t have a smart fridge, you’ll have a smart milk carton that orders more milk when it runs out. Plants will tell your reticulation system how often they need to be watered. Big Data will allow you to analyze your dietary intake based on the food you’ve consumed.
The 534ppi QHD AMOLED screen on the Pixel XL is a thing of beauty, but it’s still only images on glass — and the size of that glass is determined by the need for it to be small enough to be comfortable and portable.
Google Cast lets you use any TV as a screen, and AR/VR like Daydream and Oculus can give you a personal cinema experience. Google Home and Amazon Echo eliminate the screen entirely for non-visual operations like getting answers to questions and listening to music.
In 10 years, I expect to see stand-alone devices and audio- and video-based personal augmentations providing feedback for our queries and requests wherever we are, without us needing to look at a phone.
In 20 years: Rectangular glass devices as a form-factor are an anachronism, wholly replaced by in-ear headphones and retinal projectors.
The number of smartphones and other connected devices has exploded in the past decade, but the availability and affordability of connectivity hasn’t enjoyed the same boost.
In 10 years, globally, Internet access will be more easily obtained than running water or electricity. All your devices will be able to connect at all times to fast, reliable, and cheap (if not free) Internet access.
In 20 years: People will see references to getting “4 bars” in old movies and won’t know what the hell they’re talking about. The idea that you or your belongings might not be able to connect to the Internet will seem as unlikely as it does horrifying.
No Physical Device Interaction
Touching and swiping are a great way to interact with glass surfaces, but it’s the wrong paradigm for a future with invisible computers. The quality of voice recognition and speech synthesis has already made interacting with devices through voice a reality.
Improved voice recognition, natural language parsers, and image analysis are already available to developers. These services will continue to improve, allowing companies like Google and Apple to create increasingly intelligent AI-powered assistants, while developers create more intuitive products and services.
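To make that concrete: Google’s Cloud Vision REST API (one of the image-analysis services available to developers at the time of writing) accepts a JSON payload of base64-encoded image bytes plus the features you want detected. Here’s a minimal sketch of building that request in Python — the API key and the actual network call are omitted; you would POST the body to the `images:annotate` endpoint:

```python
import base64
import json

def build_vision_request(image_bytes, feature="LABEL_DETECTION", max_results=5):
    """Build the JSON body for a Cloud Vision images:annotate call."""
    return {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [{"type": feature, "maxResults": max_results}],
        }]
    }

# Placeholder bytes for illustration — pass real image file contents in practice.
body = build_vision_request(b"\x89PNG...")
print(json.dumps(body, indent=2))
# POST to https://vision.googleapis.com/v1/images:annotate?key=YOUR_API_KEY
```

Swapping `LABEL_DETECTION` for features like `FACE_DETECTION` or `TEXT_DETECTION` changes what the service extracts — the heavy lifting happens entirely in the cloud.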
In 10 years physical interaction with hardware devices will be unusual, with voice and gesture control the norm — likely intermediated by an intelligent assistant able to contextualize interactions with multiple services and hardware devices.
In 20 years: Artificial intelligence will progress to the point where little interaction is required for your assistant to manage most mundane interactions with devices. Mind control will replace vocal instructions and text entry.
10 years ago, the idea of running your services on someone else’s servers was a radical concept. Today, using a public cloud provider like Google or Amazon is a given for any new company — and increasingly powerful services like BigQuery and Machine Intelligence are being externalized.
In 10 years public Cloud providers will be making advances in quantum computing, biological storage, and Machine Intelligence available to all developers. References to CPUs, cores, drives, SSDs, and Virtual Machines will disappear as outdated metaphors for long-deprecated hardware. As Cloud resources become increasingly commoditized, they will become significantly cheaper. The idea of running your own server — even for development or debugging — will be vaguely ridiculous.
In 20 years: My generation of developers will laugh as they tell horrified new grads about the days when we ran production services on non-virtual machines with spinning disks under our desks or in our garages. Everything that doesn’t happen on-device will be processed and stored in massively redundant bio-organic data centers that generate electricity instead of consuming it.
You Will Build the Future
The intersection of Cloud, machine intelligence, and connected devices feels eerily similar to the excitement of 2010 — just before Android really took off.
The active, passionate, vocal, and enthusiastic community of Android App Developers was — and continues to be — a critical factor in Android’s success and growth. Those early Android developers took advantage of what quickly became a revolution; I believe a similar opportunity exists today for Makers.
The combination of Machine Intelligence, the public Cloud, and the cheap hardware that enables the Internet of Things represents an opportunity to propel the Maker community of awesome, enthusiastic hobbyists into the forefront of the next industrial revolution.
I love learning new things and getting involved early in potential step-changes in technology, so I’ll be keeping a close eye on these areas, and getting more involved in the communities around them.
As with Android in 2010, all the pieces you need to get a head start in this new opportunity already exist. There’s a range of providers, but I work for Google — so no surprise that I’m going to suggest you start by heading over to Google Cloud and checking out the machine learning APIs.
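If you want a feel for how low the barrier is, the Cloud Natural Language API’s sentiment analysis is a single POST of a small JSON document. A hedged sketch of that request body (endpoint and key omitted; you’d send this to `documents:analyzeSentiment`):

```python
import json

# Sketch of a Cloud Natural Language sentiment-analysis request body.
def sentiment_request(text):
    return {
        "document": {"type": "PLAIN_TEXT", "content": text},
        "encodingType": "UTF8",
    }

body = sentiment_request("OK Google, I love my new smart home.")
print(json.dumps(body, indent=2))
# POST to https://language.googleapis.com/v1/documents:analyzeSentiment?key=YOUR_API_KEY
```

The response includes a sentiment score and magnitude for the document — no models to train, no infrastructure to run.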
How will you help define the future?
Once again, this post is my personal opinion. I am not speaking on behalf of Google, Alphabet, or any of the people who work there. Any resemblance these predictions may have to actual Google plans, projects, or products is purely coincidental. Seriously.