Revisiting Mobile Computing
In this space I will share my ideas about modern (twenty-teens) ubiquitous computing, people’s interactions with computers, and how human-computer interaction fits into our day-to-day lives. If I were a really good writer, I’d provide supporting research and be wordy and expository enough to make a book out of these thoughts. But that won’t happen here.
I have two points of focus. First, I believe that mobile computing is a misnomer, and that if it were more thoughtfully considered we’d realize it represents an even greater shift than assumed. Second, as a lifelong technical writer and new-media gadfly, I want to explore how mobile computing shifts much of the recent underlying fundamentals of our industry — this will come in a follow-up article.
I’ll start with my own background, for credibility’s sake. I am indisputably nerdy — I love keeping pace with the latest science and technology, and especially the culture of the Internet. More to the point, my interests and passions have always centered on what people typically think of as academic or career-focused subjects: science and engineering. I tinkered with electronics as a child, saved for my first computer in middle school, spent countless hours on BBSs in high school, and entered college as an electrical engineering major.
As I left college and entered the workforce, it seemed, to my delight, that the late 90s and aughts were a golden age of widespread computing. “Everyone” owned a PC or laptop, and the World Wide Web was emerging as a center of people’s lives. In those years, the computer became a necessity for commerce and started pushing into spheres of personal life.
I’m rehashing what most of us lived through for good reason. The period roughly between 1990 and 2010 was when computers came to permeate our day-to-day lives. I’d say that in those years, their place in public and private life was firmly established, and baseline computer literacy increased in kind. Crucially, too, user interface development and refinement reached a tipping point that ushered in mobile computing.
The iPhone was introduced in 2007, and the iPad in 2010. While I stopped using Apple products around 2005, I think it’s hard to dispute that the iPhone and iPad were the start of this next chapter in computing, the so-called mobile computing revolution.
I spend a lot of time keeping up with the digital zeitgeist, and have done so since the 90s (my pile of Wired magazines was heavy to move house to house!). In addition to being head-down in Google Reader in the late aughts, I absorbed what much of the digital press observed: mobile computing emerging as a paradigm shift. Most opinions cheered the ability to take your computer with you and not be shackled to your desk. The subtext, presumably, was that our natural state is on the move, and that location-bound computing conflicted with human nature.
For the record, I deferred owning my first smartphone, a Nexus 4, until late 2012. Based on all my other technology acquisition patterns, I’d be classified as an early adopter, but my smartphone purchase came probably one to two years after the average middle-class professional’s. Why was this the first time in my life that I was a hold-out for new technology?
In retrospect, it blows my mind that I sat out the initial uptake of the one technological development in my lifetime that changed people’s day-to-day lives within a short period of time.
I clearly recall that I delayed my initial smartphone purchase because all of the same information and applications were available, arguably in a richer format, on my desktop computer (and like hell was I going to pay for a limited, slower data plan). And to this day I feel that I could manage OK without a smartphone, if that means anything.
The adoption of smartphones occurred across many demographics and age ranges, from 20-somethings to 60-somethings. I’m fairly certain that mobility (by choice) tapers off as you get older, so the idea that mobile computing was purely a mobility phenomenon always seemed suspect. I was raising small kids during this period, and my life stage certainly did not include much mobility; in fact, it was diminishing. Thus, I questioned the notion that computing had finally caught up with some universal, on-the-go lifestyle.
After much reflection and observation, I’ve concluded that the emergence and dominance of mobile computing reveals not a demand for computing on the go, but that people never embraced the mechanics of the first wave of location-bound computers in the first place.
As I write this, it comes as a revelation to me that people don’t love computers because they are, in and of themselves, awesome to use. For contrast, I’m coming to this conclusion as someone whose enduring preference has been a wide-open operating system, two monitors, a full-sized keyboard, fifteen tabs spread across two different browsers, and even a console window or two.
The true innovation in mobile computing is that the interface is more user-friendly than the PC’s for just about every application. But there’s more: in mobile computing, the computer has moved out of the way so that the applications themselves are appliances; the underlying platform is removed from our view. And people prefer appliances because they let us simply accomplish the task at hand.
As I see it, computers first came into our lives, and then they got out of the way. Using our computers on the go was the impetus for refining interfaces, but once that development settled, innovation really took off as the new appliances pervaded all aspects of our lives. I’m a big believer in scarcity being the primary driver of innovation: we lost screen real estate (a 21" monitor shrank to a 5" screen) and we lost input devices (104 keys shrank to one physical button), and those constraints made our computer programs more attractive and intuitive to use.
Returning to the idea that mobile computing is a misnomer, I offer an alternative take. We are in the era of computers truly as appliances, just like your oven or your faucet. For the most part they are easy to use; this is how complex everyday objects should be. And we’ve made it to this point in part by closing, over twenty years, the gap between user interface advancement and computer literacy.
Appliancization is out of its infancy. So where are we now? Two things stand out. First, the current crop of smart speakers (Google Home, Alexa, etc.) continues this trend. They aim to remove even more of the computer interface from our lives by providing computer applications and data services through a strictly voice interface, with programming that directs a non-computer experience. As an aside, I remember considering how smartphones essentially did away with the keyboard; one of my thoughts was that the next step would be eliminating the screen via advanced voice recognition, and here it is. My thoughts at the time were mostly focused on car use, however, and if the size of screens in cars today is any indication, I was obviously wrong that the market would opt for all-voice interfaces.
Second, on the technology that enables appliancization: artificial intelligence (AI) APIs are being used in all sorts of apps and services. AI has existed in the consumer realm for a while, in small part. Now just about every data service (and the applications built on top of it) can easily include AI-driven functionality. Rather than freak out that the computers are thinking, consider that these APIs process speech, sound, and action as input and output, so that we can be less conscious of how we feed data into the computer. As AI continues its proliferation, keep in the back of your mind that it will drive applications and services to require less direct control from the user.
It’s a paradox: most of our lives exist at least partially online, and all of our data is online — yet however much we access that virtualized universe, the trend is to present it apart from the context of using a computer. That is the main result of mobile computing’s dominance.