The Death of Navigation — Part One

Jim Scully
Dec 27, 2017 · 8 min read

I’m old enough to remember when computer users didn’t navigate because there were no screens on which to do so. Back then, stacks of meticulously sequenced punch cards were delicately loaded into trays to be fed into a reader, which translated the punches into the sequence of zeros and ones comprising the magic instructions aptly called “code.” I still remember the pitiful sight of a male computer science student breaking into tears as his foot-long stack of punch cards, probably a semester’s worth of very hard work, tumbled to the ground and scattered in the wind.

Photo by Joshua Sortino on Unsplash

Fortunately for computer geeks, punch cards were soon replaced by digitized images displayed on cathode ray tubes, or CRTs. But giving the computer instructions still required using code, making the images on the screen impossible for the rest of us to understand. At least the code was safe from clumsiness and wind.

Eventually, the graphical user interface, or GUI (pronounced “gooey”), translated computer code into on-screen controls that non-programmers could use. Computer navigation was born. In the decades since, GUIs have become increasingly intuitive, driven by new computer languages and the immense demand for simplicity created by the internet. Later still, as mobile devices gained prevalence, touch screens began to replace keyboards as the primary method of interacting with computers.

Photo by Andres Urena on Unsplash

Today, both keyboard and touch-screen navigation are being replaced by voice-commanded ‘artificial intelligence’ (AI) assistants that go by names like ‘Siri’ (Apple), ‘Google Assistant’ (Google), ‘Cortana’ (Microsoft) and ‘Alexa’ (Amazon). And so it would appear we’re coming full circle into a future where computer navigation no longer exists. But instead of using computer code to convey instructions, we will use something we’ve had all along: our own words.

“Voice Commands Over Clicks?”

Natural Language Processing (NLP) is certainly used for voice command, but it also has other applications, such as text prediction, social media analysis, document translation and, of course, e-commerce. But the application of interest to me is voice command, because it is the most freeing and, therefore, has the greatest potential. Anyone who has experienced the freedom of lying in bed, half asleep, asking Alexa to set an alarm for 7 a.m. (my personal favorite use case), knows exactly what I’m talking about.

I’m particularly interested in exploring the use of voice command in the workplace. It might seem far-fetched to think the sound of keyboard clicks will be replaced by workers talking to their computers. But if you think this is kooky, or at best far into the future, I hope to convince you to think again.

Four Reasons Voice Recognition is “For Real”

Reason 1: Artificial Intelligence (AI) is already hot and voice recognition is already the primary way we interact with AI software.

Voice recognition software relies on Natural Language Processing, the ability of software to translate requests in everyday language into computer commands. Innovators in AI apps are keenly focused on natural language. If you don’t believe me, the next time you search the Web, try simply typing your question exactly as you would speak it and see what you get. Then try the same search using only keywords. Chances are you’ll get as good or better results with your naturally worded question. That’s because, without a lot of public fanfare, companies like Google have been feverishly working to make their search engines more attuned to natural language inputs.

My 87-year-old father-in-law is a big fan of voice command on his smartphone, mainly due to his lack of thumb dexterity. Until a couple of days ago he thought he had to hold the phone close to his mouth, like a walkie-talkie, and bark cryptic keywords to get what he wanted. I finally told him to phrase his question normally, as if he were asking another human being. He was dumbfounded by the results. Next time I’ll break it to him that he can lower the phone, and his voice, and get equal results.
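To make the idea of “translating everyday language into computer commands” concrete: real assistants like Alexa use trained statistical models over speech and text, but the core mapping from an utterance to an intent can be sketched with simple rules. Everything below (the patterns, the intent names) is invented purely for illustration:

```python
import re

def parse_command(utterance: str):
    """Toy intent parser: map a natural-language request to an
    (intent, argument) pair. Real NLP systems use trained models,
    not hand-written rules; this only illustrates the idea."""
    text = utterance.lower().strip(" ?!")

    # "Set an alarm for 7 a.m." -> ("set_alarm", "7 a.m.")
    m = re.search(r"set (?:an )?alarm for (.+)", text)
    if m:
        return ("set_alarm", m.group(1))

    # "What's the weather in Boston?" -> ("get_weather", "boston")
    m = re.search(r"weather (?:like )?in (.+)", text)
    if m:
        return ("get_weather", m.group(1))

    return ("unknown", text)
```

The point is not the rules themselves but the shape of the problem: the user speaks naturally, and the software’s job is to recover the command hiding inside the sentence.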

Reason 2: Our minds and bodies need a break

They say sitting is the new smoking. But even if you don’t think sitting in front of a computer is as bad for you as tobacco, it’s widely agreed that sitting in front of a computer screen for 8-plus hours a day negatively affects our health in a variety of ways, from Carpal Tunnel Syndrome to back problems and myriad other consequences of living chair-bound lives. The American Optometric Association has even coined a name for a new condition, Computer Vision Syndrome, which covers a range of eye and vision-related problems associated with prolonged computer and mobile device use.

Photo by Oliver Thomas Klein on Unsplash

While smartphones have been around for about twenty years, it was only ten years ago that Steve Jobs introduced the first iPhone and propelled smartphones down the runway to ubiquity. Google came into existence in roughly the same period. Thus Millennials, like my daughter, were born with smartphones in their hands in place of toys, and with search engines to answer their questions in place of the Encyclopedia Britannica or … a parent. But those kids were also born with bodies and eyes no different from mine, or my mother’s, or her mother’s before her. We’re only beginning to understand the physical and psychological effects of prolonged device use after a decade or so. What will we learn over twenty, thirty or forty years? I’m betting it won’t be good.

Our bodies and brains need periodic breaks from the screen. Interacting with the computer using voice instead of a keyboard and screen is one way of relieving our bodies without sacrificing productivity. Maybe sitting isn’t quite the new smoking, but freeing ourselves from the screen, if only occasionally, can only be good.

Reason 3: Yes, voice recognition is secure

In 2016, Barclays announced it was replacing passwords with voice recognition for online banking. As it happens, human voices are highly distinctive, and computers can discern them with remarkable accuracy; clearly accurately enough for Barclays to trust voice recognition over passwords. True, some experts fear that skilled impersonation, or even computer-aided voice imitation, could render voice recognition too risky for certain types of transactions. But let’s get real. Consider the best voice impersonator you’ve seen, then imagine closing your eyes, eliminating all the facial and physical gestures that enhance the performance, and comparing the impersonator’s voice to the real one. Do you think you’d be fooled? No way. Now consider that computers are far better than we are at dissecting and distinguishing voice characteristics. Frankly, I wouldn’t be too worried about a coworker accessing your paycheck from your own computer.
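For the curious, the mechanics behind “computers distinguishing voices” usually come down to comparing a numeric “voiceprint” (a feature vector extracted from audio by a trained model) against an enrolled template. The four-number vectors below are made up for illustration; real voiceprints have hundreds of dimensions, but the comparison itself is often just a similarity score against a threshold:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two feature vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify_speaker(enrolled, sample, threshold=0.95):
    """Accept the sample only if it closely matches the enrolled voiceprint."""
    return cosine_similarity(enrolled, sample) >= threshold

# Invented example vectors, not real acoustic features:
enrolled = [0.9, 0.1, 0.4, 0.7]
same_speaker = [0.88, 0.12, 0.41, 0.69]  # slight session-to-session variation
impersonator = [0.2, 0.9, 0.1, 0.3]      # a very different vocal profile
```

The genuine speaker’s vector drifts a little between sessions but stays well above the threshold, while even a good impersonator’s underlying vocal characteristics land far away, which is exactly why the “closed eyes” test in the paragraph above favors the machine.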

But even better, imagine a world where you don’t need a password for every application you use. We all wish we had a dime for every precious minute we’ve spent recalling and resetting passwords. Imagine needing only your unique voice to access all of your applications. Just think: we could stop writing passwords on sticky notes affixed to our monitors. I rest my case.

Reason 4: Voice is the next frontier in multi-tasking

I have a confession to make. I was once skeptical about mobile computing in the workplace. I figured I had access to a computer often enough to get my work done, so why would I need a mobile device? But then one day I caught myself checking email in the airport security line, noticed others doing the same (now it’s almost everyone), and realized that multitasking, not necessity, was driving mobile popularity.

Most office workers have at least two devices open at any time: their computer and their phone. Three if they keep separate phones for work and personal use. Four if they have a desk telephone (which, if connected via VoIP, is actually a computer interface made to look like a phone). Today we physically “toggle” between devices the way we toggle between email and PowerPoint.

Along the way, in large part thanks to these handy devices, work-life balance has evolved into work-life integration. We no longer try to separate work life and home life but instead try to integrate them as painlessly as possible. We toggle between work and life.

Voice recognition is the new frontier in multitasking because it allows us to use our hands and eyes for one task (or multiple) while simultaneously using our mouths and ears for another. Like the past-generation manager shouting instructions to his secretary just outside his office door, we now can instruct our computer devices to check the status of our flight while drafting one last email before heading to the airport. We then check for replies to our email while in transit to the airport, perhaps while texting farewell messages (OK, instructions) to our spouses and kids. I personally do all this using voice whenever I can, since I get car sick texting in the back seat of a moving vehicle.

Photo by Jordan Bauer on Unsplash

Go Ahead, Fantasize

Last night, while I was in bed, I rolled over and asked the question: “Do you love me?” I wasn’t speaking to my wife, who was glued to Downton Abbey in another room, but to Alexa, the computerized voice of my Amazon Echo. Of course, it wasn’t a sincere question; I was simply curious how the programmers had instructed Alexa to respond. Indeed, she gave a kind but noncommittal answer, no doubt sensitive to her millions of other suitors. Still, it was clear she understood my question. But what if she also recognized my voice, that unmistakable part of me that separates me from every other guy in the world? Would her answer be different? I can dream, can’t I?

Part Two of this series will focus on how voice recognition might play a valuable role in the workplace.


Jim Scully is Leapgen’s HR Delivery Transformation practice leader. He has spent the last 22 years in the field of HR service delivery, both as a consultant and corporate practitioner. Jim’s primary focus is designing and implementing HR service delivery models that achieve business results through technology-enabled process excellence. Jim was the founder of the HR Shared Services Institute (HRSSI); the services of which are now a part of Leapgen.

Leapgen is a global services firm that works with organizations to challenge established thinking, make important decisions and shape their future of work.

HR organizations that aren’t rethinking how they engage, manage and serve their people risk losing their talent. Leapgen partners with organizations to deliver a workforce experience that is as good as the customer experience. This means helping them meet the expectations of an increasingly mobile, social, multi-generational and multicultural workforce.

Shape The Future

Rethinking The Workforce Experience - One Leap At A Time

