Assisting the Masses

Tech giants such as Amazon, Google, and IBM want you to ask for help. Their Alexa, Assistant, and Watson, respectively, are among the virtual helpers that are bringing their smarts to millions of people, changing how we live and work along the way.

Flex
Intelligence Magazine
12 min read · Jan 25, 2017


Those who have used word processors may remember Microsoft’s animated advice-giving paper clip, Clippit — or “Clippy” as it later became known. It was first introduced as part of the Microsoft Office 97 software suite, and though it was largely criticized by the public — Time magazine named it one of the 50 worst inventions, calling it an “assumption-prone office assistant” — Clippy gained another kind of fame: It was the first mainstream virtual assistant.

Twenty years after Clippy insisted on trying to teach us how to compose a letter, advancements in artificial intelligence have led to a new age of computer-based assistants, from chatbots to stationary devices, that can help us conduct business from the couch. They are transforming the ways we set our schedules, what we listen to, how we work and communicate, and even how we interact with the real world. While AI assistants still have a way to go before they become a prominent presence in our lives, this is a thrilling time to scan the market and see who — or what — is ready to work for you.

Since Apple introduced Siri in 2011, changing how iPhone users communicate with their smart devices, virtual assistants have been at home on our smartphones. Microsoft’s Cortana, a digital assistant that set itself apart with its ability to prompt actions based on what you’re doing, such as, “Remind me to buy oranges when I arrive at the store,” first launched in 2014. Samsung’s upcoming Galaxy S8 is expected to include an assistant built on Viv, a platform created by the co-founders of Siri, the startup Apple acquired. Facebook is also fine-tuning Facebook M, its nascent text-based assistant. Last year, CEO Mark Zuckerberg announced he’d made progress in building Jarvis, an AI assistant that lets him control his own home through text or audio commands. Clever virtual assistants offer a line out to the world at large: ask about a new movie, and Siri or Cortana can pull up showtimes at a nearby theater. After several years on the market and steady improvement, smartphone-based assistants have become an expected feature. But as AI grows more advanced, virtual assistants are branching out into new frontiers.

Services like Siri and Cortana have recently moved beyond the basics of weather updates and are now capable of handling more complex tasks like managing a calendar. Products that take up residence in our abodes, like the Amazon Echo and Google Home, are a sign of a larger shift in how humans interact with machines.

Off the Phone, Into the Home

The Amazon Echo was the first screenless AI-based assistant device specifically designed for use in the home. Last summer, in a Q&A with Fortune, Amazon’s senior vice president of devices, David Limp, revealed that the company’s use of machine-learning algorithms to create product recommendations on its website led to thinking about how those algorithms could be used in other areas. The result was an idea for a voice-controlled device that could harness the power of the cloud and, in Limp’s words, “do exactly what the Star Trek computer did.”

In 2011, when Amazon began experimenting with what would eventually become the Echo, there were no major screenless virtual assistants on the market. While there were clear models for how to interact with AI-based virtual assistants through a smartphone — Oh, hey, Siri — an ambient home device that could respond naturally to verbal requests was new. A big hurdle for the Echo to overcome was latency, the time it takes the AI assistant Alexa to respond to a user query. Back then, typical voice-recognition software took 2.5 to 3 seconds to respond, Business Insider reported. Amazon CEO Jeff Bezos gave his team the daunting target of getting that down to 1 second. Through thousands of tests and data analysis, the Echo team managed to reduce latency to 1.5 seconds. To duplicate the responsiveness of a real assistant, Echo technicians performed what they called a “Wizard of Oz experiment.” They fed multiple variations of questions to an off-site “wizard,” a real person who sent answers back to test subjects through the Echo’s voice. Following several years of prototype testing, the Echo launched to invited users in 2014 before an official rollout in 2015.

After the Echo had spent about seven months on the market and starred in a high-profile Super Bowl 50 ad, some e-commerce experts predicted that it could be Amazon’s next billion-dollar business. The device quickly became a top seller, with more than 5 million units purchased by late 2016. Reviews were also overwhelmingly positive. New York Times tech columnist Farhad Manjoo wrote, “The Echo has morphed from a gimmicky experiment into a device that brims with profound possibility. The longer I use it, the more regularly it inspires the same sense of promise I felt when I used the first iPhone — a sense this machine is opening up a vast new realm in personal computing, and gently expanding the role that computers will play in our future.”

According to Limp, utilizing the power of the Internet of Things was key in developing the Echo. “We saw internally how fast the cloud was growing and how quickly [Amazon Web Services] was taking off, and how efficient a developer could be by using Compute and Storage,” he told Fortune. Limp challenged the team to envision a not-so-distant future where everyone has infinite storage and to create a service that would improve and add value to customers’ lives. To that end, the Echo was designed to be compatible with connected appliances like a Nest thermostat and the web-based productivity service IFTTT, allowing the device to function as a hub from which to manage the home in addition to daily life.

The idea of a full-service hub was so central that Amazon built the Echo as an open platform. With just a few lines of code, third-party developers can add what Amazon calls a “skill” to the Echo, widening the scope of tasks that Alexa can do for you. That means you can say, “Alexa, ask Uber to get me a car,” or “Alexa, ask Kayak where I can go for $400.” Bolstered by the Alexa Fund, a $100 million pool of investments intended to support developers, manufacturers, and startups, the Echo now has more than 1,000 connected skills, ranging from cooking to opening the garage door. Since the Echo’s release, Amazon has also added to its product lineup the Dot, a device that allows Alexa to run through home speakers, and the Tap, a more portable, battery-powered version of the Echo.
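How small can “a few lines of code” be? Below is a minimal sketch of a custom skill’s backend, written as an AWS Lambda handler in Python. The intent name, the replies, and the garage-door scenario are invented for illustration; the request and response envelopes follow the JSON format the Alexa Skills Kit expects.

```python
# A minimal sketch of a custom Alexa skill backend as an AWS Lambda handler.
# The intent name (GarageDoorIntent) and speech text are hypothetical; the
# envelopes follow the Alexa Skills Kit's JSON request/response format.

def lambda_handler(event, context):
    request = event["request"]

    if request["type"] == "LaunchRequest":
        speech = "Welcome. Try asking me to open the garage door."
    elif (request["type"] == "IntentRequest"
          and request["intent"]["name"] == "GarageDoorIntent"):
        # A real skill would call out to the garage-door vendor's API here.
        speech = "Okay, opening the garage door."
    else:
        speech = "Sorry, I didn't catch that."

    # Alexa expects this JSON envelope back from the skill endpoint.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```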

Google’s virtual assistant products haven’t been available to consumers for as long as Amazon’s have, but the tech behemoth is still a leader when it comes to the utility and breadth of responsive AI. The company’s offerings include the Google Now assistant, which is available in the Google app for Android and iOS, and Google Assistant, which is integrated into the Allo instant messaging app. Assistant also powers the Home device, the largest and most comprehensive AI assistant from the company, which was released last November. “It will let anyone in the family, kids or adults, have a conversation with Google,” Mario Queiroz, the company’s vice president of product management and the man behind the Home device, told a crowd of more than 7,000 at the device’s debut in Mountain View, California. Queiroz described the small, predominantly white device, which offers several interchangeable base colors, as “a beautiful addition to any room in your house.”

Google’s technological ubiquity makes integrating its assistant into daily life less challenging than it would be for other companies. As Forbes contributor Harold Stark explained, Google “has been a significant part of our everyday lives for quite some time, rivaled only by Microsoft and Apple. We can’t go past a single hour of our average day without looking something up on its massive search engine.” Home doesn’t quite have the third-party support that the Echo boasts, and some users have expressed frustration that only one Google account can be accessed through the device, but Google seems intent on turning Home into an important part of users’ households.

The company has tapped engineers from Nest, the smart thermostat company Google acquired in 2014, for its Home team, signaling a focus on increased connectivity and the IoT. “[Google Home] draws on 17 years of innovation in organizing the world’s information to answer questions which are difficult for other assistants to handle,” Queiroz told The New Yorker.

Google has been experimenting with voice-recognition software since GOOG-411, the telephone directory assistance service it launched in 2007. The strength of its virtual-assistant offering lies in the company’s commitment to advancing artificial intelligence, illustrated by the slate of robotics and AI companies it has purchased over the past several years, including the $650 million acquisition of DeepMind Technologies in 2014. DeepMind, a London-based AI firm focused on machine learning, has been widely praised as a revolutionary force in the field, demonstrating skills that range from creating a shockingly realistic human voice to spotting people at risk of developing life-threatening conditions. Its technology, combined with the wealth of data that Google collects and stores on its servers, makes deep learning, a crucial component of AI, a reality.

Through neural-net training and deep-learning techniques, Google Home’s speech recognition has improved to the point that the device can learn the nuances of our behavior from the human voice itself. Conversing with a device may seem awkward at first, but advances in the technology have made AI responses feel more human. The key is deep learning, an AI training practice that involves feeding vast quantities of data into a software system so that it learns to recognize patterns and make inferences on its own. Advances in cloud computing and data storage have made it possible to aggregate enough data for deep learning to teach AI systems to understand speech and recognize images. This is the difference between traditional programmatic computing and the cognitive computing we are moving toward with all these AI advancements.
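Deep learning is easier to grasp in miniature. The sketch below, written in plain Python with NumPy and unrelated to Google’s actual systems, trains a tiny two-layer network to infer the XOR pattern purely from examples; the layer size, learning rate, and step count are arbitrary illustrative choices.

```python
# A toy illustration of deep learning's core idea: rather than hand-coding
# rules, we show a small neural network labeled examples (here, the XOR
# pattern) and let gradient descent adjust its weights until it has
# inferred the pattern on its own.
import numpy as np

rng = np.random.default_rng(0)

# Four input examples and the XOR label for each one.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 units, with randomly initialized weights.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10_000):
    # Forward pass: compute the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight to shrink the prediction error.
    delta_out = (out - y) * out * (1 - out)
    delta_h = (delta_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ delta_out
    b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_h
    b1 -= lr * delta_h.sum(axis=0)

# The predictions drift toward [0, 1, 1, 0]: a pattern no one spelled out.
print(out.round(2).ravel())
```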

Using this deep learning, the Google Home device can master complex tasks like language translation, or alert you to upcoming calendar appointments by analyzing data from your Google account. Google Assistant, the device’s brain, illustrates not just what modern AI assistants can do, but what they should all be able to do in the future: come up with intuitive, intelligent, and predictive solutions to complex problems. It’s the enormous difference between displaying a search result and providing an answer.

Beyond the Home

The role of a virtual personal assistant goes beyond the home, something that companies like IBM certainly understand. Early last year, the software giant brought on Weather Company vet David Kenny to head Watson, its intelligent computer system capable of understanding and answering questions posed in natural language. Watson made headlines in 2011, when it defeated Jeopardy! champions Brad Rutter and Ken Jennings to win $1 million on the television quiz show. Since then, IBM has taught Watson to become an invaluable tool across the enterprise sector, changing how fields from finance to health care operate. “Watson is in the cloud, and increasingly embedded in the real world as part of the Internet of Things,” says Kenny. “If you think about the number of sensors feeding data to us, the numbers are incredible. There are sensors in vehicles, in facilities, in our mobile devices and wearable technology, and as we get more sensors — and thus more data — we’re better able to understand the world around us and are consequently better able to predict what will happen.”

IBM has positioned Watson to operate across industries from insurance to education to environmental protection. Institutions such as the Mayo Clinic and Memorial Sloan Kettering Cancer Center have already started using Watson to analyze data, make decisions, and provide support and information at speeds and levels of detail that an individual doctor would find impossible to meet. The analytics from the IBM unit Watson Health could also lead to a level of health care that patients in rural or developing countries can’t usually access. After a series of business acquisitions, IBM now has health-related data for approximately 300 million patients worldwide. Besides being smart, Watson is incredibly well rounded, with experience designing clothes, writing recipes, and even fighting cybercrime. And while Watson’s power and scope make the tool ideal for large enterprise systems, the platform has made inroads into the consumer sphere. Over the holidays, at Minnesota’s Mall of America, IBM debuted the Experimental List Formulator, or ELF, a personalized shopping chatbot powered by Watson.

“When it comes to Watson, we always want to conduct deep, impactful work on big societal challenges,” Kenny says. He thinks that can be achieved through cognitive computing, which he says introduces a new level of collaboration between person and machine, one that augments and expands both human intelligence and creativity. IDC predicts that 75% of developers will include cognitive computing and AI functionality in one or more of their applications by 2018. With this in mind, IBM has expanded the Watson Developer Cloud, a self-service platform that offers a range of Watson APIs. “It takes a lot of hard science to create cognitive technologies that work in the real world, and it requires a lot of sophisticated engineering to build the platform approach,” says Kenny. “But a superior cognitive computing platform must hide this complexity to embolden developers and to empower its users. This is what we mean by ‘self-service AI.’” The goal, Kenny says, is to create environments where developers can build and launch apps — whether they’re a data scientist at a big bank or a coder in a hospital system. Developers, in turn, must make AI easy to use and focus on natural human interaction so this “new era” of computing can take hold.
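For a sense of what “self-service AI” looks like from a developer’s chair, here is a minimal sketch of calling one Watson service over REST. The endpoint layout, version date, and credentials follow the conventions Watson Developer Cloud used around the time of writing; treat them as illustrative assumptions rather than current values.

```python
# A minimal sketch of calling a Watson API over REST, assuming the
# per-service credential and endpoint conventions Watson Developer Cloud
# used at the time. The URL, version date, and credentials below are
# illustrative placeholders, not guaranteed current values.
import requests

WATSON_URL = "https://gateway.watsonplatform.net/tone-analyzer/api/v3/tone"

response = requests.post(
    WATSON_URL,
    params={"version": "2016-05-19"},               # API version date (assumed)
    auth=("service-username", "service-password"),  # per-service credentials
    json={"text": "I love how easy this assistant makes my day."},
)

# The service returns JSON scoring the emotional and language tones it
# detected in the submitted text.
print(response.json())
```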

Other companies — such as IPsoft, with its customer-service-oriented AI assistant, Amelia — are focusing on the business world, too. Last summer, Google announced Springboard, a digital assistant designed to help enterprise customers. The rise of AI assistants in the workplace is important for both users and the technology itself: the more comprehensive the adoption, the more opportunity for these AI programs to learn, advancing their cognitive skills and our comfort with them.

“For IBM, ‘AI’ stands for augmented intelligence. Watson is man plus machine,” says Kenny. “We see Watson’s value as extending and enhancing human expertise, whether it’s the oncologist, the lawyer, the teacher, the investment adviser, the cybersecurity analyst, the filmmaker, or the chef.”

Whether we’re trying to save lives or just hail a ride, cognitive computing and AI assistants can provide solutions to our dilemmas. As these systems continue to advance, there are even greater implications for their roles in our daily lives, from self-driving cars to advanced medical decisions. Thanks to Amazon, Google, IBM, and more, the digital assistant of the future is virtually limitless in its capabilities. We’ve come a long way from that talking paper clip.


If you enjoyed this article and would like to read more, please check out www.theintelligenceofthings.com

INTELLIGENCE explores the concept of co-innovation and the “Intelligence of Things,” which Flex sees as the building blocks of the post-Information Age era. More at flex.com


Flex is a leading sketch-to-scale™ solutions company that designs and builds intelligent products for a connected world.