AI in the Here and Now

AI might sound like the technology of the future, or at the very least like the cutting edge of what’s available in the tech world today. Self-driving cars, for example, are far from widespread consumer use, and a recent Uber crash that killed a pedestrian has led many industry pundits to wonder whether even this very limited AI deployment happened too quickly. Alexa and other home assistants are sold as AI products, but consumers expecting a friendly robot assistant out of a sci-fi novel were quickly disappointed.

Consumers might be surprised to learn that many of the mundane services and devices they use every day rely on AI functionality. Cutting-edge applications such as self-driving cars attract headlines, and fictional depictions of human-like AIs continue to dominate the popular imagination, but early generations of AI are already here, helping consumers get things done. Here are just a few ways ordinary consumers who don’t have the technology budget of Elon Musk still benefit from AI:

Online Recommendations

Anyone with an Amazon or Netflix account has occasionally had the uncanny sensation that the platform knows them. Both platforms recommend content based on consumers’ prior behavior. If you found your new favorite TV show through Netflix recommendations, you likely have an AI algorithm to thank. AI-based recommendation engines harvest customer data, such as past purchases, and use it to predict what consumers might purchase next by comparing each customer to similar customers and each purchase to similar purchases. As users continue to feed data into the platform in the form of, say, frequently watched Netflix shows, the platform keeps refining the algorithms behind its future suggestions.
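
To make that comparison-based prediction concrete, here is a minimal sketch of item-based collaborative filtering, one common technique behind recommendation engines. The ratings matrix and all values are invented for illustration; production systems work over millions of users and items.

```python
import numpy as np

# Toy ratings matrix: rows are users, columns are titles.
# 0 means the user hasn't watched or rated that title yet.
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine_similarity(a, b):
    """How alike two titles are, judged by who rated them and how."""
    norm = np.linalg.norm(a) * np.linalg.norm(b)
    return (a @ b) / norm if norm else 0.0

def recommend(user, ratings, top_n=2):
    """Score each unseen title by its similarity to titles the user liked."""
    seen = ratings[user] > 0
    scores = {}
    for item in np.where(~seen)[0]:
        scores[item] = sum(
            cosine_similarity(ratings[:, item], ratings[:, other]) * ratings[user, other]
            for other in np.where(seen)[0]
        )
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [int(i) for i in ranked[:top_n]]

print(recommend(user=0, ratings=ratings))  # titles most like what user 0 enjoyed
```

Real engines continuously re-run this kind of scoring as new viewing data arrives, which is why recommendations tend to improve the more you watch.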

Because better recommendation engines result in happier customers, companies continue to invest in improvements. IBM engineers, for example, are building recommendation engines that take the durability of goods into consideration, reflecting how consumers make some types of purchases (e.g. clothes, movie rentals) frequently and others (e.g. major appliances, cars) only occasionally.

Video Games

Video games have historically been an ideal digital environment for developing and improving artificial intelligence. They let AI systems practice scenarios over and over without demanding the resources that enacting those scenarios in real life would require. That’s why AI Gaming and DeepMind both use video game scenarios to further AI development.

But AI has contributed to the development of video games just as much as video games have contributed to AI. “If you have ever played a video game,” Harbing Lou wrote for Harvard, “you have interacted with artificial intelligence.” Non-player characters execute rudimentary decision-making in response to player input and game parameters. Games since the 1990s, for example, have used finite state machine algorithms that cause non-player characters to take different pre-programmed actions depending on player behavior. Lou writes that more sophisticated algorithms such as Monte Carlo tree search allow AIs to weigh the risk and reward of different responses to player actions as the player makes them.
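
As a rough illustration of how simple those pre-programmed behaviors can be, here is a minimal finite state machine for a hypothetical guard character. The states, distances, and transition rules are invented for this sketch, not taken from any particular game.

```python
from enum import Enum, auto

class State(Enum):
    PATROL = auto()
    CHASE = auto()
    ATTACK = auto()

def next_state(state, distance):
    """Transition rules: the guard's next state depends on its current
    state and how far away the player is."""
    if state is State.PATROL:
        return State.CHASE if distance < 10 else State.PATROL
    if state is State.CHASE:
        if distance < 2:
            return State.ATTACK   # close enough to strike
        return State.PATROL if distance > 15 else State.CHASE
    # ATTACK: keep attacking while the player is in range, else give chase
    return State.ATTACK if distance < 2 else State.CHASE

# The game re-evaluates the guard's state on every tick.
state = State.PATROL
for distance in [12, 8, 1, 1, 20]:   # simulated player distances over five ticks
    state = next_state(state, distance)
    print(f"player at {distance}: guard is {state.name}")
```

Monte Carlo tree search goes further: instead of following fixed rules like these, the AI simulates many possible futures from the current game state and favors the move whose simulations score best on average.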

AI and video games will unquestionably continue to enjoy a close relationship. AI is already assisting in the video game design process itself: an AI called Angelina can design its own game elements. And non-player characters guided by artificial intelligence are growing more and more dynamic.

Voice and Image Recognition

Between smartphones and home assistants such as Alexa, voice functionality has become mainstream in consumer devices. Communicating through spoken language comes so naturally to most humans that its true complexity is easy to underestimate. How do voice-activated programs recognize the messages people speak to them?

Adam Geitgey explains that while speech recognition software has existed in some form for decades, it has traditionally only worked well in restricted environments, such as an American-accented voice speaking a fixed set of words to trigger a specific command. These conditions don’t reflect how people really talk: humans express the same message using different languages, word choices, accents, tones, and so on.

Thanks to machine learning algorithms, speech recognition programs have become usable enough to build into consumer-facing products such as Alexa. These algorithms learn from raw data on successful and unsuccessful attempts to turn a sound file into a command. As they receive feedback on their success rate, they grow increasingly adept at turning speech delivered across a wide array of environments, accents, and languages into the commands the speakers intended.
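
As a loose illustration of that feedback loop, the toy sketch below trains a multiclass perceptron to map invented audio feature vectors to commands, nudging its weights after each mistake. Real assistants use deep neural networks over actual audio, so treat this only as a cartoon of the learn-from-feedback idea.

```python
import numpy as np

rng = np.random.default_rng(0)
COMMANDS = ["play_music", "set_alarm", "get_weather"]

def fake_features(command_idx):
    """Stand-in for audio preprocessing: each 'clip' becomes a noisy
    feature vector loosely clustered around its true command."""
    return np.eye(len(COMMANDS))[command_idx] * 3.0 + rng.normal(size=len(COMMANDS))

weights = np.zeros((len(COMMANDS), len(COMMANDS)))

# Training loop: predict, compare against the command the speaker
# intended, and adjust the weights whenever the prediction was wrong.
for _ in range(2000):
    true_idx = int(rng.integers(len(COMMANDS)))
    x = fake_features(true_idx)
    pred_idx = int(np.argmax(weights @ x))
    if pred_idx != true_idx:           # feedback: an unsuccessful attempt
        weights[true_idx] += 0.1 * x   # pull the intended command closer
        weights[pred_idx] -= 0.1 * x   # push the mistaken one away

test_clip = fake_features(1)
print(COMMANDS[int(np.argmax(weights @ test_clip))])  # most likely "set_alarm"
```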

Furthering speech and image recognition technology could have immense implications across multiple industries, including medicine. DeepMind, a leading Google-owned AI company, is developing a program that can help doctors diagnose and treat early-stage eye disease. The program has been fed thousands of high-resolution retinal scans paired with information about each patient’s eye health history. Thanks to this input and machine learning, the program is now “trained” to identify early signs of eye disease even when those signs are subtle enough that doctors might miss them. The program can learn to analyze other kinds of images, too: DeepMind says it will learn to interpret radiotherapy scans next.
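
DeepMind’s actual system isn’t public in a form we can reproduce here, but a bare-bones sketch of the same supervised-learning recipe, using PyTorch with randomly generated stand-in “scans” and labels, looks like this:

```python
import torch
import torch.nn as nn

# Invented stand-ins: 200 tiny grayscale "scans", each labeled
# 1 (early signs of disease) or 0 (healthy) from patient records.
images = torch.randn(200, 1, 64, 64)
labels = torch.randint(0, 2, (200,))

# A very small convolutional network; real medical models are far deeper.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(4),
    nn.Flatten(),
    nn.Linear(8 * 16 * 16, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Each pass compares the model's guesses against the known diagnoses
# and adjusts the weights to shrink the disagreement.
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```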

AI’s importance will undoubtedly continue to grow across multiple industries. But it’s also providing utility to everyday users in the here and now.