Are the machines sentient… yet? State of the art in cognitive systems


Originally published July 2015 at www.linkedin.com.

Fear mongering around the topic of Artificial Intelligence (AI) is a definite attention grabber these days. Caution is warranted, but we might be getting ahead of ourselves a little. AI academic and entrepreneur Andrew Ng said it best:

“I don’t work on preventing AI from turning evil for the same reason that I don’t work on combating overpopulation on the planet Mars,” he said. “Hundreds of years from now when hopefully we’ve colonized Mars, overpopulation might be a serious problem and we’ll have to deal with it…Well, it’s just not productive to work on that right now.”

On the other hand, it behooves us to take a step back and understand how far we have come with the advances in cognitive computing (as my fellow IBMers would rather call the topic). This article provides a non-technical primer on the state of the art in cognitive computing.

Modern artificial intelligence techniques have only recently, in the past 3–4 years, demonstrated the ability to perceive and understand the world through senses similar to those of human beings. This is partly due to the volume of information we can train these systems with and partly a reflection of the maturity and sophistication of computing hardware. Now to answer the inevitable question: what capabilities are commercially available today as a result of these recent advances?

Language Perception (Language): Natural Language Processing has come a long way over the past five years. Cognitive systems can comprehend a seemingly infinite variety of ways in which humans ask questions or give commands. Some newer startups focusing on the personal assistant space have demonstrated the ability to orchestrate multiple search-and-retrieval tasks based on complex requests from users. IBM Watson and, more recently, Google and Facebook have demonstrated that cognitive systems can ingest, read and interpret large volumes of knowledge, whether news articles, product manuals or even TED talks. Enterprises using commercial products like Watson Engagement Advisor, in conjunction with advanced analytics systems, enable consumers to get advice on and purchase products, like auto insurance, through digital channels. In Japan, Softbank is introducing robots powered by Watson to engage, interact and befriend consumers: a real-world ‘Robot & Frank’, minus the crime. Cognitive systems are getting better at understanding the meaning of the written word: answering questions based on large volumes of knowledge, acting as a research advisor to discover new insights and even summarizing financial news from earnings reports. Psycho-linguistic analysis capabilities such as Watson’s Personality Insights give cognitive systems the ability to create an in-depth personality portrait of an individual, simply by analyzing the language that person uses. It is only a matter of time before these subtleties in language are used to create even more highly personalized digital engagement, reflecting the persona of the human user, perhaps creating cognitive systems that become your ultimate BFF.

Visual Perception (Sight): Computer vision has shown staggering progress since 2012. Several of the leading cognitive solutions in the market, including Watson’s Visual Recognition and AlchemyVision, can identify a remarkable variety of images presented to them. Facebook’s AI research lab recently announced that its research tools can even generate new images of familiar objects based on patterns the system has learned over time, images that 40% of human respondents mistook for real photographs. One might joke that it’s overkill to use AI to recognize (or generate pictures of) cats on the Internet, but the reality is that enterprise applications of this capability can be truly transformational. Whether it is processing images of rooftops taken by drones to underwrite a homeowner’s policy more precisely, or skimming through consumers’ Instagram pictures (with permission, of course) to better personalize marketing offers, the applicability of these capabilities for financial institutions and other enterprises is only starting to unfold.

Speech Perception (Hearing): This is another cognitive capability at which computers are rapidly getting better. The past three years have demonstrated that computers can hear and perceive human speech nearly as well as human beings can. IBM Watson’s speech recognition capabilities recently pushed the frontier of the technology with a published 8% error rate, compared to 4% for human beings, an unprecedented result in computer speech recognition. The more sophisticated solutions in the market can understand spoken language, distinguish between multiple speakers and can even be trained on the vocabulary of cryptic industry domains such as healthcare and finance, something that hadn’t been possible in the past. Full, real-time human-to-computer conversations, even in specialized industry domains such as financial services, are already here for us to capitalize on.

Researchers in both academia and industry are rapidly introducing new cognitive computing capabilities into the market. IBM Watson, as an example, has made 30+ APIs available to developers and enterprises in the past 9 months alone for building cognitive-enabled applications. Insurers, banks and other enterprises have the opportunity to transform their businesses by introducing these capabilities, alongside other advanced analytics.

With the ability to see, hear and understand, computers are gradually, shall we say… getting more human. Is this a reason to be concerned? I would love to hear your thoughts on the topic.