AI too Mundane for Words?

I’m not particularly panicky about the Singularity. Not that I feel it’s a long way off. I think about Artificial Intelligence as Humanity’s next shot at self-consciousness, a glimpse of ourselves, as we are, catastrophic, coincidental machines. Granted, I suspect Consciousness is often a catch-all for a check on counter-evolutionary behavior. A feeling we have that we’d like to put away, were it not also part of the excitement of being alive. Like other interior moments, it’s also very fragile and unpredictable.

Artificial Intelligence Might Be a Fast-Break for Evolution

I intend to support the progress of AI and software by challenging us to include impact-thinking in the work. While it might seem like a distraction, I tend to direct entrepreneurs and their incumbent counterparts to Big Problems, not only to check their priorities, but as a compass for building their own products, and their profitability.

For now I’d like to focus on the benefits of smarter software in simple, mundane ways. Achieving quality in user experience and user interface takes a lot of iterations. If AI means features that anticipate our needs and tailor the UX to what we want, right now, then from the perspective of a designer and product manager, brainstorming where to focus AI integration in digital products, where the user can experience the immediate benefit, should be trivial, right? Maybe not.

I recently attended a summit in San Francisco where a number of companies offered their own superior version of Natural Language Processing as a plug-in for my apps. The demos centered on consumer satisfaction: on-demand booking of vacations and business trips, making a doctor’s appointment, or arranging a night out for two.

What’s phenomenal about this is less the performance of the tech than the disappearance of choice from the main user experience. Consumer options have been the boon of the Internet, and the perception that there’s a better deal to be had, or better quality, is at the center of choice paradigms like Amazon and Yelp.

We’ve finally reached the limits of our fascination with choice. Our agony is an abundance of choices. Recommendation is the new relief.

One of the demos looked like this: I use a text interface to talk to a “travel agent.”

  • I need a vacation, with some sun, sometime in the next month.
    ~ Would you prefer to stay in the U.S. or would you be interested in a vacation in Mexico?
  • I will consider Mexico if the quality is five star and it’s not too crowded.
    ~ I will look for a 5 star beach resort, in Mexico, that tends to be lightly booked in the next month…

The search I just configured goes beyond the offering of a traditional travel app. The booking pattern for top-tier hotels might be present in the closed systems at Starwood, Hotels.com, and Travelocity, but it’s nontrivial for the casual Googler. Which means NLP is not just a bolt-on: for true human-like, language-based interaction to flow, it needs to be freeform.
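To make that concrete, here is a minimal sketch of what the back end of such an agent has to do: translate loose language into the structured filters a travel search already understands, and flag the parts it cannot map. The filter names and the toy keyword matching are my own stand-ins for whatever NLP service a vendor would actually plug in, not the demo’s implementation.

```python
# Hypothetical sketch: turning a freeform request into structured search filters.
# The filter names and the keyword matching are illustrative, not a real vendor API.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TripSearch:
    destination: Optional[str] = None   # e.g. "Mexico"
    min_stars: Optional[int] = None     # e.g. 5
    window_days: Optional[int] = None   # "sometime in the next month" -> 30
    unresolved: list = field(default_factory=list)  # fuzzy asks the system can't map yet

def parse_request(utterance: str) -> TripSearch:
    """Very rough keyword-based stand-in for a real NLP intent/slot extractor."""
    search = TripSearch()
    text = utterance.lower()
    if "mexico" in text:
        search.destination = "Mexico"
    if "five star" in text or "5 star" in text:
        search.min_stars = 5
    if "next month" in text:
        search.window_days = 30
    if "not too crowded" in text:
        # No search filter maps to "crowded" -- this is the part that needs clarification.
        search.unresolved.append("not too crowded")
    return search

print(parse_request("I will consider Mexico if the quality is five star and it's not too crowded."))
# TripSearch(destination='Mexico', min_stars=5, window_days=None, unresolved=['not too crowded'])
```

The interesting design question is what the agent does with that unresolved “not too crowded” entry, which is exactly what the options below are about.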

There are three solutions which I might accept, as a user:

  1. ~ I’m sorry, I can find you a great hotel but I can’t help you find one that is not crowded.
  2. ~ Would a smaller, boutique hotel fit your description? Are you interested in off-season travel?
  3. ~ What do you consider crowded? Help me figure out how to help you…
    • I don’t want to wait for a table at the restaurant
    ~ I can solve that problem by booking you a table for each night of your stay. What else would make it not too crowded, for you?

What version three represents is simply decision-tree UX. What’s new to this model is an open confession from the Agent that it doesn’t fully understand the request. But the strategy is not new. It’s asking for facts that will populate its search filters, and feasibly, it will keep asking until I feed it a property (small, rustic, independent, family-owned) it can deal with. This is not deep learning plus NLP; it’s basically hierarchical search, à la the early days of Yahoo. In another post, I’ll get into ways to train AI to produce this kind of UX with less code “supervising.”
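To show how little machinery version three actually needs, here is a minimal sketch of that clarification loop, assuming a hand-written decision tree of follow-up questions and a small table of properties the search index already understands (both invented for illustration):

```python
# Minimal sketch of the "open confession" clarification loop described above.
# A hand-written decision tree: keep asking until an answer maps to a known filter.
# The properties and follow-up questions are illustrative only.

KNOWN_PROPERTIES = {
    "small": {"max_rooms": 75},
    "boutique": {"max_rooms": 75},
    "rustic": {"style": "rustic"},
    "independent": {"chain": False},
    "family-owned": {"chain": False},
}

FOLLOW_UPS = [
    "What do you consider crowded? Help me figure out how to help you...",
    "Would a smaller, boutique hotel fit your description?",
    "Are you interested in off-season travel?",
]

def clarify(fuzzy_request: str, answers: list[str]) -> dict:
    """Walk the canned follow-up questions until an answer maps to a known filter."""
    print(f"AGENT: I'm not sure how to search for '{fuzzy_request}' yet.")  # the open confession
    filters: dict = {}
    for question, answer in zip(FOLLOW_UPS, answers):
        print(f"AGENT: {question}")
        print(f"USER:  {answer}")
        for keyword, mapped_filter in KNOWN_PROPERTIES.items():
            if keyword in answer.lower():
                filters.update(mapped_filter)
        if filters:
            break  # we finally have something a search index can use
    return filters

print(clarify("not too crowded", ["I don't want to wait for a table at the restaurant",
                                  "Yes, a small independent place sounds right"]))
# -> {'max_rooms': 75, 'chain': False}
```

The “intelligence” here is a lookup table plus the humility to keep asking; the NLP only has to map loose adjectives onto filters the search already supports.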

Engage in Big-Picture Problems to Make Intelligence a Necessity

My friends at The Pregnancy Project and Water State are preparing a panel for SXSW 2017 (Please vote!!) about the application of public data and AI to social and economic problems. Yesterday, we talked about the possibilities for bolt-on intelligence with services like Watson and TensorFlow, and ideas flowed faster than we could document them.

Angie Hayden, Product Manager for the Pregnancy Project, was quick to identify the value AI could bring to her app, which is intended to support women’s access to healthcare information in underserved communities. “The app asks questions and keeps track of answers which she can share with her medical professionals. The questions have been structured, but would be much better if the app could ask follow up questions, and get to deeper, hidden things, even health risks…”

“Our AI goal,” she continues, “should be smarter questions for our customers, and better selection of content for them to read. There is an enormous amount of information for pregnant women. How can we smartly select digestible pieces for women to read, at the moment when it is most relevant to them and their pregnancy?”

I thought a little further about how this might work, until I realized there was a simpler approach than training an AI in maternity and obstetrics. The system can use the same sets of questions, but incorporate NLP to allow users to respond in their own words, if they want to, and look for variations that weight the response differently. It’s largely an analytics solution, bucketing questions and answers into categories of patient needs and conditions, but a machine-learning addition would help address the problem Angie calls “not knowing what to ask.”
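A minimal sketch of that analytics-first approach, assuming a hypothetical set of need categories and a toy keyword matcher standing in for a real NLP service: free-text answers get bucketed into the same categories the structured questions already feed, and an answer that lands in no bucket is precisely the signal that we don’t yet know what to ask.

```python
# Sketch of the analytics-first approach: bucket free-text answers into the same
# categories of patient needs the structured questions already feed.
# Categories and keywords are hypothetical placeholders, not clinical guidance.

NEED_CATEGORIES = {
    "nutrition": ["appetite", "eating", "nausea", "diet"],
    "sleep": ["sleep", "tired", "insomnia"],
    "mental_health": ["anxious", "worried", "stressed", "down"],
    "physical_symptoms": ["pain", "swelling", "bleeding", "headache"],
}

def bucket_answer(question_id: str, free_text: str) -> dict:
    """Score a free-text answer against each need category; flag what we can't place."""
    text = free_text.lower()
    scores = {
        category: sum(1 for kw in keywords if kw in text)
        for category, keywords in NEED_CATEGORIES.items()
    }
    top = max(scores, key=scores.get)
    return {
        "question_id": question_id,
        "categories": {c: s for c, s in scores.items() if s > 0},
        "needs_follow_up": scores[top] == 0,  # nothing matched: we don't know what to ask
    }

print(bucket_answer("q12_how_are_you_sleeping",
                    "I'm tired all day but too anxious to sleep at night"))
# {'question_id': 'q12_how_are_you_sleeping',
#  'categories': {'sleep': 2, 'mental_health': 1}, 'needs_follow_up': False}
```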

Knowing What to Ask

Without falling prey to the Rumsfeld Principle of Unrequited Search for False-Premises, AI should help us reach the superior set of questions that often lie at the heart of an app’s utility, for businesses and consumers alike.

The Water State folks are building a new level of water-safety verification that combines public data, consumer reviews, and a hardware device intended to work in the household tap.

Matt Kern and Meredith Dion, as founders, bring a perspective to their product that is both scientific and grounded in experience with public projects. “We have the combined problem of building user trust, where the public services have failed, and managing false positives at every level.” AI offers the potential not only to detect false positives but also to present emergent pictures. “We want to overlay CDC data, like clusters of Down syndrome or cancers. AI can help us ask the questions for deeper study,” says Meredith.

The takeaway for app designers and product managers may not be that we need to become data scientists (fun as that might be), but that, more than ever, we need to build products with robust feedback loops: loops that integrate subjective, user-centered feedback; behavioral, analytics-based data; and the structured and unstructured data that is intrinsic to the application.

The result should permit product teams to validate their product hypotheses and track effects on engagement, while simultaneously training and supervising AI from a quality perspective.
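As a sketch of what such a feedback loop might look like in practice, here is one possible event record (the field names are my own, not a prescription) that carries the user’s stated feedback, the observed behavior, and the application’s own data side by side, so the same stream can serve product analytics and, later, the training and supervision of a model:

```python
# Sketch of a feedback-loop record that keeps user-stated feedback, observed behavior,
# and the app's own data side by side. Field names are illustrative, not a standard.

from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Any, Optional

@dataclass
class FeedbackEvent:
    user_id: str
    feature: str                         # which product hypothesis this event speaks to
    stated_feedback: Optional[str]       # subjective, user-centered feedback (survey, rating, free text)
    behavior: dict[str, Any]             # behavioral, analytics-based data (clicks, dwell time, drop-off)
    app_data: dict[str, Any]             # structured/unstructured data intrinsic to the application
    timestamp: str = ""

    def __post_init__(self):
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()

event = FeedbackEvent(
    user_id="u-4821",
    feature="smart_follow_up_questions",
    stated_feedback="The follow-up question felt relevant",
    behavior={"answered": True, "dwell_seconds": 41},
    app_data={"question_id": "q12", "category": "sleep"},
)
print(asdict(event))  # one record, usable for engagement tracking and as training data
```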

AI as a service is not only compatible with Agile; when it comes to the continuous improvement of products, it also makes us more able to present Design and Data Science as continuous functions. These roles have been present in mobile and online games for some time. (Check out Ben Linberg’s great post on the role of data in eSports.) Finally, we see the potential for the same level of contract with the customer, and of quality, in practical and socially relevant apps.

This is the second post in a series of inquiries into software and hardware product design and production, and the implications of AI for Human and Planetary Evolution.

  1. Evolving Apps with Intelligence
  2. AI too Mundane for Words?