IxDA London meetup — Algorithms, Machine Learning, AI and us designers

Sanita Lustika
theuxblog.com

--

Talk 1

The first talk was by Giles Colborne from cxpartners.

He talked about four ways we can improve user experience using algorithms and machine learning:

  • Shortcutting user input. [Think taking a photo of your food for a calorie estimate instead of entering it manually.]
  • Identifying patterns in data. [Think Airbnb's smart pricing informing the optimal price hosts should ask for.]
  • Anticipating user needs. [Think Google Now telling you when to leave for a meeting if there is a traffic jam, or informing you of a plane delay before your airline does.]
  • Coordinating complex gadgets. [Think IoT homes, with Amazon Echo or Google Home controlling everything around the house.]

In all of these, Giles emphasised the role of the algorithm, which, like a cog, sits between the raw data being collected and the extracted, packaged-up information the user acts upon.
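To make the "cog" idea concrete, here is a minimal sketch of the first example above, the photo-to-calories shortcut. The function names, the stubbed-out recogniser, and the calorie table are all my own illustration, not anything shown at the talk:

```python
# A minimal sketch of the "cog": raw data in, packaged-up, actionable
# information out. The recogniser is a stub standing in for a real
# image-recognition model.

CALORIE_TABLE = {"banana": 105, "yoghurt": 150}  # made-up lookup table

def recognise_foods(photo_pixels: bytes) -> list[str]:
    # Stand-in for a real image-recognition model.
    return ["banana", "yoghurt"]

def estimate_calories(photo_pixels: bytes) -> str:
    """Shortcut manual input: photo in, calorie estimate out."""
    foods = recognise_foods(photo_pixels)               # raw data -> patterns
    kcal = sum(CALORIE_TABLE.get(f, 0) for f in foods)  # patterns -> information
    return f"Roughly {kcal} kcal ({', '.join(foods)})"  # info the user can act on

print(estimate_calories(b"..."))  # Roughly 255 kcal (banana, yoghurt)
```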

One of the points Giles made was about balancing the complexity of the data needed for analysis against the value you get out. Complexity means time and effort, so you end up looking for the sweet spot: the minimum data that gets you the desired information.

Another point was about deciding between two strategies (see the sketch below):
- High bias — you aim to be precise, but if you get it wrong, it is noticeably wrong. [Think bus stop times or airplane arrival times.]
- High variance — you aim for an approximation, but you will get it right almost every time. [Think a step tracker.]
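As a rough illustration of the two strategies (the numbers, rounding rules, and function names are mine, not from the talk):

```python
# Sketch of the two presentation strategies. A precise answer is
# committed and visibly wrong when it misses; a coarse answer is
# almost always defensible.

def precise_eta(predicted_minutes: float) -> str:
    # Commit to an exact answer: great when right, but a bus that is
    # four minutes late makes this noticeably wrong.
    return f"Bus arrives in {round(predicted_minutes)} min"

def approximate_steps(step_count: int) -> str:
    # Round to a coarse bucket: nobody can dispute "about 7,000".
    return f"About {round(step_count, -3):,} steps today"

print(precise_eta(6.4))         # Bus arrives in 6 min
print(approximate_steps(7312))  # About 7,000 steps today
```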

One of the most interesting points Giles made was about the etiquette of suggestions, which we need to consider when designing interactions for AI. In a retail context, this means making smart suggestions that respect the social rules of the society around them.

An example of a fail he mentioned was offering wine to go with a diaper pack, or Durex to go with the cucumbers in your online shopping basket. These were real-life examples of retail fails, to say the least.
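A hypothetical sketch of what an "etiquette layer" over co-purchase suggestions might look like; the rule list and function names are entirely my own illustration:

```python
# Sketch of an etiquette check: co-purchase data may say these pairs
# sell together, but a social-rules veto stops them reaching the
# basket page. The awkward-pair list is a made-up example.

AWKWARD_PAIRS = {
    frozenset({"diapers", "wine"}),
    frozenset({"cucumbers", "condoms"}),
}

def polite_suggestions(basket: list[str], candidates: list[str]) -> list[str]:
    """Drop any suggestion that forms a socially awkward pair with the basket."""
    return [
        c for c in candidates
        if not any(frozenset({c, item}) in AWKWARD_PAIRS for item in basket)
    ]

print(polite_suggestions(["diapers"], ["wine", "baby wipes"]))  # ['baby wipes']
```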

Another great example of a smarter way to analyse data was Spotify, which weights the music you listen to repeatedly more heavily than the odd play that may not reflect your usual taste.
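Here is a rough sketch of that weighting idea; it is purely my own illustration, not Spotify's actual algorithm:

```python
# Sketch of weighting repeat listens over one-offs when building a
# taste profile. The log-scaled score is a made-up choice: repeated
# plays keep counting, while a single stray play barely registers.
from collections import Counter
import math

def taste_profile(listen_log: list[str]) -> dict[str, float]:
    plays = Counter(listen_log)
    return {artist: math.log1p(count) for artist, count in plays.items()}

log = ["radiohead"] * 30 + ["party_playlist_oneoff"]
print(taste_profile(log))
# {'radiohead': 3.43..., 'party_playlist_oneoff': 0.69...}
```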

In the Q&A, being able to manage users' expectations of the machine they are interacting with came up a lot. Just like the etiquette of interactions, considering how to make the machine not feel like a human is important for avoiding the uncanny valley.

Designing conversational interactions is slowly becoming a practice, not just theory. An interesting example from my own experience is the HMRC call centre, where you are asked to describe your problem to a machine instead of pressing a number. To be honest, it threw me off the first time, as it did not feel natural to explain what I needed to a machine in full sentences rather than keywords. I guess the bot design rules Intercom describes in their blog post on Medium would apply when constructing an experience like this.

Talk 2

The second talk was by Ed Moffatt and John Morgan from IBM Watson.

Ed opened the talk with a great question: he put up a number, without any context, and asked whether it was good.

And the answer: without context, it is impossible to tell. Even when he revealed the question the number derived from ('How well does Watson perform when answering something it has not been trained on?'), it is still hard to know whether it is good without knowing how accurate and expensive the system before Watson was.

When training Watson, the things the people training it were keenest to know were the following (see the sketch after the list):
- Are users satisfied with the experience?
- What are the most popular topics users ask about?
- What new user questions are emerging?
- Which words did Watson not understand?
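As a rough illustration, here is one way some of these questions could be pulled out of conversation logs. The log record shape and field names are my assumptions, not anything from the talk:

```python
# Rough sketch of answering the trainers' questions from conversation
# logs. The record fields ("topic", "unrecognised", "thumbs_up") are
# illustrative assumptions, not an actual Watson log format.
from collections import Counter

logs = [
    {"topic": "billing", "unrecognised": [], "thumbs_up": True},
    {"topic": "billing", "unrecognised": ["chargeback"], "thumbs_up": False},
    {"topic": "returns", "unrecognised": [], "thumbs_up": True},
]

satisfaction = sum(r["thumbs_up"] for r in logs) / len(logs)    # are users satisfied?
popular = Counter(r["topic"] for r in logs).most_common(3)      # most popular topics
unknown = Counter(w for r in logs for w in r["unrecognised"])   # words not understood

print(f"satisfaction: {satisfaction:.0%}")  # satisfaction: 67%
print(popular)                              # [('billing', 2), ('returns', 1)]
print(unknown)                              # Counter({'chargeback': 1})
```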

One of the principles the guys mentioned was not to try to answer questions the system is not sure about; instead, offer a couple of options or point the user in a direction.
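A minimal sketch of that principle, assuming a made-up confidence threshold and data shape (this is not the actual Watson API):

```python
# Sketch of the "don't bluff" principle: answer only above a confidence
# threshold; otherwise offer the top options or a direction. The
# threshold and the (answer, confidence) shape are my own assumptions.

CONFIDENCE_THRESHOLD = 0.8

def respond(candidates: list[tuple[str, float]]) -> str:
    """candidates: (answer, confidence) pairs, sorted best first."""
    best_answer, confidence = candidates[0]
    if confidence >= CONFIDENCE_THRESHOLD:
        return best_answer
    options = " or ".join(answer for answer, _ in candidates[:2])
    return f"I'm not sure I understood. Did you mean: {options}? Our help pages may also point you in the right direction."

print(respond([("You can reset your password at...", 0.93)]))
print(respond([("Billing FAQ", 0.41), ("Refund policy", 0.38)]))
```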

Both Ed and John also mentioned the uncanny valley, with Google Now being the closest thing to the peak, just on the uncanny valley side of it.

One of the main challenges the guys at IBM Watson currently face is creating a system that feels cognitive when the user interacts with it, while at the same time representing its actual capabilities in a way that is positively perceived by the user.

Their take on the challenge: if you are not sure you can make it all the way to the other side of the uncanny valley, it is better to make something feel more like a machine. An example of what happens when something feels too real is the abuse Siri sometimes gets through the random questions people ask it.

A great question from the Q&A was about the unconscious bias of the people training the AI, which currently has no solution beyond choosing a diverse set of trainers.

--

Sanita Lustika
theuxblog.com

Senior UX Consultant and user researcher, navigating the world of product and government