What is Human-Centred Machine Learning?
This Sunday we are running a workshop at ACM CHI 2016 called “Human Centered Machine Learning”. I thought I would write an article to explain the general idea (though the workshop itself is a way of better understanding that idea).
Statistical Machine Learning is one of the most successful sets of techniques to come out of Computer Science in recent decades, and one that a lot of people are thinking about at the moment. It’s often presented as quite an impersonal process: machines that learn for themselves, even AI that risks taking over the world. But, in fact, a lot of human work goes into machine learning, and not enough people have been talking about that. So that is what (I think) our workshop is about: understanding how people interact with machine learning and how we can make it easier for them to do so.
That description potentially covers a huge range of things, so I thought I would outline some of the main themes that come out of the papers in the workshop (you can find the full list of papers on the workshop website).
1. New ways of interacting with machine learning
The classic way for a person to interact with machine learning is to gather a lot of data and divide that data into a set of classes that you want the computer to recognise. This fits the learning algorithms well, but there is a lot of useful information that the computer could give that this process misses out on. Also, there are many ways in which people want to interact with data that aren’t just about labelling it with classes.
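To make that classic workflow concrete, here is a toy sketch of it: a person gathers and labels examples, the machine learns from the labels, and the model then labels new data. This is a minimal nearest-centroid illustration, not any workshop system; the 2-D points and class names are invented.

```python
# Toy "classic" supervised workflow: labelled examples in, classifier out.
# A deliberately minimal nearest-centroid sketch with invented data.

def centroid(points):
    """Mean of a list of 2-D points."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def train(labelled_data):
    """labelled_data: dict mapping class name -> list of (x, y) examples."""
    return {label: centroid(pts) for label, pts in labelled_data.items()}

def classify(model, point):
    """Assign the class whose centroid is nearest to the point."""
    def dist2(c):
        return (point[0] - c[0]) ** 2 + (point[1] - c[1]) ** 2
    return min(model, key=lambda label: dist2(model[label]))

# Step 1: a person gathers the data and labels it with classes.
data = {
    "sharp": [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1)],
    "flat":  [(4.0, 4.2), (3.8, 4.0), (4.1, 3.9)],
}
# Step 2: the machine learns from the labels.
model = train(data)
# Step 3: the trained model labels new data.
print(classify(model, (1.1, 0.9)))  # -> sharp
print(classify(model, (4.0, 4.0)))  # -> flat
```

Notice that the only channel from person to machine here is the class label itself, which is exactly the limitation discussed above.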
For example, Christopher Raphael and colleagues created a system for optical recognition of musical scores in which a human and a machine learning algorithm collaborate to get a correct reading of the score. In their system, the person supplies constraints on what the answer should be, for example, “this pixel should be part of a sharp”. Baptiste Caramiaux’s gesture recognition system doesn’t just learn the gesture that people perform but also the subtle variations in how they perform it, which are often treated as useless noise but in musical performance can be an important expressive element. Mike Schaekermann and colleagues use fairly standard class labels, but acknowledge that people can disagree on the correct label or be unsure of it, and that this disagreement can be important information.
So how do we design interaction with machine learning? A vital part (as in any interaction design) is understanding users. Alison Smith’s suggestions come from a survey of users about how they would like to interact with machine learning. One way of designing interactions is to base them on processes that people already know well, for example the document-coding techniques used by qualitative researchers or the stages that web designers go through in creating a page layout.
2. Human-Computer collaborations
Lots of these interaction styles are really about creating a human-computer collaboration for learning. For example, in Christopher Raphael’s optical music recognition system, human and computer have to work together to get the right answer. Human-computer cooperation has been common for many years in active learning, where the computer suggests to the human which data items it wants labelled. Han and colleagues show that the computer can support this collaboration better by giving more relevant information about why a data item needs to be labelled.
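The core loop of active learning, where the computer picks which item to ask the human about, can be sketched as uncertainty sampling: query the item the model is least sure of. This is a generic illustration of the technique, not the system from any workshop paper; the pool and the stand-in probability function are invented.

```python
# Uncertainty sampling: ask the human to label the item the model is
# least sure about. The item pool and probabilities below are invented.

def predict_prob(item):
    """Stand-in for a trained model's P(class = positive | item)."""
    return item["score"]

def most_uncertain(pool):
    """Pick the item whose predicted probability is closest to 0.5."""
    return min(pool, key=lambda item: abs(predict_prob(item) - 0.5))

pool = [
    {"id": "a", "score": 0.95},  # model is confident: positive
    {"id": "b", "score": 0.52},  # model is unsure -> worth asking a human
    {"id": "c", "score": 0.10},  # model is confident: negative
]
query = most_uncertain(pool)
print(query["id"])  # -> b
```

Han and colleagues’ point fits naturally here: alongside the query itself, the system could also tell the human *why* item “b” was chosen.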
Human-computer collaboration doesn’t have to be one-to-one: some of our authors have looked at how big groups of people can work with machine learning systems, whether they are crowd workers or musical ensembles.
Mark Cartwright and Bryan Pardo start their paper by saying: “It is common for the teacher’s understanding of a concept to evolve and change as they teach.” This really excited me because it shows that it doesn’t have to be just the machine that learns in machine learning: the human teacher also learns from the process (Nan-Chen Chen makes a similar point). Todd Kulesza and Rebecca Fiebrink (both workshop organisers) have both shown how their users’ ideas of what the machine should learn change as they try to teach the machine (something Todd calls “concept evolution”). This is very different from our standard ideas of how machine learning works.
3. Simpler, easier machine learning
A lot of human-centred machine learning is about making machine learning easier for people. For one thing, it can involve labelling a lot of data, which can take a lot of time. Several papers look at how to reduce this effort in evolutionary computing, active learning and brain-computer interfaces.
But making the process easier isn’t just about doing less work. Alison Smith says that machine learning systems should support users in focusing on the most important tasks. Annika Wolff, Daniel Gooch and Gerd Kortuem argue that it isn’t just about making software easier: we should also increase users’ “data literacy”. This kind of support becomes very important when users need to debug machine learning systems that don’t work: training a system can be quite easy, but figuring out why it goes wrong is very hard.
4. Opening up the black box
Debugging machine learning means understanding what is going on inside the algorithm, which is really not easy for most machine learning systems. Three of our papers (Nan-Chen Chen, Jules Françoise and Scott Cambo) say that we have to “open up the black box” of machine learning.
One way to open up the black box is to provide visualisations of what is happening in the machine learning system. For example, visualising the states of a Hidden Markov Model or the features extracted from motion capture data. Jessica Zeitz Self’s paper says that these visualisations shouldn’t be static: we should be able to interact with them to better understand what is happening and also to update the learning model.
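What would such a visualisation actually draw? For a Hidden Markov Model, the raw material is the model’s estimate of its hidden states over time. A minimal Viterbi decode over a toy two-state HMM shows the kind of state sequence one might then plot; all parameters, state names and observations here are invented for illustration, not taken from any workshop paper.

```python
# Viterbi decoding for a toy 2-state HMM: recover the most likely hidden
# state sequence, i.e. the internal states a visualisation would display.
# All probabilities, states and observations are invented.

states = ["rest", "gesture"]
start = {"rest": 0.6, "gesture": 0.4}
trans = {"rest": {"rest": 0.7, "gesture": 0.3},
         "gesture": {"rest": 0.2, "gesture": 0.8}}
emit = {"rest": {"low": 0.8, "high": 0.2},
        "gesture": {"low": 0.3, "high": 0.7}}

def viterbi(observations):
    # prob[s] = probability of the best path ending in state s so far
    prob = {s: start[s] * emit[s][observations[0]] for s in states}
    paths = {s: [s] for s in states}
    for obs in observations[1:]:
        new_prob, new_paths = {}, {}
        for s in states:
            # Best previous state to have transitioned from.
            best_prev = max(states, key=lambda p: prob[p] * trans[p][s])
            new_prob[s] = prob[best_prev] * trans[best_prev][s] * emit[s][obs]
            new_paths[s] = paths[best_prev] + [s]
        prob, paths = new_prob, new_paths
    best_final = max(states, key=lambda s: prob[s])
    return paths[best_final]

print(viterbi(["low", "high", "high"]))  # -> ['rest', 'gesture', 'gesture']
```

An interactive visualisation in the spirit of Jessica Zeitz Self’s paper would let the user inspect a plot of this state sequence and push corrections back into the model, rather than just look at it.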
Visualisation is just one way of showing what a machine learner is doing. Michael Paul points out that, though topic models are normally represented as text, not diagrams, a lot of effort goes into making them interpretable. Marco Tulio Ribeiro describes a way of explaining why machine learning systems classify things the way they do, and says that these explanations can make people trust classifiers more. But Simone Stumpf shows that this can backfire: people can have too much faith in algorithms that aren’t always reliable.
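The intuition behind such explanations can be sketched crudely: probe a black-box classifier by perturbing an input and report how much each feature moves the prediction. This is a simplification of the idea (Ribeiro’s actual method fits a local surrogate model around the input); the classifier, features and message below are all invented.

```python
# Crude sketch of a local explanation: flip each input feature and see
# how much the black-box prediction changes. The "spam" classifier and
# its features are invented stand-ins.

def black_box(features):
    """Stand-in for an opaque classifier: P(spam | message features)."""
    score = 0.1
    if features["has_link"]:
        score += 0.5
    if features["all_caps"]:
        score += 0.3
    return min(score, 1.0)

def explain(features):
    """Influence of each feature: drop in output when it is flipped."""
    base = black_box(features)
    influence = {}
    for name in features:
        flipped = dict(features, **{name: not features[name]})
        influence[name] = base - black_box(flipped)
    return influence

msg = {"has_link": True, "all_caps": False}
print(explain(msg))  # has_link pushes the score up; all_caps is inactive
```

An explanation like this tells the user *which* features drove the decision, which is exactly the kind of evidence Stumpf warns can be over-trusted when the underlying model is unreliable.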
5. A huge range of applications
Maybe the most exciting part of the workshop is the sheer range of different types of work presented. It covers many types of machine learning: Evolutionary Computing, Topic Models (and more Topic Models), Bicluster Analysis and Hidden Markov Models (among many others). These are applied to many different types of data: musical scores, human movement, text documents and brain signals. And the techniques have been applied to an incredibly wide range of tasks and problems: mobile health tracking, art, optical recognition of musical scores and ancient papyri, climate science, social science research, web design, medical devices, music, more music, and even more music.
So, with all this, we are going to have a very busy day tomorrow. I’m really excited about it!