We Must Be The Teachers

“Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed.” ~What is Machine Learning?

There is a great deal of excitement, and some trepidation, about Machine Learning and its promise of delivering insights beyond those which have been programmed in. Especially when combined with the enormous amounts of data available today, there is an anticipation that AI will finally provide the benefits that big data has long promised.

But many who would like to use machine learning misunderstand both its strengths and its weaknesses. In talking with a number of people about the promises, dangers and failings of machine learning, both in its current state and in earlier, more limited technologies, I am reminded of Arthur C. Clarke’s famous quote, “Any sufficiently advanced technology is indistinguishable from magic.”

As humans, we have a tendency to view the inexplicable in terms of magic. But in spite of its complexity, Machine Learning is not magic, and treating it as such will greatly reduce its chances of fulfilling its promise. It can be used to detect trends and correlations, but it can’t wave a wand and conjure up data relationships and implicit meanings that we take for granted.

If we discard the idea of AI as magic, we can begin to treat machine learning as a process related to other similar processes. When we want our children to learn, we don’t simply leave them in a library and hope for the best. We guide them to the most appropriate resources. We correct their mistakes. We answer their questions, and ask questions of our own that will stimulate their thinking. We are their teachers.

In short, if we want to understand how to best accomplish Machine Learning, we need to be cognizant of educational strategies. While machines do not learn in exactly the same way as children, there are parallels. We should pay attention to both the similarities and differences to guide our Machine Teaching.

Children have the advantage of learning through an ever more complicated feedback loop with their parents, teachers and peers. We cannot replicate that with machines, but we can provide feedback. IBM Watson did not learn to win on Jeopardy by being fed an encyclopedia, though it no doubt was fed many. It learned by being fed questions and having its answers corrected.

We cannot simply dump data in and hope that the right connections are made. We need to put in smaller amounts, try out the kinds of questions we would like answered, and give feedback. In The King and I, Anna sings “…if you become a teacher, by your pupils you’ll be taught.” Just so, as we get answers back from the AI, we must learn both how to create questions that will elicit the answers we want, and also what additional information we need to feed into the system so that it can answer the kinds of questions we might ask.
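As a loose illustration of that teach-test-correct loop, the sketch below feeds a small text classifier in batches, asks it a handful of hand-written probe questions after each lesson, and corrects its mistakes before moving on. Everything here is hypothetical: the questions, labels, and the scikit-learn model are stand-ins for whatever system you are actually teaching.

```python
# A minimal sketch of incremental "teach, test, correct" training.
# All data here is invented; swap in your own questions and labels.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**16)
model = SGDClassifier(loss="log_loss")   # supports incremental partial_fit
classes = ["billing", "shipping"]

# Feed the data in small batches rather than dumping it all in at once.
batches = [
    [("Why was I charged twice?", "billing"),
     ("Where is my package?", "shipping")],
    [("Can I get a refund?", "billing"),
     ("My order never arrived.", "shipping")],
]

# The kinds of questions we want answered, used to grade the "pupil"
# after each lesson.
probe_questions = ["I was billed for an item I returned.",
                   "Tracking says delivered but nothing came."]
probe_answers = ["billing", "shipping"]

for batch in batches:
    texts, labels = zip(*batch)
    model.partial_fit(vectorizer.transform(texts), labels, classes=classes)

    # Ask our own questions and check the answers -- the feedback step.
    predictions = model.predict(vectorizer.transform(probe_questions))
    for question, predicted, expected in zip(probe_questions, predictions,
                                             probe_answers):
        if predicted != expected:
            # Correction: feed the example back with the right label.
            model.partial_fit(vectorizer.transform([question]), [expected])
```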

Machines have an advantage in their ability to absorb vast amounts of data very quickly, which poses both an opportunity and a threat. With children, we have time to recognize when they are headed in an unexpected and undesirable direction. With machines, if the wrong lessons are learned, it may be hard to have them unlearn without starting over.

This brings us to one of the most important lessons we must teach both children and machines. We cannot assume that learning will instill values. When AI has been fed vast quantities of social media content, some of the first things it learned were racism and intolerance. While you might not think your company’s data sets could lead in that direction, they might well lead in darker directions than you would suspect.

Let’s assume your company is a profit-driven enterprise. In theory, this means that its decisions should be driven entirely by the motivation to minimize costs, maximize profits and bolster the bottom line. But except in the most extreme cases, people take lots of other factors into consideration. They weigh the goodwill of their customers, the well-being of their employees, and the social ramifications of a given decision. They have values, and those values are balanced against the bottom line in thousands of small and large ways.

But machines, like children, only learn what they are taught. If you provide feedback to your AI every time it answers a question, but your only criterion is whether it maximized profit, don’t be surprised if it makes horrifying recommendations later. You must build in both the data to support value-based decisions, and the encouragement to give weight to those decisions.

In a simple example, if you do not provide any data about the environmental impact (e.g., packaging or energy use) of different product types, the AI cannot weigh that in its decision. Similarly, if you don’t provide virtual praise when the AI recommends environmentally sound choices, it will not learn to value those choices.
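To make that “virtual praise” concrete, one common approach is to fold values into the scoring itself rather than grading on profit alone. The sketch below is a hypothetical value-aware scoring function that subtracts an environmental-impact penalty from expected profit; the field names, numbers and weights are invented for illustration, not drawn from any real system.

```python
# A hypothetical value-aware scoring function: profit alone is not the grade.
# Field names, numbers and penalty weights are illustrative only.
from dataclasses import dataclass

@dataclass
class ProductOption:
    name: str
    expected_profit: float      # dollars per unit
    packaging_waste_kg: float   # kilograms of packaging per unit
    energy_use_kwh: float       # kilowatt-hours consumed per unit

def score(option: ProductOption,
          waste_penalty: float = 4.0,    # "price" we assign per kg of waste
          energy_penalty: float = 0.5):  # "price" we assign per kWh
    """Reward profit, but subtract a cost for environmental impact."""
    return (option.expected_profit
            - waste_penalty * option.packaging_waste_kg
            - energy_penalty * option.energy_use_kwh)

options = [
    ProductOption("plastic clamshell", expected_profit=12.0,
                  packaging_waste_kg=0.9, energy_use_kwh=2.0),
    ProductOption("recycled cardboard", expected_profit=11.0,
                  packaging_waste_kg=0.2, energy_use_kwh=1.5),
]

# On profit alone the clamshell wins; with the value terms included,
# the recycled option is "praised" and the learner is nudged toward it.
best = max(options, key=score)
print(best.name)
```

Tuning those penalty weights is exactly where the human teacher comes in: they encode how much the organization is willing to trade profit for its values.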

Machine Learning is not magic. While it may provide wonderful ideas and insights you would never have considered, it can only work with the inputs and values and lessons it has been taught. We must be the teachers. Take seriously your role in guiding and teaching your AI, and, as with children, your reward will come when it shows the wisdom and knowledge and values you have instilled.