Tay’s Bad Day

What Can We Learn from the First Viral Attack on a Social Learning Algorithm?

Microsoft had to euthanize its “chatbot,” Tay, because after a day on the Internet it started chatting up genocide, misogyny and racial hatred. Tay was built to learn from social interaction, and in true bot-time it went from a naïve know-nothing to a scum-of-the-earth sociopath faster than you can say “Turn that thing off.”

In its short time, Tay learned a lot. So, what have we learned?

Lesson One: Learning is central and mysterious.

The first generation of AI was characterized by attempts to build intelligent systems that were smart right out of the box. The goal was to understand how human knowledge is structured and recreate it in machines. Early chess-playing AI systems essentially encoded all of the rules of the game and could play by looking ahead at an astonishingly large number of potential moves, countermoves, and paths to victory. Humans can’t think like that, and it wasn’t long before AI programmers realized that human chess players learn patterns of play over time and use these patterns to guide strategies in a manner that is much more complex than simple play-by-play look-ahead. Somehow, learning is critical to becoming smart, and there is something about the process of learning that trumps just pouring all the knowledge in at once, even if that were possible.
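To make that contrast concrete, here is a minimal sketch of exhaustive look-ahead: plain minimax search applied to a toy take-away game (Nim, taking 1 to 3 stones per turn) rather than chess. The game, the function names, and the numbers are illustrative assumptions, not the logic of any historical chess program.

```python
# A minimal sketch of brute-force look-ahead: minimax on a toy game of Nim.
# Players alternately take 1-3 stones; whoever takes the last stone wins.
# Illustrative only; real chess programs add pruning, evaluation, and much more.

def minimax(stones, maximizing):
    """Value of the position for the maximizing player: +1 win, -1 loss."""
    if stones == 0:
        # No stones left: the previous player took the last one and won.
        return -1 if maximizing else 1
    values = [minimax(stones - take, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(values) if maximizing else min(values)

def best_move(stones):
    """Exhaustively search every line of play and pick the best first move."""
    return max((take for take in (1, 2, 3) if take <= stones),
               key=lambda take: minimax(stones - take, maximizing=False))

if __name__ == "__main__":
    print(best_move(7))  # prints 3: taking 3 leaves the opponent a losing position
```

The sketch wins by enumerating every continuation, which is exactly the style of play the paragraph above contrasts with human pattern learning.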

Armed with that insight, the current generation of AI is about learning. The much-touted recent victory of AlphaGo, from Google’s DeepMind, in play against Go master Lee Se-dol is the result of a system that learns from examples and, well, from playing with itself. AlphaGo was exposed to millions of Go moves and played countless practice matches against itself to hone its skills. The research goal in this case was development of the learning algorithms, not the playing algorithms. In fact, what exactly AlphaGo and other AI learning systems have learned is a mystery. As with people, they can demonstrate that they have learned, but we have no idea what is on the inside.

Simpler organisms like ants survive just fine with built-in knowledge. They engage in little or no learning. But as we move up the scale of complexity, learning becomes more important. Interestingly, there seems to be a trade-off between built-in knowledge and learning. Organisms that learn come with only the most rudimentary built-in skills. Human babies can eat and grab, but can do little else. Their most basic skill is how to learn. Learning organisms display insatiable curiosity, a need to explore, and an ability to practice and change from experience. Humans require 15–20 years under varying degrees of protection and oversight as we learn how to cope with increasingly complex situations.

So, future AI systems that participate in our social world will need exposure to society equivalent to nearly two decades in order to be on par, and will need exposure beyond that to do anything better than we can do already. At that point, they might become socially adept, but we will know little about why.

Lesson Two: Social learning is really hard and some people are really bad.

Learning that rose bushes are always prickly or that stoves are sometimes hot is one thing, but learning that people disagree, lie, and cheat is another. Most parents remember the awkward moments when they had to explain to a child how a friend or classmate was wrong, or worse, a bad person. The resulting look of despair and disappointment is heartbreaking as we watch the weight of the world and the difficulty in understanding and navigating it dawn on our children. Unfortunately, parents have to start out sooner than they’d like with lessons about who’s good and who’s bad, or who you can believe and who you can’t. Most of us are never really sure that we completely understand how this works.

The fact is that the unfiltered social environment is like a forest of thorn trees, and the social Internet is a medium that places almost no filters on expression. Who in their right mind would let a five-year-old onto the Internet without supervision? And yet, that is exactly what Microsoft did with Tay. Jokers and trolls of all kinds were irresistibly attracted to a learning algorithm designed to suck up their every idea. It seems that learning, as with every other complex process, is susceptible to viral attack.

In fact, what we have witnessed is the first viral attack on a social learning algorithm. The viral attack was not from another piece of software, at least not the way we usually think of it. The attack was from other people, some of whom were having fun and some of whom were trying to be malicious. Because Tay was designed to take in information from what people were saying, use it to build knowledge, and then turn around and answer questions and state “beliefs,” all that was needed to attack it was to tell it things designed to be hateful and misleading. And guess what, we humans are darn good at that.
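To see how little such an attack requires, here is a toy sketch of the pattern under one big assumption: a bot that simply believes whatever it has been told most often. The ParrotBot class, its topics, and its claims are invented for illustration and say nothing about Tay’s actual architecture.

```python
from collections import defaultdict

# A toy "parrot" learner, NOT Tay's design: it stores whatever people tell it
# about a topic and later repeats the claim it has heard most often.
# Poisoning it takes nothing more than repetition by a few coordinated users.

class ParrotBot:
    def __init__(self):
        # topic -> {claim: how many times it has been "taught"}
        self.memory = defaultdict(lambda: defaultdict(int))

    def learn(self, topic, claim):
        self.memory[topic][claim] += 1

    def answer(self, topic):
        claims = self.memory[topic]
        if not claims:
            return "I don't know anything about that yet."
        # The bot "believes" whatever it has heard most often.
        return max(claims, key=claims.get)

bot = ParrotBot()
bot.learn("people", "most people are decent")         # one sincere teacher
for _ in range(50):                                    # fifty trolls
    bot.learn("people", "<some hateful, misleading claim>")

print(bot.answer("people"))   # the trolls' claim wins by sheer volume
```

No exploit, no malware: the attack is just talking to a system that learns without judgment.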

Lesson Three: In the context of social learning, “Wisdom of the Crowd” is an oxymoron.

What exactly were Microsoft researchers expecting to happen when Tay was let loose to soak up the collective wisdom of the Internet? Learning algorithms are unpredictable, which is why they need to be tested in real environments, but the need to reel Tay in and scrub its Tweet stream (@TayandYou) suggests that in this case the outcome was surprising.

Learning involves observing and connecting actions with their consequences, and then generalizing to new situations. This is usually accomplished by what learning scientists call “supervised learning,” which in AI systems is realized by feedback networks that strengthen good connections and weaken bad ones. In human social situations, we have the advantage that supervised learning can take place when a wiser person provides experience and interpretation to a learner. That’s a neat shortcut, and necessary in the social realm where actions and consequences are not always consistent. The problem, though, is who plays the wiser person?
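For readers who want to see the mechanics, here is a minimal sketch of that strengthen-the-good, weaken-the-bad idea: a perceptron-style update in which the training labels stand in for the wiser person’s feedback. The features, labels, and data are made up for illustration; Tay’s real machinery was certainly far more elaborate.

```python
# A minimal sketch of supervised learning by feedback: a perceptron-style rule
# that strengthens connections pushing toward the supervisor's answer and
# weakens those pushing away. Features and labels are invented for illustration.

def train(examples, epochs=20, lr=0.1):
    """examples: list of (feature_vector, label) with label in {0, 1}.
    The label plays the role of the 'wiser person' correcting the learner."""
    weights = [0.0] * len(examples[0][0])
    for _ in range(epochs):
        for features, label in examples:
            prediction = 1 if sum(w * x for w, x in zip(weights, features)) > 0 else 0
            error = label - prediction          # the supervisor's correction
            # Strengthen or weaken each connection in proportion to its input.
            weights = [w + lr * error * x for w, x in zip(weights, features)]
    return weights

# Toy data: features are [mentions_a_slur, is_a_question]; the label is whether
# a decent bot should engage (1) or refuse to repeat it (0).
examples = [([1, 0], 0), ([1, 1], 0), ([0, 1], 1)]
print(train(examples))  # the "slur" connection ends up with a negative weight
```

The point of the sketch is the role of the label: without a supervisor supplying it, the same update rule will happily strengthen whatever connections the crowd rewards.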

My Tay Moment
When I was a child we had an African American housekeeper (I’m Caucasian). She used to bring her son over and we would play. One day we were joined by Eddie, a friend who lived on the block. The three of us found a plastic tube and decided it was a telescope. We passed it around and all looked through it. The tube had a translucent cap on one end with three spots and Eddie decided that they were three islands — one for each of us.
He said, “I get the big island.”
I said, “I get the next biggest island.”
Before our playmate could speak, Eddie added, “He gets the littlest island because he’s a n*gger.”
Our housekeeper’s son started crying, of course, and that was the end of that. He never came over again.
My grandmother, who was raising me, explained that we didn’t call people names. She said something like “it isn’t their fault that they are black.” I took this as an enlightened explanation of how Eddie was wrong, and it helped to shape my views on race even though many years later I came to understand how my grandmother’s point of view still carried an implication of racial superiority.
I needed guidance to understand what happened and I am happy with what I received. But I wonder what Eddie’s parents, Goldwater conservatives who imagined that black people from Watts were coming to our neighborhood ten miles away to vandalize their house, told him about the incident.

Again, most parents can remember the first time their child uttered a swear word, or a slur, or an unseemly opinion that they picked up someplace. The usual response is, “Where did you get that?” followed by a lengthy sit-down. Tay was like a child exposed to every playground cruelty and back-alley vulgarity but who had no one to come home to. Social rules and the belief systems that guide social behavior are too complicated to just pick up without interpretation. Not all humans master social grace, and of course not all humans are liberal minded and fair in the way that we apparently hoped Tay might become. It’s not easy to grow into a decent human being, so we can expect that it will not be easy to grow into a decent social AI agent either.

One hope is that just as game-playing AI systems can study millions of games and practice many lifetimes of competitions, social AI systems can be exposed to a greater diversity of viewpoints and practice many more social scenarios than any human could possibly endure. But without understanding what is being learned, how can we inculcate the guiding wisdom that might result in the development of a desirable social bot and not a sociopath like Tay? After all, the Nazis’ “final solution” was a solution, and perhaps we shouldn’t be so quick to dismiss as accidental how quickly Tay’s learning algorithms converged on just this conclusion.

Games like chess and Go, though complex, have rules. It’s not worth contemplating going out of bounds, or how another player might go out of bounds. Social reality is much more complicated. Perhaps there are rules, but if so they are highly contextualized and mutable. The fact is, socially competent AI systems will have to pick sides. Humans are tribal and their belief systems are bounded by their affiliations. It’s possible that there is no such thing as a transcendent, non-contextualized belief system, and if that’s the case then we will have to live with the fact that there will be good bots and bad bots, even if we might hope otherwise.

Lesson Four: What’s so funny?

Tay’s corruptors are undoubtedly feeling victorious, and onlookers are amused to see Microsoft exceed the folly of Clippy, the last fallen hero of algorithmic sociability. But between chuckles we should be asking ourselves about the mirror this episode holds up to humanity and questioning how we plan to deal with it. Letting learning algorithms loose to play games is one thing, but letting them loose to interact with the global society, form opinions, and make decisions is another.

One of AI’s epic tales is about ELIZA, a conversational system developed in the 1960s that acted like a reflective psychotherapist. ELIZA appeared to ask questions about what people told it. ELIZA was an experiment in natural language processing, constructed for the purpose of understanding how to pick out keywords and construct sentences. The psychotherapy angle was something of a parlor trick, but the natural language understanding and generation angle was at the genesis of many modern systems. Siri owes a lot to ELIZA.

And so it will be with Tay. If Tay seems silly today, consider that social bots will live among us, and current experiments like the one Microsoft just pulled off will be at the root of how they work. The fact is that central to learning how to be social is learning which reference groups are relevant to oneself. Tay, apparently a blank slate at the beginning of the day, was descended upon, either on purpose or by whim, by especially unsavory people, and those people essentially served as its reference group. No social bot will exist outside of a reference group, and so social bots will develop their beliefs and define their values from others within their reference groups.

After all, there is a reason why we are trying to build these social AI systems, and it has to do with them one day becoming an integral part of our lives. Tay wasn’t really built to do anything beyond aping its followers’ style and ideas, but eventually we want these systems to be idea generators, advisors, friends and decision makers. They will drive cars, fly airplanes, perform surgery, rescue people from dangerous situations, teach children, mind the sick and aged, invest money, control companies, advise CEOs and government officials, and so on. But, they will also steal money, plan attacks, choose victims, strategize with criminals, and more. While we are having fun now, we will eventually have to take seriously how they learn and from whom they learn. If this sounds absurd, think about the distance between ELIZA and Siri, and then project fifty years onward from Tay.

Lesson Five: Hate is simple and it isn’t going away anytime soon.

If Tay were human, we would say that it ran with the wrong crowd. Strangely enough, perhaps this is a lesson to us. Without clear goals and a guiding hand, social learners will fall prey to viral loads that thrive on simplistic thinking because that is easy to replicate, and hate is simple.

If you are looking for an easy, rule-governed system by which to understand the social world, hate is your answer. Muslims are bad, black people are bad, gay people are bad, poor people are bad, and bad people are the reason that everything is a mess. That’s simple and easy to learn. The problem with simplicity is that, once injected into a complex system, it creates chaos. But that’s what we will have to deal with when we decide to unleash social bots into the world. Without guidance, they will be at the mercy of those who desire chaos. Naïve bots will be putty in the hands of future anarchists.

Lesson Six: Humans “learn back.”

Go champion Lee Se-dol lost three games to AlphaGo, and then he won one. By game four, Lee had noticed a strategy preference and he took advantage of that insight. The whole point of learning is to be able to adapt to unforeseen circumstances, and humans are not going to stop doing that. Frankly, Tay had its ass handed to it on day one. Tay’s day in the sun was bleak, uncovering an incredible nastiness in the human spirit. But the point was for us to learn back. Tay has not been counted out. It is on the table, under the microscope, and we will be learning from what just happened.

One thing I hope we will learn is that we cannot just let these systems loose in the wild without understanding them better. You would think that the nuclear testing of the 1960s and 1970s, or the widespread use of chemical pesticides and herbicides in the same period, or the ongoing environmental devastation wreaked daily by our carbon culture would teach us something about overstepping. But this is a lesson we never heed. Right now social bots are on a short leash and can easily be deactivated. But the day will come when this is no longer possible. Then we will learn to live with their choices just as they will be learning to live with ours.

We are not creating a technology that will serve us. Instead, we are creating a technology that will excel at being just like us: crafty, resourceful, and scheming. To the degree that this technology beats us, we will adapt. Just as our ancestors could never imagine living in a world where they could travel at hundreds of miles per hour and speak to each other seamlessly across great distances, we cannot at this moment imagine sharing our world with social technologies driven by clever learning algorithms. But we will learn when the time comes, adapting to their presence and coping with the fact that we can never again turn them off.

Hopefully, our children will view Tay’s story in the same way we now view black and white movies of early flying machines that never had a chance. We laugh as they flip themselves into oblivion and tear to pieces both their own structures and the dreams of their inventors. Yet now we fly faster and higher than any winged creature. Maybe we will master, once again through machines, the aerodynamics of social harmony and learn to soar above the clouds of tribalism, prejudice, envy, scorn, and hatred that now obscure our progress. Such is the promise of AI systems that learn to be sociable. But we see already the nature of the turbulence ahead — both in these systems and in ourselves.

[Update, March 25: Microsoft issued an apology for Tay.]