Microsoft launched a chatbot, Tay. Things went from cute to Godwin in about a day.
Jana Eggers has observed that AI is our progeny; rather than demonize or glorify it, we should be like responsible parents, and decide how we want our offspring to be raised.
This is a real problem for machine learning algorithms. We train them on a corpus, or body of knowledge. We want them to be big and varied so we don’t overfit the algorithm to a limited data set. As a result, the cheapest, biggest (and therefore best) training sets out there are those that are free, unmoderated, and often profane.
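The curation step the paragraph implies can be sketched in a few lines. This is a minimal, hypothetical example: the blocklist, the corpus, and the tokenization are all invented stand-ins, not anyone's production pipeline.

```python
# A minimal sketch of corpus curation: filtering profane lines out of a
# free, unmoderated training set before it ever reaches the learner.
# The BLOCKLIST terms and the example corpus are hypothetical placeholders.

BLOCKLIST = {"profanity", "slur"}  # stand-ins for a real blocklist

def is_clean(line: str) -> bool:
    """Reject any line containing a blocklisted token."""
    tokens = {t.strip(".,!?").lower() for t in line.split()}
    return BLOCKLIST.isdisjoint(tokens)

raw_corpus = [
    "What a lovely day for machine learning",
    "you absolute profanity",        # would be dropped by the filter
    "Tell me a joke about robots",
]

training_set = [line for line in raw_corpus if is_clean(line)]
```

Of course, a real moderation pass is far harder than token matching, which is exactly why the cheapest training sets skip it.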
You wouldn’t expose your young child to the venom and invective of the worst parts of social media. There’s a reason we have PG-13 ratings on games, or that I won’t be taking my daughter to see Deadpool any time soon, however amazing it might be.
Jana and I were in DC recently, and we had a fascinating conversation with a White House staffer (whose name I’ll omit for now) about AI. When it comes to regulation, it’s really hard to know what to do. For example, Jana said her company’s tech explains why it made a particular decision or recommendation.
At first blush, saying why you made a decision seems like a good initiative. It’s a debugging tool; it lets operators understand when the math is going sideways. In coming years, this kind of algorithmic transparency—explaining why software made a particular decision—is going to be a big deal, whether it’s a car explaining its emissions tests, loan software showing why a loan was denied, or the Nevada Gaming Commission inspecting a one-armed bandit’s source code.
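Here is one way that kind of transparency might look in code: a toy linear scoring model that logs each feature’s contribution alongside its decision. The feature names, weights, and threshold are all invented for illustration, not any real lender’s model.

```python
# A sketch of algorithmic transparency: a toy linear loan-scoring model
# that records each feature's contribution ("why") alongside its decision.
# WEIGHTS and THRESHOLD are hypothetical values chosen for illustration.

WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0

def decide(applicant: dict) -> tuple:
    # Per-feature contributions are the rationale we keep around.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    approved = score >= THRESHOLD
    return approved, {"score": score, "why": contributions}

approved, rationale = decide({"income": 3.0, "debt": 1.0, "years_employed": 2.0})
```

Note that the rationale costs extra computation and storage to produce and keep, which is precisely the tradeoff raised in the next paragraph.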
But our companion pointed out that logging the rationale behind a decision comes at a computational cost. Perhaps a foreign state, unfettered by such concerns, would get to a super-intelligent AI first, giving it an insurmountable advantage (since once a machine can make itself smarter, things are likely to change very, very quickly).
In machine learning and AI, there are tradeoffs between careful, curated training and education—and throwing an algorithm to the sharks to find out whether it can swim.
Make no mistake, this wasn’t an AI. It was a set of algorithms that tried to mimic a social media user and learn from the discussions she had. Microsoft has already had a huge success with Xiaoice in China, and people are falling in love with her on Weibo, Touchpal, and Windows Phone. From Microsoft’s blog:
“By simply adding her to a chat, people can have extended conversations with her. But XiaoIce is much more evolved than the chatbots you might remember. XiaoIce is a sophisticated conversationalist with a distinct personality. She can chime into a conversation with context-specific facts about things like celebrities, sports, or finance but she also has empathy and a sense of humor. Using sentiment analysis, she can adapt her phrasing and responses based on positive or negative cues from her human counterparts. She can tell jokes, recite poetry, share ghost stories, relay song lyrics, pronounce winning lottery numbers and much more. Like a friend, she can carry on extended conversations that can reach hundreds of exchanges in length.”
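The sentiment-adaptive behavior Microsoft describes can be caricatured in a few lines. This is a toy lexicon approach, nothing like XiaoIce’s actual system; the word lists and canned replies are invented for illustration.

```python
# A toy version of sentiment-adaptive phrasing: score the user's message
# against a tiny lexicon, then pick a reply tone to match.
# POSITIVE, NEGATIVE, and the canned replies are hypothetical stand-ins.

POSITIVE = {"love", "great", "happy", "thanks"}
NEGATIVE = {"hate", "awful", "sad", "angry"}

def sentiment(message: str) -> int:
    words = message.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def reply(message: str) -> str:
    s = sentiment(message)
    if s > 0:
        return "Glad to hear it! Want to keep chatting?"
    if s < 0:
        return "I'm sorry to hear that. Do you want to talk about it?"
    return "Tell me more."
```

The interesting part is what this sketch leaves out: a system that learns its replies from the conversation itself, as Tay did, inherits whatever the conversation feeds it.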
Why people are falling in love with Microsoft’s Chinese chatbot, while trolling the English-speaking one into a fascist, profane caricature, I leave as an exercise for the reader.
For now, the experiment is tired.
The big lesson here is that, with Tay, the training set was flawed. Unfortunately, as a training set, humanity is flawed too.
We should probably figure out how to fix that.
Meanwhile, Microsoft just got a huge corpus of interactions. Now, presumably, they’ll have an army of people in Redmond go through and rate her responses, from encouraging to beyond the pale. And maybe Cortana will be a bit less like 4chan because of it.
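That human-in-the-loop cleanup step might look something like this: raters score logged responses, and only those above a cutoff survive into the next training round. The 1–5 scale, the cutoff, and the example data are all assumptions, not Microsoft’s actual process.

```python
# A sketch of the human-rating step: logged responses are scored by raters
# from "encouraging" (5) to "beyond the pale" (1), and only responses at or
# above a cutoff are kept for retraining. Scale, cutoff, and data are invented.

logged = [
    {"response": "Humans are super cool!", "rating": 5},
    {"response": "mildly sarcastic quip", "rating": 3},
    {"response": "something beyond the pale", "rating": 1},
]

CUTOFF = 3  # keep "neutral or better"; everything below is discarded

curated = [r["response"] for r in logged if r["rating"] >= CUTOFF]
```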
Good parents know when it’s nap time.