Researchers at Facebook Created a New AI And the World Went Crazy

Noah Gundotra
Imploding Gradients
8 min read · Aug 2, 2017

Thanks to everyone who responded to my last article, 4 Counter Points for Dr. Gary Marcus. Since then, AI journalists have been sensationalizing research done at Facebook to insane levels, enraging the entire AI community. In the words of some of these news outlets: Facebook Shuts Down Rogue AI Programs, Which Started Talking to Each Other. I wish I could embed a recording of myself sighing, because just writing *sigh* doesn’t convey my disappointment.

Basically, yes: Facebook trained a new type of neural net for an advanced chatbot, let the bots chat with each other, and published a paper. In the process, they shut down some of their neural networks that had trained improperly and started to talk in a degraded sort of English.

But the story about the Facebook research and the AI community’s response is an invaluable lesson about the dangers of AI sensationalism. I hope that by the end of this article you’ll be accurately up to date on the state of AI research and, unlike others, you’ll actually know which parts of AI are worth being wary of… (see the end of this article)

Where This All Started…

A few weeks ago, FastCo wrote an extremely intriguing piece interviewing Facebook AI Research (FAIR) scientists about their research. In fact, I think it is one of the best articles I’ve read covering AI research. The reporter, Mark Wilson, explains not only the research but also the far-reaching implications of what Facebook’s research team is doing. I highly recommend reading that article: it’s a great blend of technical details, anecdotes from the FAIR team, and business insight.

Unfortunately, the article’s title is very effective clickbait. That attracted less technically savvy news sources from all over, which picked up on phrases from researchers like:

“Agents will drift off understandable language and invent codewords for themselves”
–Dhruv Batra

And questions like:

Should we allow AI to evolve its dialects for specific tasks that involve speaking to other AIs? To essentially gossip out of our earshot? Maybe; it offers us the possibility of a more interoperable world, a more perfect place where iPhones talk to refrigerators that talk to your car without a second thought.

This was a stroke of bad luck for the AI community. News sources like The Telegraph missed the “Maybe” part, and further articles, like this insane conspiracy-level piece from Inc, just missed the mark completely. That article comes complete with a video of spinning brains overlooking chessboards, pictures of Elon Musk, and music dramatic enough for a Christopher Nolan Batman movie. It’s worth watching just to understand the level of hype.

These news sources reported solely on 3 things:

  • Facebook
  • AI talking to each other
  • Researchers shut it down!

Whereas the FastCo article talked about:

  • The researchers
  • Difficulties with the research
  • Implications of the research

Notice that FastCo focused on the research, while The Telegraph and Inc both focused on sensationalizing AI. Journalists around the world picked up the story this week and went crazy in their reporting. Some outlets went out of their way to insinuate that Facebook thought its AI was going “rogue,” and others claimed it signaled an imminent jump to Artificial General Intelligence.

Here’s What Actually Happened

In reality, the research was (surprise!) nowhere near AGI, and (surprise!) the code was under the researchers’ control. The research itself is quite cool, however. Facebook’s brilliant research team implemented a new type of neural network architecture to build bots capable of bargaining. The new technique, called dialog rollouts, lets a bot infer what the opposing bargainer wants and plan its responses by simulating how the rest of the conversation might play out. It looks really cool, especially because this kind of planning is usually only tested in video games. (See recent work by DeepMind on planning in video games.) The only sentence in FAIR’s blog post that concerned the creation of “a new language” was this:

To prevent the algorithm from developing its own language, it was simultaneously trained to produce humanlike language.
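
To make the dialog rollouts idea concrete, here’s a minimal sketch of how a rollout-based planner might choose what to say: generate candidate utterances, simulate how the rest of the conversation could play out after each one, and pick the candidate whose simulated futures score best. To be clear, this is my illustration of the general technique, not FAIR’s actual code; generate_candidates, simulate_dialogue, and score_outcome are hypothetical stand-ins for a trained negotiation model.

import random

# Illustrative sketch of dialog rollouts -- not FAIR's actual code.
# The three helpers are hypothetical stand-ins for a trained model.

def generate_candidates(state, n=5):
    """Stand-in: sample n candidate utterances from the model."""
    return ["utterance_%d" % i for i in range(n)]

def simulate_dialogue(state, utterance):
    """Stand-in: roll the conversation forward to completion, with the
    model playing both negotiators, and return the final deal."""
    return {"points": random.randint(0, 10)}

def score_outcome(outcome):
    """Stand-in: the agent's reward for the simulated final deal."""
    return outcome["points"]

def choose_utterance(state, num_rollouts=10):
    """Pick the candidate whose simulated futures score best on average."""
    best, best_score = None, float("-inf")
    for candidate in generate_candidates(state):
        scores = [score_outcome(simulate_dialogue(state, candidate))
                  for _ in range(num_rollouts)]
        average = sum(scores) / num_rollouts
        if average > best_score:
            best, best_score = candidate, average
    return best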

The quotes in the FastCo article must have been adapted from a personal conversation between Wilson and the scientists at Facebook. As for the model: when FAIR’s neural net started speaking pidgin English, the researchers shut it down and optimized for proper English by adding more terms to their loss function. It’s nowhere near the sensationalism of the articles that covered it. It’s just math and optimization tuning. It’s not sexy. It’s not clickbait worthy. And it doesn’t get a lot of attention. That’s because it’s hard, state-of-the-art, significant research.

Part of the problem with articles that sensationalize Facebook’s AI is that their coverage turns the research into a target of pop culture, right alongside the Kardashians and The Bachelor. Sensationalism trivializes these researchers’ work, obfuscates their intentions, and gives the whole field of AI a bad name. The scientists in the AI community are hard at work trying to understand the limits of this fast-growing technology; meanwhile, the news media draws excessive attention to the least pressing issue in the field. Media that exaggerates the “AI Uprising” narrative misdirects the public’s attention away from the small advancements that actually have impact and toward clickbait journalism. So I will do my best to address the applications of FAIR’s novel architecture.
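
For intuition, here’s a toy sketch of what “adding more terms to the loss function” can look like. Everything in it is hypothetical: the weight, the stand-in task reward, and the crude repetition penalty standing in for a real language model. It just illustrates the trade-off the researchers tuned: negotiate well, but stay close to human English.

def task_reward(outcome):
    """Stand-in: points the bot scored in the negotiated deal."""
    return outcome["points"]

def language_model_log_prob(utterance):
    """Stand-in for a real language model: a toy penalty on repeated
    words, since the degraded bot-speak repeated phrases like
    'to me to me to me'."""
    tokens = utterance.split()
    return -float(len(tokens) - len(set(tokens)))

LM_WEIGHT = 0.5  # hypothetical trade-off between winning and sounding human

def combined_loss(utterance, outcome):
    # Lower loss = a better deal AND more humanlike language.
    return -task_reward(outcome) - LM_WEIGHT * language_model_log_prob(utterance)

print(combined_loss("i want the balls", {"points": 5}))         # -5.0
print(combined_loss("balls to me to me to me", {"points": 5}))  # -3.0, penalized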

Applications of FAIR’s Research

As the FastCo article mentions, software that can talk to other software without human intervention could be useful for application development. Today, companies that provide operating systems (e.g., Apple) have to invest a lot of resources so that applications and other services can communicate with each other. Allowing an AI to develop a language for communication between applications and services might help address that problem.

Here’s What AI Researchers Think

The response from the AI community was incredulity. Many individuals criticized the reporting as unethical and extremely inappropriate, because these sorts of articles muddy most people’s understanding of AI and lead people to distrust AI research. Plus, they add to the already “power level over 9000” amount of hype in the field.

Current hype level in AI. I really like this gif. It just feels right. Might just be me.

When it comes to AI literacy, there are two parts: 1) understanding where the field is and 2) understanding where it is going. Even though experts disagree about both(!), most agree that this type of sensationalism and hype is dangerous. Google Brain researcher hardmaru tweeted:

Which is a pretty harsh condemnation of The Telegraph’s coverage. But there was wayyy more backlash against The Telegraph and others in the following days. The Director of Data Science at iRobot, the people who brought you the Roomba, said something similar:

Director of Data Science at iRobot condemns AI sensationalism. This tweet was directed at those in scientific disciplines irresponsibly resharing The Telegraph’s article.

Top researchers at Salesforce also commented on this viral spread of sensationalism. Pretty much everyone in the field had something to say about this.

Local AI expert Xavier Amatriain also wrote a response to this hype on Quora. He has been a leading force on the machine learning teams at Netflix and Quora. His response is here. Since then, data scientists and practitioners from all over have started #shutdowntheai on Twitter, which has filled with jokes about sensationalized reporting.

An AI Uprising Is Still A Laughing Matter

Facebook’s AI was under control the whole time. (No, I was not paid or bribed to say that.) Fake news is journalism that questions basic facts and presents exaggerations as possible truths. Articles that question whether AI systems are under the control of researchers are most likely perpetuating fake news. AI systems are still under our control; it’s mostly people outside of AI research who don’t know that. For example, on Monday, there was a tweet that separated the AI researchers from the AI wannabes:

This is a joke from an Associate Professor at Georgia Tech’s School of Interactive Computing.

The joke is funny only if you’ve actually played with AI code yourself. Of course, very few journalists or armchair AI speculators have. To understand the joke, you have to know that most AI systems today are run from the command line. The invocation looks something like:

$ python skynetai.py
...training...
Ready to take over human race
Extinction Initiating...

And once your AI goes rogue like that, you execute this amazing command called Ctrl-C! Ctrl-C makes your AI stop. It’s a command that overrides your code and shuts it down. It’s used by noobs to stop infinite while loops and by AI experts to stop training. Or to prevent an AI apocalypse.
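
If you’ve never seen it in action, here’s a minimal sketch, with train_one_step as a hypothetical stand-in for real training code. In Python, Ctrl-C raises a KeyboardInterrupt, which even a deliberately infinite training loop has to obey:

import time

def train_one_step(step):
    """Stand-in for real training code."""
    time.sleep(0.1)          # pretend to crunch gradients
    return 1.0 / (step + 1)  # pretend the loss is going down

step = 0
try:
    while True:  # train "forever" -- cue the dramatic music
        loss = train_one_step(step)
        print("step %d: loss %.4f" % (step, loss))
        step += 1
except KeyboardInterrupt:
    # Ctrl-C lands here: no uprising, just a clean shutdown.
    print("\nInterrupted at step %d. Humanity saved." % step)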

Conclusion

There are some dangerous things to worry about in AI, but Facebook’s AI bots are not one of them. It’s very unlikely that we are going to be taken over by an army of bots that talk to each other in their own language. You just Ctrl-C the damn things.

I hope that in the future, AI journalists will be more careful about their headlines and tone when talking about AI. This field is so technically difficult, so rapidly growing, and so hyped up that additional sensationalism just makes it that much harder to follow.

All jokes aside, these articles suggesting an AI takeover is likely are as ridiculous as saying the Large Hadron Collider will create a black hole that destroys us all.

The Google Brain researcher who created Keras, a high-level deep learning library that runs on top of TensorFlow. It’s great. You should follow him.

If you want something to be concerned about, worry about the things that AI can already do which are poised to disrupt society today:

Just plain scary how simple these AI hacks are. No reporting on these though!!

And that’s just the tip of the iceberg... I guess I’ll save the rest for another blog post :) Stay tuned for more of my posts in the Imploding Gradients data science publication on Medium.

Follow Imploding Gradients to hear from brilliant young minds in the data science community, like 16-year-old Mikel Bober-Irizar, who just published a paper at the world’s premier computer vision conference.

AI current events can be muddled with factual inaccuracies and sensationalism. If you’d like to continue to hear my views on current AI news, follow me on Medium and leave a ❤! Follow me @ngundotra on Twitter.

For more coverage of this, see Gizmodo’s review.
