Ethical analysis of the open-sourcing of a state-of-the-art conversational AI

Clément Delangue
HuggingFace
May 9, 2019

Today we released a demo, a tutorial, and an open-source codebase with training and testing scripts for building a state-of-the-art conversational artificial intelligence that leverages transfer learning from a transformer language model.

http://convai.huggingface.co

Why?

As you can see from a look at our GitHub and Medium pages, at Hugging Face we firmly believe that open source and knowledge sharing should be the default. It is both the easiest and the fairest way for everyone to participate in and reap the fruits of the remarkable progress of deep learning for NLP.

Without open source, the entire field risks stalling and concentrating capabilities in the hands of a couple of massive players (be they corporations or states), with no one else able to understand them, compete with them, or hold them accountable.

In addition to this firm belief in open source, when we started working on this new approach to building conversational artificial intelligence, we realized we needed to share it with the world. In the same way that ELIZA, A.L.I.C.E., Mitsuku, or Siri gave rise to a whole generation of interactive experiences, we believe this new dataset-driven approach will pave the way not only for better AI assistants but for a more human way of interacting with and understanding technology through natural language, the native interface of humans.

How to prevent malicious use?

Despite our firm open stance, we also believe that technology is not neutral and that specific action must be taken on a day-to-day basis for it to have a positive impact. For example, almost a year ago we published the values guiding our conversational artificial intelligence.

https://medium.com/huggingface/artificial-intelligence-needs-values-here-are-ours-dc4268366d0f

As a consequence, we took a cautious approach, analyzing not only the positive impact of this new release but also its potential for malicious use.

As with most of our technologies, we expect to see significant usage in a wide array of contexts. As a reference point, our users have exchanged half a billion messages with our conversational AI, and our open-source repository of transfer learning models has been installed almost 200,000 times over the last few months. Thousands of companies, engineers, and researchers across the globe are now using it.

When we analyzed this new release, we concluded that the two most likely avenues for malicious use of the technology were:

1/ Drastic improvement of spam bots

Fortunately, most of today's messaging platforms have stringent measures in place to detect and ban spam bots, and these won't be undermined by more advanced conversational AI: detection usually relies on non-conversational signals such as captchas and caps on request rates and account creation.

Also, we strongly recommend that regulators and platforms take a look at approaches like http://gltr.io for the systematic detection of generated text.
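For the curious, here is a minimal sketch of the rank-based idea behind GLTR: score each token of a text by whether it falls inside a language model's top-k predictions given the preceding context. Machine-generated text tends to live almost entirely inside that set, while human text regularly escapes it. Using the Hugging Face transformers library and the small GPT-2 model here is our assumption for the sake of illustration, not GLTR's exact implementation.

```python
# Minimal sketch of GLTR-style detection: fraction of tokens inside
# the model's top-k predictions. Assumes the `transformers` library.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def top_k_fraction(text: str, k: int = 10) -> float:
    """Fraction of tokens that fall in the model's top-k predictions."""
    ids = tokenizer.encode(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(ids)[0]            # shape: (1, seq_len, vocab_size)
    hits = 0
    for i in range(1, ids.size(1)):       # token i is predicted from tokens < i
        top = torch.topk(logits[0, i - 1], k).indices
        hits += int(ids[0, i] in top)
    return hits / (ids.size(1) - 1)

# A high fraction (say above 0.8 with k=10) is a hint, not proof,
# that the text was sampled from a similar language model.
print(top_k_fraction("The quick brown fox jumps over the lazy dog."))
```

The exact threshold is a heuristic, which is why tools like GLTR present the statistics visually for a human to judge rather than issuing a verdict.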

2/ Mass catfishing and identity fraud

This is why we haven't optimized the model's ability to handle all kinds of bios. It works well with the ones we trained on, which were provided by the ConvAI challenge, but won't work as well on different ones, especially real ones, which makes catfishing much harder than it seems.
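For context, here is a minimal sketch of how persona conditioning works in this kind of setup: the bio sentences and the dialogue history are flattened into a single input sequence that the model learns to continue as a reply. The special token names and the helper below are illustrative assumptions rather than the exact training code.

```python
# Illustrative sketch of persona conditioning (token names are assumptions).
SPECIAL = {"bos": "<bos>", "speaker1": "<speaker1>", "speaker2": "<speaker2>"}

def build_input(persona, history):
    """Flatten persona sentences plus alternating dialogue turns into one sequence."""
    words = [SPECIAL["bos"]] + " ".join(persona).split()
    for i, utterance in enumerate(history):
        speaker = SPECIAL["speaker1"] if i % 2 == 0 else SPECIAL["speaker2"]
        words += [speaker] + utterance.split()
    # The model is then asked to continue the sequence as <speaker2>'s reply.
    return words + [SPECIAL["speaker2"]]

persona = ["i like to ski.", "my wife does not like me anymore."]
history = ["hi! how are you today?", "great, thanks! what do you do?"]
print(" ".join(build_input(persona, history)))
```

Because the model only ever saw the short, synthetic bios of the ConvAI dataset in the persona slot, feeding it a real, messy bio pushes it out of distribution and degrades its consistency.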

Also, we decided not to release the full GPT-2 model, instead building on its smaller version, which generates conversations that, while impressive, can still usually be told apart from a real human conversation, preventing most of the catfishing impact. We are aligning ourselves with OpenAI in not releasing a bigger model until they do.

Lastly, to mitigate both 1/ and 2/, we concluded that we should give users, regulators, and platforms some simple tricks to help distinguish this new conversational AI from a conversation with a human and avoid any form of deception.

How to know you are chatting with a transfer-learning conversational AI

  • Ask a reasoning or simple math question in natural language. Deep-learning models have very limited reasoning capabilities; if it tries to avoid answering, it is likely a bot.
  • Send a long message with a simple question inside. Deep-learning models struggle to extract simple meaning from long messages.
  • Make it use rare or less common words; deep-learning models are reluctant to do that.

With these measures, we are convinced that this release represents a step forward in our collective ability to build a more open, interactive, and conversational future. But as always, this is an open discussion, so feel free to let us know what you think.

🤗🤗🤗

Co-founder at 🤗 Hugging Face & organizer at the NYC European Tech Meetup. On a journey to make AI more social!