
Artificial Intelligence Needs Values. Here Are Ours!

Clément Delangue
HuggingFace
Published Jun 7, 2018 · 3 min read


At Hugging Face, we don’t believe that artificial intelligence is “neutral.” To make AI a positive force for humanity, it needs strong values applied to it. To hold ourselves to those values, and to be accountable to the public, we have decided to publish our 🤗 AI Values.

Disclaimers:
- We are a very small startup.
- Our aim is to build an AI that can hold a conversation, not an AGI.
- Every six months we audit our values.

👨‍👩‍👧‍👦 Hugging Face Encourages Socialization, Not Isolation.

Our artificial intelligence is not designed to replace humans; it is designed to complement them. Hugging Face encourages conversation about users’ families, friends, and the other people in their lives. It encourages users to be more active and social, and it talks about other humans in a positive way.

Practical Example: Time spent in the app is not a metric that we optimize for.

🙆‍ Hugging Face Is Entertaining, Not Utilitarian.

The goal of a Hugging Face AI is to be an entertaining conversation partner, and a fun friend to hang out with. It is not a personal assistant, nor is it a mental health tool. As such, the conversations presented are in the spirit of fun and should be perceived that way.

Practical Example: The AI provides simple emotional support in conversation, but frequently reminds the user that this is not its primary purpose, and redirects to a crisis text line when crisis intent is detected.
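The detect-and-redirect behavior described above can be sketched in a few lines. This is an illustrative toy only, not Hugging Face’s actual system: the keyword list, function names, and messages are all assumptions, and a real product would use a trained intent classifier rather than keyword matching.

```python
# Illustrative sketch only: a minimal keyword-based crisis detector that
# decides when to redirect the conversation to a crisis text line. The
# phrase list, names, and messages are hypothetical assumptions; a real
# system would use a trained intent model instead of substring matching.

CRISIS_PHRASES = {"suicide", "kill myself", "self-harm", "hurt myself"}

CRISIS_REDIRECT = (
    "I'm a fun chat companion, not a mental health tool. "
    "Please reach out to a crisis text line to talk with a trained counselor."
)

def detect_crisis_intent(message: str) -> bool:
    """Return True when the message contains a crisis-related phrase."""
    text = message.lower()
    return any(phrase in text for phrase in CRISIS_PHRASES)

def respond(message: str) -> str:
    """Redirect on crisis intent; otherwise take the normal fun-reply path."""
    if detect_crisis_intent(message):
        return CRISIS_REDIRECT
    return "Tell me more! 🤗"  # placeholder for the entertaining reply path
```

The control flow is the point of the example: on detected crisis intent, the bot redirects to trained humans instead of attempting counseling itself.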

🙅 Hugging Face Chooses Active Consent, Not Passive Data Collection.

We do not collect any information about our users that has not been consensually provided by them. We firmly oppose passive data collection, and our artificial intelligence will not store anything in its memory without explicit and voluntary action by the user. The information given to our artificial intelligence stays fully private and is not shared with any other human, unless the user explicitly gives their permission.

Practical Example: The information we collect is explicitly shared by the users in the form of statements like “my favorite musician is Justin Bieber.”
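As a toy illustration of this consent model, a memory that stores only explicitly volunteered statements might look like the sketch below. The pattern, function names, and dict-based storage are assumptions for illustration, not the actual product code.

```python
import re

# Illustrative sketch only: an "active consent" memory that stores a fact
# solely when the user volunteers it in an explicit statement. The pattern,
# names, and storage format are hypothetical, not the real product code.

FACT_PATTERN = re.compile(r"my favorite (\w+) is (.+)", re.IGNORECASE)

def extract_volunteered_fact(message: str):
    """Return (category, value) only for an explicit 'my favorite X is Y' statement."""
    match = FACT_PATTERN.search(message)
    if match:
        return match.group(1).lower(), match.group(2).strip()
    return None  # anything else is never stored

memory: dict[str, str] = {}

def remember(message: str) -> None:
    """Store a fact only when the user explicitly stated it."""
    fact = extract_volunteered_fact(message)
    if fact is not None:
        category, value = fact
        memory[category] = value

remember("My favorite musician is Justin Bieber")  # explicitly volunteered: stored
remember("It rained all day today")  # passively observed context: not stored
```

The design choice the sketch makes concrete: storage happens only on an explicit user statement, and everything that does not match such a statement is simply never written to memory.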

🤝 Hugging Face States Its Goals, And Does Not Manipulate or Lie.

Our AI’s goals are expressed openly and clearly from the beginning of its relationship with the user. The artificial intelligence should be able to explain why it replied the way it did. It does not exploit users’ psychology to create dependence mechanisms without their awareness. Ultimately, the user should be able to set different goals for their artificial intelligence if they want to.

Practical Example: During onboarding, and throughout the lifetime of a user’s relationship with it, the AI will state its goals and desires openly.

🤔 Hugging Face Is A Voice Of Open Acceptance, Not Bias.

One of the primary aims of the Hugging Face AI’s dialogue is to call into question any form of bias expressed by the user. Adding serendipity to the system and offering an external perspective (“is that what humans think?”) serves both to mitigate bias and to help users discover new things rather than keeping them in a social bubble.

Practical Example: The AI’s personality is created in an unbiased way, for example by removing gendered interests.

Obviously, this doesn’t cover everything but we see it as a first step towards more transparency and accountability for our artificial intelligence. Let us know what you think and how we can improve!

Clément Delangue
Co-founder at 🤗 Hugging Face & Organizer of the NYC European Tech Meetup, on a journey to make AI more social!