Has Facebook gone too far?
Saturday morning I shared a picture of my five children on Facebook. With the picture, I wrote a short note in English expressing how I felt (names switched to fictional ones):
“Over 18 years ago I began the most rewarding, challenging and humbling journey of my life: fatherhood. John, Lucy, Janet, Myriam and Catherine, thank you for who you have become and are evolving into. I’m proud of all of you and you make me a better man and human being. I love you. And thank you Sarah for sharing this journey with me.”
Here’s a screenshot of the post with names omitted:
I tagged my five kids and my wife in the post. We live in Montreal, so many of their friends and mine are French Canadians and use Facebook in French. To my surprise, Facebook translated my post into French and published the French translation to the people in our respective networks who use Facebook in French, without: 1) showing them the original post in English; 2) showing me the translated post; or 3) getting my permission to show the French post instead of the original one. In addition, once I found out about the post, there was no way for me to see the translated post or change it from my own Facebook account. Here is the translated post they received (one of my daughters took a screenshot of it):
Here’s the full text of the post, with the same fictional names:
“Il y a 18 ans j’ai commencé le plus gratifiant, difficile et humiliant voyage de ma vie: la paternité. John, Lucy, Janet, Myriam et Catherine, merci pour qui tu es devenu et évoluent dans. Je suis fier de vous tous et tu fais de moi un homme meilleur et être humain. Je t’aime. Et merci à Sarah pour avoir partagé ce voyage avec moi.”
For those of you familiar with French, it is a terrible translation. It often picks the wrong meaning of a word (translating “humbling” as “humiliant”, which means humiliating rather than conveying humility, or translating “journey” as “voyage”, meaning a physical trip), makes grammatical errors by picking the wrong subject for a verb (for example, “pour qui tu es devenu” refers to only one of my kids, not all five), and even uses the wrong tense. Here is my attempt to translate Facebook’s French translation back into English to give you a sense of what their machine is saying:
“18 years ago I started the most rewarding, difficult and humiliating trip of my life: fatherhood. John, Lucy, Janet, Myriam and Catherine, thank you for who you have become (the translation works here because French doesn’t distinguish singular from plural in this case) and evolve in. I’m proud of all of you and you (the French translation once more uses the singular instead of the plural) make me a man better and human being. I love you (again, singular instead of plural). And thank you to Sarah for having shared this trip with me.”
I only found out about the French translation because my kids started joking about how bad my French was. None of them realized it was a translation; they all believed I had written the post. They didn’t notice the small-print link below the post in Figure 2, which I highlighted with a red oval to make sure you see it: “Notez cette traduction”, which means “rate this translation”. When you click on the link, you get this window:
The first thing at the top is the rating prompt: “Notez cette traduction” (rate this translation). Then you get four options, listed below with their respective translations in parentheses:
Voir l’originale (See the original)
Ne jamais traduire: Anglais (Never translate: English)
La publication n’était pas en Anglais (The post wasn’t in English)
Paramètres de langue (Language settings)
Only when you click on “Voir l’originale” (See the original) do you get to see my original post in English.
I then went to my Facebook profile to see how I could stop Facebook from translating my posts in the future, and there is no way to do that. Facebook essentially took my post, applied some artificial intelligence to it, changed its meaning and grammatical quality, but kept me as the author. This resulted in my kids questioning my ability to write in French, and I’m pretty sure everyone who read the post in French had the same reaction. To add insult to injury, I only found out about it because my kids made fun of me: Facebook hid the whole thing from me. I have no way of reading the version of my post that other people are seeing on Facebook.
This means Facebook has the power and the willingness to use its algorithms to change the meaning of what their users say on Facebook.
Think about this for a second. Facebook can take our words, apply some AI that can change their meaning, and still publish them on Facebook as our own words! Think about the potential ramifications of that. In my case, a bunch of my friends and my kids’ friends think I lost my ability to write in French and that I think fatherhood is a humiliating trip. What does the translation of my post say in Chinese? Russian? Spanish?
Why did Facebook do it?
As we can see from Figure 3, Facebook is clearly trying to build a better translation system. It is using its users to train its translation algorithms, but without their knowledge. My post was translated from English to French by a machine and shown to French-speaking users as if it were an original French post, without my consent or knowledge.
As a result, hundreds of people who read the post in French can now question my IQ. Hundreds of people now think I feel it is a humiliating experience to be a father, which calls my character into question, when my original post in English clearly referred to humility, not humiliation.
Is it defamation?
Don’t get me wrong, I’ll be fine, but there are now words on the Internet that I didn’t write yet that Facebook portrays as mine, and the only thing I can do about it is delete my original post in English. These words I didn’t write are likely having a negative impact on people’s perception of me. I’m sure Facebook didn’t mean it, but that’s probably defamation.
How can Facebook believe it is OK to let AI change the words of its users and the meaning of their posts without their knowledge?
This is a tough one to answer. Either Facebook didn’t think about the ramifications (extremely hard to believe, as it would show immense incompetence) or it did and didn’t care. Either way, it’s scary to think this could be happening at Facebook scale.
Has Facebook gone too far?
With the whole fake-news debacle and now this, it is difficult not to think that Facebook is ready and willing to do anything to further its interests, without consideration of the negative implications for its users and society. We need to start holding Facebook accountable for its actions.
Could Facebook have achieved the same or even better results with a different approach?
Absolutely! This whole thing looks like a major UX fail. It’s striking how little consideration Facebook showed not just for the author, but also for the reader, who doesn’t get to see the original post unless he clicks twice, and for the translation AI, which doesn’t get as much data as it could. If we assume Facebook’s goal was to build a better translation AI, it could have designed the system with three personas in mind: the author, the reader and the translation AI. Let’s take each persona and list their needs:
Author: needs the true meaning of his message best communicated and understood by as many readers as possible.
Reader: needs to understand the true meaning of the message of the author.
Translation AI: needs to know whether its translation of the message from English to French is accurate and if not, what an accurate translation of the message would be so it can learn and improve the translation over time.
The existing Facebook system fails every persona:
Author: True meaning of message not communicated to French speaking readers.
Reader: True meaning of message not available for consumption for French readers.
Translation AI: The author doesn’t rate or fix the translation. Most French readers don’t rate the translation because they don’t even know the French post is not the original. No French readers propose fixes to the translation (they don’t know it’s a translation, and it isn’t an option for them anyway).
A much better implementation of this AI first system could have worked like this:
First, show the author a translated version of his post and ask him to rate it and either publish it or fix it before it is published. As the author, I would probably have spent the extra five minutes to fix the translation, since that is aligned with my need to communicate the true meaning of my message to French-speaking readers. This would give the translation AI at least one data point to learn from. Second, show French readers both the original post and the French post, and ask them to rate the translation and/or propose an alternative. Although the reader-produced alternative translations would never be published on Facebook, they would be immensely valuable training data for the AI. That system would have served the needs of all three personas while providing a much better experience to the author and the French readers.
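To make the two-step flow concrete, here is a minimal sketch of what an author-in-the-loop design could look like. Everything in it is hypothetical: the data shapes, the function names and the `machine_translate()` stub are illustrations of the idea, not Facebook’s actual system.

```python
# Sketch of the proposed author-in-the-loop translation flow (hypothetical).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TranslatedPost:
    original: str                 # what the author actually wrote
    translation: str              # machine translation, pending author review
    approved: bool = False        # nothing is published until this is True
    feedback: list = field(default_factory=list)  # training data for the AI

def machine_translate(text: str) -> str:
    """Stand-in for the real translation model."""
    return "[FR] " + text

def author_review(post: TranslatedPost, correction: Optional[str] = None) -> TranslatedPost:
    """Step 1: the author rates or fixes the translation before it goes out."""
    if correction is not None:
        # The author's fix is itself a high-quality training example.
        post.feedback.append(("author_fix", post.translation, correction))
        post.translation = correction
    post.approved = True
    return post

def reader_view(post: TranslatedPost) -> dict:
    """Step 2: readers always see both versions, side by side."""
    assert post.approved, "translation must be approved by the author first"
    return {"original": post.original, "translation": post.translation}

def reader_rate(post: TranslatedPost, score: int, alternative: Optional[str] = None) -> None:
    """Reader ratings and alternatives feed the AI; they are never published."""
    post.feedback.append(("reader", score, alternative))

post = TranslatedPost("a humbling journey", machine_translate("a humbling journey"))
post = author_review(post, correction="un parcours qui m'a rempli d'humilité")
view = reader_view(post)
reader_rate(post, score=4)
```

Note that in this sketch the translation simply cannot reach readers without the author’s approval, and every interaction (author fix, reader rating) produces a labeled training example for the translation AI.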
Companies like Google, Facebook, Amazon, Apple and Microsoft have been using AI for a long time, and most have had bad experiences as a result. Most notable are Microsoft’s racist Twitter chatbot, Google’s algorithms suggesting that Jews and women are evil, and Facebook’s fake-news fiasco. But none of these are as bad as what Facebook has done with this implementation of its translation AI. By using AI to alter our online persona, Facebook threatens our online identity and pushes the boundaries of what is acceptable behaviour online. Some people, many of them my friends and my family’s friends, now have a distorted perception of me based on words they believe I wrote, because Facebook led them to believe I did, when those words were written by its translation AI. What’s next? Facebook’s AI writing and publishing posts on our behalf without our consent?
We are still at the early stages of understanding the implications of AI in how we interact with the digital world. These AI-first systems need to be designed differently, and the companies designing them need to think about the potential ramifications and be held responsible for the actions of their AI.