Challenges and opportunities in online communication
Have you ever found yourself disagreeing with someone online? When someone says something you find upsetting in a comment, in a post, or in response to an email with many people cc’d, how does it feel to try to write your reply? Do you think the other person has any idea of the impact that their comment, post, or email had on you?
Misunderstanding, disagreement, and conflict are inherent to human experience, yet more often than not, people reconcile and find resolutions that let us live in rich, complex communities that accomplish great things. When we communicate face-to-face we have many built-in ways to understand each other. We hear, see, and feel each other, which helps us shape our communication, empathize with each other's needs, and find common ground.
When we communicate through text online, we lack most of the social signals that help us navigate in-person communication: vocal intonation, facial expressions, touch, and posture. We read something online that we find exasperating and we are alone with our emotions. When we feel aggravated, it is more difficult to craft a socially constructive response, which is hard enough one-on-one. Now imagine having to do this on a stage with an audience, which is what it feels like when your email or post is seen by co-workers or friends. At the same time, the friend or co-worker who wrote whatever got us going has no explicit information about the impact it had on us.
Then there is free speech. The right to express ourselves is so important. Yet where in the world can we say whatever we want without regard to the setting we are in? What are the things you shouldn't say in a meeting at work? Or at a family gathering? At school? There are social norms that make those environments safe, where we can relate to each other without worrying about who voted for whom in the last election.
Laws and policies are essential to our society. It is illegal to harass or hit someone. We need police and a legal system to come in and adjudicate when laws are broken. When we do something that violates social norms, usually someone takes us aside and we talk about it or there are natural consequences like someone leaving the meeting or dinner table. We learn from each other. It would be awkward to have an authority, say the “comment police”, come in and decide for us.
We are living in a world where political parties, news outlets, and other institutions share articles that are polarizing in order to further their own goals. When these articles resonate, we share them with our friends. Yet some friends will find them upsetting. If one of them wants to share a different opinion, imagine them doing so on that stage. What would it feel like if someone contradicted something important to you in front of all your friends or co-workers?
How are we to learn to be with each other online?
A few notes
I worked at Facebook as an engineering director from 2009 to 2015. Amongst other things, I supported the engineering, product, user research, data analytics, design, and content strategy teams that worked on customer care and stopping spam. One of the areas I focused on was building tools to help people navigate difficult life experiences, from bullying to suicide prevention. One billion people started to use Facebook while I was working there. Since I left Facebook, I've been thinking about the challenges and opportunities in online communication and want to share some thoughts. I have two hopes: (1) to give people who use technology tools that help them navigate challenging moments online, and (2) to share with people who build technology questions that offer opportunities for new work that helps build and maintain community.
Describing situations with multiple parties can be tricky. I will use 'Alice' when I'm talking about someone who shares something online, and 'Bob' for the person who is reading it.
N.B. Everything I share about the time I was at Facebook was publicly disclosed while I was there.
Part One: If you’re someone who uses technology
Have you ever been part of an email discussion where someone said something you strongly disagreed with? What happened when you replied to everyone with your own opinion? How about when you see a political post on social media? Do you feel you can express what you think or feel? How does it go when you do?
The most surprising finding from my time at Facebook was survey research which found that 90% of people whose content was reported shared it with a positive intention. Amongst other reasons they shared it because they thought it was important or funny.
It took a few years to get to these findings. Early in my time at Facebook we found that most of the photos being reported did not violate any Facebook policies or community standards. For example, a photo reported for nudity would not have any nudity in it. Instead it would be two people with their arms around each other, waving at the camera. When we investigated, we learned that Alice had shared a photo with Bob in it, and Bob did not like how he looked in the photo that had just been shared with their friends. When we found this, our first approach was to give people a chance to send a message, with an empty message box. Few people used it.
Why was this? I have found it most helpful to reason about these things by removing technology. When we're feeling embarrassed, it is hard to tell a friend that what they did was embarrassing to us, even when they did it unintentionally. When we're in person, we communicate that embarrassment with our body language, and the other person can correct. Online, Alice has no access to real-time, in-person feedback, and without knowledge of the impact of what she shared she cannot adjust or correct.
Recognizing this, the team built a tool that made it easy for Bob to ask Alice to remove the photo. First there was an option: "I'm in the photo and I don't like it". Then there were options about what you did not like about the photo: "It's embarrassing" or "It's a bad photo of me". Then a message box with a draft message came up. The draft was written using the science of polite communication, something along the lines of: "Hi Alice, there is something about this photo that is embarrassing to me, would you please take it down?"
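For readers who build these kinds of flows, here is a minimal sketch of the idea in Python. The option names and draft wording are my illustration, not Facebook's actual implementation:

```python
# Illustrative sketch: map the reason the reporter picked to a prefilled,
# editable draft that names their feeling rather than accusing the sharer.
# All option keys and wording here are hypothetical.
DRAFT_TEMPLATES = {
    "embarrassing": (
        "Hi {sharer}, there is something about this photo that is "
        "embarrassing to me, would you please take it down?"
    ),
    "bad_photo_of_me": (
        "Hi {sharer}, I don't like how I look in this photo, "
        "would you mind removing it?"
    ),
}

def prefill_message(sharer_name: str, reason: str) -> str:
    """Return an editable draft for the chosen reason, or an empty box
    (the original, rarely used design) if there is no template."""
    template = DRAFT_TEMPLATES.get(reason)
    return template.format(sharer=sharer_name) if template else ""

print(prefill_message("Alice", "embarrassing"))
```

The design point is that the prefilled draft lowers the cost of a hard conversation; people can still edit it, but they no longer face a blank box while upset.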
90% of people surveyed appreciated receiving such a message. Would you like to know when you’ve unintentionally had a negative impact on someone else? How would you feel if that was communicated in a respectful way?
The other surprising finding was that half of people who received the message would take the photo down, and the other half would engage in a dialog and the photo would stay up. From surveys people completed after their experience, we learned that the dialog helped people understand each other (“What do you mean embarrassed? You look great!” or “It is my granny’s favorite picture of me!”). That mutual understanding led to positive sentiment by both people most of the time.
In that process, we learned the importance of communicating emotion. When we become aware that someone is sad or embarrassed, we are more likely to empathize.
Dealing with misunderstanding, disagreement, or conflict online is now part of our lives. So what can you do?
- Take a breath. Name your feeling, get some distance. Avoid reflexively responding while triggered. Avoid responding with an audience.
- If possible, talk in person, over video chat, or over voice chat.
- Send a private message that uses respectful language and discloses how you feel (e.g. what emotion(s) you are feeling, such as embarrassed, sad, upset), not what you think of the other person (e.g. "I am upset" vs. "you are a jerk").
- The goal of sending a message is to offer the signals that would be present face-to-face but are missing online, and to invite a respectful dialog. The goal is not to change their mind; the goal is to start a conversation.
When we are negatively impacted by the actions of others, it is important to let them know. If you'd like to learn more about effective ways of doing this, I recommend reading "Nonviolent Communication" by Marshall Rosenberg.
There are also people who share with negative intent, to provoke or hurt, or who get defensive or upset even when we communicate carefully. It is important to recognize when the conversation goes astray and to walk away. When doing so, it is key to connect with someone you trust who will help you process what happened.
Part Two: If you’re someone who works in designing, building, measuring, or supporting technology
What % of people who use your product would say:
- An interaction in your product has distanced them from someone they know.
- They feel it is not safe, or pointless, to comment on a post or reply to a message they disagree with.
- They feel there is something they can do about posts or messages they find upsetting.
- They don't feel safe in your product.
When I started working at Facebook, I had come from a security background. The rules were simple: there were good people and bad people. You created tools to detect and stop the bad people in order to protect the good people.
I then learned that there were more categories:
- People who behaved as good members of the community (no spamming).
- People who behaved in ways that were harmful to the community, but given feedback would improve their behavior (people who mean well but spam).
- People who behaved maliciously (spammers, trolls) — a small number with great effect.
There was a big difference between malicious spammers whose goal was to scam people, and people who were trying to start a business or who believed in a cause, who meant well but sent unwanted messages or posts.
The same pattern holds in other areas. A few years ago Facebook shared the results of a large-scale study about bullying: of 800,000 reports, around 1.5% were the kind of bullying that you read about. The remaining 98.5%, hundreds of thousands of reports, were people doing things that other people found annoying or bothersome, like posting memes or sharing embarrassing photos from a party.
This means that it is important to have two kinds of efforts:
- High-intensity, low-volume issues: malicious bullying, people out to hurt other people.
- Low-intensity, high-volume issues: when people do things that other people find upsetting, but given feedback and respectful dialog both people would learn.
If this is an area that you're working on, a few questions:
- How do you discover high-intensity, low-volume issues?
- How do you discover low-intensity, high-volume issues?
- What are your measurements of each?
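As a toy sketch of how such a triage might look, assuming each report carries a severity label that would in practice come from review outcomes, surveys, or a classifier:

```python
# Hypothetical triage sketch: separate reports into the two classes of
# effort described above. The "severity" field is an invented label.
def triage(reports):
    """Split reports into high-intensity/low-volume vs low-intensity/high-volume."""
    buckets = {"high_intensity_low_volume": [], "low_intensity_high_volume": []}
    for report in reports:
        if report["severity"] == "high":
            buckets["high_intensity_low_volume"].append(report)
        else:
            buckets["low_intensity_high_volume"].append(report)
    return buckets

# Shape mirrors the bullying study above: ~1.5% severe, the rest friction.
reports = [{"id": 0, "severity": "high"}] + [
    {"id": i, "severity": "low"} for i in range(1, 67)
]
counts = {k: len(v) for k, v in triage(reports).items()}
print(counts)  # {'high_intensity_low_volume': 1, 'low_intensity_high_volume': 66}
```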
For high-intensity, low-volume issues like bullying, I learned it was very important to partner with people in the scientific community who study these issues, as well as practitioners in the field. I liked thinking that I could come up with good solutions, but I was proven wrong many times. What worked best was understanding what works for the people who address these issues within their communities, and seeking out people who do evidence-based measurement of their work. It is only by understanding what the best practitioners do that you can start building good solutions that take advantage of the scale technology offers.
Low-intensity, high-volume issues give you a great opportunity to work on peer conflict resolution and civil discourse. The issues that come up usually do not violate policy, but they are a negative experience for the people you serve. There are many opportunities for innovative work here. Messaging is not the best solution for many problems. What would be other lightweight, private ways of giving and receiving feedback? How well does emotion-based feedback work? How does social connection affect the feedback that is received? How would public or private reputation scores work? The best way to measure work on this class of issues is by understanding the sentiment of the parties involved.
Understanding reports
There is no such thing as a bad report: if someone is reporting an issue, there is a reason for it. How well do you understand why somebody reported an issue?
One of the key things I learned is that what you perceive is shaped by the environment you’re in. The reporting flows you build are based on the policies that you have committed to enforce. When you review issues, it is against the categories that you have defined.
How do you discover other issues?
The best insights I got about reported content involved putting the policies aside for a moment and asking people why they were reporting. Most of the time the answers were surprising, and did not fit any of the policy categories. The American or European definition of 'hate speech' does not apply in other cultures; in Latin America the category was used for posts about soccer teams.
The only people who understand the reporting categories as defined by your company's policies are the people who do an amazing job of reviewing the content that was submitted, and who create the policies that make that work possible. Those are such hard jobs, and the people doing them deserve our deepest respect.
But what about all the issues people try to report that are not covered by policies? How well do you understand the negative experiences people have with the products you build?
There are a few questions that help bring clarity to that:
- What % of reported items do you act on? (i.e. clearly violate policy)
- What % of reported items are ‘borderline’? (i.e. close but not quite)
- What % of reported items seem to be unrelated to the reporting category?
- What is the dropoff rate on the reporting flow?
- Do people who use the reporting flow feel that it captures their issue?
- How do people feel after they submit a report? Toward the person who posted, other people involved, and your company or service?
- What % of people feel the issue they reported was resolved?
- What % of people feel positive about the person who created the item they are reporting after the issue is addressed?
Can a positive reporting experience or outcome actually bring people closer together? Foster mutual understanding or civil discourse?
If the % of reported items you act on is less than 20%, then your reporting flows do not meet the needs of the people you serve. It means most issues will be ignored, and reviewing them against the policy they were reported under will not be time well spent; everyone loses. A low action rate means that the reports need to be investigated, not against policy but through user surveys, to understand what is happening. Ideally, once you get the reporting structure right, the % of reported items that are clearly related to the category should be around 75%.
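Here is a minimal sketch of how these measurements might be computed, assuming each report record carries a review outcome and a flow-completion flag; the field names are hypothetical:

```python
# Sketch of the report-flow health metrics discussed above.
def report_flow_metrics(reports):
    """reports: list of dicts like
    {"outcome": "actioned" | "borderline" | "unrelated", "completed": bool}."""
    total = len(reports)
    if total == 0:
        return {}

    def frac(pred):
        return sum(1 for r in reports if pred(r)) / total

    return {
        "action_rate": frac(lambda r: r["outcome"] == "actioned"),
        "borderline_rate": frac(lambda r: r["outcome"] == "borderline"),
        "unrelated_rate": frac(lambda r: r["outcome"] == "unrelated"),
        "dropoff_rate": frac(lambda r: not r["completed"]),
    }

metrics = report_flow_metrics([
    {"outcome": "actioned", "completed": True},
    {"outcome": "unrelated", "completed": True},
    {"outcome": "unrelated", "completed": False},
    {"outcome": "borderline", "completed": True},
])
# Per the rule of thumb above: action_rate < 0.20 signals the flow is not
# capturing people's real issues; ~0.75 category match is a healthy target.
print(metrics)
```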
How do you get the reporting structure right?
By asking people why they are reporting, and then providing the right tools for the issue they are reporting. You can measure this by asking what % of people using the reporting flow feel it captured their issue. It was through this process of gathering feedback that I learned the importance of language. Identifying the specific language or words that matched people's experience led to improvements in engagement of 20% to 60%.
It is also important, when developing the language, to work closely with people from the cultures your product supports. For example, the initial word used for 'harassment' in France was found to mean only sexual harassment. People there pointed out that it did not encompass different forms of harassment the way it does in English, and they suggested a different word to capture this.
During a focus group with teenagers there was a discussion about being 'harassed', the word teens were given to report bullying, and no one in the room felt that they, or anyone they knew, had been harassed. When the moderator asked, 'So what does happen?', they said 'people spread rumors' or 'people post inappropriate photos'. When the options were changed to reflect teens' experience, the completion rate for reporting went from 20% to 80%.
Going back to 'hate speech': in India, the % of reported items clearly related to the 'Hate Speech' policy was around 5%. Most of the items were people saying things about sports teams or religion; they did not violate any policies. Surveys found the reasons people were reporting: "This disrespects my religion" or "This disrespects something important to me". When the reporting options reflected people's experience and language, accuracy got to 75%, and people could then be directed to the correct resolution: an understanding that this kind of speech is allowed, but that they can discuss it with their friends. When the issue really was hate speech, a different series of steps and policy actions was needed.
Most reporting tools use words and concepts that have meaning for the people in the company. Finding the language that matched the person's experience led to significant increases in completion, and it helped people get to the correct resolution for their issue.
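A small sketch of how you might compare completion rates across wording variants; the numbers are illustrative, echoing the teen focus group above:

```python
# Hypothetical comparison of reporting-flow completion by wording variant.
def completion_rate(starts: int, completions: int) -> float:
    """Fraction of people who started the reporting flow and finished it."""
    return completions / starts if starts else 0.0

variants = {
    "harassed": completion_rate(starts=1000, completions=200),
    "people spread rumors": completion_rate(starts=1000, completions=800),
}
print(variants)  # {'harassed': 0.2, 'people spread rumors': 0.8}
```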
Another interesting idea to explore is a reporting structure that focuses on the person's experience rather than on policy. Is this something that made someone feel very anxious or afraid? The reported item might not violate policy, but it is important to provide tools for people who ask for help.
The role of policy and classifiers
There are four kinds of content you see on a site:
- Happy content.
- Content that people find upsetting but does not violate policy, for example polarizing content like political speech.
- Borderline content (e.g. anti-vaccination).
- Content that violates policy (e.g. nudity).
Policy is essential to help identify the content that you do not want on your site. Classifiers are necessary to proactively identify content that violates policy, as well as to identify borderline content and limit its distribution. Policy and classifiers are a necessary part of the solution, but they are not the solution.
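To make that gap concrete, here is a minimal routing sketch under assumed classifier scores. The thresholds and names are illustrative; the point is that removal and distribution limits cover only two of the four buckets, and the upsetting-but-allowed bucket needs different tools:

```python
# Hypothetical routing over the four kinds of content above.
# Scores are assumed classifier outputs in [0, 1]; thresholds are invented.
def route(violation_score: float, borderline_score: float,
          reported_as_upsetting: bool) -> str:
    if violation_score > 0.9:
        return "remove_or_review"      # violates policy (e.g. nudity)
    if borderline_score > 0.7:
        return "limit_distribution"    # borderline (e.g. anti-vaccination)
    if reported_as_upsetting:
        return "offer_feedback_tools"  # allowed but upsetting (e.g. political speech)
    return "no_action"                 # happy content

print(route(0.05, 0.1, True))  # -> offer_feedback_tools
```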
When I was first interviewing at Facebook, I remember someone saying that the goal was to replace the Christmas letter: to create a place where we could share about our lives, the happy and difficult moments, so that we did not need to write a letter at the end of the year. Our friends would feel close to us even if they lived far away.
What kind of place do we want our services to be?
What I found most interesting about the Russian articles was not the Russians, but that they were intent on sharing the things that divide us, the very things that our free speech correctly protects.
Any political operative who understands this knows that what they need to do is share polarizing content. Content that is well within what policies allow. People will respond emotionally and share with good intentions.
Free speech works in a different way at the dinner table, in the workplace, or at school. There are things we don’t say because of the social norms in those spaces. Social norms help create spaces we feel safe inhabiting.
Here is a good example of social norms, as articulated by a moderator in a Reddit post about Elijah Cummings' passing:
r/Politics does not welcome celebrating one’s death.
You are welcome to disagree with politicians and everything they did during their time in office.
You are not welcome, however, to make fun of their death. Their death has nothing to do with our political opinions.
You could say at the dinner table that you're happy he's dead, but the reactions of those around you would likely temper your words; if you did say it, you might get some feedback later on. This kind of feedback is how we learn to be in community with each other, and it is a key part of creating communities that we want to be a part of.
Today, when we come across content that is upsetting, our tools are to report it, or to get up "on stage" and comment in front of all our friends, which can lead to escalation. Reporting won't do anything if the content does not violate policy. Or we can avoid conflict and create distance. The next set of tools (hide someone from your feed, block them, unfriend them, or have an automated system hide their content) does not help them learn to do anything different; it only distances us or severs our connection to them.
What tools or products could help mutual understanding? I don’t mean agreement, as all of us are entitled to our values and opinions.
The role of services in our lives
One of the things I found helpful in reasoning about this class of problems is to take the ‘online’ away. Right now services have put themselves in the position of policing our communications, and I don’t think that works for anybody.
It is as if you were in a dinner conversation, and you said something only to have someone from the content police come into the room and take it away. While it is essential to have police to protect us from the most harmful things, there should be other forms of recourse where we can figure out the right outcome with each other.
If we build good mechanisms that help people sort things out with each other, there might be less need for the content police in our everyday lives.
Which buttons you have matters
One of the things I began doing after working on these tools was sending private messages to friends who had shared things I found upsetting. Prior to building the tools I've described, I wish I had done research on why people would reach for the report button instead of messaging their friends. Maybe it is because when we're emotionally activated, we reach for the first thing that we feel we can do. Maybe it's because it's hard to tell our friends when we're upset because of something they did.
The last published number I saw on the flows where you ask your friend to remove the picture was that it had been used by over 100 million people. Putting a link or button that helps people navigate a moment when they are emotionally activated works.
Opportunities in online communication
After seeing the events of the last few years, I believe that working on polarizing content that does not violate policy (or is borderline) is one of the most important issues for the fabric of society today.
There are two solutions: limit distribution (or audience), or create tools that help the development of social norms and civil discourse. I believe limiting distribution will only further the divisions that are developing.
We are in the early days of living in a world where most of our communications are mediated by technology. There is a great opportunity to shape the foundation of this field:
- How can we help people become aware of when they have had a negative impact on others?
- What could we build that cultivates mutual understanding on the things we disagree on?
- What are different ways of supporting the creation of social norms in online spaces?
- What could we build that would help civil discourse?
These assume:
- Most people will appreciate receiving feedback in a respectful way.
- People who are malicious (trolls, spammers, etc.) require a different treatment.
There are different ways that tools could be measured:
- Does a negative interaction result in neutral or positive sentiment between people?
- Do people feel that they have recourse when they encounter upsetting content?
I believe that if we develop products that take the interactions that divide us and instead turn them into opportunities for respectful feedback that incorporates the communication of our emotions, then we will, with each other, develop ways of being online that strengthen our communities, independent of our different beliefs, political or otherwise.
Recommended reading
Tankard & Paluck, Norm Perception as a Vehicle for Social Change: https://static1.squarespace.com/static/5186d08fe4b065e39b45b91e/t/568de2f4e0327c7b8a288d06/1452139252477/TankardPaluck+2016.pdf
Keltner, A psychologist probes how altruism, Darwinism and neurobiology mean that we can succeed by not being cutthroat: https://www.scientificamerican.com/article/kindness-emotions-psychology/
Weger, Active Listening in Peer Interviews: The Influence of Message Paraphrasing on Perceptions of Listening Skill: https://www.tandfonline.com/doi/full/10.1080/10904010903466311?scroll=top&needAccess=true
DeSteno, The simplest way to build trust: https://hbr.org/2014/06/the-simplest-way-to-build-trust
The Better Conversations Guide: https://onbeing.org/civil-conversations-project/better-conversations-guide/
Six tips for reading emotion in text messages: https://greatergood.berkeley.edu/article/item/six_tips_for_reading_emotions_in_text_messages
Can text messages damage intimate communication?: https://www.psychologytoday.com/us/blog/rediscovering-love/201102/can-text-messages-damage-intimate-communication