Synthesized Social Signals: Computationally-Derived Social Signals from Account Histories

Jane Im
Published in The Startup · May 1, 2020

This post summarizes a research paper that suggests a new type of social signal on social platforms. The paper was accepted to the ACM CHI Conference on Human Factors in Computing Systems, a premier international conference on Human-Computer Interaction. This work was done with Sonali Tandon, Eshwar Chandrasekharan, Taylor Denby, and Eric Gilbert.

What if we could know an account is toxic before interacting with it online? It would save us so much energy: we could avoid, mute, or block toxic accounts before finding ourselves in a dispute with them. It could also help prevent the horrific online abuse that happens to children in online gaming and chats, since children could be warned about accounts that pose as friendly children but are actually predators.

Unfortunately, that’s currently hard on social platforms. We argue this is because there aren’t enough social signals. Online, social signals are typically features provided by platform designers that allow users to express themselves: profile images, bios, location fields, cover images, and so on, as shown in the figure below.

Figure 1. Example social signals on Facebook: a) profile image, b) cover image, c) bio, d) # of followers, e) # of friends (and mutual friends), f) public list of friends, g) work information, h) school information, i) location information, j) family member information, and k) past posts on the profile page, which contain the cues from which we derive synthesized social signals (S3s).

Compared to offline, online social signals are limited to the few that platform designers explicitly provide. In other words, there are comparatively fewer cues about someone online than there would be face-to-face (f2f). And perhaps more importantly, online social signals are generally easier to fake than their offline counterparts. I can always pretend I’m a doctor by writing “Dr. Im” in my bio, when I’m actually not. ;)

Synthesized Social Signals (S3s)

But social platforms are better at one thing than offline interaction: they have a very rich source of information, an account’s behavior history. Offline, it takes time to get to know a person’s history of behavior. When a person drops by your office for the first time, they don’t bring their history of behavior with them through the door. On social platforms, though, you can easily access it by scrolling through the past posts, comments, tags, and so on left on profile pages.

Figure 2. While offline f2f interactions provide richer social signals, social platforms are better at one thing: they provide easy access to an account’s behavior history.

Leveraging such abundant data, we suggest the concept of synthesized social signals (S3s): social signals computationally derived from accounts’ post histories and rendered onto profiles so people can see them and make decisions in real-time interactions. In short, we’re bringing algorithms to the edge of the browser so people can use the information from those algorithms to make decisions in real time.

Figure 3. Illustration of synthesized social signals (S3s). @bob’s account history flows through different algorithms, A1, A2, … An, to produce signals that are then rendered into the profile. @bob’s profile has been augmented with signals corresponding to authoring toxic messages and spreading misinformation.
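To make the idea in Figure 3 concrete, here is a rough TypeScript sketch of such a pipeline. It is not Sig’s actual code; the names (PostHistory, S3Algorithm, synthesizeSignals) are hypothetical stand-ins for the algorithms A1, A2, …, An that a post history flows through.

```typescript
// Rough sketch of the S3 pipeline in Figure 3 (illustrative, not Sig's code):
// an account's post history flows through a list of algorithms, each producing
// a named signal that can then be rendered into the profile.

type PostHistory = string[]; // e.g., an account's recent posts

interface S3Algorithm {
  name: string;                                      // e.g., "toxicity", "misinformation"
  score: (history: PostHistory) => Promise<number>;  // returns a score in [0, 1]
}

interface SynthesizedSignal {
  name: string;
  score: number;
}

// Run every registered algorithm over the same post history.
async function synthesizeSignals(
  history: PostHistory,
  algorithms: S3Algorithm[]
): Promise<SynthesizedSignal[]> {
  return Promise.all(
    algorithms.map(async (a) => ({ name: a.name, score: await a.score(history) }))
  );
}

// The resulting signals would then be rendered into the profile, e.g. as tags
// such as "toxicity" or "misinformation" next to the account's bio.
```

Keeping the algorithms as a pluggable list is one way to capture the extensibility the concept implies: new S3s can be added without changing how the results are rendered.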

Sig

To illustrate the concept of S3s, we built a system called Sig, an extensible Chrome extension that computes S3s on social platforms and renders them onto profiles. A user scenario of Sig follows (based on an actual participant’s experience of using Sig).

One of Michelle’s followers retweeted a tweet containing a meme. She thought the tweet was funny, but noticed that Sig had flagged the account that tweeted it. She went to check the account’s profile and found it marked as toxic. By clicking on the “toxicity” tag, she discovered that many of its tweets were aggressive toward others and included offensive racial slurs. ‘Glad Sig prevented me from following that account,’ she thought as she quickly closed the profile page.

On Twitter, Sig functions on 1) profile pages, 2) the timeline, and 3) the notification page. While various S3s can be embedded into Sig, we focused on toxicity and misinformation for our field study because they are well-known problems on social media.

Profile page. Whenever a user visits another account’s profile page, Sig computes its S3s (here, toxicity and misinformation-spreading behavior) by fetching up to 200 of the account’s tweets and running the algorithms on them. If at least one of the account’s S3s is over the threshold the user has set (Figure 6), the border of the profile page turns red to warn the user (Figure 3).
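As a minimal sketch of this flow (not Sig’s implementation), the snippet below scores a batch of fetched tweets against the user’s thresholds and turns the profile border red when any threshold is exceeded. The keyword-based scorers are placeholders standing in for real toxicity and misinformation classifiers, and all names here are hypothetical.

```typescript
// Minimal sketch of the profile-page flow (illustrative, not Sig's code).

interface Thresholds {
  toxicity: number;       // user-set, 0..1 (Figure 6)
  misinformation: number; // user-set, 0..1 (Figure 6)
}

interface S3 {
  name: "toxicity" | "misinformation";
  score: number;     // fraction of fetched tweets the scorer flagged
  triggered: boolean;
}

// Placeholder scorers: crude keyword checks standing in for real classifiers.
const looksToxic = (t: string) => /\b(idiot|stupid|hate you)\b/i.test(t);
const looksLikeMisinfo = (t: string) => /\b(hoax|fake news)\b/i.test(t);

function fractionMatching(tweets: string[], pred: (t: string) => boolean): number {
  if (tweets.length === 0) return 0;
  return tweets.filter(pred).length / tweets.length;
}

// `tweets` would be up to 200 recent tweets fetched for the visited profile.
function computeS3s(tweets: string[], thresholds: Thresholds): S3[] {
  const toxicity = fractionMatching(tweets, looksToxic);
  const misinformation = fractionMatching(tweets, looksLikeMisinfo);
  return [
    { name: "toxicity", score: toxicity, triggered: toxicity >= thresholds.toxicity },
    { name: "misinformation", score: misinformation, triggered: misinformation >= thresholds.misinformation },
  ];
}

// If at least one S3 is triggered, warn the user by turning the profile border red.
function flagProfile(profileEl: HTMLElement, tweets: string[], thresholds: Thresholds): void {
  const s3s = computeS3s(tweets, thresholds);
  if (s3s.some((s3) => s3.triggered)) {
    profileEl.style.border = "3px solid red";
  }
}
```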

Figure 4. Profile border color is used to render S3s on Twitter’s notification and timeline pages. A red border indicates that at least one S3 has been triggered and the account may be risky to interact with (examples: Alice and John, in the first and third rows). A blue, double-lined border indicates that Sig is currently computing S3s for the account (examples on the right of the first row). If no S3s are triggered, the blue border disappears after computation (examples in the second row and on the right of the third row).

Notification and timeline. Sig also shows S3s on the notification page, since it is the main way users become aware of interactions with other accounts. As shown in Figure 4, a profile’s border indicates the current state of Sig. A blue, double-lined border indicates that Sig is computing S3s for that account. A red border means at least one S3 has been triggered (e.g., the account is likely toxic or spreading misinformation). If the blue border disappears and nothing further changes on the profile image, Sig has finished computing and no S3s were triggered. As a person scrolls down the timeline, S3s are computed in real time and visualized by coloring the borders of profile images.
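A tiny sketch of how these three visual states could be applied to a profile image element; the state names and styling values are illustrative, not Sig’s actual code.

```typescript
// Illustrative mapping from Sig's three border states to styling on a profile image.
type SigState = "computing" | "flagged" | "clear";

function renderBorder(img: HTMLElement, state: SigState): void {
  switch (state) {
    case "computing":
      // Blue, double-lined border while S3s are being computed.
      img.style.border = "3px double blue";
      break;
    case "flagged":
      // Red border: at least one S3 was triggered for this account.
      img.style.border = "3px solid red";
      break;
    case "clear":
      // No S3s triggered: the temporary blue border is removed.
      img.style.border = "";
      break;
  }
}
```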

Figure 5. Modal showing up to five tweets of an account flagged by the toxicity S3. Per our pilot study results, we aimed to ensure transparency in how Sig presents S3s.
Figure 6. Users can adjust the sliders to set their own thresholds for each S3 in Sig. In the field deployment, participants could choose thresholds for toxicity and misinformation.
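Below is a hedged sketch of how the slider-set thresholds might be persisted in a Chrome extension. It assumes the chrome.storage API (with @types/chrome for typings); the storage key and default values are placeholders, not values from the paper.

```typescript
// Sketch of persisting per-S3 thresholds set via the sliders in Figure 6
// (illustrative; the key name "s3Thresholds" and defaults are placeholders).

interface Thresholds {
  toxicity: number;       // 0..1
  misinformation: number; // 0..1
}

const DEFAULT_THRESHOLDS: Thresholds = { toxicity: 0.5, misinformation: 0.5 };

function saveThresholds(thresholds: Thresholds): void {
  chrome.storage.sync.set({ s3Thresholds: thresholds });
}

function loadThresholds(onLoaded: (t: Thresholds) => void): void {
  // Passing defaults to get() returns them when nothing has been saved yet.
  chrome.storage.sync.get({ s3Thresholds: DEFAULT_THRESHOLDS }, (items) => {
    onLoaded(items.s3Thresholds as Thresholds);
  });
}

// Example wiring for a slider (0-100) controlling the toxicity threshold:
// slider.addEventListener("input", () =>
//   loadThresholds((t) => saveThresholds({ ...t, toxicity: Number(slider.value) / 100 }))
// );
```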

Findings from the Field Study

For our field study, we focused on Twitter. We recruited 11 people who frequently use Twitter and asked them to use Sig while browsing Twitter for at least 30 minutes per day, over at least four days. Below are some of the findings of our field study.

1. S3s vs. Current social signals

Perhaps the most intriguing finding was that Sig identified accounts that are verified or have many followers, the main examples of existing social signals on social platforms, as toxic or misinformation-spreading. For instance, one participant encountered a politician’s account recommended by Twitter’s algorithm. It had a blue verification badge, but Sig flagged it as spreading misinformation. The participant said it made him realize that the politician “had an agenda they were pushing”. Such examples show that S3s can provide more accurate information about accounts than current social signals, which are easy to manipulate.

“It was like I figured, cause she had a blue check mark and everything, I figured, and she’s a politician so it’s kind of funny when you see that [flagged by Sig as misinformation spreading].”

2. Augmenting social decision-making

Even though we did not encourage participants to use Sig to follow, mute, or block accounts, many participants used it to make social decisions about strangers’ accounts. Our results also show that participants liked being able to make their own decisions based on Sig.

“Double check to see if the flagged tweets matched up with what I thought was a problem, which it usually did […] sometimes I’d mute or block them preemptively.”

3. Reducing time and effort to get account information

Six participants noted that Sig was fast at pulling information from post histories, reducing cost in time and effort. All participants liked the popup that pulls and shows up to five tweets (Figure 5), as it let them quickly see the reason behind Sig’s flagging; otherwise, one would have to manually scroll through the profile to read the tweets.

“…the extension would be a useful way for me to quickly get information without having to scroll back a zillion pages…”

4. Feeling safer

A few participants also reported feeling safer as a result of using Sig, since it alerted them that an account was toxic before any direct interaction. Unsurprisingly, the participants who mentioned this were all women and non-binary people, who tend to experience online harassment more frequently than men.

“I think I would keep setting it low for general safety, and then if something seemed like it was flagged incorrectly, then I could just note that to myself. I’d rather risk that than it not flagging people it should.”

So far, computation has mostly been hidden beneath the interfaces of platforms. Our work shows that surfacing the results of computation as social signals enables people to make their own decisions, and that people are interested in such opportunities. Furthermore, our work suggests that the CSCW community should ask: how can we re-imagine and design social platforms so that they fully utilize the strengths of online spaces?

Check out the full paper here! https://dl.acm.org/doi/abs/10.1145/3313831.3376383


임제인. PhD candidate at the University of Michigan. Meta PhD Research Fellow. Human-Computer Interaction researcher. https://consentful.systems https://imjane.net