Opportunities in the Design of Decentralized Social Networks

Wes Chow
15 min read · Feb 9, 2021


What follows are my own views and not the views of my employer, the MIT Center for Constructive Communication.

Twitter: @weschow

On January 8th, Twitter kicked sitting US president Donald Trump off its platform. This prompted two divergent reactions:

  1. Finally, Twitter is enforcing its community speech code, for which Trump had been an exception for the last four years!
  2. This is the Big Tech censorship we’ve been warning about for the last four years!

Which narrative you were exposed to depends on your political leanings, though you can hold both views at the same time (as I do).

One of the reactions to the suspension was a mass migration of the political right to Parler, a rival social network staking its reputation on the defense of free speech… speech so free, in fact, that all of it leaked through Parler’s unsecured and poorly designed API to an activist hacker intent on archiving evidence of malfeasance for law enforcement. The hacker kicked off the process because she saw the writing on the wall: her download completed just as Amazon Web Services, Parler’s hosting platform, pulled the plug because of ongoing skepticism that the social media app could keep violent speech off its platform through its user moderation system.

Parler’s app has also been removed from both the Apple App Store and Google Play, effectively wiping it off the Internet, at least for the near term, while the company looks for a new hosting provider and presumably rewrites significant parts of its code.

In the wake of these seemingly coordinated actions to remove speech at a politically volatile time, people on both the left and right have renewed interest in decentralized and federated social networks, idealized as being free from Big Tech censorship.

As an input into deciding whether this is a good idea, we should ask whether such a system has the ability to fight off the worst parts of social media. Unjustified platform censorship happens relatively infrequently compared to trolling and general toxicity. A system that solves the first problem in a way that precludes solving the second is setting itself up with a wide surface area for social attack.

The natural reaction is to first design for a set of principles, launch the network, then deal with speech concerns later down the road (or not at all). If we did this, we would be making the same mistake as the mainstream social networks, which first optimized for popularity upon launch, then found themselves dealing with issues deeper and more endemic to society than voting on which girls are hottest. The only tool we currently have to deal with bad faith participants — I’ll call them trolls — is precisely centralized moderation.

This is a hard problem, and one that I think is not properly appreciated by the bulk of the decentralized networks community. I think there are numerous unsolved problems that pioneering social network designers need to tackle, and I’ll lay out some ideas, not all of which apply to decentralized networks per se.

The Class Clown Effect Is Real

A striking thing about extreme left or right political figures on Twitter is the amount of vitriol levied at them on a daily basis, so much so that I often think there is no conceivable way they could be having productive conversations. Why, then, do they continue to participate on the platform at all? Why not jump straight to an alt-network, where the only followers are those of the same political leaning?

Research into the effect of downvotes shows that people are more likely to continue to engage in bad behavior after a swell of negative feedback. The class clown effect is real, and we shouldn’t underestimate people’s drive to seek attention.

A simple interaction design element is to mask downvotes. Reddit does this, though likely unintentionally. I believe this has something to do with how they’ve managed to get toxicity under control, though I haven’t yet found a supporting study. Likewise, YouTube hides downvotes from comments, and Mastodon doesn’t display “boost” or favorite counts (equivalent to Twitter’s retweets and likes, respectively).

One idea I believe is underexplored is to institute a kind of reputational stake and slash system, encoding what I think of as the Golden Rules of Social Networks:

  1. Don’t be a troll.
  2. Don’t encourage trolls.

Suppose we have a system in which upvoted posts imbue the author with a reputation score, which in turn is used to influence recommendation algorithms or perhaps monetary payment. But in order to subdue a possible class clown effect, every post has some probability of being checked by a review board. If the board determines the post violates speech guidelines, then the author, as well as anybody who interacts with the post, loses reputation.
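
To make the mechanics concrete, here is a minimal sketch in Python. The review probability, the reward and penalty sizes, and names like `review_board.violates_guidelines` and `post.interacting_users` are placeholders of mine, not a worked-out design:

```python
import random

REVIEW_PROBABILITY = 0.05   # hypothetical: roughly 1 in 20 posts gets audited
UPVOTE_REWARD = 1.0         # hypothetical reputation gained per upvote
VIOLATION_PENALTY = 10.0    # hypothetical slash, sized to outweigh expected gains

reputation = {}             # user id -> reputation score

def record_upvote(author):
    """An upvoted post imbues its author with reputation."""
    reputation[author] = reputation.get(author, 0.0) + UPVOTE_REWARD

def maybe_audit(post, review_board):
    """Every post has some probability of being checked by a review board.

    If the board decides the post violates speech guidelines, the author and
    everyone who interacted with the post lose reputation, encoding both
    golden rules: don't be a troll, and don't encourage trolls.
    """
    if random.random() > REVIEW_PROBABILITY:
        return
    if review_board.violates_guidelines(post):
        for user in [post.author, *post.interacting_users]:
            reputation[user] = reputation.get(user, 0.0) - VIOLATION_PENALTY
```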

In my lab we’re considering the related idea of applying reputation to network connections instead of content. To join a network, a user might be required to receive endorsements from existing members. If a user is flagged by the review board, that user and the endorsing users all take a reputational hit.
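
A sketch of this connection-based variant, continuing the hypothetical reputation store above; the endorsement requirement and the penalty split are again placeholder choices:

```python
ENDORSER_PENALTY_FACTOR = 0.5   # hypothetical: endorsers absorb half the penalty

endorsers = {}   # user id -> the existing members who vouched for them

def join_network(new_user, sponsors, min_sponsors=2):
    """A new account must be endorsed by existing members before it can join."""
    if len(sponsors) < min_sponsors:
        raise ValueError("not enough endorsements to join the network")
    endorsers[new_user] = list(sponsors)
    reputation.setdefault(new_user, 0.0)   # uses the reputation dict from the sketch above

def apply_flag(user, penalty):
    """A review-board flag hits the flagged user and everyone who endorsed them."""
    reputation[user] = reputation.get(user, 0.0) - penalty
    for sponsor in endorsers.get(user, []):
        reputation[sponsor] = reputation.get(sponsor, 0.0) - penalty * ENDORSER_PENALTY_FACTOR
```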

Who makes up the review board? We have a few existing models to draw from:

  1. Professional moderators, such as those Twitter and Facebook employ. Facebook has also established a “supreme court” review board to set higher level policies.
  2. Community moderators, such as Reddit’s system.
  3. Randomly selected moderators, chosen like jurors via sortition. I’m intrigued by this idea and know of no system that implements it; unlike options #1 and #2, community norms would be fluid, changing in step with changes to the group’s makeup.

Effective interaction design should also include a defense against large numbers of bots or imposters, so-called Sybil attacks. A naive implementation of #3, for instance, is easily manipulated by an army of bots. The Sybil problem is a fundamental problem of decentralized commerce, and so the smartest crypto folks are investing a lot of brainpower in looking for solutions.
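
To see why the naive version of option #3 falls to a Sybil attack, consider this sketch: jurors are drawn uniformly at random from all registered accounts, so an attacker’s expected share of any jury is simply their share of accounts. The numbers and function name are mine:

```python
import random

def draw_review_board(registered_users, size=9):
    """Naive sortition: sample a jury uniformly from all registered accounts."""
    return random.sample(list(registered_users), size)

# With 10,000 genuine users and 40,000 bot accounts, an attacker expects to
# control about 80% of any randomly drawn board -- enough to launder any post.
genuine = [f"user{i}" for i in range(10_000)]
bots = [f"bot{i}" for i in range(40_000)]
board = draw_review_board(genuine + bots)
```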

I believe another underexplored area is reputation portability, which seems like a good fit for blockchain technology. If “good behavior” were encoded in a distributed ledger, reputation could accrue on multiple platforms and be portable from one to the other. If the reputation system became powerful enough that nobody would want to start a social network without integrating with it, one consequence might be that trolls find themselves with nowhere on the Internet to hide. A multidimensional reputation system, however, could allow a diversity of reputational norms to coexist. We see this, for example, in product review sites.
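
As a speculative sketch of what a portable, multidimensional reputation record might look like, independent of any particular ledger; the dimension names are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class ReputationRecord:
    """A user's portable reputation, split across independent dimensions.

    Each platform accrues score in the dimensions it cares about (say,
    "civility" on a discussion network, "review_quality" on a product review
    site), so communities with different norms can coexist while sharing
    and reading the same record.
    """
    user_id: str
    scores: dict = field(default_factory=dict)   # dimension name -> score

    def accrue(self, dimension: str, delta: float) -> None:
        self.scores[dimension] = self.scores.get(dimension, 0.0) + delta

record = ReputationRecord("alice@example.net")
record.accrue("civility", 2.0)         # earned on one platform
record.accrue("review_quality", 5.0)   # earned on another, carried with her
```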

Incentive Structure Matters (and Interaction Design)

The key question to ask when trying to understand any organizational structure is, “who gains in status?” or in user interface terms, what are the network’s affordances? The presence of retweets and likes, in addition to their input into the recommendation algorithm, means that the interaction design of Twitter is biased towards popularity. It might be possible that popular content is generally uplifting, productive, and intellectually satisfying, but even a minority of failures can make the entire system seem destructive. The happiest Twitter users I know carefully curate their feeds, and even then the recommendation algorithm can leak popular toxicity through.

Consider status in Wikipedia. The culture is biased towards editors who want complete, uncontroversial articles, while the software hides editors from prominence. It’s not hard to find the edit history, but ego is taken out by interaction design, in contrast to a site like Quora, in which authorship is very much front and center.

Instead, Wikipedia moves status to a separate editor sphere, which has its own brand of argument and trolling. Ingrained in editor culture is a set of principles and norms, such as Neutral Point of View and Citation Needed. Admission into the editor club implicitly requires following these rules, as senior editors employ heuristics such as newb-ness to zero in on norm violations quickly. I can’t claim the Wikipedia editor community is perfect, but the information output of the system is generally accepted as truth, and at least there is a written ethical code. Twitter, in contrast, has no such code for users or the content they create.

A decentralized network which affords completely unchecked speech will attract that which doesn’t want to be checked: hatred and lies. If we require moderation in some form to maintain a healthy network, then the design needs to take into account the incentive structure created by the moderation regime.

I’m reminded of the central thesis of The Narrow Corridor, that a liberal society needs to walk the narrow corridor between unchecked state (centralized) control, and oppressive (decentralized) norms. If the network has a moderation mechanism, we need a design to watch the watchers. If the network has no moderation, we need a design to encourage good behavior. The Narrow Corridor calls maintaining this balance the “Red Queen” effect, and it isn’t the case that systems automatically balance themselves.

Slow Things Down

I’m generally not a proponent of moderating speech online except in the most egregious situations, but I am a proponent of putting in place circuit breakers for unwanted behavior.

A very good Twitter feature is the ability to limit who can reply to a tweet, but there are other possibilities. Getting “ratioed” is a useful heuristic for when a conversation has gone bad, so Twitter could automatically slow down replies, or prompt the author to close the thread. Another signal could be sentiment; Google’s Perspective API, for instance, has shown that it’s possible to build automated systems to detect abuse. Going back to the idea of assigning reputation to users, imagine a centralized service that assigns its own proprietary reputation tokens to users, who could spend them to get higher placement in user-facing apps that opt in to the reputation service. The Twitter app might become a Twitter-reputation-compliant user interface atop an unmoderated mass of content. Note, however, that such systems need to be fluid in order to keep up with not just changing social norms, but the evolution of the language people use to violate norms. Uttering “all lives matter” ten years ago would not have provoked the same kind of political reaction it does now, and a machine would have to understand that.
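
A sketch of what such a circuit breaker might look like in code. The thresholds are arbitrary, and `toxicity_score` stands in for whatever abuse classifier a platform trusts (Perspective-style or otherwise); nothing here is a real API:

```python
RATIO_THRESHOLD = 3.0       # hypothetical: three times more replies than likes
TOXICITY_THRESHOLD = 0.8    # hypothetical: classifier score above which we act
SLOWDOWN_SECONDS = 600      # hold new replies in a queue for ten minutes

def breaker_tripped(thread, toxicity_score):
    """Heuristics for a conversation gone bad: ratioed, or recently toxic."""
    ratioed = thread.reply_count > RATIO_THRESHOLD * max(thread.like_count, 1)
    toxic = toxicity_score(thread.recent_replies) > TOXICITY_THRESHOLD
    return ratioed or toxic

def apply_circuit_breaker(thread, toxicity_score):
    """Slow the thread rather than remove it: delay replies and nudge the author."""
    if breaker_tripped(thread, toxicity_score):
        thread.reply_delay_seconds = SLOWDOWN_SECONDS
        thread.prompt_author_to_close = True
```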

The speed and structure of how bad content spreads may be detectable as well. Researchers in my lab group discovered that fake news spreads with a different speed and depth than fact-checked news, so I hold open the possibility that an algorithm with a sufficiently complete view of the network might be able to identify toxicity through structural characteristics. Speculatively, I think that ideas that are strongly held in tight social clusters, and are repeatedly rejected by other clusters, are most likely to be false.

Wikipedia also implements a circuit breaker with the Three Revert Rule, which says that an editor must not perform more than three reverts on a single page within a 24-hour period.
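
The rule itself is simple enough to state as a sliding-window check; a small sketch, with names that are mine rather than MediaWiki’s:

```python
from datetime import datetime, timedelta, timezone

MAX_REVERTS = 3
WINDOW = timedelta(hours=24)

def violates_three_revert_rule(revert_times, now=None):
    """True if an editor has already made three reverts on a page in the last 24 hours."""
    now = now or datetime.now(timezone.utc)
    recent = [t for t in revert_times if now - t <= WINDOW]
    return len(recent) >= MAX_REVERTS
```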

Research has also shown that removal of posts results in a temporary silencing of speech as the poster is less likely to post again for a period following the moderation event. Enforcing a short waiting period also tends to reduce the likelihood of a toxic post. Similarly, asking the user to pause and think reduces the likelihood of spreading misinformation. That the “citation needed” tag is the most common on Wikipedia suggests that there is a “stop and think” culture that promotes good information over bad in a deep way.

Ironically, much of cryptocurrency work centers on making transaction volume more efficient; consider how much hand-wringing there is over Bitcoin’s transaction rate versus Visa’s. What I’ve described above, however, is slowness as a feature rather than a bug. We could imagine a system dynamically limiting the overall volume of speech. This could happen by message volume, per-user volume, or, speculatively, subject volume with a bit of NLP magic. I’m putting this out as an idea to be considered, not necessarily as a good idea…
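
A token bucket is one way to express slowness as a feature: posts spend tokens and the bucket refills slowly, so speech above some rate simply has to wait. The capacities below are arbitrary, and a per-subject bucket would additionally need that bit of NLP magic to classify posts by topic:

```python
import time

class SpeechBudget:
    """A token bucket: each post spends tokens; tokens refill at a fixed rate."""

    def __init__(self, capacity, refill_per_second):
        self.capacity = capacity
        self.refill_per_second = refill_per_second
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()

    def try_post(self, cost=1.0):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.refill_per_second)
        self.last_refill = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False   # over budget: the post has to wait

# One bucket for the whole network, one per user, or one per detected topic.
network_budget = SpeechBudget(capacity=10_000, refill_per_second=50)
per_user_budget = SpeechBudget(capacity=20, refill_per_second=0.01)
```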

Decentralization Doesn’t Save You From Centralized Infrastructure

Finally, we should consider infrastructure when designing the network, as the eternal tension between freedom and security lies within.

First, let’s break decentralized networks into two poles: distributed and federated. A distributed network is one with zero points of control. I don’t believe the world has seen such a system at scale, except possibly Bitcoin. That said, Bitcoin relies on centralized implementations whose operation is controlled by a handful of developers. In fact, a contrarian stance might be this: the degree of decentralization is a function of the number of people required to abuse the network. By this measure, Bitcoin may be less decentralized than Facebook. Inserting poisoned code into Bitcoin, or mounting a 51% attack, might require fooling fewer people than getting past Facebook’s industrial system, with its numerous controls, thousands of paid full-time eyes on code, and legal remedies against bad actors.

Second, let’s define a federated system as one in which there are many small centralized networks that loosely coordinate with each other. The most widespread federated social networks are email and the Fediverse, of which Mastodon is the most widespread implementation.

Email, held up as a successful example of a decentralized protocol, is highly centralized within a small set of platforms. In fact, an analysis of your email would likely show that, even if you don’t use Gmail, Google has the majority of your communications because all your friends use Gmail. That said, there are multiple widely used open source implementations of email servers, in stark contrast to Bitcoin and Mastodon, and — this is key — it’s very easy to run your own server. Despite this fact, the vast majority of people don’t run their own server, which doesn’t bode well for more complex software.

While Mastodon forks exist, in practice the majority of Mastodon users use a small set of Mastodon servers running unforked code, almost all those servers use a small set of infrastructure providers, and all those infrastructure providers play well with large governments. Infrastructure providers executed a denial of service against Parler shortly after Trump’s account suspension, all citing the network’s inability to keep hate speech off the system. In 2019, Gab faced a similar denial of service and rebuilt its system on Mastodon in an attempt to shelter within Mastodon clients’ existing app store approvals. The threat of being removed from the centrally controlled app stores convinced Mastodon client developers to block access to the Gab server. Mastodon users who don’t agree with this kind of action can move to an alternative client implementation, so it’s arguable that the Mastodon developers are not limiting speech with unreasonable power. However, the lower down the technology stack the moderating action happens, the more stifling the effect on speech. If ISPs decided to stop serving traffic to addresses on a government-provided blocklist, it would be entirely possible to wipe even Google off the Internet. The IP protocol itself is a kind of centralized point of failure: China has no qualms about blocking access to any IP address hosting counterspeech.

Finally, suppose we were able to design a fully uncensorable decentralized network. How do we get all the bad stuff off? For lack of a better term, let’s define “information abuse” as the category of activities such as pedophilia, doxxing, trade of terrible drugs, misinformation, and money laundering. Here, I think the differences in world outlook are expansive. I personally believe, maybe due to my privileged position here in the US, that the cases of information abuse far outnumber the cases of unjustified censorship. Hypothetically, if we were to design an uncensorable network on which private information such as social security numbers, home addresses, or revenge porn could be permanently published with no legal recourse, then we should also ethically make this network very hard to use. Decentralized network researchers should consider this the equivalent of nuclear or bio weapon research, with clearly stated ethical rules or strong cultural norms, neither of which I’m sure is in place. At least in the case of nukes, securing the materials is hard, whereas publishing code is easy. The easier the uncensorable network is to use, the faster the long-run probability of catastrophic failure goes to one.

Conclusion

In late 2019, Twitter launched the Bluesky project, an independently run organization to explore decentralized protocols. After an initial flurry of activity, the project was largely dormant until recently, when a group of loosely associated (decentralized!) people in the space published a substantive and very good review of the characteristics of the most widely known technologies approximating Bluesky’s goals.

If Bluesky remains focused on the protocols of distribution, and does not consider the systems of moderation, they will end up solving the wrong problem.

It’s a shame because Twitter may be the only organization that can pull off the launch of a decentralized network. I’ve attempted in this essay to lay out a few areas that require research and experimentation for a decentralized network to thrive: interaction design, reputation systems, normative corridor walking, and decentralized resiliency. Twitter’s powerful advantage over a competing effort is that they can port a significant subset of users into an experiment. If they’re willing to do that and no other platform is, Twitter is in a position to explore the borderlands of the (small “l”) liberal and safe social networks we want to use. Without a solid understanding of what happens to a network at scale, no design will survive first contact with the trolls.

Readings that influenced this essay:

Trump’s account suspension, and subsequent Parler news:

Mastodon, federation, and protocols:

Risk, epistemic humility, and incentive structures:

  • Open until dangerous: gene drives and the case for reforming research to reduce global catastrophic risk, Kevin Esvelt. A proposal to change how research is done with pre-registration and changes to incentive structure when dealing with potentially exponentially dangerous technology. Decentralized network research needs a corresponding culture shift as we learn more about how misinformation and toxicity spread, and the effect on society. https://www.eaglobal.org/talks/open-until-dangerous-gene-drive-and-the-case-for-reforming-research/
  • The Narrow Corridor, Acemoglu and Robinson. An expansive historical examination of the balance between strong authoritarianism and oppressive unwritten norms. It’s not clear whether leaving a network designed to be 100% agnostic to the market would avoid the Cage of Norms.
  • The Battle Inside Signal, Casey Newton. Though Signal is not technically a decentralized network, the ethos of the product is similar. Since the mass move from WhatsApp to Signal, the organization has been dealing with potential abuses but still resists design considerations. Note that because of centralization, Signal can fix behavior problems, but by design they have eliminated the possibility of using content or user identification to help fight abuse. https://www.theverge.com/22249391/signal-app-abuse-messaging-employees-violence-misinformation
  • Beyond Facebook Logic: Help Us Map Alternative Social Media! Ethan Zuckerman and Chand Rajendra-Nicolucci. https://medium.com/@EthanZ/beyond-facebook-logic-help-us-map-alternative-social-media-889b874b7aee

Wikipedia editor culture:

I am only starting to learn about Mirta Galesic’s research program on understanding how beliefs are formed in networks and counterspeech methods with a computational bent:

Justin Cheng, formerly of Jure Leskovec’s group at Stanford and now at Facebook, is doing wonderful work looking at network characteristics and how they contribute to desirable and unwanted effects.

Gordon Pennycook (University of Regina) and David Rand (MIT) are looking at how interaction nudges might discourage the spread of misinformation.

Despite all my talk about braking mechanisms, it’s still not clear whether they work.

My own position in the Lab for Social Machines, which is now the MIT Center for Constructive Communication, has deeply influenced my belief that we have to design networks around what content they promote.
