What Cryptography Can Tell Us About Trust on Social Media

Jwalin Nilesh Joshi
The Misinformation Project
8 min read · Feb 24, 2021
Edwin Forbes “Pickets Trading Between the Lines”

During the American Civil War, soldiers from the North and the Confederacy would trade newspapers. It was difficult to receive the latest news on the front, and these newspapers contained valuable information about the other side’s internal politics and military outcomes.

There were no formal rules to this exchange; both sides trusted each other enough to swap information. If we assume that everyone involved was a rational actor, we can infer a set of rules that governed the exchange:

  1. Gaining information was more important than fooling the enemy.
  2. There were consequences to breaking trust. Generals would lose a valuable source of information, and that outweighed the benefits of lying.
  3. Newspapers were reliable sources of information. In a pre-Internet world, information was tightly controlled and curated. Newspapers and books were the only places to get accurate information.
  4. There was some form of verification. If a Southern newspaper lied about a defeat, Northern generals would eventually learn about it via telegram.

Meanwhile, it’s 2021 and reliable sources of information are harder than ever to find. The democratization of information production enabled by social media and the internet has eroded trust; 23% of Americans say that they have shared fake news stories on social media. It’s more important than ever to build trust on the internet. When we look at the conditions that made the Civil War generals’ exchange work, it is clear that they no longer apply. Information no longer holds the same value; it has become a commodity: less scarce, less likely to be authentic, and more difficult to verify. Trust is no longer two-way. If you break trust with one person on social media, millions of other people can take their place. A modern world requires modern approaches to trust.

Security researchers faced a different but related problem. In 1977, researchers at MIT published RSA, a form of asymmetric key cryptography: a method of communication that allows users to pass sensitive information over insecure channels by exchanging public keys. If you have someone’s public key, RSA allows you to encrypt a message specifically for them, and the only way to decrypt that message is to hold the intended recipient’s private key. Having someone’s public key lets you build a lock that only they can open.

Public key encryption
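To make that concrete, here is a minimal sketch of public-key encryption using Python’s `cryptography` package; the message, key size, and padding choices are just illustrative.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# The recipient generates a key pair; the public half can be shared openly.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# The sender encrypts using only the recipient's public key.
ciphertext = public_key.encrypt(
    b"troop movements at dawn",
    padding.OAEP(
        mgf=padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(),
        label=None,
    ),
)

# Only the holder of the matching private key can open the "lock".
plaintext = private_key.decrypt(
    ciphertext,
    padding.OAEP(
        mgf=padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(),
        label=None,
    ),
)
assert plaintext == b"troop movements at dawn"
```

Notice that the sketch quietly assumes the sender already has the right public key.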

However, this scheme requires that both parties be able to share their public keys over a secure channel. If the exchange of public keys takes place on an insecure channel, an attacker can intercept it and substitute public keys that the attacker owns. The attacker can then decrypt every message sent on the channel and send messages masquerading as either party. This is very bad.
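A toy sketch of that interception, reusing the setup above; the parties and the "channel" are invented for illustration.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

bob_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
eve_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# On an unauthenticated channel, Eve swaps in her own public key.
key_alice_receives = eve_key.public_key()  # Alice believes this is Bob's

ciphertext = key_alice_receives.encrypt(b"meet at the bridge", oaep)

# Eve reads the message, re-encrypts it with Bob's real key, and forwards
# it on, so neither Alice nor Bob notices the interception.
intercepted = eve_key.decrypt(ciphertext, oaep)
forwarded = bob_key.public_key().encrypt(intercepted, oaep)
```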

This problem often required parties to meet in person to exchange public keys, which was not feasible for sensitive topics or in cases where anonymity was key. Even when it was feasible, it was highly inefficient. A few approaches were developed to solve it. The first was to trust a Certificate Authority (CA). The CA issues certificates verifying that a party holds a particular public key, and these certificates are made publicly available. If you trust a CA, you trust that anyone the CA has issued a certificate to really does hold the public key on that certificate, and you no longer need to exchange keys with them directly. These certificates are tamper-proof and non-transferable by design.
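A hypothetical toy certificate might look like the sketch below. Real certificates (X.509) carry far more metadata, but the trust step is the same: the CA signs the binding between a name and a public key, and anyone who already trusts the CA can verify that binding without ever meeting the other party.

```python
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# The CA's key pair; its public key is assumed to be distributed out of band
# (e.g. bundled with browsers and operating systems).
ca_key = ed25519.Ed25519PrivateKey.generate()
ca_public = ca_key.public_key()

# Alice asks the CA to certify her public key.
alice_key = ed25519.Ed25519PrivateKey.generate()
alice_public_bytes = alice_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

certificate = b"alice:" + alice_public_bytes
signature = ca_key.sign(certificate)  # issued once by the CA

# Anyone holding ca_public can now check the binding offline;
# verify() raises InvalidSignature if the certificate was tampered with.
ca_public.verify(signature, certificate)
```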

Another solution was to use a Web-of-Trust model. Instead of relying on a centralized CA to create and distribute certificates, users designate people they trust and sign certificates for them; everyone is a CA. The idea is that if you trust a certain user, you should trust everyone that user trusts, and since this extends transitively, you can start trusting friends of friends of friends. If you encounter someone you do not directly know, you can check the degrees of separation between yourself and their certificate and decide whether or not to trust it. This model is prone to failure, most notably because trust isn’t transitive beyond a couple of degrees of separation. In the CA model, you make one leap of faith: you only need to believe that the CA is competent and issues certificates properly, and since the CA has economic incentives to do exactly that, this is not a big jump. In a Web of Trust, you need to trust everyone in your extended network. If someone a couple of degrees of separation away from you lets a bad actor in, you trust not only the bad actor but everyone in the bad actor’s network as well.
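A minimal sketch of how a Web-of-Trust lookup could work; the names, the signature graph, and the depth cutoff are all invented for illustration.

```python
from collections import deque

# Edges mean "A has signed B's certificate".
signed = {
    "you":   ["alice", "bob"],
    "alice": ["carol"],
    "carol": ["dave"],
    "bob":   [],
    "dave":  ["mallory"],  # one careless signature deep in the graph
}

def degrees_of_separation(start, target):
    """Breadth-first search over the signature graph."""
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        node, depth = queue.popleft()
        if node == target:
            return depth
        for neighbor in signed.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, depth + 1))
    return None

MAX_TRUSTED_DEPTH = 2  # trust decays quickly with distance

def is_trusted(target):
    depth = degrees_of_separation("you", target)
    return depth is not None and depth <= MAX_TRUSTED_DEPTH

print(is_trusted("carol"))    # True: a friend of a friend
print(is_trusted("mallory"))  # False: too many hops for trust to survive
```

The cutoff is the whole game: raise it far enough and mallory, and everyone mallory has signed for, falls inside your trust boundary.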

Trying to apply these models to misinformation provides some interesting insights. A CA model would work by issuing certificates to trusted users, who would share information that is not fake. However, a couple of challenges quickly arise. Anyone acting as a CA would have to be unbiased in what they consider fake news, which is probably not going to happen. Even if we found an unbiased CA, there is still the problem of scale. CAs for public key infrastructure have quick, semi-automated methods to check and verify identity; being a CA for information requires a thorough check of a user’s posting and sharing history, and would most likely be done by a human. Furthermore, it would be difficult for the CA to detect that a user has become untrustworthy later on and revoke their certificate. That would require constant monitoring of all certificate grantees, which is not feasible given the size of social media. Moreover, even if an unbiased CA that could scale existed, it is unlikely that social media users would grant it any authority. Pandora’s box has been opened, and now that we are all distributors of content, we will not allow information production to be tightly controlled by a central authority again.

This analysis shows that we need a decentralized solution that works at massive scale. The Web-of-Trust model is slightly more promising in this regard. It’s already used implicitly by social media and it scales: you often see articles and posts that friends comment on and share, and there is some degree of trust between you and your friends. Within one or two degrees of separation, there is no need to make trust explicit; users generally know who is trustworthy in their own social network. The real issue arises when we want to judge how trustworthy an unknown entity is, and if we try to explicitly grant trust to unknown entities, the old Web-of-Trust problems return. Web-of-Trust is decentralized and scalable, but the incentives of individuals do not align with the incentives of the network. This is the final piece of the puzzle. For trust to be built, there must be a win-win outcome between individuals in the network; the game cannot be zero-sum. If individuals act in the interests of the whole, they must be rewarded. If they act against the interests of the whole, they must be penalized.

This incentive structure is reminiscent of blockchain infrastructure. In the vanilla Proof-of-Work blockchain behind Bitcoin, every participant keeps a ledger of all transactions. A miner’s ledger must line up with the chain accepted by the majority (at least 51%) of the network, or the blocks they add will be rejected. Miners receive bitcoin for adding valid blocks to the chain, so there is a clear economic incentive to stay honest. Honesty is a win-win, so the network can operate under the assumption that trust can be built.
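For intuition, the “work” in Proof-of-Work is a brute-force hash puzzle. A minimal sketch, with an invented block format and a toy difficulty, might look like this.

```python
import hashlib

DIFFICULTY = 20  # leading zero bits required; real Bitcoin is far harder

def mine(block_data: bytes) -> int:
    """Search for a nonce whose hash falls below the difficulty target."""
    nonce = 0
    target = 1 << (256 - DIFFICULTY)
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # proof that real computation was spent on this block
        nonce += 1

nonce = mine(b"prev_hash|transactions|timestamp")
print(f"valid nonce found: {nonce}")
```

Cheating is expensive precisely because there is no shortcut to a valid nonce; the only way to produce blocks the rest of the network will accept is to spend the work.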

There are many other consensus mechanisms used by different implementations of the blockchain. The one that best applies to our social media problem is Delegated Proof-of-Stake (DPoS). When used for cryptocurrency, DPoS requires users to cast votes for delegates, or “witnesses”. Delegates are individuals the network has deemed trustworthy, and it is their job to validate the legitimacy of new transactions. Users’ votes are weighted by their stake in the network, so richer users’ votes count more. The rewards that come from block production (verifying the chain and receiving cryptocurrency as a result) are distributed in accordance with each user’s stake in the delegate they are backing. If a delegate acts dishonestly and attempts to add a malicious block to the chain, the other delegates will reject that block, causing the dishonest delegate to lose out on their potential reward. Their backers will see this and move to another delegate who is more trustworthy. It’s a bit like a corporate boardroom: dividends are distributed to shareholders in proportion to their stake in the company, and bad executives are voted out. The board members are incentivized to pick good executives because if the company does well, they are more likely to receive dividends.
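A stripped-down sketch of the DPoS mechanics described above; the stakes, votes, and delegate names are invented, and real implementations elect a whole panel of delegates rather than a single producer.

```python
# Stake-weighted votes elect a delegate, and block rewards flow back to backers.
stakes = {"ana": 50, "ben": 30, "chris": 20}
votes  = {"ana": "delegate_x", "ben": "delegate_x", "chris": "delegate_y"}

# Tally stake-weighted votes for each delegate.
weight = {}
for user, delegate in votes.items():
    weight[delegate] = weight.get(delegate, 0) + stakes[user]

# The highest-weighted delegate validates the next block...
producer = max(weight, key=weight.get)

# ...and the block reward is split among its backers pro rata.
BLOCK_REWARD = 10.0
backers = [u for u, d in votes.items() if d == producer]
total = sum(stakes[u] for u in backers)
dividends = {u: BLOCK_REWARD * stakes[u] / total for u in backers}
print(producer, dividends)  # delegate_x {'ana': 6.25, 'ben': 3.75}
```

If delegate_x produced a malicious block and the other delegates rejected it, there would be no reward to split, and ana and ben would have every reason to move their votes elsewhere.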

In our social media use-case, the currency is trust and the reward is being heard. The blockchain model is applied as follows. Users are given a certain amount of trust tokens when they join the platform, and they can use these tokens to endorse delegates in the network. When a delegate shares information, that information is verified by the users who interact with the post. If the information is deemed trustworthy in the aggregate, the delegate receives trust tokens, which are then distributed to their backers like dividends. Platforms can amplify the voices of users with higher trust values, while those with lower trust values are less likely to trend. The main idea here is that users will want to endorse delegates they know the network will deem trustworthy, because they themselves want to be heard on the platform. There is no incentive to publish content on a platform if your posts do not garner attention, so untrustworthy actors will eventually leave.
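None of this is pinned down anywhere yet, so the following is only a sketch of how a platform might account for trust tokens; every class, function, field, and number is an assumption rather than a specification.

```python
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    trust: float = 10.0  # tokens granted on joining the platform
    endorsements: dict = field(default_factory=dict)  # delegate name -> staked tokens

def endorse(user: User, delegate: User, amount: float) -> None:
    """Stake some of your trust tokens behind a delegate."""
    amount = min(amount, user.trust)
    user.trust -= amount
    user.endorsements[delegate.name] = user.endorsements.get(delegate.name, 0) + amount

def settle_post(delegate: User, backers: list, verdicts: list, reward: float = 5.0) -> None:
    """If interacting users deem a post trustworthy in aggregate, mint trust
    for the delegate and pay backers a stake-weighted dividend."""
    if sum(verdicts) <= len(verdicts) / 2:
        return  # post judged untrustworthy: no reward this round
    delegate.trust += reward
    staked = {b.name: b.endorsements.get(delegate.name, 0) for b in backers}
    total = sum(staked.values()) or 1.0
    for b in backers:
        b.trust += reward * staked[b.name] / total  # dividend-like payout

def amplification(user: User) -> float:
    """Platforms can weight reach by accumulated trust."""
    return user.trust

# Example round: two users back a delegate, and a mostly-positive verdict pays out.
alice, bob, wire = User("alice"), User("bob"), User("wire_service")
endorse(alice, wire, 5.0)
endorse(bob, wire, 2.0)
settle_post(wire, backers=[alice, bob], verdicts=[True, True, False])
print(amplification(wire), alice.trust, bob.trust)
```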

Quantifying trust in this way may pass the game-theory test, where we assume rationality and look only at incentives, but we have yet to consider human factors. For trust tokens to work, there has to be buy-in from users. They have to be convinced that the Delegated Proof-of-Stake mechanism is an efficient method of identifying scrupulous actors. In the case where a source of information they enjoy, say Fox News, is deemed less credible, they have to believe that this is not a liberal conspiracy but rather the network efficiently allocating social capital. Moreover, diehard followers are unlikely to be deterred by a source losing trust. These scenarios all seem damning to our trust-token model, but they miss a subtle point. We don’t care about those who have already been misled by bad actors; changing their minds is a near-impossible task. The model serves to protect those who have yet to be influenced, whose naivety makes them susceptible to misinformation. If the network can prevent middle-of-the-road users from being misled, then it does its job.

Clearly, the details of this implementation need to be ironed out in a white paper, but the inherent trust mechanism holds value. In a world where information production is no longer centralized, it is important that we take democratic approaches to information distribution and verification.
