Bad Actors In Decentralized Apps

When our technology stack is decentralized, so is the power to moderate and safeguard it

Jonathan Boiser
Offline Camp
10 min read · Dec 9, 2019


Introduction

As more people, money, and attention move to the internet, those who own, build, and use online spaces are increasingly having to reckon with bad actors in their midst.

By the term bad actor, we refer to those who seek to do harm against others on various platforms and beyond, often covertly and usually through exploiting weak points in how they are designed or operated.

…and not this kind of “bad actor” (Photo: Barry, HBO)

The effects of bad actors online are not isolated to our computers or to small communities of tech enthusiasts; they are felt in the greater society. News headlines frequently feature the names of large technology companies that have either failed to stop bad actors or have acted in bad faith themselves; The New York Times alone has covered numerous recent examples.

Even in emerging technologies related to the so-called "Decentralized Web" (DWeb), bad actors have already made an appearance. In one major application of DWeb technology, cryptocurrency, we have already seen new and creative forms of financial crime in the form of theft, price manipulation, and fraud.

As a community that is interested in the potential of offline-first and decentralized tech, we cannot focus merely on its potential for good; we must also be aware of the harm bad actors can cause with this technology. This is especially true because, in many ways, the challenge of protecting these platforms from bad actors is greater than it is for traditional centralized platforms.

How an (Enlightened) Online Platform Might Handle Bad Actors

To simplify things, let's pretend we work for an enlightened tech company that builds a popular social media platform. We have the best intentions, the best people, and none of the real-life, usually financial, tensions that most companies face (limited resources, investors, the need to be profitable, ideological conflicts, and so on).

Basically, pretend we are the Social Media version of this meme.

If we had a problem with trolls, posters of abusive content, or otherwise unpleasant users, we could delete their accounts and ban them from the platform. Their data would disappear from all of our servers and nobody would ever have to see their content or interact with them again (at least until they re-join under a different identity). In practice, however, this often requires the expensive and sometimes traumatizing work of human moderation. Regardless, if we know who the bad users are, then we can remove them from our platform definitively.

If we had a problem with hackers trying to break into our systems covertly, then, as the owners and creators of our security infrastructure, we could in theory possess full knowledge of the "threat model" of potential hacks and be able to mitigate or eliminate all of the vulnerabilities we knew about. In practice, however, there are always "zero-day" vulnerabilities that nobody knows about yet, especially when parts of that infrastructure come from a third party like a hardware manufacturer or software developer.

And, as a last example, if we found out that someone was able to surveil our users through otherwise publicly available data on our platform (scraping, cookies, tracking software, etc.), we could remove this data from the public-facing interface or APIs, forcing our would-be spies to work with less information about our users. In practice, however, ad-supported platforms depend to some degree on tracking to improve the relevance of the ads we see around the internet, even outside the platforms themselves.

So, for a large class of bad actors, we have tools to hinder their malicious intentions because we control who comes in, what gets out, and everything else in between. In other words, because our technology stack is centralized, so is the power to control it.

As we will see, once that centralization is taken away, DWeb apps do not have the same kinds of strategies available to them. As creators of such a platform, we might have some control over the software that runs it, but much less control over who is using it, where it's being used, and what it is being used for.

How Decentralization Complicates Things

Now, let’s imagine that we are working on a DWeb social media platform. Without the benefit of centralization, how would we go about handling the same kinds of bad actors we considered in the last section?

How does one control the uncontrollable?

If we had a problem with trolls and other kinds of abusive users, we could not simply ban them by decree. At most, we could announce far and wide that a specific user should be avoided, but it’s not guaranteed that the message will reach all parts of the network.

Conceivably, a troll could get their content out on a large fraction of our network’s servers, making their content more likely to spread during data syncs (and more robust against removal from the entire network). In an extreme case, if a portion of the network were almost always offline (e.g. primarily relied on the “sneakernet”) then abusive content might never be erased from members of this sub-network since our ban announcement might never reach them.
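To make the difficulty concrete, here is a minimal sketch in Python (with made-up names like Peer, sync, and blocked, not any real DWeb protocol) of a gossip-style ban: a block announcement spreads only through actual syncs, so a peer that stays offline can keep the banned content and could later re-share it to anyone who hasn't yet heard about the block.

```python
import random

class Peer:
    """A toy peer that gossips a shared block list and content during syncs."""
    def __init__(self, name, online=True):
        self.name = name
        self.online = online
        self.blocked = set()   # author IDs this peer refuses to host
        self.content = {}      # author ID -> list of posts

    def sync(self, other):
        """Exchange block lists and content with another peer, if both are online."""
        if not (self.online and other.online):
            return
        # Gossip block lists in both directions, then purge content by blocked authors.
        merged = self.blocked | other.blocked
        self.blocked, other.blocked = set(merged), set(merged)
        for peer in (self, other):
            for banned in peer.blocked:
                peer.content.pop(banned, None)
        # Replicate remaining content in both directions.
        for src, dst in ((self, other), (other, self)):
            for author, posts in src.content.items():
                if author not in dst.blocked:
                    dst.content.setdefault(author, []).extend(
                        p for p in posts if p not in dst.content[author])

random.seed(1)
peers = [Peer(f"peer{i}") for i in range(5)]
peers[0].content["troll"] = ["abusive post"]  # the troll's content enters the network
for p in peers[1:]:
    peers[0].sync(p)                          # and replicates widely before anyone objects

peers[4].online = False                       # a "sneakernet" peer drops off the network
peers[1].blocked.add("troll")                 # now peer1 announces a block...
for _ in range(10):                           # ...which spreads only via actual syncs
    a, b = random.sample(peers, 2)
    a.sync(b)

print({p.name: ("troll" in p.content) for p in peers})
# Online peers that the announcement reached have purged the content, but the
# offline peer keeps it, and could re-share it to any peer that has not yet
# heard about the block.
```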

If we had a problem with hackers, we might be able to patch vulnerabilities in our software and aggressively push members of our network to update to the latest, most secure version of the platform. But what about those who never upgrade, leaving portions of the network permanently vulnerable? And beyond software, we'd have to consider users who are vulnerable to other means of hacking due to issues with their hardware, network, or other circumstances out of our control.
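One partial mitigation is to refuse to sync with peers running known-vulnerable versions. Here is a tiny hypothetical sketch of such a check (the version numbers and names are invented); it protects patched nodes from each other, but it cannot force an offline or abandoned node to upgrade.

```python
# Refuse to sync with peers running a protocol version older than the last
# security patch. MIN_SECURE_VERSION and accept_peer are illustrative names,
# not part of any real protocol.
MIN_SECURE_VERSION = (1, 4, 2)   # assumed first version containing the fix

def accept_peer(peer_version):
    """Accept a sync request only from peers at or above the patched version."""
    return tuple(peer_version) >= MIN_SECURE_VERSION

print(accept_peer((1, 5, 0)))   # True: patched peer, safe to sync with
print(accept_peer((1, 3, 9)))   # False: still vulnerable, so we decline
```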

As for surveillance, in a decentralized network quite a lot of data needs to be publicly shared just to make the network work. For example, peers need to constantly announce their presence in order to interact with each other. Even if we were able to support encrypted communication within the network, other peers could still observe that an exchange was happening and infer that something interesting (or suspicious) was going on.
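To illustrate, here is a rough sketch (not any real protocol, and the function names are invented) of what a third peer could record about an encrypted exchange: the payload stays opaque, but who is talking to whom, when, and how much remains visible.

```python
import time

def announce_presence(peer_id):
    """Peers broadcast presence so others can find them; this is public by design."""
    return {"type": "presence", "peer": peer_id, "timestamp": time.time()}

def observed_envelope(sender, recipient, ciphertext):
    """What an eavesdropping peer can record about an encrypted message."""
    return {
        "from": sender,                # the metadata is visible...
        "to": recipient,
        "bytes": len(ciphertext),
        "timestamp": time.time(),
        # ...even though the ciphertext itself cannot be read.
    }

print(announce_presence("alice"))
print(observed_envelope("alice", "bob", b"\x8f\x1c\x9a..."))  # fake ciphertext
```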

In summary, the lack of centralization leads to a lack of control over what happens on our network. In some ways, this is a selling point of decentralized platforms: they promise spaces free from the problems of centralized networks, like censorship, inequality, and the influence of money. However, such freedoms bring their own troubles and, paradoxically, fewer tools to address them.

Unique challenges

Due to the unique character of the technology, decentralized apps face challenges of their own that centralized platforms never need to consider.

For example, nodes in a decentralized network like IPFS or Bitcoin are rarely identical. They might be running on different kinds of hardware, running different versions of the software, or be accidentally or intentionally misconfigured, making the network as a whole work sub-optimally (or leaving it vulnerable to the hacks discussed earlier).

In general, if the network protocol is well-designed, it should be robust against this kind of variability, especially if the aberrant behavior is purely random and a majority of nodes are doing the "right thing." However, as portrayed in a recent episode of the HBO show Silicon Valley, some networks are, by design, vulnerable to unexpected manipulation when the aberrant behavior isn't a random minority but rather a majority of nodes acting together in a so-called "51% attack."
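For a rough sense of why a coordinated majority is so dangerous, here is a toy simulation (not any real consensus algorithm): if the network simply accepts whatever most nodes report, a scattered 10% of bad nodes changes nothing, while a coordinated 51% can rewrite the record.

```python
import random
from collections import Counter

def network_consensus(num_nodes, colluding_fraction):
    """Toy model: every node reports a version of history; the majority wins."""
    votes = []
    for _ in range(num_nodes):
        if random.random() < colluding_fraction:
            votes.append("attacker's version of history")
        else:
            votes.append("honest version of history")
    winner, _ = Counter(votes).most_common(1)[0]
    return winner

random.seed(0)
print(network_consensus(1000, 0.10))  # scattered bad nodes: the honest majority wins
print(network_consensus(1000, 0.51))  # coordinated majority: the attack succeeds
```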

When it comes to networks of humans, decentralized social networks like Fritter and Secure Scuttlebutt appear to be more susceptible to the problems of political and social polarization seen on sites like Facebook and Twitter.

Social networks can form sub-networks that never interact with one another (image from https://www.researchgate.net/publication/322971747_MIS2_Misinformation_and_Misbehavior_Mining_on_the_Web)

Decentralized social networks are composed of many individual sub-networks that may connect with each other occasionally (or never). Any Twitter user can read the feed of any other Twitter user as long as they know that user's username. On a decentralized social network, by contrast, you not only need to know the identity of the target user (i.e. their username or hash), but you must also be connected to them, either directly (e.g. as a "friend") or through a small number of steps (e.g. as a "friend of a friend").
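In code, that connectivity requirement might look something like the following sketch, where a feed is only reachable if a breadth-first search over the follow graph finds the target within a small number of hops (the graph, names, and hop limit are made up for illustration).

```python
from collections import deque

# A made-up follow graph: who follows whom, from "your" point of view.
follows = {
    "you":   ["alice", "bob"],
    "alice": ["carol"],
    "bob":   [],
    "carol": ["dave"],
    "dave":  [],
    "erin":  ["dave"],     # erin follows dave, but no follow path leads from you to erin
}

def hops_to(start, target, max_hops=2):
    """Return the number of follow-hops from start to target, or None if the
    target is unreachable within max_hops."""
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        user, depth = queue.popleft()
        if user == target:
            return depth
        if depth == max_hops:
            continue
        for friend in follows.get(user, []):
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, depth + 1))
    return None

print(hops_to("you", "carol"))   # 2: a "friend of a friend", so her feed can reach you
print(hops_to("you", "dave"))    # None: three hops away, outside your sub-network
print(hops_to("you", "erin"))    # None: knowing her ID isn't enough without a path
```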

Due to this dynamic, DWeb social sub-networks may tend to resemble in-person social networks, which naturally form along lines of race, politics, social class, and other dimensions of identity. We should also mention that the current users of these platforms skew heavily towards the technically literate, and (lack of) diversity has been a persistent issue in tech.

However, these kinds of tight-knit, like-minded communities are also valued for their effectiveness in moderating themselves and maintaining a positive and safe space for their members. In the end, it may be our responsibility as individual citizens and humans to break through the polarization and isolationism arguably created by technology in the first place.

How decentralized platforms try to defend against bad actors

Frankly, since many of the early adopters of the most popular DWeb applications tend to be tech-literate and security-conscious, the security of these apps depends a great deal on the knowledge and vigilance of individual users.

For example, the writers of the Secure Scuttlebutt FAQ page provide some suggestions for protecting your privacy, like using a VPN or Tor and taking precautions to make sure your private key is never leaked to the outside world. As for moderating against abusive content, in Scuttlebutt, this needs to be done by the community, and is mostly accomplished by large numbers of users blocking bad actors and flagging inappropriate content (“community as immunity”).
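Here is a minimal sketch of what that might look like from one client's point of view, using a purely hypothetical data model rather than Scuttlebutt's actual implementation: each client hides an author locally once it sees its own block or enough flags from peers it trusts.

```python
# Hypothetical "community as immunity" moderation: there is no central
# moderator, so each client applies its own rule over blocks and flags it has
# seen. FLAG_THRESHOLD and the data shapes below are invented for illustration.
FLAG_THRESHOLD = 3   # hide an author once this many trusted peers flag them

def visible_posts(posts, my_blocks, flags_from_trusted_peers):
    """Filter a feed using my own blocks plus flags gossiped by peers I trust."""
    hidden = set(my_blocks)
    for author, flaggers in flags_from_trusted_peers.items():
        if len(flaggers) >= FLAG_THRESHOLD:
            hidden.add(author)
    return [p for p in posts if p["author"] not in hidden]

feed = [
    {"author": "carol", "text": "hello from the mesh"},
    {"author": "troll", "text": "abusive content"},
]
flags = {"troll": {"alice", "bob", "dave"}}   # three trusted peers flagged "troll"

print(visible_posts(feed, my_blocks=set(), flags_from_trusted_peers=flags))
# Only carol's post remains. The troll still exists on the network, but this
# client (and every other client applying the same rule) no longer shows them.
```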

In decentralized networks, a similar dynamic is present: the network protocol (or its rules) and individual nodes need to provide incentives to other members in hopes of promoting their own ideals of how the network should run.

For example, distributed file sharing networks like IPFS or BitTorrent may enforce differential treatment of clients that contribute to the overall health of the network. Some networks, particularly the ones that undergird cryptocurrency, actually try to provide monetary incentives to promote the efficient working of the network.
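The flavor of such an incentive can be sketched in a few lines, in the spirit of BitTorrent's tit-for-tat choking (the policy and numbers below are invented, not the real BitTorrent or IPFS algorithms): peers that have shared generously get served first, while persistent free-riders get throttled.

```python
def upload_priority(peer_stats):
    """Rank peers by how much they have shared relative to what they have taken."""
    shared = peer_stats["bytes_uploaded_to_us"]
    taken = max(peer_stats["bytes_downloaded_from_us"], 1)  # avoid division by zero
    return shared / taken

# Invented traffic statistics for two peers.
peers = {
    "generous": {"bytes_uploaded_to_us": 900, "bytes_downloaded_from_us": 300},
    "leech":    {"bytes_uploaded_to_us": 10,  "bytes_downloaded_from_us": 5000},
}

# Serve the most cooperative peers first; the free-rider ends up at the back of the line.
for name in sorted(peers, key=lambda n: upload_priority(peers[n]), reverse=True):
    print(name, round(upload_priority(peers[name]), 3))
```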

Conclusion and Personal Thoughts

I believe that, like most problems, the ones we've considered will require solutions that are both social and technological.

For example, when the security of our apps depends so heavily on individual vigilance and "community as immunity," we can end up in a place where only those with the resources to protect themselves are truly safe and able to reap the full value of these apps (see the article "Decentralization is not enough," based on a past Offline Camp passion talk by Nolan Lawson). Not only do we need to make sure that every member of our communities has a minimum of tech and security literacy (a social solution), but we also need to make sure that the underlying engineering of our platforms does not make this a hard requirement for the average user to be safe (a technological solution).

Illustration from “Decentralization is not Enough” by Nolan Lawson

I’m also curious what resources we can draw on outside of technology, from philosophy, history, religion, ecology, economics, etc., to guide how we design platforms that are robust against bad actors and promote “growth” (broadly defined) within the network. I’m particularly interested in intellectual frameworks that are also grounded in pragmatism, which recognize the existence of bad actors and the things that motivate them — rather than assuming a kind of utopian idealism as a starting point, where everybody is unified around the same good intentions. The Center For Humane Technology website says something to the same effect: “Humane technology requires that we understand our most vulnerable human instincts so we can design compassionately to protect them from abuse.”

Finally, I am reminded of a quote attributed to Cornel West: “Justice is what love looks like in public.” If we are to create just and safe spaces free of bad actors, I think love will need to be central to our approach. Because people will not abuse or deface what they love. Nor will they leave those they love vulnerable to harm. Technology does not emerge sui generis, but is created and used by humans that love. Technology thus grows to be a reflection of what its creators and users love, for better or worse.

Editor’s Note: This article summarizes a discussion held at Offline Camp, a unique tech retreat that brings together the Offline First community. To join us around the campfire, sign up for updates and cast your vote on where we should host future editions of Offline Camp. Learn more about educational resources on Offline First and ways to join the conversation.
