Mapping Sexual Assault

Deploying Additively Homomorphic Encryption With Peer To Peer Networking To Confidentially Discover Links Between Perpetrators and Survivors

praxis
praxis journal

--

Abstract: The cryptosystem proposed in this paper is designed to attack the impunity currently enjoyed by most perpetrators of sexual assault, who operate in an environment in which most assaults are not reported, and most judicial systems are not interested in prosecuting rape as rigorously as other violent crimes. This system is designed to pseudonymously connect survivors who share a common perpetrator with one another. By doing so we hope to introduce disruptive conditions that would lead to greater accountability for the perpetrators of sexual assault.

Objective: To pseudonymously connect survivors who share the same perpetrator, so that those survivors can then arrange a safe face to face meeting with one another, without revealing the real names of the survivors or perpetrators to anyone.

Threat Model: The real identity of the reporting survivors must be kept confidential in order to prevent them from being exposed to retaliatory attacks. Since the reporting of perpetrators is pseudonymous, the real names of perpetrators must be kept confidential in order to reduce the incentive to make false reports.

The number of false sexual assault reports made in the real world is most likely extraordinarily low, due to the high social cost of reporting for the claimant and the limited likelihood of any real action being taken against the accused. However, because this reporting system would be pseudonymous, the social cost of reporting would be removed, and therefore the risk of false claims would be elevated.

Introduction: One of my friends, after reading how this system is designed to keep the real identities of perpetrators confidential, asked, “then, how do you get justice?” This question cuts to the heart of the problem: how do you get justice in an unjust system? How can you fight patriarchy in a patriarchal court system, in a patriarchal society? I believe that the beginning of the answer is to help people realize that they are not alone, to connect people who have had similar experiences so that they can foster a sense of solidarity and perhaps decide to take action, bolstered by the collective support of their fellow survivors.

The information that would make this possible, the names of survivors who share a common perpetrator, is essentially hidden in the network. Individuals often know the name of their perpetrator, but they do not know the names of the other people the perpetrator hurt, only he knows that. If we were able to observe all the names of the survivors and their perpetrators, we could map this information, but that would mean centralizing this information and revealing all the names to a trusted third party. What if there were a peer to peer network that could pseudonymously introduce survivors who shared a common perpetrator to one another? It is possible to build such a confidential and pseudonymous system with innovative cryptography.

Before outlining the model itself, I’d like to define some terms that are useful when talking about accountability. The system I propose in this paper is not an accountability system; it is designed to attack perpetrator impunity, not to establish as a fact whether or not a particular assault occurred or who was responsible. However, the relationship between impunity and accountability is important to establish, since that relationship helps explain how attacking impunity ultimately leads to greater accountability.

Anonymity: An actor who can maintain strong anonymity can act with impunity indefinitely.

Impunity: Impunity is the property of being able to commit the same harmful act repeatedly while evading discovery, accountability, or both.

Discovery: Discovery occurs when an individual is identified as responsible for some harmful act, and it has been demonstrated with dispositive probability that they were responsible.

Dispositive Probability: Dispositive Probability is the point at which the sum of discovered testimony and evidence indicating that a harmful act was likely committed is sufficiently strong (dispositive) to subject the person accused of committing it to an accountability process.

Although courts and lawyers sometimes refer to “proof” when discussing criminal law, there can be no scientific proof of any crime ever having been committed. Because crimes are historical events, they are not repeatable, and therefore not falsifiable. History, properly speaking, is outside of the scope of empirical science. Epistemologically, we can never verify whether some event has occurred; the best we can do is dispositive probability. What quantum of evidence constitutes the standard for dispositive probability having been met is a moving target: it is a function of the severity of consequences (which should correspond to the harmfulness of the original act in a proportional system) to which the person accused would be subjected. If the consequence is that they will be expelled from school, the standard for dispositive probability should be high. If the consequence is that they will receive a parking ticket for parking in the dean’s lot, well then we can deal with a much lower standard for dispositive probability. We can therefore say that the standard for what is “dispositive” is a function of the amount of economic work needed to conduct discovery in an epistemically uncertain environment, relative to the consequences for making an incorrect assumption about the past.

This epistemological uncertainty about the past is why having an adversarial justice system is so important; since we cannot definitely prove what happened, the best we can do is establish a fair forum in which opposing narratives can compete for plausibility.

Accountability: Accountability is a system designed to prevent impunity. Accountability systems can take several forms:

Restorative Justice: The restorative justice model assumes that the person who has committed an offense will confess when confronted, and that the victim of the offense is open to forgiving them. Restorative justice models don’t generally handle deceptive actors well, since they don’t have an investigatory body to discover documents or an adversarial forum to cross-examine witnesses and challenge evidence. For this reason, restorative justice usually works best in small, trusting communities where it would be hard to hide one’s actions.

In a restorative justice model, the social bonds between the person who confessed to the harmful act and the community are first repaired. Those bonds themselves then act to restrict the person’s future behavior, since if the person who has confessed commits that same harmful act again, that would result in the degradation or severance of the social bonds connecting them with the community. The psychological principle upon which restorative justice acts is that, by forgiving the person who violated the community’s trust, the community has paradoxically strengthened the confessed person’s bonds to the community, since by forgiving the person they have demonstrated the extent to which they value that individual, in spite of that person’s violation of the community’s trust. In this model, there are no violent sanctions, and the ultimate sanction is for the community to collectively cut all social ties with an offending individual.

This is generally only an effective model in communities where cultural values are uniform and where social bonds are mapped directly over links of economic interdependence, or, ideally, over kinship links. When both of these conditions are met, then there can be a consensus on group values through shared culture and members can be coerced via their economic dependence on the ecology of social connections within the network. If the community cannot reach consensus on what values it will enforce, then it cannot operationalize whatever non-violent coercive power it may have over its members. If being asked to leave the social community is not a sufficiently strong disincentive to breach the trust of the community, then the coercive power of such a non-violent and trust-based system is weak. This model has been successful historically, and is still used informally within families, small remote villages, within some left-radical political communities, and more formally among closed social communities such as the Amish, at universities (to enforce academic honesty), within Orthodox Jewish communities, and in the military (via dishonorable discharge).

Punitive, Deterrent, & Segregative Justice: Justice systems that operate on a punitive, deterrent, or segregative basis are the most commonly employed methods of social control, population management, and accountability today. Although these systems often exacerbate the very social problems that they claim to address, they can achieve the property of limited accountability. These systems make social actors’ attempts to achieve impunity costly by raising real or perceived costs to engaging in unsanctioned behavior. These punitive systems are designed to create deterrence, and are imagined to modify future behavior. Such systems also make impunity physically difficult by segregating individuals from the broader community within architectural systems of control, or by removing individuals from the human community entirely, by taking their life. The problems with systems such as this are numerous. At the most basic level, there is increasing evidence to indicate that they do not provide a meaningful deterrent. There is also ample evidence to show that punitive regimes fail to modify behavior; recidivism rates among populations subjected to prison are usually higher than those subjected to probation for the same offense. More fundamentally, the consolidation of the legitimate use of force is difficult to undo or limit, once a society has submitted to that consolidated power.1

Retributive Justice: This system is probably the oldest solution to the problem of violence in trust-based social communities. Unlike in a restorative justice system, in which no use of force is legitimate, or in the punitive justice system, in which a trusted third party (the state) reserves a monopoly on the legitimate use of force, in a retributive justice system everyone reserves the right to use force, and the legitimacy of an individual’s use of force is judged by the consensus of the community in relation to explicit or implicit norms guiding the use of force. In this system, there is no single body responsible for weighing the dispositive probability that an act occurred; instead, any person subjected to harm is empowered to unilaterally subject the person harming them to proportional violence, usually either immediately or in the near term, with respect to the time of the offense.

The indiscriminate and excessive deployment of force is restricted by three factors. First, whenever any individual responds to a harm with autonomous retributive action, the dispositive validity of the harm done to them and the proportionality of their response are surveilled and judged by the entire community, all or any of whom may in turn subject that individual to either non-violent coercion (for instance, economic sanction by withholding access to a sharing economy) or violent sanction for their excessive use of retributive force. Second, the risk of disproportionate escalation is often mitigated by the establishment of ritualized norms for competitive and proportional disciplinary violence. Third, proportionality is maintained by the collective recognition that an escalation into internecine violence would be infectious across social networks and extremely costly for all actors in a society. This provides an incentive for bystanders to intervene and act as de-escalators, since a burgeoning feud could draw them into the widening gyre of violence. The biggest risk in this type of distributed justice system is its potential to devolve into factional disputes between two or more competing groups. Those might be imagined or real kinship circles, or competing economic ecologies that share few links between their respective networks. In practice, this would look like a familial feud or inter-village warfare.

The goal of the cryptosystem proposed in this paper is to attack rapist impunity. When impunity is reduced, then barriers facing survivors to access accountability systems are lowered. A community of survivors sharing a common perpetrator could choose to access or autonomously operationalize any of the accountability systems listed above.

Technical Problems: I will first address several security vulnerabilities in the centralized model that I had first proposed in an initial draft of this paper.2 Several friends of mine, as well as some folks I’ve never met in person, were generous enough with their time to point out quite a few vulnerabilities that I had not thought of initially. I’d like to thank them for taking the time to ponder over my first model; it’s very rewarding to be able to receive feedback from a community of people interested in social justice and cryptography.

The original model relied on a trusted third party (such as a university) to collect reports pseudonymously. Perpetrator pseudonyms would have been generated as a hash of their plaintext names with SHA-2 locally, within the survivor’s browser via a Chrome extension, before being exported over the Tor network to the trusted third party, which would have been run as a Tor hidden service. Survivors would have chosen a unique password and exported that as a hash along with the hash of the reported perpetrator’s name. This way, survivors could use that unique password to log in at a later date, over Tor, to a private chat room that corresponded to the hash of the perpetrator whom they had reported. This chat room would be filled with other people who had reported the same perpetrator name. Additionally, the trusted third party would retain the ability to build a map of common assault connections between perpetrators and survivors, which could be used to discover whether a few perpetrators in a population were between them responsible for many assaults, or whether many perpetrators were responsible for a few assaults each.
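As a rough sketch, the client-side step of this original model might have looked something like the following; the function and field names are my own illustrations, not the actual extension’s code:

    import hashlib

    def make_report(perpetrator_name: str, survivor_password: str) -> dict:
        """Client-side report construction in the original model (illustrative).

        Both values are hashed locally, inside the survivor's browser; only
        the digests are sent to the trusted third party over Tor.
        """
        perp_pseudonym = hashlib.sha256(
            perpetrator_name.strip().lower().encode()).hexdigest()
        login_token = hashlib.sha256(survivor_password.encode()).hexdigest()
        return {"perpetrator": perp_pseudonym, "login": login_token}

    report = make_report("John Doe", "correct horse battery staple")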

After reviewing the security vulnerabilities in this old centralized model, I will then introduce a new, more robust model for confidentially introducing sexual assault survivors who share the same perpetrator to one another, one that relies on homomorphic cryptography and peer to peer networking. Unfortunately, the centralization of a network map showing connections between pseudonymous perpetrators and survivors is a security vulnerability that can be used to unmask survivors’ real identities, and so it has been abandoned along with the trusted third party in this model.

Keyspace: In the original model, the plaintext of an individual’s name is not a sufficiently complex “keyspace” to generate a robust hash. Especially since the system administrator would have access to a complete list of plaintext names of every student at the university, it would be trivial to guess the name that corresponded to each hash. Salting the hash, as I suggested one could do in the paper, doesn’t actually help, since the system administrator must know the salt in order to compare hashes from different respondents.
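To see how small this keyspace is, here is a toy version of the dictionary attack, with invented names and digests:

    import hashlib

    def digest(name: str) -> str:
        return hashlib.sha256(name.strip().lower().encode()).hexdigest()

    # The sysadmin holds the full enrollment list, so the keyspace is tiny.
    enrollment = ["Alice Smith", "John Doe", "Mary Major"]  # a few thousand at most
    reported = {digest("John Doe")}  # digests received in reports

    rainbow = {digest(n): n for n in enrollment}
    for d in reported:
        if d in rainbow:
            print("cracked:", rainbow[d])  # -> cracked: John Doe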

One way around this problem is for the server to hash the hashes it collects with a secret key that only the system administrator has, and then use these hashed hashes to compare matches and build a social map of assault from there. The problem with this is that it is functionally no different from the “bullshit solution” I referred to in the paper, where the trusted third party has access to all of the plaintext names but “promises” not to look, since the sysadmin would have access to the original hashes, which would be trivial to crack.
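The keyed re-hashing idea would look something like the sketch below (SERVER_SECRET and the function name are hypothetical), which also makes the flaw plain: the server necessarily receives the raw digests before re-hashing them.

    import hashlib, hmac

    SERVER_SECRET = b"known only to the sysadmin"  # hypothetical secret key

    def keyed_rehash(reported_digest: str) -> str:
        # The server compares these keyed digests instead of the raw ones...
        return hmac.new(SERVER_SECRET, reported_digest.encode(),
                        hashlib.sha256).hexdigest()

    # ...but it received the raw digest first, so a malicious sysadmin can
    # still run the dictionary attack sketched above.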

Obfuscation: In the original model, if many attackers were to collude and falsely report a large number of names, then they could plausibly obfuscate any real data in the graph and invalidate the results of the survey.

Pseudonymity vs Anonymity: In the original model, it was pointed out that I refer to respondents as being “anonymous” throughout the paper, but that they are actually pseudonymous, since persistent information is associated with their persistent identities at two separate points. This may seem like a semantic point, but it can lead to unmasking respondents’ identities under certain conditions.

One way this could happen is if a perpetrator reported their own name to the system, because they would then be redirected to the chat room filled with the people they had assaulted, which is obviously not a desirable result, since it makes survivors vulnerable to retaliatory attacks for reporting, a situation that the system was designed to prevent. By deploying this attack, a perpetrator would be able to confirm whether they had been reported. If there had been only one survivor, the perpetrator would be able to identify them individually. If there were multiple survivors and all of them were online, then the perpetrator would know that all of them had reported. If only some of the survivors were in the chat room, the perpetrator could figure out which people had reported by phishing for identifiable information while masquerading as a fellow survivor. Even worse than this, if a perpetrator were to recruit his friends to report him as a perpetrator, any real survivor that might report him would subsequently be redirected into a chat room filled with him and his friends, which would of course be awful. If perpetrators advertised their intention to deploy this attack, they could effectively deter people from using the system to meet their fellow survivors.

Another attack vector would be for an adversary to correlate the number of people a pseudonymous respondent had reported as having assaulted them with the respondent’s real world identity. Multiple perpetrators could achieve this by colluding to share information; they could centralize the names of people they had assaulted themselves. Perpetrators employing this attack could then construct their own parallel map of social connections, including real world names, and compare that map to the one published by the university by overlaying the two images. The parts of the map whose structure matched would likely correspond to real identities; if the portions of the map that directly correspond are large or are composed of complex inter-relationships, the probability that such a correspondence indicates a match is stronger. If the attacker’s map drew on a sufficiently large data set, they could probably reproduce enough of the map to deduce the real identities of almost all respondents.
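A toy illustration of this overlay attack, using subgraph matching from the networkx library; the graphs and names are invented, and a real attack would need large matched regions before any correspondence became meaningful:

    import networkx as nx
    from networkx.algorithms import isomorphism

    # Published pseudonymous map: survivors s1..s3 linked to perpetrators p1, p2.
    published = nx.Graph([("s1", "p1"), ("s2", "p1"), ("s2", "p2"), ("s3", "p2")])
    # The colluders' partial real-name map, pooled among themselves.
    attackers = nx.Graph([("VictimA", "PerpX"), ("VictimB", "PerpX")])

    gm = isomorphism.GraphMatcher(published, attackers)
    if gm.subgraph_is_isomorphic():
        print(gm.mapping)  # candidate pseudonym -> real-name correspondence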

This may seem improbable, since a) how would they find each other, and b) wouldn’t they face a natural social inhibition to talking about having assaulted someone, especially since they would have to approach multiple parties in order to find other perpetrators? However, if, for example, everyone in a fraternity shared information about whom they had sexually assaulted, they could probably build a partial map and overlay it against the published map. If multiple fraternities shared information, their map would be more complete still and even easier to compare. Since the connections between perpetrators of sexual assault on a college campus likely form a small world model, it is probable that many perpetrators know one another. When I was at university, my friend group made a network graph of our own (consensual) sexual encounters, and the resulting map was very densely networked indeed. A map of non-consensual sexual activity would likely be structurally similar to the one my friends and I made.

One possible positive here is that if the total population of the school is large, this correlation attack becomes more difficult, since the size of private social groups that might feel free to share incriminating information becomes smaller in relation to the total population of the school. Still, this attack could realistically be made operational and shouldn't be discounted.

Fundamental Flaws: The original system described in the first draft of this paper has serious problems. The system administrator can identify the names of perpetrators by cracking the hashes they are given, which they are not supposed to be able to do. Malicious respondents can distort survey data by submitting erroneous mass reports. Most seriously, the identities of reporting survivors can be discovered by perpetrators. Either the perpetrators can report themselves and be routed to a chat room with the person(s) they assaulted, or they can collude with other perpetrators to exploit the publicly available pseudonymous map data to compare it with real world names.

The first vulnerability is only a serious problem if the sysadmin acts maliciously and decrypts the reported names. However, even if the sysadmin is trustworthy, the vulnerability itself distorts reporting data, since respondents may be fearful to report if they know that the names they report could be decrypted.

The problem of malicious respondents subverting survey data could be mitigated by limiting the number of people a respondent can report, but it is difficult to entirely eliminate this problem from the old model.

The problem of a perpetrator reporting themselves, or of recruiting their friends to report them, in order to either discover the identity of those who reported them, or to deter reporting altogether, is challenging, but not impossible to solve. If survivors who shared the same perpetrators understood this risk, they could conceivably use the chat room to arrange a meeting in a large public place, and then mutually authenticate one another with a shared secret, such as “I will be wearing a red hat at 1200 in public place x.” All survivors could then bring supportive friends to such a meeting, and if they saw anyone resembling the perpetrator or his associates, they could scram out of there (or beat the shit out of him).

The last problem identified is a wicked one, however; it’s an effective attack that makes it too dangerous to publicly release a pseudonymous map of associations between survivors and perpetrators. A university could release some statistics, such as ‘n people have committed more than x assaults,’ but releasing the entire social graph would risk unmasking survivors’ real names.

Deploy Homomorphic Encryption With Peer To Peer Networking: One possible solution to nearly all of these problems would involve deploying homomorphic encryption and eliminating the trusted third party from the model. Additively homomorphic encryption permits arithmetic on cyphertext, which makes it possible to privately compare the intersection of multisets without ever decrypting them. If we deploy this approach, we forfeit the ability to centralize reporting statistics, but retain a more robust capability to confidentially introduce survivors to one another.

In this model, the name of a reported perpetrator would be represented as a polynomial and then encrypted as a piece of cyphertext that could be compared against other encrypted polynomials reported by other survivors, without the need for a trusted third party to decrypt and compare the multiset. Persistent pseudonymity for survivors would be provided by public key cryptography at the application level. Data transport would be conducted over UDP, the User Datagram Protocol. Unlike TCP, which is a connection-oriented protocol that requires a handshake to set up end-to-end communications, UDP is connectionless and can broadcast encrypted packets across the network, regardless of whether any given endpoint is the intended recipient.
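To make the comparison step concrete, here is a minimal two-party sketch using the python-paillier (phe) library, in the style of the Freedman-Nissim-Pinkas private matching protocol. Encoding each name as a single integer root is my simplification; a fuller implementation would encrypt the coefficients of a polynomial with one root per reported name:

    import hashlib, secrets
    from phe import paillier  # pip install phe

    def name_to_int(name: str) -> int:
        # Encode a normalized name as a fixed-size integer (illustrative).
        return int.from_bytes(
            hashlib.sha256(name.strip().lower().encode()).digest()[:16], "big")

    # Survivor A encrypts the single-root polynomial P(x) = x - a, where a
    # encodes her perpetrator's name; only the constant term -a is secret.
    pub, priv = paillier.generate_paillier_keypair()
    a = name_to_int("John Doe")
    enc_c0 = pub.encrypt(-a)

    # Survivor B evaluates Enc(r*P(b) + b) homomorphically on her own value b,
    # using only ciphertext additions and scalar multiplications.
    b = name_to_int("John Doe")
    r = secrets.randbelow(2**64) + 1
    enc_reply = (enc_c0 + b) * r + b  # Enc(r*(b - a) + b)

    # A decrypts: on a match the mask vanishes and the result equals a;
    # otherwise it is a random-looking value that reveals nothing about b.
    print("shared perpetrator" if priv.decrypt(enc_reply) == a else "no match")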

We can imagine how this system might be deployed as an application on a smartphone. The survivor enters the name(s) of the perpetrator(s). Each name is represented as a polynomial, which is then encrypted. The resulting cyphertext is then signed with the survivor’s private key, which is generated by the application. The public key is published, with no identifying information, through a Tor hidden service keyserver. The application then broadcasts the cyphertext of the perpetrator’s name, signed by the survivor’s private key, as a UDP payload. This payload is repeatedly broadcast to all other phones running this application, which will in turn re-broadcast every payload they receive. Since every phone re-broadcasts every payload it receives, the fact that a particular payload came from a particular IP address doesn’t tell us much about what the actual source of that particular payload was. Conversely, just because a particular packet is signed with a particular private key, that tells us nothing about the originating IP address from which that signed packet was broadcast. Eventually, every phone in the network will have received and broadcast every report made. This is how the system maintains pseudonymity. Each phone will log all incoming payloads and compare each cyphertext with the original cyphertext representing the name of the perpetrator(s) that the user had entered into that phone. When the application gets a hit, it imports the public key associated with that packet and notifies the user.
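A minimal sketch of the broadcast step, assuming Ed25519 signatures for the persistent pseudonym and an arbitrary port (neither is fixed by the design above), and flooding only a local network segment for simplicity:

    import json, socket
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    PORT = 48219  # illustrative; the design leaves the port unspecified
    seen = set()  # envelopes already forwarded, to stop rebroadcast loops
    signing_key = Ed25519PrivateKey.generate()  # the persistent pseudonym

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

    def flood(envelope: bytes) -> None:
        sock.sendto(envelope, ("255.255.255.255", PORT))

    def report(ciphertext: bytes) -> None:
        # Sign the homomorphic cyphertext of a perpetrator's name and flood it.
        envelope = json.dumps({"ct": ciphertext.hex(),
                               "sig": signing_key.sign(ciphertext).hex()}).encode()
        flood(envelope)

    def on_receive(envelope: bytes) -> None:
        # Every node re-broadcasts each envelope once, so holding a packet
        # says nothing about which node originated it.
        if envelope not in seen:
            seen.add(envelope)
            flood(envelope)
            # ...then compare the cyphertext against locally entered names.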

The user would then be offered the opportunity to write a message to the person who reported the same perpetrator. This message would be encrypted with the recipient’s public key and signed with the user’s private key, before being broadcast as a UDP packet across the application network (which operates on port whatever). Although everyone in the network would eventually receive this message, only the intended recipient would be able to read it. Alternatively, the user could simultaneously communicate with multiple people who had reported the same perpetrator name(s) by mutually deciding upon a shared secret via a multi-party ECDH exchange, each step of which would be signed with each of the users’ respective private keys to ensure authenticity during the exchange. This shared secret would then be used to encrypt messages symmetrically using AES-256. These messages would then be signed with each user’s private key for authenticity, and broadcast to the entire network. Only the intended recipients would be able to read the shared messages.
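A sketch of the key agreement and symmetric layer, shown with only two parties and with AES-GCM as the authenticated mode of AES-256; the multi-party generalization and the signing of each exchange step are omitted for brevity:

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Each party contributes an ephemeral X25519 keypair; in the full protocol
    # every public value would also be signed with its sender's private key.
    alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()
    shared = alice.exchange(bob.public_key())
    assert shared == bob.exchange(alice.public_key())

    # Derive a 256-bit key for AES-256 from the raw DH output.
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"survivor-group-chat").derive(shared)

    nonce = os.urandom(12)
    sealed = AESGCM(key).encrypt(nonce, b"red hat, noon, fountain square", None)
    opened = AESGCM(key).decrypt(nonce, sealed, None)  # only key holders can read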

We still need to prevent the perpetrator from reporting themselves to the system, or recruiting their associates to report them, since that would open a communication channel with the person who had reported them, which would create an attack surface through which they could compromise the reporting survivor’s pseudonym and learn their identity, or deter survivors from reporting in the first place. We can mitigate this risk with snowball sampling, a non-probability sampling technique where existing subjects recruit future subjects from among their acquaintances. Thus, the sample group appears to grow like a rolling snowball across the ecology of subjects’ social networks. Riseup, the privacy-minded email provider for activists, uses this method to control access to account creation. In order to set up an email account with riseup, a user can either submit a paragraph about themselves and why they would like an account, which a sysadmin will actually read, or they can enter two unique invite codes, each of which is generated by a separate existing riseup account user. This means that someone who wants an account has to either introduce themselves to a sysadmin, or they have to have two friends who already have riseup accounts.
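A hypothetical sketch of this style of invite gating, using HMAC vouching codes; the two-vouch threshold mirrors riseup’s policy, but the mechanism itself is my own illustration, not riseup’s actual implementation:

    import hashlib, hmac, secrets

    # Each existing member holds a personal key; a vouch code is an HMAC of
    # the invitee's identifier under that key (hypothetical scheme).
    member_keys = {"member1": secrets.token_bytes(32),
                   "member2": secrets.token_bytes(32)}

    def issue_code(member_id: str, invitee: str) -> str:
        return hmac.new(member_keys[member_id], invitee.encode(),
                        hashlib.sha256).hexdigest()

    def admit(invitee: str, codes: dict) -> bool:
        # Require valid codes from two distinct existing members.
        valid = {m for m, c in codes.items() if m in member_keys
                 and hmac.compare_digest(c, issue_code(m, invitee))}
        return len(valid) >= 2

    codes = {m: issue_code(m, "newcomer") for m in member_keys}
    assert admit("newcomer", codes)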

Using a snowball method, we can restrict discovery of fellow survivors to the broader social network ecology of people trusted by people whom the survivor trusts, expanding exponentially to include the people trusted by those trusted people. This system can survive trust context collapse by defining separate social network ecologies with a shared secret, used to symmetrically encrypt shared payloads. The software could be distributed to everyone, but only users within a trust ecology defined by a shared secret would be able to decrypt the payload and compare the homomorphic cyphertext corresponding to a perpetrator’s name reported by someone in their trust ecology network. This would also shield users’ pseudonyms from people outside of their trust ecology, since their unique signature of the homomorphic cyphertext, used to create persistent pseudonymity, would itself be symmetrically encrypted with a shared secret known only to those in their trust ecology. This would make users’ communications anonymous to an attacker outside of their trust ecology: all users would log all traffic, but only the traffic decrypted with a shared secret would reveal any persistently pseudonymous identifying information.

We can also give this system the property of trust agility. Let’s say that one user adds someone to their trust ecology by sharing the secret used to symmetrically encrypt traffic for that group, and that this person turns out to be not very trustworthy after all. Any user in the trust ecology can create a new group that excludes this person by broadcasting n multi-party ECDH exchanges, each encrypted with the unique public key of one of the n remaining users and signed with the initiating user’s private key, where n is the number of trusted people in the trust ecology, minus the untrusted person. All of the recipients of this ECDH exchange attempt would be asked if they would like to join the new group, and the user attempting to create it could include a message explaining why they felt it necessary to exclude the person in question. This ECDH exchange would then establish a new shared secret used to symmetrically encrypt the entire payload for this group. The untrusted person would be locked out of the new group, but would still have access to the old shared secret. This means that a person who is trusted by the creator of the new group, and who, for whatever reason, also trusts the “untrustworthy” person, would be able to communicate in both groups; this system therefore has the property of being able to define multiple overlapping trust contexts.

Even if an untrustworthy person is included in a trust ecology via a shared secret, they would only be able to view the cyphertext pseudonyms corresponding to survivors and perpetrators. They would not be able to decrypt private messages sent within the ecology unless they were explicitly included in a private multi-party message exchange via ECDH & AES-256, or a user explicitly addressed a message to them using their public key. A malicious actor within a trust ecology would only be able to log who was pseudonymously talking with whom and map the graph of associations between reported perpetrator pseudonyms and survivor pseudonyms, by observing which cyphertexts were signed with which keys; they would not be able to unmask pseudonyms or decrypt private communications unless they were explicitly trusted by a user in the trust ecology.
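The outer trust-ecology layer could be as simple as wrapping every envelope with the group’s shared secret before it touches the wire; a minimal sketch, assuming AES-GCM and a secret that would in practice come from the group’s ECDH exchange:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    GROUP_SECRET = os.urandom(32)  # stand-in for the group's derived secret

    def wrap_for_ecology(envelope: bytes) -> bytes:
        # Nodes outside the ecology relay these opaque bytes without ever
        # seeing pseudonyms or reported cyphertexts.
        nonce = os.urandom(12)
        return nonce + AESGCM(GROUP_SECRET).encrypt(nonce, envelope, None)

    def unwrap(blob: bytes) -> bytes:
        # Only holders of GROUP_SECRET can do this; rekeying the group with a
        # fresh secret (minus an untrusted member) implements trust agility.
        return AESGCM(GROUP_SECRET).decrypt(blob[:12], blob[12:], None)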

Ok, so which problems did we solve with this? We managed to solve most of the security vulnerabilities listed above for the original, centralized system. We don’t have to deal with trusting a sysadmin anymore, because the system is decentralized. The malicious over-reporting of names is no longer a threat, since users only share pseudonymous reported perpetrators’ names with people in their trust ecology. If a user within a trust ecology did attempt an obfuscation attack by over-reporting names, other users in that ecology would be able to see that a user with the same persistent keypair was spamming the network, and could then disregard packets sent by that user or redefine the network with a new shared secret in order to exclude them.

The problem of a perpetrator reporting themselves, or of recruiting their friends to report them in order to discover who had reported them, is now less of a threat. Trust ecologies mapped over a social network, with the property of trust agility over time, make this attack much less effective, since an attacker would have to be inside a survivor’s trusted social network to be connected with them. However, if a survivor unwittingly shares a social contact with the perpetrator, and this social contact unwittingly trusts both of these individuals (since neither person told this mutual social contact about the assault), then the perpetrator could end up in the same trust ecology as the survivor, whether by intention or by accident. The likelihood of this happening to a survivor increases exponentially as the degrees of separation between them and the edge of the trust ecology increase over time.

The last problem, involving the collusion of perpetrators and the comparison of social network graphs, is mostly solved by defining confidential trust ecologies. Although all users log all traffic in the peer to peer network, only people within a trust ecology (defined by a shared secret) can see the cryptographic pseudonyms of reported perpetrators and the signature pseudonyms of the people who reported them. This decreases the size of the dataset that a malicious actor could potentially use in a correlation attack, which means that they would have to draw on a much larger dataset of real names and interconnections themselves in order to succeed in a correlation attack with the same probability of success as in the original, centralized model.

Did we introduce any new problems with this model? Well, one drawback of eliminating a trusted third party from the model is that everyone can log when they receive each pseudonymous perpetrator name. Since each perpetrator’s cyphertext pseudonym is signed with the pseudonymous private key of the person who reported them, and since every node in the network would re-broadcast every packet received upon receipt, this would expose a potential attack surface to a malicious actor who paid attention to how many times they received the same report and when they received it. If a perpetrator inside a trust ecology observed that the same pseudonym signed by a particular reporter was being rebroadcast more frequently than others, because it had been reported earlier, they could possibly correlate that, with approximate accuracy, to when each person inside their real-world social network began using the application. If the attacker knew who in their social network had been assaulted, either because they were the assailant or because they heard about it secondhand, they could possibly correlate the report made to the real name of the reporter. If the network of users is small enough, an attacker might even be able to locate the probable IP address of the original report. This would work because everyone in the network eventually discovers the IP address of every other person in the network. Pseudonymity is only strong if many people are using the network. As soon as anyone enters the network, they reveal their IP address as being a part of the network, because the protocol requires that everyone rebroadcast all packets they receive immediately, in order to obscure the originating IP address of those packets. The strength of this correlation attack is inversely proportional to the size of the trust ecology; the more people making reports and using the system, the harder it would be to execute this attack. If the malicious actor is not inside a trust ecology, this attack becomes significantly weaker, since they would have no data about respondents’ persistent pseudonyms or the pseudonyms of the reported perpetrators. An attacker not included inside a trust ecology would only be able to map the IP addresses of all users on the network, without being able to correlate the content of information transmitted.

The attack scenario with a malicious actor inside a trust ecology is difficult to defend against, because the attack is passive and doesn’t signal to other users that the actor has a malicious intent. However, since this attack is only possible if the malicious actor has been explicitly trusted by someone in a user’s broader social network, and since it would be possible to exclude them from the trust ecology at a later date, this is an acceptable vulnerability. It’s simply hard to defend against all attacks by someone whom a group member has explicitly chosen to accept and trust. As for the scenario where the attacker is outside of the trust ecology, but can log the IP addresses of people on the network, the best way to defend against this attack would be to develop multiple applications for this peer to peer communication platform. If lots of people are using the network to share all sorts of information, then the fact that someone is on the network tells us little.

Conclusion: Impunity is the key property that explains the widespread prevalence of sexual assault. Rapists are often able to remain anonymous to the next person they hurt, because most survivors do not report. Most survivors do not report because existing accountability mechanisms in society do not treat rape as a serious crime. Survivors who report are often subjected to blaming attacks by social peers and are ignored by those who are supposed to be responsible for investigation and accountability, whether that is the police and court system or a university administration. Lack of any real accountability for perpetrators who are reported and the low rate of reports overall are intrinsically linked; these two factors form a mutually reinforcing feedback loop that leads to greater impunity for rapists, who consequently enjoy relative anonymity when targeting people and little chance of facing serious consequences if they are reported.

One way to deal with the problem of impunity is to label people. The prison system does this all the time when it labels people as felons and sex offenders, labels that follow those people for the rest of their lives. This system relies on first demonstrating dispositively that the individual probably committed the act in question, and then publicizing a persistent label associated with their real name, or sometimes even their biometric identity. This system has four flaws. The first is that any system for demonstrating dispositive probability is vulnerable to making erroneous convictions. The second is that people with more social privilege usually fare far better in courts that are controlled by privileged people and which are vulnerable to the influence of social and financial capital. Judicial systems amplify the systemic impunity already afforded to socially privileged individuals. The third problem is that this system deliberately ignores human beings’ capacity to learn and change. The fourth is that by assigning a persistent and harmful label to someone, this label often corrosively influences that individual’s behavior and identity; if you label someone a felon and make them check a box every time they try to get a job, they just might start acting like a “felon” to support themselves. The label itself can also encourage pride/shame identifications with the label, which the person labeled assumes as a coping mechanism, for example: “yeah I’m a felon, but I’m dangerous as fuck, so don’t mess with me.” Labels can perpetuate the very behavior they describe.

In order to subvert the impunity that most rapists enjoy, reporting numbers must rise and accountability must be enforced. The barrier to achieving this is that, since both are intrinsically linked, one cannot change one without first changing the other. There is a third way: ignore existing accountability mechanisms, which are unreceptive to prosecuting rape, and focus on connecting survivors who share the same perpetrator, without centralizing any information about the perpetrators or survivors themselves. Those survivors may be more motivated to face the stress of pursuing a formal case against the perpetrator if they have the support of their peers. The survivors may find that they possess more dispositive evidence against the perpetrator between them than they would possess individually. The survivors may be more motivated to organize against sexual assault, they may feel less alone, and they may come to see their experience less as an obstacle to be overcome and forgotten and more as a social problem that they can do something to change. Sometimes it’s easier to care about other people than it is to care about oneself, and so the knowledge that not acting to stop a perpetrator can have consequences for others may be more mobilizing than conceptualizing one’s assault as a personal violation. They might all just go beat the shit out of their rapist, who knows?

Introducing survivors who share a common perpetrator to one another would likely have a disruptive effect on campus-wide conversations about sexual assault, and would introduce new deterrents for perpetrators, who would be more vulnerable to exposure, prosecution, or independent retributive action taken by the survivors themselves. Unlike a “label and publicize” system, this model would not rely on flawed dispositive accountability systems; in fact, this system would be utterly neutral as to the question of whether an assault had occurred or not. It would merely link people who claimed a common perpetrator. What they should do with that information from there is entirely up to them, the survivors.

Notes:

1 Whenever a society trusts any third party, such as a state, to control a monopoly on the use of force, it is exposing itself to the risk that the state may use such a force monopoly to consolidate its own interests, such as by reinforcing the power of a hegemonic racial caste or class, or by disposing of people who question or challenge state power. If the modern international state-corporate power matrix were to achieve its own ideological ideal, then force would only be deployed in order to maintain the expectation of a non-violent public sphere, in which individuals can freely engage in economic relations without a high degree of mutual trust. The problem with such a neoliberal ideal is that once wealth and the ability to deploy legitimate force are correlated and concentrated, any political challenge to the status quo can be subjected to economic sanction and/or “legitimate” violence.

This supposedly beneficial property of the unitary state, enabling actors to engage in economic relations without a strong basis of trust, is why states have grown so powerful; their citizens don’t experience the economic inefficiency of having to verify trust prior to engaging in commerce or trade. Without such a guarantee made by a “trusted” third party such as a state, whose threat of sanction looms over all contracts, economic actors are operating in a “non-state” economic environment. In order to enter into a contract in such an environment, actors have to first verify their respective identities, so that neither can remain anonymous, and hence retain the capability to act with impunity. Verification of economic actors outside of the state system is similar to how one would use the web of trust in PGP: by finding a route of trust across their respective social networks. This system of verification through “vouching” by shared social contacts is also a way to bind the individuals together; mutually shared social contacts can also act as an accountability regime. This way, any deception would result in lost reputation or even violent sanction for the offending party. Drug dealers do this all the time; you buy from a wholesaler who is vouched for, so you know they aren’t a cop and you have a way to hedge against them screwing you. Non-state economic contracts usually occur within imagined communities where kinship, racial or religious affinity, or a manufactured common culture enable actors to engage in trusting economic relationships with individuals they do not directly know. Each individual’s reputation within the non-state economy and in reference to imagined community norms governs behavior between actors who don’t share a prior history of trust. Examples include: European banking families in the Middle Ages, mafia economies, fraternities, street gangs, and drug economies.

2 The original paper can be found here:

https://medium.com/praxis-journal/6b7ae80f3d72
