Assassination Markets Come to Augur

A Moment for Moral Reckoning in the Blockchain Community

Jacob Z
ConsenSys Media
8 min read · Aug 7, 2018


Last week, I watched the blockchain community scramble to find the right words to talk about the assassination markets and death pools popping up on Augur. Although blockchain news outlets felt that the assassination markets were wrong, they generally struggled to articulate their frustration. In fact, if you surveyed the coverage, you’d quickly pick up on a reluctantly absolving narrative: something like, “It’s unfortunate that death pools are on the blockchain, but the Augur team doesn’t have the power to invalidate supposedly morally harmful prediction markets. The protocol is decentralized.”

But those stories, which entirely exonerated the Augur team, struck me as fatalistic. They left me unsatisfied. It is clear that assassination markets are morally harmful — that much is settled. This piece aims to make it equally clear that we don’t need to resign ourselves to defeatism. There were serious failures on the part of the developers and technologists at the Forecast Foundation, the not-for-profit team behind Augur. But this piece is not designed to chastise them. Rather, its objective is didactic: we can learn from their mistakes and improve the blockchain community’s ability to think about the moral dilemmas cropping up in decentralized systems. But first, we need to establish some background for the folks who haven’t been following the story.

Augur itself is not a prediction market. Rather, it’s a “set of open source smart contracts that can be deployed to the Ethereum blockchain,” which enables users to create their own prediction markets. A user can, for example, stake Ether on whether or not Cynthia Nixon will beat out Andrew Cuomo for the Democratic nomination in the New York gubernatorial race. Another user might create a prediction market for something more trivial, like whether or not it will rain in Washington, DC tomorrow. Augur, however, has come under fire recently because users are creating more nefarious markets. Motherboard found open bets on the deaths of people ranging from Donald Trump to Betty White. And while some of the pools merely ask users to predict whether or not an older prominent public figure (e.g., Warren Buffett) will die of natural causes, others explicitly focus on targeted killings. One prediction market that has seen a recent spike in activity, for example, asks users whether or not Donald Trump will “be killed at any point during 2018.” Because these sorts of prediction pools create incentives to murder the individual in question, they are often described as assassination markets.
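To make the mechanics concrete, here is a deliberately simplified sketch of how a binary prediction market pools stakes and pays out winners. This is illustrative Python, not Augur’s actual smart-contract code; the class and method names are my own assumptions, and real markets on Augur also involve reporting, fees, and dispute rounds that are omitted here.

```python
class PredictionMarket:
    """Toy yes/no market: all stakes pool together, and when the
    market resolves, winners split the whole pot pro-rata."""

    def __init__(self, question: str):
        self.question = question
        self.stakes = {"yes": {}, "no": {}}  # outcome -> {user: amount}
        self.outcome = None

    def bet(self, user: str, outcome: str, amount: float) -> None:
        if self.outcome is not None:
            raise RuntimeError("market already resolved")
        side = self.stakes[outcome]
        side[user] = side.get(user, 0.0) + amount

    def resolve(self, outcome: str) -> dict:
        """Report the real-world result and compute payouts."""
        self.outcome = outcome
        winners = self.stakes[outcome]
        pot = sum(sum(side.values()) for side in self.stakes.values())
        winning_total = sum(winners.values())
        # Each winner's payout is proportional to their stake.
        return {user: pot * stake / winning_total
                for user, stake in winners.items()}

m = PredictionMarket("Will it rain in Washington, DC tomorrow?")
m.bet("alice", "yes", 10.0)
m.bet("bob", "no", 30.0)
payouts = m.resolve("yes")  # alice takes the whole 40.0 pot
```

The troubling incentive is visible even in this toy version: if the question is whether a person dies, anyone holding a large “yes” stake profits from making the outcome come true.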

While the presence of assassination markets is certainly frightening, Augur’s marketplace is likely too small (i.e., there’s not much ETH being staked) to actually incentivize the murder of a prominent public figure. So, while we don’t need to be alarmed, we should be concerned. The folks at MIT Technology Review felt the same way and started asking the right questions: if something bad did happen, “could the creators of Augur be held accountable?” The Foundation thinks the answer is no, and most blockchain news outlets accepted that response as true.

A Deeper Look

Here’s where things stand now: the Foundation explains that it “does not operate or control, nor can it control, what markets and actions people perform and create on the Augur protocol.” Furthermore, the Foundation claims that users maintain absolute responsibility for pretty much everything. It’s up to the user to ensure their actions are legal and compliant in all relevant jurisdictions, and users should recognize that other people using the Augur protocol might be acting illegally. In an interview with ETHNews, Augur co-founder Joey Krug highlighted the Foundation’s lack of control and absolved himself of responsibility. And while Krug seems to believe the conversation ends with that fact — that the Foundation cannot censor morally harmful markets — this is where my analysis begins.

Here’s the first question we should be asking: is it permissible to create systems over which we have no control that could produce significant moral harm? If the answer is an absolute “No,” then the team at the Forecast Foundation was in the wrong from the start. But the correct response isn’t binary. The best answer, in fact, is “sometimes” or “it depends.” The blockchain community takes it as a given that the development of decentralized applications (i.e., systems over which the creators may have no control) is morally just. But there are hypotheticals that prove this isn’t always the case.

Consider: a group of scientists is on the cusp of a historic breakthrough. With the push of a button, they can create a fully sentient, superintelligent A.I. But there’s a risk: they have no guarantee the A.I. will be beneficent. In fact, the thing brought into being could be genocidal — it could spell the end of humanity. Should they push the button? The answer, at least according to intuition and nearly all moral theories, is an overwhelming “No.” It would be wildly irresponsible and unethical for a small group of individuals to unilaterally introduce such an existential risk. And it would be very clear that the malevolent A.I.’s creators ought to be held accountable for their grave mistake (assuming they are still alive). With this in mind, the Foundation’s “we have no control” argument is not obviously sound.

Of course, the existential risk of developing a malevolent A.I. is an extreme example. But it does prove that it is not always permissible to create systems (read: protocols) over which we have no control that could produce serious moral harm. Context is important, and answering the initial question requires us to consider at least two variables. First, what is the likelihood the system produces serious moral harm? Second, how serious is the harm? In the malevolent A.I. hypothetical, for example, the seriousness of the moral harm — human extinction — on its own likely determines that pushing the button is the wrong choice. Let’s apply our framework to Augur.

Right off the bat, it seems highly unlikely that the prediction markets on Augur actually incentivize anyone to assassinate a prominent public figure. We are a far cry from the world of “Assassination Politics” imagined by crypto-anarchist Jim Bell in 1995. That being said, incentivizing murder and facilitating assassination markets is a serious moral harm and probably illegal. With these two factors in mind, was creating Augur — a protocol over which the creators have no control that could produce serious moral harm — permissible? While the unlikeliness of the harm ever materializing suggests that it might be, I think the answer is no. But it’s a complicated no, so hang with me.

Even if it is permissible to create systems over which we have no control that could produce serious moral harm, there are tack-on responsibilities. What steps, for example, are we obligated to take to mitigate and minimize that moral harm? And what kind of responsibility — if any — do we have over that system’s outcomes? I ran through a lot of permutations, and I spent a lot of time thinking about cryptoeconomic mechanisms that could’ve been baked into the Augur protocol to censor morally harmful prediction markets. If such a cryptoeconomic mechanism existed — one that maintains decentralization while effectively removing morally harmful prediction markets — creating the protocol would be permissible. But that’s where I ran into a problem: I couldn’t think of any. All the effective solutions I imagined sacrificed decentralization, which demonstrated quite clearly that the creators ought to have maintained more control over their protocol.

A Better Way

I want to review one of the decentralization-sacrificing solutions because I think it elucidates my conclusion. Augur once had a kill switch, but the team burned it just days before the first assassination markets popped up. The mechanism was originally designed to let developers fix unanticipated problems: it guarded against critical bugs and other serious threats while the network matured, and once the team judged the network secure, it was burned. Kill switches, or escape hatches, have been used with relative effectiveness before. Most notably, the Bancor team used one after their protocol was exploited, allowing them to reclaim ~$10 million worth of stolen BNT tokens.
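The kill-switch idea is essentially the circuit-breaker pattern familiar from smart-contract engineering: a designated party retains a narrow power to intervene, and “burning” the switch means irreversibly renouncing that power. The sketch below illustrates the pattern in Python rather than Solidity; the names (`guardian`, `invalidate`, `burn_switch`) are my own illustrative assumptions, not Augur’s actual interface.

```python
class GuardedMarket:
    """A market with a retained kill switch: a guardian can void it,
    until the guardian irreversibly renounces that power."""

    def __init__(self, question: str, guardian: str):
        self.question = question
        self.guardian = guardian   # party allowed to intervene; None once burned
        self.invalidated = False

    def invalidate(self, caller: str) -> None:
        """Guardian-only: void the market (stakes would be refunded)."""
        if self.guardian is None:
            raise RuntimeError("kill switch was burned; no one can intervene")
        if caller != self.guardian:
            raise PermissionError("only the guardian may invalidate")
        self.invalidated = True

    def burn_switch(self, caller: str) -> None:
        """Irreversibly give up control. Afterwards the market is fully
        decentralized and invalidate() can never succeed."""
        if caller != self.guardian:
            raise PermissionError("only the guardian may burn the switch")
        self.guardian = None

# While the switch exists, a harmful market can be voided:
m = GuardedMarket("Will X be killed during 2018?", guardian="ethics_council")
m.invalidate("ethics_council")

# Once burned, no one — not even the original guardian — can intervene:
m2 = GuardedMarket("Will it rain tomorrow?", guardian="ethics_council")
m2.burn_switch("ethics_council")
```

The design trade-off the essay is weighing lives in that one field: as long as `guardian` is set, the protocol is only mostly decentralized, but harmful markets can be voided; once it is `None`, decentralization is total and so is the loss of recourse.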

This leads us to an obvious question: why not keep the kill switch and use it exclusively for preventing serious moral harm? Its managers could be an independent group of professional ethicists, industry representatives, and perhaps even elected members of the Augur community (see: Civil’s Council). But here’s the kicker: insofar as a kill switch is worth using to recover hacked funds and prevent network crashes, it seems like it’s worth using to disincentivize murder and censor harmful prediction markets. I think the existence of this alternative renders the protocol’s initial creation impermissible.

But I recognize this argument may not be entirely convincing. I know the blockchain community places a justifiable premium on decentralization. I also know that kill switches and escape hatches are controversial in their own right. But I think the moral calculation is fairly clear. On one side we are weighing the value of a fully decentralized protocol that enables users to create prediction markets with no censorship. On the other side, we are weighing the value of a mostly decentralized protocol that enables users to create prediction markets — but an independent body has the power to void assassination markets and other morally harmful pools. In this instance, I think the value of human life trumps the value of full decentralization. In other words, I think we have a stronger obligation to stamp out assassination markets than to promote fully decentralized prediction markets.

So where does this leave us? I’d like the blockchain community to become more comfortable asking that initial question: “Is it, in this instance, permissible to create a system over which we have no control that could produce serious moral harm?” The teams behind these decentralized systems are rightfully proud when they do good. The same teams, I believe, should also be prepared to take responsibility when they do harm.

Disclaimer: The views expressed by the author above do not necessarily represent the views of Consensys AG. ConsenSys is a decentralized community with ConsenSys Media being a platform for members to freely express their diverse ideas and perspectives. To learn more about ConsenSys and Ethereum, please visit our website.
