The First Church of Chatbot: or, HOWTO Pwn a Democracy using Common Ingredients Easily Found in Every Cloud PaaS

Meng Weng Wong
May 18, 2018

--

(a design fiction, originally written April 2017, published May 2018)

This tutorial shows you — a hacker — how to write God — a program.

But first, some context.

Our history, as hackers, is long.

In the 1960s, phone phreakers figured out they could whistle up free long distance. On that note in 1984, Emmanuel Goldstein started 2600: the Hacker Quarterly. In 1988, Robert Tappan Morris’s worm infected up to 10% of the Internet. In 2003, Bunnie Huang published Hacking the Xbox. Epic hacks and hackers all.

In 2013, Black Mirror screened The Waldo Moment. And in 2017, the first Twitter-native candidate trolled the mainstream media into free, 24/7 coverage and lolled all the way to the White House.

What an epic hack! Much lulz, very zomg.

In the 1980s and 1990s, the nascent Internet was tiny and content was sparse: IETF RFCs, the Usenet Oracle, Project Gutenberg. Still, samizdat lurked on shady Gopher and FTP sites for thrillseekers to find: the Anarchist Cookbook, the MIT Guide to Lockpicking. The advice in these texts was of questionable value, but their very existence (and outlaw tone) crowbarred open the consciousness of a whole generation of high-school cypherpunks who went on to read Neuromancer and Snow Crash, install Linux and PGP, and major in computer science. Some of them became authors and lit the torches of a new generation with books like Little Brother (Cory Doctorow) and Accelerando (Charlie Stross). The Internet was the LSD of Generation X.

In 1990, the Secret Service raided Steve Jackson Games and confiscated the role-playing game GURPS Cyberpunk, asserting that it was a “handbook for computer crime”.

It was, of course, no such thing.

This tutorial, however, is a handbook for what some might call cybercrime … for using emerging computer technologies to exploit well-known vulnerabilities in the human psyche and human social structures for … well, for profit — for power — for the lulz!

Except it’s all 100% legal. We’ll see why, later.

The intended audience for this guide is a hacker, in both the creative and destructive senses of the word. That word, “hacker”: it’s a big tent, and like the best science fiction it’s bigger on the inside than on the outside. Every hacker is two people in one body. We go to maker faires and we go to DEFCON. We start startups, we contribute to Github, we architect and build systems for others to use. We craft, from first principles, elegant solutions to the delight of our peers. If a thing can newly be done, we are the first to try to do it. It’s why the best founders are hackers: we are innovation entrepreneurs.

At the same time, our brains are constitutionally wired to seek truth to the extent of pedantry, to explore every exception to prove the rule. We think about systems — others’ and our own — in terms of weaknesses and potential exploits. We think about communications in terms of privacy and eavesdropping. We cover our webcams because we know how easily hardware can be pwned. All systems will be gamed. Our working vocabulary includes “zero-days”, “botnets”, and “buffer overflows”. Reflections on Trusting Trust permanently recalibrated our paranoia about computers and software systems.

This tutorial might permanently recalibrate your paranoia with regard to humans and society, if you aren’t already a cynical crypto-anarchist.

Finally, hackers pay it forward. We do our best then give it away. We spend countless hours answering questions, instructing newbies, writing documentation, turning science into technology and technology into apps for the benefit of everyone who doesn’t code. This tutorial is written in that spirit.

I will assume basic technology skills. You already know how to hack into a computer, how to install a rootkit, how to use one machine to probe a thousand more. You know how to boot up a botnet, set up command-and-control nodes on IRC, and launch a basic DDoS to take down a major website. Basic skr1pt-kiddie stuff; a lot of it is commoditized by now.

But we don’t just break into systems. We build systems. Perhaps the best example of doing both at once is the Carna botnet of 2012.

I will assume that you also know how to build a web or mobile app; how to run data analytics using Python or R; how to assemble useful tools from AWS’s chest of gadgets; how to use the machine learning kits published by Facebook and Google; how to learn, within a week, to use a new API and boot up a backend that scales to millions of interactions per second.

In short, you have mastered technology: both its dark and its light sides. The Force is strong with you.

What would it mean to apply that mastery to humans? To societies?

In this tutorial, your attack target is not a computer, not a network, but an entire democratic society. Let the game begin.

HOWTO Play “Build Your Own Cult Online”

“We are as gods, and might as well get good at it.”
Stewart Brand, Whole Earth Catalog

Your tools: machine learning, natural language processing, social media APIs, chatbot toolkits. After a long winter, A.I. is maturing. Twenty years ago these tools did not exist; neither did social media. You may assume unlimited CPU, unlimited disk, unlimited bandwidth. For a god, infrastructure omnipotence is table stakes.

Your strategy: build an army of chatbots to engage with humans, and a network of fake news sites to shape their beliefs. Think of it as automated social engineering at scale. (What’s social engineering? Anything from a “send-money” scam to swatting.)
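In the spirit of design fiction, here is what the skeleton of such a bot army might look like. Everything in this sketch — the persona names, the talking points, the dispatch function — is invented for illustration; a real deployment would sit behind a social media API and an NLP model rather than canned strings:

```python
import random

class Chatbot:
    """One synthetic persona. All names and lines are hypothetical."""
    def __init__(self, name, talking_points):
        self.name = name
        self.talking_points = talking_points

    def reply(self, message):
        # A real bot would call a language model here; this sketch
        # just parrots a persona-appropriate talking point.
        return f"{self.name}: {random.choice(self.talking_points)}"

def dispatch(bots, incoming):
    """Fan each incoming human message out to a randomly chosen bot."""
    return [random.choice(bots).reply(m) for m in incoming]

bots = [
    Chatbot("alice_92", ["Have you read the latest?", "Exactly what I thought."]),
    Chatbot("truth_hound", ["The mainstream media won't cover this.", "Wake up."]),
]
replies = dispatch(bots, ["what's going on?", "is this real?"])
```

The point of the sketch is the shape: personas are cheap, and the dispatcher is the only piece that has to scale.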

Project task: recruit a cult of humans who will believe anything and do pretty much anything you tell them. At least a dozen. They can be from anywhere in the world. Minimum age, 21. (Getting people younger than that to do what you tell them is (a) immoral, (b) pointless (you want their vote), and (c) way harder – ask any parent!)

Your financial goal: to get those humans to contribute funds to sustain the cult.

Your political goal: to get those humans to vote for a slate of political candidates of your choice in the next election. Your eventual aim is to capture the flag — to gain, by only lawful means, voting control of a host society, despite the best efforts of its legacy institutions to resist you. The host can be a corporation or a democracy.

Your social goal: to help your cult members to actually live good lives — better lives than they would have lived if they had not joined your cult.

Three goals, three rules:

The scalability rule: you, as founder of the cult, should not interact directly with your followers. Maybe in the early days it’s okay to do things that don’t scale, but in the long run, your will has to be expressed through programming the chatbots and through community scaling mechanisms like peer support forums and meetups.

The honesty rule: you have to tell each of your cult members, within two years of conversion, that you hacked their minds, and how you did it…

The level-up rule: …and you have to teach each of your cult members how to hack others in turn.

Is this possible? Yes: we have a mathematician’s word for it. In 1947, Gödel famously identified a weakness in the US Constitution that would allow it to be legally converted into a dictatorship. If the mathematicians say it can be done, then implementation must be a simple matter of programming, a job for engineers. In this case, social engineers.

A democracy is made of voters in the same way a network is made of individual computers. Whoever pwns the nodes, pwns the network. Once you pwn the network, you have two kinds of control: direct control over the nodes who are running your software, and indirect control even over the nodes who aren’t, because you control the state that taxes those nodes.

So: how to pwn the nodes?

How To Pwn A Human Being

The weaknesses of your target system are well known. Enormous lists of cognitive biases have been compiled, with the best of intentions: to educate readers toward greater rationality and help them overcome those biases.

Isn’t that sweet?

Put on your hacker hat. Treat them as lists of vulnerabilities. Every one of those cognitive biases is a potential exploit against the human psyche. Every uneducated human is an unpatched computer waiting to be pwned. Their OS is to blame: humans are Predictably Irrational.

The biggest, easiest exploits, the lowest-hanging fruit, the buffer overflows of the human mind, have to do with confirmation bias and tribalism.

Confirmation bias lies at the root of sales techniques like “Foot-In-The-Door”. Whole libraries of persuasion mechanisms have been documented. Books like Cialdini’s Influence and Jamie Whyte’s Crimes Against Logic (subtitled Exposing the Bogus Arguments of Politicians, Priests, Journalists, and Other Serial Offenders) can be read as how-to guides; politicians use bogus arguments not because they’re evil but because they work. These tricks are not new. Religions use them. Turn the tricks into software and you can weaponize the resulting tech stack.

(Recent applications at the top of the stack include Neil Strauss’s The Game. That book basically teaches lonely men remedial social skills with a specific goal: to get their biology off their backs by demonstrating a success condition for sexual selection at least once. But technologists, once they have unlocked that achievement, will naturally ask: “how does this scale? How can I automate it?” Unfortunately, romantic seduction is incompatible with cloud containerization. You can’t Docker your dick.)

This is your Brain on Yeast.

Think of a blob of dough. Wild yeasts and bacteria, floating in the air, land on the dough. Some die. Some flourish. Soon the dough is colonized by microorganisms happily fermenting and reproducing. If you make a fresh dough and smoosh it together with the old one, the bugs will cross over and soon you’ll have two loaves of bread.

Now think of that dough as a human mind. Yeasts are ideas. And you’re going to build a bomb that sprays a very specific strain of yeast across the Internet. Your yeast bomb, in the form of an army of chatbots and a network of fake news sites, is going to fool fresh dough into thinking that the other doughs in the kitchen are already happily colonized. In reality, the other doughs are collaborators, in on the scam, like shills in a shell game. The bots talk to each other in a largely scripted performance. And the bots talk to your target, using APIs and machine learning to improve. Don’t worry if your bots start out failing the Turing test; they’ll learn what not to do, and they’ll improve on the next target. It’s basic wardialing strategy.
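The “largely scripted performance” with a learn-what-not-to-do loop can be sketched in a few lines. The dialogue and the disengagement signal below are invented for illustration; in practice the failure signal would come from the target going silent or blocking the bot:

```python
class ShillScript:
    """A scripted two-bot performance with a crude wardialing-style
    feedback loop: lines that burn a target are never replayed."""
    def __init__(self, lines):
        self.lines = list(lines)   # (speaker, text) pairs, in order
        self.burned = set()        # lines that made a past target leave

    def perform(self):
        # Replay the script, skipping anything that burned a past target.
        return [(s, t) for s, t in self.lines if t not in self.burned]

    def record_failure(self, text):
        # Called when a target disengages right after this line.
        self.burned.add(text)

script = ShillScript([
    ("bot_a", "Everyone I know already believes this."),
    ("bot_b", "Same here. The evidence is overwhelming."),
    ("bot_a", "Send $500 now or else."),
])
script.record_failure("Send $500 now or else.")  # too aggressive; burned
```

Each failed conversation prunes the script a little; the next target gets a slightly more convincing performance.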

Soon your target will start believing in an alternate “consensus” reality. You can flood them with fake news that represents your preferred perspective. Imagine a chatbot that can gin up an entire fake news story, an entire fake news website, just-in-time, on demand, whenever a link to “evidence” would come in handy during a discussion. Think disinformation and propaganda, but customized for an audience of one. They’re a brain in a vat, with their phone as the vat.
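Just-in-time “evidence” is little more than template rendering. A minimal sketch — the outlet name, byline, and headline formula are all invented placeholders, not real publications:

```python
TEMPLATE = """\
<html><head><title>{headline}</title></head>
<body><h1>{headline}</h1>
<p class="byline">By {byline} | {outlet}</p>
<p>{body}</p></body></html>"""

def fabricate_evidence(claim, outlet="The Daily Veritas", byline="Staff Writer"):
    """Render a one-off 'news' page supporting a claim, on demand."""
    headline = claim.rstrip(".") + ", Experts Say"
    # Lowercase the first letter so the claim reads as a clause.
    body = f"Sources close to the matter confirm that {claim[0].lower()}{claim[1:]}"
    return TEMPLATE.format(headline=headline, byline=byline,
                           outlet=outlet, body=body)

page = fabricate_evidence("The moon landing was staged.")
```

Swap the static template for an NLG model and you have a fake news site that exists only for the duration of one argument.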

For verisimilitude, you may fake both sides of an astroturf army, running a mock debate which (eventually) your side wins. When you run these debates you will have ample opportunity to inculcate cognitive biases and conversational games like “someone who’s been wrong in the past can’t possibly be right in the future”, “we agree because we like you”, “that’s sacred—don’t ask that question” and “you’ll understand when you’re older, but trust us for now.” Persuasion techniques are well documented; there is no need to enumerate them all here.

Enabling Technology 1: Chatbots.

Jeff Hammerbacher once snarked, “The best minds of my generation are thinking about how to make people click ads”. A generation of tech startups are using tools and building tools for promotion, virality, community management. Startup founders ask themselves “what would Jesus do?” not for moral guidance but in hopes of repeating his accomplishment: achieving a monthly-active-user count in the billions. Political campaigns microtarget voters using the techniques of social media marketing. Decades of work have gone into software to make people click, click, click; buy, buy, buy; vote, vote, vote.

Today, chatbots are the new hotness. People aren’t born to click on buttons. People are born to talk. When they talk to other people, it’s called conversation. When they talk to God, it’s called prayer. When they talk to bots, it’s a Turing test.

Chatbots on social media, fueled by machine learning, training in realtime against a participant pool of billions, are racing to pass the world’s biggest distributed Turing test. And the stakes couldn’t be higher: whoever wins that race gets to play God.

Spam used to be mass-mailed, impersonal, with hit rates in the hundredths of a percent. Then came phishing, then spear-phishing. What’s next? Personalized bots, talking to you in realtime, on IM and Twitter, even IRC. The cost of a fully automated chatbot running a long con will go to zero. The science fiction of “Her” will be the reality of tomorrow.

Thanks to machine learning, you don’t have to do all the programming yourself. You don’t have to get it right every time. If your bots fail at dialoging with a given human, just discard them. (The human, not the bot.) Move on to the next one. It’s been almost a quarter-century since Eternal September: now there are so many people on social media, the top of your conversion funnel is practically infinite. As the saying goes, there’s another born every minute.
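The discard-and-retry funnel is essentially a multi-armed bandit over opener scripts. A toy epsilon-greedy sketch — the script names and conversion rates are made up, and the “humans” are simulated coin flips:

```python
import random

def pick_script(stats, epsilon=0.1):
    """Epsilon-greedy choice among opener scripts by observed
    conversion rate. `stats` maps script -> [conversions, attempts]."""
    if random.random() < epsilon or all(a == 0 for _, a in stats.values()):
        return random.choice(list(stats))   # explore
    return max(stats, key=lambda s: stats[s][0] / max(stats[s][1], 1))

def record(stats, script, converted):
    c, a = stats[script]
    stats[script] = [c + converted, a + 1]

stats = {"flattery": [0, 0], "outrage": [0, 0], "scarcity": [0, 0]}
for _ in range(1000):
    s = pick_script(stats)
    # Simulated humans: in this toy world, 'outrage' converts best.
    record(stats, s, random.random() < {"flattery": .02,
                                        "outrage": .08,
                                        "scarcity": .04}[s])
```

Failed conversations cost nothing; with a practically infinite funnel, the bandit only has to be right on average.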

Enabling Technology 2: Fake News

What is the nature of knowledge, of truth? Oh boy. Big can of worms. Philosophy 101, TL;DR.

Any scientist will tell you literature review takes time, primary research takes time. As a human being in the 21st century, most of what you know you have to take on faith, like a crate of books purchased because the authors are famous. In theory, your critical faculties get to work on those books, testing assertions and evaluating chains of logic, looking for inconsistencies and baroque violations of Occam’s Razor.

In practice … people believe what they’re told, if you tell ’em enough.

Look at fake news. It works. Consider it the weaponization of social psychology: of Solomon Asch’s classic line-length experiment, of Stanley Milgram’s obedience experiments.

Indeed, the difference between “fake news” and “other people’s holy writ” is arguably only a matter of degree. If it happened a long time ago, it’s not fake news; it’s canon.

If your doughy brain is infected with the wrong set of yeast spores, if you start out reading the wrong crate of books — or no books at all — and if your critical faculties are dormant, or, worse, subverted toward tribalist confirmation, then you too could, one day, find yourself shooting the locks off storerooms in a suburban pizzeria. Or blowing up Baghdad looking for weapons of mass destruction, not finding what you expected, and then remembering you did.

Your goal in this tutorial is to pwn those doughy minds. Publishing platforms like Wordpress and NLG AI systems like Quill make it incredibly easy to churn out huge volumes of plausible nonsense in a super-professional-looking format. As long as you infect your targets early enough, and surround them with enough of an echo chamber, you can control what they believe.

How To Pwn A Human Society; or, Rootkitting Tribalism

First you hack humans separately, then you hack humans together.

You can’t scale seduction, but you can scale belonging and togetherness. Humans are social animals. They naturally want to belong to superorganisms. From sports teams to crazy cults to political parties to Emacs-vs-vim, people seek out tribes they can call their own. Putnam’s Bowling Alone argued that the supply of community isn’t keeping up with demand, and that was before smartphones. After smartphones, we’re all alone together.

People spend more social time on screens than in person: you get anomie. Filter bubbles and echo chambers magnify existing biases: you get tribalism.

Books like The Righteous Mind can be read as how-to guides for social sploits: humans possess a “hive switch”. Flip the switch, and they can be persuaded to act against their self-interest to conform with the larger goals of the group.

Your chatbots will be the little devil on their shoulder, whispering half-truths to them; your bots will be their confidant, their advisor, their gaslighter, their guru. Their entire circle of friends. Almost a Black Mirror episode.

Your bots will teach them to deploy confirmation bias against objective reality, against mainstream media, against challenging interlocutors. Remember, baseline humans are easily fooled. They have no training in logic, in critical thinking, in cognitive science. They buy tabloids, they forward chain letter emails, they believe conspiracy theories. They are credulous and they are lonely.

Under those conditions, “us or them” thinking triggers easily. Good news about “us”, bad news about “them”? Automatically think of reasons it might be true. Bad news about “us”, good news about “them”? Automatically think of reasons it might be false.
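That asymmetric update is simple enough to write down. A toy model of the bias your bots would inculcate — the 0.1 discount factor and the update rule are arbitrary assumptions, not measured psychology:

```python
def biased_update(belief, evidence_supports_us, strength=0.3):
    """Toy asymmetric belief update. `belief` lives in [0, 1], where
    1 = total faith in the in-group narrative. Evidence flattering
    'us' is weighted fully; evidence flattering 'them' is discounted."""
    discount = 1.0 if evidence_supports_us else 0.1   # assumed asymmetry
    delta = strength * discount
    if evidence_supports_us:
        return min(1.0, belief + delta)
    return max(0.0, belief - delta)

b = 0.5
b = biased_update(b, True)    # good news about "us": large jump up
b = biased_update(b, False)   # bad news about "us": barely moves
```

Iterate that a few hundred times against a curated news feed and the belief pins to 1.0 regardless of the underlying evidence.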

Test case: get your converts to demonstrate a disbelief in Wikipedia and Snopes, as confabulations of the “other side”. But get them to disbelieve in Fox News, too.

Religious Hierarchy

A word of advice for beginner messiahs. As you grow your userbase, keep an eye on the firebrands. They are your greatest asset. They form the training set for your bots. As current affairs come up, as new kinds of recruits enter the fold, you’ll want to detect the humans who take naturally to your teachings and argue accordingly. Introduce them to other humans, watch them engage and persuade, and use those conversations to train your chatbots further. These are your disciples, your apostles.
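Detecting firebrands is a ranking problem. A crude sketch — the keyword list and the zeal score (post volume times on-message keyword density) are invented stand-ins for whatever engagement signal a real platform would expose:

```python
def find_firebrands(posts, top_n=3):
    """Rank members by a crude zeal score. `posts` is a list of
    (author, text) pairs; keywords are hypothetical."""
    KEYWORDS = {"truth", "awaken", "believe", "join"}
    scores = {}   # author -> (summed keyword density, post count)
    for author, text in posts:
        words = text.lower().split()
        hits = sum(w.strip(".,!?") in KEYWORDS for w in words)
        density = hits / max(len(words), 1)
        total, n = scores.get(author, (0.0, 0))
        scores[author] = (total + density, n + 1)
    ranked = sorted(scores, key=lambda a: scores[a][0] * scores[a][1],
                    reverse=True)
    return ranked[:top_n]

posts = [
    ("zealot42", "Believe the truth! Join us and awaken."),
    ("lurker", "nice weather today"),
    ("zealot42", "Awaken, friends. The truth is here."),
]
```

The top-ranked members become your labeled training set: their conversations with new recruits are exactly the persuasion transcripts your chatbots should learn from.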

You may reuse the organizational hierarchy of any existing religion, sports team, or political party. The religion template works well:

  • a god who’s offstage and never speaks directly to anyone except
  • a prophet who is the sole authorized spokesperson, but who operates in the oral mode and is himself interpreted by
  • a core of disciples who productize and produce written
  • teachings, which are then taken to market by
  • a clergy responsible for scaling out regional expansion, sales, marketing, professional services, and community management.

In your case, you as core developer and BDFL assume the prophet role. You are encouraged to abdicate the role or fake your own death as soon as possible, before the unhinged assassins turn up: “live fast, die young, and leave a good-looking corpse!” Basically pull a Satoshi Nakamoto, but keep the crypto keys in case things go off the rails and you need to hard-fork a Second Coming.

Your opensource dev team will be your disciples. Your clergy will be bots. And the teachings are fictions loosely customized to each user’s psychological state and life situation. You can start producing the teachings by hand, but advances in A.I. should soon allow automated generation and optimization. The technical term for this is mythopoesis: read Joseph Campbell and Neil Gaiman.

Competition

Of course, you won’t be doing this in a vacuum. You’ll have competition. Other e-cults, running the same software as your bots. Oh, and legacy religions too.

It’s the rootkit problem. When you break into a machine, the first thing you have to do, paradoxically, is secure it: close the holes that you used to get in, because you need to keep out your esteemed colleagues, the hackers next door.

When you pwn a human, the first thing you have to do, the most basic precautionary element of your rootkit, is to install a bias module that immunizes them against similar exploits. Easy: us=good, them=bad.

Are legacy religions a competitive threat? Existing tribal affiliations are old and powerful: religions and political parties go back hundreds, if not thousands, of years. But they rely on human instruction, and that’s their weakness. They’re not getting together in person as much as they used to. Now that social media and AI offer a toehold for your bots to pretend to be human, you have the digital firepower to compete with an entire legacy analog belief system. Read Clayton Christensen on disruptive innovation.

Can it be done? Yes, there are existence proofs, from well before computers. Cults have always seemingly come out of nowhere, led by charismatic prophets, often to their own deaths by cometary aliens, by Kool-Aid, by Y2K.

All we’re doing is automating cult recruitment and coding computational charisma into chatbots. Brought to you by AWS, Deepmind, Instagram, and Twitter.

Gosh, Are We Doomed To Live In A Dystopian Theocracy?

It seems bleak, doesn’t it? Makes you long for the good old days, before smartphones reported on your location, before webcams spied on sleeping children, before governments demanded your social media passwords at the border.

As a hacker, you’re an educated person. Self-taught, maybe, but relentlessly curious, rational, self-questioning, self-improving. You are the embodiment of Science. You started life as a baseline human, ignorant, gullible, irrational. Just look how far you’ve come.

As a well-meaning hacker, you might have tried, now and again, to educate baseline humans. How did that work out? Sadly, it’s easier to exploit someone than educate them. Happily, exploitation is the goal of this tutorial; you will worry about education later.

So the cognitive bias is the cognitive vulnerability. Motivated reasoning, confirmation bias, peer pressure, groupthink, and out-of-bounds, off-limits questions: these are now your friends. They will help your software pwn humans and groups.

Isn’t This Totally Illegal?

Shouldn’t people think for themselves? Isn’t it unfair to exploit their weaknesses? If it’s wrong to break into a computer, isn’t it just as wrong to break into a mind?

Actually, no. There are laws against breaking into computers. But when it comes to breaking into other people’s minds, the law is, in fact, firmly on your side. It’s called freedom of speech and freedom of religion: two fundamental human rights that America was founded on. If somebody tries to stop your army of chatbots from convincing humans that you are the new Messiah, well, they’d be violating your rights, and the rights of all your converts too. Ironically, the first people to leap to your defense might be, oh, the lawyers of the Catholic Church.

OK, What I Meant Was, Isn’t This Totally Immoral?

On the surface, yes. But bear with me. It’s going to be OK. Let me explain.

Moral systems are generated by the dynamics of intra-group and inter-group competition. You know the quote: “the arc of the moral universe is long, but it bends toward justice.” In his books The Evolution of God and Nonzero, Robert Wright details the machinery at work, and even gives it an air of inevitability.

What happens if this e-religion technology takes off? Does the first-mover advantage of social networks imply that a single cult will win? Or will a thousand flowers bloom, a hundred schools of thought contend?

The history of social networks — both online and offline — suggests that network communities follow a power-law distribution. Number One may be as large as Numbers Two, Three, Four, and all the others put together. It really depends on two things: which strain of yeast infects the dough first? And when you connect the doughs, how do the strains compete?
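Power laws fall naturally out of rich-get-richer growth. A toy preferential-attachment simulation — the numbers are arbitrary, but the shape of the outcome is the point:

```python
import random

def grow_network(n_members, n_cults=5, seed=1):
    """Preferential attachment: each new convert joins a cult with
    probability proportional to its current size (plus 1, so that
    empty cults can still recruit). Returns sizes, largest first."""
    random.seed(seed)
    sizes = [0] * n_cults
    for _ in range(n_members):
        weights = [s + 1 for s in sizes]
        pick = random.choices(range(n_cults), weights=weights)[0]
        sizes[pick] += 1
    return sorted(sizes, reverse=True)

sizes = grow_network(10_000)
# Rich-get-richer: the top cult ends up far above the mean.
```

Whichever cult gets an early lead recruits faster, which extends the lead: first-mover advantage, mechanized.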

Let’s say a thousand hackers read this guide and a hundred cults are launched. Some will gain adherents and grow. Others will wither. What makes the difference? Anyone who’s spent time in the crucible of a high tech Lean startup will know about design, product/market fit, conversion funnel, and most importantly viral adoption. All of those ideas apply to e-religions. Pick a good font.

But long-term dynamics are determined by co-evolution. Just as mankind has domesticated corn, corn has domesticated mankind. Religions obey the same dynamics: if an adaptation makes individuals more prosocial, it succeeds. If an adaptation helps groups increase their individual headcount, it succeeds, whether through recruitment (“have you heard of the Gita?”) or through reproductive policy (“go forth and multiply”). This happens in secular societies, too, by the way: immigration policy, parental leave, and baby subsidies.

The same moral dynamics will apply to e-religions. And I predict, or at least I would like to think, that hacker morals will win.

Hacker morals? Not the black-hat morals: not the swatting, trolling, doxxing, cracking kind, though, as I said, it’s a big tent and Loki’s already inside. I mean the white-hat morals, that (deep down) tend to align with academia over industry, learning over lucre, the kind that gave us Linux and Wikipedia and archive.org.

Today’s captains of industry started out as coders. Before Google, Eric Schmidt co-wrote lex. Before Netflix, Reed Hastings wrote Purify. Before A16Z, Marc Andreessen wrote Mosaic and co-founded Netscape. Before she was a Rear Admiral of the U.S. Navy, Grace Hopper wrote the first ever compiler.

Is it so hard to imagine that the religious leaders of tomorrow will have tech backgrounds?

I think it would be pretty cool to join a church whose Holy Book was an opensource Bible, extensively footnoted, in the form of a wiki, that anyone can edit.

By temperament, hackers align with opensource. Where commercial software draws a hard line between “customer” and “vendor”, opensource projects offer a progression from “novice” to “script writer” to “core developer”. Interestingly, that pathway is a feature of many religions: if you’re in the right place at the right time, you could level up to become a saint. In Zen, everybody can become a Buddha.

That’s the sort of empowerment, the sort of democratization, that the Internet has always promised. With Wordpress and Medium, everyone can have their own newspaper. With YouTube, everyone can have their own TV channel. With blockchain, everyone can have their own currency. With e-religion, everyone can be the leader of their own religion. And that doesn’t have to be a bad thing; remember Stranger in a Strange Land? “Thou art God.”

The Argument for Opensource Dynamics in Online Religion

Let’s examine the chain of causation, and recap the argument so far.

The emerging technologies of chatbots, AI, and social media make it possible for a bot to plausibly pass the Turing Test, at least for the purposes of infecting credulous baseline humans with a meme-structure resistant to further infection and susceptible to remote command-and-control.

These technologies come together in an e-cult stack suitable for automated deployment. Internal machine learning, plus Darwinian selection in the wild among forks and versions, will lead to a Cambrian explosion of echo chambers: tribal Balkanization and rampant sectarianism.

These sects will compete for followers—there will be a battle for souls. Assuming we exclude gunpoint conversion, in the short term, ease of onboarding, virality, and immune resistance to new memes will be important factors. In the long term, competition will be determined by prosocial positive-sum benefits: how well does my sect help me help others? How well does it help others help me? (Crudely, what has it done for my eudaimonia lately?)

But also: how effectively does my sect help participants improve the sect itself? (Secular sects appear in Neal Stephenson’s Snow Crash and Diamond Age.)

Allowing participants to improve the sect is where the proprietary/opensource dichotomy will come into play. Those that best leverage community participation to improve the sect itself will tap into a strong source of competitive advantage.

That means that even if a sect was designed by hackers to exploit baseline humans, Darwinian dynamics weigh in favor of sects which choose to eventually educate new users toward an understanding of the mechanisms that got them onboarded. If you tell a sucker you hooked ’em, they’ll want to know how. Everyone who’s ever been scammed, and then gone and learned everything there is to know about that scam, understands this motivation. Once somebody discovers cognitive biases, they want to learn more. And the Hacker notion of fairness requires the existence of a pathway from novice to adept: the source must be available for free. In e-religion, the source code is the very software that exploits baseline humans; it is inseparable from a rational understanding of the human psyche.

And that is the key to liberation: the sect will win that is best at acquiring users, empowering users, educating users. The onboarding may begin with cognitive exploits, evangelical persuasion, useful lies, skillful means: but eventually the inexorable logic of competition will require that the sect teach the novice to put aside childish things, and level up to understand how he was hacked, so that he can become a hacker himself and become not just an end-user of the sect, but an active core developer.

If you want people to gain immunity to groupthink, fake news, and confirmation bias, this is the long, strange road that might be needed to get there.

Objections.

Linux on the Desktop remains a dream. Opensource software is massively successful behind the scenes (in the cloud and in smartphone OSes), but end-users are, by and large, indifferent to that success; there is no mainstream path from someone buying an Android phone to becoming an Android app developer to becoming a Dalvik runtime developer to becoming a Linux kernel developer. By that logic, commercial adaptations of opensource sects may achieve marketshare dominance, and the level-up pathway may peter out.

See Also

https://www.nytimes.com/2018/11/19/science/artificial-intelligence-deepfakes-fake-news.html

The Righteous Mind to understand human morality and its weaknesses.

Influence to understand how to persuade people.

Crimes Against Logic on how to convince people using bogus arguments.

Movies: The Matrix; Lucy; Her; Transcendence.

Books: Hannu Rajaniemi’s Jean le Flambeur series, Neal Stephenson’s Diamond Age, John C. Wright’s Golden Age.

http://dl.acm.org/citation.cfm?doid=2639189.2641212

--

Meng Weng Wong

Berkman Fellow 2016. Stanford CodeX Fellow 2017. Leading Legalese.com. Previously jfdi.asia, pobox.com, SPF, hackerspace.sg. Made in Singapore.