I happen to be a bit of a student of history. As such, I’ve noticed that what we are witnessing in the United States (and much of the West, as I understand it) is a case study in the breakdown of civil society and the emergence of civil conflict. I imagine that if you could see into the living rooms and street corners of the American, French, and Russian revolutions, or the American, Spanish, or English Civil Wars, you would find a lot that is similar to the current dynamic.
But this time something new is happening. This time, huge portions of the conversation (perhaps the majority) are happening online. And, it seems, the entire process is rolling out in what feels like “fast forward.” This is interesting. And, of course, in this most dangerous of times, extraordinarily dangerous.
I’m wondering: Isn’t this precisely the kind of data set that you would need to train a deep learning AI on something? Are we in the process of producing a mechanism whereby anyone who wanted to (and had access to the right portion of the available information) could train an AI to weaponize civil conflict?
Step back and take a look at civil society as a social graph. You have people in relationship with each other, and you have their communication flow both mediated by and modifying these relational dispositions. Now imagine analyzing this social graph, the states and dispositions of relationship, and the flows of communication from the perspective of “civic binding.”
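To make the picture concrete, here is a minimal sketch of that model: people as nodes, relationships as weighted edges, and messages that both travel along an edge and modify it. Everything here (the class name, the additive update rule, the example names) is an illustrative assumption, not a description of any real system.

```python
from collections import defaultdict

class CivicGraph:
    """Toy social graph: edge weight stands in for 'civic binding'.
    The additive update rule below is an invented simplification."""

    def __init__(self):
        # Undirected relational disposition between two people.
        self.affinity = defaultdict(float)

    def connect(self, a, b, weight=1.0):
        self.affinity[frozenset((a, b))] = weight

    def communicate(self, sender, receiver, valence):
        # A message is mediated by the relationship AND modifies it:
        # positive valence strengthens the tie, negative valence weakens it.
        key = frozenset((sender, receiver))
        self.affinity[key] += valence
        return self.affinity[key]

g = CivicGraph()
g.connect("alice", "bob", 1.0)
g.communicate("alice", "bob", -0.6)  # a divisive message weakens the tie
print(round(g.affinity[frozenset(("alice", "bob"))], 2))  # 0.4
```

The point of the toy is only that communication and relationship state are coupled; any real analysis would need far richer edge semantics than a single scalar.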
Use semantic analysis and clustering algorithms to identify the emerging “teams.” Notice, for example, that virtue signaling, echo chambers, cognitive bias, etc. all make this rather easy — most communication of this sort exists more or less only to identify and bind tribes. In the past few weeks, for example, the “conversation” around gun control has been largely nothing more than a signaling mechanism for which team you are on. If you like “sentence A or article B,” you are on team X; if you like “sentence C or article D,” you are on team Y. Any AI worth its salt should be able to make easy sense of all this.
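Because so much of this speech is pure team-signaling, even a crude approach gets you surprisingly far. A minimal sketch, assuming a hypothetical corpus of (user, post) pairs and invented signal phrases — a real system would use embeddings and community detection rather than a hand-built phrase table:

```python
from collections import defaultdict

# Hypothetical corpus; users and phrases are invented for illustration.
posts = [
    ("u1", "common sense gun safety now"),
    ("u2", "shall not be infringed"),
    ("u3", "common sense gun safety now"),
    ("u4", "shall not be infringed"),
]

# 'Sentence A' -> team X, 'sentence C' -> team Y, per the text above.
SIGNALS = {
    "common sense gun safety": "team_x",
    "shall not be infringed": "team_y",
}

def assign_teams(posts):
    """Cluster users by the tribal signal phrases they repeat."""
    teams = defaultdict(set)
    for user, text in posts:
        for phrase, team in SIGNALS.items():
            if phrase in text:
                teams[team].add(user)
    return teams

teams = assign_teams(posts)
print(sorted(teams["team_x"]))  # ['u1', 'u3']
```

The hand-built phrase table is doing the work a clustering algorithm would do unsupervised; the argument is that the signaling is so stereotyped that either works.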
Watch how these relationships change and develop in the context of communication. Take keen notice of things like when someone severs a relationship (“unfriend,” “block”) after some notable communication (a lot of this has been going on in the past week). At a human level, we would understand this as someone discovering that their Facebook friend is a “gun nut / grabber” and, therefore, unfriending them. At a machinic level, what we see is a package of semantics flowing across relationships that results in the asymmetric transformation of those relationships. If you want to weaponize civil society, this is the good stuff.
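Detecting that pattern from a platform’s event stream is almost trivial. A minimal sketch, assuming a hypothetical log of timestamped events (the event shape, window size, and payload names are all invented): find the messages that shortly precede the receiver cutting the tie.

```python
# Toy event log: (timestamp, kind, actor, target, payload).
events = [
    (1, "message",  "bob",   "alice", "wedge_topic_42"),
    (2, "unfriend", "alice", "bob",   None),
    (3, "message",  "carol", "dave",  "cat_photos"),
]

def severance_triggers(events, window=5):
    """Return payloads of messages whose receiver severed the tie
    shortly afterward -- semantics that asymmetrically transform
    relationships, i.e. the 'good stuff' described above."""
    hits = []
    for t, kind, actor, target, payload in events:
        if kind != "message":
            continue
        for t2, kind2, actor2, target2, _ in events:
            if (kind2 == "unfriend" and actor2 == target
                    and target2 == actor and 0 < t2 - t <= window):
                hits.append(payload)
    return hits

print(severance_triggers(events))  # ['wedge_topic_42']
```

At platform scale this brute-force scan would obviously be replaced by indexed joins, but the signal being extracted is exactly this simple.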
Now, if you are something like Facebook or Google (or anyone else who can access their data or similar data sets), where you can see hundreds of millions of people in billions of relationships making trillions of communications every day, it wouldn’t take long before you had a very nice handle on the dynamics: what are the natural social clusters, what binds them, and what breaks them down?
The next part is a bit harder, but rest assured, we are already close and lots of smart people are working on it: we need to fashion some plausibly convincing chatbots to start injecting new semantics into the social graph and evolving them until they shape the civil conflict in a desired direction. Really, just watch out for emerging “wedge issue” semantics, find the ones that the humans themselves are innovating that have the most impact on reshaping the social graph and then just upregulate them using your botnets. You don’t have to light the fires, just fan the flames.
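The “fan the flames” step reduces to a ranking problem. A minimal sketch, assuming hypothetical counts of how often each human-originated phrase appeared and how often it preceded a severed tie (all phrase names and numbers are invented): score phrases by how reliably they reshape the graph, and hand the top ones to the bots.

```python
from collections import Counter

# Hypothetical observations, e.g. harvested by something like the
# severance-detection pass sketched earlier.
appearances = Counter({"phrase_a": 100, "phrase_b": 80, "phrase_c": 50})
severances  = Counter({"phrase_a": 5,   "phrase_b": 40, "phrase_c": 10})

def wedge_score(phrase):
    # Fraction of appearances that ended a relationship.
    return severances[phrase] / appearances[phrase]

def pick_for_amplification(k=1):
    """Select the organically invented phrases that most reliably
    break ties, for the botnets to upregulate."""
    return sorted(appearances, key=wedge_score, reverse=True)[:k]

print(pick_for_amplification())  # ['phrase_b']
```

Note that nothing here invents content: the humans innovate the wedge semantics, and the system merely amplifies the ones that measurably work.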
Now, of course, what would be amazing would be if you could employ the “gig economy” to really rile things up. How hard would it be to create a sort of “Uber for protests”? Surely an AI could use existing services to print out a bunch of pre-tested protest signs and arrange for them to be delivered to a location. I imagine that it wouldn’t be much work to empower an AI to post ads on Craigslist (etc.) to gather people and hand out those protest signs. At a minimum, this would increase the impact and “semantic coherence” of protests. Actually, now that I think about it, I would be a bit surprised if these sorts of bots didn’t already exist.
And I bet we aren’t very far at all from AI that can identify and empower “leaders” who are particularly well suited to divide and conquer tactics. People aren’t really that complicated, after all. Particularly when we have been riled up and activated into aggressive tribalism and a mob mentality. Watch.
Yep. If this isn’t already in place, it seems pretty much inevitable — so long as meaningful populations spend meaningful time connected over weak social links and continue to allow themselves to be yanked about by the news of the moment (fake or otherwise; the distinction means little), rather than learning how to think (as opposed to simulating thinking) and building their own deep capacity to respond to the real world.
We can, of course, try to regulate the social media platforms and perhaps the AI — but be mindful that the collective intelligence necessary to figure out how to do this, and then to actually execute on it, seems rather lacking. It is also precisely the sort of thing that weaponized civil conflict would be well suited to disrupt and undermine.
I’ll have to add this to the list of the problems with social media: https://medium.com/@jordangreenhall/what-is-the-problem-with-social-media-5ec873f7a738