Counting Consciousness, Part 2

Power and Security in the 21st Century

Connor Leahy
30 min read · Jul 15, 2019

A little while ago, I wrote this essay and this follow-up, where I shared my thoughts on GPT2 and what it means for security and trust. The title contains the somewhat strange phrase “Counting Consciousness”, which, you may have noticed, I never truly address in the text. That’s because the original essay was meant to be much longer and I cut most of that discussion, but the title was too cool to change; forgive me.

Unfortunately, in this post I still won’t get around to fully explaining what I mean by “Counting Consciousness”. It turns out I have a lot of thoughts in my head that want out, and it will just take a little longer to get there. Instead, I want to add a lot of the context that was missing from the first post. I’m going to talk about security, warfare, internet manipulation and a bit of my own story. I usually keep myself out of my arguments, because they should stand on their own, independent of the person making them, but I do like to bring in my own history and personality when I think it helps make an interesting point or tell a compelling story. My hope is that this story can raise some interesting questions, clarify some of the arguments I’ve made and provide a more holistic explanation for why I have acted as I have.

This essay is a continuation of and elaboration on the thoughts in my first essay, which you should read first for context. If you disliked the style of my previous essays, you probably shouldn’t attempt this one. Lots of enthusiastic philosophizing, rambling tangents and mildly quixotic storytelling lie ahead.

What is security?

Human communication tends to work through the intermediary of words. Unfortunately, words don’t have universal meanings; they are tools we use to “carve reality at the joints”. We want words to group together concepts that are similar, even when those similarities aren’t immediately obvious (such as fire and breathing). So, for us to have a productive discussion instead of getting lost in pointless word games, I want to start by defining some of my terms. I’m not claiming these are some kind of holy, true definitions, just that they are useful for the concepts I want to talk about. So whenever I make an authoritative statement like “X is Y”, imagine a little “by my personal definition, for the purposes of this essay” disclaimer attached to it.

The first word I want to define is power.

I define power as “the degree to which you can impose your will on reality”. If I have a lot of power, I can cause more complex changes to occur, in more situations, over longer times. Power is always defined relative to a context. I might have a lot of power over a narrow domain (e.g. my opinion might have a lot of influence on my close friends), or little power over a wide domain (e.g. my single democratic vote in my country has a non-zero, but small, influence). I might have temporary power (e.g. directly after getting a lot of media exposure), or power conditional on outside factors (e.g. being reliant on powerful supporters whose loyalty could shift). I am not going to try to find a truly universal definition of power, though one might theoretically be possible in some vague way related to information/entropy (something like “the cumulative number of bits of information required to describe the interventions I can cause”, maybe). In general, we as humans have a pretty good intuitive sense of what power means, and I’m going to mostly rely on that. This is a philosophical essay, not a mathematics paper, so I can do that.

The next term I want to define is security.

Security is an interesting concept. Again, we all have a pretty good intuitive idea of what security means. It’s walls and guards and locked doors; it’s defending against criminals and invaders. But there was always something that rubbed me the wrong way about defining security as defense. I could never quite formulate the thought, but it always felt weird to me that whenever I saw something truly aggressive and bad happening, like a government cracking down on its citizens, it was almost invariably described as “security”. Why do we go to war? National “security”. What do we call a company that sells the modern equivalent of mercenaries? A private “security” contractor. I used to think this was just a classically Orwellian trick of language, but I recently found a different definition of security under which it all suddenly makes much more sense.

I will define security as the configuration of power.

Think about it this way: You have power over the contents of your house; it’s your property. You wouldn’t want other (nefarious) people to have power over your possessions, for example by stealing them. But if your house were unlocked and unguarded, there would be a large number of people for whom the action “take your possessions” is totally within their power. Now security comes in. You reinforce the door, install a lock, and buy a dog. You’ve used the tools of security to reconfigure power. No longer is the action “take your possessions” within the grasp of many people (not out of everyone’s reach, but definitely less accessible). You have reconfigured the power others hold over your possessions.

This is security, and it perfectly explains why both offense and defense are part of security.

Your citizens are trying to take your power, as the glorious dictator for life, away? Seems like you need to “reconfigure power” with a little bit of tear gas. You don’t like those big companies keeping their juicy secrets behind closed doors? A bit of “offensive security” by hacking their servers and dumping it to WikiLeaks should do the job.

Security is neither inherently good nor inherently bad, it’s just the collection of methods we use to reconfigure how much power we and others have over various things.

I like Bruce Schneier’s saying that “security is a tax on the honest”. If everyone were perfectly honest, there would be no need for security. But I actually don’t think that’s quite true. For us to not need security, we would all have to be perfectly honest and all agree on the exact balance of power. As long as two people disagree on who should have what kind of power, even if both are perfectly honest, there is a need for security.

Let’s create a little syllogism:

  • People want things.
  • Achieving things requires power.
  • Different people want different things.
  • Achieving different things requires different kinds/amounts/configurations of power.
  • Not all things can be achieved simultaneously.
  • Therefore, people will disagree on how power should be configured, and will fight over it.

Expanding the arguments made

So far, I don’t think I’ve said anything particularly novel, though perhaps I’ve put it more explicitly than most people would. It’s no secret that humans will be humans and will fight over power; it’s the oldest story in history. What I want to talk about in this essay is how the security landscape has changed in the 21st century. This is (obviously) not exhaustive by any means, confined only to a few areas I am most familiar with and interested in. I’m no professional cybersecurity analyst or anything, but I hope I still have some interesting thoughts to discuss.

I already talked about some security concerns in my first essay (especially concerning attacks on the biological blockchain of trust). After having had more time to think, discussing with knowledgeable people, remembering some of my own past, and consulting one of the most valuable sources of wisdom, history, I now want to fill in what was missing from the first essay. One thing I won’t be doing is going into any detail about what I think can be done about all these problems; that will be discussed in Part 3.

Trust, again

We rely on trust, every day, all the time. I really think that no one (including me) truly grasps just how much we rely on it. It’s hard to appreciate just how terrible a world without trust would be. We would basically be living like chimpanzees (or worse, since chimpanzees actually have quite a lot of trust among one another). I find it hard to even imagine what a truly “trustless” society would look like, but here’s an amusing little anecdote from my younger years:

When I was…I don’t quite remember, somewhere between 13 and 15, I already knew I wanted to be a scientist (I had wanted to be one since I was 4 years old). One day I had some kind of “brilliant” idea for a treatment (I don’t remember what it was), and all I needed to do was run a clinical trial to prove that it worked. That got me thinking: how would I accomplish that?

I don’t like to think of myself as a particularly distrusting or cynical person, but I’ve always had a knack for imagining the absolute worst-case, paranoid-delusionally-bad situations possible. As a kid I had no idea how clinical trials worked, so I started to imagine an incredibly complex scheme to convince people my trial was real. I thought of elaborate measures: filming people in real time as they enrolled, digitally signing every action taken with timestamps, sending samples to multiple independent labs to cross-validate the results, livestreaming the administration of the treatment (again, digitally signed, of course), etc. The plan just kept growing more elaborate as I found more and more ways to “attack” it. Livestreaming is great, but how would people know my “patients” weren’t just hired actors? How would I prove that the liquid I was injecting was actually my drug and not some other substance?

The thought experiment completely spiraled out of control: no matter how many levels of extra security and verification I added to the process, I could still find some way of casting doubt on the experiment as an outsider. There was always an attack; I just couldn’t devise a perfectly trustless system. I think this was the first time I realized that it was simply not realistically possible to create such a “perfectly trustless” system for any kind of non-trivial purpose.
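
As an aside for the technically inclined: the core gadget my teenage scheme kept groping toward does exist. A hash-chained, timestamped log makes tampering with past entries evident, though, as I eventually realized, it still can’t prove the entries were honest in the first place. Here’s a minimal sketch (the names and structure are just my illustration, not any real trial-registry protocol):

```python
# Minimal tamper-evident log: each entry commits to the previous entry's
# hash, so rewriting history breaks every later hash in the chain.
# (It proves integrity of the record, not honesty of the recorder.)
import hashlib, json, time

def payload(entry):
    return json.dumps(
        {k: entry[k] for k in ("event", "time", "prev")}, sort_keys=True
    ).encode()

def append_entry(log, event):
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"event": event, "time": time.time(), "prev": prev_hash}
    entry["hash"] = hashlib.sha256(payload(entry)).hexdigest()
    log.append(entry)

def verify(log):
    prev_hash = "0" * 64
    for entry in log:
        if entry["prev"] != prev_hash:
            return False
        if entry["hash"] != hashlib.sha256(payload(entry)).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

trial_log = []
append_entry(trial_log, "patient 1 enrolled")
append_entry(trial_log, "dose administered")
assert verify(trial_log)
trial_log[0]["event"] = "patient 1 never existed"  # tamper with history...
assert not verify(trial_log)                       # ...and get caught
```

Note what this does and doesn’t show: the log catches retroactive edits, but nothing stops a liar from appending lies in real time. That’s exactly the wall my thought experiment kept hitting.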

Later, I learned how real clinical trials work: the scientist just claims they did what they say, and everyone takes their word for it, provided the claims seem plausible and their colleagues corroborate the story (I’m oversimplifying of course, but that is indeed pretty close to the real process).

I find this story interesting because, at least for me, it was a visceral realization of just how much easier trust makes everything. Not just easier, but fundamentally possible. Our modern medical and scientific establishments would be flat-out impossible if there were a large number of sociopaths with misaligned incentives who were perfectly comfortable with lying through their teeth (not that such individuals don’t exist, but their numbers are below a critical threshold).

Hacking Trust

I’ve had the displeasure of dealing with some sociopaths firsthand, without knowing of their nature beforehand. In retrospect, it was an incredibly enlightening experience. I learned firsthand the amount of damage even one truly dedicated liar can cause. It’s staggering. If you’ve never experienced the harm a single dedicated, glib and evil person can wreak, be warned.

I don’t want to get into the details of where and how exactly I interacted with these sociopaths; it’s a very long story, maybe for another day. But the important lesson is that the damage they can cause to an unsuspecting group can be phenomenally large. We were close to 100 people, most of us friends, having a good time working together on a fun project, and there were at most 1–3 real sociopaths among us. You might expect 1–3% of people being malicious to cause about 1–3% of the damage, but oh no, that is not the case. They lied, cheated, manipulated, bullied and fed on our goodwill like parasites. They pretended to be our friends, played us against each other, manipulated our emotions and guilt-tripped us with elaborate sob stories when we tried to fight back. A single bad apple can indeed spoil the whole barrel. What strikes me most about these people in retrospect is how they would do things that just felt…impossible for anyone to do. Things so socially taboo, so cruel, so petty that I never even considered the possibility of someone acting as they did. Their actions were so bafflingly abnormal that others and I were often left rationalizing that there had to be some logical explanation; no one could be so truly evil/petty (it’s been a recurring pattern in my observations that evilness and pettiness tend to strongly correlate).

But what we, and I in particular, ultimately learned was: no, they really were that evil/petty. And I mean, I should have known. I know how evil people can be. There are people who cheat on their spouses for 20 years and somehow sleep calmly at night. I’ve always been drawn to stories like these, of people doing things so phenomenally cruel that they baffle my perception of reality (I don’t think I have to get into serial killers and other extreme examples here; there are plenty of depressingly banal forms of evil). But despite knowing this, despite my already unusually paranoid personality, I still got duped; they still hacked my system and used it for their own ends.

If you’ve never dealt with a sociopath or abuser, it can be hard to fully appreciate just how manipulative they can be. I think the reason these kinds of tactics work is because they exploit attack vectors in our trust system, they hack our trust. And it’s often a shockingly vulnerable attack surface that can be manipulated with remarkable ease.

Over the years since those first encounters, I’ve thought a lot more about this. As I’ve said, I’ve always had a knack for paranoid thoughts, but I used to suppress them and was ashamed of my unusually paranoid demeanor. Over time, though, I’ve honed my paranoia to the point that I’m now starting to see it as more of an asset than a character flaw. Seeing the ways systems can be abused has always come naturally to me; it’s an involuntary reflex for me to think of lies and manipulations that I know I could get away with, and the real harm I could cause, if I wanted to. And I find that, to some degree, embarrassing, because I don’t want to hurt people. On the upside, I write some pretty kick-ass conspiracy fiction. In a way, I think my personality quirks have inclined me toward what Bruce Schneier calls the security mindset.

It took me a while to understand, but eventually it made sense why I was so drawn to offensive thinking: because the skills required for offense are the same ones required for defense.

Security Hackers: Offense and Defense

I recently finished a book called Cult of the Dead Cow, which chronicles the history of the eponymous hacker group (cDc). I found it an utterly engrossing read. Reading about the members’ early history as a bunch of edgy teens on BBSes, and their development into politically minded hackers, felt like remembering a childhood I never had. I was viscerally reminded of my own interests and personality as a teenager, which never quite developed the way the cDc members’ did, but could have (the only part I couldn’t relate to was the sex, drugs and rock’n’roll; while I do love extreme music, I’m not a very hedonistic person and I’d consider my drug consumption moderate for a student).

Many of the struggles and problems I have faced, especially surrounding the GPT2 controversy, have, as is so often the case in history, already been encountered in the past. cDc members were forerunners in the hacking community in responsibly disclosing vulnerabilities, getting big corporations to do the right thing and becoming politically active (they actually coined the term “hacktivist”). It was fascinating how, whether I knew it or not, I was in many ways re-living lessons cDc had learned two decades prior (I had read a little about them during my teen years, but overall didn’t know much about the group before reading the book).

One of the most enlightening pieces of hacker history is how companies first responded to hackers disclosing vulnerabilities. In hindsight, the reactions of corporations to hackers revealing how insecure their systems were seem bafflingly negligent and stupid to me (Microsoft was one of the biggest offenders here; luckily, they have greatly improved since then). Mostly, they wouldn’t care at all, especially if the vulnerability wasn’t disclosed publicly.

Imagine being in this situation: You’re a hacker and have discovered a huge problem that could put thousands of people at risk. You tell the company to fix it and they ignore you. What the hell do you do? You don’t want these people to be in danger from criminals. But you’re caught in a catch-22. If you release the vulnerability, you might be able to twist the corporation’s arm into finally fixing the flaw, but you are also revealing the vulnerability to criminals, possibly putting people at even more risk. I think this is a fundamentally bad situation, with no clear right answer (it’s also somewhat analogous, but not identical, to my GPT2 situation). And sometimes things got even worse! Some corporations actually sued hackers for finding their flaws. I find this behavior so stunningly short-sighted and infuriating that it’s hard to put into words. Imagine someone telling you that you left your house key in plain sight in front of your door and that you might want to put it somewhere else, and you sue them for it. Thank god we’ve (mostly) moved on from those dark times.

cDc and other hackers pioneered the concept of responsible disclosure. To this day, the protocol is to give the corporation advance notice of the vulnerability, then wait a certain predetermined period before releasing the details publicly. This was vaguely the kind of behavior I was trying to emulate with my own promise of releasing GPT2-1.5B on July 1st. Of course, as I now understand, my situation was not the same as the kind cDc faced in the 90s, and I don’t want anyone to think it was. There were no negligent corporations in my situation, only insightful and intelligent colleagues who were more than willing to help me find the right course of action.

The fundamental thinking error many entities made back then, and many (including corporations and governments) unfortunately still make today, is the idea that offense and defense are fundamentally different areas, and that you can work on defense without working on offense. I think it has become abundantly clear that this is simply not true. There is no “offensive security” and “defensive security”; there is only security (sure, there are some examples of purely offensive or defensive methods, but these are the exceptions that prove the rule). Nowadays, if you’re a big corporation, what is the best way to improve your defenses? You hire a hacker to attack you. This is the best (and arguably only) way to truly harden your defenses against real attackers.

In my last essay, I talked about the concept of the “curious hacker”. You might wonder why I was so insistent on always including the “curious” label. The reason is that the word hacker can describe a number of different kinds of people, some overlapping more than others. One of these archetypes is the “curious” subtype, which I talked about in Part 3 of my first essay.

Here, I want to define a second “subtype” of hacker, the “security hacker”: someone who is unusually talented at the skills required for security. This talent gives them offensive capability, but that very same capability is exactly what we need to improve our defenses. Hackers have an unusual mix of talents that lets them think up and exploit attacks others might not even imagine.

For reasons that are amusing to speculate about, curious hackers are often security hackers and vice versa. But the overlap isn’t perfect, so I think it’s useful to differentiate the two subtypes. I think most security hackers are curious hackers, but not all curious hackers are security hackers. Whenever I use the word “hacker” in this essay, I’m referring to security hackers unless otherwise noted.

(Cyber) War

I want to go out on a bit of a limb here and present an unusual proposal for rethinking what war really is.

As the Prussian general Carl von Clausewitz famously observed:

“War is the continuation of diplomacy by other means.”
- Carl von Clausewitz

And I think that’s true. I also think we can put it the other way as well: Diplomacy is the continuation of war by other means.

This raises the question: if diplomacy and war are fundamentally the same thing, what are they? My answer, given what I’ve talked about so far, should be clear: the configuration of power (on a large scale).

In the 21st century, war is still with us, though not in the same way as in, say, the 20th century, or even earlier ones. The reasons for this are many; personally, I think the concept of “gentle commerce” (the idea that, thanks to commerce, keeping my neighbors alive is more useful to me than killing them and taking their stuff) is one of, or even the, primary driver of this “new peace”. But if we consider diplomacy an alternative form of war, war quite clearly never ceased. And in these modern times, we are confronted with a new kind of warfare: cyber warfare.

I will be using “cyber warfare” in a pretty broad sense, to encompass “any offensive or defensive security measures involving information systems, used with the goal of affecting politics on a large scale”. This includes classic nation-state attacks on each other, but also online propaganda campaigns and hacktivism (state-sponsored or not). I believe many people outside the tech world (and many within it) do not fully appreciate just how high the stakes are in cyber warfare. As Bruce Schneier explains at length in his delightfully titled book “Click Here to Kill Everybody”, everything is turning into a computer: our home appliances, our cars, our power grids. It has become a threateningly real possibility for digital actors to cause real, physical harm on scales equivalent to, or even surpassing, more traditional weapons.

Now, I want to go even further and propose an unusual hypothesis: Every war is a special case of a cyber war. Allow me to explain.

Two quotes illustrate my point:

“Wars begin in the minds of men, and in those minds, love and compassion would have built the defenses of peace.”
- U Thant

“Battles are won in the hearts of men.”
- Vince Lombardi

Say you want to wage a war. What is the first thing you need to do? Get on your horse with a sword and ride against your enemies? Obviously not. If every war were one guy on a horse, wars would hardly be as scary as they are. No, the first step to starting a war is convincing more people to join your war effort. Then come all the other steps: preparing tactics, executing plans, etc. You need the power to orchestrate a war in the first place before you can start one to reconfigure more power. War is like any other security measure, albeit a very extreme one.

So why do I say every war is a cyber war? Because humans are ultimately information-processing systems piloting a meat-and-bone mecha suit. To get soldiers, you need to attack the programming of their minds. Why the hell would some peasant want to go to war in some far-off country? He just wants to tend his fields and raise a family. So as a leader bent on war, you need to attack this programming, through tactics like forced conscription or patriotic propaganda (or even more dystopian sci-fi methods yet to come). You need to hack your soldiers.

And if you think I’m stretching the definition of hacking, I’m really not. Social engineering has long been considered part of a security hacker’s toolbox, often their most powerful tool. Most high-profile hacks nowadays aren’t pulled off with some arcane zero-day exploit, but by social engineering: tricking someone into revealing their password. Remember:

“The biggest security vulnerability is sitting in front of the screen.”

The skills that lend themselves to technical hacking often also lend themselves to social hacking. I think the reason is pretty clear: hacking is all about having that security/attacker mindset, and the social system is just another system that can be attacked.

And it goes even further. As already mentioned, Bruce Schneier illustrates vividly in “Click Here to Kill Everybody” how everything is becoming a computer. Your car is a computer with an engine, a tank is an armored CPU on treads, an ICBM is a computer with a missile attached. This means even a purely digital attack can effectively become a physical attack.

Now of course you might question whether there’s anything to be gained from considering traditional and cyber warfare to be fundamentally the same thing. And I think there is, because it means that the lessons we have learned from the one can quite often be applied to the other as well.

What I am arguing here is that a security hacker is someone who is unusually talented at security, and that their skills don’t only apply to breaking into computer systems. Rather, I’m arguing that a hacker’s skills are unusually useful in warfare of any kind. And because of the equivalence of the skills needed for offense and defense, they aren’t only the kind of people we need to win wars, but also the kind we need to prevent them.

Hackers: The new super soldiers

The idea of a “super soldier” is as old as myth. From Hercules and Achilles to modern-day superheroes, there has always been something compelling about the idea of the superpowered soldier. And I think the reason, on an abstract level, is obvious: a super soldier is someone with an unusually large amount of power to affect the big things we care about (and wars tend to be fought over big things). We can define a super soldier as someone who has an unusually large influence on the outcome of a war or battle compared to the average soldier (and, to be really pedantic, we define a soldier as someone who is directly participating in a struggle for power).

Super soldiers have, to some degree or another, always existed: unusually strong (or lucky) warriors, smart (or lucky) tacticians and charismatic (or lucky) diplomats. I’d gladly argue that Bismarck (the most German person to ever live) was a super soldier (even though he “fought” mostly through diplomacy), able to accomplish much, much more than an average soldier could have.

It has always been in the interest of nations and other entities keen on winning big fights to find and exploit the abilities of super soldiers (I know this is getting dangerously close to “great man theory”; I’m not saying a few people had full control over the course of history, just that some subset had an outsized influence). For the most part, there was no clear way to identify super soldiers. They were diverse, talented and flawed in many different and often unique ways (that’s why they make for such good historical narratives). What I would like to argue is that the hacker has become a reasonably easy-to-identify archetype of the super soldier.

If what I propose is true and hackers, by their nature, are unusually talented in all things security, whether technical or social, that makes them almost by definition super soldiers. And nations are becoming very, very aware of this fact.

Hackers are often masters of “asymmetric warfare”. Imagine I have 1000 soldiers and you have 500, all equally well equipped. If we fought, this would be symmetric warfare, since we’re fighting on the same terms and can both make pretty symmetric analyses of our projected chances of winning. Of course, my chance of winning will be significantly higher, because I have more troops. But that is just because I paid the higher cost of fielding more soldiers. If our troop numbers were switched, I would be the one losing.

Asymmetric warfare is different: it’s when you can get an outsized benefit relative to your costs. Say your troops have some kind of unusual terrain advantage (for example, they are Finns on skis). Then even your smaller force might suddenly be able to deal with my larger one, as the toy model below illustrates.
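
To make this concrete, here’s a toy attrition model in the spirit of Lanchester’s square law (a classic model my argument doesn’t depend on; the numbers and the effectiveness multiplier are purely illustrative):

```python
def battle(a, b, eff_a=1.0, eff_b=1.0, dt=0.01):
    """Attrition model: each side's losses scale with the other's strength."""
    while a > 0 and b > 0:
        a, b = a - eff_b * b * dt, b - eff_a * a * dt
    return max(a, 0.0), max(b, 0.0)

# Symmetric: 1000 vs 500 with equal equipment. Numbers decide the outcome;
# Lanchester's square law predicts ~sqrt(1000**2 - 500**2), about 866 survivors.
print(battle(1000, 500))

# Asymmetric: give the smaller side a 5x per-soldier advantage
# (terrain, skis, a zero-day...) and the outcome flips.
print(battle(1000, 500, eff_b=5.0))
```

In the symmetric case the bigger battalion simply wins; give the smaller side a large enough per-soldier multiplier and the outcome flips. Asymmetric tactics are about finding those multipliers cheaply.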

Computer hacking is often the epitome of an asymmetric advantage. If I have a zero-day exploit for your system, I can cripple it without having to fear any kind of direct retaliation. This is also exploited by weak actors such as activists, sometimes allowing them to challenge entities much, much more powerful than themselves (say, Anonymous during Operation Payback). A single hacker could never take on a nation’s army hand to hand, but they might be able to cripple the whole country by attacking its power grid. And if they hide their tracks well, they might never be caught.

Terrorism works the same way. There is absolutely no way in hell that Al Qaeda, the Taliban or any other terrorist group has even the faintest chance of taking down an entity as powerful as the USA. And they know that. So instead of mounting a conventional war, which would be hilariously pointless, they wage an asymmetric one, using tactics that yield outsized returns on their efforts. The primary example is weaponizing media hype and fear. Far, far more damage has been caused by the USA’s response to terrorism than by terrorism itself. It’s like a fly angering an elephant in a porcelain shop: the fly can’t destroy the porcelain itself, but it might be able to enrage the elephant enough to do the job for it (credit to Yuval Harari for this metaphor).

Security through lack of creativity

There is one thing I find absolutely fascinating about terrorists: their stupendous levels of incompetence. I’m not trying to say that terrorists haven’t caused great harm, and I feel for their victims every bit as much as everyone else. But what strikes me as crying out for an explanation is why things aren’t much worse.

I can, off the top of my head, think of at least two methods of assassinating a major world leader that could be implemented by anyone with a B.Sc. (or an internet connection and lots of time), for which there are no defenses in place, and that would be almost completely untraceable. I won’t discuss the first method because of how disturbingly easy it would be (it’s been discussed relatively widely in the media; I’m genuinely surprised no one has done it yet). I won’t discuss the second because of how terrifyingly effective and undetectable it would be (and because explaining it makes me sound like a paranoid schizophrenic, even though the technology for it has existed in mainstream science for decades; no crank pseudoscience required). Besides, I want to focus on a different kind of security hole in this series of posts. With just a little bit of creativity and modest resources, it seems frighteningly easy to cause society to collapse, or at least billions of dollars of economic damage.

Obviously, I don’t want society to collapse; I live there! What I find so perplexing is that, at least to my mind (and those of some others), there are these gaping, fatal holes in our security, and no one has exploited them! The only thing I can attribute this to is that terrorists are astonishingly uncreative and stupid people (or there is a secret league of superheroes somewhere). This seems plausible, because if you had any sense in your head at all, why the hell would you want to collapse society? I can’t imagine that ever being in your best interest.

I often feel kind of ashamed for thinking up these genuinely terrifying schemes with which one could harm untold numbers of people. I don’t do it because I want these things to happen; I’m afraid of them! But the abilities to attack are the same abilities that help you defend. This knack makes it clear, at least to me, that a frightening number of catastrophic dangers are kept at bay by the mere defense that “it’s in no one’s interest to do that right now”, or the even weaker “those in whose interest it is are too stupid to figure it out”. Imagine a nuclear launch button just sitting in the middle of Times Square. Yeah, it’s in no one’s interest to press the button, but should the button really be there? I really, really don’t think so.

And sometimes, incentives are different, and someone exploits one of those holes…

Just add creativity…

And this brings me back to the original discussion: Disinformation on the internet.

I have, like many others, been on the internet for a while. I’ve watched many things happen, participated in some, laughed at others. But as the internet evolved, my usual hacker mind of course couldn’t rest. With the rise of social media, the attack vectors quickly became obvious.

I’d like to tell a really interesting story from a few years back. At the time, I was incredibly ill, to the point that I was in constant pain and could barely leave the house, or even my bed, and, according to every medical opinion, there wasn’t much chance of it ever getting better. I did get better anyway, after four years: by pure luck, I found a doctor who prescribed me a simple pill and everything went away. Funny how that kind of stuff goes. The lesson: never give up.

But that wasn’t actually the story I wanted to tell. Before I found the doctor who saved my life, I had to come to terms with the reality that I was probably going to be bed-bound and in pain for the rest of it. One of the things I really wanted was to find some way to be less of a drain on my family. Unable to go out and work a normal job, I turned to the internet to find out whether there was any way I could make money to support myself. This led me into the fascinating and at times shady world of online marketing and Search Engine Optimization (SEO; basically, manipulating Google into displaying your page higher in search results).

Online marketing and SEO seemed like things I might be able to do: they required only an amount of technical skill I definitely had, and were doable from my bed. It didn’t take long, though, for my research to slide from the “white hat” side of the industry into its seedier underbelly. I quickly found myself on a few very interesting forums catering to the more “semi-legal” methods of the trade.

I’d like to mention at this point that I never did go into SEO, and I never put any of the semi-legal or illegal methods I read about into practice. But even so, it was fascinating; I learned so much. A little while ago, an Alphabet-owned security team announced they had bought a small “propaganda campaign” from a Russian site, and it caused a bit of a stir in the media. When I first read about it, all I could think was: “That’s cute.”

On the forums I frequented, you could buy fake accounts by the thousands, coordinated Twitter campaigns, fake websites and traffic, mass-scale writing services, you name it. The service I found most noteworthy was one for getting your own article into well-known news sources (including the Guardian and other reputable outlets). For a nominal fee (from a few hundred to a few thousand dollars, depending on the quality of the site), you could get your story onto one of those sites as if it were a totally organic article one of their journalists had spontaneously decided to write.

I was so curious that I messaged a few of these sellers, posing as an interested buyer. They were very polite and professional, talking me through the process, the prices, everything. They even offered to write the article for me, for a small surcharge; I just needed to provide the topic and narrative goal. There were some limits on what they would post, of course. Mostly this service was for promoting new products or companies, not for pushing political messages (be skeptical the next time you see some new app or startup promoted in your news source of choice). But I’m pretty sure that for the right price and with the right seller, I could have gotten that service as well.

I had always had a healthy suspicion of anything I read, online or elsewhere, but this was a smoking gun. I could get onto a “real” news site, front page even, if I paid enough (one seller even had a system where you could pay for front-page featuring by the day), promoting anything I wanted, with no disclaimer about sponsorship or anything. At the time, I dreamed of having a few grand to spare, so I could get these corrupt journalists to post false stories and then expose them all at once. Alas, I had no such resources.

And it went so much deeper. I could get a hashtag trending on Twitter, priced per thousand tweets. I could buy endorsements from popular Instagram pages. In SEO, the main way to get your website a higher ranking is to have other websites link to yours. But Google is smart at detecting spam, so you can’t just have a ton of empty websites spamming your link. So, for a pretty reasonable price, I could have a whole network of blogs and websites created and populated with fake content, all for the sole purpose of placing a single link somewhere to promote my website to the Google algorithm. If I wanted, I could then resell link space on my blog network to other SEO people.
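
For those curious why a network of fake blogs is worth real money, here’s a minimal sketch of PageRank, the idea at the core of Google’s original ranking (the real ranking system is vastly more sophisticated, and this tiny graph is purely my illustration):

```python
# Minimal PageRank: a page's score grows with the scores of pages
# linking to it, which is why manufactured inbound links are valuable.
import numpy as np

def pagerank(links, damping=0.85, iters=100):
    """links[i] = list of pages that page i links to."""
    n = len(links)
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        new = np.full(n, (1.0 - damping) / n)
        for i, outs in enumerate(links):
            if outs:
                for j in outs:
                    new[j] += damping * rank[i] / len(outs)
            else:  # dangling page: spread its rank evenly
                new += damping * rank[i] / n
        rank = new
    return rank

# Page 0 is the target; pages 1-5 are a fake "blog network" linking to it.
network = [[], [0], [0], [0], [0], [0]]
print(pagerank(network))  # page 0's score dwarfs the others'
```

Running this, page 0 ends up with roughly five times the score of any single fake blog, which is the entire business model: manufacture inbound links, harvest rank.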

What really struck me were the numbers involved. I never realized just how much money you can make by being the top search result for “best mattress” or whatever. Being the top result for a good “keyword” linked to purchase intent, and then pushing some affiliate link or product, was incredibly lucrative. Even a tiny niche keyword could bring in a full-time salary if you did it right. What also disturbed me was that the sketchier something got, the more lucrative it became. While Amazon might pay you a few percent per sale, sketchy weight-loss products would offer tens or even hundreds of dollars for each sale you facilitated. You could become a millionaire just by having thousands of fake websites, accounts and links promoting some (probably ineffective or even dangerous) diet pill to desperate people. And those weren’t even the most unethical things I saw (I think a course promoting an ‘all natural cure’ for diabetes took the cake, or maybe the vast amount of disturbing adult content).

As I soaked all this in, more morally sickened by the day, it dawned on me just how exploitable the system was. Here, there were some actually creative people. Not many; most were as uncreative and clueless as in any other hive of scum and villainy. But there were pros who obviously knew what they were doing.

Social media became a massive joke to me. Thinking as an attacker, I could immediately see how to break the system. With modest programming effort and some cash, I could easily create thousands, maybe millions, of false identities. No need for identity theft or anything. It would be trivial to design software to keep track of all these accounts and automatically use them to create believable online identities (a deliberately toy sketch of the bookkeeping involved follows below). Hell, it would be pretty fun, honestly. I could write software to generate each person’s personality, daily routines and habits. Use procedural and machine learning methods to generate text, scrape websites for pictures, custom-code lots of simple narrative behaviors for the agents to engage in…hell, I could have them talk to each other, stage fake little dramas; there was no limit! I could, single-handedly, create an entire army of perfectly inconspicuous sleeper agents. And this isn’t even getting into the truly illegal methods, like leveraging botnets and stolen identity information. I could promote anything I wanted, commercial or political. I could create a fake controversy from nothing, promote myself to impressive online (pseudo-)fame, create the illusion that legions of people agreed with my every argument, and, if I were truly ruthless, there was all manner of sociopathic, manipulative (and highly effective) gaslighting I could do…
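
To show just how low the bar is, here’s that deliberately toy sketch of the bookkeeping such a system would need. Every name and field is an invented illustration, not a blueprint of any real tool:

```python
# Toy "persona" bookkeeping: the data structure a sock-puppet farm
# would track per account. Deliberately minimal and non-functional
# as an actual bot; it posts nothing and talks to no service.
import random
from dataclasses import dataclass, field

FIRST_NAMES = ["Alex", "Sam", "Maria", "Chen", "Priya"]   # illustrative
INTERESTS = ["gardening", "football", "baking", "hiking"]  # illustrative

@dataclass
class Persona:
    name: str
    interests: list
    active_hours: range                      # when this "person" plausibly posts
    post_history: list = field(default_factory=list)

def spawn_persona():
    """Generate one believable-looking sleeper identity."""
    start = random.randint(6, 14)
    return Persona(
        name=random.choice(FIRST_NAMES),
        interests=random.sample(INTERESTS, k=2),
        active_hours=range(start, start + 10),
    )

army = [spawn_persona() for _ in range(1000)]  # a thousand, at negligible cost
```

A thousand “people” in a few milliseconds, for free. Everything beyond this skeleton, the text generation, the pictures, the staged dramas, is just more engineering on top.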

Patching the hole

I had found one of those oh-so-gaping holes. And this is the primary one I discussed in my previous post in relation to the biological blockchain. Back in the day, a single person couldn’t have created an army of sleeper agents. But things have changed; this is asymmetric warfare. If I could do it, then others can and will.

When GPT2 came around, I thought of it in the wider context of this exact lesson. To me, it was just another incremental improvement on this already terrifyingly powerful security flaw. And it wasn’t even that useful an improvement, either. The bottleneck for this kind of mass disinformation wasn’t generating text; that was already doable to a totally acceptable level (for this purpose) using other, much simpler methods. The only bottleneck to this method was…I…I don’t know? (IP addresses, maybe? But those too are just a moderate capital investment, or a great use of a botnet.)

That’s what made this so important to me. I don’t see a bottleneck. This seemed like a perfect asymmetric tactic, with no obvious downsides. The flaw is so huge that there was, in my view, absolutely nothing stopping someone from exploiting it other than “security through lack of creativity”, and all I have to say are the words “Internet Research Agency” to prove that that security was definitely failing (I don’t want to pick only on the Russians here; I can’t imagine the US and other countries being above such tactics).

Ever since those days, it was crystal clear to me: the only solution is to patch. And the patch had to be applied to the human mind. This wasn’t something that could be fixed by patching some code; we as humans had to fundamentally change how we view online discourse and social media (see my biological blockchain idea and related concepts).

But for the most part, I let this fade to the back of my mind. I got healthy again, the world was open to me once more, and besides, it wasn’t like I could do anything about any of this, right? And then along came GPT2.

I hope it’s becoming clearer why I acted the way I did when I first wrote about GPT2. I saw myself as having a possibly one-time chance to tell a story and actually have people listen. I wanted to warn people that this isn’t something that will just go away. This is like the vulnerabilities cDc discovered back in the day, except this vulnerability is in human brains.

I still see the best, or at least most immediate, way to patch this attack vector as changing human behavior and norms. It’s not the only way; there are many technical solutions being attempted, like detecting fake accounts and messages (which, while to some degree necessary, I’m not a huge fan of, as you may know). But I see this as the kind of bug that cannot be patched with “business as usual” operation. Every method I consider plausible comes with significant changes in how we interact with these services.

I firmly believe that there is no easy solution, technical or social, to this problem.

I wanted to talk about some of my concrete suggestions for both social and technical patches, but this post grew much larger than originally envisioned. Let me just tease with the words “universal adoption of public key cryptography”; I will elaborate in my next post. (A minimal taste of the primitive involved is sketched below.)
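
As a small taste of what that phrase means in practice, here’s a minimal sketch of the underlying primitive, using Python’s cryptography package (the library choice and the message are just my illustration): only the holder of a private key can produce a valid signature, but anyone holding the matching public key can verify it.

```python
# Minimal public-key signing sketch with Ed25519: signatures bind a
# message to the holder of a private key; any tampering is detectable.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"This post was really written by me."
signature = private_key.sign(message)

public_key.verify(signature, message)  # passes silently
try:
    public_key.verify(signature, b"Forged text.")
except InvalidSignature:
    print("forgery detected")
```

One can guess how a primitive like this might bear on armies of fake accounts; I’ll spell out my actual suggestions in the next post.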

In Summary

In this essay, I’ve filled in a lot of the context missing from the first post. In particular, we’ve:

  • Defined security as the configuration of power.
  • Discovered the “security hacker” archetype and that the skills of offensive security are the same as for defensive security.
  • Considered the hypothesis that every war is a special case of a cyber war, and is therefore amenable to the same strategies.
  • Discussed how we rely on “security through lack of creativity” much more than we should be comfortable with.
  • Heard my story of learning first-hand how manipulable online discourse currently is.
  • Hopefully all come to the conclusion that this problem is serious, not new and not easy to fix.

The glaring omission in this post is my suggestions for “patches” that could help with this problem. I will discuss those in the next two posts: Part 3 will discuss the near- and mid-term future (where things are mostly still like they are today), and the final Part 4 will discuss the far future (including delightful topics like AI superintelligence and our species’ ultimate fate).

I’m always delighted to discuss these or any other interesting topics, and I greatly appreciate any feedback on my essays. Contact me on Twitter @NPCollapse or by email at thecurioushacker@outlook.com.
