Counting Consciousness, Part 3

The looming future we would rather ignore

Connor Leahy
18 min read · Dec 30, 2019

When I started writing this series, the impetus was my replication of GPT2. But the content isn't merely a result of me grappling with GPT2 and its implications; rather, it is the coalescing of many thoughts and ideas that have been in my head for a long time.

In Part 1, I explain not only why I wanted to release GPT2 and why I replicated it in the first place, but also lay the groundwork for many important concepts around trust and economics. In the interim essay, I discuss the real intentions of OpenAI, my misconceptions, and my decision to ultimately not release. In Part 2, I greatly expand my arguments about hackers, security, and the dangers posed by technologies like GPT2 on the web and beyond.

This will be a four-part series, with the final part releasing soon. In this part, I want to finally lay out what I think this is all building towards, and what we have to do about it. I will begin by laying out the threats I see on the horizon, and then I will suggest a moderately radical proposal for how the internet and society as a whole can respond to them (and get a much, much better internet in the process!).

Reading the first essay is mandatory; the interim and second essays are mostly optional. Go read it now, I'll be waiting here for you.

The Future

Predicting the future is an activity notorious for how badly it ages. Almost every prediction ever made was wrong. But not all of them.

I want to go out on a limb here with some predictions I am very, very confident of. I have no idea when these things will actually come to pass. It might be in a few decades, a few centuries, who knows? (If I had to guess I'd say "within this century", but don't make me bet money on it.) But what I want to discuss is independent of when it happens, because I think it has to happen. I'd even go so far as to say I cannot imagine a plausible future in which these things don't happen (other than the complete destruction of intelligent life by some calamity or another). In fact, I think what I'm about to describe should be blatantly obvious to anyone following the progress of technology and logically extrapolating from it.

But still, very few people discuss these topics seriously. There are surely multiple reasons, but probably the main one is "it's uncomfortable to think about and doesn't aid me in my present life in any way", which is a very powerful way to get Homo sapiens to stop thinking about something. Everybody has been guilty of this at one time or another.

So let’s recall some of the arguments from my previous posts:

  • Babbling is a kind of low(ish) level of communication based mostly on statistical pattern matching.
  • Lots of what humans do is babbling. (“How are you?” “Good. And you?”)
  • AIs have recently been learning to babble quite well.

Ok, but this still means that not all of human communication is babbling. Indeed, of course not. Humans are also capable of types of thought that still put us far ahead of our machines or other animals (possibly related to Judea Pearl's Ladder of Causation). While far from perfect (though there is a point to be made about bounded optimality here), humans are by far the most effective truth-generating engines known. We are capable of understanding astonishingly complex topics, sometimes with remarkable ease. (If you haven't worked in computer vision, you have no idea how hard vision is…)

Allow me to make three assumptions, and from these derive some important implications. If you disagree with any of these assumptions, that’s a discussion for another time, I won’t be qualifying the statements here.

  1. All human behavior and thought is fully describable by some Turing-complete computation.
  2. Computing hardware and algorithms will continue to improve until they hit some physical limit.
  3. That limit is still very far away, and the human brain is nowhere near it.

From these simple (and I think demonstrably true) statements, we can derive:

  1. At some point in time, any behavior a human can perform can be performed by a machine.
  2. At some point after that, any behavior a human can perform can be performed by a machine both better and more cheaply than a human can perform it.

If you disagree with either of these (very uncomfortable) conclusions, you must disagree with one of the three previous assumptions.

Let's take these conclusions as true. Just savor them for a moment. Instead of rushing to a moral judgement about whether this is good or not (which is not the goal of this essay), let's instead just calmly analyze its consequences.

The Dissolving Barrier

[Image caption: I could replace this whole section with this image, really. (source)]

Currently, there is a decently sharp line between what is and what isn't human. It's not perfect; tensions mount in areas such as whether or not animals should be considered conscious enough to deserve rights similar to humans'. But it's still decent. Humans do things other entities don't, and in 99% of cases it's pretty easy to determine whether a given blob of matter is part of a human or not. Again, not perfect (brain-dead patients, immortal cancer cell lines, etc.), but good enough.

Through most of history, the line was more than sharp enough for us to believe in human exceptionalism, because, for all intents and purposes, humans were exceptional. I don't think I'm treading new intellectual ground here in saying that this exceptionalism has been crumbling for quite a while. But even when we figured out that the Earth wasn't the center of the universe, or that animals had brains and emotions too, the realization didn't directly threaten or change how we went about daily life (except through second-order effects such as challenging dogma and creating new movements like animal rights). Humans still did all the labor, and we weren't threatened with destruction by the realization that baboons appreciate magic tricks too.

But this is different. As I said in a previous essay, "Morality is how the world ought to work, economics is how the world actually works." (I'm sorry I couldn't track down where the quote comes from; it's definitely not original.) The only thing stopping employers from employing something other than humans is availability and cost (though it has happened in the past). As NASA put it in 1965:

Man is the lowest-cost, 150-pound, nonlinear, all-purpose computer system which can be mass-produced by unskilled labour.

Those are, overall, some pretty nice properties to have in a system. Honestly, given the things a human can do, labor wages are a steal (so become an employer!). But don't be fooled. As we established in the previous section, at some point more efficient, cheaper systems will be available, and by the unyielding laws of economics, they will be adopted. Anyone who doesn't adopt them will be at a competitive disadvantage and eventually competed out of the market.

A lot of people tiptoe around taking things to their logical conclusion, or outright fiercely deny it, like the gentleman in the above comic. But let me be completely blunt here:

Humans will be, at some point in time, completely inferior in skills and cost-effectiveness to artificial systems in every possible way.

I think there is no reasonable way to avoid this conclusion, given the assumptions we have made so far.

AI Assault on Authentication

Well, we’re not quite there yet of course, but we’re moving there, and it’s worth thinking about what this means. We don’t know how gradual or abrupt this change will be. So far it’s been in fits and spurts. Agriculture disrupted a bunch, then things calmed down, then steam power came around, then electricity, then computers, etc etc. It doesn’t matter for our discussion, but if it’s somewhat gradual that means we should already be preparing to implement the changes needed to weather this transition.

A transition to where? I'll discuss that in the final section. For now, let's consider the immediate future and our original topics of online security and information.

Moving beyond humans changes things in fundamental ways; many systems don't work without the biological blockchain of reproduction (voting, most unlimited access to services, etc.). Many systems are built with unconscious assumptions about the constraints on the entities using them. Throughout most of history, the constraints were mostly the same: the user would be a human of some kind. Humans have a lot of variability, sure, but there were still clear constraints. No human can remember a 1,000,000,000-digit number, no human can run as fast as a racecar, no human can teleport through walls.

These were all important considerations. Imagine you’re building a bank vault and didn’t make the assumption that thieves can’t teleport. That would greatly change your design, wouldn’t it? Why even bother with walls?

Recently, some of our systems have already been dealing with the first constraint-breaking entities. Take email spam. It’s a reasonable assumption that no human could manually type and send a million emails in 5 seconds. But it’s not a reasonable assumption that a computer program couldn’t do just that. So our email systems can’t rest on that assumption and need to be designed around adversaries with this capability. This is a pretty obvious example to us, but it wasn’t at all obvious to early developers of email, which is why it’s such an insecure system! (Did you know you can just claim your email is sent by any address, no verification at all? Email just assumes everyone would be honest! There has been some improvement in this area, but it’s not part of the original email standard.)
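
To make that concrete, here is a minimal sketch (in Python) of just how little the original email standard asks of a sender. Everything here is illustrative: it assumes an old-style relay on localhost that accepts mail without authentication, which is exactly the world email was designed for.

```python
# A minimal sketch of email's missing authentication: SMTP happily accepts
# whatever "From" address we claim. Assumes an open relay on localhost:25,
# as in email's early days; modern servers layer SPF/DKIM/DMARC on top.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "president@example.gov"  # an unverified claim, nothing more
msg["To"] = "victim@example.com"
msg["Subject"] = "Totally legitimate email"
msg.set_content("The original standard simply trusts this header.")

with smtplib.SMTP("localhost", 25) as server:
    server.send_message(msg)  # no proof of identity was ever demanded
```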

So what is important for us to understand in this context is how our (often subconscious) assumptions about other entities are violated as technology becomes more and more powerful.

One approach is to design for these entities: build systems for both computers and humans. But this will often result in systems that aren't very usable for humans. The other obvious approach, adopted by almost every website you know, is to filter the humans from the non-humans and have a special "humans only" section where the constraints are again present. This is why registering on a website requires you to solve a captcha: so the site can be (reasonably) sure you're a human and give you a "human compatible" experience (instead of one designed to anticipate all the possible non-human options, which is both hard to design and most likely not very user friendly).

This is a problem of authentication. We want to authenticate that some user is a real human with all the constraints that that brings and not some kind of technological system. There are generally three kinds of authentication: “What you know” (e.g. passwords), “What you have” (e.g. a key) and “What you can do” (e.g. captcha).

So it should be pretty clear what (one of) the problem(s) with stronger and stronger AI systems is: they break capability-based authentication! If an AI can break a captcha, the captcha is no longer useful for differentiating humans from AIs. And it's not just website registrations that are being broken. The biological blockchain uses "what you can do": Can you speak? Write coherent sentences? Look like a human? Authentication (which is really a different word for "hard-to-fake signalling proving an assertion") is a fundamental issue that is everywhere.

The end of capability based trust

So if AIs keep getting stronger, to the point where they can do anything a human can do…what now? Using any kind of captcha to sort humans from machines is by definition impossible (and an AI could always pretend to be less capable than it is if necessary). How do we separate humans from technological systems like AIs? This, I think, is at the core of the issue.

At some point, we can’t verify whether a video is real, whether a recording is real, whether a picture is real, whether a text is real, or even whether a person is real. (This is already becoming a problem with technologies like Deepfakes.)

Once we reach this point, and we will sooner or later, we have to fundamentally rethink how we do…well, everything. Every piece of information, every interaction can be generated for an arbitrary, potentially sinister goal (probably just to sell you something though). Is this it? The death of trust? Will we all descend into collective algorithmically generated madness?

I mean, maybe, you never know, but perhaps not all hope is lost. Even in this post-captcha world, there are ways we know of right now that can reaffirm trust, maybe even make it stronger than ever before. But it's not going to be easy, and I'm not going to hand you a clear, practical solution. What I'm about to suggest is wildly infeasible by any standard metric of today. But we aren't talking about today. We are talking about a tomorrow where the stakes are a lot higher than they are right now, and we might have no choice.

The answer is that even if a computer can do anything a human can do, there are things neither a human nor a computer can ever do. And the obvious solution we’re building towards is cryptography.

Public Key Cryptography (PKC)

The whole point of cryptography is to find ways to secure information: authenticate who it's from, control whether you are allowed to read it, detect whether it has been tampered with, and so on. Modern cryptography can do amazing things that, before I learned about them, I would not have considered obviously possible at all (like letting two people find out who is richer without either revealing their net worth). In particular, what we need is public key cryptography.

[Image: Bob sending a secret message to Alice using public key cryptography]

Public key cryptography gives us an amazing set of tools. You start by generating two “keys” (small files containing really long numbers), a “public” key (imagine it like being your name) and a “private” key (imagine it like being your passport).

Like your name or address, you share your public key, well, publicly. You shout out into the world “This key is me!”. Your private key you keep a closely guarded secret. What this allows someone to do is encrypt a message for you with your public key, creating an unreadable message. Now, unlike with symmetric encryption, you can’t decrypt the message again with the public key! Instead, you need the private key. So as long as your private key is secret, you are the only one that can decrypt and read that message.
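
Here is what that asymmetry looks like in practice, as a minimal sketch using the Python `cryptography` package (the names `alice_private` and `alice_public` are just illustrations):

```python
# Anyone can encrypt to Alice's public key; only her private key decrypts.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

alice_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
alice_public = alice_private.public_key()  # this half is shouted to the world

ciphertext = alice_public.encrypt(b"Hi Alice, it's Bob", oaep)
# The public key cannot undo its own work; only the private key can:
assert alice_private.decrypt(ciphertext, oaep) == b"Hi Alice, it's Bob"
```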

By expanding this method some more, we can also do what is called digital signing. Given a message, we can use our private key to generate a little “signature”, which anyone can then verify as belonging uniquely to this message and your public key. No one else can create a valid signature for your public key without your private key, and changing even a single letter in the message will also render the signature invalid.
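
Again as a minimal sketch with the `cryptography` package, this time using Ed25519 signatures; note how changing even one word breaks verification:

```python
# A signature binds one specific message to one specific public key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"Connor doesn't like anime"
signature = private_key.sign(message)

public_key.verify(signature, message)  # returns silently: authentic

try:
    public_key.verify(signature, b"Connor LOVES anime")  # one change...
except InvalidSignature:
    print("...and the signature no longer verifies")
```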

This gives us exactly what we need: an unbreakable way to authenticate! (Assuming we get around the problem of quantum computers breaking current public key systems, which seems likely given the progress on post-quantum cryptography.)

By requiring a message to be signed, we can circumvent the problem of having to determine the source of the message purely from its content, because the author has to sign the communication. No matter how clever an AI, it won’t be able to fake such a signature.

Now, unfortunately, that's the easy half of the problem. While PKC allows us to be secure once we have set up our identities, it doesn't solve how to actually verify who or what is holding a key. It's a move to token-based authentication, "what you have": if you own a private key, you are that public/private key pair.

This has obvious problems. PKC can "sustain" trust within a system, but we have to "put trust into the system" first. And that trust has to come from somewhere. And this is where I finally get to the proposal I have been building towards…

Replacing the Biological Blockchain

To recap my arguments so far:

  • Humans and machines have different constraints on their capabilities.
  • Most systems designed for humans only work properly if they are actually used only by humans.
  • We need a way to authenticate humans (both as individuals and simply as members of the species) to provide such a guarantee.
  • All current ways of authenticating will be broken sooner or later.
  • Public key cryptography offers a way to move from capability-based authentication to a system secure against even the strongest AIs.

But PKC is only as good as what you put into it. If anyone and anything can just create new keys, that's nice for source verification but useless for authenticating humans. And even for source verification: if I see someone with a key claim to be the president, how would I know whether this is true? I can authenticate messages as being from the same source, but not what that source is.

As established in earlier essays, a perfectly trustless system is impossible. As hard as we try, we’re not going to get around some kind of initial investment of trust. What we need is something like a Web of Trust.

A Web of Trust

Using digital signatures, it becomes possible to make verifiable public statements. If someone says “Connor doesn’t like anime”, that may or may not be true. But if that message comes attached with my signature, it’s clear I endorsed that message! This mechanism can be used in all kinds of creative ways. I could for example sign messages like “I trust Alice” or “Don’t trust Bob”, and people caring about my endorsements could update their own trust based on this.
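
Here is a sketch of what such a verifiable statement could look like; the statement schema is purely my own illustration, not any existing standard:

```python
# An endorsement is just structured data plus my signature over its bytes.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

my_key = Ed25519PrivateKey.generate()

statement = json.dumps(
    {
        "issuer": "Connor",
        "claim": "I trust Alice",
        "subject_key": "<Alice's public key fingerprint>",
    },
    sort_keys=True,  # canonical ordering, so the signed bytes are reproducible
).encode()

endorsement = {"statement": statement, "signature": my_key.sign(statement)}
# Anyone holding my public key can now check that I really issued this claim.
```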

The way a Web of Trust (WoT) works is by having one or more initial sources of trust that can then issue statements of trust about various other people in the network. For example, there could be a master government key that verifies other keys as belonging to people of that nation.

Each individual could select which sources to trust, and how much. It doesn’t have to be a simple binary “trust/don’t trust” system. Maybe you trust any statement that at least 5 of your friends verify as true, or you weight them in different proportions. Maybe Alice is super trustworthy, and you believe anything she signs, but you want at least 3 votes from other friends. Maybe you don’t trust what the Chinese government says, but do if another government backs up (signs) the same claim.
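
A toy sketch of such a configurable policy; the weights and threshold here are arbitrary illustrations:

```python
# Accept a claim once the summed weight of its verified signers crosses
# a threshold. Weights encode "how much do I trust this source?".
def is_trusted(signers: set[str], weights: dict[str, float],
               threshold: float) -> bool:
    return sum(weights.get(s, 0.0) for s in signers) >= threshold

weights = {"Alice": 3.0, "Bob": 1.0, "Carol": 1.0, "Dave": 1.0}

print(is_trusted({"Alice"}, weights, 3.0))                 # True: Alice alone suffices
print(is_trusted({"Bob", "Carol"}, weights, 3.0))          # False: two friends aren't enough
print(is_trusted({"Bob", "Carol", "Dave"}, weights, 3.0))  # True: three votes carry it
```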

How exactly this WoT would be distributed is up for debate. Each source of trust could be a server you query, or there could be a central blockchain of all relevant statements or, more likely, many, many blockchains. I imagine a huge interconnecting web of blockchains, some big (governments or big signing authorities), some small (individuals).

I’ll be frank: What I’m proposing is that this kind of cryptographically backed trust will have to become utterly ubiquitous. What do I mean by that? I mean signing every single text message, every tweet, every youtube video, absolutely everything.

How could this look? As an example, social networks could require you to register with a public key. The network then checks whether this key has been vouched for as human by a trustworthy source like a government. If it checks out, you're registered. From then on, you automatically and transparently sign any posts you make. This could be accomplished, for example, with a client-side program that signs your messages with your secret private key before sending them to the social network. The social network then checks that the signature is valid and rejects the message if it isn't. Your private key never leaves your computer, and the network can be sure that only humans use it.
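
As a sketch of that flow (the payload format and the verification step are hypothetical, just to show where the key lives):

```python
# The client signs locally; only the post, the public key, and the
# signature travel to the network. The private key never leaves this machine.
import base64
import json
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # stays on your computer
public_key_b64 = base64.b64encode(
    private_key.public_key().public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    )
).decode()

def sign_post(text: str) -> str:
    """Sign a post locally before it is sent to the network."""
    signature = private_key.sign(text.encode())
    return json.dumps({
        "author": public_key_b64,
        "text": text,
        "signature": base64.b64encode(signature).decode(),
    })

payload = sign_post("Hello, verified world!")
# The network verifies `signature` against `author` and rejects mismatches.
```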

You wouldn’t have to show the whole signature on each post, it can be hidden away under a little checkmark next to a post or something that verifies the post was signed by the person claiming to have posted it (it’s important to still make that information accessible so it can be independently verified). The end user experience, beyond setting up the initial keys, is basically unchanged. You just have a button that shows you the signing stuff, which you can check if you’re a geek.

It can easily go beyond just text and just people! Cameras can sign what they film, video creators can sign their creations, all checked completely transparently by a huge web of invisible software tools.

But wouldn’t this destroy any semblance of anonymity forever, making the internet an authoritarian nightmare? Not at all! If you know me, you know I’m as big of a proponent of free and unrestricted speech as it comes. I grew up on 4chan, I like “unusual” blogs, I support political dissenters, I wouldn’t want it to all go away. There is a very simple way of dealing with this: Only require as much authentication for a key as necessary for your application!

For example, maybe Facebook only allows me to register if my key is vouched for by a trustworthy authority, while 4chan lets anyone post anything with no authentication whatsoever. That's fine; all we need is one more ingredient: a big red warning sign!

Imagine every piece of your software (your browser, your website, your email client, maybe even your file browser) naturally and quietly checking the signatures of everything you look at. When it sees something that isn't properly authenticated, it can tastefully make you aware of the problem, and then you can decide for yourself what to do about it. You see that a blog you like is marked as "not vouched as human"? Maybe you don't care, because you know the author is an anonymous human rights advocate! But when you see that same warning on a spammy-looking tweet? Great, away with it!

Introducing pervasive authentication wouldn't restrict our capabilities; it would expand them immensely. We could build spaces with clear rules and standards. We could say "this place is only for real people willing to put their real names on the line", like a public town square, while other places say "go wild and decide for yourself what's real!", a digital wild west like the heyday of Web 2.0. It would allow us to finally fulfill the promise of social networks as real(ish) expansions of public life, without requiring us to sacrifice the more anarchic joys of the internet. You can always have many keys, some more trustworthy than others. Key-based authentication doesn't harm anonymity; it only removes impersonation.

You could also use the vouching system for all kinds of other things: organizational affiliation, endorsements, warnings, anything. There could be a huge number of trust sources that verify all kinds of things. A company could verify who it employs, security researchers could flag suspicious keys, and accreditation bodies could verify that someone actually has the skills they claim to have (maybe you run a board where only people verified to hold an M.D. can post?).

There would surely be a whole new industry of companies that verify various things about a key, similar to how certificate authorities work today. Your key could become your social media profile, your resume, your ID. You could have your education verified by your university, your achievements by sports bodies or coding companies, anything! Each such signing company would be incentivized to be trustworthy, as otherwise its endorsements wouldn't mean anything. Each person could transparently select who to trust as easily as who to follow on Twitter.

If implemented effectively and rigorously, this system could create an amazing thing and maybe even let the internet live up to its true potential. Fact check sources with a few clicks, sign up to any site in an instant, say goodbye to (traditional) identity theft and impersonation (“why isn’t my friend using their usual key?”), and say hello to a whole new era of verified media and proper attribution.

This only scratches the surface of possibilities. Imagine the possibilities with smart contracts and other blockchain technology. I could gush about the potential for hours.

The gains in security would be immeasurable; it boggles my mind to even speculate how much damage this system could prevent. Security wouldn't be something you add on afterwards; it would be the default.

Tempering Enthusiasm

Now ok, it’s time to pull back and acknowledge that I understand how incomplete and idealistic this picture is. Are there problems with this scheme? Absolutely. Am I the first to suggest something like this? Absolutely not, security researchers and cypherpunks have been suggesting things like this for decades. Basically no idea in this post is original.

Creating a system like this would not only take a huge amount of collaboration between basically all major software vendors, but also huge changes in social norms and law. There are a huge number of unknowns here, and I am by no means qualified to solve them all.

But we’re already further along than you might think. HTTPS already embodies the majority of concepts here in a limited form. You have trusted keys (usually preinstalled on your device) of signing authorities, who vouch that the website you’re talking to is really who they claim to be. This is already a fully functioning WoT, and completely transparent! I’m basically simply proposing to expand that kind of authentication to everything we do.

Surely you’ve also come up with plenty of possible attacks on this system while reading. Some I have replies to, some I don’t. What happens if a key is stolen? (There needs to be some blockchain mechanism to recall a key after a given timepoint maybe?) What happens if governments or other authorities abuse their duties? (Unsolvable in principle other than making defecting more expensive than cooperating and let the market do its thing) What if a government revokes people’s keys? (That’s why there needs to be multiple sources of trust! And governments can already revoke passports, so this isn’t a new problem.) How do we get such a wide range of software developers to cooperate? (No easy solution, but not impossible, it has happened before)

I’m not saying I know how to do this and we should get going right away. What I’m saying is that we have to do something like this sooner or later, if we want to avoid a serious catastrophe (and reap all these amazing benefits!). Maybe there are other ways to address this I haven’t thought of yet (I’d love to hear your ideas if you have them), but I strongly believe that this is the way to go.

The biological blockchain will fall and we have to build its replacement.

I’m always delighted to discuss these or any other interesting topics, and greatly appreciate any feedback on my essays. Contact me on Twitter @NPCollapse or by email thecurioushacker@outlook.com
