Counting Consciousness, Part 4

Humanity for Humanity’s Sake?

Connor Leahy
Jan 31, 2020

It’s been a long journey, and in this final essay I want to finally explain what I mean by “counting consciousness”. I’m sorry it took this long, but I needed all that buildup to explain why what comes next is important. Really, I shouldn’t have had “counting consciousness” in the title all this time, but it was just too beautiful a phrase. This is definitely the most philosophical part of this series of essays, so be warned.

In this essay, I want to discuss what I think will be the biggest change to come in our society, why we shouldn’t fear it, and why we need to take it seriously. This isn’t a defense of consciousness as a concept; it’s a critique. It’s a warning about taking the term too seriously. At some point I’ll write an even more damning deconstruction of consciousness and qualia as concepts. But not today. Today I want to stay hopeful and optimistic, and trade a thorough philosophical analysis for a pleasant introduction to thoughts worth thinking about.

Reading at least essays 1 and 3 is greatly recommended if any of this is supposed to make sense. (Links to the whole series: Part 1, Part 1.5, Part 2, Part 3)

Human vs Not

In my last essay, I laid out my vision of a solution to many of the coming problems: widespread public key cryptography to authenticate who is and isn’t human (and much more). If you haven’t read the previous essay, you should really do so now.

Part 3 focused on the most practical aspects of the proposal. The what, the why, the how. It was intended to be naively practical, not thinking too much about abstract philosophical issues that might arise. Now, I want to pull back the curtain and reveal the underlying, admittedly quite weird philosophical questions I’ve been building towards this whole time.

Assume we successfully implemented a Web of Trust system as described previously. We have robust, ubiquitous software that transparently handles authentication across all kinds of media. This ecosystem has many uses, and one of the most important is verifying who is and isn’t human. And now we’re going to tackle one of the thorny questions this raises: Who gets to decide who is and isn’t human?

The obvious answer is for there to be many “sources” of authentication (like different governments and private companies), and each person can decide which ones they trust or not. These sources of trust could perform some high-cost authentication before signing a key, like having you appear in person at a government office, maybe even undergoing a physical or DNA test if humanlike robots are a thing. But these sources could always “defect” and sign unworthy keys. There are many reasons to want to do this, such as governments creating fake identities for spies or voter fraud, or unscrupulous dealers authenticating bots as humans for some financial purpose. The countermeasure, of course, would be to stop trusting a source if it authenticates too many false keys. Not perfect, but it’s a tragedy-of-the-commons problem like any other: the collective would benefit if no one cheated, but many or all individuals would profit from cheating individually. Things aren’t totally hopeless in practice, as the cost of getting caught is probably pretty high since you’d lose trust, so I don’t feel like this is a deal-breaker.
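
To make that trust-and-revocation policy concrete, here is a minimal sketch in Python. This is not the actual design of the proposal: a real system would sign keys with proper public key cryptography, and the names Attestation, Verifier, gov-registry, and acme-id are purely hypothetical placeholders. The sketch only illustrates the policy itself: each person keeps their own list of trusted sources and drops a source once it has been caught vouching for too many false keys.

```python
from dataclasses import dataclass, field

@dataclass
class Attestation:
    """A source's signed claim that a given public key belongs to a human."""
    source: str       # name of the attesting source (a government, a company, ...)
    subject_key: str  # the public key being vouched for

@dataclass
class Verifier:
    """One person's private trust policy over attestation sources."""
    trusted_sources: set = field(default_factory=set)
    bad_signings: dict = field(default_factory=dict)  # source -> count of known-false attestations
    revoke_after: int = 3                             # tolerance before we stop trusting a source

    def is_human(self, att: Attestation) -> bool:
        # We believe the claim only as long as we still trust the source that signed it.
        return att.source in self.trusted_sources

    def report_false_attestation(self, source: str) -> None:
        # The source was caught vouching for a bot (or some other unworthy key).
        self.bad_signings[source] = self.bad_signings.get(source, 0) + 1
        if self.bad_signings[source] >= self.revoke_after:
            self.trusted_sources.discard(source)  # the "stop trusting" countermeasure

# Usage: I trust two registries until one of them signs too many bots.
me = Verifier(trusted_sources={"gov-registry", "acme-id"})
claim = Attestation(source="acme-id", subject_key="key:alice")
print(me.is_human(claim))   # True
for _ in range(3):
    me.report_false_attestation("acme-id")
print(me.is_human(claim))   # False: acme-id has lost my trust
```

The important property is that trust is revoked per observer, not globally: whether a given source still counts as valid is each person’s own decision, exactly as described above.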

So far, so uncontroversial. In general I think most of what I’ve suggested so far, while radical in scope and implementation cost, isn’t really all that radical in terms of desirability and technical complexity (if it is implemented properly, of course). Now I want to ask something more radical:

What if we want to authenticate non-humans?

What do we value?

Let us return briefly to GPT2 and its (ab)use.

The function of GPT2 and its use is clear: it produces text, which is then sent to a website or individual, and this produces some kind of effect that we may or may not like. Okay, but “sending text to a website and causing an effect” is what the internet is all about. We want people to click on our links, to leave comments and reviews, and to engage with our content and each other. So the mere fact that it generates text isn’t the bad part, obviously.

So we have to ask ourselves the question: “What interactions, with our websites or otherwise, do we value?”

Say, for example, I developed some kind of super sophisticated new AI. This AI can not only write reviews; it actually orders the products, thoroughly tests them in every useful way and then posts extremely accurate and helpful reviews to the website it bought the product from. Would you consider this AI malicious? It’s technically a computer program, with no human involvement, writing reviews, something we agreed was a bad thing for GPT2 to do.

But I’d argue this AI, if it existed, wouldn’t be bad at all. Quite the opposite: I think people would be willing to pay for the AI to interact with their website!

Or how about an AI that writes really good, engaging fiction and posts it to fan fiction websites? The only effect this interaction would have is that more humans have more fun and engaging fiction to read, improving their lives. Thumbs up from me!

Ok, those are (relatively) uncontroversial examples (I hope), but let’s try a more risqué thought experiment: What if I had an AI that makes profiles on chatting and social media websites and then talks to humans there? For the sake of this argument, let’s say this AI always identifies itself clearly as an AI and that it is incredibly good at providing wholesome, fun interactions to humans. It shares the latest memes and jokes and always has an open ear for you to vent to, along with superhuman knowledge of the latest psychological research on how to be supportive, and on when to refer you to a specialist.

Now I think many people would consider an AI like this to be “creepy”. Talking to people, listening to their emotions, even providing them with a kind of minor psychotherapy. Feels…kinda wrong, right? But let’s think of things from a more practical perspective:

Scenario 1, the AI exists: Some number of humans get to see more funny and inspiring content and get to have pleasant discussions that improve their mood.

Scenario 2, the AI doesn’t exist: None of those things happen.

Which one is better? (For the sake of this thought experiment, please withhold your questions of feasibility.) I would argue that I see no reason why Scenario 2 is superior to Scenario 1. I care about humans having better lives, and if having this AI makes people’s lives better, I cannot think of a reason not to have this AI.

Now maybe your personal morality differs from mine. Maybe you think that it is, for one reason or another, inherently morally wrong for AIs to do such things, or even to exist. If you think this is more important than human well-being, then I probably cannot convince you of my side, because our terminal values differ. For the rest of this post, I will assume you are something similar to a utilitarian like myself, whose ultimate interest is that as many people as possible lead as good a life as possible. (The intractable difficulties of defining all those things are left as an exercise to the reader. I’m not claiming I have solved all of utilitarian morality; I’m just using it as a pretty decent first approximation.)

So I think we have now gained a better understanding of the problem we face with GPT2. It’s not that it’s inherently a problem to have an AI that can generate human-like text. It’s that it’s an AI that generates human-like text that doesn’t add positive value to our lives. If somehow GPT2 only produced high-quality, truthful text, then sure, go ahead and spam the internet with that! Could you imagine if spambots only provided truthful and valuable text? What a world that would be!

Counting Consciousness

And this is why, I’d argue, sometimes we’d want to let non-humans into our spaces. Maybe not all spaces. Maybe there are good reasons for communities to only allow flesh-and-blood humans to participate. But I’ll be so bold as to ask: What reasons would those be, exactly?

Do we really want to count consciousness? Assign individuals their verified keys only if they fulfill some “consciousness” criteria? Would only humans pass these criteria? All humans, or just some humans?

Maybe we could do this; as said previously, it’s usually pretty easy to count how many humans you have in front of you. But is this really what we want? Would we maybe benefit far more from generalizing our concept of the “consciousnesses” we are counting?

Imagine I build a robot, let’s call it Alice, that is not human, but just as intelligent, with memories, goals, emotions, etc. (We’ll assume this robot is purposefully built to have a mind very similar to a human’s.) Not only that, but this robot is incredibly moral. It is kind, funny and selfless. It feels no anger and no hate; it only labors tirelessly and humbly to make everyone around it as happy as possible. Imagine this robot makes you laugh, builds up your self-confidence, educates your children to be better adults, treats your sickly parents with care and love. It’s always there for you. Sure, sometimes it messes up and makes a mistake, but it always apologizes the moment it realizes, and does everything in its power to make it right.

Let me ask you: Do you know a human like that?

If you’re lucky, you might. And I hope you value them as much as you should. My question is, if you value such a human (even just hypothetically if you don’t know one), why would you not value this robot?

Imagine you have the choice between painting your room purple or green (or some equally trivial thing) and you don’t care either way. The robot says it likes the color purple, but is perfectly fine with whatever color you prefer. Would this affect your opinion of which color to paint the wall? It’s “just” a robot. Do you care about its opinion?

I would! That robot is my friend, and if I can do a little something that feels nice to it, why the hell wouldn’t I?

This is all just common sense, gut-response thinking here, no weird deep philosophizing. This is what naturally feels right for me to do. I’d care about my robot, I’d be sad if it got hurt, I’d like to do nice things for it. Am I anthropomorphizing? Maybe, but maybe not. What, exactly, would I gain from this?

Imagine now a human, Bob. Bob is a bit nasty. He’s lazy, selfish and not very bright. He’s not necessarily evil or anything, but pretty unlikable to be around. He only parrots opinions he has heard elsewhere and will become quite unpleasant if anyone tries to convince him he is wrong about anything. He isn’t well educated and has zero interest in becoming more educated. He considers himself to be very empathetic, and will often tell everyone else about this “fact”, but is actually spiteful and vindictive to almost everyone. Bob isn’t evil, he’s not even that unusual.

I don’t think I have to ask you whether you know a person like Bob.

Now imagine you are the gatekeeper to some community. You have one key to hand out to a new member. Your choices are Alice the robot and Bob the human. Who would you pick?

For me, Alice is the obvious choice. They’re smart and polite, and I know for a fact they would contribute value to discussions. Including them would improve the quality of my community.

You could pick Bob, and if you would, I’d be very curious as to why. It’s not very clear to me what the advantage of this decision would be. The only thing I can think of is that he’s human. I guess if you value humans for humans’ sake, that could be relevant to you? It wouldn’t make your community a much nicer place to be, though.

You could also choose “neither”, though that’s kind of going against the spirit of the question. In this case, your community definitely wouldn’t get any worse, but it wouldn’t be getting any better, either. So you’ve lost out on potential gains and therefore made your community worse than it could have been.

So if what you care about in a community is its quality of interactions, it seems to me the only rational choice is to admit the robot.

What do we really value?

I’m a big fan of “short shorts”, short stories that are basically only one sentence long. A while ago, I wrote one myself that I think encapsulates my feelings here.

“I don’t care if he was made of metal, he was my friend.”

When you think about someone you love, your mother, your spouse, whoever, what are the things that make you love them? Is it that they are composed of organic carbon molecules? That they share the Homo sapiens genome? Is that really what you care about? If you found out your spouse carried some kind of bizarre mutation that made their DNA technically non-human…would you care? If they are still the same person you fell in love with, why does it matter? (Of course, there’s also many humans that are already more than happy to marry non-human things, both in fantasy and reality. Looking at you, Japan)

For some people, it might really matter. But I’m not one of those people. Of course I care about humans, in fact there’s nothing I care about more, but I don’t care about humans for humans’ sake. I care about humans because they can experience, they can love, learn, create art, be heroes. It took me a while as a kid to figure it out, but all the things I truly care about are “substrate independent”. If some creature that happens not to be human is heroic and saves lives, what kind of weird morality must I have to not value this just as highly as if a human had committed the heroic act? I care about the heroism, not about the exact atomic composition of the hero.

Currently, humans are the only 150-pound, nonlinear, all-purpose computer systems that can be heroic.

But that not only can but will change. And that’s scary. Scary, and complicated, and all kinds of things. But is it bad?

It depends on what you value, your terminal goals. If you value humans for their carbon and their DNA, I have bad news for you about the future. But for people like me, there is no reason the things we care about need to disappear, or that they can’t even flourish.

Of course, making such a future is still incredibly hard. If you think of all the possible configurations all the atoms in the universe could be in, only a vanishingly tiny fraction contains the kinds of things I care about at all, and isn’t just a chaotic, empty void.

Whether we like it or not, I think it is inevitable that we need to “count consciousness” beyond the human. And we need to generalize what it means to have a life worth living. Currently, things are still pretty easy. AIs are like insects. They don’t suffer (hopefully), and are often a nuisance we need to defend against. But this will change, and the question won’t be as simple as “how do we keep the AIs out”.

We need to think hard and honestly about what we truly care about, and bravely discard what we don’t really need. All of this kind of discussion feels weird and uncomfortable, I know. But ignoring things to come isn’t going to help anyone. We have the amazing luck of being able to foresee these changes long before they actually come to pass. We should use this time to prepare. Not just the world, but most of all our own minds. We need to tame our gut reactions and think calmly and with compassion about how we want our future of mixed human/AI minds to look, and we shouldn’t only think about the humans.

This essay is a call against the narrow-mindedness that haunts the use of the word “consciousness” and its ilk. Do you want to be the person that “counts consciousness” and hands out “consciousness licenses”? I don’t want to end up as that person. I’m not saying I have all the answers or that I know where the line should be drawn; what I’m saying is that the first step to solving a problem is realizing that it even is a problem.

If we take these problems seriously, I am confident we can adapt and create a future we can all be proud of. A future filled with minds, human and not, living lives far better than anything we can imagine today. I can barely think of anything more worthy of our attention.

Or we fuck up AI Alignment and everything goes to hell so hard mere human words cannot even begin to describe the horror. More on that in future blog posts. Good Night!

This concludes the Counting Consciousness series. For those that have gotten this far, I sincerely thank you for reading my ramblings. It’s been one hell of a ride and I’ve learned so much. Thank you. But this is not the last you will hear from me! I will be continuing to write essays on all kinds of interesting topics. Up next: “Why Genetic Engineering is not Nearly as Exciting as you Think”, stay tuned!

A huge thanks to Dominik Biller and the many others that have helped me with editing and improving these essays. They wouldn’t have been nearly as coherent without their help.

Contact me on Twitter @NPCollapse any time, I love to talk.
