So I see what you’re saying here, and mostly I agree. I don’t know if you’ve heard the terms “map” and “territory” before, but with respect to this topic, our “map” is what we navigate internally with language, and we want that map to correspond to what we see in what we call reality (the “territory”). Instead of “objective,” I prefer to say “multiply observable” (because not everyone agrees even when something is plainly there to see).

Yes, we “create” the concepts with language, but only to describe something that is represented another way, and is hence multiply observable. When we get to morals, I think we have to ask a very foundational question: Why do we living creatures seem to care about each other?

The answer is, sometimes we don’t. There are many animal species that eat their own young and do other brutal things we would call anti-moral. That’s because nature doesn’t have morals; it has selection pressures. It just so happens that those selection pressures, acting on already complex creatures, found that certain kinds of psychological bonds improved the survival of the species, a discovery made only because those that lacked the bonds died off.

I think our current morals are a function of our complexity and our need to survive. Millions of years of evidence has given us a few good rules of thumb for carrying on to the next generation; those rules don’t necessarily generate the most happiness, or any at all. Actually, quite the opposite: having a child can be a tremendous burden.

Of course, our current morals are a little more complex than just the relationship between parent and child. We feel a closeness to others in our family (usually) and to the neighbors we know help us survive (more so back when that was actually the case), and now we can also rationalize and intellectualize reasons to be good to each other.

However, I’d like you to consider this idea as a potential moral framework for an ASI:

Instead of focusing on maximizing happiness, what if the machine’s morals were to help people reach personal satisfaction? I think satisfaction is a much deeper, more important goal than happiness; it helps people see the meaning in their lives, especially in the context of a necessary struggle.

I can see a counterpoint already: what if someone can only be satisfied by killing others? Well, what if the ASI knew of someone else who would only be satisfied by being dead? Couldn’t the ASI pair them up? How would that be a bad outcome for either of them, if both are achieving maximum satisfaction?

Maybe not every moral dilemma about satisfaction could be resolved like this, but I see this approach leading to a better outcome for humanity, because people can generally be satisfied in more than one way; it’s easy to justify to a machine that someone should accept a lesser satisfaction that does not hurt another person rather than a greater satisfaction that does.
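If it helps to make that ordering concrete, here is a minimal sketch in Python (the names `Option` and `choose_option` are hypothetical, my own invention, not any real system): it first discards options that harm someone else, and only then maximizes satisfaction among what remains.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Option:
    """A hypothetical way a person could pursue satisfaction."""
    description: str
    satisfaction: float   # how satisfying this option is for the person (0..1)
    harms_others: bool    # whether pursuing it hurts someone else

def choose_option(options: List[Option]) -> Optional[Option]:
    """Prefer the most satisfying option that harms no one.

    Encodes the ordering above: a lesser satisfaction that does not hurt
    another person beats a greater satisfaction that does.
    """
    harmless = [o for o in options if not o.harms_others]
    if not harmless:
        return None  # no acceptable option; defer rather than permit harm
    return max(harmless, key=lambda o: o.satisfaction)

# A harmful option with higher raw satisfaction is still rejected.
chosen = choose_option([
    Option("pursue a harmful obsession", satisfaction=0.9, harms_others=True),
    Option("channel the drive into sport", satisfaction=0.6, harms_others=False),
])
print(chosen.description)  # -> "channel the drive into sport"
```

This is only a toy illustration of the preference ordering, of course; the hard part would be measuring satisfaction and harm in the first place.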
