No problem, thanks for the reply. :)

I would still say that neither we, nor any future AIs, are duty-bound to bring any hypothetical beings into existence. I don’t mean to endorse the extreme views of antinatalists; instead, I take a middle ground. If society does not require the creation of new conscious minds, and there is a conflict of interest, the needs of those who are already here come first. Again, the reason is that non-existent beings are not conscious and can’t suffer.

If “moral AIs” are really going to be this stingy, they might as well limit the number of minds created so that the minds that are created can last longer (and live better “lives” too). I know you can take this to absurd extremes (i.e., antinatalism), or picture a world where a single AI kills all other conscious beings so that it can exist for as long as possible…but again, this is a “moral” AI we’re talking about. There has to be at least a very good chance that it would conclude that murder is wrong. If morality truly is about maximising well-being, it can’t involve killing conscious beings without very, very good reason.

Of course, many people do feel selfish when they consider this. It would be a shame, many feel, not to let certain beings come into existence just because we want to hang on to our own measly lives. But it’s a false dichotomy: if our lives truly are that measly, then the loss is also measly. The same goes if we look at it in terms of entropy. How many more galactic years would a lack of humans add to the lifespan of the other minds in the universe? Is that amount really worth the unnecessary murder and death?

And while we’re on that note, here is why the analogy with murder doesn’t really work: murdered people were actually alive before they were killed, whereas the AI is only considering bringing new beings into existence.

And again, this is the part of morality that concerns duty and meaning (as we are talking about actively creating minds and even choosing what they will live for), not the more calculating part of morality that concerns harm and fairness. We still don’t know what an “optimally meaningful” life would consist of. Perhaps such a state is impossible to optimise for. Perhaps, as many people think, it isn’t even a relevant question. But if it isn’t relevant…why worry about any of these questions at all? Any outcome is fine so long as it doesn’t involve unnecessary suffering.

Anyway, there is another thing worth bearing in mind. If it really is about entropy, these moral questions are only likely to be of relevance at the end of time, because it is only then that conscious minds will pay the real-world price for decisions made trillions of years before. And there’s a strong possibility that the beings of that era will be so different from what we are now that these concerns will seem truly incomprehensible.

Thanks for the great discussion!
