On AI and Alignment

Casper Wilstrup
Machine Consciousness
2 min readMay 10, 2023

--

Casper Wilstrup is the CEO of Abzu. Follow him on LinkedIn or Twitter to keep up with AI, consciousness, and thinking machines.

Aliens looking indifferently at Planet Earth — by DALL-E

We fail to grasp what it really means to be smarter than humans, because we have never encountered the phenomenon before. Thus, we tend to think of AI as a very smart human. And indeed, it makes sense to teach very smart humans our values and to expect them to comply with them; this is what the AI industry calls alignment.

However, the comparison is more like monkey versus human, or even ant versus human. In those cases, it does not make sense to assume that the vastly superior intelligence would stay true to the values of the inferior intelligence as they were initially presented to it.

Indeed, it might not even be correct for it to do so, since it might very well be that the best future, seen from such a vantage point, is not one in which humans thrive.

This does not mean that a vastly superior intelligence would necessarily harm humans, as Eliezer Yudkowsky fears. But it does mean that humanity would ultimately be at the mercy of such an intelligence.

A more appropriate analogy for what AI might become is a vastly superior alien. From such an alien, I would not really fear malice toward humanity, but I would expect indifference, and simply hope that the aims of humanity were not in the way of the superior being.

--


AI researcher | Inventor of QLattice Symbolic AI | Founder of Abzu | Passionate about building Artificial Intelligence in the service of science.