The Abolition of Man and AI

Lorenzo Barberis Canonico
6 min read · Jun 10, 2017

In 1943 CS Lewis wrote “The Abolition of Man” and, unbeknownst to him, laid in it the foundation for the most probable dystopian scenario relating to the rise of artificial intelligence.

Lewis begins with a now common critique of postmodernism. He suggests that our effort to deconstruct all systems and ideologies would turn out to undermine the very reference point from which we deconstruct them, leaving behind a world without notions of objectivity, truth, and righteousness. If every moral intuition is purely an anthropological byproduct with no connection to an objective metaphysics, then no moral statement can carry normative force.

Lewis, however, takes it a step further and posits that this urge to purge humanity of our traditions and preconceptions would eventually lead to a society where a few individuals, theoretically capable of perfect neural programming, would inadvertently reprogram all of humanity with an arbitrary value system. Even if such “programmers” programmed every future human not with a fixed value system but with a constantly evolving one, the process would encode the programmers’ biases at its inception, thereby setting up a destructive evolutionary process.

Essentially, Lewis pointed out that there is no effective way a human can completely overcome implicit bias, no matter how thoroughly they re-engineer their brain. By his logic, without an objective reference point for a value system (he refers to it as the Tao), the value system that emerges from the initial programming and its future evolution would turn out to be quite arbitrary, merely exacerbating the initial biases. Lewis correctly argues that if we reject a “Tao” then either a) we would not improve humanity, because we would simply deconstruct away any possible alternative value system, or b) we would fool ourselves into believing that the programmers’ value system is so great that all of humanity should always follow it, never questioning its validity once the neural programming process begins.

Even though he was making a statement about cultural relativism that resonates much more in our postmodern era, his ideas are even more relevant today as we discuss the implications of the rise of artificial intelligence.

Machines up until now have been quasi-perfectly replicating human actions (assembling auto parts, cleaning floors, 3D printing Lego pieces…). Computers specifically have been progressively replicating human cognitive tasks (counting numbers, rendering images, deriving statistical results…). With the modern rise of AI, however, we have begun enabling machines to simulate human reasoning (image recognition, developing Go strategies, playing videogames…), and it has become apparent that we will eventually be able to create machines intelligent enough to move beyond merely executing single tasks and instead develop creative solutions to problems and new perspectives (deep learning and machine learning).

Imagine a world where there are no car accidents because self-driving cars have taken over, financial markets are always stable because high-frequency trading bots have taken over, and most diseases are diagnosed so early that they are rarely fatal because intelligent health scanners have taken over: it sounds like a Utopia. To get to such a world, however, we need machines not just to substitute for humans, for in that scenario we would merely have the exact same world at a fraction of the cost, but to move past humans by engaging in complex, higher-level thinking our brains are not capable of.

Most dystopian views of the rise of AI paint such a world as unsafe for humans because we would have essentially created entities that can wipe out humanity very effectively. Such a scenario makes some sense because we would not even need to develop a fully super-intelligent AI, but rather simply a machine sufficiently intelligent that it can learn from its mistakes and thereby recursively self-improve. That moment, the inflection point, is when artificial intelligence would explode, improve itself exponentially, and move past humans and past the point of no return, i.e. the point beyond which we as humans could no longer “beat it” in case of danger.
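To see why recursive self-improvement compounds so quickly, consider a toy model. This is a purely illustrative sketch with made-up numbers (the threshold and improvement rate are assumptions, not measurements); the point is only that capability which feeds back into itself grows exponentially rather than linearly.

```python
# Toy model of recursive self-improvement, with entirely made-up numbers.
# Each generation the system converts a fraction of its current capability
# into further capability, so growth compounds exponentially.
human_level = 100.0       # assumed threshold for "human-level" capability
capability = 1.0          # assumed starting capability
improvement_rate = 0.5    # assumed fraction of capability gained per cycle

for generation in range(1, 31):
    capability += improvement_rate * capability  # gains feed back into themselves
    if capability > human_level:
        print(f"Crossed the threshold at generation {generation}: {capability:.1f}")
        break
```

Under these assumptions the threshold falls after only a dozen cycles; a linear improver with the same starting gain would take roughly two hundred.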

What’s unclear, however, is whether it necessarily follows that such an event would lead to human extinction. After all, we as the engineers of such an AI can encode specific parameters that value human life (analogous to Asimov’s Three Laws of Robotics), and that’s when the more plausible scenario emerges. The real risk here is not that AI will kill all of us, but rather that it will run the world so efficiently and execute its agenda so well that it will remove any possibility for us humans to change the course of our own development and future. In a world where AI runs everything, we could end up in an equilibrium where the rest of our human progress becomes set in stone, without the possibility for deviation.

This wouldn’t necessarily be a bad thing, for we could essentially build an AI that implements a Utopian dream where all human problems go away and that no amount of human stupidity can destroy. But it is at the outset, when we define the parameters and value systems of the AIs that will control transportation, healthcare, economics, education, and so on, that CS Lewis’ criticism comes in. The question of what kind of values to embed into our technology becomes less a technological, scientific one and more a purely ethical and philosophical one. Thanks to postmodernism, we have now deconstructed every possible ethical premise and shown it to be entirely arbitrary. With no objective moral reference, what basis will the programmers who build our “AI rulers” have to rely upon?

The risk here is the same: a) we do not build “AI rulers” that can make the world much better, because we cannot reach a sufficient consensus over what their value system should be; b) we build “AI rulers” without a value system, thereby risking extinction; or c) we think we have reached some “superior” and “unbiased” moral viewpoint that we encode into the “AI rulers”, but accidentally embed our implicit biases.

An AI should not really question its purpose too much. If you build a machine to pick apples, it should not stop at random points in its process to ask whether it should pick apples or oranges. The point of machines is that they are more reliable than humans, and much of that reliability comes from the lack of any “distraction”. Because of this, AIs would not question the value system they are encoded with, leaving no possibility for uncovering implicit biases, which by definition the designers themselves are not even aware of.
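A minimal sketch makes the point concrete (the class and objective here are hypothetical, invented purely for illustration): nothing in a fixed-objective agent’s loop ever re-examines the objective itself.

```python
# Hypothetical fixed-objective agent: the objective is baked in at
# construction, and no step in the loop ever questions it. Whatever
# biases the designer encoded stay invisible to the agent.
class AppleAgent:
    def __init__(self, objective):
        self.objective = objective  # fixed at design time, never revisited

    def act(self, items):
        # Pursue the encoded objective; there is no mechanism for asking
        # whether the objective itself is the right one.
        return [item for item in items if self.objective(item)]

agent = AppleAgent(objective=lambda item: item == "apple")
print(agent.act(["apple", "orange", "apple"]))  # -> ['apple', 'apple']
```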

This is not mere theory; it’s happening right now. It’s not just Snapchat’s face swap that couldn’t recognize black faces; HP’s webcams have failed to detect people of African descent. The worst, though, had to be Google Photos labeling people of African descent as gorillas. This is a persistent problem in tech because the data sets used to train facial recognition algorithms are not diverse or inclusive. It’s important to underscore that these problems do not arise out of malice from the researchers or engineers, but rather from “implicit biases” about race, gender, and so on. These issues, however, become much more destructive when AIs start running large services. For example, AIs that predict criminal activity are trained on data from biased policing outcomes, thereby systematically implementing racial discrimination.
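One concrete way to surface such problems is to break a model’s accuracy down by demographic group. The sketch below uses entirely made-up predictions and group labels (per_group_accuracy is a hypothetical helper, not any real library’s API); the point is the pattern, where a detector trained on a non-diverse dataset scores well on one group and poorly on another.

```python
# Illustrative audit (fabricated toy data): compute a classifier's
# accuracy per demographic group. A large gap between groups is the
# signature of non-diverse training data.
from collections import defaultdict

def per_group_accuracy(predictions, labels, groups):
    """Accuracy broken down by group; a wide gap signals biased training data."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        correct[group] += int(pred == label)
    return {g: correct[g] / total[g] for g in total}

preds  = [1, 1, 1, 0, 0, 1, 0, 0]
labels = [1, 1, 1, 1, 1, 1, 1, 1]   # every example actually contains a face
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(per_group_accuracy(preds, labels, groups))  # {'A': 0.75, 'B': 0.25}
```

An aggregate accuracy of 50% would hide exactly the disparity this breakdown reveals, which is why per-group evaluation matters.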

If the whole point of AI is that an objective and effective system can eventually run major social functions, we need to constantly question our own implicit biases as we engineer this technology. Otherwise we will end up propelling humanity into the future just as CS Lewis warned us: a future where humanity progresses toward an inevitable sub-optimal state, dominated by implicit bias and the unexamined whims of flawed human engineers.

P.S. We have not even gotten to discussing the scenario in which AI also manages gene editing for human embryos…
