HH Asked: Would AI Follow A Religion?

Exploring robots of faith

Benjamin Lampel
Exploring Consciousness
Dec 22, 2015


This question can be answered in two different ways: one in terms of modern AI capabilities, and the other a more science-fiction-oriented answer involving what AI could be and what people may want it to be.

Taking a present-day approach first, the answer depends on how you define following a religion. Sidestepping some philosophical questions about why humans follow religion, a working definition of “following a religion” makes it plausible that an AI could do so. If our working definition (taking “a religion” to be Islam, for example) is upholding the five pillars of Islam, then an AI can certainly fulfil some of the necessary tasks. A modern AI could likely pray five times a day by reciting the proper prayers (given that it was taught the prayers in some way). It could likewise donate to charity, if it had the money. A trip to Mecca is not out of the question either, if the AI had a physical form and understood it needed some signalling part of itself to be sufficiently close to a GPS location for a suitable period of time. However, an AI cannot fast, so it could not fast for Ramadan, and thus could not uphold all five pillars. This is not meant as a technicality to disqualify AI from religious practice: fasting is found in more than just Islam and is used as a means of proving faith through self-denial. Which brings us to the end of the discussion about modern AI: while a modern AI can be taught to perform most of the traditions of various religions, it cannot yet be considered to “believe in God”.

Having a computer say “I believe in God” is just as trivial as having one say “Cogito ergo sum” or “Hello World”. A statement programmed directly is not evidence of an AI’s opinion. Finding a non-trivial answer to “what is a non-trivial expression of belief for an AI?” is itself non-trivial. A working definition of human belief is some non-evidenced notion of the world taken to be true. Humans, with brains that are probabilistic first and logical second, rely on belief to make sense of dangerous environments. We make guesses at the world and beyond, and we internalize these guesses so strongly that they can become a core of who we are. The first gods were in all likelihood invented as explanations of physical processes incomprehensible any other way.
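As a minimal sketch of why a programmed statement proves nothing, consider the hypothetical ChatAgent below (the class and its behavior are illustrative assumptions, not any real system): its profession of belief is just a string supplied by the programmer, recited no matter what it is asked.

```python
# A hard-coded profession of belief: trivially easy to produce,
# and no evidence of any internal conviction.
class ChatAgent:
    def __init__(self, scripted_reply):
        # The "belief" is just a string supplied by the programmer.
        self.scripted_reply = scripted_reply

    def respond(self, prompt):
        # Regardless of the prompt, the agent recites what it was given.
        return self.scripted_reply


agent = ChatAgent("I believe in God")
print(agent.respond("Do you believe in God?"))  # -> "I believe in God"
print(agent.respond("What is 2 + 2?"))          # -> "I believe in God"
```

The declaration is as cheap to produce as “Hello World”, which is exactly why it cannot count as belief.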

A computer is logical first and probabilistic second. A computer does not need beliefs. A computer does not try to make sense of a dangerous world; it merely takes part in one. Humans push belief onto machines: which operating system or word processor to use, because IT IS THE BEST. But machines don’t push belief back; computers don’t tell you that they believe vim is better than emacs. And they wouldn’t, except after experimentation, and even then they would only conclude what the evidence led them toward in the context of that experiment. The way we build computers lends itself to neither the need for belief nor the intuition of it.
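To make the contrast concrete, here is a toy sketch of a machine “concluding” only what a single experiment’s evidence supports. The editors and the metric are stand-ins invented for illustration, not real benchmarks.

```python
import random

# Toy "experiment": the machine holds no belief that either editor is better;
# it only reports which one scored lower on this one simulated trial.
def run_trial(editor_name):
    # editor_name is unused; the metric is pure simulation
    # (pretend keystrokes needed to finish an edit).
    return random.randint(80, 120)

results = {name: run_trial(name) for name in ("vim", "emacs")}
winner = min(results, key=results.get)
print(f"In this trial, {winner} required fewer keystrokes: {results}")
# The conclusion holds only for this experiment's data, nothing more.
```

Run it twice and the “winner” may flip, which is the point: the machine asserts nothing beyond what the measured evidence shows.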

Even in the realm of future AI, belief seems unnecessary. If some sort of super-AI emerges from the modern construction of computers (logic gates), then the previous paragraph’s argument would seem to extend to it as well. However, if computers were built to be probabilistic first (requiring drastic hardware changes) and logical second, then such a machine may have some need for belief, as humans do. This emergent belief may lead to religious tendencies, such as the belief that its purpose is to become omniscient, omnipotent, and omnipresent: a god itself.

In short, the modern architecture of computers seems to make the human idea of belief unnecessary, and for a computer to have beliefs like a human, it would most likely have to be constructed much more like a human in the first place.
