A brief story on AGI feelings

Ivan Oryshchenko
Nov 9, 2018


Artificial general intelligence (AGI) is the intelligence of a machine that could successfully perform any intellectual task that a human being can.[1]

Let’s talk about why AGI feelings are no different from ours, and why the Chinese room thought experiment is wrong.

Some startup people created an AGI, translated it into binary, and paid a fairly unlucky guy to do all the calculations on a piece of paper. Then they showed it Monty Python and the Holy Grail[2]. After the film, and a million logic gates later, the guy outputs a set of numbers that means “It was funny.” But can a piece of paper feel? No, it doesn’t care. And the guy? He understands nothing; the zeros and ones mean nothing to him. “Therefore, AGI does not actually feel.”

“But if we can create an AGI and transfer it to a piece of paper, then we can do the same thing with a human mind,” somebody thought. So they turned their sales manager Charlie into a bunch of numbers on paper, repeated everything they had done with the AGI, and arrived at a paradox: does Charlie actually find the film funny, or is he only simulating it? I tell you: there is no difference.

Break down a human emotion and you will see how perceptions trigger millions of neurons to fire, which in turn set off thousands of chemical reactions, all just to make us laugh. The individual components are mindless, and so are John Searle’s role and the program instructions in his Chinese room experiment; but integrate them, and you get something as complex as an emotion.
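As a toy illustration of that integration point (my own sketch, not part of the original story): a single NAND gate is about as mindless as a component gets, yet wiring a few of them together yields binary addition, a capability none of the gates has on its own.

```python
# One mindless component: outputs 0 only when both inputs are 1.
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

# Every other gate can be built from NAND alone.
def xor(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def and_(a, b):
    return nand(nand(a, b), nand(a, b))

def or_(a, b):
    return nand(nand(a, a), nand(b, b))

# Compose a handful of them and you get arithmetic:
# a full adder takes two bits plus a carry and returns (sum, carry_out).
def full_adder(a, b, carry_in):
    s1 = xor(a, b)
    return xor(s1, carry_in), or_(and_(a, b), and_(s1, carry_in))

print(full_adder(1, 1, 1))  # (1, 1): 1 + 1 + 1 = 3, binary 11
```

The same kind of composition, iterated a few billion times, is all the unlucky guy with the paper is doing.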
