Does AI really need to be Artificial?

This question I’m exploring has more to do with the way I’ve noticed the people around me talk about “artificial intelligence” than with the subject itself.

This thought came to me recently in my science fiction literature class as we were discussing Ken Liu’s great short story “The Algorithms for Love,” but don’t worry if you haven’t read it. This post isn’t a literary analysis.

Artificial intelligence, from my point of view, is the field of scientific inquiry most closely tied to philosophy, with cosmology probably coming in second. And yet, from my personal experience of how my classmates and friends talk about AI, I feel like our (pop) cultural thinking on the subject is stuck in something of a pre-Copernican state. Anthropocentrism lurks in our thoughts about AI the way geocentric models of the universe permeated cosmology for centuries, and I think it all goes back to the word “artificial.”

“Artificial” intelligence seems to suggest that intelligence is somehow innately human, or at least animal, and I think that may deeply restrict our discourse about what we deem intelligent, mostly because it primes us to be skeptical of any “mind” that doesn’t work the way ours does.

But does it even matter? I suppose not. The computer scientists will continue busily writing their code and advancing humanity’s frontiers of knowledge, oblivious to how a classroom full of college students talks about their work.

Though for me, I guess it comes down to a difference in philosophy: a split between an essentialist and a kind of “functionalist” way of thinking about AI. The essentialist view holds that because AI arises from machines and circuitry, it is somehow inherently different from “natural” or “organic” intelligence; that there is something special about the human brain that is particularly conducive to the rise of intelligence. The functionalist view centers on the Turing test: if somebody can’t distinguish a machine’s intelligence from a human’s, then the machine is indeed intelligent.
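To make that criterion concrete, here’s a minimal sketch of the imitation game in Python. It’s purely illustrative: human_reply and machine_reply are hypothetical stand-ins, and the judge here guesses blindly. The structural point is that the judge only ever sees text, never the thing producing it.

```python
import random

# Hypothetical stand-ins for a human and a machine respondent.
# In a real test these would sit at the other end of a text-only
# channel; here they are placeholders that answer identically.
def human_reply(prompt: str) -> str:
    return "Honestly, it depends on what you mean."

def machine_reply(prompt: str) -> str:
    return "Honestly, it depends on what you mean."

def run_trial(question: str) -> bool:
    """One blind round of the imitation game: the judge reads two
    anonymous text replies and guesses which came from the machine.
    Returns True if the judge guessed wrong (was fooled)."""
    parties = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(parties)  # hide which channel is which
    replies = [(label, respond(question)) for label, respond in parties]
    # This toy judge can only consult the reply text; with identical
    # replies, any strategy reduces to a coin flip.
    guess = random.choice(replies)
    return guess[0] != "machine"

trials = 1000
fooled = sum(run_trial("What does 'artificial' mean to you?") for _ in range(trials))
print(f"judge fooled in {fooled} of {trials} trials")
# At roughly 50%, the machine is indistinguishable from the human,
# and the functionalist criterion counts it as intelligent.
```

The toy judge isn’t the point; the protocol is. Nothing in it ever asks what the respondent is made of.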

The latter school of thought (the one I’m calling “functionalist,” though that is probably not the right name) makes a lot more sense to me, mostly because it seems more objective, whereas the essentialist belief feels like an effort to keep humanity in some kind of special place, where only naturally occurring intelligence resides. A similar essentialist belief was once held about gender: that the traits we ascribe to men and women were symptoms of their sex organs. So, if we can think of gender today as transcending physical definition, then why do we think of intelligence as requiring a moist, fleshy brain? Why does machine intelligence need to be thought of as fake, “artificial”?