When considering whether a machine is intelligent, I can’t help but ask why we are attempting to create an intelligent machine in the first place. Machine intelligence seems torn between two goals.
In 1950, when Alan Turing proposed his famous Turing Test for deciding the intelligence of a machine, he thought we could prove a machine was intelligent by mistaking it for a human. Humans, he acknowledges, make mistakes. That is fundamentally human, and so “the machine (programmed for playing the game) would not attempt to give the right answers to the arithmetic problems. It would deliberately introduce mistakes in a manner calculated to confuse the interrogator” (448, Computing Machinery and Intelligence). But what exactly is the point of creating a machine that can think, respond, and act like a human? We can already produce intelligent machines quite easily. And we could (ignoring the social, moral, legal, and other issues that would prohibit us) probably develop cloning technology (which Turing quickly rules out as against the rules of his game) before we develop an electronic machine that can convince us, in the flesh, that it (he, she, ze) is human.
Machines possess a capability that humans lack. Turing writes that humans would be foolish to try to impersonate a machine: “[They] would be given away at once by slowness and inaccuracy in arithmetic” (436, Computing Machinery and Intelligence). J. C. R. Licklider, in his call for human-computer symbiosis, agrees that “computing machines are very fast and very accurate…” (6, Man-Computer Symbiosis).
For machines to be both fast and accurate and yet fallible seems a contradiction. And indeed, a machine built to pass the Turing Test, deliberately slowed and prone to error, would be useless for most other tasks. Our craving for intelligent machines is perhaps depicted in this “creepiest funny” car commercial from Toyota.
We are really looking for machines that can make our lives easier and more efficient, but delivered as if by a perpetually delighted human. The machines’ rapid efficiency is cold, but we can dress it in the trappings of humanity. Alan F. Blackwell finds that “reducing humans to acting as data sources [for machine learning] is fundamentally inhumane” (Interacting with an Inferred World: The Challenge of Machine Learning for Humane Computer Interaction), but there is something inhumane, too, in anthropomorphizing our objects and then allowing ourselves to discard and reject them.
Some psychologists fear that toys with artificial intelligence will seem too real to children, supplanting their existing or potential relationships with real humans. Noel Sharkey, an ethics professor, asks of Hello Barbie, “‘If you’ve got someone who you can talk to all the time, why bother making friends?’” (“Barbie Wants to Know Your Child”, New York Times). But Hello Barbie occupies a strange position in a child’s life: she is seen as both a friend and a disposable object. You can divulge your secrets to her and then leave her at the bottom of the toy chest. I myself recall getting a Furby during the craze and quickly tiring of it; my “new best friend” was promptly put in a dark closet and never thought about again.