Creating intelligent machines — A self-defeating goal?

Intelligence is hard to define. On one hand, we refer to it when talking about analytical problem solving: we claim to be able to measure it, compare it and improve it. On the other hand, we talk of fuzzy immeasurables like empathy, emotions and creativity. I think that, depending on which ‘level’ of intelligence we’re looking at, machines either can be (and already are!) intelligent, or never will be.

Turing favours the first, process-driven definition of intelligence, and he’s very optimistic about the possibility of intelligent systems. He even refers to humans as ‘human computers’ and suggests that their processes be deconstructed in order to construct models for digital computers. That way, digital computers can ‘mimic the behavior of the human computer’. To my mind, this is not intelligence; it’s simply pattern recognition and replication. One could argue that human intelligence and thought are also a way of responding to patterns. Perhaps if all the variables were modelled and independent machine learning were involved, machines could be considered intelligent. After all, our responses, whether emotional or rational, could be broken down as stimulus-response pairs or calculations in a purely neurological sense.

If we have modelled machines on ourselves, I find it difficult to see how machines can be intelligent just yet. We don’t completely understand the functions of our own brains, so how can we apply that understanding to something else?

I don’t think the Turing test is a valid one. Just because a machine can insert randomness, make calculated errors and make ‘scientific inductions’ to trick humans doesn’t make it intelligent; that too is pattern recognition. I tend to agree with Professor Jefferson, whom Turing quotes: “Not until a machine can write a sonnet, or compose a concerto, because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals the brain, that is, not only write it but know that it had written it.”

I agree with Licklider’s approach of man-computer symbiosis (note that ‘man’ precedes ‘computer’). It fits with how we view computers today, as clerks, and it makes it easier for us as designers to think categorically about function allocation in computer systems. He also brings up the memory component. Today, machine learning and other technologies have advanced so much that even Barbie has ‘long-term’ memory. Licklider also clearly places man above machine and doesn’t even bother to talk about intelligence. I think the conversation has become more complex since then.

I’m not sure about Blackwell’s approach because it seems so humanist-centred. I’m tending toward a more techno-deterministic stance in my reading of ‘Interacting with an Inferred World’. I agree that it’s important for us as designers to be aware of the ‘inferred world’. At the same time, is it a problem if ‘the human has become too much like a computer’? Is self-determination really a fundamental right, and are our choices never shaped by anyone or anything other than ourselves?

Turing hopes ‘that machines will eventually compete with man on all purely intellectual fields’. Licklider envisions a world where humans can make computers work for them. Right at the end, Blackwell writes, “Doing so requires a philosophical framework in which labour, identity and human rights are recognized as central concerns of the digital era.” Perhaps this is the fuzzy, unquantifiable part of intelligence that we don’t trust machines with.

References

  • Alan Turing, “Computing Machinery and Intelligence.” Mind 59, no. 236 (1950): 433–60.
  • J.C.R. Licklider, “Man-Computer Symbiosis.” IRE Transactions on Human Factors in Electronics HFE-1, no. 1 (1960): 4–11.
  • Alan Blackwell, “Interacting with an Inferred World: The Challenge of Machine Learning for Humane Computer Interaction.” Critical Alternatives 2015: The 5th Decennial Aarhus Conference (2015).
  • James Vlahos, “Barbie Wants to Get to Know Your Child.” New York Times, September 16, 2015.