Is Turing right?
The recent Google I/O presentation revealed Google Duplex, an AI-powered assistant. There is a lot of fuss now about whether what we heard during the show was a machine passing the Turing test. Does that mean anything at all? And is this even the right question in the first place?
Do we want machines to be like us? Yes? Maybe? Do we even know? Is being human-like an accurate measure of achievement in the AI field? So far many assume that the ultimate goal of AI research is to replicate us, and I can understand this. It is quite difficult to imagine intelligent life forms that are different from us. How different? Do we have a reference point to compare against? An objective reference point, not a human-centered one?
So let’s take the easy route and aim for human-like AI. Is the Turing test then the right test? I’d say sort of. Passing it is indeed a milestone. One of many on a long journey, because there are more tests for being human. Can a machine feel empathy? Would we be able to fall in love with a digital being? Would the digital being be able to love us so that we feel loved? Would it feel sadness when I die? How about compassion for other beings (human or digital) when they lose someone (human or digital)?
We thrive when we feel purpose and meaning in our lives. Can a digital being develop a feeling of purpose?
We feel safe when we have a tribe to belong to. Can an AI feel that it belongs to a tribe? And not a digital-only tribe, either.
Yesterday, while driving here on the Big Island of Hawaii, we heard a story on the radio. In short, this is how it goes. A lady and her son got stuck in their car due to road flooding. There was a way out of the flooded area, but it went through private land with a "no trespassing" sign. Nevertheless they decided to take that road, approached the owner, and asked if they could stay on her land until the water receded. The owner refused to help and asked them to leave the property. They spent the night on the roof of their car and then got badly sunburnt the next day before help came.
Later on they sued the owner of the property for refusing to help, which led to near-deadly consequences, medical bills, and so on. The line of defense was that they were trespassing and the landowner had no obligation to help. Not being from the US, the verdict would seem obvious to me, but here in America owning land is sacred, right after owning a gun, and I can imagine the jury having a difficult time choosing between the right to own and the right to live.
And what would an AI judge do? Do we want to know? Are we even capable of building a human-like AI with all the right answers? Or is being in doubt part of being human?
