Koan about artificial intelligence

A. Rosa Castillo
4 min read · Jul 3, 2017

At the beginning of the 20th century came the industrialization era, when many manual jobs were replaced by mechanical force, especially in industry. Engineers and architects designed and built much of the hardware of the automation revolution. The rise of the machines was considered a huge performance improvement (environmental considerations apart). Thus the seeds of artificial intelligence were planted…
Almost 80 years later, we gave more tasks to those first prototypes. We wanted them to do not only simple things but more complex ones. The programming era showed us that we could improve the performance of those machines even further by investing more effort in the software. We also began to realize that we could gather many functionalities in a single device: instead of having one machine to listen to music, another to take photos and another to talk to our friends, why not have all of them in one? Steve Jobs held his now legendary keynote for the presentation of the iPhone in 2007, announcing three devices in one: a phone, a device to listen to music on the go (integrating the successful iPod) and a device connected to the internet. The idea of the smartphone was born. However, those first “robot prototypes” were still considered dumb, because we had to build their “intelligence” by programming all their capabilities in advance.

Why did we begin to build machines? Because we, humans, are physically limited and conscious of our fragile, short existence. Our strength is limited, our speed is limited, our memory is limited… Only evolution over thousands of years can slowly change human biology, and we cannot wait that long… That is why we began to put all those improved and desired skills into something non-human. A robot arm in a factory can work 3000 times faster than a human arm. A phone can memorize hundreds of numbers, far more than a human can. A computer can play chess better than a human. In this way we are extending our capabilities through technology, but at the same time we are no longer challenged to do many of the things we used to do in the past. These tasks now seem tedious or boring to humans and have been turned into tasks meant for machines. Our dependence on technology has never been greater, and it could grow even more in the future, when robots become our new teachers, nurses and… friends?

There is no way back in the artificial intelligence transformation we are living through. However, I have the impression that the field is growing so fast that many people are ignoring or postponing some relevant issues, for instance the ethics and morals behind these powerful tools. (Check this TED talk where Zeynep Tufekci explains how intelligent machines can fail in ways that don’t fit human error patterns.) If we are creating artificial “life” with humans as the model, should we also take “errors” as a measure of how human-like it is? “The right question to ask isn’t how to make an error-free system; it’s how much error you’re willing to tolerate, and how much you’re willing to pay to reduce errors to that level” (reference). Or do we just want something perfectly designed for specific purposes? Will these creatures be a new race for us to use as “slaves”? Do we want to give them human feelings but no human rights (*)?

I also wonder what will happen to human-to-human interactions… Think about this scenario: a person has to choose between two options. One is a virtual friend (boyfriend or girlfriend), perfectly designed (thanks to artificial intelligence) for that person’s likes and dislikes: a user-defined robot/operating system/intelligence with a 99.99% probability of a perfect match (check the sci-fi movie “Her”, which shows this kind of interaction). The other is another human, found in our classical way of meeting friends and partners, with a much lower probability (say 20–30%) of a perfect match. Which option would most people on this planet choose? Will future humans, less used to doing things themselves and more used to delegating to machines, make the effort to find a real human partner?

Artificial intelligence is supposed to improve the human experience in many areas by performing tasks more efficiently, by helping us or by optimizing many processes… However, it is perhaps very naïve to think that “we”, humans, will remain an invariable constant in this transformation formula. There will be an irreversible impact on our psychology and our social life. “Autonomous systems are changing workplaces, streets and schools. We need to ensure that those changes are beneficial, before they are built further into the infrastructure of everyday life” (article). By changing the machines, humans will also be indirectly changed. The question, perhaps, is… changed into better ones?

(*) In fact, there might already be governments discussing this issue.

