“Bombe” of Alan Turing with a Human Operator. Source: https://en.wikipedia.org/wiki/Bombe

We Put a Chip in It

At first, everything was dumb. We humans created a world for ourselves on planet Earth. We have interacted with our physical environment, manipulated it, and finally transformed it… It is the 2016 version of our world, and it is still developing. From the very beginning of human existence to this day, our world has been artificial; it was man-made. This week, I will talk about a term that describes how we are trying to create a world that thinks the same as us, or in some ways even more than us: artificial intelligence.

Throughout the years, technology developed, perhaps mostly because there were “wars” to win. In the last of those “world” wars, a scientist, Alan Turing, created an automatic decryption machine to crack the secret messaging system (the Enigma cipher) of the Nazi military, building on his 1936 idea of a machine that could compute anything describable by an algorithm. The machine magically transformed a block of nonsense (encrypted) text into something meaningful, a task that in those days took ages to do by hand, by human operators. Turing created an algorithm that could work like a human decoder and programmed his machine to use it. It was probably the first time a machine thought like a human and solved a complex problem. His universal computing machine, later called the “Turing machine,” became the ancestor of today’s computers.

“The Future of Computing” by The Economist. http://www.economist.com/news/leaders/21694528-era-predictable-improvement-computer-hardware-ending-what-comes-next-future

After the Turing machine, Turing also worked on more abstract questions of computing: how machines can, or should, think. As a result of this work, he laid the foundations of artificial intelligence and of classifying machine intelligence with a test, the Turing test. According to him, if a human interrogator can’t tell a machine’s conversation apart from a human’s, that machine can be classified as “intelligent.” He argued that rather than making computers think like a grown adult by mimicking the mature human mind, we should make their brains work like a child’s brain. To him, the idea of teaching computers about our world and how to live in it was more important than filling their brains with ready-made programs and algorithms.

For me, learning about Turing’s vision of AI was an eye-opener, and I totally agree with it. If we load a machine’s brain with information and data about our world, it will think like someone else; it will inherit someone’s biases and assumptions about the world, because no information is bias-free. But if we only guide a computer, teaching it how to think a bit more abstractly and how to learn, I believe it will be more likely to look for different perspectives and biases, and then form its own opinion, or its own bias.

“Atlas: The Next Generation” by Boston Dynamics. https://www.youtube.com/watch?v=rVlhMGQgDkY

As Turing envisioned, today we are sharing “our world” with machines educated to think like humans. To visit history again: in the early days, these machines, computers, were bulky, and they were only expected to calculate like humans and solve scientific equations like humans. With the development of smaller and more portable computers, these machines were programmed to assist us in many ways, and their form factor changed the way we live. After one-on-one conversations with computers, our interactions with them changed drastically with the introduction of the Internet, the idea of bridging computers together like a spider web on a global level. But it is important not to forget that, just like Turing’s codebreaking machine, the Internet was also developed for military purposes in the first place. After its introduction, we suddenly found ourselves in front of computers more than ever, trying to communicate with them, and with each other through them, via inputs like keyboard presses and mouse clicks. For example, after the famous text-based online messaging program mIRC was introduced, it took only a few years (or maybe months) to change the norms of communication.

Then, around the 2000s, the brains of “our computers” began to scale down smaller than a penny, with much more “computing” power. For this reason, new ideas arose around bridging not only computers to computers but also objects and spaces to computers. Suddenly, we put chips in everything and started to call these things (objects and spaces) “smart things”: things that think like computers but do not look like computers.

“We Put a Chip In It” by weputachipinit.tumblr.com

Parallel to Mark Weiser’s vision of ubiquitous computing, we can no longer decide whether computers are smart things or smart things are computers. Rather than asking whether dumb things should have a brain, the tech world has more and more begun to ask what kind of data it can collect from the world, and this creates a new problem. Since all of these smart things somehow “ease” our lives, they become an addiction we depend on, and as we depend on them more, we try to make them like us: think like us, behave like us. We simply raise our expectations of them and want them to be as intelligent as we are; after all, we are living in the 2016 version of our world, eh? But intelligence does not merely depend on being mathematically smart or being able to climb a staircase (although that is a huge advancement in robotics); it also requires thinking critically, using prior knowledge, and taking responsibility for the outcomes of that intelligence. Although seeing computers as kids somehow puts their parents, their creators, at the center of the related ethical questions, it is simply very naive to let a kid drive a car and give a lift to some other living being.

“The Ethics of Artificial Intelligence in Intelligence Agencies” by RAND Corporation. http://www.rand.org/blog/2016/07/the-ethics-of-artificial-intelligence-in-intelligence.html

Overall, it feels like we are experiencing the history of AI research backward, and this makes me all the more interested in it. In AI research, we are like vulnerable astronauts without space suits, heading to the moon for the first time in a spaceship driven by a “nonhuman” entity, rather than sending an “unmanned” spaceship to the moon controlled by human operators. I’m aware that someone has to take risks for the advancement of humankind, but I’m still skeptical about how we will take the next step in our advancement of AI.
