Will AI and machine learning take over the world?

The work of Alan Turing in the 1940s is commonly cited as the origin of modern computing. Not long after, people began to wonder: how far can artificial intelligence go? This is well illustrated in the classics of Isaac Asimov, especially the Robot series, which includes the well-known "I, Robot", published in 1950! The main point of concern is: will AI someday take over the world?

This discussion gained extra momentum after the popularization of the concepts of machine learning and neural networks. In place of the classical concept of programming, in which computers only follow explicit instructions, machines are now being taught how to learn by themselves, based on the experiences we expose them to.

As explained in my last post, this is one definition of machine learning, and the approach is becoming more and more widely used in multiple fields (ads, sports, object recognition, etc.), since it yields solutions to highly complex problems that could not easily (or at all) be solved with explicit programming. But while this new concept represents a breakthrough, it also yields "opaque" solutions, in the sense that we cannot verify how a solution was reached or which operations the program is actually performing.
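To make the contrast concrete, here is a minimal sketch of the two approaches, assuming scikit-learn is installed; the spam-filter example, its features and its numbers are invented purely for illustration:

```python
# A toy contrast between explicit programming and machine learning.
# (Hypothetical spam example; features and labels are made up.)
from sklearn.tree import DecisionTreeClassifier

# Explicit programming: a human writes the decision rule by hand.
def is_spam_explicit(num_links: int, has_greeting: bool) -> bool:
    return num_links > 5 and not has_greeting  # our hand-coded logic

# Machine learning: the decision rule is inferred from labeled examples.
X = [[0, 1], [1, 1], [7, 0], [9, 0], [2, 1], [8, 0]]  # [num_links, has_greeting]
y = [0, 0, 1, 1, 0, 1]                                # 1 = spam, 0 = not spam

model = DecisionTreeClassifier().fit(X, y)
print(is_spam_explicit(6, False))   # True: the rule we wrote ourselves
print(model.predict([[6, 0]])[0])   # the rule the machine learned from data
```

The "opacity" mentioned above is exactly the second case: we can inspect the learned model's output, but nobody wrote its decision rule down.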

In fact, I believe this can be usefully compared with our knowledge of how the human brain works. This super powerful "machine" is the result of millions of years of evolution and adaptation to multiple environments and situations. The concept of neural networks, currently so widely used to tackle complex tasks, is in fact a "simple" approximation of the way the human brain works. So imagine how many improvements will become possible as we learn more about our "gray matter": deeper knowledge of how its many neurons and regions interact, for example, or of how brain plasticity works.
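As an illustration of just how "simple" that approximation is, here is a sketch of a single artificial neuron in plain Python, with invented weights: a weighted sum of inputs squashed by a nonlinearity is essentially all that a neural network borrows from its biological counterpart.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One unit of a neural network: a crude abstraction of a biological
    neuron that 'fires' more strongly as its weighted inputs grow."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid "firing rate"

# Hypothetical numbers, just to show the computation.
print(artificial_neuron([0.5, 0.2], [0.8, -0.4], bias=0.1))  # ~0.60
```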

Moreover, machine learning currently "evaluates" the world based on visual, tactile and auditory stimuli (inputs), while senses such as smell and taste still need to be better understood before they can be "artificially sensed" and interpreted. At this point, the reader is probably wondering how this relates to the initial question.

Well, in my opinion (even though I'm still a "newbie" in this field), this illustrates how artificial intelligence and machines in general are still somewhat far from reaching the complexity of the human biological and mental structure. Besides the two missing senses, computers are still not capable of reproducing crucial characteristics that differentiate humans from other animals: creativity, abstraction, imagination. Current machines can learn from specific, mostly labeled (supervised) inputs, but they cannot "create" or abstract their experiences to other situations.

For these reasons, I don't believe AI and machine learning will take over the world. Granted, as is already happening, machines will replace humans in many activities, and these concepts may someday give machines a kind of "freedom", since we don't know exactly how they are learning to make decisions.

In this regard, referring once again to Asimov, I believe there will always be rules such as the Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Additionally, regarding the worry that giving machines too much power/freedom could lead to things getting out of control, we can draw a parallel with ourselves as a society. In my opinion, humans can be viewed as "super machines", and we are capable of handling our super capabilities. Even though "system failures" such as wars and conflicts were and still are part of our history, common sense prevails for the greater good. One example? For decades we have had the power to destroy the world with one button/click, by means of atomic bombs.

Finally, I particularly like the comparison drawn in the conclusion of the Wired article https://www.wired.com/2016/05/the-end-of-code/: instead of the classic approach of creating machines that do only what they are explicitly programmed/commanded to do, we will start to parent machines. As we instruct them on how and what to learn, we will have the responsibility of guiding their behavior, so that they make proper judgments about what to learn from the experiences they are exposed to.
