Double-edged Sword: Artificial Intelligence in 2017

Artificial Intelligence has clearly been developing rapidly in recent years. Chess has often been used to test the performance of an AI: as early as 1996, the chess computer Deep Blue won its first game against the world champion Garry Kasparov. Last year, AlphaGo came into public view when it defeated Lee Sedol, one of the world's best Go players, 4–1; the reigning top-ranked player Ke Jie later lost to it as well. Then, in January this year, AlphaGo (playing under the name Master) won 51 games in a row and ultimately racked up 60 wins with one tie and no losses. Ke Jie remarked that although humans have refined Go over thousands of years, we still cannot understand many of the moves AlphaGo played; the computer, he said, tells us that we have all been wrong and that no one is even close to understanding the basics of Go.

Master is simply an updated version of AlphaGo undergoing a beta test. Building an AI for Go is far harder than building one for chess; years ago, people even doubted that a strong Go program was possible. It has now become reality thanks to exponential progress. In the past, the development of AI was slowed by a bottleneck in computing performance. Computational demands are still growing, but they are no longer serious enough to limit AI's development.
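
For a rough sense of why Go is so much harder, the commonly cited figures put chess at roughly 35 legal moves per position over about 80 plies, against roughly 250 legal moves over about 150 moves for Go. Below is a back-of-the-envelope sketch of the resulting game-tree sizes in Python; the numbers are approximations, not exact counts.

    import math

    # Commonly cited rough figures; approximations, not exact counts.
    chess_branching, chess_plies = 35, 80
    go_branching, go_moves = 250, 150

    # Game-tree size grows roughly as branching_factor ** game_length,
    # so we compare base-10 exponents instead of the raw numbers.
    chess_exponent = chess_plies * math.log10(chess_branching)
    go_exponent = go_moves * math.log10(go_branching)

    print(f"Chess game tree: about 10^{chess_exponent:.0f} sequences")
    print(f"Go game tree:    about 10^{go_exponent:.0f} sequences")

    # Output: roughly 10^124 for chess versus roughly 10^360 for Go.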

With that limitation lifted, does it mean we can develop AI in any direction, in any field, and for any reason? Unfortunately, the answer is no. Fabio Cardenas, CEO of Sundown AI, has said that uncontrollable AI may well become a new reality in 2017: some people will build AI that breaks into security systems once thought impregnable, and if hackers corrupt an AI's training process, they can profit by distorting its predictions. Roman Yampolskiy, director of the Cyber Security Laboratory at the University of Louisville, supports this view: the frequency and severity of losing control grow in proportion to an AI's capability. In other words, as we benefit from AI, the emergence of a malicious AI also becomes possible.
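
To make the corrupted-training scenario concrete, one simple form of it is data poisoning: an attacker who can tamper with training examples shifts what the model learns and therefore what it predicts. The sketch below uses a toy one-dimensional threshold classifier with invented data, purely for illustration.

    # Toy illustration of data poisoning (invented data): flipping a few
    # training labels moves the learned decision boundary, so the same
    # input is classified differently after the attack.

    def fit_threshold(samples):
        """Learn a 1-D threshold as the midpoint between the class means."""
        lo = [x for x, label in samples if label == 0]
        hi = [x for x, label in samples if label == 1]
        return (sum(lo) / len(lo) + sum(hi) / len(hi)) / 2

    clean = [(1.0, 0), (2.0, 0), (3.0, 0), (7.0, 1), (8.0, 1), (9.0, 1)]
    clean_threshold = fit_threshold(clean)                 # 5.0

    # The attacker relabels the lowest-valued points as class 1.
    poisoned = [(x, 1 if x <= 2.0 else label) for x, label in clean]
    poisoned_threshold = fit_threshold(poisoned)           # 4.2

    x = 4.5  # an input near the boundary
    print("before attack:", int(x > clean_threshold))      # 0
    print("after attack: ", int(x > poisoned_threshold))   # 1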

Beyond security, many ethical issues need to be resolved. Lethal autonomous weapon systems (LAWS) are military robots designed to select and attack targets without intervention by a human operator. If someone uses AI to predict trends in the stock market, fairness is undermined; what happens when a thousand, or a million, people do the same? Many people also face the risk of losing their jobs if AI is applied widely. Nevertheless, we need to discuss these problems constructively instead of recoiling from AI and robots.

GMIS 2017 will be held in August this year. I am convinced that at this conference, researchers will come up with great ideas on Artificial Intelligence and strike a balance between its development and the issues of security and ethics it raises.
