2016 — Artificial Intelligence: A time to take control?

While the debate on the future of Artificial Intelligence (AI) is old, 2015 saw the controversy take centre stage. Some sensational headlines predicted murderous robots, while others dismissed such warnings as panicky and speculative. The new year needs to see this ill-informed controversy replaced by a better-informed analysis of the potential impact of AI and of its applications. In 2016, expert risk analysis must play a far greater role in the thinking of policy makers and decision makers, of governments and corporations.

The debate became even more polarised recently when a US think-tank accused Bill Gates of Microsoft and Elon Musk of Tesla and SpaceX of being 2015's 'worst innovation killers'. At the same time, Oxford professor Nick Bostrom has warned that superintelligence may "advance to a point where its goals are not compatible with that of humans".

A report recently published by the think tank Sapience Project concludes that much of the controversy stems from conflating experts' concerns with Hollywood's 'Terminator'-style scenario. Critics attack the fantasy, repeated with variations in The Matrix and elsewhere, in which humanity battles an invading army of hostile shape-shifting robots commanded by a self-aware AI called Skynet. Panicked and uninformed headlines further confuse the public. In reality, experts are alarmed by the prospect of an indifferent, not a malevolent, AI.

To understand this in Hollywood's terms, consider HAL in 2001: A Space Odyssey, an AI operating a spaceship, which ends up taking control and killing an astronaut because its instructions were simply to ensure the delivery of its cargo, and nothing else. HAL 9000's ignorance poses a subtler and far deadlier danger than an indestructible Arnold Schwarzenegger with a machine gun. The moral is that machines have no conception of ethics. Without an extensive effort to teach it, a machine superintelligence could act with disregard for life, autonomy, and other core human values.

Unlike climate change and genetic engineering, where governments across the globe are putting in place mechanisms to minimise risks that, in some instances, still lie decades ahead, virtually nothing is being done to control the advance of AI. There is a policy vacuum which must be filled if this inevitable advance is to be used wisely and kept in check.

The report’s author, Dr Amnon Eden, a leading expert on singularity theories and Principal of Sapience Project, a think-tank formed to examine the disruptive impact of artificial intelligence, says: “In 2015 the debate about AI became less academic theory and more a reality. Computers have become a trillion times faster in less than five decades. Rapid technological progress increases the risk of mass ‘future shock’. The world needs to look at 2016 as an opportunity for us to start to direct AI in a way that will be beneficial to us all. Whether it is driverless cars and how they are programmed to react to seemingly inevitable fatal accidents, computer trading used to manipulate the world’s stock markets, or lethal autonomous robots falling into the wrong hands, both governments and global corporations must start to take superintelligence more seriously and make some policy and strategic decisions now.”

Reference: A.H. Eden, ‘The Singularity Controversy, Part I. Lessons Learned and Open Questions’. Technical Report STR 2016–1, Sapience Project, January 2016. doi:10.13140/RG.2.1.3416.6809, arXiv:1601.05977 [cs]
