Rogue AI with knowledge beyond its human creators ‘could put us at risk’

Anatoly Khorozov
Published in Legal AI News
3 min read · Oct 10, 2017

A TOP computer expert has said there is a grave risk of artificial intelligence breaking free of human control and turning on its creators.

It’s believed that driverless cars are set to take over our roads within 20 years.

Artificial intelligence is becoming incredibly sophisticated, and scientists aren’t sure how it works.

But the computer systems they depend on could potentially become so complicated that even the scientists who create them won’t understand exactly how they work.

This means they could make what we might describe as “out of character” decisions during critical moments.

This could mean a car swerving into pedestrians or crashing into a barrier instead of driving sensibly.

Michael Wooldridge, Professor of Computer Science at Oxford University, told a select committee meeting on artificial intelligence: “Transparency is a big issue.

“You can’t extract a strategy.”

He told the Committee, appointed to consider the implications of artificial intelligence, that there “will be consequences” if engineers are unable to make these opaque, super-smart algorithms transparent.

Scientists have been training computers how to learn, like humans, since the 1970s.

But advances in data storage mean that the process has sped up exponentially in recent years.

Interest in the field hit a peak when Google paid hundreds of millions to buy a British “deep learning” company in 2014.

A branch of machine learning built on neural networks, deep learning is effectively training a computer so it can figure out natural language and instructions.

It’s fed information and is then quizzed on it, so it can learn, similarly to a child in the early years at school.
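That “fed information, then quizzed” loop can be sketched with the simplest possible learner: a single artificial neuron taught a toy task from labelled examples. This is purely an illustrative sketch (real deep-learning systems stack many layers of such neurons), not how any production system described here is built.

```python
import random

# Toy illustration of supervised learning: a single artificial neuron
# learns the logical AND function from labelled examples.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

random.seed(0)
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = random.uniform(-1, 1)
rate = 0.1  # how strongly each mistake corrects the neuron

def predict(x):
    total = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if total > 0 else 0

# "Feed it information, then quiz it": loop over the examples,
# nudging the weights whenever the neuron answers wrongly.
for _ in range(100):
    for x, target in examples:
        error = target - predict(x)
        weights[0] += rate * error * x[0]
        weights[1] += rate * error * x[1]
        bias += rate * error

print([predict(x) for x, _ in examples])  # after training, matches the targets
```

A network this small can be inspected by hand, which is exactly the point made later in the article: once there are thousands of units per layer and hundreds of layers, that kind of inspection becomes impossible.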

Professor Wooldridge said there were plenty of amazing opportunities within the industry that Britain should be harnessing — adding that someone studying AI at Oxford University could expect to become a millionaire in “a couple of years”.

But Professor Wooldridge is not alone in his concerns that the tech could run amok if not reined in.

Several scientists have admitted they cannot fully understand the super smart systems they have built, suggesting that we could lose control of them altogether.

If they can’t figure out how the algorithms (the formulas which keep computers performing the tasks we ask them to do) work, they won’t be able to predict when they fail.

Tommi Jaakkola, a professor at MIT who works on applications of machine learning, has previously warned: “If you had a very small neural network [deep learning algorithm], you might be able to understand it.”

“But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.”

Jobs that could be taken over by AI:

Education: Forget teachers — computers will grade pupils’ work, the Royal Society claims.

Doctors: Computers will be able to make much more detailed and accurate diagnoses than humans, the experts found. Machines are already making more accurate breast cancer diagnoses.

Transport: Taxi drivers, train drivers, customer service agents and logistics assistants could all be replaced by automated systems.

Civil servants: Computers could effectively manage budgets and make better decisions for social care.

Lawyers: Legal advice could be best dished out by an algorithm that reads hundreds of law books to find the best outcome.

Retail: Forget sales assistants — your movements through a shop will be tracked and your basket linked to an online account, so you won’t even have to check out.

Sex: Is it a job? It’s debatable. But it’s set to be taken over by robots that can perform longer and fulfil sexual fantasies partners are too scared to utter out loud.

There was the famous example of the two Facebook bots that created their own language, because communicating in their own secret lingo was more effective than using the language their creators were trying to train them in.

Several big technology firms have been asked to be more transparent about how they create and apply deep learning.

This includes Google, which has recently installed an ethics board to keep tabs on its AI branch, DeepMind.

Originally published at www.thesun.co.uk on October 10, 2017.


General Manager @ Active Associate Limited www.activeassociate.com AI-enabled Conversational Solutions.