MIT Report Says — A.I. Only Goes So Far [The Current State of A.I.]
Everyone is talking about A.I. Breakthroughs are announced every week, hundreds of research papers circulate on arXiv, and vendors make exotic claims. Everyone seems to be an expert on A.I., and it is being hailed as the answer to all of humanity’s problems. But what is the reality? Who do we trust, and how do we make sense of all this? What can we expect practically? Let’s start with what the top people in the field are saying.
Artificial Intelligence and Machine Learning: Deep Strengths, Narrow Capabilities
[MIT Report. Section 5.2 Page 31]
The spat between Elon Musk and others on the subject of A.I. is well known. For those who were not following: Elon Musk is essentially saying that by developing A.I. we are inviting doomsday, while others like Mark Zuckerberg say that such statements are irresponsible fear-mongering.
It is clear that these people have very different notions of A.I. and its capabilities. On one hand, Elon Musk believes that A.I. will become superintelligent and take over the human race; others don’t believe A.I. is capable of any of that, at least in the near future.
Elon Musk co-founded OpenAI to oversee the standardisation and development of ‘friendly’ A.I., and then early this year he left it due to disagreements on the vision and path forward.
Settling the Basic Premise of this Debate once and for all.
A.I. has not been invented. What we colloquially call A.I. is basically machine learning.
And as it happens, there is no magic in machine learning. Everything is achieved using algorithms, and each algorithm has its pros and cons, its applicability and effectiveness.
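To make that point concrete, here is a toy sketch (my own illustration, not from the report) of one classic machine-learning algorithm, k-nearest neighbours. Notice that it is nothing but distance arithmetic and vote counting over the data it was given; there is no understanding anywhere in it.

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training points.

    `train` is a list of ((x, y), label) pairs. This is ordinary arithmetic
    and counting over past data -- no "intelligence" beyond statistics.
    """
    by_distance = sorted(train, key=lambda item: math.dist(item[0], query))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# Toy data: two clusters a human would label "A" and "B".
train = [((0, 0), "A"), ((0, 1), "A"), ((1, 0), "A"),
         ((5, 5), "B"), ((5, 6), "B"), ((6, 5), "B")]

print(knn_predict(train, (0.5, 0.5)))  # near the first cluster -> "A"
print(knn_predict(train, (5.5, 5.5)))  # near the second cluster -> "B"
```

The same "no magic" observation applies, at much larger scale, to deep learning: more parameters and more data, but still statistical pattern-matching.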
With this in place, I think we are in a better position to appreciate the debate and understand the MIT Report, which is the subject of this article.
“With deep learning we have a clear case of overgeneralization. A case where people have found something works for a certain set of problems, and assume that it will work for all problems. And that’s nonsense.” — Professor Gary Marcus
“I have been saying for several years that deep learning is shallow, that it doesn’t really capture how things work or what they do in the world. It’s just a certain kind of statistical analysis. And I was really struck when Yoshua Bengio, one of the fathers of deep learning, kind of reached the same conclusion.” — Professor Gary Marcus
“I do think people have been very clever about how they’ve used deep learning. It’s almost like if all you have is a screwdriver, you can try to adapt everything to be a screwdriver sort of problem. They’ve been good at, for example, using deep learning to make old video games have higher resolution. There have been a lot of clever applications in deep learning and it’s certainly had a lot of impact on the world, but I don’t think that it’s really solved the fundamental problems of artificial intelligence.” — Professor Gary Marcus
“The real issue, as Ernest Davis and I argue in our forthcoming book Rebooting AI, is trust. For now, deep reinforcement learning can only be trusted in environments that are well controlled, with few surprises; that works fine for Go — neither the board nor the rules have changed in 2,000 years — but you wouldn’t want to rely on it in many real-world situations.” — Professor Gary Marcus
Little Commercial Success
“In part because few real-world problems are as constrained as the games on which DeepMind has focused, DeepMind has yet to find any large-scale commercial application of deep reinforcement learning. So far Alphabet has invested roughly $2 billion (including the reported $650 million purchase price in 2014). The direct financial return, not counting publicity, has been modest by comparison, about $125 million of revenue last year, some of which came from applying deep reinforcement learning within Alphabet to reduce power costs for cooling Google’s servers.” — Professor Gary Marcus
“What works for Go may not work for the challenging problems that DeepMind aspires to solve with AI, like cancer and clean energy. IBM learned this the hard way when it tried to take the Watson program that won Jeopardy! and apply it to medical diagnosis, with little success. Watson worked fine on some cases and failed on others, sometimes missing diagnoses like heart attacks that would be obvious to first-year medical students.” — Professor Gary Marcus
“It’s not just DeepMind. Many advances promised just a few years ago — such as cars that can drive on their own or chatbots that can understand conversations — haven’t yet materialized. Mark Zuckerberg’s April 2018 promises to Congress that AI would soon solve the fake news problem have already been tempered, much as Davis and I predicted. Talk is cheap; the ultimate degree of enthusiasm for AI will depend on what is delivered.” — Professor Gary Marcus
AI Hasn’t Found Its Isaac Newton: Gary Marcus on Deep Learning Defects & ‘Frenemy’ Yann LeCun
DeepMind’s Losses and the Future of Artificial Intelligence
‘The Work of The Future — Shaping Technology and Institutions’ Report from MIT
So what does MIT have to say about all this?
Most contemporary AI successes involve forms of machine learning (ML) systems, in applications where large data sets are available. These basic techniques have been around for a long time, but in the past decade new computing hardware, software, and large-scale data have made ML notably more powerful.
ML applications include image classification, face recognition, and machine translation. They are familiar to consumers in applications like Amazon Alexa, real-time sports analytics, face recognition on social media, and customer recommendation engines. An equivalent array of applications is finding its footing in business, including document analysis, customer service, and data forecasting. The barriers to deploying these technologies are rapidly coming down, as cloud-based AI services make algorithms once available only to highly skilled, well-resourced companies available to small and even individual enterprises.
These applications are already replacing tasks and aspects of existing jobs: for example, workers labeling data, paralegals doing document discovery in law firms, or production workers performing quality inspection on factory lines.
We also see cases where AI and ML tools are deployed to make existing employees more effective, by aiding call center responses, for example, or speeding document retrieval and summary. Some applications in engineering involve using AI to search physical models and design spaces to propose alternatives to human designers — enabling people to come up with entirely novel designs. In short, AI and ML systems have deep implications for the workplace, as the tools on which we have come to rely become more intelligent and widespread.
“We are a long way from AI systems that can read the news, re-plan supply chains in response to anticipated events like Brexit or trade disputes, and adapt production tasks to new sources of parts and materials.”
“ML systems still face challenges with respect to robustness and explicability”
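The robustness challenge can be shown with a deliberately simple sketch (my own, not from the report): a least-squares model fit to a narrow slice of data looks excellent in-distribution and is wildly wrong outside it, because the statistics never captured how the underlying process actually works.

```python
import math

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b (closed form, no libraries)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# "Training data": sin(x) sampled only on the narrow range [0, 1],
# where it happens to look almost linear.
xs = [i / 10 for i in range(11)]
ys = [math.sin(x) for x in xs]
a, b = fit_line(xs, ys)

# In-distribution: the fit is very close to the truth.
print(abs((a * 0.5 + b) - math.sin(0.5)))  # small error

# Out of distribution: at x = 4 the line keeps climbing past 3,
# while sin(4) is actually negative. The model learned a local
# pattern, not the phenomenon.
print(a * 4 + b, math.sin(4))
```

Deep networks are far more flexible than a straight line, but they inherit the same failure mode: confident extrapolation beyond the data they were trained on, which is exactly the trust problem Marcus and the MIT authors describe.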
MIT says that completely autonomous vehicles are probably a far-fetched dream. What is far more practical and achievable are systems that assist and complement human drivers rather than completely replace them. All the autonomous vehicles that vendors have promised work only in tightly controlled environments, and they still fail unpredictably, sometimes very seriously.
“Robots integrate cognition, perception, and actuation, and hence are inherently more complex to deploy than conventional software systems. Accordingly, they do not proliferate at the same rapid rates we are used to seeing for software-only products like apps or web-based services. Robots remain expensive, relatively inflexible, and challenging to integrate into work environments.”
MIT says that what we have is NOT the artificial life of mythology, from Mary Shelley’s Frankenstein to modern science-fiction villains.
“Most companies we speak to now have adopted the language of augmentation: “Our robots complement human workers rather than replace them.” We are currently studying how well actual implementations match that rhetoric, though we do see potential here for technology to greatly augment human work and productivity.” — MIT
“Lights out” factories, with no human input, have long been a utopian/dystopian vision for the future. The vision may make sense for some situations where the product or process is mature and highly stable. But even the most automated electronics or assembly plants still require a large number of workers to set up, maintain, and repair production equipment. A typical mobile phone — a stable and uniform product made in very high volumes — is touched by dozens of human hands during production. As one CEO said to us, “You can’t innovate in a lights out factory.”
Download the Report here http://bit.ly/2kOWbC8
This is our Website http://automatski.com