3 Ways to Reset AI Expectations

Lessons learned from decades of AI hype, hope and disappointments

MIT IDE
MIT Initiative on the Digital Economy
7 min read · Jun 19, 2024


By Irving Wladawsky-Berger

AI is either the panacea for all that ails us or a dangerous threat to be approached with caution. Those are two extreme messages often heard about historically transformative technologies. The truth likely lies somewhere between the two.

At the recent 2024 MIT Sloan CIO Symposium, AI was the dominant theme, with a number of keynotes and panels devoted to the topic. In addition, a pre-event program included informal roundtable discussions on topics such as legal risks in AI deployment, AI as a driver of productivity, and the role of humans in AI-augmented workplaces.

One highlight was the closing keynote, What Works and Doesn’t Work with AI, in which MIT professor emeritus Rodney Brooks offered some guidelines for sorting the hype from the reality. Brooks was director of the MIT AI Lab from 1997 to 2003, and then the founding director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) until 2007. A robotics entrepreneur, he has founded a number of companies, including iRobot, Rethink Robotics, and Robust.AI.

Brooks presented his “Three Laws of Artificial Intelligence,” which put AI in perspective:

1. When an AI system performs a task, human observers immediately estimate its general competence in areas that seem related. Usually that estimate is wildly overinflated.

2. Most successful AI deployments have a human somewhere in the loop (perhaps the person they are helping) and their intelligence smooths the edges.

3. Without carefully boxing in how an AI system is deployed, there is always a long tail of special cases that take decades to discover and fix.

Brooks reminded us that AI has been an academic discipline since the 1950s, when the field’s founders believed that just about every aspect of human intelligence could, in principle, be precisely expressed as software and executed on increasingly powerful computers.

Decades of Effort

Into the 1980s, leading AI researchers were convinced that AI systems capable of human-like cognitive capabilities could be developed within a generation, and they obtained government funding to implement their vision.

Eventually it became clear that all these projects had grossly underestimated the difficulty of developing machines that exhibit human-like intelligence: cognitive capabilities like language, thinking, and reasoning could not simply be expressed as software. After years of unfulfilled promises and hype, these ambitious AI approaches were abandoned, and a so-called AI winter of reduced interest and funding set in.

AI was reborn in the 1990s. Instead of trying to program human-like intelligence, the field embraced a statistical approach based on analyzing patterns in vast amounts of data with sophisticated algorithms and high-performance supercomputers. AI researchers discovered that such an information-based approach produced something akin to intelligence. Moreover, unlike the earlier programming-based projects, the statistical approaches scaled very nicely.

The more information you have, the more sophisticated the algorithms, the more powerful the supercomputers, the better the results.

Over the next few decades AI achieved some very important milestones, including Deep Blue’s win over chess grandmaster Garry Kasparov in a 1997 six-game match; Watson’s 2011 win of the Jeopardy! Challenge against the two best human Jeopardy! players; and AlphaGo’s unexpected win in 2016 over Lee Sedol, one of the world’s top Go players. In addition, a number of entrants successfully completed the 2007 DARPA Urban Challenge for self-driving vehicles in an urban environment, as well as the DARPA Robotics Challenge, launched in 2012, for the use of robots in disaster- or emergency-response scenarios.

Different Now?

After these and other milestones, AI appeared to be “on the verge of changing everything,” said Brooks. But is it? Since 2017, he has posted a Predictions Scorecard at the beginning of each year, comparing his predictions for future milestones in robotics, AI and machine learning, self-driving cars, and human space travel with what has actually happened.

“I made my predictions because at the time, just like now, I saw an immense amount of hype about these topics,” Brooks said. The press and the public were drawing conclusions about all sorts of things they feared, such as truck-driving jobs and all manual human labor about to disappear, while also predicting safer roads, a safe haven for humans on Mars, and other “imminent” advances.

“My predictions, with dates attached to them, were meant to slow down those expectations, and inject some reality into what I saw as irrational exuberance.”

Why have so many AI predictions been so wrong? Brooks (who apparently favors lists) believes the answer lies in what he calls the Seven Deadly Sins of Predicting the Future of AI. In a 2017 essay, he described these “sins”:

1. Overestimating and Underestimating harks back to what’s become known as Amara’s Law: We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run. “Artificial Intelligence has the distinction of having been the shiny new thing and being overestimated again and again, in the 1960s, in the 1980s, and I believe again now,” wrote Brooks.

2. Indistinguishable from Magic is closely associated with an adage from science fiction writer Arthur C. Clarke that’s become known as Clarke’s third law: Any sufficiently advanced technology is indistinguishable from magic. “This is a problem we all have with imagined future technology,” said Brooks. “If it is far enough away from the technology we have and understand today, then we do not know its limitations. It becomes indistinguishable from magic …”

3. Exponentials. “Many people are suffering from a severe case of exponentialism,” wrote Brooks. Exponentialism was put on the map in the technology world by the very impressive 50-year run of Moore’s Law. The semi-log graphs associated with Moore’s Law have since become a visual metaphor for the technology revolution unleashed by the exponential improvements of digital components, from processing speeds to storage capacity. Moore’s Law has had quite a run, but like all things based on exponential improvements, it must eventually slow down and flatten out.

Power + Performance

Over the past 30 years, the necessary ingredients to significantly increase the performance of AI systems came together: powerful, inexpensive computer technologies; advanced algorithms and models; and huge amounts of all kinds of data. But there’s no law that says how often such events will happen. “So when you see exponential arguments as justification for what will happen with AI, remember that not all so-called exponentials are really exponentials in the first place, and those that are can collapse suddenly when a physical limit is hit, or there is no more economic impact to continue them.” (A short illustrative sketch of this point follows the list below.)

4. Performance versus Competence. “We all use cues about how people perform some particular task to estimate how well they might perform some different task,” wrote Brooks. For example: “People hear that some robot or some AI system has performed some task. They then generalize from that performance to a competence that a person performing the same task could be expected to have. And they apply that generalization to the robot or AI system. Today’s robots and AI systems are incredibly narrow in what they can do. Human-style generalizations do not apply.”

5. Speed of Deployment. “A lot of AI researchers and pundits imagine that the world is already digital, and that simply introducing new AI systems will immediately trickle down to operational changes in the field, in the supply chain, on the factory floor, or in the design of products. Nothing could be further from the truth. Almost all innovations in robotics and AI take far, far, longer to be widely deployed than people inside and outside the field imagine.”

6. Hollywood Scenarios. Many AI researchers and pundits ignore the fact that if we are eventually able to build super-intelligent AI systems, the world will have changed significantly by then. “We will not suddenly be surprised by the existence of such super-intelligences. They will evolve technologically over time, and our world will come to be populated by many other forms of intelligence, and we will have lots of experience already… I am not saying there may not be challenges. I’m saying that they will not be as sudden and unexpected as many people think.”

7. Suitcase Words. A suitcase word is a term coined by MIT AI pioneer Marvin Minsky for words that carry multiple different, and often confusing, meanings depending on the context. Learning, for example, is one such word; it means something very different when applied to machine learning than when applied to human learning.

“Suitcase words mislead people about how well machines are doing at tasks that people can do,” said Brooks. “That is partly because AI researchers — and, worse, their institutional press offices — are eager to claim progress in one instance of a suitcase concept. The important phrase here is an instance. That detail soon gets lost. Headlines trumpet the concept, and warp the general understanding of where AI is and how close it is to accomplishing more.”
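To make the exponentials point concrete, here is a small, purely illustrative Python sketch (it is not from Brooks’s talk, and the growth rate and ceiling are arbitrary assumptions). A logistic curve, which saturates at a fixed ceiling, looks much like an exponential in its early stages, so extrapolating from early data alone can badly overshoot.

```python
import math

def exponential(t, rate=0.5):
    """Pure exponential growth, starting from 1.0 at t = 0."""
    return math.exp(rate * t)

def logistic(t, rate=0.5, ceiling=100.0):
    """Logistic growth: starts at 1.0, looks exponential early, then saturates at `ceiling`."""
    return ceiling / (1.0 + (ceiling - 1.0) * math.exp(-rate * t))

# Compare the two curves at a few points in time.
for t in range(0, 21, 4):
    print(f"t={t:2d}  exponential={exponential(t):10.1f}  logistic={logistic(t):6.1f}")

# For the first few steps the two curves are of similar magnitude; by t = 20 the
# exponential has exceeded 22,000 while the logistic has flattened just below 100.
```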

The bottom line? We’ve come a long way and there’s still a long way to go.

This blog first appeared June 6 here.
