Rebooting AI: Experts Call for Real Progress

Katia Karpenko
Jan 3
Image from Amazon.com

When Elon Musk and Jack Ma famously sat down for a chat about AI, their thoughts were inspiring for some and excruciating for others. The conversation was an all-too-common display of fantastical forecasts and philosophical musings. The status quo of conversations about AI shoots us into the future, where we allow ourselves to get far ahead of the facts simply because such discussions are admittedly more fun. But we’re nowhere near the outcomes those discussions take for granted. We need to get back to the present, so that we can actually solve the problems standing in the way of our projected future.

For that, I recommend a new book, Rebooting AI: Building Artificial Intelligence We Can Trust, which has set off a buzz in the land of AI enthusiasts and skeptics alike. Authored by Gary Marcus and Ernest Davis, two of the world’s leading AI researchers and thinkers, the book comes at a time when AI seems to be all the talk. Will AI doom us? Will it save us?

Rebooting AI speaks of neither doom nor salvation. It instead presents a balanced analysis of where the numerous fields of AI are today, and where they will realistically head in the future. Amidst all the uncertainty, or the abundance of the overly certain, it’s a relief to have this book land in our hands to bring us back to sanity, and to guide us out of a number of possible dead ends.

The book calls for a reboot that AI “desperately needs” and lays out a three-part formula for how we should approach true artificial intelligence. AI systems should be “deep, reliable, and trustworthy.” Marcus and Davis warn of nine risks (see Appendix 1) that plague the field of AI and provide an overview of the different branches of AI (see Appendix 2). Here, I will touch on only a few of those, namely: the inflexibility of machine learning, the flexibility of the human brain as a model for AI, how we might consider structuring AI going forward, what safety precautions we should have in place, and general conclusions of what we need to make progress.

Let’s first delve into one branch in particular: deep learning.

Deep learning is exciting because it is a “simple equation that seems to solve so much.” This is actually problematic because it’s misleadingly attractive. It creates what’s called an “illusory progress gap.” It can make positive results look like they’re born of true intelligence. Ever heard of the Chinese Room Argument? The thought experiment is worth looking up. In a nutshell, it imagines a system that manipulates Chinese symbols according to rules well enough to fool a human Chinese speaker into believing it speaks and understands the language when, in fact, it does not comprehend the content at all. Google Translate, for example, is akin to the Chinese Room. Its answers stem from the expansive language data it has memorized. It does not actually comprehend the content. The façade of Google Translate (or chatbots, as another example) brings us nowhere near the intelligence we see in Ex Machina and Her.
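To make the Chinese Room idea concrete, here is a deliberately crude sketch of my own (not from the book): a “translator” that only looks up memorized phrase pairs. The phrase table and example inputs are invented for illustration; real systems like Google Translate are statistically far more sophisticated, but the point about the absence of comprehension is the same.

```python
# A toy illustration (mine, not the book's) of the Chinese Room idea: a
# "translator" that only looks up memorized phrase pairs. The phrase table
# and inputs are invented purely for illustration.

PHRASE_TABLE = {
    "你好": "hello",
    "谢谢": "thank you",
    "我不明白": "I don't understand",
}

def room_translate(phrase: str) -> str:
    """Return a memorized answer if one exists; no meaning is ever modeled."""
    return PHRASE_TABLE.get(phrase, "???")

# Looks competent on phrases it has seen...
print(room_translate("你好"))    # -> hello
# ...but there is no understanding to fall back on for anything new.
print(room_translate("你好吗"))  # -> ???
```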

Another downside to machine learning in general (since deep learning is a subset of machine learning) is that it is not built for rapid learning. As humans, we make a multitude of inferences to understand the world around us: “[M]uch of what we do as humans we learn in just a few moments; the first time you are handed a pair of 3-D glasses you can probably put them on and infer roughly what is going on, without having to try them on a hundred thousand times.” Repetition upon repetition, unnecessary for a human, would be absolutely necessary for machine learning to achieve the same results. Humans are drunken masters of sorts, if you remember the classic Jackie Chan movie. We’re only just beginning to wrap our minds around how our brains approach tasks and how it is that we learn so quickly. So, while we are masters at learning in a way that machines can still only dream of (were they capable of dreaming), we’re pretty drunk about it, in that we are only partially aware of what’s going on behind the scenes of our own perception.
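As a rough illustration of that gap in learning speed, here is a toy sketch of my own (not from the book): a single logistic unit trained by gradient descent has to be shown the same two trivial facts over and over before it becomes confident about a rule a person would grasp at a glance. The task, learning rate, and confidence threshold are arbitrary choices made for the example.

```python
# A minimal sketch of why gradient-based learners need repetition: a logistic
# unit must see the same two trivial examples many times before it becomes
# confident. The task and hyperparameters are invented for illustration.

import math

data = [(+1.0, 1.0), (-1.0, 0.0)]   # rule: positive input -> class 1
w, b, lr = 0.0, 0.0, 0.1

def prob(x):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

presentations = 0
# Keep training until both examples are classified with >= 95% confidence.
while min(prob(x) if y else 1 - prob(x) for x, y in data) < 0.95:
    for x, y in data:
        p = prob(x)
        w -= lr * (p - y) * x      # gradient step on the log-loss
        b -= lr * (p - y)
        presentations += 1

print(f"needed {presentations} presentations of two facts to become confident")
```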

We should not, however, strive for machines to copy the human brain either. Though almost magical in its ability to process the world around it and feed us inferences about what it gathers, the human brain is flawed in many ways. In particular, we shouldn’t copy the brain’s limitations. But in designing machines, we should certainly draw inspiration from how human minds work. The book presents this premise for AI research going forward: “The brain is a highly structured device, and a large part of our mental prowess comes from using the right neural tools at the right time. We can expect that true artificial intelligences will likely also be highly structured, with much of their power coming from the capacity to leverage that structure in the right ways at the right time, for a given cognitive challenge.”

Generalizations are both the human brain’s flaw and winning factor. Marcus and Davis applaud the ability of the human mind to cope with incomplete and inexact information: “[For now], we have no idea how to simulate the human brain, [since] capturing even a second in the life of [a person]’s nervous system might take decades of computer time. We need systems that abstract from the exact physics, in order to capture psychology.” It is this abstraction that is so hard to achieve in AI. As Bertrand Russell once wrote, “All human knowledge is uncertain, inexact, and partial.” Our brain’s extraordinary ability to fill in the blanks allows it to find order in chaos, without getting lost in it.

Now, we’ve certainly seen admirable advancements in AI. And though it is vital to celebrate even the smallest of advancements, it is also important to keep sight of where AI actually stands today. There is, without a doubt, a glowing illusion of progress that the media, and even researchers themselves, fall for. We may be building AI in entirely the wrong way. Or, in a slightly more hopeful scenario, we might have some puzzle pieces in place, while the vast majority of the others remain unsorted, misplaced, or entirely unfound. Marcus and Davis put it this way: “[P]eople have gotten enormously excited about a particular set of algorithms that are terrifically useful, but that remain a very long way from genuine intelligence — as if the discovery of a power screwdriver suddenly made interstellar travel possible.” Or, in fewer words, Law 31 of Akin’s Laws of Spacecraft Design states: “You can’t get to the moon by climbing successively taller trees.” Indeed, we cannot bet all our money on screwdrivers or tree climbing alone. We need to look for more ways to foster hybrid systems in order to achieve broader applications of AI.

We already have an example of a successful hybrid system: the AI that mastered the game of Go. Two different approaches were combined: deep learning and Monte Carlo Tree Search, the latter a hybrid in itself (game tree search plus Monte Carlo search). Had the systems remained separate, they would not have succeeded. The lesson here, Marcus and Davis suggest, is that “AI, like the mind, must be structured, with different kinds of tools for different aspects of complex problems.” The authors also lay out their reasoning for why learning from a clean slate, as machine learning does, is unlikely to work going forward. They doubt that, as an example, a system “will eventually learn all it needs to know just by watching YouTube videos, with no prior knowledge.”
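For readers who want a feel for how such a hybrid fits together, here is a schematic sketch. It is not AlphaGo’s actual code: `policy_value_net`, `legal_moves`, and `play` are toy placeholders standing in for a trained network and a real game, and the sign flipping needed for an opposing player is omitted for brevity. It shows the general pattern: tree search supplies the deliberation, while a learned model supplies move priors and position evaluations.

```python
# Schematic hybrid of Monte Carlo Tree Search with a learned policy/value
# model. All game-specific pieces below are toy stand-ins for illustration.

import math, random

def legal_moves(state):
    """Toy 'game': nine positions; any not yet played is legal."""
    return [m for m in range(9) if m not in state]

def play(state, move):
    return state + (move,)

def policy_value_net(state):
    """Stand-in for a trained network: returns (move -> prior, value estimate)."""
    moves = legal_moves(state)
    if not moves:                        # terminal position in the toy game
        return {}, 0.0
    return {m: 1.0 / len(moves) for m in moves}, random.uniform(-1, 1)

class Node:
    def __init__(self, prior):
        self.prior, self.visits, self.value_sum, self.children = prior, 0, 0.0, {}

    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.5):
    # PUCT rule: exploit the average value, explore in proportion to the prior.
    def score(child):
        return child.value() + c_puct * child.prior * math.sqrt(node.visits) / (1 + child.visits)
    return max(node.children.items(), key=lambda kv: score(kv[1]))

def mcts(root_state=(), simulations=200):
    root = Node(prior=1.0)
    for _ in range(simulations):
        node, state, path = root, root_state, [root]
        # 1. Selection: walk down the tree using the PUCT rule.
        while node.children:
            move, node = select_child(node)
            state = play(state, move)
            path.append(node)
        # 2. Expansion + evaluation: the learned model replaces random rollouts.
        priors, value = policy_value_net(state)
        node.children = {m: Node(p) for m, p in priors.items()}
        # 3. Backup: propagate the value estimate along the visited path.
        for visited in path:
            visited.visits += 1
            visited.value_sum += value
    # Recommend the most-visited move, as AlphaGo-style systems typically do.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

print("recommended opening move:", mcts())
```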

This is where classical AI steps in. It’s an approach that allows us to hand-code knowledge or rules into a system (think Isaac Asimov’s “Three Laws of Robotics,” though there are many problems with those laws) before it begins learning on its own. Marcus and Davis think that “learning from an absolutely blank slate, as machine-learning researchers often seem to wish to do, makes the game much harder than it should be. It’s nurture without nature, when the most effective solution is to combine the two.” Unfortunately, many see pre-wiring as cheating. Results born of machines with little to nothing built in are seen as more impressive. One could argue that there are many aspects of encoded information that we do not understand, as with our own DNA. In pursuing machine learning, we’re taking the route of evolution by starting from scratch and relying on the program, like DNA, to write itself. There may be benefits and unexpectedly brilliant solutions — think of the unorthodox strategies the AI used in Go — but we might also end up with a mysterious black box we cannot inspect or edit in any way. (People in the field of genome editing may, of course, disagree.) Classical AI, however, would give us a better opportunity to interact with the system and edit it if necessary, since we would have hand-coded some of its parts. Being unable to intervene could have dangerous repercussions. This, in turn, brings us to the topic of safety.
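Here is a toy sketch, again my own rather than the authors’, of what “nature plus nurture” can look like in practice: hand-coded rules are consulted first and can veto or override a learned scorer, so the system stays inspectable where it matters. The message-filtering scenario, the rules, and the threshold are invented purely for illustration.

```python
# Minimal sketch of combining built-in rules ("nature") with a learned
# component ("nurture"). The scorer, rules, and threshold are invented.

def learned_risk_score(message: str) -> float:
    """Stand-in for a trained model; here, just a crude keyword heuristic."""
    spammy = ("free money", "act now", "winner")
    return min(1.0, sum(w in message.lower() for w in spammy) / len(spammy) + 0.1)

HARD_RULES = [
    # Rule 1: never flag messages from senders we hand-coded as trusted.
    lambda msg, sender: "allow" if sender.endswith("@example.org") else None,
    # Rule 2: always flag messages that ask for your password.
    lambda msg, sender: "block" if "password" in msg.lower() else None,
]

def classify(message: str, sender: str) -> str:
    for rule in HARD_RULES:            # built-in knowledge is consulted first
        verdict = rule(message, sender)
        if verdict:
            return verdict
    # Otherwise fall back to the learned component.
    return "block" if learned_risk_score(message) > 0.5 else "allow"

print(classify("You are a WINNER, act now for free money!", "stranger@spam.net"))
print(classify("Quarterly report attached.", "colleague@example.org"))
```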

It is vital to think about safety sooner rather than later. It’s much more difficult, if not impossible, to survive an avalanche after you’ve been swept away. Marcus and Davis advocate for safe AI, or “trustworthy AI.” For trustworthy AI to exist, there should be laws and industry standards in place. This entails thinking long-term, while the status quo opts for short-term solutions, where code is meant to immediately run a system, with few side effects taken into account. The authors give an approving nod to how engineers in other fields do business, where, in safety-critical situations, “good engineers always design structures and devices to be stronger than the minimum that their calculations suggest. If engineers expect an elevator to never carry more than half a ton, they make sure that it can actually carry five tons.” AI, by contrast, is normally celebrated as soon as it works, to any degree of function. And that’s seen as good enough. We need to change our attitude to one that is less casual, because the stakes will, eventually, be high. We must design for failure. Though it is impossible to anticipate the many ways something can go wrong, we need to do the maximum, rather than the minimum, in our power to avert mistakes. The space shuttle, for example, had five identical computers on board “to run diagnostics on one another and to be backups in case of failure. As long as any of the five was still running, the shuttle could be operated.” AI is, without a doubt, an area of our lives where we must opt to overthink each detail and plan for the long term more than the short term.
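In that spirit, here is a minimal sketch (mine, not the book’s) of shuttle-style redundancy applied to an AI component: several independently built estimators run in parallel, and the system refuses to act when they disagree beyond a tolerance, failing safe instead. The estimators and the tolerance are invented for illustration.

```python
# Toy redundancy wrapper: cross-check several independent estimators and
# fail safe when they disagree. All numbers are invented for illustration.

from statistics import median

def estimator_a(x): return 2.0 * x + 0.10   # e.g., a model trained on dataset A
def estimator_b(x): return 2.0 * x - 0.05   # e.g., a model trained on dataset B
def estimator_c(x): return 2.0 * x + 0.02   # e.g., a hand-coded physics rule

def redundant_estimate(x, tolerance=0.5):
    votes = [f(x) for f in (estimator_a, estimator_b, estimator_c)]
    center = median(votes)
    if max(abs(v - center) for v in votes) > tolerance:
        raise RuntimeError("estimators disagree; fail safe instead of acting")
    return center

print(redundant_estimate(3.0))   # estimators agree -> returns the median vote
```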

This brings us to the long-term risk of ignoring safety precautions. In the cyberworld, for example, there’s been an unfortunate prevalence of infrastructure vulnerable to unexpected failures and cyberattacks. Remember the notorious example of NotPetya? Wired Magazine’s account of that encrypting malware’s worldwide rampage is a truly fascinating read. Now recall the downsides of a black box: it is difficult, if not impossible, to service. Marcus and Davis put it this way: “Car engines have to be serviceable; an operating system has to come with some way of installing updates.” Similarly, we need to be able to fix any issues that may arise in an AI system. AI maintenance technicians could be highly specialized and highly paid professionals of the future. At present, however, AI is run by big data and deep learning, with “hard-to-interpret models that are difficult to debug and challenging to maintain.”

Furthermore, better engineering requires better metrics. The best-known metric we have for artificial general intelligence is the Turing Test. But it only tests whether a machine can fool a human into thinking that it, too, is human. We need metrics for the inner workings, rather than for a successful façade. The goal of AI, Marcus and Davis argue, is to “understand and act in the world in ways that are useful, powerful, and robust.” The Turing Test fails us here. The Allen Institute for Artificial Intelligence has been working on alternatives: a wide array of challenges spanning language comprehension, inference about physical and mental states, understanding of YouTube videos and elementary science, and robotic abilities.

The authors argue we need two things to start making progress: an inventory of what kind of knowledge a general intelligence should have, and an understanding of how this knowledge would be represented inside a machine. This, and the eventual creation of an AGI, is the mission of our era. Interestingly, and perhaps counterintuitively, Marcus and Davis call for imbuing AI with common sense. They write: “Our current systems have nothing remotely like common sense, yet we increasingly rely on them. The real risk is not superintelligence, it is idiots savants with power, such as autonomous weapons that could target people, with no values to constrain them, or AI-driven newsfeeds that, lacking superintelligence, prioritize short-term sales without evaluating their impact on long-term values.”

Though we have an abundance of narrow AIs, the authors point out that we lack genuine intelligence. We have deep learning, but we do not have deep understanding. So, instead of shortsightedly focusing on narrow tasks, or thinking too far ahead, it’s time to broaden our current horizons and think of actual step-by-step solutions: to try new approaches, or to give older approaches like classical AI another shot. This will allow us to focus on the problems we face now, rather than fantasizing about an AI of the future in Musk-Ma fashion.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

Appendix 1

Nine Risks That Plague the Field of AI

  1. Overattribution errors — the risk of attributing far more intelligence to a machine than its narrow performance actually warrants
  2. Lack of robustness — an AI that works in one situation may not work in another
  3. Inflexible machine learning — learning in only one way, or from a limited set of data
  4. Blind data dredging — can lead to nasty side-effects like social biases
  5. An echo-chamber effect — systems trained on data they themselves generate
  6. Reliance on data that the public can manipulate
  7. Social biases amplified by the echo effect
  8. AI with the wrong goals
  9. The multitude of ways that an AI can cause serious public harm

-

Appendix 2

The Branches of AI

  • There are probabilistic models. They, as the name suggests, model probabilities by evaluating the landscape of possible answers to then output an answer that is most probable. This was behind the success of IBM’s Watson.
  • Genetic algorithms are modeled on the process of evolution. Essentially, they mutate candidate solutions to find ones that are better suited to a given task.
  • Gradient descent and backpropagation form another branch, dubbed “the workhorse of deep learning” (see the sketch after this list).
  • Another way to learn from large-scale data is deep reinforcement learning, whereby a system learns from trial and error.
  • Neural networks form another branch of study in AI. They offer a misleading glimmer of hope for building more human-like machines. Yet, neural networks imply neither a brain in the human sense, nor a more promising approach to building AI. They require too much computational power to run (when there are many layers of neurons) and are often “black boxes” because the inner workings are difficult to understand even for the researcher who builds them.
  • When it comes to deep learning in general, Marcus and Davis put it this way: “Deep learning is greedy. In order to set all the connections in a neural net correctly, deep learning often requires a massive amount of data. AlphaGo required 30 million games to reach superhuman performance, far more games than any one human would ever play in a lifetime.”
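As a concrete footnote to the “workhorse” bullet above, here is a minimal sketch of plain gradient descent fitting a tiny linear model. The data points and learning rate are invented for illustration; backpropagation is the same idea applied layer by layer through a deep network.

```python
# Plain gradient descent fitting y = w*x + b to a handful of points by
# minimizing mean squared error. Data and learning rate are illustrative.

data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8)]   # roughly y = 2x + 1
w, b, lr = 0.0, 0.0, 0.02

for step in range(2000):
    # Gradients of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"w ≈ {w:.2f}, b ≈ {b:.2f}")   # should land near w = 2, b = 1
```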

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

Please visit optionxbook.com to join the mailing list for my manuscript’s imminent release.

