Why General AI and Robots Won’t Take Over Any Time Soon

Tim Enwall
Misty Robotics
Mar 9, 2018

A Helen Greiner tweet on Tuesday, in response to "Elon Musk Blasts Harvard's Steven Pinker Over Comments Dismissing the Threat of Artificial Intelligence," prompted me to write this post.

I get asked all the time: "Will robots [eat our lunch] [take our jobs] [be the end of us] [your doomsday phrase here]?"

My answer: not any time soon.

Yes, robots can take over certain tasks we do and perform them better, either because the tasks are rote or because we humans are error-prone at them. And, yes, robots can amplify tasks we perform all the time through more powerful exoskeletons or collaborative appendages. But the state of the art (and it's a pretty expensive state at that) is robots that perform, essentially, one task.

Humans perform thousands of tasks per minute. We're constantly seeing, hearing, sensing, feeling, and thinking, flitting from one high-level concept to the next while multitasking a few hundred subconscious processes. We have hundreds of degrees of freedom across all our joints, muscles, and nerves.

This gap between very expensive task-specific robots and a generalized robot that can perform thousands of tasks, both physical and cognitive, per minute is beyond ginormous.

For us to get to a generalized AI that could subvert human dominance in the future, we're going to have to*:

  • build local processors with several orders of magnitude more computing capability
  • deliver network connectivity whose latency is, constantly, limited only by the speed of light
  • create processors capable of "learning" hundreds or thousands of lessons per minute
  • achieve hard-to-quantify leaps forward in battery price-performance
  • make motors radically smaller and more efficient (see batteries)

All of these advances have to occur, and, I'm sure, many more besides. Unlike semiconductors (which are slowing down as it is), batteries, motors, and networks have no Moore's Law (that I know of).

A very simple illustration here at Misty Robotics brought the situation home for me the other day.

We use two powerful processors in our robot, one of which is the Qualcomm Snapdragon 820 (Samsung Galaxy Note7 class). This processor is capable of running face detection and recognition, which we're doing in the Misty I robot. When describing the state of the art of this awesomeness, the engineers working on it noted that if the robot wanted to detect and recognize any other object, it would have to set aside this algorithm and run another. The two algorithms can't run at the same time.
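To make that constraint concrete, here's a minimal sketch in Python. It's hypothetical (the class and model names are mine, not Misty's actual code), but it captures what "one model at a time" means on a constrained embedded processor:

```python
# Hypothetical sketch: on a constrained processor, only one vision model
# fits in compute/memory at a time, so switching tasks means unloading
# one model and loading another. (Illustrative, not Misty's code.)

class VisionPipeline:
    def __init__(self):
        self.active_model = None  # only room for one model at a time

    def load(self, model_name):
        if self.active_model == model_name:
            return
        # During the swap, the robot is blind to whatever
        # the other model would have detected.
        self.unload()
        self.active_model = model_name
        print(f"loaded {model_name}")

    def unload(self):
        if self.active_model:
            print(f"unloaded {self.active_model}")
            self.active_model = None

pipeline = VisionPipeline()
pipeline.load("face_recognition")  # recognizing people...
pipeline.load("object_detection")  # ...means giving up face recognition
```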

Meanwhile, what we want from the Misty robot when it comes to object or face detection and recognition is for it to recognize my loved ones immediately: not after a one-second delay while it goes off to "the cloud" to consult and then, maybe, recognize the person (who moved out of the camera frame half a second ago). In other words, the latency introduced by today's Internet connections, where we might be able to access enough compute power to simultaneously recognize, say, 10 objects (not the hundred or thousand visible in any one frame), is just too much.
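A rough back-of-envelope calculation shows why. The numbers below are assumptions for illustration, not measurements:

```python
# How many camera frames go by while the robot waits on a cloud round trip?
# All numbers here are illustrative assumptions, not measurements.

fps = 30                       # a typical camera frame rate
frame_ms = 1000 / fps          # ~33 ms per frame
cloud_round_trip_ms = 250      # assumed: uplink + cloud inference + downlink

frames_missed = cloud_round_trip_ms / frame_ms
print(f"~{frames_missed:.0f} frames pass before an answer comes back")
# ~8 frames of a moving scene -- plenty of time for a face to leave the frame
```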

To which I thought: wow, if we can only learn/recognize one or a few objects at a time, then generalized AI, and the robots that would be its delivery vehicles, are a looooooonnnnnggg way away.

(*N.B.: I'm hardly a roboticist, robot expert, AI genius, or even a technical expert. I just run a robot company. And these opinions are mine and mine alone; they don't represent some well-vetted corporate-speak that has been through the marketing/PR wringer. Or the engineering team.)
