As Yogi Berra (maybe apocryphally) and Niels Bohr (really) said, it’s difficult to predict…
Henry Jekyll

Henry, I am sympathetic.

We screwed up nuclear predictions because we underestimated the safety issues. Reactors kept having to be redesigned — instead of becoming cheaper with every generation, they became more expensive. Concrete isn’t free, and the improvements we got from experience in making plants couldn’t keep up with the rising expectations we had for safety. You could argue that something similar might happen for self-driving cars.

But the rate of improvement in AI is shockingly faster than in nuclear. Even as our expectations for safety rise, adding code doesn’t add marginal cost. You do the R&D once, and then deploy. No concrete to pour.

I’m still not wholly satisfied that I understand how to predict rates of progress in the AI field. It’s a problem that I’m actively trying to wrap my brain around. But the experimental evidence so far indicates that these problems are more tractable than we feared. Or perhaps so tractable that we should fear more…