Hey Tom —

Thanks for your thoughtful and detailed response! There is much to agree with in your nuanced writing. One of the few things I’d caution about is over-extrapolating from the well-known failings of the 80s AI boom. That was almost half the history of computing ago in terms of linear time; probably 95% of computing history “ago” in terms of the cumulative engineer-years dedicated to advancing computing worldwide; and it would be hard to argue that the tools or relevant horsepower of those days were remotely close to 1% as capable as those we have today. Millennia (literally!) of failed attempts at flight might have been sagely cited as proof of impossibility in the early 20th century. But then Kitty Hawk. And a few decades later, Apollo 11.

A two-pack-a-day smoker would indeed be misguided to fuss about asteroid strikes. But a society that contains him and 7.5 billion others can (and should) worry about this and other extreme scenarios, at least slightly. The odds against a devastating meteor impact stand at “eight nines” in any given year (roughly a 1-in-100-million chance), as I cite in my article — below the threshold of concern for an individual, but well above it for a planetary species whose best years (and the vast majority of whose individuals) lie ahead of it, if it doesn’t screw things up.

But should our smoker refrain from using seatbelts simply because the danger of lung cancer exceeds that of a deadly collision? Of course not. No individual or society faces a solitary risk, so precautions can and should be taken on multiple fronts. The fact that smoking is more dangerous would be an insane reason to take no precautions against car wrecks, and saying that climate risk exceeds other risks is likewise a poor reason to disregard those other dangers.

I agree with your uncertainty about the emergence of strong AI, in that it could come this year, decades hence, or centuries hence (and to this list, I’d add “never” as a very real possibility). But I don’t think anyone can preclude the attendant dangers with very many nines of certitude. I’d be quite open to one nine myself, if the case for that were made more strongly than I’ve seen it made thus far. But two nines would be a tough sell, and I haven’t seen a serious attempt at making an argument that strong anywhere. And two nines (i.e., a residual 1% chance of catastrophe) maps to an expected value of more deaths than we saw in WWII — even if we make the extravagant assumption that future humans and their interests have zero value (an assumption that is a major source of our failings on climate precautions…).
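
For concreteness, here is a back-of-the-envelope version of that expected-value step, assuming the roughly 7.5 billion people mentioned above as the population at risk and reading “two nines” as 99% confidence that no such catastrophe occurs:

$$
E[\text{deaths}] = (1 - 0.99) \times 7.5 \times 10^{9} = 0.01 \times 7.5 \times 10^{9} = 7.5 \times 10^{7}
$$

That is, about 75 million expected deaths, which is the comparison to the WWII toll being drawn above — and it ignores everyone not yet born.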

Again though — I’ll close by restating my meta point that your arguments are quite sound, and extremely well-presented. I’m honored that you made the time to present them to me.