Although I agree with the intuition that it is unlikely we will live in a society of bug-free code, the case is not made here. This is why:
1. There is no discussion of the systems that in fact do run very well. (It's trivial to create a well-defined system that stays in production for decades; I have done it myself.)
(For those less interested in anecdote, the works of Margaret Hamilton are worth investigating: https://en.wikipedia.org/wiki/Margaret_Hamilton_(scientist))
2. The entirety of this argument is predicated on the relationship between business cycles and development cycles.
3. Although there are famous results like the halting problem in the literature, in practice it is fairly trivial to create robust systems in spite of those theoretical restrictions (see the sketch after this list).
4. You fail to address the fundamental issues that actually result in large-scale failures: namely, lack of formal specification, lack of formal verification, and an over-reliance on Von Neumann architectures (which resist formal verification).
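To make point 3 concrete, here is a minimal sketch (Python; the function names and deadline value are my own illustration, not anything from the article). A robust system does not need to decide whether arbitrary code terminates; it only needs to bound execution, which is how practical systems sidestep the halting problem:

    # Sketch: we never decide termination (undecidable in general); we just
    # enforce a deadline and recover, which is enough for robustness.
    import multiprocessing

    def run_with_deadline(task, args=(), deadline_s=2.0):
        """Run `task` in a child process and kill it if it exceeds the deadline."""
        proc = multiprocessing.Process(target=task, args=args)
        proc.start()
        proc.join(deadline_s)
        if proc.is_alive():
            proc.terminate()  # the watchdog fires instead of "solving" halting
            proc.join()
            return "timed out"
        return "finished"

    def possibly_nonterminating():
        while True:  # stands in for code whose termination we cannot predict
            pass

    if __name__ == "__main__":
        print(run_with_deadline(possibly_nonterminating, deadline_s=0.5))  # -> "timed out"

Real systems layer this same idea (timeouts, hardware watchdog timers, supervisors that restart failed processes) rather than trying to prove termination.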
Today it is rare for software to be formally verified, and even formal specification is on the decline. These trends could change, but who would cause that to happen? The public does not have the domain-specific knowledge to force governments or corporations to do the right thing. Corporations save billions of dollars a year by oiling only the squeaky wheels. Governments seem to be addicted to those squeaky wheels as a means of grift/nepotism and as a major component of the contemporary cybersecurity narrative.
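For readers who have not met these terms, here is a rough illustration of the gap (Python; the clamp function and its contract are my own example, not the author's). An executable specification only catches violations at runtime or under testing; formal verification proves the contract for every input using a tool such as TLA+, Coq, or Dafny:

    # Lightweight, executable specification -- not formal verification.
    def clamp(x, lo, hi):
        """Contract: result lies in [lo, hi]; if x is already in range, result == x."""
        assert lo <= hi, "precondition: lo <= hi"
        result = min(max(x, lo), hi)
        assert lo <= result <= hi, "postcondition: result stays in range"
        assert not (lo <= x <= hi) or result == x, "postcondition: in-range x unchanged"
        return result

    # Exhaustively checking a small finite domain is as far as plain Python goes;
    # a theorem prover would cover all inputs instead.
    for lo in range(-3, 4):
        for hi in range(lo, 4):
            for x in range(-6, 7):
                clamp(x, lo, hi)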
That last point, the cybersecurity narrative, is where I think leverage can be applied over the long run: the West, Russia, and China all rely on software bugs for espionage. I think there is a strong argument that secure computer networks might actually be a net gain for every member of every society, even with respect to espionage. But my reply has run long already…