Why should this be the key concern? It's not as if AI systems are replacing infallible non-automated systems. Either they are replacing fallible human systems, or they're doing jobs that simply could not be done otherwise.
The problem isn't AI or its fallibility; the problem is that no system is complete without feedback and correction mechanisms. True AI systems typically handle this better than most, because AI requires feedback to become intelligent in the first place. But all decision-making systems require feedback and correction.