The core issue here isn’t that AI is worse than the existing human-led processes used to make predictions and assign rankings. Indeed, there’s much hope that AI can provide more objective assessments than humans, reducing bias and leading to better outcomes. The key concern is that AI systems are being integrated into key social institutions even though their accuracy, and their social and economic effects, have not been rigorously studied or validated.
Who rigorously studied and validated the existing human-led processes that computer systems are replacing?
If you replaced each instance of “AI” with “bureaucracy”, this post could have been written 50 years ago. Does it make a meaningful difference to a person whose loan was rejected or whose content was censored that the decision came from a computer following an algorithm rather than a human following a procedures manual? If anything, it’s easier to hold algorithms accountable, because you can at least point to a single codebase and team of programmers, rather than having responsibility diffused across a giant organization.
Holding large organizations accountable is hard. It always has been. The real question isn’t humans versus computers; it’s how much power these institutions should have in the first place.