An article I read recently describes a similar issue from a different perspective: using metrics to evaluate.
When metrics are left opaque, you can never really be certain that the results come from methods that work, or merely from methods that seem to. There's a reason that in Freestyle chess, the human+program teams beat both programs and grandmasters playing on their own. AI will never negate the need to be rational, observant, and empathetic. Any AI that solves problems for humans will need human input, and the interfaces that best allow for efficient and effective human input will win. This is not just because such systems will be more effective, but because people will choose the system with the better interface over one that is confusing or frustrating, even (and this is important to understand) if the complicated system gets better results.