The main reason people want a model to be interpretable is trust: if you don't understand the inner workings and the reasoning behind a prediction, you won't trust it.