Neural networks are accurate but uninterpretable; decision trees are interpretable but inaccurate in computer vision. We have a solution. Don't take it from me: take it from IEEE Fellow Cuntai Guan, who recognizes that "many machine decisions are still poorly understood". Most papers even suggest a rigid dichotomy between accuracy and interpretability. Explainable AI (XAI) attempts to bridge this divide, but as we explain below, XAI justifies decisions without interpreting…