Reading Reflection 1
I first want to mention how much I appreciate what “Questioning the AI: Informing Design Practices for Explainable AI User Experiences” actually delivers: a usable question bank. It is practical and, at this very moment, could be applied to real-world systems. This sets it apart from some other literature, for instance, the paper by Amershi et al. mentioned on page 2, which proposed “18 usability guidelines for AI systems” but did not “provide actionable guidance on how to actualize these capabilities.” Also, the study in “Questioning the AI” formulated questions rather than, say, principles, so as to encourage interaction and conversation, which I believe is essential for understanding how to use AI algorithms most effectively.
The questions in the paper’s question bank can serve as a tool for advancing society’s understanding of complex AI systems. In my opinion, professionals like teachers and insurance agents will never be fully replaced by technology, because of the human ability to intuit someone else’s view of the world and to personalize an explanation of the key parts of a complex concept. The paper echoes this idea when it states that human explanations are “desirable” because they are “selected”, meaning information can be conveyed compellingly without overwhelming someone, and “social”, meaning another person is there to help acknowledge the uncertainty and randomness that surround many AI systems.
“Questioning the AI” notes that, currently, data scientists and design practitioners take on this role of explaining how AI works, but “their advocacy is often hindered by skill gaps to engage themselves and the team in ‘finding the right pairing to put the ideas of what’s right for the user together with what’s doable given the tools or the algorithms’.” I wonder whether a popular job in the future might be that of an AI consultant, whose primary responsibility is to speak with customers who want to use AI systems, figure out their needs, and enable them to adapt their usage to better utilize the AI or optimize it for fairness and sustainability. Such consultants could train using a question bank like the one provided in the paper, and they would need experience with algorithms, user research, and data science.
Perhaps the most critical piece of developing new technology is enabling its integration into society, and we cannot do that without simplifying how it is understood and communicating its effectiveness to the broader community. I recently heard a talk by a software engineer who said that the most crucial part of his job is being able to explain his work to people. He needs to speak about his code accurately and precisely while ensuring it is understandable to a layperson. Furthermore, he needs to convince his colleagues, managers, and customers that what he is doing is relevant and important. I see parallels between how he views software and how the authors of the paper view AI: as something that might truly be useless if misunderstood.