Explainable Artificial Intelligence

Yeliz Döker
5 min read · Aug 25, 2019

A major problem with Artificial Intelligence (AI) is that the system cannot describe the reasoning behind its actions the way an interpretable model such as a decision tree can; accountability and fairness are limited by the machine's persistent inability to explain itself. Because the mechanism's decision process is opaque (the "black box"), society is intimidated by AI developments, and it is therefore scholars' duty to examine this process and make it transparent to humanity.

AI belongs to the broader family of algorithms; unlike a conventional algorithm, however, an AI system can amend its own rules and build new ones in response to the inputs and data it receives, instead of relying solely on what it was originally given. The machine is assigned a task and fed vast amounts of data, and at the end of the process it is expected to decide the most effective and acceptable way to complete that task. Accountability and fairness issues arise in this process, since the data or inputs gathered may be discriminatory or outright biased. Yet algorithmic bias is hard to catch because of the inscrutability of the black box: the decision-making phase itself is undefinable.
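
To make that transparency gap concrete, here is a minimal sketch in Python using scikit-learn; the loan-approval features, data, and decision rule are invented for illustration and do not describe any real system. A shallow decision tree can print its entire decision logic for auditing, while an ensemble "black box" returns only an answer.

```python
# A toy contrast between an interpretable model and a black box.
# Feature names and data are hypothetical, for illustration only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))      # columns: income, debt ratio
y = (X[:, 0] - X[:, 1] > 0).astype(int)   # toy rule: approve if income exceeds debt

# An interpretable model: its full decision logic can be printed and audited.
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["income", "debt_ratio"]))

# A "black box": hundreds of trees vote; no comparable summary of the
# reasoning behind a single prediction is directly available.
forest = RandomForestClassifier(n_estimators=300).fit(X, y)
print(forest.predict([[0.9, 0.2]]))       # a decision, but no rationale
```

The forest may predict just as accurately, but nothing like the printed tree exists for it; that asymmetry is the black-box problem in miniature.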

Explainable AI (XAI) is therefore proposed as the solution for justifying the decisions these systems produce. To understand, control, and even improve the decision-making process, users and engineers must grasp the rationale behind the system itself. Data is the lifeblood of AI, and this data tends to be biased and discriminatory, so the decisions produced follow the inputs and become contaminated. This vagueness of AI's decisions has sparked a debate on accountability and fairness, a debate exacerbated by the biased decisions these mechanisms have already delivered. For instance, Google's translation algorithm was found to impose gender stereotypes when translating sentences from a gender-neutral language into one with a pronoun for each gender, and an image-recognition algorithm labelled an African American couple as gorillas. Whether committed by a human or not, these are violations of human rights, since both direct and indirect discrimination are prohibited by the European Convention on Human Rights.
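
The translation failure can be traced to statistics in the training data. The sketch below, in the spirit of the Caliskan et al. (2017) study listed under Further Reading, uses tiny hand-made vectors (hypothetical, not real learned embeddings) to show how gendered associations in word vectors would push a translator toward "he is a doctor" and "she is a nurse" when the source language has no gendered pronoun.

```python
# Toy word "embeddings" (invented 3-d vectors, not learned from a corpus)
# illustrating how occupational words can drift toward gendered words.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

vec = {
    "he":     np.array([1.0, 0.1, 0.0]),
    "she":    np.array([0.1, 1.0, 0.0]),
    "doctor": np.array([0.9, 0.3, 0.2]),  # skewed toward "he" in the toy data
    "nurse":  np.array([0.2, 0.9, 0.2]),  # skewed toward "she"
}

for word in ("doctor", "nurse"):
    print(word,
          "he:",  round(cosine(vec[word], vec["he"]), 2),
          "she:", round(cosine(vec[word], vec["she"]), 2))
# A translation system that picks pronouns from such associations would
# render a genderless source sentence as "he is a doctor", "she is a nurse".
```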

To settle the debates around unreliable decisions, "explainability" will be the fundamental remedy, and XAI has become the main topic of conversation: understanding individual decisions is not enough, because for society to trust these systems, it must trust the mechanism itself. Moreover, the law should be ready to make the necessary amendments to prevent unlawful actions performed by AI, since in today's world ignoring the massive development of these technologies would be pointless.

Moreover, the General Data Protection Regulation (GDPR) has the authority to constrain the data fed to AI, since Article 22 addresses automated decision-making (ADM) systems. The GDPR was established mainly to protect data subjects by safeguarding fairness and fundamental rights, such as the right to privacy and the right to non-discrimination; the historical data already embedded inside an AI, however, falls outside its scope. In particular, Article 22 is restricted to decisions made solely by an algorithm, which excludes the essential case of algorithms that merely assist human decision-making, even though that is exactly the kind of algorithm corporations tend to use today. In fact, from a practical vantage point, hardly any algorithm decides entirely on its own and acts accordingly, so this clause creates a deadlock for the use of the provision.

Among the rights given to data subjects, a "right to explanation" could be counted as one such remedy, as per Recital 71. It should be noted, however, that this alleged remedy appears in the non-binding Recital 71, not in the binding Article 22, and there is public disagreement over whether the GDPR rules on ADM create a "right to explanation" of decisions concerning individuals at all. It would also be immensely difficult to obtain an explanation from a complex mechanism with an opaque brain. Even if the code were audited by a professional bystander, the audit would not yield meaningful information about the decision-making process, since an AI can only be assessed in practice, in real life with real users.
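
One proposed way around this difficulty, cited under Further Reading (Wachter et al.), is the counterfactual explanation: instead of opening the black box, tell the data subject the smallest change to their input that would have flipped the decision. Below is a minimal sketch of that idea; the two-feature loan model, the data, and the grid search are all invented for illustration.

```python
# A toy counterfactual explanation: find the closest approved applicant
# to a rejected one, without inspecting the model's internals.
# Model, features, and data are hypothetical simplifications.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(300, 2))              # columns: income, existing debt
y = (X[:, 0] - 0.8 * X[:, 1] > 0.1).astype(int)   # toy ground truth: loan approved?
model = LogisticRegression().fit(X, y)

applicant = np.array([0.3, 0.5])
print("decision:", model.predict([applicant])[0])  # expected: 0 (rejected)

# Search a grid of hypothetical applicants for the approved point closest
# to this applicant; the difference is the counterfactual explanation.
grid = np.array([[i, d] for i in np.linspace(0, 1, 101)
                 for d in np.linspace(0, 1, 101)])
approved = grid[model.predict(grid) == 1]
closest = approved[np.argmin(np.linalg.norm(approved - applicant, axis=1))]
print("counterfactual:", closest.round(2),
      "change needed:", (closest - applicant).round(2))
```

The result reads as an explanation of the form "had your income been this much higher and your debt this much lower, the loan would have been approved", and it requires only query access to the model, which is why it sidesteps the opacity problem described above.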

XAI is not yet a reality, even though it is now a top priority for researchers and businesses. If these systems are to continue being used and developed, the words "fairness, transparency, and accountability" must carry vital importance for scientists, scholars, and engineers. XAI will be the starting point for creating trustworthy mechanisms: AIs with the capacity to elucidate their rationale and to distinguish the weak and strong parts of their own systems. This new generation of AI, however, has not yet been achieved.

Yeliz Figen Döker

Further Reading:

· Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L119/1.

· Adadi A. and Berrada M., ‘Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)’ (2018) IEEE Access

· Borgesius F.Z., ‘Discrimination, artificial intelligence, and algorithmic decision-making’ (2018) Directorate General of Democracy, Council of Europe

· Burrell J., ‘How the Machine “Thinks”: Understanding the Opacity in Machine Learning Algorithms’ (2016) 3 Big Data & Society 1.

· Crawford K. et al., ‘The AI Now Report: The Social and Economic Implications of Artificial Intelligence Technologies in the Near Term’ (2016) AI Now public symposium, hosted by the White House and New York University’s Information Law Institute

· Edwards L. and Veale M., ‘Enslaving the Algorithm: From a “Right to an Explanation” to a “Right to Better Decisions”?’ (2018) IEEE Security & Privacy

· Edwards L. and Veale M., ‘Slave to the Algorithm: Why a Right to an Explanation is Probably Not the Remedy You are Looking For’ (2017) 16 Duke L. & Tech. Rev. 18

· Gunning D., ‘Explainable Artificial Intelligence (XAI)’ Defense Advanced Research Projects Agency

· Caliskan A. et al., ‘Semantics Derived Automatically from Language Corpora Contain Human-Like Biases’ (2017) 356 Science 183–84.

· Goodman B. and Flaxman S., ‘European Union regulations on algorithmic decision-making and a “right to explanation”’ (ICML Workshop on Human Interpretability in Machine Learning, New York, 2016).

· Wachter S. et al., ‘Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation’ (2017) International Data Privacy Law

· Wachter S. et al., ‘Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR’ (2018) 31 Harvard Journal of Law & Technology

· FAT/ML, Fairness, Accountability, and Transparency in Machine Learning, http://www.fatml.org

· FICO Community, ‘Explainable Machine-Learning Challenge’, https://www.fico.com/blogs/analytics-optimization/how-to-make-artificial-intelligence-explainable/
