Applied AI Ethics Reading & Resource List

Jessica Rose Morley
Jun 29, 2019

Below is the full reading and resource list I have compiled for the ‘What to How’ of AI Ethics project with Luciano Floridi, Libby Kinsey, and Anat Elhalal.

The current version of the preprint is here: https://arxiv.org/abs/1905.06876

The current version of the typology is here: https://docs.google.com/document/d/1h6nK9K7qspG74_HyVlT0Lx97URM0dRoGbJ3ivPxMhaE/edit

a3i. (n.d.). The Trust-in-AI Framework. Retrieved from http://a3i.ai/trust-in-ai

Abdul, A., Vermeulen, J., Wang, D., Lim, B. Y., & Kankanhalli, M. (2018). Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems — CHI ’18, 1–18. https://doi.org/10.1145/3173574.3174156

Ackerly, B. A. (2018). Human Rights: Principles in Practice Without the Promise of Principles. Human Rights Review, 19(3), 391–394. https://doi.org/10.1007/s12142-018-0523-5

Acquisti, A. (2009). Nudging privacy: The behavioral economics of personal information. IEEE Security and Privacy, 7(6), 82–85. https://doi.org/10.1109/MSP.2009.163

Adamson, G., Havens, J. C., & Chatila, R. (2019). Designing a Value-Driven Future for Ethical Autonomous and Intelligent Systems. Proceedings of the IEEE, 107(3), 518–525. https://doi.org/10.1109/JPROC.2018.2884923

Aequitas: Bias and Fairness Audit Toolkit. (n.d.). Retrieved from http://aequitas.dssg.io/

Agarwal, A., Beygelzimer, A., Dudík, M., Langford, J., & Wallach, H. (2018). A Reductions Approach to Fair Classification. ArXiv:1803.02453 [Cs]. Retrieved from http://arxiv.org/abs/1803.02453

AI Commons. (n.d.). Retrieved from AI Commons website: https://aicommons.com/

AI Now Institute Algorithmic Accountability Policy Toolkit. (n.d.). Retrieved from https://ainowinstitute.org/aap-toolkit.pdf

AI Now Institute. (2018, October 24). AI in 2018: A year in review: Ethics, organizing, and accountability. Retrieved from https://medium.com/@AINowInstitute/ai-in-2018-a-year-in-review-8b161ead2b4e

AI-RFX Procurement Framework. (n.d.). Retrieved from https://ethical.institute/rfx.html

Aitchison, G. (2018). Are Human Rights Moralistic? Human Rights Review, 19(1), 23–43. https://doi.org/10.1007/s12142-017-0480-4

Aközer, M., & Aközer, E. (2016). Basing Science Ethics on Respect for Human Dignity. Science and Engineering Ethics, 22(6), 1627–1647. https://doi.org/10.1007/s11948-015-9731-4

Alfino, M. (2012). Twenty Years of Information Ethics and the Journal of Information Ethics. Journal of Information Ethics, 21(2), 13–16. https://doi.org/10.3172/JIE.21.2.13

Algorithm Tips: Resources and leads for investigating algorithms in society. (n.d.). Retrieved from Northwestern University website: http://algorithmtips.org/resources/

Aliman, N. M., & Kester, L. (2018). Hybrid strategies towards safe “self-aware” superintelligent systems.

Allen, C., Varner, G., & Zinser, J. (2000). Prolegomena to any future artificial moral agent. Journal of Experimental & Theoretical Artificial Intelligence, 12(3), 251–261. https://doi.org/10.1080/09528130050111428

Alshammari, M., & Simpson, A. (2017). Towards a Principled Approach for Engineering Privacy by Design. In E. Schweighofer, H. Leitold, A. Mitrakas, & K. Rannenberg (Eds.), Privacy Technologies and Policy (Vol. 10518, pp. 161–177). https://doi.org/10.1007/978-3-319-67280-9_9

Altman, M. C. (2011). Kant and applied ethics: The uses and limits of Kant’s practical philosophy. Malden, MA: Wiley-Blackwell.

Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D. (2016). Concrete Problems in AI Safety. ArXiv:1606.06565 [Cs]. Retrieved from http://arxiv.org/abs/1606.06565

Anabo, I. F., Elexpuru-Albizuri, I., & Villardón-Gallego, L. (2019). Revisiting the Belmont Report’s ethical principles in internet-mediated research: Perspectives from disciplinary associations in the social sciences. Ethics and Information Technology, 21(2), 137–149. https://doi.org/10.1007/s10676-018-9495-z

Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645

Anderson, M., & Anderson, S. L. (2018). GenEth: A general ethical dilemma analyzer. Paladyn, Journal of Behavioral Robotics, 9(1), 337–357. https://doi.org/10.1515/pjbr-2018-0024

Andreopoulos, G. (2018). Human Rights Reporting: Rights, Responsibilities, and Challenges. Human Rights Review, 19(2), 147–166. https://doi.org/10.1007/s12142-018-0499-1

Antignac, T., Sands, D., & Schneider, G. (2016). Data Minimisation: A Language-Based Approach (Long Version). ArXiv:1611.05642 [Cs]. Retrieved from http://arxiv.org/abs/1611.05642

Antunes, N., Balby, L., Figueiredo, F., Lourenco, N., Meira, W., & Santos, W. (2018). Fairness and Transparency of Machine Learning for Trustworthy Cloud Services. 2018 48th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W), 188–193. https://doi.org/10.1109/DSN-W.2018.00063

Archard, D. (2011). Why Moral Philosophers Are Not and Should Not Be Moral Experts. Bioethics, 25(3), 119–127. https://doi.org/10.1111/j.1467-8519.2009.01748.x

Arnold, M., Bellamy, R. K. E., Hind, M., Houde, S., Mehta, S., Mojsilovic, A., … Varshney, K. R. (2018). FactSheets: Increasing Trust in AI Services through Supplier’s Declarations of Conformity. ArXiv:1808.07261 [Cs]. Retrieved from http://arxiv.org/abs/1808.07261

Arnold, T., Kasenberg, D., & Scheutz, M. (2017). Value Alignment or Misalignment — What Will Keep Systems Accountable? Presented at the Workshops at the Thirty-First AAAI Conference on Artificial Intelligence. Retrieved from https://www.aaai.org/ocs/index.php/WS/AAAIW17/paper/viewFile/15216/14648

Arnold, T., & Scheutz, M. (2016). Against the moral Turing test: Accountable design and the moral reasoning of autonomous systems. Ethics and Information Technology, 18(2), 103–115. https://doi.org/10.1007/s10676-016-9389-x

Arnold, T., & Scheutz, M. (2018). The “big red button” is too late: An alternative model for the ethical evaluation of AI systems. Ethics and Information Technology, 20(1), 59–69. https://doi.org/10.1007/s10676-018-9447-7

Arsovski, S., Wong, S. H., & Cheok, A. D. (2018). Open-domain neural conversational agents: The step towards artificial general intelligence. International Journal of Advanced Computer Science and Applications, 9(6), 403–408. https://doi.org/10.14569/IJACSA.2018.090654

Arvan, M. (2014). A Better, Dual Theory of Human Rights: A Better, Dual Theory of Human Rights. The Philosophical Forum, 45(1), 17–47. https://doi.org/10.1111/phil.12025

Arvan, M. (2018). Mental time-travel, semantic flexibility, and A.I. ethics. AI & SOCIETY. https://doi.org/10.1007/s00146-018-0848-2

Atenasio, D. (2018). Co-responsibility for Individualists. Res Publica. https://doi.org/10.1007/s11158-018-09409-w

Audi, R. (2012). Virtue Ethics as a Resource in Business. Business Ethics Quarterly, 22(2), 273–291. https://doi.org/10.5840/beq201222220

Autili, M., Di Ruscio, D., Inverardi, P., Pelliccione, P., & Tivoli, M. (2019). A software exoskeleton to protect and support citizen’s ethics and privacy in the digital world. IEEE Access, 7, 62011–62021. https://doi.org/10.1109/ACCESS.2019.2916203

Axtell, G., & Olson, P. (2012). Recent Work in Applied Virtue Ethics. American Philosophical Quarterly, 49(3), 183–203. Retrieved from http://www.jstor.org/stable/23213479

Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.-R., & Samek, W. (2015). On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation. PLOS ONE, 10(7), e0130140. https://doi.org/10.1371/journal.pone.0130140

Balaram, B., Greenham, T., & Leonard, J. (n.d.). Artificial Intelligence: Real Public Engagement. Retrieved from RSA website: https://www.thersa.org/globalassets/pdfs/reports/rsa_artificial-intelligence---real-public-engagement.pdf

Bassily, R., Thakkar, O., & Thakurta, A. (2018). Model-Agnostic Private Learning via Stability. ArXiv:1803.05101 [Cs]. Retrieved from http://arxiv.org/abs/1803.05101

Bauer, W. A. (2018). Virtuous vs. utilitarian artificial moral agents. AI & SOCIETY. https://doi.org/10.1007/s00146-018-0871-3

Baum, S. D. (2017). Social choice ethics in artificial intelligence. AI & SOCIETY. https://doi.org/10.1007/s00146-017-0760-1

BBC Trending. (2018, December 12). Instagram tightens eating disorder filters after BBC investigation. Retrieved from BBC News website: https://www.bbc.co.uk/news/blogs-trending-46505704

Kim, B., Khanna, R., & Koyejo, O. (2016). Examples Are Not Enough, Learn to Criticize! Criticism for Interpretability. Proceedings of the 30th International Conference on Neural Information Processing Systems (NIPS’16), Barcelona, Spain. Retrieved from http://dl.acm.org/citation.cfm?id=3157096.3157352

Beer, D. (2017). The social power of algorithms. Information, Communication & Society, 20(1), 1–13. https://doi.org/10.1080/1369118X.2016.1216147

Bender, E. M., & Friedman, B. (2018). Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science. Transactions of the Association for Computational Linguistics, 6, 587–604. https://doi.org/10.1162/tacl_a_00041

Benghozi, P.-J., & Chevalier, H. (2019). The present vision of AI… or the HAL syndrome. Digital Policy, Regulation and Governance. https://doi.org/10.1108/DPRG-12-2018-0079

Benkler, Y. (2019). Don’t let industry write the rules for AI. Nature, 569(7755), 161. https://doi.org/10.1038/d41586-019-01413-1

Berdichevsky, D., & Neuenschwander, E. (1999). Toward an ethics of persuasive technology. Communications of the ACM, 42(5), 51–58. https://doi.org/10.1145/301353.301410

Beretta, E., Santangelo, A., Lepri, B., Vetrò, A., & De Martin, J. C. (2019). The invisible power of fairness. How machine learning shapes democracy. ArXiv:1903.09493 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1903.09493

Bibal, A., & Frénay, B. (2016). Interpretability of Machine Learning Models and Representations: an Introduction.

Billiet, L., Van Huffel, S., & Van Belle, V. (2018). Interval Coded Scoring: A toolbox for interpretable scoring systems. PeerJ Computer Science, 4, e150. https://doi.org/10.7717/peerj-cs.150

Binns, R. (2018a). Algorithmic Accountability and Public Reason. Philosophy & Technology, 31(4), 543–556. https://doi.org/10.1007/s13347-017-0263-5

Binns, R. (2018b). What Can Political Philosophy Teach Us about Algorithmic Fairness? IEEE Security & Privacy, 16(3), 73–80. https://doi.org/10.1109/MSP.2018.2701147

Binns, R., & Gallo, V. (2019, March 26). An overview of the Auditing Framework for Artificial Intelligence and its core components. Retrieved from ICO AI Auditing Framework website: https://ai-auditingframework.blogspot.com/2019/03/an-overview-of-auditing-framework-for_26.html

Binns, R., Van Kleek, M., Veale, M., Lyngs, U., Zhao, J., & Shadbolt, N. (2018). ‘It’s Reducing a Human Being to a Percentage’: Perceptions of Justice in Algorithmic Decisions. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems — CHI ’18, 1–14. https://doi.org/10.1145/3173574.3173951

Bird, S., Hutchinson, B., Kenthapadi, K., Kıcıman, E., & Mitchell, M. (2019). Fairness-aware machine learning: Practical challenges and lessons learned. The Web Conference 2019 — Companion of the World Wide Web Conference, WWW 2019, 1297–1298. https://doi.org/10.1145/3308560.3320086

Bogosian, K. (2017). Implementation of Moral Uncertainty in Intelligent Machines. Minds and Machines, 27(4), 591–608. https://doi.org/10.1007/s11023-017-9448-z

Bohn, J., Coroamă, V., Langheinrich, M., Mattern, F., & Rohs, M. (2005). Social, Economic, and Ethical Implications of Ambient Intelligence and Ubiquitous Computing. In W. Weber, J. M. Rabaey, & E. Aarts (Eds.), Ambient Intelligence (pp. 5–29). Berlin, Heidelberg: Springer Berlin Heidelberg.

Bolukbasi, T., Chang, K.-W., Zou, J., Saligrama, V., & Kalai, A. (2016). Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. Presented at NIPS.

Bonawitz, K., Eichner, H., Grieskamp, W., Huba, D., Ingerman, A., Ivanov, V., … Roselander, J. (2019). Towards Federated Learning at Scale: System Design. ArXiv:1902.01046 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1902.01046

Bonnemains, V., Saurel, C., & Tessier, C. (2018). Embedded ethics: Some technical and ethical challenges. Ethics and Information Technology, 20(1), 41–58. https://doi.org/10.1007/s10676-018-9444-x

Borenstein, J., & Arkin, R. (2016). Robotic Nudges: The Ethics of Engineering a More Socially Just Human Being. Science and Engineering Ethics, 22(1), 31–46. https://doi.org/10.1007/s11948-015-9636-2

Bostrom, N., & Yudkowsky, E. (2014). The Ethics of Artificial Intelligence. In K. Frankish & W. M. Ramsey (Eds.), The Cambridge handbook of artificial intelligence. Cambridge, UK: Cambridge University Press.

Bozdag, E. (2013). Bias in algorithmic filtering and personalization. Ethics and Information Technology, 15(3), 209–227. https://doi.org/10.1007/s10676-013-9321-6

Bradley, R. (2017). Decision theory with a human face. Cambridge: Cambridge University Press.

Brännmark, J. (2017). Respect for Persons in Bioethics: Towards a Human Rights-Based Account. Human Rights Review, 18(2), 171–187. https://doi.org/10.1007/s12142-017-0450-x

Brennan, J. (2012). For-Profit Business as Civic Virtue. Journal of Business Ethics, 106(3), 313–324. https://doi.org/10.1007/s10551-011-0998-3

Brewer, C. D., & Himes, G. N. (2015). Weighing the Ethical Considerations of Autonomy and Efficacy With Respect to Mandatory Warning Labels. The American Journal of Bioethics, 15(3), 14–15. https://doi.org/10.1080/15265161.2014.998379

Brey, P. A. E. (2012). Anticipating ethical issues in emerging IT. Ethics and Information Technology, 14(4), 305–317. https://doi.org/10.1007/s10676-012-9293-y

Brown, S. (2019). An agile approach to designing for the consequences of technology. Retrieved from DotEveryone website: https://medium.com/doteveryone/an-agile-approach-to-designing-for-the-consequences-of-technology-18a229de763b

Brownstein, M. (2016). Attributionism and Moral Responsibility for Implicit Bias. Review of Philosophy and Psychology, 7(4), 765–786. https://doi.org/10.1007/s13164-015-0287-7

Bryson, J., & Winfield, A. (2017). Standardizing Ethical Design for Artificial Intelligence and Autonomous Systems. Computer, 50(5), 116–119. https://doi.org/10.1109/MC.2017.154

Buechner, J. (2017). “Where do we come from? What are we? Where are we going?”: Critical review of Wendell Wallach, A dangerous master: How to keep technology from slipping beyond our control (Basic Books, 2015; viii + 328 pp.; ISBN 978-0-465-05862-4). Ethics and Information Technology, 19(3), 221–236. https://doi.org/10.1007/s10676-017-9433-5

Buiten, M. C. (2019). Towards Intelligent Regulation of Artificial Intelligence. European Journal of Risk Regulation, 10(01), 41–59. https://doi.org/10.1017/err.2019.8

Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 205395171562251. https://doi.org/10.1177/2053951715622512

Butnaru, C., Benrimoh, D., & Theodorou, A. (n.d.). Humans in AI. Retrieved from http://moralmachine.mit.edu/

Butterworth, M. (2018). The ICO and artificial intelligence: The role of fairness in the GDPR framework. Computer Law & Security Review, 34(2), 257–268. https://doi.org/10.1016/j.clsr.2018.01.004

Calders, T., & Verwer, S. (2010). Three naive Bayes approaches for discrimination-free classification. Data Mining and Knowledge Discovery, 21(2), 277–292. https://doi.org/10.1007/s10618-010-0190-x

Calders, T., & Žliobaitė, I. (2013). Why Unbiased Computational Processes Can Lead to Discriminative Decision Procedures. In B. Custers, T. Calders, B. Schermer, & T. Zarsky (Eds.), Discrimination and Privacy in the Information Society (Vol. 3, pp. 43–57). https://doi.org/10.1007/978-3-642-30487-3_3

Caldicott, R. (2017). How do you solve a problem like technology? A systems approach to digital regulation. Retrieved from DotEveryone website: https://medium.com/doteveryone/how-do-you-solve-a-problem-like-technology-a-systems-approach-to-digital-regulation-c0c0d8e11bdf

Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334), 183–186. https://doi.org/10.1126/science.aal4230

Calo, R. (2017). Artificial Intelligence Policy: A Roadmap. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3015350

Caplan, A. L. (2014). Why autonomy needs help. Journal of Medical Ethics, 40(5), 301–302. https://doi.org/10.1136/medethics-2012-100492

Caruana, R., Kangarloo, H., Dionisio, J. D., Sinha, U., & Johnson, D. (1999). Case-based explanation of non-case-based learning methods. Proceedings. AMIA Symposium, 212–215.

Cath, C. (2018). Governing artificial intelligence: Ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180080. https://doi.org/10.1098/rsta.2018.0080

Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2017). Artificial Intelligence and the ‘Good Society’: The US, EU, and UK approach. Science and Engineering Ethics. https://doi.org/10.1007/s11948-017-9901-7

Cath, C., Zimmer, M., Lomborg, S., & Zevenbergen, B. (2018). Association of Internet Researchers (AoIR) Roundtable Summary: Artificial Intelligence and the Good Society Workshop Proceedings. Philosophy & Technology, 31(1), 155–162. https://doi.org/10.1007/s13347-018-0304-8

Cavoukian, A., Taylor, S., & Abrams, M. E. (2010). Privacy by Design: Essential for organizational accountability and strong business practices. Identity in the Information Society, 3(2), 405–413. https://doi.org/10.1007/s12394-010-0053-z

Chan, S. (2018). Principle Versus Profit: Debating Human Rights Sanctions. Human Rights Review, 19(1), 45–71. https://doi.org/10.1007/s12142-017-0484-0

Chen, K. (2017). Public Rights, Private Relations by Jean Thomas: New York and Oxford: Oxford University Press, 2015. Human Rights Review, 18(3), 361–362. https://doi.org/10.1007/s12142-017-0465-3

Cheney-Lippold, J. (2017). We are data: Algorithms and the making of our digital selves. New York: New York University Press.

Chouldechova, A. (2016). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. ArXiv:1610.07524 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1610.07524

Chowdhury, R. (n.d.). Tackling the challenge of ethics in AI: Fairness Tool. Retrieved from Accenture website: https://www.accenture.com/gb-en/blogs/blogs-cogx-tackling-challenge-ethics-ai

Cimakasky, J., & Polansky, R. (2015). Aristotle and Principlism in Bioethics. Diametros, 45, 59–70. https://doi.org/10.13153/diam.45.2015.796

Citron, D., & Pasquale, F. (2014). The Scored Society: Due process for automated predictions. Washington Law Review, 89(1), 1–33.

Clancey, W. J. (1983). The epistemology of a rule-based expert system — a framework for explanation. Artificial Intelligence, 20(3), 215–251. https://doi.org/10.1016/0004-3702(83)90008-5

Clark, C. D., & Weaver, M. F. (2015). Balancing Beneficence and Autonomy. The American Journal of Bioethics, 15(7), 62–63. https://doi.org/10.1080/15265161.2015.1042717

Codella, N. C. F., Hind, M., Ramamurthy, K. N., Campbell, M., Dhurandhar, A., Varshney, K. R., … Mojsilović, A. (2019). Teaching AI to Explain its Decisions Using Embeddings and Multi-Task Learning. ArXiv:1906.02299 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1906.02299

Coeckelbergh, M. (2012). Moral Responsibility, Technology, and Experiences of the Tragic: From Kierkegaard to Offshore Engineering. Science and Engineering Ethics, 18(1), 35–48. https://doi.org/10.1007/s11948-010-9233-3

Cohen, A. J. (2004). What Toleration Is. Ethics, 115(1), 68–95. https://doi.org/10.1086/421982

Colburn, B. (2013). Autonomy and liberalism. Routledge.

Collingridge, D. (1980). The social control of technology. New York: St. Martin’s Press.

Cookson, C. (2018, September 6). Artificial intelligence faces public backlash, warns scientist. Financial Times. Retrieved from https://www.ft.com/content/0b301152-b0f8-11e8-99ca-68cf89602132

Copeland, B. J. (2015). Artificial intelligence: A philosophical introduction. Oxford, UK: Wiley-Blackwell.

Corbett-Davies, S., Pierson, E., Feller, A., Goel, S., & Huq, A. (2017). Algorithmic decision making and the cost of fairness. ArXiv:1701.08230 [Cs, Stat]. https://doi.org/10.1145/3097983.3098095

Corrales, D., Ledezma, A., & Corrales, J. (2018). From Theory to Practice: A Data Quality Framework for Classification Tasks. Symmetry, 10(7), 248. https://doi.org/10.3390/sym10070248

Cowls, J., King, T., Taddeo, M., & Floridi, L. (2019). Designing AI for Social Good: Seven Essential Factors (May 15, 2019). Available at SSRN: https://ssrn.com/abstract=.

Craven, M., & Shavlik, J. (1996). Extracting Tree-Structured Representations of Trained Networks. Presented at NIPS.

Crawford, K., & Calo, R. (2016). There is a blind spot in AI research. Nature, 538(7625), 311–313. https://doi.org/10.1038/538311a

Cushman, F. (2015). Deconstructing intent to reconstruct morality. Current Opinion in Psychology, 6, 97–103. https://doi.org/10.1016/j.copsyc.2015.06.003

D’Agostino, M., & Durante, M. (2018). Introduction: The Governance of Algorithms. Philosophy & Technology, 31(4), 499–505. https://doi.org/10.1007/s13347-018-0337-z

Dahya, R., & Morris, A. (2019). Toward a Conceptual Framework for Understanding AI Action and Legal Reaction. In M.-J. Meurs & F. Rudzicz (Eds.), Advances in Artificial Intelligence (Vol. 11489, pp. 453–459). https://doi.org/10.1007/978-3-030-18305-9_44

Dai, W., Yoshigoe, K., & Parsley, W. (2018). Improving Data Quality through Deep Learning and Statistical Models. ArXiv:1810.07132 [Cs], 558, 515–522. https://doi.org/10.1007/978-3-319-54978-1_66

Dameski, A. (2018). A Comprehensive Ethical Framework for AI Entities: Foundations. In M. Iklé, A. Franz, R. Rzepka, & B. Goertzel (Eds.), Artificial General Intelligence (Vol. 10999, pp. 42–51). https://doi.org/10.1007/978-3-319-97676-1_5

Data Ethics Canvas. (2019). Retrieved from Open Data Institute: https://docs.google.com/document/d/1OXSrA2KDMVkHroxs_8SUoQZ5Uv0eRhtNNtIl9g_Q47M/edit

Datta, A., Sen, S., & Zick, Y. (2017). Algorithmic Transparency via Quantitative Input Influence. In T. Cerquitelli, D. Quercia, & F. Pasquale (Eds.), Transparent Data Mining for Big and Small Data (Vol. 32, pp. 71–94). https://doi.org/10.1007/978-3-319-54024-5_4

De Mul, J. (2010). Moral Machines: ICTs as Mediators of Human Agencies. Techné: Research in Philosophy and Technology, 14(3), 226–236. https://doi.org/10.5840/techne201014323

de Vries, K. (2010). Identity, profiling algorithms and a world of ambient intelligence. Ethics and Information Technology, 12(1), 71–85. https://doi.org/10.1007/s10676-009-9215-9

Debias: trying to make word embeddings less sexist. (n.d.). Retrieved from https://github.com/tolga-b/debiaswe

Demšar, J., & Bosnić, Z. (2018). Detecting concept drift in data streams using model explanation. Expert Systems with Applications, 92, 546–559. https://doi.org/10.1016/j.eswa.2017.10.003

Dennis, L. A., Fisher, M., Lincoln, N. K., Lisitsa, A., & Veres, S. M. (2016). Practical verification of decision-making in agent-based autonomous systems. Automated Software Engineering, 23(3), 305–359. https://doi.org/10.1007/s10515-014-0168-9

Diakopoulos, N. (2015). Algorithmic Accountability: Journalistic investigation of computational power structures. Digital Journalism, 3(3), 398–415. https://doi.org/10.1080/21670811.2014.976411

Diakopoulos, N., Friedler, S., Arenas, M., Barocas, S., Howe, B., Jagadish, H., … Zevenbergen, B. (n.d.). Principles for Accountable Algorithms and a Social Impact Statement for Algorithms. Retrieved from FAT/ML website: http://www.fatml.org/resources/principles-for-accountable-algorithms

Diakopoulos, N., Trielli, D., Yang, A., & Gao, A. (n.d.). Algorithm Tips — Resources and Leads for investigating algorithms in society. Retrieved from http://algorithmtips.org/about/

Dignum, V. (2017). Responsible Autonomy. Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, 4698–4704. https://doi.org/10.24963/ijcai.2017/655

Dillon, R. S. (2010). Respect for persons, identity, and information technology. Ethics and Information Technology, 12(1), 17–28. https://doi.org/10.1007/s10676-009-9188-8

DiMaggio, P., & Garip, F. (2012). Network Effects and Social Inequality. Annual Review of Sociology, 38(1), 93–118. https://doi.org/10.1146/annurev.soc.012809.102545

Doshi-Velez, F., & Kim, B. (2017). Towards A Rigorous Science of Interpretable Machine Learning. ArXiv:1702.08608 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1702.08608

Dosilovic, F. K., Brcic, M., & Hlupic, N. (2018). Explainable artificial intelligence: A survey. 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), 0210–0215. https://doi.org/10.23919/MIPRO.2018.8400040

DotEveryone. (2019). Responsible Tech 2019: The New Normal. Retrieved from DotEveryone website: https://doteveryone.org.uk/responsible-tech-2019/

DotEveryone. (n.d.). The DotEveryone Consequence Scanning Agile Event. Retrieved from https://doteveryone.org.uk/project/consequence-scanning/

Doyle, T. (2010). A Critique of Information Ethics. Knowledge, Technology & Policy, 23(1–2), 163–175. https://doi.org/10.1007/s12130-010-9104-x

Dressel, J., & Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism. Science Advances, 4(1), eaao5580. https://doi.org/10.1126/sciadv.aao5580

Dubbink, W., & Smith, J. (2011). A Political Account of Corporate Moral Responsibility. Ethical Theory and Moral Practice, 14(2), 223–246. https://doi.org/10.1007/s10677-010-9235-x

Durante, M. (2010). What Is the Model of Trust for Multi-agent Systems? Whether or Not E-Trust Applies to Autonomous Agents. Knowledge, Technology & Policy, 23(3–4), 347–366. https://doi.org/10.1007/s12130-010-9118-4

Durante, M. (2015). The Democratic Governance of Information Societies. A Critique to the Theory of Stakeholders. Philosophy & Technology, 28(1), 11–32. https://doi.org/10.1007/s13347-014-0162-y

Dwork, C., Hardt, M., Pitassi, T., Reingold, O., & Zemel, R. (2011). Fairness Through Awareness. ArXiv:1104.3913 [Cs]. Retrieved from http://arxiv.org/abs/1104.3913

Edwards, L., & Veale, M. (2018). Enslaving the Algorithm: From a “Right to an Explanation” to a “Right to Better Decisions”? IEEE Security & Privacy, 16(3), 46–54. https://doi.org/10.1109/MSP.2018.2701152

Eisenstadt, V., & Althoff, K. (2018). A Preliminary Survey of Explanation Facilities of AI-Based Design Support Approaches and Tools. Presented at LWDA.

El Bekri, N., Kling, J., & Huber, M. F. (2020). A Study on Trust in Black Box Models and Post-hoc Explanations. In F. Martínez Álvarez, A. Troncoso Lora, J. A. Sáez Muñoz, H. Quintián, & E. Corchado (Eds.), 14th International Conference on Soft Computing Models in Industrial and Environmental Applications (SOCO 2019) (Vol. 950, pp. 35–46). https://doi.org/10.1007/978-3-030-20055-8_4

Ellpha. (n.d.). Retrieved from https://www.ellpha.com/

Engen, V., Pickering, J. B., & Walland, P. (2016). Machine Agency in Human-Machine Networks: Impacts and Trust Implications.

Enigma. (n.d.). Retrieved from https://enigma.co/

Epstein, Z., Payne, B. H., Shen, J. H., Hong, C. J., Felbo, B., Dubey, A., … Rahwan, I. (2018). TuringBox: An Experimental Platform for the Evaluation of AI Systems. Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, 5826–5828. https://doi.org/10.24963/ijcai.2018/851

Equity Evaluation Corpus. (n.d.). Retrieved from https://saifmohammad.com/WebPages/Biases-SA.html

Ethics Net. (n.d.). Retrieved from https://www.ethicsnet.com/about

Etzioni, A., & Etzioni, O. (2016). AI assisted ethics. Ethics and Information Technology, 18(2), 149–156. https://doi.org/10.1007/s10676-016-9400-6

European Commission. (2019). Ethics Guidelines for Trustworthy AI. Retrieved from https://ec.europa.eu/futurium/en/ai-alliance-consultation

Fabiano, N. (2019). Ethics and the protection of personal data. IMCIC 2019–10th International Multi-Conference on Complexity, Informatics and Cybernetics, Proceedings, 1, 159–164. Retrieved from https://www.scopus.com/inward/record.uri?eid=2-s2.0-85066018176&partnerID=40&md5=878d2e61394774fa360a86818ba09918

Fabiano, N. (2019). Robotics, Big Data, Ethics and Data Protection: A Matter of Approach. In M. I. Aldinhas Ferreira, J. Silva Sequeira, G. Singh Virk, M. O. Tokhi, & E. E. Kadar (Eds.), Robotics and Well-Being (Vol. 95, pp. 79–87). https://doi.org/10.1007/978-3-030-12524-0_8

Fast, E., & Horvitz, E. (2016). Long-Term Trends in the Public Perception of Artificial Intelligence. ArXiv:1609.04904 [Cs]. Retrieved from http://arxiv.org/abs/1609.04904

Fei, N., Yang, Y., & Bai, X. (2019). One Core Task of Interpretability in Machine Learning — Expansion of Structural Equation Modeling. International Journal of Pattern Recognition and Artificial Intelligence. https://doi.org/10.1142/S0218001420510015

Feldman, M., Friedler, S., Moeller, J., Scheidegger, C., & Venkatasubramanian, S. (2014). Certifying and removing disparate impact. ArXiv:1412.3756 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1412.3756

Feurer, M., Klein, A., Eggensperger, K., Springenberg, J. T., Blum, M., & Hutter, F. (2015). Efficient and Robust Automated Machine Learning. Presented at NIPS.

Fish, B., Kun, J., & Lelkes, Á. D. (2016). A Confidence-Based Approach for Balancing Fairness and Accuracy. ArXiv:1601.05764 [Cs]. Retrieved from http://arxiv.org/abs/1601.05764

Floridi, L., & Clement-Jones, T. (2019, March 20). The five principles key to any ethical framework for AI. Tech New Statesman. Retrieved from https://tech.newstatesman.com/policy/ai-ethics-framework

Floridi, L., & Strait, A. (Forthcoming). Ethical foresight analysis: what it is and why it is needed.

Floridi, L. (2008). The Method of Levels of Abstraction. Minds and Machines, 18(3), 303–329. https://doi.org/10.1007/s11023-008-9113-7

Floridi, L. (2013a). Distributed Morality in an Information Society. Science and Engineering Ethics, 19(3), 727–743. https://doi.org/10.1007/s11948-012-9413-4

Floridi, L. (2013b). The ethics of information. Oxford: Oxford University Press.

Floridi, L. (2014). Open Data, Data Protection, and Group Privacy. Philosophy & Technology, 27(1), 1–3. https://doi.org/10.1007/s13347-014-0157-8

Floridi, L. (2015). Toleration and the Design of Norms. Science and Engineering Ethics, 21(5), 1095–1123. https://doi.org/10.1007/s11948-014-9589-x

Floridi, L. (2016a). Faultless responsibility: On the nature and allocation of moral responsibility for distributed moral actions. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083), 20160112. https://doi.org/10.1098/rsta.2016.0112

Floridi, L. (2016b). On Human Dignity as a Foundation for the Right to Privacy. Philosophy & Technology, 29(4), 307–312. https://doi.org/10.1007/s13347-016-0220-8

Floridi, L. (2016c). Tolerant Paternalism: Pro-ethical Design as a Resolution of the Dilemma of Toleration. Science and Engineering Ethics, 22(6), 1669–1688. https://doi.org/10.1007/s11948-015-9733-2

Floridi, L. (2017a). Digital’s Cleaving Power and Its Consequences. Philosophy & Technology, 30(2), 123–129. https://doi.org/10.1007/s13347-017-0259-1

Floridi, L. (2017b). The Logic of Design as a Conceptual Logic of Information. Minds and Machines, 27(3), 495–519. https://doi.org/10.1007/s11023-017-9438-1

Floridi, L. (2018). Soft ethics, the governance of the digital and the General Data Protection Regulation. Philosophical Transactions. Series A, Mathematical, Physical, and Engineering Sciences, 376(2133). https://doi.org/10.1098/rsta.2018.0081

Floridi, L. (2019a). AI opportunities for healthcare must not be wasted. Health Management, 19.

Floridi, L. (2019b). Establishing the rules for building trustworthy AI. Nature Machine Intelligence. https://doi.org/10.1038/s42256-019-0055-y

Floridi, L. (2019c). Translating Principles into Practices of Digital Ethics: Five Risks of Being Unethical. Philosophy & Technology. https://doi.org/10.1007/s13347-019-00354-x

Floridi, L. (2019d). What the Near Future of Artificial Intelligence Could Be. Philosophy & Technology, 32(1), 1–15. https://doi.org/10.1007/s13347-019-00345-y

Floridi, L., & Cowls, J. (Forthcoming). A Unified Framework of Five Principles for AI in Society.

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … Vayena, E. (2018). AI4People — An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5

Floridi, L., & Sanders, J. W. (2004). On the Morality of Artificial Agents. Minds and Machines, 14(3), 349–379. https://doi.org/10.1023/B:MIND.0000035461.63578.9d

Floridi, L., & Taddeo, M. (2016). What is data ethics? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083), 20160360. https://doi.org/10.1098/rsta.2016.0360

Focquaert, F., & Schermer, M. (2015). Moral Enhancement: Do Means Matter Morally? Neuroethics, 8(2), 139–151. https://doi.org/10.1007/s12152-015-9230-y

Frauenberger, C., Rauhala, M., & Fitzpatrick, G. (2017). In-action ethics. Interacting with Computers, 29(2), 220–236. https://doi.org/10.1093/iwc/iww024

Friedland, J., & Cole, B. M. (2019). From Homo-economicus to Homo-virtus: A System-Theoretic Model for Raising Moral Self-Awareness. Journal of Business Ethics, 155(1), 191–205. https://doi.org/10.1007/s10551-017-3494-6

Friedler, S. A., Scheidegger, C., & Venkatasubramanian, S. (2016). On the (im)possibility of fairness. ArXiv:1609.07236 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1609.07236

Friedman, B., Hendry, D. G., & Borning, A. (2017). A Survey of Value Sensitive Design Methods. Foundations and Trends® in Human–Computer Interaction, 11(2), 63–125. https://doi.org/10.1561/1100000015

Gadepally, V., Goodwin, J., Kepner, J., Reuther, A., Reynolds, H., Samsi, S., … Martinez, D. (2019). AI Enabling Technologies: A Survey. ArXiv:1905.03592 [Cs]. Retrieved from http://arxiv.org/abs/1905.03592

Gavanelli, M., Alberti, M., & Lamma, E. (2018). Accountable Protocols in Abductive Logic Programming. ACM Transactions on Internet Technology, 18(4), 1–20. https://doi.org/10.1145/3107936

Gavish, Y. (n.d.). The Step-By-Step PM Guide to Building Machine Learning Based Products: What Product Managers Need to Know About Machine Learning. Retrieved from Medium website: https://medium.com/@yaelg/product-manager-pm-step-by-step-tutorial-building-machine-learning-products-ffa7817aa8ab

Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2018). Datasheets for Datasets. ArXiv:1803.09010 [Cs]. Retrieved from http://arxiv.org/abs/1803.09010

Gerdes, A. (2014). What and whose values in design?: The challenge of incorporating ethics into collaborative prototyping. Journal of Information, Communication and Ethics in Society, 12(1), 18–20. https://doi.org/10.1108/JICES-11-2013-0054

Gertz, N. (2016). Autonomy online: Jacques Ellul and the Facebook emotional manipulation study. Research Ethics, 12(1), 55–61. https://doi.org/10.1177/1747016115579534

Gilabert, P. (2016). Justice and beneficence. Critical Review of International Social and Political Philosophy, 19(5), 508–533. https://doi.org/10.1080/13698230.2016.1183749

Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., & Kagal, L. (2018). Explaining Explanations: An Overview of Interpretability of Machine Learning. ArXiv:1806.00069 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1806.00069

Ginsberg, M. L. (1986). Counterfactuals. Artificial Intelligence, 30(1), 35–79. https://doi.org/10.1016/0004-3702(86)90067-6

Glenn, J. (n.d.). Futures Wheel. Retrieved from Ethics Kit website: http://ethicskit.org/futures-wheel.html

Goggin, B. (2019, June 1). Inside Facebook’s suicide algorithm: Here’s how the company uses artificial intelligence to predict your mental state from your posts. Retrieved from https://www.businessinsider.com/facebook-is-using-ai-to-try-to-predict-if-youre-suicidal-2018-12?r=US&IR=T

Goldstein, A., Kapelner, A., Bleich, J., & Pitkin, E. (2013). Peeking Inside the Black Box: Visualizing Statistical Learning with Plots of Individual Conditional Expectation. ArXiv:1309.6392 [Stat]. Retrieved from http://arxiv.org/abs/1309.6392

Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a ‘right to explanation’. AI Magazine, 38(3), 50. https://doi.org/10.1609/aimag.v38i3.2741

Google. (n.d.-a). Responsible AI Practices. Retrieved from https://ai.google/education/responsible-ai-practices

Google. (n.d.-b). What-If Tool. Retrieved from https://ai.googleblog.com/2018/09/the-what-if-tool-code-free-probing-of.html and https://pair-code.github.io/what-if-tool/

Govindarajulu, N. S., & Bringsjord, S. (2017). On Automating the Doctrine of Double Effect. ArXiv:1703.08922 [Cs]. Retrieved from http://arxiv.org/abs/1703.08922

Green, B. P. (2018). Ethical Reflections on Artificial Intelligence. Scientia et Fides, 6(2), 9. https://doi.org/10.12775/SetF.2018.015

Grünloh, C. (2018). Using technological frames as an analytic tool in value sensitive design. Ethics and Information Technology. https://doi.org/10.1007/s10676-018-9459-3

Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2018). A Survey of Methods for Explaining Black Box Models. ACM Computing Surveys, 51(5), 1–42. https://doi.org/10.1145/3236009

Guo, M., Zhang, Q., Liao, X., & Chen, Y. (2019). An interpretable machine learning framework for modelling human decision behavior. ArXiv:1906.01233 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1906.01233

Hagendorff, T. (2019). The Ethics of AI Ethics — An Evaluation of Guidelines. ArXiv:1903.03425 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1903.03425

Hall, P., & Chan, M. (2018). Practical Techniques for Interpreting Machine Learning Models: Introductory Open Source Examples Using Python, H2O, and XGBoost. Presented at the FAT* conference, Mountain View, CA.

Hall, P., & Gill, N. (n.d.). H2O.ai Machine Learning Interpretability Resources. Retrieved from https://github.com/h2oai/mli-resources/blob/master/notebooks/mono_xgboost.ipynb

Hall, P. (2019). Guidelines for Responsible and Human-Centered Use of Explainable Machine Learning. ArXiv:1906.03533 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1906.03533

Halpern, J., & Kleiman-Weiner, M. (2018). Towards formal definitions of blameworthiness, intention, and moral responsibility. Presented at the 32nd AAAI Conference on Artificial Intelligence, AAAI 2018.

Hamacher, W., & Ng, J. (2017). The One Right No One Ever Has. Philosophy Today, 61(4), 947–962. https://doi.org/10.5840/philtoday2017614181

Harbers, M. (2018). Using agent-based simulations to address value tensions in design. Ethics and Information Technology. https://doi.org/10.1007/s10676-018-9462-8

Hardt, M. (2014, September 26). How big data is unfair: Understanding unintended sources of unfairness in data driven decision making. Retrieved from Medium website: https://medium.com/@mrtz/how-big-data-is-unfair-9aa544d739de

Hardt, M., Price, E., & Srebro, N. (2016). Equality of Opportunity in Supervised Learning. ArXiv:1610.02413 [Cs]. Retrieved from http://arxiv.org/abs/1610.02413

Hastie, T., Tibshirani, R., & Friedman, J. (2017). The Elements of Statistical Learning: Data Mining, Inference, and Prediction (2nd ed., 12th printing). Retrieved from https://web.stanford.edu/~hastie/ElemStatLearn/printings/ESLII_print12.pdf

Hazy. (n.d.). Retrieved from https://hazy.com/

Hebbar, A. (2017). Augmented intelligence: Enhancing human capabilities. 2017 International Conference on Research in Computational Intelligence and Communication Networks (ICRCICN), 251–254. https://doi.org/10.1109/ICRCICN.2017.8234515

Heersmink, R., van den Hoven, J., van Eck, N. J., & van den Berg, J. (2011). Bibliometric mapping of computer and information ethics. Ethics and Information Technology, 13(3), 241–249. https://doi.org/10.1007/s10676-011-9273-7

Heger, O., Niehaves, B., & Kampling, H. (2018). The value declaration: A method for integrating human values into design-oriented research projects. Ethics and Information Technology. https://doi.org/10.1007/s10676-018-9464-6

Hein, A. M., & Condat, H. (2018). Can machines design? An artificial general intelligence approach.

Henning, T. (2015). From Choice to Chance? Saving People, Fairness, and Lotteries. Philosophical Review, 124(2), 169–206. https://doi.org/10.1215/00318108-2842176

Herd, S., Urland, G., Mingus, B., & O’Reilly, R. (2011). Human-artificial-intelligence hybrid learning systems.

Hernandez, J. G. (2015). Human Value, Dignity, and the Presence of Others. HEC Forum, 27(3), 249–263. https://doi.org/10.1007/s10730-015-9271-y

Hesketh, P. (n.d.). Ethics Cards. Retrieved from Ethics Kit website: http://ethicskit.org/ethics-cards.html

Hevelke, A., & Nida-Rümelin, J. (2015). Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis. Science and Engineering Ethics, 21(3), 619–630. https://doi.org/10.1007/s11948-014-9565-5

Hew, P. C. (2014). Artificial moral agents are infeasible with foreseeable technologies. Ethics and Information Technology, 16(3), 197–206. https://doi.org/10.1007/s10676-014-9345-6

Ho, A. (2019). Deep Ethical Learning: Taking the Interplay of Human and Artificial Intelligence Seriously. Hastings Center Report, 49(1), 36–39. https://doi.org/10.1002/hast.977

Holland, S., Hosny, A., Newman, S., Joseph, J., & Chmielinski, K. (2018). The Dataset Nutrition Label: A Framework To Drive Higher Data Quality Standards. ArXiv:1805.03677 [Cs]. Retrieved from http://arxiv.org/abs/1805.03677

Holm, E. A. (2019). In defense of the black box. Science, 364(6435), 26–27. https://doi.org/10.1126/science.aax0162

Holroyd, J. (2012). Responsibility for Implicit Bias: Responsibility for Implicit Bias. Journal of Social Philosophy, 43(3), 274–306. https://doi.org/10.1111/j.1467-9833.2012.01565.x

Holstein, K., Vaughan, J. W., Daumé III, H., Dudík, M., & Wallach, H. (2018). Improving fairness in machine learning systems: What do industry practitioners need? ArXiv:1812.05239 [Cs]. https://doi.org/10.1145/3290605.3300830

Holzinger, A. (2018). From Machine Learning to Explainable AI. 2018 World Symposium on Digital Intelligence for Systems and Machines (DISA), 55–66. https://doi.org/10.1109/DISA.2018.8490530

Huang, P.-H. (2018). Moral Enhancement, Self-Governance, and Resistance. The Journal of Medicine and Philosophy: A Forum for Bioethics and Philosophy of Medicine, 43(5), 547–567. https://doi.org/10.1093/jmp/jhy023

Humans in AI: HAI Trello board to tackle Agile Ethics in AI one step at a time. (n.d.). Retrieved from https://humansinai.com/ and https://trello.com/b/SarLFYOd/agile-ethics-for-ai-hai

Hunter, A. (2016). Information Hiding: Ethics and Safeguards for Beneficial Intelligence. Proceedings of the 8th International Conference on Agents and Artificial Intelligence, 546–551. https://doi.org/10.5220/0005826805460551

ICO. (2017). Big data, artificial intelligence, machine learning and data protection. Retrieved from ICO website: https://ico.org.uk/media/for-organisations/documents/2013559/big-data-ai-ml-and-data-protection.pdf

ICO. (n.d.-a). Anonymisation: Managing data protection risk code of practice.

ICO. (n.d.-b). Guide to the General Data Protection Regulation (GDPR). Retrieved from https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/

ICO, & Alan Turing Institute. (2019). Project explAIn: Interim Report.

IDEO.org. (n.d.). The Field Guide to Human-Centered Design. Retrieved from http://www.designkit.org/resources/1

IEEE. (n.d.). Artificial Intelligence and Ethics in Design Course Program. Retrieved from https://innovationatwork.ieee.org/courses/artificial-intelligence-and-ethics-in-design/

IEEE Standards Association. (n.d.). The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Retrieved from https://standards.ieee.org/industry-connections/ec/autonomous-systems.html

Iliadis, A., & Pedersen, I. (2018). The fabric of digital life: Uncovering sociotechnical tradeoffs in embodied computing through metadata. Journal of Information, Communication and Ethics in Society, 16(3), 311–327. https://doi.org/10.1108/JICES-03-2018-0022

Involve, & DeepMind. (n.d.). How to stimulate effective public engagement on the ethics of Artificial Intelligence. Retrieved from https://www.involve.org.uk/sites/default/files/field/attachemnt/How%20to%20stimulate%20effective%20public%20debate%20on%20the%20ethics%20of%20artificial%20intelligence%20.pdf

Irving, G., & Askell, A. (2019). AI Safety Needs Social Scientists. Distill, 4(2). https://doi.org/10.23915/distill.00014

Islam, S. R., Eberle, W., Bundy, S., & Ghafoor, S. K. (2019). Infusing domain knowledge in AI-based ‘black box’ models for better explainability with application in bankruptcy prediction. ArXiv:1905.11474 [Cs]. Retrieved from http://arxiv.org/abs/1905.11474

Jacob, D. (2015). Every Vote Counts: Equality, Autonomy, and the Moral Value of Democratic Decision-Making. Res Publica, 21(1), 61–75. https://doi.org/10.1007/s11158-014-9262-x

Jacobs, N., & Huldtgren, A. (2018). Why value sensitive design needs ethical commitments. Ethics and Information Technology. https://doi.org/10.1007/s10676-018-9467-3

Janssen, M., & Kuk, G. (2016). The challenges and limits of big data algorithms in technocratic governance. Government Information Quarterly, 33(3), 371–377. https://doi.org/10.1016/j.giq.2016.08.011

Jobin, A., Ienca, M., & Vayena, E. (2019). Artificial Intelligence: The global landscape of ethics guidelines. ArXiv:1906.11668 [Cs]. Retrieved from http://arxiv.org/abs/1906.11668

Johansson, F. D., Shalit, U., & Sontag, D. (2016). Learning Representations for Counterfactual Inference. ArXiv:1605.03661 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1605.03661

Jones, M. L., Kaufman, E., & Edenberg, E. (2018). AI and the Ethics of Automating Consent. IEEE Security & Privacy, 16(3), 64–72. https://doi.org/10.1109/MSP.2018.2701155

Jordan, M. C. (2010). Bioethics and ‘Human Dignity’. Journal of Medicine and Philosophy, 35(2), 180–196. https://doi.org/10.1093/jmp/jhq010

Joseph, M., Kearns, M., Morgenstern, J., Neel, S., & Roth, A. (2016). Fair Algorithms for Infinite and Contextual Bandits. ArXiv:1610.09559 [Cs]. Retrieved from http://arxiv.org/abs/1610.09559

Joshi, C., Kaloskampis, I., & Nolan, L. (2019). Generative Adversarial Networks (GANs) for synthetic dataset generation with binary classes. Retrieved from https://datasciencecampus.ons.gov.uk/projects/generative-adversarial-networks-gans-for-synthetic-dataset-generation-with-binary-classes/

Kamishima, T., Akaho, S., Asoh, H., & Sakuma, J. (2012). Fairness-Aware Classifier with Prejudice Remover Regularizer. In P. A. Flach, T. De Bie, & N. Cristianini (Eds.), Machine Learning and Knowledge Discovery in Databases (Vol. 7524, pp. 35–50). https://doi.org/10.1007/978-3-642-33486-3_3

Kekes, J. (2011). The Dangerous Ideal of Autonomy. Criminal Justice Ethics, 30(2), 192–204. https://doi.org/10.1080/0731129X.2011.592676

Kemper, J., & Kolkman, D. (2018). Transparent to whom? No algorithmic accountability without a critical audience. Information, Communication & Society, 1–16. https://doi.org/10.1080/1369118X.2018.1477967

Killmister, S. (2010). Dignity: Not such a useless concept. Journal of Medical Ethics, 36(3), 160–164. https://doi.org/10.1136/jme.2009.031393

Kious, B. M. (2015). Autonomy and Values: Why the Conventional Theory of Autonomy is Not Value-Neutral. Philosophy, Psychiatry, & Psychology, 22(1), 1–12. https://doi.org/10.1353/ppp.2015.0002

Kiritchenko, S., & Mohammad, S. M. (2018). Examining Gender and Race Bias in Two Hundred Sentiment Analysis Systems. ArXiv:1805.04508 [Cs]. Retrieved from http://arxiv.org/abs/1805.04508

Kleinberg, J., Lakkaraju, H., Leskovec, J., Ludwig, J., & Mullainathan, S. (2017). Human Decisions and Machine Predictions. The Quarterly Journal of Economics. https://doi.org/10.1093/qje/qjx032

Kleinberg, J., Mullainathan, S., & Raghavan, M. (2016). Inherent Trade-Offs in the Fair Determination of Risk Scores. ArXiv:1609.05807 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1609.05807

Klumpp, M., & Zijm, H. (2019). Logistics Innovation and Social Sustainability: How to Prevent an Artificial Divide in Human-Computer Interaction. Journal of Business Logistics. https://doi.org/10.1111/jbl.12198

Knight, W. (2019). Why does Beijing suddenly care about AI ethics? MIT Technology Review. Retrieved from https://www.technologyreview.com/s/613610/why-does-china-suddenly-care-about-ai-ethics-and-privacy/

Koepsell, D. (2010). On Genies and Bottles: Scientists’ Moral Responsibility and Dangerous Technology R&D. Science and Engineering Ethics, 16(1), 119–133. https://doi.org/10.1007/s11948-009-9158-x

Kohjima, M., Matsubayashi, T., & Sawada, H. (2017). What-if prediction via inverse reinforcement learning. Presented at the Proceedings of the 30th International Florida Artificial Intelligence Research Society Conference.

Kolter, Z., & Madry, A. (n.d.). Materials for tutorial Adversarial Robustness: Theory and Practice. Retrieved from https://adversarial-ml-tutorial.org/

Koppelman, A., & Gregg, B. (2014). Human Rights as Social Construction. Contemporary Political Theory, 13(4), 380–386. https://doi.org/10.1057/cpt.2014.10

Korb, K. B. (1998). The Frame Problem: An AI Fairy Tale. Minds and Machines, 8(3), 317–351. https://doi.org/10.1023/A:1008286921835

Kraemer, F., van Overveld, K., & Peterson, M. (2011). Is there an ethics of algorithms? Ethics and Information Technology, 13(3), 251–260. https://doi.org/10.1007/s10676-010-9233-7

Krausová, A. (2019). EU competition law and artificial intelligence: Reflections on antitrust and consumer protection issues. Lawyer Quarterly, 1(1), 79–84. Retrieved from https://www.scopus.com/inward/record.uri?eid=2-s2.0-85066990641&partnerID=40&md5=32a00448721fe9efd7ef7b5f4d3a30b2

Kroll, J. A. (2018). The fallacy of inscrutability. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180084. https://doi.org/10.1098/rsta.2018.0084

Kroll, J. A., Huey, J., Barocas, S., Felten, E., Reidenberg, J., Robinson, D., & Yu, H. (2017). Accountable Algorithms. University of Pennsylvania Law Review, 165.

Kuosmanen, J. (2016). Human Rights, Public Budgets, and Epistemic Challenges. Human Rights Review, 17(2), 247–267. https://doi.org/10.1007/s12142-016-0403-9

Kusner, M. J., Loftus, J. R., Russell, C., & Silva, R. (2017). Counterfactual Fairness. ArXiv:1703.06856 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1703.06856

La Fors, K., Custers, B., & Keymolen, E. (2019). Reassessing values for emerging big data technologies: Integrating design-based and application-based approaches. Ethics and Information Technology. https://doi.org/10.1007/s10676-019-09503-4

Lafont, C. (2010). Accountability and global governance: Challenging the state-centric conception of human rights. Ethics & Global Politics, 3(3), 193–215. https://doi.org/10.3402/egp.v3i3.5507

Lakkaraju, H., Kleinberg, J., Leskovec, J., Ludwig, J., & Mullainathan, S. (2017). The Selective Labels Problem: Evaluating Algorithmic Predictions in the Presence of Unobservables. Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining — KDD ’17, 275–284. https://doi.org/10.1145/3097983.3098066

Lara, F., & Deckers, J. (2019). Artificial Intelligence as a Socratic Assistant for Moral Enhancement. Neuroethics. https://doi.org/10.1007/s12152-019-09401-y

Laukyte, M. (2017). Artificial agents among us: Should we recognize them as agents proper? Ethics and Information Technology, 19(1), 1–17. https://doi.org/10.1007/s10676-016-9411-3

Lawrence, N. (n.d.). What is Machine Learning? Retrieved from Inverse Probability website: http://inverseprobability.com/2017/07/17/what-is-machine-learning

Lauterbach, A. (2019). Artificial intelligence and policy: Quo vadis? Digital Policy, Regulation and Governance. https://doi.org/10.1108/DPRG-09-2018-0054

Lawford-Smith, H. (2015). Unethical Consumption and Obligations to Signal. Ethics & International Affairs, 29(3), 315–330. https://doi.org/10.1017/S089267941500026X

Lemmens, P. (2017). Social Autonomy and Heteronomy in the Age of ICT: The Digital Pharmakon and the (Dis)Empowerment of the General Intellect. Foundations of Science, 22(2), 287–296. https://doi.org/10.1007/s10699-015-9468-1

Lepri, B., Oliver, N., Letouzé, E., Pentland, A., & Vinck, P. (2018). Fair, Transparent, and Accountable Algorithmic Decision-making Processes: The Premise, the Proposed Solutions, and the Open Challenges. Philosophy & Technology, 31(4), 611–627. https://doi.org/10.1007/s13347-017-0279-x

Lepri, B., Staiano, J., Sangokoya, D., Letouzé, E., & Oliver, N. (2016). The Tyranny of Data? The Bright and Dark Sides of Data-Driven Decision-Making for Social Good. ArXiv:1612.00323 [Physics]. Retrieved from http://arxiv.org/abs/1612.00323

Leslie, D. (2019). Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector. https://doi.org/10.5281/zenodo.3240529

Lessig, L. (2006). Code: Version 2.0. New York: Basic Books.

Levy, N. (2012). Skepticism and Sanction: The Benefits of Rejecting Moral Responsibility. Law and Philosophy, 31(5), 477–493. https://doi.org/10.1007/s10982-012-9128-3

Li, G., Su, X., & Wang, Y. (2019). A Privacy Protection Method for Learning Artificial Neural Network on Vertically Distributed Data. In K. Deng, Z. Yu, S. Patnaik, & J. Wang (Eds.), Recent Developments in Mechatronics and Intelligent Robotics (Vol. 856, pp. 1159–1167). https://doi.org/10.1007/978-3-030-00214-5_142

Li, O., Liu, H., Chen, C., & Rudin, C. (2017). Deep Learning for Case-Based Reasoning through Prototypes: A Neural Network that Explains Its Predictions. ArXiv:1710.04806 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1710.04806

Lieto, A., Bhatt, M., Oltramari, A., & Vernon, D. (2018). The role of cognitive architectures in general artificial intelligence. Cognitive Systems Research, 48, 1–3. https://doi.org/10.1016/j.cogsys.2017.08.003

Limerick, H., Coyle, D., & Moore, J. W. (2014). The experience of agency in human-computer interactions: a review. Frontiers in Human Neuroscience, 8, 643–643. https://doi.org/10.3389/fnhum.2014.00643

Lipton, Z. C. (2016). The Mythos of Model Interpretability. ArXiv:1606.03490 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1606.03490

Livingston, S., & Risse, M. (2019). The Future Impact of Artificial Intelligence on Humans and Human Rights. Ethics and International Affairs, 33(2), 141–158. https://doi.org/10.1017/S089267941900011X

Loi, D. (2019). Ten Guidelines for Intelligent Systems Futures. In K. Arai, R. Bhatia, & S. Kapoor (Eds.), Proceedings of the Future Technologies Conference (FTC) 2018 (Vol. 880, pp. 788–805). https://doi.org/10.1007/978-3-030-02686-8_59

Loi, D., Lodato, T., Wolf, C. T., Arar, R., & Blomberg, J. (2018). PD manifesto for AI futures. Proceedings of the 15th Participatory Design Conference on Short Papers, Situated Actions, Workshops and Tutorial — PDC ’18, 1–4. https://doi.org/10.1145/3210604.3210614

Loosemore, R. P. W. (2014). The Maverick Nanny with a dopamine drip: Debunking fallacies in the theory of AI motivation. AAAI Spring Symposium Technical Report SS-14-03, 31–36.

Lou, Y., Caruana, R., Gehrke, J., & Hooker, G. (2013). Accurate intelligible models with pairwise interactions. Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining — KDD ’13, 623. https://doi.org/10.1145/2487575.2487579

Lundberg, S., & Lee, S.-I. (2017). A Unified Approach to Interpreting Model Predictions. ArXiv:1705.07874 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1705.07874
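
Note: for readers who want to try the method above, a minimal sketch follows using the open-source shap package that implements it. The dataset and model here are stand-ins, not drawn from the paper; the list-returning behaviour of shap_values for classifiers can vary by shap version.

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)   # exact, fast path for tree ensembles
shap_values = explainer.shap_values(X)  # additive per-feature attributions
shap.summary_plot(shap_values[1], X)    # global summary for the positive class
```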

Powers, T. M. (Ed.). (2017). Philosophy and computing: Essays in epistemology, philosophy of mind, logic, and ethics. New York, NY: Springer.

Madras, D., Creager, E., Pitassi, T., & Zemel, R. (2018). Learning Adversarially Fair and Transferable Representations. ArXiv:1802.06309 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1802.06309

Madva, A. (2018). Implicit Bias, Moods, and Moral Responsibility. Pacific Philosophical Quarterly, 99, 53–78. https://doi.org/10.1111/papq.12212

Mahieu, R., van Eck, N. J., van Putten, D., & van den Hoven, J. (2018). From dignity to security protocols: A scientometric analysis of digital ethics. Ethics and Information Technology, 20(3), 175–187. https://doi.org/10.1007/s10676-018-9457-5

Makri, E.-L., & Lambrinoudakis, C. (2015). Privacy Principles: Towards a Common Privacy Audit Methodology. In S. Fischer-Hübner, C. Lambrinoudakis, & J. López (Eds.), Trust, Privacy and Security in Digital Business (Vol. 9264, pp. 219–234). https://doi.org/10.1007/978-3-319-22906-5_17

Malle, B. F., Guglielmo, S., & Monroe, A. E. (2014). A Theory of Blame. Psychological Inquiry, 25(2), 147–186. https://doi.org/10.1080/1047840X.2014.877340

Manders-Huits, N. (2010). Practical versus moral identities in identity management. Ethics and Information Technology, 12(1), 43–55. https://doi.org/10.1007/s10676-010-9216-8

Marinakis, Y., Harms, R., Milne, B. T., & Walsh, S. T. (2018). Cyborged ecosystems: Scenario planning and Participatory Technology Assessment of a potentially Rosennean-complex technology. Ecological Complexity, 35, 98–105. https://doi.org/10.1016/j.ecocom.2017.10.005

Marmor, A. (2015). What Is the Right to Privacy? Philosophy & Public Affairs, 43(1), 3–26. https://doi.org/10.1111/papa.12040

Martin, K. (2012). Information technology and privacy: Conceptual muddles or privacy vacuums? Ethics and Information Technology, 14(4), 267–284. https://doi.org/10.1007/s10676-012-9300-3

Matzner, T. (2014). Why privacy is not enough: Privacy in the context of “ubiquitous computing” and “big data”. Journal of Information, Communication and Ethics in Society, 12(2), 93–106. https://doi.org/10.1108/JICES-08-2013-0030

McAuley, J., & Leskovec, J. (2013). Hidden factors and hidden topics: Understanding rating dimensions with review text. Proceedings of the 7th ACM Conference on Recommender Systems — RecSys ’13, 165–172. https://doi.org/10.1145/2507157.2507163

McGregor, L., Murray, D., & Ng, V. (2019). International Human Rights Law as a Framework for Algorithmic Accountability. International and Comparative Law Quarterly, 68(2), 309–343. https://doi.org/10.1017/S0020589319000046

McMurtry, J. (2011). Human Rights versus Corporate Rights: Life Value, the Civil Commons and Social Justice. Studies in Social Justice, 5(1), 11–61. https://doi.org/10.26522/ssj.v5i1.991

Mermet, B., & Simon, G. (2016). Formal verification of ethical properties in multiagent systems. In CEUR Workshop Proceedings.

Michelfelder, D. P. (2010). Philosophy, privacy, and pervasive computing. AI & SOCIETY, 25(1), 61–70. https://doi.org/10.1007/s00146-009-0233-2

Microsoft. (n.d.). InterpretML — Alpha Release. Retrieved from GitHub website: https://github.com/Microsoft/interpret
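
Note: a hedged sketch of the glass-box side of the InterpretML toolkit above, assuming its ExplainableBoostingClassifier API; the dataset is a stand-in for illustration.

```python
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
ebm = ExplainableBoostingClassifier()  # inherently interpretable additive model
ebm.fit(X, y)
show(ebm.explain_global())  # per-feature shape functions, inspectable in-browser
```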

Mikhailov, D. (2019). A new method for ethical data science. Retrieved from Medium website: https://medium.com/wellcome-data-labs/a-new-method-for-ethical-data-science-edb59e400ae9

Miller, C., & Coldicott, R. (2019). People, Power and Technology: The Tech Workers’ View. Retrieved from Doteveryone website: https://doteveryone.org.uk/report/workersview/

Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38. https://doi.org/10.1016/j.artint.2018.07.007

Mintz-Woo, K. (2019). Principled Utility Discounting Under Risk. Moral Philosophy and Politics, 6(1), 89–112. https://doi.org/10.1515/mopp-2018-0060

MIT. (n.d.). Moral Machine. Retrieved from http://moralmachine.mit.edu/

Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., … Gebru, T. (2019). Model Cards for Model Reporting. Proceedings of the Conference on Fairness, Accountability, and Transparency — FAT* ’19, 220–229. https://doi.org/10.1145/3287560.3287596

Mittelstadt, B. (2019). AI Ethics — Too Principled to Fail? Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3391293

Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 205395171667967. https://doi.org/10.1177/2053951716679679

Mittelstadt, B., & Floridi, L. (2016). The Ethics of Big Data: Current and Foreseeable Issues in Biomedical Contexts. Science and Engineering Ethics, 22(2), 303–341. https://doi.org/10.1007/s11948-015-9652-2

Mittelstadt, B., Russell, C., & Wachter, S. (2019). Explaining Explanations in AI. Proceedings of the Conference on Fairness, Accountability, and Transparency — FAT* ’19, 279–288. https://doi.org/10.1145/3287560.3287574

Mohammad, A. A., Saidi, F., & Abdulkarim, M. E. (2019). Towards an integrative view of AIS: Using integrated business processes approach to framework the paradigm shift of AIS. International Journal of Business Process Integration and Management, 9(2), 63–75. https://doi.org/10.1504/IJBPIM.2019.099868

Molnar, C. (n.d.). Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. Retrieved from GitHub Books website: https://christophm.github.io/interpretable-ml-book/

Moret, C., Hurst, S. A., & Mauron, A. (2015). Variants of Unknown Significance and Their Impact on Autonomy. The American Journal of Bioethics, 15(7), 26–28. https://doi.org/10.1080/15265161.2015.1039727

Mulgan, G., & Straub, V. (2019, February 21). The new ecosystem of trust. Retrieved from Nesta website: https://www.nesta.org.uk/blog/new-ecosystem-trust/

Murphy, M. H. (2017). Algorithmic surveillance: The collection conundrum. International Review of Law, Computers & Technology, 31(2), 225–242. https://doi.org/10.1080/13600869.2017.1298497

New Economy Impact Model. (n.d.). Retrieved from The Federation website: http://ethicskit.org/downloads/economy-impact-model.pdf

Nguyen, L. (2018, January 26). Machine learning in practice: What are the steps?

Nicolae, M.-I., Sinn, M., Tran, M. N., Rawat, A., Wistuba, M., Zantedeschi, V., … Edwards, B. (2018). Adversarial Robustness Toolbox v0.4.0. ArXiv:1807.01069 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1807.01069

Nissenbaum, H. (2004). Privacy as Contextual Integrity. Washington Law Review, 79(1), 119–157.

Nuffield Council on Bioethics. (2015). The collection, linking and use of data in biomedical research and health care: Ethical issues. Retrieved from http://nuffieldbioethics.org/wp-content/uploads/Biological_and_health_data_web.pdf

Nys, T. (2015). Autonomy, Trust, and Respect. Journal of Medicine and Philosophy, jhv036. https://doi.org/10.1093/jmp/jhv036

ODI. (n.d.). Data Ethics Canvas User Guide. Retrieved from https://docs.google.com/document/d/1MkvoAP86CwimbBD0dxySVCO0zeVOput_bu1A6kHV73M/edit

OECD. (2019). Forty-two countries adopt new OECD Principles on Artificial Intelligence. Retrieved from https://www.oecd.org/science/forty-two-countries-adopt-new-oecd-principles-on-artificial-intelligence.htm

OECD. (n.d.). Recommendation of the Council on Artificial Intelligence. Retrieved from https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449

Oetzel, M. C., & Spiekermann, S. (2014). A systematic methodology for privacy impact assessments: A design science approach. European Journal of Information Systems, 23(2), 126–150. https://doi.org/10.1057/ejis.2013.18

ONS. (n.d.). The ONS Methodology working paper on Synthetic Data. Retrieved from https://www.ons.gov.uk/methodology/methodologicalpublications/generalmethodology/onsworkingpaperseries/onsmethodologyworkingpaperseriesnumber16syntheticdatapilot

OpenMined. (n.d.). Retrieved from https://www.openmined.org/

Orcutt, M. (2017). Personal AI Privacy Watchdog Could Help You Regain Control of Your Data. MIT Technology Review. Retrieved from https://www.technologyreview.com/s/607830/personal-ai-privacy-watchdog-could-help-you-regain-control-of-your-data/

O’Sullivan, S., Nevejans, N., Allen, C., Blyth, A., Leonard, S., Pagallo, U., … Ashrafian, H. (2019). Legal, regulatory, and ethical frameworks for development of standards in artificial intelligence (AI) and autonomous robotic surgery. The International Journal of Medical Robotics and Computer Assisted Surgery, 15(1), e1968. https://doi.org/10.1002/rcs.1968

Overdorf, R., Kulynych, B., Balsa, E., Troncoso, C., & Gürses, S. (2018). Questioning the assumptions behind fairness solutions. ArXiv:1811.11293 [Cs]. Retrieved from http://arxiv.org/abs/1811.11293

Oxborough, C., Cameron, E., Rao, A., Birchall, A., Townsend, A., & Westermann, C. (n.d.). Explainable AI: Driving Business Value through Greater Understanding. Retrieved from PWC website: https://www.pwc.co.uk/audit-assurance/assets/explainable-ai.pdf

Páez, A. (2019). The Pragmatic Turn in Explainable Artificial Intelligence (XAI). Minds and Machines. https://doi.org/10.1007/s11023-019-09502-w

Pagallo, U. (2012). Cracking down on autonomy: Three challenges to design in IT Law. Ethics and Information Technology, 14(4), 319–328. https://doi.org/10.1007/s10676-012-9295-9

Page, K. (2012). The four principles: Can they be measured and do they predict ethical decision making? BMC Medical Ethics, 13(1), 10. https://doi.org/10.1186/1472-6939-13-10

Pan, Y. (2016). Heading toward Artificial Intelligence 2.0. Engineering, 2(4), 409–413. https://doi.org/10.1016/J.ENG.2016.04.018

Panichas, G. E. (2014). An Intrusion Theory of Privacy. Res Publica, 20(2), 145–161. https://doi.org/10.1007/s11158-014-9240-3

Papernot, N., Abadi, M., Erlingsson, Ú., Goodfellow, I., & Talwar, K. (2016). Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data. ArXiv:1610.05755 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1610.05755

Papernot, N., Song, S., Mironov, I., Raghunathan, A., Talwar, K., & Erlingsson, Ú. (2018). Scalable Private Learning with PATE. ArXiv:1802.08908 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1802.08908

Parthemore, J., & Whitby, B. (2013). What makes any agent a moral agent? Reflections on machine consciousness and moral agency. International Journal of Machine Consciousness, 05(02), 105–129. https://doi.org/10.1142/S1793843013500017

Peters, D., & Calvo, R. A. (2019, May 2). Beyond principles: A process for responsible tech. Retrieved from Medium website: https://medium.com/ethics-of-digital-experience/beyond-principles-a-process-for-responsible-tech-aefc921f7317

Peters, D., Calvo, R. A., & Ryan, R. M. (2018). Designing for Motivation, Engagement and Wellbeing in Digital Experience. Frontiers in Psychology, 9, 797. https://doi.org/10.3389/fpsyg.2018.00797

Pickering, J. B., Engen, V., & Walland, P. (2017). The Interplay between Human and Machine Agency. ArXiv:1702.04537 [Cs]. Retrieved from http://arxiv.org/abs/1702.04537

Alper, P., Becker, R., Satagopam, V., Grouès, V., Lebioda, J., Jarosz, Y., … Schneider, R. (2018). Provenance-enabled stewardship of human data in the GDPR era. https://doi.org/10.7490/f1000research.1115768.1

Pineau, J. (2019). The Machine Learning Reproducibility Checklist. Retrieved from https://www.cs.mcgill.ca/~jpineau/ReproducibilityChecklist.pdf

Plumb, G., Al-Shedivat, M., Xing, E., & Talwalkar, A. (2019). Regularizing Black-box Models for Improved Interpretability (HILL 2019 Version). ArXiv:1906.01431 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1906.01431

Polykalas, S. E., & Prezerakos, G. N. (2019). When the mobile app is free, the product is your personal data. Digital Policy, Regulation and Governance, 21(2), 89–101. https://doi.org/10.1108/DPRG-11-2018-0068

Pommeranz, A., Detweiler, C., Wiggers, P., & Jonker, C. (2012). Elicitation of situated values: Need for tools to help stakeholders and designers to reflect and communicate. Ethics and Information Technology, 14(4), 285–303. https://doi.org/10.1007/s10676-011-9282-6

Potapov, A., & Rodionov, S. (2014). Universal empathy and ethical bias for artificial general intelligence. Journal of Experimental and Theoretical Artificial Intelligence, 26(3), 405–416. https://doi.org/10.1080/0952813X.2014.895112

Poursabzi-Sangdeh, F., Goldstein, D. G., Hofman, J. M., Vaughan, J. W., & Wallach, H. (2018). Manipulating and Measuring Model Interpretability. ArXiv:1802.07810 [Cs]. Retrieved from http://arxiv.org/abs/1802.07810

Priestman, W., Collins, R., Vigne, H., Sridharan, S., Seamer, L., Bowen, D., & Sebire, N. J. (2019). Lessons learned from a comprehensive electronic patient record procurement process — implications for healthcare organisations. BMJ Health & Care Informatics, 26(1), e000020. https://doi.org/10.1136/bmjhci-2019-000020

Pugh, J. (2019). Moral Bio-enhancement, Freedom, Value and the Parity Principle. Topoi, 38(1), 73–86. https://doi.org/10.1007/s11245-017-9482-8

PWC. (n.d.). The PwC Responsible AI Framework. Retrieved from https://www.pwc.co.uk/services/audit-assurance/risk-assurance/services/technology-risk/technology-risk-insights/accelerating-innovation-through-responsible-ai.html

Rahwan, I. (2018). Society-in-the-Loop: Programming the Algorithmic Social Contract. Ethics and Information Technology, 20(1), 5–14. https://doi.org/10.1007/s10676-017-9430-8

Ramesh, S. (2017). A checklist to protect human rights in artificial-intelligence research. Nature, 552(7685), 334. https://doi.org/10.1038/d41586-017-08875-1

Rath, P. (2011). ‘Social Issue Is Business Issue’: The New Agenda of Lattice 2010. Journal of Human Values, 17(2), 171–183. https://doi.org/10.1177/097168581101700206

Raveh, A. R., & Tamir, B. (2018). From homo sapiens to robo sapiens: The evolution of intelligence. Information (Switzerland), 10(1). https://doi.org/10.3390/info10010002

Reductions for Fair Machine Learning. (n.d.). Retrieved from https://github.com/Microsoft/fairlearn
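
Note: a minimal sketch of the reductions approach implemented in the repository above, assuming the current fairlearn package API (the repo has evolved since this list was compiled). The data and the sensitive-feature column are synthetic stand-ins.

```python
import pandas as pd
from fairlearn.reductions import ExponentiatedGradient, DemographicParity
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=0)
A = pd.Series(X[:, 0] > 0, name="group")   # stand-in sensitive feature

mitigator = ExponentiatedGradient(
    LogisticRegression(),
    constraints=DemographicParity(),       # bound group gaps in selection rate
)
mitigator.fit(X, y, sensitive_features=A)  # learns a randomized fair classifier
y_pred = mitigator.predict(X)
```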

Reed, C. (2018). How should we regulate artificial intelligence? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2128), 20170360. https://doi.org/10.1098/rsta.2017.0360

Rehg, W. (2015). Discourse ethics for computer ethics: A heuristic for engaged dialogical reflection. Ethics and Information Technology, 17(1), 27–39. https://doi.org/10.1007/s10676-014-9359-0

Reisman, D., Schultz, J., Crawford, K., & Whittaker, M. (2018). Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability. Retrieved from AINow website: https://ainowinstitute.org/aiareport2018.pdf

Responsible AI Licenses. (n.d.). Retrieved from https://www.licenses.ai/about

Ribeiro, M. T., Singh, S., & Guestrin, C. (2016a). ‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier. ArXiv:1602.04938 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1602.04938

Ribeiro, M. T., Singh, S., & Guestrin, C. (2016b, August 12). Local Interpretable Model-Agnostic Explanations (LIME): An introduction. A technique to explain the predictions of any machine learning classifier. Retrieved from https://www.oreilly.com/learning/introduction-to-local-interpretable-model-agnostic-explanations-lime
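
Note: an illustrative sketch of LIME via the open-source lime package that accompanies the two entries above; the classifier and dataset are stand-ins.

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)
exp = explainer.explain_instance(data.data[0], clf.predict_proba, num_features=4)
print(exp.as_list())  # local, human-readable weights for this one prediction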

Rivera, L. (2011). Harmful Beneficence. Journal of Moral Philosophy, 8(2), 197–222. https://doi.org/10.1163/174552411X563565

Roessler, B., & Mokrosinska, D. (2013). Privacy and social interaction. Philosophy & Social Criticism, 39(8), 771–791. https://doi.org/10.1177/0191453713494968

Roff, H. M. (2019). Artificial Intelligence: Power to the People. Ethics and International Affairs, 33(2), 127–140. https://doi.org/10.1017/S0892679419000121

Rönnegard, D. (2013). How Autonomy Alone Debunks Corporate Moral Agency. Business and Professional Ethics Journal, 32(2), 77–107. https://doi.org/10.5840/bpej2013321/24

Ronzhyn, A., & Wimmer, M. (2019, February 17). Literature Review of Ethical Concerns in the Use of Disruptive Technologies in Government 3.0. Presented at ICDS 2019, The Thirteenth International Conference on Digital Society and eGovernments. Retrieved from https://thinkmind.org/index.php?view=article&articleid=icds_2019_5_10_18003

Roselli, D., Matthews, J., & Talagala, N. (2019). Managing bias in AI. The Web Conference 2019 — Companion of the World Wide Web Conference, WWW 2019, 539–544. https://doi.org/10.1145/3308560.3317590

Royakkers, L., Timmer, J., Kool, L., & van Est, R. (2018). Societal and ethical issues of digitization. Ethics and Information Technology, 20(2), 127–142. https://doi.org/10.1007/s10676-018-9452-x

Royal Society, & British Academy. (n.d.). Data Management and Use: Governance in the 21st Century. Retrieved from https://royalsociety.org/~/media/policy/projects/data-governance/data-management-governance.pdf

RRI Toolkit: Built with and for the Community of Practice. (n.d.). Retrieved from RRI Toolkit website: https://www.rri-tools.eu/search-engine#keywords=@filterOption=40105@order=@page=1

Rudin, C. (2018). Please Stop Explaining Black Box Models for High Stakes Decisions. ArXiv:1811.10154 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1811.10154

Russell, C., Kusner, M. J., Loftus, J., & Silva, R. (2017). When Worlds Collide: Integrating Different Counterfactual Assumptions in Fairness. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett (Eds.), Advances in Neural Information Processing Systems 30 (pp. 6414–6423). Retrieved from http://papers.nips.cc/paper/7220-when-worlds-collide-integrating-different-counterfactual-assumptions-in-fairness.pdf

Russell, S. J., Norvig, P., Davis, E., & Edwards, D. (2016). Artificial intelligence: A modern approach (3rd ed., Global ed.). Boston: Pearson.

Ryffel, T., Trask, A., Dahl, M., Wagner, B., Mancuso, J., Rueckert, D., & Passerat-Palmbach, J. (2018). A generic framework for privacy preserving deep learning. ArXiv:1811.04017 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1811.04017

Saleiro, P., Kuester, B., Stevens, A., Anisfeld, A., Hinkson, L., London, J., & Ghani, R. (2018). Aequitas: A Bias and Fairness Audit Toolkit. ArXiv:1811.05577 [Cs]. Retrieved from http://arxiv.org/abs/1811.05577
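
Note: a hedged sketch of an Aequitas group audit, assuming the toolkit's expected column layout (score, label_value, plus attribute columns); the toy DataFrame is a stand-in.

```python
import pandas as pd
from aequitas.group import Group

# Toy scored dataset in the column layout Aequitas expects.
df = pd.DataFrame({
    "score":       [1, 0, 1, 1, 0, 1],
    "label_value": [1, 0, 0, 1, 0, 1],
    "race":        ["a", "a", "a", "b", "b", "b"],
})
xtab, _ = Group().get_crosstabs(df)  # per-group counts and error rates
print(xtab[["attribute_name", "attribute_value", "fpr", "fnr"]])
```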

Saltz, J. S., & Dewar, N. (2019). Data science ethical considerations: A systematic literature review and proposed project framework. Ethics and Information Technology. https://doi.org/10.1007/s10676-019-09502-5

Sampson, O., & Chapman, M. (2019, May 8). AI Needs an Ethical Compass. This Tool Can Help. Retrieved from Ideo website: https://www.ideo.com/blog/ai-needs-an-ethical-compass-this-tool-can-help

Samuel, A. L. (1960). Some Moral and Technical Consequences of Automation — A Refutation. Science, 132(3429), 741–742. https://doi.org/10.1126/science.132.3429.741

Sandbu, M. E. (2012). Stakeholder Duties: On the Moral Responsibility of Corporate Investors. Journal of Business Ethics, 109(1), 97–107. https://doi.org/10.1007/s10551-012-1382-7

Sandvig, C., Hamilton, K., Karahalios, K., & Langbort, C. (2014). Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms. Presented at “Data and Discrimination: Converting Critical Concerns into Productive Inquiry,” a preconference of the 64th Annual Meeting of the International Communication Association, Seattle, WA, USA.

Saunders, J. (2019). Kant and Degrees of Responsibility. Journal of Applied Philosophy, 36(1), 137–154. https://doi.org/10.1111/japp.12293

Schaefer, G. O., Kahane, G., & Savulescu, J. (2014). Autonomy and Enhancement. Neuroethics, 7(2), 123–136. https://doi.org/10.1007/s12152-013-9189-5

Schermer, B. W., Custers, B., & van der Hof, S. (2014). The crisis of consent: How stronger legal protection may lead to weaker consent in data protection. Ethics and Information Technology. https://doi.org/10.1007/s10676-014-9343-8

Scheuring, S. T., & Agah, A. (2014). An emotion theory approach to artificial emotion systems for robots and intelligent systems: Survey and classification. Journal of Intelligent Systems, 23(3), 325–343. https://doi.org/10.1515/jisys-2013-0069

Schicktanz, S., & Schweda, M. (2012). The Diversity of Responsibility: The Value of Explication and Pluralization. Medicine Studies, 3(3), 131–145. https://doi.org/10.1007/s12376-011-0070-8

Schmidt, J. A. (2014). Changing the Paradigm for Engineering Ethics. Science and Engineering Ethics, 20(4), 985–1010. https://doi.org/10.1007/s11948-013-9491-y

Seddon, R. F. J. (2013). Getting ‘virtual’ wrongs right. Ethics and Information Technology, 15(1), 1–11. https://doi.org/10.1007/s10676-012-9304-z

Sekiguchi, K., & Hori, K. (2018). Organic and dynamic tool for use with knowledge base of AI ethics for promoting engineers’ practice of ethical AI design. AI & SOCIETY. https://doi.org/10.1007/s00146-018-0867-z

Selbst, A. D. (2017). Disparate Impact in Big Data Policing. Georgia Law Review, 52(1), 109–196. Retrieved from https://heinonline.org/HOL/P?h=hein.journals/geolr52&i=121.

Seldon.io. (n.d.). Alibi. Retrieved from GitHub website: https://github.com/SeldonIO/alibi
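
Note: a hypothetical sketch of an anchor explanation with the Alibi library above; the model and data are stand-ins, and the Explanation object's attributes vary somewhat across alibi releases.

```python
from alibi.explainers import AnchorTabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = AnchorTabular(clf.predict, feature_names=data.feature_names)
explainer.fit(data.data)  # learn feature distributions for perturbation
explanation = explainer.explain(data.data[0], threshold=0.95)
print(explanation.anchor)  # if-then rule that "anchors" this prediction
```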

Sepinwall, A. J. (2016). Corporate Moral Responsibility. Philosophy Compass, 11(1), 3–13. https://doi.org/10.1111/phc3.12293

Serpico, D., & Frixione, M. (2018). Can the g factor play a role in artificial general intelligence research? In Proceedings of the AISB Annual Convention 2018.

Serrano, J. I., & Del Castillo, M. D. (2011). Do artificial general intelligent systems really need to be conscious? 1, 674–676.

Seth, S. (2017). Machine Learning and Artificial Intelligence. Economic and Political Weekly, 52(51).

Seymour, W. (2018). Detecting bias: Does an algorithm have to be transparent in order to be fair? In CEUR Workshop Proceedings.

Shaffer, M. J. (2009). Decision Theory, Intelligent Planning and Counterfactuals. Minds and Machines, 19(1), 61–92. https://doi.org/10.1007/s11023-008-9126-2

Shoemaker, D. W. (2010). Self-exposure and exposure of the self: Informational privacy and the presentation of identity. Ethics and Information Technology, 12(1), 3–15. https://doi.org/10.1007/s10676-009-9186-x

Shrikumar, A., Greenside, P., & Kundaje, A. (2017). Learning Important Features Through Propagating Activation Differences. ArXiv:1704.02685 [Cs]. Retrieved from http://arxiv.org/abs/1704.02685

Simonyan, K., Vedaldi, A., & Zisserman, A. (2013). Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. ArXiv:1312.6034 [Cs]. Retrieved from http://arxiv.org/abs/1312.6034

Skelton, A. (2016). Introduction to the symposium on The Most Good You Can Do. Journal of Global Ethics, 12(2), 127–131. https://doi.org/10.1080/17449626.2016.1193553

Sokol, K., & Flach, P. (2018). Glass-Box: Explaining AI Decisions With Counterfactual Statements Through Conversation With a Voice-enabled Virtual Assistant. Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, 5868–5870. https://doi.org/10.24963/ijcai.2018/865

Sorell, T. (2011). The Limits of Principlism and Recourse to Theory: The Example of Telecare. Ethical Theory and Moral Practice, 14(4), 369–382. https://doi.org/10.1007/s10677-011-9292-9

Soule, E., Hedahl, M., & Dienhart, J. (2009). Principles of Managerial Moral Responsibility. Business Ethics Quarterly, 19(4), 529–552. https://doi.org/10.5840/beq200919431

Spence, E. H. (2011). Information, knowledge and wisdom: Groundwork for the normative evaluation of digital information and its relation to the good life. Ethics and Information Technology, 13(3), 261–275. https://doi.org/10.1007/s10676-011-9265-7

Srinivas, R., Sireesha, K. A., & Vahida, S. (2017). Preserving Privacy in Vertically Partitioned Distributed Data Using Hierarchical and Ring Models. In S. S. Dash, K. Vijayakumar, B. K. Panigrahi, & S. Das (Eds.), Artificial Intelligence and Evolutionary Computations in Engineering Systems (Vol. 517, pp. 585–596). https://doi.org/10.1007/978-981-10-3174-8_49

Stahl, B. C., & Ess, C. M. (2015). 20 years of ETHICOMP: time to celebrate? Journal of Information, Communication and Ethics in Society, 13(3/4), 166–175. https://doi.org/10.1108/JICES-05-2015-0015

Stahl, B. C., & Wright, D. (2018). Ethics and Privacy in AI and Big Data: Implementing Responsible Research and Innovation. IEEE Security & Privacy, 16(3), 26–33. https://doi.org/10.1109/MSP.2018.2701164

Stamatellos, G. (2011). Virtue, Privacy and Self-Determination: A Plotinian Approach to the Problem of Information Privacy. International Journal of Cyber Ethics in Education, 1(4), 35–41. https://doi.org/10.4018/ijcee.2011100104

Stammers, T. (2015). The Evolution of Autonomy. The New Bioethics, 21(2), 155–163. https://doi.org/10.1179/2050287715Z.00000000070

Stanford Center for AI Safety. (n.d.). Why AI Safety? Retrieved from http://aisafety.stanford.edu/

Stark, L., & Hoffmann, A. L. (2019). Data Is the New What? Popular Metaphors & Professional Ethics in Emerging Data Culture. Journal of Cultural Analytics. https://doi.org/10.22148/16.036

Steier, A. M., & Belew, R. K. (1994). Talking about AI: socially-defined linguistic subcontexts in AI. 1, 715–720.

Stoyanovich, J., Howe, B., Abiteboul, S., Miklau, G., Sahuguet, A., & Weikum, G. (2017). Fides: Towards a Platform for Responsible Data Science. Proceedings of the 29th International Conference on Scientific and Statistical Database Management — SSDBM ’17, 1–6. https://doi.org/10.1145/3085504.3085530

Straehle, C. (2017). Vulnerability, autonomy, and applied ethics. Retrieved from http://www.myilibrary.com?id=959322

Sulkunen, P. (2011). Autonomy against Intimacy: On the Problem of Governing Lifestyle-Related Risks. Telos, 2011(156), 99–112. https://doi.org/10.3817/0911156099

Sun, W., Nasraoui, O., Khenissi, S., & Shafto, P. (2019). Debiasing the human-recommender system feedback loop in collaborative filtering. The Web Conference 2019 — Companion of the World Wide Web Conference, WWW 2019, 645–651. https://doi.org/10.1145/3308560.3317303

Suphakul, T., & Senivongse, T. (2017). Development of privacy design patterns based on privacy principles and UML. 2017 18th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD), 369–375. https://doi.org/10.1109/SNPD.2017.8022748

Surprenant, C. W. (2010). Kant’s contribution to moral education: The relevance of catechistics. Journal of Moral Education, 39(2), 165–174. https://doi.org/10.1080/03057241003754898

Sutrop, M. (2011). Changing Ethical Frameworks: From Individual Rights to the Common Good? Cambridge Quarterly of Healthcare Ethics, 20(4), 533–545. https://doi.org/10.1017/S0963180111000272

Swingler, K. (2011). The perils of ignoring data suitability: The suitability of data used to train neural networks deserves more attention. Presented at the NCTA 2011 — Proceedings of the International Conference on Neural Computation Theory and Applications. Retrieved from http://hdl.handle.net/1893/3950

Taddeo, M. (2009). Defining Trust and E-Trust: From Old Theories to New Problems. International Journal of Technology and Human Interaction, 5(2), 23–35. https://doi.org/10.4018/jthi.2009040102

Taddeo, M. (2010). Modelling Trust in Artificial Agents, A First Step Toward the Analysis of e-Trust. Minds and Machines, 20(2), 243–257. https://doi.org/10.1007/s11023-010-9201-3

Taddeo, M., & Floridi, L. (2018). How AI can be a force for good. Science, 361(6404), 751–752. https://doi.org/10.1126/science.aat5991

TensorFlow Privacy. (n.d.). Retrieved from https://github.com/tensorflow/privacy
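
Note: a hedged sketch of DP-SGD training with the TensorFlow Privacy library above; module paths have moved between releases, so treat the import as an assumption, and note the loss must be unreduced so gradients can be clipped per example.

```python
import tensorflow as tf
from tensorflow_privacy.privacy.optimizers.dp_optimizer_keras import DPKerasSGDOptimizer

model = tf.keras.Sequential([tf.keras.layers.Dense(10, activation="softmax")])
optimizer = DPKerasSGDOptimizer(
    l2_norm_clip=1.0,       # clip each per-example gradient to this L2 norm
    noise_multiplier=1.1,   # Gaussian noise scale relative to the clip norm
    num_microbatches=32,    # must evenly divide the batch size
    learning_rate=0.1,
)
loss = tf.keras.losses.SparseCategoricalCrossentropy(
    reduction=tf.keras.losses.Reduction.NONE  # keep per-example losses for clipping
)
model.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
```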

Tesfay, W. B., Hofmann, P., Nakamura, T., Kiyomoto, S., & Serna, J. (2018). PrivacyGuide: Towards an Implementation of the EU GDPR on Internet Privacy Policy Evaluation. Proceedings of the Fourth ACM International Workshop on Security and Privacy Analytics — IWSPA ’18, 15–21. https://doi.org/10.1145/3180445.3180447

Thapa, R. K., Iakovleva, T., & Foss, L. (2019). Responsible research and innovation: A systematic review of the literature and its applications to regional studies. European Planning Studies, 1–21. https://doi.org/10.1080/09654313.2019.1625871

The Turing Way. (n.d.). Retrieved from https://github.com/alan-turing-institute/the-turing-way

Tickle, A. B., Andrews, R., Golea, M., & Diederich, J. (1998). The truth will come to light: Directions and challenges in extracting the knowledge embedded within trained artificial neural networks. IEEE Transactions on Neural Networks, 9(6), 1057–1068. https://doi.org/10.1109/72.728352

Tudor-Locke, C., Craig, C. L., Brown, W. J., Clemes, S. A., De Cocker, K., Giles-Corti, B., … Blair, S. N. (2011). How many steps/day are enough? for adults. International Journal of Behavioral Nutrition and Physical Activity, 8(1), 79. https://doi.org/10.1186/1479-5868-8-79

Turilli, M. (2007). Ethical protocols design. Ethics and Information Technology, 9(1), 49–62. https://doi.org/10.1007/s10676-006-9128-9

Turilli, M. (2008). Ethics and the practice of software design. In A. Briggle, P. Brey, & K. Waelbers (Eds.), Current issues in computing and philosophy. Amsterdam: IOS Press.

Turilli, M., & Floridi, L. (2009). The ethics of information transparency. Ethics and Information Technology, 11(2), 105–112. https://doi.org/10.1007/s10676-009-9187-9

Twomey, M. (2015). Why Worry about Autonomy? Ethics and Social Welfare, 9(3), 255–268. https://doi.org/10.1080/17496535.2015.1024154

Vamplew, P., Dazeley, R., Foale, C., Firmin, S., & Mummery, J. (2018). Human-aligned artificial intelligence is a multiobjective problem. Ethics and Information Technology, 20(1), 27–40. https://doi.org/10.1007/s10676-017-9440-6

van de Poel, I. (2016). An Ethical Framework for Evaluating Experimental Technology. Science and Engineering Ethics, 22(3), 667–686. https://doi.org/10.1007/s11948-015-9724-3

van de Poel, I. (2018). Design for value change. Ethics and Information Technology. https://doi.org/10.1007/s10676-018-9461-9

van den Hoven, J., Vermaas, P. E., & van de Poel, I. (2015). Design for Values: An Introduction. In J. van den Hoven, P. E. Vermaas, & I. van de Poel (Eds.), Handbook of Ethics, Values, and Technological Design (pp. 1–7). https://doi.org/10.1007/978-94-007-6970-0_40

van Dijk, N. (2010). Property, privacy and personhood in a world of ambient intelligence. Ethics and Information Technology, 12(1), 57–69. https://doi.org/10.1007/s10676-009-9211-0

Varshney, K. R. (2018). Introducing AI Fairness 360. Retrieved from IBM Research blog: https://www.ibm.com/blogs/research/2018/09/ai-fairness-360/; toolkit available at https://aif360.mybluemix.net/
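
Note: a minimal sketch of a disparate-impact check with the AI Fairness 360 toolkit above, assuming its BinaryLabelDataset API; the toy DataFrame and the "sex" attribute are stand-ins.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.DataFrame({
    "feat":  [0.1, 0.9, 0.4, 0.7, 0.2, 0.8],
    "sex":   [0, 0, 0, 1, 1, 1],
    "label": [0, 1, 0, 1, 1, 1],
})
data = BinaryLabelDataset(df=df, label_names=["label"],
                          protected_attribute_names=["sex"])
metric = BinaryLabelDatasetMetric(
    data,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)
print(metric.disparate_impact())  # ratio of favourable-outcome rates; 1.0 is parity
```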

Vaughan, J., & Wallach, H. (2016). The Inescapability of Uncertainty: AI, Uncertainty, and Why You Should Vote No Matter What Predictions Say. Retrieved 4 July 2019, from Points, Data & Society website: https://points.datasociety.net/uncertainty-edd5caf8981b

Veale, M., Binns, R., & Edwards, L. (2018). Algorithms that remember: Model inversion attacks and data protection law. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180083. https://doi.org/10.1098/rsta.2018.0083

Veale, M., Van Kleek, M., & Binns, R. (2018). Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems — CHI ’18, 1–14. https://doi.org/10.1145/3173574.3174014

Villanova, D., Ince, E. C., & Bagchi, R. (2019). To explain or not: How process explanations impact assessments of predictors. Journal of Experimental Psychology: Applied. https://doi.org/10.1037/xap0000233

Volkman, R. (2010). WHY INFORMATION ETHICS MUST BEGIN WITH VIRTUE ETHICS. Metaphilosophy, 41(3), 380–401. https://doi.org/10.1111/j.1467-9973.2010.01638.x

Volpe, R. L., Levi, B. H., Blackall, G. F., & Green, M. J. (2012). Exploring the Limits of Autonomy. Hastings Center Report, 42(3), 16–18. https://doi.org/10.1002/hast.46

von Schomberg, R. (2008). From the ethics of technology towards an ethics of knowledge policy: Implications for robotics. AI & SOCIETY, 22(3), 331–348. https://doi.org/10.1007/s00146-007-0152-z

Wachter, S., & Mittelstadt, B. (2019). A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI (September 13, 2018). Columbia Business Law Review, forthcoming. Retrieved from https://ssrn.com/abstract=3248829

Wachter, S., Mittelstadt, B., & Floridi, L. (2017a). Transparent, explainable, and accountable AI for robotics. Science Robotics, 2(6), eaan6080. https://doi.org/10.1126/scirobotics.aan6080

Wachter, S., Mittelstadt, B., & Floridi, L. (2017b). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2), 76–99. https://doi.org/10.1093/idpl/ipx005

Wachter, S., Mittelstadt, B., & Russell, C. (2017). Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR. ArXiv:1711.00399 [Cs]. Retrieved from http://arxiv.org/abs/1711.00399
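
Note: a toy sketch of the counterfactual objective from the paper above: find an input x' close to x whose prediction reaches a desired target. Plain numpy with random local search for simplicity (the paper optimises a comparable loss with gradients); the toy predict function is a stand-in.

```python
import numpy as np

def counterfactual(predict, x, target, lam=10.0, steps=5000, scale=0.05, seed=0):
    """Minimise lam * (predict(x') - target)^2 + ||x' - x||_1 by hill-climbing."""
    rng = np.random.default_rng(seed)
    best, best_cost = x.copy(), np.inf
    for _ in range(steps):
        cand = best + rng.normal(0.0, scale, size=x.shape)  # local perturbation
        cost = lam * (predict(cand) - target) ** 2 + np.abs(cand - x).sum()
        if cost < best_cost:
            best, best_cost = cand, cost
    return best  # nearest-found input whose prediction approaches the target

# Toy model: probability rises with the sum of the features.
predict = lambda v: 1 / (1 + np.exp(-v.sum()))
x = np.array([-1.0, -0.5])
print(counterfactual(predict, x, target=0.9))
```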

Wallach, W., Allen, C., & Franklin, S. (2011). Consciousness and ethics: Artificially conscious moral agents. International Journal of Machine Consciousness, 3(1), 177–192. https://doi.org/10.1142/S1793843011000674

Wallach, W., Franklin, S., & Allen, C. (2010). A conceptual and computational model of moral decision making in human and artificial agents. Topics in Cognitive Science, 2(3), 454–485. https://doi.org/10.1111/j.1756-8765.2010.01095.x

Walt, S., & Schwartzman, M. (2017). Morality, Ontology, and Corporate Rights. The Law & Ethics of Human Rights, 11(1), 1–29. https://doi.org/10.1515/lehr-2017-0002

Walton, P. (2018). Artificial intelligence and the limitations of information. Information (Switzerland), 9(12). https://doi.org/10.3390/info9120332

Wang, P., Liu, K., & Dougherty, Q. (2018). Conceptions of artificial intelligence and singularity. Information (Switzerland), 9(4). https://doi.org/10.3390/info9040079

Wang, T., Zhao, J., Yu, H., Liu, J., Yang, X., Ren, X., & Shi, S. (2019). Privacy-preserving Crowd-guided AI Decision-making in Ethical Dilemmas. ArXiv:1906.01562 [Cs]. Retrieved from http://arxiv.org/abs/1906.01562

Wang, X., Shi, W., Kim, R., Oh, Y., Yang, S., Zhang, J., & Yu, Z. (2019). Persuasion for Good: Towards a Personalized Persuasive Dialogue System for Social Good. ArXiv:1906.06725 [Cs]. Retrieved from http://arxiv.org/abs/1906.06725

Warnecke, A., Arp, D., Wressnegger, C., & Rieck, K. (2019). Don’t Paint It Black: White-Box Explanations for Deep Learning in Computer Security. ArXiv:1906.02108 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1906.02108

Waser, M. R. (2009). What is artificial general intelligence? Clarifying the goal for engineering and evaluation. 186–191.

Webb, H., Patel, M., Rovatsos, M., Davoust, A., Ceppi, S., Koene, A., … Cano, M. (n.d.). “It would be pretty immoral to choose a random algorithm”: Opening up algorithmic interpretability and transparency. Journal of Information, Communication and Ethics in Society. Advance online publication. https://doi.org/10.1108/JICES-11-2018-0092

Wexler, J. (2018). The What-If Tool: Code-Free Probing of Machine Learning Models. Retrieved from https://ai.googleblog.com/2018/09/the-what-if-tool-code-free-probing-of.html; tool available at https://pair-code.github.io/what-if-tool/

Wiener, N. (1960). Some Moral and Technical Consequences of Automation. Science, 131(3410), 1355–1358. https://doi.org/10.1126/science.131.3410.1355

Wiener, N. (2007). Cybernetics: Or control and communication in the animal and the machine (2nd ed., 14th printing). Cambridge, MA: MIT Press.

Wightman, D. H., Jurkovic, L. G., & Chan, Y. E. (2005). Technology to facilitate ethical action: A proposed design. AI & SOCIETY, 19(3), 250–264. https://doi.org/10.1007/s00146-005-0336-3

Wilk, A. (2019). Teaching AI, Ethics, Law and Policy. ArXiv:1904.12470 [Cs]. Retrieved from http://arxiv.org/abs/1904.12470

Wilks, Y. (2019). Moral Orthoses: A New Approach to Human and Machine Ethics. AI Magazine, 40(1), 33–34. https://doi.org/10.1609/aimag.v40i1.2854

Williams, G. (2013). Sharing Responsibility and Holding Responsible. Journal of Applied Philosophy, 30(4), 351–364. https://doi.org/10.1111/japp.12019

Wilson, C. (2018). Auditing Algorithms @ Northeastern. Retrieved from http://personalization.ccs.neu.edu/

Winfield, A. F. T., & Jirotka, M. (2018). Ethical governance is essential to building trust in robotics and artificial intelligence systems. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180085. https://doi.org/10.1098/rsta.2018.0085

Winkler, T., & Spiekermann, S. (2018). Twenty years of value sensitive design: A review of methodological practices in VSD projects. Ethics and Information Technology. https://doi.org/10.1007/s10676-018-9476-2

Wolff, R. (2015). Emergent Privacy. Journal of Philosophy, 112(3), 141–158. https://doi.org/10.5840/jphil201511238

Wong, P.-H. (2019). Democratizing Algorithmic Fairness. Philosophy & Technology. https://doi.org/10.1007/s13347-019-00355-w

Wood, D. A., Choubineh, A., & Vaferi, B. (2018). Transparent open-box learning network provides auditable predictions: Pool boiling heat transfer coefficient for alumina-water-based nanofluids. Journal of Thermal Analysis and Calorimetry. https://doi.org/10.1007/s10973-018-7722-9

Wright, D. (2011). A framework for the ethical impact assessment of information technology. Ethics and Information Technology, 13(3), 199–226. https://doi.org/10.1007/s10676-010-9242-6

XAI Library. (n.d.). Retrieved from https://github.com/EthicalML/awesome-machine-learning-operations

Yampolskiy, R. V. (2019). Unpredictability of AI. ArXiv:1905.13053 [Cs]. Retrieved from http://arxiv.org/abs/1905.13053

Young, M., Magassa, L., & Friedman, B. (2019). Toward inclusive tech policy design: A method for underrepresented voices to strengthen tech policy documents. Ethics and Information Technology, 21(2), 89–103. https://doi.org/10.1007/s10676-019-09497-z

Guo, Y. (2017, August 31). The 7 Steps of Machine Learning. Retrieved from Towards Data Science website: https://towardsdatascience.com/the-7-steps-of-machine-learning-2877d7e5548e

Zafar, M. B., Valera, I., Rodriguez, M. G., & Gummadi, K. P. (2015). Fairness Constraints: Mechanisms for Fair Classification. ArXiv:1507.05259 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1507.05259

Zemel, R., Wu, Y., Swersky, K., Pitassi, T., & Dwork, C. (2013). Learning Fair Representations. In S. Dasgupta & D. McAllester (Eds.), Proceedings of the 30th International Conference on Machine Learning (pp. 325–333). Retrieved from http://proceedings.mlr.press/v28/zemel13.html

Zhang, J., & Bareinboim, E. (2018). Fairness in decision-making: The causal explanation formula. Presented at the 32nd AAAI Conference on Artificial Intelligence, AAAI 2018.

Zhang, Q., & Zhu, S. (2018). Visual interpretability for deep learning: A survey. Frontiers of Information Technology & Electronic Engineering, 19(1), 27–39. https://doi.org/10.1631/FITEE.1700808

Zhao, W.-W. (2018). Improving Social Responsibility of Artificial Intelligence by Using ISO 26000. IOP Conference Series: Materials Science and Engineering, 428, 012049. https://doi.org/10.1088/1757-899X/428/1/012049

Zheng, N. N., Liu, Z. Y., Ren, P. J., Ma, Y. Q., Chen, S. T., Yu, S. Y., … Wang, F. Y. (2017). Hybrid-augmented intelligence: collaboration and cognition. Frontiers of Information Technology and Electronic Engineering, 18(2), 153–179. https://doi.org/10.1631/FITEE.1700053

Ziosi, M. (2018). The three worlds of AGI: Popper’s theory of the three worlds applied to artificial general intelligence. In Proceedings of the AISB Annual Convention 2018.

Zivi, K. (2018). The Promise is in the Practice. Human Rights Review, 19(3), 395–398. https://doi.org/10.1007/s12142-018-0524-4

Zong, C., Wang, B., Sun, J., & Yang, X. (2014). Minimizing Explanations of Why-Not Questions. In W.-S. Han, M. L. Lee, A. Muliantara, N. A. Sanjaya, B. Thalheim, & S. Zhou (Eds.), Database Systems for Advanced Applications (Vol. 8505, pp. 230–242). https://doi.org/10.1007/978-3-662-43984-5_17

Zook, M., Barocas, S., boyd, danah, Crawford, K., Keller, E., Gangadharan, S. P., … Pasquale, F. (2017). Ten simple rules for responsible big data research. PLOS Computational Biology, 13(3), e1005399. https://doi.org/10.1371/journal.pcbi.1005399

Żuradzki, T. (2018). The normative significance of identifiability. Ethics and Information Technology. https://doi.org/10.1007/s10676-018-9487-z

Zyskind, G., Nathan, O., & Pentland, A. (2015). Enigma: Decentralized Computation Platform with Guaranteed Privacy. ArXiv:1506.03471 [Cs]. Retrieved from http://arxiv.org/abs/1506.03471
