Towards an Ethical Approach to AI

Rick Qiu
Jan 3, 2020

1. Societal Well-being and Environmental Sustainability

The development and use of Artificial Intelligence Systems (AIS) should promote the well-being of people and the planet and ensure the sustainability of the environment (Montréal Declaration, 2018). AIS can be used to enable self-realisation and enhance social skills, but the overuse or misuse of AIS can equally contribute to their deterioration (Floridi et al., 2018). AIS must ensure the essential preconditions for life on our planet, prosperity for humankind, and the preservation of a sustainable environment for future generations (EGE, 2018). HLEG (2019) states that “the use of AI systems should be given careful consideration particularly in situations relating to the democratic process, including not only political decision-making but also electoral contexts”. The effects of these systems must therefore be carefully monitored. In general, the use of AI must benefit the society, culture, economy, politics and environment of the planet.

2. Respect for Autonomy

Individual autonomy is the “freedom of the individual”: the fundamental right to be one’s own person, to make one’s own choices and to live one’s own life. “Self-nudging” preserves autonomy and dignity because it relies on the intrinsic motivations of a compos mentis person to behave in socially preferable ways. The predictive power and constant optimisation of AIS pose a risk to human self-determination. The Montréal Declaration (2018) articulates that “AIS must be developed and used while respecting people’s autonomy, and with the goal of increasing people’s control over their lives and their surroundings”. Floridi et al. (2018) call for “striking a balance between the decision-making power we retain for ourselves and that which we delegate to artificial agents”.

3. Justice and Non-discrimination

AIS should promote justice and equality and seek to eliminate all forms of discrimination. The development and use of AI should correct past wrongs by removing unfair discrimination, ensuring that benefits are shared by everyone, and preventing the creation of new harms to society (Floridi et al., 2018). Google AI (2019) states that “AI algorithms and datasets can reflect, reinforce, or reduce unfair biases”, while HLEG (2019) argues that “In an AI context, equality entails that the system’s operations cannot generate unfairly biased outputs”. HLEG (2019) also calls for “adequate respect for potentially vulnerable persons and groups, such as workers, women, persons with disabilities, ethnic minorities, children, consumers or others at risk of exclusion”. We need to ensure that AI systems are fair in their impact on people’s lives; one minimal way to test this is sketched below.
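As one concrete illustration (my own sketch, not drawn from the cited sources), demographic parity compares a model’s positive-prediction rate across groups defined by a protected attribute; a large gap between groups is one signal of unfairly biased outputs. The predictions and group labels below are invented for the example:

```python
import numpy as np

# Illustrative model decisions (e.g., loan approvals) and group labels.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

# Positive-prediction rate per group, and the gap between them.
rates = {g: float(y_pred[group == g].mean()) for g in np.unique(group)}
gap = max(rates.values()) - min(rates.values())
print(rates)
print(f"demographic parity gap: {gap:.2f}")  # a large gap warrants investigation
```

Demographic parity is only one fairness notion among several, but a check this simple already makes “unfairly biased outputs” something a team can measure rather than merely assert.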

4. Privacy and Security

The deployment and use of AI must not infringe on personal privacy, which is a fundamental human right. AIS must be designed to protect privacy and intimacy from intrusion and data acquisition, and they should not be used to suppress the democratic process through mass surveillance or the digital evaluation of citizens. The privacy of thoughts, emotions and lifestyle choices must be strictly protected from AIS, and people must always have the right to disconnect from networks and live the lifestyle they prefer. AIS must guarantee the confidentiality and anonymity of personal data (Montréal Declaration, 2018). The General Data Protection Regulation (GDPR) is the EU’s privacy law; it protects and empowers the data privacy of all EU citizens and reshapes the way organisations in the EU approach it, so any organisation handling personal data in the EU must comply with it.

A security breach of an AIS not only causes reputational, economic, or even physical harm to users but can also threaten the existence of a business. AIS must therefore be secure by design to prevent unauthorised access. Personal data must be stored in safe, distributed systems behind firewall protection, and those systems must be regularly audited and promptly updated to close any vulnerability that could be exploited by hackers or emerging malware.
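As a small illustration of privacy by design (my own sketch; the field names and key-management scheme are assumptions, and this is not a full GDPR compliance recipe), direct identifiers can be pseudonymised with a keyed hash before storage, keeping records linkable for analysis without exposing the raw identifier:

```python
import hashlib
import hmac

SECRET_KEY = b"fetch-me-from-a-key-management-service"  # illustrative; never hard-code

def pseudonymise(identifier: str) -> str:
    """Return a stable, non-reversible token for a personal identifier."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "alice@example.com", "age": 34}  # invented example record
stored = {"user_token": pseudonymise(record["email"]), "age": record["age"]}
print(stored)  # the raw email never reaches the data store
```

Using a keyed HMAC rather than a plain hash means an attacker who steals the data store cannot simply re-hash candidate emails to reverse the tokens; the key itself must be protected separately.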

5. Robustness and Safety

HLEG (2019) defines the scope of “robustness and safety” as “including resilience to attack and security, fall back plan and general safety, accuracy, reliability and reproducibility”. Technically, AI safety has three areas: specification, robustness, and assurance. A specification defines the goal of the system and ensures that the AIS behaves as its operator intends; it must ensure that the system optimises for the true goal and behaves according to the designer’s true wishes, rather than optimising for a poorly specified or wrong goal. Robustness is concerned with designing AIS to withstand perturbations so that the system continues to operate within safe limits; its considerations include avoiding risks (prevention) as well as self-stabilisation and graceful degradation (recovery). Safety problems arising from distributional shift, adversarial inputs, and unsafe exploration are all robustness problems. Assurance aims to monitor and control system activity, ensuring that we can understand and control an AIS during operation. A self-learning or autonomous AIS must have a “stop button” so that a process can be safely aborted when needed, as sketched below.
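The following sketch shows one way to implement such a safe abort in Python: the loop listens for an interrupt signal and only stops between steps, so the system is left in a consistent state. The train_step placeholder and the use of SIGINT are my own illustrative assumptions, not part of the cited guidelines:

```python
import signal
import time

stop_requested = False

def request_stop(signum, frame):
    """Signal handler: ask the loop to finish the current step, then halt."""
    global stop_requested
    stop_requested = True

signal.signal(signal.SIGINT, request_stop)  # Ctrl+C acts as the stop button

def train_step(step):
    """Hypothetical placeholder for one unit of self-learning work."""
    time.sleep(0.1)

step = 0
while not stop_requested:
    train_step(step)
    step += 1

# Because the loop only stops between steps, state stays consistent and
# can be checkpointed here before the process exits.
print(f"stopped safely after {step} steps")
```

The design choice worth noting is that the handler merely sets a flag rather than killing the process mid-step: an abort that interrupts arbitrary work can itself leave the system in an unsafe state.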

6. Explainability

Explainable AI (XAI) refers to techniques whose results humans can understand and therefore trust. Explainability covers not only whether a model’s outputs are interpretable but also whether the whole process and intention surrounding the model are. Designers of AIS need to explain 1) how the system benefits or affects the concerned parties, 2) what data sources are used and how they are used, and 3) how the inputs to a model lead to its outputs. Floridi et al. (2018) promote “engagement, openness, and contestability” as an ethical approach to AI.
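One widely used technique for point 3 is permutation feature importance: shuffle a single input column and measure how much the model’s accuracy drops; features whose shuffling hurts the most are the ones driving the outputs. The sketch below is illustrative only; the dataset, model and metric are my own choices rather than anything prescribed by the cited sources:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = accuracy_score(y_test, model.predict(X_test))

rng = np.random.default_rng(0)
for i in range(X_test.shape[1]):
    X_perm = X_test.copy()
    rng.shuffle(X_perm[:, i])  # break the link between feature i and the target
    drop = baseline - accuracy_score(y_test, model.predict(X_perm))
    print(f"feature {i:2d}: importance {drop:+.4f}")  # bigger drop = more influential
```

The method is model-agnostic, which makes it a practical first answer to “how do the inputs lead to the outputs” even for black-box models.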

7. Accountability

In the ethical sense, accountability is the question of who is responsible for the way an AIS works. Floridi et al. (2018) point out that “for AI to be just, we must ensure that the technology or, more accurately, the people and organisations developing and deploying it are held accountable in the event of a negative outcome, which would require in turn some understanding of why this outcome arose”. To ensure responsibility and accountability for AIS, HLEG (2019) outlines four requirements: “auditability, minimisation & reporting of negative impacts, trade-offs and redress”.
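As a sketch of what the auditability requirement can look like in practice (my own illustration, assuming a hypothetical decision function and an append-only JSON-lines log), every automated decision is recorded with enough context to explain it later and to support redress:

```python
import json
import time

def audited(predict, model_version, log_path="decisions.log"):
    """Wrap a decision function so every call leaves an audit trail."""
    def wrapper(features):
        decision = predict(features)
        entry = {"ts": time.time(), "model": model_version,
                 "input": features, "output": decision}
        with open(log_path, "a") as f:  # append-only record supports later review
            f.write(json.dumps(entry) + "\n")
        return decision
    return wrapper

# Illustrative decision rule: approve when the combined score exceeds 1.0.
approve = audited(lambda x: int(sum(x) > 1.0), model_version="v1.2")
print(approve([0.4, 0.9]))  # the decision and its inputs are now on record
```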

References

Montréal Declaration (2018). Montréal Declaration for a Responsible Development of Artificial Intelligence. Université de Montréal. [online] Available at: https://www.montrealdeclaration-responsibleai.com/the-declaration [Accessed 3 Jan. 2020].

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P. and Vayena, E. (2018). AI4People — An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations, Minds and Machines. [online] Springer.com. Available at: https://link.springer.com/article/10.1007/s11023-018-9482-5 [Accessed 3 Jan. 2020].

European Group on Ethics (EGE) in Science and New Technologies (2018). Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems. European Commission. [online] Available at: http://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf [Accessed 3 Jan. 2020].

High-Level Expert Group (HLEG) (2019). Ethics guidelines for trustworthy AI. Digital Single Market. [online] Available at: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai [Accessed 3 Jan. 2020].

Google AI (2019). Our Principles — Google AI. [online] Available at: https://ai.google/principles/ [Accessed 3 Jan. 2020].
