Ethical Challenges of Artificial Intelligence: An Interview with Merve Hickok

RMDS Lab
May 25, 2021

As applications of artificial intelligence continue to advance, so do the ethical impacts that must be considered alongside them. RMDS Lab was happy to discuss the greatest ethical challenges AI specialists face, and what can be done about them, with Merve Hickok, founder of AIEthicist.

Tell us a little about yourself as well as your past and current research interests.

I am the founder of AIEthicist and Lighthouse Career Consulting. I am an independent AI ethics consultant, lecturer and speaker. I am also a member of the IEEE Work Groups that set global standards for autonomous systems; an instructor at RMDS Lab providing training on AI & Ethics; a ForHumanity Fellow contributing to an independent audit of AI systems; one of the founding editorial board members of the Springer Nature AI & Ethics journal; a regional lead for the Women in AI Ethics Collective; and a technical/policy expert for AI Policy Exchange. I am a certified HIPAA privacy & security expert (CHPSE). I worked for more than 15 years at a senior level in Fortune 100 companies, mainly focused on HR, HR Technology and Diversity. My research and work interests are AI ethics and bias and their implications for individuals and society. I work to create awareness, build capacity through training, and advocate for the ethical and responsible use of AI. I collaborate with several national and international organizations to build governance methods applicable to any AI system, and I also have more specific work addressing the implications of AI for HR systems and the future of work.

What are the main biases that exist in AI?

Almost any data that is created by humans, collected about humans, or selected by humans is biased by the very nature of humans and society. AI specialists need to acknowledge that the bias in these systems can reinforce and amplify the inequalities and discrimination that exist in society. Saying "I am just an engineer" or "I don't make the final decision" is not enough. We all have a responsibility to reimagine our world and make it better. The biases that can leak into a system are too many to name here, but a few to kick off the thinking process are sunk cost bias, automation bias, representation bias, measurement and selection biases, the framing effect, stereotyping, and availability bias.

How can these biases best be addressed?

We need more applied ethics training as part of our education, but also digital literacy across mainstream society. Without understanding the implications of big data and how it is processed, how bias can be built into an AI system, or, just as seriously, how AI can be used to exploit human biases, we cannot effectively address the issues these systems create. The developers and users of AI systems need to understand how they contribute to the creation of further social justice issues.

What are some of the greatest ethical challenges facing AI specialists?

The greatest challenge is how not to be part of a system, tool, or service that exploits human dignity, autonomy, or well-being. There is always a decision to be made. Should I prioritize revenue or deadlines over responsible debate about and development of the AI tool? Is AI even the best solution to this problem? Do I know my dataset well enough, and the context of that data? Have I voiced my concerns about the issues in the system? Have I empowered the team enough to voice concerns and take responsible action?

What steps can companies take to ensure they are using AI ethically?

  • Ethical and responsible work starts with C-level commitment, with leaders modeling it for the rest of the employees in their actions and priorities. This is not specific to AI but applies to all work associated with the organization.
  • It is then about embedding diversity and the organization's values and principles into its culture, its recruitment practices and incentive mechanisms, its project management process, the full lifecycle of product development and deployment, and its policies and procedures.
  • Another very critical step is creating the space for people to constructively bring up their ideas and concerns, so that everyone is expected to think about how to improve the product or service for the consumer, the company, and society, and has the ability to bring their thoughts forward.

What ethical aspects of AI are overlooked and need to be considered more?

Ethics and values are culture and context dependent. We need to be careful that we are not forcing our own values and priorities upon others, especially in a world without digital borders. An AI product you launch today has the potential to be available worldwide immediately. What we are overlooking are non-Western ethical values and perspectives, and also how the use of AI is reshaping power relationships within a society (between individuals, corporations, and government) and between different countries.

What ethical challenges do you face when applying AI to social justice issues?

There is an enthusiasm to apply AI tools to every single problem around us without actually diving deeper into the root causes and structural issues behind that problem (for example, policing or welfare benefit eligibility). We need to move away from that techno-solutionist mindset first. If, after looking at the context and history of the issue and deliberating with the stakeholders who have been involved in fighting a particular social justice issue, we decide that AI might be a solution, we definitely need to be extra diligent. The outcomes an AI solution produces impact human lives in substantial ways. Knowing that any data about humans is biased by its nature, AI systems might magnify and accelerate inequality in society and create further obstacles for people trying to access resources and opportunities.

What ethical challenges do you face when applying AI to helping those with intellectual disabilities?

Following up on the previous comment about techno-solutionism, we need to ensure that we are not falling into the ableism trap. When AI solutions are used with people with intellectual disabilities (or any disability, for that matter), there is a tendency to treat the able or typical body as the norm and to try to move the person with a disability toward that norm. Everyone has their own skills and brings diversity to the world. The tools created and offered should draw on the insights of the people who are impacted and work toward making their lives easier, not assume that developers have the solutions. The inclusive design approach was born out of disability justice work ("Nothing About Us Without Us"), and AI design should be no different. AI practitioners also need to ensure that their solutions are not biased, are not creating extra hurdles for people with disabilities, and are not evaluating them as "outliers" in data or results. One billion people, or 15% of the world's population, experience some form of disability. So we need to move away from treating the typical body and mind as the norm.

What is the best way to alleviate these challenges?

The best way is to understand that ethics is not a constraint on your innovation, an extra expense, or a delay to your products. If fully and consistently integrated, it is actually a way to differentiate your product and gain better insight into risks and opportunities. To alleviate these challenges, those in decision-making roles need to ensure that inclusive design and an ethical, auditable framework are in place in their development and implementation processes and policies. An ethical framework consistently requires teams to ask crucial questions about inclusion, outcomes, accuracy, metrics, feedback, and more.

What are some of the most interesting developments in the relationship between AI and intellectual disabilities?

Nearly 6.5 million people in the United States have some level of intellectual disability. AI can create individualized applications for people with intellectual disabilities, help them acquire and maintain adaptive behavior, and enhance their linguistic diversity. There should always be human oversight to ensure that the person's wellbeing is protected and that they are not being exploited or abused through these technologies. I do want to flip this question a bit, though, and say that for all our talk about artificial "intelligence," we do not know what intelligence is, how humans learn, and so on. So we need to be diligent.

What resources would you recommend to people who want to learn more about ethics in AI and data science?

I have actually created a huge repository of resources for exactly that reason, and I keep it current. I hope you enjoy everything at www.AIethicist.org, and let me know if I am missing any major work.

If you want to take your AI knowledge to the next level, check out Merve’s course, AI & Ethics: Bias, Diversity and Ethical Decision Making with AI Systems, which can be found at https://learn.grmds.org/

Join Merve and RMDS for our “Ask the Expert” webinar recording here: https://learn.grmds.org/course/view.php?id=45
