Important Deep Learning Ethical Issues

Albert Energy
May 17 · 7 min read

Here I summarize how to identify and address the ethical issues of deep learning, drawing on the book “Deep Learning for Coders with fastai and PyTorch” by Jeremy Howard and Sylvain Gugger.

A four-step process is recommended for identifying and addressing the ethical issues of deep learning:

  1. Analyze a project you are working on.
  2. Implement processes at your company to find and address ethical risks.
  3. Increase diversity.
  4. Support good policy.

1. Analyze a Project You Are Working On

To avoid missing important issues when considering the ethical implications of your work, you and your team must ask yourselves the right questions. Rachel Thomas recommends considering the following questions throughout the development of a data project:

  • Should we even be doing this?
  • What bias is in the data we are collecting and storing?
  • Can the code and data be audited?
  • What are the error rates for different sub-groups?
  • What is the accuracy of a simple rule-based alternative? (both of these are illustrated in the sketch after this list)
  • What processes are in place to handle appeals or mistakes?
  • How diverse is the team that built it?

These questions may help you identify outstanding issues and possible alternatives that are easier to understand and control. In addition to asking the right questions, it’s also important to consider practices and processes to implement.

2. Implement Processes at Your Company to Find and Address Ethical Risks

Implement processes at your company, including regularly scheduled sweeps to proactively search for ethical risks (much as cybersecurity penetration testing does for security risks), expanding the ethical circle to include the perspectives of a variety of stakeholders, and thinking about the terrible people: how could bad actors abuse, steal, misinterpret, hack, destroy, or weaponize what you are building?
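
To make the idea of regularly scheduled sweeps concrete, here is a minimal sketch of how a team might encode its recurring ethics review as a checklist that fails a scheduled CI job when items go unanswered. Everything here (the checklist contents, the answers.json file) is a hypothetical illustration, not a process prescribed by the book:

```python
# Hypothetical "ethics sweep" runner: exits non-zero when any checklist
# item lacks a written answer, so a scheduled CI job surfaces the gap.
import json
import sys

ETHICS_CHECKLIST = [
    "Should we even be doing this?",
    "What bias is in the data we are collecting and storing?",
    "Can the code and data be audited?",
    "What are the error rates for different sub-groups?",
    "What processes are in place to handle appeals or mistakes?",
    "How could bad actors abuse or weaponize what we are building?",
]

def run_sweep(answers_path: str) -> int:
    """Return 0 if every item has a non-empty answer, 1 otherwise."""
    with open(answers_path) as f:
        answers = json.load(f)  # expected shape: {question: answer}
    missing = [q for q in ETHICS_CHECKLIST if not str(answers.get(q, "")).strip()]
    for q in missing:
        print(f"UNANSWERED: {q}")
    return 1 if missing else 0

if __name__ == "__main__":
    sys.exit(run_sweep("answers.json"))
```

The point is not the tooling but the cadence: like penetration testing, a sweep only works if it runs on a schedule rather than once.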

Here are several question sets that you and your team can work through to find and address ethical risks:

Even if you don’t have a diverse team, you can still try to proactively include the perspectives of a wider group, considering questions such as these (provided by the Markkula Center):

  • Whose interests, desires, skills, experiences, and values have we simply assumed, rather than actually consulted?
  • Who are all the stakeholders who will be directly affected by our product? How have their interests been protected? How do we know what their interests really are — have we asked?
  • Who/which groups and individuals will be indirectly affected in significant ways?
  • Who might use this product that we didn’t expect, or for purposes we didn’t initially intend?

Another useful resource from the Markkula Center considers how different foundational ethical lenses can help identify concrete issues, and lays out the following approaches and key questions:

  • The rights approach: Which option best respects the rights of all who have a stake?
  • The justice approach: Which option treats people equally or proportionately?
  • The utilitarian approach: Which option will produce the most good and do the least harm?
  • The common good approach: Which option best serves the community as a whole, not just some members?
  • The virtue approach: Which option leads me to act as the sort of person I want to be?

Markkula’s recommendations include a deeper dive into each of these perspectives, including looking at a project through the lens of its consequences:

  • Who will be directly affected by this project? Who will be indirectly affected?
  • Will the effects in aggregate likely create more good than harm, and what types of good and harm?
  • Are we thinking about all relevant types of harm/benefit (psychological, political, environmental, moral, cognitive, emotional, institutional, cultural)?
  • How might future generations be affected by this project?
  • Do the risks of harm from this project fall disproportionately on the least powerful in society? Will the benefits go disproportionately to the well-off?
  • Have we adequately considered “dual-use”?

The alternative lens is the deontological perspective, which focuses on basic concepts of right and wrong:

  • What rights of others and duties to others must we respect?
  • How might the dignity and autonomy of each stakeholder be impacted by this project?
  • What considerations of trust and of justice are relevant to this design/project?
  • Does this project involve any conflicting moral duties to others, or conflicting stakeholder rights? How can we prioritize these?

One of the best ways to help come up with complete and thoughtful answers to questions like these is to ensure that the people asking the questions are diverse.

3. Increase Diversity

Currently, less than 12% of AI researchers are women, according to a study from Element AI, and the statistics are similarly dire when it comes to race and age. When everybody on a team has similar backgrounds, they are likely to have similar blind spots around ethical risks. The Harvard Business Review (HBR) has published a number of studies showing the many benefits of diverse teams.

Diversity can lead to problems being identified earlier, and a wider range of solutions being considered. For instance, Tracy Chou was an early engineer at Quora. She wrote of her experiences, describing how she advocated internally for adding a feature that would allow trolls and other bad actors to be blocked. Chou recounts, “I was eager to work on the feature because I personally felt antagonized and abused on the site (gender isn’t an unlikely reason as to why)… But if I hadn’t had that personal perspective, it’s possible that the Quora team wouldn’t have prioritized building a block button so early in its existence.” Harassment often drives people from marginalized groups off online platforms, so this functionality has been important for maintaining the health of Quora’s community.

A crucial aspect to understand is that women leave the tech industry at over twice the rate that men do, according to the Harvard Business Review (41% of women working in tech leave, compared to 17% of men). An analysis of over 200 books, white papers, and articles found that the reason they leave is that “they’re treated unfairly; underpaid, less likely to be fast-tracked than their male colleagues, and unable to advance.”

Studies have confirmed a number of the factors that make it harder for women to advance in the workplace. Women receive more vague feedback and personality criticism in performance evaluations, whereas men receive actionable advice tied to business outcomes (which is more useful). Women frequently experience being excluded from more creative and innovative roles, and not receiving high-visibility “stretch” assignments that are helpful in getting promoted. One study found that men’s voices are perceived as more persuasive, fact-based, and logical than women’s voices, even when reading identical scripts.

Receiving mentorship has been statistically shown to help men advance, but not women. The reason behind this is that when women receive mentorship, it’s advice on how they should change and gain more self-knowledge. When men receive mentorship, it’s public endorsement of their authority. Guess which is more useful in getting promoted?

As long as qualified women keep dropping out of tech, teaching more girls to code will not solve the diversity issues plaguing the field. Diversity initiatives often end up focusing primarily on white women, even though women of color face many additional barriers. In interviews with 60 women of color who work in STEM research, 100% had experienced discrimination.

The hiring process is particularly broken in tech. One study indicative of the dysfunction comes from Triplebyte, a company that helps place software engineers in companies, conducting a standardized technical interview as part of this process. They have a fascinating dataset: the results of how over 300 engineers did on their exam, coupled with the results of how those engineers did during the interview process for a variety of companies. The number one finding from Triplebyte’s research is that “the types of programmers that each company looks for often have little to do with what the company needs or does. Rather, they reflect company culture and the backgrounds of the founders.”

This is a challenge for those trying to break into the world of deep learning, since most companies’ deep learning groups today were founded by academics. These groups tend to look for people “like them” — that is, people that can solve complex math problems and understand dense jargon. They don’t always know how to spot people who are actually good at solving real problems using deep learning.

This leaves a big opportunity for companies that are ready to look beyond status and pedigree, and focus on results!

4. Support Good Policy

Although the book makes no specific mention of how to support good policy on deep learning ethics, it implies that support is needed at three levels. For good ethical policy on deep learning to take hold, the following conditions must exist:

1. Individuals must be educated about the ethical issues of deep learning and how to reason through them, and must make a habit of continually developing their ethical awareness and skills.

2. Companies must create incentives for ethical behavior and establish transparent checks and balances, both internal and external, that keep a diverse group of people mutually accountable on a regular basis.

3. Governments must create effective policies and laws that incentivize people and companies to act ethically.

However, even with these in place, we may not see any significant changes in the ethical practices of deep learning applications until the underlying profit incentives change.

As a subject for continued discussion: how could we simplify the process of identifying and solving ethical issues in deep learning to the point that just about anyone could learn the ethical basics in a matter of minutes? I’m happy to hear any thoughts and feedback. Happy coding!

unpackAI

unpackAI is a nonprofit organization that makes AI and Deep Learning education as accessible as possible by offering free virtual bootcamps with a community-driven learning experience and the guidance of professional mentors. Follow us: https://www.linkedin.com/company/14590931/

Written by Albert Energy

American living in Shanghai, China for 20 years. My passion is empowering people to turn their innovative ideas into meaningful and sustainable businesses!
