AI Ethics Jobs Landscape — Part 1

Sudha Jamthe · BusinessSchoolofAI · Jun 6, 2021

What have data scientists done about bias in data and unfair AI algorithms?

This is Part 1 of a planned three-part series.

Updated Feb 26: My Artificial Intelligence for Product and Business Managers course at Stanford Continuing Studies (starting April 25, 2022, online for 5 weeks) is open for registration to anyone globally.

I am getting ready to teach an AI Ethics course, “CareerPivot to AI Ethics,” a short online course (3 weeks, 6 hours total) along with my friend and AI Ethicist Susanna Raj. This post analyzes the AI Ethics job space based on my work studying 140 real open jobs. All of these jobs are a result of AI Ethics becoming more mainstream, and all of them touch on AI Ethics even when the job title does not specifically reference “ethics.”

What is AI Ethics?

AI Ethics can be broadly defined as the practice of bringing ethics into the data, policy, and design of Artificial Intelligence to ensure fairness, trust, and transparency in AI products and solutions. To read more about this, see my past post about The How and Why of AI Ethics.

Incorporating AI ethics when building AI in a technology company is different from bringing AI ethics to an applied AI solution in industry, where models are built on enterprise data using a technology vendor’s AI product.

The challenge in bringing Ethics into AI

All AI is trained on data, and this data is biased, reflecting and exacerbating biases that exist in society. Several AI Ethics organizations and AI Ethics leaders have pushed for policy reform, AI governance across industries, and Responsible AI frameworks for technology development. This has created numerous AI Ethics jobs across industries, globally.
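To make “biased data” concrete, here is a minimal sketch in Python of one of the first checks a data scientist might run: comparing positive-outcome rates across demographic groups, often called a demographic parity check. The toy loan-approval dataset and its column names are made up for illustration.

```python
import pandas as pd

# Hypothetical toy loan-approval data; column names are made up.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   1,   0],
})

# Approval rate per demographic group.
rates = df.groupby("group")["approved"].mean()
print(rates)  # group A: 0.75, group B: 0.25

# Demographic parity gap: a large difference between the best- and
# worst-treated groups signals that the data (or the process that
# generated it) may encode a bias worth investigating.
gap = rates.max() - rates.min()
print(f"Approval-rate gap between groups: {gap:.2f}")  # 0.50
```

A gap like this does not prove unfairness on its own, but it is the kind of signal that triggers a deeper review of how the data was collected.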

This year I taught AI Ethics to three masters programs at Barcelona Technology School and gave a recent guest talk about AI Ethics to a group of ML Engineers at Fourthbrain. I had to recreate my curriculum three times: once for the engineering/data science students, once for the digital transformation leadership students, and once for the UX design students.

The lesson is that people in different technical roles understand and apply AI ethics differently, and each wants to contribute to removing bias from data from the perspective of their own role.

Now we are creating a fourth version of the AI Ethics course, with a focus on CareerPivoting from various job roles by developing transferable skills.

I am on a quest to demystify the AI Ethics landscape and guide my students, working professionals with personal leadership, on how to pivot to these exciting new roles by identifying transferable skills they can apply to AI ethics within their companies.

The challenge is that most of these positions do not mention AI Ethics in the job title. Conversely, many jobs have AI ethics expectations baked in as job requirements, as AI becomes ever more pervasive in every industry and every functional area.

Why do we need many different jobs in AI Ethics?

I am not seeing Chief Data Ethics jobs out there. Instead, there are several jobs that incorporate data privacy, bias identification, AI governance, and AI policy management into product and program roles.

This is because AI is built by a team of people across the Machine Learning (ML) lifecycle, and bias is introduced at every stage. Bias mitigation needs to be incorporated with a design lens, with explainability to reduce business risk, and with governance as the AI continues to learn from customer interactions.
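As one concrete illustration of the explainability piece (my sketch, not from the original post), here is a minimal Python example using scikit-learn’s permutation importance: shuffle one feature at a time and measure how much the model’s score drops. Features with outsized influence are candidates for a closer bias review, for example as possible proxies for protected attributes. The synthetic dataset stands in for real enterprise data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for enterprise data: 500 rows, 5 features.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: the score drop when one feature is shuffled.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```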

What have data scientists done about bias in data and about unfair AI algorithms?

Data scientists and ML engineers have been involved in building AI, sometimes alone as researchers and sometimes in partnership with product and business managers who help narrow down the business use cases. They have spotted extensive bias in many of these AI algorithms.

1. Search engines are the most common AI we use daily, and their algorithms are trained with data. Microsoft Bing, for example, decided to hide results for the phrase “tank man”, which leads to the Wikipedia page about the Tiananmen Square massacre. This type of information control cannot happen accidentally through biased data alone. I have no specific knowledge of how this was done inside a large technology company, because their bias mitigation processes are not transparent, but I am sure it cannot be done solely by a data scientist. In other words, business leaders and product managers necessarily shape how these algorithms behave.
2. Google Search offers many examples of glorifying or vilifying certain languages and cultures. Leon Derczynski collects several examples in his tweet thread here.

“Authoritative derogation of a few million people based on subjective opinions. This is a bad and harmful application of AI — it fuels marginalisation and bullying — but Google has no effective oversight or complaint mechanism for this.” — @LeonDerczynski

Google has acknowledged as much in its research paper “Debiasing Embeddings for Fairer Text Classification” (see the minimal sketch after this list). They need AI ethicists to build effective oversight and complaint-management mechanisms, enacted by people with program management and operations backgrounds.

3. Pharmaceutical companies do not have sufficiently diverse data representing all races, nor enough women, in their clinical trials. Research papers have been written about this (see References).

4. Police use predictive algorithms developed with AI that are biased to target certain communities, supposedly to prevent crime before it happens. Anyone reminded of the movie “Minority Report”? Researchers have run randomized controlled field trials of predictive policing and written about them here and here.

1,400 mathematicians wrote a letter condemning this bias. Now PredPol, the most widely used predictive policing algorithm, which was developed by the LAPD and a UCLA professor, has been discontinued.
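The Google paper cited in item 2 builds on projection-based embedding debiasing. As a minimal sketch of that family of techniques (simplified, not the paper’s exact method), one can remove a learned bias direction from word embeddings like this; the embeddings and the bias direction below are random stand-ins:

```python
import numpy as np

def debias(vectors: np.ndarray, bias_direction: np.ndarray) -> np.ndarray:
    """Remove each embedding's component along a bias direction.

    This is the 'neutralize' projection step popularized by Bolukbasi
    et al. (2016); real pipelines also estimate the bias direction
    carefully (e.g., from pairs like "he"/"she") and add an equalize step.
    """
    d = bias_direction / np.linalg.norm(bias_direction)
    # Subtract each vector's projection onto the (unit) bias direction.
    return vectors - np.outer(vectors @ d, d)

# Random stand-ins for 4-dimensional word embeddings and a bias direction.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(3, 4))
direction = rng.normal(size=4)

debiased = debias(embeddings, direction)
# After debiasing, every embedding is orthogonal to the bias direction.
print(debiased @ (direction / np.linalg.norm(direction)))  # ~[0, 0, 0]
```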

AI Ethics Jobs Landscape for Data Scientists

Here are open Data Science/Researcher jobs globally. If you are interested in any of these jobs and cannot find them easily on the company sites, let me know and I will be happy to share the details with you.

In Part 2 of this series, we will see what ML engineers and product and program managers in ML product development have done about AI Ethics. If you are interested in a CareerPivot to AI Ethics, feel free to check out my upcoming course at BusinessSchoolofAI.com/aiethicscourse.

References:

Mohler, G. O. et al. Randomized Controlled Field Trials of Predictive Policing. J. Am. Stat. Assoc. 110, 1399–1411 (2015).

Gilmore-Bykovskyi A, Jackson JD, Wilkins CH. The Urgency of Justice in Research: Beyond COVID-19. Trends Mol Med. 2021 Feb;27(2):97–100. doi: 10.1016/j.molmed.2020.11.004. Epub 2020 Nov 17. PMID: 33277159; PMCID: PMC7855410.

Brantingham, P. J., Valasik, M. & Mohler, G. O. Does Predictive Policing Lead to Biased Arrests? Results From a Randomized Controlled Trial. Stat. Public Policy 5, 1–6 (2018). https://doi.org/10.1080/2330443X.2018.1438940

Letter by 1,400 mathematicians condemning this bias: https://www.nature.com/articles/d41586-020-01874-9

Prost, F., Thain, N. & Bolukbasi, T. Debiasing Embeddings for Fairer Text Classification (2019). https://research.google/pubs/pub48410/

Sudha Jamthe is a Technology Futurist and AI instructor at Stanford Continuing Studies and online at Business School of AI. Check out her Artificial Intelligence for Product and Business Managers course at Stanford Continuing Studies (from April 25, 2022, for 5 weeks online) or her series of AI Ethics, Inclusive AI, and Responsible AI courses at BusinessSchoolofAI.com.
