WAI Ethics Advisory Expert Group discusses privacy, power, fairness, and gendered AI with Ivana Bartoletti

Iva Simon Bubalo · Published in WomeninAI · Apr 19, 2021

On 7th March, the WAI Ethics committee hosted Ivana Bartoletti, Technical Director working on privacy at Deloitte and author of An Artificial Revolution: On Power, Politics and AI. Ivana is also a co-founder of the Women Leading in AI Network and an internationally recognised thought leader in responsible technology, focusing on privacy at its intersection with new technology and digital trade.

The discussion with Ivana centred on data privacy and collection, ethics in practice, definitions of fairness, and the problems surrounding gendered voice assistants and robotics.

Image source: UCD Centre for Ethics in Public Life (CEPL)

Challenges of data moving freely across the globe

Ivana Bartoletti shared with the group her interest in the issue of data moving across the globe: why can capital move freely around the world, while data is increasingly treated in a localised, nationalistic way? And what kind of capital is data today?

There is currently no global alignment on data sharing, even though there is increasing alignment on data protection. If an individual is based in the EU, their data must be treated according to EU law. The most important example for Europe is the GDPR, which regulates the transfer of personal data outside the EU and EEA.

The legal and ethical challenges here include intrusion by a state, or several states, and surveillance by companies. These challenges would be amplified, not reduced, should data protectionism prevail over data protection.

Data collection: Ethics of consent

The group raised the following question: are minorities better off consenting to share their data, knowing that they are underrepresented and what underrepresentation in AI entails? The discussion pointed to an inequality: the majority group can afford to withhold consent, while for minorities withholding data means that algorithms will not necessarily work in their favour. Do minority groups have an ethical duty to share their data for AI training, given how scarce that data is and how much it could contribute to fairer models? The problem with consent is that it is hardly ever freely given: dark patterns and deception around the perceived value exchange dominate today.

“Data can be collected for reasons to surveil, to punish in the case of criminal actions, or to include for representation, model robustness, and fairness.”
— Ivana Bartoletti

Definition of Fairness and Ethics in practice

Ivana pointed out that we currently operate with poorly defined notions of fairness that vary across disciplines. In privacy law, fairness is largely about the allocation of power between a controller and a data subject; in discrimination law, it focuses on outcomes, and is difficult to apply because of the opacity of algorithmic systems.
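Machine learning adds further statistical definitions of fairness, which can contradict one another on the very same model. As a minimal illustration (our own sketch, not something presented in the session), the snippet below computes two common criteria, demographic parity and equalised odds, on toy data constructed so that one criterion holds while the other is violated.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between the two groups."""
    rates = [y_pred[group == g].mean() for g in (0, 1)]
    return abs(rates[0] - rates[1])

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap between the groups' true-positive or false-positive rates."""
    gaps = []
    for label in (1, 0):  # label 1 compares TPRs, label 0 compares FPRs
        rates = [y_pred[(group == g) & (y_true == label)].mean() for g in (0, 1)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

# Toy data (entirely made up): both groups receive positive predictions
# at the same rate, yet their error rates differ.
y_true = np.array([1, 1, 0, 0, 1, 0, 0, 0])
y_pred = np.array([1, 1, 0, 0, 1, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_gap(y_pred, group))      # 0.0  -> parity satisfied
print(equalized_odds_gap(y_true, y_pred, group))  # 0.33 -> equalised odds violated
```

That the same predictions can satisfy one criterion while violating another is precisely why "fairness" must be pinned down before an algorithm can be audited, regulated, or litigated.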

Image source: Canva

Court cases are starting to emerge. In Bologna, Deliveroo used an algorithm through which riders booked their shifts, and which also evaluated riders' reliability and trustworthiness. The court in Bologna judged the algorithm discriminatory, because it downgraded a rider's trustworthiness in the same way whether they were sick or simply chose not to work.

A Dutch court ruled that the System Risk Indication (SyRI) system, used by the Dutch government to detect welfare fraud in areas such as benefits, allowances, and taxes, violated human rights.

Amsterdam’s Algorithm Register, a transparent overview of the artificial intelligence systems and algorithms used by the City of Amsterdam, is an example of ethics in practice. The register lists the datasets used to train each system, describes how and where each algorithm is used, and documents its potential risks and biases. Citizens can also give feedback on the use of AI technology in their city.
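To make this concrete, each register entry can be thought of as a small structured record. The sketch below is purely illustrative: the field names and example values are our own assumptions, not the City of Amsterdam’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class AlgorithmRegisterEntry:
    """Hypothetical register entry mirroring the fields described above;
    the schema is illustrative, not the city's real one."""
    name: str                     # the system being registered
    purpose: str                  # how and where the algorithm is used
    training_datasets: list[str]  # datasets used to train the system
    risks_and_biases: list[str]   # documented potential harms
    feedback_channel: str         # how citizens can respond

entry = AlgorithmRegisterEntry(
    name="Illegal holiday rental risk model",
    purpose="Prioritise reports of suspected illegal holiday rentals",
    training_datasets=["historical enforcement case records"],
    risks_and_biases=["risk of disproportionately flagging some neighbourhoods"],
    feedback_channel="feedback form on the register website",
)
```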

Gendered AI in voice technologies and robotics

Throughout history, artists and authors drawn from a narrow demographic group have been in a position to define standards of female beauty and demeanour. Today, high-fashion designers influence which looks and measurements are considered appealing and worth striving for, across all genders. Are we facing another such imposition of gender standards with Sophia the Robot, Alexa, Siri, and other assistant technologies that use a female voice? Is non-gendered, culturally neutral language even possible for voice technologies and robotics?

One controversial example: when a user told Siri, "Hey Siri, you're a bi***", Siri responded, "I'd blush if I could". UNESCO later used I'd blush if I could as the title of its publication on strategies to close gender divides in digital skills through education.

“Siri’s ‘female’ obsequiousness — and the servility expressed by so many other digital assistants projected as young women — provides a powerful illustration of gender biases coded into technology products”
— UNESCO 2030 Digital Fasttrack Studios (DFS)

Image source: CIO Bulletin

Children today are growing up giving orders to Siri, Alexa, and other voice assistants. We do not know to what extent this is done in a condescending way, but these assistants most commonly use a female voice and are perceived as female. Ivana pointed out that it would be interesting to study the long-term impact on children of interacting with gendered technology.

In conclusion, the group discussed open questions around the anthropomorphisation of AI in robotics, especially when robots are given a female gender. Is this currently done in a socially responsible and ethical way, and is it how we want to move forward? The challenge for robotics designers is to understand how gender becomes embodied in robots through a series of design decisions, and how to create gendered digital personalities in a socially responsible way.

Women in AI Ethics Advisory Expert Group

In March 2021, Women in AI (WAI) formed its Ethics Advisory Expert Group, chaired by Emoke Peter and consisting of 16 expert representatives of local WAI branches: France, Germany, Ireland, the Netherlands, Sweden, Italy, Austria, the US, Australia, and India.

The group’s mission is to educate each WAI country branch about today’s most important ethical issues, approached from the perspectives of law and legislation, policy, philosophy, and technical practice.

Throughout 2021, the WAI Ethics Advisory Expert Group is meeting monthly with some of the most prominent AI ethics advisors, to learn from them and help shape the discussion of ethics in AI.

Women in AI is a non-profit organisation and the first global community of women in Artificial Intelligence, with over 5,500 members in 100+ countries. We aim to shape gender-inclusive AI that benefits global society by increasing female representation and participation in AI. Our global platform empowers female AI visionaries and practitioners by providing access to educational resources, networking opportunities, and hands-on support, while growing a community of global pioneers in the field of Artificial Intelligence.

Visit us: www.womeninai.co

Visit us on LinkedIn!
Follow us on Twitter!
Find us on Facebook!
