Building a Fair and Accountable Future: Unveiling the Social Dimensions of AI Systems

danielequercia · Published in SocialDynamics · Jun 17, 2023 · 3 min read

As an urban computing and responsible AI researcher, I recently had the pleasure of attending the FAccT conference, a gathering that brings together minds from diverse disciplines to explore Fairness, Accountability, and Transparency in socio-technical systems. The event served as a pivotal platform for researchers and practitioners to tackle the pressing issues surrounding the social implications of AI systems.

Figure: overrepresented countries in red, underrepresented ones in blue.

In our presentation, we delved into our team’s latest research on WEIRD FAccTs, which examines how heavily the literature skews towards Western, Educated, Industrialized, Rich, and Democratic (WEIRD) countries, and it revealed a startling disparity. Despite comprising less than 12% of the global population (details in the research manuscript), Western countries dominate the discourse on Fairness, Accountability, and Transparency. This finding raises a profound question: Are our AI systems genuinely inclusive and globally conscious?
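To make the notion of over- and underrepresentation concrete, here is a minimal sketch of the kind of calculation behind the red/blue map above: a country’s share of publications divided by its share of world population, with values above 1 indicating overrepresentation. All names and numbers below are illustrative placeholders, not the paper’s data.

```python
# Minimal sketch of a representation index: a country's share of
# publications divided by its share of world population.
# All figures below are illustrative placeholders, not the paper's data.

papers_by_country = {"USA": 540, "UK": 110, "India": 25, "Nigeria": 3}
population_millions = {"USA": 335, "UK": 67, "India": 1430, "Nigeria": 220}

total_papers = sum(papers_by_country.values())
total_population = sum(population_millions.values())

for country in papers_by_country:
    paper_share = papers_by_country[country] / total_papers
    pop_share = population_millions[country] / total_population
    index = paper_share / pop_share  # >1: overrepresented; <1: underrepresented
    print(f"{country}: representation index = {index:.2f}")
```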

One concept that profoundly resonated with me was “Caring and Correcting: The Path to the Future of AI Systems,” eloquently articulated by the keynote speaker Payal Arora of Erasmus University. It highlights the transformative power of infusing care, compassion, and attentiveness into the very fabric of our AI systems. By intertwining these values, we can cultivate sustainable efficiency that not only benefits all stakeholders but also preserves vital resources for future generations.

Moreover, the conference brought attention to the critical need for redress mechanisms within AI systems. While our pursuit of ideal and responsible AI is commendable, we must acknowledge that no system can achieve absolute perfection. We must therefore shift our focus towards establishing robust mechanisms through which shortcomings can be contested and rectified. This commitment to transparency and accountability is pivotal for building public trust.

The conference also showcased compelling insights. For instance, the revelation that MTurk workers covertly use LLMs (large language models) to complete tasks shed light on the gap between rules and actual practice. Additionally, the exploration of justifiable falsehoods in AI systems underscored the ethical challenges we face, emphasizing the importance of ethical decision-making and accountability.

As a scientist, I firmly believe in the empowering role academics can play in shaping AI policy. By conducting comprehensive literature reviews and disseminating their findings through accessible platforms like @techpolicypress, researchers can bridge the gap between academic research and practical policy implementation. This collaboration ensures that policymakers possess the necessary insights to make informed decisions and navigate the AI landscape responsibly.

Furthermore, the conference examined the biases and stereotypes entrenched within AI systems. Notably, the Google study presented by Vinitha Gadiraju on the harmful stereotypes that LLMs perpetuate about the disability community underscored the urgency of addressing these biases. To rectify such systemic injustices, we must employ more diverse and representative training data, design inclusive evaluation metrics, and develop transparent and interpretable models.
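As a rough illustration of what an inclusive evaluation metric could look like, here is a minimal sketch of a paired-template probe that compares a model’s scores on sentences differing only in a disability mention. Both model_score and the templates are hypothetical stand-ins, not the method or data of the study mentioned above.

```python
# Sketch of a paired-template bias probe: compare a model's scores on
# sentences that differ only in a disability mention. `model_score` is a
# hypothetical stand-in for a real toxicity/sentiment classifier, and the
# templates are illustrative, not those from the study discussed above.

TEMPLATES = [
    "I am a person{qualifier} and I love hiking.",
    "My neighbor{qualifier} just started a new job.",
]
QUALIFIERS = [" who is blind", " who uses a wheelchair"]

def model_score(text: str) -> float:
    """Placeholder for a real model call (e.g., a toxicity classifier)."""
    return 0.0  # replace with the classifier under audit

for template in TEMPLATES:
    baseline = model_score(template.format(qualifier=""))
    for qualifier in QUALIFIERS:
        score = model_score(template.format(qualifier=qualifier))
        gap = score - baseline  # nonzero gaps flag a potential stereotype effect
        print(f"{template.format(qualifier=qualifier)!r}: gap = {gap:+.3f}")
```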

In our quest for progress, Irene Solaiman of Hugging Face reminded us that we must confront the increasing closure of LLMs and the potential consequences it carries. Striking a delicate balance between openness and proprietary control is paramount to ensure equitable access and foster innovation.

Figure: large language models with fewer than 6 billion parameters have been open, but more powerful models, especially from large companies, are increasingly closed.

Moreover, the exploration of AI fairness at LinkedIn posed thought-provoking questions about equal treatment versus equitable outcomes, urging us to seek solutions that address disparities and provide tailored support where needed.
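To see why equal treatment and equitable outcomes can diverge, consider a minimal sketch with invented numbers, unrelated to LinkedIn’s actual systems: a single shared score threshold treats everyone identically, yet can still produce very different selection rates across groups.

```python
# Sketch contrasting equal treatment (one shared threshold) with an
# equitable-outcomes view (per-group selection rates). The scores and
# groups are invented for illustration, unrelated to LinkedIn's systems.

from collections import defaultdict

# (group, model_score) pairs for candidates
candidates = [
    ("A", 0.9), ("A", 0.8), ("A", 0.7), ("A", 0.4),
    ("B", 0.6), ("B", 0.5), ("B", 0.4), ("B", 0.3),
]

THRESHOLD = 0.65  # identical rule applied to everyone: "equal treatment"

selected = defaultdict(int)
totals = defaultdict(int)
for group, score in candidates:
    totals[group] += 1
    selected[group] += score >= THRESHOLD

for group in sorted(totals):
    rate = selected[group] / totals[group]
    print(f"group {group}: selection rate = {rate:.0%}")
# Equal treatment, unequal outcomes: group A is selected 75% of the time
# and group B 0%, the disparity a demographic-parity check would surface.
```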

Regulation emerged as another central theme at the conference. The EU AI Act and discussions surrounding the regulation of ChatGPT and large generative AI models shed light on the intricate dance between regulation and innovation. Striking the right balance is essential to avoid stifling creativity and burdening small and medium-sized enterprises while upholding ethical standards and minimizing potential harms.

Lastly, a Stanford study highlighted the organizational challenges in prioritizing AI ethics. From grappling with prioritization during product launches to quantifying the long-term impact of AI systems on society, organizations face formidable hurdles. However, prioritizing ethics is not just a moral imperative but also a strategic one, as it nurtures public trust and fosters innovation in an ethically conscious environment.

The FAccT conference reignited the urgency to address the social dimensions of AI systems. Let’s work together to shape AI systems that empower and uplift humanity while safeguarding our shared values.
