Impact and Ethics of AI: Insights from CHI 2024, Questions Multiply

danielequercia
Published in SocialDynamics
4 min read · Jun 26, 2024

From enhancing communication to ensuring AI’s wise application, here are key takeaways and insights from CHI 2024, the premier human-computer interaction conference, which left me with more questions than answers.

1. Assessing AI Harms

FarSight: Collaborative Approach to AI Harms (pdf, short video, video). FarSight emphasizes the importance of identifying and assessing AI risks early in the design process. By drawing on news articles about AI incidents and envisioning potential use cases and misuses, it aims to foresee harms through a collaborative approach. Are we ready to handle the ethical complexities that come with such an approach?

The Importance of Human Collaboration in AI Harm Assessment — missed the title of the article ;) A collaborative approach to assessing AI harms, involving diverse perspectives and contextual understanding, improves the accuracy and relevance of harm assessments. This method encourages the consideration of a broader range of potential harms and mitigations. However, it also raises a critical issue: Can we achieve genuine collaboration and understanding across such diverse viewpoints, or will conflicting interests dilute the effectiveness of these assessments?

A Scoping Study of Evaluation Practices for Responsible AI Tools (pdf, video). A study proposed new evaluation methods for Responsible AI (RAI) tools, moving beyond traditional usability tests. This approach aims to ensure that RAI tools are not only functional but also ethical and effective in real-world applications. But can we truly measure the ethical impact of AI, or are we merely scratching the surface?

2. People’s perceptions of AI

Perceptions differ and may be influenced by tech-savviness, “moral code”, and culture.

Users’ Perception of AI-Driven Communication Tools (pdf, short video, full video). An insightful study that raised many questions:

  1. Will these tools strip away the human element, making our interactions overly mechanical?
  2. While AI tools can improve communication across cultures, do they risk homogenizing cultural nuances and expressions?
  3. Participants found AI useful for both formal and informal high-stakes interactions, serving as communication optimizers and coaches. Are we ready to trust AI with our most sensitive and important communications?
  4. The danger of overreliance on tools is significant. Are we creating a society that is overly dependent on AI, risking a decline in critical thinking and personal interaction skills?

Moral Foundations and AI Judgments (pdf, video). Research exploring the association between moral foundations and judgments of AI behavior highlighted several findings:

  1. Moral foundation theory helps predict how people might judge AI actions.
  2. The more agency attributed to AI, especially by non-tech-savvy individuals, the more negatively it is judged.
  3. People’s judgments of AI often do not align with the results of moral foundation questionnaires.

Are our moral and ethical frameworks equipped to handle the complexities of AI behavior? Do participants understand the complexities behind moral foundation dimensions?

Cultural Influences on AI Expectations (pdf, video). Research indicates that cultural backgrounds significantly shape people’s expectations and perceptions of AI. For instance, Chinese respondents prioritize connecting with AI over controlling it, reflecting different cultural values. This highlights a crucial debate: Should AI development prioritize universal standards, or should it cater to diverse cultural expectations?

3. AI as Personal Assistant

In the future, interactions based on textual prompting will be replaced by graphical interactions, and AI will help in brainstorming, writing, fitness — but is all that an illusion?

DirectGPT: Enhancing Visual Interaction with AI (pdf, short video, video). DirectGPT aims to transform the way we interact with chat-based LLMs by making it more visual and intuitive. This innovation could democratize AI usage, but it also raises concerns about accessibility and dependency. Are we making AI too easy to use without understanding its complexities, leading to overreliance and potential misuse?

AI-Augmented Brainwriting: Investigating the use of LLMs in group ideation (pdf, video). Using large language models (LLMs) in group ideation processes has shown promise in enhancing both the divergence and convergence stages of idea generation. Are we inadvertently creating a dependency that stifles our own innovative capacities?

LegalWriter: Intelligent Writing Support for Law Students (pdf, video). A study on LegalWriter demonstrated that students receiving AI-based writing support crafted more structured and persuasive legal arguments. This tool’s positive impact on writing quality is undeniable, but does it undermine the development of critical thinking skills? Are we outsourcing too much intellectual effort to machines?

The Potential Harm of AI-Generated Labels (pdf, video). Google’s research indicates that labels like “created by AI” could be more harmful than helpful. These labels might create biases, influence perceptions, and erode trust, potentially leading to a society where AI-generated content is automatically viewed with suspicion or dismissed outright. This raises a crucial question: Are we inadvertently stoking fear and misunderstanding about AI’s role in content creation?

AI-Driven Healthcare and Fitness Narratives (pdf, short video, video). Innovations like Talaria and studies on LLMs for smoking cessation and fitness tracking have shown how AI can support healthcare and personal well-being. By providing tailored, reflective feedback, these tools help users make informed decisions about their health and habits. However, is there a risk of becoming overly dependent on AI for personal health decisions, potentially leading to a decline in personal accountability and understanding?

The Illusion of Artificial Inclusion (pdf, video). A scoping review by Kevin McKee at Google investigated the implications of replacing human participants with LLMs in studies. While LLMs offer speed and scale, challenges like hallucinations, bias, and the undermining of representation and inclusion values persist.

A few other papers

  • Designing for Human-Agent Alignment (pdf, video)
  • ChatTL;DR — You Really Ought to Check What the LLM Said on Your Behalf (idea of attention checks, pdf, video)
  • Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks (pdf, video).
  • Human-AI Collaboration in Thematic Analysis using ChatGPT: A User Study and Design Recommendations (pdf, video).
