LLMs and Explainable AI in Computational Social Science — Highlights from ICWSM 2024

Adam Zhou
Published in SocialDynamics
Jun 26, 2024

I recently attended the International Conference on Web and Social Media (ICWSM) 2024 in Buffalo, where I presented our research on fake news targeting companies. Overall, the conference offered a wealth of insights into the evolving landscape of computational social science. Beyond the usual topics of election and politics studies, misinformation, hate speech, and social media analysis, several papers caught my eye, particularly those focused on Large Language Models (LLMs), model explanations, and intriguing datasets.

Dr. Diyi Yang delivering her keynote at ICWSM 2024, sharing her insights on how Large Language Models are transforming computational social science.

The Surge of LLMs:

Can Large Language Models Transform Computational Social Science?

One of the highlights was the keynote by Diyi Yang, which posed a pivotal question: Can LLMs transform computational social science? Her talk offered a comprehensive analysis of how LLMs can be leveraged in this field across three aspects: measurement, experimentation, and intervention.

From the measurement perspective, it was intriguing to see her meta-evaluation of 13 LLMs in zero-shot settings across 24 computational social science benchmarks. The results showed promising outcomes in using LLMs for computational social science problems such as emotion detection and misinformation classification. However, there are also limitations in applying LLMs to tasks where even humans disagree with one another, such as empathy and toxicity classification. Human-AI co-annotation was proposed to partially address this.
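
As a concrete (and heavily simplified) illustration of the measurement angle, here is a minimal Python sketch of zero-shot LLM annotation for a CSS task, plus one toy way to split work between the model and a human annotator. The `call_llm` helper, label set, and prompt wording are hypothetical placeholders of my own, not the evaluation pipeline from the keynote.

```python
# A minimal sketch of zero-shot LLM annotation for a CSS task, plus a toy
# human-AI co-annotation loop. `call_llm`, the labels, and the prompt wording
# are hypothetical placeholders, not the keynote's evaluation pipeline.
from typing import Callable, List, Tuple

LABELS = ["misinformation", "not_misinformation"]

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around whatever chat-completion client you use."""
    raise NotImplementedError("plug in your LLM client here")

def zero_shot_label(text: str) -> Tuple[str, bool]:
    """Ask the LLM for a label and report whether the answer was parseable."""
    prompt = (
        f"Classify the following post as one of: {', '.join(LABELS)}.\n\n"
        f"Post: {text}\n\nAnswer with the label only."
    )
    answer = call_llm(prompt).strip().lower()
    return answer, answer in LABELS

def co_annotate(texts: List[str], ask_human: Callable[[str], str]) -> List[str]:
    """Toy co-annotation: the LLM pre-labels; a human resolves unparseable answers."""
    labels = []
    for text in texts:
        label, ok = zero_shot_label(text)
        labels.append(label if ok else ask_human(text))
    return labels
```

In practice the human would also review items where the model's label is low-confidence or where annotators are known to disagree, which is closer to the co-annotation idea discussed in the talk.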

Additionally, she shed light on the intervention aspect through her recent work on social skill training via LLMs. It was interesting to see how AI mentors and partners can be built with LLMs to assist in the challenging task of training users in soft social skills. She concluded her keynote by advocating for steering LLMs towards improving human-AI interaction and ensuring equitable access to LLMs for open science collaborations.

The Persuasive Power of Large Language Models

Another interesting paper by our favorite Luca Aiello and his team is “The Persuasive Power of Large Language Models”. The study delved into the ability of LLMs to act as persuasive social agents and explored whether LLMs could generate compelling arguments capable of influencing public opinion and simulating human-like persuasion dynamics. The results were intriguing: arguments rich in factual knowledge, trust markers, supportive expressions, and status signals were deemed most persuasive. This research not only underscores the potential of LLMs in shaping online discourse but also provides a framework for future studies on opinion dynamics using artificial agents, which is amazing!

While these two works were particularly captivating, several other LLM papers also stood out, making significant contributions to the field:

  • Evaluating and Improving Value Judgments in AI: A Scenario-Based Study on Large Language Models’ Depiction of Social Conventions
  • Look Ahead Text Understanding and LLM Stitching
  • Tec: A Novel Method for Text Clustering with Large Language Models Guidance and Weakly-Supervised Contrastive Learning
  • Landscape of Large Language Models in Global English News: Topics, Sentiments, and Spatiotemporal Analysis
  • On the Role of Large Language Models in Crowdsourcing Misinformation Assessment
  • Watch Your Language: Investigating Content Moderation with Large Language Models
  • Machine-Made Media: Monitoring the Mobilization of Machine-Generated Articles on Misinformation and Mainstream News Websites

The Importance of Model Explanations:

In addition to LLMs, model explanations are becoming increasingly important, providing valuable insights for enhancing transparency and user trust in AI systems. Studies such as those focusing on AI explanations accompanying misinformation warnings underscore the critical role of system reliability in fostering informed decision-making.

AI Explanations and Reliability Matters!

This study investigated how AI explanations affect users' ability to discern misinformation. It highlighted a framing effect: warnings accompanied by AI explanations could increase users' suspicion that the AI system had made an error. Interestingly, the research emphasized the crucial role of AI system reliability in fostering trust and informed decision-making among users navigating a mix of fake and real news.

Auditing Algorithmic Explanations of Social Media Feeds

Researchers audited the algorithmic explanations TikTok provides for recommended videos. Using a dataset collected via automated accounts, they evaluated the accuracy and comprehensiveness of these explanations. Findings indicated that while generic reasons were often included (“This video is popular in your country”), many explanations were inconsistent with the accounts' actual behavior, highlighting the need for more precise and user-centric explanations in social media algorithms.
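
To give a feel for what such an audit involves, here is a rough sketch of a consistency check between a platform-supplied explanation and an automated account's logged behavior. The reason strings, log fields, and matching rules are my own hypothetical stand-ins, not TikTok's actual explanation taxonomy or the authors' code.

```python
# A rough sketch of the kind of consistency check such an audit might run:
# compare a platform-supplied reason for a recommendation with what the
# automated account actually did. Reason strings, log fields, and rules here
# are hypothetical stand-ins, not TikTok's taxonomy or the paper's code.
from typing import Dict, Set

def explanation_consistent(reason: str, log: Dict[str, Set[str]], video: Dict[str, str]) -> bool:
    """Return True if the stated reason is compatible with the account's logged behavior."""
    if reason == "you follow this creator":
        return video["creator"] in log["followed_creators"]
    if reason == "you watched similar videos":
        return video["topic"] in log["watched_topics"]
    if reason == "this video is popular in your country":
        return True  # generic reason: compatible with anything, hence uninformative
    return False  # unknown reason strings get flagged for manual review

# Example: one recommended video shown to a sock-puppet account that only watched cooking content.
log = {"followed_creators": {"@chef_anna"}, "watched_topics": {"cooking"}}
video = {"creator": "@fitness_bob", "topic": "workout"}
print(explanation_consistent("you follow this creator", log, video))  # False -> flagged as mismatch
```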

Interesting Datasets:

A few datasets also stood out as useful for my own research: EnronSR, MetaHate, and MonoTED. EnronSR offered a benchmark for evaluating AI-generated email responses against human-written counterparts, revealing significant differences and guiding improvements in communication models. MetaHate consolidated diverse hate speech datasets, enabling unified efforts to develop robust detection systems across languages and contexts. Meanwhile, the “Fair or Fare?” study introduced the MonoTED corpus, which scrutinized transcription errors in social media and video conferencing platforms, highlighting biases in automated systems and advancing strategies for more inclusive technology.

Overall, ICWSM 2024 was incredibly stimulating and offered profound insights into how LLMs and AI explanations can transform Computational Social Science. It was a truly enriching experience.
