Unlocking the Dynamics of Human-AI Collaboration: Lessons from a Multi-Player Collaborative Game Study
This blog post summarizes the paper “Decision Making Strategies and Team Efficacy in Human-AI Teams,” which examines the impact of decision-making styles in human-AI teams. The paper will be presented at the 26th ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW 2023) and published in Proceedings of the ACM on Human-Computer Interaction (PACM HCI).
With the growing discourse around human-AI interaction, it is important to understand user preferences and decision-making patterns. This #CSCW2023 paper explores how decision-making styles shape user behavior and perceptions. The researchers built a multi-player collaborative game that let them manipulate both the AI partner’s decision-making style and whether its identity was disclosed. Recruited participants played the word-association game ‘Guess the Word’ with these partners and then completed a survey about their experience during gameplay.
The following decision-making styles were examined:
Laissez-faire: Team members allowed others to make decisions and always followed the user’s clues for final submission.
Autocratic: Team members made decisions based on their own input, ignoring the user’s suggestion and always selecting their own clues for final submission.
Democratic: Team members collaborated with the group and randomly selected either the user’s clues or their own at an equal rate.
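The three styles above reduce to a simple clue-selection rule. As an illustration only (a hypothetical sketch, not the study’s actual implementation; the function name and arguments are invented here), the rule could look like this:

```python
import random

def select_final_clue(style, user_clue, ai_clue, rng=random):
    """Hypothetical sketch of how a simulated AI partner might pick the
    clue it submits, given one of the three decision-making styles."""
    if style == "laissez-faire":
        return user_clue  # always defer to the user's clue
    if style == "autocratic":
        return ai_clue  # always submit its own clue, ignoring the user
    if style == "democratic":
        # pick the user's clue or its own with equal probability
        return rng.choice([user_clue, ai_clue])
    raise ValueError(f"unknown decision-making style: {style}")
```

For example, `select_final_clue("autocratic", "river", "ocean")` always returns the partner’s own clue `"ocean"`, while the democratic style returns either clue with equal probability across repeated rounds.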
The study’s findings underscore the influence of decision-making styles on user behavior and perceptions. Participants adapted their own decision-making strategies to their teammates’ styles; for example, they were more inclined to follow the recommendations of autocratic AI partners than those of laissez-faire AI partners. AI identity disclosure also significantly shaped perceptions: participants rated team efficacy lower when an autocratic partner was presented as human than when the same behavior came from a partner presented as AI. This finding emphasizes the need to disclose AI agents’ identities transparently in order to manage user expectations.
Drawing from these findings, some design implications for practitioners can be offered:
Transparent AI Identity Disclosure: Developers and practitioners should prioritize AI identity disclosure for more satisfying collaborative experiences.
Fostering Collaborative Decision-Making: Encouraging democratic decision-making styles within AI systems can lead to positive user experiences and more successful outcomes in human-AI collaboration.
This #CSCW2023 paper explores user preferences in human-AI collaboration through a multi-player collaborative game. The three decision-making styles examined (laissez-faire, autocratic, and democratic) are relevant across a wide spectrum of other human-AI collaboration contexts. While the study centered on a multi-player collaborative game, its insights extend beyond the gaming domain.
Imani Munyaka, Zahra Ashktorab, Casey Dugan, J. Johnson, and Qian Pan. 2023. Decision Making Strategies and Team Efficacy in Human-AI Teams. Proc. ACM Hum.-Comput. Interact. 7, CSCW1, Article 43 (April 2023), 24 pages. https://doi.org/10.1145/3579476