Unlocking the Dynamics of Human-AI Collaboration: Lessons from a Multi-Player Collaborative Game Study

Zahra Ashktorab
Sep 20, 2023


Man plays board game with robots.
Created via Midjourney

This blog post summarizes the paper “Decision Making Strategies and Team Efficacy in Human-AI Teams,” about the impact of decision-making styles in human-AI teams. The paper will be presented at the 26th ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW 2023) and is published in the journal Proceedings of the ACM on Human-Computer Interaction (PACM HCI).

With the growing discourse around human-AI interaction, it is important to understand user preferences and decision-making patterns. The paper Decision Making Strategies and Team Efficacy in Human-AI Teams, presented at the ACM Conference on Computer-Supported Cooperative Work and Social Computing (#CSCW2023), explores how decision-making styles affect user behavior and perceptions. Using a multi-player collaborative game that allowed the AI’s decision-making style and identity disclosure to be manipulated, the study had recruited participants play the word-association game ‘Guess the Word’ and then provide feedback through a survey about their experience with their partners during gameplay.

The following decision-making styles were examined:

Laissez-faire: Team members allowed others to make decisions and always followed the user’s clues for final submission.

Autocratic: Team members made decisions based on their input and ignored the user’s submission, always selecting their own clues for final submission.

Democratic: Team members collaborated with the group and randomly selected either the user’s clues or their own at an equal rate.
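The three styles reduce to simple selection rules. As a rough sketch (the function name, signature, and structure below are illustrative assumptions, not the study’s actual game code):

```python
import random

def select_final_clue(style, user_clue, own_clue, rng=random):
    """Pick the clue Player 2 submits, per decision-making style.

    Illustrative sketch only: behavior is inferred from the style
    descriptions above, not taken from the study's implementation.
    """
    if style == "laissez-faire":
        return user_clue  # always defer to the user's clue
    if style == "autocratic":
        return own_clue   # always override with its own clue
    if style == "democratic":
        # pick either clue at an equal rate
        return rng.choice([user_clue, own_clue])
    raise ValueError(f"unknown decision-making style: {style!r}")
```

Over many rounds, the democratic rule should split roughly evenly between the user’s clue and the partner’s own, while the other two styles are deterministic.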

Word game UI during gameplay, showing an example of gameplay with two AI agents. A) The identity of the “giver team,” which in this condition consists of a human (Player 1) and an AI agent (Player 2). B) The target word to be guessed. C) The identity of the decision-maker, which in this round is Player 2. D) The identity of Player 2 (in this case, AI). E) The history of word suggestions from the giver team and the decision-maker’s selections (indicated by a dot to the left of the word). In this example, Player 2 is an AI agent that consistently selects its own word as the hint provided to the guesser. F) The deliberation stage, where Player 2 makes the final hint selection to be presented to the AI “guesser.”
Word game UI during gameplay, showing an example of gameplay with one AI agent and two humans (the participant and Player 2). A) The identity of the “giver team,” which in this condition consists of two humans. B) The target word. C) The identity of the decision-maker, which in this round is Player 1. D) The identity of Player 2 (in this case, human). E) The history of word suggestions from the giver team and the decision-maker’s selections (indicated by a dot to the left of the word). The player in this example has opted to choose their own responses every turn. F) The deliberation stage, where the user makes the final hint selection to be presented to the AI “guesser.”

While the study centered on a multi-player collaborative game, the insights gained extend beyond the gaming domain. The decision-making styles examined (laissez-faire, autocratic, and democratic) are relevant across a wide spectrum of human-AI interactions, from enhancing virtual assistants’ productivity to AI-driven partnerships in diverse collaborative settings.

The research’s findings underscore the influence of decision-making styles on user behavior and perceptions. Participants adapted their decision-making strategies based on their teammates’ styles, including a greater inclination to follow autocratic AI partners’ recommendations compared to their laissez-faire AI counterparts. The impact of AI identity disclosure on user perceptions was also significant. Participants rated team efficacy lower when interacting with human partners exhibiting autocratic decision-making, as opposed to AI partners displaying similar behavior. This finding emphasizes the need for transparently disclosing AI agent identities to manage user expectations.

Dot plot (with error bars) of team efficacy scores: the impact of Player 2’s decision-making style on team efficacy across the partner identities and decision-making styles. Participants judged the efficacy of the team significantly more harshly when their partner was an autocratic human.

Drawing from these findings, some design implications for practitioners can be offered:

Transparent AI Identity Disclosure: Developers and practitioners should prioritize AI identity disclosure for more satisfying collaborative experiences.

Fostering Collaborative Decision-Making: Encouraging democratic decision-making styles within AI systems can lead to positive user experiences and more successful outcomes in human-AI collaboration.

This #CSCW2023 paper explores user preferences in human-AI collaboration through a multi-player collaborative game. While the study centered on gameplay, the decision-making styles it examines (laissez-faire, autocratic, and democratic) apply across many other human-AI collaboration contexts, and the insights gained can extend beyond the gaming domain.

Imani Munyaka, Zahra Ashktorab, Casey Dugan, J. Johnson, and Qian Pan. 2023. Decision Making Strategies and Team Efficacy in Human-AI Teams. Proc. ACM Hum.-Comput. Interact. 7, CSCW1, Article 43 (April 2023), 24 pages. https://doi.org/10.1145/3579476
