GenAICHI 2023: 2nd Workshop on Generative AI and HCI

by Michael Muller (IBM Research, US), Lydia B. Chilton (Columbia University, US), Anna Kantosalo (Siili Solutions & University of Helsinki, Finland), Q. Vera Liao (Microsoft Research, Canada), Mary Lou Maher (University of North Carolina at Charlotte, US), Charles Martin (Australian National University, Australia), and Greg Walsh (University of Baltimore, US)

Representation of workshop attendees discussing generative AI. Generated with Stable Diffusion.

Recent advances in generative AI technologies have given people powerful tools for creating images from textual descriptions, conducting coherent conversations, providing writing suggestions, and writing source code. We have also seen controversial claims that these technologies can simulate conversations with a historical person or complete a composer’s unfinished work. What all of these examples have in common is that the AI does not simply categorize data or interpret text, but instead, it creates something new. This capability for producing novel artifacts shifts the role of AI away from “decision maker” and toward “supporter of human creative endeavors.”

The second workshop on Generative AI and HCI (GenAICHI), held at CHI 2023, brought together over fifty researchers and practitioners from HCI and AI, spanning five continents, to build community and share ideas and insights around the opportunities and challenges posed by generative AI technologies. The workshop was organized into five thematic sessions, followed by a closing discussion.

Editor’s Note: Not all of the papers presented at this workshop were published online. We provide links to those papers that were published in the workshop’s proceedings.

Session 1: Analysis

Anna Kantosalo moderated the first session on Analysis. We explored HCI issues in creative domains of AI work, including text, design, and art. Presentations in this session considered the views of artists and designers using AI-based tools and analyzed topics related to human-AI co-creativity.

Ani Withöft led the session (and the workshop) with an integrative view in “A Multidisciplinary Approach of Exploring the Synergy of Human Creativity and Artificial Intelligence.” Withöft suggested analyzing co-creativity from a lay person’s perspective, using focus groups to explore different contexts of creativity: everyday creativity, creativity with other humans, creativity with computer-based creative systems, and co-creation with AI.

Nabila Chowdhury and Anastasia Kuzminykh conducted an analysis of output biases through the lenses of system-driven and user-driven biases in their work, “An Exploration of Prompt Based Biases in AI Art Generated Tools.”

In “LMCanvas: Object-Oriented Interaction to Personalize Large Language Model-Powered Writing Environments,” Tae Soo Kim and colleagues suggested looking at how user interface design may constrain generative AI use.

Alexa Steinbrück and Aeneas Stankowski examined how professional creatives weigh the potential harms of generative AI to artists’ ability to control their reputations and incomes when large models scrape their images, in “Creative ownership and control for generative AI in art and design.” Related concerns were reflected in Hanlin Li’s “Will it substitute us: Understanding artists’ reactions to generative AI models,” which examined artists’ reactions to generative AI based on publicly posted comments on Reddit.

Closing out the session, Teresa Datta suggested that large language models (LLMs) should be evaluated from the perspective of human-centeredness in her work, “Who’s Thinking: A Push for Human-Centered Evaluation of LLMs using the XAI Playbook.” Such evaluations would cover aspects such as utility and cognitive engagement.

Session 2: Specific Applications

Vera Liao led the second session, Specific Applications, which took a deeper dive into the domains from the first session. We learned about AI in product design, news media, and knowledge work, as well as assistive applications for people on the autism spectrum.

Matthew Hong discussed an orientation toward applied generative AI for designers in “Generative AI for Product Design: Getting the Right Design and the Design Right.” In a different domain, Sachita Nishal’s “Envisioning the Applications and Implications of Generative AI for News Media” explored uses of generative AI in journalism.

In “Exploring outlooks towards generative AI-based assistive technologies for people with Autism,” Deepak Giri examined how generative AI could assist people on the autism spectrum. This work opened a larger discussion on the relationship between generative AI and accessibility.

Sian Lindley and colleagues considered implications for knowledge work and knowledge workers in “Content Repurposing in Knowledge Work: Implications for Generative AI.” Their paper emphasized re-use of content through LLMs and provided an interesting contrast to the risks for creative workers discussed by Alexa Steinbrück and Aeneas Stankowski in the previous session.

Session 3: Support for Broader Application Domains

Mary Lou Maher conducted the third session, Support for Broader Application Domains, which focused on diverse human needs in customer care, design, peer feedback, computational workflows, and artistic treatments of text.

Hyung-Kwon Ko described generative AI support for customer service representatives in “ChatGPT for Moderating Customer Inquiries and Responses to Alleviate Stress and Reduce Emotional Dissonance of Customer Service Representative.” This work adopted a strong human-centric concern for alleviating stress.

Next, Tianying Chen took up themes of human-AI collaboration in workplaces with “Leveraging Generative AI and Human Collaboration in Peer-Feedback.” Garrett Allen conducted a similar exploration of human-AI collaboration in “Power-up! What Can Generative Models Do for Human Computation Workflows?” and Lorenzo Corti’s “ARTIST: ARTificial Intelligence for Simplified Text” examined related themes in co-writing.

Brandon Harwood extended the theme of domain-specific human-AI work in “CHAI-DT: Prompting Conversational Generative AI Agents to Actively Participate in Co-creation,” which showed how tuned prompts could contribute to enterprise design thinking workshops.

Session 4: Ethics

We switched our emphasis in the fourth session to Ethics, moderated by Greg Walsh. We learned about ethical issues in structured elicitation methods and structured exploratory testing, as well as the ethical aspects of play: storytelling with children and adults, and playful interactions.

Anna-Kaisa Kaila described ethical concerns and contributions in music generation in “Ethically Aligned Stakeholder Elicitation (EASE): Case Study in Music-AI.” This methodological work was designed to sense and respond to differences in power among team members. Anthony Jameson’s “Structured Collaborative Trial and Error as an Interaction Paradigm for Generative AI” reported a similar analysis of collaboration, focusing on iteration as a means of exploration.

Elizabeth Bonsignore analyzed the ethics of children’s work with AI in “AI storytelling with children: Entertainment Platform or Creative Collaborator? ‘You are who you choose to be.’ How children conceive of collaboration with a generative AI system.” Her work critically examined concepts of “entertainment,” blurring the boundary between passive entertainment and opportunities for children to create within those applications. Caiseal Beardow’s “Exploring the Role of Play in Human-Generative AI Interactions” took up similar themes in adults’ play with generative systems. Finally, Jose Ma. Santiago III and colleagues extended these analyses to more structured game-playing and storytelling, asking about the consequences for cognitive load and immersive play in “Rolling the Dice: Imagining Generative AI as a Dungeons & Dragons Storytelling Companion.”

Session 5: Futures

The fifth session, conducted by Mary Lou Maher, pointed us toward AI Futures. We discussed art as a creative approach to novel generative AI applications and the use of design fictions to examine the implications of hidden layers and parameters in generative models.

Federico Bomba and colleagues presented a longitudinal case study of an artist working with GPT-2 in “And We Thought. Art Through, With or For Generative AI.” This work raised questions of authorship and the entanglement of initiative and responsibility in human-AI creative work.

The final paper in the workshop, by Imke Grabe and Jichen Zhu, used the method of design fiction to explore a possible future capability in “Hidden Layer Interaction: A Co-Creative Design Fiction for Generative Models.” They speculated about forms of direct access to hidden neurons in complex models and suggested new insights that people could gain by visualizing and manipulating these intermediate layers.

Get involved
