Understanding Design Collaboration Between Designers and Artificial Intelligence

Tian Gao
Published in ACM CSCW
Sep 19, 2023

This blog post accompanies the paper “Understanding Design Collaboration Between Designers and Artificial Intelligence: A Systematic Literature Review”, which analyzed 93 research papers to understand how designers and AI assist each other in design tasks. The paper will be presented at the 26th ACM Conference on Computer-Supported Cooperative Work and Social Computing.

The two hands pointing to each other were drawn by Midjourney

A Special User Group Interacting with AI

In recent years, AI-driven ideas and techniques dedicated to design fields have surged steadily. Many academic studies that leverage AI to amplify designers’ knowledge and improve their skills have been widely recognized (e.g., [1] and [2]), not to mention the enormous success of Midjourney and Stable Diffusion in the wild. Compared to other user groups interacting with AI, such as data scientists and children, designers can be considered a special user group: they not only consume the results of AI, but also co-create with it [3].

Although empowering design with AI shows a promising prospect, current studies focus more on designing for AI (for example, some HCI researchers regard AI as a design material [4]) than on using AI for design. As a result, little is known about how designers and AI can augment each other’s complementary strengths in design collaboration. To fill this gap, we contributed a literature review on AI for design to reveal overall patterns in AI for design research, explore the collaboration between designers and AI, and understand the characteristics of designer-AI collaboration.

The Overall Patterns in AI for Design Research

Firstly, we screened 2,574 ACM papers and included 93 papers in our final corpus. Based on this corpus, we conducted a quantitative analysis to reveal overall patterns in this field, including when and where researchers have focused their attention and what they have focused on.

Number of papers in our dataset published each year

Specifically, the collected papers were published in core HCI venues from 2007 to 2022. There has been a steep increase in the number of papers (n = 79, 84.9%) since 2016, which could be explained by the breakthroughs in AI techniques around that time (e.g., GANs in 2014). Regarding application fields, papers in graphic design (n = 32, 34.4%) and UI/UX design (n = 31, 33.3%) account for the majority, echoing the huge demand in today’s market. We also investigated the contribution types of these papers. The results show that the primary contribution type is algorithm (n = 62, 66.7%), followed by application (n = 31, 33.3%). In contrast, contributions of theory (n = 3, 3.2%) and user evaluation (n = 2, 2.2%) were relatively rare.

Such a quantitative analysis provides a bird’s eye view, which can help sensitize HCI researchers to the increasing breadth and depth of research in this area.

Designer-AI Collaboration

To gain a deeper understanding of the collaboration between designers and AI, we conducted a qualitative analysis and gained insight into how they help each other:

AI can assist designers through four abilities.

Discovering: AI can help designers discover what is potentially important to users by interpreting their comments and analyzing their behaviors. To understand vague requirements (e.g., “a more ‘vivid’ poster design”), designers often find it challenging to synthesize insightful statements from what users said or did. To address this issue, AI can structure and highlight what users said, or extract underlying patterns from their behaviors. For example, with a two-stage machine learning approach, Zhang et al. [5] derived five personas from 3.5 million clicks gathered from 2,400 users of an actual product.
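The actual two-stage approach of Zhang et al. is more involved than can be shown here, but the core idea of condensing click behavior into persona-like archetypes can be sketched with simple clustering. The sketch below is purely illustrative: the feature names and toy data are our own assumptions, not taken from [5].

```python
import random

random.seed(0)
# Hypothetical per-user feature vectors: share of a user's clicks spent on
# [search, browse, edit] features of a product (toy data, not from [5]).
users = [[0.8 + random.uniform(-0.1, 0.1), 0.1, 0.1] for _ in range(50)]
users += [[0.1, 0.8 + random.uniform(-0.1, 0.1), 0.1] for _ in range(50)]

def kmeans(points, k, iters=20):
    """Minimal k-means: cluster behavior vectors into k archetypes."""
    step = max(1, len(points) // k)
    centers = [points[i * step] for i in range(k)]  # deterministic seed points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each user to the nearest current center
            nearest = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])),
            )
            clusters[nearest].append(p)
        # move each center to the mean of its assigned users
        centers = [
            [sum(dim) / len(cl) for dim in zip(*cl)] if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers

# Two persona archetypes emerge: a "searcher" and a "browser" profile.
personas = kmeans(users, k=2)
```

Each resulting center summarizes one behavior pattern a designer could interpret as a persona; scaling this idea to millions of real clicks is where the heavy lifting of [5] happens.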

Visualizing: AI can help designers visualize their hard-to-express ideas by collecting and curating various references. As an important way to draw inspiration, references can be used to find patterns that fit the current design context, making rough ideas more tangible and easier to apply. However, designers can only review a limited number of references, and must select them carefully to avoid negative situations like design fixation. To overcome these challenges, AI can search for references from a wide range of online sources or create them by itself, providing both targeted and serendipitous inspiration for designers.

Creating: AI can assist designers in creating their works. More academically speaking, AI can externalize designers’ ideas and transform them into presentable forms. Creating high-quality designs requires advanced design knowledge and skills, and, constrained by their physical capabilities, designers may not always be able to realize their ideas efficiently and accurately. Thanks to recent advances in deep learning, AI can provide a starting point by generating drafts based on designers’ ideas within seconds. AI can also suggest the parts of a design that need further improvement and then automatically refine them.

Testing: AI can test designs by predicting human judgments and preferences, helping designers understand their works from the perspective of users. Traditional test methods like co-creation sessions are usually time-consuming, costly, and, more importantly, fail to provide instant feedback. To enable fast iterations, AI can predict user behavior patterns in terms of both experience and usability, and test how well specific designs align with these patterns. For example, the model proposed by Pan et al. [6] can predict how customers across different market segments perceive the aesthetics of car designs.
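Pan et al.’s model is a deep network trained on real design images; as a minimal stand-in for the same idea, preference prediction can be framed as learning a scoring function from rated examples. The sketch below trains a tiny logistic regression on synthetic data with made-up features ("symmetry", "contrast") and a made-up ground truth, so every name and number here is an assumption for illustration only.

```python
import math
import random

random.seed(1)
# Synthetic dataset: each design is a feature pair (symmetry, contrast) in
# [0, 1]^2, with an invented ground truth "users prefer designs where
# symmetry + contrast > 1". Real systems learn from actual user ratings.
data = []
for _ in range(200):
    x = (random.random(), random.random())
    y = 1 if x[0] + x[1] > 1.0 else 0
    data.append((x, y))

# Train logistic regression with plain stochastic gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(500):
    for (x1, x2), y in data:
        p = 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
        err = p - y  # gradient of the log loss w.r.t. the logit
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

def predict_preference(x1, x2):
    """Estimated probability that users prefer this design."""
    return 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))

accuracy = sum((predict_preference(*x) > 0.5) == bool(y) for x, y in data) / len(data)
```

Once trained, such a predictor gives the instant feedback the paragraph above describes: a designer can score a candidate design in milliseconds instead of scheduling a user session.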

Designers, in turn, can augment AI in two ways.

Training: Designers can help AI gain design knowledge and perform assigned design tasks. To that end, designers have created huge training datasets that serve as the cornerstone of AI capabilities. Specifically, designers can use these datasets to train AI to learn both the visual content and the semantic context of designs. In terms of visual content, designers can provide high-quality designs to build up datasets. For semantic context, designers can add semantic information to datasets to deepen AI’s comprehension of designs. For example, for each 3D model in the Fusion 360 Gallery [7], designers provide a design sequence that documents how it was created through sketching and extrusion operations. In this way, AI can learn the operations needed to construct 3D designs rather than only recognize them.
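The idea of a design sequence as training data can be sketched as a simple record type: the design stores not just its final geometry but the ordered operations that produced it. The field names and schema below are our own illustrative simplification, not the Fusion 360 Gallery’s actual data format.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DesignOperation:
    kind: str            # e.g. "sketch" or "extrude"
    params: Dict         # operation parameters (plane, distance, ...)

@dataclass
class DesignRecord:
    design_id: str
    operations: List[DesignOperation] = field(default_factory=list)

    def semantic_context(self) -> List[str]:
        """The operation sequence a model could learn construction from."""
        return [op.kind for op in self.operations]

# A hypothetical part built by alternating sketch and extrude steps.
bracket = DesignRecord("bracket-001", [
    DesignOperation("sketch", {"plane": "XY"}),
    DesignOperation("extrude", {"distance": 10.0}),
    DesignOperation("sketch", {"plane": "XZ"}),
    DesignOperation("extrude", {"distance": 5.0}),
])
```

A model trained on many such records sees how designs are constructed step by step, which is exactly the extra semantic context the paragraph above contrasts with recognizing finished shapes.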

Regulating: Designers can also regulate AI’s role and behaviors to fit into existing design workflows. Current AI-infused systems may behave unpredictably or follow a rigid process [8], which requires designers to change their behaviors to cater to AI. To help AI fit better into design workflows, designers’ behavior patterns can be extracted to guide AI on when and how to perform specific tasks in design collaboration. For example, Chung et al. [9] investigated seven types of support relationships in artists’ support networks, and reflected on how AI-driven tools can mesh with them. The results show that an AI-driven tool with a subcontract relationship may be more easily accepted by artists, as it would not disrupt high-level artistic ideas but would realize those ideas with implementation support.

The above two dimensions describe how AI can enhance what designers do best and how designers can most effectively augment AI. These themes integrate disparate threads of prior research on AI across different design fields for the first time.

Characterizing Designer-AI Collaboration

As mentioned above, designers and AI actively enhance each other’s complementary strengths through design collaboration. Although these strengths are extended in different ways, designer-AI collaborations share common characteristics. We highlight five of them.

  • Scope: Scope defines the range of the design workflow that the collaboration between designers and AI can cover. The degree of scope is reflected in two aspects: the coverage of the phases of a specific design workflow, and the diversity of design workflows across different design fields.
  • Access: Access defines the level of design expertise required to be involved in the collaboration. A collaboration that only includes experts has lower access than one that also involves novices. Specifically, the degree of access depends on two aspects: dependency on design knowledge (e.g., Gestalt laws) and on design skills (e.g., 3D modeling).
  • Agency: Agency refers to who dominates the interplay between designers and AI when performing design tasks. A designer-driven approach relies heavily on designers for control, while an AI-driven approach depends on AI to automate design tasks with a high degree of freedom. The degree of agency is related to two factors: how much each actor contributes to the design work, and how much each actor intervenes in the other’s behavior.
  • Flexibility: Flexibility describes how one actor responds to changes made by the other in the collaboration. Such responsiveness can take the form of single-turn interaction, where AI generates a new output each time based on designers’ input. It can also take the form of multi-turn interaction, allowing AI to progressively modify the output generated in the previous turn upon receiving designers’ additional input. The degree of flexibility can be described from two aspects: whether designers can examine AI’s real-time responses to their changes, and whether designers can retrieve intermediate states of a design (e.g., returning to a draft that was set aside in the early stages).
  • Visibility: Visibility refers to how easily the collaboration between AI and designers can be perceived. A typical way to present an explicit designer-AI collaboration is to visualize AI as an anthropomorphic entity (e.g., smart assistants like Siri). The degree of visibility depends on how AI features are triggered (explicit designer-AI collaboration usually triggers AI features with clear commands) and on whether the impact of AI is clearly annotated.

In addition to designer-AI collaboration and its characteristics, we also discuss the implications of these findings in terms of increasing the explainability, ethicality, and adaptability of AI, and understanding who the user is for specific AI tools in design contexts. Check out the details in our full paper!

References:

[1] Sean Bell and Kavita Bala. 2015. Learning visual similarity for product design with convolutional neural networks. ACM Transactions on Graphics (TOG) 34, 4 (2015), 1–10.

[2] Shunan Guo, Zhuochen Jin, Fuling Sun, Jingwen Li, Zhaorui Li, Yang Shi, and Nan Cao. 2021. Vinci: an intelligent graphic design system for generating advertising posters. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–17.

[3] Jichen Zhu, Antonios Liapis, Sebastian Risi, Rafael Bidarra, and G Michael Youngblood. 2018. Explainable AI for designers: A human-centered perspective on mixed-initiative co-creation. In 2018 IEEE Conference on Computational Intelligence and Games (CIG). IEEE, 1–8.

[4] Graham Dove, Kim Halskov, Jodi Forlizzi, and John Zimmerman. 2017. UX design innovation: Challenges for working with machine learning as a design material. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 278–288.

[5] Xiang Zhang, Hans-Frederick Brown, and Anil Shankar. 2016. Data-driven personas: Constructing archetypal users with clickstreams and user telemetry. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. 5350–5359.

[6] Yanxin Pan, Alexander Burnap, Jeffrey Hartley, Richard Gonzalez, and Panos Y Papalambros. 2017. Deep design: Product aesthetics for heterogeneous markets. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 1961–1970.

[7] Karl Willis, Yewen Pu, Jieliang Luo, Hang Chu, Tao Du, Joseph Lambourne, Armando Solar-Lezama, and Wojciech Matusik. 2020. Fusion 360 gallery: A dataset and environment for programmatic cad reconstruction. (2020).

[8] Saleema Amershi, Dan Weld, Mihaela Vorvoreanu, Adam Fourney, Besmira Nushi, Penny Collisson, Jina Suh, Shamsi Iqbal, Paul N Bennett, Kori Inkpen, et al. 2019. Guidelines for human-AI interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–13.

[9] John Joon Young Chung, Shiqing He, and Eytan Adar. 2022. Artist Support Networks: Implications for Future Creativity Support Tools. (2022).

Tian Gao
Zhejiang Uni | Tongji Uni | Alibaba, HCI | DataVis