What if the goal of Artificial General Intelligence (AGI) is to enhance human creativity?

by Mary Lou Maher (University of North Carolina, Charlotte, US)


--

Abstract representation of creativity. Produced with Stable Diffusion.

Creativity is foundational to human learning, social and scientific innovation, quality of life, and economic prosperity. Creativity is practiced in context by all of us in our work lives, education, and social interactions. What if we re-align the goals of Artificial General Intelligence with a focus on interaction with AI that augments human creativity in all its forms, from common to extraordinary acts of creativity (Weisberg 1993)? A vibrant research community is studying ways to build creative AI systems and, more recently, has focused on developing co-creative systems (e.g., Davis et al. 2015; Goel et al. 2015; Maher 2012; Muller et al. 2022; Truesdell et al. 2021). What if advances in large language models, which accurately predict the next word and use reinforcement learning to improve conversational exchanges, were instead rewarded for enhancing human creativity?

Creative ideas are characterized as satisfying a duality of novelty (or surprise) and usefulness. The advances in AI, and in large language models more recently, have focused on prediction accuracy as a proxy for the usefulness of the AI response. ChatGPT, a pre-trained language model based on the transformer architecture, implements self-attention, which allows the model to attend to different parts of the input sequence during processing. The GPT models are trained on vast amounts of data using unsupervised learning techniques with the basic goal of predicting the next word. Reinforcement learning from human feedback is then used to improve conversational agents like ChatGPT. The goals of prediction accuracy and conversational coherence are mechanisms for improving usefulness. An alternative value alignment for AGI is to treat novelty as equally important to usefulness when training the model to improve its conversational performance. Including novelty as a training objective signals that an important role for AGI is to enhance human creativity, both the personal creativity we experience in learning something new and the historical creativity of generating ideas that are transformative in a field (Boden, 2004).
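As a minimal, hypothetical sketch of what such a dual objective could look like (the function name, the weighting scheme, and the assumption that both scores are normalized to [0, 1] are illustrative, not part of any deployed system), the reward signal could combine a usefulness score with a novelty score:

```python
def dual_reward(usefulness: float, novelty: float, alpha: float = 0.5) -> float:
    """Combine usefulness and novelty into a single reward signal.

    Both scores are assumed (hypothetically) to lie in [0, 1]; alpha
    weights usefulness against novelty, and alpha = 1.0 recovers the
    usual usefulness-only objective.
    """
    return alpha * usefulness + (1 - alpha) * novelty

# Under this objective, a moderately useful but highly novel response
# can outscore a safer, more predictable one.
safe = dual_reward(usefulness=0.9, novelty=0.1)   # 0.5
novel = dual_reward(usefulness=0.6, novelty=0.8)  # 0.7
```

The design choice here is simply that novelty enters the reward directly, rather than being an accidental by-product of predicting the most likely next word.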

There are numerous concerns about the recent release of LLMs such as ChatGPT and their potential negative impact on human control as AI acquires more knowledge. Stuart Russell recently gave a presentation called “How Not to Destroy the World with AI”, in which he spoke about how we can redefine the objectives of AI. From a technical perspective, an objective of AI is accuracy, where accuracy is a metric for measuring the performance of AI models in tasks such as classification or prediction. In classification tasks, accuracy is the proportion of correctly classified data items compared to the total number in the dataset. Other metrics like precision, recall, or F1 score may also be relevant. If we broaden the objectives of AI to align with shared human values, we go beyond technical performance to a socio-technical performance evaluation. Shared human values refer to fundamental ideals that are considered shared across all human beings, regardless of cultural or societal differences. Examples of shared human values are empathy, fairness, and creativity. Redefining, or extending, the objective of AI from accuracy to shared human values is informed by the distinction between common goals and indexical goals in Russell’s presentation. Common goals, such as “to mitigate climate change”, are of value no matter who achieves them. Indexical goals, such as “to drink coffee”, are only valuable to the person who achieves the goal. As he points out, developing a robot to drink coffee for us is not very useful. Establishing objectives for AGI that are based on common goals means that the achievement of the goal is of value to all of us.
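To make the purely technical side of this evaluation concrete, the metrics mentioned above can be computed for a binary classifier as follows (a self-contained sketch; the label lists are illustrative):

```python
def classification_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall, and F1 for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)                  # fraction correct overall
    precision = tp / (tp + fp) if (tp + fp) else 0.0    # how trustworthy a "1" is
    recall = tp / (tp + fn) if (tp + fn) else 0.0       # how many true "1"s found
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)             # harmonic mean of the two
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = classification_metrics([1, 0, 1, 1, 0, 0],
                                            [1, 0, 0, 1, 0, 1])
```

Note that nothing in these formulas refers to human values; they measure only agreement with the labels in the dataset, which is exactly the limitation the socio-technical view addresses.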

This concern about value alignment for AI is also argued in Brian Christian’s book, “The Alignment Problem: How Can Artificial Intelligence Learn Human Values?” (Christian, 2020). The book argues that current AI models are very good at prediction but are biased by the datasets used to train them. In essence, AI models effectively propagate historical biases present in our society without regard for our current or future values. Christian argues for a better alignment with human values, including diversity, inclusion, and curiosity.

For instruction-tuned large language models such as ChatGPT, if the reinforcement learning phase includes a dual objective of usefulness and novelty in the AI’s response, we can begin to enhance everyday human creativity, including encouraging human learning, rather than only seeing a future in which AGI replaces humans. Because deep learning maps natural language tokens to numerical vectors, we have an opportunity to measure the novelty of a model’s responses directly in that vector space. An alternative to prediction accuracy as a metric is measured novelty, in which a response is valued for how well it encourages human ideation (Kim and Maher, 2023). This is one way to operationalize novelty and shift the goal of AI from prediction accuracy to enhancing human creativity.
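One plausible way to operationalize such a measure (a sketch under the assumption that responses have already been embedded as numerical vectors; the embedding step itself, and the choice of cosine distance, are assumptions for illustration) is to score a response by its distance to the nearest previously seen response in vector space:

```python
import math

def cosine_distance(u, v):
    """1 minus the cosine similarity of two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

def novelty_score(response_vec, reference_vecs):
    """Score a response by its distance to the nearest reference response.

    A response close to something already seen scores near 0; one far
    from every reference scores higher.
    """
    return min(cosine_distance(response_vec, ref) for ref in reference_vecs)

references = [[1.0, 0.0], [0.7, 0.7]]
familiar = novelty_score([1.0, 0.0], references)  # 0.0: identical to a reference
novel = novelty_score([0.0, 1.0], references)     # higher: unlike both references
```

A training objective built on a score like this would reward responses that move away from what has already been said, which is the shift from prediction accuracy toward supporting ideation described above.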

References

  • Boden, M.A. 2004. The creative mind: Myths and mechanisms. Psychology Press.
  • Davis, N., Hsiao, C.-P., Popova, Y., and Magerko, B. 2015. An enactive model of creativity for computational collaboration and co-creation. In: Creativity in the Digital Age. Springer, 109–133.
  • Goel, A., Creeden, B., Kumble, M., Salunke, S., Shetty, A., and Wiltgen, B. 2015. Using Watson for enhancing human-computer co-creativity. 2015 AAAI Fall Symposium Series.
  • Kim, J. and Maher, M.L. 2023. The effect of AI-based inspiration on human design ideation. International Journal of Design Creativity and Innovation, 11:1.
  • Maher, M.L. 2012. Computational and collective creativity: Who’s being creative? International Conference on Computational Creativity.
  • Muller, M., Ross, S., Houde, S., et al. 2022. Drinking Chai with Your (AI) Programming Partner: A Design Fiction about Generative AI for Software Engineering. Joint International Conference on Intelligent User Interfaces Workshops: HAI-GEN.
  • Truesdell, E.J., Smith, J.B., Mathew, S., et al. 2021. Supporting computational music remixing with a co-creative learning companion. ICCC, 113–121.
  • Weisberg, R. 1993. Creativity: Beyond the myth of genius. WH Freeman.

--

Mary Lou Maher
Human-Centered AI

Mary Lou Maher is a Professor at UNC Charlotte in the College of Computing and Informatics. Her research is on human-centered AI and co-creativity.