A Call to Design in Artificial Intelligence

Pablo Bariola
Dec 6, 2023


How Design can help us create useful, respectful and responsible AI systems.

“Designing with Artificial Intelligence”, generated with Adobe Firefly.

ChatGPT is said to be the fastest-growing consumer app in history, now grossing tens of millions of dollars in recurring revenue one year after its launch.[1] OpenAI, perhaps unintentionally, unleashed a tsunami of interest, excitement, and fear. Technologists responded with proportional enthusiasm, giving rise to a proliferation of chat-based AI applications.[2]

Some began to imagine a world without keyboards or touchscreens: chat became almost the de facto user interface paradigm. Chat does make sense for operating Large Language Models (LLMs, the technology behind ChatGPT), given their probabilistic behavior and the need to iterate on instructions. More importantly, chat provides a mechanism for managing the ambiguity inherent in natural language. Consider how often I had to clarify to an LLM that the surveys I was conducting were of the research-questionnaire kind, not the land-measuring kind done by civil engineers.

At the same time, the chat paradigm is terribly inefficient for many tasks we currently use computers for.[3] A classic example is the photo-editing scene in Blade Runner (1982), where the protagonist zooms, pans, and enhances an image by dictating coordinates. That can be excused given the state of computer usability at the time. Interestingly, the follow-up film Blade Runner 2049 (2017) expertly reimagines its user interfaces with multimodal input (strange consoles, holograms, and fancy controllers), while retaining voice operation in the scenes where it makes sense. These interfaces still seem a bit quirky, but they portray a more efficient way of working that feels futuristic even today.

Another risk of AI, also explored in the Blade Runner films, is anthropomorphization.[4] LLMs function by predicting a written text response; evidently, that operation does not equate to simulating human thinking. These computational models lack cognitive abilities humans have, such as learning through physical interaction, experiencing emotions, or understanding ourselves and others.[5] An LLM’s memory is inhuman: static and vast. Chatbots try to compensate with rudimentary mechanisms that approximate short-term memory. In a long conversation with ChatGPT, you might notice it losing track of key ideas, sometimes resetting the dialogue to its start. I find these moments reminiscent of conversations with my father, a retired professor living with years of cognitive decline. Ironically, there is a measure of humanity in that forgetfulness.
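
To make that limitation concrete, here is a minimal sketch of how such a short-term memory mechanism might work: the application resends only a truncated window of recent messages with each request, so older turns silently fall out of the model’s view. Everything here (`Message`, `truncate_history`, `generate`) is a hypothetical illustration, not ChatGPT’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Message:
    role: str      # "user" or "assistant"
    content: str

def truncate_history(history: list[Message], max_chars: int = 2000) -> list[Message]:
    """Keep only the most recent messages that fit a fixed budget.

    Real systems budget in tokens rather than characters, but the effect is
    the same: anything older silently drops out of the model's "memory".
    """
    kept: list[Message] = []
    used = 0
    for msg in reversed(history):          # walk backwards from the newest turn
        used += len(msg.content)
        if used > max_chars:
            break
        kept.append(msg)
    return list(reversed(kept))

def generate(window: list[Message]) -> str:
    # Stand-in for a real LLM call; it just reports what the model can still "see".
    return f"(model sees only {len(window)} of the conversation's messages)"

def chat_turn(history: list[Message], user_input: str) -> str:
    history.append(Message("user", user_input))
    window = truncate_history(history)     # older turns are dropped here
    reply = generate(window)               # the model never sees the full history
    history.append(Message("assistant", reply))
    return reply
```

This is one reason a long ChatGPT session can appear to “forget”: the forgotten turns were never sent to the model in the first place.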

Prior research argues that LLMs lack Cartesian creativity, the capacity to imagine something from a standstill or out of self-awareness; in that sense, a chatbot cannot invent.[6] Neuroscience studies also suggest that the mechanisms the brain uses to form language are very different from those in LLMs, an idea reinforced by the fact that these models can process data that would be incomprehensible to humans. Moreover, researchers have shown that AI-generated and human-written text exhibit different “linguistic structures”.[7] Lastly, Large Language Models seem to lack empathy.[8] All this evidence suggests that current and near-future AI technologies are quite far from becoming accurate human simulations. At the same time, they mimic us well enough to deceive us, or at the very least, to cause alarm.

I wonder whether creating simulated humans is inherently dishonest. An obviously criminal example is the use of voice synthesis in scams.[9] But what about artificial romantic partners, or an interactive effigy? I would love to have a chat with Alan Turing. That said, I propose that the more immediate concern lies in the discourse itself: the claim that AI is becoming human-like. I find it problematic because it distracts us from the near-term, material impact of LLMs, which is the automation of tasks currently done by people. It is worth noting that LLMs can be quite effective at some of these tasks when prompted correctly, often performing above the average person and close to a human expert.[10][11][12]

New technologies can reshape society, labor markets, and our stability and well-being as individuals; historically, the plow, the printing press, the steam engine, and electricity made that evident. An interesting example happened about a hundred years ago, when automatic switches replaced most telephone operators, a trade that no longer exists today. AI technologies are now expanding the boundaries of computing into the realm of unstructured data, and we are just beginning to discover how they can be used. On the other hand, unlike the telephone switch, which replaced operators outright, LLMs supervised by humans often outperform LLMs working alone.[13][14]
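
Qian’s study [13] concerns machine translation, so here is a toy sketch of that human-machine teaming pattern in the same setting: the model drafts, and a person approves or corrects each draft before it is accepted. `draft_translation` is a hypothetical stand-in for a model call, not any real API.

```python
def draft_translation(source: str) -> str:
    # Hypothetical stand-in for an LLM translation call.
    return f"[machine draft of: {source!r}]"

def supervised_translate(source: str) -> str:
    """Human-in-the-loop: the machine proposes, the human disposes."""
    draft = draft_translation(source)
    print(f"Source: {source}\nDraft:  {draft}")
    answer = input("Press Enter to accept, or type a correction: ").strip()
    return answer or draft   # a human correction, when given, wins over the draft
```

The pattern is simple, but it is this division of labor, not full automation, that the cited studies find outperforming the model alone.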

Information systems control huge parts of our production, and they have traditionally been rigid. We can now redesign our software to be more subtle and sensitive, perhaps even more “analog” or intentionally ambiguous.[15] I suspect the impact of LLMs will be lasting and structural, and it is now our responsibility to make it a constructive change. This is where I call on Designers, along with all technologists working in AI, to incorporate Design into their AI projects.

The new AI will benefit from Designers’ expert approach to research, ideation, conceptualization, and prototyping, which can ensure that new platforms solve meaningful problems. Tech startups in particular stand to gain from these skills, especially when the problem to solve is not obvious or the space is foreign to the founders. Designers natively work with non-deterministic, ambiguous materials, which are defining characteristics of Language Models; it will not be hard for them to learn to create within these constraints.

We know the User Experience profession has a branding problem: an assumed focus on form and aesthetics, and an association with screen user interfaces. In an imagined future where user interfaces are entirely verbal, the demand for screen design would diminish significantly. In reality, we are just discovering what we can do with LLMs, which is why we need Designers inventing new multimodal interface paradigms and patterns.[16] For one, we need better alternatives to chat, or to reshape it into a more versatile tool that supports both simple and complex tasks. We also need flows that leverage the capabilities of LLMs while accounting for their limitations and enabling thoughtful human supervision. Voice, touch, and new hardware can all work in tandem.

Additionally, I believe the Design profession cares, and that is how it can help shape the ethics of AI. Good intentions alone will not be enough to navigate these new challenges; we need experts in people, teams, and societies. Among other things, Design can help us decide how human-like we need machines to be.

Well-designed future AI needs to be multimodal, context-aware, respectful, adaptive, environmentally conscious, and conducive to a positive society, all of which requires a nuanced understanding of ourselves.[17][18] I believe it is now our collective responsibility to learn the craft of AI, so we can build expertly and responsibly.


Pablo works at the intersection of design and software engineering. More about his work at https://pablobariola.com. This article was written by a human (Pablo), but LLMs were used to edit the text and process references.

[1] Reuters (2023). ChatGPT sets record for fastest-growing user base — analyst note.
https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/
[2] Tamkin, A., & Ganguli, D. (2023). How Large Language Models Will Transform Science, Society, and AI. Stanford HAI.
https://hai.stanford.edu/news/how-large-language-models-will-transform-science-society-and-ai
[3] Arambepola, S. N. M. N. D. K. (2020). Usability of voice-activated interfaces: A comprehensive literature review. Software Engineering Teaching Unit, Faculty of Science, University of Kelaniya, Sri Lanka.
[4] Deshpande, A., Rajpurohit, T., Narasimhan, K., & Kalyan, A. (2023). Anthropomorphization of AI: Opportunities and Risks.
https://arxiv.org/abs/2305.14784
[5] Chemero, A. (2023). LLMs differ from human cognition because they are not embodied. Nature Human Behaviour, 1–2.
https://www.nature.com/articles/s41562-023-01723-5
[6] Moro, A., Greco, M., & Cappa, S. F. (2023). Large languages, impossible languages and human brains. Cortex.
https://www.sciencedirect.com/science/article/abs/pii/S0010945223001752?via%3Dihub
[7] Herbold, S., Hautli-Janisz, A., Heuer, U., Kikteva, Z., & Trautsch, A. (2023). A large-scale comparison of human-written versus ChatGPT-generated essays. Scientific Reports.
https://www.nature.com/articles/s41598-023-45644-9
[8] Trott, S., Jones, C., Chang, T., Michaelov, J., & Bergen, B. (2023). Do Large Language Models Know What Humans Know? Cognitive Science.
https://onlinelibrary.wiley.com/doi/pdf/10.1111/cogs.13309
[9] CBS News (2023). Scammers use AI to mimic voices of loved ones in distress.
https://www.cbsnews.com/news/scammers-ai-mimic-voices-loved-ones-in-distress/
[10] Katz, D. M., Bommarito, M. J., Gao, S., & Arredondo, P. (2023). GPT-4 passes the bar exam.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4389233
[11] Ali, S., Shahab, O., Al Shabeeb, R., Ladak, F., Yang, J. O., Nadkarni, G., & El Kurdi, B. (2023). General purpose large language models match human performance on gastroenterology board exam self-assessments.
https://www.medrxiv.org/content/10.1101/2023.09.21.23295918v1.full.pdf
[12] OpenAI (2023). GPT-4 Technical Report.
https://arxiv.org/pdf/2303.08774.pdf
[13] Qian, M. (2023). Performance Evaluation on Human-Machine Teaming Augmented Machine Translation Enabled by GPT-4. In Proceedings of the First Workshop on NLP Tools and Resources for Translation and Interpreting Applications (pp. 20–31).
https://aclanthology.org/2023.nlp4tia-1.4/
[14] Clusmann, J., Kolbinger, F. R., Muti, H. S., et al. (2023). The future landscape of large language models in medicine. Communications Medicine, 3, 141.
https://doi.org/10.1038/s43856-023-00370-1
[15] Yildirim, N., Kass, A., Tung, T., Upton, C., Costello, D., Giusti, R., … & Zimmerman, J. (2022). How Experienced Designers of Enterprise Applications Engage AI as a Design Material. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (pp. 1–13).
https://dl.acm.org/doi/abs/10.1145/3491102.3517491
[16] Harper, R. H. (2019). The Role of HCI in the Age of AI. International Journal of Human–Computer Interaction, 35(15), 1331–1344.
https://www.tandfonline.com/doi/abs/10.1080/10447318.2019.1631527
[17] Xu, W., Dainoff, M. J., Ge, L., & Gao, Z. (2023). Transitioning to human interaction with AI systems: New challenges and opportunities for HCI professionals to enable human-centered AI. International Journal of Human–Computer Interaction, 39(3), 494–518.
https://www.tandfonline.com/doi/abs/10.1080/10447318.2022.2041900
[18] Gonçalves, D. J. V. (2001). Ubiquitous computing and AI towards an inclusive society. In Proceedings of the 2001 EC/NSF Workshop on Universal Accessibility of Ubiquitous Computing (pp. 37–40).
https://dl.acm.org/doi/abs/10.1145/564526.564538
