Prompt Generators Just Get In The Way

Peter Dresslar
Kinomoto.Mag AI
Apr 29, 2024

Why We Need Direct Interaction With AI

Daniel Sorano Theater. October 15, 1984. Close-up front shot of the actor Maurice Sarrazin (on the left) playing Cyrano de Bergerac. An actress (unknown) playing Roxanne. Archives municipales de Toulouse.

RAGUENEAU (sobbing):
Ah! how they laughed!
CYRANO:
Look you, it was my life
To be the prompter every one forgets!

-Edmond Rostand, Cyrano de Bergerac: A Heroic Comedy in Five Acts.

As we at Hawai‘i Center for AI engage with people through our workshops, we occasionally get asked about one prompt generator or another — and why we don’t say more about them. For those who aren’t familiar, prompt generators are software tools, usually web-apps or browser plug-ins, designed to suggest the best possible prompt language to command specific behavior out of AIs. While there are a variety of these generators available, they generally share common traits, like a library of pre-written snippets, filters and inputs, and tools to copy and paste right into the prompt input for your favorite AI chatbot, image generator, or similar generative AI tools.

We don’t love them. We see these tools as counterproductive to the goal of helping people make great things happen by engaging with AIs. Those great things can only happen with direct interaction between a person and the chatbot, since — just as Cyrano discovers in the end — there’s no way to build a good partnership without talking with your partner.

To be fair, as libraries of examples or learning resources go, prompt generators could have their uses. For some of the more tricky generative AI interfaces out there in the world of image and sound generation, the technical parameters and domain language required to operate the model can be daunting without help. Even with chatbots (generative AI tools built with Large Language Models), sometimes getting started on a complex query can seem difficult.

Indeed, reviewing, testing, and sharing prompts is a major part of our process as we introduce dozens of people to AI every month at our workshops, and we are almost always asked to share the work with our attendees for their future adaptation and use. But these prompts work best, and indeed our outreach is most successful, when we accompany our interactions between AI and human with a larger narrative. That discussion does far more to flesh out the problem and ground it within the context of a larger human or social situation than it does to parse the specific syntax of the prompts. It is through this narrative that our workshop attendees connect and truly learn.

From our experience introducing people to AI — specifically AI chatbots — once a typical user starts exclusively cutting and pasting prompts from an ever-present clipboard, we would expect their productivity to stagnate. Instead, as we work with classes, we are looking for a far more interactive path to success. Ideally, there are several steps, something like:

  1. Humans begin to address a problem; share an initial perspective with AI(s)
  2. AIs respond in ways that do not adequately address the problem
  3. Humans realize that their contextualization of the problem is somehow flawed or insufficient. Humans respond to AIs, frequently with conversations about sub-context
  4. AIs begin to better organize the problem
  5. Humans recapitulate the problem, adopting and improving the suggested organization from the AI
  6. Progress is made on the problem; human understanding of the problem and the overall context improves

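The loop above can be sketched in code. This is a minimal illustration, not any vendor's actual SDK: `fake_chat` is a hypothetical stand-in for a real chatbot API, and the `iterate` helper and message format are our own invention. The point is the shape of the exchange — each pass feeds the full conversation back in, so later replies can build on the human's added context.

```python
def fake_chat(history):
    """Hypothetical stand-in for a chatbot API: replies improve as context accumulates."""
    # Count how many human turns the model has seen so far.
    human_turns = [m for m in history if m["role"] == "human"]
    if len(human_turns) == 1:
        # Step 2 of the list: the first response misses the mark.
        return "Here is a generic answer that misses your constraints."
    # Steps 4-6: with added sub-context, the reply gets organized.
    return "Here is an answer organized around the details you added."

def iterate(problem, refinements):
    """Run the multi-turn loop: initial framing, then successive refinements."""
    history = [{"role": "human", "content": problem}]
    reply = fake_chat(history)
    for extra_context in refinements:
        # Steps 3-5: the human responds with sub-context, and the
        # next call sees the entire exchange, not a single prompt.
        history.append({"role": "ai", "content": reply})
        history.append({"role": "human", "content": extra_context})
        reply = fake_chat(history)
    return reply

answer = iterate(
    "Draft a community workshop plan.",
    ["Audience is first-time AI users.", "We only have ninety minutes."],
)
print(answer)
```

A prompt generator, by contrast, collapses this whole loop into the single opening turn — the very turn our stand-in (and, in our experience, real chatbots) handles least well.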
The process of directed and organized learning for the human involved might be the most beneficial work product of the majority of today’s AI interactions. Even in the disappointing situations where a chatbot’s output must be entirely re-authored for style or precision, the side effect of improved human understanding prevails, along with a “better luck next time” improvement for future engagement with the same AI.

These benefits may be a temporary condition particular to 2024’s cohort of AI models — who, despite being a clever batch, need equally large inputs of human creativity and prudence to operate successfully. However, the benefits of patient, progressive interaction with chatbots are likely to endure. Humans getting smarter about how to observe and organize the world around them in the space of an hour or three with a chatbot is a latent superpower of AI, one which we are only beginning to tap.

Another reason to wonder about prompt generators is that the AIs themselves — or at least the generative AI chatbots that we are discussing here — tend to be more effective when working with multiple prompts, rather than one optimized command from on high.¹² While the reasons for this are the subject of active investigation, there is evidence that AIs respond to a series of prompts (rather than a singleton) with additional understanding called “meta-learning.”³ Meta-learning might be seen as the underlying model for an AI “learning to learn” from the changes across multiple prompts. In other words, “getting it right the first time” with prompting… isn’t always getting it right the best.

While the research suggests that AIs benefit from iterative prompting, we might also consider how this dynamic plays out in the context of human learning and adaptation to these new technologies. Here, we don’t have any formal scientific evidence to cite; there is little discussion in academic literature about ancillary tools and chatbot adoption by people.⁴

Still, we can theorize that using prompt generators might inhibit user adoption through the reduction of trial and error and accompanying User Interface (UI) familiarity gains, not to mention the potential frustration of the generated prompts not quite fitting the problem context. It seems a bit ironic to highlight this potential loss of learning when educators everywhere are worrying about the AIs themselves as being potential sources of cheating! And yet, we think there’s an interesting potential avenue of study here that would likely uncover a raft of new interactive behaviors.

Beyond these questions of effectiveness, the widespread adoption of prompt generators also raises significant security concerns. Many of the new AI users that we train are fairly limited in technical capabilities. Imagining many of them combing through libraries of effectively random browser extensions with often surprisingly expansive permissions is concerning, to say the least. At a practical level, many use browsers they can’t or don’t want to modify with additional software. Even if we charitably assume the best intentions from the developers marketing these tools, the potential for abuse is high. Keystroke logging and other forms of data harvesting could put users at risk. While not our primary focus, these security issues are one more reason we’re skeptical about the value of these tools in the learning process.

ROXANE: And a mind sublime?
CYRANO: Oh, yes!
ROXANE: A heart too deep for common minds to plumb, A spirit subtle, charming?
CYRANO (firmly): Ay, Roxane.

In the end, as we consider the wave of software and services aimed at wringing every drop of utility from the current era of AI, our perspective depends on our expectations for what AI can actually give us. At the Center, our optimism leads us to envision people receiving far more than satisfactory emails and pictures of steampunk cats. Carefully crafting and honing concepts into finished products is something that takes time with an AI, but can yield life-changing results.

AI can be truly sublime, but only when we engage with it directly.

Peter Dresslar is Executive Director, Hawai‘i Center for AI.

[1] There is a raft of evidence regarding this, including the well-cited Malik, B., Masud, Z., Rony, J., Dolz, J., Ayed, I., & Piantanida, P. (2021). Mutual-information based few-shot classification. https://doi.org/10.48550/arxiv.2106.12252

[2] See also: Boudiaf, M. (2020). Transductive information maximization for few-shot learning. https://doi.org/10.48550/arxiv.2008.11297

[3] For instance Sung, F., Yang, Y., Zhang, L., Xiang, T., Torr, P. H. S., & Hospedales, T. M. (2018). Learning to compare: relation network for few-shot learning. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. https://doi.org/10.1109/cvpr.2018.00131

[4] Brief searches on Google Scholar and with Scite indicate that there is little or nothing published, though we can guess that anything that is being researched along these lines would be so new as to be not yet well-indexed or well-cited.
