Bi-directional prompting

Adrian Chan
Published in Bootcamp
3 min read · Mar 1, 2023

Response engineering?

Is this a thing? I’ve been thinking about it, and so want to pen a few thoughts on the idea.

LLM-based AIs like ChatGPT use language, but language operates in two modes: as text, in the form of writing; and as speech, in the mode of talk.

ChatGPT clearly operates in both modes: users talk to it, engaging it in conversation; and users write to it, asking it for text or documentation.

The AI moves between these modes effortlessly and interchangeably; it’s in the nature of the technology. In human social interaction, modulating between speech and writing this way would be somewhat strange. With the AI, it’s not.

The current state-of-the-art exercise in chatbot proficiency is prompt engineering. This includes basic text prompts, with their refinements and iterations, as well as more technically precise API prompts designed to extract specific responses (such as code).

But given that language is fundamentally communicative, that is, a medium facilitating meaningful exchange, why not use its intrinsic bi-directionality for interaction purposes? Why not engineer responses as well as prompts?

This would require some technical intervention on the back end, by means of which the Chat AI’s first-pass response would be supplemented with a prompt back to the user, one designed to steer the user toward more successful refinements.

A response from the Chat AI might, for example, suggest that the user be more specific. It might ask the user whether the response was complete enough, helpful enough, too generic, or not generic enough.
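To make this concrete, here is a minimal sketch of what that back-end supplementation might look like. Everything here is a hypothetical placeholder of my own: call_model stands in for whatever chat-completion API the product uses, and the classifier is a trivial stand-in for whatever heuristic or second model pass a real system would use.

```python
# Sketch of "response engineering": the first-pass response is supplemented
# with a steering question back to the user. All names are hypothetical.

STEERING_QUESTIONS = {
    "too_generic": "Would a more specific answer help? Tell me your exact use case.",
    "possibly_incomplete": "Was this complete enough, or should I expand on any part?",
}

def call_model(user_prompt: str) -> str:
    """Stand-in for the real chat-completion call."""
    return f"(model answer to: {user_prompt})"

def classify_response(prompt: str, response: str) -> str:
    """Placeholder heuristic; a real system might classify with a second model pass."""
    return "too_generic" if len(response.split()) < 40 else "possibly_incomplete"

def respond_with_steering(user_prompt: str) -> str:
    """First-pass response, plus a prompt back to the user."""
    response = call_model(user_prompt)
    category = classify_response(user_prompt, response)
    return f"{response}\n\n{STEERING_QUESTIONS[category]}"

print(respond_with_steering("Explain bi-directional prompting."))
```

The point of the design is simply that the user never sees a bare answer; every response carries its own suggested next step.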

One could imagine a category system or taxonomy in which a series of conversational “rounds” is used to produce more complete interactions, drawing, for example, on the skills information architects bring to navigation and information search.

For any prompt/response coupling, a set of branching options might be called on the back end to suggest to the user how to proceed. We do this naturally in conversation and social interaction: any recognizable social situation (pastime, game, ritual, ceremony, and so on) has built-in cues and informal “rules” by which we interpret each other’s actions. Games, at the meta level, inform us how to proceed.
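A sketch of such a branching taxonomy might key the options offered to the round of conversation, the way a navigation system narrows as you descend. The categories and cutoffs below are invented purely for illustration.

```python
# Hypothetical branching table keyed to conversational "rounds," in the
# spirit of information-architecture navigation.

BRANCHES = {
    "navigate": ["Narrow the topic", "Broaden the topic", "See related topics"],
    "refine": ["More detail", "Simpler language", "A worked example"],
    "close": ["Summarize the thread", "Start a new line of questioning"],
}

def next_options(round_number: int) -> list[str]:
    """Early rounds steer navigation; later rounds steer refinement, then closure."""
    if round_number <= 1:
        return BRANCHES["navigate"]
    if round_number <= 4:
        return BRANCHES["refine"]
    return BRANCHES["close"]

for r in (1, 3, 6):
    print(r, next_options(r))
```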

Why not use this in the engineering of Chat AI responses to improve the user experience? These cues could even be embedded in personas, so that personality styles are leveraged as a means of “packaging” the internal prompting of responses.
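Packaging might be as simple as voicing the same steering question in different personality styles. The personas below are, of course, made up.

```python
# Hypothetical persona "packaging": one steering question, several voices.

PERSONAS = {
    "coach": "Nice progress so far. One thought: {q}",
    "librarian": "To locate this more precisely: {q}",
    "engineer": "To tighten the spec: {q}",
}

def package(question: str, persona: str = "coach") -> str:
    return PERSONAS[persona].format(q=question)

print(package("Can you narrow the question?", "librarian"))
```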

This strikes me as an opportunity area worth exploring as we move forward with Chat AIs. There is no reason to burden users with the entire process of constructing prompts when responses themselves can serve as steering mechanisms. I see this as UX, but a fairly unique kind of UX. I’m curious to see whether this area of AI design catches on.
