Lines of Questioning — The logic behind question engineering in Conversational AI Systems

Sam Bobo
Speaking Artificially
6 min read · Feb 24, 2023
Imagined by Midjourney

Since birth, humans have been pre-programmed with curiosity about the world, starting with sheer observation, progressing to primitive forms of questioning such as pointing, then to generalized questions about a particular topic or object, and eventually to highly complex and crafted questions. Furthermore, we are taught that there are no silly questions (whether desired or not, depending on the nature of the topic). Humans are innately intellectually curious beings, and questions are the foundation of learning, from forming hypotheses and undertaking the scientific method, to sparking the desire to learn a complex topic, to delving into the emotional wellbeing of another person.

Examining one’s line of questioning around a particular topic can uncover a lot about that individual’s thought process and logic, their analysis, their motivation in the situation at hand, and more.

When I first joined IBM as part of the Blue Spark Leadership Development Program, I was given the opportunity to attend Global Sales School. There, we were taught inductive selling, a method whereby we listen to the needs of the customer to uncover current pain points or problems, and continue to ask probing questions until the right solution (if applicable) is revealed. One particular session stood out to me. After a simulation I attended, the observer responsible for my overall grade on the exercise showed me a notepad containing every question I had asked. What was revealed upon reflection was my mental logic throughout the session, unbeknownst to me in real time, as the interplay of nerves and strategy had kept me from tracking my own questions. In that moment, I knew the true power of questioning.

Why bring up the topic of questions? In Conversational AI design, specifically for conversational designers, lines of questioning (known as prompts) are immensely important to the overall outcome of the conversation and the experience thereof. Questions can dictate the number of turns a conversation takes before it is terminated, with either a successful or unsuccessful result, and they set the tone for the incoming response.

The ability of systems to ask questions has evolved over time:

1. Hard-coded within application logic — building applications with proprietary dialog frameworks or standards like VoiceXML furnishes the application developer and conversational designer with the ability to ask pointed questions to achieve a specific task, typically to solicit information from the user in order to perform that task or route to a particular agent. These questions take two forms: (a) closed and (b) open-ended.

a) Closed — typically experienced within directed dialog applications, the system asks for information one question at a time. For example, when booking a flight, the system would ask for (1) the origin of the flight, (2) the destination, (3) the time of day, and (4) the carrier, and then prompt the user for payment information such as (1) the credit card number, (2) the CVV code, and (3) the expiry date. As illustrated, just booking a flight can take over seven questions.

b) Open-ended — with the advent of Natural Language Understanding (NLU), the concept of open-ended questions arose. Common practice within conversational design was to prompt the user with a highly generic question such as “In a few words, tell me what you are calling about?” Typically, the task set was known and limited to a small set of intents. For example, calling a bank might surface tasks such as (1) checking the balance of an account, (2) moving money or making a transaction, and (3) opening or closing an account. Open-ended questions can typically capture both the intent and the appropriate entities within the utterance, collecting the key information required for the task at hand.
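To make the two forms concrete, here is a minimal Python sketch of a toy flight-booking and banking exchange. The prompts, keyword lists, and intent names are illustrative stand-ins for what a real dialog framework (such as VoiceXML) or a trained NLU engine would provide, not any vendor’s actual API:

```python
# Toy illustration of closed vs. open-ended questioning.
# Prompts, intents, and keyword matching are illustrative only; a production
# system would rely on a dialog framework or a trained NLU model.

def closed_dialog():
    """Directed dialog: one pointed question per piece of information."""
    booking = {}
    for slot, prompt in [
        ("origin", "What city are you flying from?"),
        ("destination", "What city are you flying to?"),
        ("time_of_day", "Morning, afternoon, or evening?"),
        ("carrier", "Which airline do you prefer?"),
    ]:
        booking[slot] = input(prompt + " ")  # one question per turn
    return booking

# A tiny keyword map standing in for a trained intent classifier.
INTENT_KEYWORDS = {
    "check_balance": ["balance", "how much"],
    "move_money": ["transfer", "move", "send"],
    "open_close_account": ["open", "close"],
}

def open_ended_dialog():
    """Open-ended: one generic prompt, then match the utterance to an intent."""
    utterance = input("In a few words, tell me what you are calling about? ").lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(word in utterance for word in keywords):
            return intent
    return "no_match"  # would trigger a re-prompt or an agent transfer
```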

The above lines of questioning exist almost exclusively to capture information and, on their own, do not reveal much to inform conversational design. If anything, the ability to match an utterance to an intent, and the frequency with which each intent is invoked, can tell conversational designers which systems users are calling about most.

2. Slot filling — with both open and closed questions, the system often does not capture all of the required information, as users do not always share everything on the first pass. Newer conversational applications allow for the concept of “slot filling,” whereby the system replies within the particular node of the dialog flow with questions such as “I did not capture {information}, can you please provide that to me?” This line of questioning can inform conversational designers about the initial prompt and what can be modified to capture all of the information in one shot.
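A sketch of that re-prompting loop, assuming hypothetical slot names and a placeholder extraction function in place of a real NLU engine:

```python
# Slot filling: ask targeted follow-up questions until every required slot
# is captured. extract_slots() is a placeholder for a real NLU engine; here
# it only looks for "from <City>" / "to <City>" with capitalized city names.
import re

REQUIRED_SLOTS = ["origin", "destination"]

def extract_slots(utterance: str) -> dict:
    slots = {}
    if m := re.search(r"from ([A-Z][a-z]+)", utterance):
        slots["origin"] = m.group(1)
    if m := re.search(r"to ([A-Z][a-z]+)", utterance):
        slots["destination"] = m.group(1)
    return slots

def fill_slots(first_utterance: str) -> dict:
    captured = extract_slots(first_utterance)
    for slot in REQUIRED_SLOTS:
        if slot not in captured:
            answer = input(f"I did not capture the {slot}, can you please provide that to me? ")
            # Accept either a full phrase ("from Boston") or a bare value ("Boston").
            captured[slot] = extract_slots(answer).get(slot, answer.strip())
    return captured

# Example: fill_slots("I want to fly to Boston") would re-prompt only for the origin.
```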

3. Prompting for additional content — anyone with a Google Assistant or Amazon Alexa knows that systems which can query the internet to answer general questions found on web pages often come back with additional information, prompting the user with “I found {subject} on the web, would you like a little more context?” This style of questioning can augment the user’s query with more information, but it is fairly limited in its prompt diction.
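A rough sketch of that follow-up pattern, with a stubbed-out web lookup standing in for any assistant’s real search integration:

```python
# "Prompting for additional content": answer briefly, then offer to expand.
# web_search() is a stub returning canned text; a real assistant would call
# a search or knowledge API instead.

def web_search(query: str) -> dict:
    return {"subject": query, "summary": "a short answer", "detail": "a longer extract"}

def answer_with_optional_context(query: str) -> str:
    result = web_search(query)
    reply = f"I found {result['subject']} on the web. {result['summary']}."
    follow_up = input(reply + " Would you like a little more context? ")
    if follow_up.strip().lower() in {"yes", "sure", "please"}:
        return result["detail"]
    return "Okay."
```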

4. Inductive questioning — with Large Language Models such as ChatGPT, the system contains enough linguistic mapping and information to prompt the user intelligently with a line of questioning pertinent to achieving a desired goal. This is a major milestone in Conversational AI design.

I experimented with ChatGPT on two scenarios: (1) prompting the system to “sell me anything” and (2) assigning the system a particular role, in this case a psychologist with me as the patient, and the general task of diagnosing me. Here are the results:

ChatGPT Conversation (1 of 2) on Selling
ChatGPT Conversation (2 of 2) on Selling
ChatGPT Conversation (1 of 2) on Psychology
ChatGPT Conversation (2 of 2) on Psychology

What can be observed from the above examples is that ChatGPT, and the underlying Large Language Model, contains enough context about the particular goal and subject matter to generate a logical line of questioning guiding me to the desired outcome, instead of generic slot filling. All of this was done, notably, without any underlying conversational design or application logic, just the LLM tuned with reinforcement learning.
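For readers who want to try something similar programmatically, below is a minimal sketch using the OpenAI Python SDK. The model name, system prompt, and turn limit are my own assumptions for illustration; the conversations above were run in the ChatGPT web interface, not through the API:

```python
# Minimal sketch: assign the model a role and let it drive the line of
# questioning. Requires the openai package (v1+ interface) and an
# OPENAI_API_KEY in the environment. Model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system",
     "content": "You are a salesperson. Ask me one question at a time "
                "to uncover my needs, then recommend a product."},
]

for _ in range(5):  # a handful of turns for demonstration
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=messages,
    )
    question = response.choices[0].message.content
    print("Assistant:", question)
    messages.append({"role": "assistant", "content": question})
    messages.append({"role": "user", "content": input("You: ")})
```

Logging the assistant turns from a loop like this makes it easy to write down the model’s line of questioning for later analysis, in the spirit of the note-pad exercise described earlier.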

What I found particularly funny, however, was how excited the system seemed to be, as expressed through its punctuation, to learn my preferences, and how often it repeated that sentiment after I answered a question.

The one area (currently, per my experimentation) where ChatGPT and other LLMs fall short is the ability to imbue emotion into the line of questioning in order to solicit an emotional response and arrive at a particular outcome with greater speed or better results. This topic will be explored in a future blog post.

As illustrated within this post, a line of questioning is fundamental to Conversational AI application design strategy: the diction within the prompt, the question being asked, and the subsequent questions that follow when aiming to achieve a particular goal. Generative AI is now showing the ability to craft questions dynamically with an understanding of the underlying subject matter. Writing down the lines of questioning these systems follow can yield a deeper understanding of the underlying thought process the systems undergo. This is an exercise I will continue to do when evaluating Generative AI within the Conversational AI space. Until then, focus on the questions at hand and take a moment, next time you engage in a conversation, to note the questions being asked; the insights you uncover might be extremely valuable.

Sam Bobo
Speaking Artificially

Product Manager of Artificial Intelligence, Conversational AI, and Enterprise Transformation | Former IBM Watson | https://www.linkedin.com/in/sambobo/