“What you talking about, Willis?” The importance of Clarification Requests for Conversational AI

Maria Di Maro
Published in URBAN/ECO Research
Apr 17, 2024

As pointed out in a previous post, language plays a crucial role in allowing individuals to articulate their intentions, desires, and actions within communication. Beyond conveying a text intentionally and in a coherent and cohesive manner, it is also essential to ensure clarity. To achieve this, what matters is not only the ability to produce declarative texts of various kinds but also the ability to ask questions. More specifically, in this quest for clarity, clarification requests emerge as important communicative strategies (Clark, 1996), aiding in the precise expression of actions through language.

Clarification requests, often manifesting in the form of questions or prompts seeking further elucidation or confirmation, embody the speaker’s commitment to ensuring mutual understanding and alignment of interpretation between interlocutors. By eliciting additional information or clarification, these requests bridge potential gaps in comprehension, thereby enhancing the clarity and effectiveness of communication. Moreover, they are an important tool for grounding. Grounding is an essential part of communication used to ensure mutual understanding: it involves confirming, validating, or clarifying information in the construction of a Common Ground, i.e., the set of shared knowledge, beliefs, and information guiding mutual comprehension (Clark & Schaefer, 1989). The Common Ground can be further distinguished into Communal Common Ground (CCG), the information shared with people belonging to the same community, and Personal Common Ground (PCG), the information collected over time through communicative exchanges with a specific interlocutor (Clark, 2015).
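The CCG/PCG distinction can be pictured as a small data structure. The following is a minimal, hypothetical sketch (the class and method names are illustrative, not from the cited works): a store with a communal layer and a personal layer, where new information conveyed in dialogue accumulates in the PCG.

```python
# Illustrative sketch (hypothetical names): a minimal Common Ground store
# distinguishing Communal Common Ground (CCG) from Personal Common Ground (PCG).
from dataclasses import dataclass, field


@dataclass
class CommonGround:
    """Shared information between two interlocutors."""
    ccg: set = field(default_factory=set)  # community-level shared knowledge
    pcg: set = field(default_factory=set)  # knowledge accumulated in this dialogue

    def ground(self, item: str) -> None:
        """Store a new piece of information conveyed by the interlocutor."""
        self.pcg.add(item)

    def is_shared(self, item: str) -> bool:
        """An item counts as grounded if it appears in either layer."""
        return item in self.ccg or item in self.pcg


cg = CommonGround(ccg={"butter is a dairy product"})
cg.ground("the butter was melted")
print(cg.is_shared("the butter was melted"))  # True
```

In a fuller system the PCG would of course record richer structures than strings, but the two-layer split is the point of the sketch.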

Whether in daily interactions or complex professional contexts, effective communication depends on the clarity of the exchange. Consider a scenario where a project manager delegates tasks to team members. Without clarification requests, vague instructions might lead to misunderstandings or misinterpretations, jeopardizing the project’s success. By actively seeking clarification through queries such as “Could you please clarify the deadline for this task?” or “Can you elaborate on the specific requirements?”, the team members ensure that actions are understood unambiguously.

Similarly, for Conversational AI, clarification requests play a pivotal role in enhancing the user experience and improving the effectiveness of interactions. One of the primary challenges faced by Conversational AI systems is the inherent ambiguity and variability of natural language. Users may phrase their queries in diverse ways, use slang or colloquialisms, or provide incomplete information. In such cases, clarification requests enable the AI system to clarify the user’s intent, disambiguate ambiguous statements, and provide more relevant and accurate responses.

In this context, one aspect that therefore needs to be explored is the use of clarification requests depending on the communication problem, together with the selection of a corresponding appropriate request form.

Based on previous studies, a hierarchical classification of clarification requests was proposed (Di Maro, 2021). Starting from Allwood (1992), four basic communicative functions were defined, corresponding to the communicative levels of contact, perception, understanding, and intention. On each of these levels, one or more problems may occur, triggered by specific linguistic and/or informational issues (triggers). Moreover, clarification requests can also occur in different forms: open questions, alternatives, positive polar questions, negative polar questions, and declarative sentences. Each formulation can convey a specific function and refer to a problematic item in the previous utterance (the compromised item). Specifically, for the third level of communication problems — understanding — we can further define the specific issues as follows:

· Lexical Understanding: problems occur at this level when a lexical item is unclear to the hearer.

· Reference Reconstruction: uncertainties in the resolution of anaphora or extralinguistic reference can lead to clarification needs.

· Syntactic Understanding: understanding problems can be caused by a problematic recognition of word or phrase boundaries and syntactic structures.

· Logical Understanding: this problem refers to the logical relation connecting the new information to the antecedent one.

· Information Processing: this problem covers two different issues concerning the received information: i) it may not be sufficient for full understanding (i.e., missing information), or ii) previously grounded information needs to be stabilised or checked because of inconsistencies, via a confirmation or a control-targeted question (i.e., common ground inconsistencies).
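The taxonomy above lends itself to a simple selection rule: given the detected problem and the compromised item, pick a request form and instantiate it. The problem categories come from the classification just listed, but the problem-to-form mapping and the templates below are hypothetical illustrations, not the mapping proposed in the cited work.

```python
# Illustrative sketch: selecting a clarification request form for a detected
# understanding-level problem. The categories follow the taxonomy above;
# the mapping and templates are hypothetical examples.
REQUEST_FORM = {
    "lexical_understanding":       ("open question",        "What do you mean by '{item}'?"),
    "reference_reconstruction":    ("alternative",          "Do you mean {item}, or something else?"),
    "syntactic_understanding":     ("declarative",          "You said '{item}'."),
    "logical_understanding":       ("positive polar",       "Does that follow from '{item}'?"),
    "common_ground_inconsistency": ("negative polar",       "Should I not have {item}?"),
}


def clarification_request(problem: str, compromised_item: str) -> str:
    """Instantiate a request template targeting the compromised item."""
    form, template = REQUEST_FORM[problem]
    return template.format(item=compromised_item)


print(clarification_request("common_ground_inconsistency", "melted the butter"))
# -> Should I not have melted the butter?
```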

A striking example of the management of the Common Ground (Clark, 1996) via clarification requests is represented by requests aimed at signalling a Common Ground Inconsistency. With this problem, we refer to the incompatibility between a belief of the listener, stored in the PCG, and new evidence provided by the speaker, which is a candidate to become part of the PCG.

[Figure] Representation of the scenario eliciting a Common Ground Clarification Request. In the representation of agent A, the CCG is stored to guide the process of accumulating information in the Common Ground. The pieces of information (i1, i2, i3, …, in) are communicated by agent B to A and sequentially stored in the PCG. When B utters a new piece of information iz, this is represented as a new candidate item for the PCG. Here, however, the presence of the new item iz in the PCG clashes with that of another item, i3, whose validity is now questioned. This conflict represents a Common Ground Inconsistency and is translated into the Common Ground Clarification Request ¬i3? (i.e., a high negative polar question).

For instance, the action of [Grinding] requires a solid ingredient as its pre-condition and results in the ingredient becoming powder as its post-condition. This means that, after this action, an action like [Cutting] cannot be performed on the same ingredient, since the previous post-condition causes the pre-condition of [Cutting] (i.e., a solid object) not to be verified. Clarification requests can in this case be adopted as corrective feedback. This problem also has relevant linguistic consequences, as it is signalled using specific syntactic forms. Precisely, when facing a conflict between previously grounded information (i.e., melt the butter) and newly conveyed information (i.e., cut the butter), a specific type of clarification request is usually produced to signal the problem: a high negative polar question (Van Rooy & Safarova, 2003), i.e., Should I not have melted the butter? (Di Maro et al., 2021a; Di Maro et al., 2021b).
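The pre-/post-condition reasoning above can be sketched in a few lines. This is a deliberately toy model, assuming a hypothetical action table and a single-state ingredient representation (the cited works use graph databases for this): when the state produced by the grounded action violates the pre-condition of the new instruction, the conflict surfaces as a high negative polar question.

```python
# Illustrative sketch (hypothetical action model): detecting a Common Ground
# Inconsistency via action pre-/post-conditions and signalling it with a
# high negative polar question, as in the melted/cut butter example.
ACTIONS = {
    "melt":  {"pre": "solid", "post": "liquid"},
    "grind": {"pre": "solid", "post": "powder"},
    "cut":   {"pre": "solid", "post": "solid"},
}


def check_instruction(grounded_action: str, new_action: str, ingredient: str):
    """Return a clarification request if the new instruction's pre-condition
    conflicts with the state produced by the previously grounded action."""
    state_after = ACTIONS[grounded_action]["post"]
    required = ACTIONS[new_action]["pre"]
    if state_after != required:
        # The previously grounded item ("melt the butter") is now in doubt:
        # question it with a high negative polar question (the ¬i3? of the figure).
        return f"Should I not have {grounded_action}ed the {ingredient}?"
    return None  # no conflict: the instruction can be grounded


print(check_instruction("melt", "cut", "butter"))
# -> Should I not have melted the butter?
```

The naive `{action}ed` past-tense formation is, of course, a placeholder for proper surface realisation.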

Clarification requests are indispensable tools for grounding actions and for fostering understanding. By actively asking for clarification and elucidation, individuals pave the way for clearer, more coherent communication, enhancing the efficacy of the interaction. Since adopting clarification requests enriches linguistic exchanges, it is important to design conversational systems that are capable of representing and managing the Common Ground dynamically, so that any potential problems or conflicts can be detected and signalled with the most appropriate type of clarification request.

In the context of Conversational AI, choosing the right question deserves as much attention as generating the right answer. Indeed, asking questions is not only a useful tool for understanding but also a strategic communicative device: on the one hand, it can be symptomatic of the speaker’s intentionality; on the other, it is a preferential pathway to achieving the communicative goal.

References

Allwood, J., Nivre, J., & Ahlsén, E. (1992). On the semantics and pragmatics of linguistic feedback. Journal of Semantics, 9(1), 1–26.

Clark, E. V. (2015). Common ground. The Handbook of Language Emergence, 328–353.

Clark, H. H., & Schaefer, E. F. (1989). Collaborating on contributions to conversations. In North-Holland Linguistic Series: Linguistic Variations (Vol. 54, pp. 123–152). Elsevier.

Clark, H. H. (1996). Using Language. Cambridge University Press.

Di Maro, M., Origlia, A., & Cutugno, F. (2021a). PolarExpress: Polar question forms expressing bias-evidence conflicts in Italian. International Journal of Linguistics, 13(4), 14–35.

Di Maro, M., Origlia, A., & Cutugno, F. (2021b). Cutting melted butter? Common Ground inconsistencies management in dialogue systems using graph databases. IJCoL. Italian Journal of Computational Linguistics, 7(7–1, 2), 157–190.

Di Maro, M. (2021). “Shouldn’t I use a polar question?” Proper Question Forms Disentangling Inconsistencies in Dialogue Systems. Ph.D. thesis, Mind, Gender and Language, University of Naples Federico II.
