Computational Linguistics

Implicature Processing

Implicature processing concerns how listeners derive meanings that speakers intend but do not explicitly state, and how computational systems can model these pragmatic inferences grounded in conversational cooperation.

P_L₁(meaning | u, C) ∝ P_S₁(u | meaning, C) · P(meaning)

Conversational implicature, introduced by H. P. Grice (1975), refers to meaning that is communicated by a speaker and inferred by a listener but is not part of the literal content of an utterance. When someone asks "Did you enjoy the movie?" and receives the reply "The popcorn was good," the respondent implicates that the movie itself was not particularly enjoyable. This inference arises not from what was said but from what was conspicuously not said, combined with the assumption that the speaker is being cooperative. Modeling such inferences computationally is central to building systems that understand natural language as humans use it.

Types of Implicature

Scalar Implicature
Utterance: "Some students passed."
Literal meaning: ∃x[student(x) ∧ passed(x)]
Scalar alternative: "All students passed."
Implicature: ¬∀x[student(x) → passed(x)]

RSA derivation: P_L₁(¬all | "some") ∝ P_S₁("some" | ¬all) · P(¬all)
A cooperative speaker who knew that all passed would have said "all"; hearing "some," the listener infers "not all."

Grice distinguished between particularized implicatures, which depend on specific conversational contexts, and generalized implicatures, which arise in most contexts regardless of specifics. The most extensively studied type is scalar implicature, where the use of a weaker term on a scale (e.g., "some" on the scale <some, most, all>) implicates the negation of the stronger term. Other types include relevance implicatures (inferring that a response addresses the question at hand), manner implicatures (interpreting marked expressions as conveying marked meanings), and quantity implicatures beyond the scalar case.

Computational Models

The Rational Speech Act (RSA) framework provides the most successful computational model of implicature processing. In RSA, scalar implicature emerges naturally: a pragmatic speaker who intends "not all" would prefer "some" over "all," because "all" would be literally false. The pragmatic listener, inverting this reasoning, infers "not all" from the speaker's choice of "some." RSA models have been extended to handle embedded implicatures ("Every student read some of the books"), exhaustivity effects, and the interaction of implicature with other pragmatic phenomena. These models make quantitative predictions about human behavior that have been confirmed in experimental studies.
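The scalar derivation above can be run end to end. The following sketch is a minimal RSA implementation for the some/all case, under illustrative assumptions: three students, a uniform prior over how many passed, and a speaker rationality parameter α = 1 (none of these values come from a specific published model).

```python
import numpy as np

# Worlds: how many of 3 students passed; uniform prior (an illustrative choice).
prior = np.ones(4) / 4

# Utterances and their literal (truth-conditional) semantics over worlds 0..3.
utterances = ["none", "some", "all"]
meaning = np.array([
    [1, 0, 0, 0],   # "none": true iff 0 passed
    [0, 1, 1, 1],   # "some": true iff at least 1 passed
    [0, 0, 0, 1],   # "all":  true iff all 3 passed
], dtype=float)

def normalize(m, axis):
    return m / m.sum(axis=axis, keepdims=True)

# Literal listener: P_L0(w | u) ∝ [[u]](w) · P(w)
L0 = normalize(meaning * prior, axis=1)

# Pragmatic speaker: P_S1(u | w) ∝ P_L0(w | u)^α, with rationality α = 1
alpha = 1.0
S1 = normalize(L0.T ** alpha, axis=1)

# Pragmatic listener: P_L1(w | u) ∝ P_S1(u | w) · P(w)
L1 = normalize(S1.T * prior, axis=1)

some = utterances.index("some")
p_not_all = L1[some, :3].sum()   # probability mass on worlds where not all passed
print(f"P(not all | 'some') = {p_not_all:.3f}")   # 0.889
```

With α = 1 the pragmatic listener already assigns roughly 89% probability to "not all" after hearing "some"; raising α (a more informativity-maximizing speaker) pushes this toward 1, since such a speaker would almost never say "some" in the all-passed world.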

Developmental and Processing Evidence

Psycholinguistic research on scalar implicature has revealed that these inferences are not automatic but require cognitive effort. Children under age five often interpret "some" literally, accepting "Some horses jumped over the fence" even when all horses did (Noveck, 2001). Adults compute scalar implicatures more slowly than literal interpretations, as measured by reading time and response latency. These findings support the view that implicature involves a post-semantic pragmatic enrichment process — a distinction that computational models must capture to accurately predict human language processing.

Implicature in NLP Systems

Natural language processing systems encounter implicature in numerous contexts. Question-answering systems must recognize when an answer implicates additional information: if asked "Is the restaurant expensive?" and a review says "The portions are generous," the system should infer a positive evaluation. Dialogue systems must handle indirect responses that convey information through implicature rather than direct statement. Sentiment analysis systems must recognize that "The service was not bad" implicates a mildly positive evaluation through litotes, a specific kind of scalar reasoning.
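The litotes case can be illustrated with a deliberately simple rule-based sketch. The lexicon, the scores, and the 0.3 attenuation factor below are all hypothetical choices for illustration, not a standard resource or library API:

```python
# Illustrative negative-sentiment lexicon (hypothetical scores).
NEGATIVE = {"bad": -0.8, "terrible": -1.0, "slow": -0.5}

def litotes_score(tokens):
    """Score a token list, mapping 'not + negative adjective' to a mildly
    positive value rather than simply flipping the negative term's sign."""
    score = 0.0
    for i, tok in enumerate(tokens):
        if tok in NEGATIVE:
            if i > 0 and tokens[i - 1] == "not":
                # "not bad" implicates "somewhat good", not "great":
                # attenuate instead of negating at full strength.
                score += 0.3 * abs(NEGATIVE[tok])
            else:
                score += NEGATIVE[tok]
    return score

print(f"{litotes_score('the service was not bad'.split()):.2f}")   # 0.24
print(f"{litotes_score('the service was bad'.split()):.2f}")       # -0.80
```

The point of the attenuation rule is exactly the scalar reasoning described above: a speaker who thought the service was genuinely good would have said so, so "not bad" should land between neutral and good rather than mirroring "bad."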

Recent work has evaluated large language models' ability to process implicature, with mixed results. While models such as GPT-4 perform well on standard scalar implicature tests, they struggle with particularized implicatures that require reasoning about specific conversational contexts and speaker knowledge states. The conversational implicature benchmark of Ruis et al. (2022) revealed significant gaps between model and human performance, particularly for cases requiring multi-step pragmatic reasoning. Bridging this gap likely requires integrating explicit models of speaker intention and world knowledge into neural architectures.

References

  1. Grice, H. P. (1975). Logic and conversation. In P. Cole & J. L. Morgan (Eds.), Syntax and Semantics, Vol. 3: Speech Acts (pp. 41–58). Academic Press.
  2. Noveck, I. A. (2001). When children are more logical than adults: Experimental investigations of scalar implicature. Cognition, 78(2), 165–188. doi:10.1016/S0010-0277(00)00114-1
  3. Goodman, N. D., & Frank, M. C. (2016). Pragmatic language interpretation as probabilistic inference. Trends in Cognitive Sciences, 20(11), 818–829. doi:10.1016/j.tics.2016.08.005
  4. Ruis, L., Khan, A., Biderman, S., Hooker, S., Rocktäschel, T., & Grefenstette, E. (2022). Large language models are not zero-shot communicators. arXiv preprint arXiv:2210.14986.
