Conversational implicature, introduced by H. P. Grice (1975), refers to meaning that is communicated by a speaker and inferred by a listener but is not part of the literal content of an utterance. When someone asks "Did you enjoy the movie?" and receives the reply "The popcorn was good," the respondent implicates that the movie itself was not particularly enjoyable. This inference arises not from what was said but from what was conspicuously not said, combined with the assumption that the speaker is being cooperative. Modeling such inferences computationally is central to building systems that understand natural language as humans use it.
Types of Implicature
Utterance: "Some students passed."
Literal meaning: ∃x[student(x) ∧ passed(x)]
Scalar alternative: "All students passed."
Implicature: ¬∀x[student(x) → passed(x)]
RSA derivation: P_L₁(¬all | "some") ∝ P_S₁("some" | ¬all) · P(¬all)
A cooperative speaker who knew that all passed would have said "all," so "some" implicates "not all."
Grice distinguished between particularized implicatures, which depend on specific conversational contexts, and generalized implicatures, which arise in most contexts regardless of specifics. The most extensively studied type is scalar implicature, where the use of a weaker term on a scale (e.g., "some" on the scale <some, most, all>) implicates the negation of the stronger term. Other types include relevance implicatures (inferring that a response addresses the question at hand), manner implicatures (interpreting marked expressions as conveying marked meanings), and quantity implicatures beyond the scalar case.
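The scale-based pattern for generalized scalar implicature can be sketched as a small helper. The scales and function names here are illustrative assumptions, not an established API: given a Horn scale ordered weak-to-strong, uttering a weaker term implicates the negation of each stronger alternative.

```python
# Illustrative Horn scales, ordered weak -> strong (an assumption for this sketch).
SCALES = {
    "quantifier": ["some", "most", "all"],
    "connective": ["or", "and"],
}

def scalar_implicatures(term, scale):
    """Return the implicated negations for `term` on `scale` (weak -> strong).

    Uttering a term implicates the negation of every stronger alternative;
    the strongest term on a scale implicates nothing.
    """
    idx = scale.index(term)
    return [f"not {stronger}" for stronger in scale[idx + 1:]]

print(scalar_implicatures("some", SCALES["quantifier"]))  # → ['not most', 'not all']
print(scalar_implicatures("all", SCALES["quantifier"]))   # → []
```

Note that this captures only the generalized, context-independent pattern; particularized implicatures resist this kind of lexical treatment.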
Computational Models
The Rational Speech Act (RSA) framework provides the most successful computational model of implicature processing. In RSA, scalar implicature emerges naturally: a pragmatic speaker who intends "not all" would prefer "some" over "all," because "all" would be literally false. The pragmatic listener, inverting this reasoning, infers "not all" from the speaker's choice of "some." RSA models have been extended to handle embedded implicatures ("Every student read some of the books"), exhaustivity effects, and the interaction of implicature with other pragmatic phenomena. These models make quantitative predictions about human behavior that have been confirmed in experimental studies.
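The recursive reasoning described above can be sketched in a minimal two-state, two-utterance RSA model. The setup (uniform prior, rationality parameter α = 1, and the state/utterance names) is an illustrative assumption, not a fixed specification of the framework:

```python
# Minimal RSA sketch of the "some"/"all" scalar implicature.
states = ["not_all", "all"]      # world states (at least one student passed)
utterances = ["some", "all"]

# Literal semantics: "some" is true in both states, "all" only if all passed.
meaning = {("some", "not_all"): 1.0, ("some", "all"): 1.0,
           ("all", "not_all"): 0.0, ("all", "all"): 1.0}
prior = {"not_all": 0.5, "all": 0.5}  # uniform state prior (assumption)

def literal_listener(u):
    """L0: P(state | utterance) ∝ truth of utterance in state × prior."""
    scores = {s: meaning[(u, s)] * prior[s] for s in states}
    z = sum(scores.values())
    return {s: v / z for s, v in scores.items()}

def speaker(s, alpha=1.0):
    """S1: P(utterance | state) ∝ L0(state | utterance)^alpha."""
    scores = {u: literal_listener(u)[s] ** alpha for u in utterances}
    z = sum(scores.values())
    return {u: v / z for u, v in scores.items()}

def pragmatic_listener(u):
    """L1: P(state | utterance) ∝ S1(utterance | state) × prior."""
    scores = {s: speaker(s)[u] * prior[s] for s in states}
    z = sum(scores.values())
    return {s: v / z for s, v in scores.items()}

print(pragmatic_listener("some"))  # "not_all" gets 0.75: the implicature
```

The pragmatic listener assigns probability 0.75 to "not all" on hearing "some," even though "some" is literally compatible with both states: the speaker who knew "all" would have said "all."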
Psycholinguistic research on scalar implicature has revealed that these inferences are not automatic but require cognitive effort. Children under age five often interpret "some" literally, accepting "Some horses jumped over the fence" even when all horses did (Noveck, 2001). Adults compute scalar implicatures more slowly than literal interpretations, as measured by reading time and response latency. These findings support the view that implicature involves a post-semantic pragmatic enrichment process — a distinction that computational models must capture to accurately predict human language processing.
Implicature in NLP Systems
Natural language processing systems encounter implicature in numerous contexts. Question-answering systems must recognize when an answer implicates additional information: if asked "Is the restaurant expensive?" and a review says "The portions are generous," the system should infer that the reviewer considers the restaurant good value. Dialogue systems must handle indirect responses that convey information through implicature rather than direct statement. Sentiment analysis systems must recognize that "The service was not bad" implicates a mildly positive evaluation through litotes, a form of understatement that rests on scalar reasoning.
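The litotes case can be illustrated with a toy scoring rule. This heuristic, its lexicon, and the +0.3 value are assumptions for illustration, not an established sentiment-analysis method: the key point is that "not bad" is scored as weakly positive rather than as the full opposite of "bad."

```python
# Toy negative-term lexicon (illustrative values, not from any standard resource).
NEGATIVE_TERMS = {"bad": -1.0, "terrible": -1.5, "awful": -1.5}

def litotes_score(text):
    """Score 'not <negative>' as mildly positive; plain negatives as-is.

    Mirrors the scalar reasoning behind litotes: negating a strong
    negative implicates a mid-scale (weakly positive) evaluation,
    not the strong positive endpoint.
    """
    tokens = text.lower().split()
    score = 0.0
    for i, tok in enumerate(tokens):
        if tok in NEGATIVE_TERMS:
            if i > 0 and tokens[i - 1] == "not":
                score += 0.3   # litotes: weakly positive, not strongly positive
            else:
                score += NEGATIVE_TERMS[tok]
    return score

print(litotes_score("the service was not bad"))  # → 0.3
print(litotes_score("the service was bad"))      # → -1.0
```

A bag-of-words model without such handling would score "not bad" as negative, which is exactly the failure mode the paragraph above describes.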
Recent work has evaluated large language models' ability to process implicature, with mixed results. While models like GPT-4 perform well on standard scalar implicature tests, they struggle with particularized implicatures that require reasoning about specific conversational contexts and speaker knowledge states. The IMPLICATURE benchmark (Ruis et al., 2022) revealed significant gaps between model and human performance on conversational implicature, particularly for cases requiring multi-step pragmatic reasoning. Bridging this gap likely requires integrating explicit models of speaker intention and world knowledge into neural architectures.