1. 6

Ambiguity is an intrinsic feature of natural language. Managing ambiguity is a key part of human language understanding, allowing us to anticipate misunderstanding as communicators and revise our interpretations as listeners. As language models (LMs) are increasingly employed as dialogue interfaces and writing aids, handling ambiguous language is critical to their success. We characterize ambiguity in a sentence by its effect on entailment relations with another sentence, and collect AmbiEnt, a linguist-annotated benchmark of 1,645 examples with diverse kinds of ambiguity. We design a suite of tests based on AmbiEnt, presenting the first evaluation of pretrained LMs to recognize ambiguity and disentangle possible meanings. We find that the task remains extremely challenging, including for the recent GPT-4, whose generated disambiguations are considered correct only 32% of the time in human evaluation, compared to 90% for disambiguations in our dataset. Finally, to illustrate the value of ambiguity-sensitive tools, we show that a multilabel NLI model can flag political claims in the wild that are misleading due to ambiguity. We encourage the field to rediscover the importance of ambiguity for NLP.
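The abstract's framing, characterizing an ambiguous sentence by its effect on entailment relations, can be sketched as a toy snippet. This is illustrative only: the example sentences and the stand-in `judge` function are my own inventions, not the paper's code; a real system would query a pretrained NLI model for each disambiguated reading.

```python
# Toy sketch of the multilabel NLI framing: an ambiguous premise yields a
# SET of entailment labels (one per reading) rather than a single verdict.

def entailment_labels(readings, judge):
    """Collect the entailment label for each disambiguated reading."""
    return {judge(reading) for reading in readings}

# Hypothetical premise: "He saw that gas can explode."
# Hypothesis: "Gas is explosive."
readings = [
    "He saw that gas is capable of exploding.",   # generic reading: entails
    "He saw that a particular gas can explode.",  # specific reading: neutral
]

# Stand-in judge keyed on the wording of each reading; a real system
# would replace this with an NLI model's prediction.
def judge(reading):
    return "entailment" if "capable" in reading else "neutral"

labels = entailment_labels(readings, judge)
# Ambiguity surfaces as multiple labels for the same surface sentence.
assert labels == {"entailment", "neutral"}
```

The point of the multilabel setup is that a single-label NLI model is forced to collapse the two readings into one answer, which is exactly the failure the paper's AmbiEnt benchmark is designed to expose.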

    1. 2

      I’m slightly embarrassed that I didn’t notice the pun in the title until the second reading.

      1. 1

        Sorry, what pun? Maybe I am a language model.

        1. 2

The ‘We’re Afraid’ bit is a direct reference to the kind of language construct that the paper is describing. It’s similar to the ‘we found an neuron’ thing in the title of another paper that was shared here a few weeks back.

          1. 4

            How’s that a pun? (native english speaker wondering what you’re seeing)

          2. 1

            Interesting, thank you for explaining!

    2. 2

We know that ambiguity is modeled. Further, we have a toy model which can explain how polysemy is encoded. The Wiktionary page for ‘afraid’ calls it a colloquialism, and I think that that is a better explanation of what the paper is trying to explore: humans often have hidden implications in their ordinary speech, and we choose semantics which let us deny those implications. The resulting utterances are inherently, intentionally ambiguous when we consider their entailments, but that is a wholly semantic issue which can’t ever be handled at the syntactic level.

Quick formal proof of my last line there: Consider the statement, “There exists a set of countable cardinality.” Then, the (upward) Löwenheim–Skolem theorem says that this statement has models of every uncountable cardinality; it is ambiguous whether it entails the continuum hypothesis, for example.

The paper’s data is confounded! They used InstructGPT, a fine-tuned GPT-3, to generate instances of the problem. We can’t trust the results showing that GPT-4 performs better than alternatives, because GPT-3 and GPT-4 are both from the same vendor, and neither is reproducible or documented for the public.

    3. 1

      I’m afraid I have real trouble understanding what they mean by “they’re afraid …”. So I asked ChatGPT (GPT-4),

      Me: what are the possible meanings of “We’re Afraid Language Models Aren’t Modeling Ambiguity”?

      GPT-4: The phrase “We’re Afraid Language Models Aren’t Modeling Ambiguity” can have several possible meanings. Here are a few interpretations:

      Concern about language models’ performance: The statement might express concern or disappointment that existing language models, like GPT-4, may not be accurately capturing or handling the ambiguous nature of human language.

      Ambiguity in natural language: The phrase acknowledges that ambiguity is an inherent aspect of human language, which can make it difficult for language models to understand and generate text that accurately reflects intended meanings.

      Limitations of current AI technology: It might imply that despite advancements in AI and language models, there are still limitations in the technology’s ability to fully comprehend and process the complexities of ambiguous language.

      Call for further research: The statement could be a call to action for researchers to improve language models by developing new techniques and approaches to better model and understand ambiguity in language.

      Comparison to human language understanding: The phrase might suggest a comparison between AI language models and human language understanding, highlighting that AI models may not yet have achieved the same level of proficiency as humans in handling ambiguous language.

      Ethical considerations: The statement may also touch upon ethical implications, as language models that are unable to effectively model ambiguity could lead to misunderstandings or misrepresentations when used in real-world applications.