LLF – ODG building – 5th floor – Council room (533)
Marie-Catherine de Marneffe
When we communicate, we infer a lot beyond the literal meaning of the words we hear or read. In particular, our understanding of an utterance depends on assessing the extent to which the speaker presents events as factual. An unadorned declarative like “The cancer has spread” conveys firm speaker commitment to the cancer having spread, whereas “There are some indicators that the cancer has spread” imbues the claim with uncertainty. In this talk, I will investigate how well BERT, a current neural language model, performs at predicting factuality in several existing English datasets, encompassing various linguistic constructions. I will show that, although BERT achieves very good results, it does so by exploiting surface patterns that correlate with certain factuality labels, but it fails on items that require pragmatic knowledge. These results highlight directions for improvement toward building robust natural language understanding systems.
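To make the notion of “surface patterns that correlate with factuality labels” concrete, here is a toy, purely illustrative baseline (not the method discussed in the talk): it assigns a commitment score on a FactBank-style scale from -3 (certainly false) to +3 (certainly true) using only lexical cues such as hedges and negation. The cue lists and scoring values are invented for illustration; a model relying on such cues alone would fail exactly on the pragmatics-dependent items the talk highlights.

```python
# Toy surface-cue baseline for speaker-commitment (factuality) scoring.
# The scale is FactBank-style: -3 (certainly false) to +3 (certainly true).
# Cue lists and score values are illustrative assumptions, not from the talk.

HEDGES = {"may", "might", "could", "possibly", "perhaps", "indicators", "suggest"}
NEGATIONS = {"not", "no", "never", "n't"}

def factuality_score(sentence: str) -> float:
    tokens = sentence.lower().replace(",", " ").split()
    score = 3.0                      # unadorned declaratives read as fully committed
    if any(t in HEDGES for t in tokens):
        score = 1.0                  # hedging lowers commitment toward "possible"
    if any(t in NEGATIONS for t in tokens):
        score = -score               # negation flips polarity
    return score

print(factuality_score("The cancer has spread."))
# → 3.0
print(factuality_score("There are some indicators that the cancer has spread."))
# → 1.0
```

The point of the talk is that a high-capacity model like BERT can score well on benchmarks by implicitly learning cues of this kind, while still mispredicting items whose factuality depends on pragmatic inference rather than lexical markers.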