Bingzhi Li

ORCID : 0000-0002-1270-9207

Status : PhD student

Address :

LLF, CNRS – UMR 7110
Université Paris Diderot-Paris 7
Case 7031 – 5, rue Thomas Mann,
75205 Paris cedex 13

E-mail : bingzhi2013@gmail.com

General presentation

Interests:

  • computational syntax, linguistic structure 
  • neural language models, syntactic representation, interpretability
  • compositional generalization

Education:

  • MA in Computational Linguistics, LLF, Université de Paris, France

Professional experience:

  • Fall 2022, visiting researcher at the Center for Data Science, NYU, US
    Project: Built a structural generalization benchmark and analyzed the generalization properties of Transformer seq2seq models and symbolic models
    Advisors: Tal Linzen, Najoung Kim
  • Spring 2020, research intern at LLF, Université de Paris
    Project: Investigated the representation of tense in BERT
    Advisor: Guillaume Wisniewski

Teaching

  • Formal Grammar and Parsing (TA, fall 2020 and 2021)
  • Introduction to Python (main instructor, fall 2020 and 2021)
  • L3 and M1 NLP projects (co-advisor, spring 2021, 2022, and 2023)

Thesis

Title : Study of the abstraction capabilities of neural language models

Supervision :
  Benoît Crabbé
  Guillaume Wisniewski

PhD Defense : 2023-11-28

Enrolment : 2020 at Université Paris Cité

Jury :

  • Thierry POIBEAU, Université Sorbonne Nouvelle - CNRS, rapporteur
  • François YVON, Sorbonne Université - CNRS, rapporteur
  • Barbara HEMFORTH, Université Paris Cité - CNRS, examiner
  • Dieuwke HUPKES, Meta AI, examiner
  • Benoît CRABBÉ, Université Paris Cité, thesis supervisor
  • Guillaume WISNIEWSKI, Université Paris Cité, thesis supervisor

Abstract :

Traditional linguistic theories have long posited that human language competence is founded on innate structural properties and symbolic representations. However, Transformer-based language models, which learn language representations from unannotated text, have excelled in various NLP tasks without explicitly modeling such linguistic priors. Their empirical success challenges these long-standing linguistic assumptions and raises questions about the mechanisms underlying the models' linguistic competence. This thesis seeks to determine whether Transformer models primarily rely on surface-level patterns to represent syntactic structures, or whether they also implicitly capture more abstract rules. The study serves two main objectives: i) assessing the potential of an autoregressive Transformer language model as an explanatory tool for human syntactic processing; ii) enhancing the model's interpretability. To achieve these goals, we assess syntactic abstraction in Transformer models on two levels: first, the ability to represent hierarchical structures, and second, the ability to generalize compositionally to unobserved combinations of observed structures. We introduce an integrated, linguistically informed analysis framework consisting of three interrelated layers: the analysis starts by assessing the model's performance on syntactic challenge sets to see how closely it mirrors human language behavior; linguistic probes then assess how well the model's internal representations align with established linguistic theories; finally, causal interventions test whether the information uncovered by the probes actually contributes to the model's predictions. Our findings reveal that Transformers manage to represent hierarchical structures, supporting nuanced syntactic generalization. However, instead of relying on systematic compositional rules, they seem to lean towards lexico-categorical abstraction and structural analogies. While this allows them to handle a sophisticated form of grammatical productivity for familiar structures, they struggle with structures that require a systematic application of compositional rules. This study highlights both the promise and the potential limitations of autoregressive Transformer models as explanatory tools for human syntactic processing, and it provides a methodological framework for their analysis and interpretation.
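
To make the probing layer of this framework concrete, here is a minimal sketch of a diagnostic probe. It is not the thesis's actual code: the model choice (GPT-2), the layer index, and the two toy agreement sentences are illustrative assumptions. A linear classifier is trained on frozen hidden states to predict the grammatical number of the subject at the verb position; held-out probe accuracy is then read as a proxy for how linearly accessible that syntactic feature is.

    # Illustrative probing sketch, not the thesis code: GPT-2, layer 6, and the
    # toy agreement sentences below are assumptions for demonstration purposes.
    import torch
    from sklearn.linear_model import LogisticRegression
    from transformers import GPT2Model, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2Model.from_pretrained("gpt2").eval()

    # Toy long-distance agreement items: (sentence, verb token, number label).
    examples = [
        ("The keys to the cabinet are on the table", " are", 1),  # plural
        ("The key to the cabinets is on the table", " is", 0),    # singular
    ]

    def verb_state(sentence, verb, layer=6):
        """Hidden state of the verb token at the given layer of the frozen model."""
        enc = tokenizer(sentence, return_tensors="pt")
        with torch.no_grad():
            out = model(**enc, output_hidden_states=True)
        verb_id = tokenizer.encode(verb)[0]            # a single BPE token here
        idx = enc["input_ids"][0].tolist().index(verb_id)
        return out.hidden_states[layer][0, idx].numpy()

    # Fit a linear probe on the frozen representations; with a real controlled
    # dataset, held-out accuracy measures how linearly decodable number is.
    X = [verb_state(s, v) for s, v, _ in examples]
    y = [label for _, _, label in examples]
    probe = LogisticRegression(max_iter=1000).fit(X, y)

In practice such a probe would be trained on a large controlled dataset and compared across layers and against control baselines; the causal-intervention layer of the framework then manipulates these representations to check whether the decoded information is actually used by the model.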

Bibliography

Bingzhi Li, Lucia Donatelli, Alexander Koller, Tal Linzen, Yuekun Yao, and Najoung Kim. 2023. SLOG: A Structural Generalization Benchmark for Semantic Parsing. Accepted at EMNLP 2023.

Bingzhi Li, Guillaume Wisniewski, and Benoît Crabbé. 2022a. Assessing the Capacity of Transformer to Abstract Syntactic Representations: A Contrastive Analysis Based on Long-distance Agreement. Transactions of the Association for Computational Linguistics, 10.

Bingzhi Li, Guillaume Wisniewski, and Benoît Crabbé. 2022b. How Distributed are Distributed Representations? An Observation on the Locality of Syntactic Information in Verb Agreement Tasks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 501–507, Dublin, Ireland. Association for Computational Linguistics.

Bingzhi Li, Guillaume Wisniewski, and Benoît Crabbé. 2022c. Les représentations distribuées sont-elles vraiment distribuées ? Observations sur la localisation de l’information syntaxique dans les tâches d’accord du verbe en français. In Traitement Automatique des Langues Naturelles (TALN), pages 384–391, Avignon, France. ATALA.

Bingzhi Li, Guillaume Wisniewski, and Benoît Crabbé. 2021. Are Transformers a Modern Version of ELIZA? Observations on French Object Verb Agreement. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4599–4610, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Bingzhi Li and Guillaume Wisniewski. 2021. Are Neural Networks Extracting Linguistic Properties or Memorizing Training Data? An Observation with a Multilingual Probe for Predicting Tense. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3080–3089, Online. Association for Computational Linguistics.