Tal Linzen

I am an Assistant Professor of Linguistics and Data Science at New York University.

News

September: I gave a keynote talk at AMLaP on "Neural networks as a framework for modeling human syntactic processing". There's a video recording of the talk on Twitch (!).

July: We are presenting two papers at CogSci, one on inductive biases through meta-learning and the other on agreement errors in humans and RNNs.

April: The preprint of the review I wrote with Marco Baroni, "Syntactic Structure from Deep Learning", for the Annual Review of Linguistics, is now online.

April: Four papers accepted to ACL: cross-linguistic syntactic evaluation of language models, data augmentation for robustness to inference heuristics, constituency vs. dependency tree-LSTMs, and an opinion piece on evaluation for 'human-like' linguistic generalization.

Raquel Fernández and I are organizing CoNLL 2020, with a renewed focus on "theoretically, cognitively and scientifically motivated approaches to computational linguistics"!

Contact

linzen@nyu.edu
Office 704
60 5th Avenue
New York, NY 10011

Representative publications

R. Thomas McCoy, Robert Frank & Tal Linzen (2020). Does syntax need to grow on trees? Sources of hierarchical inductive bias in sequence-to-sequence networks. Transactions of the Association for Computational Linguistics 8, 125–140. [arXiv]

Tal Linzen (2019). What can linguistics and deep learning contribute to each other? Response to Pater. Language 95(1), e98–e108. [link] [pdf]

Tal Linzen & T. Florian Jaeger (2016). Uncertainty and expectation in sentence processing: Evidence from subcategorization distributions. Cognitive Science 40(6), 1382–1411. [link] [pdf] [bib]

Tal Linzen, Emmanuel Dupoux & Yoav Goldberg (2016). Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics 4, 521–535. [link] [pdf] [bib]

Here's a video of a talk I gave in December 2018 at the Allen Institute for Artificial Intelligence.