I am an Assistant Professor of Linguistics and Data Science at New York University. I direct the Computation and Psycholinguistics Lab, which develops computational models of human language comprehension and acquisition, as well as methods for interpreting and evaluating neural network models for natural language processing.
I do not offer internships.
December 10: Keynote talk at COLING.
September: I gave a keynote talk at AMLaP (recording).
April: The preprint of the review I wrote with Marco Baroni, Syntactic Structure from Deep Learning, for the Annual Review of Linguistics, is now online.
April: Papers accepted to ACL: cross-linguistic syntactic evaluation of language models, data augmentation for robustness to inference heuristics, constituency vs. dependency tree-LSTMs, and an opinion piece on evaluating models for human-like linguistic generalization (video).
Raquel Fernández and I are organizing CoNLL 2020, with a renewed focus on "theoretically, cognitively and scientifically motivated approaches to computational linguistics"!
Tal Linzen (2020). How can we accelerate progress towards human-like linguistic generalization? ACL. [pdf]
R. Thomas McCoy, Robert Frank & Tal Linzen (2020). Does syntax need to grow on trees? Sources of hierarchical inductive bias in sequence-to-sequence networks. TACL. [arXiv]
Tal Linzen & T. Florian Jaeger (2016). Uncertainty and expectation in sentence processing: Evidence from subcategorization distributions. Cognitive Science. [pdf]
Tal Linzen, Emmanuel Dupoux & Yoav Goldberg (2016). Assessing the ability of LSTMs to learn syntax-sensitive dependencies. TACL. [pdf]
How can we accelerate progress towards human-like linguistic generalization? (ACL position piece; July 2020).
Neural networks as a framework for modeling human syntactic processing (AMLaP keynote; September 2020).
Talk at Allen Institute for Artificial Intelligence (December 2018).