I am an Assistant Professor of Linguistics and Data Science at New York University. I direct the Computation and Psycholinguistics Lab, which develops computational models of human language comprehension and acquisition, as well as methods for interpreting and evaluating neural network models for natural language processing.
For Fall 2021, I am accepting Ph.D. students through the Data Science program. The application deadline is December 12. I especially welcome applications from prospective students who are interested in the computational study of human language processing and acquisition, and in how such computational study can contribute to the development of robust and data-efficient artificial intelligence systems. To keep things fair for all applicants, I generally don’t meet with prospective students outside of the admissions process, but I am happy to answer questions not addressed by the Ph.D. program website.
I do not currently offer internships.
September: I gave a keynote talk at AMLaP (recording).
April: The preprint of the review I wrote with Marco Baroni on Syntactic Structure from Deep Learning, for the Annual Review of Linguistics, is now online.
April: Four papers accepted to ACL 2020: cross-linguistic language model syntactic evaluation, data augmentation for robustness to inference heuristics, constituency vs. dependency tree-LSTMs, and an opinion piece on evaluation for 'human-like' linguistic generalization (video).
Raquel Fernández and I are organizing CoNLL 2020, with a renewed focus on "theoretically, cognitively and scientifically motivated approaches to computational linguistics"!
Tal Linzen (2020). How can we accelerate progress towards human-like linguistic generalization? ACL. [pdf]
R. Thomas McCoy, Robert Frank & Tal Linzen (2020). Does syntax need to grow on trees? Sources of hierarchical inductive bias in sequence-to-sequence networks. TACL. [arXiv]
Tal Linzen & T. Florian Jaeger (2016). Uncertainty and expectation in sentence processing: Evidence from subcategorization distributions. Cognitive Science. [pdf]
Tal Linzen, Emmanuel Dupoux & Yoav Goldberg (2016). Assessing the ability of LSTMs to learn syntax-sensitive dependencies. TACL. [pdf]
How can we accelerate progress towards human-like linguistic generalization? (ACL position piece; July 2020).
Neural networks as a framework for modeling human syntactic processing (AMLaP keynote; September 2020).
Talk at Allen Institute for Artificial Intelligence (December 2018).