I am an Assistant Professor of Linguistics and Data Science at New York University. I direct the Computation and Psycholinguistics Lab, which develops computational models of human language comprehension and acquisition, as well as methods for interpreting and evaluating neural network models for natural language processing.
I am accepting Ph.D. students through the Data Science program, to start in Fall 2021. I do not currently offer internships.
September: I gave a keynote talk at AMLaP on "Neural networks as a framework for modeling human syntactic processing".
April: The preprint of the review I wrote with Marco Baroni, Syntactic Structure from Deep Learning, for the Annual Review of Linguistics, is now online.
April: Four papers accepted to ACL: cross-linguistic syntactic evaluation of language models, data augmentation for robustness to inference heuristics, constituency vs. dependency tree-LSTMs, and an opinion piece on evaluating 'human-like' linguistic generalization.
Raquel Fernández and I are organizing CoNLL 2020, with a renewed focus on "theoretically, cognitively and scientifically motivated approaches to computational linguistics."
R. Thomas McCoy, Robert Frank & Tal Linzen (2020). Does syntax need to grow on trees? Sources of hierarchical inductive bias in sequence-to-sequence networks. Transactions of the Association for Computational Linguistics. [arXiv]
Tal Linzen, Emmanuel Dupoux & Yoav Goldberg (2016). Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics 4, 521–535. [link] [pdf] [bib]
Here's a video of a talk I gave in December 2018 at the Allen Institute for Artificial Intelligence.