Tal Linzen

In progress

R. Thomas McCoy, Paul Smolensky, Tal Linzen, Jianfeng Gao & Asli Celikyilmaz. How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN. [arXiv]

Thibault Sellam, Steve Yadlowsky, Jason Wei, Naomi Saphra, Alexander D'Amour, Tal Linzen, Jasmijn Bastings, Iulia Turc, Jacob Eisenstein, Dipanjan Das, Ian Tenney & Ellie Pavlick. The MultiBERTs: BERT reproductions for robustness analysis. [arXiv]

Nouha Dziri, Hannah Rashkin, Tal Linzen & David Reitter. Evaluating groundedness in dialogue systems: the BEGIN benchmark. [arXiv]


2021

Grusha Prasad & Tal Linzen (2021). Rapid syntactic adaptation in self-paced reading: detectable, but only with many participants. Journal of Experimental Psychology: Learning, Memory, and Cognition. [pdf] [link] [PsyArXiv]

Jason Wei, Dan Garrette, Tal Linzen & Ellie Pavlick (2021). Frequency effects on syntactic rule learning in Transformers. EMNLP. [link] [pdf]

Alicia Parrish, William Huang, Omar Agha, Soo-Hwan Lee, Nikita Nangia, Alex Warstadt, Karmanya Aggarwal, Emily Allaway, Tal Linzen & Samuel R. Bowman (2021). Does putting a linguist in the loop improve NLU data collection? Findings of EMNLP. [link] [pdf]

Alicia Parrish*, Sebastian Schuster*, Alex Warstadt*, Omar Agha, Soo-Hwan Lee, Zhuoye Zhao, Samuel R. Bowman & Tal Linzen (2021). NOPE: A corpus of naturally-occurring presuppositions in English. CoNLL. [pdf] [link]

Shauli Ravfogel*, Grusha Prasad*, Tal Linzen & Yoav Goldberg (2021). Counterfactual interventions reveal the causal effect of relative clause representations on agreement prediction. CoNLL. [pdf] [link]

Laura Aina & Tal Linzen (2021). The language model understood the prompt was ambiguous: probing syntactic uncertainty through generation. BlackboxNLP. [pdf] [link]

Matthew Finlayson, Aaron Mueller, Stuart Shieber, Sebastian Gehrmann, Tal Linzen & Yonatan Belinkov (2021). Causal analysis of syntactic agreement mechanisms in neural language models. ACL. [link] [pdf]

Marten van Schijndel & Tal Linzen (2021). Single-stage prediction models do not explain the magnitude of syntactic disambiguation difficulty. Cognitive Science. [link] [pdf]

Charles Lovering, Rohan Jha, Tal Linzen & Ellie Pavlick (2021). Predicting inductive biases of pre-trained models. ICLR. [link] [pdf] [bib]

Karl Mulligan, Robert Frank & Tal Linzen (2021). Structure here, bias there: Hierarchical generalization by jointly learning syntactic transformations. Society for Computation in Linguistics. [bib] [link] [pdf]

Tal Linzen & Marco Baroni (2021). Syntactic structure from deep learning. Annual Review of Linguistics. [bib] [link] [pdf]


2020

Paul Soulos, R. Thomas McCoy, Tal Linzen & Paul Smolensky (2020). Discovering the compositional structure of vector representations with role learning networks. BlackboxNLP. [link] [pdf]

R. Thomas McCoy, Junghyun Min & Tal Linzen (2020). BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance. BlackboxNLP. [link] [pdf]

Najoung Kim & Tal Linzen (2020). COGS: A compositional generalization challenge based on semantic interpretation. EMNLP. [link] [pdf] [bib]

R. Thomas McCoy, Erin Grant, Paul Smolensky, Tom Griffiths & Tal Linzen (2020). Universal linguistic inductive biases via meta-learning. Cognitive Science Society. [bib] [link] [pdf]

Suhas Arehalli & Tal Linzen (2020). Neural language models capture some, but not all, agreement attraction effects. Cognitive Science Society. [bib] [link] [pdf]

Naomi Havron, Camila Scaff, Maria Julia Carbajal, Tal Linzen, Axel Barrault & Anne Christophe (2020). Priming syntactic ambiguity resolution in children and adults. Language, Cognition and Neuroscience. [link] [pdf]

Tal Linzen (2020). How can we accelerate progress towards human-like linguistic generalization? ACL. [pdf] [bib]

Aaron Mueller, Garrett Nicolai, Panayiota Petrou-Zeniou, Natalia Talmina & Tal Linzen (2020). Cross-linguistic syntactic evaluation of word prediction models. ACL. [link] [pdf] [bib]

Junghyun Min, R. Thomas McCoy, Dipanjan Das, Emily Pitler & Tal Linzen (2020). Syntactic data augmentation increases robustness to inference heuristics. ACL. [pdf] [arXiv] [bib]

Michael Lepori, Tal Linzen & R. Thomas McCoy (2020). Representations of syntax [MASK] useful: Effects of constituency and dependency structure in recursive LSTMs. ACL. [link] [pdf] [bib]

R. Thomas McCoy, Robert Frank & Tal Linzen (2020). Does syntax need to grow on trees? Sources of hierarchical inductive bias in sequence-to-sequence networks. Transactions of the Association for Computational Linguistics, 8, 125–140. [link] [pdf] [bib]

Natalia Talmina & Tal Linzen (2020). Neural network learning of the Russian genitive of negation: optionality and structure sensitivity. Society for Computation in Linguistics (SCiL), 21. [link] [pdf] [bib]


2019

Marten van Schijndel, Aaron Mueller & Tal Linzen (2019). Quantity doesn't buy quality syntax with neural language models. EMNLP. [link] [pdf] [bib]

Grusha Prasad, Marten van Schijndel & Tal Linzen (2019). Using priming to uncover the organization of syntactic representations in neural language models. CoNLL. [pdf] [bib]

R. Thomas McCoy, Ellie Pavlick & Tal Linzen (2019). Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. ACL. [link] [pdf] [bib]

Afra Alishahi, Grzegorz Chrupała & Tal Linzen (2019). Analyzing and interpreting neural networks for NLP: A report on the first BlackboxNLP workshop. Natural Language Engineering, 25(4), 543–557. [link] [pdf] [bib]

Brenden Lake, Tal Linzen & Marco Baroni (2019). Human few-shot learning of compositional instructions. Cognitive Science Society. [link] [pdf] [bib]

Shauli Ravfogel, Yoav Goldberg & Tal Linzen (2019). Studying the inductive biases of RNNs with synthetic variations of natural languages. NAACL. [link] [pdf] [bib]

Najoung Kim, Roma Patel, Adam Poliak, Alex Wang, Patrick Xia, R. Thomas McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bowman & Ellie Pavlick (2019). Probing what different NLP tasks teach machines about function word comprehension. Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019), pages 235–249. [link] [pdf] [bib]

R. Thomas McCoy, Tal Linzen, Ewan Dunbar & Paul Smolensky (2019). RNNs implicitly implement tensor product representations. ICLR. [arXiv] [pdf]

Tal Linzen (2019). What can linguistics and deep learning contribute to each other? Response to Pater. Language. [link] [pdf] [bib]

R. Thomas McCoy & Tal Linzen (2019). Non-entailed subsequences as a challenge for natural language inference. Society for Computation in Linguistics (SCiL) (extended abstract). [pdf] [bib]

Marten van Schijndel & Tal Linzen (2019). Can entropy explain successor surprisal effects in reading? Society for Computation in Linguistics (SCiL). [pdf] [bib]


2018

Rebecca Marvin & Tal Linzen (2018). Targeted syntactic evaluation of language models. EMNLP. [link] [pdf] [video] [bib]

Marten van Schijndel & Tal Linzen (2018). A neural model of adaptation in reading. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018), pages 4704–4710. [link] [pdf] [video] [bib]

Tal Linzen & Yohei Oseki (2018). The reliability of acceptability judgments across languages. Glossa: a journal of general linguistics, 3(1), 100. [link] [pdf] [bib] [data]

Laura Gwilliams, Tal Linzen, David Poeppel & Alec Marantz (2018). In spoken word recognition, the future predicts the past. Journal of Neuroscience 38(35), 7585–7599. [link] [pdf] [bib]

Tal Linzen & Brian Leonard (2018). Distinct patterns of syntactic agreement errors in recurrent networks and humans. Proceedings of the 40th Annual Conference of the Cognitive Science Society, pages 692–697. [link] [pdf] [bib]

Marten van Schijndel & Tal Linzen (2018). Modeling garden path effects without explicit hierarchical syntax. Proceedings of the 40th Annual Conference of the Cognitive Science Society, pages 2600–2605. [link] [pdf] [bib]

R. Thomas McCoy, Robert Frank & Tal Linzen (2018). Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks. Proceedings of the 40th Annual Conference of the Cognitive Science Society, pages 2093–2098. [link] [pdf] [bib]

Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen & Marco Baroni (2018). Colorless green recurrent networks dream hierarchically. Proceedings of the 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 1195–1205. [link] [pdf] [bib]

Laura Gwilliams, David Poeppel, Alec Marantz & Tal Linzen (2018). Phonological (un)certainty weights lexical activation. Proceedings of the 8th Workshop on Cognitive Modeling and Computational Linguistics (CMCL), pages 29–34. [link] [pdf] [bib]

James White, René Kager, Tal Linzen, Giorgos Markopoulos, Alexander Martin, Andrew Nevins, Sharon Peperkamp, Krisztina Polgárdi, Nina Topintzi & Ruben van de Vijver (2018). Preference for locality is affected by the prefix/suffix asymmetry: Evidence from artificial language learning. Proceedings of the 48th Annual Meeting of the North East Linguistic Society (NELS), pages 207–220. [pdf]

Itamar Kastner & Tal Linzen (2018). A morphosyntactic inductive bias in artificial language learning. Proceedings of the 48th Annual Meeting of the North East Linguistic Society (NELS), pages 81–90. [pdf] [bib]


2017

Tal Linzen & Gillian Gallagher (2017). Rapid generalization in phonotactic learning. Laboratory Phonology: Journal of the Association for Laboratory Phonology 8(1): 24. [link] [pdf] [bib]

Émile Enguehard, Yoav Goldberg & Tal Linzen (2017). Exploring the syntactic abilities of RNNs with multi-task learning. Proceedings of the SIGNLL Conference on Computational Natural Language Learning (CoNLL), pages 3–14. [link] [pdf] [bib]

Tal Linzen, Noam Siegelman & Louisa Bogaerts (2017). Prediction and uncertainty in an artificial language. Proceedings of the 39th Annual Conference of the Cognitive Science Society, pages 2592–2597. [link] [pdf] [bib]

Gaël Le Godais, Tal Linzen & Emmanuel Dupoux (2017). Comparing character-level neural language models using a lexical decision task. Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL): Volume 2, Short Papers, pages 125–130. [link] [pdf] [bib]


2016

Tal Linzen, Emmanuel Dupoux & Yoav Goldberg (2016). Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics 4, 521–535. [link] [pdf] [bib]

Allyson Ettinger & Tal Linzen (2016). Evaluating vector space models using human semantic priming results. Proceedings of the First Workshop on Evaluating Vector Space Representations for NLP, 72–77. [link] [pdf] [bib]

Tal Linzen (2016). Issues in evaluating semantic spaces using word analogies. Proceedings of the First Workshop on Evaluating Vector Space Representations for NLP, 13–18. [link] [pdf] [bib]

Tal Linzen, Emmanuel Dupoux & Benjamin Spector (2016). Quantificational features in distributional word representations. Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics (*SEM 2016), 1–11. [link] [pdf] [bib]

Einat Shetreet, Tal Linzen & Naama Friedmann (2016). Against all odds: exhaustive activation in lexical access of verb complementation options. Language, Cognition and Neuroscience 31(9), 1206–1214. [link] [pdf] [bib]

Tal Linzen (2016). The diminishing role of inalienability in the Hebrew Possessive Dative. Corpus Linguistics and Linguistic Theory 12(2), 325–354. [link] [pdf] [bib]

Tal Linzen & Florian Jaeger (2016). Uncertainty and expectation in sentence processing: Evidence from subcategorization distributions. Cognitive Science 40(6), 1382–1411. [link] [pdf] [bib]


2015

Joseph Fruchter*, Tal Linzen*, Masha Westerlund & Alec Marantz (2015). Lexical preactivation in basic linguistic phrases. Journal of Cognitive Neuroscience 27(10), 1912–1935. (* indicates equal contribution.) [link] [pdf] [bib]

Tal Linzen & Timothy O'Donnell (2015). A model of rapid phonotactic generalization. Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015), 1126–1131. [link] [pdf] [bib]

Maria Gouskova & Tal Linzen (2015). Morphological conditioning of phonological regularization. The Linguistic Review 32(3), 427–473. [link] [lingbuzz] [pdf] [bib]

Mira Ariel, Elitzur Dattner, John Du Bois & Tal Linzen (2015). Pronominal datives: The royal road to argument status. Studies in Language 39(2), 257–321. [link] [pdf] [bib]


2014

Tal Linzen & Florian Jaeger (2014). Investigating the role of entropy in sentence processing. Proceedings of the 2014 ACL Workshop on Cognitive Modeling and Computational Linguistics (CMCL), 10–18. [link] [pdf] [bib]

Tal Linzen & Gillian Gallagher (2014). The timecourse of generalization in phonotactic learning. Proceedings of Phonology 2013, ed. John Kingston, Claire Moore-Cantwell, Joe Pater, and Robert Staub. Washington, DC: Linguistic Society of America. [link] [pdf] [bib]

Tal Linzen (2014). Parallels between cross-linguistic and language-internal variation in Hebrew possessive constructions. Linguistics 52(3), 759–792. [link] [pdf] [bib]

Allyson Ettinger, Tal Linzen & Alec Marantz (2014). The role of morphology in phoneme prediction: Evidence from MEG. Brain and Language 129, 14–23. [link] [pdf] [bib]


2013

Tal Linzen, Alec Marantz & Liina Pylkkänen (2013). Syntactic context effects in single word recognition: An MEG study. The Mental Lexicon 8(2), 117–139. [link] [pdf] [bib]

Tal Linzen, Sophia Kasyanenko & Maria Gouskova (2013). Lexical and phonological variation in Russian prepositions. Phonology 30(3), 453–515. [lingbuzz] [link] [pdf] [code and data] [bib]