Tal Linzen

I am an Assistant Professor of Cognitive Science and Computer Science at Johns Hopkins University. I am also affiliated with the JHU Center for Language and Speech Processing.

I direct the JHU Computation and Psycholinguistics Lab. We're hiring postdoctoral researchers!

Representative publications

Tal Linzen (2019). What can linguistics and deep learning contribute to each other? Response to Pater. Language 95(1), e98–e108. [link] [pdf]

Tal Linzen & Florian Jaeger (2016). Uncertainty and expectation in sentence processing: Evidence from subcategorization distributions. Cognitive Science 40(6), 1382–1411. [link] [pdf] [bib]

Tal Linzen, Emmanuel Dupoux & Yoav Goldberg (2016). Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics 4, 521–535. [link] [pdf] [bib]

Here's a video of a talk I gave in December 2018 at the Allen Institute for Artificial Intelligence.

Short bio

Before joining JHU, I was a postdoctoral researcher at the LSCP and IJN labs in the Cognitive Science Department of the École Normale Supérieure in Paris, where I worked with Emmanuel Dupoux and Benjamin Spector.

I obtained my Ph.D. in Linguistics in September 2015 from New York University, under the supervision of Alec Marantz. During my Ph.D., I also collaborated with Gillian Gallagher, Maria Gouskova and Liina Pylkkänen at NYU, as well as with Florian Jaeger at the University of Rochester.

Before that, I obtained a B.Sc. in Mathematics and Linguistics and an M.A. in Linguistics (with Mira Ariel), both from Tel Aviv University, and worked as a data scientist and software engineer.

Contact

tal.linzen@jhu.edu
243 Krieger Hall
Cognitive Science Department
Johns Hopkins University
3400 N. Charles Street
Baltimore, MD 21218

News

November 4: Paper on syntactic priming for studying neural network representations received an honorable mention for the Best Paper Award for Research Inspired by Human Language Learning and Processing at CoNLL.

August 13: Paper on data efficiency of neural network language models accepted to EMNLP.

August 11: Keynote talk at the Conference on Formal Grammar in Riga.

August 7: Received an NSF award.

August 5-9: Participated in the Workshop on Compositionality in Brains and Machines in Leiden.

August 1: Co-organized the BlackboxNLP 2019 workshop at ACL.

July: Yoav Goldberg and I were awarded a grant by the US-Israel Binational Science Foundation.

June 6: Keynote talk at the Workshop on Evaluating Vector Space Representations for NLP (RepEval) at NAACL. Slides.

May 24: Talks at Waseda University and the RIKEN institute in Tokyo.

May 13/14: Paper on syntactic heuristics in neural natural language inference systems accepted to ACL.

May 3: Keynote talk at Midwest Speech and Language Days.

April 24: Talk at the Cognitive Talk Series at Princeton.

April 18: Talk at the CompLang seminar at MIT.

March 13: I received a Google Faculty Research Award.

February 22: Paper with Shauli Ravfogel and Yoav Goldberg accepted to NAACL.

February 8: LTI colloquium talk at Carnegie Mellon University.

December: Paper on interpreting neural network internal representations using tensor products accepted to ICLR.

Winter break talks: Yale Linguistics (Dec 10), Microsoft Research Redmond (Dec 13), Allen AI Institute (Dec 14), Google New York (Jan 7).

November 1: Co-organized the workshop on analyzing and interpreting neural networks for NLP (at EMNLP).