Tal Linzen, Emmanuel Dupoux & Yoav Goldberg (2016). Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics 4, 521–535.
Here's a video of a talk I gave in December 2018 at the Allen Institute for Artificial Intelligence.
Before joining JHU, I was a postdoctoral researcher in LSCP and IJN in the Cognitive Science Department at the École Normale Supérieure in Paris, where I worked with Emmanuel Dupoux and Benjamin Spector.
I obtained my Ph.D. in Linguistics in September 2015 from New York University, under the supervision of Alec Marantz. During my Ph.D., I also collaborated with Gillian Gallagher, Maria Gouskova and Liina Pylkkänen at NYU, as well as with Florian Jaeger at the University of Rochester.
Before that, I obtained a B.Sc. in Mathematics and Linguistics and an M.A. in Linguistics (with Mira Ariel), both from Tel Aviv University, and worked as a data scientist and software engineer.
August 28: Paper on using syntactic priming to study neural network representations accepted to CoNLL.
August 13: Paper on the data efficiency of neural network language models accepted to EMNLP.
August 11: Keynote talk at the Conference on Formal Grammar in Riga.
August 7: Received an NSF award.
August 5-9: Participated in the Workshop on Compositionality in Brains and Machines in Leiden.
August 1: Co-organized the BlackboxNLP 2019 workshop at ACL.
July: Yoav Goldberg and I were awarded a grant by the US-Israel Binational Science Foundation.
May 24: Talks at Waseda University and the RIKEN Institute in Tokyo.
May 13/14: Paper on syntactic heuristics in neural natural language inference systems accepted to ACL.
May 3: Keynote talk at Midwest Speech and Language Days.
April 24: Talk at the Cognitive Talk Series at Princeton.
April 18: Talk at the CompLang seminar at MIT.
March 13: I received a Google Faculty Research Award.
February 22: Paper with Shauli Ravfogel and Yoav Goldberg accepted to NAACL.
February 8: LTI colloquium talk at Carnegie Mellon University.
December: Paper on interpreting neural network internal representations using tensor products accepted to ICLR.
Winter break talks: Yale Linguistics (Dec 10), Microsoft Research Redmond (Dec 13), Allen Institute for Artificial Intelligence (Dec 14), Google New York (Jan 7).
November 1: Co-organized the workshop on analyzing and interpreting neural networks for NLP (at EMNLP).