Speakers of a language generalize their knowledge of syntax in a systematic way to constructions they have never encountered before. This observation has motivated the influential position in linguistics that humans are innately endowed with syntax-specific inductive biases. The applied success of deep learning systems that are not designed with such biases invites a reconsideration of this position. In this talk, Prof. Tal Linzen, Assistant Professor of Linguistics and Data Science at NYU, will review work that uses paradigms from psycholinguistics to examine the syntactic generalization capabilities of contemporary neural network architectures. Alongside some successes, this work suggests that human-like generalization requires stronger inductive biases than those expressed in standard neural network architectures.