We are in the midst of a revolution driven by models of learning in neural networks. In this symposium we will explore these models, their application to the deep networks that are the state of the art in language processing by machines, and the relation of these ideas to the extraordinary ability of human infants to learn language.
10:00 - 11:30 am Learning to understand: Statistical learning and infant language development
Jenny Saffran, University of Wisconsin
12:00 - 1:30 pm TBD
Surya Ganguli, Stanford University and Facebook AI Research
2:00 - 3:30 pm Neural scaling laws and GPT-3
Jared Kaplan, Johns Hopkins University and OpenAI
Please register for this Zoom event.
Sponsored by the Initiative for the Theoretical Sciences, CUNY doctoral programs in Physics and Biology, and the Center for the Physics of Biological Function.