Learning from Language

Natural language is built from a library of concepts and compositional operators that provide a rich source of information about how humans understand the world. Can this information help us build better machine learning models? In this talk, we’ll explore three ways of integrating compositional linguistic structure and learning: using language as a source of modular reasoning operators for question answering, as a scaffold for fast and generalizable reinforcement learning, and as a tool for understanding representations in neural networks.

Prof. Jacob Andreas

Assistant Professor, MIT

May 3, 2019 at 11:45 AM in EB2 1230

Jacob Andreas is an assistant professor at MIT and a senior researcher at Microsoft Semantic Machines. His research focuses on language learning as a window into reasoning, planning, and perception, and on more general machine learning problems involving compositionality and modularity. Jacob earned his Ph.D. from UC Berkeley, his M.Phil. from Cambridge (where he studied as a Churchill Scholar), and his B.S. from Columbia. He has been the recipient of an NSF graduate fellowship, a Facebook fellowship, and paper awards at NAACL and ICML.

Interdisciplinary Distinguished Seminar Series

The Department of Electrical and Computer Engineering hosts a regularly scheduled seminar series featuring preeminent researchers from across the US and around the world, to help promote North Carolina as a center of innovation and knowledge and to safeguard its place as a leader in research.