Over the last half-decade, deep learning has disrupted the methodology of language generation, leading to commercial successes, blue-sky tasks, and broad interest in a difficult area. The hero (or culprit) in this story is a single model class that is leveraged, with surprisingly few alterations, across all of these applications. In this talk, I will present collaborative research into applying, extending, analyzing, implementing, and deploying this model for real-world language generation tasks. I will then describe some core issues with this approach and present ongoing work integrating probabilistic models with deep learning to target specific challenges in language generation.
Alexander "Sasha" Rush is an Assistant Professor at Harvard University, where he studies natural language processing and machine learning. Sasha received his PhD from MIT, supervised by Michael Collins, and was a postdoc at Facebook NY under Yann LeCun. His group supports open-source development, running several projects including OpenNMT. His work has received five paper and demo awards at major NLP and visualization conferences, as well as faculty awards from Google, Facebook, Bloomberg, Sony, and Amazon. He is currently the senior program chair of ICLR 2019.