Natural Language Processing Master’s Degree

Invited Speakers

Register here for both talks.

Abstract: Latent structure models are a powerful tool for modelling compositional data, discovering linguistic structure, and building NLP pipelines. They are appealing for two main reasons: they allow incorporating structural bias during training, leading to more accurate models; and they allow discovering hidden linguistic structure, which improves interpretability and can benefit downstream tasks such as translation and semantic parsing.

This tutorial will cover recent advances in discrete latent structure models. We discuss their motivation, potential, and limitations, then explore in detail three strategies for designing such models: gradient approximation, reinforcement learning, and end-to-end differentiable methods. We highlight connections among all these methods, enumerating their strengths and weaknesses. The models we present and analyze have been applied to a wide variety of NLP tasks, including sentiment analysis, natural language inference, language modelling, machine translation, and semantic parsing.

Examples and evaluation will be covered throughout. After attending the tutorial, a practitioner will be better informed about which method is best suited for their problem.
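To give a flavour of one of the strategies named above, here is a minimal, self-contained sketch (not taken from the tutorial materials) of the score-function estimator, the core gradient estimator behind the reinforcement-learning approach to discrete latent variables. The setup is an illustrative assumption: a single Bernoulli latent variable with logit parameter theta, where we estimate the gradient of an expected reward without differentiating through the discrete sample.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def score_function_gradient(theta, f, num_samples=50000, seed=0):
    """Monte Carlo estimate of d/dtheta E_{z ~ Bernoulli(p)}[f(z)],
    with p = sigmoid(theta), via the score-function (REINFORCE) trick:
    E[f(z) * d log P(z; theta) / dtheta]."""
    rng = random.Random(seed)
    p = sigmoid(theta)
    total = 0.0
    for _ in range(num_samples):
        z = 1 if rng.random() < p else 0
        # For a Bernoulli with logit theta: d log P(z)/dtheta = z - p
        total += f(z) * (z - p)
    return total / num_samples

# Sanity check against the closed form:
# E[f(z)] = p*f(1) + (1-p)*f(0), so the exact gradient is p*(1-p)*(f(1)-f(0)).
```

The estimator is unbiased but high-variance, which is exactly the trade-off (versus biased gradient approximations and relaxed, end-to-end differentiable surrogates) that the tutorial examines.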

Bio: Vlad Niculae is an assistant professor in the Language Technology Lab at the University of Amsterdam. Vlad’s research lies at the intersection of machine learning and natural language processing, building upon techniques from optimization, geometry, and probability in order to develop and analyze better models of language structures and phenomena. Vlad obtained his PhD in Computer Science from Cornell University in 2018, advised by Prof. Claire Cardie. His PhD thesis, “Learning Deep Models With Linguistically-Inspired Structure”, received the Cornell CS Dissertation Award. From 2018 until 2020, Vlad worked as a post-doctoral researcher in the DeepSPIN project (Deep Structured Prediction for Natural Language Processing) at the Instituto de Telecomunicações, Lisbon, Portugal. He is an alumnus of the University of Bucharest, where he was introduced to academic research by Prof. Liviu P. Dinu.

Bio: Tsvetomila Mihaylova is a PhD student in the DeepSPIN project at the Instituto de Telecomunicações in Lisbon, Portugal, supervised by Vlad Niculae and André Martins. She works on machine learning and natural language processing; her current research focuses on understanding latent structure models and applying them to practical problems. She holds a master’s degree in Information Retrieval from Sofia University, where she was supervised by Dr. Preslav Nakov. She has also been a teaching assistant for Artificial Intelligence and co-organized a shared task on fact-checking at SemEval 2019.

Slides: https://deep-spin.github.io/tutorial/


Register here for the talk.

Abstract: Can conversational dynamics — the nature of the back-and-forth between people — predict the outcomes of social interactions? This talk will describe efforts to develop an artificial intuition about ongoing conversations by modeling the subtle pragmatic and rhetorical choices of the participants. The resulting framework distills emerging conversational patterns that can point to the nature of the social relation between interlocutors, as well as to the future trajectory of this relation. For example, I will discuss how interactional dynamics can be used to foretell whether an online conversation will stay on track or eventually derail into personal attacks, giving community moderators several hours of advance notice before an antisocial event is likely to occur. The data and code are available through the Cornell Conversational Analysis Toolkit (ConvoKit). This talk includes joint work with Jonathan P. Chang, Lucas Dixon, Liye Fu, Yiqing Hua, Dan Jurafsky, Lillian Lee, Jure Leskovec, Vlad Niculae, Chris Potts, Arthur Spirling, Dario Taraborelli, Nithum Thain, and Justine Zhang.

Short bio: Cristian Danescu-Niculescu-Mizil is an associate professor in the information science department at Cornell University. His research aims to develop computational methods that can lead to a better understanding of our conversational practices, supporting tools that can improve the way we communicate online. He is the recipient of several awards — including an NSF CAREER Award, the WWW 2013 Best Paper Award, a CSCW 2017 Best Paper Award, and two Google Faculty Research Awards — and his work has been featured in popular media outlets such as The Wall Street Journal, NBC’s The Today Show, NPR, and The New York Times.