Natural Language Processing Master’s Degree

Invited Speakers

Abstract: Word embeddings have largely been a “success story” in our field. They have enabled progress in numerous language processing applications and have facilitated large-scale language analyses in other domains, such as the social sciences and humanities. While less talked about, word embeddings also have many shortcomings: instability, lack of transparency, biases, and more. In this talk, I will review the “ups” and “downs” of word embeddings, discuss tradeoffs, and chart potential future research directions to address some of the downsides of these word representations.
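
To make the instability concern concrete, here is a minimal sketch (not from the talk itself) that trains the same word2vec model twice with different random seeds and compares nearest neighbours; the gensim calls are standard, but the toy corpus and parameter choices are purely illustrative.

```python
# Minimal sketch of embedding instability: identical training setup,
# different random seeds, different nearest neighbours.
from gensim.models import Word2Vec
from gensim.test.utils import common_texts  # tiny built-in toy corpus

for seed in (1, 2):
    model = Word2Vec(
        common_texts,   # substitute your own tokenized sentences
        vector_size=50,
        min_count=1,    # keep all words in this tiny corpus
        seed=seed,
        workers=1,      # a single worker keeps each run reproducible
    )
    print(f"seed={seed}:", model.wv.most_similar("computer", topn=3))

# On small corpora the two neighbour lists typically differ,
# illustrating the instability discussed in the talk.
```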

About Rada Mihalcea: Rada Mihalcea is the Janice M. Jenkins Collegiate Professor of Computer Science and Engineering at the University of Michigan and the Director of the Michigan Artificial Intelligence Lab. Her research interests are in computational linguistics, with a focus on lexical semantics, computational social sciences, and multimodal language processing. She serves or has served on the editorial boards of the journals Computational Linguistics, Language Resources and Evaluation, Natural Language Engineering, Journal of Artificial Intelligence Research, IEEE Transactions on Affective Computing, and Transactions of the Association for Computational Linguistics. She was a program co-chair for EMNLP 2009 and ACL 2011, and a general chair for NAACL 2015 and *SEM 2019. She directs multiple diversity and mentorship initiatives, including Girls Encoded and the ACL Year-Round Mentorship program. She currently serves as ACL President. She is the recipient of a Presidential Early Career Award for Scientists and Engineers, awarded by President Obama (2009), and was named an ACM Fellow (2019) and an AAAI Fellow (2021). In 2013, she was made an honorary citizen of her hometown of Cluj-Napoca, Romania.

Register here for both talks.

Abstract: Latent structure models are a powerful tool for modelling compositional data, discovering linguistic structure, and building NLP pipelines. They are appealing for two main reasons: they allow incorporating structural bias during training, leading to more accurate models; and they allow discovering hidden linguistic structure, which can provide better interpretability in applications such as translation and semantic parsing.

This tutorial will cover recent advances in discrete latent structure models. We discuss their motivation, potential, and limitations, then explore in detail three strategies for designing such models: gradient approximation, reinforcement learning, and end-to-end differentiable methods. We highlight connections among all these methods, enumerating their strengths and weaknesses. The models we present and analyze have been applied to a wide variety of NLP tasks, including sentiment analysis, natural language inference, language modelling, machine translation, and semantic parsing.

Examples and evaluation will be covered throughout. After attending the tutorial, a practitioner will be better informed about which method is best suited for their problem.
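
As a taste of the first strategy (gradient approximation), the sketch below uses the straight-through Gumbel-Softmax estimator in PyTorch to backpropagate through a discrete latent choice. This is one standard instance of the family the tutorial covers, not code from the tutorial itself; the tensor shapes and toy loss are illustrative.

```python
# Minimal sketch: straight-through Gumbel-Softmax, one common way to
# approximate gradients through a discrete latent variable.
import torch
import torch.nn.functional as F

logits = torch.randn(4, 3, requires_grad=True)  # scores over 3 latent options

# Forward pass: hard one-hot samples; backward pass: the soft relaxation
# (the straight-through trick), so gradients still reach the logits.
z = F.gumbel_softmax(logits, tau=0.5, hard=True)

# A toy downstream loss that consumes the discrete structure indicator.
loss = (z * torch.arange(3.0)).sum()
loss.backward()

print(z)            # one-hot samples
print(logits.grad)  # non-zero despite the discrete sampling step
```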

Bio: Vlad Niculae is an assistant professor in the Language Technology Lab at the University of Amsterdam. Vlad’s research lies at the intersection of machine learning and natural language processing, building on techniques from optimization, geometry, and probability to develop and analyze better models of language structures and phenomena. Vlad obtained his PhD in Computer Science from Cornell University in 2018, advised by Prof. Claire Cardie. His PhD thesis, “Learning Deep Models With Linguistically-Inspired Structure”, received the Cornell CS Dissertation Award. Afterwards, Vlad worked until 2020 as a post-doctoral researcher in the DeepSPIN project (Deep Structured Prediction for Natural Language Processing) at the Instituto de Telecomunicações in Lisbon, Portugal. He is an alumnus of the University of Bucharest, where he was introduced to academic research by Prof. Liviu P. Dinu.

Bio: Tsvetomila Mihaylova is a PhD student in the DeepSPIN project at the Instituto de Telecomunicações in Lisbon, Portugal, supervised by Vlad Niculae and André Martins. She works on machine learning and natural language processing; her current research focuses on understanding models with latent structure and applying them to practical tasks. She holds a master’s degree in Information Retrieval from Sofia University, where she was supervised by Dr. Preslav Nakov. She has also served as a teaching assistant in Artificial Intelligence and co-organized a shared task on fact-checking at SemEval 2019.

Slides: https://deep-spin.github.io/tutorial/

Register here for the talk.

Abstract: Can conversational dynamics — the nature of the back and forth between people — predict the outcomes of social interactions? This talk will describe efforts to develop an artificial intuition about ongoing conversations by modeling the subtle pragmatic and rhetorical choices of the participants. The resulting framework distills emerging conversational patterns that can point to the nature of the social relation between interlocutors, as well as to the future trajectory of this relation. For example, I will discuss how interactional dynamics can be used to foretell whether an online conversation will stay on track or eventually derail into personal attacks, giving community moderators several hours of advance notice before an antisocial event is likely to occur. The data and code are available through the Cornell Conversational Analysis Toolkit (ConvoKit). This talk includes joint work with Jonathan P. Chang, Lucas Dixon, Liye Fu, Yiqing Hua, Dan Jurafsky, Lillian Lee, Jure Leskovec, Vlad Niculae, Chris Potts, Arthur Spirling, Dario Taraborelli, Nithum Thain, and Justine Zhang.
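
For readers who want to explore the data, a minimal sketch of loading the derailment corpus through ConvoKit follows; the corpus identifier and metadata field name are taken from the ConvoKit documentation, but double-check them against the current release.

```python
# Minimal sketch: load the "Conversations Gone Awry" corpus via ConvoKit.
from convokit import Corpus, download

corpus = Corpus(filename=download("conversations-gone-awry-corpus"))
corpus.print_summary_stats()

# Each conversation carries a label for whether it eventually derailed
# into a personal attack (field name as per the ConvoKit docs).
convo = next(corpus.iter_conversations())
print(convo.meta.get("conversation_has_personal_attack"))
```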

Short bio: Cristian Danescu-Niculescu-Mizil is an associate professor in the information science department at Cornell University. His research aims to develop computational methods that lead to a better understanding of our conversational practices and that support tools for improving the way we communicate online. He is the recipient of several awards—including an NSF CAREER Award, the WWW 2013 Best Paper Award, a CSCW 2017 Best Paper Award, and two Google Faculty Research Awards—and his work has been featured in popular media outlets such as The Wall Street Journal, NBC’s The Today Show, NPR, and The New York Times.