[Archimedes Seminar Series] Uncertainty in NLP: Quantification, interpretation, evaluation and beyond
Dates
2024-07-03 11:00 - 13:00
Venue
Artemidos 1 - Amphitheater
Title: Uncertainty in NLP: Quantification, interpretation, evaluation and beyond.
Presenter: Prof. Chrysoula (Chryssa) Zerva (Instituto Superior Técnico, Portugal)

Abstract: As the popularity, availability (and size) of language models keep increasing, so do their applications to different tasks, rendering them ubiquitous in modern society. This, in turn, raises the question of reliability. We know models don’t always “know what they don’t know” and may generate seemingly convincing answers that are entirely wrong. Hence, being able to quantify the uncertainty over their predictions is a key step towards making language models reliable.
This talk will discuss the challenges of uncertainty estimation for natural language processing, emphasising issues such as multiple sources of uncertainty and limited access to model parameters (black-box models), as well as questions of interpretation and evaluation. I will focus on generation and evaluation tasks, using machine translation as the main paradigm, and discuss how the conformal prediction framework can be leveraged to provide meaningful confidence intervals with statistical guarantees, while also allowing us to calibrate our confidence to obtain more interpretable and fair uncertainty representations.
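For attendees unfamiliar with conformal prediction, the following is a minimal illustrative sketch (in Python) of the basic split conformal recipe applied to a regression-style quality score. The synthetic data, function name and parameters are assumptions made for this example only and do not reflect the specific methods presented in the talk.

    import numpy as np

    def split_conformal_interval(cal_preds, cal_labels, test_preds, alpha=0.1):
        # Nonconformity scores: absolute residuals on a held-out calibration set.
        scores = np.abs(cal_preds - cal_labels)
        n = len(scores)
        # Finite-sample-corrected quantile level gives coverage >= 1 - alpha,
        # assuming calibration and test points are exchangeable.
        q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
        q_hat = np.quantile(scores, q_level, method="higher")
        # Symmetric interval around each test prediction.
        return test_preds - q_hat, test_preds + q_hat

    # Illustrative use with synthetic quality-estimation scores.
    rng = np.random.default_rng(0)
    cal_labels = rng.uniform(0.0, 1.0, 500)
    cal_preds = cal_labels + rng.normal(0.0, 0.05, 500)  # hypothetical model outputs
    test_preds = np.array([0.42, 0.81, 0.67])
    lower, upper = split_conformal_interval(cal_preds, cal_labels, test_preds, alpha=0.1)
    print(list(zip(lower.round(3), upper.round(3))))

The interval width is determined entirely by the calibration residuals, which is what yields the marginal coverage guarantee without assumptions on the underlying model.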
Bio: Dr. Chrysoula Zerva is an Assistant Professor in Artificial Intelligence at the Instituto Superior Técnico in Lisbon, Portugal and a researcher at the Lisbon branch of the Instituto de Telecomunicações. She is a member of the ELLIS network as well as a member of LUMLIS, the Lisbon ELLIS unit.
Dr. Chrysoula Zerva obtained her PhD in 2019 from the University of Manchester, working on “Automated identification of textual uncertainty” under the supervision of Prof. Sophia Ananiadou. She was subsequently awarded an EPSRC Doctoral Prize Fellowship to work on (mis)information propagation in the health and science domains. In 2021, she joined the Instituto de Telecomunicações in Lisbon as a postdoctoral researcher on the DeepSPIN project led by Prof. André Martins, focusing on uncertainty quantification, machine translation and quality estimation.
She is a co-PI of the Center for Responsible AI, a PRR-funded initiative aiming to promote AI technology that is trustworthy, privacy-preserving, sustainable and fair, contributing towards a more equal society. She also participates in the UTTER project, which aims to advance transcription and translation models for extended reality, where she leads the work on adaptable and context-aware models. Overall, her research interests lie in machine learning (ML) and natural language processing (NLP). More specifically, she is interested in uncertainty (in both models and data), fairness, contextualization and explainability, and is keen to explore multilingual and multimodal setups along these directions.
Microsoft Teams
Meeting ID: 332 690 580 834
Passcode: VWdWex