[Archimedes Talks] Advances in Robust Representations for Inverse Problems and Continual Learning
Dates
2025-04-24 13:00 - 15:00
Venue
Artemidos 1 - Amphitheater
Title: Advances in Robust Representations for Inverse Problems and Continual Learning
Speaker: Paris Giampouras (Assistant Professor, University of Warwick)
Abstract: In this talk, I will first focus on the problem of robust low-rank matrix estimation from corrupted measurements. Building on a matrix factorization (MF) approach, I will present a novel optimization algorithm that employs preconditioned updates for the matrix factors. This algorithm provably addresses two longstanding challenges of MF-based methods: (a) reliance on prior knowledge of the rank of the target matrix, and (b) slow convergence when the underlying matrix is ill-conditioned. Next, I will introduce an inversion algorithm for deep generative models. I will discuss new theoretical guarantees that relax the commonly used assumption of randomized model weights, demonstrating that a gradient-descent-based approach converges globally for a broad class of inverse problems. Finally, I will present a unifying theoretical framework that provides new insights into state-of-the-art strategies for continual learning, i.e., training neural networks on a series of tasks while mitigating catastrophic forgetting. This framework not only sheds light on the strengths and weaknesses of existing methods but also paves the way for the development of more efficient continual learning techniques.
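
To make the idea of preconditioned factor updates more concrete, below is a minimal Python/NumPy sketch of scaled (Gram-inverse preconditioned) gradient steps on a toy matrix-completion problem. This is an illustrative assumption about what such updates generally look like, not the speaker's algorithm: the toy assumes the rank is known and omits both the robustness to corrupted measurements and the rank-agnostic behavior highlighted in the abstract, and all sizes and hyperparameters are made up.

# Illustrative sketch (not the speaker's algorithm): preconditioned ("scaled")
# gradient updates for a low-rank matrix factorization on a toy matrix-completion
# task. Assumes the rank is known and uses a plain squared loss on observed entries.
import numpy as np

rng = np.random.default_rng(0)
n, r, p, eta = 120, 3, 0.4, 0.5   # matrix size, rank, sampling rate, step size

# Rank-3 ground truth and a random set of observed entries.
X_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
mask = rng.random((n, n)) < p
Y = mask * X_true

# Spectral initialization of the factors from the rescaled observed matrix.
U, s, Vt = np.linalg.svd(Y / p, full_matrices=False)
L = U[:, :r] * np.sqrt(s[:r])
R = Vt[:r, :].T * np.sqrt(s[:r])

for t in range(200):
    residual = mask * (L @ R.T) - Y   # gradient of the observed-entry squared loss
    # Preconditioned updates: each factor's gradient is right-multiplied by the
    # inverse Gram matrix of the other factor; the 1/p factor rescales the
    # subsampled gradient toward its full-observation counterpart.
    L_new = L - (eta / p) * residual @ R @ np.linalg.inv(R.T @ R)
    R_new = R - (eta / p) * residual.T @ L @ np.linalg.inv(L.T @ L)
    L, R = L_new, R_new

print("relative error:", np.linalg.norm(L @ R.T - X_true) / np.linalg.norm(X_true))

The design choice this sketch gestures at is that right-multiplying each factor's gradient by the inverse Gram matrix of the other factor acts like an inexpensive per-factor Newton-type step, which is what, in the scaled-gradient-descent literature the sketch borrows from, removes the dependence of the convergence rate on the condition number of the target matrix.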
Short Bio: Paris Giampouras is an Assistant Professor of ML/AI in the Department of Computer Science at the University of Warwick. Before that, he was a Research Faculty member at the Mathematical Institute for Data Science (MINDS) of Johns Hopkins University (JHU), working with Professor René Vidal. From 2019 to 2022, he held a Marie Skłodowska-Curie postdoctoral fellowship at MINDS at JHU. His research lies at the intersection of machine learning and signal processing, with a focus on parsimonious representation learning and its applications in deep generative models and continual learning. His work is interdisciplinary, encompassing convex/nonconvex optimization theory, Bayesian inference, and high-dimensional probability theory, with applications in image processing/computer vision and trustworthy ML.
Microsoft Teams
Meeting ID: 381 035 609 023 0
Passcode: WX28dj2r