[Archimedes Talks Series] Can Q-learning be improved with Advice?
Dates
2024-06-12 11:00 - 13:00
Venue
Artemidos 1 - Amphitheater
Archimedes is proud to host a talk on "Can Q-learning be improved with Advice?" by Noah Golowich (MIT) as part of our Prediction Study Group this coming Wednesday at 11am.
Title: Can Q-learning be improved with Advice?
Presenter: Noah Golowich, Massachusetts Institute of Technology
Abstract: Despite rapid progress in theoretical reinforcement learning (RL) over the last few years, most of the known guarantees are worst-case in nature, failing to take advantage of structure that may be known a priori about a given RL problem at hand. In this paper we address the question of whether worst-case lower bounds for regret in online learning of Markov decision processes (MDPs) can be circumvented when information about the MDP, in the form of predictions about its optimal Q-value function, is given to the algorithm. We show that when the predictions about the optimal Q-value function satisfy a reasonably weak condition we call distillation, then we can improve regret bounds by replacing the dependence on the full set of state-action pairs with a dependence only on the set of state-action pairs on which the predictions are grossly inaccurate. This improvement holds for both uniform regret bounds and gap-based ones. Further, we are able to achieve this property with an algorithm that achieves sublinear regret when given arbitrary predictions (i.e., even those which are not a distillation). Our work extends a recent line of work on algorithms with predictions, which has typically focused on simple online problems such as caching and scheduling, to the more complex and general problem of reinforcement learning.
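For readers unfamiliar with the setting, the following is a minimal Python sketch of tabular Q-learning warm-started with predictions of the optimal Q-value function. It is only meant to illustrate what "advice" means in this context; it is not the algorithm or the guarantees presented in the talk, and all names and parameters below (q_learning_with_advice, step, alpha, etc.) are hypothetical placeholders.

```python
# Illustrative sketch only: tabular Q-learning where externally supplied
# predictions of the optimal Q-value function ("advice") initialize the
# Q-table. This is NOT the algorithm from the talk; the environment,
# prediction source, and hyperparameters are hypothetical placeholders.
import numpy as np

def q_learning_with_advice(num_states, num_actions, predicted_q, step,
                           episodes=500, horizon=20,
                           alpha=0.1, gamma=0.99, epsilon=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning, warm-started from advice.

    predicted_q : array of shape (num_states, num_actions) holding the
                  advice, i.e. predictions of the optimal Q-values.
    step        : callable (state, action) -> (next_state, reward),
                  standing in for the unknown MDP dynamics.
    """
    rng = np.random.default_rng(seed)
    q = predicted_q.astype(float).copy()    # advice as the starting point
    for _ in range(episodes):
        state = 0                            # assume a fixed initial state
        for _ in range(horizon):
            # Epsilon-greedy action selection around the current estimates.
            if rng.random() < epsilon:
                action = int(rng.integers(num_actions))
            else:
                action = int(np.argmax(q[state]))
            next_state, reward = step(state, action)
            # Standard one-step Q-learning update toward the TD target.
            td_target = reward + gamma * np.max(q[next_state])
            q[state, action] += alpha * (td_target - q[state, action])
            state = next_state
    return q
```

Intuitively, if the advice is already accurate on most state-action pairs, the learner mainly needs to correct the entries where the predictions are poor; this loosely mirrors the intuition that the talk's distillation condition makes precise.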
Bio: Noah Golowich is a graduate student at the Massachusetts Institute of Technology, advised by Constantinos Daskalakis and Ankur Moitra. He completed his A.B. and S.M. at Harvard University. His research interests lie in theoretical machine learning, with a particular focus on the connections between multi-agent learning, game theory, and online learning, as well as on theoretical reinforcement learning. He is supported by a Fannie & John Hertz Foundation Fellowship and an NSF Graduate Research Fellowship.
Microsoft Teams
Meeting ID: 373 512 132 356
Passcode: 9bUg8w