Game Theory, Optimization and Multi-Agent Learning
DESCRIPTION
Over the past decade, Machine Learning has delivered important advances in learning challenges such as speech and image recognition, translation, text and image generation, and protein folding. These are single-agent learning problems: a single agent uses observations from an unknown environment to learn how to make good predictions or decisions in that environment. Such problems are typically modeled in the language of single-objective optimization and solved via simple methods such as gradient descent or one of its variants.

Many outstanding challenges in Machine Learning, however, pertain to multi-agent learning problems, in which multiple agents learn and make decisions and predictions in a shared environment: robustifying machine learning models against adversarial attacks, training generative models, performing causal inference, playing difficult games like Go, Poker and StarCraft, improving autonomous driving agents, evaluating the outcomes of economic policies, and training agents for multi-agent interactions. These settings deviate from the single-objective optimization paradigm because different agents may have different objectives, and Game Theory provides a useful framework for thinking about them. At the same time, classical Game Theory falls short of addressing the challenges posed by modern ML applications, such as the high dimensionality of strategies and the non-concavity of utilities (non-convexity of losses) that one typically encounters in these settings. We develop the foundations of multi-agent learning, bringing to bear techniques from optimization, game theory and learning theory.
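
To make the contrast concrete, here is a minimal sketch (illustrative only, not taken from the description above): gradient descent on a single convex loss converges, while its naive multi-agent analogue, simultaneous gradient descent/ascent for two agents with opposing objectives on the bilinear utility u(x, y) = x*y, spirals away from the equilibrium at (0, 0). The particular loss, utility, step size, and iteration count are all assumed choices for illustration.

import math

# Single-agent learning: gradient descent on a single convex loss L(x) = x^2.
x, lr = 1.0, 0.1
for _ in range(100):
    x -= lr * 2 * x                        # step along -dL/dx = -2x
print(f"gradient descent on L(x) = x^2: x = {x:.6f}")      # converges to ~0

# Multi-agent learning: two agents with opposing objectives on u(x, y) = x*y.
# The x-player minimizes u by gradient descent; the y-player maximizes u by
# gradient ascent. With simultaneous updates, the pair (x, y) spirals away
# from the unique equilibrium (0, 0) instead of converging to it.
x, y = 1.0, 1.0
for _ in range(100):
    gx, gy = y, x                          # du/dx = y, du/dy = x
    x, y = x - lr * gx, y + lr * gy        # simultaneous gradient play
print(f"simultaneous gradient play on u(x, y) = x*y: "
      f"distance from (0, 0) = {math.hypot(x, y):.4f}")     # grows with iterations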