Nicolas Loizou, Johns Hopkins University

Recent Advances in Min-max Optimization: Convergence Guarantees and Practical Performance
Date
Nov 21, 2024, 4:30 pm – 5:30 pm

Event Description

Min-max optimization plays a prominent role in game theory, statistics, economics, finance, and engineering. It has recently received significant attention, especially in the machine learning community, where adversarial training of neural networks, multi-agent reinforcement learning, and distributionally robust learning are formulated as structured min-max optimization problems. Stochastic Gradient Descent Ascent (SGDA) and Stochastic Extragradient (SEG) methods rank among the most efficient algorithms for solving the large-scale min-max optimization and variational inequality problems (VIPs) that arise in various machine learning tasks. Despite their undeniable popularity, current convergence analyses of SGDA and SEG require strong assumptions such as bounded variance or growth conditions. Moreover, several important questions regarding the convergence properties of these methods remain open, including the effect of mini-batching, efficient step-size selection, and convergence guarantees under various sampling strategies. In this talk, we will address these questions and provide novel convergence guarantees for several variants of SGDA and SEG, diving into the details of their efficient implementations. Finally, if time permits, we will discuss extensions of these approaches in the federated learning (FL) regime, where we introduce Multiplayer Federated Learning, a novel framework that models the clients in the FL environment as players in a game-theoretic context, aiming to reach an equilibrium.
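
For readers unfamiliar with the two methods named above, the following Python sketch illustrates the basic SGDA and SEG updates on a toy strongly-convex–strongly-concave saddle-point problem. The problem, noise model, and step size are illustrative assumptions for exposition only; they are not the specific settings or step-size rules analyzed in the talk.

    import numpy as np

    # Toy saddle-point problem (illustrative assumption, not from the talk):
    #   min_x max_y  f(x, y) = 0.5*mu*||x||^2 + x^T A y - 0.5*mu*||y||^2
    # Its unique saddle point is (x*, y*) = (0, 0).
    rng = np.random.default_rng(0)
    d, mu = 5, 0.1
    A = rng.standard_normal((d, d))

    def stochastic_grads(x, y, noise=0.1):
        # True gradients plus Gaussian noise, standing in for sampled gradients.
        gx = mu * x + A @ y + noise * rng.standard_normal(d)    # gradient w.r.t. x (descent direction)
        gy = A.T @ x - mu * y + noise * rng.standard_normal(d)  # gradient w.r.t. y (ascent direction)
        return gx, gy

    def sgda_step(x, y, step):
        # SGDA: simultaneous stochastic descent in x and ascent in y.
        gx, gy = stochastic_grads(x, y)
        return x - step * gx, y + step * gy

    def seg_step(x, y, step):
        # SEG: take an extrapolation (look-ahead) step with one sample,
        # then update the original iterate using gradients at the look-ahead point.
        gx, gy = stochastic_grads(x, y)
        x_half, y_half = x - step * gx, y + step * gy
        gx2, gy2 = stochastic_grads(x_half, y_half)
        return x - step * gx2, y + step * gy2

    x, y = rng.standard_normal(d), rng.standard_normal(d)
    for _ in range(2000):
        x, y = seg_step(x, y, step=0.05)  # swap in sgda_step to compare the two methods
    print("distance to the saddle point:", np.linalg.norm(x), np.linalg.norm(y))
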

Bio:
Nicolas Loizou is an Assistant Professor in the Department of Applied Mathematics and Statistics and the Mathematical Institute for Data Science (MINDS) at Johns Hopkins University, where he leads the Optimization and Machine Learning Lab. Prior to this, he was a Postdoctoral Research Fellow at Mila – Quebec Artificial Intelligence Institute and the Université de Montréal. He holds a Ph.D. in Optimization and Operational Research from the School of Mathematics at the University of Edinburgh, an M.Sc. in Computing from Imperial College London, and a B.Sc. in Mathematics from the National and Kapodistrian University of Athens. His research interests include large-scale optimization, machine learning, randomized numerical linear algebra, distributed and decentralized algorithms, algorithmic game theory, and federated learning. He has received several awards, including the OR Society's 2019 Doctoral Award for the "Most Distinguished Body of Research leading to the Award of a Doctorate in the field of Operational Research," the IVADO Fellowship, the COAP 2020 Best Paper Award, and the CISCO 2023 Research Award.

Event Category
Optimization Seminar