Many applications require repeatedly solving a certain type of optimization problem, each time with new but similar data. "Learning to optimize," or L2O, is an approach to developing algorithms that solve these similar problems much faster. L2O-generated algorithms have achieved significant success in neural-network training, signal processing, and inverse problems. On LPs, SAT problems, and MIPs, L2O shows promising progress in some aspects. This talk introduces the motivation for L2O and gives an overview of the different types of L2O approaches for continuous optimization. We will cover model-based approaches, which are derived from general-purpose optimization algorithms but involve (possibly many) tunable parameters, as well as model-free approaches, which use recurrent neural networks and other deep-learning architectures to build algorithms. We will also briefly go through plug-and-play and safeguarded L2O approaches, which incorporate learned algorithms into classic optimization frameworks.
Short bio: Dr. Wotao Yin received his Ph.D. degree in operations research from Columbia University, New York, NY, USA, in 2006. He is currently a Professor with the Department of Mathematics, University of California, Los Angeles. Since mid-2019, he has been on leave at the DAMO Academy of Alibaba US. His research interests include computational optimization and its applications in signal processing, machine learning, and other data-science problems. He invented fast algorithms for sparse optimization and large-scale distributed optimization problems. From 2006 to 2013, he was at Rice University. He received the NSF CAREER Award in 2008, the Alfred P. Sloan Research Fellowship in 2009, and the Morningside Gold Medal in 2016, and has co-authored five papers that received best-paper-type awards. He has been among the top 1% of cited cross-discipline researchers, as ranked by Clarivate Analytics, since 2018.