What does it mean for a learning algorithm to be robust? We could hope to tolerate adversarial corruptions, but this makes many simple learning tasks computationally intractable. We could assume a particular stochastic model for noise, but then our algorithms might overfit to this one distribution, which was only ever meant to be a simplifying assumption.
In this talk we will revisit some classic learning problems, like learning halfspaces, online regression and contextual bandits, in models that blend worst-case and average-case noise. We will give simple twists on old algorithms that allow them to enjoy strong provable robustness guarantees. Finally, we will explore some intriguing connections between robustness and fairness.
Bio: Ankur Moitra is the Norbert Wiener Professor of Mathematics at MIT and the Director of the Statistics and Data Science Center. The aim of his work is to bridge the gap between theoretical computer science and machine learning by developing algorithms with provable guarantees and foundations for reasoning about their behavior. He has won numerous awards for his research and teaching, including a Packard Fellowship, a Sloan Fellowship, an ONR Young Investigator Award, an NSF CAREER Award, a Hertz Fellowship and a School of Science Excellence in Graduate Teaching Prize.