Speakers

  • Ben Recht, UC Berkeley
    Title: The Statistical Foundations of Learning to Control
    Abstract:

    Given the dramatic successes in machine learning and reinforcement learning over the past half decade, there has been a surge of interest in applying these techniques to continuous control problems in robotics and self-driving cars. Though such control applications appear to be straightforward generalizations of standard reinforcement learning, few fundamental baselines have been established prescribing how well one must know a system in order to control it. In this talk, I will discuss how one might merge techniques from statistical learning theory with robust control to derive such baselines for continuous control. I will explore several examples that balance parameter identification against controller design and demonstrate finite-sample tradeoffs between estimation fidelity and desired control performance. I will describe how these simple baselines give us insights into shortcomings of existing reinforcement learning methodology. I will close by listing several exciting open problems that must be solved before we can build robust, safe learning systems that interact with an uncertain physical environment.

    (A brief illustrative sketch of this estimate-then-control tradeoff follows this entry.)

    Bio: Benjamin Recht is an Associate Professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. Ben's research group studies the theory and practice of optimization algorithms with a focus on applications in machine learning and data analysis. Ben is the recipient of a Presidential Early Career Award for Scientists and Engineers, an Alfred P. Sloan Research Fellowship, the 2012 SIAM/MOS Lagrange Prize in Continuous Optimization, the 2014 Jamon Prize, the 2015 William O. Baker Award for Initiatives in Research, and the 2017 NIPS Test of Time Award.
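
The tradeoff the abstract alludes to, between estimation fidelity and control performance, can be made concrete with a minimal sketch that is not taken from the talk: identify an unknown linear system by least squares from a randomly excited rollout, then design a certainty-equivalent LQR controller on the estimate. Everything below (the system matrices, noise level, sample count T, and the use of numpy/scipy) is an illustrative assumption, not material from the speaker.

    import numpy as np
    from scipy.linalg import solve_discrete_are

    rng = np.random.default_rng(0)

    # Hypothetical "true" dynamics x_{t+1} = A x_t + B u_t + w_t, unknown to the learner.
    A_true = np.array([[1.01, 0.10],
                       [0.00, 1.01]])
    B_true = np.array([[0.0],
                       [1.0]])
    n, d = A_true.shape[0], B_true.shape[1]

    # Collect one rollout with random excitation.
    T = 500
    X, U, X_next = [], [], []
    x = np.zeros(n)
    for _ in range(T):
        u = rng.normal(size=d)
        x_new = A_true @ x + B_true @ u + 0.1 * rng.normal(size=n)
        X.append(x)
        U.append(u)
        X_next.append(x_new)
        x = x_new

    # Least-squares estimate of [A B] from the regression x_{t+1} ~ [x_t; u_t].
    Z = np.hstack([np.array(X), np.array(U)])              # shape (T, n + d)
    Theta, *_ = np.linalg.lstsq(Z, np.array(X_next), rcond=None)
    A_hat, B_hat = Theta.T[:, :n], Theta.T[:, n:]

    # Certainty-equivalent LQR: design the controller as if the estimate were exact.
    Q, R = np.eye(n), np.eye(d)
    P = solve_discrete_are(A_hat, B_hat, Q, R)
    K = np.linalg.solve(R + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)

    print("estimation error:",
          np.linalg.norm(A_hat - A_true), np.linalg.norm(B_hat - B_true))
    print("closed-loop spectral radius on the true system:",
          max(abs(np.linalg.eigvals(A_true - B_true @ K))))

Increasing T generally shrinks the estimation error; the interesting regime is how few samples suffice before the controller designed from (A_hat, B_hat) still stabilizes the true system, which is the kind of finite-sample question the abstract raises.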