In many real-world settings, image observations of physical systems such as satellites may be available when low-dimensional measurements are not. However, the high dimensionality of image data precludes the use of classical estimation techniques for learning the dynamics, and a lack of interpretability limits the usefulness of standard deep learning methods. In this talk, I will discuss our work on leveraging Lagrangian and Hamiltonian formalisms in neural network design for physically plausible, neural-network-based video prediction and generation. In our prediction pipeline we explicitly construct the equations of motion from learned representations of the underlying physical quantities, and in our generative model we implicitly discover the structure of the configuration space.
Bio: Christine Allen-Blanchette is an assistant professor in the Mechanical and Aerospace Engineering department and the Center for Statistics and Machine Learning at Princeton University. They hold an associated faculty appointment in the Computer Science department and an affiliation with Robotics at Princeton. Before joining the faculty, they were a Princeton Presidential Postdoctoral Fellow mentored by Naomi Leonard. They completed their PhD in Computer Science and MSE in Robotics at the University of Pennsylvania, and their BS degrees in Mechanical Engineering and Computer Engineering at San Jose State University. Their awards include the Princeton Presidential Postdoctoral Fellowship, an NSF Integrative Graduate Education and Research Training award, and a GEM Fellowship sponsored by the Adobe Foundation.