Details
Diffusion models have become pivotal in modern generative modeling, achieving state-of-the-art performance across multiple domains. Despite their high sample quality, diffusion models generally require hundreds to thousands of sequential neural network evaluations, incurring considerably higher computational costs than
single-step generators such as generative adversarial networks (GANs) and variational auto-encoders (VAEs). While numerous acceleration methods have been proposed, the theoretical foundations of diffusion model acceleration remain largely underexplored. In this talk, I will discuss a training-free acceleration algorithm for SDE-based diffusion samplers, designed around the stochastic Runge-Kutta method. Our sampler requires $\tilde O(d^{3/2} / \varepsilon)$ network function evaluations to attain $\varepsilon$ error in KL divergence, outperforming the state-of-the-art guarantee of $\tilde O(d^{3} / \varepsilon)$ in its dimension dependence. Numerical simulations validate the efficiency of the proposed method.
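The abstract does not spell out the scheme, so the following is only a rough illustration of the general idea of a stochastic Runge-Kutta diffusion sampler: a Heun-type predictor-corrector step for the reverse-time VP-SDE that reuses one Brownian increment across stages. The function names, the noise schedule `beta`, and the stand-in score function are all hypothetical, not taken from the talk; the scheme analyzed in the talk may use different stages and coefficients.

```python
import numpy as np

def reverse_drift(x, t, score_fn, beta):
    # Drift of the reverse-time VP-SDE, integrated from t = T down to t = 0:
    #   dx = [0.5 * beta(t) * x + beta(t) * score(x, t)] dt + sqrt(beta(t)) dW
    return 0.5 * beta(t) * x + beta(t) * score_fn(x, t)

def srk_heun_step(x, t, dt, score_fn, beta, rng):
    # One stochastic Runge-Kutta (Heun-type) step: an Euler-Maruyama predictor
    # followed by a trapezoidal corrector, sharing a single Brownian increment.
    # Costs two score-network evaluations per step.
    dw = np.sqrt(dt) * rng.standard_normal(x.shape)
    f1 = reverse_drift(x, t, score_fn, beta)
    x_pred = x + f1 * dt + np.sqrt(beta(t)) * dw        # predictor
    f2 = reverse_drift(x_pred, t - dt, score_fn, beta)  # drift at step endpoint
    return x + 0.5 * (f1 + f2) * dt + np.sqrt(beta(t)) * dw

# Hypothetical usage: integrate from t = T down to t = 0.
rng = np.random.default_rng(0)
beta = lambda t: 0.1 + 19.9 * t        # assumed linear noise schedule
score = lambda x, t: -x / (1.0 + t)    # stand-in for a trained score network
x = rng.standard_normal(2)             # start from the Gaussian prior
T, n_steps = 1.0, 50
dt = T / n_steps
for k in range(n_steps):
    x = srk_heun_step(x, T - k * dt, dt, score, beta, rng)
```

The higher-order corrector is what a Runge-Kutta-style sampler trades for the extra score evaluations per step; the talk's contribution is a KL-divergence analysis showing this trade-off pays off in the dimension dependence of the total evaluation count.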
Short Bio: Yuchen Wu is a postdoctoral researcher in the Department of Statistics and Data Science at the Wharton School, University of Pennsylvania. She received her Ph.D. from the Department of Statistics at Stanford University in 2023. Her research interests lie at the intersection of statistics and machine learning, with a recent focus on developing modern sampling algorithms, in particular those related to diffusion models.