Demystifying the effectiveness of diffusion models
Speaker: Yuxin Chen – Philadelphia, USA
Topic(s): Artificial Intelligence, Machine Learning, Computer Vision, Natural Language Processing
Abstract
Diffusion models have emerged as a cornerstone of modern generative modeling, yet their theoretical and algorithmic foundations remain under-explored. This talk aims to advance our understanding in two directions. First, we investigate how diffusion models leverage (unknown) low-dimensional structure to accelerate sampling. For two prominent samplers, the denoising diffusion implicit model (DDIM) and the denoising diffusion probabilistic model (DDPM), we prove, assuming accurate scores, that their iteration complexities scale linearly in some intrinsic dimension of the target distribution, as opposed to the ambient dimension. Second, we turn to guided data generation with diffusion models. We prove that classifier-free guidance (CFG) decreases the expected reciprocal of the classifier probability, providing the first theoretical characterization of the specific performance metric that CFG improves for general target distributions.
About this Lecture
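For readers unfamiliar with classifier-free guidance, the construction the abstract refers to can be sketched in standard notation (this notation is a common convention in the CFG literature, not taken from the talk itself):

```latex
% CFG samples by combining the conditional and unconditional scores
% with a guidance strength \gamma \ge 0 (\gamma = 0 recovers unguided sampling):
s_\gamma(x_t, c) \;=\; (1+\gamma)\,\nabla_{x_t}\log p_t(x_t \mid c)
  \;-\; \gamma\,\nabla_{x_t}\log p_t(x_t)

% The performance metric mentioned in the abstract is the expected
% reciprocal of the classifier probability under the guided samples:
\mathbb{E}_{x \sim p_\gamma}\!\left[\frac{1}{p(c \mid x)}\right]
```

On this reading, the talk's result says the displayed expectation decreases as guidance is applied, i.e., guided samples concentrate in regions where the classifier assigns the conditioning class high probability.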
Number of Slides: 45
Duration: 55 minutes
Languages Available: Chinese (Simplified), English
Last Updated: 03/02/2026
Request this Lecture
To request this particular lecture, please complete this online form.
Request a Tour
To request a tour with this speaker, please complete this online form.
All requests will be sent to ACM headquarters for review.