Bio:
Yuxin Chen is currently a professor of Statistics and Data Science and of Electrical and Systems Engineering at the University of Pennsylvania. Before joining UPenn, he was an assistant professor of Electrical and Computer Engineering at Princeton University, with associated-faculty appointments in Computer Science, Applied and Computational Mathematics, and the Center for Statistics and Machine Learning. He completed his Ph.D. in Electrical Engineering at Stanford University and was a postdoctoral scholar in Statistics at Stanford. His current research interests include statistical learning theory, reinforcement learning, diffusion models, high-dimensional statistics, and mathematical optimization. He has received the Alfred P. Sloan Research Fellowship, the SIAM Activity Group on Imaging Science Best Paper Prize, the International Congress of Chinese Mathematicians Best Paper Award (gold medal), the IEEE Transactions on Power Electronics Prize Paper Award (first place), the AFOSR Young Investigator Award, the Army Young Investigator Award, the Google Research Scholar Award, the Amazon Research Award, and the Princeton SEAS Junior Faculty Award, and he was a finalist for the Best Paper Prize for Young Researchers in Continuous Optimization. He is an IEEE Information Theory Society Distinguished Lecturer. He has also received the Princeton Graduate Mentoring Award, as well as six other teaching awards, and has co-taught 13 tutorials at high-profile conferences, including ACM SIGMETRICS and the International Conference on Machine Learning.
Available Lectures
To request a single lecture/event, click on the desired lecture and complete the Request Lecture Form.
- Breaking the Sample Size Barrier in Reinforcement Learning
Emerging reinforcement learning (RL) applications necessitate the design of sample-efficient solutions in order to accommodate the explosive growth of problem dimensionality. Despite the empirical...
- Demystifying the Effectiveness of Diffusion Models
Diffusion models have emerged as a cornerstone of modern generative modeling, yet their theoretical and algorithmic foundations remain under-explored. This talk aims to advance our understanding...
- Transformers Meet In-Context Learning: A Universal Approximation Theory
Large language models are capable of in-context learning, the ability to perform new tasks at test time using a handful of input-output examples, without parameter updates. We develop a universal...
To request a tour with this speaker, please complete this online form.
If you are not requesting a tour, click on the desired lecture and complete the Request Lecture Form.
All requests will be sent to ACM headquarters for review.