TITLE: A Stein Variational Framework for Deep Probabilistic Modeling
ABSTRACT:
Modern AI and machine learning techniques increasingly depend on highly complex, hierarchical (deep) probabilistic models to reason about complex relations and to learn to predict and act in uncertain environments. This creates a significant demand for efficient computational methods for handling highly complex probabilistic models for which exact calculation is prohibitive. In this talk, we discuss a new framework for approximate learning and inference that combines ideas from Stein's method, an advanced theoretical technique developed by the mathematical statistician Charles Stein, with practical machine learning and statistical computation techniques such as variational inference, Monte Carlo, optimal transport, and reproducing kernel Hilbert spaces (RKHS). Our framework provides a new foundation for probabilistic learning and reasoning, and it allows us to develop a host of new algorithms for challenging statistical tasks that differ significantly from, and have critical advantages over, traditional methods. Example applications include computationally tractable goodness-of-fit tests for evaluating highly complex models, efficient approximate inference methods for scalable Bayesian computation, amortized maximum likelihood training for deep generative models, and new policy gradient methods for deep reinforcement learning that exploit Bayesian uncertainty for better exploration.
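As a concrete illustration of the scalable Bayesian inference application mentioned above, the following is a minimal NumPy sketch of one Stein variational gradient descent (SVGD) update in the style of Liu & Wang (2016). The RBF kernel, the median-bandwidth heuristic, and the function names here are common illustrative choices, not details taken from the talk itself.

    import numpy as np

    def svgd_update(particles, grad_log_p, bandwidth=None):
        """One SVGD step: a kernel-weighted gradient term pulls particles
        toward the target p, while a repulsive kernel term keeps them
        spread out so they approximate the whole distribution."""
        n, d = particles.shape
        # Pairwise squared distances for the RBF kernel.
        sq_dists = np.sum((particles[:, None, :] - particles[None, :, :]) ** 2,
                          axis=-1)
        if bandwidth is None:
            # Median heuristic (a common default, used here as an assumption).
            bandwidth = np.median(sq_dists) / max(np.log(n), 1.0)
        K = np.exp(-sq_dists / bandwidth)      # kernel matrix k(x_j, x_i)
        grads = grad_log_p(particles)          # (n, d) scores of the target
        # phi(x_i) = (1/n) sum_j [ k(x_j, x_i) grad log p(x_j)
        #                          + grad_{x_j} k(x_j, x_i) ]
        attract = K @ grads
        repulse = (np.sum(K, axis=1, keepdims=True) * particles
                   - K @ particles) * (2.0 / bandwidth)
        return (attract + repulse) / n

A toy usage example, transporting badly initialized particles to a 2-D standard Gaussian (whose score is simply -x):

    rng = np.random.default_rng(0)
    x = rng.normal(size=(100, 2)) * 3 + 5      # particles start far from target
    for _ in range(500):
        x += 0.1 * svgd_update(x, lambda z: -z)
    print(x.mean(axis=0), x.std(axis=0))       # both approach [0, 0] and [1, 1]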
BIO: Qiang Liu is an assistant professor of computer science at Dartmouth College. His research interests are in machine learning, Bayesian inference, probabilistic graphical models, and deep learning. He received his Ph.D. from the University of California, Irvine, followed by a postdoc at MIT CSAIL. He is an action editor of the Journal of Machine Learning Research.