An Introduction to Variational Methods for Graphical Models
Authors: Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, Lawrence K. Saul
Affiliations: (1) Department of Electrical Engineering and Computer Sciences and Department of Statistics, University of California, Berkeley, CA 94720, USA; (2) Gatsby Computational Neuroscience Unit, University College London, London WC1N 3AR, UK; (3) Artificial Intelligence Laboratory, MIT, Cambridge, MA 02139, USA; (4) AT&T Labs–Research, Florham Park, NJ 07932, USA
Abstract: This paper presents a tutorial introduction to the use of variational methods for inference and learning in graphical models (Bayesian networks and Markov random fields). We present a number of examples of graphical models, including the QMR-DT database, the sigmoid belief network, the Boltzmann machine, and several variants of hidden Markov models, in which it is infeasible to run exact inference algorithms. We then introduce variational methods, which exploit laws of large numbers to transform the original graphical model into a simplified graphical model in which inference is efficient. Inference in the simplified model provides bounds on probabilities of interest in the original model. We describe a general framework for generating variational transformations based on convex duality. Finally, we return to the examples and demonstrate how variational algorithms can be formulated in each case.
Keywords: graphical models, Bayesian networks, belief networks, probabilistic inference, approximate inference, variational methods, mean field methods, hidden Markov models, Boltzmann machines, neural networks
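
As a concrete illustration of the convex-duality transformations the abstract refers to, the following minimal Python sketch (not taken from the paper; the function name and the use of log(x) as the worked case are our assumptions) checks numerically that the concave function log(x) can be replaced by a family of linear upper bounds, log(x) <= lambda*x - log(lambda) - 1 for any lambda > 0, with equality at lambda = 1/x. This is the basic mechanism by which a variational parameter turns an intractable term into a tractable bound.

# Minimal sketch of a convex-duality variational bound (assumption: log(x)
# is used here purely as an illustrative example, not as the paper's code).
import numpy as np

def log_upper_bound(x, lam):
    """Linear-in-x upper bound on log(x) induced by the conjugate of log."""
    return lam * x - np.log(lam) - 1.0

x = 3.0
lams = np.linspace(0.01, 2.0, 2000)   # grid of candidate variational parameters
bounds = log_upper_bound(x, lams)

print(f"log({x})                    = {np.log(x):.6f}")
print(f"tightest bound on the grid  = {bounds.min():.6f}")
print(f"bound at lambda = 1/x       = {log_upper_bound(x, 1.0 / x):.6f}")

Minimizing the bound over the variational parameter recovers log(x) exactly; in the graphical-model setting, the same idea is applied term by term so that the resulting simplified model admits efficient exact inference.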
This article is indexed in SpringerLink and other databases.