Similar Literature
10 similar documents found.
1.
Bayesian learning, widely used in many applied data-modeling problems, is often accomplished with approximation schemes because it requires intractable computation of the posterior distributions. In this study, we focus on two approximation methods: variational Bayes and local variational approximation. We show that the variational Bayes approach for statistical models with latent variables can be viewed as a special case of local variational approximation, where the log-sum-exp function is used to form the lower bound of the log-likelihood. The minimum variational free energy, the objective function of variational Bayes, is analyzed and related to the asymptotic theory of Bayesian learning. This analysis additionally implies a relationship between the generalization performance of the variational Bayes approach and the minimum variational free energy.
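As a point of reference (generic notation, not taken from the paper itself), the variational free energy arises as the gap in a Jensen/log-sum-exp lower bound on the log marginal likelihood:

\log p(x) = \log \sum_{z} \exp\{\log p(x,z)\} \;\ge\; \mathbb{E}_{q(z)}\big[\log p(x,z) - \log q(z)\big] \;=\; -F[q],
\qquad F[q] = \mathrm{KL}\big(q(z)\,\|\,p(z\mid x)\big) - \log p(x).

Minimizing F[q] over a tractable family q is the variational Bayes objective, and bounding the log-sum-exp from below is the step that links it to local variational approximation.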

2.
3.
Most state-of-the-art blind image deconvolution methods rely on the Bayesian paradigm to model the deblurring problem and estimate both the blur kernel and latent image. It is customary to model the image in the filter space, where it is supposed to be sparse, and utilize convenient priors to account for this sparsity. In this paper, we propose the use of the spike-and-slab prior together with an efficient variational Expectation Maximization (EM) inference scheme to estimate the blur in the image. The spike-and-slab prior, which constitutes the gold standard in sparse machine learning, selectively shrinks irrelevant variables while mildly regularizing the relevant ones. The proposed variational Expectation Maximization algorithm is more efficient than usual Markov Chain Monte Carlo (MCMC) inference and also proves to be more accurate than the standard mean-field variational approximation. Additionally, all the prior model parameters are estimated by the proposed scheme. After blur estimation, a non-blind restoration method is used to obtain the actual estimation of the sharp image. In the experimental section we investigate the behavior of the prior and present a series of experiments with synthetically generated and real blurred images that validate the method's performance in comparison with state-of-the-art blind deconvolution techniques.
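For concreteness (generic notation, not the paper's), a spike-and-slab prior on a filter-space coefficient x_i can be written as

p(x_i \mid \pi, \sigma^2) \;=\; (1-\pi)\,\delta_0(x_i) \;+\; \pi\,\mathcal{N}(x_i \mid 0, \sigma^2),

where the point mass \delta_0 (the "spike") shrinks irrelevant coefficients exactly to zero while the Gaussian "slab" only mildly regularizes the relevant ones; a variational EM scheme then tracks, for each coefficient, a posterior probability of belonging to the slab.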

4.
A collapsed variational Bayesian inference algorithm for the binary probit regression model is presented. Compared with standard variational Bayesian inference, the algorithm approximates the log marginal likelihood more tightly and yields more accurate posterior expectations of the model parameters. When the two algorithms achieve the same classification error, the proposed algorithm requires noticeably fewer iterations than the variational method. Simulation results verify the effectiveness of the proposed algorithm.
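For comparison, the following is a minimal sketch of standard (non-collapsed) mean-field variational inference for the binary probit model, i.e. the baseline the translated abstract compares against; the collapsed algorithm additionally integrates parameters out analytically and is not reproduced here. All function and variable names are illustrative assumptions.

import numpy as np
from scipy.stats import norm

def vb_probit(X, y, alpha=1.0, n_iter=50):
    # Mean-field VB for binary probit regression with labels y in {-1, +1}:
    # latent z_i ~ N(x_i^T w, 1), y_i = sign(z_i), prior w ~ N(0, alpha^{-1} I).
    n, d = X.shape
    S = np.linalg.inv(alpha * np.eye(d) + X.T @ X)   # covariance of q(w); fixed across iterations
    Ez = y.astype(float)                             # crude initialisation of E_q[z]
    for _ in range(n_iter):
        m = S @ X.T @ Ez                             # mean of q(w)
        mu = X @ m                                   # means of the latent variables
        # q(z_i) is a normal truncated to the half-line selected by y_i:
        Ez = mu + y * norm.pdf(mu) / np.clip(norm.cdf(y * mu), 1e-12, None)
    return m, S                                      # posterior mean and covariance of the weights

# usage: m, S = vb_probit(X, y)
# predictive class probability for a new row x: norm.cdf(x @ m / np.sqrt(1 + x @ S @ x))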

5.
In this paper we offer a variational Bayes approximation to the multinomial probit model for basis expansion and kernel combination. Our model is well-founded within a hierarchical Bayesian framework and is able to instructively combine available sources of information for multinomial classification. The proposed framework enables informative integration of possibly heterogeneous sources in a multitude of ways, from the simple summation of feature expansions to the weighted product of kernels, and it is shown to match and in certain cases outperform the well-known ensemble-learning approaches of combining individual classifiers. At the same time, the approximation considerably reduces the CPU time and resources required with respect to both the ensemble-learning methods and the full Markov chain Monte Carlo (Metropolis-Hastings within Gibbs) solution of our model. We present the proposed framework together with extensive experimental studies on synthetic and benchmark datasets and, for the first time, report a comparison between summation and product of individual kernels as alternative ways of constructing the composite kernel matrix.
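A rough sketch of the two composition strategies compared at the end of the abstract, namely summation versus (weighted) product of base Gram matrices. The RBF base kernel and all function names are assumptions for illustration, not the paper's implementation.

import numpy as np

def rbf_gram(X, gamma=1.0):
    # Gram matrix of an RBF kernel; stands in for one heterogeneous information source.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def composite_kernel(grams, weights=None, mode="sum"):
    # Combine base Gram matrices by weighted summation or element-wise (weighted) product.
    weights = np.ones(len(grams)) if weights is None else np.asarray(weights, dtype=float)
    if mode == "sum":
        return sum(w * K for w, K in zip(weights, grams))
    if mode == "product":
        out = np.ones_like(grams[0])
        for w, K in zip(weights, grams):
            out *= K ** w        # entries of an RBF Gram matrix are positive, so K**w is numerically safe
        return out
    raise ValueError("mode must be 'sum' or 'product'")

# usage: K = composite_kernel([rbf_gram(X, 0.5), rbf_gram(X, 2.0)], weights=[0.7, 0.3], mode="product")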

6.
Variational Bayesian learning, proposed as an approximation to Bayesian learning, has provided computational tractability and good generalization performance in many applications. However, little has been done to investigate its theoretical properties.

7.
Variational methods for approximate Bayesian inference provide fast, flexible, deterministic alternatives to Monte Carlo methods. Unfortunately, unlike Monte Carlo methods, variational approximations cannot, in general, be made to be arbitrarily accurate. This paper develops grid-based variational approximations which endeavor to approximate marginal posterior densities in a spirit similar to the Integrated Nested Laplace Approximation (INLA) of Rue et al. (2009) but which may be applied in situations where INLA cannot be used. The method can greatly increase the accuracy of a base variational approximation, although not in general to arbitrary accuracy. The methodology developed is at least reasonably accurate on all of the examples considered in the paper.
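In the spirit of the grid-based idea (a deliberately generic caricature, not the paper's algorithm), a marginal posterior density can be recovered by evaluating an unnormalized log density on a grid of parameter values, with the base variational approximation supplying whatever is needed for the remaining parameters, and then renormalizing numerically. The helper names below, including my_log_joint, are placeholders.

import numpy as np

def grid_density(log_unnorm, grid):
    # Evaluate an unnormalized log density on a grid and renormalize it numerically.
    logp = np.array([log_unnorm(t) for t in grid])
    logp -= logp.max()                                      # guard against overflow
    p = np.exp(logp)
    area = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(grid))   # trapezoidal rule
    return p / area

# usage: density = grid_density(lambda t: my_log_joint(t), np.linspace(-5, 5, 201))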

8.
In this paper, variational inference is studied on manifolds equipped with certain metrics. The analysis is first developed for variational Bayesian inference on Lie groups and then extended to manifolds that can be approximated by Lie groups. Convergence of the proposed algorithm with respect to the manifold metric is proved for its two iterative steps: the variational Bayesian expectation (VB-E) step and the variational Bayesian maximization (VB-M) step. Moreover, the effectiveness of different metrics for Bayesian analysis is discussed.
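For reference, in the ordinary (Euclidean) setting the two steps named above are the usual coordinate-ascent updates of variational Bayesian EM; the paper's contribution lies in carrying out and analyzing such updates with respect to a manifold or Lie-group metric. The generic updates, under the factorization q(z, \theta) = q(z)\,q(\theta), are

\text{VB-E:}\quad q(z) \;\propto\; \exp\!\big\{\mathbb{E}_{q(\theta)}[\log p(x, z \mid \theta)]\big\},
\qquad
\text{VB-M:}\quad q(\theta) \;\propto\; p(\theta)\,\exp\!\big\{\mathbb{E}_{q(z)}[\log p(x, z \mid \theta)]\big\},

each of which monotonically decreases the variational free energy.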

9.
In this work, a variational Bayesian framework for efficient training of echo state networks (ESNs) with automatic regularization and delay&sum (D&S) readout adaptation is proposed. The algorithm uses classical batch learning of ESNs. By treating the network echo states as fixed basis functions parameterized with delay parameters, we propose a variational Bayesian ESN training scheme. The variational approach allows for a seamless combination of sparse Bayesian learning ideas and a variational Bayesian space-alternating generalized expectation-maximization (VB-SAGE) algorithm for estimating parameters of superimposed signals. While the former realizes automatic regularization of ESNs, which also determines which echo states and input signals are relevant for "explaining" the desired signal, the latter provides a basis for joint estimation of the D&S readout parameters. The proposed training algorithm can naturally be extended to ESNs with fixed filter neurons. It also generalizes the recently proposed expectation-maximization-based D&S readout adaptation method. The proposed algorithm was tested on synthetic data prediction tasks as well as on dynamic handwritten character recognition.
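A bare-bones sketch of the two ingredients the abstract builds on: the echo-state recurrence that generates the fixed basis functions, and a regularized batch readout. Plain Bayesian ridge regression is used below as a stand-in for the sparse-Bayesian/VB-SAGE machinery, the delay&sum part is omitted, and all names are illustrative assumptions.

import numpy as np

def esn_states(u, W_in, W, leak=1.0):
    # Echo-state recurrence: x_t = (1 - leak) * x_{t-1} + leak * tanh(W_in u_t + W x_{t-1}).
    T, n = len(u), W.shape[0]
    X, x = np.zeros((T, n)), np.zeros(n)
    for t in range(T):
        x = (1 - leak) * x + leak * np.tanh(W_in @ np.atleast_1d(u[t]) + W @ x)
        X[t] = x
    return X                                   # rows are the echo states used as fixed basis functions

def bayesian_readout(X, d, alpha=1.0, beta=100.0):
    # Posterior mean of the readout weights under a zero-mean Gaussian prior with
    # precision alpha and Gaussian observation noise with precision beta.
    S = np.linalg.inv(alpha * np.eye(X.shape[1]) + beta * X.T @ X)
    return beta * S @ X.T @ d

# usage: H = esn_states(u, W_in, W); w = bayesian_readout(H, d); prediction = H @ w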

10.
Tapani, Matti. Neurocomputing, 2009, 72(16-18): 3704
This paper studies the identification and model predictive control in nonlinear hidden state-space models. Nonlinearities are modelled with neural networks and system identification is done with variational Bayesian learning. In addition to the robustness of control, the stochastic approach allows for various control schemes, including combinations of direct and indirect controls, as well as using probabilistic inference for control. We study the noise-robustness, speed, and accuracy of three different control schemes as well as the effect of changing horizon lengths and initialisation methods using a simulated cart–pole system. The simulations indicate that the proposed method is able to find a representation of the system state that makes control easier especially under high noise.
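Schematically (notation assumed here, not quoted from the paper), the model class is a nonlinear hidden state-space model

s_t = f(s_{t-1}, u_t) + n_t, \qquad y_t = g(s_t) + m_t, \qquad n_t \sim \mathcal{N}(0, \Sigma_n), \; m_t \sim \mathcal{N}(0, \Sigma_m),

where the mappings f and g are neural networks, and the network weights together with the hidden states s_t receive a factorized variational posterior; model predictive control then optimizes the control inputs u_t over a receding horizon under the learned model.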
