Similar Documents
20 similar documents retrieved.
1.
Hidden Markov random fields appear naturally in problems such as image segmentation, where an unknown class assignment has to be estimated from the observations at each pixel. Choosing the probabilistic model that best accounts for the observations is an important first step for the quality of the subsequent estimation and analysis. A commonly used selection criterion is the Bayesian Information Criterion (BIC) of Schwarz (1978), but for hidden Markov random fields its exact computation is intractable due to the dependence structure induced by the Markov model. We propose approximations of BIC based on the mean field principle of statistical physics. Mean field theory approximates a Markov random field by a system of independent variables, leading to tractable computations. Using this principle, we first derive a class of criteria by approximating the Markov distribution in the usual BIC expression, viewed as a penalized likelihood. We then rewrite BIC in terms of normalizing constants, also called partition functions, instead of Markov distributions. This enables us to use finer mean field approximations and to derive other criteria using optimal lower bounds for the normalizing constants. To illustrate the performance of our partition-function-based approximation of BIC as a model selection criterion, we focus on the preliminary issue of choosing the number of classes before the segmentation task. Experiments on simulated and real data point to our criterion as promising: it takes spatial information into account through the Markov model and improves on the results obtained with BIC for independent mixture models.
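For reference, the two quantities at stake have standard forms (generic notation, not reproduced from the paper): Schwarz's criterion penalizes the maximized log-likelihood, while the Markov field likelihood contains a normalizing constant that sums over all label configurations z,

    BIC(M) = \log p(y \mid \hat{\theta}_M, M) - \frac{d_M}{2} \log n
    P_\Phi(z) = \frac{\exp(-H_\Phi(z))}{W(\Phi)}, \qquad W(\Phi) = \sum_{z} \exp(-H_\Phi(z))

where d_M is the number of free parameters of model M and n is the number of pixels; evaluating the partition function W(\Phi) is the intractable step that the mean field approximations replace with a product over independent sites.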

2.
The Bayesian information criterion (BIC) is one of the most popular criteria for model selection in finite mixture models. However, it implausibly penalizes the complexity of each component using the whole sample size, ignoring the clustered structure inherent in the data, and therefore over-penalizes. To overcome this problem, a novel criterion called hierarchical BIC (HBIC) is proposed, which penalizes each component's complexity using only its local sample size and thus matches the clustered data structure. Theoretically, HBIC is an approximation of the variational Bayesian (VB) lower bound when the sample size is large, whereas the widely used BIC is a less accurate approximation. An empirical study is conducted to verify this theoretical result, and a series of experiments on simulated and real data sets compares HBIC and BIC. The results show that HBIC outperforms BIC substantially and that BIC suffers from underestimating the number of components.
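As a rough illustration of the local-versus-global penalty (a minimal Python sketch, not the paper's exact definition: the effective local sample size of component k is taken here to be the sum of its posterior responsibilities):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def bic_and_hbic(X, K):
        """Contrast BIC's global-n penalty with a per-component
        local-sample-size penalty in the spirit of HBIC."""
        n, d = X.shape
        gmm = GaussianMixture(n_components=K).fit(X)
        loglik = gmm.score(X) * n                    # total log-likelihood
        d_k = d + d * (d + 1) // 2                   # mean + full covariance
        d_total = K * d_k + (K - 1)                  # + mixing weights
        bic = -2 * loglik + d_total * np.log(n)
        n_k = gmm.predict_proba(X).sum(axis=0)       # local sample sizes
        hbic = -2 * loglik + np.sum(d_k * np.log(n_k)) + (K - 1) * np.log(n)
        return bic, hbic

Since each n_k is at most n, the hierarchical penalty is never larger than the BIC penalty, which is exactly the over-penalization the abstract describes.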

4.
This paper presents a new approximate Bayesian estimator for enhancing a noisy speech signal. The speech model is assumed to be a Gaussian mixture model (GMM) in the log-spectral domain, in contrast to most current models, which work in the frequency domain. Exact signal estimation is computationally intractable, so we derive three approximations to make it efficient. The Gaussian approximation transforms the log-spectral domain GMM into the frequency domain using a minimum Kullback-Leibler (KL) divergence criterion. The frequency-domain Laplace method computes the maximum a posteriori (MAP) estimator for the spectral amplitude; correspondingly, the log-spectral-domain Laplace method computes the MAP estimator for the log-spectral amplitude. Further, gain and noise spectrum adaptation is implemented using the expectation-maximization (EM) algorithm within the GMM under the Gaussian approximation. The proposed algorithms are evaluated by applying them to speech corrupted by speech-shaped noise (SSN). The experimental results demonstrate that the proposed algorithms offer improved signal-to-noise ratio, a lower word recognition error rate, and less spectral distortion.
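In outline, with generic notation (these are the standard forms such estimators build on, not the paper's exact expressions): the prior on the log-spectrum x and the MAP estimate are

    p(x) = \sum_m w_m \, \mathcal{N}(x; \mu_m, \Sigma_m), \qquad \hat{x}_{\mathrm{MAP}} = \arg\max_x \, [\log p(y \mid x) + \log p(x)]

and the Laplace method approximates the posterior p(x | y) by a Gaussian centered at the mode, with covariance given by the inverse Hessian of the negative log-posterior at \hat{x}_{\mathrm{MAP}}.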

5.
Data available for many software engineering applications contains variability, and it is not possible to say in advance which variables help with prediction. Most work in software defect prediction focuses on selecting the best prediction technique; for this purpose, deep learning and ensemble models have shown promising results. In contrast, very little research deals with cleaning the training data and selecting the most informative variables. Data available for training may have high variability, and this variability can decrease model accuracy. To deal with this problem we used the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) to select the best variables for training the model. A simple artificial neural network (ANN) with one input layer, one output layer, and two hidden layers was used for training instead of a very deep and complex model. First, the candidate variables were narrowed down using correlation values. Then subsets for all possible variable combinations were formed. Finally, an ANN model was trained for each subset, and the best model was selected as the one with the smallest AIC and BIC values. It was found that the combination of only two variables, ns and entropy, is best for software defect prediction, as it gives the minimum AIC and BIC values, while nm and npt is the worst combination, giving the maximum AIC and BIC values.
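A minimal sketch of this selection loop (Python; X, y, and names are placeholders, and the complexity term k is simplified to the number of selected variables rather than the full count of ANN weights):

    import itertools
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import log_loss

    def select_subset(X, y, names, hidden=(8, 8)):
        """Exhaustive AIC/BIC search over variable subsets, each scored
        with a small two-hidden-layer ANN."""
        n, best = len(y), None
        for r in range(1, len(names) + 1):
            for cols in itertools.combinations(range(len(names)), r):
                model = MLPClassifier(hidden_layer_sizes=hidden, max_iter=2000)
                model.fit(X[:, list(cols)], y)
                # total log-likelihood of the fitted classifier
                ll = -log_loss(y, model.predict_proba(X[:, list(cols)]),
                               normalize=False)
                k = len(cols)
                aic = 2 * k - 2 * ll
                bic = k * np.log(n) - 2 * ll
                if best is None or (aic, bic) < best[:2]:
                    best = (aic, bic, [names[c] for c in cols])
        return best    # (min AIC, min BIC, chosen variable names)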

7.
Regression models are used in the geosciences to extrapolate data and identify significant predictors of a response variable. Criterion approaches based on the residual sum of squares (RSS), such as the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), the Deviance Information Criterion, or Mallows' Cp, can be used to compare non-nested models and identify an optimal subset of covariates. When the number of observations or candidate covariates is large, computational limitations arise both in comparing all possible combinations of the available covariates and in characterizing the covariance of the residuals for each examined model when the residuals are autocorrelated, as is often the case in spatial and temporal regression analysis. This paper presents computationally efficient algorithms for identifying the optimal model under any RSS-based model selection criterion. The proposed dual criterion optimal branch and bound (DCO B&B) algorithm is guaranteed to identify the optimal model, while a single criterion heuristic (SCH) B&B algorithm provides further computational savings and approximates the optimal solution. These algorithms are applicable both to multiple linear regression (MLR) and to response variables with correlated residuals. We also propose an approach to iterative model selection, where a single set of covariance parameters is used in each iteration rather than a different set for each examined model. Simulation experiments evaluate the performance of the algorithms using MLR and geostatistical regression as prototypical regression tools and BIC as a prototypical model selection criterion. Results show massive computational savings for the DCO B&B algorithm relative to an exhaustive search. The SCH B&B provides a good approximation of the optimal model in most cases, while the DCO B&B with iterative covariance parameter optimization yields the closest approximation to the plain DCO B&B algorithm while providing additional computational savings.
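A minimal branch-and-bound sketch of the pruning idea for an RSS-based criterion (Python, with BIC as the criterion; this illustrates the principle rather than reproducing the paper's DCO B&B or SCH algorithms). It relies on RSS being non-increasing as covariates are added, so the RSS of a node's full candidate set bounds the RSS anywhere in its subtree:

    import numpy as np

    def rss(X, y, cols):
        """OLS residual sum of squares for the selected columns plus intercept."""
        A = np.column_stack([np.ones(len(y))] + [X[:, j] for j in cols])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = y - A @ coef
        return float(r @ r)

    def bb_search(X, y, p):
        """Exact search over all subsets of p covariates, minimizing BIC."""
        n = len(y)
        best = [np.inf, None]

        def recurse(fixed, undecided):
            # Lower bound over the subtree: RSS can only shrink toward the
            # full candidate set, and the penalty is at least that of `fixed`.
            bound = (n * np.log(rss(X, y, fixed + undecided) / n)
                     + (len(fixed) + 1) * np.log(n))
            if bound >= best[0]:
                return                          # prune the whole subtree
            if not undecided:                   # leaf: the bound is exact here
                best[0], best[1] = bound, list(fixed)
                return
            j, rest = undecided[0], undecided[1:]
            recurse(fixed + [j], rest)          # branch: include covariate j
            recurse(fixed, rest)                # branch: exclude covariate j

        recurse([], list(range(p)))
        return best                             # [min BIC, selected columns]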

8.
In this paper, a two-dimensional heat diffusion system modelled by a partial differential equation (PDE) is considered. Finite-order approximations are constructed first by direct application of the standard finite difference (FD) approximation scheme. Using standard tools, the constructed FD approximate models are then reduced to computationally simpler models. Further, alternative approximate models are proposed using the asymptotic limits of the FD approximations. Numerical experiments suggest that the proposed alternative approximations are more accurate than the FD approximation.
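For context, a minimal Python sketch of the standard FD scheme such constructions start from (explicit five-point stencil for u_t = alpha (u_xx + u_yy); the grid size, alpha, and boundary conditions here are placeholders):

    import numpy as np

    def heat_step(u, alpha, dx, dt):
        """One explicit finite-difference step for u_t = alpha * (u_xx + u_yy)
        on a uniform grid with fixed (Dirichlet) boundary values.
        Stable for dt <= dx**2 / (4 * alpha)."""
        lap = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
               - 4.0 * u[1:-1, 1:-1]) / dx**2
        v = u.copy()
        v[1:-1, 1:-1] += dt * alpha * lap
        return v

    u = np.zeros((50, 50))
    u[20:30, 20:30] = 1.0                 # a hot patch as the initial condition
    for _ in range(500):
        u = heat_step(u, alpha=1.0, dx=0.02, dt=5e-5)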

9.
Bayesian approaches have been widely used in quantitative trait locus (QTL) linkage analysis in experimental crosses, and have advantages in interpretability and in constructing parameter probability intervals. Most existing Bayesian linkage methods involve Monte Carlo sampling, which is computationally prohibitive for high-throughput applications such as eQTL analysis. In this paper, we present a Bayesian linkage model that offers directly interpretable posterior densities or Bayes factors for linkage. For our model, we employ the Laplace approximation to integrate over nuisance parameters in backcross (BC) and F2 intercross designs. Our approach is highly accurate and very fast compared with alternatives, including grid-search integration, importance sampling, and Markov chain Monte Carlo (MCMC), making it suitable for high-throughput applications. Simulated and real datasets are used to demonstrate the proposed approach.
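The Laplace approximation referred to here has the standard form (stated generically; the paper applies it to integrate out nuisance parameters in the BC and F2 likelihoods):

    \int e^{n h(\theta)} \, d\theta \approx \left(\frac{2\pi}{n}\right)^{d/2} \left| -\nabla^2 h(\hat{\theta}) \right|^{-1/2} e^{n h(\hat{\theta})}

where \hat{\theta} maximizes h and d = dim(\theta); a single optimization plus one Hessian evaluation replaces the sampling loops of MCMC or importance sampling, which is the source of the speed-up.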

10.
M., J. Neurocomputing, 2008, 71(7-9): 1321-1329
The Bayesian information criterion (BIC) is widely used by the neural-network community for model selection tasks, although its convergence properties are not always theoretically established. In this paper we focus on estimating the number of components in a mixture of multilayer perceptrons and on proving the convergence of the BIC criterion in this framework. The penalized marginal likelihood for mixture models and hidden Markov models introduced by Keribin [Consistent estimation of the order of mixture models, Sankhya Indian J. Stat. 62 (2000) 49–66] and Gassiat [Likelihood ratio inequalities with applications to various mixtures, Ann. Inst. Henri Poincare 38 (2002) 897–906], respectively, is extended to mixtures of multilayer perceptrons, for which a penalized-likelihood criterion is proposed. We prove its convergence under hypotheses that essentially involve the bracketing entropy of the generalized score-function class, and illustrate it with numerical examples.

11.
An algorithm for automatic speaker segmentation based on the Bayesian information criterion (BIC) is presented. BIC tests are not performed for every window shift, as in previous approaches, but only when a speaker change is most probable. This is done by estimating the next probable change point using a model of utterance durations; the inverse Gaussian is found to fit the distribution of utterance durations best. As a result, fewer BIC tests are needed, making the proposed system less demanding in computation time and memory, and considerably more efficient with respect to missed speaker change points. A feature selection algorithm based on a branch-and-bound search strategy is applied to identify the most efficient features for speaker segmentation. Furthermore, a new theoretical formulation of BIC is derived by applying centering and simultaneous diagonalization; it is considerably more computationally efficient than the standard BIC when the covariance matrices are estimated by estimators other than the usual maximum-likelihood ones. Two commonly used pairs of figures of merit are employed and their relationship is established. Computational efficiency is achieved through the speaker-utterance modeling, whereas robustness is achieved by feature selection and by applying BIC tests at appropriately selected time instants. Experimental results indicate that the proposed modifications yield superior performance compared to existing approaches.
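For orientation, the classical BIC change test that such segmenters build on (a Python sketch of the Chen-Gopalakrishnan style delta-BIC, not the paper's centered and diagonalized reformulation):

    import numpy as np

    def delta_bic(X, Y, lam=1.0):
        """Compare one full-covariance Gaussian over X and Y jointly against
        separate Gaussians for X and Y; a positive value suggests a speaker
        change between the two windows."""
        Z = np.vstack([X, Y])
        n, d = Z.shape

        def logdet_cov(A):
            _, ld = np.linalg.slogdet(np.cov(A, rowvar=False))
            return ld

        penalty = 0.5 * lam * (d + d * (d + 1) / 2) * np.log(n)
        return (0.5 * n * logdet_cov(Z)
                - 0.5 * len(X) * logdet_cov(X)
                - 0.5 * len(Y) * logdet_cov(Y)
                - penalty)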

13.
Amari S, Park H, Ozeki T. Neural Computation, 2006, 18(5): 1007-1065
The parameter spaces of hierarchical systems such as multilayer perceptrons include singularities due to the symmetry and degeneration of hidden units. A parameter space forms a geometrical manifold, called the neuromanifold in the case of neural networks. Such a model is identified with a statistical model, and a Riemannian metric is given by the Fisher information matrix. However, the matrix degenerates at singularities. Such a singular structure is ubiquitous not only in multilayer perceptrons but also in Gaussian mixture probability densities, ARMA time-series models, and many other cases. The standard statistical paradigm of the Cramér-Rao theorem does not hold, and the singularity gives rise to strange behaviors in parameter estimation, hypothesis testing, Bayesian inference, model selection, and, in particular, the dynamics of learning from examples. Prevailing theories have paid little attention to the problems caused by singularity, relying only on ordinary statistical theory developed for regular (nonsingular) models. Only recently have researchers remarked on the effects of singularity, and theories are now being developed. This article gives an overview of the phenomena caused by the singularities of statistical manifolds related to multilayer perceptrons and Gaussian mixtures. We demonstrate our recent results on these problems. Simple toy models are also used to show explicit solutions. We explain that the maximum likelihood estimator is no longer Gaussian even asymptotically, because the Fisher information matrix degenerates; that model selection criteria such as AIC, BIC, and MDL fail to hold in these models; that a smooth Bayesian prior becomes singular in such models; and that the trajectories of the dynamics of learning are strongly affected by the singularity, causing plateaus or slow manifolds in the parameter space. The natural gradient method is shown to perform well because it takes the singular geometrical structure into account. The generalization error and the training error are studied in some examples.

14.
Variational methods for approximate Bayesian inference provide fast, flexible, deterministic alternatives to Monte Carlo methods. Unfortunately, unlike Monte Carlo methods, variational approximations cannot, in general, be made arbitrarily accurate. This paper develops grid-based variational approximations that endeavor to approximate marginal posterior densities in a spirit similar to the Integrated Nested Laplace Approximation (INLA) of Rue et al. (2009), but which may be applied in situations where INLA cannot be used. The method can greatly increase the accuracy of a base variational approximation, although not in general to arbitrary accuracy. The methodology is at least reasonably accurate on all of the examples considered in the paper.

15.
Variational learning for switching state-space models
We introduce a new statistical model for time series that iteratively segments data into regimes with approximately linear dynamics and learns the parameters of each of these linear regimes. This model combines and generalizes two of the most widely used stochastic time-series models, hidden Markov models and linear dynamical systems, and is closely related to models that are widely used in the control and econometrics literatures. It can also be derived by extending the mixture of experts neural network (Jacobs, Jordan, Nowlan, & Hinton, 1991) to its fully dynamical version, in which both expert and gating networks are recurrent. Inferring the posterior probabilities of the hidden states of this model is computationally intractable, so the exact expectation-maximization (EM) algorithm cannot be applied. However, we present a variational approximation that maximizes a lower bound on the log-likelihood and makes use of both the forward-backward recursions for hidden Markov models and the Kalman filter recursions for linear dynamical systems. We tested the algorithm on artificial data sets and on a natural data set of respiration force from a patient with sleep apnea. The results suggest that variational approximations are a viable method for inference and learning in switching state-space models.
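The lower bound maximized here has the generic variational form (standard; Q is a tractable distribution over the hidden state sequence S):

    \log p(Y) \geq E_Q[\log p(S, Y)] - E_Q[\log Q(S)] = \log p(Y) - \mathrm{KL}(Q(S) \,\|\, p(S \mid Y))

so maximizing the bound over Q is equivalent to minimizing the KL divergence between Q and the intractable exact posterior; choosing Q with a structured factorization is what allows the HMM forward-backward recursions and the Kalman filter recursions to be reused.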

17.
One of the most popular criteria for model selection is the Bayesian Information Criterion (BIC). It is based on an asymptotic approximation using Bayes' rule as the sample size tends to infinity with the dimension of the model held fixed. Although it works well in classical applications, it performs less satisfactorily in high-dimensional problems, i.e., when the number of regressors is very large compared to the sample size. For this reason, an alternative version of BIC has been proposed for the problem of mapping quantitative trait loci (QTLs) in genetics, where QTLs are located by model selection in the context of a regression model with an extremely large number of potential regressors. Since the assumption of normally distributed errors is often unrealistic in such settings, we extend the idea underlying the modified BIC to the context of robust regression.

18.
Markov chain Monte Carlo (MCMC) algorithms have greatly facilitated the popularity of Bayesian variable selection and model averaging in problems with high-dimensional covariates, where enumeration of the model space is infeasible. A variety of such algorithms have been proposed for sampling models from the posterior distribution in Bayesian variable selection. Ghosh and Clyde proposed a method that exploits the properties of orthogonal design matrices: their data augmentation algorithm dramatically speeds up computation compared to traditional Gibbs samplers and makes Rao-Blackwellized estimates of quantities of interest available for the original non-orthogonal problem. The algorithm performs excellently when the correlations among the columns of the design matrix are small, but empirical results suggest that moderate to strong multicollinearity leads to slow mixing. This motivates the development of a class of novel sandwich algorithms for Bayesian variable selection that improve upon the algorithm of Ghosh and Clyde. It is proved that the Haar algorithm with the largest group acting on the space of models is optimal within the parameter expansion-data augmentation (PXDA) class of sandwich algorithms. This result provides theoretical insight, but using the largest group is computationally prohibitive, so two new computationally viable sandwich algorithms are developed; they are inspired by the Haar algorithm but do not necessarily belong to the PXDA class. Simulation studies and real data analysis illustrate that several of the sandwich algorithms offer substantial gains in the presence of multicollinearity.

19.
A default strategy for fully Bayesian model determination for generalised linear mixed models (GLMMs) is considered, addressing the two key issues of default prior specification and computation. In particular, the concept of unit-information priors is extended to the parameters of a GLMM. A combination of Markov chain Monte Carlo (MCMC) and Laplace approximations is used to compute approximate posterior model probabilities and find a subset of models with high posterior probability. Bridge sampling is then used on the models in this subset to approximate their posterior probabilities more accurately. The strategy is applied to four examples.

20.
Exact calculation of model posterior probabilities or related quantities is often infeasible due to the analytical intractability of predictive densities. Here, new approximations for obtaining predictive densities are proposed and contrasted with those based on the Laplace method. Our theory and a numerical study indicate that the proposed methods are easy to implement, computationally efficient, and accurate over a wide range of hyperparameters. In the context of generalized linear models (GLMs), we show that they can be employed to facilitate posterior computation under three general classes of informative priors on regression coefficients. A real example demonstrates the feasibility and usefulness of the proposed methods in a fully Bayes variable selection procedure.

