20 similar documents retrieved (search time: 15 ms)
1.
Accurate information on patterns of introduction and spread of non-native species is essential for making predictions and management decisions. In many cases, estimating unknown rates of introduction and spread from observed data requires evaluating intractable variable-dimensional integrals. In general, inference on the large class of models containing latent variables of large or variable dimension precludes the use of exact sampling techniques. Approximate Bayesian computation (ABC) methods provide an alternative to exact sampling but rely on inefficient conditional simulation of the latent variables. To accomplish this task efficiently, a new transdimensional Monte Carlo sampler is developed for approximate Bayesian model inference and used to estimate rates of introduction and spread for the non-native earthworm species Dendrobaena octaedra (Savigny) along roads in the boreal forest of northern Alberta. Using low and high estimates of introduction and spread rates, the extent of earthworm invasions in northeastern Alberta is simulated to project the proportion of suitable habitat invaded in the year following data collection.
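For intuition, a minimal sketch of plain rejection ABC follows. The simulator, priors, summary statistic and tolerance are all hypothetical stand-ins, and the paper's transdimensional sampler is considerably more sophisticated, but the simulate-compare-accept core being approximated is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_invasion(intro_rate, spread_rate):
    # Hypothetical stand-in for an invasion simulator: returns a
    # summary statistic of the simulated invasion extent.
    n_sites = rng.poisson(intro_rate * 100)
    return n_sites + spread_rate * rng.standard_normal()

observed_summary = 42.0   # summary statistic computed from field data
epsilon = 2.0             # ABC tolerance
accepted = []

for _ in range(100_000):
    # Draw candidate rates from (assumed) uniform priors.
    intro, spread = rng.uniform(0, 1), rng.uniform(0, 5)
    sim_summary = simulate_invasion(intro, spread)
    # Keep the draw if the simulated summary is close to the observed one.
    if abs(sim_summary - observed_summary) < epsilon:
        accepted.append((intro, spread))

posterior = np.array(accepted)  # approximate posterior sample of the rates
```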
2.
Markov chain Monte Carlo (MCMC) algorithms have greatly facilitated the popularity of Bayesian variable selection and model averaging in problems with high-dimensional covariates, where enumeration of the model space is infeasible. A variety of such algorithms have been proposed in the literature for sampling models from the posterior distribution in Bayesian variable selection. Ghosh and Clyde proposed a method that exploits the properties of orthogonal design matrices. Their data augmentation algorithm speeds up the computation tremendously compared to traditional Gibbs samplers and yields Rao-Blackwellized estimates of quantities of interest for the original non-orthogonal problem. The algorithm performs excellently when the correlations among the columns of the design matrix are small, but empirical results suggest that moderate to strong multicollinearity leads to slow mixing. This motivates the development of a class of novel sandwich algorithms for Bayesian variable selection that improve upon the algorithm of Ghosh and Clyde. It is proved that, within the parameter expansion data augmentation (PXDA) class of sandwich algorithms, the Haar algorithm with the largest group acting on the space of models is optimal. The result provides theoretical insight, but using the largest group is computationally prohibitive, so two new computationally viable sandwich algorithms are developed; they are inspired by the Haar algorithm but do not necessarily belong to the class of PXDA algorithms. Simulation studies and real data analysis illustrate that several of the sandwich algorithms can offer substantial gains in the presence of multicollinearity.
3.
Learning structure from data is one of the fundamental tasks of Bayesian network research. In particular, learning the optimal structure of a Bayesian network is a non-deterministic polynomial...
4.
Bayesian neural networks are useful tools for estimating the functional structure of nonlinear systems. However, they suffer from several difficulties, such as controlling model complexity, long training times, efficient parameter estimation, random-walk behavior, and becoming stuck in local optima when the parameter space is high-dimensional. In this paper, a novel hybrid Bayesian learning procedure is proposed to alleviate these problems. The approach is based on full Bayesian learning and integrates Markov chain Monte Carlo procedures with genetic algorithms and fuzzy membership functions. In the application sections, the performance of the proposed approach is examined separately on nonlinear time series and regression analysis, and it is compared with traditional training techniques in terms of estimation and prediction ability.
5.
The outer layers of the Earth’s atmosphere are known as the ionosphere, a plasma of free electrons and positively charged atomic ions. The electron density of the ionosphere varies considerably with time of day, season, geographical location and the sun’s activity. Maps of electron density are required because local changes in this density can produce inaccuracies in the Navy Navigation Satellite System (NNSS) and Global Positioning System (GPS). Satellite-to-ground receiver measurements produce tomographic information about the density in the form of path-integrated snapshots of the total electron content, which must be inverted to generate electron density maps. A Bayesian approach is proposed for solving the inversion problem, using spatial priors in a parsimonious model for the variation of electron density with height. The Bayesian approach to modelling and inference provides estimates of electron density along with a measure of uncertainty for these estimates, leading to credible intervals for all quantities of interest. The standard parameterisation does not lend itself well to standard Metropolis-Hastings algorithms, so a much more efficient Markov chain Monte Carlo sampler is developed using a transformation of variables based on a principal components analysis of initial output.
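A minimal sketch of this reparameterisation idea on a toy two-dimensional target, assuming a generic log-posterior `log_post` (hypothetical): run a short pilot random-walk chain, then shape the second run's proposal with the pilot draws' covariance, whose eigendecomposition is precisely a principal components analysis of the initial output. This follows the general recipe in the abstract, not the paper's exact sampler.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(theta):
    # Hypothetical correlated Gaussian target standing in for the
    # electron-density posterior.
    cov = np.array([[1.0, 0.95], [0.95, 1.0]])
    return -0.5 * theta @ np.linalg.solve(cov, theta)

def rw_chain(start, step_cov, n):
    chol = np.linalg.cholesky(step_cov)
    draws, theta, lp = [], start, log_post(start)
    for _ in range(n):
        prop = theta + chol @ rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        draws.append(theta)
    return np.array(draws)

# Pilot run in the original parameterisation.
pilot = rw_chain(np.zeros(2), 0.1 * np.eye(2), 2000)

# The pilot covariance encodes the principal components and their
# scales; proposing with it is equivalent to stepping along the PCs.
emp_cov = np.cov(pilot, rowvar=False)
tuned = rw_chain(pilot[-1], (2.38**2 / 2) * emp_cov, 10_000)
```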
6.
Markov chain Monte Carlo (MCMC) techniques revolutionized statistical practice in the 1990s by providing an essential toolkit for making the rigor and flexibility of Bayesian analysis computationally practical. At the same time, the increasing prevalence of massive datasets and the expansion of the field of data mining have created the need for statistically sound methods that scale to these large problems. Except for the most trivial examples, current MCMC methods require a complete scan of the dataset for each iteration, ruling them out as feasible data mining techniques. In this article we present a method for making Bayesian analysis of massive datasets computationally feasible. The algorithm simulates from a posterior distribution that conditions on a smaller, more manageable portion of the dataset. The remainder of the dataset may be incorporated by reweighting the initial draws using importance sampling. Computation of the importance weights requires a single scan of the remaining observations. While importance sampling increases efficiency in data access, it comes at the expense of estimation efficiency. A simple modification, based on the rejuvenation step used in particle filters for dynamic systems models, sidesteps the loss of efficiency with only a slight increase in the number of data accesses. As proof of concept, we demonstrate the method on two examples. The first is a mixture of transition models that has been used to model web traffic and robotics. For this example we show that estimation efficiency is not affected while offering a 99% reduction in data accesses. The second example applies the method to Bayesian logistic regression and yields a 98% reduction in data accesses.
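A minimal sketch of the two-stage idea for Bayesian logistic regression, under a flat prior and with all sizes and tuning constants chosen arbitrarily: sample from the posterior conditioned on a small subset, then reweight the (thinned) draws by the likelihood of the remaining observations, which costs a single pass over those data. The rejuvenation step mentioned above is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic logistic-regression data standing in for a massive dataset.
n, d = 100_000, 3
X = rng.standard_normal((n, d))
beta_true = np.array([1.0, -2.0, 0.5])
y = rng.uniform(size=n) < 1 / (1 + np.exp(-X @ beta_true))

def loglik(beta, X, y):
    eta = X @ beta
    return np.sum(y * eta - np.log1p(np.exp(eta)))

# Stage 1: random-walk Metropolis on a small subset (flat prior, so the
# log posterior is just the subset log likelihood).
sub = slice(0, 1000)
draws, beta = [], np.zeros(d)
ll = loglik(beta, X[sub], y[sub])
for _ in range(5000):
    prop = beta + 0.05 * rng.standard_normal(d)
    ll_prop = loglik(prop, X[sub], y[sub])
    if np.log(rng.uniform()) < ll_prop - ll:
        beta, ll = prop, ll_prop
    draws.append(beta)
draws = np.array(draws[2500::25])  # discard burn-in, thin

# Stage 2: a single scan of the remaining data yields importance
# weights for all retained draws at once.
rest = slice(1000, n)
eta = X[rest] @ draws.T                                # one pass over data
logw = y[rest] @ eta - np.log1p(np.exp(eta)).sum(axis=0)
w = np.exp(logw - logw.max())
w /= w.sum()

posterior_mean = w @ draws  # reweighted estimate using the full dataset
```

If the subset posterior is a poor match for the full posterior, the weights degenerate onto a few draws; this is exactly the efficiency loss that the rejuvenation modification in the article is designed to repair.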
7.
Tatiana Miazhynskaia, Sylvia Frühwirth-Schnatter 《Computational statistics & data analysis》2006,51(3):2029-2042
Neural networks provide a tool for describing non-linearity in volatility processes of financial data and help to answer the question of “how much” non-linearity is present in the data. Non-linearity is studied under three different specifications of the conditional distribution: Gaussian, Student-t and mixture of Gaussians. To rank the volatility models, a Bayesian framework is adopted to perform Bayesian model selection within the different classes of models. In the empirical analysis, the return series of the Dow Jones Industrial Average, FTSE 100 and NIKKEI 225 indices over a period of 16 years are studied. The results show different behavior across the three markets. In general, if a statistical model accounts for non-normality and explains most of the fat tails in the conditional distribution, then there is less need for complex non-linear specifications.
8.
While latent variable models have been successfully applied in many fields and underpin various modeling techniques, their ability to incorporate categorical responses is hindered by the lack of accurate and efficient estimation methods. Approximation procedures, such as penalized quasi-likelihood, are computationally efficient, but the resulting estimators can be seriously biased for binary responses. Gauss-Hermite quadrature and Markov chain Monte Carlo (MCMC) integration-based methods can yield more accurate estimation, but they are computationally much more intensive. Estimation methods that achieve both computational efficiency and estimation accuracy are still under development. This paper proposes an efficient direct-sampling-based Monte Carlo EM algorithm (DSMCEM) for latent variable models with binary responses. Mixed effects and item factor analysis models with binary responses are used to illustrate the algorithm. Results from two simulation studies and a real data example suggest that, compared with MCMC-based EM, DSMCEM can significantly improve computational efficiency while producing equally accurate parameter estimates. Other aspects and extensions of the algorithm are discussed.
9.
This work presents the current state of the art in techniques for tracking a number of objects moving in a coordinated and interacting fashion. Groups are structured objects characterized by particular motion patterns. A group can consist of a small number of interacting objects (e.g. pedestrians, sport players, a convoy of cars) or of hundreds or thousands of components, such as crowds of people. Group object tracking is closely linked with extended object tracking, but at the same time has particular features that differentiate it from extended objects. Extended objects, such as in maritime surveillance, are characterized by their kinematic states and their size or volume. Both group and extended objects give rise to a varying number of measurements and require trajectory maintenance. Emphasis is given here to sequential Monte Carlo (SMC) methods and their variants. Methods for small groups and for large groups are presented, including Markov chain Monte Carlo (MCMC) methods, the random matrices approach and Random Finite Set Statistics methods. Efficient real-time implementations are discussed which are able to deal with the high dimensionality and provide high accuracy. Future trends and avenues are traced.
10.
Causal knowledge based on causal analysis can advance the quality of decision-making and thereby facilitate a process of transforming strategic objectives into effective actions. Several credible studies have emphasized the usefulness of causal analysis techniques. Partial least squares (PLS) path modeling is one of several popular causal analysis techniques. However, one difficulty often faced at the start of research is that the causal direction is unknown due to a lack of background knowledge. To address this difficulty, this paper proposes a method that links Bayesian networks and PLS path modeling for causal analysis. An empirical study is presented to illustrate the application of the proposed method. Based on the findings of this study, conclusions and implications for management are discussed.
11.
In this paper we present the results of a simulation study to explore the ability of Bayesian parametric and nonparametric models to provide an adequate fit to count data of the type that would routinely be analyzed parametrically either through fixed-effects or random-effects Poisson models. The context of the study is a randomized controlled trial with two groups (treatment and control). Our nonparametric approach uses several modeling formulations based on Dirichlet process priors. We find that the nonparametric models are able to flexibly adapt to the data, to offer rich posterior inference, and to provide, in a variety of settings, more accurate predictive inference than parametric models.
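As a sketch of the kind of nonparametric model involved, here is a truncated stick-breaking Gibbs sampler for a Dirichlet process mixture of Poisson distributions on simulated counts; the truncation level, priors and data are illustrative assumptions, not the models or settings used in the study.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(3)

# Synthetic counts from two latent groups (e.g. control vs. treatment).
y = np.concatenate([rng.poisson(2.0, 150), rng.poisson(7.0, 150)])
n, K, alpha, a, b = len(y), 15, 1.0, 1.0, 1.0  # K = truncation level

lam = rng.gamma(a, 1 / b, K)   # component rates, Gamma(a, b) prior
v = rng.beta(1, alpha, K)      # stick-breaking fractions

def stick_weights(v):
    # w_k = v_k * prod_{j<k} (1 - v_j); mass beyond K is ignored.
    return v * np.concatenate([[1.0], np.cumprod(1 - v[:-1])])

for _ in range(1000):          # blocked Gibbs sweeps
    w = stick_weights(v)
    # Sample component labels given weights and rates.
    logp = np.log(w) + poisson.logpmf(y[:, None], lam)
    p = np.exp(logp - logp.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    z = np.array([rng.choice(K, p=pi) for pi in p])
    # Conjugate updates for rates and stick fractions.
    for k in range(K):
        members = y[z == k]
        lam[k] = rng.gamma(a + members.sum(), 1 / (b + len(members)))
        v[k] = rng.beta(1 + len(members), alpha + np.sum(z > k))

occupied = np.unique(z).size   # posterior draw of the number of clusters
```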
12.
Qian Ren, Andrew O. Finley, James S. Hodges 《Computational statistics & data analysis》2011,55(12):3197-3217
With scientific data available at geocoded locations, investigators are increasingly turning to spatial process models for carrying out statistical inference. However, fitting spatial models often involves expensive matrix decompositions whose computational complexity increases in cubic order with the number of spatial locations. This situation is aggravated in Bayesian settings, where such computations are required at every iteration of the Markov chain Monte Carlo (MCMC) algorithm. In this paper, we describe the use of Variational Bayesian (VB) methods as an alternative to MCMC for approximating the posterior distributions of complex spatial models. Variational methods, which have been used extensively in Bayesian machine learning for several years, provide a lower bound on the marginal likelihood that can be computed efficiently. We provide results for the variational updates in several models, especially emphasizing their use in multivariate spatial analysis. We demonstrate estimation and model comparison with VB methods using simulated data as well as environmental data sets, and compare them with inference from MCMC.
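The variational idea is easiest to see away from spatial models. Below is a standard coordinate-ascent sketch for a deliberately simple conjugate model, a univariate Gaussian with unknown mean and precision; these are the textbook updates for that model and stand in only for the flavor of the spatial-model updates derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(3.0, 2.0, 500)                 # simulated observations
n, xbar = len(x), x.mean()

# Priors: mu | tau ~ N(mu0, 1/(k0*tau)), tau ~ Gamma(a0, b0).
mu0, k0, a0, b0 = 0.0, 1.0, 1.0, 1.0

# Factorised posterior q(mu) q(tau); iterate coordinate-ascent updates.
E_tau = a0 / b0
for _ in range(50):
    # q(mu) = N(mu_n, 1/lam_n)
    mu_n = (k0 * mu0 + n * xbar) / (k0 + n)
    lam_n = (k0 + n) * E_tau
    # q(tau) = Gamma(a_n, b_n)
    a_n = a0 + (n + 1) / 2
    E_sq = np.sum((x - mu_n) ** 2) + n / lam_n
    b_n = b0 + 0.5 * (E_sq + k0 * ((mu_n - mu0) ** 2 + 1 / lam_n))
    E_tau = a_n / b_n

print(mu_n, np.sqrt(1 / lam_n))  # approximate posterior mean and sd of mu
```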
13.
Monte Carlo (MC) methods are widely used for Bayesian inference and optimization in statistics, signal processing and machine learning. A well-known class of MC methods is the family of Markov chain Monte Carlo (MCMC) algorithms. In order to foster better exploration of the state space, especially in high-dimensional applications, several schemes employing multiple parallel MCMC chains have recently been introduced. In this work, we describe a novel parallel interacting MCMC scheme, called orthogonal MCMC (O-MCMC), in which a set of “vertical” parallel MCMC chains share information using “horizontal” MCMC techniques that work on the entire population of current states. More specifically, the vertical chains are driven by random-walk proposals, whereas the horizontal MCMC techniques employ independent proposals, thus allowing an efficient combination of global exploration and local approximation. The interaction is contained in these horizontal iterations. Within the analysis of different implementations of O-MCMC, novel schemes to reduce the overall computational cost of parallel Multiple Try Metropolis (MTM) chains are also presented. Furthermore, a modified version of O-MCMC for optimization is provided by considering parallel Simulated Annealing (SA) algorithms. Numerical results show the advantages of the proposed sampling scheme in terms of estimation efficiency, as well as robustness with respect to initial values and the choice of parameters.
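A minimal population sketch in the same vertical/horizontal spirit, not the paper's MTM-based scheme: each chain takes random-walk Metropolis steps (vertical), and periodically attempts an independence step whose Gaussian proposal is fitted to the other chains' current states (horizontal). The bimodal target, chain count and schedule are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(5)

def log_target(x):
    # Bimodal toy target; hard for a single random-walk chain.
    return np.logaddexp(
        multivariate_normal.logpdf(x, mean=[-4, -4]),
        multivariate_normal.logpdf(x, mean=[4, 4]),
    )

N, T, step = 8, 3000, 0.5                      # chains, sweeps, RW scale
x = rng.standard_normal((N, 2)) * 6            # dispersed starting points
lp = np.array([log_target(xi) for xi in x])

for t in range(T):
    # "Vertical" moves: ordinary random-walk Metropolis per chain.
    for i in range(N):
        prop = x[i] + step * rng.standard_normal(2)
        lpp = log_target(prop)
        if np.log(rng.uniform()) < lpp - lp[i]:
            x[i], lp[i] = prop, lpp
    # "Horizontal" move every 10 sweeps: independence Metropolis with a
    # Gaussian proposal fitted to the *other* chains' current states.
    if t % 10 == 0:
        for i in range(N):
            others = np.delete(x, i, axis=0)
            mu = others.mean(0)
            cov = np.cov(others, rowvar=False) + 0.1 * np.eye(2)
            prop = rng.multivariate_normal(mu, cov)
            lpp = log_target(prop)
            lq_prop = multivariate_normal.logpdf(prop, mu, cov)
            lq_curr = multivariate_normal.logpdf(x[i], mu, cov)
            if np.log(rng.uniform()) < lpp - lp[i] + lq_curr - lq_prop:
                x[i], lp[i] = prop, lpp
```

The independence move lets well-placed chains pull stragglers across modes, which is the benefit the vertical/horizontal combination targets.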
14.
Association Models for Web Mining
We describe how statistical association models and, specifically, graphical models, can be usefully employed to model web mining data. We describe some methodological problems related to the implementation of discrete graphical models for web mining data. In particular, we discuss model selection procedures.
15.
Frank Dellaert, Steven M. Seitz, Charles E. Thorpe, Sebastian Thrun 《Machine Learning》2003,50(1-2):45-71
Learning spatial models from sensor data raises the challenging data association problem of relating model parameters to individual measurements. This paper proposes an EM-based algorithm which solves the model learning and data association problems in parallel. The algorithm is developed in the context of the structure from motion problem, which is the problem of estimating a 3D scene model from a collection of image data. To accommodate the spatial constraints in this domain, we compute virtual measurements as sufficient statistics to be used in the M-step. We develop an efficient Markov chain Monte Carlo sampling method called chain flipping to calculate these statistics in the E-step. Experimental results show that we can solve hard data association problems when learning models of 3D scenes, and that we can do so efficiently. We conjecture that this approach can be applied to a broad range of model learning problems from sensor data, such as the robot mapping problem.
16.
The goal of subspace clustering is to group a given set of data points over different subsets of features. This unsupervised learning method attempts to discover similar patterns in the data under different representations, and has attracted considerable attention and research in related fields. We first extend the mean- and variance-shift model proposed by Hoff into a new nonparametric clustering model based on feature subsets, whose advantage is that variational Bayesian methods can be applied to learn the model parameters. This model combines the Dirichlet process mixture model with a nonparametric model for selecting feature subsets, and can automatically choose the number of clusters while performing subspace clustering. We then give a Markov chain Monte Carlo algorithm for posterior inference of the parameters. For reasons of computational speed, we propose applying variational Bayesian methods to learn the model parameters. Experimental results on simulated data and an application to face clustering show that the model can simultaneously select the relevant features and the data points that share similar patterns on those features. Applying the sampling-free variational Bayesian method to the UCI multiple-features database, the experimental results show that this method infers the model parameters quickly.
17.
Efficient and accurate Bayesian Markov chain Monte Carlo methodology is proposed for the estimation of event rates under an overdispersed Poisson distribution. An approximate Gibbs sampling method and an exact independence-type Metropolis-Hastings algorithm are derived, based on a log-normal/gamma mixture density that closely approximates the conditional distribution of the Poisson parameters. This involves a moment matching process, with the exact conditional moments obtained using an entropy distance minimisation (Kullback-Leibler divergence) criterion. A simulation study demonstrates good Bayes risk properties and robust performance for the proposed estimators, as compared with other estimating approaches under various loss functions. Actuarial data on insurance claims are used to illustrate the methodology. The approximate analysis displays superior Markov chain Monte Carlo mixing efficiency, whilst providing almost identical inferences to those obtained with exact methods.
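For intuition, a sketch of plain moment matching between a gamma and a log-normal density: equating the first two moments fixes the log-normal parameters. The paper instead obtains the exact conditional moments via a Kullback-Leibler (entropy distance) criterion, so this is only the simplest version of the idea.

```python
import numpy as np

def lognormal_match(shape, rate):
    """Log-normal (mu, sigma^2) with the same mean and variance as a
    Gamma(shape, rate) density -- plain first-two-moments matching."""
    m = shape / rate                 # gamma mean
    v = shape / rate**2              # gamma variance
    sigma2 = np.log1p(v / m**2)      # solve var identity for sigma^2
    mu = np.log(m) - 0.5 * sigma2    # then the mean identity for mu
    return mu, sigma2

mu, sigma2 = lognormal_match(shape=8.0, rate=2.0)
```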
18.
In recent years several approaches have been proposed to overcome the multiple-minima problem associated with nonlinear optimization techniques used in the analysis of molecular conformations. One such technique, based on a parallel Monte Carlo search algorithm, is analyzed here. Experiments on the Intel iPSC/2 confirm that the attainable parallelism is limited by the underlying acceptance rate in the Monte Carlo search. It is proposed that optimal performance can be achieved in combination with vector processing. Tests on both the IBM 3090 and Intel iPSC/2-VX indicate that vector performance is related to molecule size and vector pipeline latency.
19.
In blind source separation, there are M sources that produce sounds independently and continuously over time. These sounds are then recorded by m receivers. The sound recorded by each receiver at each time point is a linear superposition of the sounds produced by the M sources at the same time point. The problem of blind source separation is to recover the sounds of the sources from the sounds recorded by the receivers, without knowledge of the m×M mixing matrix that transforms the sounds of the sources into the sounds of the receivers at each time point. Over-complete separation refers to the situation where the number of sources M is greater than the number of receivers m, so that the source sounds cannot be uniquely recovered from the receiver sounds even if the mixing matrix is known. In this paper, we propose a null space representation for the over-complete blind source separation problem. This representation explicitly identifies the solution space of the source sounds in terms of the null space of the mixing matrix, using the singular value decomposition. Under this representation, the problem can be posed in the framework of a Bayesian latent variable model, where the mixing matrix and the source sounds can be inferred from their posterior distributions. We then propose a null space algorithm for Markov chain Monte Carlo posterior sampling. We illustrate the algorithm with several examples under two different statistical assumptions about the independent source sounds. The blind source separation problem is mathematically equivalent to the independent component analysis problem, so our method can equally be applied to over-complete independent component analysis for unsupervised learning of high-dimensional data.
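A minimal numerical sketch of the null space representation, with a hypothetical 2-receiver/3-source mixing matrix: the SVD splits every exact solution into a particular (minimum-norm) part plus a free part lying in the null space of the mixing matrix, which is the space the Bayesian model explores.

```python
import numpy as np

rng = np.random.default_rng(6)

m, M, T = 2, 3, 1000                 # receivers, sources, time points
A = rng.standard_normal((m, M))      # unknown in practice; known here for demo
S = rng.laplace(size=(M, T))         # source signals
X = A @ S                            # receiver recordings

# SVD of the mixing matrix exposes the null space directly.
U, sing, Vt = np.linalg.svd(A)
r = np.sum(sing > 1e-10)             # rank of A
N = Vt[r:].T                         # columns span null(A), shape (M, M - r)

S_particular = np.linalg.pinv(A) @ X # minimum-norm solution
# Every exact solution has the form S_particular + N @ Z for some Z;
# a Bayesian model places a prior on the sources and infers Z (and A)
# from the posterior.
Z = rng.standard_normal((M - r, T))
S_alt = S_particular + N @ Z
assert np.allclose(A @ S_alt, X)     # still reproduces the recordings
```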
20.
Sławomir Lasota, Wojciech Niemiro 《Pattern recognition》2003,36(4):931-941
An algorithm for restoration of images degraded by Poisson noise is proposed. The algorithm belongs to the family of Markov chain Monte Carlo methods with auxiliary variables. We explicitly use the fact that medical images consist of finitely many, often relatively few, grey levels. The continuous scale of grey levels is discretized in an adaptive way, so that a straightforward application of the Swendsen-Wang algorithm (Phys. Rev. Lett. 58 (1987) 86) becomes possible. The partial decoupling method due to Higdon (J. Am. Statist. Assoc. 93 (1998) 442, 585) is also incorporated into the algorithm. Simulation results suggest that the algorithm is reliable and efficient.