1.
Generalized exponential distribution: Bayesian estimations
Recently, the two-parameter generalized exponential distribution was introduced by the authors. In this paper we consider the Bayes estimators of the unknown parameters under the assumption of gamma priors on both the shape and scale parameters. The Bayes estimators cannot be obtained in explicit form. Approximate Bayes estimators are computed using Lindley's approximation. We also propose a Gibbs sampling procedure to generate samples from the posterior distributions and in turn compute the Bayes estimators. The approximate Bayes estimators obtained under non-informative priors are compared with the maximum likelihood estimators using Monte Carlo simulations. One real data set is analyzed for illustrative purposes.
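The abstract's Lindley/Gibbs machinery is specific to the paper, but the general task (sampling the posterior of the generalized exponential parameters under gamma priors) can be sketched with a generic random-walk Metropolis sampler. Everything below (sample size, prior hyperparameters a = b = 1, step size) is an illustrative assumption, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate data from a generalized exponential GE(alpha, lam):
# F(x) = (1 - exp(-lam*x))**alpha  =>  x = -log(1 - u**(1/alpha)) / lam
alpha_true, lam_true = 2.0, 1.5
u = rng.uniform(size=300)
x = -np.log(1.0 - u ** (1.0 / alpha_true)) / lam_true

def log_post(alpha, lam, a=1.0, b=1.0):
    """Log posterior with independent Gamma(a, b) priors on alpha and lam."""
    if alpha <= 0 or lam <= 0:
        return -np.inf
    loglik = (len(x) * (np.log(alpha) + np.log(lam)) - lam * x.sum()
              + (alpha - 1) * np.log1p(-np.exp(-lam * x)).sum())
    logprior = (a - 1) * np.log(alpha) - b * alpha + (a - 1) * np.log(lam) - b * lam
    return loglik + logprior

# Random-walk Metropolis on (alpha, lam)
theta = np.array([1.0, 1.0])
lp = log_post(*theta)
draws = []
for _ in range(6000):
    prop = theta + 0.1 * rng.standard_normal(2)
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    draws.append(theta)
draws = np.array(draws)[1000:]          # discard burn-in
alpha_bayes, lam_bayes = draws.mean(axis=0)
```

With a few hundred observations the posterior means should land near the generating values, which is the comparison the paper's simulations formalize.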

2.
In Bayesian analysis with objective priors, it should be verified that the posterior distribution is proper. In this paper, we show that the reference prior (or independence Jeffreys prior) for the two-parameter Birnbaum-Saunders distribution results in an improper posterior distribution. However, the posterior distributions are proper under reference priors with partial information (RPPI). Based on censored samples, slice sampling is used to obtain the Bayesian estimators under RPPI. Monte Carlo simulations are used to compare the efficiencies of different RPPIs, to assess sensitivity to the choice of prior, and to compare the Bayesian estimators with the maximum likelihood estimators across various sample sizes and degrees of censoring. A real data set is analyzed for illustrative purposes.
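Slice sampling, used in the paper to draw from the RPPI posteriors, is easy to sketch in one dimension. Below is a generic stepping-out slice sampler in the style of Neal (2003), demonstrated on a standard normal target rather than the Birnbaum-Saunders posterior; the target, step width, and sample count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def slice_sample(logf, x0, n, w=1.0, m=50):
    """1-D slice sampler with stepping-out and shrinkage."""
    samples = np.empty(n)
    x = x0
    for i in range(n):
        logy = logf(x) + np.log(rng.uniform())   # vertical level under the density
        # Step out to bracket the slice {x : logf(x) > logy}
        L = x - w * rng.uniform()
        R = L + w
        j = int(m * rng.uniform())
        k = m - 1 - j
        while j > 0 and logf(L) > logy:
            L -= w; j -= 1
        while k > 0 and logf(R) > logy:
            R += w; k -= 1
        # Shrink the bracket until a point inside the slice is found
        while True:
            x1 = L + (R - L) * rng.uniform()
            if logf(x1) > logy:
                x = x1
                break
            if x1 < x:
                L = x1
            else:
                R = x1
        samples[i] = x
    return samples

# Sample a standard normal target (log density up to a constant)
s = slice_sample(lambda t: -0.5 * t * t, 0.0, 20000)
```

The same routine, applied coordinate-wise to the log posterior, is how slice sampling is typically used inside a Gibbs scheme.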

3.
We apply the idea of averaging ensembles of estimators to probability density estimation. In particular, we use Gaussian mixture models, which are important components of many neural-network applications. We investigate the performance of averaging on three data sets. For comparison, we employ two traditional regularization approaches: a maximum penalized likelihood approach and a Bayesian approach. In the maximum penalized likelihood approach we use penalty functions derived from conjugate Bayesian priors, so that an expectation-maximization (EM) algorithm can be used for training. In all experiments, the maximum penalized likelihood approach and averaging improved performance considerably compared with a maximum likelihood approach. In two of the experiments, the maximum penalized likelihood approach outperformed averaging; in one experiment averaging was clearly superior. Our conclusion is that maximum penalized likelihood gives good results if the penalty term in the cost function is appropriate for the particular problem. If this is not the case, averaging is superior, since it shows greater robustness by not relying on any particular prior assumption. The Bayesian approach worked very well on a low-dimensional toy problem but failed to give good performance in higher-dimensional problems.
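The averaging idea can be made concrete: fit several Gaussian mixture models by EM on bootstrap resamples and average the resulting densities. The code below is a minimal 1-D sketch with assumed data and ensemble size, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(2)

def em_gmm_1d(x, k=2, iters=100):
    """Plain EM for a 1-D Gaussian mixture; returns (weights, means, variances)."""
    n = len(x)
    mu = rng.choice(x, k)
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities
        d = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = w * d
        r /= r.sum(axis=1, keepdims=True)
        # M-step
        nk = r.sum(axis=0)
        w = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return w, mu, var

def density(grid, params):
    w, mu, var = params
    return (w * np.exp(-0.5 * (grid[:, None] - mu) ** 2 / var)
            / np.sqrt(2 * np.pi * var)).sum(axis=1)

# Data from a two-component mixture
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 300)])
grid = np.linspace(-8, 8, 400)

# Average the densities of an ensemble fit on bootstrap resamples
fits = [em_gmm_1d(rng.choice(x, len(x), replace=True)) for _ in range(10)]
avg_density = np.mean([density(grid, p) for p in fits], axis=0)
```

Because each ensemble member is a proper density, the average is one too, which is what makes density averaging a valid (and often variance-reducing) estimator.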

4.
Efficient Markov chain Monte Carlo methods for decoding neural spike trains
Stimulus reconstruction or decoding methods provide an important tool for understanding how sensory and motor information is represented in neural activity. We discuss Bayesian decoding methods based on an encoding generalized linear model (GLM) that accurately describes how stimuli are transformed into the spike trains of a group of neurons. The form of the GLM likelihood ensures that the posterior distribution over the stimuli that caused an observed set of spike trains is log concave so long as the prior is. This allows the maximum a posteriori (MAP) stimulus estimate to be obtained using efficient optimization algorithms. Unfortunately, the MAP estimate can have a relatively large average error when the posterior is highly nongaussian. Here we compare several Markov chain Monte Carlo (MCMC) algorithms that allow for the calculation of general Bayesian estimators involving posterior expectations (conditional on model parameters). An efficient version of the hybrid Monte Carlo (HMC) algorithm was significantly superior to other MCMC methods for gaussian priors. When the prior distribution has sharp edges and corners, on the other hand, the "hit-and-run" algorithm performed better than other MCMC methods. Using these algorithms, we show that for this latter class of priors, the posterior mean estimate can have a considerably lower average error than MAP, whereas for gaussian priors, the two estimators have roughly equal efficiency. We also address the application of MCMC methods for extracting nonmarginal properties of the posterior distribution. For example, by using MCMC to calculate the mutual information between the stimulus and response, we verify the validity of a computationally efficient Laplace approximation to this quantity for gaussian priors in a wide range of model parameters; this makes direct model-based computation of the mutual information tractable even in the case of large observed neural populations, where methods based on binning the spike train fail. Finally, we consider the effect of uncertainty in the GLM parameters on the posterior estimators.
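The "hit-and-run" sampler mentioned above is straightforward to sketch for a target with sharp edges. The following is a minimal, hypothetical illustration (not the paper's implementation): sample uniformly from a box by repeatedly choosing a random direction and jumping to a uniform point on the chord the box cuts from that direction.

```python
import numpy as np

rng = np.random.default_rng(3)

def hit_and_run_box(n, lo=0.0, hi=1.0, dim=2):
    """Hit-and-run sampling from the uniform distribution on a box.

    Each step: draw a random direction, find the chord of the box along
    that direction through the current point, then jump to a uniformly
    chosen point on the chord.
    """
    x = np.full(dim, 0.5 * (lo + hi))
    out = np.empty((n, dim))
    for i in range(n):
        d = rng.standard_normal(dim)
        d /= np.linalg.norm(d)
        # For each coordinate, the step t must keep lo <= x + t*d <= hi
        with np.errstate(divide="ignore"):
            t1 = (lo - x) / d
            t2 = (hi - x) / d
        t_lo = np.max(np.minimum(t1, t2))
        t_hi = np.min(np.maximum(t1, t2))
        x = x + rng.uniform(t_lo, t_hi) * d
        out[i] = x
    return out

s = hit_and_run_box(20000)
```

Unlike a random-walk proposal, every move stays inside the constraint set by construction, which is why hit-and-run copes well with edges and corners.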

5.
A finite mixture of gamma distributions [Finite mixture of certain distributions. Comm. Statist. Theory Methods 31(12), 2123-2137] is used as a conjugate prior, which yields a posterior of convenient form. This class of conjugate priors is more flexible than the class of gamma priors. The usefulness of the mixture-gamma prior and the resulting posterior for the uncertain rate parameter λ of the Poisson distribution is illustrated using a Markov chain Monte Carlo (Gibbs sampling) approach on hierarchical models. Using the generalized hypergeometric function, a method to approximate the maximum likelihood estimators of the parameters of the Agarwal and Al-Saleh [Generalized gamma type distribution and its hazard rate function. Comm. Statist. Theory Methods 30(2), 309-318] generalized gamma-type distribution is also suggested.
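The key computational point (a mixture-of-gammas prior stays conjugate for a Poisson rate, with the posterior weights updated by the component marginal likelihoods) can be sketched directly; no MCMC is needed in this reduced one-parameter case. All hyperparameter and data values below are illustrative assumptions:

```python
import numpy as np
from math import lgamma

def poisson_gamma_mixture_posterior(x, w, a, b):
    """Posterior of a Poisson rate under a finite mixture-of-gammas prior.

    Prior: sum_k w[k] * Gamma(a[k], b[k])  (shape/rate parameterization).
    The posterior is again a gamma mixture with shapes a+s, rates b+n,
    and weights proportional to w[k] times the component marginal likelihood.
    """
    n, s = len(x), sum(x)
    a, b, w = np.asarray(a, float), np.asarray(b, float), np.asarray(w, float)
    # Component-wise log marginal likelihood (the x! terms cancel in the weights)
    logm = (a * np.log(b) - np.vectorize(lgamma)(a)
            + np.vectorize(lgamma)(a + s) - (a + s) * np.log(b + n))
    logw = np.log(w) + logm
    w_post = np.exp(logw - logw.max())
    w_post /= w_post.sum()
    return w_post, a + s, b + n

# Example: two-component prior (means 1 and 10), data with sample mean 4
x = [3, 5, 4, 6, 2, 4, 5, 3]
w_post, a_post, b_post = poisson_gamma_mixture_posterior(
    x, w=[0.5, 0.5], a=[2.0, 20.0], b=[2.0, 2.0])
post_mean = (w_post * a_post / b_post).sum()
```

The flexibility claim in the abstract corresponds to the data reweighting the mixture components: the posterior can shift mass between quite different prior beliefs about λ.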

6.
We describe approaches to positive data modeling and classification using both finite inverted Dirichlet mixture models and support vector machines (SVMs). Inverted Dirichlet mixture models are used to tackle an outstanding challenge in SVMs, namely the generation of accurate kernels. The kernel-generation approaches we consider, grounded in ideas from information theory, allow the incorporation of the data's structure and its structural constraints. Inverted Dirichlet mixture models are learned within a principled Bayesian framework, using both a Gibbs sampler and Metropolis-Hastings for parameter estimation, and Bayes factors for model selection (i.e., determining the number of mixture components). Our Bayesian learning approach derives priors over the model parameters by showing that the inverted Dirichlet distribution belongs to the exponential family, and then combines these priors with information from the data to build posterior distributions. We illustrate the merits and effectiveness of the proposed method on two challenging real-world applications, namely object detection and visual scene analysis and classification.

7.
The generalized Gamma model is a recently proposed speech distribution model that offers better generality and flexibility than the traditional Gaussian or super-Gaussian models. We propose a speech enhancement algorithm based on the generalized Gamma speech model and a speech presence probability correction. Assuming that the magnitude spectral coefficients of speech and noise follow generalized Gamma and Gaussian distributions, respectively, we derive the minimum mean-square error estimator of the log-spectrum of the speech signal; under the same model we further derive the speech presence probability and use it to correct the minimum mean-square error estimate. Simulation results show that, compared with traditional short-time spectral estimation algorithms, the proposed algorithm not only further improves the signal-to-noise ratio of the enhanced speech but also effectively reduces its distortion and improves its subjective perceptual quality.

8.
This paper discusses Bayesian inference for accelerated life tests (ALT) in the presence of competing failure causes. The time to failure due to a specific cause is described by a Weibull distribution. A two-stage approach is used to estimate the model parameters. In the first stage, we use Bayesian methods to estimate the parameters of the component lifetime distribution: two noninformative priors (the Jeffreys prior and the reference prior) are derived for the ALT setting, and Gibbs sampling procedures based on these priors are presented to obtain posterior estimates of the parameters. In addition, to overcome the problem of improper posterior densities under some conditions, we modify the likelihood function to make the posterior densities proper. In the second stage, the parameters of the acceleration function are obtained by least squares. A numerical example demonstrates the effectiveness of the method, and a real data set from Nelson (1990) is analyzed.

9.
In this paper, we present an incremental method for model selection and learning of Gaussian mixtures based on the recently proposed variational Bayes approach. The method adds components to the mixture using a Bayesian splitting test procedure: a component is split into two components, and the variational update equations are then applied only to the parameters of those two components. As a result, either both components are retained in the model or one of them is found to be redundant and is eliminated. In our approach, the model selection problem is treated locally, in a region of the data space, so we can set more informative priors based on the local data distribution. A modified Bayesian mixture model is presented to implement this approach, along with a learning algorithm that iteratively applies a splitting test to each mixture component. Experimental results and comparisons with two other techniques testify to the adequacy of the proposed approach.

10.
In some biological experiments, it is quite common that laboratory subjects differ in their patterns of susceptibility to a treatment. Finite mixture models are useful in those situations. In this paper we model the number of components and the component parameters jointly, and base inference about these quantities on their posterior probabilities, making use of the reversible jump Markov chain Monte Carlo methods. In particular, we apply the methodology to the analysis of univariate normal mixtures with multidimensional parameters, using a hierarchical prior model that allows weak priors while avoiding improper priors in the mixture context. The practical significance of the proposed method is illustrated with a dose-response data set.

11.
In this article we consider statistical inference for the unknown parameters of a Weibull distribution when the data are Type-I censored. It is well known that the maximum likelihood estimators do not always exist, and even when they exist, they do not have explicit expressions. We propose a simple fixed-point algorithm to compute the maximum likelihood estimators when they exist. We also propose approximate maximum likelihood estimators of the unknown parameters, which have explicit forms. We construct confidence intervals for the unknown parameters using the asymptotic distribution and also the bootstrap. Bayes estimates and the corresponding highest posterior density credible intervals of the unknown parameters are also obtained under fairly general priors. The Bayes estimates cannot be obtained explicitly, so we propose to use the Gibbs sampling technique to compute them and to construct the highest posterior density credible intervals. The different methods are compared by Monte Carlo simulations, and one real data set is analyzed for illustrative purposes.
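A fixed-point iteration for the Weibull shape parameter under Type-I censoring follows from the likelihood equations: setting the score to zero gives 1/k = Σ tᵢᵏ ln tᵢ / Σ tᵢᵏ − (1/r) Σ_failures ln tᵢ (sums over all observations, censored ones recorded at the censoring time, with r the number of failures), after which the scale has a closed form. The sketch below is one plausible implementation with simulated data; the paper's exact algorithm may differ:

```python
import numpy as np

rng = np.random.default_rng(4)

def weibull_mle_type1(t, delta, k0=1.0, tol=1e-8, max_iter=500):
    """Fixed-point iteration for the Weibull (shape k, scale lam) MLE
    under Type-I censoring.  t: observed times (censored observations
    recorded at the censoring time); delta: 1 = failure, 0 = censored.
    """
    r = delta.sum()          # number of observed failures
    logt = np.log(t)
    k = k0
    for _ in range(max_iter):
        tk = t ** k
        k_new = 1.0 / (np.sum(tk * logt) / np.sum(tk)
                       - np.sum(delta * logt) / r)
        if abs(k_new - k) < tol:
            k = k_new
            break
        k = k_new
    lam = (np.sum(t ** k) / r) ** (1.0 / k)   # closed-form scale given k
    return k, lam

# Simulate Weibull(shape 1.5, scale 2.0) lifetimes, Type-I censored at tau = 3.0
x = 2.0 * rng.weibull(1.5, size=5000)
tau = 3.0
t = np.minimum(x, tau)
delta = (x <= tau).astype(float)
k_hat, lam_hat = weibull_mle_type1(t, delta)
```

With a large simulated sample the iteration should recover the generating shape and scale closely; the iteration only makes sense when at least one failure is observed, matching the existence caveat in the abstract.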

12.
The so-called posterior probability estimator, e, formed by averaging the minimum of the posterior probabilities over a set of initial or additional observations (which need not be classified), is considered in the context of estimating the overall actual error rate for the linear discriminant function appropriate for two multivariate normal populations with a common covariance matrix. The bias of e is examined by deriving asymptotic approximations under three different models: the normal, logistic, and mixture models. The properties of e are investigated further by a series of simulation experiments for the logistic and mixture models, for which there are few other available estimators.

13.
A stochastic search variable selection approach is proposed for Bayesian model selection in binary and tobit quantile regression. A simple and efficient Gibbs sampling algorithm is developed for posterior inference using a location-scale mixture representation of the asymmetric Laplace distribution. The proposed approach is illustrated via five simulated examples and two real data sets. Results show that the proposed method performs very well under a variety of scenarios, such as the presence of a moderately large number of covariates, collinearity and heterogeneity.
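The location-scale mixture representation of the asymmetric Laplace distribution, which is what makes the Gibbs sampler conjugate, is easy to verify numerically: simulating y = μ + θz + ψ√z·u with z ~ Exp(1) and u ~ N(0,1) should place probability exactly τ below μ. The constants follow the standard representation used in Bayesian quantile regression; the sample size is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_ald(n, mu=0.0, tau=0.3):
    """Draw from the asymmetric Laplace distribution ALD(mu, 1, tau)
    via its normal location-scale mixture representation:
        y = mu + theta*z + psi*sqrt(z)*u,  z ~ Exp(1), u ~ N(0, 1),
    with theta = (1-2*tau)/(tau*(1-tau)) and psi^2 = 2/(tau*(1-tau)).
    Conditional on z, y is Gaussian, which is what the Gibbs sampler exploits.
    """
    theta = (1 - 2 * tau) / (tau * (1 - tau))
    psi = np.sqrt(2 / (tau * (1 - tau)))
    z = rng.exponential(1.0, n)
    u = rng.standard_normal(n)
    return mu + theta * z + psi * np.sqrt(z) * u

y = sample_ald(200000, mu=1.0, tau=0.3)
# Defining property: mu is the tau-quantile of the ALD
prop_below = np.mean(y < 1.0)
```

In a quantile regression, μ becomes x'β, so the same augmentation turns each likelihood contribution into a Gaussian term given the latent z.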

14.
A default strategy for fully Bayesian model determination for generalised linear mixed models (GLMMs) is considered which addresses the two key issues of default prior specification and computation. In particular, the concept of unit-information priors is extended to the parameters of a GLMM. A combination of Markov chain Monte Carlo (MCMC) and Laplace approximations is used to compute approximations to the posterior model probabilities to find a subset of models with high posterior model probability. Bridge sampling is then used on the models in this subset to approximate the posterior model probabilities more accurately. The strategy is applied to four examples.

15.
System performance measures of a repairable system are studied from a Bayesian viewpoint with different types of priors assumed for the unknown parameters; the system consists of two active components and one warm standby, and the switch from standby to active state fails with probability q. Times to failure of the components are assumed to be exponentially distributed, as are the repair and reboot times. When the time-to-failure, repair-time and reboot-time parameters are uncertain, a Bayesian approach is adopted to evaluate the system performance measures. Monte Carlo simulation is used to derive the posterior distributions of the steady-state availability and the mean time to system failure. Some numerical experiments illustrate the results derived in this article.
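Propagating parameter uncertainty through to the availability is simple to sketch for a stripped-down single-unit system (the paper's two-active-plus-warm-standby configuration with switching failures is more involved): draw the failure and repair rates from assumed gamma posteriors and transform each draw. All hyperparameter values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

# Suppose failure and repair times are exponential and the rates have
# gamma posteriors (e.g. from conjugate updating of exponential data):
# lambda | data ~ Gamma(a_f, b_f),  mu | data ~ Gamma(a_r, b_r).
a_f, b_f = 30.0, 300.0      # posterior for the failure rate (mean 0.1)
a_r, b_r = 30.0, 15.0       # posterior for the repair rate  (mean 2.0)

lam = rng.gamma(a_f, 1.0 / b_f, 100000)
mu = rng.gamma(a_r, 1.0 / b_r, 100000)

# Steady-state availability of a single repairable unit, and the mean
# time to failure, propagated through the posterior draws
availability = mu / (lam + mu)
mttf = 1.0 / lam

avail_mean = availability.mean()
avail_ci = np.quantile(availability, [0.025, 0.975])
```

The output is a full posterior distribution for the availability rather than a point estimate, which is the payoff of the Bayesian treatment described in the abstract.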

16.
In this paper, we evaluate the mean square error (MSE) performance of an empirical characteristic function (ECF) based signal level estimator in a binary communication system. By calculating the Cramér-Rao lower bound (CRLB), we investigate the performance of the ECF-based estimator in the presence of Laplace and Gaussian mixture noise. We derive an analytic expression for the variance of the ECF-based estimator which shows that it is asymptotically unbiased and consistent. Simulation and analytic results indicate that the ECF-based level estimator outperforms previously proposed estimators in some signal-to-noise ratio (SNR) regions when the observation noise distribution is unknown.

17.
System characteristics of a repairable system are studied from a Bayesian viewpoint with different types of priors assumed for the unknown parameters; the system consists of one active component and one standby component. Standby detection, the coverage factor, and the reboot delay of failed components are also taken into account. Time to failure of the components is assumed to follow an exponential distribution, as are the time to repair and the time to reboot of failed components. When the time to failure, time to repair and time to reboot have uncertain parameters, a Bayesian approach is adopted to evaluate the system characteristics. Monte Carlo simulation is used to derive the posterior distributions of the mean time to system failure and the steady-state availability. Some numerical experiments illustrate the results derived in this paper.

18.
The Bayesian paradigm has been widely acknowledged as a coherent approach to learning putative probability model structures from a finite class of candidate models. Bayesian learning is based on measuring the predictive ability of a model in terms of the corresponding marginal data distribution, which equals the expectation of the likelihood with respect to a prior distribution over the model parameters. The main controversy surrounding this learning method stems from the necessity of specifying proper prior distributions for all unknown parameters of a model, which ensures a complete determination of the marginal data distribution. Even for commonly used models, subjective priors may be difficult to specify precisely, and therefore several automated learning procedures have been suggested in the literature. Here we introduce a novel Bayesian learning method based on the predictive entropy of a probability model, which can combine both subjective and objective probabilistic assessments of uncertain quantities in putative models. It is shown that our approach can avoid some of the limitations of earlier objective Bayesian methods.

19.
We develop a variational Bayesian learning framework for the infinite generalized Dirichlet mixture model (i.e. a weighted mixture of Dirichlet process priors based on the generalized inverted Dirichlet distribution), which has proven its capability to model complex multidimensional data. We also integrate a feature selection approach to highlight the features that are most informative, in order to construct an appropriate model in terms of clustering accuracy. Experiments on synthetic data, as well as real data generated from visual scenes and handwritten digit datasets, illustrate and validate the proposed approach.

20.
Assessing the accuracy of land cover maps is often prohibitively expensive because of the difficulty of collecting a statistically valid probability sample from the classified map. Even when post-classification sampling is undertaken, cost and accessibility constraints may result in imprecise estimates of map accuracy. If the map is constructed via supervised classification, then the training sample provides a potential alternative source of data for accuracy assessment. Yet unless the training sample is collected by probability sampling, the estimates are, at best, of uncertain quality, and may be substantially biased. This article discusses a new approach to map accuracy assessment based on maximum posterior probability estimators. Maximum posterior probability estimators are resistant to bias induced by non-representative sampling, and so are intended for situations in which the training sample is collected without using a statistical sampling design. The maximum posterior probability approach may also be used to increase the precision of estimates obtained from a post-classification sample. In addition to discussing maximum posterior probability estimators, this article reports on a simulation study comparing three approaches to estimating map accuracy: 1) post-classification sampling, 2) resampling the training sample via cross-validation, and 3) maximum posterior probability estimation. The simulation study showed substantial reductions in bias and improvements in precision for the maximum posterior probability estimator relative to the cross-validation estimator when the training sample was not representative of the map. In addition, combining an ordinary post-classification estimator with the maximum posterior probability estimator produced an estimator that was at least as precise as, and usually more precise than, the ordinary post-classification estimator.
