Similar literature
 20 similar records found (search time: 203 ms)
1.
This paper describes the Bayesian inference and prediction of the inverse Weibull distribution for Type-II censored data. First we consider the Bayesian inference of the unknown parameter under a squared error loss function. Although we mainly discuss the squared error loss function, any other loss function can easily be accommodated. A Gibbs sampling procedure is used to draw Markov chain Monte Carlo (MCMC) samples, which have, in turn, been used to compute the Bayes estimates and to construct the corresponding credible intervals with the help of an importance sampling technique. We have performed a simulation study to compare the proposed Bayes estimators with the maximum likelihood estimators. We further consider one-sample and two-sample Bayes prediction problems based on the observed sample and provide appropriate predictive intervals with a given coverage probability. A real-life data set is used to illustrate the results derived. Some open problems are indicated for further research.
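Under squared error loss, the Bayes estimator described above is simply the posterior mean, which MCMC output makes easy to compute along with an equal-tailed credible interval. A minimal sketch: the Gaussian draws below are invented stand-ins for the paper's inverse-Weibull posterior samples, not its actual Gibbs/importance-sampling output.

```python
import random
import statistics

random.seed(0)

# Stand-in posterior draws for a shape parameter (invented; a real analysis
# would use the Gibbs/importance-sampling MCMC output described in the paper).
draws = [random.gauss(2.0, 0.3) for _ in range(10000)]

# Under squared error loss, the Bayes estimator is the posterior mean.
bayes_estimate = statistics.fmean(draws)

# Equal-tailed 95% credible interval from sample quantiles.
sorted_draws = sorted(draws)
lo = sorted_draws[int(0.025 * len(draws))]
hi = sorted_draws[int(0.975 * len(draws))]

print(bayes_estimate, (lo, hi))
```

Any other loss function only changes the summary of the draws (e.g., the posterior median for absolute error loss), which is why the authors note other losses are easily accommodated.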

2.
A variety of existing symmetric parametric models for 3-D rotations found in both statistical and materials science literatures are considered from the point of view of the “uniform-axis-random-spin” (UARS) construction. One-sample Bayes methods for non-informative priors are provided for all of these models and attractive frequentist properties for corresponding Bayes inference on the model parameters are confirmed. Taken together with earlier work, the broad efficacy of non-informative Bayes inference for symmetric distributions on 3-D rotations is conclusively demonstrated.

3.
In this article, we perform statistical inference on a skew model that belongs to a class of distributions proposed by Fernández and Steel (1998). Specifically, we introduce two representations of this model, by means of which moments can be computed and random numbers generated. In addition, we estimate the model parameters by moment and maximum likelihood methods. Asymptotic inference based on both of these methods is also produced. We analyze the expected Fisher information matrix associated with the model and highlight the fact that it does not suffer from the singularity problem that affects the corresponding information matrix of the skew-normal model introduced by Azzalini (1985). Furthermore, we conduct a simulation study to compare the performance of the moment and maximum likelihood estimators. Finally, an application based on real data is carried out.

4.
One of the main concepts in quantum physics is a density matrix, which is a symmetric positive definite matrix of trace one. Finite probability distributions can be seen as a special case when the density matrix is restricted to be diagonal. We develop a probability calculus based on these more general distributions that includes definitions of joints, conditionals and formulas that relate these, including analogs of the Theorem of Total Probability and various Bayes rules for the calculation of posterior density matrices. The resulting calculus parallels the familiar “conventional” probability calculus and always retains the latter as a special case when all matrices are diagonal. We motivate both the conventional and the generalized Bayes rule with a minimum relative entropy principle, where the Kullback-Leibler version gives the conventional Bayes rule and Umegaki’s quantum relative entropy the new Bayes rule for density matrices. Whereas the conventional Bayesian methods maintain uncertainty about which model has the highest data likelihood, the generalization maintains uncertainty about which unit direction has the largest variance. Surprisingly, the bounds also generalize: as in the conventional setting we upper bound the negative log likelihood of the data by the negative log likelihood of the MAP estimator.
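In the diagonal special case mentioned above, a density matrix is just a probability vector along its diagonal, and the generalized Bayes rule collapses to the conventional one. A minimal sketch of that diagonal case only (the prior and likelihood numbers are invented):

```python
# A diagonal density matrix carries a probability vector on its diagonal,
# so the generalized Bayes rule reduces to the conventional one:
# posterior_i ∝ prior_i * likelihood_i.
prior = [0.5, 0.5]        # diagonal of a trace-one density matrix (invented)
likelihood = [0.9, 0.2]   # P(data | model i), invented numbers

unnorm = [p * l for p, l in zip(prior, likelihood)]
posterior = [u / sum(unnorm) for u in unnorm]
print(posterior)
```

The genuinely quantum case, with non-diagonal matrices and Umegaki relative entropy, requires matrix logarithms and is not captured by this sketch.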

5.
A maximum likelihood estimation algorithm for the extrinsic calibration parameters between a camera and a laser rangefinder is proposed. When computing the rotation matrix of the coordinate transformation between the camera and the 3-D LIDAR, the algorithm accounts for the error in each parameter's initial value: weights are assigned according to the variances of the initial values, so that initial values with large errors receive small weights and those with small errors receive large weights, equalizing the influence of initial-value errors on the calibration result. On this basis, the maximum likelihood estimate of the rotation matrix is solved for, ...

6.
Population coding and decoding in a neural field: a computational study
Wu S, Amari S, Nakahara H. Neural Computation, 2002, 14(5): 999–1026
This study uses a neural field model to investigate computational aspects of population coding and decoding when the stimulus is a single variable. A general prototype model for the encoding process is proposed, in which neural responses are correlated, with strength specified by a Gaussian function of their difference in preferred stimuli. Based on the model, we study the effect of correlation on the Fisher information, compare the performances of three decoding methods that differ in the amount of encoding information being used, and investigate the implementation of the three methods by using a recurrent network. This study not only rediscovers the main results in the existing literature in a unified way, but also reveals important new features, especially when the neural correlation is strong. As the neural correlation of firing becomes larger, the Fisher information decreases drastically. We confirm that as the width of correlation increases, the Fisher information saturates and no longer increases in proportion to the number of neurons. However, we prove that as the width increases further--wider than √2 times the effective width of the tuning function--the Fisher information increases again, and it increases without limit in proportion to the number of neurons. Furthermore, we clarify the asymptotic efficiency of the maximum likelihood inference (MLI) type of decoding methods for correlated neural signals. We show that when the correlation covers a nonlocal range of the population (except for uniform correlation, or when the noise is extremely small), the MLI type of method, whose decoding error follows a Cauchy-type distribution, is not asymptotically efficient. This implies that the variance is no longer adequate to measure decoding accuracy.
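The uncorrelated baseline that the abstract's correlation results modify can be checked directly: for an independent-noise population with Gaussian tuning curves and Poisson-like variability, the Fisher information is I(s) = Σᵢ fᵢ'(s)²/fᵢ(s) and scales linearly with the number of neurons. A sketch with invented tuning width and preferred-stimulus grids (not the paper's correlated neural field model):

```python
import math

# Fisher information of an independent Poisson-like population with Gaussian
# tuning curves: I(s) = sum_i f_i'(s)^2 / f_i(s). With no correlations it
# grows in proportion to population size -- the behavior correlations break.
W = 1.0  # tuning width (invented)

def fisher_info(s, centers):
    total = 0.0
    for c in centers:
        f = math.exp(-(s - c) ** 2 / (2 * W ** 2))
        fprime = -(s - c) / W ** 2 * f
        total += fprime ** 2 / f
    return total

coarse = [-5 + 0.1 * i for i in range(101)]   # 101 neurons
fine = [-5 + 0.05 * i for i in range(201)]    # 201 neurons, doubled density
ratio = fisher_info(0.0, fine) / fisher_info(0.0, coarse)
print(ratio)  # close to 2: information scales with the number of neurons
```

The paper's point is precisely that adding correlated noise makes this ratio saturate below 2 unless the correlation width exceeds √2 times the tuning width.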

7.
The dynamical behavior of learning is known to be very slow for the multilayer perceptron, being often trapped in the “plateau.” It has been recently understood that this is due to the singularity in the parameter space of perceptrons, in which trajectories of learning are drawn. The space is Riemannian from the point of view of information geometry and contains singular regions where the Riemannian metric or the Fisher information matrix degenerates. This paper analyzes the dynamics of learning in a neighborhood of the singular regions when the true teacher machine lies at the singularity. We give explicit asymptotic analytical solutions (trajectories) both for the standard gradient (SGD) and natural gradient (NGD) methods. It is clearly shown, in the case of the SGD method, that the plateau phenomenon appears in a neighborhood of the critical regions, where the dynamical behavior is extremely slow. The analysis of the NGD method is much more difficult, because the inverse of the Fisher information matrix diverges. We conquer the difficulty by introducing the “blow-down” technique used in algebraic geometry. The NGD method works efficiently, and the state converges directly to the true parameters very quickly while it staggers in the case of the SGD method. The analytical results are compared with computer simulations, showing good agreement. The effects of singularities on learning are thus qualitatively clarified for both standard and NGD methods.

8.
This paper discusses learning algorithms of layered neural networks from the standpoint of maximum likelihood estimation. First we discuss learning algorithms for the simplest network, with only one neuron. It is shown that the Fisher information of the network, namely the negative expected value of the Hessian matrix, is given by a weighted covariance matrix of input vectors. A learning algorithm is presented on the basis of Fisher's scoring method, which uses the Fisher information in place of the Hessian matrix in Newton's method. The algorithm can be interpreted as iterations of a weighted least squares method. These results are then extended to the layered network with one hidden layer. Fisher information for the layered network is given by a weighted covariance matrix of the inputs of the network and the outputs of the hidden units. Since Newton's method for maximization problems runs into difficulty when the negative Hessian matrix is not positive definite, we propose a learning algorithm that uses the Fisher information matrix, which is positive semidefinite, instead of the Hessian matrix. Moreover, to reduce the computation of the full Fisher information matrix, we propose another algorithm which uses only the block diagonal elements of the Fisher information. This algorithm reduces to an iterative weighted least squares algorithm in which each unit estimates its own weights by a weighted least squares method. It is experimentally shown that the proposed algorithms converge in fewer iterations than the error back-propagation (BP) algorithm.
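For the one-neuron case above, Fisher scoring can be written in a few lines. A sketch for a single logistic neuron y = σ(w·x) on invented, non-separable toy data; for the logistic unit the expected (Fisher) information given the inputs coincides with the observed information, so each step is a weighted least squares / Newton update:

```python
import math

# Fisher scoring for a single logistic neuron y = sigmoid(w * x),
# maximizing the likelihood on toy data (values invented). The Fisher
# information replaces the Hessian, giving an iterative weighted update.
xs = [-2.0, -1.0, 0.5, 1.0, 2.0]
ys = [0, 1, 0, 1, 1]  # deliberately not separable, so the MLE is finite

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w = 0.0
for _ in range(25):
    # score (gradient of the log likelihood) and Fisher information
    grad = sum((y - sigmoid(w * x)) * x for x, y in zip(xs, ys))
    info = sum(sigmoid(w * x) * (1 - sigmoid(w * x)) * x * x for x in xs)
    w += grad / info  # scoring step: Newton with the expected information

final_grad = sum((y - sigmoid(w * x)) * x for x, y in zip(xs, ys))
print(w, final_grad)
```

The weight σ(wx)(1−σ(wx)) in `info` is exactly the "weighted covariance of inputs" the abstract describes for the single-neuron case.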

9.
10.
Lu Z, Leen TK, Kaye J. Neural Computation, 2011, 23(9): 2390–2420
We develop several kernel methods for classification of longitudinal data and apply them to detect cognitive decline in the elderly. We first develop mixed-effects models, a type of hierarchical empirical Bayes generative model, for the time series. After demonstrating their utility in likelihood ratio classifiers (and the improvement over standard regression models for such classifiers), we develop novel Fisher kernels based on mixtures of mixed-effects models and use them in support vector machine classifiers. The hierarchical generative model allows us to handle variations in sequence length and sampling interval gracefully. We also give nonparametric kernels not based on generative models, but rather on the reproducing kernel Hilbert space. We apply the methods to detecting cognitive decline from longitudinal clinical data on motor and neuropsychological tests. The likelihood ratio classifiers based on the neuropsychological tests perform better than classifiers based on the motor behavior. Discriminant classifiers performed better than likelihood ratio classifiers for the motor behavior tests.
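The Fisher kernel idea above can be shown in miniature: score each data item by the gradient of its log likelihood under a fitted generative model, then take inner products of score vectors. A sketch for a single 1-D Gaussian model at fixed parameters, using the identity metric in place of the inverse information matrix (a common simplification; the model, parameters, and inputs are all invented, far simpler than the paper's mixed-effects mixtures):

```python
# Fisher kernel sketch under a 1-D Gaussian model N(mu, var): the Fisher
# score of a point is the gradient of its log likelihood w.r.t. (mu, var),
# and the kernel is an inner product of score vectors (identity metric
# used here instead of the inverse Fisher information).
mu, var = 0.0, 1.0  # fixed, invented model parameters

def score(x):
    d_mu = (x - mu) / var
    d_var = ((x - mu) ** 2 - var) / (2 * var ** 2)
    return (d_mu, d_var)

def fisher_kernel(x, y):
    sx, sy = score(x), score(y)
    return sx[0] * sy[0] + sx[1] * sy[1]

print(fisher_kernel(1.0, 2.0))
```

For sequences, the score of a whole sequence is the sum of per-observation scores, which is how such kernels absorb varying sequence lengths.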

11.
Geometry of single axis motions using conic fitting
Previous algorithms for recovering 3D geometry from an uncalibrated image sequence of a single axis motion of unknown rotation angles are mainly based on the computation of two-view fundamental matrices and three-view trifocal tensors. We propose three new methods that are based on fitting a conic locus to corresponding image points over multiple views. The main advantage is that determining only five parameters of a conic from one corresponding point over at least five views is simpler and more robust than determining a fundamental matrix from two views or a trifocal tensor from three views. It is shown that the geometry of single axis motion can be recovered either by computing one conic locus and one fundamental matrix or by computing at least two conic loci. A maximum likelihood solution based on this parametrization of the single axis motion is also described for optimal estimation using three or more loci. The experiments on real image sequences demonstrate the simplicity, accuracy, and robustness of the new methods.
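What makes the conic-locus approach simple is that a conic is linear in its coefficients, so fitting is a small least squares problem. A toy illustration using the special case of a circle, x² + y² + Dx + Ey + F = 0, fitted to synthetic noise-free points (all values invented; the paper fits general five-parameter conics to real tracked points):

```python
import math

# Fit a circle x^2 + y^2 + D*x + E*y + F = 0 (a special conic, linear in
# D, E, F) by least squares to synthetic points on a known circle.
pts = [(1 + 3 * math.cos(t), 2 + 3 * math.sin(t)) for t in (0.3, 1.1, 2.0, 3.5, 5.0)]

# Normal equations M p = v for rows [x, y, 1] * (D, E, F) = -(x^2 + y^2).
A = [[x, y, 1.0] for x, y in pts]
b = [-(x * x + y * y) for x, y in pts]
n = len(pts)
M = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(3)] for i in range(3)]
v = [sum(A[k][i] * b[k] for k in range(n)) for i in range(3)]

# Solve the 3x3 system by Gaussian elimination with partial pivoting.
for i in range(3):
    piv = max(range(i, 3), key=lambda r: abs(M[r][i]))
    M[i], M[piv] = M[piv], M[i]
    v[i], v[piv] = v[piv], v[i]
    for row in range(i + 1, 3):
        factor = M[row][i] / M[i][i]
        for col in range(3):
            M[row][col] -= factor * M[i][col]
        v[row] -= factor * v[i]
sol = [0.0, 0.0, 0.0]
for i in (2, 1, 0):
    sol[i] = (v[i] - sum(M[i][j] * sol[j] for j in range(i + 1, 3))) / M[i][i]
D, E, F = sol

cx, cy = -D / 2, -E / 2                      # recovered circle center
radius = math.sqrt(cx * cx + cy * cy - F)    # recovered radius
print(cx, cy, radius)
```

A general conic adds xy and separate x², y² terms (five ratios of six coefficients), but the estimation stays linear in exactly the same way.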

12.
In this paper we use Markov chain Monte Carlo (MCMC) methods in order to estimate and compare GARCH models from a Bayesian perspective. We allow for possibly heavy tailed and asymmetric distributions in the error term. We use a general method proposed in the literature to introduce skewness into a continuous unimodal and symmetric distribution. For each model we compute an approximation to the marginal likelihood, based on the MCMC output. From these approximations we compute Bayes factors and posterior model probabilities.
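The last step described above, from marginal likelihood approximations to Bayes factors and posterior model probabilities, is mechanical. A sketch assuming equal prior model probabilities; the model names and log marginal likelihood values are invented for illustration:

```python
import math

# Posterior model probabilities and a Bayes factor from (approximate) log
# marginal likelihoods, under equal prior model probabilities.
logml = {"GARCH-normal": -1402.3, "GARCH-skew-t": -1395.8}  # invented values

m = max(logml.values())
unnorm = {k: math.exp(v - m) for k, v in logml.items()}  # stable exponentiation
total = sum(unnorm.values())
post = {k: u / total for k, u in unnorm.items()}

# Bayes factor in favor of the skewed heavy-tailed model
bayes_factor = math.exp(logml["GARCH-skew-t"] - logml["GARCH-normal"])
print(post, bayes_factor)
```

Subtracting the maximum before exponentiating avoids underflow, which matters because real log marginal likelihoods are large negative numbers like those above.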

13.
Fisher information is of key importance in estimation theory. It also serves in inference problems as well as in the interpretation of many physical processes. The mean-squared estimation error for the location parameter of a distribution is bounded by the inverse of the Fisher information associated with this distribution. In this paper we look for minimum Fisher information distributions with a restricted support. More precisely, we study the problem of minimizing the Fisher information in the set of distributions with fixed variance defined on a bounded subset S of R or on the positive real line. We show that the solutions of the underlying differential equation can be expressed in terms of Whittaker functions. Then, in the two considered cases, we derive the explicit expressions of the solutions and investigate their behavior. We also characterize the behavior of the minimum Fisher information as a function of the imposed variance.
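The quantity being minimized can be sanity-checked numerically in the unconstrained textbook case: the location-parameter Fisher information is I = ∫ (f′(x)/f(x))² f(x) dx, which equals 1/s² for a Gaussian with standard deviation s, so the Cramér-Rao bound on the mean-squared location error is s². A grid-integration sketch (the paper's bounded-support constrained problem, with its Whittaker-function solutions, is much harder than this):

```python
import math

# Numerical check: location Fisher information of N(0, s^2) equals 1/s^2.
# I = integral of (score)^2 * f, with score = x / s^2 for the location family.
s = 1.5     # invented standard deviation
h = 0.001   # grid step

def f(x):
    return math.exp(-x * x / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

info = sum((x / (s * s)) ** 2 * f(x) * h for x in (i * h for i in range(-8000, 8001)))
print(info, 1 / (s * s))
```

Restricting the support and fixing the variance, as the paper does, raises the achievable minimum of this integral above the Gaussian benchmark.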

14.
This paper proposes a new method for estimating the symmetric axis of a pot from a small fragment using surface geometry. It also provides a scheme for grouping such fragments into shape categories using the distribution of surface curvature. For automatic assembly of a pot from broken sherds, axis estimation is an important task; when a fragment is small, it is difficult to estimate the axis orientation, since the fragment looks like a patch of a sphere and conventional methods mostly fail. The proposed method provides fast and robust axis estimation by using multiple constraints, and the computational cost is also greatly reduced. To estimate the symmetric axis, the proposed algorithm uses three constraints: (1) the curvature is constant on a circumference CH; (2) the curvature is invariant across scales; (3) the principal curvatures do not vary on CH. Here CH is a planar circle, one of the possible circumferences of a pot or sherd. A hypothesis test for the axis is performed using maximum likelihood; the variance of the curvature, multi-scale curvature, and principal curvatures is computed in the likelihood function. We also show that the principal curvatures can be used for grouping of sherds. Grouping the sherds significantly reduces computation by omitting impossible configurations in the broken-pottery assembly process.

15.
The two-parameter Birnbaum-Saunders distribution has been used successfully to model fatigue failure times. Although censoring is typical in reliability and survival studies, little work has been published on the analysis of censored data for this distribution. In this paper, we address the issue of performing testing inference on the two parameters of the Birnbaum-Saunders distribution under type-II right censored samples. The likelihood ratio statistic and a recently proposed statistic, the gradient statistic, provide a convenient framework for statistical inference in such a case, since they do not require one to obtain, estimate, or invert an information matrix, which is an advantage in problems involving censored data. An extensive Monte Carlo simulation study is carried out in order to investigate and compare the finite sample performance of the likelihood ratio and the gradient tests. Our numerical results show evidence that the gradient test should be preferred. Further, we also consider the generalized Birnbaum-Saunders distribution under type-II right censored samples and present some Monte Carlo simulations for testing the parameters in this class of models using the likelihood ratio and gradient tests. Three empirical applications are presented.
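The gradient statistic mentioned above is S = U(θ₀)ᵀ(θ̂ − θ₀): it needs only the score at the null value and the MLE, and no information matrix, unlike the Wald and score statistics. A sketch for a one-parameter case, a complete (uncensored) exponential sample testing H₀: λ = 1, with invented data values (the paper's Birnbaum-Saunders, censored setting is more involved):

```python
import math

# Gradient statistic S = U(lambda0) * (lambda_hat - lambda0) versus the
# likelihood ratio statistic, for an exponential sample (invented data).
xs = [0.5, 1.2, 0.3, 2.0, 0.8, 1.1, 0.6, 0.9]
n, total = len(xs), sum(xs)

lam0 = 1.0
lam_hat = n / total                 # exponential MLE
score0 = n / lam0 - total           # score U(lambda0)
grad_stat = score0 * (lam_hat - lam0)

def loglik(lam):
    return n * math.log(lam) - lam * total

lr_stat = 2 * (loglik(lam_hat) - loglik(lam0))
print(grad_stat, lr_stat)
```

Both statistics are asymptotically chi-squared with one degree of freedom under H₀, and on this sample they are numerically close, as the theory predicts.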

16.
Odell and Decell, and Odell and Coberly gave necessary and sufficient conditions for the smallest-dimension compression matrix B such that the Bayes classification regions are preserved. That is, they developed an explicit expression for a compression matrix B such that the Bayes classification assignments are the same for both the original space x and the compressed space Bx. Odell indicated that whenever the population parameters are unknown, the dimension of Bx is the same as that of x with probability one. Furthermore, Odell posed the problem of finding a lower dimension q < p which in some sense best fits the range space generated by the matrix M. The purpose of this paper is to discuss this problem and provide a partial solution.

17.
The maximum likelihood parameter estimation algorithm is known to provide optimal estimates for linear time-invariant dynamic systems. However, the algorithm is computationally expensive and requires evaluations of the gradient of the log likelihood function and the Fisher information matrix. By using the square-root information filter, a numerically reliable algorithm to compute the required gradient and the Fisher information matrix is developed. The algorithm is a significant improvement over methods based on the conventional Kalman filter. The square-root information filter relies on the use of orthogonal transformations that are well known for numerical reliability. This algorithm can be extended to real-time system identification and adaptive control.

18.
This paper presents a priori probability density function (pdf)-based time-of-arrival (TOA) source localization algorithms. Range measurements are used to estimate the location parameter for TOA source localization. Prior information on the position of the calibrated source is employed to improve the existing likelihood-based localization method. The cost function, in which the prior distribution is combined with the likelihood function, is minimized by adaptive expectation-maximization (EM) and space-alternating generalized expectation-maximization (SAGE) algorithms. The variance of the prior distribution need not be known a priori because it can be estimated using Bayesian inference in the proposed adaptive EM algorithm; note that this variance must be known in the existing three-step WLS method [1]. The positioning accuracy of the proposed methods is much better than that of the existing algorithms in regimes of large noise variance. Furthermore, the proposed algorithms can also effectively perform localization in line-of-sight (LOS)/non-line-of-sight (NLOS) mixture situations.

19.
Using existing experimental data from Uniaxial Compressive Strength (UCS) testing, constitutive models were produced to describe the influence of joint geometry (joint location, trace length and orientation) on the UCS of rock containing partially-spanning joints. Separate approaches were used to develop two models: a multivariable regression model, and a fuzzy inference system model. Comparison of model predictions to the experimental data demonstrates that both models are capable of accurately describing the UCS of jointed rock with partially-spanning joints using information relating to joint geometry. However, according to the statistical evaluation methods used for performance evaluation, the multivariable regression model was significantly more accurate. Analysis of predictions made by the fuzzy inference system model showed that it was capable of resolving certain peculiarities in the influence of partially-spanning joint orientation on the compressive strength of rock that, from rock mechanics and fracture mechanics theory, should be expected. The multivariable regression model, whilst more accurate, did not recognise these peculiarities. Due to the additional insight that can be gleaned from the fuzzy inference system modelling, we recommend the use of the fuzzy inference system constitutive model in combination with the multivariable regression model.

20.
The Fisher scoring method is widely used for likelihood maximization, but its application can be difficult in situations where the expected information matrix is not available in closed form or when parameters have constraints. In this paper, we describe an interpolation family that generalizes the Fisher scoring method and propose a general Monte Carlo approach that makes these generalized methods also applicable in such situations. With this approach, random samples are generated from the iteratively estimated models and used to provide estimates of the expected information. As a result, the likelihood function can be optimized by repeatedly solving weighted linear regression problems. Specific extensions of this general approach to fitting multivariate normal mixtures and to fitting mixed-effects models with a single discrete random effect are also described. Numerical studies show that the proposed algorithms are fast and reliable to use, as compared with the classical expectation-maximization algorithm.
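The core Monte Carlo idea above, estimating the expected information by sampling from the current model, can be checked in a case where the answer is known in closed form. For a Gaussian location parameter the score is (x − μ)/s² and the exact expected information is E[score²] = 1/s², so the sampled average can be compared directly (μ, s, and the sample size below are invented for the sketch):

```python
import random

# Monte Carlo estimate of the expected information: draw from the current
# model and average the squared score. For a Gaussian location parameter,
# score = (x - mu) / s^2 and the exact expected information is 1 / s^2.
random.seed(1)
mu, s = 0.5, 2.0  # invented model parameters

draws = [random.gauss(mu, s) for _ in range(200000)]
info_mc = sum(((x - mu) / (s * s)) ** 2 for x in draws) / len(draws)
print(info_mc, 1 / (s * s))
```

In the paper's setting the same averaging is done where no closed form exists, with the estimated information then feeding the weighted linear regression step of each scoring iteration.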

