Similar Documents
20 similar documents found.
1.
We consider likelihood and Bayes analyses for the symmetric matrix von Mises-Fisher (matrix Fisher) distribution, which is a common model for three-dimensional orientations (represented by 3×3 orthogonal matrices with a positive determinant). One important characteristic of this model is a 3×3 rotation matrix representing the modal rotation, and an important challenge is to establish accurate confidence regions for it with an interpretable geometry for practical implementation. While we provide some extensions of one-sample likelihood theory (e.g., Euler angle parametrizations of modal rotation), our main contribution is the development of MCMC-based Bayes inference through non-informative priors. In one-sample problems, the Bayes methods allow the construction of inference regions with transparent geometry and accurate frequentist coverages in a way that standard likelihood inference cannot. Simulation is used to evaluate the performance of Bayes and likelihood inference regions. Furthermore, we illustrate how the Bayes framework extends inference from one-sample problems to more complicated one-way random effects models based on the symmetric matrix Fisher model in a computationally straightforward manner. The inference methods are then applied to a human kinematics example for illustration.
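As a quick illustration of the Euler-angle parametrization mentioned in the abstract, the sketch below (our own illustration, not the authors' code; all function names are ours) builds a Z-Y-Z Euler rotation in pure Python and verifies that the result is a proper rotation: orthogonal with determinant +1.

```python
import math

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def euler_zyz(alpha, beta, gamma):
    """Z-Y-Z Euler-angle parametrization of a 3x3 rotation matrix."""
    return matmul(rot_z(alpha), matmul(rot_y(beta), rot_z(gamma)))

def det3(R):
    return (R[0][0] * (R[1][1] * R[2][2] - R[1][2] * R[2][1])
            - R[0][1] * (R[1][0] * R[2][2] - R[1][2] * R[2][0])
            + R[0][2] * (R[1][0] * R[2][1] - R[1][1] * R[2][0]))

R = euler_zyz(0.3, 1.1, -0.7)
Rt = [list(row) for row in zip(*R)]   # transpose
RtR = matmul(Rt, R)                   # should be the identity
```

Any triple of angles produced this way lands in SO(3), which is why Euler angles are a convenient chart for the modal rotation.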

2.
In this paper, we consider a family of generalized Birnbaum-Saunders (GBS) distributions and present a lifetime analysis based mainly on the hazard function of this model. In addition, we carry out maximum likelihood estimation by using an iterative algorithm, which produces robust estimates. Asymptotic inference is also presented. Next, the quality of the estimation method is examined by means of Monte Carlo simulations. We then provide a practical example to illustrate the obtained results. From this example, and based on goodness-of-fit methods, we show that the GBS distribution provides a more appropriate model for fatigue data than other models commonly used for this type of data. Finally, we estimate the hazard function and its critical point.
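For the classical two-parameter Birnbaum-Saunders special case, the hazard function the abstract centers on has a simple closed form, h(t) = f(t) / (1 - F(t)) with F(t) = Φ(ξ(t)) and ξ(t) = (√(t/β) - √(β/t))/α. A minimal stdlib sketch (our notation, not the paper's code):

```python
import math

def _phi(x):   # standard normal pdf
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def _Phi(x):   # standard normal cdf, via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_xi(t, alpha, beta):
    return (math.sqrt(t / beta) - math.sqrt(beta / t)) / alpha

def bs_cdf(t, alpha, beta):
    """Birnbaum-Saunders CDF: F(t) = Phi(xi(t)); the median is beta."""
    return _Phi(bs_xi(t, alpha, beta))

def bs_hazard(t, alpha, beta):
    """Hazard h(t) = f(t) / (1 - F(t)), with f(t) = phi(xi(t)) * xi'(t)."""
    xi = bs_xi(t, alpha, beta)
    dxi = (math.sqrt(t / beta) + math.sqrt(beta / t)) / (2.0 * alpha * t)
    return _phi(xi) * dxi / (1.0 - _Phi(xi))
```

A handy sanity check is that ξ(β) = 0, so F(β) = 0.5: the scale parameter β is the median lifetime.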

3.
This study investigates a population decoding paradigm in which the maximum likelihood inference is based on an unfaithful decoding model (UMLI). This is usually the case for neural population decoding because the encoding process of the brain is not exactly known or because a simplified decoding model is preferred for saving computational cost. We consider an unfaithful decoding model that neglects the pair-wise correlation between neuronal activities and prove that UMLI is asymptotically efficient when the neuronal correlation is uniform or of limited range. The performance of UMLI is compared with that of the maximum likelihood inference based on the faithful model and that of the center-of-mass decoding method. UMLI turns out to reduce computational complexity considerably while maintaining a high level of decoding accuracy. Moreover, it can be implemented by a biologically feasible recurrent network (Pouget, Zhang, Deneve, & Latham, 1998). The effect of correlation on the decoding accuracy is also discussed.
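The center-of-mass baseline the abstract compares against is easy to state concretely: weight each neuron's preferred stimulus by its firing rate and normalize. A toy sketch with Gaussian tuning curves (our own example; function names and parameter values are assumptions, not from the paper):

```python
import math

def tuning(c, s, width=1.0):
    """Gaussian tuning curve: mean response of a neuron with preferred stimulus c."""
    return math.exp(-0.5 * ((s - c) / width) ** 2)

def center_of_mass_decode(rates, preferred):
    """Center-of-mass readout: rate-weighted average of preferred stimuli."""
    total = sum(rates)
    return sum(r * c for r, c in zip(rates, preferred)) / total

preferred = [-2.0 + 0.5 * i for i in range(9)]   # preferred stimuli on a grid
s_true = 0.0
rates = [tuning(c, s_true) for c in preferred]   # noise-free responses
s_hat = center_of_mass_decode(rates, preferred)
```

With noise-free responses and a symmetric grid of preferred stimuli, the readout recovers the stimulus exactly; the abstract's point is how much accuracy each method retains once noise and correlations enter.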

4.
Structural features of information networks, as a primary factor influencing the formation and evolution of relationships, play an important role in relationship classification and inference. Existing relationship classification and inference algorithms fail to achieve satisfactory results when handling network structural features. To address this, drawing on the definition of mutual information, we propose a relationship classification and inference algorithm based on mutual-information feature selection. Similarity indices such as CN, AA, and Katz are defined to fully extract two classes of structural features, local and global (semi-global), and the approximate mutual information between features is computed via maximum likelihood estimation based on a density-ratio function. This density function effectively handles the search for the globally optimal solution in feature selection while screening out more discriminative features. Experimental results on several real-world information network datasets show that, for both classic classification algorithms and recently proposed learning-based relationship classification algorithms, adding the mutual-information feature-selection step outperforms the baseline algorithms on evaluation metrics such as Accuracy, AUC, and Precision.
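The abstract estimates mutual information via a density-ratio MLE; as a much simpler stand-in, the empirical mutual information between a discrete feature and a label already captures the feature-selection idea of ranking features by shared information. A stdlib sketch (our simplification, not the paper's estimator):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information (in nats) between two discrete sequences."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        p = c / n
        # p(x,y) * log( p(x,y) / (p(x) * p(y)) )
        mi += p * math.log(p * n * n / (px[x] * py[y]))
    return mi

labels = [0, 0, 1, 1, 0, 1, 0, 1]
f_good = [0, 0, 1, 1, 0, 1, 0, 1]   # perfectly informative feature
f_weak = [0, 1, 0, 1, 0, 1, 0, 1]   # only weakly related to the labels
```

A feature identical to the label attains MI = log 2 here, while the weak feature scores far lower, which is exactly the ordering a mutual-information filter exploits.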

5.
1. Definitions and Concepts. When modeling a set of data arising from a point process with a trend term, a direct approach is to fit the data with some kind of monotone process. As a special case of such monotone processes, we now introduce the definition of the so-called geometric process.

6.
Hyvärinen A. Neural Computation, 2008, 20(12): 3087-3110
In signal restoration by Bayesian inference, one typically uses a parametric model of the prior distribution of the signal. Here, we consider how the parameters of a prior model should be estimated from observations of uncorrupted signals. A lot of recent work has implicitly assumed that maximum likelihood estimation is the optimal estimation method. Our results imply that this is not the case. We first obtain an objective function that approximates the error incurred in signal restoration due to an imperfect prior model. Next, we show that in an important special case (small gaussian noise), the error is the same as the score-matching objective function, which was previously proposed as an alternative for likelihood based on purely computational considerations. Our analysis thus shows that score matching combines computational simplicity with statistical optimality in signal restoration, providing a viable alternative to maximum likelihood methods. We also show how the method leads to a new intuitive and geometric interpretation of structure inherent in probability distributions.
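Score matching minimizes J(θ) = E[ψ'(x) + ψ(x)²/2], where ψ is the model score ∂ log p/∂x. For the toy model p(x) ∝ exp(-θx²/2) one gets ψ(x) = -θx, and the empirical objective has the closed-form minimizer θ̂ = 1/mean(x²), which here coincides with the ML precision estimate. A small illustration (our example, not the paper's code):

```python
import random

random.seed(0)
data = [random.gauss(0.0, 2.0) for _ in range(5000)]  # true precision 1/4

def sm_objective(theta, xs):
    """Empirical score-matching objective for p(x) ∝ exp(-theta*x^2/2):
    psi(x) = -theta*x, psi'(x) = -theta, J = mean(psi' + psi^2 / 2)."""
    return sum(-theta + 0.5 * (theta * x) ** 2 for x in xs) / len(xs)

m2 = sum(x * x for x in data) / len(data)
theta_hat = 1.0 / m2   # closed-form minimizer of the empirical objective
```

The appeal highlighted in the abstract is that J never requires the (often intractable) normalizing constant of the model density.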

7.
Multivariate Gaussian models are widely adopted in continuous estimation of distribution algorithms (EDAs), and the covariance matrix plays the essential role in guiding the evolution. In this paper, we propose a new framework for multivariate Gaussian based EDAs (MGEDAs), named eigen decomposition EDA (ED-EDA). Unlike classical EDAs, ED-EDA focuses on eigen analysis of the covariance matrix, and it explicitly tunes the eigenvalues. All existing MGEDAs can be unified within our ED-EDA framework by applying three different eigenvalue tuning strategies. The effect of eigenvalues on the evolution is investigated by combining maximum likelihood estimates of the Gaussian model with each of the eigenvalue tuning strategies in ED-EDA. In our experiments, proper eigenvalue tunings show high efficiency in solving problems with small population sizes, which are difficult for classical MGEDAs adopting maximum likelihood estimates alone. Previously developed covariance matrix repairing (CMR) methods focusing on repairing computational errors of the covariance matrix can be seen as a special eigenvalue tuning strategy. By using the ED-EDA framework, the computational time of CMR methods can be reduced from cubic to linear. Two new efficient CMR methods are proposed. Through explicitly tuning eigenvalues, ED-EDA provides a new approach to develop more efficient Gaussian based EDAs.
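The core operation the framework builds on, decompose the covariance, rescale its eigenvalues, and reconstruct, can be sketched in the 2×2 case with closed-form eigenpairs (our own minimal illustration; the actual tuning strategies in the paper are more elaborate):

```python
import math

def eig_sym2(C):
    """Eigen-decomposition of a symmetric 2x2 matrix [[a, b], [b, c]]."""
    a, b, c = C[0][0], C[0][1], C[1][1]
    d = math.sqrt(((a - c) / 2.0) ** 2 + b * b)
    l1, l2 = (a + c) / 2.0 + d, (a + c) / 2.0 - d
    if b != 0.0:
        v1 = (l1 - c, b)
    else:
        v1 = (1.0, 0.0) if a >= c else (0.0, 1.0)
    n = math.hypot(*v1)
    v1 = (v1[0] / n, v1[1] / n)
    v2 = (-v1[1], v1[0])          # orthogonal complement
    return (l1, l2), (v1, v2)

def tune_covariance(C, scale):
    """Rebuild C with its eigenvalues multiplied by `scale`, keeping eigenvectors."""
    (l1, l2), (v1, v2) = eig_sym2(C)
    l1, l2 = scale * l1, scale * l2
    return [[l1 * v1[i] * v1[j] + l2 * v2[i] * v2[j] for j in range(2)]
            for i in range(2)]
```

With `scale=1.0` the reconstruction returns the original matrix, which is a useful unit check before plugging in any nontrivial tuning rule.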

8.
Information networks provide a powerful representation of entities and the relationships between them. Information network fusion is a technique for information fusion that jointly reasons about entities, links and relations in the presence of various sources. However, existing methods for information network fusion tend to rely on a single task, which may not gather enough evidence for reasoning. To address this issue, we present a novel model called MC-INFM (information networks fusion model based on multi-task coordination). Different from traditional models, MC-INFM casts the fusion problem as a probabilistic inference problem and collectively performs multiple tasks (including entity resolution, link prediction and relation matching) to infer the final result of fusion. First, we define the intra-features and the inter-features respectively and model them as factor graphs, which can provide abundant evidence for inference. Then, we use a conditional random field (CRF) to learn the weight of each feature and infer the results of these tasks simultaneously by performing maximum probabilistic inference. Experiments demonstrate the effectiveness of our proposed model.

9.
Comparative lifetime experiments are of paramount importance when the object of a study is to ascertain the relative merits of two competing products in regard to the duration of their service life. In this paper, we discuss exact inference for two exponential populations when Type-II censoring is implemented on the two samples in a combined manner. We obtain the conditional maximum likelihood estimators (MLEs) of the two exponential mean parameters. We then derive the moment generating functions and the exact distributions of these MLEs along with exact confidence intervals and simultaneous confidence regions. Moreover, simultaneous approximate confidence regions based on the asymptotic normality of the MLEs and simultaneous credible confidence regions from a Bayesian viewpoint are also discussed. A comparison of the exact, approximate, bootstrap and Bayesian intervals is also made in terms of coverage probabilities. Finally, an example is presented in order to illustrate all the methods of inference discussed here.
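For a single exponential sample under ordinary Type-II censoring, the mean's MLE has the familiar closed form (sum of the r observed failures plus (n - r) copies of the largest observed failure time, divided by r); the paper's joint-censoring estimators generalize this. A one-sample sketch (our illustration, not the paper's combined scheme):

```python
def exp_mle_type2(sample, n, r):
    """MLE of the exponential mean when only the r smallest of n lifetimes
    are observed (Type-II right censoring):
    theta_hat = (sum of observed failures + (n - r) * t_(r)) / r."""
    t = sorted(sample)[:r]          # the r recorded (smallest) failure times
    return (sum(t) + (n - r) * t[-1]) / r
```

For example, with n = 5 units and the experiment stopped at the third failure, observed times 1, 2, 3 give θ̂ = (1 + 2 + 3 + 2·3)/3 = 4.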

10.
刘莉, 万九卿. 自动化学报 (Acta Automatica Sinica), 2014, 40(1): 117-125
Data association is one of the fundamental problems in visual sensor network surveillance systems. For multi-target tracking in visual surveillance networks with non-overlapping fields of view, this paper proposes an online distributed data-association method for visual sensor networks based on multiple appearance models: the appearance of the same target at different camera nodes is described by different Gaussian models, a distributed inference algorithm combines appearance and spatio-temporal observations to compute the posterior probabilities of the association variables, and the parameters of the appearance models at the sensor nodes are estimated online by an approximate maximum likelihood estimation algorithm. Experimental results demonstrate the effectiveness of the proposed method.

11.
The two-parameter Birnbaum-Saunders distribution has been used successfully to model fatigue failure times. Although censoring is typical in reliability and survival studies, little work has been published on the analysis of censored data for this distribution. In this paper, we address the issue of performing testing inference on the two parameters of the Birnbaum-Saunders distribution under type-II right censored samples. The likelihood ratio statistic and a recently proposed statistic, the gradient statistic, provide a convenient framework for statistical inference in such a case, since they do not require obtaining, estimating, or inverting an information matrix, which is an advantage in problems involving censored data. An extensive Monte Carlo simulation study is carried out in order to investigate and compare the finite sample performance of the likelihood ratio and the gradient tests. Our numerical results show evidence that the gradient test should be preferred. Further, we also consider the generalized Birnbaum-Saunders distribution under type-II right censored samples and present some Monte Carlo simulations for testing the parameters in this class of models using the likelihood ratio and gradient tests. Three empirical applications are presented.
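The two test statistics compared in the abstract are easiest to see in a toy model. For an uncensored exponential sample with mean θ and hypothesis H0: θ = θ0, the likelihood-ratio statistic is 2(ℓ(θ̂) - ℓ(θ0)) and the gradient statistic is U(θ0)(θ̂ - θ0), where U is the score; neither needs the information matrix. A sketch under these simplifying assumptions (not the censored Birnbaum-Saunders case of the paper):

```python
import math

def lr_and_gradient_stats(xs, theta0):
    """Likelihood-ratio and gradient statistics for H0: theta = theta0
    in an exponential(mean theta) model -- a toy analogue of the tests."""
    n = len(xs)
    theta_hat = sum(xs) / n                        # MLE of the mean

    def loglik(th):
        return -n * math.log(th) - sum(xs) / th

    lr = 2.0 * (loglik(theta_hat) - loglik(theta0))
    score0 = -n / theta0 + sum(xs) / theta0 ** 2   # U(theta0)
    grad = score0 * (theta_hat - theta0)           # gradient statistic
    return lr, grad
```

Both statistics vanish when θ0 equals the MLE and grow as the hypothesis moves away from it; under H0 both are asymptotically chi-squared with one degree of freedom.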

12.
In reliability analysis, accelerated life-testing allows for gradual increment of stress levels on test units during an experiment. In a special class of accelerated life tests known as step-stress tests, the stress levels increase discretely at pre-fixed time points, and this allows the experimenter to obtain information on the parameters of the lifetime distributions more quickly than under normal operating conditions. Moreover, when a test unit fails, there is often more than one fatal cause of failure, such as mechanical or electrical. In this article, we consider the simple step-stress model under time constraint when the lifetime distributions of the different risk factors are independently exponentially distributed. Under this setup, we derive the maximum likelihood estimators (MLEs) of the unknown mean parameters of the different causes under the assumption of a cumulative exposure model. Since it is found that the MLEs do not exist when there is no failure by any particular risk factor within the specified time frame, the exact sampling distributions of the MLEs are derived through the use of conditional moment generating functions. Using these exact distributions as well as the asymptotic distributions, the parametric bootstrap method, and the Bayesian posterior distribution, we discuss the construction of confidence intervals and credible intervals for the parameters. Their performance is assessed through Monte Carlo simulations and finally, we illustrate the methods of inference discussed here with an example.

13.
With scientific data available at geocoded locations, investigators are increasingly turning to spatial process models for carrying out statistical inference. However, fitting spatial models often involves expensive matrix decompositions, whose computational complexity increases in cubic order with the number of spatial locations. This situation is aggravated in Bayesian settings where such computations are required once at every iteration of the Markov chain Monte Carlo (MCMC) algorithms. In this paper, we describe the use of Variational Bayesian (VB) methods as an alternative to MCMC to approximate the posterior distributions of complex spatial models. Variational methods, which have been used extensively in Bayesian machine learning for several years, provide a lower bound on the marginal likelihood, which can be computed efficiently. We provide results for the variational updates in several models especially emphasizing their use in multivariate spatial analysis. We demonstrate estimation and model comparisons from VB methods by using simulated data as well as environmental data sets and compare them with inference from MCMC.

14.
This paper presents a new approach to estimating mixture models based on a recent inference principle we have proposed: the latent maximum entropy principle (LME). LME is different from Jaynes' maximum entropy principle, standard maximum likelihood, and maximum a posteriori probability estimation. We demonstrate the LME principle by deriving new algorithms for mixture model estimation, and show how robust new variants of the expectation maximization (EM) algorithm can be developed. We show that a regularized version of LME (RLME) is effective at estimating mixture models. It generally yields better results than plain LME, which in turn is often better than maximum likelihood and maximum a posteriori estimation, particularly when inferring latent variable models from small amounts of data.

15.

To address the problem that the existing hybrid decision-making method combining evidence theory with the analytic hierarchy process (DS/AHP) lacks flexibility in how inference information is expressed, which easily renders the extracted decision information ineffective, we analyze the modeling steps and shortcomings of the traditional method and, based on three types of relative inference (part-to-whole, part-to-part, and whole-to-part), propose a flexible knowledge matrix that can accommodate multiple kinds of inference information. On this basis, combining optimization principles, we construct a theoretical model and a computational model that can effectively identify the optimal basic probability assignment function from the flexible knowledge matrix, together with an equivalence theorem between the two models. Finally, comparative numerical analysis verifies the soundness and effectiveness of the proposed method.


16.
In the spirit of the "grand challenge", this paper covers the development of novel concepts for inference of large phylogenies based on the maximum likelihood method, which has proved to be the most accurate model for inference of huge and complex phylogenetic trees. A novel method called Leaf Pruning and Re-grafting (LPR) is presented, which is a variant of the standard Sub-tree Pruning and Re-grafting (SPR) technique. LPR is a systematic approach in which only unique topologies are generated at each step. Various stochastic search strategies for estimation of the maximum likelihood (ML) tree are also proposed. Here, simulated annealing is combined with the steepest ascent method to improve the quality of the final tree obtained. Current simulated annealing approaches are used with a simple hill-climbing method to avoid the large number of repeated topologies that are normally generated by SPR, which easily leads to local maxima. In the present study, however, steepest ascent with simulated annealing by way of LPR (SAWSA-LPR) is used, and the chance of being trapped in local maxima is significantly reduced. A straightforward and efficient parallel version of simulated annealing with steepest ascent to accelerate the process of DNA phylogenetic tree inference is also presented. On random DNA sequences, the algorithm was observed to give better results than other tree-construction methods.

17.
It is well known that for finite-sized networks, one-step retrieval in the autoassociative Willshaw net is a suboptimal way to extract the information stored in the synapses. Iterative retrieval strategies are much better, but have hitherto only had heuristic justification. We show how they emerge naturally from considerations of probabilistic inference under conditions of noisy and partial input and a corrupted weight matrix. We start from the conditional probability distribution over possible patterns for retrieval. We develop two approximate, but tractable, iterative retrieval methods. One performs maximum likelihood inference to find the single most likely pattern, using the conditional probability as a Lyapunov function for retrieval. The second method makes a mean field assumption to optimize a tractable estimate of the full conditional probability distribution. In the absence of storage errors, both models become very similar to the Willshaw model.
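For readers unfamiliar with the Willshaw net, a minimal sketch helps: binary patterns are stored as the OR of their outer products, and iterative retrieval repeatedly fires the units whose connections cover every currently active unit. This toy version assumes a noise-free partial cue and non-overlapping patterns (our simplification; the paper's probabilistic derivation handles noise and corrupted weights):

```python
def store(patterns, n):
    """Willshaw learning: binary weight matrix as the OR of outer products."""
    W = [[0] * n for _ in range(n)]
    for p in patterns:
        on = [i for i in range(n) if p[i]]
        for i in on:
            for j in on:
                W[i][j] = 1
    return W

def retrieve(W, cue, n, steps=5):
    """Iterative threshold retrieval: a unit fires iff its weights cover
    every currently active unit; repeat until the state is stable."""
    state = list(cue)
    for _ in range(steps):
        active = [j for j in range(n) if state[j]]
        new = [1 if all(W[i][j] for j in active) else 0 for i in range(n)]
        if new == state:
            break
        state = new
    return state

n = 12
p1 = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
p2 = [0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0]
W = store([p1, p2], n)
cue = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]   # partial, noise-free piece of p1
out = retrieve(W, cue, n)
```

Two bits of the stored pattern already pull in the full pattern after one iteration, and a second pass confirms the state is a fixed point; the abstract's contribution is justifying such iterations as approximate probabilistic inference rather than a heuristic.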

18.
Until recently, the lack of ground truth data has hindered the application of discriminative structured prediction techniques to the stereo problem. In this paper we use ground truth data sets that we have recently constructed to explore different model structures and parameter learning techniques. To estimate parameters in Markov random fields (MRFs) via maximum likelihood one usually needs to perform approximate probabilistic inference. Conditional random fields (CRFs) are discriminative versions of traditional MRFs. We explore a number of novel CRF model structures including a CRF for stereo matching with an explicit occlusion model. CRFs require expensive inference steps for each iteration of optimization and inference is particularly slow when there are many discrete states. We explore belief propagation, variational message passing and graph cuts as inference methods during learning and compare with learning via pseudolikelihood. To accelerate approximate inference we have developed a new method called sparse variational message passing which can reduce inference time by an order of magnitude with negligible loss in quality. Learning using sparse variational message passing improves upon previous approaches using graph cuts and allows efficient learning over large data sets when energy functions violate the constraints imposed by graph cuts.

19.
Noise in textual data, such as that introduced by multilinguality, misspellings, abbreviations, deletions, phonetic spellings, non-standard transliteration, etc., poses considerable problems for text mining. Such corruptions are very common in instant messenger and short message service data and they adversely affect off-the-shelf text mining methods. Most techniques address this problem with supervised methods that make use of hand-labeled corrections. These, however, require human-generated labels and corrections that are very expensive and time-consuming to obtain because of the multilinguality and complexity of the corruptions. While we do not champion unsupervised methods over supervised when quality of results is the singular concern, we demonstrate that unsupervised methods can provide cost-effective results without the need for the expensive human intervention that is necessary to generate a parallel labeled corpus. A generative-model-based unsupervised technique is presented that maps non-standard words to their corresponding conventional frequent form. A hidden Markov model (HMM) over a "subsequencized" representation of words is used, where a word is represented as a bag of weighted subsequences. The approximate maximum likelihood inference algorithm used is such that the training phase involves clustering over vectors and not the customary and expensive dynamic programming (Baum–Welch algorithm) over sequences that is necessary for HMMs. A principled transformation of the maximum-likelihood-based "central clustering" cost function of Baum–Welch into a "pairwise similarity" based clustering is proposed. This transformation makes it possible to apply "subsequence kernel" based methods that model delete and insert corruptions well.
The novelty of this approach lies in the fact that the expensive (Baum–Welch) iterations required for HMMs can be avoided through an approximation of the log-likelihood function and by establishing a connection between the log-likelihood and a pairwise distance. Anecdotal evidence of efficacy is provided on public and proprietary data.
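The bag-of-subsequences idea is easy to demonstrate in miniature: treat a word as the set of its (possibly gapped) length-2 character subsequences and score candidates by overlap. This crude unweighted count (our stand-in for the paper's weighted subsequence kernel) is already enough to map a texting-style corruption to its conventional form:

```python
from itertools import combinations

def subseq_bag(word, k=2):
    """Set of length-k (possibly gapped) character subsequences of a word.
    combinations() preserves character order, so these are true subsequences."""
    return set(combinations(word, k))

def subseq_similarity(a, b, k=2):
    """Unweighted common-subsequence count -- a crude stand-in for the
    subsequence kernel discussed in the abstract."""
    return len(subseq_bag(a, k) & subseq_bag(b, k))

noisy = "tmrw"
candidates = ["tomorrow", "yesterday"]
best = max(candidates, key=lambda w: subseq_similarity(noisy, w))
```

Because subsequences tolerate gaps, deletions in the noisy form ("tmrw" for "tomorrow") barely hurt the score, which is exactly why subsequence representations model delete corruptions well.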

20.
Modeling Nonlinear Degradation Processes with Measurement Errors and Remaining Useful Life Estimation
Remaining useful lifetime (RUL) estimation is a key problem in condition-based maintenance and prognostics and health management (PHM). Existing studies that estimate RUL via degradation-process modeling have considered only degradation trajectories that are linear or can be linearized. This paper proposes a diffusion-process-based method for modeling nonlinear degradation processes and derives the distribution of the RUL in the sense of the first passage time. The method can describe general nonlinear degradation trajectories, with existing linear degradation models as special cases. For parameter inference, since the true degradation process is affected by measurement errors and cannot be measured directly, the influence of measurement errors on the degradation observations is incorporated into the degradation modeling, and a maximum likelihood estimation method for the unknown model parameters based on the observed data is proposed. Finally, degradation measurements from laser generators and gyroscopes verify that the proposed method clearly outperforms linear modeling methods and has potential engineering value.
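The linear special case mentioned in the abstract is the drifted Wiener process X_t = μt + σB_t, whose first passage time of a threshold w follows an inverse Gaussian law with mean (w - x)/μ. A toy sketch of simulation, drift estimation, and the resulting mean RUL (our simplified illustration, not the paper's nonlinear model with measurement error):

```python
import math
import random

def simulate_path(mu, sigma, dt, n, seed=42):
    """Drifted Wiener degradation path X_t = mu*t + sigma*B_t, sampled at step dt."""
    random.seed(seed)
    x, path = 0.0, [0.0]
    for _ in range(n):
        x += mu * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
        path.append(x)
    return path

def drift_mle(path, dt):
    """ML estimate of the drift from increments; telescopes to (X_T - X_0) / T."""
    T = (len(path) - 1) * dt
    return (path[-1] - path[0]) / T

def mean_rul(x_now, threshold, mu_hat):
    """Mean remaining useful life to first passage of `threshold`
    under positive drift: (w - x) / mu."""
    return (threshold - x_now) / mu_hat
```

For the nonlinear trajectories and noisy observations treated in the paper, the first-passage distribution and the likelihood both become more involved, but this linear case is the baseline the proposed method reduces to.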
