20 similar documents found; search time: 218 ms
6.
余爱华 《电脑与微电子技术》2011,(15):3-7,31
Under general mixture-distribution assumptions, this paper studies data fitting with the EM algorithm under the optimization criterion of the minimum-entropy principle. The EM algorithm for finite Gaussian mixtures is briefly derived, and, to address its slow convergence, an effective method for selecting initial parameter values is designed. Experimental results show that the method helps the EM algorithm converge quickly near the true parameter values.
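The finite-Gaussian-mixture EM recursion this abstract derives can be sketched for the one-dimensional, two-component case. The function name `em_gmm_1d` and its deterministic min/max initialization are illustrative assumptions; the paper's own minimum-entropy initial-value scheme is not reproduced here.

```python
import math

def em_gmm_1d(data, n_iter=50):
    # Deterministic initialization (illustrative; the paper proposes
    # its own initial-value selection scheme, not reproduced here).
    mu = [min(data), max(data)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in data:
            w = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 for k in range(2)]
            s = sum(w)
            resp.append([wk / s for wk in w])
        # M-step: responsibility-weighted updates of weights, means, variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
    return pi, mu, var
```

The variance floor of `1e-6` guards against the well-known collapse of a component onto a single data point.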
7.
8.
A common task in machine learning is estimating model parameters, i.e., maximum likelihood estimation (MLE). The EM algorithm is an iterative algorithm built on point-estimation MLE and a powerful tool for computing maximum likelihood estimates, but its convergence is slow; the α-EM algorithm is therefore introduced to overcome this drawback of EM. Because the learning process may involve large amounts of missing data and dynamic fuzziness, a dynamic fuzzy maximum likelihood estimation algorithm for incomplete data is given and validated with examples.
9.
Starting from probabilistic and statistical methods, a multi-model identification approach for non-uniformly sampled systems is proposed, based on Gaussian mixture model clustering and the recursive least squares algorithm. First, a Gaussian mixture model serves as the scheduling function, and the expectation-maximization (EM) algorithm iteratively updates the estimates of its parameters, so that the activation of each subsystem is determined by computing and comparing the Gaussian probability densities of the subsystems. Second, the recursive least squares algorithm estimates the local subsystem parameters. The performance of the proposed algorithm is then analyzed using the martingale convergence theorem. Finally, multi-model identification of a non-uniformly sampled system demonstrates the effectiveness of the proposed method.
10.
吴婷 《网络安全技术与应用》2022,(4):47-49
A Gaussian mixture model is a probabilistic graphical model with latent variables whose parameters are usually trained iteratively by the EM algorithm. After briefly deriving the EM algorithm for Gaussian mixture models, this paper uses a Gaussian mixture model to classify the iris data set. To address the EM algorithm's strong sensitivity to initial values, the K-means clustering algorithm is used to estimate them. After obtaining the classification results of the K-means and EM algorithms, the two algorithms' results are compared…
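The K-means seeding idea in this abstract can be sketched in one dimension: run a few Lloyd iterations and hand the resulting centers to EM as initial component means. `kmeans_1d` and its quantile-spread starting centers are hypothetical illustration; only the seed-EM-with-K-means idea comes from the abstract.

```python
def kmeans_1d(data, k=2, n_iter=20):
    # Quantile-spread starting centers (an illustrative choice).
    xs = sorted(data)
    centers = [xs[int((i + 0.5) * len(xs) / k)] for i in range(k)]
    for _ in range(n_iter):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for x in data:
            j = min(range(k), key=lambda c: abs(x - centers[c]))
            clusters[j].append(x)
        # Update step: move each center to its cluster mean.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)  # use these as EM's initial component means
```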
11.
This paper formulates a novel expectation-maximization (EM) algorithm for the mixture of multivariate t-distributions. By introducing a new kind of “missing” data, we show that the empirically improved iterative algorithm in the literature for the mixture of multivariate t-distributions is in fact a type of EM algorithm; a theoretical analysis is thus established, which guarantees that the empirical algorithm converges to the maximum likelihood estimates of the mixture parameters. Simulated and real experiments on classification and image segmentation confirm the effectiveness of the improved EM algorithm.
12.
An identification method for nonlinear systems with colored measurement noise (total citations: 2; self-citations: 0; citations by others: 2)
Using the maximum likelihood criterion, this paper proposes an identification method for nonlinear systems with colored measurement noise. First, a measurement-differencing method whitens the colored measurement noise and yields a new measurement equation, converting the identification problem for a nonlinear system with colored measurement noise into one for a nonlinear system with white measurement noise and a one-step-delayed state. Second, a new maximum-likelihood identification method for nonlinear systems is proposed via the expectation-maximization (EM) algorithm, which consists of an expectation step (E-step) and a maximization step (M-step). In the E-step, the expectation of the complete-data log-likelihood is approximated from the current parameter estimates using Gaussian approximate filters and smoothers that handle colored measurement noise. In the M-step, this approximated expected likelihood is maximized: the noise parameter estimates are obtained through analytical updates, and the model parameter estimates through Newton updates. Finally, numerical simulations verify the effectiveness of the proposed algorithm.
13.
Nicola Greggio, Alexandre Bernardino, Cecilia Laschi, Paolo Dario, José Santos-Victor 《Machine Vision and Applications》2012,23(4):773-789
The expectation maximization algorithm has been classically used to find the maximum likelihood estimates of parameters in probabilistic models with unobserved data, for instance, mixture models. A key issue in such problems is the choice of the model complexity. The higher the number of components in the mixture, the higher will be the data likelihood, but also the higher will be the computational burden and data overfitting. In this work, we propose a clustering method based on the expectation maximization algorithm that adapts online the number of components of a finite Gaussian mixture model from multivariate data. Our method estimates the number of components and their means and covariances sequentially, without requiring any careful initialization. Our methodology starts from a single mixture component covering the whole data set and sequentially splits it incrementally during expectation maximization steps. The coarse-to-fine nature of the algorithm reduces the overall number of computations needed to reach a solution, which makes the method particularly suited to image segmentation applications whenever computational time is an issue. We show the effectiveness of the method in a series of experiments and compare it with a state-of-the-art alternative technique, both with synthetic data and real images, including experiments with images acquired from the iCub humanoid robot.
14.
Gaussian mixture models (GMM), commonly used in pattern recognition and machine learning, provide a flexible probabilistic model for the data. The conventional expectation–maximization (EM) algorithm for the maximum likelihood estimation of the parameters of GMMs is very sensitive to initialization and easily gets trapped in local maxima. Stochastic search algorithms have been popular alternatives for global optimization, but their use for GMM estimation has been limited to constrained models using identity or diagonal covariance matrices. Our major contributions in this paper are twofold. First, we present a novel parametrization for arbitrary covariance matrices that allows independent updating of individual parameters while retaining the validity of the resultant matrices. Second, we propose an effective parameter matching technique to mitigate the issues related to the existence of multiple candidate solutions that are equivalent under permutations of the GMM components. Experiments on synthetic and real data sets show that the proposed framework has robust performance and achieves significantly higher likelihood values than the EM algorithm.
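One standard way to realize the kind of unconstrained covariance parametrization described above is through a Cholesky factor whose diagonal is exponentiated, so that every real-valued parameter vector maps to a valid (symmetric, positive-definite) matrix. The 2x2 `cov_from_params` below is a hypothetical sketch of that general device, not the paper's actual parametrization.

```python
import math

def cov_from_params(theta):
    # theta = (a, b, c): three unconstrained reals.
    # L = [[e^a, 0], [c, e^b]] is lower-triangular with a positive
    # diagonal, so Sigma = L @ L.T is symmetric positive-definite
    # for every theta -- each parameter can be updated independently.
    a, b, c = theta
    l11, l22 = math.exp(a), math.exp(b)
    return [[l11 * l11,           l11 * c],
            [l11 * c,   c * c + l22 * l22]]
```

A stochastic search can then perturb `a`, `b`, `c` freely and never produce an invalid covariance, which is exactly the property a search over full covariance matrices needs.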
15.
Mixture model based clustering (also simply called model-based clustering hereinafter) consists of fitting a mixture model to data and identifying each cluster with one of its components. This paper tackles the model selection and parameter estimation problems in model-based clustering so as to improve the clustering performance on data sets whose true kernel distribution functions are not in the assumed family, as well as on those with inherently overlapped clusters. Being tailored to clustering applications, an effective model selection criterion is first proposed. Unlike most criteria, which measure only the goodness-of-fit of the model to the observed data, the proposed one also evaluates whether the candidate model provides a reasonable partition for the observed data, which enforces a model with well-separated components. Accordingly, an improved method for the estimation of mixture parameters is derived, which aims to suppress the spurious estimates produced by the standard expectation maximization (EM) algorithm and enforce well-supported components in the mixture model. Finally, the estimation of mixture parameters and the model selection are integrated in a single algorithm which favors a compact mixture model with both well-supported and well-separated components. Extensive experiments on synthetic and real-world data sets are carried out to show the effectiveness of the proposed approach to mixture model based clustering.
16.
Multi-level nonlinear mixed effects (ML-NLME) models have received a great deal of attention in recent years because of the flexibility they offer in handling the repeated-measures data arising from various disciplines. In this study, we propose both maximum likelihood and restricted maximum likelihood estimation of ML-NLME models with two-level random effects, using first order conditional expansion (FOCE) and the expectation–maximization (EM) algorithm. The FOCE–EM algorithm was compared with the most popular Lindstrom and Bates (LB) method in terms of computational and statistical properties. Basal area growth series data measured from Chinese fir (Cunninghamia lanceolata) experimental stands and simulated data were used for evaluation. The FOCE–EM and LB algorithms gave the same parameter estimates and fit statistics for the models that converged under both. However, FOCE–EM converged for all the models, while LB did not, especially for the models in which two-level random effects are simultaneously considered in several base parameters to account for between-group variation. We recommend the use of FOCE–EM in ML-NLME models, particularly when convergence is a concern in model selection.
17.
The authors derive a new class of finite-dimensional recursive filters for linear dynamical systems. The Kalman filter is a special case of their general filter. Apart from being of mathematical interest, these new finite-dimensional filters can be used with the expectation maximization (EM) algorithm to yield maximum likelihood estimates of the parameters of a linear dynamical system. Important advantages of their filter-based EM algorithm compared with the standard smoother-based EM algorithm include: 1) substantially reduced memory requirements, and 2) ease of parallel implementation on a multiprocessor system. The algorithm has applications in multisensor signal enhancement of speech signals and also econometric modeling.
18.
To predict remaining useful life from degradation data that contain anomalies, a prediction method based on a dynamic expectation-maximization (EM) segmental hidden semi-Markov model (SHSMM) is proposed. First, within the SHSMM framework, the unknown model parameters are estimated with an adaptive EM parameter-estimation algorithm. Second, based on the WGM(1,1) model, a dynamic forward-backward grey filling algorithm is proposed to handle anomalous data in the samples, and the health-prediction process is used to predict the equipment's remaining useful life. Finally, the model is evaluated and validated through a case study. The results show that the proposed equipment health prediction method effectively handles anomalous data.
19.
Training stochastic multilayer feedforward networks with the EM algorithm has low overhead, is easy to implement, and converges globally. Building on the EM algorithm, a new method for training stochastic multilayer feedforward networks, AEM, is proposed. The AEM algorithm uses the maximum-entropy principle of thermodynamic systems to compute the conditional probabilities of the network's hidden variables and, borrowing from the annealing process, introduces a temperature parameter that reduces the influence of the initial parameter values on the final result. The algorithm retains the advantages of the original EM algorithm while helping the training result converge to a global minimum. Its convergence is proved mathematically, and experiments also confirm its correctness and effectiveness.
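The temperature idea can be illustrated by tempering the E-step posteriors with an exponent 1/T, as in deterministic-annealing variants of EM: at high T the responsibilities are nearly uniform, weakening the pull of the initial parameters, and T is lowered toward 1 during training. `tempered_resp` is a sketch of this generic mechanism, not the paper's exact AEM update.

```python
import math

def tempered_resp(x, pi, mu, var, T):
    # Log of pi_k * N(x | mu_k, var_k) for each component k.
    logw = [math.log(pi[k]) - 0.5 * math.log(2 * math.pi * var[k])
            - (x - mu[k]) ** 2 / (2 * var[k]) for k in range(len(pi))]
    # Temper: responsibilities proportional to (pi_k * N_k)^(1/T).
    m = max(logw)  # subtract the max for numerical stability
    w = [math.exp((lw - m) / T) for lw in logw]
    s = sum(w)
    return [wk / s for wk in w]
```

At T = 1 this reduces to the ordinary EM posterior, so an annealing schedule that ends at T = 1 recovers standard EM in the final iterations.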
20.
TOA source localization and DOA estimation algorithms using prior distribution for calibrated source
This paper presents a priori probability density function (pdf)-based time-of-arrival (TOA) source localization algorithms. Range measurements are used to estimate the location parameter for TOA source localization. Prior information on the position of the calibrated source is employed to improve the existing likelihood-based localization method. The cost function, in which the prior distribution is combined with the likelihood function, is minimized by the adaptive expectation maximization (EM) and space-alternating generalized expectation–maximization (SAGE) algorithms. The variance of the prior distribution does not need to be known a priori because it can be estimated using Bayesian inference in the proposed adaptive EM algorithm. Note that the variance of the prior distribution must be known in the existing three-step WLS method [1]. The resulting positioning accuracy of the proposed methods is much better than that of the existing algorithms in regimes of large noise variance. Furthermore, the proposed algorithms can also effectively perform localization in mixed line-of-sight (LOS)/non-line-of-sight (NLOS) situations.