Similar Documents
1.
王泽  曲政  潘章明 《计算机仿真》2010,27(5):105-108
To solve the parameter-optimization problem in identifying particle-size parent mixture distributions, and to further improve identification efficiency, an improved particle swarm optimization (PSO) algorithm is used to optimize the parameters of the mixture distribution. The method sets a check value to decide whether the algorithm has become trapped in a local optimum and passes trapped particles on to the next iteration, avoiding PSO's tendency to stall at local optima during the search. In the simulation experiments, the Gaussian mixture model parameters estimated by this method are compared with those estimated by the iterative EM algorithm; the results show that the estimated parameters are close to the true distribution, further improving the recognition rate for particle-size parent mixture distributions.
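A rough sketch of this kind of safeguard — a check that detects a stalled swarm and re-scatters part of it before the next iteration — applied to a generic particle swarm minimizer. The function `objective` stands for whatever fitness is being minimized (for example, the negative log-likelihood of the mixture); all names and constants below are illustrative, not the authors' algorithm:

```python
import numpy as np

def pso_with_stagnation_check(objective, dim, n_particles=30, n_iter=200,
                              w=0.7, c1=1.5, c2=1.5,
                              stall_tol=1e-6, stall_limit=10, seed=0):
    """Minimize `objective` with PSO; re-scatter particles when the swarm stalls."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))                 # particle velocities
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    stall = 0
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        new_gbest = pbest[np.argmin(pbest_val)].copy()
        # "check value": if the global best barely moves, treat the swarm as
        # trapped and re-scatter the worst third of the particles so they keep
        # searching in the next iteration instead of collapsing together
        if np.linalg.norm(new_gbest - gbest) < stall_tol:
            stall += 1
            if stall >= stall_limit:
                worst = np.argsort(pbest_val)[-n_particles // 3:]
                x[worst] = rng.uniform(-1.0, 1.0, (len(worst), dim))
                stall = 0
        else:
            stall = 0
        gbest = new_gbest
    return gbest, objective(gbest)
```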

2.
贾可新  何子述 《计算机工程》2011,37(19):153-156
The EM algorithm based on Mahalanobis distance (MDEM) suffers from over-splitting. To address this, a competition-stopping MDEM (CSMDEM) algorithm is proposed. The algorithm embeds the minimum description length (MDL) criterion into MDEM as the competition-stopping condition, so that the model order can be selected while the mixture-model parameters are estimated. Experimental results show that the algorithm requires fewer EM iterations on average and fits Gaussian mixture models well. When applied to frequency-hopping network sorting, it sorts frequency-hopping signals with a high correct rate.
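A minimal sketch of description-length-driven order selection for a Gaussian mixture, using BIC as a stand-in for the MDL criterion. This illustrates only the general idea of choosing the model order while fitting, not the CSMDEM algorithm itself:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def select_gmm_order(X, max_components=8):
    """Fit GMMs of increasing order and keep the one with the smallest BIC.

    BIC plays the role of a description-length criterion: an extra component
    is kept only if it lowers the criterion, which curbs over-splitting.
    """
    best_model, best_bic = None, np.inf
    for k in range(1, max_components + 1):
        gmm = GaussianMixture(n_components=k, covariance_type="full",
                              random_state=0).fit(X)
        bic = gmm.bic(X)
        if bic < best_bic:
            best_model, best_bic = gmm, bic
    return best_model

# Synthetic two-component data (hypothetical values) for a quick check
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)), rng.normal(4.0, 0.5, (200, 2))])
print(select_gmm_order(X).n_components)  # typically prints 2 on this data
```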

3.
A zero-one-inflated mixture regression model is first proposed for observed data that contain an excess of zeros and ones. Because the EM algorithm generally converges to a local optimum, a modified EM algorithm is proposed to estimate the parameters of the zero-one-inflated binomial regression model (ZOIB) with finitely many mixture components. Finally, a simulation study demonstrates the effectiveness of the method.

4.
Research and Applications of the EM Algorithm
The EM algorithm, which can handle missing data, is introduced. EM is an iterative algorithm in which every iteration is guaranteed to increase the likelihood, converging to a local maximum. The basic principle and implementation steps of the EM algorithm are analyzed. The algorithm takes its name from the two steps in each iteration: an expectation step (E-step) followed by a maximization step (M-step). The EM algorithm is mainly used to compute maximum likelihood estimates from incomplete data. On this basis, the EM algorithm is applied to parameter estimation for state-space models, and a parameter estimation method for linear state-space models based on Kalman smoothing and the EM algorithm is given.
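For reference, the two steps described above take the following standard form for incomplete data, with observed data $X$, latent data $Z$, and current parameter estimate $\theta^{(t)}$ (generic notation, not taken from the cited paper):

$$\text{E-step:}\quad Q(\theta \mid \theta^{(t)}) = \mathbb{E}_{Z \mid X, \theta^{(t)}}\big[\log p(X, Z \mid \theta)\big]$$
$$\text{M-step:}\quad \theta^{(t+1)} = \arg\max_{\theta}\, Q(\theta \mid \theta^{(t)})$$

Each iteration satisfies $\log p(X \mid \theta^{(t+1)}) \ge \log p(X \mid \theta^{(t)})$, which is the monotone likelihood increase mentioned in the abstract.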

5.
Data fitting with the EM algorithm under the minimum-entropy optimization criterion is discussed for general mixture distributions. The EM algorithm for finite Gaussian mixtures is briefly derived, and an effective method for choosing initial parameter values is designed to counter its slow convergence. Experimental results show that the method helps the EM algorithm converge quickly to the neighborhood of the true parameter values.

7.
A Numerically Accelerated EM Algorithm for PH Distribution Data Fitting
黄卓  潘晓  郭波 《计算机工程》2008,34(14):1-3
Abstract: To address the slow convergence of the EM algorithm when fitting phase-type (PH) distribution data, a numerically accelerated EM algorithm is proposed that speeds up the iteration by enlarging the parameter change at each EM step. A comparison with the standard EM algorithm on four fitting examples shows that the accelerated algorithm is simple and practical, preserves convergence, and effectively improves the convergence speed of EM-based PH distribution fitting.
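The idea of enlarging the parameter change at each EM step can be sketched with a generic over-relaxation scheme. Here `em_step` and `loglik` are placeholder functions for the model-specific EM update and log-likelihood, and the fallback rule is an assumption for safety, not the cited paper's exact acceleration:

```python
import numpy as np

def accelerated_em(theta, em_step, loglik, alpha=1.5, max_iter=200, tol=1e-8):
    """Over-relaxed EM: compute the ordinary EM update, then enlarge the step.

    theta    : current parameter vector (numpy array)
    em_step  : function returning one standard EM update of theta
    loglik   : log-likelihood of theta, used to fall back on the plain EM
               step whenever the enlarged step would decrease the likelihood
    alpha    : step-enlargement factor (alpha = 1 recovers standard EM)
    """
    ll_old = loglik(theta)
    for _ in range(max_iter):
        theta_em = em_step(theta)                        # standard EM update
        theta_acc = theta + alpha * (theta_em - theta)   # enlarged step
        # keep the accelerated step only if it does not reduce the likelihood
        theta = theta_acc if loglik(theta_acc) >= loglik(theta_em) else theta_em
        ll_new = loglik(theta)
        if abs(ll_new - ll_old) < tol:
            break
        ll_old = ll_new
    return theta
```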

8.
A common task in machine learning is estimating model parameters by maximum likelihood estimation (MLE). The EM algorithm is an iterative algorithm built on point-estimation MLE and is a powerful tool for computing maximum likelihood estimates, but it converges slowly; the α-EM algorithm is therefore introduced to overcome this drawback. Since learning may involve large amounts of missing data with dynamic fuzziness, a dynamic fuzzy maximum likelihood estimation algorithm based on incomplete data is given and verified with examples.

9.
王宏伟  柴秀俊 《控制与决策》2021,36(12):2946-2954
Starting from a probabilistic-statistical viewpoint, a multi-model modelling method for non-uniformly sampled systems based on Gaussian mixture model clustering and the recursive least squares algorithm is proposed. First, a Gaussian mixture model is used as the scheduling function; the expectation maximization (EM) algorithm iteratively updates the estimates of its parameters, so that which subsystem is active is determined by computing and comparing the Gaussian probability density of each subsystem. Second, the recursive least squares algorithm estimates the local subsystem parameters. The performance of the proposed algorithm is then analyzed using the martingale convergence theorem. Finally, multi-model modelling of a non-uniformly sampled system demonstrates the effectiveness of the proposed method.

10.
A Gaussian mixture model is a probabilistic graphical model with latent variables whose parameters are usually trained iteratively with the EM algorithm. After a brief derivation of the EM algorithm for Gaussian mixtures, this paper uses a Gaussian mixture model to classify the iris data set. To address the EM algorithm's strong sensitivity to initial values, the K-means clustering algorithm is used to estimate the initial values. After obtaining the classification results of the K-means clustering algorithm and the EM algorithm, the two algorithms' …
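A minimal sketch of this kind of pipeline with scikit-learn, using K-means centroids to initialize a Gaussian mixture fitted by EM on the iris data. This mirrors the general approach, not necessarily the cited paper's implementation:

```python
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

X, y = load_iris(return_X_y=True)

# K-means provides the initial component means for the EM iterations
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

gmm = GaussianMixture(
    n_components=3,
    covariance_type="full",
    means_init=kmeans.cluster_centers_,   # EM starts from the K-means centroids
    random_state=0,
).fit(X)

labels = gmm.predict(X)
print("average log-likelihood per sample:", gmm.score(X))
print("first ten predicted component labels:", labels[:10])
```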

11.
This paper formulates a novel expectation maximization (EM) algorithm for the mixture of multivariate t-distributions. By introducing a new kind of “missing” data, we show that the empirically improved iterative algorithm in the literature for the mixture of multivariate t-distributions is in fact a type of EM algorithm; a theoretical analysis is thus established which guarantees that the empirical algorithm converges to the maximum likelihood estimates of the mixture parameters. Simulated experiments and real experiments on classification and image segmentation confirm the effectiveness of the improved EM algorithm.

12.
An Identification Method for Nonlinear Systems with Colored Measurement Noise
黄玉龙  张勇刚  李宁  赵琳 《自动化学报》2015,41(11):1877-1892
Using the maximum likelihood criterion, this paper proposes an identification method for nonlinear systems with colored measurement noise. First, a measurement-differencing approach whitens the colored measurement noise and yields a new measurement equation, converting the identification problem for a nonlinear system with colored measurement noise into one with white measurement noise and a one-step-delayed state. Next, a new maximum-likelihood-based identification method is derived with the expectation maximization (EM) algorithm, consisting of an expectation step (E-step) and a maximization step (M-step). In the E-step, the expectation of the complete-data log-likelihood is approximated using the current parameter estimates together with Gaussian approximate filters and smoothers for colored measurement noise. In the M-step, this approximate expected likelihood is maximized: the noise parameters are updated analytically and the model parameters are updated with a Newton method. Finally, numerical simulations verify the effectiveness of the proposed algorithm.
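The whitening step can be illustrated with the standard measurement-differencing identity; the first-order autoregressive noise model below is an assumption used only for illustration, not necessarily the exact model of the cited paper. For a measurement $z_k = h(x_k) + v_k$ with colored noise $v_k = \psi\, v_{k-1} + w_k$ and $w_k$ white, the differenced measurement

$$z_k^{*} = z_k - \psi\, z_{k-1} = h(x_k) - \psi\, h(x_{k-1}) + w_k$$

has white noise $w_k$ but depends on both $x_k$ and the one-step-delayed state $x_{k-1}$, which is exactly the reformulated problem described above.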

13.
The expectation maximization algorithm has been classically used to find the maximum likelihood estimates of parameters in probabilistic models with unobserved data, for instance, mixture models. A key issue in such problems is the choice of the model complexity: the higher the number of components in the mixture, the higher the data likelihood, but also the higher the computational burden and the risk of overfitting. In this work, we propose a clustering method based on the expectation maximization algorithm that adapts online the number of components of a finite Gaussian mixture model fitted to multivariate data. The method estimates the number of components and their means and covariances sequentially, without requiring any careful initialization. Our methodology starts from a single mixture component covering the whole data set and splits it incrementally during the expectation maximization steps. The coarse-to-fine nature of the algorithm reduces the overall number of computations needed to reach a solution, which makes the method particularly suited to image segmentation applications whenever computational time is an issue. We show the effectiveness of the method in a series of experiments and compare it with a state-of-the-art alternative technique, both with synthetic data and real images, including experiments with images acquired from the iCub humanoid robot.

14.
Gaussian mixture models (GMM), commonly used in pattern recognition and machine learning, provide a flexible probabilistic model for the data. The conventional expectation–maximization (EM) algorithm for the maximum likelihood estimation of GMM parameters is very sensitive to initialization and easily gets trapped in local maxima. Stochastic search algorithms have been popular alternatives for global optimization, but their use for GMM estimation has been limited to constrained models using identity or diagonal covariance matrices. Our major contributions in this paper are twofold. First, we present a novel parametrization for arbitrary covariance matrices that allows independent updating of individual parameters while retaining the validity of the resulting matrices. Second, we propose an effective parameter matching technique to mitigate the issues related to the existence of multiple candidate solutions that are equivalent under permutations of the GMM components. Experiments on synthetic and real data sets show that the proposed framework has robust performance and achieves significantly higher likelihood values than the EM algorithm.
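One common way to parametrize an arbitrary valid covariance matrix with unconstrained real numbers, in the spirit of what the abstract describes though not necessarily the authors' exact scheme, is through a Cholesky factor whose diagonal is kept positive:

```python
import numpy as np

def vector_to_covariance(params, d):
    """Map d*(d+1)/2 unconstrained reals to a symmetric positive-definite matrix.

    The vector fills a lower-triangular Cholesky factor L; the diagonal
    entries are exponentiated so L has a strictly positive diagonal, which
    guarantees that L @ L.T is a valid covariance matrix for any input.
    """
    L = np.zeros((d, d))
    L[np.tril_indices(d)] = params
    L[np.diag_indices(d)] = np.exp(np.diag(L))   # enforce a positive diagonal
    return L @ L.T

# Example: 6 unconstrained numbers (hypothetical values) -> a valid 3x3 covariance
params = np.array([0.1, -0.3, 0.2, 0.5, -0.1, 0.4])
cov = vector_to_covariance(params, d=3)
print(np.all(np.linalg.eigvalsh(cov) > 0))       # True: positive definite
```

Because every parameter can be varied freely without leaving the space of valid covariance matrices, a stochastic search can update each entry independently.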

15.
Mixture model based clustering (also simply called model-based clustering hereinafter) consists of fitting a mixture model to data and identifying each cluster with one of its components. This paper tackles the model selection and parameter estimation problems in model-based clustering so as to improve clustering performance on data sets whose true kernel distribution functions are not in the assumed family, as well as on data with inherently overlapping clusters. Being tailored to clustering applications, an effective model selection criterion is first proposed. Unlike most criteria, which only measure how well the model fits the observed data, the proposed one also evaluates whether the candidate model provides a reasonable partition for the observed data, which enforces a model with well-separated components. Accordingly, an improved method for the estimation of mixture parameters is derived, which aims to suppress the spurious estimates produced by the standard expectation maximization (EM) algorithm and to enforce well-supported components in the mixture model. Finally, parameter estimation and model selection are integrated in a single algorithm that favours a compact mixture model with both well-supported and well-separated components. Extensive experiments on synthetic and real-world data sets show the effectiveness of the proposed approach to mixture model based clustering.

16.
Multi-level nonlinear mixed effects (ML-NLME) models have received a great deal of attention in recent years because of the flexibility they offer in handling the repeated-measures data arising from various disciplines. In this study, we propose both maximum likelihood and restricted maximum likelihood estimation of ML-NLME models with two-level random effects, using first order conditional expansion (FOCE) and the expectation–maximization (EM) algorithm. The FOCE–EM algorithm was compared with the most popular Lindstrom and Bates (LB) method in terms of computational and statistical properties. Basal area growth series data measured from Chinese fir (Cunninghamia lanceolata) experimental stands and simulated data were used for evaluation. The FOCE–EM and LB algorithms gave the same parameter estimates and fit statistics for the models for which both converged. However, FOCE–EM converged for all the models, while LB did not, especially for models in which two-level random effects are simultaneously considered in several base parameters to account for between-group variation. We recommend the use of FOCE–EM in ML-NLME models, particularly when convergence is a concern in model selection.

17.
The authors derive a new class of finite-dimensional recursive filters for linear dynamical systems. The Kalman filter is a special case of their general filter. Apart from being of mathematical interest, these new finite-dimensional filters can be used with the expectation maximization (EM) algorithm to yield maximum likelihood estimates of the parameters of a linear dynamical system. Important advantages of their filter-based EM algorithm compared with the standard smoother-based EM algorithm include: 1) substantially reduced memory requirements, and 2) ease of parallel implementation on a multiprocessor system. The algorithm has applications in multisensor signal enhancement of speech signals and also in econometric modeling.

18.
To predict remaining useful life from abnormal data arising during equipment degradation, a prediction method based on a dynamic expectation maximization (EM) algorithm combined with a segmented hidden semi-Markov model (SHSMM) is proposed. First, within the SHSMM framework, an adaptive EM parameter estimation algorithm estimates the unknown model parameters. Second, based on the WGM(1,1) model, a dynamic forward-backward grey filling algorithm is proposed to handle abnormal data in the samples, and a health-prediction process is used to predict the equipment's remaining useful life. Finally, the model is evaluated and validated through a case study. The results show that the proposed equipment health-prediction method effectively handles abnormal data.

19.
Training stochastic multilayer feedforward networks with the EM algorithm offers low overhead, easy implementation, and global convergence. Building on the EM algorithm, a new training method for stochastic multilayer feedforward networks, AEM, is proposed. The AEM algorithm uses the maximum entropy principle of thermodynamic systems to compute the conditional probabilities of the hidden variables in the network and, borrowing from the annealing process, introduces a temperature parameter that reduces the influence of the initial parameter values on the final result. The algorithm retains the advantages of the original EM algorithm while helping the training converge to a global minimum. Its convergence is proved mathematically, and experiments also confirm its correctness and effectiveness.
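Introducing a temperature parameter into EM is commonly done by tempering the E-step, as in deterministic annealing. The formula below shows this standard form for a mixture with weights $\pi_k$ and component densities $p_k$; it illustrates the general idea rather than the exact AEM update:

$$\gamma_{ik}(T) = \frac{\big(\pi_k\, p_k(x_i)\big)^{1/T}}{\sum_{j}\big(\pi_j\, p_j(x_i)\big)^{1/T}}$$

At high temperature $T$ the responsibilities are nearly uniform, which smooths the likelihood surface and reduces sensitivity to the initial parameters; as $T \to 1$ the update reduces to the ordinary EM E-step.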

20.
This paper presents a priori probability density function (pdf)-based time-of-arrival (TOA) source localization algorithms. Range measurements are used to estimate the location parameter for TOA source localization. Prior information on the position of the calibrated source is employed to improve the existing likelihood-based localization method. The cost function, in which the prior distribution is combined with the likelihood function, is minimized by the adaptive expectation maximization (EM) and space-alternating generalized expectation–maximization (SAGE) algorithms. The variance of the prior distribution does not need to be known a priori because it can be estimated using Bayes inference in the proposed adaptive EM algorithm, whereas it must be known in the existing three-step WLS method [1]. The resulting positioning accuracy of the proposed methods is much better than that of the existing algorithms in regimes of large noise variance. Furthermore, the proposed algorithms can also effectively perform localization in line-of-sight (LOS)/non-line-of-sight (NLOS) mixture situations.
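A typical form of such a prior-plus-likelihood cost, assuming Gaussian range noise and a Gaussian prior on the source position $\theta$ (an illustrative formulation, not the exact cost of the cited paper), is

$$J(\theta) = \sum_{i=1}^{N} \frac{\big(r_i - \lVert \theta - s_i \rVert\big)^2}{\sigma_i^2} + (\theta - \theta_0)^{\mathsf{T}} \Sigma_0^{-1} (\theta - \theta_0),$$

where $r_i$ is the range measured from sensor $s_i$, and $\theta_0$, $\Sigma_0$ are the prior mean and covariance; EM-type algorithms minimize $J$ by alternating between expectations over the latent variables and parameter updates.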
