Similar Literature
20 similar documents found (search time: 31 ms)
1.
2.
This paper presents a new approximate Bayesian estimator for enhancing a noisy speech signal. The speech model is assumed to be a Gaussian mixture model (GMM) in the log-spectral domain, in contrast to most current models, which work in the frequency domain. Exact signal estimation is computationally intractable, so we derive three approximations to enhance the efficiency of signal estimation. The Gaussian approximation transforms the log-spectral-domain GMM into the frequency domain using a minimum Kullback-Leibler (KL) divergence criterion. The frequency-domain Laplace method computes the maximum a posteriori (MAP) estimator for the spectral amplitude; correspondingly, the log-spectral-domain Laplace method computes the MAP estimator for the log-spectral amplitude. Further, gain and noise-spectrum adaptation are implemented using the expectation-maximization (EM) algorithm within the GMM under the Gaussian approximation. The proposed algorithms are evaluated by applying them to enhance speech corrupted by speech-shaped noise (SSN). The experimental results demonstrate that the proposed algorithms offer improved signal-to-noise ratio, lower word-recognition error rate, and less spectral distortion.
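The MAP step can be illustrated in the simplest conjugate case: a single Gaussian prior on the clean log-amplitude and additive Gaussian observation noise give a closed-form MAP estimate. This is only a sketch of the one-component case, not the paper's GMM-based Laplace estimator; the function name and parameters are hypothetical.

```python
# Minimal sketch: MAP estimate of a clean log-spectral amplitude x
# under a single Gaussian prior x ~ N(mu0, s0sq) and a Gaussian
# observation model y = x + n, n ~ N(0, snsq).  The paper uses a GMM
# prior with Laplace approximations; this is the one-component case.

def map_log_amplitude(y, mu0, s0sq, snsq):
    """Closed-form MAP (= posterior mean) for the conjugate Gaussian case."""
    # The posterior precision is the sum of prior and likelihood precisions;
    # the MAP is the precision-weighted average of prior mean and observation.
    return (snsq * mu0 + s0sq * y) / (s0sq + snsq)

# A noisy observation is pulled toward the prior mean in proportion
# to the noise level.
print(map_log_amplitude(y=2.0, mu0=0.0, s0sq=1.0, snsq=1.0))  # -> 1.0
```

With equal prior and noise variances the estimate lands halfway between the observation and the prior mean; as the noise variance shrinks, the estimate approaches the observation.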

3.
To improve the accuracy and noise robustness of remote-sensing image segmentation, a K-means-based Student's-t mixture model is proposed, building on the Student's-t mixture model and combining the strong local search ability of the K-means algorithm with the strong global search ability of the flower pollination algorithm. In this method, exploiting the fact that the Student's-t distribution is close to both the Gaussian and the Cauchy distributions, the execution of the flower pollination algorithm...

4.
王宏伟  柴秀俊 《控制与决策》2021,36(12):2946-2954
Starting from probabilistic and statistical methods, a multi-model modeling approach for non-uniformly sampled systems is proposed, based on Gaussian mixture model clustering and the recursive least squares algorithm. First, a Gaussian mixture model serves as the scheduling function, and the expectation-maximization (EM) algorithm iteratively updates its parameter estimates, so that the activation of each subsystem is determined by computing and comparing the subsystems' Gaussian probability density functions. Second, the recursive least squares algorithm estimates the local subsystem parameters. The performance of the proposed algorithm is then analyzed using the martingale convergence theorem. Finally, multi-model modeling of a non-uniformly sampled system demonstrates the effectiveness of the proposed method.
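The local-submodel step pairs the GMM gate with recursive least squares. A minimal sketch of RLS for a single scalar parameter (the paper estimates full parameter vectors; names and the unit forgetting factor are assumptions):

```python
# Minimal sketch of recursive least squares (RLS) for a scalar model
# y_t = a * x_t + noise, the 1-D special case of the local sub-model
# parameter estimation described above.
import random

def rls(xs, ys, p0=1e6):
    a_hat, p = 0.0, p0                  # initial estimate and covariance
    for x, y in zip(xs, ys):
        k = p * x / (1.0 + x * p * x)   # gain
        a_hat += k * (y - a_hat * x)    # innovation update
        p = (1.0 - k * x) * p           # covariance update
    return a_hat

random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(500)]
ys = [3.0 * x + random.gauss(0, 0.01) for x in xs]
print(round(rls(xs, ys), 2))  # close to the true slope 3.0
```

Each new sample refines the estimate without reprocessing old data, which is what makes RLS suitable for the online, non-uniformly sampled setting.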

5.
A spatially constrained mixture model for image segmentation (total citations: 11; self: 0; others: 11)
Gaussian mixture models (GMMs) constitute a well-known type of probabilistic neural networks. One of their many successful applications is in image segmentation, where spatially constrained mixture models have been trained using the expectation-maximization (EM) framework. In this letter, we elaborate on this method and propose a new methodology for the M-step of the EM algorithm that is based on a novel constrained optimization formulation. Numerical experiments using simulated images illustrate the superior performance of our method in terms of the attained maximum value of the objective function and segmentation accuracy compared to previous implementations of this approach.

6.
A Greedy EM Algorithm for Gaussian Mixture Learning (total citations: 7; self: 0; others: 7)
Learning a Gaussian mixture with a local algorithm like EM can be difficult because (i) the true number of mixing components is usually unknown, (ii) there is no generally accepted method for parameter initialization, and (iii) the algorithm can get trapped in one of the many local maxima of the likelihood function. In this paper we propose a greedy algorithm for learning a Gaussian mixture which tries to overcome these limitations. In particular, starting with a single component and adding components sequentially until a maximum number k, the algorithm is capable of achieving solutions superior to EM with k components in terms of the likelihood of a test set. The algorithm is based on recent theoretical results on incremental mixture density estimation, and uses a combination of global and local search each time a new component is added to the mixture. This revised version was published online in August 2006 with corrections to the Cover Date.
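The incremental idea can be sketched in one dimension: start with a single Gaussian, insert a new component where the current mixture explains the data worst, and refine with a few EM steps. This toy version uses the training likelihood and a naive insertion rule, not the paper's global/local search; all names are hypothetical.

```python
# Toy 1-D greedy Gaussian-mixture learning: components are (weight,
# mean, variance) tuples; a component is added at the worst-explained
# sample, then the whole mixture is refined by EM.
import math, random

def logpdf(x, m, v):
    return -0.5 * (math.log(2 * math.pi * v) + (x - m) ** 2 / v)

def loglik(data, comps):
    return sum(math.log(sum(w * math.exp(logpdf(x, m, v))
                            for w, m, v in comps)) for x in data)

def em_steps(data, comps, steps=20):
    for _ in range(steps):
        resp = []                         # E-step: responsibilities
        for x in data:
            ps = [w * math.exp(logpdf(x, m, v)) for w, m, v in comps]
            s = sum(ps)
            resp.append([p / s for p in ps])
        new = []                          # M-step: reweighted moments
        for k in range(len(comps)):
            nk = sum(r[k] for r in resp)
            m = sum(r[k] * x for r, x in zip(resp, data)) / nk
            v = sum(r[k] * (x - m) ** 2 for r, x in zip(resp, data)) / nk
            new.append((nk / len(data), m, max(v, 1e-3)))
        comps = new
    return comps

def greedy_gmm(data, kmax):
    mu = sum(data) / len(data)
    var = sum((x - mu) ** 2 for x in data) / len(data)
    comps = [(1.0, mu, var)]
    trace = [loglik(data, comps)]
    while len(comps) < kmax:
        # naive insertion: put a new component at the worst-explained point
        worst = min(data, key=lambda x: sum(
            w * math.exp(logpdf(x, m, v)) for w, m, v in comps))
        comps = [(w * 0.9, m, v) for w, m, v in comps] + [(0.1, worst, 1.0)]
        comps = em_steps(data, comps)
        trace.append(loglik(data, comps))
    return comps, trace

random.seed(1)
data = [random.gauss(-4, 1) for _ in range(200)] + \
       [random.gauss(4, 1) for _ in range(200)]
comps, trace = greedy_gmm(data, kmax=2)
print(trace[1] > trace[0])  # the second component raises the likelihood here
```

On this clearly bimodal sample, adding the second component and re-running EM raises the log-likelihood substantially over the single-Gaussian fit.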

7.
Unsupervised learning of finite mixture models (total citations: 38; self: 0; others: 38)
This paper proposes an unsupervised algorithm for learning a finite mixture model from multivariate data. The adjective "unsupervised" is justified by two properties of the algorithm: 1) it is capable of selecting the number of components and 2) unlike the standard expectation-maximization (EM) algorithm, it does not require careful initialization. The proposed method also avoids another drawback of EM for mixture fitting: the possibility of convergence toward a singular estimate at the boundary of the parameter space. The novelty of our approach is that we do not use a model selection criterion to choose one among a set of preestimated candidate models; instead, we seamlessly integrate estimation and model selection in a single algorithm. Our technique can be applied to any type of parametric mixture model for which it is possible to write an EM algorithm; in this paper, we illustrate it with experiments involving Gaussian mixtures. These experiments testify to the good performance of our approach.

8.
Image segmentation decomposes an image into a set of mutually non-overlapping regions. When an existing improved Gaussian mixture model is used for image segmentation, accelerating the segmentation process is a topic worth studying. Based on the recent noise-benefit EM algorithm, artificially injected noise is used to speed up the convergence of the existing improved Gaussian mixture model and thus accelerate image segmentation. When the added noise satisfies the noise-benefit EM theorem, the additive noise speeds up the average convergence of the EM algorithm toward a local maximum. Since the improved Gaussian mixture model is a special case of the EM algorithm, the noise-benefit EM theorem applies to it as well. Experiments show that the proposed algorithm converges markedly faster and has markedly lower time complexity in image segmentation.

9.
This paper explores the use of the Artificial Bee Colony (ABC) algorithm to compute threshold selection for image segmentation. ABC is an evolutionary algorithm inspired by the intelligent behavior of honey-bees which has been successfully employed to solve complex optimization problems. In this approach, an image 1-D histogram is approximated through a Gaussian mixture model whose parameters are calculated by the ABC algorithm. In the model, each Gaussian function represents a pixel class and therefore a threshold point. Unlike the Expectation-Maximization (EM) algorithm, the ABC method shows fast convergence and low sensitivity to initial conditions. Remarkably, it also improves complex time-consuming computations commonly required by gradient-based methods. Experimental results over multiple images with different range of complexity validate the efficiency of the proposed technique with regard to segmentation accuracy, speed, and robustness. The paper also includes an experimental comparison to the EM and to one gradient-based method which ultimately demonstrates a better performance from the proposed algorithm.
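The thresholding step itself is independent of how the mixture was fitted: once each pixel class has a weighted Gaussian, a threshold between two classes is the gray level where their weighted densities cross. A sketch with given (not ABC-fitted) class parameters; names are hypothetical:

```python
# Sketch of threshold selection from a fitted histogram mixture: scan
# the gray-level range for the crossing point of two weighted class
# Gaussians.  The paper fits the mixture with ABC; here the class
# parameters (weight, mean, variance) are simply given.
import math

def gauss(x, w, m, v):
    return w * math.exp(-0.5 * (x - m) ** 2 / v) / math.sqrt(2 * math.pi * v)

def threshold(c1, c2, lo=0, hi=255):
    best, best_gap = lo, float("inf")
    for g in range(lo, hi + 1):
        gap = abs(gauss(g, *c1) - gauss(g, *c2))
        if gap < best_gap:
            best, best_gap = g, gap
    return best

# dark class around 60, bright class around 180, equal weights/variances
t = threshold((0.5, 60.0, 400.0), (0.5, 180.0, 400.0))
print(t)  # -> 120 (the midpoint, since the classes are symmetric)
```

For symmetric classes the crossing lands at the midpoint of the two means; unequal weights or variances shift it toward the weaker or tighter class.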

10.
Spatially varying mixture models are characterized by the dependence of their mixing proportions on location (contextual mixing proportions) and they have been widely used in image segmentation. In this work, Gauss-Markov random field (MRF) priors are employed along with spatially varying mixture models to ensure the preservation of region boundaries in image segmentation. To preserve region boundaries, two distinct models for a line process involved in the MRF prior are proposed. The first model considers edge preservation by imposing a Bernoulli prior on the normally distributed local differences of the contextual mixing proportions. It is a discrete line process model whose parameters are computed by variational inference. The second model imposes Gamma prior on the Student's-t distributed local differences of the contextual mixing proportions. It is a continuous line process whose parameters are also automatically estimated by the Expectation-Maximization (EM) algorithm. The proposed models are numerically evaluated and two important issues in image segmentation by mixture models are also investigated and discussed: the constraints to be imposed on the contextual mixing proportions to be probability vectors and the MRF optimization strategy in the frameworks of the standard and variational EM algorithm.

11.
Aeroengine steady-state data are affected by the operating environment, measurement methods, control laws, and other factors, so the test data often contain outliers that distort the computed steady-state performance parameters. Under a normality assumption on the data sources, a Gaussian mixture model is used to screen and classify the data. For model fitting, a genetic optimization algorithm overcomes the local convergence of the expectation-maximization (EM) algorithm, lending the results a degree of global optimality, while the EM algorithm is applied in combination to compensate for the genetic algorithm's slow convergence. The Akaike or Bayesian information criterion (AIC/BIC) serves as the evaluation index for the screening results, and the feasibility of the method is verified. For data fusion, a method based on similarity of the optimal model parameters is proposed; the fused values are close to the true steady-state data and can serve as the characteristic value of the steady-state data segment.

12.
We address the problem of probability density function estimation using a Gaussian mixture model updated with the expectation-maximization (EM) algorithm. To deal with the case of an unknown number of mixing kernels, we define a new measure for Gaussian mixtures, called total kurtosis, which is based on the weighted sample kurtoses of the kernels. This measure provides an indication of how well the Gaussian mixture fits the data. Then we propose a new dynamic algorithm for Gaussian mixture density estimation which monitors the total kurtosis at each step of the EM algorithm in order to decide dynamically on the correct number of kernels and possibly escape from local maxima. We show the potential of our technique in approximating unknown densities through a series of examples with several density estimation problems.
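The underlying signal can be sketched directly: a Gaussian kernel that fits its data has sample kurtosis near 3, so a responsibility-weighted kurtosis far from 3 hints that the kernel count is wrong. The formula below is a plain weighted sample kurtosis, an assumption standing in for the paper's exact "total kurtosis" definition:

```python
# Sketch: responsibility-weighted sample kurtosis of one kernel.  For
# data that a single Gaussian explains well, the value is near 3; for
# bimodal data forced under one kernel it deviates, suggesting that
# more kernels are needed.  (Weighted-moment form is an assumption.)
import random

def weighted_kurtosis(data, resp):
    n = sum(resp)
    m = sum(r * x for r, x in zip(resp, data)) / n
    v = sum(r * (x - m) ** 2 for r, x in zip(resp, data)) / n
    return sum(r * (x - m) ** 4 for r, x in zip(resp, data)) / n / v ** 2

random.seed(2)
gaussian = [random.gauss(0, 1) for _ in range(5000)]
bimodal = [random.gauss(-3, 1) for _ in range(2500)] + \
          [random.gauss(3, 1) for _ in range(2500)]
ones = [1.0] * 5000
print(round(weighted_kurtosis(gaussian, ones), 1))  # near 3
print(round(weighted_kurtosis(bimodal, ones), 1))   # well below 3
```

The bimodal case is platykurtic (its theoretical kurtosis is 1.38 for these parameters), which is the kind of deviation a monitoring algorithm could act on by adding a kernel.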

13.
Clustering is a useful tool for finding structure in a data set. The mixture-likelihood approach to clustering is popular, and within it the EM algorithm is the most widely used method. However, the EM algorithm for Gaussian mixture models is quite sensitive to initial values, and the number of components must be given a priori. To resolve these drawbacks, we develop a robust EM clustering algorithm for Gaussian mixture models: we first create a new way to solve the initialization problems, and then construct a schema that automatically obtains an optimal number of clusters. The proposed robust EM algorithm is therefore robust to initialization and to differing cluster volumes, while automatically determining an optimal number of clusters. Experimental examples compare our robust EM algorithm with existing clustering methods; the results demonstrate the superiority and usefulness of the proposed method.

14.
Gaussian mixture models (GMM), commonly used in pattern recognition and machine learning, provide a flexible probabilistic model for the data. The conventional expectation-maximization (EM) algorithm for the maximum likelihood estimation of the parameters of GMMs is very sensitive to initialization and easily gets trapped in local maxima. Stochastic search algorithms have been popular alternatives for global optimization, but their use for GMM estimation has been limited to constrained models using identity or diagonal covariance matrices. Our major contributions in this paper are twofold. First, we present a novel parametrization for arbitrary covariance matrices that allows independent updating of individual parameters while retaining validity of the resultant matrices. Second, we propose an effective parameter matching technique to mitigate the issues related to the existence of multiple candidate solutions that are equivalent under permutations of the GMM components. Experiments on synthetic and real data sets show that the proposed framework has a robust performance and achieves significantly higher likelihood values than the EM algorithm.
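One standard way to let every covariance parameter vary freely while keeping the matrix valid is a Cholesky parametrization with log-diagonal entries: any real-valued parameter vector then maps to a symmetric positive-definite matrix. This is a common choice offered as illustration; the paper's exact parametrization may differ.

```python
# Sketch: parametrize a 2x2 covariance via its Cholesky factor L with
# log-scaled diagonal, so independent updates of the raw parameters
# always yield a symmetric positive-definite C = L L^T.
import math

def params_to_cov2x2(p):
    """p = (log l11, l21, log l22)  ->  2x2 SPD covariance matrix."""
    l11, l21, l22 = math.exp(p[0]), p[1], math.exp(p[2])
    # C = L L^T with lower-triangular L = [[l11, 0], [l21, l22]]
    return [[l11 * l11,        l11 * l21],
            [l21 * l11, l21 * l21 + l22 * l22]]

C = params_to_cov2x2((0.0, -5.0, 0.0))   # an "extreme" parameter vector
det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
print(C[0][0] > 0 and det > 0)  # -> True: still positive definite
```

Because validity is guaranteed by construction, a stochastic search can mutate each raw parameter independently without projection or repair steps.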

15.
With the wide applications of Gaussian mixture clustering, e.g., in semantic video classification [H. Luo, J. Fan, J. Xiao, X. Zhu, Semantic principal video shot classification via mixture Gaussian, in: Proceedings of the 2003 International Conference on Multimedia and Expo, vol. 2, 2003, pp. 189-192], it is a nontrivial task to select the useful features in Gaussian mixture clustering without class labels. This paper, therefore, proposes a new feature selection method, through which not only the most relevant features are identified, but the redundant features are also eliminated so that the smallest relevant feature subset can be found. We integrate this method with our recently proposed Gaussian mixture clustering approach, namely the rival penalized expectation-maximization (RPEM) algorithm [Y.M. Cheung, A rival penalized EM algorithm towards maximizing weighted likelihood for density mixture clustering with automatic model selection, in: Proceedings of the 17th International Conference on Pattern Recognition, 2004, pp. 633-636; Y.M. Cheung, Maximum weighted likelihood via rival penalized EM for density mixture clustering with automatic model selection, IEEE Trans. Knowl. Data Eng. 17(6) (2005) 750-761], which is able to determine the number of components (i.e., the model order selection) in a Gaussian mixture automatically. Subsequently, the data clustering, model selection, and the feature selection are all performed in a single learning process. Experimental results have shown the efficacy of the proposed approach.

16.
A fast CUDA-based GMM training method and its application (total citations: 1; self: 1; others: 0)
Because they can approximate any distribution well, Gaussian mixture models (GMMs) are widely used in pattern recognition. GMM parameters are usually trained with the iterative expectation-maximization (EM) algorithm, which takes a long time when the training data set and the number of mixture components are very large. NVIDIA's Compute Unified Device Architecture (CUDA) enables fast, massively parallel computation by running many concurrent threads on the graphics processing unit (GPU). This paper proposes a fast CUDA-based GMM training method for very large data sets, including fast implementations of the K-means algorithm for model initialization and of the EM algorithm for parameter estimation. The training method is also applied to language-identification GMMs. Experimental results show that, compared with a single core of an Intel Dual-Core Pentium IV 3.0 GHz CPU, language GMM training on an NVIDIA GTS250 GPU is about 26 times faster.
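The K-means initialization step is straightforward to sketch on the CPU: cluster the data, then take each cluster's relative size, mean, and variance as the initial GMM parameters. A 1-D pure-Python sketch (the paper parallelizes this on the GPU; names are hypothetical):

```python
# Sketch of K-means initialization for GMM training in 1-D: run Lloyd's
# algorithm, then convert each cluster into an initial (weight, mean,
# variance) component.
import random

def kmeans_1d(data, k, iters=20):
    centers = sorted(random.sample(data, k))
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in data:
            groups[min(range(k), key=lambda j: abs(x - centers[j]))].append(x)
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return centers, groups

def init_gmm(data, k):
    centers, groups = kmeans_1d(data, k)
    n = len(data)
    return [(len(g) / n,                                # weight
             c,                                         # mean
             sum((x - c) ** 2 for x in g) / len(g))     # variance
            for c, g in zip(centers, groups)]

random.seed(3)
data = [random.gauss(0, 1) for _ in range(300)] + \
       [random.gauss(10, 1) for _ in range(300)]
params = sorted(init_gmm(data, 2), key=lambda t: t[1])
print([round(m, 1) for _, m, _ in params])  # means near 0 and 10
```

Starting EM from these moment-matched components rather than random parameters is what makes the subsequent EM phase converge in few iterations, which matters when each iteration is a full pass over a huge data set.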

17.
A Gaussian mixture model classification method based on an online split-and-merge EM algorithm (total citations: 2; self: 1; others: 1)
To address the problem that EM training of a conventional Gaussian mixture model cannot start until enough samples are available, a new online incremental training algorithm for Gaussian mixture models is proposed. Building on the Split-and-Merge EM method of Ueda et al., the algorithm improves the computation of the split-and-merge criteria, effectively avoiding local extrema and reducing the occurrence of singular values. By introducing a time-series parameter, an incremental EM training method is derived that realizes incremental expectation-maximization and thus updates the GMM parameters online, sample by sample. Experiments on synthetic data and a real speech recognition application show that the algorithm achieves good computational efficiency and classification accuracy.

18.
This paper explores the use of the Learning Automata (LA) algorithm to compute threshold selection for image segmentation, a critical preprocessing step for image analysis, pattern recognition, and computer vision. LA is a heuristic method able to solve complex optimization problems, with interesting results in parameter estimation. Whereas other techniques commonly search the parameter space, LA explores the probability space, providing appropriate convergence properties and robustness. The segmentation task is therefore considered as an optimization problem, and LA is used to generate the image multi-threshold separation. In this approach, the 1-D histogram of a given image is approximated through a Gaussian mixture model whose parameters are calculated using the LA algorithm. Each Gaussian function approximating the histogram represents a pixel class and therefore a threshold point. The method shows fast convergence, avoiding the typical sensitivity to initial conditions of the Expectation-Maximization (EM) algorithm and the complex, time-consuming computations commonly found in gradient methods. Experimental results demonstrate the algorithm's ability to perform automatic multi-threshold selection and show interesting advantages as it is compared to other algorithms solving the same task.

19.
陈聿  田博今  彭云竹  廖勇 《计算机应用》2005,40(11):3217-3223
To further improve the experience of power-system customers, and to address the poor search ability, insufficient compactness, and difficulty of determining the optimal number of clusters in existing clustering algorithms, a Gaussian mixture clustering algorithm combining the elbow method with expectation-maximization (EM) is proposed to mine the latent information in large volumes of customer data. The algorithm iterates to a good clustering via EM, while the elbow method is used to find a reasonable number of customer segments, remedying the traditional Gaussian mixture clustering algorithm's need to know the number of segments in advance. A case study shows that, compared with hierarchical clustering and K-Means, the proposed algorithm improves the FM and AR indices by more than 10% each, while compactness (CI) and separation (DS) decrease by less than 15% and 25%, respectively, indicating a substantial performance gain.
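The elbow idea can be sketched with any clustering backend: compute the within-cluster sum of squares (SSE) for increasing k and pick the k after which the improvement flattens. K-means stands in here for the paper's EM-fitted Gaussian mixture, and the elbow rule (largest drop-off in SSE decrease) is one common heuristic among several; names are hypothetical.

```python
# Sketch of elbow-method model-order selection: evaluate SSE for
# k = 1..kmax and return the k where the per-step SSE improvement
# shrinks the most.
import random

def kmeans_sse(data, k, iters=25):
    # deterministic init: k evenly spaced quantiles of the sorted data
    s = sorted(data)
    centers = [s[(2 * i + 1) * len(s) // (2 * k)] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in data:
            groups[min(range(k), key=lambda j: abs(x - centers[j]))].append(x)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return sum(min((x - c) ** 2 for c in centers) for x in data)

def elbow_k(data, kmax=5):
    sse = [kmeans_sse(data, k) for k in range(1, kmax + 1)]
    drops = [sse[i] - sse[i + 1] for i in range(kmax - 1)]
    # the elbow is the k after which the drop in SSE shrinks the most
    i = max(range(len(drops) - 1), key=lambda i: drops[i] - drops[i + 1])
    return i + 2

random.seed(4)
data = [random.gauss(0, 1) for _ in range(200)] + \
       [random.gauss(8, 1) for _ in range(200)]
print(elbow_k(data))  # -> 2 for this clearly two-cluster data
```

Plugging the EM log-likelihood (or BIC) in place of SSE gives the Gaussian-mixture variant the abstract describes.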

20.
This paper considers fitting a mixture of Gaussians model to high-dimensional data in scenarios where there are fewer data samples than feature dimensions. Issues that arise when using principal component analysis (PCA) to represent Gaussian distributions inside Expectation-Maximization (EM) are addressed, and a practical algorithm results. Unlike other algorithms that have been proposed, this algorithm does not try to compress the data to fit low-dimensional models. Instead, it models Gaussian distributions in the (N - 1)-dimensional space spanned by the N data samples. We are able to show that this algorithm converges on data sets where low-dimensional techniques do not.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号