19 similar documents found (search time: 265 ms)
2.
Existing subspace clustering algorithms based on low-rank representation replace the rank function with the nuclear norm, which neither estimates the matrix rank accurately nor copes well with Gaussian noise. To address these defects, this paper proposes an improved algorithm that raises clustering accuracy while remaining stable under Gaussian noise. The objective function uses both the nuclear norm and the Frobenius norm of the coefficient matrix as regularizers, applying a strongly convex regularization to the singular values of the coefficient matrix; the model is solved with the inexact augmented Lagrange multiplier (ALM) method. The resulting coefficient matrix is post-processed into an affinity matrix, which is clustered with classical spectral clustering. Comparative experiments against popular subspace clustering algorithms on synthetic data, the Extended Yale B database, and the PIE database demonstrate the effectiveness of the improved algorithm and its robustness to Gaussian noise.
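The numerical core described above, strongly convex regularization of the coefficient matrix's singular values via a combined nuclear/Frobenius penalty, has a closed-form proximal operator that an inexact-ALM solver would invoke each iteration. A minimal sketch (function name and parameters are mine, not the paper's):

```python
import numpy as np

def elastic_svt(M, tau, gamma):
    """Proximal operator of tau*||Z||_* + (gamma/2)*||Z||_F^2 at M:
    soft-threshold the singular values by tau, then shrink the result
    by 1/(1 + gamma) - the strongly convex variant of singular value
    thresholding used inside an inexact-ALM loop."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0) / (1.0 + gamma)
    return U @ (s_shrunk[:, None] * Vt)
```

With `gamma = 0` this reduces to ordinary singular value thresholding; `gamma > 0` is what makes the subproblem strongly convex.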
4.
Speaker recognition systems degrade sharply in practice because of environmental noise. This paper proposes a robust speaker identification method based on a Gaussian mixture model-universal background model (GMM-UBM) with adaptive parallel model combination (PMC). Adaptive PMC is a noise-robust feature compensation algorithm that effectively reduces the mismatch between training and test environments, improving recognition accuracy and noise immunity. The algorithm first estimates noise features from the test speech, then fits a single Gaussian to them to obtain the noise mean and covariance. Finally, it adjusts the mean vectors and covariance matrices of the trained GMM according to the estimated noise statistics so that the model matches the test environment as closely as possible. Experimental results show that the method accurately reconstructs the GMM parameters of clean speech and significantly improves speaker recognition accuracy, especially at low signal-to-noise ratios.
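The compensation step can be caricatured in a few lines. This is a deliberately simplified sketch assuming additive noise and diagonal covariances in a single feature domain (the actual PMC algorithm works through log-spectral domain conversions); all names are mine:

```python
import numpy as np

def adapt_gmm_to_noise(means, variances, noise_frames):
    """Fit a single Gaussian to noise frames estimated from the test
    utterance, then shift every GMM mean and inflate every (diagonal)
    variance so the clean-speech model better matches the noisy input."""
    noise_mean = noise_frames.mean(axis=0)
    noise_var = noise_frames.var(axis=0)
    return means + noise_mean, variances + noise_var
```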
5.
For the single-input multiple-output (SIMO) model, this paper proposes a direct blind equalization algorithm that requires no channel-order estimation at all. A zero-delay equalizer is designed from the relationship between the truncated covariance matrix of the received data and the signal subspace, and the phase-rotation problem is overcome through the combined response of the channel matrix and the equalizer coefficients, yielding a blind equalization algorithm that is robust to channel-order estimation and free of phase rotation. Unlike typical subspace methods, the algorithm does not need to decompose the covariance matrix of the received signal into signal and noise subspaces, which is why it is highly robust to channel-order estimation errors. A batch implementation is given, and, to better suit time-varying channels and real-time processing, an adaptive implementation is derived by recursive iteration. Simulations show that the algorithm is almost unaffected by channel-order over- or under-estimation, achieves good mean square error (MSE) and symbol error rate (SER) performance, and converges quickly.
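The paper's construction is blind; as a non-blind reference point, the zero-delay target it pursues can be illustrated with a toy two-subchannel SIMO system, where a short FIR equalizer recovers s[n] with no delay (channel taps and lengths here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
h = [np.array([1.0, 0.4]), np.array([0.7, -0.3])]   # toy SIMO subchannels
s = rng.choice([-1.0, 1.0], size=500)               # BPSK symbols
x = [np.convolve(s, hi)[: len(s)] for hi in h]      # subchannel outputs

L = 2                                               # equalizer taps per subchannel
X = np.array([np.concatenate([xi[n - L + 1: n + 1][::-1] for xi in x])
              for n in range(L - 1, len(s))])
d = s[L - 1:]                                       # zero-delay target s[n]

w = np.linalg.lstsq(X, d, rcond=None)[0]            # combined equalizer taps
mse = np.mean((X @ w - d) ** 2)                     # ~0: exact zero-forcing exists
```

Because the two subchannels share no common zeros, a length-2 zero-delay equalizer exists exactly and the residual MSE is at machine precision; the paper reaches the same target blindly, from the truncated covariance matrix alone.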
6.
Estimating the mixing matrix is a key step in sparse-source blind separation, and its accuracy directly determines the accuracy of the recovered sources. To address the dependence of K-means clustering on its initial values, this paper incorporates the ideas of differential evolution into K-means and proposes an improved clustering algorithm. Clustering the sparse mixture data with this algorithm makes the clustering result robust. A Hough transform is then used to correct the cluster centers of each class, from which the mixing matrix is estimated with improved precision. Simulations show that, compared with classical blind mixing-matrix estimation algorithms for sparse sources, the proposed algorithm is more robust and more accurate.
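The clustering step operates on the directions of the mixture samples, since sparse sources make samples align with mixing-matrix columns. A plain K-means-on-directions sketch (the paper improves the initialization with differential evolution and refines centers with a Hough transform; the farthest-point initialization below is my stand-in):

```python
import numpy as np

def estimate_mixing(X, k, iters=50, seed=0):
    """Cluster the directions of mixture samples X (m x N); the cluster
    centers approximate the columns of the mixing matrix."""
    rng = np.random.default_rng(seed)
    V = X / np.linalg.norm(X, axis=0, keepdims=True)
    V *= np.where(V[0] < 0, -1.0, 1.0)          # fold antipodal directions
    idx = [int(rng.integers(V.shape[1]))]       # farthest-point initialization
    for _ in range(k - 1):
        sim = np.max(np.abs(V[:, idx].T @ V), axis=0)
        idx.append(int(np.argmin(sim)))
    C = V[:, idx].copy()
    for _ in range(iters):
        labels = np.argmax(np.abs(C.T @ V), axis=0)
        for j in range(k):
            if np.any(labels == j):             # guard against empty clusters
                c = V[:, labels == j].mean(axis=1)
                C[:, j] = c / np.linalg.norm(c)
    return C
```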
7.
In radar target detection, the clutter covariance matrix is estimated from clutter data near the cell under test. This paper considers covariance estimation for non-Gaussian clutter in a heterogeneous environment: the covariance matrices of the cell under test and the reference cells are assumed to satisfy a statistical relationship, and the clutter data are assumed to follow a compound-Gaussian distribution. In this scenario, conventional clutter covariance estimators degrade detection performance. Taking a conjugate prior as the statistical model for the heterogeneous, non-Gaussian scene, this paper uses a Bayesian approach to derive a minimum mean-square-error estimator of the clutter covariance based on Gibbs sampling. Simulations show that, compared with conventional estimators, the proposed algorithm achieves better detection performance in heterogeneous scenes when reference data and accumulated pulses are scarce. The effect of errors in the prior-distribution parameters on detection performance is also analyzed.
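The conjugate structure the paper exploits is easiest to see with the texture variables stripped away: for zero-mean Gaussian snapshots, an inverse-Wishart prior is conjugate, and the posterior-mean (MMSE) covariance has a closed form. A sketch under that simplification (the full method Gibbs-samples over the compound-Gaussian texture as well; names are mine):

```python
import numpy as np

def cov_mmse_inverse_wishart(X, Psi, nu):
    """MMSE covariance estimate for zero-mean Gaussian snapshots X (p x N)
    under an inverse-Wishart prior IW(nu, Psi): the posterior is
    IW(nu + N, Psi + X X^T), whose mean is returned below."""
    p, N = X.shape
    return (Psi + X @ X.T) / (nu + N - p - 1)
```

The prior acts like `nu` pseudo-snapshots with scatter `Psi`, which is exactly why a Bayesian estimate stays usable when reference data are scarce.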
8.
To improve the performance of far-field/near-field mixed-source localization based on the second-order covariance matrix difference (COV-MD) in impulsive noise, this paper proposes mixed-source localization algorithms based on the fractional lower-order covariance matrix difference (FLOC-MD) and the compressed-transform covariance matrix difference (CTC-MD). The proposed algorithms first obtain azimuth estimates of the far-field sources through a one-dimensional MUSIC spectral peak search; the matrix-difference step then separates far-field from near-field sources, yielding an extended fractional lower-order (or compressed-transform) covariance matrix of the near-field sources; finally, building on near-field azimuths estimated by an ESPRIT-like method, a second one-dimensional MUSIC spectral peak search estimates the near-field source ranges. Simulations show that under strong impulsive noise and low SNR, both CTC-MD and FLOC-MD clearly outperform COV-MD and other mixed-source localization algorithms based on second-order statistics, and that CTC-MD outperforms FLOC-MD while requiring no prior knowledge of the impulsive noise.
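The fractional lower-order covariance that gives FLOC-MD its robustness replaces raw products with signed fractional powers, which keeps the statistic finite under alpha-stable (impulsive) noise. A minimal real-valued sketch (conventions for this statistic vary; the exponent name `a` is mine):

```python
import numpy as np

def floc(X, a=0.5):
    """Fractional lower-order covariance of the rows of X (sensors x
    snapshots): each sample passes through the signed power
    x -> sign(x) * |x|^a before correlation, taming impulsive outliers."""
    Y = np.sign(X) * np.abs(X) ** a
    return (Y @ Y.T) / X.shape[1]
```

With `a = 1` this reduces to the ordinary sample covariance.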
9.
《现代电子技术》2017,(19):177-181
Traditional clustering algorithms consume large amounts of time and memory on big data sets, cannot adapt to the dynamics of big data streams, and cluster unstably. This paper therefore proposes an incremental clustering method based on priority clustering and a Gaussian-mixture-model tree. Priority clustering first extracts a representative data set from the big data set, reducing its complexity. The incremental algorithm then inserts the representative data into a Gaussian-mixture-model tree, whose leaf nodes correspond to single Gaussian distributions and whose internal nodes correspond to Gaussian mixture distributions. The tree is adjusted after each insertion, inserted data are checked for deletion and removed when necessary, and a breadth-first search selects the best tree nodes as the final clustering result. Experiments show the algorithm works well and offers good scalability and stability.
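The basic operation such a tree needs when absorbing data is merging Gaussian components by moment matching, so an internal node can summarize its children as a mixture. A diagonal-covariance sketch (names and the diagonal assumption are mine):

```python
import numpy as np

def merge_gaussians(w1, m1, v1, w2, m2, v2):
    """Moment-matched merge of two weighted (diagonal) Gaussians: the
    result preserves total weight, overall mean, and second moment."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    v = (w1 * (v1 + (m1 - m) ** 2) + w2 * (v2 + (m2 - m) ** 2)) / w
    return w, m, v
```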
12.
Wavelet-Based SAR Image Despeckling and Information Extraction, Using Particle Filter
This paper proposes a new wavelet-based synthetic aperture radar (SAR) image despeckling algorithm using the sequential Monte Carlo method. A model-based Bayesian approach is proposed. This paper presents two methods for SAR image despeckling. The first method, called WGGPF, models the prior with a generalized Gaussian (GG) probability density function (pdf), and the second method, called WGMPF, models the prior with a generalized Gaussian Markov random field (GGMRF). The likelihood pdf is modeled as Gaussian. The GGMRF model is used because it enables texture parameter estimation; the GG prior is used when texture parameters are not needed. A particle filter draws particles from the prior for different shape parameters of the GG pdf. When the GGMRF prior is used, particles are drawn from the prior to estimate noise-free wavelet coefficients, and for those coefficients the texture parameter is varied over a predefined set of GGMRF shape parameters to find the best textural fit. The particles with the highest weights represent the final noise-free estimate with the corresponding textural parameters. The despeckling algorithms are compared with state-of-the-art methods on synthetic and real SAR data. The experimental results show that the proposed algorithms remove noise efficiently and are comparable with the state of the art in objective measurements, and that WGMPF preserves the textures of real, high-resolution SAR images well.
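The prior-sampling idea behind the WGGPF variant can be sketched for a single wavelet coefficient: draw particles from a GG prior, weight them by the Gaussian likelihood, and take the weighted mean as the noise-free estimate (shape/scale values are illustrative, and the paper's sequential machinery and GGMRF texture handling are omitted; names are mine):

```python
import numpy as np

def gg_particle_estimate(y, sigma_n, shape=1.0, scale=1.0, n=2000, seed=0):
    """Importance-sampling estimate of a noise-free wavelet coefficient:
    particles x ~ GG(shape, scale), drawn via |x| = scale * G^(1/shape)
    with G ~ Gamma(1/shape), weighted by the likelihood N(y; x, sigma_n^2)."""
    rng = np.random.default_rng(seed)
    g = rng.gamma(1.0 / shape, 1.0, size=n)
    x = rng.choice([-1.0, 1.0], size=n) * scale * g ** (1.0 / shape)
    logw = -0.5 * ((y - x) / sigma_n) ** 2
    w = np.exp(logw - logw.max())               # normalized up to a constant
    return float(np.sum(w * x) / np.sum(w))
```

Because the prior peaks at zero, the estimate shrinks the noisy coefficient toward zero, which is the denoising effect.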
13.
Component analysis approach to estimation of tissue intensity distributions of 3D images
Many segmentation algorithms in medical imaging rely on accurate modeling and estimation of tissue intensity probability density functions. Gaussian mixture modeling, currently the most common approach, has several drawbacks, such as reliance on a Gaussian model and the iterative local optimization used to estimate the model parameters. It also does not take advantage of the substantially larger amount of data provided by 3D acquisitions, which are becoming standard in the clinical environment. We propose a novel and completely non-parametric algorithm to estimate the tissue intensity probabilities in 3D images. Instead of relying on the traditional framework of iterating between classification and estimation, we pose the problem as an instance of blind source separation, where the unknown distributions are treated as sources and histograms of image subvolumes as mixtures. The new approach performed well on synthetic data and real magnetic resonance imaging (MRI) scans of the brain, robustly capturing intensity distributions of even small image structures and partial volume voxels.
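Posing histogram unmixing as blind source separation can be sketched with nonnegative matrix factorization: each subvolume histogram is a nonnegative mixture of unknown tissue distributions (the paper's actual BSS machinery differs; multiplicative-update NMF is my stand-in):

```python
import numpy as np

def unmix_histograms(H, k, iters=300, seed=0):
    """Factor subvolume histograms H (subvolumes x bins) as H ~ A @ S,
    with A >= 0 mixing weights and rows of S >= 0 acting as the unknown
    tissue intensity distributions, via Lee-Seung multiplicative updates
    (monotone non-increasing in Frobenius reconstruction error)."""
    rng = np.random.default_rng(seed)
    A = rng.random((H.shape[0], k)) + 0.1
    S = rng.random((k, H.shape[1])) + 0.1
    for _ in range(iters):
        S *= (A.T @ H) / (A.T @ A @ S + 1e-12)
        A *= (H @ S.T) / (A @ S @ S.T + 1e-12)
    return A, S
```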
17.
Signal Detection in Multimodal Noise Based on Higher-Order Statistics
Based on the shape of the probability density function, a fairly general non-Gaussian noise model, multimodal noise, is presented. Multimodal noise is non-Gaussian overall but includes Gaussian noise as a special case. The higher-order-statistics bispectrum algorithm is improved, yielding a bispectrum-based method for detecting signals in multimodal noise; on this basis, combining a memoryless nonlinear transformer with bispectral techniques, the traditional adaptive amplitude-frequency interference suppressor is improved so that signals can be accurately estimated or detected. Simulations show that the method suppresses Gaussian noise and, in strong noise and complex backgrounds, detects signals with high probability, outperforming traditional likelihood-ratio detection.
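The property the detector leans on is that the bispectrum of a Gaussian process is identically zero, so substantial bispectral energy signals non-Gaussian structure. A direct FFT-averaging estimator sketch (the segmenting scheme and names are mine, not the paper's improved algorithm):

```python
import numpy as np

def bispectrum(x, nfft=16):
    """Direct bispectrum estimate B(f1, f2) = E[X(f1) X(f2) X*(f1+f2)],
    averaged over non-overlapping segments. It vanishes for Gaussian
    processes, which is the basis of bispectral signal detection."""
    segs = x[: len(x) // nfft * nfft].reshape(-1, nfft)
    X = np.fft.fft(segs, axis=1)
    B = np.zeros((nfft, nfft), dtype=complex)
    for f1 in range(nfft):
        for f2 in range(nfft):
            B[f1, f2] = np.mean(X[:, f1] * X[:, f2]
                                * np.conj(X[:, (f1 + f2) % nfft]))
    return B
```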
19.
S. M. Mahbubur Rahman, M. Omair Ahmad, M. N. S. Swamy, IEEE Transactions on Image Processing, 2008, 17(10): 1755-1771
The probability density functions (PDFs) of the wavelet coefficients play a key role in many wavelet-based image processing algorithms, such as denoising. Conventional PDFs usually have a limited number of parameters calculated from the first few moments only. Consequently, such PDFs cannot be made to fit the empirical PDF of an image's wavelet coefficients very well, and a shrinkage function built on any of them yields substandard denoising performance. So that the probabilistic model of the image wavelet coefficients can incorporate an appropriate number of parameters dependent on the higher-order moments, a PDF using a series expansion in terms of the Hermite polynomials, orthogonal with respect to the standard Gaussian weight function, is introduced. A modification of the series is introduced so that only a finite number of terms is used to model the coefficients while ensuring that the resulting PDF is non-negative. It is shown that the proposed PDF matches the empirical one better than standard choices such as the generalized Gaussian or Bessel K-form PDF. A Bayesian image denoising technique is then proposed, wherein the new PDF statistically models the subband as well as the local neighboring image wavelet coefficients. Experimental results on several test images demonstrate that the proposed denoising method, in both the subband-adaptive and locally adaptive conditions, outperforms most methods that use PDFs with a limited number of parameters.
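The series construction can be sketched as a Gram-Charlier-style estimator: a standard Gaussian weight multiplied by a finite Hermite series whose coefficients come from higher-order sample moments. In this sketch the paper's modification guaranteeing non-negativity is replaced by simple clipping, and all names are mine:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def hermite_series_pdf(x, data, order=6):
    """Evaluate a Hermite-series density fitted to `data` at point x:
    phi(z) * sum_k c_k He_k(z), with c_k = E[He_k(z)] / k! estimated
    from the standardized samples; clipped at zero for non-negativity."""
    mu, sd = data.mean(), data.std()
    d = (data - mu) / sd
    c = [np.mean(hermeval(d, [0.0] * k + [1.0])) / math.factorial(k)
         for k in range(order + 1)]
    z = (x - mu) / sd
    phi = np.exp(-z ** 2 / 2.0) / np.sqrt(2.0 * np.pi)
    return max(float(phi * hermeval(z, c) / sd), 0.0)
```

For exactly Gaussian data the higher coefficients vanish in expectation and the fit collapses to the Gaussian density itself; heavier-tailed wavelet coefficients excite the fourth- and sixth-order terms.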