Similar Documents
Found 17 similar documents (search time: 156 ms)
1.
A Manifold Learning Algorithm Based on Local Linear Approximation   Cited: 2 (self-citations: 1, others: 1)
Manifold learning is a nonlinear dimensionality reduction approach derived from the definition of a manifold; its central idea is to discover the low-dimensional smooth manifold embedded in a high-dimensional data space. Locally linear embedding (LLE) is one of the most widely used manifold learning methods, but the traditional LLE algorithm fails on sparsely sampled source data, a situation that arises often in practice. Building on an analysis of LLE, this paper proposes a manifold learning algorithm based on local linear approximation: it estimates gradient values directly to achieve local linear approximation, thereby reducing the dimensionality of high-dimensional nonlinear data. Tests on sparsely sampled points from the S-curve show good dimensionality reduction results.

2.
Dimensionality Reduction Algorithms Based on Manifold Learning   Cited: 1 (self-citations: 1, others: 0)
姜伟  杨炳儒 《计算机工程》2010,36(12):25-27
This paper introduces the linear dimensionality reduction methods principal component analysis (PCA) and multidimensional scaling (MDS), and describes several classical nonlinear algorithms that can discover the low-dimensional smooth manifold embedded in a high-dimensional data space, including isometric mapping (Isomap), locally linear embedding (LLE), Laplacian eigenmaps, local tangent space alignment (LTSA), and maximum variance unfolding (MVU). Compared with linear algorithms, nonlinear dimensionality reduction can uncover the essential features of many kinds of nonlinear high-dimensional data.

3.
A Comparative Study of Several Manifold Learning Algorithms   Cited: 1 (self-citations: 0, others: 1)
The main goal of manifold learning is to discover meaningful low-dimensional embeddings of manifolds in high-dimensional data spaces. Most current manifold learning algorithms are used for nonlinear dimensionality reduction or data visualization, such as isometric mapping (Isomap), locally linear embedding (LLE), and Laplacian eigenmaps. This paper experimentally analyzes and compares these three algorithms in order to understand their characteristics and thereby support better dimensionality reduction and data analysis.
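All three algorithms compared in this entry have scikit-learn implementations; a minimal sketch of such a comparison on the S-curve data set (the sample count and neighborhood size below are illustrative choices, not the paper's experimental setup — Laplacian eigenmaps is exposed in scikit-learn as `SpectralEmbedding`):

```python
import numpy as np
from sklearn.datasets import make_s_curve
from sklearn.manifold import Isomap, LocallyLinearEmbedding, SpectralEmbedding

# Sample 1000 points from the 3-D S-curve manifold
X, color = make_s_curve(n_samples=1000, random_state=0)

# Reduce to 2 dimensions with each of the three methods under comparison
methods = {
    "Isomap": Isomap(n_neighbors=10, n_components=2),
    "LLE": LocallyLinearEmbedding(n_neighbors=10, n_components=2),
    "Laplacian Eigenmaps": SpectralEmbedding(n_neighbors=10, n_components=2),
}
results = {name: est.fit_transform(X) for name, est in methods.items()}
for name, Y in results.items():
    print(name, Y.shape)  # each method maps 1000 points from R^3 to R^2
```

Plotting each `Y` colored by `color` is the usual way to judge whether the S-curve has been unrolled correctly.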

4.
Manifold learning is a nonlinear dimensionality reduction approach derived from the definition of a manifold; its central idea is to discover the low-dimensional smooth manifold embedded in a high-dimensional data space. Starting from an analysis of the locally linear embedding algorithm grounded in manifold learning theory, and targeting the traditional algorithm's failure on sparse source data, this paper proposes a manifold learning algorithm based on local linear approximation and obtains good dimensionality reduction results on samples drawn from the S-curve.

5.
Speech signals have high dimensionality once transformed to the frequency domain, and manifold learning can automatically discover the regularity of the low-dimensional structure latent in high-dimensional data, so this paper applies manifold learning to reduce the dimensionality of high-dimensional data for Mandarin digit speech recognition. The locally linear embedding algorithm is used to extract low-dimensional manifold structure features from the high-dimensional frequency-domain data, and the low-dimensional data are then fed into a dynamic time warping (DTW) recognizer. Simulation results show that, compared with the commonly used MFCC acoustic features, Mandarin digit recognition with LLE features needs fewer dimensions, improves the recognition rate by 1.2%, and effectively speeds up recognition.
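The dynamic time warping recognizer mentioned above aligns two feature sequences of unequal length by minimizing the cumulative frame-to-frame distance. A minimal numpy sketch of the DTW distance, with tiny synthetic 2-D feature sequences standing in for the LLE features:

```python
import numpy as np

def dtw_distance(a, b):
    """Cumulative DTW distance between two sequences of feature vectors."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # extend the cheapest of the three allowed warping steps
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two synthetic feature sequences of unequal length: s2 repeats s1's middle frame
s1 = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
s2 = np.array([[0.0, 0.0], [1.0, 1.0], [1.0, 1.0], [2.0, 2.0]])
print(dtw_distance(s1, s2))  # the repeated frame warps away, so the distance is 0.0
```

A nearest-neighbor recognizer then labels an utterance with the class of the template at minimum DTW distance.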

6.
Low-Dimensional Embedding of High-Dimensional Data Manifolds and the Embedding Dimension   Cited: 29 (self-citations: 0, others: 29)
Discovering meaningful low-dimensional embeddings of manifolds in high-dimensional data spaces is a classic hard problem. Isomap is an effective nonlinear dimensionality reduction method based on manifold theory: it not only reveals the intrinsic structure of high-dimensional observations but also discovers the underlying low-dimensional parameter space. Isomap rests on the assumption that an isometric mapping exists between the high-dimensional data space and the low-dimensional parameter space, but this assumption was never proved. This paper first proves the existence of an isometric mapping between the continuous manifold of the high-dimensional data and the low-dimensional parameter space; it then distinguishes the embedding-space dimension, the intrinsic dimension of the high-dimensional data, and the manifold dimension, and proves that for high-dimensional data spaces containing circular manifolds the parameter-space dimension is smaller than the embedding-space dimension. Finally, it proposes a circular-manifold detection algorithm that determines whether a high-dimensional data space contains a circular manifold and then estimates its intrinsic dimension and the dimension of the latent space. Experiments on multi-pose 3-D objects demonstrate the effectiveness of the algorithm and recover the correct low-dimensional parameter space.

7.
曹顺茂  叶世伟 《计算机仿真》2007,24(3):104-106,168
Traditional manifold learning algorithms can effectively learn the low-dimensional embedding coordinates of high-dimensional sample data, but they also have shortcomings, such as the inability to handle sparse sample data. To address these drawbacks, this paper proposes a directly solved linear embedding algorithm based on local mappings (Solving Directly Linear Embedding, SDLE). By assuming a global embedding function for the low-dimensional manifold, imposing a local smoothness constraint on the manifold mapping, and applying a kernel method to project the high-dimensional coordinates into a feature space, SDLE constructs global coordinates in the low-dimensional space. SDLE solves the problem of nonlinear dimensionality reduction for sparse source data, which traditional manifold learning algorithms do not. Experiments illustrate the effectiveness of SDLE.

8.
Studying Manifold Learning Algorithms via Amplification Factors and Extension Directions   Cited: 16 (self-citations: 0, others: 16)
何力  张军平  周志华 《计算机学报》2005,28(12):2000-2009
Manifold learning is a new unsupervised learning approach that can effectively discover the intrinsic dimension of high-dimensional nonlinear data sets and perform dimensionality reduction; in recent years it has attracted increasing attention from researchers in machine learning and cognitive science. Although many effective manifold learning algorithms now exist, such as isometric mapping (Isomap) and locally linear embedding (LLE), the quantitative relationship between the high-dimensional data in the observation space and the reduced low-dimensional data remains hard to analyze intuitively. This hinders both deeper investigation of the data's intrinsic regularities and intuitive comparison of the reduction quality of different algorithms. This paper proposes a method that exposes the connection between the observed high-dimensional data and the reduced low-dimensional data in terms of amplification factors and extension directions, compares the performance of two well-known manifold learning algorithms (Isomap and LLE) and draws some meaningful conclusions, and presents corresponding algorithms realizing the theory. Experiments on several data sets show the effectiveness and significance of the study.

9.
Locally linear embedding (LLE) is one of the important nonlinear dimensionality reduction methods in manifold learning. Since data points are usually distributed unevenly, LLE's way of selecting nearest neighbors can cause a great deal of information loss. To address this shortcoming, this paper proposes an improved LLE algorithm based on the tightness of data points, tLLE. For unevenly distributed data sets, tLLE performs dimensionality reduction effectively and achieves better reduction quality than LLE. Embedding and classification results on both synthetic and real-world data demonstrate the effectiveness of tLLE.

10.
《软件工程师》2017,(8):7-13
Machine learning has been a research hotspot in recent years, and dimensionality reduction is one of its essential tools. Starting from the definition of dimensionality reduction, this paper introduces several typical data reduction algorithms, covering both linear and nonlinear reduction, with manifold learning as the representative nonlinear approach. It describes the construction and characteristics of each algorithm, analyzes the execution efficiency (time and space complexity) of all the reduction algorithms, summarizes each algorithm's key ideas, and closes with a summary intended as a reference for future researchers.

11.
To effectively improve performance on spoken emotion recognition, nonlinear dimensionality reduction is needed for speech data lying on a nonlinear manifold embedded in a high-dimensional acoustic space. In this paper, a new supervised manifold learning algorithm for nonlinear dimensionality reduction, the modified supervised locally linear embedding (MSLLE) algorithm, is proposed for spoken emotion recognition. MSLLE aims at enlarging the interclass distance while shrinking the intraclass distance, in an effort to promote the discriminating power and generalization ability of the low-dimensional embedded data representations. To compare the performance of MSLLE, three unsupervised dimensionality reduction methods, i.e., principal component analysis (PCA), locally linear embedding (LLE), and isometric mapping (Isomap), as well as five supervised dimensionality reduction methods, i.e., linear discriminant analysis (LDA), supervised locally linear embedding (SLLE), local Fisher discriminant analysis (LFDA), neighborhood component analysis (NCA), and maximally collapsing metric learning (MCML), are applied to spoken emotion recognition tasks. Experimental results on two emotional speech databases, i.e., the spontaneous Chinese database and the acted Berlin database, confirm the validity and promising performance of the proposed method.
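A common way supervised LLE variants enlarge interclass and shrink intraclass distances is to rescale the pairwise distance matrix with the class labels before neighbor selection; the rule below is a generic illustration of that idea, not the exact MSLLE formulation (the `alpha` parameter and the scaling formula are our own choices):

```python
import numpy as np

def supervised_distances(X, labels, alpha=0.3):
    """Pairwise distances, shrunk within classes and enlarged between classes."""
    diff = X[:, None, :] - X[None, :, :]
    d = np.linalg.norm(diff, axis=-1)          # ordinary Euclidean distances
    same = labels[:, None] == labels[None, :]  # True where the labels agree
    d_max = d.max()
    # shrink intra-class distances, stretch inter-class distances
    return np.where(same, d * (1 - alpha), d + alpha * d_max)

# Two tight two-point classes far apart in the plane
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = np.array([0, 0, 1, 1])
D = supervised_distances(X, labels)
```

Running standard LLE on neighbors chosen from `D` instead of the raw distances biases the embedding toward class-compact neighborhoods.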

12.
Traditional dimensionality reduction algorithms fall into linear and manifold learning categories, but in practice it is hard to determine which kind a given task needs. This paper designs a combined dimensionality reduction algorithm whose linear reduction quality is lower-bounded by principal component analysis and which, on the manifold learning side, can reveal the manifold structure of the data. By constructing a Markov transition matrix over the high-dimensional data, in which more similar nodes receive larger transition probabilities, the mapping from the high-dimensional data down to the low-dimensional manifold is discovered. Experimental results show that on synthetic…

13.
To effectively handle speech data lying on a nonlinear manifold embedded in a high-dimensional acoustic space, this paper proposes an adaptive supervised manifold learning algorithm based on locally linear embedding (LLE) for nonlinear dimensionality reduction, extracting low-dimensional embedded data representations for phoneme recognition. The proposed method aims to maximize the interclass dissimilarity while minimizing the intraclass dissimilarity, in order to promote the discriminating power and generalization ability of the low-dimensional embedded data representations. The performance of the proposed method is compared with five well-known dimensionality reduction methods, i.e., principal component analysis, linear discriminant analysis, isometric mapping (Isomap), LLE, and the original supervised LLE. Experimental results on three benchmark speech databases, i.e., the Deterding database, the DARPA TIMIT database, and the ISOLET E-set database, demonstrate that the proposed method obtains promising performance on the phoneme recognition task, outperforming the other methods.

14.
Locally linear embedding (LLE) is an effective nonlinear dimensionality reduction method with low time complexity and strong power to represent manifolds. Compared with other reduction methods, a key advantage of LLE is that it has only a single parameter, the neighborhood size, so the algorithm's performance depends mainly on how that parameter is chosen, which raises the question of how to select its optimal value. By experimentally comparing and analyzing two methods for selecting the optimal value automatically, a simple method and a hierarchical method, this paper distills heuristic strategies for determining the neighborhood parameter in practice.
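One heuristic in this spirit scores every candidate neighborhood size k by the residual variance between pairwise distances in the input space and in the resulting embedding, and keeps the k with the smallest score. A self-contained numpy sketch under that assumption (the LLE implementation, regularization constant, candidate range, and test data are our own illustrative choices, not the paper's experimental setup):

```python
import numpy as np

def lle(X, k, d_out=2, reg=1e-3):
    """Plain numpy LLE: reconstruction weights, then bottom eigenvectors."""
    n = len(X)
    dist = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(dist[i])[1:k + 1]   # k nearest neighbors of x_i
        Z = X[nbrs] - X[i]                    # neighbors centered at x_i
        G = Z @ Z.T
        G += reg * np.trace(G) * np.eye(k)    # regularize for numerical stability
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs] = w / w.sum()              # reconstruction weights sum to one
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    _, vecs = np.linalg.eigh(M)               # eigenvalues in ascending order
    return vecs[:, 1:d_out + 1]               # skip the constant eigenvector

def residual_variance(X, Y):
    """1 - R^2 between pairwise distances before and after embedding."""
    dX = np.linalg.norm(X[:, None] - X[None, :], axis=-1).ravel()
    dY = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1).ravel()
    return 1.0 - np.corrcoef(dX, dY)[0, 1] ** 2

# Score each candidate neighborhood size and keep the best one
X = np.random.RandomState(0).rand(200, 3)
best_k = min(range(5, 16), key=lambda k: residual_variance(X, lle(X, k)))
```

In practice the residual-variance curve is inspected rather than blindly minimized, since very large k flattens the manifold back toward a linear projection.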

15.
A Survey of Nonlinear Dimensionality Reduction Methods in Manifold Learning   Cited: 4 (self-citations: 1, others: 3)
This paper reviews in some detail the nonlinear dimensionality reduction methods in manifold learning and analyzes their respective strengths and weaknesses. Compared with traditional linear reduction methods, they can discover the intrinsic dimension of nonlinear high-dimensional data, which benefits both dimensionality reduction and data analysis. The paper closes with an outlook on future research directions for nonlinear reduction methods in manifold learning, in the hope of further broadening the field's application areas.

16.
Owing to sparseness, directly clustering high-dimensional data remains a challenging problem, so obtaining a low-dimensional compact representation by dimensionality reduction is an effective route to clustering high-dimensional data. Most existing dimensionality reduction methods, however, were developed originally for classification (such as linear discriminant analysis) or for recovering the geometric structure (the manifold) of high-dimensional data (such as locally linear embedding), rather than for clustering. Hence, a novel nonlinear discriminant clustering by dimensionality reduction based on spectral regularization is proposed. The contributions of the proposed method are twofold: (1) it obtains a nonlinear low-dimensional representation that recovers the intrinsic manifold structure while enhancing the cluster structure of the original high-dimensional data; (2) the clustering results are obtained within the dimensionality reduction procedure itself. First, the desired low-dimensional coordinates are represented as linear combinations of predefined vectors that are smooth with respect to the data manifold, characterized by a weighted graph. Then, the optimal combination coefficients and the optimal cluster assignment matrix are computed by simultaneously maximizing the ratio of the between-cluster scatter to the total scatter and preserving the smoothness of the cluster assignment matrix with respect to the data manifold. Finally, the optimization problem is solved by an iterative procedure that is proved to be convergent. Experiments on UCI data sets and real-world data sets demonstrate the effectiveness of the proposed method for both clustering and visualizing high-dimensional data.
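A much simpler pipeline in the same spirit — not the iterative spectral-regularization method proposed in this entry — first embeds the data with a graph-based spectral embedding and then clusters in the embedded space. A hedged scikit-learn sketch on the two-moons data set (all parameter values are our own choices):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_moons
from sklearn.manifold import SpectralEmbedding

# Two entangled half-moon clusters that k-means fails on in the input space
X, y = make_moons(n_samples=300, noise=0.05, random_state=0)

# Step 1: nonlinear embedding that respects the data-manifold graph
Y = SpectralEmbedding(n_components=2, n_neighbors=10,
                      random_state=0).fit_transform(X)

# Step 2: cluster in the embedded space
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Y)

# Measure agreement with the true moons, up to label permutation
agreement = max(np.mean(labels == y), np.mean(labels != y))
```

The point of the entry's method, by contrast, is to couple the two steps so the embedding itself is optimized for cluster separation rather than computed independently of it.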

17.
Accurate recognition of cancers from microarray gene expressions is very important for doctors choosing a proper treatment. Genomic microarrays are powerful research tools in bioinformatics and modern medicinal research. However, even a simple microarray experiment produces very high-dimensional data and a huge amount of information, and this volume challenges researchers to extract the important features and reduce the high dimensionality. This paper proposes a nonlinear dimensionality reduction method, a kernel-based locally linear embedding, that selects the optimal number of nearest neighbors and constructs a uniformly distributed manifold. In addition, the support vector machine, which has given rise to a new class of theoretically elegant learning machines, is used to classify and recognize genomic microarrays. We demonstrate the application of these techniques on two published DNA microarray data sets. The experimental results and comparisons demonstrate that the proposed method is an effective approach.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号