Similar Literature (20 results)
1.
For the face recognition problem, a center-neighbor embedding learning algorithm is proposed. Unlike the classical locally linear embedding and locality preserving projections, it is a supervised linear dimensionality reduction method. The method first computes the center of each class of samples and introduces the center-neighbor distance, in place of the direct distance between two sample points, as the input to the weight function; it then embeds the high-dimensional data into a low-dimensional coordinate system while keeping the geometric structure of the center neighbors unchanged. Comparative experiments on the ORL, Yale, and UMIST face databases against three other face recognition methods (principal component analysis, linear discriminant analysis, and locality preserving projections) show that the proposed algorithm achieves better results than the other three methods in both low-dimensional visualization of high-dimensional data and face recognition performance.
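To make the center-neighbor idea concrete, here is a minimal NumPy sketch of one plausible reading of the distance step: each pairwise distance is routed through the class centers of the two samples instead of being measured directly between the points. The function name and the averaging rule are illustrative assumptions; the paper's actual weight function and embedding step are not reproduced.

```python
import numpy as np

def center_neighbor_distances(X, y):
    """One plausible center-based distance: measure each sample against the
    class centers rather than directly against the other sample (illustrative
    only; the original algorithm's exact weight function is not reproduced).
    X: (n_samples, n_features) data; y: (n_samples,) integer class labels."""
    classes = np.unique(y)
    centers = {c: X[y == c].mean(axis=0) for c in classes}
    # distance from every sample to every class center
    d_to_center = np.stack(
        [np.linalg.norm(X - centers[c], axis=1) for c in classes], axis=1
    )
    class_index = {c: i for i, c in enumerate(classes)}
    n = X.shape[0]
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # average of: distance of i to the center of j's class, and
            # distance of j to the center of i's class
            D[i, j] = (d_to_center[i, class_index[y[j]]]
                       + d_to_center[j, class_index[y[i]]]) / 2.0
    return D

# toy usage: two well-separated classes in 3-D
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (5, 3)), rng.normal(3, 1, (5, 3))])
y = np.array([0] * 5 + [1] * 5)
print(center_neighbor_distances(X, y).shape)  # (10, 10)
```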

2.
To effectively improve the performance of spoken emotion recognition, nonlinear dimensionality reduction is needed for speech data lying on a nonlinear manifold embedded in a high-dimensional acoustic space. In this paper, a new supervised manifold learning algorithm for nonlinear dimensionality reduction, called the modified supervised locally linear embedding (MSLLE) algorithm, is proposed for spoken emotion recognition. MSLLE aims at enlarging the interclass distance while shrinking the intraclass distance in an effort to promote the discriminating power and generalization ability of low-dimensional embedded data representations. To compare the performance of MSLLE, not only three unsupervised dimensionality reduction methods, i.e., principal component analysis (PCA), locally linear embedding (LLE) and isometric mapping (Isomap), but also five supervised dimensionality reduction methods, i.e., linear discriminant analysis (LDA), supervised locally linear embedding (SLLE), local Fisher discriminant analysis (LFDA), neighborhood component analysis (NCA) and maximally collapsing metric learning (MCML), are used to perform dimensionality reduction on spoken emotion recognition tasks. Experimental results on two emotional speech databases, i.e., the spontaneous Chinese database and the acted Berlin database, confirm the validity and promising performance of the proposed method.
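As a reference point for the supervised-LLE family discussed above, the following sketch implements the common distance-penalty variant of SLLE: cross-class pairwise distances are inflated before neighbor selection, then the usual LLE steps (local reconstruction weights, bottom eigenvectors of (I - W)^T (I - W)) follow. It illustrates the general idea only and is not the paper's MSLLE; the penalty parameter `alpha` and regularizer `reg` are illustrative choices.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def supervised_lle(X, y, n_neighbors=8, n_components=2, alpha=0.5, reg=1e-3):
    """Distance-penalty supervised LLE sketch (not the paper's MSLLE)."""
    n = X.shape[0]
    D = squareform(pdist(X))                      # Euclidean distances
    different = (y[:, None] != y[None, :])        # class-mismatch mask
    D_sup = D + alpha * D.max() * different       # penalize cross-class pairs

    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(D_sup[i])
        idx = idx[idx != i][:n_neighbors]         # supervised neighbors of i
        Z = X[idx] - X[i]                         # local differences
        G = Z @ Z.T                               # local Gram matrix
        G = G + reg * np.trace(G) * np.eye(n_neighbors)  # regularize
        w = np.linalg.solve(G, np.ones(n_neighbors))
        W[i, idx] = w / w.sum()                   # reconstruction weights

    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    eigvals, eigvecs = np.linalg.eigh(M)          # ascending eigenvalues
    # discard the constant eigenvector with eigenvalue ~0
    return eigvecs[:, 1:n_components + 1]

# toy usage: two noisy classes in 3-D
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (40, 3)), rng.normal(3, 1, (40, 3))])
y = np.array([0] * 40 + [1] * 40)
print(supervised_lle(X, y).shape)   # (80, 2)
```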

3.
Though principal component analysis (PCA) and locality preserving projections (LPP) are two of the most popular linear methods for face recognition, PCA can only see the Euclidean structure of the training set, while LPP preserves only the nonlinear submanifold structure hidden in the training set. In this paper, we propose elastic preserving projections (EPP), which incorporates the merits of both the local geometry and the global information of the training set. EPP outputs a sample subspace that simultaneously preserves the local geometrical structure and exploits the global information of the training set. Different from some other linear dimensionality reduction methods, EPP can be viewed as learning both the coordinates and the affinities between sample points. Furthermore, the effectiveness of the proposed algorithm is analyzed theoretically and confirmed by experiments on several well-known face databases. The obtained results indicate that EPP significantly outperforms its rival algorithms.

4.
In this paper, a novel data projection method, local and global principal component analysis (LGPCA), is proposed for process monitoring. LGPCA is a linear dimensionality reduction technique that preserves both local and global information in the observation data. Besides preserving the global variance information of the Euclidean space, as principal component analysis (PCA) does, LGPCA captures a good linear embedding that preserves local structure, so as to find meaningful low-dimensional information hidden in the high-dimensional process data. LGPCA-based T2 (D) and squared prediction error (Q) statistic control charts are developed for on-line process monitoring. The validity and effectiveness of the LGPCA-based monitoring method are illustrated through simulation processes and the Tennessee Eastman process (TEP). The experimental results demonstrate that the proposed method effectively captures meaningful information hidden in the observations and shows superior process monitoring performance compared to regular monitoring methods.
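The control charts mentioned above follow the standard projection-model template. The sketch below shows how the Hotelling T^2 and Q (squared prediction error) statistics are computed for ordinary PCA, which is the step an LGPCA-based chart would repeat with its own projection in place of the PCA loadings. This is not the LGPCA algorithm itself, and the synthetic data are placeholders.

```python
import numpy as np

def pca_t2_q_statistics(X_train, X_test, n_components=3):
    """Standard PCA-based T^2 and Q (SPE) monitoring statistics, shown as the
    template an LGPCA-based chart would follow with its own projection."""
    mu = X_train.mean(axis=0)
    sigma = X_train.std(axis=0, ddof=1)
    Z_train = (X_train - mu) / sigma
    Z_test = (X_test - mu) / sigma

    # PCA via SVD of the scaled training data
    U, S, Vt = np.linalg.svd(Z_train, full_matrices=False)
    P = Vt[:n_components].T                              # loadings (n_features, k)
    lam = (S[:n_components] ** 2) / (len(X_train) - 1)   # component variances

    T = Z_test @ P                                       # scores of new samples
    t2 = np.sum((T ** 2) / lam, axis=1)                  # Hotelling T^2
    residual = Z_test - T @ P.T                          # part not explained by the model
    q = np.sum(residual ** 2, axis=1)                    # Q / squared prediction error
    return t2, q

# toy usage: normal operating data vs. a shifted test batch
rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 6))
X_test = rng.normal(loc=0.8, size=(20, 6))
t2, q = pca_t2_q_statistics(X_train, X_test)
print(t2[:3], q[:3])
```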

5.
A new face recognition algorithm is proposed. The algorithm uses Gabor wavelets in a novel way to extract facial features, applies the locally linear embedding (LLE) algorithm to perform nonlinear dimensionality reduction on the data, and finally trains a nearest-neighbor classifier based on Euclidean distance for classification. Experimental comparisons on the ORL face database with PCA, Gabor wavelets + PCA, and direct LLE show that the proposed Gabor wavelets + LLE method achieves better performance.
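A sketch of this kind of pipeline is shown below using scikit-learn and a hand-rolled Gabor filter bank: Gabor responses as features, LLE for nonlinear dimensionality reduction, and a Euclidean nearest-neighbor classifier. The bundled 8x8 digits stand in for the ORL face images, and the filter-bank and LLE parameters are illustrative, so this only demonstrates the shape of the method, not the paper's exact feature extraction.

```python
import numpy as np
from scipy.signal import convolve2d
from sklearn.datasets import load_digits
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def gabor_kernel(size=7, sigma=2.0, theta=0.0, lam=4.0):
    # Real part of a Gabor filter: cosine carrier under a Gaussian envelope.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(images, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    # Concatenate the responses of a small Gabor filter bank as the feature vector.
    kernels = [gabor_kernel(theta=t) for t in thetas]
    feats = []
    for img in images:
        responses = [np.abs(convolve2d(img, k, mode="same")).ravel() for k in kernels]
        feats.append(np.concatenate(responses))
    return np.array(feats)

# stand-in data: the bundled 8x8 digits, used here only to exercise the pipeline
digits = load_digits()
X, y = gabor_features(digits.images), digits.target
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

lle = LocallyLinearEmbedding(n_neighbors=15, n_components=10, random_state=0)
X_tr_low = lle.fit_transform(X_tr)
X_te_low = lle.transform(X_te)             # out-of-sample embedding of test points

clf = KNeighborsClassifier(n_neighbors=1)  # Euclidean nearest-neighbor decision
clf.fit(X_tr_low, y_tr)
print("accuracy:", clf.score(X_te_low, y_te))
```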

6.
Face recognition using laplacianfaces
We propose an appearance-based face recognition method called the Laplacianface approach. By using locality preserving projections (LPP), the face images are mapped into a face subspace for analysis. Different from principal component analysis (PCA) and linear discriminant analysis (LDA) which effectively see only the Euclidean structure of face space, LPP finds an embedding that preserves local information, and obtains a face subspace that best detects the essential face manifold structure. The Laplacianfaces are the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the face manifold. In this way, the unwanted variations resulting from changes in lighting, facial expression, and pose may be eliminated or reduced. Theoretical analysis shows that PCA, LDA, and LPP can be obtained from different graph models. We compare the proposed Laplacianface approach with Eigenface and Fisherface methods on three different face data sets. Experimental results suggest that the proposed Laplacianface approach provides a better representation and achieves lower error rates in face recognition.
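The LPP step at the heart of the Laplacianface approach reduces to a generalized eigenvalue problem, X L X^T a = lambda X D X^T a, built from a locality-preserving affinity graph. The following is a compact sketch of that construction (kNN graph, heat-kernel weights, smallest generalized eigenvectors); the neighborhood size, kernel width, and regularization constant are illustrative choices rather than values from the paper.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def lpp(X, n_components=2, n_neighbors=5, t=1.0):
    """Locality preserving projections: heat-kernel affinity on a kNN graph,
    then solve X L X^T a = lam X D X^T a and keep the smallest eigenvectors."""
    # symmetric kNN adjacency with heat-kernel weights
    knn = kneighbors_graph(X, n_neighbors, mode="distance", include_self=False)
    knn = knn.maximum(knn.T)                    # symmetrize the graph
    W = knn.toarray()
    mask = W > 0
    W[mask] = np.exp(-W[mask] ** 2 / t)

    D = np.diag(W.sum(axis=1))                  # degree matrix
    L = D - W                                   # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X
    B = B + 1e-6 * np.eye(B.shape[0])           # regularize for stability
    eigvals, eigvecs = eigh(A, B)               # ascending generalized eigenvalues
    return eigvecs[:, :n_components]            # projection matrix (d, k)

# toy usage
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
P = lpp(X)
X_low = X @ P                                   # 2-D locality-preserving embedding
print(X_low.shape)
```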

7.
Locality preserving embedding for face and handwriting digital recognition
Most supervised manifold learning-based methods preserve the original neighbor relationships to pursue discriminating power. Thus, structure information of the data distributions might be neglected and destroyed in the low-dimensional space in a certain sense. In this paper, a novel supervised method, called locality preserving embedding (LPE), is proposed for feature extraction and dimensionality reduction. LPE can give a low-dimensional embedding for discriminative multi-class sub-manifolds and preserves the principal structure information of the local sub-manifolds. In the LPE framework, supervised and unsupervised ideas are combined to learn the optimal discriminant projections. On the one hand, the class information is taken into account to characterize the compactness of local sub-manifolds and the separability of different sub-manifolds. On the other hand, all the samples in the local neighborhood are used to characterize the original data distributions and preserve the structure in the low-dimensional subspace. The most significant difference from existing methods is that LPE takes the distribution directions of local neighbor data into account and preserves them in the low-dimensional subspace, instead of only preserving each local sub-manifold's original neighbor relationships. Therefore, LPE optimally preserves both the local sub-manifolds' original neighborhood relationships and the distribution direction of local neighbor data to separate different sub-manifolds as far as possible. The criterion, similar to the classical Fisher criterion, is a Rayleigh quotient in form, and the optimal linear projections are obtained by solving a generalized eigenvalue equation. Furthermore, the framework can be directly used in semi-supervised learning, and semi-supervised LPE and semi-supervised kernel LPE are given. The proposed LPE is applied to face recognition (on the ORL and Yale face databases) and handwritten digit recognition (on the USPS database). The experimental results show that LPE consistently outperforms classical linear methods, e.g., principal component analysis and linear discriminant analysis, and recent manifold learning-based methods, e.g., marginal Fisher analysis and constrained maximum variance mapping.

8.
Principal component analysis (PCA) is one of the most widely used unsupervised dimensionality reduction methods in pattern recognition. It preserves the global covariance structure of data when labels are not available. However, in many practical applications, besides the large amount of unlabeled data, it is also possible to obtain partial supervision such as a few labeled data and pairwise constraints, which contain much more valuable information for discrimination than unlabeled data. Unfortunately, PCA cannot utilize that useful discriminant information effectively. On the other hand, traditional supervised dimensionality reduction methods such as linear discriminant analysis operate only on labeled data, and their performance deteriorates when labeled data are insufficient. In this paper, we propose a novel discriminant PCA (DPCA) model to boost the discriminant power of PCA when unlabeled and labeled data as well as pairwise constraints are all available. The derived DPCA algorithm is efficient and has a closed-form solution. Experimental results on several UCI and face data sets show that DPCA is superior to several established dimensionality reduction methods.

9.
Nonlinear manifold learning methods for dimensionality reduction have been widely applied to face recognition, intrusion detection, sensor networks, and other fields. However, few manifold learning algorithms can effectively handle sparse data. Based on the framework of the locally linear embedding (LLE) algorithm, a sparse embedding algorithm with enlarged local neighborhoods is proposed; by strengthening local region information, it enriches the overlapping information when only a small number of samples are available. Experimental results on sparse synthetic and face data sets show that the proposed algorithm produces better embedding and classification results.

10.
Principal Component Analysis (PCA) is a well-known linear dimensionality reduction technique in the literature. It extracts global principal components (PCs) but fails to capture local variations in its global PCs. To overcome this issue, Feature Partitioning based PCA (FP-PCA) methods were proposed; they extract local PCs from subpatterns and are not sensitive to global variations across the subpatterns. Subsequently, SubXPCA was proposed as a novel FP-PCA approach that extracts PCs by utilizing both global and local information; it was proved to be efficient in terms of computational time and classification. However, no detailed theoretical investigation has been done on the properties of FP-PCA methods. Such theoretical analysis is essential to provide generalized and formal validation of the properties of FP-PCA methods. In this paper, our focus is to show SubXPCA as an alternative to PCA and other FP-PCA methods by analytically proving various properties of SubXPCA related to summarization of variance, feature orders, and subpattern sizes. We prove analytically that (i) SubXPCA approaches PCA in terms of summarizing variance as the number of local principal components of subpatterns increases; (ii) SubXPCA is robust against feature orders (permutations) of patterns and a variety of partitions (subpattern sizes); (iii) SubXPCA achieves higher dimensionality reduction than other FP-PCA methods. These properties of SubXPCA are demonstrated empirically on the UCI Waveform and ORL face data sets.
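To illustrate the feature-partitioning idea behind FP-PCA and SubXPCA, here is a hedged two-stage sketch: each pattern is split into subpatterns, PCA extracts local components per subpattern, and a second PCA over the concatenated local components summarizes cross-subpattern variance. The partitioning scheme, component counts, and helper name are assumptions for illustration; this is not a faithful reimplementation of SubXPCA.

```python
import numpy as np
from sklearn.decomposition import PCA

def subpattern_then_cross_pca(X, n_parts=4, local_k=5, global_k=10):
    """Feature-partitioning PCA sketch: local PCA per subpattern, then a
    second PCA across the concatenated local components (illustrative only)."""
    parts = np.array_split(np.arange(X.shape[1]), n_parts)    # feature partition
    local_models = [PCA(n_components=local_k).fit(X[:, idx]) for idx in parts]
    local_scores = np.hstack(
        [m.transform(X[:, idx]) for m, idx in zip(local_models, parts)]
    )                                                          # (n, n_parts*local_k)
    cross = PCA(n_components=global_k).fit(local_scores)       # cross-subpattern PCA
    return cross.transform(local_scores)

# toy usage on random data standing in for vectorized face patterns
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
print(subpattern_then_cross_pca(X).shape)  # (200, 10)
```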

11.
The human heart is a complex system that reveals many clues about its condition in its electrocardiogram (ECG) signal, and ECG monitoring is the most important and efficient way of preventing heart attacks. ECG analysis and recognition are both important and attractive topics in modern medical research. The purpose of this paper is to develop an algorithm that combines a kernel method, locally linear embedding (LLE), principal component analysis (PCA), and support vector machine (SVM) algorithms for dimensionality reduction, feature extraction, and classification of the given ECG signals. To do so, a kernel-based nonlinear dimensionality reduction method built on LLE is proposed to reduce the high dimensionality of the varying ECG signals, and the principal characteristics of the signals are extracted from the original database by means of PCA, with each signal representing a single, complete heartbeat. The SVM is then applied to classify the ECG data into several categories of heart disease. Experimental results demonstrated that the performance of the proposed method was comparable to, and sometimes better than, other ECG recognition techniques, indicating a viable and accurate technique.
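The overall recognition chain described here (dimensionality reduction, feature extraction, classification) can be wired up with standard scikit-learn components as in the sketch below. Standard LLE stands in for the paper's kernel-based variant, and the synthetic two-class "beats" are placeholders for real segmented ECG windows, so the sketch only shows how the stages connect.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# placeholder "heartbeats": two synthetic classes of 1-D signals standing in
# for segmented single-beat ECG windows (real data would come from an ECG database)
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 180)
normal = np.sin(2 * np.pi * 3 * t) + 0.1 * rng.normal(size=(300, t.size))
abnormal = np.sin(2 * np.pi * 5 * t) ** 3 + 0.1 * rng.normal(size=(300, t.size))
X = np.vstack([normal, abnormal])
y = np.array([0] * 300 + [1] * 300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

# stage 1: nonlinear dimensionality reduction (standard LLE as a stand-in for
# the paper's kernel-based LLE variant)
lle = LocallyLinearEmbedding(n_neighbors=15, n_components=10, random_state=0)
X_tr = lle.fit_transform(X_tr)
X_te = lle.transform(X_te)

# stage 2: PCA to keep the principal characteristics of the embedded beats
pca = PCA(n_components=5).fit(X_tr)
X_tr, X_te = pca.transform(X_tr), pca.transform(X_te)

# stage 3: SVM classifier over the reduced beat representations
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```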

12.
Locally linear embedding (LLE) is a nonlinear dimensionality reduction method proposed recently. It can reveal the intrinsic distribution of data, which cannot be provided by classical linear dimensionality reduction methods. The application of LLE, however, is limited by its lack of a parametric mapping between the observations and the low-dimensional output, and by its need for a large data set to be reduced. In this paper, we propose methods to establish the mapping from the low-dimensional embedded space back to the high-dimensional space for LLE and validate their efficiency through the reconstruction of multi-pose face images. Furthermore, we propose that the high-dimensional structure of multi-pose face images is similar across different persons for the same kind of pose change. Therefore, given the structure information of the data distribution, obtained by learning large numbers of multi-pose images in a training set, the support vector regression (SVR) method of statistical learning theory is used to learn the high-dimensional structure of an individual from small sample sets. The detailed learning method and algorithm are given and applied to reconstruct and synthesize face images in small-set cases. The experiments confirm that our idea and method are correct.
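A small sketch of the inverse-mapping idea is given below: LLE provides the low-dimensional (pose-like) coordinates, and support vector regression learns the mapping from the embedded space back to the high-dimensional image space, one regressor per output dimension. The smooth one-parameter synthetic "images" are placeholders and the SVR hyperparameters are illustrative; the paper's detailed small-sample procedure is not reproduced.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

# placeholder "multi-pose images": a smooth one-parameter family of vectors
# standing in for pose-varying face images of one person
rng = np.random.default_rng(0)
pose = np.linspace(0, np.pi, 150)
X = np.stack([np.sin(pose * f) for f in np.linspace(1, 4, 64)], axis=1)
X = X + 0.01 * rng.normal(size=X.shape)         # (150 images, 64 "pixels")

# forward direction: LLE gives the low-dimensional pose coordinates
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=1, random_state=0)
Y = lle.fit_transform(X)

# inverse direction: SVR learns the mapping from embedded space back to image
# space, one regressor per output dimension (pixel)
inverse_map = MultiOutputRegressor(SVR(kernel="rbf", C=10.0)).fit(Y, X)
X_rec = inverse_map.predict(Y)                  # reconstructed images
print("mean reconstruction error:", np.mean((X - X_rec) ** 2))
```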

13.
The LLE algorithm in manifold learning can reduce high-dimensional data to a low-dimensional manifold subspace while preserving the local neighborhood structure, yielding a set of embedded vectors with a local structure similar to that of the original sample set. However, LLE does not take the class information of the samples into account during dimensionality reduction. To address this problem, an improved supervised locally linear embedding algorithm (MSLLE) is proposed, and its implementation is compared experimentally with LLE in MATLAB. The experiments show that MSLLE preserves the internal structure of the data points better than LLE.

14.
Dimensionality reduction is often required as a preliminary stage in many data analysis applications. In this paper, we propose a novel supervised dimensionality reduction method, called linear discriminant projection embedding (LDPE), for pattern recognition. LDPE first chooses a set of overlapping patches that cover all data points using a minimum set cover algorithm with a geodesic distance constraint. Then, principal component analysis (PCA) is applied on each patch to obtain the data's local representations. Finally, a patch alignment technique combined with a modified maximum margin criterion (MMC) is used to yield the discriminant global embedding. LDPE takes both label information and the manifold structure into account; thus, it can maximize the dissimilarities between different classes and preserve the data's intrinsic structure simultaneously. The efficiency of the proposed algorithm is demonstrated by extensive experiments using three standard face databases (ORL, YALE and CMU PIE). Experimental results show that LDPE outperforms other classical and state-of-the-art algorithms.

15.
Traditional dimensionality reduction algorithms are divided into linear methods and manifold learning methods, but in practical applications it is difficult to determine which class of algorithm is needed. A unified dimensionality reduction algorithm is designed whose linear reduction performance is guaranteed to be at least that of principal component analysis, and which can also reveal the manifold structure of the data in the manifold learning setting. By constructing a Markov transition matrix over the high-dimensional data, so that more similar nodes have higher transition probabilities, the mapping from the high-dimensional data to the low-dimensional manifold is discovered. Experimental results show that on synthetic...
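The Markov-matrix construction described above can be sketched in a few lines in the spirit of diffusion maps: Gaussian affinities give more similar points higher transition probability, the matrix is row-normalized, and the leading non-trivial eigenvectors serve as low-dimensional coordinates. The bandwidth heuristic and eigenvalue scaling are illustrative assumptions, not necessarily the paper's exact algorithm.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def markov_embedding(X, n_components=2, eps=None):
    """Diffusion-maps-style sketch: row-stochastic Markov matrix from Gaussian
    affinities, embedded via its leading non-trivial eigenvectors."""
    D = squareform(pdist(X))                       # pairwise Euclidean distances
    if eps is None:
        eps = np.median(D[D > 0]) ** 2             # bandwidth heuristic (assumption)
    K = np.exp(-D ** 2 / eps)                      # Gaussian affinity: similar -> large
    P = K / K.sum(axis=1, keepdims=True)           # row-normalize -> transition matrix
    eigvals, eigvecs = np.linalg.eig(P)
    order = np.argsort(-eigvals.real)              # sort by decreasing eigenvalue
    eigvals, eigvecs = eigvals.real[order], eigvecs.real[:, order]
    # skip the trivial constant eigenvector (eigenvalue 1)
    return eigvecs[:, 1:n_components + 1] * eigvals[1:n_components + 1]

# toy usage: points on a noisy circle embedded in 5-D
rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, 200)
circle = np.c_[np.cos(angles), np.sin(angles), rng.normal(0, 0.05, (200, 3))]
print(markov_embedding(circle).shape)  # (200, 2)
```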

16.
To effectively handle speech data lying on a nonlinear manifold embedded in a high-dimensional acoustic space, in this paper, an adaptive supervised manifold learning algorithm based on locally linear embedding (LLE) for nonlinear dimensionality reduction is proposed to extract low-dimensional embedded data representations for phoneme recognition. The proposed method aims to maximize the interclass dissimilarity while minimizing the intraclass dissimilarity in order to promote the discriminating power and generalization ability of the low-dimensional embedded data representations. The performance of the proposed method is compared with five well-known dimensionality reduction methods, i.e., principal component analysis, linear discriminant analysis, isometric mapping (Isomap), LLE, as well as the original supervised LLE. Experimental results on three benchmark speech databases, i.e., the Deterding database, the DARPA TIMIT database, and the ISOLET E-set database, demonstrate that the proposed method obtains promising performance on the phoneme recognition task, outperforming the other methods.

17.
L1-norm locally linear embedding
The problem of dimensionality reduction arises in many information processing fields, including machine learning, pattern recognition, and data mining. Locally linear embedding (LLE) is an unsupervised nonlinear manifold learning algorithm for dimensionality reduction and is widely used because of its good performance. To address the sensitivity of traditional LLE to outliers (or noise), a robust LLE algorithm based on L1-norm minimization (L1-LLE) is proposed. The local reconstruction matrix is obtained by L1-norm minimization, which reduces the energy of the reconstruction matrix and effectively suppresses outlier (or noise) interference. Using existing optimization techniques, L1-LLE is simple and easy to implement, and its convergence is proved. Tests on synthetic and real data sets, with performance compared against traditional LLE, show that the L1-LLE method is stable and effective.
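The key step of L1-LLE, obtaining local reconstruction weights by L1-norm minimization, can be posed as a small linear program per data point: minimize ||x - N^T w||_1 subject to sum(w) = 1. The sketch below solves that subproblem with scipy's `linprog`; it replaces the least-squares weight step of standard LLE but does not reproduce the rest of the L1-LLE algorithm or its convergence analysis.

```python
import numpy as np
from scipy.optimize import linprog

def l1_reconstruction_weights(x, neighbors):
    """Solve min_w ||x - N^T w||_1 subject to sum(w) = 1 as a linear program.
    x: (d,) target point; neighbors: (k, d) rows are the k neighbors of x.
    This is the per-point weight step that an L1-based LLE would substitute
    for the usual least-squares reconstruction step."""
    k, d = neighbors.shape
    # variables: [w_1..w_k, t_1..t_d]; minimize sum(t) where |x - N w| <= t
    c = np.concatenate([np.zeros(k), np.ones(d)])
    N = neighbors.T                                     # (d, k)
    #  N w - t <= x   and   -N w - t <= -x   encode  -t <= x - N w <= t
    A_ub = np.block([[N, -np.eye(d)], [-N, -np.eye(d)]])
    b_ub = np.concatenate([x, -x])
    A_eq = np.concatenate([np.ones(k), np.zeros(d)]).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(None, None)] * k + [(0, None)] * d       # w free, t nonnegative
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:k]

# toy usage: reconstruct a point from 4 neighbors, one of which is an outlier
rng = np.random.default_rng(0)
x = np.array([1.0, 1.0, 1.0])
neighbors = np.vstack([x + rng.normal(0, 0.1, 3) for _ in range(3)] + [x + 10.0])
w = l1_reconstruction_weights(x, neighbors)
print(np.round(w, 3))   # the outlier neighbor typically receives a small weight
```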

18.
The locally linear embedding (LLE) algorithm, with its low computational complexity and high efficiency, is suitable for many dimensionality reduction problems. A new adaptive locally linear embedding (ALLE) algorithm performs nonlinear dimensionality reduction on the data, extracts the essential features of high-dimensional data, and preserves the global geometric structure of the data. Comparative experiments show that the algorithm outperforms LLE in dimensionality reduction of non-ideal data.

19.
Subspace manifold learning represents a popular class of techniques in statistical image analysis and object recognition. Recent research in the field has focused on nonlinear representations; locally linear embedding (LLE) is one such technique that has recently gained popularity. We present and apply a generalization of LLE that introduces sample weights. We demonstrate the application of the technique to face recognition, where a model exists to describe each face’s probability of occurrence. These probabilities are used as weights in the learning of the low-dimensional face manifold. Results of face recognition using this approach are compared against standard nonweighted LLE and PCA. A significant improvement in recognition rates is realized using weighted LLE on a data set where face occurrences follow the modeled distribution.

20.
In the past few years, some nonlinear dimensionality reduction (NLDR) or nonlinear manifold learning methods have aroused a great deal of interest in the machine learning community. These methods are promising in that they can automatically discover the low-dimensional nonlinear manifold in a high-dimensional data space and then embed the data points into a low-dimensional embedding space, using tractable linear algebraic techniques that are easy to implement and are not prone to local minima. Despite their appealing properties, these NLDR methods are not robust against outliers in the data, yet so far very little has been done to address the robustness problem. In this paper, we address this problem in the context of an NLDR method called locally linear embedding (LLE). Based on robust estimation techniques, we propose an approach to make LLE more robust. We refer to this approach as robust locally linear embedding (RLLE). We also present several specific methods for realizing this general RLLE approach. Experimental results on both synthetic and real-world data show that RLLE is very robust against outliers.
