Locally Minimizing Embedding and Globally Maximizing Variance: Unsupervised Linear Difference Projection for Dimensionality Reduction
Authors:Minghua Wan  Zhihui Lai  Zhong Jin
Affiliation:1.School of Computer Science and Technology,Nanjing University of Science and Technology,Nanjing,China;2.School of Information Engineering,Nanchang Hangkong University,Nanchang,China;3.Bio-Computing Research Center, Shenzhen Graduate School,Harbin Institute of Technology,Shenzhen,China
Abstract: Recently, many dimensionality reduction algorithms, including local methods and global methods, have been presented. The representative local linear methods are locally linear embedding (LLE) and locality preserving projections (LPP), which seek an embedding space that preserves local information in order to capture the intrinsic characteristics of high-dimensional data. However, both methods still struggle with sparsely sampled or noise-contaminated datasets, in which the local neighborhood structure is severely distorted. In contrast, principal component analysis (PCA), the most frequently used global method, preserves the total variance by maximizing the trace of the feature covariance matrix. However, PCA cannot preserve local information because it pursues maximal variance only. To integrate locality and globality and to avoid the drawbacks of LLE and PCA, in this paper we propose a new dimensionality reduction method for face recognition, namely unsupervised linear difference projection (ULDP), inspired by both LLE and PCA. This approach can be regarded as an integration of a local method (LLE) and a global method (PCA), and therefore offers better performance and robustness in applications. Experimental results on the ORL, YALE, and AR face databases demonstrate the effectiveness of the proposed method for face recognition.
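The abstract describes ULDP only at a high level, so the following is a minimal sketch of one plausible "difference projection" formulation: compute LLE-style reconstruction weights for the local term, a PCA scatter matrix for the global term, and then maximize the global variance minus the local embedding cost via an eigen-decomposition. The exact criterion, the use of centered data, and the regularization constant are assumptions, not the authors' published formulation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def uldp_projection(X, n_neighbors=5, n_components=2):
    """Sketch of a difference-style projection combining an LLE-like
    local term with a PCA-like global variance term (assumed form).

    X: (n_samples, n_features) data matrix.
    Returns a (n_features, n_components) projection matrix.
    """
    n_samples, n_features = X.shape
    Xc = X - X.mean(axis=0)                        # center the data

    # --- Local term: LLE reconstruction weights W ---
    nbrs = NearestNeighbors(n_neighbors=n_neighbors + 1).fit(X)
    _, idx = nbrs.kneighbors(X)                    # first neighbor is the point itself
    W = np.zeros((n_samples, n_samples))
    for i in range(n_samples):
        neighbors = idx[i, 1:]
        Z = X[neighbors] - X[i]                    # local coordinate differences
        C = Z @ Z.T
        C += np.eye(n_neighbors) * 1e-3 * np.trace(C)  # regularize for stability
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, neighbors] = w / w.sum()              # reconstruction weights sum to one

    # --- Matrices for the two criteria ---
    I = np.eye(n_samples)
    M = (I - W).T @ (I - W)                        # local embedding cost matrix
    S_local = Xc.T @ M @ Xc                        # local cost pushed into input space
    S_global = Xc.T @ Xc / n_samples               # total-variance (PCA) scatter

    # --- Difference criterion: maximize global variance minus local cost ---
    eigvals, eigvecs = np.linalg.eigh(S_global - S_local)
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order[:n_components]]

# Illustrative usage on random data:
# A = uldp_projection(np.random.randn(100, 20), n_neighbors=7, n_components=5)
# Y = np.random.randn(100, 20) @ A                 # project new samples
```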
This article is indexed in SpringerLink and other databases.