Similar Documents (20 results)
1.
Neighborhood preserving embedding (NPE) is a linear approximation to the locally linear embedding algorithm that preserves the local neighborhood structure on the data manifold. However, in typical face recognition, where the number of data samples is smaller than the dimension of the data space, it is difficult to apply NPE directly to high-dimensional matrices because of the computational complexity. Moreover, in such cases NPE often suffers from the singularity of its eigenmatrix, which makes a direct implementation of the algorithm almost impossible. In practice, principal component analysis or singular value decomposition is applied as a preprocessing step to address these problems. Nevertheless, this strategy may discard dimensions that contain important discriminative information, and the eigensystem computation of NPE can be unstable. Towards a practical dimensionality reduction method for face data, we develop a new scheme in this paper, namely the complete neighborhood preserving embedding (CNPE). CNPE transforms the singular generalized eigensystem computation of NPE into two eigenvalue decomposition problems. Moreover, a feasible and effective procedure is proposed to alleviate the computational burden of the high-dimensional matrices encountered with typical face image data. Experimental results on the ORL and Yale face databases show that the proposed CNPE algorithm achieves better performance than other feature extraction methods such as Eigenfaces, Fisherfaces and NPE.
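For readers unfamiliar with the baseline, here is a minimal sketch of the standard NPE pipeline with the PCA preprocessing step discussed above, written with NumPy/SciPy; it is not the authors' CNPE, and the function name, ridge term, neighborhood size and toy data are illustrative assumptions.

```python
# Hedged sketch of plain NPE with PCA preprocessing (not the authors' CNPE).
import numpy as np
from scipy.linalg import eigh

def npe(X, n_components=10, k=5, pca_dim=50, ridge=1e-6):
    """X: (n_samples, n_features). Returns a projection matrix in the original feature space."""
    n = X.shape[0]
    Xc = X - X.mean(axis=0)

    # PCA preprocessing to avoid the singular eigensystem discussed above.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:min(pca_dim, len(s))].T              # (n_features, pca_dim)
    Z = Xc @ P                                   # reduced data, (n, pca_dim)

    # Step 1: local reconstruction weights with the sum-to-one constraint.
    W = np.zeros((n, n))
    dists = np.linalg.norm(Z[:, None] - Z[None, :], axis=2)
    for i in range(n):
        nbrs = np.argsort(dists[i])[1:k + 1]     # k nearest neighbors (excluding the point itself)
        G = (Z[i] - Z[nbrs]) @ (Z[i] - Z[nbrs]).T
        G += ridge * np.trace(G) * np.eye(k)     # regularize the local Gram matrix
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs] = w / w.sum()

    # Step 2: generalized eigenproblem  Z^T M Z a = lambda Z^T Z a,
    # keeping the eigenvectors with the smallest eigenvalues.
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    A = Z.T @ M @ Z
    B = Z.T @ Z + ridge * np.eye(Z.shape[1])
    vals, vecs = eigh(A, B)
    return P @ vecs[:, :n_components]            # map back to the original feature space

# Toy usage: 40 random "face vectors" of dimension 1024.
rng = np.random.default_rng(0)
faces = rng.normal(size=(40, 1024))
proj = npe(faces, n_components=5, k=4, pca_dim=20)
embedded = (faces - faces.mean(axis=0)) @ proj
print(embedded.shape)                            # (40, 5)
```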

2.
We present a modular linear discriminant analysis (LDA) approach for face recognition. A set of observers is trained independently on different regions of frontal faces and each observer projects face images to a lower-dimensional subspace. These lower-dimensional subspaces are computed using LDA methods, including a new algorithm that we refer to as direct, weighted LDA or DW-LDA. DW-LDA combines the advantages of two recent LDA enhancements, namely direct LDA (D-LDA) and weighted pairwise Fisher criteria. Each observer performs recognition independently and the results are combined using a simple sum-rule. Experiments compare the proposed approach to other face recognition methods that employ linear dimensionality reduction. These experiments demonstrate that the modular LDA method performs significantly better than other linear subspace methods. The results also show that D-LDA does not necessarily perform better than the well-known principal component analysis followed by LDA approach. This is an important and significant counterpoint to previously published experiments that used smaller databases. Our experiments also indicate that the new DW-LDA algorithm is an improvement over D-LDA.
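A minimal sketch of the modular idea only: independent LDA observers trained on different face regions and fused with the sum rule over posterior scores. It uses scikit-learn's plain LDA rather than the paper's DW-LDA, and the region grid, classifier choice and toy data are assumptions.

```python
# Hedged sketch: region-wise LDA "observers" combined by a sum rule (not DW-LDA).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def split_regions(images, n_rows=2, n_cols=2):
    """images: (n, H, W) -> list of flattened region matrices, one per observer."""
    n, H, W = images.shape
    regions = []
    for r in range(n_rows):
        for c in range(n_cols):
            block = images[:, r*H//n_rows:(r+1)*H//n_rows, c*W//n_cols:(c+1)*W//n_cols]
            regions.append(block.reshape(n, -1))
    return regions

def fit_observers(train_images, labels):
    # Each observer is trained independently on its own face region.
    return [LinearDiscriminantAnalysis().fit(R, labels)
            for R in split_regions(train_images)]

def predict_sum_rule(observers, test_images):
    # Sum rule: add the per-observer class posteriors, then pick the best class.
    scores = sum(obs.predict_proba(R)
                 for obs, R in zip(observers, split_regions(test_images)))
    return scores.argmax(axis=1)

# Toy usage with random 32x32 "faces" for 4 identities.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(40, 32, 32))
y_train = np.repeat(np.arange(4), 10)
observers = fit_observers(X_train, y_train)
print(predict_sum_rule(observers, X_train[:5]))
```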

3.
An improved manifold learning method, called enhanced semi-supervised local Fisher discriminant analysis (ESELF), is proposed for face recognition. Motivated by the fact that statistical uncorrelatedness and parameter-freeness are two desirable and promising characteristics of a dimension reduction method, a new difference-based optimization objective function incorporating unlabeled samples has been designed. The proposed method preserves the manifold structure of labeled and unlabeled samples in addition to separating labeled samples of different classes from each other. The semi-supervised method has an analytic form of the globally optimal solution, which can be computed by eigendecomposition. Experiments on synthetic data and the AT&T, Yale and CMU PIE face databases are performed to evaluate the proposed algorithm. The experimental results and comparisons demonstrate the effectiveness of the proposed method.

4.
Adaptive nonlinear manifolds and their applications to pattern recognition
Dimensionality reduction has long been associated with retinotopic mapping for understanding cortical maps. Multisensory information is processed, fused and mapped to an essentially 2-D cortex in an information-preserving manner. Data processing and projection techniques inspired by this biological mechanism are playing an increasingly important role in pattern recognition, computational intelligence, data mining, information retrieval and image recognition. Dimensionality reduction involves reducing the number of features or the volume of data and has become an essential step of information processing in many fields. The topic of manifold learning has recently attracted a great deal of attention, and a number of advanced techniques for extracting nonlinear manifolds and reducing data dimensions have been proposed from statistics, geometry theory and adaptive neural networks. This paper provides an overview of this challenging and emerging topic and discusses recent methods such as the self-organizing map (SOM), kernel PCA, principal manifolds, Isomap, locally linear embedding, and Laplacian eigenmaps, many of which can be considered within a manifold learning framework. The paper further elaborates on the biologically inspired SOM model and its metric-preserving variant ViSOM under the framework of adaptive manifolds, and investigates their applications to dimensionality reduction for face recognition. The experiments demonstrate that adaptive ViSOM-based methods produce markedly improved performance over the others due to their metric scaling and preserving properties along the nonlinear manifold.

5.
Although the 2DLDA algorithm achieves high recognition accuracy, a vital unresolved problem is that it needs a huge feature matrix for the task of face recognition. To overcome this problem, this paper presents an efficient approach for face image feature extraction, namely the (2D)2LDA method. Experimental results on the ORL and Yale databases show that the proposed method obtains good recognition accuracy despite using far fewer coefficients.

6.
We propose an effective face recognition method based on discriminative locality preserving vectors (DLPV). Using an eigenspectrum analysis of locality preserving projections (LPP), we select the reliable face variation subspace of LPP to construct the locality preserving vectors that characterize the data set. The DLPV method then performs discriminant analysis on these locality preserving vectors. Furthermore, theoretical analysis shows that DLPV can be viewed as a generalization of the discriminative common vector, null-space linear discriminant analysis and null-space discriminant locality preserving projections methods, which provides the intuitive motivation for our method. Extensive experimental results obtained on four well-known face databases (ORL, Yale, Extended Yale B and CMU PIE) demonstrate the effectiveness of the proposed DLPV method.

7.
Illumination variation in face images degrades the performance of face recognition. In this paper, we propose a novel approach to handling illumination variation for face recognition. Since most human faces are similar in shape, we can identify the shadow characteristics that illumination variation produces on faces depending on the direction of the light. By using these characteristics, we can compensate for the illumination variation in face images. The proposed method is simple and requires much less computational effort than methods based on 3D models, while providing a comparable recognition rate.

8.
Recently, the underlying sparse representation structure in high-dimensional data has received considerable attention in pattern recognition and computer vision. In this paper, we propose a novel semi-supervised dimensionality reduction (SDR) method, named Double Linear Regressions (DLR), to tackle the Single Labeled Image per Person (SLIP) face recognition problem. DLR simultaneously seeks the best discriminating subspace and preserves the sparse representation structure. Specifically, a Subspace Assumption based Label Propagation (SALP) method, which is accomplished using linear regressions, is first presented to propagate the label information to the unlabeled data. Then, based on the propagated labeled dataset, a sparse representation regularization term is constructed via linear regressions. Finally, DLR takes into account both the discriminating efficiency and the sparse representation structure by using the learned sparse representation regularization term as a regularization term of linear discriminant analysis (LDA). Extensive and encouraging experimental results on three publicly available face databases (CMU PIE, Extended Yale B and AR) demonstrate the effectiveness of the proposed method.

9.
It is well known that the applicability of both linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA) to high-dimensional pattern classification tasks such as face recognition (FR) often suffers from the so-called "small sample size" (SSS) problem, which arises when the number of available training samples is small compared to the dimensionality of the sample space. In this paper, we propose a new QDA-like method that effectively addresses the SSS problem using a regularization technique. Extensive experimentation performed on the FERET database indicates that the proposed methodology outperforms traditional methods such as Eigenfaces, direct QDA and direct LDA in a number of SSS scenarios.
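To illustrate how a regularization technique can sidestep the SSS problem in a QDA-like classifier, here is a generic sketch in the spirit of Friedman's regularized discriminant analysis; it is not the specific method proposed in the abstract, and the shrinkage parameters and toy data are assumptions.

```python
# Hedged sketch: covariance shrinkage makes per-class covariances invertible
# even when each class has fewer samples than features.
import numpy as np

class RegularizedQDA:
    def __init__(self, alpha=0.5, gamma=0.1):
        self.alpha = alpha      # shrink class covariance toward a shared covariance
        self.gamma = gamma      # shrink further toward a scaled identity

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        d = X.shape[1]
        shared = np.cov(X, rowvar=False)            # covariance of all samples (shrinkage target)
        self.means_, self.covs_, self.priors_ = [], [], []
        for c in self.classes_:
            Xc = X[y == c]
            self.means_.append(Xc.mean(axis=0))
            S = np.cov(Xc, rowvar=False)
            S = (1 - self.alpha) * S + self.alpha * shared
            S = (1 - self.gamma) * S + self.gamma * (np.trace(S) / d) * np.eye(d)
            self.covs_.append(S)
            self.priors_.append(len(Xc) / len(X))
        return self

    def predict(self, X):
        scores = []
        for mu, S, p in zip(self.means_, self.covs_, self.priors_):
            diff = X - mu
            sign, logdet = np.linalg.slogdet(S)
            maha = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(S), diff)
            scores.append(-0.5 * (logdet + maha) + np.log(p))
        return self.classes_[np.argmax(np.column_stack(scores), axis=1)]

# Toy usage: 3 classes, 20 features, only 5 samples per class (an SSS setting).
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(loc=i, size=(5, 20)) for i in range(3)])
y = np.repeat(np.arange(3), 5)
print(RegularizedQDA().fit(X, y).predict(X))
```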

10.
The goal of face recognition is to distinguish persons via their facial images. Each person's images form a cluster, and a new image is recognized by assigning it to the correct cluster. Since the images are very high-dimensional, it is necessary to reduce their dimension. Linear discriminant analysis (LDA) has been shown to be effective at dimension reduction while preserving the cluster structure of the data. It is classically defined as an optimization problem involving covariance matrices that represent the scatter within and between clusters. The requirement that one of these matrices be nonsingular restricts its application to datasets in which the dimension of the data does not exceed the sample size. For face recognition, however, the dimension typically exceeds the number of images in the database, resulting in what is referred to as the small sample size problem. Recently, the applicability of LDA has been extended by using the generalized singular value decomposition (GSVD) to circumvent the nonsingularity requirement, thus making LDA directly applicable to face recognition data. Our experiments confirm that LDA/GSVD solves the small sample size problem very effectively as compared with other current methods.
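A short sketch of the scatter matrices that classical LDA optimizes, together with a rank check showing why the within-cluster scatter becomes singular when the dimension exceeds the sample size; the GSVD-based solver itself is not reproduced, and the toy dimensions are assumptions.

```python
# Hedged sketch: within- and between-cluster scatter matrices, and the rank
# deficiency that causes the small sample size problem.
import numpy as np

def scatter_matrices(X, y):
    """X: (n, d). Returns (S_w, S_b): within- and between-class scatter."""
    d = X.shape[1]
    mean = X.mean(axis=0)
    S_w = np.zeros((d, d))
    S_b = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        S_w += (Xc - mc).T @ (Xc - mc)
        S_b += len(Xc) * np.outer(mc - mean, mc - mean)
    return S_w, S_b

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 100))          # 30 images, 100-dimensional: n < d
y = np.repeat(np.arange(3), 10)
S_w, S_b = scatter_matrices(X, y)
print(np.linalg.matrix_rank(S_w), S_w.shape[0])   # rank < d, so S_w is singular
```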

11.
Dimensionality reduction methods (DRs) have commonly been used as a principled way to understand high-dimensional data such as face images. In this paper, we propose a new unsupervised DR method called sparsity preserving projections (SPP). Unlike many existing techniques such as locality preserving projections (LPP) and neighborhood preserving embedding (NPE), where local neighborhood information is preserved during the DR procedure, SPP aims to preserve the sparse reconstructive relationship of the data, which is achieved by minimizing an L1-regularization-related objective function. The obtained projections are invariant to rotations, rescalings and translations of the data and, more importantly, they contain natural discriminating information even if no class labels are provided. Moreover, SPP chooses its neighborhood automatically and hence can be used more conveniently in practice than LPP and NPE. The feasibility and effectiveness of the proposed method are verified on three popular face databases (Yale, AR and Extended Yale B) with promising results.
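A minimal sketch of the sparse reconstruction step that SPP preserves: each sample is reconstructed from the remaining samples under an L1 penalty. Lasso is used here as a stand-in for the paper's exact L1 formulation, and the penalty strength and toy data are assumptions.

```python
# Hedged sketch: sparse reconstruction weights via an L1-penalized fit.
import numpy as np
from sklearn.linear_model import Lasso

def sparse_weights(X, alpha=0.05):
    """X: (n, d). Returns the (n, n) sparse reconstruction weight matrix S."""
    n = X.shape[0]
    S = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        # Reconstruct x_i from the other samples; the L1 penalty keeps few of them.
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(X[others].T, X[i])          # columns of the design matrix are the other samples
        S[i, others] = lasso.coef_
    return S

rng = np.random.default_rng(4)
X = rng.normal(size=(20, 64))
S = sparse_weights(X)
print((np.abs(S) > 1e-6).sum(axis=1))         # how many "neighbors" each sample picked automatically
```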

12.
In this paper, the impact of outliers on the performance of high-dimensional data analysis methods is studied in the context of face recognition. Most existing face recognition methods are PCA-like: faces are projected into a lower-dimensional space in which similarity between faces is supposed to be more easily evaluated. These methods are, however, very sensitive to the quality of the face images used in the training and recognition phases, and their performance drops significantly when face images are not well centered or are taken under variable illumination conditions. We study this phenomenon for two face recognition methods, namely PCA and LDA2D, and we propose a filtering process that automatically selects the noisy face images responsible for the performance degradation. This process uses two techniques. The first is based on the recently proposed robust high-dimensional data analysis method called RobPCA and is specific to recognition from video sequences. The second is based on a novel and effective face classification technique that allows isolating still face images that are imprecisely cropped, not well centered or in a non-frontal pose. Experiments show that this filtering process improves recognition rates significantly, by 10 to 30%.

13.
This paper proposes a new subspace method that is based on image covariance obtained from windowed features of images. A windowed input feature consists of a number of pixels, and the dimension of input space is determined by the number of windowed features. Each element of an image covariance matrix can be obtained from the inner product of two windowed features. The 2D-PCA and 2D-LDA methods are then obtained from principal component analysis and linear discriminant analysis, respectively, using the image covariance matrix. In the case of 2D-LDA, there is no need for PCA preprocessing and the dimension of subspace can be greater than the number of classes because the within-class and between-class image covariance matrices have full ranks. Comparative experiments are performed using the FERET, CMU, and ORL databases of facial images. The experimental results show that the proposed 2D-LDA provides the best recognition rate among several subspace methods in all of the tests.
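A minimal sketch of the image-covariance idea: the covariance is built directly from image matrices and eigendecomposed, and each image is projected onto the leading eigenvectors (essentially plain 2D-PCA). The windowed-feature generalization and the 2D-LDA variant are not reproduced, and the toy data and component count are assumptions.

```python
# Hedged sketch: image covariance built from image matrices, then 2D-PCA projection.
import numpy as np

def two_d_pca(images, n_components=8):
    """images: (n, H, W). Returns the (W, n_components) projection matrix."""
    mean_img = images.mean(axis=0)
    centered = images - mean_img
    # Image covariance: average of (A_i - mean)^T (A_i - mean), a (W, W) matrix.
    G = np.einsum('nij,nik->jk', centered, centered) / len(images)
    vals, vecs = np.linalg.eigh(G)
    return vecs[:, ::-1][:, :n_components]       # eigenvectors of the largest eigenvalues

rng = np.random.default_rng(5)
faces = rng.normal(size=(40, 32, 32))
V = two_d_pca(faces, n_components=8)
features = faces @ V                             # each face becomes a 32 x 8 feature matrix
print(features.shape)                            # (40, 32, 8)
```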

14.
Pattern Recognition, 2014, 47(2): 544-555
This paper proposes a novel method of supervised and unsupervised multi-linear neighborhood preserving projection (MNPP) for face recognition. Unlike conventional neighborhood preserving projections, the MNPP method operates directly on tensorial data rather than on vectors or matrices, and solves problems of tensorial representation for multi-dimensional feature extraction, classification and recognition. As opposed to traditional approaches such as NPP and 2DNPP, which derive only one subspace, multiple interrelated subspaces are obtained in MNPP by unfolding the tensor along its different tensorial directions; the number of subspaces derived by MNPP is determined by the order of the tensor space. This approach is applied to face recognition and biometric security classification problems involving higher-order tensors. The performance of the proposed and existing techniques is analyzed on three benchmark facial datasets: ORL, AR, and FERET. The obtained results show that MNPP outperforms the standard approaches in terms of error rate.
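A small sketch of the tensor operation MNPP relies on: unfolding (matricizing) a data tensor along each mode, which is what yields one subspace per tensorial direction. Only this unfolding step is shown, not MNPP itself, and the toy tensor shape is an assumption.

```python
# Hedged sketch: mode-n unfolding of a 3rd-order data tensor.
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move axis `mode` to the front, then flatten the remaining axes."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# A 3rd-order toy tensor: 40 face images of size 32 x 28.
rng = np.random.default_rng(6)
T = rng.normal(size=(40, 32, 28))
for mode in range(T.ndim):
    print(mode, unfold(T, mode).shape)   # (40, 896), (32, 1120), (28, 1280): one matrix per direction
```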

15.
Many pattern recognition applications involve high-dimensional data and the small sample size problem. Principal component analysis (PCA) is a commonly used dimension reduction technique, and linear discriminant analysis (LDA) is often employed for classification. PCA plus LDA is a well-known framework for discriminant analysis in high-dimensional spaces and in singular cases. In this paper, we examine the theory of this framework and find that, even when there is no small sample size problem, PCA dimension reduction cannot guarantee the subsequent successful application of LDA. We therefore develop an improved discriminant analysis method by introducing an inverse Fisher criterion and adding a constraint to the PCA procedure so that the singularity phenomenon does not occur. Experimental results on face recognition suggest that this new approach works well and can be applied even when the number of training samples is one per class.
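A minimal sketch of the plain PCA-plus-LDA baseline discussed above, using scikit-learn; it illustrates the framework, not the paper's improved inverse-Fisher variant, and the PCA dimension and toy data are assumptions.

```python
# Hedged sketch: the standard PCA-then-LDA pipeline for the singular case.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(7)
X = rng.normal(size=(60, 1024))          # 60 face vectors of dimension 1024 (dimension >> samples)
y = np.repeat(np.arange(6), 10)

# PCA first so that the within-class scatter becomes invertible, then LDA for discrimination.
model = make_pipeline(PCA(n_components=30), LinearDiscriminantAnalysis())
model.fit(X, y)
print(model.score(X, y))
```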

16.
In this paper, an efficient feature extraction method named locally linear discriminant embedding (LLDE) is proposed for face recognition. It is well known that in classical locally linear embedding (LLE) a point can be linearly reconstructed from its neighbors, with reconstruction weights under a sum-to-one constraint. The constrained weights therefore obey an important symmetry: for any particular data point, they are invariant to rotations, rescalings and translations. The latter two properties are exploited in the proposed method to strengthen the classification ability of the original LLE. Data with different class labels are translated by corresponding vectors, and those belonging to the same class are translated by the same vector. To cluster data with the same label more closely, they are also rescaled to some extent; after translation and rescaling, the discriminability of the data is improved significantly. The proposed method is compared with related feature extraction methods such as the maximum margin criterion (MMC), as well as other supervised manifold learning-based approaches, for example ensemble unified LLE and linear discriminant analysis (En-ULLELDA) and locally linear discriminant analysis (LLDA). Experimental results on the Yale and CMU PIE face databases show that the proposed method provides a better representation of the class information and obtains much higher recognition accuracies.

17.
This study presents a novel kernel discriminant transformation (KDT) algorithm for face recognition based on image sets. As each image set is represented by a kernel subspace, we formulate a KDT matrix that maximizes the similarities of within-kernel subspaces and simultaneously minimizes those of between-kernel subspaces. Although the KDT matrix cannot be computed explicitly in a high-dimensional feature space, we propose an iterative kernel discriminant transformation algorithm to solve for the matrix implicitly. Another similarity measure, namely the canonical difference, is also introduced for matching each pair of kernel subspaces and is employed to simplify the formulation. The proposed face recognition system is demonstrated to outperform existing still-image-based as well as image-set-based face recognition methods on the Yale Face Database B, Labeled Faces in the Wild and a self-compiled database.

18.
A novel image classification algorithm named Adaptively Weighted Sub-directional Two-Dimensional Linear Discriminant Analysis (AWS2DLDA) is proposed in this paper. AWS2DLDA can extract the directional features of images in the frequency domain, and it is applied to face recognition. Experiments are conducted to demonstrate the effectiveness of the proposed method, and the experimental results confirm that its recognition rate is higher than those of other popular algorithms.

19.
We propose an innovative technique, geometric linear discriminant analysis (Geometric LDA), which reduces the complexity of pattern recognition systems by using a linear transformation to lower the dimension of the observation space. We experimentally compare Geometric LDA to other dimensionality reduction methods found in the literature and show that it produces a linear transformation that is the same as, and in many cases significantly better than, those produced by the other methods.

20.
This paper addresses the small sample size problem in linear discriminant analysis, which occurs in face recognition applications. Belhumeur et al. [IEEE Trans. Pattern Anal. Mach. Intell. 19 (7) (1997) 711-720] proposed the FisherFace method. We find that the FisherFace method may fail because, even after the PCA transform, the corresponding within-class covariance matrix can still be singular; this phenomenon is verified on the Yale face database. Hence we propose to use an inverse Fisher criterion. Our method works even when the number of training images per class is one. Experimental results suggest that this new approach performs well.
