Similar literature
20 similar documents found (search time: 0 ms)
1.
In this paper, a new discriminant analysis for feature extraction is derived from the perspective of least squares regression. To obtain strong discriminative power between classes, all the data points in each class are regressed to a single vector, and the basic task is to find a transformation matrix that minimizes the squared regression error. To this end, two least squares discriminant analysis methods are developed, one under an orthogonal constraint and one under an uncorrelated constraint. We show that orthogonal least squares discriminant analysis is an extension of null space linear discriminant analysis, and that uncorrelated least squares discriminant analysis is exactly equivalent to traditional linear discriminant analysis. Comparative experiments show that the orthogonal variant is preferable for real-world applications.
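The regression view described above can be sketched as an ordinary least squares problem: assign every sample of class k the same target row and solve for the transformation in closed form. This is a minimal illustration, assuming class-indicator targets (the paper's exact target vectors and constraints may differ); function and parameter names are illustrative.

```python
import numpy as np

def ls_discriminant(X, y, reg=1e-6):
    """Least squares discriminant: regress each class to one target vector.

    X: (n_samples, n_features); y: integer class labels.
    Targets here are class-indicator rows (an assumption, one common
    choice). Returns the transformation matrix W minimizing ||XW - T||^2.
    """
    classes = np.unique(y)
    T = np.zeros((X.shape[0], len(classes)))
    for k, c in enumerate(classes):
        T[y == c, k] = 1.0  # all points of class c share one target vector
    # Closed-form minimizer of ||X W - T||_F^2 (ridge term for stability)
    W = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ T)
    return W

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
y = np.repeat([0, 1], 20)
W = ls_discriminant(X, y)
print(W.shape)  # (5, 2)
```

The orthogonal or uncorrelated constraints of the paper would be imposed on W on top of this unconstrained solution.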

2.
In graph embedding based methods, we usually need to manually choose the nearest neighbors and then compute the edge weights from them via the L2 norm (e.g. LLE). Manually choosing nearest neighbors in a high-dimensional space is difficult and unstable, so automatic graph construction is very important. In this paper, first, we introduce an L2-graph, analogous to the L1-graph: the L2-graph computes edge weights using all samples, avoiding manual neighbor selection. Second, an L2-graph based feature extraction method is presented, called collaborative representation based projections (CRP). Like SPP, CRP aims to preserve the collaborative-representation-based reconstruction relationship of the data. CRP uses an L2-norm graph to characterize local compactness information, and maximizes the ratio between the total separability information and the local compactness information to find the optimal projection matrix. CRP is much faster than SPP, since CRP computes its objective function with the L2 norm while SPP uses the L1 norm. Experimental results on the FERET, AR, and Yale face databases and the PolyU finger-knuckle-print database demonstrate that CRP works well for feature extraction and leads to good recognition performance.
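The automatic L2-graph construction above can be sketched as a ridge regression of each sample on all the others, whose closed-form solution gives the edge weights without any neighbor selection. This is a sketch of the collaborative-representation idea, not the paper's exact formulation; the regularization parameter name is illustrative.

```python
import numpy as np

def l2_graph(X, lam=0.1):
    """Build an L2-graph: each sample's edge weights come from a ridge
    regression on all remaining samples (no manual neighbor choice).

    X: (n_samples, n_features). Returns the (n, n) weight matrix W
    with a zero diagonal.
    """
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.arange(n) != i
        D = X[idx].T  # dictionary built from all other samples
        # w = argmin ||x_i - D w||^2 + lam ||w||^2, in closed form
        w = np.linalg.solve(D.T @ D + lam * np.eye(n - 1), D.T @ X[i])
        W[i, idx] = w
    return W

X = np.random.default_rng(1).normal(size=(10, 4))
W = l2_graph(X)
print(W.shape)  # (10, 10)
```

Because each weight vector is a single linear solve rather than an L1-regularized program, this construction is much cheaper than the L1-graph used by SPP, which is the speed advantage the abstract claims for CRP.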

3.
Linear discriminant analysis (LDA) is a well-known feature extraction technique. In this paper, we point out that LDA is not perfect, because it only utilizes the discriminatory information in the first-order statistical moments and ignores the information contained in the second-order statistical moments. We enhance LDA using the idea of the K-L expansion technique and develop a new combined LDA-KL method, which makes full use of both kinds of discriminatory information. The proposed method is tested on the Concordia University CENPARMI handwritten numeral database. The experimental results indicate that the proposed LDA-KL method is more powerful than the existing techniques of LDA, K-L expansion, and their combination, OLDA-PCA. Moreover, the proposed method is further generalized to feature extraction in the complex feature space and can serve as an effective tool for feature fusion. An erratum to this article is available.

4.
This work proposes a method to decompose the kernel within-class eigenspace into two subspaces: a reliable subspace spanned mainly by the facial variation, and an unreliable subspace arising from the limited number of training samples. A weighting function is proposed to circumvent undue scaling of the eigenvectors corresponding to the unreliable small and zero eigenvalues. Eigenfeatures are then extracted by discriminant evaluation in the whole kernel space. These efforts yield a discriminative and stable low-dimensional feature representation of the face image. Experimental results on the FERET, ORL and GT databases show that our approach consistently outperforms other kernel-based face recognition methods.

5.
The semi-supervised Gaussian mixture model (SGMM) has been successfully applied to a wide range of engineering and scientific fields, including text classification, image retrieval, and biometric identification. Recently, many studies have shown that naturally occurring data may reside on or near manifold structures in the ambient space. In this paper, we study the use of SGMM for data sets containing multiple separated or intersecting manifold structures. We propose a new multi-manifold regularized, semi-supervised Gaussian mixture model (M2SGMM) for classifying multiple manifolds. Specifically, we model the data manifold using a similarity graph with local and geometrical consistency properties, where geometrical similarity is measured by a novel application of the local tangent space. We regularize the model parameters of the SGMM by incorporating the enhanced Laplacian of the graph. Experiments demonstrate the effectiveness of the proposed approach.

6.
Two-dimensional local graph embedding discriminant analysis (2DLGEDA) and two-dimensional discriminant locality preserving projections (2DDLPP) were recently proposed to directly extract features from 2D face matrices and improve the performance of two-dimensional locality preserving projections (2DLPP). However, both require a high computational cost, and the learned transform matrices lack intuitive, semantic interpretations. In this paper, we propose a novel method called sparse two-dimensional locality discriminant projections (S2DLDP), a sparse extension of graph-based image feature extraction. S2DLDP combines spectral analysis with L1-norm regression via the Elastic Net to learn sparse projections. Unlike existing 2D methods such as 2DLPP, 2DDLPP and 2DLGEDA, S2DLDP can learn sparse 2D face profile subspaces (also called sparsefaces), which give an intuitive, semantic and interpretable feature subspace for face representation. We point out that using S2DLDP for face feature extraction is, in essence, to project the 2D face images onto the semantic face profile subspaces, on which face recognition is then performed. Experiments on the Yale, ORL and AR face databases show the efficiency and effectiveness of S2DLDP.

7.
A new method of feature fusion and its application in image recognition (cited 9 times: 0 self-citations, 9 by others)

8.
Linear discriminant analysis (LDA) is one of the most popular supervised feature extraction techniques in machine learning and pattern classification. However, LDA only captures the global geometrical structure of the data and ignores the geometrical structure of local data points. Though many articles have been published to address this issue, most are incomplete in the sense that only part of the local information is used. We show here that there are three kinds of local information in total: local similarity information, local intra-class pattern variation, and local inter-class pattern variation. We first propose a new method called enhanced within-class LDA (EWLDA) to incorporate the local similarity information, and then propose a complete framework called complete global-local LDA (CGLDA) to incorporate all three kinds of local information. Experimental results on two image databases demonstrate the effectiveness of our algorithms.
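The global baseline that EWLDA and CGLDA extend is classical Fisher LDA, which uses only the global scatter matrices. As a reference point, the generalized eigenproblem it solves can be sketched as follows (a standard textbook formulation, not the paper's local-information extensions).

```python
import numpy as np

def fisher_lda(X, y, n_components=1, reg=1e-6):
    """Classical LDA: solve the generalized eigenproblem Sb w = lambda Sw w.

    Uses only global within-class (Sw) and between-class (Sb) scatter,
    i.e. exactly the global information the abstract says LDA is limited to.
    """
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)          # within-class scatter
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)        # between-class scatter
    M = np.linalg.solve(Sw + reg * np.eye(d), Sb)
    evals, evecs = np.linalg.eig(M)
    order = np.argsort(-evals.real)            # largest ratio first
    return evecs[:, order[:n_components]].real

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (30, 3)), rng.normal(3, 1, (30, 3))])
y = np.repeat([0, 1], 30)
W = fisher_lda(X, y)
print(W.shape)  # (3, 1)
```

The local variants in the paper modify Sw and Sb with neighborhood-based terms; the eigenproblem structure stays the same.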

9.
10.
Generalized linear discriminant analysis has been successfully used as a dimensionality reduction technique in many classification tasks. An analytical method for finding the optimal set of generalized discriminant vectors is proposed in this paper. Compared with other methods, the proposed method has the advantage of requiring less computational time and achieving higher recognition rates. The results of experiments conducted on the Olivetti Research Lab facial database show the effectiveness of the proposed method.

11.
Dimensionality reduction has many applications in pattern recognition, machine learning and computer vision. In this paper, we develop a general regularization framework for dimensionality reduction that allows the use of different functions in the cost function. This is especially important because it enables robustness in the presence of outliers. It is shown that optimizing the regularized cost function is, under certain conditions, equivalent to solving a nonlinear eigenvalue problem, which can be handled by the self-consistent field (SCF) iteration. Moreover, this framework applies to both unsupervised and supervised learning, by defining a regularization term that encodes prior knowledge of the projected samples or projection vectors. We also note that several linear projection methods can be recovered from this framework by choosing different functions and imposing different constraints. Finally, we demonstrate applications of our framework on various data sets, including handwritten characters, face images, UCI data, and gene expression data.

12.
This paper proposes a new discriminant analysis with orthonormal coordinate axes of the feature space. In general, the number of coordinate axes of the feature space in traditional discriminant analysis depends on the number of pattern classes, so the discriminatory capability of the feature space is considerably limited. The new discriminant analysis solves this problem completely. In addition, it is more powerful than the traditional one with respect to discriminatory power and the mean error probability per coordinate axis. This is also shown by a numerical example.

13.
A fast method of feature extraction for kernel MSE (cited 1 time: 0 self-citations, 1 by others)
In this paper, a fast method of selecting features for kernel minimum squared error (KMSE) is proposed to mitigate the computational burden when the number of training patterns is large. Compared with other existing feature-selection algorithms for KMSE, this iterative KMSE (IKMSE) improves computational efficiency without sacrificing generalization performance. Experiments on benchmark data sets, a nonlinear autoregressive model, and a real-world problem demonstrate the efficacy and feasibility of the proposed IKMSE. In addition, IKMSE can easily be extended to classification tasks.
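The full-kernel KMSE baseline that IKMSE accelerates can be sketched as regularized least squares in an RBF kernel space: fit the coefficients by solving one linear system, then predict with kernel evaluations against the training set. This is a sketch of plain KMSE under assumed parameter names, not the paper's iterative selection scheme.

```python
import numpy as np

def kmse_fit(X, y, gamma=0.5, lam=1e-3):
    """Kernel minimum squared error with an RBF kernel.

    Solves (K + lam*I) alpha = y; the cost grows with the number of
    training patterns, which is the burden IKMSE reduces by selecting
    a subset of kernel features.
    """
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return alpha, X, gamma

def kmse_predict(model, Xnew):
    alpha, Xtr, gamma = model
    d = np.sum((Xnew[:, None, :] - Xtr[None, :, :]) ** 2, axis=2)
    return np.exp(-gamma * d) @ alpha

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 2))
y = np.sign(X[:, 0] * X[:, 1])          # nonlinearly separable labels
model = kmse_fit(X, y)
acc = np.mean(np.sign(kmse_predict(model, X)) == y)
print(acc)
```

IKMSE would replace the full (n, n) system with one built from a small selected subset of training patterns.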

14.
Derived from traditional manifold learning algorithms, local discriminant analysis methods identify the underlying submanifold structures while employing discriminative information for dimensionality reduction. Mathematically, they can all be unified into a graph embedding framework with different construction criteria. However, such learning algorithms are limited by the curse of dimensionality if the original data lie on a high-dimensional manifold. Unlike the existing algorithms, we treat the discriminant embedding as a kernel analysis approach in the sample space, and propose a kernel-view based discriminant method for embedded feature extraction, in which both PCA pre-processing and data pruning can be avoided. Extensive experiments on high-dimensional data sets show the robustness and outstanding performance of our proposed method.

15.
In this paper, we combine two kinds of features together by virtue of complex vectors and then use the developed generalized K-L transform (or expansion) for feature extraction. The experiments on the NUST603 handwritten Chinese character database and the CENPARMI handwritten digit database indicate that the proposed method can improve the recognition rate significantly.
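The complex-vector fusion described above can be sketched as follows: stack one feature set into the real part and the other into the imaginary part, then run a K-L expansion (eigendecomposition of the Hermitian covariance) on the fused complex data. This is an illustrative sketch of the idea; the paper's generalized K-L transform may differ in detail.

```python
import numpy as np

def complex_fusion_kl(A, B, n_components=2):
    """Fuse two feature sets as z = a + i*b, then extract features by a
    K-L expansion: project onto top eigenvectors of the Hermitian
    covariance of the fused complex vectors.

    A, B: (n_samples, n_features) real feature matrices of equal shape.
    """
    Z = A + 1j * B                        # complex fused features
    Zc = Z - Z.mean(axis=0)
    C = (Zc.conj().T @ Zc) / len(Z)       # Hermitian covariance matrix
    evals, evecs = np.linalg.eigh(C)      # real eigenvalues, ascending
    P = evecs[:, ::-1][:, :n_components]  # top components first
    return Z @ P

rng = np.random.default_rng(5)
A = rng.normal(size=(30, 4))
B = rng.normal(size=(30, 4))
F = complex_fusion_kl(A, B)
print(F.shape, F.dtype)  # (30, 2) complex128
```

The resulting complex features carry information from both original feature sets in a single reduced representation.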

16.
In this paper an efficient feature extraction method named locally linear discriminant embedding (LLDE) is proposed for face recognition. It is well known that in classical locally linear embedding (LLE) a point can be linearly reconstructed from its neighbors, with the reconstruction weights under a sum-to-one constraint. The constrained weights therefore obey an important symmetry: for any particular data point, they are invariant to rotations, rescalings, and translations. The latter two invariances are exploited in the proposed method to strengthen the classification ability of the original LLE. Data with different class labels are translated by corresponding vectors, those belonging to the same class being translated by the same vector; to cluster same-label data more tightly, they are also rescaled to some extent. After translation and rescaling, the discriminability of the data is improved significantly. The proposed method is compared with related feature extraction methods such as maximum margin criterion (MMC), as well as other supervised manifold learning approaches, for example ensemble unified LLE with linear discriminant analysis (En-ULLELDA) and locally linear discriminant analysis (LLDA). Experimental results on the Yale and CMU PIE face databases show that the proposed method provides a better representation of the class information and obtains much higher recognition accuracies.
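The sum-to-one constrained reconstruction weights that LLDE builds on can be sketched with the standard LLE weight computation: for each point, solve a small regularized Gram system over its neighbors and normalize so the weights sum to one (which yields the translation invariance mentioned above). A standard sketch, not the LLDE-specific translation/rescaling step.

```python
import numpy as np

def lle_weights(X, k=3, reg=1e-3):
    """LLE reconstruction weights under the sum-to-one constraint.

    For each sample, reconstruct it from its k nearest neighbors; the
    normalization makes the weights invariant to translating the whole
    neighborhood, the symmetry the abstract refers to.
    """
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        d = np.sum((X - X[i]) ** 2, axis=1)
        nbrs = np.argsort(d)[1:k + 1]           # skip the point itself
        Z = X[nbrs] - X[i]                      # centered neighborhood
        G = Z @ Z.T
        G += reg * np.trace(G) * np.eye(k)      # regularize the Gram matrix
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs] = w / w.sum()                # enforce sum-to-one
    return W

X = np.random.default_rng(4).normal(size=(20, 3))
W = lle_weights(X)
print(np.allclose(W.sum(axis=1), 1.0))  # True
```

LLDE then applies class-dependent translations and rescalings to the data while these weights, by construction, stay valid.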

17.
Matrix-based methods such as generalized low rank approximations of matrices (GLRAM) have gained wide attention from researchers in the pattern recognition and machine learning communities. In this paper, a novel concept of bilinear Lanczos components (BLC) is introduced to approximate the projection vectors obtained from eigen-based methods without explicitly computing the eigenvectors of the matrix. This new method sequentially reduces the reconstruction error for a Frobenius-norm based optimization criterion, so the approximation improves over successive iterations. In addition, a theoretical clue for selecting suitable dimensionality parameters without losing classification information is presented. The BLC approach realizes dimensionality reduction and feature extraction using a small number of Lanczos components. Extensive experiments on face recognition and image classification are conducted to evaluate the efficiency and effectiveness of the proposed algorithm. Results show that the new approach is competitive with state-of-the-art methods, while having a much lower training cost.

18.
In existing Linear Discriminant Analysis (LDA) models, the class population mean is always estimated by the class sample average. In small sample size problems, however, such as face and palm recognition, the class sample average does not suffice to provide an accurate estimate of the class population mean from the few given samples, particularly when there are outliers in the training set. To overcome this weakness, the class median vector is used to estimate the class population mean in LDA modeling. The class median vector has two advantages over the class sample average: (1) the class median (image) vector preserves useful details in the sample images, and (2) the class median vector is robust to outliers in the training sample set. In addition, a weighting mechanism is adopted to refine the characterization of the within-class scatter and further improve the robustness of the proposed model. The proposed Median Fisher Discriminator (MFD) method was evaluated using the Yale and AR face image databases and the PolyU (Polytechnic University) palmprint database. The experimental results demonstrated the robustness and effectiveness of the proposed method.
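The core substitution in MFD, the coordinate-wise class median in place of the class sample average, can be illustrated in a few lines. The function name is illustrative; the example shows why the median is the more robust center estimate when a training set contains an outlier.

```python
import numpy as np

def class_centers(X, y, use_median=True):
    """Estimate each class's population mean by the coordinate-wise
    median (the MFD idea) or by the ordinary sample average."""
    centers = {}
    for c in np.unique(y):
        Xc = X[y == c]
        centers[c] = np.median(Xc, axis=0) if use_median else Xc.mean(axis=0)
    return centers

# One outlier drags the mean far off but barely moves the median.
X = np.array([[0.0], [0.1], [0.2], [10.0]])
y = np.zeros(4, dtype=int)
print(class_centers(X, y)[0])                    # [0.15]
print(class_centers(X, y, use_median=False)[0])  # [2.575]
```

In MFD these median vectors replace the class means inside the scatter matrices of Fisher discriminant analysis.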

19.
This paper presents an orientation operator to extract image local orientation features. We show that a proper employment of image integration leads to an unbiased orientation estimate, based on which an orientation operator is proposed. The resulting discrete operator has flexibility in the scale selection, as the scale change does not violate the bias minimization criteria. An analytical formula is developed to compare orientation biases of various discrete operators. The proposed operator shows lower bias than eight well-known gradient operators. Experiments further demonstrate higher orientation accuracy of the proposed operator than these gradient operators.

20.
Nonlinear kernel-based feature extraction algorithms have recently been proposed to alleviate the loss of class discrimination after feature extraction. When considering image classification, a kernel function may not be sufficiently effective if it depends only on an information resource from the Euclidean distance in the original feature space. This study presents an extended radial basis kernel function that integrates multiple discriminative information resources, including the Euclidean distance, spatial context, and class membership. The concepts related to Markov random fields (MRFs) are exploited to model the spatial context information existing in the image. Mutual closeness in class membership is defined as a similarity measure with respect to classification. Any dissimilarity from the additional information resources will improve the discrimination between two samples that are only a short Euclidean distance apart in the feature space. The proposed kernel function is used for feature extraction through linear discriminant analysis (LDA) and principal component analysis (PCA). Experiments with synthetic and natural images show the effectiveness of the proposed kernel function with application to image classification.
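The multi-resource idea above can be sketched by composing RBF kernels: one over the feature-space distance and one over a spatial-context distance, combined multiplicatively (a product of valid kernels is itself a valid kernel). This is a simplified sketch under assumed parameter names; the paper's full kernel also incorporates an MRF-based class-membership term not shown here.

```python
import numpy as np

def extended_rbf(x1, x2, s1, s2, sigma_f=1.0, sigma_s=1.0):
    """Composite RBF kernel: feature similarity times spatial similarity.

    x1, x2: feature vectors; s1, s2: pixel coordinates. Two samples that
    are close in feature space but far apart spatially get a reduced
    overall similarity, improving discrimination between them.
    """
    k_feat = np.exp(-np.sum((x1 - x2) ** 2) / (2 * sigma_f ** 2))
    k_spat = np.exp(-np.sum((s1 - s2) ** 2) / (2 * sigma_s ** 2))
    return k_feat * k_spat

x1, x2 = np.array([1.0, 2.0]), np.array([1.0, 2.0])
s1, s2 = np.array([0.0, 0.0]), np.array([5.0, 5.0])
# Identical features but distant pixels -> similarity below 1.
print(extended_rbf(x1, x2, s1, s2) < 1.0)  # True
```

Such a composite kernel can be plugged directly into kernel LDA or kernel PCA, as in the study's experiments.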
