Similar Literature
10 similar documents found (search time: 125 ms)
1.
Maximum margin criterion (MMC) based feature extraction is more efficient than linear discriminant analysis (LDA) for computing discriminant vectors, since it does not require the inverse of the within-class scatter matrix. However, MMC ignores both the discriminative information in the local structures of samples and the structural information embedded in the images. In this paper, we develop a novel criterion, namely the Laplacian bidirectional maximum margin criterion (LBMMC), to address these issues. We formulate the image total Laplacian matrix, image within-class Laplacian matrix, and image between-class Laplacian matrix using the sample similarity weights widely used in machine learning. The proposed LBMMC-based feature extraction computes the discriminant vectors by maximizing the difference between the image between-class and image within-class Laplacian matrices in both the row and column directions. Experiments on the FERET and Yale face databases show the effectiveness of the proposed LBMMC-based feature extraction method.
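A minimal sketch of the baseline MMC projection described above (not the paper's LBMMC, whose bidirectional Laplacian construction is not reproduced here), assuming NumPy and integer class labels: the discriminant vectors are the leading eigenvectors of Sb - Sw, so no inverse of the within-class scatter matrix is needed.

```python
import numpy as np

def mmc_projection(X, y, n_components):
    """X: (n_samples, n_features); y: integer class labels."""
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))  # between-class scatter
    Sw = np.zeros((d, d))  # within-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
        Sw += (Xc - mc).T @ (Xc - mc)
    # Symmetric eigenproblem: no inversion of Sw is required.
    vals, vecs = np.linalg.eigh(Sb - Sw)
    return vecs[:, np.argsort(vals)[::-1][:n_components]]
```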

2.
Kernel principal component analysis (KPCA) and kernel linear discriminant analysis (KLDA) are two commonly used and effective methods for dimensionality reduction and feature extraction. In this paper, we propose a KLDA method based on maximal class separability for extracting the optimal features of analog fault data sets, and compare it with principal component analysis (PCA), linear discriminant analysis (LDA), and KPCA. We also develop a novel particle swarm optimization (PSO) based algorithm that jointly tunes the parameters and structures of neural networks. Our study shows that KLDA is overall superior to PCA, LDA, and KPCA in feature extraction performance, and that the proposed PSO-based algorithm is convenient to implement and trains better than the back-propagation algorithm. The simulation results demonstrate the effectiveness of these methods.
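A sketch of a generic PSO loop of the kind used here for network tuning; the fitness function, search bounds, and coefficient values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def pso(fitness, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize `fitness` over a dim-dimensional parameter vector."""
    rng = np.random.default_rng(0)
    pos = rng.uniform(-1, 1, (n_particles, dim))   # assumed search bounds
    vel = np.zeros_like(pos)
    pbest = pos.copy()                             # per-particle best
    pbest_val = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()       # swarm-wide best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Standard velocity update: inertia + cognitive + social terms.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        vals = np.array([fitness(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest
```

In the setting the abstract describes, `fitness` would be a validation-error measure of the neural network decoded from the particle's position.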

3.
Principal component analysis (PCA) and linear discriminant analysis (LDA) are two important feature extraction methods that have been widely applied in a variety of areas. A limitation of PCA and LDA is that, when dealing with image data, the image matrices must first be transformed into vectors, which are usually of very high dimensionality. This incurs expensive computational cost and can cause the singularity problem. Recently, two methods called two-dimensional PCA (2DPCA) and two-dimensional LDA (2DLDA) were proposed to overcome this disadvantage by working directly on 2-D image matrices without a vectorization procedure. 2DPCA and 2DLDA significantly reduce the computational effort and the risk of singularity in feature extraction. In this paper, we show that these matrix-based 2-D algorithms are equivalent to special cases of image-block-based feature extraction, i.e., partitioning each image into several blocks and performing standard PCA or LDA on the aggregate of all image blocks. These results provide a better understanding of the 2-D feature extraction approaches.
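A minimal 2DPCA sketch in NumPy, assuming images arrive as an (n, h, w) array; the block-based equivalence result itself is not re-derived here. The image covariance is built directly from the 2-D matrices, with no vectorization.

```python
import numpy as np

def two_d_pca(images, n_components):
    """images: (n_images, h, w); returns projected features (n, h, n_components)."""
    mean_img = images.mean(axis=0)
    centered = images - mean_img
    # G is w x w: the average of A^T A over the centered image matrices.
    G = np.einsum('nij,nik->jk', centered, centered) / len(images)
    vals, vecs = np.linalg.eigh(G)
    W = vecs[:, np.argsort(vals)[::-1][:n_components]]
    return np.array([img @ W for img in images])
```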

4.
In practice, many applications require a dimensionality reduction method that can deal with partially labeled data. In this paper, we propose a semi-supervised dimensionality reduction framework that efficiently handles unlabeled data. Under the framework, several classical methods, such as principal component analysis (PCA), linear discriminant analysis (LDA), maximum margin criterion (MMC), locality preserving projections (LPP), and their corresponding kernel versions can be seen as special cases. For high-dimensional data, the framework yields a low-dimensional embedding that both discriminates multi-class sub-manifolds and preserves local manifold structure. Experiments show that our algorithms significantly improve the accuracy rates of the corresponding supervised and unsupervised approaches.
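One concrete instance of such a framework, sketched under assumptions the abstract does not fix (a k-NN similarity graph, a trade-off weight alpha, and a small ridge for invertibility): labeled between-class scatter is traded against an unsupervised locality-preserving Laplacian built over all samples, labeled and unlabeled.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def semi_supervised_projection(X_all, X_lab, y_lab, n_components, k=5, alpha=0.1):
    d = X_all.shape[1]
    # Supervised part: between-class scatter from the labeled samples only.
    mean_lab = X_lab.mean(axis=0)
    Sb = np.zeros((d, d))
    for c in np.unique(y_lab):
        Xc = X_lab[y_lab == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean_lab, mc - mean_lab)
    # Unsupervised part: k-NN graph Laplacian over all samples.
    D = cdist(X_all, X_all)
    A = np.zeros_like(D)
    for i, nbrs in enumerate(np.argsort(D, axis=1)[:, 1:k + 1]):
        A[i, nbrs] = A[nbrs, i] = 1.0
    L = np.diag(A.sum(axis=1)) - A
    Xc_all = X_all - X_all.mean(axis=0)
    St = Xc_all.T @ Xc_all
    # Discriminate classes while preserving locality; the ridge keeps B invertible.
    B = St + alpha * X_all.T @ L @ X_all + 1e-6 * np.eye(d)
    vals, vecs = eigh(Sb, B)
    return vecs[:, np.argsort(vals)[::-1][:n_components]]
```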

5.
Feature extraction is an important component of a pattern recognition system. It performs two tasks: transforming the input parameter vector into a feature vector and/or reducing its dimensionality. A well-designed feature extraction algorithm makes the classification process more effective and efficient. Two popular methods for feature extraction are linear discriminant analysis (LDA) and principal component analysis (PCA). In this paper, the minimum classification error (MCE) training algorithm, originally proposed for optimizing classifiers, is investigated for feature extraction, and a generalized MCE (GMCE) training algorithm is proposed to remedy the shortcomings of MCE training. LDA, PCA, MCE, and GMCE all extract features through a linear transformation. Support vector machine (SVM) is a more recently developed pattern classification algorithm that uses non-linear kernel functions to achieve non-linear decision boundaries in the parametric space; in this paper, SVM is also investigated and compared with the linear feature extraction algorithms.
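A sketch of the smoothed misclassification loss at the core of MCE-style training, assuming a nearest-class-mean classifier on the extracted features and integer labels 0..K-1 (both simplifying assumptions); training would minimize this loss over the linear transform T by gradient descent, rather than by the closed-form eigen-solutions of LDA or PCA.

```python
import numpy as np

def mce_loss(T, X, y, xi=1.0):
    """T: (n_features, n_components) linear transform; y: labels 0..K-1."""
    Z = X @ T                                          # extracted features
    means = np.array([Z[y == c].mean(axis=0) for c in np.unique(y)])
    dists = ((Z[:, None, :] - means[None]) ** 2).sum(-1)  # sample-to-class-mean
    g = -dists                                         # discriminant functions
    g_true = g[np.arange(len(y)), y]                   # score of the true class
    rival_mask = np.eye(len(means), dtype=bool)[y]     # masks each sample's class
    g_rival = np.where(rival_mask, -np.inf, g).max(axis=1)  # best rival score
    # Sigmoid-smoothed zero-one loss: near 1 when a rival beats the true class.
    return (1.0 / (1.0 + np.exp(-xi * (g_rival - g_true)))).mean()
```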

6.
Appearance-based methods, especially linear discriminant analysis (LDA), have been very successful in facial feature extraction, but the recognition performance of LDA is often degraded by the so-called "small sample size" (SSS) problem. One popular solution to the SSS problem is principal component analysis (PCA) + LDA (Fisherfaces), but LDA in other low-dimensional subspaces may be more effective. In this correspondence, we propose a novel fast feature extraction technique, bidirectional PCA (BDPCA) plus LDA (BDPCA + LDA), which performs LDA in the BDPCA subspace. Two face databases, the ORL and the Facial Recognition Technology (FERET) databases, are used to evaluate BDPCA + LDA. Experimental results show that BDPCA + LDA has lower computational and memory requirements and higher recognition accuracy than PCA + LDA.
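A minimal BDPCA sketch, assuming NumPy and an (n, h, w) image array: PCA is applied in both the row and column directions of the image matrices, producing a compact kr x kc feature matrix per image; the subsequent LDA stage (run on the vectorized output below) is the standard one and is omitted.

```python
import numpy as np

def bdpca(images, kr, kc):
    """images: (n, h, w); returns vectorized bidirectional features (n, kr*kc)."""
    centered = images - images.mean(axis=0)
    Srow = np.einsum('nij,nkj->ik', centered, centered) / len(images)  # h x h
    Scol = np.einsum('nij,nik->jk', centered, centered) / len(images)  # w x w
    def top_eigvecs(S, k):
        vals, vecs = np.linalg.eigh(S)
        return vecs[:, np.argsort(vals)[::-1][:k]]
    Wr = top_eigvecs(Srow, kr)   # row-direction projector
    Wc = top_eigvecs(Scol, kc)   # column-direction projector
    feats = np.array([Wr.T @ A @ Wc for A in images])  # (n, kr, kc)
    return feats.reshape(len(images), -1)  # vectorize for the LDA stage
```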

7.
This paper studies feature extraction methods and proposes a new one that overcomes the lack of discriminative information in the feature extraction method proposed by Roman W et al. High-dimensional face data are first reduced with PCA and LDA, and the dimensionality is then compressed further using an attribute reduction algorithm from rough set theory. Experimental results show that the method performs well.
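The abstract does not spell out which rough-set reduction it uses; below is a sketch of one standard choice, a greedy reduct driven by the dependency degree (positive-region size), assuming the PCA/LDA-reduced features have already been discretized into integer-valued attributes.

```python
import numpy as np

def dependency(cond, y):
    """Rough-set dependency degree: the fraction of samples whose
    equivalence class under the chosen attributes is pure in y."""
    _, inv = np.unique(cond, axis=0, return_inverse=True)
    pos = 0
    for g in np.unique(inv):
        labels = y[inv == g]
        if len(np.unique(labels)) == 1:   # class lies in the positive region
            pos += len(labels)
    return pos / len(y)

def greedy_reduct(cond, y):
    """Greedily add attributes until they separate y as well as all attributes do."""
    selected, remaining = [], list(range(cond.shape[1]))
    target = dependency(cond, y)
    while remaining and dependency(cond[:, selected], y) < target:
        best = max(remaining,
                   key=lambda a: dependency(cond[:, selected + [a]], y))
        selected.append(best)
        remaining.remove(best)
    return selected
```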

8.
Locality-preserved maximum information projection.
Dimensionality reduction is commonly required in artificial intelligence and machine learning. Linear projection of features is of particular interest for dimensionality reduction, since it is simple to calculate and to analyze analytically. In this paper, we propose an essentially linear projection technique, called locality-preserved maximum information projection (LPMIP), to identify the underlying manifold structure of a data set. LPMIP considers both the within-locality and the between-locality in manifold learning; equivalently, its goal is to preserve the local structure while simultaneously maximizing the out-of-locality (global) information of the samples. Unlike principal component analysis (PCA), which aims to preserve global information, and locality-preserving projections (LPP), which favors the local structure of the data set, LPMIP seeks a tradeoff between the global and local structures, adjusted by a parameter alpha, so as to find a subspace that reveals the intrinsic manifold structure for classification tasks. Computationally, by constructing the adjacency matrix, LPMIP is formulated as an eigenvalue problem; it yields orthogonal basis functions and completely avoids the singularity problem that exists in LPP. Further, we develop an efficient and stable LPMIP/QR algorithm for implementing LPMIP, especially on high-dimensional data sets. Theoretical analysis shows that conventional linear projection methods such as (weighted) PCA, maximum margin criterion (MMC), linear discriminant analysis (LDA), and LPP can be derived from the LPMIP framework by setting different graph models and constraints. Extensive experiments on face, digit, and facial expression recognition show the effectiveness of the proposed LPMIP method.
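The exact LPMIP objective is developed in the paper; the sketch below only illustrates the alpha-weighted global/local trade-off it describes, as a single symmetric eigenproblem that yields orthogonal basis vectors and needs no matrix inversion. The k-NN adjacency construction and this particular weighting are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import cdist

def lpmip_like(X, n_components, k=5, alpha=0.5):
    """X: (n_samples, n_features); alpha trades global vs. local structure."""
    n = len(X)
    D = cdist(X, X)
    A = np.zeros((n, n))
    for i, nbrs in enumerate(np.argsort(D, axis=1)[:, 1:k + 1]):
        A[i, nbrs] = A[nbrs, i] = 1.0          # symmetric k-NN adjacency
    L = np.diag(A.sum(axis=1)) - A             # graph Laplacian (within-locality)
    Xc = X - X.mean(axis=0)
    St = Xc.T @ Xc / n                         # global (total) scatter
    Sl = X.T @ L @ X / n                       # within-locality scatter
    # Symmetric eigenproblem: eigenvectors of eigh are orthogonal by construction.
    vals, vecs = np.linalg.eigh(St - alpha * Sl)
    return vecs[:, np.argsort(vals)[::-1][:n_components]]
```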

9.
Algorithms on streaming data have attracted increasing attention in the past decade. Among them, dimensionality reduction algorithms are of particular interest because many real tasks demand them. Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are two of the most widely used dimensionality reduction approaches. However, PCA is not optimal for general classification problems, because it is unsupervised and ignores the valuable label information. The performance of LDA, in turn, is degraded by the limited number of available low-dimensional projections and by the singularity problem. Recently, the Maximum Margin Criterion (MMC) was proposed to overcome the shortcomings of PCA and LDA. Nevertheless, the original MMC algorithm does not fit the streaming data model and therefore cannot handle large-scale, high-dimensional data sets, so an effective, efficient, and scalable approach is needed. In this paper, we propose a supervised incremental dimensionality reduction algorithm, and an extension of it, that infer adaptive low-dimensional spaces by optimizing the maximum margin criterion. Experimental results on a synthetic dataset and real datasets demonstrate the superior performance of our proposed algorithm on streaming data.
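A sketch of the streaming idea: per-class statistics are maintained one sample at a time (a plain Welford-style running estimator, standing in for the paper's actual update rules), so the MMC projection can be re-solved from them on demand without revisiting past data.

```python
import numpy as np

class IncrementalMMC:
    def __init__(self, dim):
        self.dim = dim
        self.counts, self.sums = {}, {}      # per-class sample counts and sums
        self.total_sum = np.zeros(dim)
        self.Sw = np.zeros((dim, dim))       # running within-class scatter
        self.n = 0

    def partial_fit(self, x, c):
        """Consume one streaming sample x with class label c."""
        self.n += 1
        self.total_sum += x
        if c not in self.counts:
            self.counts[c], self.sums[c] = 0, np.zeros(self.dim)
        old_mean = self.sums[c] / self.counts[c] if self.counts[c] else x
        self.counts[c] += 1
        self.sums[c] += x
        new_mean = self.sums[c] / self.counts[c]
        # Welford-style rank-one update of the within-class scatter.
        self.Sw += np.outer(x - old_mean, x - new_mean)

    def projection(self, n_components):
        """Solve the MMC eigenproblem from the current running statistics."""
        mean_all = self.total_sum / self.n
        Sb = np.zeros((self.dim, self.dim))
        for c, nc in self.counts.items():
            mc = self.sums[c] / nc
            Sb += nc * np.outer(mc - mean_all, mc - mean_all)
        vals, vecs = np.linalg.eigh(Sb - self.Sw)
        return vecs[:, np.argsort(vals)[::-1][:n_components]]
```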

10.
Facial feature extraction is the most important step in the face recognition pipeline, and the quality of the features directly affects recognition performance. Better face recognition therefore requires making full use of the information in the samples. To fully exploit the information contained in both training and test samples, this paper proposes a semi-supervised LDA (SLDA) feature extraction algorithm that uses sample scatter matrices to form a weighted combination of principal component analysis (PCA) and linear discriminant analysis (LDA). Inspired by combinatorial optimization problems, a binary genetic algorithm is then used to optimize the feature space obtained by the semi-supervised extraction. Experimental results on the ORL face database show that SLDA achieves a higher recognition rate than classical face recognition algorithms and several improved variants.
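A sketch of the weighted PCA/LDA blend the abstract describes, under assumptions it does not fix (the blending weight beta, a small ridge for invertibility, and this particular generalized-eigenproblem formulation); the binary-genetic-algorithm refinement of the resulting feature space is omitted.

```python
import numpy as np
from scipy.linalg import eigh

def slda_projection(X_all, X_lab, y_lab, n_components, beta=0.5):
    """X_all: all samples (training + test); X_lab, y_lab: labeled subset."""
    d = X_all.shape[1]
    St_all = np.cov(X_all.T)                      # PCA term: scatter of all samples
    Sb = np.zeros((d, d))                         # LDA terms: labeled data only
    Sw = np.zeros((d, d))
    mean_lab = X_lab.mean(axis=0)
    for c in np.unique(y_lab):
        Xc = X_lab[y_lab == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean_lab, mc - mean_lab)
        Sw += (Xc - mc).T @ (Xc - mc)
    num = beta * St_all + (1 - beta) * Sb         # weighted PCA/LDA numerator
    den = (1 - beta) * Sw + 1e-6 * np.eye(d)      # ridge keeps it positive definite
    vals, vecs = eigh(num, den)                   # generalized eigenproblem
    return vecs[:, np.argsort(vals)[::-1][:n_components]]
```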
