Similar Documents
 Found 20 similar documents (search time: 31 ms)
1.
Linear subspace analysis methods have been successfully applied to extract features for face recognition. But because of their linear nature, they are inadequate to represent the complex and nonlinear variations of real face images, such as illumination, facial expression, and pose variations. In this paper, a nonlinear subspace analysis method, Kernel-based Nonlinear Discriminant Analysis (KNDA), is presented for face recognition, which combines the nonlinear kernel trick with the linear subspace analysis method Fisher Linear Discriminant Analysis (FLDA). First, the kernel trick is used to project the input data into an implicit feature space; then FLDA is performed in this feature space, yielding nonlinear discriminant features of the input data. In addition, to reduce the computational complexity, a geometry-based feature vector selection scheme is adopted. Another similar nonlinear subspace analysis method is Kernel-based Principal Component Analysis (KPCA), which combines the kernel trick with linear Principal Component Analysis (PCA). Experiments are performed with the polynomial kernel, and KNDA is compared with KPCA and FLDA. Extensive experimental results show that KNDA achieves a higher recognition rate than KPCA and FLDA.
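The two-step recipe in this abstract (implicit kernel mapping, then Fisher discrimination) can be sketched in its dual form for the two-class case. This is a minimal illustration following the standard kernel Fisher discriminant formulation, not the paper's exact algorithm; the polynomial kernel degree, regularizer, and toy data are assumptions.

```python
import numpy as np

def poly_kernel(A, B, degree=2):
    # Polynomial kernel k(a, b) = (a . b + 1)^degree
    return (A @ B.T + 1.0) ** degree

def kernel_fisher_direction(X, y, degree=2, reg=1e-3):
    """Two-class kernel Fisher discriminant (dual form): returns alpha such
    that a point x projects to sum_i alpha_i * k(x_i, x)."""
    K = poly_kernel(X, X, degree)
    n = len(y)
    classes = np.unique(y)
    # Kernelized class means M_c and within-class matrix N
    M = [K[:, y == c].mean(axis=1) for c in classes]
    N = np.zeros((n, n))
    for c in classes:
        Kc = K[:, y == c]
        nc = Kc.shape[1]
        N += Kc @ (np.eye(nc) - np.full((nc, nc), 1.0 / nc)) @ Kc.T
    # Fisher direction in the dual: alpha ~ N^-1 (M_0 - M_1), regularized
    return np.linalg.solve(N + reg * np.eye(n), M[0] - M[1])

# Toy data no linear discriminant can separate: a cluster inside a ring
rng = np.random.default_rng(0)
inner = rng.normal(0.0, 0.3, size=(30, 2))
theta = rng.uniform(0.0, 2.0 * np.pi, 30)
outer = np.c_[np.cos(theta), np.sin(theta)] * 2.0 + rng.normal(0.0, 0.1, (30, 2))
X = np.vstack([inner, outer])
y = np.repeat([0, 1], 30)

alpha = kernel_fisher_direction(X, y)
z = poly_kernel(X, X) @ alpha        # 1-D nonlinear discriminant feature
m0, m1 = z[y == 0].mean(), z[y == 1].mean()
pred = np.where(np.abs(z - m0) <= np.abs(z - m1), 0, 1)
acc = (pred == y).mean()
print(f"nearest-projected-mean accuracy: {acc:.2f}")
```

Because the degree-2 polynomial kernel implicitly contains the squared coordinates, the radial structure becomes linearly separable in the feature space, which is exactly the kind of variation a purely linear subspace method cannot capture.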

2.
Gender Classification of Human Faces   Cited by: 7 (self-citations: 0, others: 7)
Gender classification of human faces is the pattern recognition problem of determining a person's gender from a face image. This paper systematically studies the performance of different feature extraction and classification methods on the gender classification problem, including principal component analysis (PCA), Fisher linear discriminant analysis (FLD), optimal feature extraction, the AdaBoost algorithm, and support vector machines (SVM). Comparative experimental results are given on a 9-pose face database, the FERET face database, and a face database collected from web images. The experiments show that the gender information in face images is concentrated in a certain subspace; therefore, appropriately compressing and reducing the dimensionality of the samples before classification not only does not noticeably degrade classifier performance, but also greatly reduces the time cost of classification. Finally, a face-image-based gender classification system is described that integrates the gender classifier with an automatic face detection and feature extraction platform.

3.
Feature extraction is an important component of a pattern recognition system. It performs two tasks: transforming the input parameter vector into a feature vector and/or reducing its dimensionality. A well-defined feature extraction algorithm makes the classification process more effective and efficient. Two popular methods for feature extraction are linear discriminant analysis (LDA) and principal component analysis (PCA). In this paper, the minimum classification error (MCE) training algorithm (which was originally proposed for optimizing classifiers) is investigated for feature extraction. A generalized MCE (GMCE) training algorithm is proposed to mend the shortcomings of the MCE training algorithm. LDA, PCA, MCE, and GMCE all extract features through linear transformations. The support vector machine (SVM) is a more recently developed pattern classification algorithm that uses nonlinear kernel functions to achieve nonlinear decision boundaries in the parametric space. In this paper, SVM is also investigated and compared with the linear feature extraction algorithms.

4.
Current discriminant analysis methods are generally designed independently of classifiers, so the connection between discriminant analysis methods and classifiers is loose. This paper provides a way to design discriminant analysis methods that are bound to classifiers. We begin with a local mean based nearest neighbor (LM-NN) classifier and use its decision rule to supervise the design of a discriminator. The derived discriminator, called local mean based nearest neighbor discriminant analysis (LM-NNDA), therefore matches the LM-NN classifier optimally in theory. Whereas LM-NNDA is a discriminant analysis method induced by a nearest neighbor classifier, we further show that the classical Fisher linear discriminant analysis (FLDA) is a discriminant analysis method induced by a minimum distance classifier (i.e., a nearest class-mean classifier). The proposed LM-NNDA method is evaluated using the CENPARMI handwritten numeral database, the NUST603 handwritten Chinese character database, the ETH80 object category database, and the FERET face image database. The experimental results demonstrate the performance advantage of LM-NNDA over other feature extraction methods with respect to the LM-NN (or NN) classifier.

5.
Near-infrared spectroscopy was used to classify apple samples of different varieties, and a new qualitative analysis method for apple near-infrared spectra based on the uncorrelated discriminant transformation is proposed. In the experiments, three methods (principal component analysis, Fisher discriminant analysis, and the uncorrelated discriminant transformation) were used to extract features from the apple spectral data, and three apple classification models were built with the k-nearest neighbor algorithm; the models were then validated with leave-one-out cross-validation. The results show that the model built with the uncorrelated discriminant transformation achieves a higher correct recognition rate than the models built with principal component analysis or Fisher discriminant analysis.

6.
Classification of high-dimensional statistical data is usually not amenable to standard pattern recognition techniques because of an underlying small sample size problem. To address the problem of high-dimensional data classification in the face of a limited number of samples, a novel principal component analysis (PCA) based feature extraction/classification scheme is proposed. The proposed method yields a piecewise linear feature subspace and is particularly well-suited to difficult recognition problems where achievable classification rates are intrinsically low. Such problems are often encountered in cases where classes are highly overlapped, or in cases where a prominent curvature in data renders a projection onto a single linear subspace inadequate. The proposed feature extraction/classification method uses class-dependent PCA in conjunction with linear discriminant feature extraction and performs well on a variety of real-world datasets, ranging from digit recognition to classification of high-dimensional bioinformatics and brain imaging data.
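One common way to realize a class-dependent, piecewise linear PCA subspace of the kind described is to fit a separate PCA per class and assign a sample to the class whose subspace reconstructs it best. The following is a generic sketch of that idea, not the paper's exact scheme; the subspace dimension and toy data are assumptions.

```python
import numpy as np

def fit_class_pca(X, y, k):
    """Per-class PCA: store the mean and top-k principal directions of each class."""
    models = {}
    for c in np.unique(y):
        Xc = X[y == c]
        mu = Xc.mean(axis=0)
        # SVD of centered data; rows of Vt are principal directions
        _, _, Vt = np.linalg.svd(Xc - mu, full_matrices=False)
        models[c] = (mu, Vt[:k])
    return models

def predict(models, X):
    """Assign each sample to the class whose PCA subspace reconstructs it best."""
    labels = sorted(models)
    errs = []
    for c in labels:
        mu, V = models[c]
        Xc = X - mu
        recon = Xc @ V.T @ V          # project onto the subspace, then back
        errs.append(np.linalg.norm(Xc - recon, axis=1))
    return np.array(labels)[np.argmin(np.vstack(errs), axis=0)]

# Two classes lying near different 1-D subspaces of 3-D space: no single
# linear subspace fits both, but one PCA per class does
rng = np.random.default_rng(2)
t = rng.normal(size=(50, 1))
X0 = t * np.array([1.0, 0.0, 0.0]) + rng.normal(0, 0.05, (50, 3))
X1 = t * np.array([0.0, 1.0, 0.0]) + rng.normal(0, 0.05, (50, 3))
X = np.vstack([X0, X1])
y = np.repeat([0, 1], 50)

models = fit_class_pca(X, y, k=1)
acc = (predict(models, X) == y).mean()
```

The per-class subspaces capture the curvature that a single global projection would flatten out, which is the motivation the abstract gives for the piecewise linear design.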

7.
Kernel principal component analysis (KPCA) and kernel linear discriminant analysis (KLDA) are two commonly used and effective methods for dimensionality reduction and feature extraction. In this paper, we propose a KLDA method based on maximal class separability for extracting the optimal features of analog fault data sets, where the proposed KLDA method is compared with principal component analysis (PCA), linear discriminant analysis (LDA), and KPCA methods. Meanwhile, a novel particle swarm optimization (PSO) based algorithm is developed to tune the parameters and structures of neural networks jointly. Our study shows that KLDA is overall superior to PCA, LDA, and KPCA in feature extraction performance, and the proposed PSO-based algorithm is convenient to implement and trains better than the back-propagation algorithm. The simulation results demonstrate the effectiveness of these methods.

8.
An Effective Algorithm for Extracting and Recognizing Combined Features of Handwritten Chinese Characters   Cited by: 2 (self-citations: 0, others: 2)
Based on the idea of feature fusion, and from the perspective of pattern classification, this paper generalizes the theory of canonical correlation analysis and establishes a theoretical framework of generalized canonical correlation analysis for image recognition. Within this framework, a generalized canonical correlation criterion function is first used to compute sets of generalized projection vectors for two groups of feature vectors, forming a pair of transformation matrices; then, according to the proposed new feature fusion strategy, two kinds of handwritten Chinese character features are fused. The extracted correlation feature matrices of the patterns achieve good classification results with ordinary classifiers, outperforming existing feature fusion methods as well as the PCA and FLDA methods based on a single feature.

9.
Geometric Mean for Subspace Selection   Cited by: 2 (self-citations: 0, others: 2)
Subspace selection approaches are powerful tools in pattern classification and data visualization. One of the most important subspace approaches is the linear dimensionality reduction step in Fisher's linear discriminant analysis (FLDA), which has been successfully employed in many fields such as biometrics, bioinformatics, and multimedia information management. However, the linear dimensionality reduction step in FLDA has a critical drawback: for a classification task with c classes, if the dimension of the projected subspace is strictly lower than c - 1, the projection tends to merge classes that are close together in the original feature space. If separate classes are sampled from Gaussian distributions, all with identical covariance matrices, then the linear dimensionality reduction step in FLDA maximizes the mean value of the Kullback-Leibler (KL) divergences between different classes. Based on this viewpoint, the geometric mean for subspace selection is studied in this paper. Three criteria are analyzed: 1) maximization of the geometric mean of the KL divergences, 2) maximization of the geometric mean of the normalized KL divergences, and 3) the combination of 1 and 2. Preliminary experimental results based on synthetic data, the UCI Machine Learning Repository, and handwritten digits show that the third criterion is a promising discriminative subspace selection method that significantly reduces the class separation problem compared with the linear dimensionality reduction step in FLDA and several of its representative extensions.
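Under the homoscedastic Gaussian assumption stated in the abstract, the KL divergence between classes i and j after projection W reduces to a Mahalanobis-style quadratic form in the projected mean difference. The sketch below (toy means, identity covariance, and the two candidate directions are all illustrative assumptions) shows the class-merging effect: an arithmetic mean of pairwise divergences can prefer a projection that collapses a close pair of classes, while the geometric mean penalizes any near-zero divergence.

```python
import numpy as np
from itertools import combinations

def projected_kl(mi, mj, Sigma, W):
    """KL divergence between two Gaussians sharing covariance Sigma after
    projection onto the columns of W: 0.5 * d^T (W^T Sigma W)^-1 d."""
    d = W.T @ (mi - mj)
    return 0.5 * float(d @ np.linalg.solve(W.T @ Sigma @ W, d))

def mean_kl(W, means, Sigma, kind):
    kls = np.array([projected_kl(mi, mj, Sigma, W)
                    for mi, mj in combinations(means, 2)])
    if kind == "arithmetic":
        return float(kls.mean())
    return float(np.exp(np.log(kls + 1e-12).mean()))   # geometric mean

Sigma = np.eye(2)
# One far pair and one close pair of class means
means = [np.zeros(2), np.array([10.0, 0.0]), np.array([0.0, 1.0])]

W_merge = np.array([[1.0], [0.0]])   # keeps the far pair, collapses the close pair
W_mix = np.array([[0.6], [0.8]])     # unit-norm compromise direction

arith = [mean_kl(W, means, Sigma, "arithmetic") for W in (W_merge, W_mix)]
geo = [mean_kl(W, means, Sigma, "geometric") for W in (W_merge, W_mix)]
```

Here the arithmetic criterion ranks `W_merge` higher because the two large divergences dominate its average, even though classes 1 and 3 become indistinguishable; the geometric criterion ranks `W_mix` higher, which is the behavior the paper's criteria are designed to obtain.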

10.
An integrated principal component and linear discriminant analysis algorithm is proposed to carry out feature extraction on analog fault data. The integrated method first applies principal component analysis to the fault data, then performs linear discriminant analysis in the principal component space, and finally feeds the resulting optimal discriminant feature patterns to a pattern classifier for fault diagnosis. Simulation results show that the proposed method fully exploits the computational simplicity of linear methods, enhances the feature extraction performance of principal component analysis or linear discriminant analysis used alone, captures the essential features of the fault data set, simplifies the structure of the pattern classifier, and reduces the computational cost of running the system.
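The PCA-then-LDA pipeline described in this abstract can be sketched directly: reduce with PCA, run LDA inside the PCA subspace, and compose the two transforms. This is a generic illustration with assumed toy "fault" data and illustrative dimension choices, not the paper's exact configuration.

```python
import numpy as np

def pca(X, k):
    """PCA: returns the mean and a d x k matrix of top-k principal directions."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k].T

def lda(X, y, k, reg=1e-6):
    """LDA directions: leading eigenvectors of Sw^-1 Sb (Sw lightly regularized)."""
    d = X.shape[1]
    mean = X.mean(axis=0)
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + reg * np.eye(d), Sb))
    return evecs.real[:, np.argsort(evals.real)[::-1][:k]]

def pca_lda_fit(X, y, pca_dim, lda_dim):
    """Integrated analysis: PCA first, then LDA inside the PCA subspace."""
    mu, P = pca(X, pca_dim)
    W = lda((X - mu) @ P, y, lda_dim)
    return mu, P @ W                  # composed d x lda_dim transform

# Toy two-class "fault" data in 10-D, informative only in the first coordinate
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (60, 10)),
               rng.normal(0, 1, (60, 10)) + np.r_[4.0, np.zeros(9)]])
y = np.repeat([0, 1], 60)

mu, T = pca_lda_fit(X, y, pca_dim=5, lda_dim=1)
z = ((X - mu) @ T).ravel()
c0, c1 = z[y == 0].mean(), z[y == 1].mean()
pred = np.where(np.abs(z - c0) <= np.abs(z - c1), 0, 1)
acc = (pred == y).mean()
```

The PCA step keeps the discriminant problem small and well-conditioned, which is exactly the computational-cost argument the abstract makes for combining the two linear methods.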

11.
Feature extraction is an important preprocessing step of the classification procedure, particularly for high-dimensional data with a limited number of training samples. Conventional supervised feature extraction methods, for example, linear discriminant analysis (LDA), generalized discriminant analysis, and non-parametric weighted feature extraction, need to calculate scatter matrices. In these methods, within-class and between-class scatter matrices are used to formulate the criterion of class separability. Because of the limited number of training samples, accurate estimation of these matrices is not possible, so the classification accuracy of these methods falls in small sample size situations. To cope with this problem, a supervised feature extraction method named feature extraction using attraction points (FEUAP) has recently been proposed in which no statistical moments are used, so it works well with limited training samples. To take advantage of both this method and LDA, this article combines them in a dyadic scheme. In the proposed scheme, similar classes are grouped hierarchically by the k-means algorithm so that a tree with some nodes is constructed. The class of each pixel is then determined from this scheme: depending on the node of the tree, FEUAP is used where training samples are limited and LDA where they are plentiful. The experimental results demonstrate the better performance of the proposed hybrid method in comparison with other supervised feature extraction methods in a small sample size situation.

12.
Feature extraction helps to maximize the useful information within a feature vector by reducing the dimensionality and making the classification effective and simple. In this paper, a novel feature extraction method is proposed: genetic programming (GP) is used to discover features, while the Fisher criterion is employed to assign fitness values. This produces non-linear features for both two-class and multiclass recognition, reflecting the discriminating information between classes. Compared with other GP-based methods, which need to generate c discriminant functions for solving c-class (c > 2) pattern recognition problems, only one single feature, obtained by a single GP run, appears to be highly satisfactory in this approach. The proposed method is experimentally compared with non-linear feature extraction methods such as kernel generalized discriminant analysis and kernel principal component analysis. Results demonstrate the capability of the proposed approach to transform information from the high-dimensional feature space into a single-dimensional space by automatically discovering the relationships between data, producing improved performance.
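The Fisher-criterion fitness used above can be written down directly for a single scalar feature: between-class variance of the class means over the average within-class variance. The sketch below evaluates two hand-written candidate features (standing in for GP-evolved expressions, which are not reproduced here) on toy data where only a non-linear composition is discriminative.

```python
import numpy as np

def fisher_fitness(z, y):
    """Fisher criterion for a 1-D feature z: variance of class means
    over the mean within-class variance."""
    classes = np.unique(y)
    overall = z.mean()
    between = np.mean([(z[y == c].mean() - overall) ** 2 for c in classes])
    within = np.mean([z[y == c].var() for c in classes])
    return between / (within + 1e-12)

# Two classes distinguished by radius, not by any single raw coordinate
rng = np.random.default_rng(4)
X0 = rng.normal(0, 0.3, (50, 2))
ang = rng.uniform(0, 2 * np.pi, 50)
X1 = np.c_[np.cos(ang), np.sin(ang)] * 2 + rng.normal(0, 0.1, (50, 2))
X = np.vstack([X0, X1])
y = np.repeat([0, 1], 50)

linear_feature = X[:, 0]                          # a raw input: poor fitness
composed_feature = X[:, 0] ** 2 + X[:, 1] ** 2    # a GP-style composed feature

fit_linear = fisher_fitness(linear_feature, y)
fit_composed = fisher_fitness(composed_feature, y)
```

A GP search would explore expressions like `composed_feature` automatically; the fitness function simply rewards the ones whose single projected value already separates the classes.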

13.
Standard unsupervised linear feature extraction methods find orthonormal (PCA) or statistically independent (ICA) latent variables that are good for data representation. These representative features may not be optimal for classification tasks, thus requiring a search for linear projections that give a good discriminative model. A semi-supervised linear feature extraction method, dICA, has recently been proposed which jointly maximizes the Fisher linear discriminant (FLD) and the negentropy of the extracted features [Dhir and Lee, Discriminant independent component analysis, in: Proceedings of the International Conference on Intelligent Data Engineering and Automated Learning, LNCS 5788:219-225, 2009]. Motivated by the independence and unit covariance of the extracted dICA features, maximizing the determinant of the between-class scatter matrix of the features is theoretically the same as maximizing the FLD; this also reduces the computational complexity of the algorithm. In this paper, we concentrate on text databases that follow an inherently exponential distribution. Approximation and maximization of negentropy for data with asymmetric distributions are discussed. Experiments on the text categorization problem show improvements in classification performance and data reconstruction.

14.
This paper deals with the Chilean red wine varietal classification problem. The problem is solved here by using one of the simplest statistical classification methods, quadratic discriminant analysis (QDA), together with a recently introduced nonlinear feature extraction technique called the quadratic Fisher transformation. Classification is based on liquid chromatograms of polyphenolic compounds present in wine samples, obtained from a high-performance liquid chromatograph with a diode array detector. For comparison purposes, three other feature extraction methods are studied, maintaining QDA as the classification scheme: the linear Fisher transformation, the Fourier transform, and the wavelet transform. From the experimental results it is possible to conclude that, with QDA as the classification method, the percentage of correct classification improved from 91% (obtained with wavelet extraction) to 99% when employing the quadratic Fisher transformation as the feature extraction method.
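QDA itself is simple enough to state in a few lines: each class gets its own Gaussian (mean, covariance, prior), and a sample goes to the class with the highest log-density. The sketch below is a generic QDA classifier on toy data that stands in for the chromatogram features; the regularizer and the data are assumptions for illustration.

```python
import numpy as np

def qda_fit(X, y):
    """Per-class mean, covariance, and log-prior for quadratic discriminant analysis."""
    models = {}
    for c in np.unique(y):
        Xc = X[y == c]
        models[c] = (Xc.mean(axis=0),
                     np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1]),
                     np.log(len(Xc) / len(X)))
    return models

def qda_predict(models, X):
    labels = sorted(models)
    scores = []
    for c in labels:
        mu, S, logp = models[c]
        d = X - mu
        # Gaussian log-density up to a shared constant (quadratic in x)
        maha = np.einsum('ij,jk,ik->i', d, np.linalg.inv(S), d)
        scores.append(logp - 0.5 * (np.linalg.slogdet(S)[1] + maha))
    return np.array(labels)[np.argmax(np.vstack(scores), axis=0)]

# Two classes with equal means but different covariances: a linear rule
# fails here, while QDA separates them through the quadratic term
rng = np.random.default_rng(5)
X0 = rng.normal(0, 0.5, (80, 2))
X1 = rng.normal(0, 2.0, (80, 2))
X = np.vstack([X0, X1])
y = np.repeat([0, 1], 80)

acc = (qda_predict(qda_fit(X, y), X) == y).mean()
```

The decision boundary is quadratic in the input, which is why pairing QDA with a quadratic (rather than linear) Fisher feature transformation, as the paper does, is a natural fit.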

15.
A Feature Extraction Method Based on ICA and Fuzzy LDA   Cited by: 1 (self-citations: 0, others: 1)
Independent component analysis (ICA) and linear discriminant analysis (LDA) are two classical feature extraction methods. To better address the feature extraction problem in face recognition, fuzzy techniques are introduced on top of these two methods to extract classification-relevant features from overlapping (outlier) samples. First, ICA is used for initial feature extraction; then a fuzzy k-nearest neighbor method is applied to obtain the corresponding sample distribution information; finally, on this basis, fuzzy LDA performs a second round of feature extraction, yielding an effective set of feature vectors. Experimental results on three face databases demonstrate the effectiveness of the proposed method.

16.
Application of the PCA-LDA Algorithm to Gender Classification   Cited by: 4 (self-citations: 0, others: 4)
He Guohui, Gan Junying. Computer Engineering, 2006, 32(19): 208-210
Combining the characteristics of principal component analysis (PCA) and linear discriminant analysis (LDA), a PCA-LDA algorithm for gender classification is proposed. The algorithm first obtains the feature subspace of the training samples via PCA, and on this basis computes the LDA feature subspace. The PCA and LDA feature subspaces are then fused to obtain the fused feature space of the PCA-LDA algorithm. Training and test samples are projected onto the fused feature space to obtain recognition features, and gender classification is completed with the nearest-neighbor rule. Experimental results on the ORL (Olivetti Research Laboratory) face database show that the PCA-LDA algorithm outperforms PCA in recognition performance and is an effective method for gender classification.

17.
An Image Classification Algorithm Based on PCA and GMM   Cited by: 1 (self-citations: 0, others: 1)
This paper discusses the classification of target-image and non-target-image classes. Following statistical principles, if an image belongs to the target class, features of the target object within the image are extracted; otherwise, low-level features of the whole image are extracted. Based on a principal component analysis (PCA) method for image feature dimensionality reduction and a Gaussian mixture model (GMM) classifier, an image classification algorithm is proposed. The algorithm was tested on the standard Corel image library and compared with other GMM-based methods; the experimental results demonstrate its effectiveness.

18.
Efficient and robust feature extraction by maximum margin criterion   Cited by: 15 (self-citations: 0, others: 15)
In pattern recognition, feature extraction techniques are widely employed to reduce the dimensionality of data and to enhance the discriminatory information. Principal component analysis (PCA) and linear discriminant analysis (LDA) are the two most popular linear dimensionality reduction methods. However, PCA is not very effective for the extraction of the most discriminant features, and LDA is not stable due to the small sample size problem. In this paper, we propose some new (linear and nonlinear) feature extractors based on the maximum margin criterion (MMC). Geometrically, feature extractors based on MMC maximize the (average) margin between classes after dimensionality reduction. It is shown that MMC can represent class separability better than PCA. As a connection to LDA, we may also derive LDA from MMC by incorporating some constraints. By using some other constraints, we establish a new linear feature extractor that does not suffer from the small sample size problem, which is known to cause serious stability problems for LDA. The kernelized (nonlinear) counterpart of this linear feature extractor is also established in the paper. Our extensive experiments demonstrate that the new feature extractors are effective, stable, and efficient.
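A commonly cited form of the MMC projection is the set of leading eigenvectors of Sb - Sw: no inverse of Sw is needed, which is why the small sample size problem that destabilizes LDA does not arise. The sketch below illustrates that form (with unit weighting of the two scatter matrices, an assumption) in a deliberately small-sample regime where Sw is singular.

```python
import numpy as np

def mmc(X, y, k):
    """Maximum margin criterion sketch: W = top-k eigenvectors of (Sb - Sw).
    No Sw^-1 is required, so a singular within-class scatter is harmless."""
    d = X.shape[1]
    mean = X.mean(axis=0)
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc) / len(X)
        Sb += len(Xc) / len(X) * np.outer(mc - mean, mc - mean)
    evals, evecs = np.linalg.eigh(Sb - Sw)    # symmetric matrix: use eigh
    return evecs[:, np.argsort(evals)[::-1][:k]]

# Small-sample-size regime: 30-D data, only 10 samples per class,
# so the within-class scatter has rank at most 18 and LDA's Sw^-1 fails
rng = np.random.default_rng(6)
offset = np.r_[5.0, np.zeros(29)]
X = np.vstack([rng.normal(0, 1, (10, 30)),
               rng.normal(0, 1, (10, 30)) + offset])
y = np.repeat([0, 1], 10)

W = mmc(X, y, k=1)
z = X @ W[:, 0]
gap = abs(z[y == 0].mean() - z[y == 1].mean())
```

The projected class means stay well separated even though the within-class scatter is rank-deficient, which is the stability property the abstract claims for MMC over LDA.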

19.
Face feature extraction is the most important step in the face recognition pipeline, and the quality of the features directly affects recognition performance. To obtain better face recognition results, the information contained in the samples must be fully exploited. To make full use of the information in both training and test samples, a semi-supervised LDA (SLDA) feature extraction algorithm is proposed that uses sample scatter matrices to form a weighted combination of the principal component analysis (PCA) and linear discriminant analysis (LDA) algorithms. In addition, inspired by combinatorial optimization problems, a binary genetic algorithm is used to optimize the feature space obtained by the semi-supervised feature extraction algorithm. Experimental results on the ORL face database show that SLDA achieves a higher recognition rate than classical face recognition algorithms and several improved algorithms.

20.
In this paper, a modified Fisher linear discriminant analysis (FLDA) is proposed. It aims not only to overcome the rank limitation of FLDA (which, under the Fisher discriminant criterion, can find at most one discriminant vector for a 2-class problem) but also to relax the singularity of the within-class scatter matrix, and it thereby improves the classification performance of FLDA. Experiments on nine publicly available datasets show that the proposed method has better or comparable performance to FLDA on all the datasets.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号