Similar Documents
20 similar documents were retrieved.
1.
For classifying large data sets, we propose a discriminant kernel that introduces a nonlinear mapping from the joint space of input data and output labels to a discriminant space. Our method differs from traditional ones, which map nonlinearly from the input space to a feature space. The induced distance of our discriminant kernel is Euclidean and Fisher separable, as it is defined by mapping distance vectors in the feature space to distance vectors in the discriminant space. Unlike support vector machines or kernel Fisher discriminant analysis, the classifier does not need to solve a quadratic programming problem or eigendecomposition problems; it is therefore especially suitable for processing large data sets. The classifier can be applied to face recognition, shape comparison, and image classification benchmark data sets. The method is significantly faster than other methods, yet delivers comparable classification accuracy.
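The exact construction of the discriminant kernel is specific to the paper and is not reproduced here. As a minimal sketch of a kernel classifier that likewise avoids quadratic programming and eigendecomposition, the following computes feature-space distances from a test point to each class mean using kernel evaluations only; the RBF kernel and the nearest-mean decision rule are illustrative assumptions, not the authors' construction.

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    """RBF kernel matrix between the rows of X and the rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_mean_distances(x, X_train, y_train, kernel=rbf):
    """Squared feature-space distance from phi(x) to each class mean:
    ||phi(x) - mu_c||^2 = k(x,x) - (2/n_c) sum_i k(x, xi) + (1/n_c^2) sum_ij k(xi, xj)."""
    kxx = kernel(x[None, :], x[None, :])[0, 0]
    dists = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        cross = kernel(x[None, :], Xc).mean()   # (1/n_c) sum_i k(x, xi)
        within = kernel(Xc, Xc).mean()          # (1/n_c^2) sum_ij k(xi, xj)
        dists[c] = kxx - 2.0 * cross + within
    return dists

# Toy usage: classify by the nearest class mean in feature space.
X = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.]])
y = np.array([0, 0, 1, 1])
print(min(kernel_mean_distances(np.array([4.5, 5.2]), X, y).items(),
          key=lambda kv: kv[1]))   # -> class 1 and its squared distance
```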

2.
Kernel discriminant analysis (KDA) is a widely used tool in the feature extraction community. However, for high-dimensional multi-class tasks such as face recognition, traditional KDA algorithms have the limitation that the Fisher criterion is nonoptimal with respect to classification rate. Moreover, they suffer from the small sample size problem. This paper presents a variant of KDA called kernel-based improved discriminant analysis (KIDA), which can effectively deal with both problems. In the proposed framework, the original samples are first projected into a feature space by an implicit nonlinear mapping. After reconstructing the between-class scatter matrix in the feature space by a weighted scheme, the kernel method is used to obtain a modified Fisher criterion directly related to classification error. Finally, the simultaneous diagonalization technique is employed to find lower-dimensional nonlinear features with significant discriminant power. Experiments on a face recognition task show that the proposed method is superior to traditional KDA and LDA.
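The paper's exact weighting scheme is not given here. As a hedged sketch of the general idea, the pairwise between-class scatter below down-weights class pairs that are already far apart, so the criterion concentrates on the pairs that actually cause classification errors; the weight function w(d) = d^(-4) and the explicit (non-kernel) feature space are assumptions for illustration.

```python
import numpy as np
from itertools import combinations

def weighted_between_scatter(X, y, weight=lambda d: d ** -4):
    """Pairwise between-class scatter with distance-based weights:
    Sb_w = sum_{i<j} p_i p_j w(d_ij) (m_i - m_j)(m_i - m_j)^T,
    where d_ij is the distance between class means and p_i the class prior.
    Down-weighting distant pairs (assumes d_ij > 0) ties the criterion
    more closely to classification error than the unweighted scatter."""
    classes = np.unique(y)
    d = X.shape[1]
    means = {c: X[y == c].mean(axis=0) for c in classes}
    priors = {c: (y == c).mean() for c in classes}
    Sb = np.zeros((d, d))
    for a, b in combinations(classes, 2):
        diff = (means[a] - means[b])[:, None]
        dist = float(np.linalg.norm(diff))
        Sb += priors[a] * priors[b] * weight(dist) * (diff @ diff.T)
    return Sb

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 1, (20, 4)) for m in (0, 1, 8)])  # one outlier class
y = np.repeat([0, 1, 2], 20)
print(np.round(weighted_between_scatter(X, y), 3))
```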

3.
Linear discriminant analysis (LDA) is a dimension reduction method which finds an optimal linear transformation that maximizes the class separability. However, in undersampled problems where the number of data samples is smaller than the dimension of data space, it is difficult to apply LDA due to the singularity of scatter matrices caused by high dimensionality. In order to make LDA applicable, several generalizations of LDA have been proposed recently. In this paper, we present theoretical and algorithmic relationships among several generalized LDA algorithms and compare their computational complexities and performances in text classification and face recognition. Towards a practical dimension reduction method for high dimensional data, an efficient algorithm is proposed, which reduces the computational complexity greatly while achieving competitive prediction accuracies. We also present nonlinear extensions of these LDA algorithms based on kernel methods. It is shown that a generalized eigenvalue problem can be formulated in the kernel-based feature space, and generalized LDA algorithms are applied to solve the generalized eigenvalue problem, resulting in nonlinear discriminant analysis. Performances of these linear and nonlinear discriminant analysis algorithms are compared extensively.
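As one concrete instance of such a generalization, the sketch below regularizes the within-class scatter so that the generalized eigenvalue problem remains solvable in the undersampled case; the regularization constant is an assumed hyperparameter, and this is not a reimplementation of the specific algorithms compared in the paper.

```python
import numpy as np
from scipy.linalg import eigh

def regularized_lda(X, y, n_components, reg=1e-3):
    """Regularized LDA: solve Sb v = lambda (Sw + reg*I) v and keep the
    leading eigenvectors. The ridge term makes Sw invertible even in the
    undersampled (n < d) case where Sw is singular."""
    d = X.shape[1]
    mean = X.mean(axis=0)
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # Generalized symmetric eigenproblem; eigh returns eigenvalues ascending.
    vals, vecs = eigh(Sb, Sw + reg * np.eye(d))
    return vecs[:, ::-1][:, :n_components]   # top eigenvectors as projection

# Undersampled toy case: 6 samples in 50 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 50)) + np.repeat(np.eye(3, 50) * 5, 2, axis=0)
y = np.array([0, 0, 1, 1, 2, 2])
W = regularized_lda(X, y, n_components=2)
print((X @ W).shape)   # (6, 2)
```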

4.
Kernel pooled local subspaces for classification.
We investigate the use of subspace analysis methods for learning low-dimensional representations for classification. We propose a kernel-pooled local discriminant subspace method and compare it against competing techniques, kernel principal component analysis (KPCA) and generalized discriminant analysis (GDA), in classification problems. We evaluate the classification performance of the nearest-neighbor rule with each subspace representation. The experimental results using several data sets demonstrate the effectiveness and performance superiority of the kernel-pooled subspace method over competing methods such as KPCA and GDA in some classification problems.
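The pooled local subspace construction itself is not reproduced; the following sketch shows only the comparison baseline, a KPCA representation evaluated with the nearest-neighbor rule, using scikit-learn and toy data in place of the paper's data sets.

```python
from sklearn.datasets import make_moons
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

X, y = make_moons(n_samples=300, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# KPCA learns a nonlinear low-dimensional representation; 1-NN classifies in it.
clf = make_pipeline(
    KernelPCA(n_components=2, kernel="rbf", gamma=2.0),
    KNeighborsClassifier(n_neighbors=1),
)
clf.fit(X_tr, y_tr)
print("KPCA + 1-NN accuracy:", clf.score(X_te, y_te))
```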

5.
This paper addresses two problems in linear discriminant analysis (LDA) for face recognition. The first is the recognition of human faces under pose and illumination variations. It is well known that the distribution of face images with different pose, illumination, and facial expression is complex and nonlinear, so traditional linear methods such as LDA will not give satisfactory performance. The second is the small sample size (S3) problem, which occurs when the number of training samples is smaller than the dimensionality of the feature vector; the within-class scatter matrix then becomes singular. To overcome these limitations, this paper proposes a new kernel machine-based one-parameter regularized Fisher discriminant (K1PRFD) technique. K1PRFD is developed from our previously proposed one-parameter regularized discriminant analysis method and the well-known kernel approach, and therefore has two parameters: the regularization parameter and the kernel parameter. The paper further proposes a new method, based on the conjugate gradient method, to determine simultaneously the optimal kernel parameter of the RBF kernel and the regularization parameter of the within-class scatter matrix. Three databases, FERET, Yale Group B, and CMU PIE, are selected for evaluation. The results are encouraging: compared with existing LDA-based methods, the proposed method gives superior results.

6.
王昕  刘颖  范九伦 《计算机科学》2012,39(9):262-265
Kernel Fisher discriminant analysis is an effective nonlinear discriminant method. Traditional kernel Fisher discriminant analysis uses only a single kernel function, which remains inadequate for face feature extraction. In view of this, a multi-kernel Fisher discriminant analysis method is proposed: the projections obtained from several single-kernel Fisher discriminants are combined with weights into a weighted projection, and feature extraction and classification are then performed on this weighted projection. Experiments show that for face feature extraction and classification, multi-kernel Fisher discriminant analysis outperforms single-kernel Fisher discriminant analysis.
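A hedged sketch of the weighted-projection idea: each single-kernel Fisher discriminant is approximated by an explicit kernel feature map (KernelPCA) followed by LDA, and the per-kernel projections are combined with fixed weights. The kernel widths, the weights, and the nearest-mean classifier on the fused projection are all assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = make_circles(n_samples=400, noise=0.1, factor=0.4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

gammas = [0.5, 2.0, 8.0]      # three RBF widths -> three single-kernel discriminants
weights = [0.2, 0.3, 0.5]     # assumed fixed weights; the paper chooses its own

# One Fisher discriminant per kernel: KernelPCA gives an explicit feature map
# for that kernel, and LDA then plays the role of the Fisher projection.
models = [
    make_pipeline(KernelPCA(n_components=20, kernel="rbf", gamma=g),
                  LinearDiscriminantAnalysis()).fit(X_tr, y_tr)
    for g in gammas
]

def fused_projection(X):
    """Weighted combination of the single-kernel Fisher projections."""
    return sum(w * m.transform(X) for w, m in zip(weights, models))

# Classify with the nearest class mean in the fused projection space.
Z_tr = fused_projection(X_tr)
means = {c: Z_tr[y_tr == c].mean(axis=0) for c in np.unique(y_tr)}
Z_te = fused_projection(X_te)
pred = np.array([min(means, key=lambda c: np.linalg.norm(z - means[c]))
                 for z in Z_te])
print("fused accuracy:", (pred == y_te).mean())
```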

7.
Kernel Fisher discriminant analysis (KFDA) extracts a nonlinear feature from a sample by calculating as many kernel functions as there are training samples; its computational efficiency is therefore inversely proportional to the size of the training set. In this paper we propose a more efficient approach to nonlinear feature extraction, FKFDA (fast KFDA). FKFDA consists of two parts. First, we select a portion of the training samples based on two criteria produced by approximating kernel principal component analysis (AKPCA) in the kernel feature space. Then, referring to the selected training samples as nodes, we formulate FKFDA to improve the efficiency of nonlinear feature extraction. In FKFDA, the discriminant vectors are expressed as linear combinations of nodes in the kernel feature space, so extracting a feature from a sample requires calculating only as many kernel functions as there are nodes. The proposed FKFDA therefore has a much faster feature extraction procedure than naive kernel-based methods. Experimental results on face recognition and benchmark classification data sets suggest that the proposed FKFDA generates features that classify well.
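The AKPCA-based node-selection criteria are specific to the paper; as a stand-in, the Nystrom approximation below likewise expresses features through a small set of nodes (here sampled uniformly at random), so extracting a feature costs only as many kernel evaluations as there are nodes rather than as many as there are training samples.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.kernel_approximation import Nystroem
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=2000, n_features=30, n_informative=10,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Only 50 "nodes" are kept: each extracted feature now costs 50 kernel
# evaluations instead of 1500 (the full training set size).
clf = make_pipeline(
    Nystroem(kernel="rbf", gamma=0.05, n_components=50, random_state=0),
    LinearDiscriminantAnalysis(),
)
clf.fit(X_tr, y_tr)
print("reduced-set kernel discriminant accuracy:", clf.score(X_te, y_te))
```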

8.
Face recognition using kernel direct discriminant analysis algorithms
Techniques that can produce a low-dimensional feature representation with enhanced discriminatory power are of paramount importance in face recognition (FR) systems. It is well known that the distribution of face images, under a perceivable variation in viewpoint, illumination or facial expression, is highly nonlinear and complex. It is, therefore, not surprising that linear techniques, such as those based on principal component analysis (PCA) or linear discriminant analysis (LDA), cannot provide reliable and robust solutions to those FR problems with complex face variations. In this paper, we propose a kernel machine-based discriminant analysis method which deals with the nonlinearity of the face patterns' distribution. The proposed method also effectively solves the so-called "small sample size" (SSS) problem, which exists in most FR tasks. The new algorithm has been tested, in terms of classification error rate performance, on the multiview UMIST face database. Results indicate that the proposed methodology is able to achieve excellent performance with only a very small set of features, and its error rate is approximately 34% and 48% of those of two other commonly used kernel FR approaches, kernel PCA (KPCA) and generalized discriminant analysis (GDA), respectively.

9.
A nonlinear discriminant feature fusion method for face recognition
Recently, kernel methods for extracting nonlinear features, such as kernel Fisher discriminant analysis (KFDA), have proved successful and been widely applied in face and other image recognition. Existing kernel methods, however, share two problems: constructing the kernel matrix in the feature space is computationally very expensive, and a single type of extracted feature often fails to yield satisfactory recognition results. This paper proposes a nonlinear discriminant feature fusion method for face recognition. First, the original input samples are reduced in dimension by the wavelet transform and singular value decomposition, extracting two types of features from the same sample space. These two types of features are then combined through complex vectors to form a complex feature vector space, in which the optimal discriminant features are finally extracted. Experimental results on the ORL face database show that the proposed method not only outperforms existing kernel Fisher discriminant analysis in recognition performance, but also speeds up feature extraction on the ORL database by nearly a factor of eight.
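A hedged illustration of the complex-vector fusion step only: two real feature vectors extracted from the same sample are combined into a single complex vector. The Haar DWT via PyWavelets, the use of leading singular values, and the feature length are assumptions standing in for the paper's exact wavelet/SVD pipeline; the subsequent discriminant analysis in the complex space is omitted.

```python
import numpy as np
import pywt  # PyWavelets, used here for the Haar DWT

def fused_complex_feature(img, m=16):
    """Two feature sets from the same sample, fused into one complex vector:
    real part from the wavelet approximation subband, imaginary part from
    the image's singular values (both truncated to length m, illustrative)."""
    cA, _ = pywt.dwt2(img, "haar")             # low-frequency wavelet subband
    f1 = cA.ravel()[:m]
    s = np.linalg.svd(img, compute_uv=False)   # singular values of the image
    f2 = np.resize(s, m)                       # pad/trim to match length
    return f1 + 1j * f2                        # complex fusion of the two sets

rng = np.random.default_rng(0)
img = rng.normal(size=(32, 32))
z = fused_complex_feature(img)
print(z.shape, z.dtype)                        # (16,) complex128
```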

10.
Multiple kernel learning (MKL) has recently become a hot topic in kernel methods. However, many MKL algorithms suffer from high computational cost. Moreover, standard MKL algorithms face the challenge posed by the rapid development of distributed computing environments such as cloud computing. In this study, a framework for parallel multiple kernel learning (PMKL) using a hybrid alternating direction method of multipliers (H-ADMM) is developed to integrate MKL algorithms with multiprocessor systems. The global problem with multiple kernels is divided into multiple local problems, each of which is optimized on a local processor with a single kernel. An H-ADMM is proposed to make the local processors coordinate with each other to reach the global optimal solution. The results of computational experiments show that PMKL exhibits high classification accuracy and fast computational speed.
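The hybrid ADMM coordination scheme is not reproduced; the sketch below is plain consensus ADMM on a toy problem, with local ridge-regularized least-squares subproblems standing in for the per-kernel subproblems, to show the local-update/global-averaging structure that a parallel solver distributes across processors. The penalty rho and the iteration count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = rng.normal(size=5)

# Each "processor" holds its own data block (a stand-in for one kernel's subproblem).
blocks = []
for _ in range(4):
    A = rng.normal(size=(40, 5))
    blocks.append((A, A @ w_true + 0.1 * rng.normal(size=40)))

rho, n_iter = 1.0, 100
z = np.zeros(5)                        # global consensus variable
xs = [np.zeros(5) for _ in blocks]     # local variables
us = [np.zeros(5) for _ in blocks]     # scaled dual variables

for _ in range(n_iter):
    # Local updates (parallelizable): ridge-regularized least squares.
    for i, (A, b) in enumerate(blocks):
        xs[i] = np.linalg.solve(A.T @ A + rho * np.eye(5),
                                A.T @ b + rho * (z - us[i]))
    # Global update: average the local solutions plus their duals.
    z = np.mean([x + u for x, u in zip(xs, us)], axis=0)
    # Dual updates enforce the agreement x_i = z.
    us = [u + x - z for u, x in zip(us, xs)]

print("consensus error:", np.linalg.norm(z - w_true))
```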

11.
Linear regression classification (LRC) uses the least squares algorithm to solve the linear regression equation and shows good classification performance on face image data. However, when the regression axes of class-specific samples intersect, LRC cannot reliably classify samples that lie near the intersections; it also performs poorly under severe lighting variations. This paper proposes a new classification method, kernel linear regression classification (KLRC), based on LRC and the kernel trick. KLRC is a nonlinear extension of LRC that offsets these drawbacks: it implicitly maps the data into a high-dimensional kernel space via the nonlinear mapping determined by a kernel function, which makes the data more linearly separable and enables good face recognition under varying lighting. For comparison, we conduct experiments on three standard databases under several evaluation protocols. The proposed methodology not only outperforms LRC but also performs better than typical kernel methods such as kernel linear discriminant analysis and kernel principal component analysis.
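A minimal sketch of the kernel linear-regression-classification idea: for each class, the test point is projected onto the span of that class's training samples in feature space, computed entirely from kernel values, and the class with the smallest reconstruction residual wins. The RBF kernel and the small ridge term added for numerical stability are assumptions.

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def klrc_predict(x, X_train, y_train, gamma=0.5, eps=1e-6):
    """Kernel LRC: classify by the smallest feature-space residual of
    projecting phi(x) onto span{phi(xi) : xi in class c}:
    residual^2 = k(x,x) - k_c^T (K_c + eps*I)^{-1} k_c  (eps: ridge term)."""
    kxx = rbf(x[None, :], x[None, :])[0, 0]
    best, best_r = None, np.inf
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        Kc = rbf(Xc, Xc) + eps * np.eye(len(Xc))
        kc = rbf(Xc, x[None, :])[:, 0]
        r2 = kxx - kc @ np.linalg.solve(Kc, kc)
        if r2 < best_r:
            best, best_r = c, r2
    return best

X = np.array([[0., 0.], [1., 0.], [0., 1.], [5., 5.], [6., 5.], [5., 6.]])
y = np.array([0, 0, 0, 1, 1, 1])
print(klrc_predict(np.array([5.5, 5.2]), X, y))   # expected: 1
```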

12.
Linear subspace analysis methods have been successfully applied to extract features for face recognition. But they are inadequate to represent the complex and nonlinear variations of real face images, such as illumination, facial expression and pose variations, because of their linear properties. In this paper, a nonlinear subspace analysis method, Kernel-based Nonlinear Discriminant Analysis (KNDA), is presented for face recognition, which combines the nonlinear kernel trick with the linear subspace analysis method Fisher Linear Discriminant Analysis (FLDA). First, the kernel trick is used to project the input data into an implicit feature space; then FLDA is performed in this feature space, yielding nonlinear discriminant features of the input data. In addition, in order to reduce the computational complexity, a geometry-based feature vectors selection scheme is adopted. Another similar nonlinear subspace analysis is Kernel-based Principal Component Analysis (KPCA), which combines the kernel trick with linear Principal Component Analysis (PCA). Experiments are performed with the polynomial kernel, and KNDA is compared with KPCA and FLDA. Extensive experimental results show that KNDA gives a higher recognition rate than KPCA and FLDA.
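The geometry-based feature-vector selection scheme is omitted; as a hedged sketch of the KNDA recipe (a polynomial-kernel feature map followed by FLDA), the pipeline below uses KernelPCA to obtain an explicit feature map and compares it against plain FLDA on toy data whose classes are not linearly separable.

```python
from sklearn.datasets import make_gaussian_quantiles
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Concentric Gaussian shells share one mean, so plain FLDA struggles.
X, y = make_gaussian_quantiles(n_samples=600, n_features=4, n_classes=3,
                               random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

knda_like = make_pipeline(
    KernelPCA(n_components=30, kernel="poly", degree=2),  # polynomial kernel map
    LinearDiscriminantAnalysis(),                         # FLDA in feature space
).fit(X_tr, y_tr)
flda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)

print("kernel map + FLDA:", knda_like.score(X_te, y_te))
print("plain FLDA:      ", flda.score(X_te, y_te))
```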

13.
A new nonlinear feature extraction method, parameterized direct discriminant analysis in the hidden space, is proposed. Its main idea is to use a kernel function to transform the original input space nonlinearly into a hidden space; since the within-class scatter matrix in that hidden space is always singular, parameterized direct discriminant analysis is applied there for feature extraction. Unlike existing kernel feature extraction methods, this method does not require the kernel function to satisfy Mercer's theorem, which broadens the range of admissible kernel functions. More importantly, because parameterized direct discriminant analysis is performed in the hidden space, the method retains its advantages while effectively extracting nonlinear features of the samples. The method also introduces a more reasonable weighting coefficient matrix, which improves classification performance. Experimental results on a subset of the FERET face database verify the effectiveness of the method.

14.
This paper proposes a semi-supervised KFDA (kernel Fisher discriminant analysis) algorithm based on the low-density-separation geometric distance (semi-supervised KFDA based on low density separation geometry distance, SemiGKFDA for short). The algorithm takes the low-density-separation geometric distance as its similarity measure and uses large numbers of unlabeled samples to improve the generalization ability of KFDA. First, a kernel function maps the original samples into a high-dimensional feature space. Then, labeled and unlabeled samples are used to build an intrinsic-structure consistency assumption under the low-density-separation geometric distance, which is integrated into the objective function of Fisher discriminant analysis as a regularization term. Finally, the optimal projection matrix is obtained by minimizing the objective function. Experiments on artificial and UCI data sets show that, compared with KFDA and its improved variants, the algorithm significantly improves classification performance. In addition, when applied to face recognition and compared with other algorithms, it achieves higher recognition accuracy.

15.
There are two fundamental problems with the Fisher linear discriminant analysis for face recognition. One is the singularity problem of the within-class scatter matrix due to small training sample size. The other is that it cannot efficiently describe complex nonlinear variations of face images because of its linear property. In this letter, a kernel scatter-difference-based discriminant analysis is proposed to overcome these two problems. We first use the nonlinear kernel trick to map the input data into an implicit feature space F. Then a scatter-difference-based discriminant rule is defined to analyze the data in F. The proposed method can not only produce nonlinear discriminant features but also avoid the singularity problem of the within-class scatter matrix. Extensive experiments show encouraging recognition performance of the new algorithm.
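A sketch of the scatter-difference idea in an explicit feature space (here the empirical kernel map, an assumption): because the criterion maximizes w^T(Sb - c*Sw)w rather than a ratio, it requires only an ordinary symmetric eigendecomposition and never inverts the possibly singular within-class scatter.

```python
import numpy as np

def scatter_matrices(Z, y):
    """Between-class (Sb) and within-class (Sw) scatter of the rows of Z."""
    d = Z.shape[1]
    mean = Z.mean(axis=0)
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for cls in np.unique(y):
        Zc = Z[y == cls]
        mc = Zc.mean(axis=0)
        Sw += (Zc - mc).T @ (Zc - mc)
        diff = (mc - mean)[:, None]
        Sb += len(Zc) * (diff @ diff.T)
    return Sb, Sw

def scatter_difference_directions(Z, y, c=1.0, n_components=1):
    """Maximize w^T (Sb - c*Sw) w: the top eigenvectors of a symmetric
    matrix, so no inversion of Sw is needed even when Sw is singular."""
    Sb, Sw = scatter_matrices(Z, y)
    vals, vecs = np.linalg.eigh(Sb - c * Sw)   # ascending eigenvalues
    return vecs[:, ::-1][:, :n_components]

# Empirical kernel map as an explicit stand-in for the implicit feature space.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 3)), rng.normal(3, 1, (20, 3))])
y = np.repeat([0, 1], 20)
K = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))  # RBF Gram matrix
W = scatter_difference_directions(K, y, c=1.0, n_components=1)
print((K @ W).shape)   # projected training data, (40, 1)
```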

16.
Feature extraction is among the most important problems in face recognition systems. In this paper, we propose an enhanced kernel discriminant analysis (KDA) algorithm called kernel fractional-step discriminant analysis (KFDA) for nonlinear feature extraction and dimensionality reduction. Not only can this new algorithm, like other kernel methods, deal with the nonlinearity required for many face recognition tasks, it can also outperform traditional KDA algorithms in resisting the adverse effects due to outlier classes. Moreover, to further strengthen the overall performance of KDA algorithms for face recognition, we propose two new kernel functions: the cosine fractional-power polynomial kernel and the non-normal Gaussian RBF kernel. We perform extensive comparative studies based on the YaleB and FERET face databases. Experimental results show that our KFDA algorithm outperforms traditional kernel principal component analysis (KPCA) and KDA algorithms, and further improvement can be obtained when the two new kernel functions are used.
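The exact definitions of the two new kernels are given in the paper; the snippet below encodes one plausible reading, offered strictly as an assumption: a sign-preserving fractional-power polynomial kernel together with generic cosine normalization, which turns any base kernel into a unit-diagonal ("cosine") kernel.

```python
import numpy as np

def frac_poly_kernel(X, Y, d=0.8):
    """Sign-preserving fractional-power polynomial kernel (assumed form):
    k(x, y) = sign(x.y) * |x.y|^d  with 0 < d < 1."""
    S = X @ Y.T
    return np.sign(S) * np.abs(S) ** d

def cosine_normalize(K_xy, K_xx_diag, K_yy_diag):
    """Cosine normalization k(x,y) / sqrt(k(x,x) k(y,y)) of any kernel."""
    return K_xy / np.sqrt(np.outer(K_xx_diag, K_yy_diag))

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 10))
K = frac_poly_kernel(X, X)
Kc = cosine_normalize(K, np.diag(K), np.diag(K))
print(np.allclose(np.diag(Kc), 1.0))   # cosine-normalized kernels have unit diagonal
```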

17.
Extracting optimal discriminant features is an important step in face recognition. For high-dimensional face image samples with small sample sizes, existing methods for extracting nonlinear discriminant features each have their own problems. A new method for finding kernel-based Fisher optimal nonlinear discriminant features is therefore proposed. The method first uses the between-class and within-class scatter matrices as the Fisher criterion in the feature space to obtain optimal nonlinear discriminant features, and then, to address the ill-posedness of this formulation, further solves for the optimal nonlinear discriminant vectors in the null space of the within-class scatter matrix. Experiments on the ORL face database show that the optimal nonlinear discriminant features extracted by the new method are clearly superior to the linear features of Fisher linear discriminant analysis (FLDA) and the nonlinear features of generalized discriminant analysis (GDA).

18.
A new nonlinear discriminant feature extraction method and face recognition
By adopting a new maximum scatter difference discriminant criterion in a nonlinear space, a new kernel maximum scatter difference discriminant analysis method is proposed. The method not only effectively extracts nonlinear discriminant features of face images, but also fundamentally avoids the singularity of the kernel scatter matrix that commonly arises in kernel Fisher discriminant analysis when the total number of training samples is large; its computational complexity is greatly reduced and its recognition speed clearly improved. Experimental results on the ORL face database verify the effectiveness of the algorithm.

19.
We present a novel method of nonlinear discriminant analysis involving a set of locally linear transformations, called "Locally Linear Discriminant Analysis" (LLDA). The underlying idea is that global nonlinear data structures are locally linear, and local structures can be linearly aligned. Input vectors are projected into each local feature space by linear transformations found to yield locally linearly transformed classes that maximize the between-class covariance while minimizing the within-class covariance. In face recognition, linear discriminant analysis (LDA) has been widely adopted owing to its efficiency, but it does not capture the nonlinear manifolds of faces that exhibit pose variations. Conventional nonlinear classification methods based on kernels, such as generalized discriminant analysis (GDA) and support vector machines (SVM), have been developed to overcome the shortcomings of the linear method, but they suffer from high computational cost of classification and from overfitting. Our method is for multiclass nonlinear discrimination and is computationally highly efficient compared to GDA. It does not suffer from overfitting by virtue of the linear base structure of its solution. A novel gradient-based learning algorithm is proposed for finding the optimal set of local linear bases, and the optimization does not exhibit a local-maxima problem. The transformation functions facilitate robust face recognition in a low-dimensional subspace, under pose variations, using a single model image. Classification results are given for both synthetic and real face data.

20.
Foley-Sammon optimal discriminant vectors using kernel approach
A new nonlinear feature extraction method called kernel Foley-Sammon optimal discriminant vectors (KFSODVs) is presented in this paper. It extends the well-known Foley-Sammon optimal discriminant vectors (FSODVs) from the linear domain to a nonlinear domain via the kernel trick used in support vector machines (SVM) and other kernel-based learning algorithms. The proposed method also provides an effective technique for solving the so-called small sample size (SSS) problem that arises in many classification tasks such as face recognition. We give the derivation of KFSODVs and conduct experiments on both simulated and real data sets, confirming that the KFSODV method outperforms previously used kernel-based learning algorithms in terms of discrimination performance.
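For reference, a sketch of the linear Foley-Sammon construction that KFSODV kernelizes: each discriminant direction maximizes the Fisher ratio subject to orthogonality with all previously found directions, computed greedily by restricting the problem to the orthogonal complement. The ridge added to the within-class scatter is an assumption for numerical stability.

```python
import numpy as np
from scipy.linalg import eigh, null_space

def scatter_matrices(X, y):
    d = X.shape[1]
    mean = X.mean(axis=0)
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    return Sb, Sw

def foley_sammon(X, y, n_vectors=2, reg=1e-6):
    """Greedy Foley-Sammon: each direction maximizes the Fisher ratio while
    staying orthogonal to all previously found directions."""
    d = X.shape[1]
    Sb, Sw = scatter_matrices(X, y)
    Sw += reg * np.eye(d)                 # ridge for stability (assumption)
    W = np.zeros((d, 0))
    for _ in range(n_vectors):
        # Orthonormal basis of the complement of the directions found so far.
        B = null_space(W.T) if W.shape[1] else np.eye(d)
        vals, vecs = eigh(B.T @ Sb @ B, B.T @ Sw @ B)
        w = B @ vecs[:, -1]               # best direction inside the complement
        W = np.column_stack([W, w / np.linalg.norm(w)])
    return W

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 1, (30, 6)) for m in (0, 2, 4)])
y = np.repeat([0, 1, 2], 30)
W = foley_sammon(X, y, n_vectors=3)
print(np.round(W.T @ W, 6))               # identity: vectors are orthonormal
```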
