Similar Articles
20 similar articles found (search time: 734 ms)
1.
To address the low accuracy of text sentiment classification, a multi-level text sentiment classification method based on a CCA-VSM classifier and KFD is proposed. Canonical correlation analysis is used to reduce the dimensionality of the documents' weight feature vectors and part-of-speech feature vectors, and a vector space model (VSM) is built on the reduced vector set. A VSM classifier is designed according to the dissimilarity between models, and the R models with the smallest dissimilarity to the test document are selected as inputs to kernel Fisher discriminant (KFD) analysis, which makes the final judgment on the document's sentiment. Experimental results show that the method achieves higher classification accuracy and faster classification speed than the traditional support vector machine, and that the weight features and part-of-speech features have a large impact on classification accuracy.

2.
This paper proposes a novel method for breast cancer diagnosis using features generated by genetic programming (GP). We developed a new feature extraction measure, modified Fisher linear discriminant analysis (MFLDA), to overcome the limitations of the Fisher criterion. GP, as an evolutionary mechanism, provides a training structure to generate features. A modified Fisher criterion is developed to help GP optimize features so that pattern vectors belonging to different categories are distributed in compact and disjoint regions. First, the MFLDA is experimentally compared with some classical feature extraction methods (principal component analysis, Fisher linear discriminant analysis, alternative Fisher linear discriminant analysis). Second, the feature generated by GP based on the modified Fisher criterion is compared, in terms of classification performance, with the features generated by GP using the Fisher criterion and an alternative Fisher criterion. The classification is carried out by a simple classifier (minimum distance classifier). Finally, the same feature generated by GP is compared with the original feature set as input to multi-layer perceptrons and a support vector machine. Results demonstrate the capability of this method to transform information from a high-dimensional feature space into a one-dimensional space, to automatically discover relationships among the data, and to improve classification accuracy.

3.
A novel fuzzy nonlinear classifier, called kernel fuzzy discriminant analysis (KFDA), is proposed to deal with linearly non-separable problems. With kernel methods, KFDA can perform efficient classification in the kernel feature space. Through a nonlinear mapping, the input data are mapped implicitly into a high-dimensional kernel feature space in which nonlinear patterns appear linear. Unlike fuzzy discriminant analysis (FDA), which is based on Euclidean distance, KFDA uses a kernel-induced distance. Theoretical analysis and experimental results show that the proposed classifier compares favorably with FDA.
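The kernel Fisher discriminant idea recurring throughout these abstracts can be sketched in a few lines of NumPy for the two-class case. This is a minimal sketch, not the fuzzy variant from the paper: it assumes an RBF kernel, a simple ridge regularizer, and an illustrative toy data set (inner cluster vs. surrounding ring) that is not linearly separable in the input space:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF kernel between the rows of A and the rows of B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kfda_fit(X, y, gamma=1.0, reg=1e-3):
    """Two-class kernel Fisher discriminant; returns dual coefficients alpha."""
    K = rbf_kernel(X, X, gamma)
    n = len(y)
    m1 = K[:, y == 0].mean(axis=1)   # kernelized class means
    m2 = K[:, y == 1].mean(axis=1)
    # Within-class scatter in the kernel-induced space (dual form)
    N = np.zeros((n, n))
    for c in (0, 1):
        Kc = K[:, y == c]
        nc = Kc.shape[1]
        H = np.eye(nc) - np.full((nc, nc), 1.0 / nc)  # centering matrix
        N += Kc @ H @ Kc.T
    # Regularize and solve for the discriminant direction in dual space
    return np.linalg.solve(N + reg * np.eye(n), m1 - m2)

def kfda_project(X_train, alpha, X_new, gamma=1.0):
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# Toy nonlinear problem: inner cluster (class 0) vs. surrounding ring (class 1)
rng = np.random.default_rng(1)
inner = rng.normal(scale=0.3, size=(40, 2))
angles = rng.uniform(0, 2 * np.pi, 40)
ring = np.c_[2 * np.cos(angles), 2 * np.sin(angles)] + rng.normal(scale=0.1, size=(40, 2))
X = np.vstack([inner, ring])
y = np.array([0] * 40 + [1] * 40)

alpha = kfda_fit(X, y)
proj = kfda_project(X, alpha, X)
# Class means separate along the nonlinear discriminant direction
print(proj[y == 0].mean() > proj[y == 1].mean())  # True
```

The solve step corresponds to the regularized generalized eigenproblem of KFDA restricted to its leading direction; a kernel-induced (rather than Euclidean) distance enters through `rbf_kernel`.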

4.
It is widely recognized that the performance of kernel-based methods is determined by whether the selected kernel matches the data. Ideally, the data are expected to be linearly separable in the kernel-induced feature space, so the Fisher linear discriminant criterion can be used as a cost function to optimize the kernel function. However, in many applications the data may not be linearly separable even after the kernel transformation; for example, the data may have a multimodally distributed structure. In this case a nonlinear classifier is preferred, and the Fisher criterion is clearly not a suitable kernel optimization rule. Motivated by this issue, we propose a localized kernel Fisher criterion, instead of the traditional Fisher criterion, as the kernel optimization rule to increase the local margins between embedded classes in the kernel-induced feature space. Experimental results on several benchmark data sets and on measured radar high-resolution range profile (HRRP) data show that classification performance can be improved by the proposed method.

5.
A semi-supervised KFDA (kernel Fisher discriminant analysis) algorithm based on the low-density-separation geometric distance, called SemiGKFDA, is proposed. The algorithm uses the low-density-separation geometric distance as a similarity measure and exploits a large number of unlabeled samples to improve the generalization ability of KFDA. First, a kernel function maps the sample data from the original space into a high-dimensional feature space. Then, labeled and unlabeled samples are used to build an intrinsic-structure consistency assumption under the low-density-separation geometric distance measure, which is integrated as a regularization term into the objective function of Fisher discriminant analysis. Finally, the optimal projection matrix is obtained by minimizing this objective function. Experiments on artificial data sets and UCI data sets show that, compared with KFDA and its improved variants, the algorithm significantly improves classification performance. In addition, when the algorithm is applied to face recognition and compared with other algorithms, the experimental results show that it achieves higher recognition accuracy.

6.
This paper presents the implementation of a new text document classification framework that uses the Support Vector Machine (SVM) approach in the training phase and the Euclidean distance function in the classification phase, coined Euclidean-SVM. The SVM constructs a classifier by generating a decision surface, the optimal separating hyper-plane, to partition different categories of data points in the vector space. The concept of the optimal separating hyper-plane can be generalized to non-linearly separable cases by introducing kernel functions that map the data points from the input space into a high-dimensional feature space, where they can be separated by a linear hyper-plane. As a consequence, the choice of kernel function has a high impact on the classification accuracy of the SVM. Besides the kernel function, the value of the soft margin parameter C is another critical component in determining the performance of the SVM classifier. Hence, one of the critical problems of the conventional SVM classification framework is the need to determine the appropriate kernel function and the appropriate value of C for each dataset's characteristics in order to guarantee high accuracy. In this paper, we introduce a distance measurement technique, using the Euclidean distance function to replace the optimal separating hyper-plane as the classification decision-making function in the SVM. In our approach, the support vectors for each category are identified from the training data points during the training phase using the SVM. In the classification phase, when a new data point is mapped into the original vector space, the average distances between the new data point and the support vectors of the different categories are measured using the Euclidean distance function. The classification decision assigns the category whose support vectors have the lowest average distance to the new data point, which makes the decision independent of the efficacy of the hyper-plane formed by the particular kernel function and soft margin parameter. We tested the proposed framework on several text datasets. The experimental results show that the accuracy of the Euclidean-SVM text classifier is only weakly affected by the choice of kernel function and soft margin parameter C.
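The Euclidean-SVM decision rule can be sketched as follows, assuming scikit-learn. Iris stands in for a text dataset here, and the RBF kernel with default C is an arbitrary choice, since the point of the scheme is that these choices matter little:

```python
import numpy as np
from sklearn import svm, datasets
from sklearn.model_selection import train_test_split

# Train an ordinary SVM only to identify the support vectors of each class
X, y = datasets.load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = svm.SVC(kernel="rbf").fit(X_tr, y_tr)
sv_labels = y_tr[clf.support_]       # class label of each support vector
svs = clf.support_vectors_

def euclidean_svm_predict(x):
    # Average Euclidean distance from x to each class's support vectors;
    # predict the class with the smallest average distance
    dists = np.linalg.norm(svs - x, axis=1)
    classes = np.unique(sv_labels)
    avg = [dists[sv_labels == c].mean() for c in classes]
    return classes[int(np.argmin(avg))]

preds = np.array([euclidean_svm_predict(x) for x in X_te])
acc = (preds == y_te).mean()
print(acc)
```

Note that the separating hyper-plane itself is discarded after training; only the identity of the support vectors is retained for classification.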

7.
To improve the classification accuracy of hyperspectral remote sensing images while making full use of their spectral and local information, a feature extraction method based on wavelet-kernel local Fisher discriminant analysis is proposed. A wavelet kernel function maps the data set from the low-dimensional original space into a high-dimensional feature space. To account for local information in the data, weighting matrices are used when computing the scatter matrices, and the optimal feature matrix is obtained by solving the local Fisher discriminant criterion function, so that samples of different classes become more separable in the high-dimensional feature space. Experiments on two public hyperspectral data sets show that the proposed method improves both the overall classification accuracy and the Kappa coefficient.

8.
王昕  刘颖  范九伦 《计算机科学》2012,39(9):262-265
Kernel Fisher discriminant analysis is an effective nonlinear discriminant method. Traditional kernel Fisher discriminant analysis uses only a single kernel function, which is still insufficient for face feature extraction. In view of this, a multi-kernel Fisher discriminant analysis method is proposed: the projections obtained from several single-kernel Fisher discriminants are combined with weights into a weighted projection, which is then used for feature extraction and classification. Experiments show that, for face feature extraction and classification, multi-kernel Fisher discriminant analysis outperforms single-kernel Fisher discriminant analysis.

9.
Large-margin methods, such as support vector machines (SVMs), have been very successful in classification problems. Recently, maximum margin discriminant analysis (MMDA) was proposed, extending the large-margin idea to feature extraction. It often outperforms traditional methods such as kernel principal component analysis (KPCA) and kernel Fisher discriminant analysis (KFD). However, as in the SVM, its time complexity is cubic in the number of training points m, and it is thus computationally inefficient on massive data sets. In this paper, we propose a (1+ε)²-approximation algorithm for obtaining the MMDA features by extending the core vector machine. The resultant time complexity is only linear in m, while its space complexity is independent of m. Extensive comparisons with the original MMDA, KPCA, and KFD on a number of large data sets show that the proposed feature extractor can improve classification accuracy and is faster than these kernel-based methods by over an order of magnitude.

10.
A new method for mental task recognition based on kernel Fisher discriminant analysis is proposed. The method first uses a kernel function to construct a nonlinear mapping that projects sample points from the original space into a high-dimensional feature space, and then applies the linear Fisher discriminant in that space. KFDA and FDA are compared on EEG data generated by different mental tasks; a linear support vector machine is then used for classification and recognition and compared against a nonlinear support vector machine. The results show that the recognition rate of KFDA is clearly superior to that of the latter two.

11.
Generalized discriminant analysis using a kernel approach
Baudat G  Anouar F 《Neural computation》2000,12(10):2385-2404
We present a new method, which we call generalized discriminant analysis (GDA), for nonlinear discriminant analysis using a kernel function operator. The underlying theory is close to that of support vector machines (SVMs) insofar as the GDA method provides a mapping of the input vectors into a high-dimensional feature space. In the transformed space, linear properties make it easy to extend and generalize classical linear discriminant analysis (LDA) to nonlinear discriminant analysis. The formulation is expressed as the resolution of an eigenvalue problem. Using different kernels, one can cover a wide class of nonlinearities. For both simulated data and alternative kernels, we give classification results as well as the shape of the decision function. The results are confirmed on real data for seed classification.

12.
A nonlinear discriminant feature fusion method for face recognition
Recently, kernel methods for extracting nonlinear features, such as kernel Fisher discriminant analysis (KFDA), have been successful and widely applied in face and other image recognition. However, existing kernel methods share the problem that constructing the kernel matrix in the feature space is computationally very expensive; moreover, a single type of extracted feature often fails to produce satisfactory recognition results. A nonlinear discriminant feature fusion method for face recognition is proposed. First, wavelet transform and singular value decomposition are applied to the original input samples as a dimensionality-reducing transform, extracting two types of features from the same sample space. These two feature types are then combined using complex vectors to form a complex feature vector space, in which the optimal discriminant features are finally extracted. Experimental results on the ORL standard face database show that the proposed method not only outperforms existing kernel Fisher discriminant analysis in recognition performance, but also speeds up feature extraction on the ORL face database by nearly a factor of eight.

13.
The Bayesian evidence framework has been successfully applied to the design of multilayer perceptrons (MLPs) in the work of MacKay. Nevertheless, the training of MLPs suffers from drawbacks such as the nonconvex optimization problem and the choice of the number of hidden units. In support vector machines (SVMs) for classification, as introduced by Vapnik, a nonlinear decision boundary is obtained by first mapping the input vector nonlinearly into a high-dimensional kernel-induced feature space in which a linear large-margin classifier is constructed. Practical expressions are formulated in the dual space in terms of the related kernel function, and the solution follows from a (convex) quadratic programming (QP) problem. In least-squares SVMs (LS-SVMs), the SVM problem formulation is modified by introducing a least-squares cost function and equality instead of inequality constraints, and the solution follows from a linear system in the dual space. Implicitly, the least-squares formulation corresponds to a regression formulation and is also related to kernel Fisher discriminant analysis. The least-squares regression formulation has advantages for deriving analytic expressions in a Bayesian evidence framework, in contrast to the classification formulations used, for example, in Gaussian processes (GPs). The LS-SVM formulation has clear primal-dual interpretations, and without the bias term one explicitly constructs a model that yields the same expressions as have been obtained with GPs for regression. In this article, the Bayesian evidence framework is combined with the LS-SVM classifier formulation. Starting from the feature space formulation, analytic expressions are obtained in the dual space at the different levels of Bayesian inference, while posterior class probabilities are obtained by marginalizing over the model parameters. Empirical results obtained on 10 public domain data sets show that the LS-SVM classifier designed within the Bayesian evidence framework consistently yields good generalization performance.
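The LS-SVM training step described above reduces to one linear solve. A bare-bones sketch in NumPy (the RBF kernel width, regularization value, and XOR toy data are illustrative choices, not from the article):

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    # Pairwise RBF kernel between the rows of A and the rows of B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvm_train(X, y, gamma=0.5, C=10.0):
    """LS-SVM classifier: equality constraints turn training into a linear system
       [0, y^T; y, Omega + I/C] [b; alpha] = [0; 1]."""
    n = len(y)
    Omega = (y[:, None] * y[None, :]) * rbf(X, X, gamma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / C
    rhs = np.r_[0.0, np.ones(n)]
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]           # bias b, dual variables alpha

def lssvm_predict(X_tr, y_tr, b, alpha, X_new, gamma=0.5):
    # Decision function: sign( sum_i alpha_i y_i K(x, x_i) + b )
    return np.sign(rbf(X_new, X_tr, gamma) @ (alpha * y_tr) + b)

# XOR-like toy problem: not linearly separable in the input space
X = np.array([[0., 0.], [1., 1.], [0., 1.], [1., 0.]])
y = np.array([1., 1., -1., -1.])
b, alpha = lssvm_train(X, y)
print(lssvm_predict(X, y, b, alpha, X))  # [ 1.  1. -1. -1.]
```

Contrast this with the standard SVM, where the inequality constraints force a QP solver; here every training point receives a (possibly nonsparse) dual coefficient from a single `np.linalg.solve`.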

14.
Standard support vector machine (SVM) training algorithms have O(l³) computational and O(l²) space complexities, where l is the training set size, and are thus computationally infeasible on very large data sets. To alleviate the computational burden of SVM training, we propose an algorithm that trains SVMs on a set of bound vectors extracted by Fisher projection. For linearly separable problems, we use the linear Fisher discriminant to compute the projection line, while for nonlinearly separable problems we use the kernel Fisher discriminant. In each case, we select as bound vectors a certain ratio of samples whose projections are adjacent to those of the other class. Theoretical analysis shows that the proposed algorithm has low computational and space complexities, and extensive experiments on several classification benchmarks demonstrate the effectiveness of our approach.
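The linearly separable case of this idea can be sketched as follows, assuming scikit-learn and two synthetic Gaussian classes (the class locations and the 20% bound-vector ratio are arbitrary illustrative choices):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Two roughly linearly separable Gaussian classes
X0 = rng.normal(loc=[0, 0], size=(200, 2))
X1 = rng.normal(loc=[4, 4], size=(200, 2))
X = np.vstack([X0, X1])
y = np.r_[np.zeros(200), np.ones(200)]

# Fisher projection direction: w = Sw^{-1} (m1 - m0)
m0, m1 = X0.mean(0), X1.mean(0)
Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
w = np.linalg.solve(Sw, m1 - m0)

# Keep the ratio of samples whose projections are closest to the other class
ratio = 0.2
p = X @ w
idx0 = np.where(y == 0)[0][np.argsort(p[y == 0])[-int(200 * ratio):]]  # class 0: largest projections
idx1 = np.where(y == 1)[0][np.argsort(p[y == 1])[:int(200 * ratio)]]   # class 1: smallest projections
bound = np.r_[idx0, idx1]

# Train the SVM only on the bound vectors, evaluate on the full set
clf = SVC(kernel="linear").fit(X[bound], y[bound])
acc = clf.score(X, y)
print(len(bound), acc)
```

Since the SVM solution depends only on points near the class boundary, training on the 80 bound vectors instead of all 400 points loses little accuracy while shrinking the O(l³) cost.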

15.
First, kernel techniques are used to map the original samples implicitly into a high-dimensional feature space. Then, in that space, reproducing kernel theory is used to establish two equivalent models based on the Fisher minimum discriminant criterion. Finally, kernel discriminant vectors are obtained by applying the Fisher minimum discriminant criterion in both the non-null space and the null space of the kernel between-class scatter matrix in that space. Experimental results on face databases verify the effectiveness of the algorithm.

16.
A new kernel-space feature extraction algorithm with the Bhattacharyya distance as its criterion is proposed. The core idea is to map the samples nonlinearly into a high-dimensional kernel space, find a set of optimal feature vectors there, and then map the samples linearly into a low-dimensional feature space so that the Bhattacharyya distance between classes is maximized, which guarantees that the upper bound on the Bayes classification error is minimized. Using kernel techniques, the feature extraction problem is transformed into a quadratic programming (QP) optimization problem, which guarantees the global convergence and speed of the algorithm. The algorithm has two advantages: (1) the extracted features are more effective for classification; (2) for a given pattern classification problem, the algorithm can predict an upper bound on the number of feature vectors required without loss of classification accuracy, and can extract the features that are effective for classification. Experimental results show that the performance of the algorithm agrees with the theoretical analysis and is superior to commonly used feature extraction algorithms.

17.
A new nonlinear discriminant analysis algorithm, large-margin nonlinear discriminant analysis with minimized within-class scatter, is proposed. The main idea is to map the original samples into a higher-dimensional space and use kernel techniques to improve the traditional large-margin classification algorithm: in the new high-dimensional space, reproducing kernel techniques are used to find kernel discriminant vectors such that the kernel within-class scatter in that space is as small as possible. Experiments on the ORL face database, analyzing both recognition rate and recognition time, show that the method has certain advantages.

18.
There are two fundamental problems with Fisher linear discriminant analysis for face recognition. One is the singularity of the within-class scatter matrix due to the small training sample size. The other is that, because of its linearity, it cannot efficiently describe the complex nonlinear variations of face images. In this letter, a kernel scatter-difference-based discriminant analysis is proposed to overcome these two problems. We first use the nonlinear kernel trick to map the input data into an implicit feature space F. A scatter-difference-based discriminant rule is then defined to analyze the data in F. The proposed method not only produces nonlinear discriminant features but also avoids the singularity problem of the within-class scatter matrix. Extensive experiments show encouraging recognition performance of the new algorithm.

19.
Although Fisher-criterion-based linear discriminant analysis is recognized as one of the effective feature extraction methods and has been successfully applied to face recognition, the actual distribution of face images is highly complex due to variations in illumination, facial expression, and pose, so extracting nonlinear discriminant features is essential. To use nonlinear discriminant features for face recognition, a kernel-based subspace discriminant analysis method is proposed. The method first uses kernel techniques to map the original samples implicitly into a high-dimensional (possibly infinite-dimensional) feature space; then, in this space, reproducing kernel theory is used to establish two equivalent models based on the generalized Fisher criterion; finally, the orthogonal complement space method is used to obtain the optimal discriminant vectors for face recognition. Recognition experiments on the ORL and NUST603 face databases yield recognition rates of 94% and 99.58%, respectively, showing that the method is comparable to kernel combination methods and clearly superior to KPCA and Kernel Fisherfaces.

20.
Kernel discriminant analysis (KDA) is a widely used tool in the feature extraction community. However, for high-dimensional multi-class tasks such as face recognition, traditional KDA algorithms have the limitation that the Fisher criterion is nonoptimal with respect to classification rate; moreover, they suffer from the small sample size problem. This paper presents a variant of KDA called kernel-based improved discriminant analysis (KIDA), which can effectively deal with both problems. In the proposed framework, the original samples are first projected into a feature space by an implicit nonlinear mapping. After reconstructing the between-class scatter matrix in the feature space with weighting schemes, the kernel method is used to obtain a modified Fisher criterion directly related to classification error. Finally, a simultaneous diagonalization technique is employed to find lower-dimensional nonlinear features with significant discriminant power. Experiments on a face recognition task show that the proposed method is superior to traditional KDA and LDA.
