Similar Documents
20 similar documents found.
1.
This paper surveys current applications of support vector machines (SVMs) to text classification. It first introduces the basic SVM method, then gives an overall description and summary by comparing SVM classification algorithms based on different approaches, and finally predicts future development directions.

2.
This paper reviews and summarizes the application of multiclass SVMs, fuzzy SVMs, wavelet SVMs, and active SVMs to remote-sensing image classification, and outlines the development trends of SVMs in this field.

3.
Mechanical fault diagnosis is essentially a pattern classification problem. Because SVMs perform well on classification problems, they are increasingly widely used. Since SVM parameters strongly affect classification performance, particle swarm optimization (PSO) is used to optimize the SVM penalty factor and the radial basis function (RBF) kernel parameter so that classification performance is optimal. Applied to a practical case, the method achieves good classification accuracy.
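As an illustration of the approach described above (not the paper's implementation), the following Python sketch tunes the SVM penalty factor C and RBF parameter gamma with a basic particle swarm, scoring each particle by cross-validated accuracy; the wine data set and all PSO constants are placeholders.

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy data standing in for vibration features extracted from machinery signals.
X, y = load_wine(return_X_y=True)
X = StandardScaler().fit_transform(X)

def fitness(log_c, log_gamma):
    """Cross-validated accuracy of an RBF-SVM with the given parameters."""
    clf = SVC(C=10.0 ** log_c, gamma=10.0 ** log_gamma, kernel="rbf")
    return cross_val_score(clf, X, y, cv=5).mean()

rng = np.random.default_rng(0)
n_particles, n_iter = 20, 30
w, c1, c2 = 0.7, 1.5, 1.5                                     # inertia and acceleration weights

pos = rng.uniform([-2, -5], [3, 1], size=(n_particles, 2))    # particles encode (log10 C, log10 gamma)
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(*p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, [-2, -5], [3, 1])
    vals = np.array([fitness(*p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("best C=%.4g, gamma=%.4g, CV accuracy=%.3f"
      % (10.0 ** gbest[0], 10.0 ** gbest[1], pbest_val.max()))
```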

4.
An SVM-based surface-water quality classification model is established and applied to classify surface-water quality at major inter-city boundary sections in Zhejiang Province. The model uses an RBF kernel and realizes multiclass classification in a one-versus-rest manner. Three optimization methods, grid search, particle swarm optimization, and a genetic algorithm, are used to tune the SVM control parameters. Experiments show that grid search yields the best water-quality classification results, with an accuracy of 82%, demonstrating that SVM-based water-quality classification is feasible.
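A minimal sketch of the grid-search variant with scikit-learn; the iris data stands in for the water-quality indicators, and the parameter grid is illustrative only.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data standing in for the water-quality indicator measurements.
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

# One-versus-rest multiclass scheme built from binary RBF-kernel SVMs.
clf = OneVsRestClassifier(SVC(kernel="rbf"))
param_grid = {"estimator__C": [0.1, 1, 10, 100],
              "estimator__gamma": [0.001, 0.01, 0.1, 1]}

search = GridSearchCV(clf, param_grid, cv=5)        # exhaustive grid search over (C, gamma)
search.fit(X_tr, y_tr)
print("best parameters:", search.best_params_)
print("test accuracy: %.3f" % search.score(X_te, y_te))
```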

5.
Research on fast multiclass classification with support vector machines
This paper studies the speed advantage of the multiclass SVM algorithm DAGSVM (Directed Acyclic Graph SVM) and proposes a fast multiclass SVM classification method that combines DAGSVM with reduced-support-vector techniques. The method reduces both the number of binary SVMs needed for a single classification and the number of support vectors. Experiments on multiclass data from the UCI and Statlog repositories, compared against four multiclass methods, show that the proposed method effectively speeds up classification.
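The following sketch illustrates only the DAG decision procedure on top of ordinary pairwise SVMs (the paper's reduced-support-vector step is not shown); the digits data is a placeholder.

```python
import itertools
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
classes = np.unique(y_tr)

# One binary SVM per pair of classes, as in the one-versus-one decomposition.
pair_svm = {}
for a, b in itertools.combinations(classes, 2):
    mask = np.isin(y_tr, [a, b])
    pair_svm[(a, b)] = SVC(kernel="rbf", gamma="scale").fit(X_tr[mask], y_tr[mask])

def dag_predict(x):
    """DAG decision: k-1 pairwise tests, eliminating one candidate class per test."""
    remaining = list(classes)
    while len(remaining) > 1:
        a, b = remaining[0], remaining[-1]
        winner = pair_svm[(min(a, b), max(a, b))].predict(x.reshape(1, -1))[0]
        remaining.remove(b if winner == a else a)
    return remaining[0]

pred = np.array([dag_predict(x) for x in X_te])
print("DAG-SVM test accuracy: %.3f" % (pred == y_te).mean())
```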

6.
Research progress on multiclass twin support vector machines
Twin support vector machines (TWSVMs) have attracted wide attention for their simple model, fast training, and excellent performance. The algorithm was originally proposed for binary classification and cannot be applied directly to the multiclass problems that are ubiquitous in practice. Recently, researchers have worked on extending the binary TWSVM to multiclass settings and have proposed a variety of multiclass TWSVMs, and this line of research has made considerable progress. This paper reviews the development of multiclass TWSVMs, organizes them into reasonable categories, and analyzes the theory and geometric meaning of each type. Based on how the binary sub-classifiers are organized, multiclass TWSVMs are grouped into those based on the one-versus-rest strategy, the one-versus-one strategy, the one-versus-one-versus-rest strategy, binary-tree structures, and the many-versus-one strategy. The training procedure of directed-acyclic-graph-based multiclass TWSVMs is similar to that of the one-versus-one strategy, but its decision procedure has its own particular advantages and disadvantages, so it is treated as a separate sixth category. The paper analyzes and summarizes the algorithmic ideas and theoretical foundations of these six types and compares their classification performance experimentally. This work establishes connections and comparisons among the various multiclass TWSVMs, helps beginners quickly understand the essential differences between them, and provides guidance for selecting an appropriate multiclass TWSVM in practical applications.
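As a rough illustration of the two most common decomposition strategies mentioned above, the sketch below wraps a binary linear SVM (standing in for a binary twin SVM, which scikit-learn does not provide) in one-versus-rest and one-versus-one schemes.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import LinearSVC

X, y = load_digits(return_X_y=True)
base = LinearSVC(max_iter=5000)        # stand-in for a binary twin SVM

# "One-versus-rest": k binary subproblems, each class against all the others.
ovr = OneVsRestClassifier(base)
# "One-versus-one": k(k-1)/2 binary subproblems, one per pair of classes.
ovo = OneVsOneClassifier(base)

for name, clf in [("one-vs-rest", ovr), ("one-vs-one", ovo)]:
    print(name, "CV accuracy: %.3f" % cross_val_score(clf, X, y, cv=5).mean())
```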

7.
Multiclass support vector machine methods for text classification
Text classification is a fundamental and core task in data mining, and the support vector machine (SVM) is one of the best algorithms for it. The traditional SVM handles binary classification, and how to extend it effectively to multiclass problems remains an open research topic. This paper introduces the basic principles of SVMs, discusses and compares the main existing multiclass SVM text-classification algorithms, and points out open problems and future directions for multiclass SVM text classification.

8.
The optimization algorithm of an SVM is important for accurately retrieving the required information. Traditional SVM parameter-tuning methods are slow, computationally expensive, and somewhat blind. To retrieve the required information accurately and quickly and to improve SVM performance, a text-classification method (IA-SVM) is proposed that optimizes the SVM parameters with an immune algorithm. The SVM model parameters are encoded as antibody genes, and an artificial immune algorithm searches for the optimal penalty factor and RBF kernel parameter so that the SVM's classification performance is optimal. Experimental results show that IA-SVM reduces the blindness of SVM parameter selection and clearly improves classification accuracy and retrieval speed on text-classification problems.
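A loose sketch of the idea, assuming a simple clonal-selection loop: antibodies encode (log C, log gamma), affinity is cross-validated accuracy, and higher-affinity antibodies are cloned and mutated less strongly. Synthetic features stand in for TF-IDF document vectors; this is not the paper's IA-SVM implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Placeholder features standing in for TF-IDF vectors of the documents.
X, y = make_classification(n_samples=400, n_features=40, n_informative=10, random_state=0)

def affinity(antibody):
    log_c, log_gamma = antibody
    clf = SVC(C=10.0 ** log_c, gamma=10.0 ** log_gamma)
    return cross_val_score(clf, X, y, cv=3).mean()

rng = np.random.default_rng(1)
pop = rng.uniform([-2, -5], [3, 1], size=(10, 2))      # antibodies encode (log10 C, log10 gamma)

for _ in range(20):
    aff = np.array([affinity(a) for a in pop])
    order = aff.argsort()[::-1]
    elite = pop[order[:3]]                             # select the highest-affinity antibodies
    clones = np.repeat(elite, 5, axis=0)
    # Hypermutation: lower-affinity clones are perturbed more strongly.
    scale = np.repeat(0.1 + 0.5 * (1 - aff[order[:3]]), 5)[:, None]
    clones = np.clip(clones + rng.normal(0, 1, clones.shape) * scale, [-2, -5], [3, 1])
    pop = np.vstack([elite, clones, rng.uniform([-2, -5], [3, 1], size=(2, 2))])

aff = np.array([affinity(a) for a in pop])
best = pop[aff.argmax()]
print("best C=%.4g, gamma=%.4g, CV accuracy=%.3f" % (10 ** best[0], 10 ** best[1], aff.max()))
```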

9.
Research on automatic Chinese text classification based on SVM
This paper describes the text-classification process in detail and focuses on a classification algorithm based on structural risk minimization theory, the support vector machine. Experiments comparing the SVM with the traditional KNN algorithm on text classification confirm the superiority of the SVM for text-classification problems.
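A minimal sketch of such a comparison with scikit-learn; the 20 Newsgroups corpus (downloaded on first use) stands in for the Chinese text collection, and a linear SVM and 5-nearest-neighbors are compared on TF-IDF features.

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC

cats = ["comp.graphics", "rec.sport.baseball", "sci.space"]
train = fetch_20newsgroups(subset="train", categories=cats)   # downloads on first use
test = fetch_20newsgroups(subset="test", categories=cats)

# TF-IDF feature extraction, shared by both classifiers.
vec = TfidfVectorizer(max_features=20000)
X_tr, X_te = vec.fit_transform(train.data), vec.transform(test.data)

for name, clf in [("SVM", LinearSVC(max_iter=5000)), ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    clf.fit(X_tr, train.target)
    print(name, "accuracy: %.3f" % accuracy_score(test.target, clf.predict(X_te)))
```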

10.
A comparative study of multiclass support vector machine classifiers
To address the problem of selecting among multiclass SVMs and to reduce the difficulty of multiclass classification, four commonly used multiclass SVMs are compared. Starting from their construction principles, the training complexity, testing complexity, and classification accuracy of the multiclass SVMs are analyzed theoretically. On this basis, experiments on standard data sets show that the directed-acyclic-graph SVM achieves the highest classification accuracy and the binary-tree SVM has the best real-time performance.

11.
This paper compares the classification performance of linear-system-based and neural-network-based models on handwritten-digit classification and face recognition. Nonlinear inputs to the linear classifier are generated from the linear inputs using different forms of products. A genetic algorithm then selects linear and nonlinear inputs so as to improve classification performance. Results show that an appropriate set of linear and nonlinear inputs was selected, significantly improving the linear classifier's performance on both problems. The linear classifier also reached classification performance similar to or better than that of nonlinear neural-network classifiers with linear inputs.
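A rough sketch of the idea under simplifying assumptions: pairwise products serve as the candidate nonlinear inputs, logistic regression stands in for the linear-system classifier, and a basic genetic algorithm selects the input subset by cross-validated accuracy; the breast-cancer data is a placeholder for the digit and face data.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

# Linear inputs plus pairwise products as candidate nonlinear inputs.
X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
X_all = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)
n_feat = X_all.shape[1]

def fitness(mask):
    if not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=2000)           # stand-in linear classifier
    return cross_val_score(clf, X_all[:, mask], y, cv=3).mean()

rng = np.random.default_rng(0)
pop = rng.random((20, n_feat)) < 0.1                  # chromosomes: boolean feature masks

for _ in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[scores.argsort()[::-1][:10]]        # keep the fittest half
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, n_feat)                 # single-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_feat) < 0.01              # mutation: flip a few bits
        children.append(np.logical_xor(child, flip))
    pop = np.vstack([parents, np.array(children)])

scores = np.array([fitness(m) for m in pop])
best = pop[scores.argmax()]
print("selected %d of %d inputs, CV accuracy %.3f" % (best.sum(), n_feat, scores.max()))
```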

12.
The kernel function method in the support vector machine (SVM) is an excellent tool for nonlinear classification. Designing a kernel function for an SVM nonlinear classification problem is difficult, even for the polynomial kernel. In this paper, we propose a new kind of polynomial kernel function, called the semi-tensor product kernel (STP-kernel), for SVM nonlinear classification, based on the theory of the semi-tensor product (STP) of matrices. We show the existence of the STP-kernel function and verify that it is indeed a polynomial kernel. In addition, we show the existence of the reproducing kernel Hilbert space (RKHS) associated with the STP-kernel. Compared with existing methods, it is much easier to construct the nonlinear feature mapping for an SVM nonlinear classification problem via an STP operator.
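The sketch below shows only how a custom kernel plugs into an SVM in scikit-learn; the kernel shown is the ordinary polynomial kernel, not the paper's STP-kernel construction.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

def poly_kernel(A, B, degree=3, coef0=1.0):
    """Ordinary polynomial kernel; a custom construction such as the STP-kernel
    would replace this Gram-matrix computation."""
    return (A @ B.T + coef0) ** degree

clf = SVC(kernel=poly_kernel)        # any callable returning the Gram matrix can be used
print("CV accuracy: %.3f" % cross_val_score(clf, X, y, cv=5).mean())
```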

13.
Nonlinear classification models have better classification performance than linear classifiers. However, for many nonlinear classification problems, piecewise-linear discriminant functions can approximate nonlinear discriminant functions. In this study, we combine the data envelopment analysis (DEA) algorithm with class information and propose a novel DEA-based classifier that constructs a piecewise-linear discriminant function; in this classifier, the nonnegativity conditions of the DEA model are relaxed and class information is added. Finally, experiments on a UCI data set demonstrate the accuracy and efficiency of the proposed model.

14.
A common misperception within the neural network community is that even with nonlinearities in their hidden layer, autoassociators trained with backpropagation are equivalent to linear methods such as principal component analysis (PCA). Our purpose is to demonstrate that nonlinear autoassociators actually behave differently from linear methods and that they can outperform these methods when used for latent extraction, projection, and classification. While linear autoassociators emulate PCA, and thus exhibit a flat or unimodal reconstruction error surface, autoassociators with nonlinearities in their hidden layer learn domains by building reconstruction error surfaces that, depending on the task, contain multiple local valleys. This interpolation bias allows nonlinear autoassociators to represent appropriate classifications of nonlinear multimodal domains, in contrast to linear autoassociators, which are inappropriate for such tasks. In fact, autoassociators with hidden unit nonlinearities can be shown to perform nonlinear classification and nonlinear recognition.

15.
This paper is the first to examine gender classification of face images from the perspective of linear separability. Through comparative analysis of commonly used linear and nonlinear feature-extraction methods and a class of improved nonlinear feature-extraction methods, together with experimental comparisons of gender classification under different conditions, it comprehensively investigates the linear separability of the data produced by each feature-extraction method and the resulting classification performance. It is also the first to consider gender classification of face images from the perspective of facial skin color and related cues, and gives instructive classification methods and recommendations.

16.
We present a novel method of nonlinear discriminant analysis involving a set of locally linear transformations called "Locally Linear Discriminant Analysis" (LLDA). The underlying idea is that global nonlinear data structures are locally linear and local structures can be linearly aligned. Input vectors are projected into each local feature space by linear transformations found to yield locally linearly transformed classes that maximize the between-class covariance while minimizing the within-class covariance. In face recognition, linear discriminant analysis (LDA) has been widely adopted owing to its efficiency, but it does not capture nonlinear manifolds of faces which exhibit pose variations. Conventional nonlinear classification methods based on kernels such as generalized discriminant analysis (GDA) and support vector machine (SVM) have been developed to overcome the shortcomings of the linear method, but they have the drawback of high computational cost of classification and overfitting. Our method is for multiclass nonlinear discrimination and it is computationally highly efficient as compared to GDA. The method does not suffer from overfitting by virtue of the linear base structure of the solution. A novel gradient-based learning algorithm is proposed for finding the optimal set of local linear bases. The optimization does not exhibit a local-maxima problem. The transformation functions facilitate robust face recognition in a low-dimensional subspace, under pose variations, using a single model image. The classification results are given for both synthetic and real face data.

17.
In this brief, prior knowledge over general nonlinear sets is incorporated into nonlinear kernel classification problems as linear constraints in a linear program. These linear constraints are imposed at arbitrary points, not necessarily where the prior knowledge is given. The key tool in this incorporation is a theorem of the alternative for convex functions that converts nonlinear prior knowledge implications into linear inequalities without the need to kernelize these implications. Effectiveness of the proposed formulation is demonstrated on publicly available classification data sets, including a cancer prognosis data set. Nonlinear kernel classifiers for these data sets exhibit marked improvements upon the introduction of nonlinear prior knowledge compared to nonlinear kernel classifiers that do not utilize such knowledge.

18.
Learning a proper distance metric is an important problem in document classification, because the similarity of samples is usually measured by a distance metric. In this paper, we address the nonlinear metric learning problem with application to document classification. First, we propose a new representation of a nonlinear metric as a linear combination of basic kernels. Second, we give a linear metric learning method based on triplet constraints and k-nearest neighbors, and then extend it to a nonlinear, multiple-kernel method using the above nonlinear metric. The corresponding problem can be rewritten as an unconstrained optimization problem over positive definite matrices. Finally, to ensure that the learned distance matrix is positive definite, we provide an improved intrinsic steepest descent algorithm with adaptive step size to solve this unconstrained optimization. Experimental results show that the proposed method is effective on several document classification problems.
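A simplified sketch of the linear (Mahalanobis) special case only: triplet constraints with a hinge loss, gradient steps on the metric matrix, and projection back onto the positive semidefinite cone. The paper's multiple-kernel nonlinear metric and intrinsic steepest-descent algorithm are not reproduced here, and the iris data is a placeholder for document features.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)
n, d = X.shape

# Build triplets (i, j, k): j shares i's label, k does not.
triplets = []
for _ in range(2000):
    i = rng.integers(n)
    j = rng.choice(np.flatnonzero(y == y[i]))
    k = rng.choice(np.flatnonzero(y != y[i]))
    triplets.append((i, j, k))

M = np.eye(d)                                     # Mahalanobis matrix, kept positive semidefinite
lr, margin = 0.001, 1.0

def sqdist(a, b):
    diff = a - b
    return diff @ M @ diff

for epoch in range(20):
    for i, j, k in triplets:
        # Hinge loss on the triplet: pull the same-class pair closer than the different-class pair.
        if sqdist(X[i], X[j]) + margin > sqdist(X[i], X[k]):
            dij, dik = X[i] - X[j], X[i] - X[k]
            M -= lr * (np.outer(dij, dij) - np.outer(dik, dik))
    # Project back onto the positive semidefinite cone after each epoch.
    w, V = np.linalg.eigh(M)
    M = V @ np.diag(np.clip(w, 0, None)) @ V.T

# Evaluate: k-NN in the learned metric via the linear map L with M = L^T L.
w, V = np.linalg.eigh(M)
L = np.diag(np.sqrt(np.clip(w, 0, None))) @ V.T
print("k-NN CV accuracy in learned metric: %.3f"
      % cross_val_score(KNeighborsClassifier(3), X @ L.T, y, cv=5).mean())
```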

19.
Dimensionality reduction is the process of mapping high-dimensional patterns to a lower-dimensional subspace. When done prior to classification, estimates obtained in the lower-dimensional subspace are more reliable. For some classifiers, there is also an improvement in performance due to the removal of the diluting effect of redundant information. Most existing approaches to dimensionality reduction are based on scatter matrices or other statistics of the data that do not directly correlate with classification accuracy. The optimality criterion of choice for classification is the Bayes error, which, however, is usually difficult to express analytically. We propose an optimality criterion based on an approximation of the Bayes error and use it to formulate a linear and a nonlinear method of dimensionality reduction. The nonlinear method relies on a multilayered perceptron that produces the lower-dimensional representation as its output; it thus differs from autoassociator-like multilayered perceptrons that have been proposed and used for dimensionality reduction. Our results show that the nonlinear method is, as anticipated, superior to the linear method in that it can unfold a nonlinear manifold. In addition, the nonlinear method provides a substantially better lower-dimensional representation (for classification purposes) than Fisher's linear discriminant (FLD) and two other commonly used nonlinear dimensionality-reduction methods.

20.
In pulse-signal analysis and recognition, time-domain and frequency-domain methods have difficulty extracting the nonlinear information in pulse signals, and traditional machine-learning methods require manually defined features and cannot learn features automatically. A pulse analysis and recognition method based on unthresholded recurrence plots and a convolutional neural network is proposed. Based on nonlinear dynamics theory, pulse signals are converted into unthresholded recurrence plots; a VGG-16 convolutional neural network then extracts the nonlinear features of the recurrence plots automatically, and a pulse classification model is built. Experimental results show that the method achieves a classification accuracy of 98.14%, an improvement over existing pulse classification methods. This study provides a new idea and method for pulse-signal classification and has practical value for objective pulse diagnosis.
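A minimal numpy sketch of the first step, converting a 1-D signal into an unthresholded recurrence plot via time-delay embedding; the embedding dimension, delay, and the synthetic sine signal are placeholders, and the downstream VGG-16 classifier is not shown.

```python
import numpy as np

def unthresholded_recurrence_plot(signal, dim=3, delay=2):
    """Pairwise-distance matrix of delay-embedded states (no threshold applied)."""
    n = len(signal) - (dim - 1) * delay
    states = np.stack([signal[i: i + n] for i in range(0, dim * delay, delay)], axis=1)
    diff = states[:, None, :] - states[None, :, :]
    return np.linalg.norm(diff, axis=-1)

# Synthetic stand-in for a pulse waveform.
t = np.linspace(0, 10 * np.pi, 600)
pulse = np.sin(t) + 0.3 * np.sin(3.1 * t)
rp = unthresholded_recurrence_plot(pulse)
print(rp.shape)    # one grayscale "image" per signal, e.g. resized to 224x224 for VGG-16
```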
