Related Articles (20 results)
1.
Hush, Don; Scovel, Clint. Machine Learning, 2003, 51(1): 51-71.
This paper studies the convergence properties of a general class of decomposition algorithms for support vector machines (SVMs). We provide a model algorithm for decomposition, and prove necessary and sufficient conditions for stepwise improvement of this algorithm. We introduce a simple rate certifying condition and prove a polynomial-time bound on the rate of convergence of the model algorithm when it satisfies this condition. Although it is not clear that existing SVM algorithms satisfy this condition, we provide a version of the model algorithm that does. For this algorithm we show that when the slack multiplier C satisfies 1/2 ≤ C ≤ mL, where m is the number of samples and L is a matrix norm, it takes no more than 4LC²m⁴/ε iterations to drive the criterion to within ε of its optimum.
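Illustration (not from the paper): the helper below just evaluates the stated bound 4LC²m⁴/ε for hypothetical values of L, C, m, and ε, to give a feel for how quickly it grows with the sample size m.

```python
# Illustrative only: evaluate the worst-case iteration bound 4*L*C^2*m^4/eps quoted in the
# abstract, for made-up problem constants (all values below are hypothetical).

def iteration_bound(L: float, C: float, m: int, eps: float) -> float:
    """Upper bound on the number of decomposition iterations."""
    return 4.0 * L * C ** 2 * m ** 4 / eps

print(f"{iteration_bound(L=1.0, C=10.0, m=1000, eps=1e-3):.3e}")
```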

2.
A Simple Decomposition Method for Support Vector Machines (cited 21 times)
The decomposition method is currently one of the major methods for solving support vector machines. An important issue of this method is the selection of working sets. In this paper, through the design of decomposition methods for bound-constrained SVM formulations, we demonstrate that working set selection is not a trivial task. From experimental analysis we then propose a simple working set selection that leads to faster convergence for difficult cases. Numerical experiments on different types of problems are conducted to demonstrate the viability of the proposed method.
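Illustration: the abstract does not spell out its selection rule, but a common working-set choice in SVM decomposition solvers is the maximal violating pair of the dual problem. The sketch below is a generic version of that rule, with assumed variable names; it is my own illustration, not the paper's method.

```python
import numpy as np

def maximal_violating_pair(alpha, grad, y, C):
    """Pick the working set (i, j) as the maximal violating pair of the SVM dual.

    alpha : current dual variables, shape (m,)
    grad  : gradient of the dual objective, grad = Q @ alpha - 1, shape (m,)
    y     : labels in {-1, +1}, shape (m,)
    C     : box constraint
    """
    up = ((y == +1) & (alpha < C)) | ((y == -1) & (alpha > 0))
    low = ((y == +1) & (alpha > 0)) | ((y == -1) & (alpha < C))
    score = -y * grad                                   # violation measure -y_t * grad_t
    i = np.where(up)[0][np.argmax(score[up])]
    j = np.where(low)[0][np.argmin(score[low])]
    return (i, j) if score[i] - score[j] > 1e-6 else None   # None => KKT conditions met

# Toy usage with a linear kernel on random data (illustrative values only).
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
Q = (y[:, None] * y[None, :]) * (X @ X.T)               # Q_ij = y_i y_j k(x_i, x_j)
alpha = np.zeros(20)
grad = Q @ alpha - 1.0
print(maximal_violating_pair(alpha, grad, y, C=1.0))
```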

3.
In this article, we propose methods for deriving a symbolic interpretation of data in the form of rule-based learning systems by using Support Vector Machines (SVM). First, Radial Basis Function Neural Network (RBFNN) learning techniques are explored, as is usual in the literature, since the local nature of this paradigm makes it a suitable platform for rule extraction. By using support vectors from a learned SVM, our approach can use any standard Radial Basis Function (RBF) learning technique for rule extraction while avoiding the problem of overlap between classes. We show that, by merging node centers and support vectors, explanation rules can be obtained in the form of ellipsoids and hyper-rectangles. Next, in a dual form, following the framework developed for RBFNN, we construct an algorithm for SVM: taking SVM as the main paradigm, the geometry in the input space is defined from a combination of support vectors and prototype vectors obtained from any clustering algorithm. Finally, the randomness associated with clustering algorithms or RBF learning is avoided by using only a learned SVM to define the geometry of the studied region. Results from a number of experiments on benchmarks in different domains are also given, leading to a conclusion on the viability of our proposal.
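Illustration: a rough sketch of the hyper-rectangle idea under my own simplifying assumptions (train an SVM, pool its support vectors with k-means prototypes of each class, and read off an axis-aligned box per class); the authors' exact construction, including the ellipsoidal rules, is not reproduced.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Sketch only: combine support vectors with cluster prototypes and describe each
# class by the axis-aligned hyper-rectangle spanned by those points.
X, y = make_blobs(n_samples=200, centers=2, random_state=0)
svm = SVC(kernel="rbf", gamma=0.5).fit(X, y)

rules = {}
for cls in np.unique(y):
    sv = svm.support_vectors_[y[svm.support_] == cls]      # support vectors of this class
    protos = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X[y == cls]).cluster_centers_
    pts = np.vstack([sv, protos])
    rules[cls] = (pts.min(axis=0), pts.max(axis=0))          # lower/upper corners of the box

for cls, (lo, hi) in rules.items():
    clause = " AND ".join(f"{l:.2f} <= x{d} <= {h:.2f}" for d, (l, h) in enumerate(zip(lo, hi)))
    print(f"IF {clause} THEN class {cls}")
```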

4.
Pavel Laskov. Machine Learning, 2002, 46(1-3): 315-349.
The article presents a general view of a class of decomposition algorithms for training Support Vector Machines (SVM) which are motivated by the method of feasible directions. The first such algorithm for the pattern recognition SVM was proposed by Joachims (1999; in Schölkopf et al. (Eds.), Advances in kernel methods - Support vector learning, pp. 185-208, MIT Press). Its extension to the regression SVM, the maximal inconsistency algorithm, was recently presented by the author (Laskov, 2000; in Solla, Leen, & Müller (Eds.), Advances in neural information processing systems 12, pp. 484-490, MIT Press). A detailed account of both algorithms is given, complemented by a theoretical investigation of the relationship between the two algorithms. It is proved that the two algorithms are equivalent for the pattern recognition SVM, and the feasible-direction interpretation of the maximal inconsistency algorithm is given for the regression SVM. The experimental results demonstrate an order-of-magnitude decrease of training time in comparison with training without decomposition, and, most importantly, provide experimental evidence of the linear convergence rate of the feasible-direction decomposition algorithms.

5.
Research on Multi-Class Classification Algorithms for Support Vector Machines (cited 33 times, 4 self-citations)
A new multi-class support vector machine (SVM) classification algorithm based on a binary tree structure is proposed. The algorithm resolves the unclassifiable-region problem present in the main existing algorithms. To achieve high generalization ability, classes whose samples are widely distributed must be placed at the upper nodes of the binary tree so that they obtain a larger partition of the space; the algorithm therefore uses minimal enclosing hypercubes and minimal enclosing hyperspheres as the class-containment criteria for generating the binary tree. Experimental results show that the algorithm offers certain advantages.
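Illustration: a minimal sketch of the tree-ordering idea, using the distance from a class centroid to its farthest sample as a crude stand-in for the minimal enclosing hypersphere, and a chain of binary SVMs as a degenerate binary tree. This is an assumption-laden simplification, not the paper's algorithm.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import load_iris

# Sketch: order classes by a crude enclosing-sphere radius (centroid-to-farthest-sample),
# then train a chain of binary SVMs that peel off the "widest" remaining class each time.
X, y = load_iris(return_X_y=True)

def enclosing_radius(pts):
    c = pts.mean(axis=0)
    return np.linalg.norm(pts - c, axis=1).max()

order = sorted(np.unique(y), key=lambda c: -enclosing_radius(X[y == c]))  # widest class first

nodes = []                      # (class_label, binary SVM "this class vs the rest below")
Xr, yr = X, y
for cls in order[:-1]:
    clf = SVC(kernel="rbf", gamma="scale").fit(Xr, (yr == cls).astype(int))
    nodes.append((cls, clf))
    Xr, yr = Xr[yr != cls], yr[yr != cls]
leaf = order[-1]                # last remaining class needs no classifier

def predict(x):
    for cls, clf in nodes:
        if clf.predict(x.reshape(1, -1))[0] == 1:
            return cls
    return leaf

print([predict(x) for x in X[:5]], y[:5])
```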

6.
We study the typical properties of polynomial Support Vector Machines within a statistical mechanics approach that takes into account the number of high-order features relative to the input space dimension. We analyze the effect of different feature normalizations on the generalization error for different kinds of learning tasks. If the normalization is adequately selected, hierarchical learning of features of increasing order takes place as a function of the training set size; otherwise, the performance worsens and there is no hierarchical learning at all.

7.
Support vector machines have been shown in practice to generalize well from small samples. In handwritten-digit recognition experiments, however, SVMs are noticeably slower than neural networks in the classification phase, so simplifying the SVM decision function, and thereby speeding up classification, without harming generalization is a worthwhile line of research. This paper uses an iterative learning method to simplify the SVM decision function; experiments show that the method greatly simplifies the decision function and is easy to implement.
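Illustration: one simple way to make "simplify the decision function" concrete (my own sketch, not necessarily the paper's iterative scheme) is to keep only the support vectors with the largest dual coefficients and refit their weights by least squares so that the reduced expansion approximates the full decision function.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.datasets import make_classification

# Sketch: approximate the full SVM decision function with a reduced expansion over the
# k support vectors carrying the largest |dual coefficient|, refitting the coefficients
# by least squares on the training set.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
gamma = 0.1
svm = SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X, y)

full = svm.decision_function(X)                          # target values to approximate
sv, coef = svm.support_vectors_, svm.dual_coef_.ravel()
k = max(1, len(sv) // 4)                                 # keep 25% of the support vectors
keep = np.argsort(-np.abs(coef))[:k]
K = rbf_kernel(X, sv[keep], gamma=gamma)
beta, *_ = np.linalg.lstsq(np.hstack([K, np.ones((len(X), 1))]), full, rcond=None)

reduced = K @ beta[:-1] + beta[-1]
agreement = np.mean(np.sign(reduced) == np.sign(full))
print(f"kept {k}/{len(sv)} support vectors, sign agreement with full SVM: {agreement:.3f}")
```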

8.
Research on Decomposition Algorithms for Support Vector Machines (cited once)
Decomposition is currently the main method for training support vector machines on large data sets. Decomposition algorithms differ in the size of the working set, the rules by which it is generated, and the method used to solve the QP subproblem. This paper reviews the origin and development of decomposition algorithms and the corresponding working-set selection algorithms, with emphasis on new methods for optimizing the working subset, new working-set selection strategies, and the associated convergence proofs.

9.
Extension of Support Vector Machines to Multi-Class Classification Problems (cited 51 times, 4 self-citations)
Support vector machines (SVMs) were originally designed for binary classification and cannot be applied directly to multi-class problems; how to extend them effectively to multi-class classification is an open research question. This paper surveys the main existing multi-class SVM algorithms, systematically compares their training speed, classification speed, and generalization ability, and analyzes their shortcomings and the problems that remain to be solved.
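Illustration: the two most common constructions such surveys compare, one-against-rest and one-against-one, can be tried side by side with scikit-learn's meta-estimators; the data set and parameters below are arbitrary toy choices.

```python
import time
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier, OneVsOneClassifier
from sklearn.svm import SVC

# Sketch: compare the two classical multi-class constructions on a small benchmark.
X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("one-vs-rest", OneVsRestClassifier(SVC(kernel="rbf", gamma="scale"))),
                  ("one-vs-one", OneVsOneClassifier(SVC(kernel="rbf", gamma="scale")))]:
    t0 = time.perf_counter()
    clf.fit(Xtr, ytr)
    train_s = time.perf_counter() - t0
    t0 = time.perf_counter()
    acc = clf.score(Xte, yte)
    test_s = time.perf_counter() - t0
    print(f"{name}: train {train_s:.2f}s, predict {test_s:.2f}s, accuracy {acc:.3f}")
```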

10.
11.
A Multi-Class Incremental Learning Algorithm Based on Support Vector Machines (cited 8 times)
朱美琳, 杨佩. 计算机工程, 2006, 32(17): 77-79.
Support vector machines have been applied successfully to classification and regression problems, but because they require solving a quadratic program, they have certain drawbacks on large-scale data; for multi-class problems in particular, existing SVM algorithms have excessively high computational complexity. This paper proposes an SVM-based incremental learning algorithm suited to multi-class problems and applies it to a practical problem.
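Illustration: a crude sketch of one standard incremental heuristic, which retains only the support vectors of the current model and refits on them plus each new batch. This is an assumption about the general idea, not the algorithm of the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import load_digits

# Sketch: incremental multi-class training that keeps only the current support vectors
# between batches, so each refit works on a compressed data set.
X, y = load_digits(return_X_y=True)
batches = np.array_split(np.random.default_rng(0).permutation(len(X)), 5)

Xk, yk = X[batches[0]], y[batches[0]]
model = SVC(kernel="rbf", gamma="scale").fit(Xk, yk)
for idx in batches[1:]:
    # merge retained support vectors with the new batch and refit
    Xk = np.vstack([model.support_vectors_, X[idx]])
    yk = np.concatenate([yk[model.support_], y[idx]])
    model = SVC(kernel="rbf", gamma="scale").fit(Xk, yk)
    print(f"batch added: {len(idx)} samples, kept set size: {len(Xk)}")
```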

12.
An Online Learning Algorithm for Least Squares Twin Support Vector Machines (cited once)
An online learning algorithm is proposed for the least-squares twin support vector machine, which uses two non-parallel separating hyperplanes. By exploiting the matrix-inversion lemma, the proposed online algorithm makes full use of previous training results and avoids inverting large matrices, thereby reducing computational complexity. Simulation results verify the effectiveness of the proposed algorithm.
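Illustration: the matrix-inversion-lemma trick can be shown on a plain regularized least-squares fit (a stand-in for one LSTSVM subproblem, with hypothetical variable names): when a new sample arrives, the inverse of AᵀA + λI is updated with a rank-one Sherman-Morrison step instead of being recomputed.

```python
import numpy as np

# Sketch: rank-one (Sherman-Morrison) update of (A^T A + lam*I)^(-1) when a new row a is
# appended to A -- the same device the abstract credits for avoiding large matrix inversions.
rng = np.random.default_rng(0)
d, lam = 5, 1e-2
A = rng.normal(size=(50, d))
b = rng.normal(size=50)

P = np.linalg.inv(A.T @ A + lam * np.eye(d))        # inverse maintained online
w = P @ (A.T @ b)                                   # current least-squares solution

def add_sample(P, w, a, t):
    """Update inverse P and solution w after observing feature row a with target t."""
    Pa = P @ a
    P = P - np.outer(Pa, Pa) / (1.0 + a @ Pa)       # Sherman-Morrison rank-one update
    w = w + P @ a * (t - a @ w)                     # recursive least-squares correction
    return P, w

a_new, t_new = rng.normal(size=d), 0.7
P, w = add_sample(P, w, a_new, t_new)

# sanity check against a full batch re-solve
A2, b2 = np.vstack([A, a_new]), np.append(b, t_new)
w_batch = np.linalg.solve(A2.T @ A2 + lam * np.eye(d), A2.T @ b2)
print(np.allclose(w, w_batch))
```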

13.
This paper reviews the basic theory of SVMs and the current multi-class SVM classification algorithms with their strengths and weaknesses, proposes a boundary-vector extraction algorithm, and uses it to improve the one-against-rest (1ar) and one-against-one (1a1) multi-class SVM algorithms. Experimental results show that the boundary-vector extraction algorithm effectively reduces the number of training samples and shortens SVM training time while preserving the generalization ability of the classifier; with large training sets in particular, 1arΔ provides the best training performance.
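Illustration: a hedged sketch of a boundary-vector heuristic in which a sample is kept only if one of its nearest neighbours belongs to another class, after which the multi-class SVM is trained on the reduced set. The selection rule is my own simplification; the authors' 1arΔ variant is not reproduced.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier

# Sketch: retain a sample only if at least one of its k nearest neighbours belongs to a
# different class (a cheap proxy for "boundary vectors"), then train on the reduced set.
X, y = load_digits(return_X_y=True)
k = 10
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
_, idx = nn.kneighbors(X)                              # idx[:, 0] is the point itself
boundary = np.array([np.any(y[idx[i, 1:]] != y[i]) for i in range(len(X))])

print(f"kept {boundary.sum()} of {len(X)} samples")
full = OneVsRestClassifier(SVC(gamma="scale")).fit(X, y)
reduced = OneVsRestClassifier(SVC(gamma="scale")).fit(X[boundary], y[boundary])
print("accuracy on all data: full", full.score(X, y), "reduced", reduced.score(X, y))
```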

14.
An Online SVM Learning Algorithm for Text Classification (cited 5 times, 4 self-citations)
This paper proposes an online learning algorithm for RBF support vector machines applied to text classification. Exploiting the locality of the RBF kernel, the algorithm retrains only on the training samples that lie both within a neighborhood of the new training sample and inside the "possible band", in order to update the existing SVM. To determine the neighborhood size adaptively and efficiently, the ξα generalization error estimate is used to qualitatively estimate the generalization error of the current SVM on all existing training samples. A generalization-ability evolution factor is also introduced so that the resulting SVM can adjust its classification performance automatically and avoid degradation. Comparative tests on the real TREC-5 corpus show that the algorithm significantly speeds up incremental learning while preserving the classification performance of the resulting SVM.

15.
1. Introduction. Learning methods such as the perceptron and neural networks are based on the empirical risk minimization (ERM) principle, but in practical learning systems built on small samples, minimizing the empirical risk does not guarantee that the expected risk is minimized. For linearly non-separable data, these methods cannot reliably indicate whether the data are piecewise linearly separable, and simply introducing a nonlinear transformation easily leads to overfitting, which is clearly not what we want.

16.
Face Detection Based on Singular Value Decomposition and Support Vector Machines (cited 3 times)
Face detection is important for automatic face recognition. The complexity and variability of facial image features make it difficult to train a face-pattern classifier. This paper proposes a face detection algorithm based on support vector machines (SVM): singular value decomposition is used to extract features from the training samples, which are then classified by an SVM, effectively reducing the training difficulty. A second-order polynomial is used as the kernel function of the SVM classifier. Experimental results show that the method is effective.
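Illustration: a minimal sketch of the described pipeline, with scikit-learn's digit images standing in for face patches: the singular values of each image matrix serve as the feature vector and a second-degree polynomial SVM does the classification.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Sketch: use the singular values of each (8x8) image as its feature vector,
# then classify with an SVM whose kernel is a second-order polynomial.
digits = load_digits()
images, y = digits.images, (digits.target == 0).astype(int)   # toy binary task: "0" vs rest

def svd_features(img):
    return np.linalg.svd(img, compute_uv=False)                # singular values only

X = np.array([svd_features(im) for im in images])
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC(kernel="poly", degree=2, coef0=1.0, gamma="scale").fit(Xtr, ytr)
print("test accuracy:", clf.score(Xte, yte))
```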

17.
The Maximal Discrepancy (MD) is a powerful statistical method, which has been proposed for model selection and error estimation in classification problems. This approach is particularly attractive when dealing with small sample problems, since it avoids the use of a separate validation set. Unfortunately, the MD method requires a bounded loss function, which is usually avoided by most learning algorithms, including the Support Vector Machine (SVM), because it gives rise to a non-convex optimization problem. We derive in this work a new approach for rigorously applying the MD technique to the error estimation of the SVM and, at the same time, preserving the original SVM framework.
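Illustration: the MD quantity itself can be estimated with the generic label-flipping recipe from the MD literature (not the authors' SVM-specific derivation): split the sample in two halves, flip the labels of one half, train, and read the discrepancy off the 0/1 errors on the two halves. Using a hinge-loss SVM as the trainer, as below, is exactly the loose surrogate step the paper sets out to make rigorous.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Sketch: generic maximal-discrepancy estimate via label flipping, with the 0/1 loss
# measured after training.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)
y = 2 * y - 1                                   # labels in {-1, +1}

half = len(X) // 2
y_flip = y.copy()
y_flip[:half] *= -1                             # flip the labels of the first half

clf = SVC(kernel="linear", C=1.0).fit(X, y_flip)
pred = clf.predict(X)
err_first = np.mean(pred[:half] != y[:half])    # 0/1 error on the (original) first half
err_second = np.mean(pred[half:] != y[half:])   # 0/1 error on the second half
md = err_first - err_second                     # maximal-discrepancy estimate
print(f"MD estimate: {md:.3f}")
```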

18.
When dealing with pattern recognition problems one encounters different types of prior knowledge, and it is important to incorporate such knowledge into the classification method at hand. A very common type of prior knowledge is that many data sets lie on some kind of manifold. Distance-based classification methods can exploit this through a modified distance measure called the geodesic distance. We introduce a new kind of kernel for support vector machines which incorporates the geodesic distance and is therefore applicable in cases where such transformation invariance is known. Experimental results show that the performance of our method is comparable to that of other state-of-the-art methods.
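Illustration: one way to realize a geodesic-distance kernel (an assumption on my part, not the paper's exact kernel) is to approximate geodesic distances by shortest paths on a k-nearest-neighbour graph, plug them into a Gaussian-shaped similarity, and pass the result to the SVM as a precomputed kernel. Such a matrix is not guaranteed to be positive semi-definite, which is one issue work in this line has to address.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path
from sklearn.svm import SVC

# Sketch: geodesic distances from shortest paths on a kNN graph, turned into a
# Gaussian-shaped "kernel" and used as a precomputed kernel matrix.
X, y = make_moons(n_samples=200, noise=0.05, random_state=0)

graph = kneighbors_graph(X, n_neighbors=8, mode="distance")
D = shortest_path(graph, method="D", directed=False)        # approximate geodesic distances
sigma = np.median(D[np.isfinite(D)])
K = np.exp(-(D / sigma) ** 2)                               # may not be PSD; see note above

clf = SVC(kernel="precomputed").fit(K, y)                   # train on the kernel matrix
print("training accuracy:", clf.score(K, y))
```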

19.
This paper presents a novel active learning approach for transductive support vector machines with applications to text classification. The concept of the centroid of the support vectors is proposed so that selective sampling based on measuring the distance from the unlabeled samples to the centroid is feasible and simple to compute. Under additional hypotheses, active learning offers better performance in comparison with regular inductive SVMs and with transductive SVMs using random sampling, and it is even competitive with transductive SVMs trained on all available training data. Experimental results show that our approach is efficient and easy to implement.
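Illustration: a minimal sketch of the selective-sampling rule described here, with an ordinary inductive SVC standing in for the transductive SVM: compute the centroid of the current support vectors and query the unlabeled samples closest to it.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Sketch: active learning by querying the unlabeled points nearest to the centroid of the
# current support vectors.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
rng = np.random.default_rng(0)
labeled = list(rng.choice(len(X), size=20, replace=False))
unlabeled = [i for i in range(len(X)) if i not in labeled]

clf = SVC(kernel="linear").fit(X[labeled], y[labeled])
for _ in range(5):                                           # five query rounds
    centroid = clf.support_vectors_.mean(axis=0)             # centroid of the support vectors
    dists = np.linalg.norm(X[unlabeled] - centroid, axis=1)
    pick = [unlabeled[i] for i in np.argsort(dists)[:10]]    # query the 10 closest samples
    labeled += pick
    unlabeled = [i for i in unlabeled if i not in pick]
    clf = SVC(kernel="linear").fit(X[labeled], y[labeled])
    print(f"labeled: {len(labeled)}, accuracy on the rest: "
          f"{clf.score(X[unlabeled], y[unlabeled]):.3f}")
```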

20.
A Support Vector Machine Classification Algorithm Based on Geodesic Distance (cited once)
全勇, 杨杰. 自动化学报, 2005, 31(2): 202-208.
When dealing with pattern recognition problems one encounters different types of prior knowledge, and it is important to incorporate such knowledge into the classification method at hand. A very common type of prior knowledge is that many data sets lie on some kind of manifold. Distance-based classification methods can exploit this through a modified distance measure called the geodesic distance. We introduce a new kind of kernel for support vector machines which incorporates the geodesic distance and is therefore applicable in cases where such transformation invariance is known. Experimental results show that the performance of our method is comparable to that of other state-of-the-art methods.
