Similar Literature
20 similar documents found.
1.
郑逢德  张鸿宾 《计算机科学》2011,38(12):247-249,254
A fast support vector regression (SVR) algorithm is proposed. The SVR quadratic programming problem, which carries two sets of constraints, is first decomposed into two smaller quadratic programs, each with a single set of constraints. Each smaller problem is then solved by a fast iterative algorithm that converges quickly from any initial point, avoiding a quadratic-program solver altogether and thus greatly accelerating training. Experiments on several standard data sets show that the algorithm is much faster than conventional support vector machines while retaining good generalization performance.

2.
To reduce the large amount of time consumed by the least squares support vector machine (LSSVM) optimization problem, a method that solves it with Newton's method (called Newton-LSSVM) is proposed. The LSSVM optimization problem is first recast in unconstrained form and then solved iteratively by Newton's method. Experimental results show that the method greatly reduces LSSVM training time while achieving the same generalization ability as solving the LSSVM problem by conventional optimization.
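The reduction above can be illustrated on a toy problem. The sketch below uses an assumed unconstrained formulation (0.5·||w||² + 0.5·γ·Σ residuals², not necessarily the paper's exact one); because that objective is quadratic, a single Newton step, i.e. solving H·p = -g, already reaches the optimum:

```python
import numpy as np

# Minimal sketch of a Newton step on an unconstrained LSSVM-style objective:
#   J(w, b) = 0.5*||w||^2 + 0.5*gamma*sum_i (y_i - w.x_i - b)^2
# The objective is quadratic, so one Newton step is exact.

def newton_lssvm(X, y, gamma=1.0):
    n, d = X.shape
    Z = np.hstack([X, np.ones((n, 1))])       # absorb bias b as the last weight
    theta = np.zeros(d + 1)                   # arbitrary initial point
    reg = np.diag(np.r_[np.ones(d), 0.0])     # regularize w only, not b
    grad = reg @ theta - gamma * Z.T @ (y - Z @ theta)
    H = reg + gamma * Z.T @ Z                 # constant Hessian of a quadratic
    theta -= np.linalg.solve(H, grad)         # single Newton step: solve H p = -g
    return theta[:d], theta[d]

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.1, 1.9, 4.1, 5.9])            # roughly y = 2x
w, b = newton_lssvm(X, y, gamma=100.0)
print(round(float(w[0]), 1))                  # 2.0
```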

3.
胡文军  王士同  王娟  颜七笙 《软件学报》2012,23(12):3059-3073
The L2-kernel classifier, which approximates the true density difference under the integrated squared error (ISE) criterion, does not explicitly consider the classification margin, which can limit its accuracy; moreover, solving for its weight vector ultimately reduces to a quadratic programming problem, making training slow, especially on larger samples. To address these two problems, a classification margin is constructed from the density difference between samples and maximized; the resulting problem reduces to a logistic optimization problem, hence the name maximum margin logistic vector machine (MMLVM), and the optimal weights are obtained by gradient descent. Theoretical analysis covers three aspects: global optimality of the weights, the generalization error bound, and algorithmic complexity. Finally, experiments on artificial data and on the UCI, PIE, and USPS data sets show that the theory is sound and that the algorithm resolves both problems with good results.

4.
文传军  柯佳 《计算机工程与应用》2012,48(29):177-180,209
For multi-class classification, a hypersphere support vector machine, the generalized maximum margin spherical support vector machine, is proposed. The algorithm separates positive and negative samples with two concentric hyperspheres and maximizes the difference between the two radii, thereby extracting discriminative information from the positive and negative classes. It also improves the decision rule of hypersphere SVMs by introducing fuzzy membership as a supplementary decision, compensating for the shortcomings of voting among binary classifiers. The relevant properties of the algorithm are analyzed theoretically, and simulation experiments verify its effectiveness.

5.
To train large data sets effectively, different methods of solving for the optimal vector d in the maximum vector-angular margin classifier (MAMC) are proposed, yielding the central vector-angular margin classifier (CAMC), which is then proved equivalent to the minimum enclosing ball (MEB) problem. Since MEB is sensitive to its parameters, a regularized core vector machine (RCVM) is further proposed, and combining CAMC with RCVM gives the regularized core vector machine with central vector-angular margin (CAMCVM). Experiments on benchmark data sets show that CAMC classifies better and that CAMCVM trains large-scale data sets quickly and effectively.
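The MEB machinery that core vector machines exploit can be sketched with the classical Badoiu-Clarkson core-set iteration (a generic illustration, not the paper's CAMCVM solver): repeatedly step the current center a shrinking fraction toward the farthest point.

```python
# Badoiu-Clarkson style sketch: approximate the minimum enclosing ball (MEB)
# center by stepping toward the farthest point with step size 1/(t+1).

def meb_center(points, iters=200):
    c = list(points[0])                       # start at an arbitrary data point
    for t in range(1, iters + 1):
        far = max(points,
                  key=lambda p: sum((pi - ci) ** 2 for pi, ci in zip(p, c)))
        c = [ci + (fi - ci) / (t + 1) for ci, fi in zip(c, far)]
    return c

pts = [(0.0, 0.0), (2.0, 0.0), (1.0, 1.0)]    # true MEB center is (1, 0), radius 1
c = meb_center(pts)
print([round(x, 1) for x in c])               # close to [1.0, 0.0]
```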

6.
The basic concepts of data mining are introduced first, followed by a systematic study of support vector machine learning algorithms with emphasis on their characteristics. The key technique of support vector machines, the kernel function, is then described. Finally, applications of support vector machine learning algorithms in data mining are discussed.

7.
Polynomial Smooth Support Vector Machines
Data classification is a popular topic in data mining research: a model is built from a sample set to obtain an optimal classifier, which can then classify unseen data. The support vector machine is a model for binary classification whose result is expressed in support vectors. In 2001, Lee and Mangasarian proposed SSVM, a smooth support vector machine model that smooths with the integral of the sigmoid function. This paper studies support vector machines smoothed with polynomial functions (PSSVM) and proposes two smoothing polynomial functions. Based on the model's characteristics, the BFGS method and the Newton-Armijo method are applied to solve it; numerical experiments show that the PSSVM model outperforms SSVM in classification performance.

8.
Boundary-Nearest Support Vector Machine
To address the large memory cost and slow training of support vector machines on large samples, an improved algorithm, the boundary-nearest support vector machine, is proposed. Experiments show that, with identical classification results, the improved algorithm trains markedly faster.

9.
To achieve better classification performance, the traditional fuzzy support vector machine (FSVM) is extended into a total margin v-fuzzy support vector machine (TM-v-FSVM). By using differential costs and introducing a total margin and fuzzy membership, it simultaneously addresses the problem of imbalanced training samples and the overfitting of traditional soft margin classifiers, improving the learner's generalization ability. Pattern classification experiments on real UCI data sets show that TM-v-FSVM delivers stable classification performance.

10.
Reduced Support Vector Machines Based on Unsupervised Clustering
To cope with the enormous computational cost of the standard support vector machine algorithm, Lee and Mangasarian proposed the reduced support vector machine; however, their "support vectors" are drawn arbitrarily from the training samples, so the classification results are strongly affected by randomness. This paper uses a simple unsupervised clustering algorithm to select highly representative samples from the sample space as "support vectors" before applying the reduced support vector machine algorithm, effectively reducing the computation. Experiments verify that the method attains a high recognition rate with fewer "support vectors" while greatly shortening the running time.

11.
This paper proposes a new classifier called density-induced margin support vector machines (DMSVMs). DMSVMs belong to a family of SVM-like classifiers and thus inherit good properties from support vector machines (SVMs), e.g., a unique and global solution and a sparse representation of the decision function. For a given data set, DMSVMs require relative density degrees for all training data points; these degrees can be taken as the relative margins of the corresponding points. We propose a method for estimating relative density degrees using the K nearest neighbor method. We also derive and prove an upper bound on the leave-one-out error of DMSVMs for a binary classification problem. Promising results are obtained on toy as well as real-world data sets.
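A relative density degree of the kind described can be sketched as follows. The inverse K-nearest-neighbor distance used here is an assumed form for illustration, not necessarily the paper's exact definition: points in dense regions get degrees near 1, isolated points get small degrees.

```python
# Estimate a relative density degree per point from its K-NN distance,
# normalized so the densest point gets degree 1.

def knn_density_degrees(points, k=2):
    degrees = []
    for i, p in enumerate(points):
        dists = sorted(abs(p - q) for j, q in enumerate(points) if j != i)
        degrees.append(1.0 / (1e-12 + dists[k - 1]))  # inverse K-NN distance
    top = max(degrees)
    return [d / top for d in degrees]                 # relative degrees in (0, 1]

pts = [0.0, 0.1, 0.2, 3.0]                            # 3.0 is isolated
deg = knn_density_degrees(pts, k=2)
print(deg[3] < min(deg[:3]))                          # True: the outlier is least dense
```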

12.
Radius margin bounds for support vector machines with the RBF kernel
Chung KM  Kao WC  Sun CL  Wang LL  Lin CJ 《Neural computation》2003,15(11):2643-2681
An important approach for efficient support vector machine (SVM) model selection is to use differentiable bounds of the leave-one-out (loo) error. Past efforts focused on finding tight bounds of loo (e.g., radius margin bounds, span bounds); however, their practical viability is still not very satisfactory. Duan, Keerthi, and Poo (2003) showed that the radius margin bound gives good prediction for L2-SVM, one of the cases we look at. In this letter, through analyses of why this bound performs well for L2-SVM, we show that finding a bound whose minima are in a region with small loo values may be more important than its tightness. Based on this principle, we propose modified radius margin bounds for L1-SVM (the other case), where the original bound is applicable only to the hard-margin case. Our modification for L1-SVM achieves performance comparable to L2-SVM. To study whether L1- or L2-SVM should be used, we analyze other properties, such as their differentiability, number of support vectors, and number of free support vectors. In this aspect, L1-SVM possesses the advantage of having fewer support vectors. Their implementations are also different, so we discuss related issues in detail.

13.
Embedding feature selection in nonlinear support vector machines (SVMs) leads to a challenging non-convex minimization problem, which can be prone to suboptimal solutions. This paper develops an effective algorithm to directly solve the embedded feature selection primal problem. We use a trust-region method, which is better suited for non-convex optimization compared to line-search methods, and guarantees convergence to a minimizer. We devise an alternating optimization approach to tackle the problem efficiently, breaking it down into a convex subproblem, corresponding to standard SVM optimization, and a non-convex subproblem for feature selection. Importantly, we show that a straightforward alternating optimization approach can be susceptible to saddle point solutions. We propose a novel technique, which shares an explicit margin variable to overcome saddle point convergence and improve solution quality. Experiment results show our method outperforms the state-of-the-art embedded SVM feature selection method, as well as other leading filter and wrapper approaches.

14.
In this paper, a novel Support Vector Machine (SVM) variant, which makes use of robust statistics, is proposed. We investigate the use of statistically robust location and dispersion estimators, in order to enhance the performance of SVMs and test it in two-class and multi-class classification problems. Moreover, we propose a novel method for class specific multi-class SVM, which makes use of the covariance matrix of only one class, i.e., the class that we are interested in separating from the others, while ignoring the dispersion of other classes. We performed experiments in artificial data, as well as in many real world publicly available databases used for classification. The proposed approach performs better than other SVM variants, especially in cases where the training data contain outliers. Finally, we applied the proposed method for facial expression recognition in three well known facial expression databases, showing that it outperforms previously published attempts.

15.
Fuzzy functions with support vector machines
A new fuzzy system modeling (FSM) approach that identifies fuzzy functions using support vector machines (SVMs) is proposed. This approach is structurally different from fuzzy rule base approaches and fuzzy regression methods; it is a new alternative version of the earlier FSM-with-fuzzy-functions approaches. SVM is applied to determine the support vectors for each fuzzy cluster obtained by the fuzzy c-means (FCM) clustering algorithm. The original input variables and the membership values obtained from FCM, together with their transformations, form a new augmented set of input variables. The performance of the proposed system modeling approach is compared with previous fuzzy functions approaches, standard SVM, and LSE methods, using an artificial sparse dataset and a real-life non-sparse dataset. The results indicate that the proposed fuzzy functions with support vector machines approach is a feasible and stable method for regression problems and yields higher performance than classical statistical methods.
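The augmentation step can be sketched concretely: compute standard FCM membership values for a point (here against fixed, assumed cluster centers rather than centers learned by a full FCM run) and append them to the original input.

```python
# Standard FCM membership: u_i = 1 / sum_j (d_i / d_j)^(2/(m-1)),
# computed here against fixed illustrative centers.

def fcm_memberships(x, centers, m=2.0):
    d = [abs(x - c) + 1e-12 for c in centers]       # distances (guarded from 0)
    expo = 2.0 / (m - 1.0)
    return [1.0 / sum((d[i] / d[j]) ** expo for j in range(len(d)))
            for i in range(len(d))]

centers = [0.0, 10.0]
x = 1.0
u = fcm_memberships(x, centers)
augmented = [x] + u                 # original input plus membership features
print(round(u[0], 2), round(sum(u), 2))   # 0.99 1.0 -- near center 0, sums to 1
```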

16.
Support Vector Machine (SVM) employs the Structural Risk Minimization (SRM) principle to generalize better than conventional machine learning methods employing the traditional Empirical Risk Minimization (ERM) principle. When applying SVM to response modeling in direct marketing, however, one has to deal with the practical difficulties: large training data, class imbalance and scoring from binary SVM output. For the first difficulty, we propose a way to alleviate or solve it through a novel informative sampling. For the latter two difficulties, we provide guidelines within the SVM framework so that one can readily use the paper as a quick reference for SVM response modeling: use of different costs for different classes and use of distance to decision boundary, respectively. This paper also provides various evaluation measures for response models in terms of accuracies, lift chart analysis, and computational efficiency.
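The scoring guideline can be sketched in a few lines: rank prospects by the signed distance to a linear decision boundary w·x + b = 0 instead of by the binary label. The weights below are toy, assumed values, not a trained model.

```python
# Score by signed distance to the hyperplane w.x + b = 0 (continuous score
# instead of a binary SVM label), then rank prospects by that score.

def svm_score(x, w, b):
    norm = sum(wi * wi for wi in w) ** 0.5
    return (sum(wi * xi for wi, xi in zip(w, x)) + b) / norm

w, b = [3.0, 4.0], -5.0                       # toy weights, ||w|| = 5
customers = [[1.0, 1.0], [0.0, 0.0], [2.0, 2.0]]
ranked = sorted(customers, key=lambda x: svm_score(x, w, b), reverse=True)
print(ranked[0])                              # [2.0, 2.0] scores highest
```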

17.
18.
《Pattern recognition letters》1999,20(11-13):1183-1190
The basic principles of the support vector machine (SVM) are analyzed. Two approaches to constructing a kernel function which takes into account some local properties of a problem are considered. The first one deals with interactions between neighboring pixels in an image and the second with proximity of the objects in the input space. In the former case, this is equivalent to feature selection and the efficiency of this approach is demonstrated by an application to Texture Recognition. In the latter case, this approach may be considered as either a kind of local algorithm or as a mixture of local and global ones. We demonstrate that the use of such kernels increases the domain of SVM applications.

19.
Distributed support vector machines
A truly distributed (as opposed to parallelized) support vector machine (SVM) algorithm is presented. Training data are assumed to come from the same distribution and are locally stored in a number of different locations with processing capabilities (nodes). In several examples, it has been found that a reasonably small amount of information is interchanged among nodes to obtain an SVM solution, which is better than that obtained when classifiers are trained only with the local data and comparable (although a little bit worse) to that of the centralized approach (obtained when all the training data are available at the same place). We propose and analyze two distributed schemes: a "naïve" distributed chunking approach, where raw data (support vectors) are communicated, and the more elaborate distributed semiparametric SVM, which aims at further reducing the total amount of information passed between nodes while providing a privacy-preserving mechanism for information sharing. We show the feasibility of our proposal by evaluating the performance of the algorithms in benchmarks with both synthetic and real-world datasets.

20.
We propose new support vector machines (SVMs) that incorporate the geometric distribution of an input data set by associating each data point with a possibilistic membership, which measures the relative strength of the self class membership. By using a possibilistic distance measure based on the possibilistic membership, we reformulate conventional SVMs in three ways. The proposed methods are shown to have better classification performance than conventional SVMs in various tests.
