Similar Literature
20 similar records found
1.
When the training set is insufficient, the SVM algorithm needs improvement to raise its evaluation accuracy. A concave semi-supervised support vector machine is adopted, performing machine learning with a small number of labeled samples and a large number of unlabeled samples, which improves the prediction accuracy of the model.
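The concave semi-supervised SVM itself is not in common libraries; as a minimal sketch of the same idea (learning from few labeled plus many unlabeled samples), scikit-learn's self-training wrapper around an SVC can be used. The 90% masking ratio is an arbitrary illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, random_state=0)
y_partial = y.copy()
rng = np.random.RandomState(0)
mask = rng.rand(300) < 0.9        # hide 90% of the labels
y_partial[mask] = -1              # -1 marks "unlabeled" for scikit-learn

# self-training: iteratively pseudo-label high-confidence unlabeled samples
clf = SelfTrainingClassifier(SVC(probability=True, gamma="scale"))
clf.fit(X, y_partial)
acc = clf.score(X, y)
print(f"accuracy against all true labels: {acc:.3f}")
```

Despite seeing only ~10% of the labels during fitting, the wrapped SVM typically recovers most of the fully supervised accuracy on this kind of data.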

2.
3.
Fabien  Grard 《Neurocomputing》2008,71(7-9):1578-1594
For classification, support vector machines (SVMs) have recently been introduced and quickly became the state of the art. Now, the incorporation of prior knowledge into SVMs is the key element that allows performance to be increased in many applications. This paper gives a review of the current state of research regarding the incorporation of two general types of prior knowledge into SVMs for classification. The particular forms of prior knowledge considered here are presented in two main groups: class-invariance and knowledge on the data. The first one includes invariances to transformations, to permutations and in domains of input space, whereas the second one contains knowledge on unlabeled data, the imbalance of the training set or the quality of the data. The methods are then described and classified into the three categories that have been used in the literature: sample methods based on the modification of the training data, kernel methods based on the modification of the kernel and optimization methods based on the modification of the problem formulation. A recent method, developed for support vector regression, considers prior knowledge on arbitrary regions of the input space. It is presented here as applied to the classification case. A discussion is then conducted to regroup sample and optimization methods under a regularization framework.
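The simplest of the three categories, the sample methods, can be sketched by encoding a known invariance as "virtual samples": transformed copies of the training data added to the training set. The sign-flip invariance below is a toy assumption chosen so the labels are genuinely invariant, not an example from the review.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = rng.randn(200, 2)
# label depends only on the radius, so it is invariant under x -> -x
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1).astype(int)

# virtual samples: append the transformed copies with the same labels
X_virtual = np.vstack([X, -X])
y_virtual = np.concatenate([y, y])

plain = SVC(gamma="scale").fit(X, y)
augmented = SVC(gamma="scale").fit(X_virtual, y_virtual)
print(plain.score(X, y), augmented.score(X, y))
```

The augmented classifier is trained on twice the data but never needs the kernel or the optimization problem to be modified, which is what distinguishes sample methods from the other two categories.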

4.
New developments in support vector machines   Cited by: 77 (self-citations: 3, other citations: 77)
Vapnik and colleagues first proposed the support vector machine, a practical algorithm implementing the structural risk minimization principle of statistical learning theory, which solved pattern classification problems quite successfully. Subsequently, a research boom in statistical learning theory and support vector machines arose in the machine learning community. Notable research branches include improving or redesigning support vector machines using optimization techniques, and designing new nonlinear machine learning algorithms that draw on the strengths of statistical learning theory and support vector machines. This paper systematically reviews the new developments in this algorithmic research area over the past ten years.

5.
The well-known sequential minimal optimization (SMO) algorithm is the most commonly used algorithm for numerically solving support vector learning problems. At each iteration of the traditional SMO algorithm, called the 2PSMO algorithm in this paper, only two chosen parameters are jointly optimized. The two parameters are selected either heuristically or randomly, whilst the optimization with respect to the two chosen parameters is performed analytically. The 2PSMO algorithm is naturally generalized to the three-parameter sequential minimal optimization (3PSMO) algorithm in this paper. At each iteration of this new algorithm, three chosen parameters are jointly optimized. As in the 2PSMO algorithm, the three parameters are selected either heuristically or randomly, whilst the optimization with respect to the three chosen parameters is performed analytically. Consequently, the main difference between the two algorithms is that at each iteration the optimization in the 2PSMO algorithm is performed on a line segment, whereas in the 3PSMO algorithm it is performed on a two-dimensional region consisting of infinitely many line segments. This implies that the maximum can be attained more efficiently by the 3PSMO algorithm. The main updating formulae of both algorithms for each support vector learning problem are presented. To assess the efficiency of the 3PSMO algorithm compared with the 2PSMO algorithm, 14 benchmark datasets, 7 for classification and 7 for regression, are tested and their numerical performance compared. Simulation results demonstrate that the 3PSMO significantly outperforms the 2PSMO algorithm in both execution time and computational complexity.
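The analytic two-parameter step that 2PSMO performs at each iteration can be written compactly. This is a minimal sketch following Platt's classic formulation (clipping bounds `L`, `H`; curvature `eta`; error terms `E_i`, `E_j`), not the paper's full algorithm.

```python
import numpy as np

def smo_pair_update(a_i, a_j, y_i, y_j, E_i, E_j, K_ii, K_jj, K_ij, C):
    """One analytic joint optimization of the pair (alpha_i, alpha_j)."""
    # Box bounds keeping y_i*a_i + y_j*a_j constant and 0 <= alpha <= C.
    if y_i != y_j:
        L, H = max(0.0, a_j - a_i), min(C, C + a_j - a_i)
    else:
        L, H = max(0.0, a_i + a_j - C), min(C, a_i + a_j)
    eta = K_ii + K_jj - 2.0 * K_ij   # second derivative along the constraint line
    if eta <= 0 or L == H:
        return a_i, a_j               # degenerate pair: skip
    a_j_new = float(np.clip(a_j + y_j * (E_i - E_j) / eta, L, H))
    a_i_new = a_i + y_i * y_j * (a_j - a_j_new)   # restore the equality constraint
    return a_i_new, a_j_new

# one step on a hand-picked pair: alphas move but their constraint is preserved
ai, aj = smo_pair_update(0.5, 0.5, 1, -1, 0.2, -0.1, 2.0, 2.0, 0.5, 1.0)
print(ai, aj)
```

The 3PSMO generalization optimizes a third multiplier jointly, replacing this line-segment update with one over a two-dimensional feasible region.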

6.
Support vector machines using hyperellipsoidal parametric coordinates   Cited by: 1 (self-citations: 0, other citations: 1)
Based on the n-dimensional hyperellipsoidal coordinate transformation formula, a class of kernel functions is constructed: the n-dimensional hyperellipsoidal coordinate transformation kernel. Because the mapping preserves dimensionality while enlarging the between-class distance, this class of kernels improves SVM performance to a certain extent. Compared with other kernels (such as the Gaussian kernel), using the constructed kernel in a support vector machine produces very few support vectors, which greatly speeds up learning and improves generalization. Numerical experiments confirm the effectiveness and correctness of the constructed kernel.
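A coordinate-transform kernel of this general kind can be plugged into an SVM as a callable. The quadratic feature map below is a hypothetical stand-in for the paper's hyperellipsoidal transform, shown only to illustrate the mechanism of defining k(x, z) = <phi(x), phi(z)> from an explicit transform.

```python
import numpy as np
from sklearn.svm import SVC

def phi(X):
    # explicit quadratic feature map (an assumption, not the paper's transform)
    return np.hstack([X, X ** 2])

def transform_kernel(A, B):
    # kernel induced by the transform: inner products in the mapped space
    return phi(A) @ phi(B).T

rng = np.random.RandomState(1)
X = rng.randn(120, 2)
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1).astype(int)  # circular boundary

clf = SVC(kernel=transform_kernel).fit(X, y)
print(clf.score(X, y), len(clf.support_))
```

Because the circular boundary is linear in the squared features, this kernel separates the data with relatively few support vectors, mirroring the sparsity argument in the abstract.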

7.
A semi-supervised support vector regression algorithm based on Help-Training is proposed, comprising two types of learners: least-squares support vector regression (LS-SVR) and nearest neighbor (NN). The main learner, LS-SVR, selects high-confidence unlabeled samples, labels them, and adds them to the labeled sample set, continually enlarging the training set to improve the function-approximation performance of LS-SVR. The auxiliary learner, NN, helps LS-SVR select unlabeled samples from densely populated regions of the training data for confidence evaluation, weakening the negative effect of noise on learning. Experimental results show that the proposed algorithm achieves good regression estimation performance and high learning accuracy.
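A rough sketch of the Help-Training loop, with scikit-learn's standard SVR standing in for LS-SVR (an assumption) and a k-NN distance criterion as the helper that keeps pseudo-labeling inside dense regions. Round count and thresholds are arbitrary illustrations.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.neighbors import NearestNeighbors

rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 6, 120))[:, None]
y = np.sin(X).ravel()
labeled = rng.rand(120) < 0.2            # only ~20% labeled at the start

Xl, yl = X[labeled], y[labeled]
Xu = X[~labeled]
svr = SVR(gamma="scale").fit(Xl, yl)     # main learner

for _ in range(3):                       # a few Help-Training rounds
    if len(Xu) == 0:
        break
    # helper: unlabeled points close to labeled data count as high-confidence
    nn = NearestNeighbors(n_neighbors=3).fit(Xl)
    dist, _ = nn.kneighbors(Xu)
    conf = dist.mean(axis=1) <= np.median(dist.mean(axis=1))
    Xl = np.vstack([Xl, Xu[conf]])
    yl = np.concatenate([yl, svr.predict(Xu[conf])])  # pseudo-labels
    Xu = Xu[~conf]
    svr = SVR(gamma="scale").fit(Xl, yl)             # retrain on enlarged set

mse = np.mean((svr.predict(X) - y) ** 2)
print(f"MSE on the full curve: {mse:.4f}")
```

The helper never labels anything itself; it only vetoes pseudo-labels in sparse regions, which is the noise-suppression role the abstract assigns to the NN learner.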

8.
This paper proposes a new classifier called density-induced margin support vector machines (DMSVMs). DMSVMs belong to a family of SVM-like classifiers and thus inherit good properties from support vector machines (SVMs), e.g., a unique and global solution, and a sparse representation for the decision function. For a given data set, DMSVMs require extracting relative density degrees for all training data points. These density degrees can be taken as relative margins of the corresponding training data points. Moreover, we propose a method for estimating relative density degrees using the K nearest neighbor method. We also derive and prove an upper bound on the leave-one-out error of DMSVMs for binary classification problems. Promising results are obtained on toy as well as real-world data sets.
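One way to extract the relative density degrees the abstract mentions is from k-nearest-neighbor distances: points in dense regions get a degree near 1, outliers near 0. The specific normalization below (dividing by the smallest mean distance) is an illustrative assumption, not necessarily the paper's formula.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def relative_density(X, k=5):
    # +1 because each point's nearest neighbor is itself
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dist, _ = nn.kneighbors(X)
    avg = dist[:, 1:].mean(axis=1)        # mean distance to the k nearest neighbors
    return avg.min() / avg                 # relative degree in (0, 1]

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(50, 2), [[8.0, 8.0]]])   # tight cluster plus one outlier
d = relative_density(X)
print(d[-1], d[:50].mean())
```

In a DMSVM these degrees would then scale the per-point margin, so the outlier at (8, 8) receives a much smaller relative margin than points inside the cluster.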

9.
10.
The transductive support vector machine (TSVM) accounts for the influence of unlabeled samples on the classifier while using labeled samples, and combines this with the SVM algorithm to obtain an efficient classifier. It performs well on training sets containing a small number of labeled samples together with test sets containing many unlabeled samples. However, it has shortcomings: high time complexity and the need to preset the ratio of positive to negative examples. Through improvements to the original algorithm, the new algorithm's time complexity drops markedly while its effectiveness is essentially unaffected.

11.
12.
Determining the kernel and error penalty parameters for support vector machines (SVMs) is very problem-dependent in practice. A popular method for choosing the kernel parameters is grid search. In the training process, classifiers are trained with different kernel parameters, yet only one of the classifiers is needed for the testing process. This makes the training process time-consuming. In this paper we propose using the inter-cluster distances in the feature spaces to choose the kernel parameters. Calculating such distances costs far less computation time than training the corresponding SVM classifiers; thus the proper kernel parameters can be chosen much faster. Experimental results show that the inter-cluster distance selects kernel parameters with which the test accuracy of the trained SVMs is competitive with the standard approach, while the training time is significantly shortened.
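The inter-cluster distance in feature space never needs the explicit mapping: the squared distance between class means expands into three mean kernel blocks. A minimal sketch of using it to rank RBF gamma candidates (the candidate grid is an arbitrary illustration):

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.metrics.pairwise import rbf_kernel

def intercluster_distance(X1, X2, gamma):
    # ||m1 - m2||^2 in feature space = mean K11 + mean K22 - 2 * mean K12
    k11 = rbf_kernel(X1, X1, gamma=gamma).mean()
    k22 = rbf_kernel(X2, X2, gamma=gamma).mean()
    k12 = rbf_kernel(X1, X2, gamma=gamma).mean()
    return k11 + k22 - 2.0 * k12

X, y = make_blobs(n_samples=200, centers=2, random_state=0)
scores = {g: intercluster_distance(X[y == 0], X[y == 1], g)
          for g in (0.01, 0.1, 1.0, 10.0)}
best_gamma = max(scores, key=scores.get)
print(scores, best_gamma)
```

Each candidate costs one kernel-matrix evaluation instead of one full SVM training run, which is the speedup the abstract claims.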

13.
To address the misclassification caused by the imbalance between the two classes of training samples in class-incremental SVM learning, a weighted class-incremental learning algorithm is given. The new class is treated as the positive class and the existing classes as the negative class; sub-classifiers are trained with the one-versus-rest method, and during training the classes are weighted according to the proportion of training samples they contribute, improving the classification accuracy for small classes. Experiments demonstrate the effectiveness of this method.
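The weighting step can be sketched with scikit-learn's `class_weight` option, which scales the penalty of each class inversely to its frequency, the same compensation idea as in the abstract (the 95/5 imbalance below is an arbitrary illustration):

```python
from sklearn.datasets import make_classification
from sklearn.metrics import recall_score
from sklearn.svm import SVC

# heavily imbalanced two-class problem: the "new class" is the 5% minority
X, y = make_classification(n_samples=500, weights=[0.95, 0.05],
                           n_informative=3, random_state=0)

plain = SVC().fit(X, y)
# class_weight="balanced" weights classes inversely to their sample counts
weighted = SVC(class_weight="balanced").fit(X, y)

r_plain = recall_score(y, plain.predict(X))
r_weighted = recall_score(y, weighted.predict(X))
print(r_plain, r_weighted)
```

In a one-versus-rest incremental setting, the same weighting would be applied inside each sub-classifier that pits the small new class against the pooled existing classes.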

14.
Asymptotic efficiency of kernel support vector machines (SVM)   Cited by: 1 (self-citations: 0, other citations: 1)
The paper analyzes the asymptotic properties of Vapnik’s SVM-estimates of a regression function as the size of the training sample tends to infinity. The estimation problem is considered as infinite-dimensional minimization of a regularized empirical risk functional in a reproducing kernel Hilbert space. The rate of convergence of the risk functional on SVM-estimates to its minimum value is established. Sufficient conditions for the uniform convergence of SVM-estimates to a true regression function with probability one are given. Translated from Kibernetika i Sistemnyi Analiz, No. 4, pp. 81–97, July–August 2009
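The regularized empirical risk minimization the abstract refers to can be written as follows; this is a generic sketch with Vapnik's ε-insensitive loss (the standard SVR choice), where the regularization parameter λ_n must decay suitably as n → ∞:

```latex
\hat{f}_n \;=\; \operatorname*{arg\,min}_{f \in \mathcal{H}_K}\;
\frac{1}{n} \sum_{i=1}^{n} \bigl| y_i - f(x_i) \bigr|_{\varepsilon}
\;+\; \lambda_n \,\| f \|_{\mathcal{H}_K}^{2},
\qquad
|u|_{\varepsilon} \;=\; \max\{0,\; |u| - \varepsilon\},
```

where \(\mathcal{H}_K\) is the reproducing kernel Hilbert space of the kernel \(K\). The paper's rate results concern the excess risk of \(\hat{f}_n\) over the minimum of this functional.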

15.
Support vector machines for interval discriminant analysis   Cited by: 1 (self-citations: 0, other citations: 1)
C.  D.  L.  J.A. 《Neurocomputing》2008,71(7-9):1220-1229
The use of data represented by intervals can be caused by imprecision in the input information, incompleteness in patterns, discretization procedures, the insertion of prior knowledge, or the need to speed up learning. All existing support vector machine (SVM) approaches working on interval data use local kernels based on a certain distance between intervals, either by combining the interval distance with a kernel or by explicitly defining an interval kernel. This article introduces a new procedure for the linearly separable case, derived from convex optimization theory, that inserts information directly into the standard SVM in the form of intervals, without taking any particular distance into consideration.

16.
A combined kernel function for support vector machines   Cited by: 11 (self-citations: 0, other citations: 11)
张冰  孔锐 《计算机应用》2007,27(1):44-46
The kernel function is the core of the support vector machine: different kernels produce different classification results, and kernels are also among the harder parts of SVM theory to understand. By introducing a kernel function, an SVM can implement nonlinear algorithms easily. This paper first examines the essence of kernel functions and the relationship between a kernel and the space it maps into, then gives theorems and methods for constructing kernels. Kernels are divided into two broad categories, local and global, and the differences and respective advantages of the two are pointed out. Finally, a new kernel, the combined kernel, is proposed and applied in a support vector machine; face recognition experiments verify the effectiveness of this kernel.
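A convex combination of two kernels is itself a valid kernel (sums and positive scalings of kernels stay positive semidefinite), so combining a local RBF kernel with a global polynomial kernel can be sketched as a callable passed to SVC. The mixing weight 0.6 and kernel hyperparameters are arbitrary illustrations, not the paper's tuned values.

```python
from sklearn.datasets import make_moons
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel
from sklearn.svm import SVC

def combined_kernel(A, B, w=0.6, gamma=1.0, degree=2):
    # convex mix of a local kernel (RBF) and a global kernel (polynomial)
    return w * rbf_kernel(A, B, gamma=gamma) + \
           (1 - w) * polynomial_kernel(A, B, degree=degree)

X, y = make_moons(n_samples=200, noise=0.1, random_state=0)
clf = SVC(kernel=combined_kernel).fit(X, y)
score = clf.score(X, y)
print(f"training accuracy: {score:.3f}")
```

The RBF term supplies local fitting power near each sample while the polynomial term contributes a smooth global trend, which is the complementary-strengths argument behind combined kernels.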

17.
A hybrid kernel function for support vector machines   Cited by: 2 (self-citations: 0, other citations: 2)
The kernel function is the core of the support vector machine, and different kernels produce different classification results. Since common kernels each have their own strengths and weaknesses, a kernel with stronger learning and generalization ability is obtained from a basic property of kernels: the sum of two kernel functions is still a kernel function. Linearly combining a local kernel and a global kernel yields a new kernel, the hybrid kernel, which absorbs the advantages of both. Supply-chain forecasting experiments for process enterprises using the hybrid kernel confirm its effectiveness and correctness.

18.
e-Learning, an important learning mode that satisfies the requirements of personalized and adaptive learning, calls for collaboratively sensing learners' progress and automatically pushing personalized learning resources according to that progress. The support vector machine, a machine learning method, is applied to e-Learning; combined with the application context of an e-Learning system, the selection and preprocessing of learning samples and the SVM training algorithm are studied. This solves the problem of evaluating and classifying learners' performance and, based on the classification results, realizes the active delivery of personalized learning resources.

19.
肖建鹏  张来顺  任星 《计算机应用》2008,28(7):1642-1644
To address the low accuracy, slow learning, and extensive backtracking of transductive support vector machines when classifying large datasets, a transductive SVM classification algorithm based on incremental learning is proposed. Incremental learning is introduced into the transductive SVM so that only useful samples are retained during training while useless ones are discarded, reducing learning time and increasing classification speed. Experimental results show that the algorithm achieves faster classification with higher accuracy.

20.
Objective: Hyperspectral images contain rich spatial, spectral, and radiometric information and can support fine-grained land-cover classification. Achieving high accuracy, however, requires resolving the conflict between high-dimensional data and limited samples, and reducing the "same object, different spectra" effect caused by noise and mixed pixels. To address these problems, a hyperspectral image classification method combining superpixels and subspace-projection support vector machines is proposed. Method: The simple linear iterative clustering (SLIC) algorithm first segments the hyperspectral image into many non-overlapping homogeneous regions; each region is taken as a superpixel and serves as the minimum unit of classification. A subspace projection algorithm reduces the dimensionality of the superpixel image, and support vector machine classification is performed in the low-dimensional feature space. The proposed joint spatial-spectral classification model fuses superpixel segmentation in the geometric feature space with the subspace-projection SVM (SVMsub) in the spectral feature space, performing feature fusion after segmentation and moving from the pixel level to the object-oriented superpixel level. Results: In experiments on the Indian Pines data acquired by AVIRIS (airborne visible/infrared imaging spectrometer) and the University of Pavia data acquired by the ROSIS (reflective optics system spectrographic imaging system) sensor, the subspace-projection algorithms achieved higher accuracy than their non-projected counterparts, with especially clear gains when samples were scarce; algorithms fusing spatial information via Markov random fields or superpixels outperformed their counterparts without spatial fusion. With less than 1% of the samples used for training on both datasets, the SVM combining superpixels and subspace projection achieved the highest accuracy in both experiments, with overall accuracy about 4% above the other algorithms compared. Conclusion: Superpixel processing effectively fuses spatial information and reduces the adverse effect of "same object, different spectra" on classification; subspace projection transforms hyperspectral data into a low-dimensional space, enabling high-accuracy classification under limited training samples; combining superpixels with subspace-projection SVMs yields high hyperspectral image classification accuracy.
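The spectral side of the pipeline can be sketched on synthetic data, with PCA standing in for the paper's subspace projection (an assumption) and the SLIC superpixel stage omitted, since it needs real imagery. The point of the sketch is the limited-sample regime: classify 200-band "pixels" with only 5% of the labels after projecting to a low-dimensional subspace.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.svm import SVC

# synthetic stand-in for hyperspectral pixels: 200 "bands", 3 classes
X, y = make_classification(n_samples=400, n_features=200, n_informative=20,
                           n_classes=3, n_clusters_per_class=1,
                           class_sep=2.0, random_state=0)

# subspace projection: 200 bands -> 10-dimensional feature space
Xp = PCA(n_components=10, random_state=0).fit_transform(X)

train = np.arange(400) % 20 == 0       # only 5% labeled, mimicking the scarce-sample regime
clf = SVC(gamma="scale").fit(Xp[train], y[train])
acc = clf.score(Xp[~train], y[~train])
print(f"test accuracy with 5% labels: {acc:.3f}")
```

In the full method, the samples would be superpixel-averaged spectra rather than raw pixels, which adds the spatial smoothing that suppresses the "same object, different spectra" noise.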

