Found 20 similar documents; search took 15 ms
1.
When the training set is insufficient, the SVM algorithm needs improvement to raise its evaluation accuracy. A concave semi-supervised support vector machine is adopted, which performs machine learning with a small number of labeled samples and a large number of unlabeled samples, improving the prediction accuracy of the model.
2.
3.
Incorporating prior knowledge in support vector machines for classification: A review  Total citations: 1 (self-citations: 0, by others: 1)
For classification, support vector machines (SVMs) were introduced relatively recently and quickly became the state of the art. The incorporation of prior knowledge into SVMs is now a key element for increasing performance in many applications. This paper reviews the current state of research on incorporating two general types of prior knowledge into SVMs for classification. The particular forms of prior knowledge considered here fall into two main groups: class invariance and knowledge about the data. The first includes invariance to transformations, invariance to permutations, and invariance in domains of the input space, whereas the second covers knowledge about unlabeled data, the imbalance of the training set, and the quality of the data. The methods are then described and classified into the three categories used in the literature: sample methods based on modifying the training data, kernel methods based on modifying the kernel, and optimization methods based on modifying the problem formulation. A recent method, developed for support vector regression, considers prior knowledge on arbitrary regions of the input space; it is presented here as applied to the classification case. A discussion then regroups the sample and optimization methods under a regularization framework.
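Of the three categories above, the sample methods are the most direct: invariance to a known transformation can be encoded by appending transformed copies ("virtual samples") of each training point with unchanged labels. A minimal sketch of that idea (the function name and the example transform are illustrative, not taken from the review):

```python
def add_virtual_samples(X, y, transforms):
    """Sample-method sketch: encode an invariance by appending, for each
    transform, a transformed copy of every training point with the same label."""
    Xv, yv = list(X), list(y)
    for t in transforms:
        Xv.extend(t(x) for x in X)
        yv.extend(y)
    return Xv, yv
```

For instance, with a single point and a coordinate-swap invariance, `add_virtual_samples([(1, 2)], [1], [lambda p: (p[1], p[0])])` doubles the training set while preserving the label.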
4.
5.
Yih-Lon Lin, Jer-Guang Hsieh, Hsu-Kun Wu, Jyh-Horng Jeng 《Neurocomputing》2011,74(17):3467-3475
The well-known sequential minimal optimization (SMO) algorithm is the most commonly used algorithm for numerically solving support vector learning problems. The traditional SMO algorithm, called the 2PSMO algorithm in this paper, jointly optimizes only two chosen parameters at each iteration. The two parameters are selected either heuristically or randomly, whilst the optimization with respect to them is performed analytically. In this paper, the 2PSMO algorithm is naturally generalized to a three-parameter sequential minimal optimization (3PSMO) algorithm, which jointly optimizes three chosen parameters at each iteration. As in the 2PSMO algorithm, the three parameters are selected either heuristically or randomly, whilst the optimization with respect to them is performed analytically. The main difference between the two algorithms is therefore that each iteration of the 2PSMO algorithm optimizes over a line segment, whilst each iteration of the 3PSMO algorithm optimizes over a two-dimensional region consisting of infinitely many line segments, which implies that the maximum can be attained more efficiently by the 3PSMO algorithm. The main updating formulae of both algorithms for each support vector learning problem are presented. To assess the efficiency of the 3PSMO algorithm relative to the 2PSMO algorithm, 14 benchmark datasets, 7 for classification and 7 for regression, are tested and the numerical performances compared. Simulation results demonstrate that the 3PSMO significantly outperforms the 2PSMO algorithm in both execution time and computational complexity.
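The analytic two-parameter step that 2PSMO performs at each iteration is the classical SMO pair update: move the second multiplier along the equality-constraint line, clip it to the feasible box, then restore the constraint. A minimal sketch of just that step, assuming the standard SMO notation (E_i, E_j are prediction errors on the two points, K_** are kernel matrix entries, C is the box bound); this is the textbook update, not code from the paper:

```python
def smo_pair_update(alpha_i, alpha_j, y_i, y_j, E_i, E_j, K_ii, K_jj, K_ij, C):
    """One analytic 2PSMO step on the pair (i, j)."""
    # Bounds L, H keep y_i*alpha_i + y_j*alpha_j constant with both alphas in [0, C].
    if y_i != y_j:
        L = max(0.0, alpha_j - alpha_i)
        H = min(C, C + alpha_j - alpha_i)
    else:
        L = max(0.0, alpha_i + alpha_j - C)
        H = min(C, alpha_i + alpha_j)
    eta = K_ii + K_jj - 2.0 * K_ij   # curvature along the constraint line
    if eta <= 0:
        return alpha_i, alpha_j      # non-positive curvature: skip the pair in this sketch
    a_j = alpha_j + y_j * (E_i - E_j) / eta
    a_j = min(H, max(L, a_j))        # clip to the feasible segment
    a_i = alpha_i + y_i * y_j * (alpha_j - a_j)  # restore the equality constraint
    return a_i, a_j
```

The 3PSMO generalization replaces this one-dimensional segment search with an analytic search over a two-dimensional feasible region spanned by three multipliers.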
6.
7.
8.
9.
This paper proposes a new classifier called density-induced margin support vector machines (DMSVMs). DMSVMs belong to a family of SVM-like classifiers and thus inherit good properties from support vector machines (SVMs), e.g., a unique and global solution and a sparse representation of the decision function. For a given data set, DMSVMs require extracting relative density degrees for all training data points; these density degrees can be taken as relative margins of the corresponding training points. Moreover, we propose a method for estimating relative density degrees using the K nearest neighbor method. We also derive and prove an upper bound on the leave-one-out error of DMSVMs for a binary classification problem. Promising results are obtained on toy as well as real-world data sets.
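The K-nearest-neighbor density estimate the abstract mentions can be sketched as follows: a point's density degree is inversely related to its distance to its k-th nearest neighbor, normalized so the densest point gets degree 1. This is one plausible formulation, not necessarily the paper's exact formula:

```python
import math

def knn_density_degrees(points, k=2):
    """Relative density degree per point: inverse distance to the k-th
    nearest neighbour, normalised so the densest point has degree 1."""
    degrees = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        degrees.append(1.0 / (dists[k - 1] + 1e-12))  # epsilon guards duplicates
    m = max(degrees)
    return [d / m for d in degrees]
```

On a toy set with three clustered points and one outlier, the outlier receives a much smaller degree, i.e., a smaller induced margin.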
10.
11.
12.
Kuo-Ping Wu 《Pattern recognition》2009,42(5):710-154
Determining the kernel and error penalty parameters for support vector machines (SVMs) is very problem-dependent in practice. A popular method for choosing the kernel parameters is grid search: classifiers are trained with different kernel parameters, yet only one of the classifiers is needed for testing, which makes the training process time-consuming. In this paper we propose using the inter-cluster distances in the feature space to choose the kernel parameters. Calculating such distances takes much less computation time than training the corresponding SVM classifiers, so proper kernel parameters can be chosen much faster. Experimental results show that the inter-cluster distance selects kernel parameters with which the testing accuracy of the trained SVMs is competitive with the standard ones, while the training time is significantly shortened.
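The key computational trick is that the squared distance between class means in the kernel-induced feature space needs no explicit mapping: it expands entirely into kernel evaluations. A small sketch for the RBF kernel (function names are illustrative; the paper's exact distance criterion may differ):

```python
import math

def rbf(x, y, gamma):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def intercluster_dist2(A, B, gamma):
    """Squared distance between the two class means in RBF feature space,
    computed purely from kernel evaluations (kernel trick)."""
    kAA = sum(rbf(a, a2, gamma) for a in A for a2 in A) / len(A) ** 2
    kBB = sum(rbf(b, b2, gamma) for b in B for b2 in B) / len(B) ** 2
    kAB = sum(rbf(a, b, gamma) for a in A for b in B) / (len(A) * len(B))
    return kAA + kBB - 2.0 * kAB
```

A candidate gamma that maximizes this separation can then be picked over a small list of values, far more cheaply than training one SVM per candidate.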
13.
To address the misclassification caused by the imbalance between the two classes of samples involved in training during class-incremental learning with support vector machines, a weighted class-incremental learning algorithm is given. The newly added class is taken as the positive class and the existing classes as the negative class, and sub-classifiers are trained with the one-against-all method; during training, each class is weighted according to the proportion of training samples it contributes, which improves the classification accuracy of the small class. Experiments demonstrate the effectiveness of the method.
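The abstract weights classes by their share of the training samples. One common such scheme (the "balanced" heuristic, an assumption here rather than the paper's exact formula) scales each class's penalty inversely to its frequency, so the small new class is penalized more heavily for errors:

```python
from collections import Counter

def class_weights(labels, C=1.0):
    """Per-class penalty C_k = C * n_total / (n_classes * n_k):
    rarer classes get larger error penalties."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: C * n / (k * cnt) for c, cnt in counts.items()}
```

With 10 new-class and 90 old-class samples, the new class's penalty is nine times the old class's, counteracting the imbalance.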
14.
Asymptotic efficiency of kernel support vector machines (SVM)  Total citations: 1 (self-citations: 0, by others: 1)
The paper analyzes the asymptotic properties of Vapnik's SVM estimates of a regression function as the size of the training sample tends to infinity. The estimation problem is considered as infinite-dimensional minimization of a regularized empirical risk functional in a reproducing kernel Hilbert space. The rate of convergence of the risk functional on SVM estimates to its minimum value is established. Sufficient conditions are given for the uniform convergence of SVM estimates to a true regression function with unit probability.
Translated from Kibernetika i Sistemnyi Analiz, No. 4, pp. 81–97, July–August 2009
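In symbols, the object being minimized is the standard regularized empirical risk over a reproducing kernel Hilbert space $\mathcal{H}_K$ (written here with a generic loss $\ell$; Vapnik's support vector regression uses the $\varepsilon$-insensitive loss):

$$
\hat f_n \;=\; \arg\min_{f \in \mathcal{H}_K} \;\; \frac{1}{n}\sum_{i=1}^{n} \ell\bigl(y_i,\, f(x_i)\bigr) \;+\; \gamma_n \,\|f\|_{\mathcal{H}_K}^{2},
$$

where $\gamma_n > 0$ is the regularization parameter; the asymptotic results concern the behavior of $\hat f_n$ as $n \to \infty$ with $\gamma_n \to 0$ at a suitable rate.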
15.
Support vector machines for interval discriminant analysis  Total citations: 1 (self-citations: 0, by others: 1)
The use of data represented by intervals can arise from imprecision in the input information, incompleteness in patterns, discretization procedures, insertion of prior knowledge, or learning speed-up. All existing support vector machine (SVM) approaches working on interval data use local kernels based on some distance between intervals, either by combining an interval distance with a kernel or by explicitly defining an interval kernel. This article introduces a new procedure for the linearly separable case, derived from convex optimization theory, that inserts interval information directly into the standard SVM without relying on any particular distance.
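The box arithmetic underlying linear separation of interval data is simple: the extreme values of a linear score w·x + b over an axis-aligned box can be computed coordinate-wise, without enumerating the box's corners. A sketch of that computation (names are illustrative; the paper's actual formulation is a convex-optimization insertion into the SVM):

```python
def interval_score_range(w, b, lower, upper):
    """Exact min and max of w.x + b when each x_i ranges over [lower_i, upper_i]."""
    lo = b + sum(min(wi * li, wi * ui) for wi, li, ui in zip(w, lower, upper))
    hi = b + sum(max(wi * li, wi * ui) for wi, li, ui in zip(w, lower, upper))
    return lo, hi
```

An interval pattern is firmly on the positive side of the hyperplane exactly when the returned minimum is positive.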
16.
A Combined Kernel Function for Support Vector Machines  Total citations: 11 (self-citations: 1, by others: 10)
The kernel function is the core of the support vector machine: different kernel functions produce different classification results, and the kernel is also one of the harder parts of SVM theory to understand. By introducing a kernel function, an SVM can easily implement a nonlinear algorithm. This paper first explores the nature of kernel functions and the relationship between a kernel and the space it maps into, then gives a construction theorem and construction methods for kernel functions, and explains that kernels fall into two broad classes, local and global, pointing out the differences and respective advantages of the two. Finally, a new kernel, the combined kernel function, is proposed and applied in an SVM; face recognition experiments verify the effectiveness of this kernel.
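A typical way to combine a local and a global kernel, as this entry and the next describe, is a convex combination of an RBF kernel (local) and a polynomial kernel (global); by Mercer closure, a nonnegative weighted sum of valid kernels is itself a valid kernel. A minimal sketch (the weighting and default parameters are illustrative, not the papers' tuned values):

```python
import math

def combined_kernel(x, y, lam=0.5, gamma=1.0, degree=2, coef0=1.0):
    """Convex combination of a local (RBF) and a global (polynomial) kernel.
    A sum of valid kernels with nonnegative weights is itself a valid kernel."""
    local = math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))
    glob = (sum(a * b for a, b in zip(x, y)) + coef0) ** degree
    return lam * local + (1.0 - lam) * glob
```

The mixing weight lam trades the RBF kernel's strong local interpolation against the polynomial kernel's global extrapolation.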
17.
A Hybrid Kernel Function for Support Vector Machines  Total citations: 2 (self-citations: 0, by others: 2)
The kernel function is the core of the support vector machine, and different kernel functions produce different classification results. Since common kernels each have their advantages and drawbacks, in order to obtain a kernel with stronger learning and generalization ability, a new kernel, the hybrid kernel function, is constructed as a linear combination of a local kernel and a global kernel, based on the basic property that the sum of two kernels is still a kernel. This kernel draws on the strengths of both local and global kernels. Supply-chain forecasting experiments for process enterprises using the hybrid kernel verify its validity and correctness.
18.
e-Learning, an important learning mode that can meet personalized, adaptive learning requirements, needs to perceive learners' learning状况 collaboratively and to push personalized learning resources automatically according to that progress. This paper applies the support vector machine, a machine learning method, to e-Learning. Combined with the practical use of an e-Learning system, it studies the selection and preprocessing of learning samples as well as the SVM training algorithm, solves the evaluation and classification of learners' progress, and realizes the active pushing of personalized learning resources based on the classification results.
19.
20.
Objective: Hyperspectral images contain rich spatial, spectral, and radiometric information and can be used for fine-grained land-cover classification, but reaching high classification accuracy requires resolving the conflict between high-dimensional data and limited samples, and reducing the spectral variability of identical materials ("same object, different spectra") caused by noise and mixed pixels. To address these problems, a hyperspectral image classification method combining superpixels and subspace-projection support vector machines is proposed. Method: The simple linear iterative clustering (SLIC) algorithm first segments the hyperspectral image into many non-overlapping homogeneous regions; each region is taken as a superpixel, and the superpixel serves as the minimum unit of classification. A subspace projection algorithm then reduces the dimensionality of the superpixel image, and support vector machine classification is performed in the low-dimensional feature space. The proposed joint spatial-spectral classification model fuses, after segmentation, the superpixel segmentation in the geometric feature space with the subspace-projection support vector machine (SVMsub) in the spectral feature space, converting the pixel level into the object-oriented superpixel level to realize joint spatial-spectral classification of hyperspectral images. Results: In experiments on the Indian Pines data acquired by AVIRIS (airborne visible/infrared imaging spectrometer) and the University of Pavia data acquired by the ROSIS (reflective optics system imaging spectrometer) sensor, the subspace-projection algorithms achieved higher classification accuracy than their non-subspace counterparts, with a particularly marked improvement when samples were few; algorithms fusing spatial information via Markov random fields or superpixels were more accurate than the corresponding algorithms without spatial fusion; and with fewer than 1% of the samples used for training on both datasets, the support vector machine algorithm fusing both superpixels and subspace projection achieved the highest accuracy in both experiments, with overall accuracy about 4% above the other related algorithms. Conclusion: Superpixel processing effectively fuses spatial information and reduces the adverse effect of spectral variability on classification; subspace projection transforms hyperspectral data into a low-dimensional space, enabling high-accuracy classification under limited training samples; and the algorithm combining superpixels and subspace-projection support vector machines achieves high hyperspectral image classification accuracy.
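The step that makes the superpixel the classification unit is a simple aggregation: the spectral vectors of all pixels sharing a superpixel label are averaged into one representative spectrum. A minimal sketch of that aggregation, assuming flattened lists of per-pixel spectra and SLIC labels (names illustrative):

```python
def superpixel_means(spectra, labels):
    """Average the spectral vectors of all pixels sharing a superpixel label,
    so the superpixel (not the pixel) becomes the unit of classification."""
    sums, counts = {}, {}
    for vec, lab in zip(spectra, labels):
        acc = sums.setdefault(lab, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}
```

The averaged spectra are then projected into the low-dimensional subspace and classified, which both suppresses per-pixel spectral noise and shrinks the number of classification units.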