Similar Literature (20 results)
1.
The paper discusses implementation issues related to tuning the hyperparameters of a support vector machine (SVM) with an L2 soft margin, where the radius/margin bound is taken as the index to be minimized and iterative techniques are employed to compute the radius and margin. The implementation is shown to be feasible and efficient, even for large problems with more than 10000 support vectors.
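A minimal sketch of the idea, using scikit-learn's soft-margin SVC as a stand-in for the paper's L2-SVM and a kernel-space centroid distance as a crude surrogate for the exact enclosing-ball radius; the helper `radius_margin_score`, the grid, and the toy data are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel

def radius_margin_score(X, y, gamma, C):
    K = rbf_kernel(X, gamma=gamma)
    # crude surrogate for the enclosing-ball radius: max kernel-space
    # distance to the centroid (the paper computes the radius iteratively)
    R2 = (np.diag(K) - 2 * K.mean(axis=1) + K.mean()).max()
    clf = SVC(kernel="rbf", gamma=gamma, C=C).fit(X, y)
    sv = clf.support_
    a = clf.dual_coef_.ravel()            # entries are y_i * alpha_i
    w2 = a @ K[np.ix_(sv, sv)] @ a        # ||w||^2 in feature space
    return R2 * w2                        # radius/margin-style index

X, y = make_classification(n_samples=200, random_state=0)
grid = [(radius_margin_score(X, y, g, c), g, c)
        for g in (0.01, 0.1, 1.0) for c in (0.1, 1.0, 10.0)]
print("best (score, gamma, C):", min(grid))
```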

2.
A kernel-based classification algorithm, OnSVM, is proposed whose solution is very close to that of ν-SVM, an efficient modification of the support vector machine. The algorithm is faster than batch implementations of ν-SVM and produces a smaller number of support vectors. The approach maximizes the margin between a pair of hyperplanes in feature space and can be used in an online setting. A ternary classifier for the 2-class problem with an "unknown" decision is constructed from these hyperplanes.
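The ternary decision can be sketched with a symmetric rejection band around a learned decision surface; scikit-learn's NuSVC stands in for OnSVM here, and the band half-width `rho` is an assumed illustrative parameter:

```python
import numpy as np
from sklearn.svm import NuSVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, random_state=1)
clf = NuSVC(nu=0.3, kernel="rbf").fit(X, y)

def ternary_predict(clf, X, rho=0.5):
    # the band between the hyperplanes f(x) = +rho and f(x) = -rho
    # is labelled 0 ("unknown"); outside it the usual sign rule applies
    f = clf.decision_function(X)
    return np.where(f > rho, 1, np.where(f < -rho, -1, 0))

labels = ternary_predict(clf, X)
print({v: int((labels == v).sum()) for v in (-1, 0, 1)})
```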

3.
Xie Changju. 《计算机仿真》 (Computer Simulation), 2010, 27(4): 188-191
C-SVM and ν-SVM are currently the two most mature support vector machine models, but they differ in formulation and algorithm as well as in the properties and meaning of their parameters, which makes choosing between them inconvenient. To unify the two models, a new model, Cν-SVM, is proposed, and the properties of its solution are studied on the basis of statistical learning theory. A completeness condition for the solution of the new model is given, the solution and a corresponding algorithm are derived, and it is shown that ν/C is both an upper bound on the number of boundary support vectors and a lower bound on the total number of support vectors. Parameter studies show that the new model can realize all the functions of the old models, and the new algorithm is more convenient for applications such as automatic text classification.
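The ν-style bound can be checked empirically. The sketch below uses scikit-learn's NuSVC (a standard ν-SVM, used here in place of the proposed Cν-SVM), for which ν lower-bounds the fraction of support vectors; the dataset and ν value are illustrative:

```python
from sklearn.svm import NuSVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=400, random_state=2)
nu = 0.2
clf = NuSVC(nu=nu, kernel="rbf").fit(X, y)
sv_fraction = clf.support_.size / len(X)
# classic nu-SVM property: nu <= fraction of support vectors
print(f"nu = {nu}, support-vector fraction = {sv_fraction:.3f}")
```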

4.
Effects of kernel function on Nu support vector machines in extreme cases
How to choose a kernel function in support vector machines (SVMs) is an important but difficult problem. In this paper, we discuss the properties of the solution of the ν-SVM, a variation of SVMs, for normalized feature vectors in two extreme cases: all feature vectors are almost orthogonal, and all feature vectors are almost the same. In the former case, the solution of the ν-SVM is nearly the center of gravity of the given examples, while in the latter case the solution approaches that of the ν-SVM with the linear kernel. Although such extreme kernels are not employed in practice, the analyses are helpful for understanding the effect of a kernel function on generalization performance.

5.
In this paper, a new scheme for constructing parsimonious fuzzy classifiers is proposed based on the L2-support vector machine (L2-SVM) technique, with model selection and feature ranking performed simultaneously in an integrated manner, in which fuzzy rules are optimally generated from data by L2-SVM learning. In order to identify the most influential fuzzy rules induced by the SVM learning, two novel indexes for fuzzy rule ranking are proposed, named the alpha-values and omega-values of fuzzy rules. The alpha-values are defined as the Lagrangian multipliers of the L2-SVM and are adopted to evaluate the output contribution of fuzzy rules, while the omega-values are developed by considering both the rule base structure and the output contribution of fuzzy rules. As a prototype-based classifier, the L2-SVM-based fuzzy classifier evades the curse of dimensionality in high-dimensional space in the sense that the number of support vectors, which equals the number of induced fuzzy rules, is not related to the dimensionality. Experimental results on high-dimensional benchmark problems show that the proposed scheme effectively induces and selects the most influential fuzzy rules while also producing feature ranking results, yielding parsimonious fuzzy classifiers with better generalization performance than well-known algorithms in the literature.
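The alpha-value ranking amounts to reading off the dual coefficients. A sketch with scikit-learn's SVC as a stand-in for the paper's L2-SVM (scikit-learn's SVC uses an L1 soft margin), where each support vector plays the role of one induced fuzzy rule; the data and hyperparameters are assumptions:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=150, random_state=4)
clf = SVC(kernel="rbf", gamma=0.5, C=10.0).fit(X, y)

# one support vector <-> one induced fuzzy rule; rank rules by the
# magnitude of their Lagrange multipliers (the paper's alpha-values)
alpha = np.abs(clf.dual_coef_).ravel()
order = np.argsort(alpha)[::-1]
print("5 most influential rules (sample indices):", clf.support_[order[:5]])
```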

6.
In this paper, we propose a robust parametric twin support vector machine (RPTWSVM) classifier based on the Parametric-ν-Support Vector Machine (Par-ν-SVM) and the twin support vector machine. In order to capture heteroscedastic noise present in the training data, RPTWSVM finds a pair of parametric-margin hyperplanes that automatically adjust the parametric insensitive margin to incorporate the structural information of the data. The proposed RPTWSVM model is not only useful for controlling heteroscedastic noise but also trains much faster than Par-ν-SVM. Experimental results on several machine learning benchmark datasets show the advantages of RPTWSVM over related models in both generalization ability and training speed.
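To illustrate the twin-hyperplane idea (not the paper's RPTWSVM), here is a least-squares twin SVM, a simpler relative in which each of the two nonparallel planes comes from one regularized linear solve; the function names, regularization, and toy data are all assumptions:

```python
import numpy as np

def lstsvm_fit(A, B, C1=1.0, C2=1.0, reg=1e-6):
    """Least-squares twin SVM (linear): two nonparallel planes, each
    close to one class and roughly unit distance from the other."""
    eA, eB = np.ones((len(A), 1)), np.ones((len(B), 1))
    G, H = np.hstack([A, eA]), np.hstack([B, eB])
    I = reg * np.eye(G.shape[1])
    # plane 1 (close to A): min 1/2||G u||^2 + C1/2 ||H u + e||^2
    u = -C1 * np.linalg.solve(G.T @ G + C1 * H.T @ H + I, H.T @ eB).ravel()
    # plane 2 (close to B): min 1/2||H v||^2 + C2/2 ||G v - e||^2
    v = C2 * np.linalg.solve(H.T @ H + C2 * G.T @ G + I, G.T @ eA).ravel()
    return u, v                      # (w1, b1) and (w2, b2), bias last

def lstsvm_predict(u, v, X):
    Xe = np.hstack([X, np.ones((len(X), 1))])
    d1 = np.abs(Xe @ u) / np.linalg.norm(u[:-1])
    d2 = np.abs(Xe @ v) / np.linalg.norm(v[:-1])
    return np.where(d1 <= d2, 1, -1)  # the nearer plane wins

rng = np.random.default_rng(0)
A = rng.normal(+2.0, 1.0, (100, 2))   # class +1
B = rng.normal(-2.0, 1.0, (100, 2))   # class -1
u, v = lstsvm_fit(A, B)
X = np.vstack([A, B]); y = np.r_[np.ones(100), -np.ones(100)]
print("train accuracy:", (lstsvm_predict(u, v, X) == y).mean())
```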

7.
Improving the Min-Max Modular Support Vector Machine
This paper proposes a new clustering algorithm that partitions the training data into equal-sized parts and applies it to the min-max modular support vector machine (M3-SVM). Simulation experiments show that when the training data are not identically distributed, the proposed clustering algorithm, compared with random partitioning, not only improves the generalization ability of M3-SVM and shortens training time but also reduces the number of support vectors.
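A minimal sketch of the min-max modular combination: each class is split into parts (plain KMeans here, which unlike the paper's algorithm does not guarantee equal-sized parts), one SVM is trained per pair of parts, and the outputs are combined by the MIN-MAX rule; all sizes and parameters are illustrative:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=600, random_state=5)
Xp, Xn = X[y == 1], X[y == 0]

def split(Xc, k):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Xc)
    return [Xc[labels == i] for i in range(k)]

parts_p, parts_n = split(Xp, 2), split(Xn, 2)
experts = [[SVC(kernel="rbf", gamma="scale")
            .fit(np.vstack([P, N]), np.r_[np.ones(len(P)), -np.ones(len(N))])
            for N in parts_n] for P in parts_p]

def m3_decision(x):
    # MIN over negative parts, then MAX over positive parts
    f = np.array([[e.decision_function(x[None])[0] for e in row]
                  for row in experts])
    return f.min(axis=1).max()

print("M3 output for first sample:", m3_decision(X[0]))
```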

8.
By introducing rough set theory into the support vector machine (SVM), a rough margin based SVM (RMSVM) is proposed to deal with the overfitting problem caused by outliers. Similar to the classical SVM, the RMSVM searches for the separating hyperplane that maximizes the rough margin, defined by a lower and an upper margin. In this way, more data points are adaptively taken into account rather than the few extreme points used in the classical SVM. In addition, different support vectors may have different effects on the learning of the separating hyperplane depending on their positions in the rough margin: points in the lower margin have more effect than those on the boundary of the rough margin. Experimental results on six benchmark datasets show that the classification accuracy of this algorithm improves upon that of the classical ν-SVM without additional computational expense.

9.
For a real-valued function f defined on {0,1}^n, the linkage graph of f is a hypergraph that represents the interactions among the input variables with respect to f. In this paper, lower and upper bounds on the number of function evaluations required to discover the linkage graph are rigorously analyzed in the black box scenario. First, a lower bound for discovering the linkage graph is presented; to the best of our knowledge, this is the first result on a lower bound for linkage discovery. The investigation of the lower bound is based on Yao's minimax principle. For the upper bounds, a simple randomized algorithm for linkage discovery is analyzed. Based on the Kruskal-Katona theorem, we present an upper bound for discovering the linkage graph. As a corollary, we rigorously prove that O(n^2 log n) function evaluations suffice for bounded functions when the number of hyperedges is O(n), which was suggested but not proven in previous works. To see the typical behavior of the algorithm for linkage discovery, three random models of fitness functions are considered. Using probabilistic methods, we prove that the number of function evaluations on the random models is generally smaller than the bound for the arbitrary case. Finally, from the relation between the linkage graph and the Walsh coefficients, it is shown that, for bounded functions, the proposed bounds are also bounds for finding the Walsh coefficients.
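The core perturbation test behind randomized linkage discovery fits in a few lines: a nonzero second difference of f witnesses an interaction between two bits. The example function and the trial count below are assumptions, not the paper's exact algorithm:

```python
import itertools, random

def has_pairwise_linkage(f, n, i, j, trials=8):
    """Randomized test for interaction between bits i and j: a nonzero
    second difference f(x^ij) - f(x^i) - f(x^j) + f(x) is a witness."""
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        def flip(*bits):
            y = list(x)
            for b in bits:
                y[b] ^= 1
            return f(y)
        if abs(flip(i, j) - flip(i) - flip(j) + f(x)) > 1e-12:
            return True
    return False

# example: a bounded order-2 function with linkage inside adjacent pairs
f = lambda x: sum(x[k] * x[k + 1] for k in range(0, len(x) - 1, 2))
n = 6
edges = [(i, j) for i, j in itertools.combinations(range(n), 2)
         if has_pairwise_linkage(f, n, i, j)]
print("detected linkage pairs:", edges)   # expect (0,1), (2,3), (4,5)
```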

10.
Understanding the empirical success of boosting algorithms is an important theoretical problem in machine learning. One of the most influential works is the margin theory, which provides a series of upper bounds on the generalization error of any voting classifier in terms of the margins of the training data. Recently, an equilibrium margin (Emargin) bound, which is sharper than previously known margin bounds, was proposed. In this paper, we conduct extensive experiments to test the Emargin theory. Specifically, we develop an efficient algorithm that, given a boosting classifier (or a voting classifier in general), learns a new voting classifier which usually has a smaller Emargin bound. We then compare the performance of the two classifiers and find that the new classifier often has smaller test errors, which agrees with what the Emargin theory predicts.
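Margin bounds, the Emargin bound included, are stated in terms of the normalized voting-margin distribution, which is easy to compute for a trained ensemble; a sketch with scikit-learn's AdaBoost, where the dataset and sizes are illustrative:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, random_state=6)
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X, y)

# normalized voting margins y * sum_t w_t h_t(x) / sum_t w_t, in [-1, 1]
yy = 2 * y - 1                                   # {0,1} -> {-1,+1}
votes = sum(w * (2 * est.predict(X) - 1) * yy
            for w, est in zip(clf.estimator_weights_, clf.estimators_))
margins = votes / clf.estimator_weights_.sum()
print(f"min margin {margins.min():.3f}, median {np.median(margins):.3f}")
```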

11.
Hybrid wavelet-large margin classifiers have recently proven able to solve difficult signal classification problems in cases where a large margin classifier such as the Support Vector Machine may fail on its own. In this paper, we evaluate several criteria for rating feature sets obtained from various orthogonal filter banks for classification by a Support Vector Machine. Appropriate criteria may then be used for adapting the wavelet filter with respect to the subsequent support vector classification. Our results show that criteria which are computationally more efficient than the radius-margin Support Vector Machine error bound are sufficient for our filter adaptation and, hence, feature selection. Further, we propose an adaptive search algorithm that, once the criterion is fixed, efficiently finds the optimal wavelet filter. As an interesting byproduct, we prove a theorem which allows the radius of a set of vectors to be computed by a standard Support Vector Machine.

12.
We consider bounds on the prediction error of classification algorithms based on sample compression. We refine the notion of a compression scheme to distinguish permutation and repetition invariant from non-invariant compression schemes, leading to different prediction error bounds. We also extend known results on compression to the case of non-zero empirical risk. We provide bounds on the prediction error of classifiers returned by mistake-driven online learning algorithms by interpreting mistake bounds as bounds on the size of the respective compression scheme of the algorithm. This leads to a bound on the prediction error of perceptron solutions that depends on the margin a support vector machine would achieve on the same training sample. Furthermore, using the compression property, we derive bounds on the average prediction error of kernel classifiers in the PAC-Bayesian framework. These bounds assume a prior measure over the expansion coefficients in the data-dependent kernel expansion and bound the average prediction error uniformly over subsets of the space of expansion coefficients.

13.
The paper deals with estimating the maximal sparsity degree for which a given measurement matrix allows sparse reconstruction through ℓ1-minimization. This problem is a key issue in applications featuring particular types of measurement matrices, for instance tomography with a low number of views. In this framework, while the exact bound is NP-hard to compute, most classical criteria guarantee lower bounds that are numerically too pessimistic. In order to achieve an accurate estimation, we propose an efficient greedy algorithm that provides an upper bound for this maximal sparsity. Based on polytope theory, the algorithm consists in finding sparse vectors that cannot be recovered by ℓ1-minimization. Moreover, in order to deal with noisy measurements, theoretical conditions leading to more restrictive but reasonable bounds are investigated. Numerical results are presented for discrete versions of tomography measurement matrices, which are stacked Radon transforms corresponding to different tomograph views.
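Whether a given sparse vector is recovered by ℓ1-minimization can be tested directly with a linear program, which is the basic subroutine such an upper-bound search needs; the u − v splitting, matrix sizes, and sparsity levels below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linprog

def l1_recovers(A, x0, tol=1e-6):
    """Check whether basis pursuit min ||x||_1 s.t. Ax = A @ x0
    returns x0 (LP via the standard x = u - v splitting, u, v >= 0)."""
    m, n = A.shape
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=A @ x0,
                  bounds=[(0, None)] * (2 * n), method="highs")
    x = res.x[:n] - res.x[n:]
    return np.linalg.norm(x - x0, np.inf) < tol

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100)) / np.sqrt(40)
for k in (2, 5, 10, 20):
    x0 = np.zeros(100)
    x0[rng.choice(100, k, replace=False)] = rng.normal(size=k)
    # the smallest k at which recovery fails upper-bounds the maximal
    # sparsity degree, in the spirit of the paper's greedy search
    print(f"k = {k:2d}, recovered: {l1_recovers(A, x0)}")
```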

14.
Multiscale ν-SVM modeling based on adaptive boundary vector extraction
To address the slow training and low regression accuracy of the ν-support vector machine (ν-SVM) when modeling large-scale, multi-modal samples, a multiscale ν-SVM modeling method based on boundary vector extraction is proposed. The method uses an adaptive boundary vector extraction algorithm to pre-extract from the training samples a boundary vector set containing all support vectors, thereby reducing the training set size, and obtains the globally optimal regression model by solving the multiscale ν-SVM quadratic programming problem, approximating complexly distributed samples at multiple scales. Simulation results show that the multiscale ν-SVM based on boundary vector extraction achieves better regression results than ν-SVM.
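A crude stand-in for the boundary-vector pre-extraction idea (not the paper's adaptive algorithm): keep only points whose neighborhood contains both classes, then train a ν-SVM on the reduced set. The neighborhood size k, ν, and the dataset are assumptions:

```python
import numpy as np
from sklearn.svm import NuSVC
from sklearn.neighbors import NearestNeighbors
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, random_state=7)

# keep points whose k nearest neighbours are not all of one class
k = 10
_, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
mixed = np.array([np.unique(y[i[1:]]).size > 1 for i in idx])
Xb, yb = X[mixed], y[mixed]

full = NuSVC(nu=0.2).fit(X, y)
reduced = NuSVC(nu=0.2).fit(Xb, yb)
print(f"kept {mixed.mean():.0%} of samples; "
      f"train acc full={full.score(X, y):.3f}, "
      f"reduced={reduced.score(X, y):.3f}")
```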

15.
In this paper, we take a new look at the mixed structured singular value problem, a problem with important applications in robust stability analysis. Several new upper bounds are proposed using a very simple approach which we call the multiplier approach. These new bounds are convex and computable using linear matrix inequality (LMI) techniques. Most importantly, we show that these upper bounds are actually lower bounds of a well-known upper bound which involves the so-called D-scaling (for complex perturbations) and G-scaling (for real perturbations).
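The well-known D-scaling upper bound that the paper compares against can be computed by a direct search over diagonal scalings; a sketch for complex scalar uncertainty blocks, with a random test matrix and Nelder-Mead as illustrative choices (the paper's new multiplier/LMI bounds are not implemented here):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))

def scaled_norm(logd, M):
    # sigma_max(D M D^{-1}) with D = diag(exp(logd)); any such D
    # gives an upper bound on mu for complex scalar blocks
    d = np.exp(logd)
    return np.linalg.norm((M * d[:, None]) / d[None, :], 2)

res = minimize(scaled_norm, np.zeros(M.shape[0]), args=(M,),
               method="Nelder-Mead")
print("unscaled bound sigma_max(M) :", np.linalg.norm(M, 2))
print("D-scaled upper bound        :", res.fun)
print("spectral-radius lower bound :", np.abs(np.linalg.eigvals(M)).max())
```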

16.
In this paper we discuss lower bounds for the asymptotic worst case ratio of on-line algorithms for different kinds of bin packing problems. Recently, Galambos and Frenk gave a simple proof of the 1.536... lower bound for the 1-dimensional bin packing problem. Following their ideas, we present a general technique that can be used to derive lower bounds for other bin packing problems as well. We apply this technique to prove new lower bounds for the 2-dimensional (1.802...) and 3-dimensional (1.974...) bin packing problems.
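The flavor of such lower-bound constructions can be seen by running First Fit on the classical three-phase adversarial stream: the optimum packs one item of each size per bin, while First Fit is driven to a ratio of 5/3 here (the 1.536... bound in the text applies to every on-line algorithm). Sizes and epsilon are illustrative:

```python
def first_fit(items, cap=1.0):
    bins = []
    for s in items:
        for i, load in enumerate(bins):
            if load + s <= cap + 1e-12:
                bins[i] += s
                break
        else:
            bins.append(s)   # open a new bin
    return len(bins)

n, eps = 600, 1e-3
stream = [1/7 + eps] * n + [1/3 + eps] * n + [1/2 + eps] * n
used = first_fit(stream)     # optimum: one item of each size per bin
print(f"first-fit bins: {used}, optimal: {n}, ratio: {used / n:.3f}")
```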

17.
This paper addresses the robust performance problem when the system under consideration is subjected to norm-bounded time-varying uncertainty. Different performance objectives are considered, and computable bounds are obtained for the worst case performance in various norms. In some important cases, it is shown that these bounds are tight, giving simple exact formulas for computing robust performance measures. One important feature of these formulas is that they enable the direct and exact calculation of a number of worst case performance measures while entirely avoiding the iterations needed to compute these measures by existing methods that rely on repeated scalings and spectral radius bound evaluations.

18.
As a development of powerful SVMs, the recently proposed parametric-margin ν-support vector machine (par-ν-SVM) is good at dealing with heteroscedastic noise in classification problems. In this paper, we propose a novel and fast proximal parametric-margin support vector classifier (PPSVC) based on the par-ν-SVM. The PPSVC maximizes a novel proximal parametric margin by solving a small system of linear equations, whereas the par-ν-SVM maximizes the parametric margin by solving a quadratic programming problem. Therefore, our PPSVC is not only useful in the presence of heteroscedastic noise but also has a much faster learning speed than the par-ν-SVM. Experimental results on several artificial and publicly available datasets show the advantages of our PPSVC in both generalization ability and learning speed. Furthermore, we investigate the performance of the proposed PPSVC on the text categorization problem; the experimental results on two benchmark text corpora show its practicability and effectiveness.
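The "one linear system instead of a QP" trick can be sketched with a plain proximal SVM (in the style of Fung and Mangasarian), a simpler relative of the proposed PPSVC; the derivation, names, and toy data are assumptions:

```python
import numpy as np

def psvm_fit(X, y, C=1.0):
    """Proximal SVM: minimizing C/2 ||D(Xw - eb) - e||^2 + 1/2 ||(w,b)||^2
    gives (w, b) from one regularized linear system, no QP needed."""
    E = np.hstack([X, -np.ones((len(X), 1))])
    z = np.linalg.solve(np.eye(E.shape[1]) / C + E.T @ E, E.T @ y)
    return z[:-1], z[-1]             # w, b

rng = np.random.default_rng(2)
Xp = rng.normal(+1.5, 1.0, (100, 2))
Xn = rng.normal(-1.5, 1.0, (100, 2))
X = np.vstack([Xp, Xn]); y = np.r_[np.ones(100), -np.ones(100)]
w, b = psvm_fit(X, y)
print("train accuracy:", (np.sign(X @ w - b) == y).mean())
```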

19.
Tracking the best hyperplane with a simple budget Perceptron
Shifting bounds for on-line classification algorithms ensure good performance on any sequence of examples that is well predicted by a sequence of changing classifiers. When proving shifting bounds for kernel-based classifiers, one also faces the problem of storing a number of support vectors that can grow unboundedly, unless an eviction policy is used to keep this number under control. In this paper, we show that shifting and on-line learning on a budget can be combined surprisingly well. First, we introduce and analyze a shifting Perceptron algorithm achieving the best known shifting bounds while using an unlimited budget. Second, we show that by applying to the Perceptron algorithm the simplest possible eviction policy, which discards a random support vector each time a new one comes in, we achieve a shifting bound close to the one obtained with no budget restrictions. More importantly, we show that our randomized algorithm strikes the optimal trade-off U = Θ(√B) between the budget B and the norm U of the largest classifier in the comparison sequence. Experiments are presented comparing several linear-threshold algorithms on chronologically ordered textual datasets. These experiments support our theoretical findings in that they show to what extent randomized budget algorithms are more robust than deterministic ones when learning shifting target data streams.
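The randomized eviction policy is simple enough to sketch directly: a kernel Perceptron that, once its budget B is full, discards a uniformly random support vector on each new mistake. Kernel choice, budget, and the synthetic stream are illustrative assumptions:

```python
import numpy as np

def budget_kernel_perceptron(stream, budget, gamma=1.0):
    """Kernel Perceptron on a budget with random eviction."""
    rng = np.random.default_rng(0)
    SV, coef, mistakes = [], [], 0
    for x, y in stream:
        f = sum(c * np.exp(-gamma * np.linalg.norm(x - s) ** 2)
                for s, c in zip(SV, coef))
        if y * f <= 0:                      # mistake-driven update
            mistakes += 1
            if len(SV) >= budget:
                k = rng.integers(len(SV))   # evict a random support vector
                SV.pop(k); coef.pop(k)
            SV.append(x); coef.append(y)
    return mistakes

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=1000))
print("mistakes with B=50:", budget_kernel_perceptron(zip(X, y), 50))
```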

20.
In view of the poor performance of the standard support vector machine (SVM) when the input series contains white noise, a new ν-SVM with a Gaussian loss function, called g-SVM, is put forward to handle white noise. To find the unknown parameters of the g-SVM, an adaptive normal Gaussian particle swarm optimization (ANPSO) algorithm is also proposed. Application results show that the hybrid forecasting model based on the g-SVM and ANPSO is feasible and effective, and a comparison with other methods shows that it outperforms ν-SVM and other traditional approaches.
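A bare-bones global-best PSO tuning a standard ν-SVR stands in for the paper's ANPSO/g-SVM combination (neither the Gaussian loss nor the adaptive PSO variant is implemented); search ranges, swarm size, and data are assumptions:

```python
import numpy as np
from sklearn.svm import NuSVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=200)

def fitness(p):
    nu, logC, loggamma = p
    m = NuSVR(nu=np.clip(nu, 0.05, 0.95), C=10.0**logC, gamma=10.0**loggamma)
    return -cross_val_score(m, X, y, cv=3,
                            scoring="neg_mean_squared_error").mean()

# global-best PSO over (nu, log10 C, log10 gamma)
n_part, iters = 12, 15
pos = rng.uniform([0.1, -2, -3], [0.9, 2, 1], (n_part, 3))
vel = np.zeros_like(pos)
pbest, pval = pos.copy(), np.array([fitness(p) for p in pos])
g = pbest[pval.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random((2, n_part, 3))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = pos + vel
    val = np.array([fitness(p) for p in pos])
    better = val < pval
    pbest[better], pval[better] = pos[better], val[better]
    g = pbest[pval.argmin()].copy()
print("best (nu, log10 C, log10 gamma):", g, " cv mse:", pval.min())
```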
