Similar Literature
20 similar documents found.
1.
A Weighted k-Nearest-Neighbor Method for the Rejection Regions of Support Vector Machines (Cited: 1, self: 0, others: 1)
To resolve the rejection (unclassifiable) regions that arise in 1-v-r and 1-v-1 support vector machines, a weighted k-nearest-neighbor method is proposed. For a sample that falls into a rejection region (a rejected sample), the method computes its distance to every training sample, lets the k nearest training samples vote on the rejected sample's class, weighting each vote by distance, and assigns the class receiving the most votes. Experimental results show that the weighted k-nearest-neighbor method achieves zero rejection and improves the classification performance of traditional multi-class support vector machines.
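The voting rule this abstract describes can be sketched as follows (a minimal illustration in plain NumPy; all names are hypothetical, and the SVM stage that flags the rejected sample is omitted):

```python
import numpy as np

def weighted_knn_vote(x, X_train, y_train, k=3, eps=1e-12):
    """Classify a rejected sample x by an inverse-distance-weighted
    vote among its k nearest training samples."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    votes = {}
    for i in idx:
        votes[y_train[i]] = votes.get(y_train[i], 0.0) + 1.0 / (d[i] + eps)
    return max(votes, key=votes.get)

# hypothetical toy data: two class-0 points near the origin, one
# class-1 point far away
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
y = np.array([0, 0, 1])
label = weighted_knn_vote(np.array([0.05, 0.0]), X, y, k=3)
```

Inverse-distance weighting is one natural reading of "weighted by distance"; the paper may use a different weighting function.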

2.
Multi-Classification by Using Tri-Class SVM (Cited: 2, self: 0, others: 2)
The standard approach to multi-class classification with binary classifiers is a two-phase (decomposition, reconstruction) training scheme. The most popular decomposition procedures are pairwise coupling (one versus one, 1-v-1), which trains a learning machine for each pair of classes, and the one-versus-all scheme (1-v-r), which trains each class against all remaining classes. In this article a 1-v-1 tri-class Support Vector Machine (SVM) is presented. Expanding the architecture of this machine to three categories specifically addresses the loss of information that occurs in the usual 1-v-1 training procedure. By means of a third class, the proposed machine allows all the information to be incorporated into the remaining training patterns when a multi-class problem is considered as a 1-v-1 decomposition. Three general structures are presented, each improving on features of the preceding one. For multi-classification problems, it is demonstrated that the final proposed machine allows ordinal regression as a form of decomposition procedure. Examples and experimental results illustrate the performance of the new tri-class SV machine.
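For contrast with the tri-class machine, the usual 1-v-1 reconstruction by majority vote, whose information loss the paper targets, can be sketched as follows (function names and the toy threshold classifiers are hypothetical; real pairwise SVM decision functions would stand behind `pairwise`):

```python
def one_vs_one_predict(x, classes, pairwise):
    """Standard 1-v-1 reconstruction: each pairwise decision function
    f(x) >= 0 votes for its first class, otherwise for its second;
    the class with the most votes wins."""
    votes = {c: 0 for c in classes}
    for (a, b), f in pairwise.items():
        votes[a if f(x) >= 0 else b] += 1
    return max(votes, key=votes.get)

# toy 1-D example with three classes separated near 1.0 and 2.0
pairwise = {
    (0, 1): lambda x: 1.0 - x,   # class 0 below 1.0
    (0, 2): lambda x: 1.5 - x,
    (1, 2): lambda x: 2.0 - x,   # class 1 below 2.0
}
pred_low = one_vs_one_predict(0.5, [0, 1, 2], pairwise)
pred_mid = one_vs_one_predict(1.7, [0, 1, 2], pairwise)
```

Note that each pairwise machine here is trained (conceptually) only on its two classes, which is exactly the discarded information the tri-class construction reclaims.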

3.
Dynamically Pruned Binary-Tree Multi-class SVM for Intrusion Detection (Cited: 1, self: 1, others: 0)
To address the long training times and slow decision speeds of existing multi-class support vector machine algorithms, a dynamically pruned binary-tree multi-class SVM algorithm is proposed; it effectively reduces the number of support vectors and thereby the training time. To verify the algorithm's effectiveness, an intrusion-detection model built on it was evaluated on the KDD99 dataset, and the results were compared with the 1-v-r and 1-v-1 algorithms. The experimental results show that the proposed algorithm is efficient and feasible.

4.
New Research on Multi-class Classification Algorithms for Support Vector Machines (Cited: 2, self: 1, others: 1)
Support vector machines were originally formulated for binary classification; extending them to multi-class problems is one of the active topics in current SVM research. This paper gives an in-depth analysis of the decomposition-reconstruction approach to multi-class SVM classification, discusses in detail the two key factors affecting classifier performance, namely the decomposition strategy and the combination strategy, and verifies this analysis experimentally. Finally, multi-class SVM methods, including the M-ary SVM and the fuzzy SVM, are compared through experiments.

5.
Adaptive binary tree for fast SVM multiclass classification (Cited: 1, self: 0, others: 1)
Jin, Cheng, Runsheng. Neurocomputing, 2009, 72(13-15): 3370.
This paper presents an adaptive binary tree (ABT) to reduce the test-time computational complexity of multiclass support vector machines (SVMs). It achieves fast classification by (1) reducing the number of binary SVMs evaluated per classification, reusing the separating planes of some binary SVMs to discriminate other binary problems, and (2) selecting the binary SVMs with the fewest average number of support vectors (SVs), where the average number of SVs is proposed as a measure of the computational cost of excluding one class. Experiments on many benchmark data sets, against five well-known methods, demonstrate that our method speeds up the test phase while retaining the high accuracy of SVMs.

6.
A Multi-class Support Vector Machine Classification Method Based on an Unbalanced Binary Tree (Cited: 2, self: 0, others: 2)
A new multi-class support vector machine classification method based on an unbalanced binary tree is proposed. Using prior knowledge of the distribution of the labeled samples, the method constructs a binary decision tree in which the most easily separated classes are split off first, level by level from the root, so as to obtain better generalization. The method eliminates the unclassifiable regions of traditional classification algorithms, requires only N-1 SVM classifiers to be trained, and needs fewer than N decisions at test time. The method was applied to face-recognition experiments; the results show that, compared with traditional classification algorithms, it has the lowest average classification time.
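The peel-off decision scheme described above, with N-1 binary classifiers and fewer than N tests per sample, might look like this (names and the linear toy classifiers are hypothetical; in the paper each node is an SVM, and the ordering comes from prior class-distribution knowledge):

```python
def tree_predict(x, order, node_classifier):
    """`order` lists classes from easiest to hardest to separate.
    node_classifier[c](x) >= 0 means "x belongs to class c"; each test
    peels one class off, so N classes need only N-1 classifiers and a
    sample is decided in at most N-1 tests."""
    for c in order[:-1]:
        if node_classifier[c](x) >= 0:
            return c
    return order[-1]

# toy 1-D setting: class 2 lives above 5, class 0 below 1, class 1 between
order = [2, 0, 1]
node_classifier = {2: lambda x: x - 5.0, 0: lambda x: 1.0 - x}
low = tree_predict(0.5, order, node_classifier)
mid = tree_predict(3.0, order, node_classifier)
high = tree_predict(9.0, order, node_classifier)
```

Because every point falls through the tree to exactly one leaf, no region of the input space is left unclassified, which is the paper's fix for the rejection-region problem.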

7.
We present an improved version of the One-Against-All (OAA) method for multiclass SVM classification based on a decision-tree approach. The proposed decision-tree-based OAA (DT-OAA) increases the classification speed of OAA by using posterior probability estimates of the binary SVM outputs. DT-OAA reduces the average number of binary SVM tests required in the testing phase well below that of OAA and other multiclass SVM methods. For a balanced multiclass dataset with K classes, DT-OAA requires in the best case only (K + 1)/2 binary tests on average, as opposed to K binary tests in OAA; on imbalanced multiclass datasets we observed DT-OAA to be much faster still, given a proper choice of the order in which the binary SVMs are arranged in the decision tree. Computational comparisons on publicly available datasets indicate that the proposed method achieves almost the same classification accuracy as OAA but is much faster in decision making.
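One way to realize the sequential, probability-ordered testing DT-OAA describes is the following sketch (all names and the fixed posterior table are hypothetical simplifications; in the paper the ordering comes from posterior probability estimates of the binary SVM outputs):

```python
def dt_oaa_predict(x, posterior, binary_svm):
    """Test the one-against-all binary SVMs in decreasing order of an
    estimated posterior probability and stop at the first acceptance;
    with informative posteriors this averages roughly (K + 1) / 2
    tests instead of K.  `posterior` maps class -> estimate for x."""
    order = sorted(binary_svm, key=lambda c: -posterior[c])
    for c in order[:-1]:
        if binary_svm[c](x) >= 0:
            return c
    return order[-1]

# hypothetical posteriors and 1-D one-vs-all decision functions
posterior = {0: 0.1, 1: 0.7, 2: 0.2}
binary_svm = {
    0: lambda x: -x,                   # class 0 at or below 0
    1: lambda x: 1.0 - abs(x - 2.0),   # class 1 near 2
    2: lambda x: x - 4.0,              # class 2 above 4
}
pred = dt_oaa_predict(2.3, posterior, binary_svm)
```

Here the most probable class is tested first, so a confident sample is resolved with a single binary test.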

8.
Standard support vector machine (SVM) training algorithms have O(l^3) computational and O(l^2) space complexity, where l is the training set size, and are thus computationally infeasible on very large data sets. To alleviate the computational burden of SVM training, we propose an algorithm that trains SVMs on a set of bound vectors extracted via Fisher projection. For linearly separable problems we use the linear Fisher discriminant to compute the projection line, while for nonlinearly separable problems we use the kernel Fisher discriminant. In each case we select, as bound vectors, a certain ratio of samples whose projections are adjacent to those of the other class. Theoretical analysis shows that the proposed algorithm has low computational and space complexity. Extensive experiments on several classification benchmarks demonstrate the effectiveness of our approach.
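For the linear case, the bound-vector selection might be sketched as below (a toy illustration; the function name, the 0.5 ratio, and the small regularizer added for invertibility are assumptions, not details from the paper):

```python
import numpy as np

def fisher_bound_vectors(X0, X1, ratio=0.5):
    """Project both classes onto the (regularized) linear Fisher
    direction and keep, from each class, the `ratio` fraction of
    samples whose projections lie nearest the other class."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # within-class scatter, lightly regularized so it is invertible
    Sw = np.cov(X0.T, bias=True) * len(X0) + np.cov(X1.T, bias=True) * len(X1)
    w = np.linalg.solve(Sw + 1e-8 * np.eye(Sw.shape[0]), m1 - m0)
    p0, p1 = X0 @ w, X1 @ w
    k0 = max(1, int(ratio * len(X0)))
    k1 = max(1, int(ratio * len(X1)))
    # class 0 projects low and class 1 high along w: keep class 0's
    # largest projections and class 1's smallest as candidate bound vectors
    return X0[np.argsort(-p0)[:k0]], X1[np.argsort(p1)[:k1]]

# toy data: a unit square for class 0 and the same square shifted by 5
X0 = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
X1 = X0 + np.array([5.0, 0.0])
B0, B1 = fisher_bound_vectors(X0, X1, ratio=0.5)
```

The retained points are exactly the facing edges of the two squares, which is where the SVM's support vectors would come from.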

9.
Cutting-plane training of structural SVMs (Cited: 4, self: 0, others: 4)
Discriminative training approaches like structural SVMs have shown much promise for building highly complex and accurate models in areas like natural language processing, protein structure prediction, and information retrieval. However, current training algorithms are computationally expensive or intractable on large datasets. To overcome this bottleneck, this paper explores how cutting-plane methods can provide fast training not only for classification SVMs, but also for structural SVMs. We show that for an equivalent “1-slack” reformulation of the linear SVM training problem, our cutting-plane method has time complexity linear in the number of training examples. In particular, the number of iterations does not depend on the number of training examples, and it is linear in the desired precision and the regularization parameter. Furthermore, we present an extensive empirical evaluation of the method applied to binary classification, multi-class classification, HMM sequence tagging, and CFG parsing. The experiments show that the cutting-plane algorithm is broadly applicable and fast in practice. On large datasets, it is typically several orders of magnitude faster than conventional training methods derived from decomposition methods like SVM-light, or conventional cutting-plane methods. Implementations of our methods are available at .

10.
Support Vector Machines (SVM) represent one of the most promising Machine Learning (ML) tools that can be applied to the problem of traffic classification in IP networks. In the case of SVMs, there are still open questions that need to be addressed before they can be generally applied to traffic classifiers. Having been designed essentially as techniques for binary classification, their generalization to multi-class problems is still under research. Furthermore, their performance is highly susceptible to the correct optimization of their working parameters. In this paper we describe an approach to traffic classification based on SVM. We apply one of the approaches to solving multi-class problems with SVMs to the task of statistical traffic classification, and describe a simple optimization algorithm that allows the classifier to perform correctly with as little training as a few hundred samples. The accuracy of the proposed classifier is then evaluated over three sets of traffic traces, coming from different topological points in the Internet. Although the results are relatively preliminary, they confirm that SVM-based classifiers can be very effective at discriminating traffic generated by different applications, even with reduced training set sizes.

11.
Predicting corporate credit-rating using statistical and artificial intelligence (AI) techniques has received considerable research attention in the literature. In recent years, multi-class support vector machines (MSVMs) have become a very appealing machine-learning approach due to their good performance. Until now, researchers have proposed a variety of techniques for adapting support vector machines (SVMs) to multi-class classification, since SVMs were originally devised for binary classification. However, most of them have only focused on classifying samples into nominal categories; thus, the unique characteristic of credit rating, ordinality, has seldom been considered in the proposed approaches. This study proposes a new type of MSVM classifier (named OMSVM) that extends binary SVMs by applying an ordinal pairwise partitioning (OPP) strategy. Our model can efficiently and effectively handle multiple ordinal classes. To validate OMSVM, we applied it to a real-world case of bond rating. We compared the results of our model with those of conventional MSVM approaches and other AI techniques including MDA, MLOGIT, CBR, and ANNs. The results showed that our proposed model improves classification performance in comparison to other typical multi-class classification techniques while using fewer computational resources.

12.
Support vector machines for texture classification (Cited: 18, self: 0, others: 18)
This paper investigates the application of support vector machines (SVMs) to texture classification. Instead of relying on an external feature extractor, the SVM receives the gray-level values of the raw pixels, as SVMs can generalize well even in high-dimensional spaces. Furthermore, it is shown that SVMs can incorporate conventional texture feature extraction methods within their own architecture while also providing solutions to problems inherent in those methods. One-against-others decomposition is adopted to apply binary SVMs to multitexture classification, and a neural network is used as an arbitrator to make the final classification from the several one-against-others SVM outputs. Experimental results demonstrate the effectiveness of SVMs in texture classification.

13.
衣治安, 刘杨. 计算机应用 (Journal of Computer Applications), 2007, 27(11): 2860-2862.
Among the better-performing multi-class algorithms at present are the 1-v-r support vector machine (SVM), the 1-v-1 SVM, and the DDAG SVM, but they suffer from large unclassifiable regions and long training times. A binary-tree-based multi-class SVM algorithm is proposed for e-mail classification and filtering, converting the multi-class problem into binary classifications by constructing a binary tree. The algorithm clusters first and then classifies: it computes the maximum similarity between the test samples and the subclass centers, together with the separability between subclasses, to construct the optimal separating hyperplane at each decision node. A C-class problem needs only C-1 decision functions, which saves training time. Experiments show that the algorithm achieves high recall and precision.

14.

When handling multi-class problems, a support vector machine (SVM) must combine several binary SVMs to obtain a multi-class decision. Traditional multi-class extensions use only the hard outputs of the SVMs, which loses information to some degree. To use the information more fully, an SVM multi-class algorithm based on evidential reasoning and multi-attribute decision making is proposed: the multi-class problem is treated as a multi-attribute decision problem, and the fuzzy cautious ordered weighted averaging with evidential reasoning method (FCOWA-ER) is used to make the multi-class decision. Experimental results show that the proposed method achieves higher classification accuracy.


15.
The sparsity-driven classification technologies have attracted much attention in recent years, due to their capability of providing more compressive representations and clear interpretation. The two most popular classification approaches are support vector machines (SVMs) and kernel logistic regression (KLR), each having its own advantages. The sparsification of SVMs has been well studied, and many sparse versions of the L2-norm SVM, such as the L1-norm SVM (L1-SVM), have been developed. The sparsification of KLR has been studied less: the existing sparsifications of KLR are mainly based on L1-norm and L2-norm penalties, which lead to sparse versions whose solutions are not as sparse as they should be. A very recent study of L1/2 regularization theory in compressive sensing shows that L1/2 sparse modeling can yield solutions sparser than those of the L1 and L2 norms and, furthermore, that the model can be solved efficiently by a simple iterative thresholding procedure. The objective function treated by L1/2 regularization theory is, however, of square form, whose gradient is linear in its variables (the so-called linear gradient function). In this paper, by extending the linear gradient function of the L1/2 regularization framework to the logistic function, we propose a novel sparse version of KLR, the L1/2 quasi-norm kernel logistic regression (L1/2-KLR). This version integrates the advantages of KLR and L1/2 regularization and defines an efficient implementation scheme for sparse KLR. We suggest a fast iterative thresholding algorithm for L1/2-KLR and prove its convergence. A series of simulations demonstrates that L1/2-KLR can often obtain sparser solutions than the existing sparsity-driven versions of KLR, at the same or a better accuracy level; the conclusion also holds in comparison with sparse SVMs (L1-SVM and L2-SVM). We show an exclusive advantage of L1/2-KLR: the regularization parameter in the algorithm can be set adaptively whenever the sparsity (correspondingly, the number of support vectors) is given, which suggests a methodology for comparing the sparsity-promotion capability of different sparsity-driven classifiers. As an illustration of the benefits of L1/2-KLR, we give two applications in semi-supervised learning, showing that L1/2-KLR can be successfully applied to classification tasks in which only a few data are labeled.

16.
Type-2 fuzzy logic-based classifier fusion for support vector machines (Cited: 1, self: 0, others: 1)
As a machine-learning tool, support vector machines (SVMs) have been gaining popularity due to their promising performance. However, the generalization ability of SVMs often depends on whether the selected kernel functions suit the actual classification data. To lessen the sensitivity of SVM classification to the choice of kernel and to improve SVM generalization, this paper proposes a fuzzy fusion model that combines multiple SVM classifiers. To better handle the uncertainties in real classification data and in the membership functions (MFs) of a traditional type-1 fuzzy logic system (FLS), we apply interval type-2 fuzzy sets to construct a type-2 SVM-fusion FLS. This type-2 fusion architecture takes the classification results of the individual SVM classifiers into consideration and generates the combined classification decision as its output. Besides the distances of data examples to the SVM hyperplanes, the type-2 fuzzy SVM fusion system also considers the accuracy of the individual SVMs. Our experiments show that the type-2-based SVM fusion classifiers outperform individual SVM classifiers in most cases, and that the type-2 fuzzy-logic-based fusion model is generally better than the type-1-based model.

17.
A new classification method for fault waveforms is proposed, based on the discrete orthogonal wavelet transform (DOWT) and a hybrid support vector machine (hybrid SVM), for identifying the fault type of a three-phase voltage inverter. The output-voltage waveforms of the faulty inverter are decomposed by DOWT into wavelet coefficient matrices, from which singular value vectors are obtained as features of the time-series periodic waveforms. A multi-class classification method based on a new Huffman-tree structure is then presented to realize a 1-v-r SVM strategy, and the extracted features are fed to the hybrid SVM to determine the fault type. Compared with a structure based on an ordinary binary tree, the proposed Huffman-tree SVM is superior in fault diagnosis: its average leave-one-out (LOO) correctness exceeds that of the general SVM by 3.65%, reaching 99.6%.

18.
Based on the binary classification principle of support vector machines, an improved multi-class classification method combining adaptive resonance theory (ART) with support vector machines is proposed; it improves on the traditional one-versus-one multi-class SVM scheme. Rather than deciding by majority vote over the results of the binary classifiers, an adaptive resonance theory network fuses the outputs of the binary classifiers, overcoming the voting scheme's tendency to make wrong decisions when classifier outputs are close to zero and its inability to decide when votes are tied. The algorithm has been applied to the classification of glass, and simulation experiments show that the method achieves good classification results.

19.
Twin Support Vector Machines for pattern classification (Cited: 3, self: 0, others: 3)
We propose the twin SVM, a binary SVM classifier that determines two nonparallel planes by solving two related SVM-type problems, each of which is smaller than that of a conventional SVM. The twin SVM formulation is in the spirit of proximal SVMs via generalized eigenvalues. On several benchmark data sets, the twin SVM is not only fast but also shows good generalization.
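Once the two nonparallel planes are learned, classification reduces to nearest-plane assignment; a minimal sketch (the plane parameters below are hypothetical stand-ins, not solutions of the two SVM-type problems):

```python
import numpy as np

def twin_svm_predict(x, w1, b1, w2, b2):
    """Assign x to the class whose nonparallel plane w.x + b = 0
    lies nearer, by perpendicular distance |w.x + b| / ||w||."""
    d1 = abs(w1 @ x + b1) / np.linalg.norm(w1)
    d2 = abs(w2 @ x + b2) / np.linalg.norm(w2)
    return 0 if d1 <= d2 else 1

# toy planes: x = 0 for class 0 and x = 4 for class 1
w1, b1 = np.array([1.0, 0.0]), 0.0
w2, b2 = np.array([1.0, 0.0]), -4.0
near0 = twin_svm_predict(np.array([1.0, 0.0]), w1, b1, w2, b2)
near1 = twin_svm_predict(np.array([3.5, 2.0]), w1, b1, w2, b2)
```

Each plane is fitted to pass close to its own class and far from the other, so the nearest-plane rule follows naturally from the formulation.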

20.
Support vector machines (SVMs) have proven to be a powerful technique for pattern classification. SVMs map inputs into a high-dimensional space and then separate classes with a hyperplane. A critical aspect of using SVMs successfully is the design of the inner product, the kernel, induced by the high-dimensional mapping. We consider the application of SVMs to speaker and language recognition. A key part of our approach is the use of a kernel that compares sequences of feature vectors and produces a measure of similarity. Our sequence kernel is based upon generalized linear discriminants. We show that this strategy has several important properties. First, the kernel uses an explicit expansion into SVM feature space; this property makes it possible to collapse all support vectors into a single model vector and achieve low computational complexity. Second, the SVM builds upon a simpler mean-squared error classifier to produce a more accurate system. Finally, the system is competitive with and complementary to other approaches, such as Gaussian mixture models (GMMs). We give results for the 2003 NIST speaker and language evaluations of the system and also show fusion with the traditional GMM approach.
