Similar Documents
20 similar documents retrieved.
1.
In theory, the solution of the support vector machine (SVM) corresponds to the center of the largest hypersphere in the sample space, while the solution of the analytic center machine (ACM), recently proposed by T. B. Trafalis et al., corresponds to the analytic center of the largest hypersphere in the sample space. Theory and experiments indicate that the generalization performance of the ACM is superior to that of the SVM, but its convergence remained to be established. This paper studies the convergence of the analytic center machine and proves that, provided certain conditions are satisfied, the ACM converges.

2.
Benchmarking Least Squares Support Vector Machine Classifiers   Cited: 16 (self: 0, other: 16)
In Support Vector Machines (SVMs), the solution of the classification problem is characterized by a (convex) quadratic programming (QP) problem. In a modified version of SVMs, called Least Squares SVM classifiers (LS-SVMs), a least squares cost function is proposed so as to obtain a linear set of equations in the dual space. While the SVM classifier has a large margin interpretation, the LS-SVM formulation is related in this paper to a ridge regression approach for classification with binary targets and to Fisher's linear discriminant analysis in the feature space. Multiclass categorization problems are represented by a set of binary classifiers using different output coding schemes. While regularization is used to control the effective number of parameters of the LS-SVM classifier, the sparseness property of SVMs is lost due to the choice of the 2-norm. Sparseness can be imposed in a second stage by gradually pruning the support value spectrum and optimizing the hyperparameters during the sparse approximation procedure. In this paper, twenty public domain benchmark datasets are used to evaluate the test set performance of LS-SVM classifiers with linear, polynomial and radial basis function (RBF) kernels. Both the SVM and LS-SVM classifier with RBF kernel in combination with standard cross-validation procedures for hyperparameter selection achieve comparable test set performances. These SVM and LS-SVM performances are consistently very good when compared to a variety of methods described in the literature including decision tree based algorithms, statistical algorithms and instance based learning methods. We show on ten UCI datasets that the LS-SVM sparse approximation procedure can be successfully applied.
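As a concrete illustration of the training step this abstract describes (an illustrative reconstruction, not the authors' code), the LS-SVM dual with an RBF kernel reduces to a single linear system instead of a QP. A minimal NumPy sketch:

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    # Gaussian (RBF) kernel matrix built from pairwise squared distances.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=1.0, sigma=1.0):
    # LS-SVM training: solve the linear system
    #   [ 0      1^T        ] [ b     ]   [ 0 ]
    #   [ 1   K + I / gamma ] [ alpha ] = [ y ]
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]  # bias b and support values alpha

def lssvm_predict(Xnew, Xtrain, alpha, b, sigma=1.0):
    return np.sign(rbf_kernel(Xnew, Xtrain, sigma) @ alpha + b)
```

Note how the 2-norm loss makes every `alpha` nonzero; that is exactly the lost sparseness the abstract mentions, which its second-stage procedure recovers by pruning the support value spectrum and retraining.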

3.
Exploiting the respective strengths of the chunking algorithm and the decomposition algorithm, this paper proposes an algorithm that applies both to the same set of sample points, i.e., runs two optimizations on one dataset simultaneously. This controls the size of the chunking algorithm's working set and accelerates the convergence of the decomposition algorithm.

4.
The standard proximal support vector machine (PSVM) replaces the quadratic programming problem with a regularized least squares problem, which admits a closed-form solution and therefore reduces training time. However, the standard PSVM ignores the distribution of positive and negative samples in the dataset and assigns the same penalty factor to every sample, whereas in practice the class distribution is often imbalanced. To address this, a density-weighted proximal support vector machine (DPSVM) is proposed on top of PSVM. It first computes a density index for each sample; since different samples carry different density information, each sample receives its own penalty factor, and the scalar penalty in the original optimization problem becomes a diagonal matrix. Experiments on UCI datasets comparing DPSVM with SVM and PSVM show that DPSVM achieves better classification performance on datasets with imbalanced positive and negative classes.

5.
Digital Least Squares Support Vector Machines   Cited: 1 (self: 0, other: 1)
This paper presents a very simple digital architecture that implements a Least-Squares Support Vector Machine. The simplicity of the whole system and its good behavior when used to solve classification problems hold good prospects for the application of such a kind of learning machines to build embedded systems.

6.
A Novel Multiclass Support Vector Machine   Cited: 3 (self: 0, other: 3)
The least squares support vector machine replaces the quadratic programming used by the traditional support vector machine with a least squares linear system for pattern recognition. This paper derives and analyzes the binary-classification least squares support vector machine algorithm in detail, constructs a multiclass least squares support vector machine, and tests it on representative samples. The results show that multiclass least squares support vector machines are effective and feasible for pattern recognition.

7.
Multicategory Proximal Support Vector Machine Classifiers   Cited: 5 (self: 0, other: 5)
Given a dataset, each element of which labeled by one of k labels, we construct by a very fast algorithm, a k-category proximal support vector machine (PSVM) classifier. Proximal support vector machines and related approaches (Fung & Mangasarian, 2001; Suykens & Vandewalle, 1999) can be interpreted as ridge regression applied to classification problems (Evgeniou, Pontil, & Poggio, 2000). Extensive computational results have shown the effectiveness of PSVM for two-class classification problems where the separating plane is constructed in time that can be as little as two orders of magnitude shorter than that of conventional support vector machines. When PSVM is applied to problems with more than two classes, the well known one-from-the-rest approach is a natural choice in order to take advantage of its fast performance. However, there is a drawback associated with this one-from-the-rest approach. The resulting two-class problems are often very unbalanced, leading in some cases to poor performance. We propose balancing the k classes and a novel Newton refinement modification to PSVM in order to deal with this problem. Computational results indicate that these two modifications preserve the speed of PSVM while often leading to significant test set improvement over a plain PSVM one-from-the-rest application. The modified approach is considerably faster than other one-from-the-rest methods that use conventional SVM formulations, while still giving comparable test set correctness. (Editor: Shai Ben-David)

8.
A Simple Decomposition Method for Support Vector Machines   Cited: 21 (self: 0, other: 21)
The decomposition method is currently one of the major methods for solving support vector machines. An important issue of this method is the selection of working sets. In this paper, through the design of decomposition methods for bound-constrained SVM formulations, we demonstrate that working set selection is not a trivial task. Then, from the experimental analysis, we propose a simple selection of the working set which leads to faster convergence for difficult cases. Numerical experiments on different types of problems are conducted to demonstrate the viability of the proposed method.

9.
This paper studies face recognition based on support vector machines. In the recognition process, each face image is first divided into sub-images; the discrete wavelet transform is then used to extract features from the sub-images, which are combined into a multidimensional vector serving as the feature of the whole face image. On this basis, one support vector machine is constructed per class for recognition. Simulation experiments on the ORL face database show that the algorithm is relatively simple to implement and performs well.

10.
Face detection, as a key component of face recognition systems, is attracting growing attention in both research and commercial applications. To cope with the complexity of face detection environments, this paper proposes a face detection algorithm based on skin color and support vector machines. For color face images with complex backgrounds, skin-color detection is used to segment skin-colored regions and remove noise, and a support vector machine (SVM) then further examines the skin-like regions to determine the face regions. Experiments show that combining fast skin-color-model detection with SVM verification improves detection accuracy and shortens detection time.

11.
NN-SVM: An Improved Support Vector Machine   Cited: 39 (self: 0, other: 39)
The support vector machine (SVM) is a relatively new machine learning method that constructs an optimal separating hyperplane from a small number of vectors near the class boundary. When training a classifier, the SVM focuses on the border between the two classes; points mixed into the other class usually do not help improve the classifier's performance, yet they greatly increase the computational burden of training, and their presence may also cause overfitting and weaken generalization. To improve the generalization ability of the SVM, this paper proposes an improved SVM, NN-SVM: it first prunes the training set, keeping or discarding each sample according to whether its nearest neighbor carries the same class label, and then trains an SVM on the pruned set to obtain the classifier. Experiments show that NN-SVM outperforms the plain SVM in classification accuracy, classification speed, and the scale of problems it can handle.
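The pruning rule this abstract describes, keeping a sample only when its nearest neighbour shares its label, is simple enough to sketch. This is an illustrative reconstruction, not the paper's code; the pruned set would then be handed to any SVM trainer:

```python
import numpy as np

def nn_prune(X, y):
    # Keep a sample only if its nearest neighbour (excluding itself)
    # carries the same class label; points mixed into the other class
    # are discarded before SVM training.
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)
    nn = d.argmin(axis=1)
    keep = y == y[nn]
    return X[keep], y[keep]
```

The O(n^2) distance matrix is fine for small sets; a k-d tree would replace it for the larger sample scales the abstract targets.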

12.
In this paper, we introduce an interactive multi‐party negotiation support method for decision problems that involve multiple, conflicting linear criteria and linear constraints. Most previous methods for this type of problem have relied on decision alternatives located on the Pareto frontier; in other words, during the negotiation process the parties are presented with new Pareto optimal solutions, requiring the parties to sacrifice the achievement of some criteria in order to secure improvements with respect to other criteria. Such a process may be vulnerable to stalemate situations where none of the parties is willing to move to a potentially better solution, e.g., because they perceive, rightly or wrongly, that they have to give up more than their fair share. Our method relies on “win–win” scenarios in which each party will be presented with “better” solutions at each stage of the negotiations. Each party starts the negotiation process at some inferior initial solution, for instance the best starting point that can be achieved without negotiation with the other parties, such as BATNA (best alternative to a negotiated agreement). In subsequent iterations, the process gravitates closer to the Pareto frontier by suggesting an improved solution to each party, based on the preference information (e.g., aspiration levels) provided by all parties at the previous iteration. The preference information that each party needs to provide is limited to aspiration levels for the objectives, and a party's revealed preference information is not shared with the opposing parties. Therefore, our method may represent a more natural negotiation environment than previous methods that rely on tradeoffs and sacrifice, and provides a positive decision support framework in which each party may be more comfortable with, and more readily accept, the proposed compromise solution.
The current paper focuses on the concept and the algorithmic development, and uses an example to illustrate the nature and capabilities of our method. In a subsequent paper, we will use experiments with real users to explore issues such as whether our proposed “win–win” method tends to result in better decisions or just better negotiations, or both; and how users will react in practice to using an inferior starting point in the negotiations.

13.
Least Squares Hidden Space Support Vector Machines   Cited: 9 (self: 0, other: 9)
王玲, 薄列峰, 刘芳, 焦李成. 《计算机学报》 (Chinese Journal of Computers), 2005, 28(8): 1302-1307
By adopting a least squares loss function in the hidden space, least squares hidden space support vector machines (LSHSSVMs) are proposed. Like hidden space support vector machines (HSSVMs), LSHSSVMs do not require the kernel function to satisfy the positive definiteness condition, which broadens the range of admissible SVM kernels. Because a least squares loss function is used, the optimization problem of the LSHSSVM is an unconstrained convex quadratic program, which is easier to solve than the constrained convex quadratic program arising in HSSVMs. Simulation results show that the proposed algorithm has certain advantages over HSSVMs in computation time and generalization ability.

14.
In practical e-mail filtering applications, owing to certain characteristics of spam itself, a traditional support vector machine classifier that assigns each message unambiguously to one class is error-prone, whereas outputting the probability that a message belongs to a class is more reasonable. Following this idea, this paper builds on the traditional SVM mail classifier and proposes a classifier refinement: the classification output is converted into a probability, and the message's class is determined by comparing this probability against a threshold. Experiments show that the method is effective and feasible.
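One common way to realize this idea, though not necessarily the paper's exact construction, is to pass the raw SVM decision value through a Platt-style sigmoid and threshold the resulting probability. The sigmoid parameters here are illustrative; in practice they would be fitted on held-out data:

```python
import numpy as np

def spam_probability(decision_values, A=-2.0, B=0.0):
    # Platt-style sigmoid mapping raw SVM outputs to P(spam).
    # A and B are illustrative; they are normally fitted by maximum
    # likelihood on a calibration set.
    return 1.0 / (1.0 + np.exp(A * decision_values + B))

def classify_spam(decision_values, threshold=0.9):
    # Flag a message as spam only when the probability clears a high
    # threshold; uncertain mail stays in the inbox.
    return spam_probability(decision_values) >= threshold
```

Raising the threshold trades recall for precision, which matches the abstract's point that a hard class assignment is riskier than a probabilistic one for borderline mail.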

15.
The support vector machine is a new machine learning method that has been successfully applied to pattern classification, regression analysis, and density estimation. Based on statistical learning theory and optimization theory, this paper establishes an unconstrained optimization model for linear support vector machines and presents an effective approximate solution method, the maximum entropy method, providing a new avenue for solving the SVM optimization problem. The method is particularly easy to implement on a computer. Numerical results demonstrate the feasibility and effectiveness of the model and algorithm.

16.
Least Squares Support Vector Machine Classifiers   Cited: 396 (self: 1, other: 396)
In this letter we discuss a least squares version of support vector machine (SVM) classifiers. Due to equality-type constraints in the formulation, the solution follows from solving a set of linear equations, instead of quadratic programming as for classical SVMs. The approach is illustrated on a two-spiral benchmark classification problem.

17.
This paper investigates the detection and recognition of a particular aircraft type in high-resolution remote sensing images based on support vector machines. A new method is proposed that combines the wavelet transform with the gray-level co-occurrence matrix to extract feature information from target samples; tests on Brodatz textures show that the method effectively improves texture classification accuracy. In addition, the support vector machine is applied to target recognition in remote sensing images: a block-wise region search locates the region containing the target, achieving detection and recognition. Experiments show that the method is fast, efficient, and reasonably robust.

18.
An Online Learning Algorithm for Least Squares Twin Support Vector Machines   Cited: 1 (self: 0, other: 1)
An online learning algorithm is proposed for the least squares twin support vector machine, which has two nonparallel separating hyperplanes. By applying the matrix inversion lemma, the proposed online algorithm makes full use of previous training results and avoids inverting large matrices, thereby reducing computational complexity. Simulation results verify the effectiveness of the proposed algorithm.
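The matrix inversion lemma step is the heart of such an online update: when a new sample contributes a rank-one term x x^T to the system matrix, the stored inverse is updated directly via the Sherman-Morrison identity instead of being refactorized. A sketch (illustrative, not the paper's code):

```python
import numpy as np

def sherman_morrison_update(A_inv, x):
    # Given A_inv = A^{-1}, return (A + x x^T)^{-1} in O(n^2) instead
    # of the O(n^3) cost of inverting from scratch; this is how the
    # online algorithm reuses the previous training result.
    Ax = A_inv @ x
    return A_inv - np.outer(Ax, Ax) / (1.0 + x @ Ax)
```

Each incoming sample thus costs one matrix-vector product and one outer product, which is what keeps the per-step complexity of the online learner low.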

19.
Based on the linear proximal support vector machine, an improved classifier, the direct support vector machine, is proposed. For linear classification the two are identical; for nonlinear classification, the direct support vector machine has simpler formulas for both the Lagrange multipliers and the classifier, halving the computational complexity, and the linear and nonlinear cases are unified simply by substituting the kernel function, so the same algorithm code can be used, remedying a shortcoming of the proximal support vector machine. Numerical experiments show that in the nonlinear case the direct support vector machine trains about twice as fast as the proximal support vector machine and tests much faster still, with no loss of classification accuracy.

20.
Kernel Parameter Selection in Support Vector Machines   Cited: 18 (self: 3, other: 18)
The choice of kernel parameters is an important problem in support vector machines, as it directly affects the model's generalization ability. Determining kernel parameters by using steepest descent to find the minimum of a leave-one-out (LOO) error bound is a recent approach, but it easily gets trapped in local optima. This paper therefore proposes a kernel parameter selection method that minimizes the LOO bound with a hybrid genetic algorithm. Experiments show that the kernel parameters selected by this method improve classification accuracy and are practical.
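For orientation, the baseline that LOO-bound and genetic-algorithm machinery is meant to improve on is a plain validation grid search over the kernel width. A toy sketch with a simple kernel-sum classifier standing in for the SVM (all names illustrative, and this is a grid search, not the paper's hybrid GA):

```python
import numpy as np

def rbf_score(Xtr, ytr, Xva, sigma):
    # Kernel-sum (Parzen-style) decision rule used as a stand-in
    # classifier: sign of the label-weighted RBF similarities.
    d2 = ((Xva[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    return np.sign(np.exp(-d2 / (2 * sigma ** 2)) @ ytr)

def select_sigma(Xtr, ytr, Xva, yva, grid):
    # Pick the kernel width with the lowest validation error.
    # The paper replaces this exhaustive search (and the validation
    # error) with a hybrid GA minimising an LOO error bound.
    errs = [np.mean(rbf_score(Xtr, ytr, Xva, s) != yva) for s in grid]
    return grid[int(np.argmin(errs))]
```

The failure modes the grid exposes are the ones the abstract alludes to: a too-small width degenerates toward memorization (and numerical underflow), while a too-large width washes out class structure.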
