Similar Documents
20 similar documents retrieved.
1.
In a previous paper, the author (2001) proved the convergence of a commonly used decomposition method for support vector machines (SVMs). However, there was no theoretical justification for its stopping criterion, which is based on the gap of the violation of the optimality condition. It is essential that this gap asymptotically approach zero, so that existing implementations are guaranteed to stop within a finite number of iterations after reaching a specified tolerance. Here, we prove this result and illustrate it with two extensions: ν-SVM and the multiclass SVM of Crammer and Singer (2001). A further result shows that, in the final iterations of the decomposition method, only a particular set of variables is still being modified. This supports the use of the shrinking and caching techniques in some existing implementations. Finally, we prove the asymptotic convergence of a decomposition method for this multiclass SVM. Discussions of the difference between this convergence proof and the one in another paper by Lin are also included.
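As a rough illustration of the kind of stopping criterion discussed in this abstract, the following minimal Python sketch (not taken from the paper) computes the maximal KKT-violation gap for the standard C-SVM dual; a decomposition loop would stop once the gap falls below a tolerance. The dual gradient, labels and upper bound C are assumed to be maintained by the surrounding solver.

import numpy as np

def violation_gap(alpha, grad, y, C):
    # Gap of the violation of the optimality condition for the C-SVM dual
    # min 0.5*a'Qa - e'a  s.t.  y'a = 0, 0 <= a <= C, with grad = Q @ alpha - 1.
    # The gap  max_{i in I_up} -y_i*grad_i  -  min_{j in I_low} -y_j*grad_j
    # is non-positive exactly at a dual optimum.
    f = -y * grad
    up = ((alpha < C) & (y == 1)) | ((alpha > 0) & (y == -1))
    low = ((alpha < C) & (y == -1)) | ((alpha > 0) & (y == 1))
    return f[up].max() - f[low].min()

# Typical use inside a decomposition loop (eps around 1e-3 in common solvers):
# while violation_gap(alpha, Q @ alpha - 1.0, y, C) > eps:
#     ... select a working set, solve the subproblem, update alpha and grad ...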

2.
On the convergence of the decomposition method for support vector machines   (Total citations: 13; self-citations: 0; citations by others: 13)
The decomposition method is currently one of the major methods for solving support vector machines (SVMs). Its convergence properties have not been fully understood. The general asymptotic convergence was first proposed by Chang et al. However, their working set selection does not coincide with existing implementations. A later breakthrough by Keerthi and Gilbert (2000, 2002) proved finite termination for practical cases while the size of the working set is restricted to two. In this paper, we prove the asymptotic convergence of the algorithm used by the software SVMlight and other later implementations. The size of the working set can be any even number. Extensions to other SVM formulations are also discussed.
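To make the "working set of any even size" concrete, here is a small Python sketch of an SVMlight-style selection rule (a generic illustration, not the solver's actual code): indices are ranked by the violation measure f_i = -y_i * grad_i, and the q/2 most violating candidates are taken from each side.

import numpy as np

def select_working_set(alpha, grad, y, C, q=10):
    # Pick q/2 most violating indices from I_up and q/2 from I_low,
    # ranked by f_i = -y_i * grad_i (largest first for I_up, smallest for I_low).
    assert q % 2 == 0
    f = -y * grad
    up = np.where(((alpha < C) & (y == 1)) | ((alpha > 0) & (y == -1)))[0]
    low = np.where(((alpha < C) & (y == -1)) | ((alpha > 0) & (y == 1)))[0]
    top = up[np.argsort(-f[up])][: q // 2]      # largest f_i among I_up
    bottom = low[np.argsort(f[low])][: q // 2]  # smallest f_j among I_low
    return np.concatenate([top, bottom])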

3.
Today, decomposition methods are one of the most popular methods for training support vector machines (SVMs). With the use of kernels that do not satisfy Mercer's condition, new techniques must be designed to handle the non-positive-semidefinite kernels resulting from this choice. In this work we incorporate difference of convex (DC) optimization techniques into decomposition methods to tackle this difficulty. The new approach needs no problem modification, and we show that the mere use of a truncated DC algorithm (DCA) in the decomposition scheme produces a sufficient decrease of the objective function at each iteration. Thanks to this property, an asymptotic convergence proof of the new algorithm is obtained without any blockwise convexity assumption on the objective function. We also investigate a working set selection rule using second-order information for sequential minimal optimization (SMO)-type decomposition in the spirit of DC optimization. Numerical results show the robustness and the efficiency of the new methods compared with state-of-the-art software.
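One way to picture the DC (difference of convex) treatment of a non-positive-semidefinite kernel is the spectral-shift splitting below; this is only a generic Python sketch of the idea, not the decomposition scheme of the paper.

import numpy as np

def dc_split(Q):
    # Split a (possibly indefinite) kernel sub-matrix as Q = Q_plus - Q_minus,
    # a convex/concave decomposition usable by a DCA step (illustrative only).
    lam_min = np.linalg.eigvalsh(Q).min()
    t = max(0.0, -lam_min) + 1e-12        # shift making Q + t*I positive semidefinite
    Q_plus = Q + t * np.eye(Q.shape[0])   # convex part
    Q_minus = t * np.eye(Q.shape[0])      # concave part (subtracted)
    return Q_plus, Q_minus

# A DCA iteration then linearizes the concave part at the current point a_k and
# solves the resulting convex QP: min 0.5*a'Q_plus a - (e + Q_minus a_k)' a.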

4.
Embedding feature selection in nonlinear support vector machines (SVMs) leads to a challenging non-convex minimization problem, which can be prone to suboptimal solutions. This paper develops an effective algorithm to directly solve the embedded feature selection primal problem. We use a trust-region method, which is better suited for non-convex optimization than line-search methods and guarantees convergence to a minimizer. We devise an alternating optimization approach to tackle the problem efficiently, breaking it down into a convex subproblem, corresponding to standard SVM optimization, and a non-convex subproblem for feature selection. Importantly, we show that a straightforward alternating optimization approach can be susceptible to saddle point solutions. We propose a novel technique, which shares an explicit margin variable, to overcome saddle point convergence and improve solution quality. Experimental results show our method outperforms the state-of-the-art embedded SVM feature selection method, as well as other leading filter and wrapper approaches.
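The alternating structure described above can be summarized by the following Python skeleton; solve_svm and solve_features are hypothetical callables standing in for the standard SVM subproblem and the non-convex (e.g. trust-region) feature subproblem, and the shared-margin device of the paper is not reproduced here.

import numpy as np

def alternating_embedded_fs(X, y, solve_svm, solve_features, iters=20):
    # Alternate between the convex SVM subproblem (theta fixed) and the
    # non-convex feature-selection subproblem (SVM model fixed).
    theta = np.ones(X.shape[1])                   # feature scaling / selection vector
    model = None
    for _ in range(iters):
        model = solve_svm(X * theta, y)           # convex subproblem: ordinary SVM training
        theta = solve_features(X, y, model, theta)  # non-convex feature subproblem
    return model, theta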

5.
A Survey of Support Vector Machine Training Algorithms   (Total citations: 6; self-citations: 0; citations by others: 6)
Training an SVM essentially amounts to solving a quadratic programming problem. In practical applications, when the number of training samples is large, standard quadratic optimization techniques are difficult to apply. To address this problem, researchers have proposed various solutions whose core idea is to decompose the whole optimization problem into several subproblems of the same nature and obtain the solution of the original problem by solving the subproblems iteratively. Because these methods all require repeated iterations over the subproblems, training times are long, which is an important obstacle to the wide application of SVMs. This paper systematically reviews three mainstream SVM training algorithms, chunking, decomposition and sequential minimal optimization, and points out directions for future development.
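As a concrete reminder of what the sequential minimal optimization (SMO) algorithm mentioned in this survey does at each step, here is its analytic two-variable update in Python: a simplified sketch following Platt's original description, with the pair-selection heuristics and bias update omitted.

import numpy as np

def smo_pair_update(i, j, alpha, y, K, E, C):
    # Analytic update of the two selected multipliers alpha[i], alpha[j].
    # K is the kernel matrix, E[k] = f(x_k) - y_k the current prediction errors.
    if y[i] == y[j]:
        L, H = max(0.0, alpha[i] + alpha[j] - C), min(C, alpha[i] + alpha[j])
    else:
        L, H = max(0.0, alpha[j] - alpha[i]), min(C, C + alpha[j] - alpha[i])
    eta = K[i, i] + K[j, j] - 2.0 * K[i, j]
    if eta <= 0 or L == H:
        return alpha[i], alpha[j]          # skip degenerate pairs in this sketch
    aj_new = np.clip(alpha[j] + y[j] * (E[i] - E[j]) / eta, L, H)
    ai_new = alpha[i] + y[i] * y[j] * (alpha[j] - aj_new)
    return ai_new, aj_new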

6.
Several decomposition methods have been proposed for the distributed optimal design of quasi-separable problems encountered in Multidisciplinary Design Optimization (MDO). Some of these methods are known to have numerical convergence difficulties that can be explained theoretically. We propose a new decomposition algorithm for quasi-separable MDO problems. In particular, we propose a decomposed problem formulation based on the augmented Lagrangian penalty function and the block coordinate descent algorithm. The proposed solution algorithm consists of inner and outer loops. In the outer loop, the augmented Lagrangian penalty parameters are updated. In the inner loop, our method alternates between solving an optimization master problem and solving disciplinary optimization subproblems. The coordinating master problem can be solved analytically; the disciplinary subproblems can be solved using commonly available gradient-based optimization algorithms. The augmented Lagrangian decomposition method is derived such that existing proofs can be used to show convergence of the decomposition algorithm to Karush–Kuhn–Tucker points of the original problem under mild assumptions. We investigate the numerical performance of the proposed method on two example problems.
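The outer-loop update in such augmented Lagrangian coordination schemes typically follows the method-of-multipliers rule sketched below in Python; the exact weights, tolerances and update factors of the cited paper may differ.

def alc_outer_update(v, w, q, q_prev=None, beta=2.2, gamma=0.4):
    # v: linear multiplier estimates, w: penalty weights, q: current consistency
    # constraint violations, q_prev: violations from the previous outer iteration.
    v_new = [vi + 2.0 * wi * wi * qi for vi, wi, qi in zip(v, w, q)]   # multiplier update
    if q_prev is None:
        return v_new, w
    # increase a penalty weight only where the violation did not shrink enough
    w_new = [wi if abs(qi) <= gamma * abs(qpi) else beta * wi
             for wi, qi, qpi in zip(w, q, q_prev)]
    return v_new, w_new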

7.
Decomposition methods are well-known techniques for solving quadratic programming (QP) problems arising in support vector machines (SVMs). In each iteration of a decomposition method, a small number of variables are selected and a QP problem with only the selected variables is solved. Since large matrix computations are not required, decomposition methods are applicable to large QP problems. In this paper, we give a rigorous analysis of the global convergence of general decomposition methods for SVMs. We first introduce a relaxed version of the optimality condition for the QP problems and then prove that a decomposition method reaches a solution satisfying this relaxed optimality condition within a finite number of iterations under a very mild condition on how variables are selected.

8.
A parallel randomized support vector machine (PRSVM) and a parallel randomized support vector regression (PRSVR) algorithm based on a randomized sampling technique are proposed in this paper. The proposed PRSVM and PRSVR have four major advantages over previous methods. (1) We prove that the proposed algorithms achieve an average convergence rate that is, to the best of our knowledge, the fastest bounded convergence rate among all SVM decomposition training algorithms. The fast average convergence bound is achieved by a unique priority-based sampling mechanism. (2) Unlike previous work (Provably fast training algorithm for support vector machines, 2001), the proposed algorithms work for general linearly nonseparable SVM and general nonlinear SVR problems. This improvement is achieved by modeling new LP-type problems based on Karush–Kuhn–Tucker optimality conditions. (3) The proposed algorithms are the first parallel version of randomized sampling algorithms for SVM and SVR. Both the analytical convergence bound and the numerical results in a real application show that the proposed algorithms have good scalability. (4) We present demonstrations of the algorithms based on both synthetic data and data obtained from a real-world application. Performance comparisons with SVMlight show that the proposed algorithms may be efficiently implemented.

9.
Rigorous proof of termination of the SMO algorithm for support vector machines   (Total citations: 1; self-citations: 0; citations by others: 1)
The sequential minimal optimization (SMO) algorithm is one of the simplest decomposition methods for learning support vector machines (SVMs). Keerthi and Gilbert recently studied the convergence property of the SMO algorithm and gave a proof that it always stops within a finite number of iterations. In this letter, we point out the incompleteness of their proof and give a more rigorous one.

10.
To address the low fault-recognition rate caused by the difficulty of adaptively determining the optimal parameters of fault diagnosis models for rotating components of industrial robots, a VMD-SVM fault diagnosis method with jointly optimized parameters is proposed. An improved grey wolf optimizer based on genetic mutation is introduced: the population is initialized with a Logistic chaotic map, a nonlinear factor is incorporated into the position-update formula, and a genetic mutation strategy is used to escape stagnation when the algorithm falls into a local optimum. This algorithm is then used to jointly optimize the parameters of VMD and SVM. The parameter-optimized VMD decomposes the fault signal, the improved sample entropy of the resulting intrinsic mode functions forms the feature vector, and this vector is fed into the parameter-optimized SVM to complete fault diagnosis of the robot's rotating components. Simulation and experimental results show that the method diagnoses faults accurately, reaching 100% accuracy on both noise-free and noisy signals, and achieves better performance than the EMD, LMD, DTCWT and plain VMD methods.
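A rough Python sketch of the diagnosis pipeline described above is given below. Here vmd_decompose and sample_entropy are hypothetical placeholders for a VMD implementation and the improved sample entropy (neither is specified here), while the scikit-learn calls are ordinary SVM training and cross-validation used as the fitness of a grey-wolf-style parameter search.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def extract_features(signals, vmd_decompose, sample_entropy, K, alpha):
    # Turn raw 1-D vibration signals into entropy feature vectors:
    # decompose each signal into K modes, then compute one entropy per mode.
    feats = []
    for s in signals:
        imfs = vmd_decompose(s, K=K, alpha=alpha)   # K modes, bandwidth penalty alpha
        feats.append([sample_entropy(imf) for imf in imfs])
    return np.asarray(feats)

def fitness(params, signals, labels, vmd_decompose, sample_entropy):
    # Joint VMD/SVM parameter fitness for an optimizer: params = (K, alpha, C, gamma);
    # higher cross-validated accuracy is better.
    K, alpha, C, gamma = int(params[0]), params[1], params[2], params[3]
    X = extract_features(signals, labels and vmd_decompose or vmd_decompose, sample_entropy, K, alpha) if False else extract_features(signals, vmd_decompose, sample_entropy, K, alpha)
    clf = SVC(C=C, gamma=gamma, kernel="rbf")
    return cross_val_score(clf, X, labels, cv=5).mean()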

11.
Dantzig–Wolfe decomposition can be used to solve the Lagrangian dual of a linear mixed-integer programming problem (MIP) if the dual structure of the (MIP) is exploited via Lagrangian relaxation with respect to the complicating constraints. In the so-called weighted Dantzig–Wolfe decomposition algorithm, instead of the optimal solution of the Dantzig–Wolfe master problem, a specially weighted average of the previously constructed Lagrangian multipliers and the optimal solution of the master problem is used as the Lagrangian multiplier for the next Lagrangian subproblem to be solved. A convergence proof of the weighted Dantzig–Wolfe decomposition algorithm is given, and some properties of this procedure together with computational results for the capacitated facility location problem are discussed.
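The key device of the weighted variant is simply a convex combination of multipliers, sketched below in Python; this is illustrative only, and the paper's specific rule for choosing the weight is not reproduced.

def weighted_multiplier(lam_prev, lam_master, w):
    # Next Lagrangian multiplier = (1 - w) * previous multiplier + w * master optimum,
    # with a weight w assumed to lie in (0, 1].
    return [(1.0 - w) * a + w * b for a, b in zip(lam_prev, lam_master)]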

12.
The support vector machine (SVM) has been used in a wide variety of classification problems. The original SVM uses the hinge loss function, which is non-differentiable and makes the problem difficult to solve, in particular for regularized SVMs such as those with ℓ1-regularization. This paper considers the Huberized SVM (HSVM), which uses a differentiable approximation of the hinge loss function. We first explore the use of the proximal gradient (PG) method to solve binary-class HSVM (B-HSVM) and then generalize it to multi-class HSVM (M-HSVM). Under strong convexity assumptions, we show that our algorithm converges linearly. In addition, we give a finite convergence result about the support of the solution, based on which we further accelerate the algorithm by a two-stage method. We present extensive numerical experiments on both synthetic and real datasets which demonstrate the superiority of our methods over some state-of-the-art methods for both binary- and multi-class SVMs.
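For the ℓ1-regularized binary case, one proximal-gradient step has the familiar form "gradient step on the smooth huberized loss, then soft-thresholding". The Python sketch below is a generic illustration under an assumed huberization parameter delta, not the accelerated two-stage method of the paper.

import numpy as np

def huber_hinge_grad(t, delta=0.5):
    # Derivative of the huberized hinge loss phi(t) w.r.t. the margin t = y * x'w:
    # 0 for t > 1, -(1 - t)/delta for 1 - delta < t <= 1, -1 for t <= 1 - delta.
    g = np.zeros_like(t)
    mid = (t <= 1.0) & (t > 1.0 - delta)
    g[mid] = -(1.0 - t[mid]) / delta
    g[t <= 1.0 - delta] = -1.0
    return g

def prox_grad_step(w, X, y, lam, step, delta=0.5):
    # One PG step for  min_w (1/n) * sum phi(y_i x_i'w) + lam * ||w||_1 :
    # gradient step on the smooth part, then soft-thresholding for the l1 term.
    t = y * (X @ w)
    grad = X.T @ (y * huber_hinge_grad(t, delta)) / len(y)
    z = w - step * grad
    return np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)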

13.
姚全珠  田元 《计算机工程》2008,34(15):223-225
The parameter settings of a support vector machine have a non-negligible effect on its classification accuracy, and the selection of SVM parameters can be viewed as a combinatorial optimization of the parameters. The immune algorithm is an effective stochastic global optimization technique, with advantages such as resistance to local optima, high solution accuracy and fast convergence. This paper applies an artificial immune algorithm to SVM model selection; the algorithm mainly includes clonal selection, hypermutation and receptor editing operations. Experiments show that the algorithm can effectively improve SVM classification accuracy.
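A minimal Python sketch of the clonal-selection idea applied to SVM model selection is shown below; the search ranges, operators and affinity measure are illustrative assumptions and do not reproduce the paper's exact algorithm.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def clonal_selection_svm(X, y, pop=20, gens=30, clones=5, seed=0):
    # Search over antibodies [log2(C), log2(gamma)]; affinity = cross-validated accuracy.
    rng = np.random.default_rng(seed)
    antibodies = rng.uniform(-5, 10, size=(pop, 2))
    affinity = lambda a: cross_val_score(
        SVC(C=2.0 ** a[0], gamma=2.0 ** a[1]), X, y, cv=3).mean()
    for _ in range(gens):
        scores = np.array([affinity(a) for a in antibodies])
        elite = antibodies[np.argsort(-scores)[: pop // 2]]       # clonal selection
        mutants = np.repeat(elite, clones, axis=0)
        mutants += rng.normal(scale=0.5, size=mutants.shape)      # hypermutation
        fresh = rng.uniform(-5, 10, size=(pop // 4, 2))           # receptor editing
        candidates = np.vstack([antibodies, mutants, fresh])
        scores = np.array([affinity(a) for a in candidates])
        antibodies = candidates[np.argsort(-scores)[:pop]]
    best = antibodies[0]
    return 2.0 ** best[0], 2.0 ** best[1]                         # (C, gamma)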

14.
Distance retrieval and object queries on undirected weighted graphs are important operations on such graphs and a key step in solving practical problems. This paper proposes a distance-signature-based approach to distance retrieval and querying: through distance grading, signature encoding and compression, it achieves efficient retrieval and queries while reducing storage space. The modeling process and the handling of KNN queries are described, and experiments demonstrate the effectiveness of the method.
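The distance-grading step can be illustrated in a single line of Python: exact graph distances from one object to a set of reference objects are bucketized into level indices that form a compact, compressible signature. The level boundaries are an assumed input, and this is only a sketch of the general idea rather than the paper's exact encoding.

import numpy as np

def distance_signature(dists, levels):
    # levels: increasing list of distance-level boundaries.
    # Returns one small integer (level index) per reference object.
    return np.digitize(dists, levels)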

15.
Matrices, or more generally, multi-way arrays (tensors) are common forms of data encountered in a wide range of real applications. How to classify this kind of data is an important research topic for both pattern recognition and machine learning. In this paper, by analyzing the relationship between two well-known traditional classification approaches, i.e., SVM and STM, a novel tensor-based method, multiple rank multi-linear SVM (MRMLSVM), is proposed. Different from traditional vector-based and tensor-based methods, multiple-rank left and right projecting vectors are employed to construct the decision boundary and establish the margin function. We reveal that the rank of the transformation can be regarded as a tradeoff parameter to balance the capacity of learning and generalization. We also propose an effective approach to solve the resulting non-convex optimization problem. The convergence behavior, initialization, computational complexity and parameter determination problems are analyzed. Compared with vector-based classification methods, MRMLSVM achieves higher accuracy and has lower computational complexity. Compared with traditional supervised tensor-based methods, MRMLSVM performs better for matrix data classification. Promising experimental results on various kinds of data sets are provided to show the effectiveness of our method.
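The bilinear decision function underlying this kind of multiple-rank model can be written, for a single matrix sample X, as f(X) = sum_k u_k' X v_k + b. A direct Python rendering is below; learning U, V and b is the non-convex part handled by the alternating scheme and is not shown.

import numpy as np

def mrml_decision(Xmat, U, V, b):
    # Xmat: (m, n) sample; U: (m, r) left projections; V: (n, r) right projections.
    return sum(U[:, k] @ Xmat @ V[:, k] for k in range(U.shape[1])) + b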

16.
A. Zinn 《Computing》1989,41(3):267-274
The proposed method decouples the transmission problem into two boundary value problems, which can be solved separately by well-known procedures. A convergence proof is given with the help of the integral equation method and convergence results for projection methods.

17.
In this paper, a new weighted Lagrangian support vector machine approach for the imbalanced data classification problem is proposed. The weight parameters are embedded in the Lagrangian SVM formulation. The training method for the weighted Lagrangian SVM is presented and its convergence is proven. The weighted Lagrangian SVM classifier is tested and compared with some other SVMs using synthetic and real data to show its effectiveness and feasibility.
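The underlying idea, class-dependent penalties on the error term, can be tried off the shelf as in the short Python snippet below; note that this uses a standard soft-margin SVC rather than the Lagrangian SVM formulation of the paper, and the weight values are arbitrary examples.

from sklearn.svm import SVC

# Penalize errors on the minority class more heavily via per-class penalty weights:
clf = SVC(kernel="rbf", class_weight={0: 1.0, 1: 10.0})   # class 1 assumed minority
# or set the weights inversely proportional to class frequencies:
clf_balanced = SVC(kernel="rbf", class_weight="balanced")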

18.
We consider a Galerkin finite element approximation of the Stokes-Darcy problem, which models the coupling between surface and groundwater flows. We then propose an iterative subdomain method for its solution, inspired by domain decomposition theory. The convergence analysis that we develop is based on the properties of the discrete Steklov-Poincaré operators associated with the given coupled problem. An optimal preconditioner for Krylov methods is proposed and analyzed.

19.
E. Zampieri 《Calcolo》1989,26(1):61-91
In this paper we consider the numerical approximation of elliptic problems by spectral methods in domains subdivided into substructures. We review an iterative procedure with interface relaxation, reducing the given differential problem to a sequence of Dirichlet and mixed Neumann-Dirichlet problems on each subdomain. The iterative procedure is applied to both tau and collocation spectral approximations. Two optimal strategies for the automatic selection of the relaxation parameter are indicated. We present several numerical experiments showing the convergence properties of the iterative scheme with respect to the decomposition. A multilevel technique for domain decomposition methods is proposed.

20.
In this paper, we are interested in texture modeling with functional analysis spaces. We focus on color image processing, and in particular color image decomposition. The problem of image decomposition consists in splitting an original image f into two components u and v: u should contain the geometric information of the original image, while v should be made of the oscillating patterns of f, such as textures. We propose a scheme based on a projected gradient algorithm to compute the solution of various decomposition models for color images or vector-valued images. We provide a direct convergence proof of the scheme, and we give some analysis of color texture modeling.
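The projected-gradient scheme referred to above has the generic form x ← P(x − τ∇F(x)). A bare-bones Python version is given below, with the model-specific gradient and projection (e.g. onto the texture-norm ball) left as assumed callables.

def projected_gradient(grad, project, x0, step, iters=200):
    # Generic projected-gradient iteration; grad(x) returns the gradient of the
    # smooth functional, project(x) is the projection onto the constraint set.
    x = x0
    for _ in range(iters):
        x = project(x - step * grad(x))
    return x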
