Similar Literature
1.
The well-known sequential minimal optimization (SMO) algorithm is the most commonly used algorithm for the numerical solution of support vector learning problems. At each iteration, the traditional SMO algorithm, called the 2PSMO algorithm in this paper, jointly optimizes only two chosen parameters. The two parameters are selected either heuristically or randomly, while the optimization with respect to them is performed analytically. This paper naturally generalizes the 2PSMO algorithm to a three-parameter sequential minimal optimization (3PSMO) algorithm, which jointly optimizes three chosen parameters at each iteration. As in 2PSMO, the three parameters are selected either heuristically or randomly, and the optimization with respect to them is performed analytically. The main difference between the two algorithms is thus that each iteration of 2PSMO optimizes over a line segment, whereas each iteration of 3PSMO optimizes over a two-dimensional region consisting of infinitely many line segments, so the maximum can be attained more efficiently. The main updating formulae of both algorithms for each support vector learning problem are presented. To assess the efficiency of the 3PSMO algorithm against the 2PSMO algorithm, 14 benchmark datasets, 7 for classification and 7 for regression, are tested and the numerical performances compared. Simulation results demonstrate that 3PSMO significantly outperforms 2PSMO in both execution time and computational complexity.
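To make the contrast concrete, here is a minimal sketch of the analytic two-parameter step that 2PSMO performs at each iteration (standard Platt-style SMO for classification); the variable names and surrounding bookkeeping are our own simplification, and the paper's 3PSMO three-parameter formulae are not reproduced:

```python
import numpy as np

def smo_pair_step(i, j, alpha, y, K, C, E):
    """One analytic 2-parameter SMO update on (alpha[i], alpha[j]).

    alpha: dual variables; y: labels in {-1, +1}; K: kernel matrix;
    C: box constraint; E: current prediction errors. Returns True if updated
    (the caller must then refresh E and the bias term).
    """
    if i == j:
        return False
    # Feasible segment [L, H] for alpha[j], keeping sum(alpha * y) constant.
    if y[i] != y[j]:
        L, H = max(0.0, alpha[j] - alpha[i]), min(C, C + alpha[j] - alpha[i])
    else:
        L, H = max(0.0, alpha[i] + alpha[j] - C), min(C, alpha[i] + alpha[j])
    if L >= H:
        return False
    eta = K[i, i] + K[j, j] - 2.0 * K[i, j]   # curvature along the segment
    if eta <= 0:
        return False                           # skip non-positive curvature
    a_j = alpha[j] + y[j] * (E[i] - E[j]) / eta
    a_j = np.clip(a_j, L, H)                   # project back onto the box
    if abs(a_j - alpha[j]) < 1e-12:
        return False
    # alpha[i] moves so that the equality constraint stays satisfied.
    alpha[i] += y[i] * y[j] * (alpha[j] - a_j)
    alpha[j] = a_j
    return True
```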

2.
Application of quantum-behaved particle swarm optimization to training support vector machines
山艳  须文波  孙俊 《计算机应用》2006,26(11):2645-2647,2677
Training a support vector machine essentially amounts to solving a quadratic programming problem, which becomes very difficult for large-scale training sets. Intelligent search techniques such as genetic algorithms and particle swarm optimization can provide approximate solutions at a modest computational cost. The quantum-behaved particle swarm optimization (QPSO) algorithm is an evolutionary algorithm, built on the classical particle swarm method, with improved convergence and stability. Applying the simple and fast-converging QPSO algorithm to SVM training, that is, to the solution of the underlying quadratic program, opens a new route to solving large-scale quadratic programming problems.
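As a rough illustration of the optimizer involved, the following is a minimal QPSO sketch under our own assumptions (a sphere function stands in for the objective; in the paper the fitness would be the SVM dual objective, and the coupling details are not reproduced):

```python
import numpy as np

def qpso_minimize(f, dim, n_particles=20, iters=200, beta=0.75, seed=0):
    """Minimal QPSO sketch: quantum-behaved position update around
    attractors built from the personal and global bests."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pval)].copy()
    for _ in range(iters):
        mbest = pbest.mean(axis=0)                # mean of personal bests
        phi = rng.random((n_particles, dim))
        p = phi * pbest + (1 - phi) * g           # per-dimension attractor
        u = rng.random((n_particles, dim))
        sign = np.where(rng.random((n_particles, dim)) < 0.5, -1.0, 1.0)
        x = p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)].copy()
    return g, pval.min()

# Toy usage: minimize a sphere function standing in for the SVM dual objective.
g, best = qpso_minimize(lambda z: float(np.sum(z * z)), dim=5)
```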

3.
Incremental training of support vector machines
We propose a new algorithm for the incremental training of support vector machines (SVMs) that is suitable for problems of sequentially arriving data and fast constraint parameter variation. Our method involves using a "warm-start" algorithm for the training of SVMs, which allows us to take advantage of the natural incremental properties of the standard active set approach to linearly constrained optimization problems. Incremental training involves quickly retraining a support vector machine after adding a small number of additional training vectors to the training set of an existing (trained) support vector machine. Similarly, the problem of fast constraint parameter variation involves quickly retraining an existing support vector machine using the same training set but different constraint parameters. In both cases, we demonstrate the computational superiority of incremental training over the usual batch retraining method.
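A sketch of the warm-start idea, under the stated assumption of a dual solver that accepts an initial point; `solve_svm_dual` is hypothetical (libsvm-style tools do not expose such an interface, but active-set solvers can):

```python
import numpy as np

def incremental_retrain(solve_svm_dual, K_all, y_all, alpha_old, C):
    """Warm-start sketch: reuse the previous dual solution when new points
    arrive. `solve_svm_dual(K, y, C, alpha0)` is a hypothetical active-set
    solver that accepts an initial dual vector."""
    alpha0 = np.zeros(len(y_all))
    alpha0[:len(alpha_old)] = alpha_old   # old points keep their multipliers
    # New points enter at alpha = 0, i.e. provisionally non-support vectors;
    # the active-set method then only has to repair the local violations.
    return solve_svm_dual(K_all, y_all, C, alpha0)
```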

4.
Selecting training points for one-class support vector machines
This paper proposes a training point selection method for one-class support vector machines. It exploits a property of a trained one-class SVM: only points residing on the exterior of the data distribution are used as support vectors. The proposed training set reduction method therefore selects so-called extreme points that sit on the boundary of the data distribution, identified through local geometry and k-nearest neighbours. Experimental results demonstrate that the method can reduce the training set considerably while the resulting model maintains generalization capability at the level of a model trained on the full training set, uses fewer support vectors, and trains faster.
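One common kNN realization of such an extreme-point selector is sketched below; the paper's exact local-geometry criterion may differ. Interior points have neighbours on all sides, so their mean neighbour direction nearly cancels, while boundary points score high:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import OneClassSVM

def select_extreme_points(X, k=10, keep=0.3):
    """Score each point by how one-sided its k nearest neighbours are;
    keep the highest-scoring fraction as boundary candidates."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)                # idx[:, 0] is the point itself
    scores = np.empty(len(X))
    for i, neigh in enumerate(idx[:, 1:]):
        d = X[neigh] - X[i]                  # vectors to the k neighbours
        d /= np.linalg.norm(d, axis=1, keepdims=True) + 1e-12
        scores[i] = np.linalg.norm(d.mean(axis=0))
    n_keep = max(1, int(keep * len(X)))
    return np.argsort(-scores)[:n_keep]      # most "one-sided" points first

X = np.random.default_rng(0).normal(size=(1000, 2))
boundary = select_extreme_points(X)
model = OneClassSVM(nu=0.1, gamma="scale").fit(X[boundary])  # reduced set
```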

5.
The Journal of Supercomputing - Support vector machines face problems with training time in the presence of large data sets due to the need for high memory and high computational...

6.
Multi-layer dynamic adaptive parameter optimization for support vector machines
This paper first proposes a parameter optimization method for least squares support vector machines (LS-SVMs) based on a multi-layer dynamic adaptive search technique, and then applies LS-SVMs to the identification of typical nonlinear control systems. The identification results show that LS-SVMs are suitable for nonlinear control system identification: the multi-layer dynamic adaptive search determines the optimal SVM parameters and thereby yields accurate identification results.
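A coarse-to-fine sketch of what a multi-layer adaptive parameter search can look like; scikit-learn's KernelRidge stands in for the LS-SVM (the two models are closely related), and the zoom-by-halving schedule is our assumption:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

def multilayer_search(X, y, levels=3, grid=5):
    """Each layer evaluates a small log-grid of (gamma, alpha), then
    re-centres a shrinking grid on the best pair found so far."""
    c_g, c_a, span = 0.0, 0.0, 4.0          # log10 centres and half-width
    best = (None, -np.inf)
    for _ in range(levels):
        for g in np.logspace(c_g - span, c_g + span, grid):
            for a in np.logspace(c_a - span, c_a + span, grid):
                m = KernelRidge(kernel="rbf", gamma=g, alpha=a)
                s = cross_val_score(m, X, y, cv=3).mean()
                if s > best[1]:
                    best = ((g, a), s)
        (g, a) = best[0]
        c_g, c_a, span = np.log10(g), np.log10(a), span / 2   # zoom in
    return best
```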

7.
Constrained efficient global optimization with support vector machines
This paper presents a methodology for constrained efficient global optimization (EGO) using support vector machines (SVMs). While the objective function is approximated using Kriging, as in the original EGO formulation, the boundary of the feasible domain is approximated explicitly as a function of the design variables using an SVM. Because SVM is a classification approach and does not involve response approximations, this alleviates issues due to discontinuous or binary responses. More importantly, several constraints, even correlated ones, can be represented using one unique SVM, thus considerably simplifying constrained problems. In order to account for constraints, this paper introduces an SVM-based "probability of feasibility" using a new probabilistic SVM model. The proposed optimization scheme has two stages. In the first stage, a global search for the optimal solution is performed based on the "expected improvement" of the objective function and the probability of feasibility. In the second stage, the SVM boundary is locally refined using an adaptive sampling scheme. An unconstrained and a constrained formulation of the optimization problem are presented and compared. Several analytical examples are used to test the formulations; in particular, a problem with 99 constraints and an aeroelasticity problem with binary output are presented. Overall, the results indicate that the constrained formulation is more robust and efficient.
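The two ingredients of the acquisition function can be sketched directly; a Platt-calibrated SVC stands in for the paper's probabilistic SVM, and the Kriging model is an off-the-shelf Gaussian process:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.svm import SVC

def constrained_ei(X_cand, gp, psvm, f_min):
    """Acquisition sketch: expected improvement from the Kriging model
    times an SVM-based probability of feasibility. f_min is the best
    feasible objective value observed so far."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)
    z = (f_min - mu) / sigma
    ei = (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)   # standard EI
    # Column of predict_proba corresponding to the "feasible" label +1.
    p_feas = psvm.predict_proba(X_cand)[:, list(psvm.classes_).index(1)]
    return ei * p_feas

# Fitting (objective values y, feasibility labels c in {-1, +1}):
# gp = GaussianProcessRegressor().fit(X, y)
# psvm = SVC(probability=True).fit(X, c)
```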

8.
李艳  杨晓伟 《计算机应用》2011,31(12):3297-3301
High computational complexity limits the use of bilateral-weighted fuzzy support vector machines in practical classification problems. To reduce this complexity, the sequential minimal optimization (SMO) algorithm is applied to the model: the full quadratic program is first decomposed into a series of quadratic programming subproblems of size two, which are then solved one by one. To evaluate the performance of the SMO algorithm, numerical experiments were conducted on three real-world and two synthetic datasets. The results show that, compared with the traditional interior-point algorithm, SMO markedly reduces the computational complexity of the model without any loss of test accuracy, making its practical application feasible.
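One way to realize bilateral weighting with an off-the-shelf SMO solver is sketched below: every point enters both classes, weighted by its two membership degrees (`m_pos` and `m_neg` are assumed inputs); scikit-learn's SVC then solves the weighted QP with libsvm's SMO-type decomposition:

```python
import numpy as np
from sklearn.svm import SVC

def fit_bilateral_fuzzy_svm(X, m_pos, m_neg, C=1.0, **svc_kw):
    """Sketch of bilateral weighting: each point appears once as class +1
    with membership m_pos and once as class -1 with membership m_neg; the
    memberships scale the per-sample penalty. This is our simplification,
    not the paper's exact formulation."""
    X2 = np.vstack([X, X])
    y2 = np.hstack([np.ones(len(X)), -np.ones(len(X))])
    w2 = np.hstack([m_pos, m_neg])          # membership-weighted penalties
    return SVC(C=C, **svc_kw).fit(X2, y2, sample_weight=w2)
```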

9.
We present a new method for the incremental training of multiclass support vector machines that can simultaneously modify each class's separating hyperplane and provide computational efficiency for training tasks where the training data collection is sequentially enriched and dynamic adaptation of the classifier is required over time. An auxiliary function is designed that incorporates some desired characteristics in order to provide an upper bound for the objective function summarizing the multiclass classification task. A novel set of multiplicative update rules is proposed, which is independent of any kind of learning rate parameter, provides computational efficiency compared to the conventional batch training approach, and is easy to implement. Convergence to the global minimum is guaranteed, since the optimization problem is convex and the global minimizer for the enriched dataset is found using a warm-start algorithm. Experimental evidence on various data collections verified that our method is faster than retraining the classifier from scratch, while the achieved classification accuracy is maintained at the same level.
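As a concrete instance of learning-rate-free multiplicative updates for a nonnegative quadratic program, here is a Sha-Saul-Lee-style rule; the paper's own multiclass update rules differ in detail:

```python
import numpy as np

def multiplicative_qp(A, b, iters=500):
    """Sketch of multiplicative updates for the nonnegative QP
    min 0.5 v'Av + b'v subject to v >= 0. Each multiplicative factor is
    nonnegative, so the iterate never leaves the feasible orthant and
    no learning rate is needed."""
    Ap = np.maximum(A, 0.0)                 # positive part of A
    Am = np.maximum(-A, 0.0)                # negative part of A
    v = np.ones(len(b))
    for _ in range(iters):
        a = Ap @ v
        c = Am @ v
        v *= (-b + np.sqrt(b * b + 4.0 * a * c)) / (2.0 * a + 1e-12)
    return v
```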

10.
Support vector machines (SVMs) are theoretically well-justified machine learning techniques that have also been successfully applied to many real-world domains. Optimization methodologies play a central role in finding solutions of SVMs. This paper reviews representative and state-of-the-art techniques for optimizing the training of SVMs, especially SVMs for classification. The objective is to give readers an overview of the basic elements of and recent advances in training SVMs, and to enable them to develop and implement new optimization strategies in their own SVM-related research.

11.
Feature subset selection and training parameter optimization have long been two important aspects of SVM research: choosing appropriate features and reasonable training parameters improves the performance of an SVM classifier, yet earlier work treated the two problems separately. With the application of natural computing techniques such as genetic optimization in artificial intelligence, studies on the simultaneous optimization of features and parameters have emerged. This work uses an immune genetic algorithm (IGA) to optimize feature selection and SVM parameters simultaneously, yielding an IGA-SVM algorithm. Experiments show that the method finds a suitable feature subset and SVM parameters and achieves good classification results, demonstrating its effectiveness.
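A stripped-down evolutionary sketch of the simultaneous encoding (feature mask plus log-scale C and gamma in one chromosome); the immune operators of the IGA, such as vaccination, are not reproduced:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def fitness(chrom, X, y):
    """Chromosome = [binary feature mask | log2(C) | log2(gamma)];
    fitness is the cross-validated accuracy of the induced SVM."""
    n_feat = X.shape[1]
    mask = chrom[:n_feat] > 0.5
    if not mask.any():
        return 0.0
    C, gamma = 2.0 ** chrom[n_feat], 2.0 ** chrom[n_feat + 1]
    return cross_val_score(SVC(C=C, gamma=gamma), X[:, mask], y, cv=3).mean()

def evolve(X, y, pop=20, gens=30, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[1] + 2
    # Mask genes start in [0, 1); log2(C) in [-5, 5); log2(gamma) in [-15, -5).
    P = rng.random((pop, n))
    P[:, -2] = P[:, -2] * 10 - 5
    P[:, -1] = P[:, -1] * 10 - 15
    for _ in range(gens):
        scores = np.array([fitness(c, X, y) for c in P])
        elite = P[np.argsort(-scores)[: pop // 2]]          # truncation selection
        P = np.vstack([elite, elite + rng.normal(0, 0.3, elite.shape)])  # mutate
    scores = np.array([fitness(c, X, y) for c in P])
    return P[np.argmax(scores)]
```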

12.
In this paper, we present an improved incremental training algorithm for support vector machines (SVMs). Instead of selecting training samples randomly, we divide them into groups and apply the k-means clustering algorithm to collect the initial set of training samples. During active query, we assign a weight to each sample according to its confidence factor and its distance to the separating hyperplane. The confidence factor is calculated from the error upper bound of the SVM to indicate the closeness of the current hyperplane to the optimal hyperplane. A criterion is developed to eliminate non-informative training samples incrementally. Experimental results show our algorithm works successfully on artificial and real data, and is superior to existing methods.
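The two selection ingredients can be sketched as follows; the confidence factor is passed in as an assumed scalar rather than computed from the error bound as in the paper:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def kmeans_seed_indices(X, n_clusters=10, seed=0):
    """Pick the sample nearest each k-means centroid as the initial
    training set, instead of sampling at random."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
    idx = [int(np.argmin(np.linalg.norm(X - c, axis=1)))
           for c in km.cluster_centers_]
    return np.array(idx)

def query_weights(clf, X_pool, confidence):
    """Weight pool samples by closeness to the current hyperplane,
    scaled by the (assumed) confidence factor."""
    margin = np.abs(clf.decision_function(X_pool))
    return confidence / (1.0 + margin)      # nearer the boundary => heavier

# Usage sketch:
# seed = kmeans_seed_indices(X); clf = SVC(kernel="rbf").fit(X[seed], y[seed])
# w = query_weights(clf, X_pool, confidence=0.9)   # query argmax(w) next
```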

13.
Support vector machines (SVMs) are one of the most popular classification tools and show the most potential to address under-sampled noisy data (a large number of features and a relatively small number of samples). However, the computational cost is too expensive, even for modern-scale samples, and the performance largely depends on the proper setting of parameters. As the data scale increases, improving speed becomes increasingly challenging; as the dimension (feature number) increases while the sample size remains small, avoiding overfitting becomes a significant challenge. In this study, we propose a two-phase sequential minimal optimization (TSMO) to largely reduce the training cost for large-scale data (tested with 3186–70,000-sample datasets) and a two-phased-in differential-learning particle swarm optimization (tDPSO) to ensure accuracy for under-sampled data (tested with 2000–24,481-feature datasets). Because the purpose of training SVMs is to identify the support vectors that define a hyperplane, TSMO quickly selects support vector candidates from the entire dataset and then identifies the support vectors among those candidates, largely reducing the computational burden (a 29.4%–65.3% reduction rate). The proposed tDPSO uses topology variation and differential learning to solve PSO's premature convergence issue. Population diversity is ensured through dynamic topology until a ring connection is achieved (topology-variation phases); further, particles initiate chemo-type simulated-annealing operations, and the global-best particle takes a two-turn diversion in response to stagnation (event-induced phases). The proposed tDPSO-embedded SVMs were tested with several under-sampled noisy cancer datasets and showed superior performance over various methods, even methods with feature selection for data preprocessing.
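A two-phase sketch in the spirit of TSMO, assuming a simple margin-band rule for picking support-vector candidates (the paper's in-SMO selection is more refined):

```python
import numpy as np
from sklearn.svm import SVC

def two_phase_train(X, y, frac=0.2, margin=1.5, seed=0, **svc_kw):
    """Phase 1 trains on a random subset to locate candidates near the
    boundary; phase 2 trains the final model on those candidates only."""
    rng = np.random.default_rng(seed)
    sub = rng.choice(len(X), size=max(2, int(frac * len(X))), replace=False)
    rough = SVC(**svc_kw).fit(X[sub], y[sub])            # phase 1: rough model
    # Keep every point within a band around the rough decision boundary.
    cand = np.abs(rough.decision_function(X)) <= margin
    if np.unique(y[cand]).size < 2:
        cand[:] = True                                   # fall back to full set
    return SVC(**svc_kw).fit(X[cand], y[cand])           # phase 2: final model
```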

14.
Optimizing the training speed of support vector machines (SVMs) is one of the most important topics in SVM research. In this paper, we propose an algorithm in which the size of the working set is reduced to one in order to obtain a faster training speed. Instead of complex heuristic criteria, a random order is adopted for selecting elements into the working set. The proposed algorithm shows better performance in linear SVM training, especially in the large-scale scenario.
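A working set of size one with random-order selection is essentially dual coordinate descent; here is a minimal sketch for the L1-loss linear SVM dual:

```python
import numpy as np

def dcd_linear_svm(X, y, C=1.0, epochs=20, seed=0):
    """Working-set-of-size-one sketch: dual coordinate descent for the
    L1-loss linear SVM, visiting coordinates in random order with no
    heuristic selection."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    Qii = np.einsum("ij,ij->i", X, X)        # diagonal of the Gram matrix
    for _ in range(epochs):
        for i in rng.permutation(n):
            if Qii[i] == 0:
                continue
            G = y[i] * (w @ X[i]) - 1.0      # gradient of the i-th coordinate
            a_new = min(max(alpha[i] - G / Qii[i], 0.0), C)  # box projection
            w += (a_new - alpha[i]) * y[i] * X[i]            # rank-1 update
            alpha[i] = a_new
    return w, alpha
```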

15.
Support vector machines (SVMs) have good accuracy and generalization properties, but they tend to be slow to classify new examples. In contrast to previous work that aims to reduce the time required to fully classify all examples, we present a method that provides the best-possible classification given a specific amount of computational time. We construct two SVMs: a "full" SVM that is optimized for high accuracy, and an approximation SVM (via reduced-set or subset methods) that provides extremely fast, but less accurate, classifications. We apply the approximate SVM to the full data set, estimate the posterior probability that each classification is correct, and then use the full SVM to reclassify items in order of their likelihood of misclassification. Our experimental results show that this method rapidly achieves high accuracy, by selectively devoting resources (reclassification) only where needed. It also provides the first such progressive SVM solution that can be applied to multiclass problems.
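A minimal anytime-classification sketch; |decision value| serves as a crude risk proxy in place of the paper's calibrated posterior probabilities, and a binary task is assumed:

```python
import numpy as np
from sklearn.svm import SVC, LinearSVC

def progressive_classify(X_test, fast, full, budget):
    """Label everything with the fast model, then, in order of estimated
    misclassification risk, relabel with the slow accurate model until
    the reclassification budget runs out."""
    labels = fast.predict(X_test)
    risk = -np.abs(fast.decision_function(X_test))   # small margin => risky
    for i in np.argsort(risk)[:budget]:              # most at-risk items first
        labels[i] = full.predict(X_test[i : i + 1])[0]
    return labels

# fast = LinearSVC().fit(X_train, y_train)           # cheap approximation
# full = SVC(kernel="rbf").fit(X_train, y_train)     # accurate but slow
```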

16.
Successive overrelaxation for support vector machines
Successive overrelaxation (SOR) for symmetric linear complementarity problems and quadratic programs is used to train a support vector machine (SVM) for discriminating between the elements of two massive datasets, each with millions of points. Because SOR handles one point at a time, similar to Platt's sequential minimal optimization (SMO) algorithm (1999) which handles two constraints at a time and Joachims' SVMlight (1998) which handles a small number of points at a time, SOR can process very large datasets that need not reside in memory. The algorithm converges linearly to a solution. Encouraging numerical results are presented on datasets with up to 10,000,000 points. Such massive discrimination problems cannot be processed by conventional linear or quadratic programming methods, and to our knowledge have not been solved by other methods. On smaller problems, SOR was faster than SVMlight and comparable or faster than SMO.
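Projected SOR on the bound-constrained SVM dual is short enough to sketch; following the Mangasarian-Musicant formulation, the bias term is assumed folded into Q so that only box constraints remain:

```python
import numpy as np

def sor_svm(Q, C, omega=1.3, sweeps=50):
    """Projected SOR sketch for the bound-constrained SVM dual
    min 0.5 a'Qa - e'a subject to 0 <= a <= C. One point per update,
    so Q never needs to be held in memory all at once."""
    n = Q.shape[0]
    a = np.zeros(n)
    for _ in range(sweeps):
        for i in range(n):
            g = Q[i] @ a - 1.0                           # gradient coordinate
            a[i] = min(max(a[i] - omega * g / Q[i, i], 0.0), C)
    return a
```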

17.
This paper presents the kernel regularization information criterion (KRIC), a new criterion for tuning regularization parameters in kernel logistic regression (KLR) and support vector machines (SVMs). The main idea of the KRIC is based on the regularization information criterion (RIC). We derive an eigenvalue equation to calculate the KRIC and solve the problem. The computational cost of parameter tuning by the KRIC is reduced drastically by using the Nyström approximation. The test error rate of SVMs or KLR with the regularization parameter tuned by the KRIC is comparable with that obtained by cross validation or evaluation of the evidence, while the computational cost of the KRIC is significantly lower than that of the other criteria.
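The computational trick is easy to illustrate: a rank-m Nyström feature map makes repeated evaluation of a tuning criterion cheap. Plain cross-validated accuracy and logistic regression stand in below for the KRIC and KLR, whose eigenvalue computation the abstract does not detail:

```python
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def tune_reg_param(X, y, lambdas, m=100):
    """Evaluate candidate regularization strengths on explicit low-rank
    Nystroem features instead of the full kernel matrix."""
    feat = Nystroem(kernel="rbf", gamma=0.1, n_components=m).fit(X)
    Z = feat.transform(X)                      # rank-m feature map
    scores = [cross_val_score(LogisticRegression(C=1.0 / lam, max_iter=1000),
                              Z, y, cv=3).mean() for lam in lambdas]
    return lambdas[int(np.argmax(scores))]
```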

18.
An improved support vector machine classification algorithm
After studying the standard SVM classification algorithm, this paper proposes a fast SVM classification method. By solving two related SVM problems, the method finds two nonparallel planes, each of which lies close to the sample points of its own class and far from those of the other class; an optimal plane separating the two classes is then obtained from these two planes. For the nonlinear case, a fast kernel-based classification method is introduced. The algorithm greatly improves classification speed, and experiments on real datasets demonstrate its effectiveness.
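A least-squares, linear stand-in for the nonparallel-plane idea is sketched below (our own derivation; the paper's exact QPs and its fast kernel extension are not reproduced): each plane hugs its own class while pushing the other class toward unit distance:

```python
import numpy as np

def ls_nonparallel_planes(A, B, c1=1.0, c2=1.0, eps=1e-8):
    """Plane z = (w, b) for class A minimizes 0.5||Ez||^2 + (c1/2)||Fz + e||^2
    with E = [A, e], F = [B, e]; setting the gradient to zero gives
    z1 = -c1 (E'E + c1 F'F)^{-1} F'e, and symmetrically for class B."""
    E = np.hstack([A, np.ones((len(A), 1))])
    F = np.hstack([B, np.ones((len(B), 1))])
    I = eps * np.eye(E.shape[1])                 # small ridge for stability
    z1 = -c1 * np.linalg.solve(E.T @ E + c1 * F.T @ F + I,
                               F.T @ np.ones(len(B)))
    z2 = c2 * np.linalg.solve(F.T @ F + c2 * E.T @ E + I,
                              E.T @ np.ones(len(A)))
    return z1, z2

def predict(x, z1, z2):
    """Assign x to the class whose plane is nearer."""
    xe = np.append(x, 1.0)
    d1 = abs(xe @ z1) / (np.linalg.norm(z1[:-1]) + 1e-12)
    d2 = abs(xe @ z2) / (np.linalg.norm(z2[:-1]) + 1e-12)
    return 1 if d1 < d2 else -1
```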

19.
In this paper, we propose to reinforce the Self-Training strategy in semi-supervised mode by using a generative classifier that may help to train the main discriminative classifier to label the unlabeled data. We call this semi-supervised strategy Help-Training and apply it to training kernel machine classifiers such as support vector machines (SVMs) and least squares support vector machines. In addition, we propose a model selection strategy for semi-supervised training. Experimental results on both artificial and real problems demonstrate that Help-Training significantly outperforms standard Self-Training. Moreover, compared to other semi-supervised methods developed for SVMs, our Help-Training strategy often gives the lowest error rate.
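A minimal Help-Training loop, with Gaussian naive Bayes standing in for the generative helper (the paper's choice of generative model and its model selection strategy are not reproduced):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

def help_training(X_lab, y_lab, X_unlab, rounds=10, batch=20):
    """The generative helper labels the unlabeled points it is most
    confident about; those are added to the training set of the
    discriminative SVM, which is fit once at the end."""
    X_l, y_l, X_u = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    for _ in range(rounds):
        if len(X_u) == 0:
            break
        helper = GaussianNB().fit(X_l, y_l)
        proba = helper.predict_proba(X_u)
        pick = np.argsort(-proba.max(axis=1))[:batch]   # most confident first
        X_l = np.vstack([X_l, X_u[pick]])
        y_l = np.hstack([y_l, helper.classes_[proba[pick].argmax(axis=1)]])
        X_u = np.delete(X_u, pick, axis=0)
    return SVC(kernel="rbf").fit(X_l, y_l)              # final discriminative model
```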
