Similar Literature
20 similar documents found.
1.
In this paper we formulate a least squares version of the recently proposed projection twin support vector machine (PTSVM) for binary classification. This formulation leads to an extremely simple and fast algorithm, called the least squares projection twin support vector machine (LSPTSVM), for generating binary classifiers. Unlike PTSVM, we add a regularization term, ensuring that the optimization problems in LSPTSVM are positive definite and yield better generalization ability. Instead of the two dual problems usually solved, we solve two modified primal problems via two systems of linear equations, whereas PTSVM needs to solve two quadratic programming problems along with two systems of linear equations. Our experiments on publicly available datasets indicate that LSPTSVM achieves classification accuracy comparable to that of PTSVM but with remarkably less computational time.

2.
In this paper, we propose a novel least squares twin parametric-margin support vector machine (TPMSVM) for binary classification, called LSTPMSVM for short. LSTPMSVM solves two modified primal problems of TPMSVM instead of the two dual problems usually solved. The solution of the two modified primal problems reduces to solving just two systems of linear equations, as opposed to solving two quadratic programming problems along with two systems of linear equations in TPMSVM, which leads to an extremely simple and fast algorithm. Classification with a nonlinear kernel, using the reduced kernel technique, also reduces to systems of linear equations; therefore LSTPMSVM can handle large datasets accurately without any external optimizers. Further, a particle swarm optimization (PSO) algorithm is introduced for parameter selection, as sketched below. Our experiments on synthetic as well as several benchmark datasets indicate that LSTPMSVM has classification accuracy comparable to that of TPMSVM but with remarkably less computational time.
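The abstract does not detail the PSO step; the following is a minimal, generic PSO sketch for tuning penalty parameters over a box of bounds. The objective (e.g., a cross-validation error function) and the hyperparameter bounds are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def pso_select(objective, bounds, n_particles=20, n_iters=50,
               w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer over a box; `objective` maps a
    parameter vector (e.g., the two LSTPMSVM penalties) to a validation
    error. Illustrative sketch, not the paper's exact PSO variant."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T                       # bounds: [(lo, hi), ...]
    x = rng.uniform(lo, hi, (n_particles, len(lo)))   # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest, pval = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pval.argmin()]                          # global best position
    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                    # keep inside the box
        f = np.array([objective(p) for p in x])
        better = f < pval
        pbest[better], pval[better] = x[better], f[better]
        g = pbest[pval.argmin()]
    return g, pval.min()
```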

3.
The least squares twin support vector machine (LSTSVM) generates two non-parallel hyperplanes by directly solving a pair of systems of linear equations (see the sketch below), as opposed to the two quadratic programming problems (QPPs) of the conventional twin support vector machine (TSVM), which makes the learning speed of LSTSVM faster than that of TSVM. However, LSTSVM fails to exploit the underlying similarity information among samples, which may be important for classification performance. To address this problem, we incorporate the similarity information of samples into LSTSVM to build a novel non-parallel plane classifier, called the K-nearest neighbor based least squares twin support vector machine (KNN-LSTSVM). The proposed method not only retains the simplicity and speed of LSTSVM but also incorporates inter-class and intra-class graphs into the model to improve classification accuracy and generalization ability. Experimental results on several synthetic as well as benchmark datasets demonstrate the efficiency of the proposed method. Finally, we investigate the effectiveness of our classifier in a human action recognition application.
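As an illustration of the pair of linear systems mentioned above, here is a minimal NumPy sketch of the standard linear LSTSVM closed form that KNN-LSTSVM builds on. The small ridge term `reg` is our addition for numerical stability, and the KNN graph weighting of the paper is not reproduced; this is a sketch of the base method only.

```python
import numpy as np

def lstsvm_planes(A, B, c1=1.0, c2=1.0, reg=1e-8):
    """Linear LSTSVM: fit two non-parallel hyperplanes w1'x + b1 = 0 and
    w2'x + b2 = 0 by solving two systems of linear equations.
    A: class +1 samples (m1 x n), B: class -1 samples (m2 x n)."""
    E = np.hstack([A, np.ones((A.shape[0], 1))])   # augmented [A  e]
    F = np.hstack([B, np.ones((B.shape[0], 1))])   # augmented [B  e]
    I = reg * np.eye(E.shape[1])                   # tiny ridge for stability
    # Plane 1: near class A, unit distance from class B (equality in LS version)
    u1 = -np.linalg.solve(E.T @ E / c1 + F.T @ F + I, F.T @ np.ones(F.shape[0]))
    # Plane 2: near class B, unit distance from class A
    u2 = np.linalg.solve(F.T @ F / c2 + E.T @ E + I, E.T @ np.ones(E.shape[0]))
    return (u1[:-1], u1[-1]), (u2[:-1], u2[-1])    # (w1, b1), (w2, b2)

def predict(x, w1, b1, w2, b2):
    # assign to the class whose hyperplane is nearer (perpendicular distance)
    d1 = np.abs(x @ w1 + b1) / np.linalg.norm(w1)
    d2 = np.abs(x @ w2 + b2) / np.linalg.norm(w2)
    return np.where(d1 <= d2, 1, -1)
```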

4.

In this paper, we formulate a fuzzy least squares version of the recently proposed clustering method twin support vector clustering (TWSVC). Here, a fuzzy membership value of each data pattern to each cluster is optimized and then used to assign each data pattern to one cluster or another. The formulation finds k cluster-center planes by solving modified primal problems of TWSVC, instead of the dual problems usually solved. We show that the solution of the proposed algorithm reduces to solving a series of systems of linear equations, as opposed to a series of quadratic programming problems along with systems of linear equations as in TWSVC. Experimental results on several publicly available datasets show that the proposed fuzzy least squares twin support vector clustering (F-LS-TWSVC) achieves clustering accuracy comparable to that of TWSVC with less computational time. Further, we give an application of F-LS-TWSVC to the segmentation of color images.


5.
Primal least squares twin support vector regression
The training algorithm of classical twin support vector regression (TSVR) can be attributed to the solution of a pair of quadratic programming problems (QPPs) with inequality constraints in the dual space. However, this solution is affected by time and memory constraints when dealing with large datasets. In this paper, we present a least squares version of TSVR in the primal space, termed primal least squares TSVR (PLSTSVR). By introducing the least squares method, the inequality constraints of TSVR are transformed into equality constraints. Furthermore, we directly solve the two QPPs with equality constraints in the primal space instead of the dual space; thus, we need only solve two systems of linear equations instead of two QPPs. Experimental results on artificial and benchmark datasets show that PLSTSVR has accuracy comparable to TSVR but with considerably less computational time. We further investigate its validity in predicting the opening price of stock.

6.
Twin support vector machines (TWSVM) have recently become a research hotspot in machine learning. TWSVM offers high classification accuracy and fast training, but its training does not make full use of the statistical information in the samples. As an improvement on TWSVM, the Mahalanobis-distance-based twin support vector machine (TMSVM) takes the covariance information of each class into account during classification (see the distance sketch below) and works well in many practical problems. However, TMSVM's training speed needs improvement, and it is only applicable to binary classification. To address these two problems, the least squares idea is introduced into TMSVM: the inequality constraints of TMSVM are replaced with equality constraints, reducing the quadratic programming problems to solving two systems of linear equations, which yields the Mahalanobis-distance-based least squares twin support vector machine (LSTMSVM). Combining LSTMSVM with the directed acyclic graph (DAG) strategy, a Mahalanobis-distance-based least squares twin multi-class support vector machine is designed. To reduce the error accumulation of the DAG structure, a Mahalanobis-distance-based inter-class separability measure is constructed. Experiments on both artificial and UCI datasets show that the proposed algorithms are not only effective but also clearly improve classification performance over traditional multi-class SVMs.
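A minimal sketch of the per-class covariance information that Mahalanobis-distance-based methods such as TMSVM rely on: the Mahalanobis distance from a point to a class, computed with that class's own covariance matrix. The regularization term `reg` is an assumption added for numerical stability.

```python
import numpy as np

def mahalanobis_dist(x, X_class, reg=1e-6):
    """Mahalanobis distance from point x to a class, using that class's
    mean and covariance -- the statistical information TMSVM exploits."""
    mu = X_class.mean(axis=0)
    cov = np.cov(X_class, rowvar=False) + reg * np.eye(X_class.shape[1])
    diff = x - mu
    # sqrt((x - mu)' Sigma^{-1} (x - mu)), solving instead of inverting
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))
```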

7.
Twin support vector machine (TSVM), least squares TSVM (LSTSVM) and energy-based LSTSVM (ELS-TSVM) satisfy only the empirical risk minimization principle. Moreover, the matrices in their formulations are only positive semi-definite. To overcome these problems, we propose a robust energy-based least squares twin support vector machine algorithm, called RELS-TSVM for short. Unlike TSVM, LSTSVM and ELS-TSVM, our RELS-TSVM maximizes the margin with a positive definite matrix formulation and implements the structural risk minimization principle, which embodies the essence of statistical learning theory. Furthermore, RELS-TSVM utilizes energy parameters to reduce the effect of noise and outliers. Experimental results on several synthetic and real-world benchmark datasets show that RELS-TSVM not only yields better classification performance but also has lower training time compared to ELS-TSVM, LSPTSVM, LSTSVM, TBSVM and TSVM.

8.
We first propose Distance Difference GEPSVM (DGEPSVM), a binary classifier that obtains two nonparallel planes by solving two standard eigenvalue problems. Compared with GEPSVM, this algorithm does not suffer from the singularity that can occur in GEPSVM and achieves better classification accuracy. The formulation can deal with XOR problems with different distributions while keeping the genuine geometrical interpretation of the original GEPSVM. Moreover, the proposed algorithm gives classification accuracy comparable to that of LSTSVM and TWSVM, but with fewer unknown parameters. We then incorporate regularization techniques into TWSVM. With the help of the regularized formulation, a linear programming formulation of TWSVM, called FETSVM, is proposed to improve TWSVM's sparsity and thereby suppress input features. This means FETSVM can reduce the number of input features in the linear case; when a nonlinear classifier is used, only a few kernel functions determine the classifier. Lastly, these algorithms are compared on artificial and public datasets. To further illustrate their effectiveness, we also apply them to USPS handwritten digits.

9.
In classification problems, different classes may contain different numbers of samples. Sometimes this imbalance is very high, and the interest lies in classifying the samples belonging to the minority class. The support vector machine (SVM) is one of the widely used techniques for classification, and fuzzy-based approaches have been applied to this problem. In this paper, motivated by the work of Fan et al. (Knowledge-Based Systems 115: 87-99, 2017), we propose two efficient variants of the entropy-based fuzzy SVM (EFSVM). By considering a fuzzy membership value for each sample, we propose an entropy-based fuzzy least squares support vector machine (EFLSSVM-CIL) and an entropy-based fuzzy least squares twin support vector machine (EFLSTWSVM-CIL) for class-imbalanced datasets, where fuzzy membership values are assigned based on the entropy values of samples, as sketched below. Each solves a system of linear equations instead of the quadratic programming problem (QPP) of EFSVM. The least squares versions of the entropy-based SVM are faster than EFSVM and give higher generalization performance, which shows their applicability and efficiency. Experiments are performed on various real-world class-imbalanced datasets, comparing the proposed methods with the new fuzzy twin support vector machine for pattern classification (NFTWSVM), the entropy-based fuzzy support vector machine (EFSVM), the fuzzy twin support vector machine (FTWSVM) and the twin support vector machine (TWSVM), which clearly illustrates the superiority of the proposed EFLSTWSVM-CIL.
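A hedged sketch of the entropy-based membership idea: samples whose k-nearest neighborhoods are class-mixed (high entropy) receive lower fuzzy memberships, so unreliable samples contribute less to training. The linear entropy-to-membership mapping and the parameter `beta` below are illustrative assumptions, not the exact rule of Fan et al.

```python
import numpy as np

def entropy_memberships(X, y, k=7, beta=0.5):
    """Illustrative entropy-based fuzzy memberships: a sample whose k
    nearest neighbours are class-mixed (high entropy) gets a smaller
    membership. The entropy-to-membership map is an assumption."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise dists
    H = np.empty(n)
    for i in range(n):
        nn = np.argsort(D[i])[1:k + 1]      # k nearest neighbours, skip self
        p = np.mean(y[nn] == y[i])          # fraction of same-class neighbours
        ps = np.array([p, 1.0 - p])
        ps = ps[ps > 0]                     # avoid log(0)
        H[i] = -(ps * np.log(ps)).sum()     # neighbourhood class entropy
    Hmax = H.max() if H.max() > 0 else 1.0
    return 1.0 - beta * H / Hmax            # memberships in [1 - beta, 1]
```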

10.
Using a conjugate gradient method, a novel iterative support vector machine (FISVM) is proposed that is capable of generating a new non-linear classifier. We attempt to solve a modified primal problem of the proximal support vector machine (PSVM) and show that its solution reduces to solving just a system of linear equations, as opposed to a quadratic programming problem in SVM. The algorithm not only requires no special optimization solvers, such as linear or quadratic programming tools, but also guarantees fast convergence. The full algorithm needs merely four lines of MATLAB code, and gives results similar to or better than several recent learning algorithms in terms of classification accuracy. Moreover, the proposed stand-alone approach can deal with the instability in classification performance of the smooth support vector machine, the generalized proximal support vector machine, PSVM and the reduced support vector machine. Experiments carried out on UCI datasets show the effectiveness of our approach.

11.
Xinjun Peng, Information Sciences, 2011, 181(18): 3967-3980
Twin support vector machines (TSVM) obtain faster training speeds than classical support vector machines (SVM). However, TSVM augmented vectors lose sparsity. In this paper, a rapid sparse twin support vector machine (STSVM) classifier in the primal space is proposed to improve the sparsity and robustness of TSVM. Based on a simple back-fitting strategy, STSVM iteratively builds each nonparallel hyperplane by adding one support vector (SV) from the corresponding class at a time. This process is terminated by an adaptive and stable stopping criterion. STSVM learning is implemented by solving systems of linear equations, introducing a quadratic function to approximate the empirical risk. Computational results on several synthetic and benchmark datasets indicate that STSVM obtains a sparse separating hyperplane at low cost without sacrificing generalization performance.

12.
The least squares twin support vector machine replaces the solution of complex quadratic programming problems with the solution of two systems of linear equations, and so enjoys simple computation and fast training. However, the hyperplanes obtained by the least squares twin support vector machine are easily affected by outliers, and its solution lacks sparsity. To address this problem, a robust least squares twin support vector machine model based on the truncated least squares loss (sketched below) is proposed, and the model's robustness to outliers is verified theoretically. To enable the model to handle large-scale data, a sparse solution of the new model is obtained via the representer theorem and incomplete Cholesky decomposition, and a sparse robust least squares twin support vector machine algorithm suitable for large-scale data with outliers is proposed. Numerical experiments show that the new algorithm improves classification accuracy by 1.97% to 37.7%, sparsity by a factor of 26 to 199, and convergence speed by a factor of 6.6 to 2027.4 compared with existing algorithms.
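A minimal sketch of the truncated least squares loss that underpins the robustness claim: the quadratic penalty is capped, so a distant outlier contributes at most a constant to the objective rather than growing without bound. The threshold `t` is an illustrative tuning parameter.

```python
import numpy as np

def truncated_sq_loss(r, t=2.0):
    """Truncated least squares loss: grows like r**2 up to |r| = t, then
    stays flat at t**2, so one far-away outlier cannot dominate the fit
    (illustrative; the truncation threshold t is a tunable)."""
    return np.minimum(r ** 2, t ** 2)
```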

13.
The indefinite kernel support vector machine (IKSVM) has recently attracted increasing attention in machine learning. Since IKSVM is essentially a non-convex problem, existing algorithms either change the spectrum of the indefinite kernel directly, at the risk of losing valuable information, or solve the dual form of IKSVM, suffering from a duality gap. In this paper, we propose a primal perspective on the problem: we focus directly on the primal form of IKSVM and present a novel algorithm, termed IKSVM-DC, for binary and multi-class classification. Concretely, according to the characteristics of the spectrum of the indefinite kernel matrix, IKSVM-DC decomposes the primal function into the subtraction of two convex functions, yielding a difference of convex functions (DC) program (see the sketch below). To accelerate the convergence rate, IKSVM-DC combines the classical DC algorithm with a line search step along the descent direction at each iteration. Furthermore, we construct a multi-class IKSVM model that classifies multiple classes in a unified form. A theoretical analysis then validates that IKSVM-DC converges to a local minimum. Finally, we conduct experiments on both binary and multi-class datasets, and the results show that IKSVM-DC is superior to other state-of-the-art IKSVM algorithms.
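A minimal sketch of the DC programming idea IKSVM-DC builds on: write the objective as a difference of convex functions f = g - h and, at each iteration, linearize h at the current point and minimize the resulting convex surrogate. The toy objective below is illustrative, not the IKSVM primal.

```python
import numpy as np

def dca(grad_h, argmin_surrogate, x0, n_iters=50, tol=1e-8):
    """Generic DC algorithm for min f(x) = g(x) - h(x), g and h convex:
    take y_k in the subdifferential of h at x_k, then minimize the convex
    surrogate g(x) - <y_k, x>."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        y = grad_h(x)                    # subgradient of h at x_k
        x_new = argmin_surrogate(y)      # argmin_x g(x) - y.x (convex step)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy DC objective f(x) = x^2 - 2|x| (minima at x = +1 and x = -1):
# g(x) = x^2, h(x) = 2|x|, so grad_h = 2*sign(x) and the surrogate
# argmin_x x^2 - y*x = y/2 has a closed form.
x_star = dca(lambda x: 2.0 * np.sign(x),
             lambda y: y / 2.0,
             x0=np.array([0.3]))
print(x_star)   # -> [1.]
```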

14.
Least Squares Support Vector Machine Classifiers
In this letter we discuss a least squares version of support vector machine (SVM) classifiers. Due to the equality-type constraints in the formulation, the solution follows from solving a set of linear equations, instead of the quadratic programming required for classical SVMs, as sketched below. The approach is illustrated on a two-spiral benchmark classification problem.
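A minimal sketch of how the equality constraints turn training into one linear system: the KKT conditions of the LS-SVM classifier yield a single square system in the bias b and the multipliers alpha. The RBF kernel and the parameter values are chosen for illustration.

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """LS-SVM classifier: the equality-constrained formulation reduces
    training to one linear KKT system in (b, alpha).
    X: samples (n x d), y: labels in {-1, +1} as floats."""
    n = len(y)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))                 # RBF kernel matrix
    Omega = (y[:, None] * y[None, :]) * K              # Omega_ij = y_i y_j K_ij
    A = np.zeros((n + 1, n + 1))
    A[0, 1:], A[1:, 0] = y, y
    A[1:, 1:] = Omega + np.eye(n) / gamma              # ridge from the LS term
    rhs = np.concatenate([[0.0], np.ones(n)])
    sol = np.linalg.solve(A, rhs)                      # one linear solve
    return sol[0], sol[1:]                             # b, alpha

def lssvm_predict(X_new, X, y, b, alpha, sigma=1.0):
    sq = ((X_new[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))
    return np.sign(K @ (alpha * y) + b)                # sign(sum a_i y_i K + b)
```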

15.
As a development of the powerful SVM, the recently proposed parametric-margin ν-support vector machine (par-ν-SVM) is good at dealing with heteroscedastic-noise classification problems. In this paper, we propose a novel and fast proximal parametric-margin support vector classifier (PPSVC) based on the par-ν-SVM. The PPSVC maximizes a novel proximal parametric margin by solving a small system of linear equations, whereas the par-ν-SVM maximizes the parametric margin by solving a quadratic programming problem. Therefore, our PPSVC is not only useful in the case of heteroscedastic noise but also has a much faster learning speed than the par-ν-SVM. Experimental results on several artificial and publicly available datasets show the advantages of PPSVC in both generalization ability and learning speed. Furthermore, we investigate the performance of the proposed PPSVC on the text categorization problem; experimental results on two benchmark text corpora show its practicability and effectiveness.

16.
In this paper a new class of simplified low-cost analog artificial neural networks with on-chip adaptive learning algorithms is proposed for solving linear systems of algebraic equations in real time. The proposed learning algorithms for linear least squares (LS), total least squares (TLS) and data least squares (DLS) problems can be considered modifications and extensions of well-known algorithms: the row-action projection (Kaczmarz) algorithm and/or the LMS (Adaline) Widrow-Hoff algorithm (see the sketch below). The algorithms can be applied to any problem that can be formulated as a linear regression problem. The correctness and high performance of the proposed neural networks are illustrated by extensive computer simulation results.
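For reference, a minimal sketch of the row-action (Kaczmarz) iteration that the proposed learning algorithms modify and extend: the current iterate is projected onto one row's hyperplane at a time. The relaxation parameter and sweep count are illustrative.

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=100, relax=1.0):
    """Row-action (Kaczmarz) iteration for Ax = b: project the current
    iterate onto each row's hyperplane in turn -- the basic building
    block behind row-action least squares learning rules."""
    m, n = A.shape
    x = np.zeros(n)
    row_sq = (A ** 2).sum(axis=1)          # squared norms of the rows
    for _ in range(n_sweeps):
        for i in range(m):
            if row_sq[i] > 0:
                # move along row i to satisfy a_i . x = b_i
                x += relax * (b[i] - A[i] @ x) / row_sq[i] * A[i]
    return x
```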

17.
The support tensor machine (STM) is limited by its iterative training procedure and suffers from long training times. To address this drawback, the objective of STM is modified so that training changes from solving a set of quadratic programs to computing systems of linear equations, and the transductive idea is introduced to handle semi-supervised problems, yielding a least squares semi-supervised support tensor machine learning algorithm. Comparing the proposed algorithm with traditional algorithms on face recognition and time-series classification, experiments show that it not only reduces computation time but also improves the recognition rate.

18.
Benchmarking Least Squares Support Vector Machine Classifiers
In Support Vector Machines (SVMs), the solution of the classification problem is characterized by a (convex) quadratic programming (QP) problem. In a modified version of SVMs, called Least Squares SVM classifiers (LS-SVMs), a least squares cost function is proposed so as to obtain a linear set of equations in the dual space. While the SVM classifier has a large-margin interpretation, the LS-SVM formulation is related in this paper to a ridge regression approach for classification with binary targets and to Fisher's linear discriminant analysis in the feature space. Multiclass categorization problems are represented by a set of binary classifiers using different output coding schemes. While regularization is used to control the effective number of parameters of the LS-SVM classifier, the sparseness property of SVMs is lost due to the choice of the 2-norm. Sparseness can be imposed in a second stage by gradually pruning the support value spectrum and optimizing the hyperparameters during the sparse approximation procedure. In this paper, twenty public domain benchmark datasets are used to evaluate the test set performance of LS-SVM classifiers with linear, polynomial and radial basis function (RBF) kernels. Both the SVM and LS-SVM classifiers with RBF kernel, in combination with standard cross-validation procedures for hyperparameter selection (sketched below), achieve comparable test set performances. These SVM and LS-SVM performances are consistently very good when compared to a variety of methods described in the literature, including decision-tree-based algorithms, statistical algorithms and instance-based learning methods. We show on ten UCI datasets that the LS-SVM sparse approximation procedure can be successfully applied.
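A minimal sketch of the standard k-fold cross-validation selection procedure used in such benchmarks. The `fit` and `predict` callables and the parameter grid are hypothetical placeholders for any trainer, e.g. an LS-SVM solver with (gamma, sigma) hyperparameters.

```python
import numpy as np

def kfold_grid_search(X, y, fit, predict, grid, k=10, seed=0):
    """Standard k-fold cross-validation over a hyperparameter grid.
    fit(Xtr, ytr, **params) -> model and predict(model, Xte) -> labels
    are placeholders for any classifier; grid is a list of param dicts."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    best, best_err = None, np.inf
    for params in grid:
        errs = []
        for f in range(k):
            te = folds[f]
            tr = np.concatenate([folds[j] for j in range(k) if j != f])
            model = fit(X[tr], y[tr], **params)
            errs.append(np.mean(predict(model, X[te]) != y[te]))
        if np.mean(errs) < best_err:
            best, best_err = params, float(np.mean(errs))
    return best, best_err
```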

19.
X.-Y. Wu, J.-L. Xia, F. Yang, Computing, 2002, 68(4): 375-386
A new method for solving weighted linear least squares problems with full rank is proposed. Based on Liapunov stability theory, the method associates with the weighted linear least squares problem a dynamic system whose equilibrium is the solution of interest, and integrates this system numerically by an A-stable numerical method, as sketched below. Numerical tests suggest that the new method is more than competitive with current conventional techniques based on the normal equations.
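The abstract does not give the paper's exact dynamic system; a plausible minimal sketch is the gradient flow of the weighted least squares objective, integrated with backward Euler (an A-stable method), so that each time step reduces to one linear solve. All names and step sizes below are illustrative assumptions.

```python
import numpy as np

def weighted_ls_flow(A, b, w, h=1.0, n_steps=50):
    """Gradient-flow sketch for min ||W^(1/2) (A x - b)||^2: integrate
    dx/dt = -A' W (A x - b) with backward Euler (A-stable), so each step
    solves (I + h A'WA) x_new = x + h A'Wb. The equilibrium is the
    weighted LS solution when A has full column rank. Illustrative only;
    the paper's exact dynamic system may differ."""
    W = np.diag(w)                       # diagonal weight matrix from w
    M = A.T @ W @ A
    c = A.T @ W @ b
    n = A.shape[1]
    S = np.eye(n) + h * M                # backward-Euler step matrix
    x = np.zeros(n)
    for _ in range(n_steps):
        x = np.linalg.solve(S, x + h * c)
    return x
```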

20.
This paper introduces two types of nonsmooth optimization methods for selecting model hyperparameters in primal SVM models based on cross-validation. Unlike common grid search approaches to model selection, these approaches are scalable both in the number of hyperparameters and in the number of data points. Taking inspiration from linear-time primal SVM algorithms, scalability in model selection is achieved by working directly with the primal variables, without introducing any dual variables. The proposed implicit primal gradient descent (ImpGrad) method can utilize existing SVM solvers. Unlike prior methods for gradient descent in hyperparameter space, all work is done in the primal space, so no inversion of the kernel matrix is required. The proposed explicit penalized bilevel programming (PBP) approach optimizes the hyperparameters and parameters simultaneously: it solves the original cross-validation problem by solving a series of least squares regression problems with simple constraints in both the hyperparameter and parameter spaces. Computational results on least squares support vector regression problems with multiple hyperparameters establish that both the implicit and explicit methods perform well in terms of generalization and computational time. These methods are directly applicable to other learning tasks with differentiable loss and regularization functions. Both algorithms represent powerful new approaches to solving large bilevel programs involving nonsmooth loss functions.
