20 similar documents found (search time: 0 ms)
1.
A Sparsification Algorithm for Least Squares Support Vector Machines (cited 7 times: 0 self-citations, 7 by others)
This paper introduces a pruning algorithm for sparsifying least squares support vector machines. Because the training samples corresponding to small support values in the support value spectrum contribute little during execution, deleting them causes no significant drop in performance. Simulation experiments show that the algorithm is not only simple and easy to implement but also maintains good classification performance.
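The pruning idea above can be sketched in a few lines: train once, discard the samples with the smallest support values |alpha|, and retrain on the remainder. The following Python sketch is illustrative only; the data, the linear kernel, the drop fraction, and the function names are assumptions, not taken from the paper. It uses the standard LS-SVM linear system to obtain the support values:

```python
import numpy as np

def lssvm_fit(K, y, gamma=10.0):
    # Solve the LS-SVM linear system [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
    n = len(y)
    A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)),  K + np.eye(n) / gamma]])
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]          # bias b, support values alpha

X = np.array([[0.0], [0.1], [0.2], [4.9], [5.0], [5.1]])
y = np.array([-1., -1., -1., 1., 1., 1.])
K = X @ X.T                          # linear kernel, for brevity
b, alpha = lssvm_fit(K, y)

# Prune: drop the third of the samples with the smallest |alpha|, retrain.
keep = np.argsort(np.abs(alpha))[len(y) // 3:]
b2, alpha2 = lssvm_fit(K[np.ix_(keep, keep)], y[keep])

# The pruned model still classifies all original points correctly.
pred = np.sign(X @ X[keep].T @ alpha2 + b2)
```

Samples near the cluster centers carry the smallest support values here, so pruning removes them first, exactly the behavior the abstract describes.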
2.
3.
Digital Least Squares Support Vector Machines (cited 1 time: 0 self-citations, 1 by others)
This paper presents a very simple digital architecture that implements a Least-Squares Support Vector Machine. The simplicity of the whole system and its good behavior when used to solve classification problems hold good prospects for applying this kind of learning machine to build embedded systems.
4.
5.
LI Li-Juan, SU Hong-Ye, CHU Jian. Acta Automatica Sinica, 2007, 33(11): 1182-1188
This paper proposes a practical generalized predictive control (GPC) algorithm based on online least squares support vector machines (LS-SVM) which can deal with nonlinear systems effectively. At each sampling period the algorithm recursively modifies the model by adding a new data pair and, for the sake of real-time performance, deleting the least important one. The pair to delete is determined by the absolute value of its Lagrange multiplier from the last sampling period. The paper gives the recursive update of the model parameters both when adding a new data pair and when deleting an existing one, so the inversion of a large matrix is avoided and the memory used is entirely under the algorithm's control. The nonlinear LS-SVM model is applied in the GPC algorithm at each sampling period. Experiments with generalized predictive control of a pH neutralization process show the effectiveness and practicality of the proposed algorithm.
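The fixed-size window rule described above can be sketched as follows. The function and variable names are hypothetical, and a real implementation would update the model recursively rather than rebuilding it; the sketch only shows the eviction rule: the pair whose Lagrange multiplier had the smallest magnitude at the previous sampling period is the one replaced when a new data pair arrives.

```python
import numpy as np

def update_window(X, y, alpha, x_new, y_new):
    # Evict the pair whose previous Lagrange multiplier has the smallest
    # magnitude, then append the newly measured pair, keeping the window
    # size constant.
    drop = int(np.argmin(np.abs(alpha)))
    X = np.delete(X, drop, axis=0)
    y = np.delete(y, drop)
    return np.vstack([X, x_new]), np.append(y, y_new)

X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.0, 1.0, 2.0])
alpha = np.array([0.5, -0.01, 0.3])      # multipliers from the last period
X2, y2 = update_window(X, y, alpha, np.array([[3.0]]), 3.0)
# The pair (1.0, 1.0) with |alpha| = 0.01 is gone; window size is still 3.
```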
6.
Taking medical data as the application domain, a least squares support vector machine classifier is built with parameters selected by grid search and cross-validation, validated on real data, and compared with a K-nearest-neighbor (K-NN) classifier and a C4.5 decision tree. The results show that the LS-SVM classifier achieves higher accuracy, indicating that least squares support vector machines have great application potential in medical diagnosis research.
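Grid search with k-fold cross-validation, as used above for parameter selection, can be sketched generically. The scoring callback and the toy score in the demo are illustrative assumptions; a real `score_fn` would train an LS-SVM on the training fold and return its accuracy on the held-out fold.

```python
import numpy as np
from itertools import product

def grid_search_cv(score_fn, gammas, sigmas, n, k=5, seed=0):
    # Evaluate every (gamma, sigma) pair by k-fold cross-validation
    # and return the best mean score together with its parameters.
    folds = np.array_split(np.random.default_rng(seed).permutation(n), k)
    best = None
    for g, s in product(gammas, sigmas):
        scores = []
        for i in range(k):
            test_idx = folds[i]
            train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
            scores.append(score_fn(train_idx, test_idx, g, s))
        mean = float(np.mean(scores))
        if best is None or mean > best[0]:
            best = (mean, g, s)
    return best

# Toy score that peaks at gamma = 1, sigma = 1 (purely illustrative).
def toy_score(train_idx, test_idx, g, s):
    return 1.0 / (1.0 + abs(np.log10(g)) + abs(np.log(s)))

best = grid_search_cv(toy_score, [0.1, 1.0, 10.0], [0.5, 1.0, 2.0], n=50)
```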
7.
Benchmarking Least Squares Support Vector Machine Classifiers (cited 16 times: 0 self-citations, 16 by others)
Tony van Gestel, Johan A. K. Suykens, Bart Baesens, Stijn Viaene, Jan Vanthienen, Guido Dedene, Bart de Moor, Joos Vandewalle. Machine Learning, 2004, 54(1): 5-32
In Support Vector Machines (SVMs), the solution of the classification problem is characterized by a (convex) quadratic programming (QP) problem. In a modified version of SVMs, called Least Squares SVM classifiers (LS-SVMs), a least squares cost function is proposed so as to obtain a linear set of equations in the dual space. While the SVM classifier has a large margin interpretation, the LS-SVM formulation is related in this paper to a ridge regression approach for classification with binary targets and to Fisher's linear discriminant analysis in the feature space. Multiclass categorization problems are represented by a set of binary classifiers using different output coding schemes. While regularization is used to control the effective number of parameters of the LS-SVM classifier, the sparseness property of SVMs is lost due to the choice of the 2-norm. Sparseness can be imposed in a second stage by gradually pruning the support value spectrum and optimizing the hyperparameters during the sparse approximation procedure. In this paper, twenty public domain benchmark datasets are used to evaluate the test set performance of LS-SVM classifiers with linear, polynomial and radial basis function (RBF) kernels. Both the SVM and LS-SVM classifier with RBF kernel in combination with standard cross-validation procedures for hyperparameter selection achieve comparable test set performances. These SVM and LS-SVM performances are consistently very good when compared to a variety of methods described in the literature including decision tree based algorithms, statistical algorithms and instance based learning methods. We show on ten UCI datasets that the LS-SVM sparse approximation procedure can be successfully applied.
8.
Least Squares Support Vector Machine Classifiers (cited 396 times: 1 self-citation, 396 by others)
In this letter we discuss a least squares version of support vector machine (SVM) classifiers. Due to equality-type constraints in the formulation, the solution follows from solving a set of linear equations instead of the quadratic programming required for classical SVMs. The approach is illustrated on a two-spiral benchmark classification problem.
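A minimal sketch of that linear-system formulation (the RBF kernel, hyperparameter values, and toy data are chosen here for illustration): in place of a QP, training solves one bordered linear system in the bias b and the support values alpha.

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    # Gaussian (RBF) kernel matrix between row-sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    # Equality constraints turn training into one linear system:
    #   [ 0   1^T         ] [ b     ]   [ 0 ]
    #   [ 1   K + I/gamma ] [ alpha ] = [ y ]
    n = len(y)
    A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)),  rbf(X, X, sigma) + np.eye(n) / gamma]])
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]

def lssvm_classify(X_train, b, alpha, X_new, sigma=1.0):
    return np.sign(rbf(X_new, X_train, sigma) @ alpha + b)

X = np.array([[0., 0.], [0., 1.], [3., 3.], [3., 4.]])
y = np.array([-1., -1., 1., 1.])
b, alpha = lssvm_train(X, y)
labels = lssvm_classify(X, b, alpha, X)   # recovers the training labels
```

Note how the regularization parameter gamma appears only as a ridge term I/gamma on the kernel matrix, which is what makes the system well conditioned and solvable in closed form.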
9.
10.
Intelligent Small-Sample Cost Prediction Based on LS-SVM (cited 5 times: 3 self-citations, 5 by others)
The least squares support vector machine introduces a least squares linear system into the support vector machine, replacing the quadratic programming approach that traditional SVMs use to solve the function estimation problem. This paper derives the LS-SVM algorithm for function estimation, builds an intelligent prediction model based on it, and studies cost prediction for airborne electronic equipment. The results show that the LS-SVM achieves higher small-sample cost prediction accuracy than multivariate logarithmic regression.
11.
The wide-band direction-finding characteristics of an antenna array are highly complex, so estimating the direction of arrival (DOA) with machine-learning methods poses a learning problem over massive data. An LS-SVM model for DOA estimation is built and its support vectors are sparsified; the support vectors with high support are used as training samples, and a second round of learning captures the array's complex direction-finding behavior, enabling wide-band DOA estimation. Experimental results show that secondary learning on the sparsified support vectors significantly improves the accuracy of DOA estimation and has great application value for wide-band DOA estimation.
12.
Support vector machine (SVM), an effective method for classification problems, tries to find the optimal hyperplane that maximizes the margin between two classes; this hyperplane is obtained by solving a constrained optimization criterion using quadratic programming (QP), which leads to high computational cost. Least squares support vector machine (LS-SVM), a variant of SVM, avoids this shortcoming by obtaining an analytical solution directly from a set of linear equations instead of a QP. Both SVM and LS-SVM operate directly on patterns represented as vectors, i.e., before applying SVM or LS-SVM, any non-vector pattern such as an image has to be vectorized first by some technique such as concatenation. However, some implicit structural or local contextual information may be lost in this transformation. Moreover, since the dimension d of the weight vector in SVM or LS-SVM with a linear kernel equals the dimension d1 × d2 of the original input pattern, the higher the dimension of a vector pattern, the more space is needed to store it. In this paper, inspired by feature-extraction methods that operate directly on matrix patterns and by the advantages of LS-SVM, we propose a new classifier design method based on matrix patterns, called MatLSSVM, which not only operates directly on the original matrix patterns but also efficiently reduces the memory for the weight vector from d1 × d2 to d1 + d2. Like LS-SVM, however, MatLSSVM inherits the existence of unclassifiable regions when extended to multi-class problems. Thus, following the fuzzy version of LS-SVM, a corresponding fuzzy version of MatLSSVM (MatFLSSVM) is further proposed to remove unclassifiable regions effectively in multi-class problems. Experimental results on benchmark datasets show that the proposed method is competitive in classification performance with LS-SVM, fuzzy LS-SVM (FLS-SVM), and the more recent MatPCA and MatFLDA. More importantly, the idea used here may provide a novel way of constructing learning models.
13.
Incremental and Online Learning Algorithms for Regression Least Squares Support Vector Machines (cited 40 times: 0 self-citations, 40 by others)
The mathematical model of the regression least squares support vector machine is first given and its properties analyzed. On this basis, incremental and online learning algorithms are designed using block-matrix computation formulas and the structure of the kernel matrix itself. These algorithms make full use of previous training results, reducing storage space and computation time. Simulation experiments demonstrate the effectiveness of both learning methods.
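The block-matrix computation at the heart of such incremental updates can be illustrated with the bordered-inverse (Schur complement) formula: given A⁻¹, the inverse of the matrix enlarged by one row and column costs O(n²) instead of a fresh O(n³) inversion. This is a generic sketch of that identity for symmetric matrices, not the paper's exact recursion.

```python
import numpy as np

def bordered_inverse(Ainv, b, c):
    # Inverse of M = [[A, b], [b^T, c]] for symmetric A, given Ainv = A^{-1}:
    #   s      = c - b^T A^{-1} b                 (Schur complement)
    #   M^{-1} = [[A^{-1} + u u^T / s,  -u / s],
    #             [-u^T / s,             1 / s]]  with u = A^{-1} b
    u = Ainv @ b
    s = c - b @ u
    top = np.hstack([Ainv + np.outer(u, u) / s, (-u / s)[:, None]])
    bot = np.append(-u / s, 1.0 / s)[None, :]
    return np.vstack([top, bot])

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([0.5, -1.0])
c = 4.0
M = np.block([[A, b[:, None]], [b[None, :], np.array([[c]])]])
Minv = bordered_inverse(np.linalg.inv(A), b, c)   # matches np.linalg.inv(M)
```

In an incremental LS-SVM, A would be the regularized kernel matrix of the current window and (b, c) the kernel values of the newly added sample; a matching rank-one formula handles deletion.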
14.
The least squares support vector machine replaces the quadratic programming of the traditional support vector machine with a least squares linear system for pattern recognition, effectively reducing computational complexity, but it loses the sparseness of the support vectors. This paper proposes a boundary-nearest-neighbor least squares support vector machine, which prunes the training samples by searching for boundary nearest neighbors so as to reduce the number of support vectors. Applying it to the multi-class classification problem constructed by the one-against-rest (1-a-r) method effectively overcomes the drawbacks of 1-a-r SVM classifiers: slow training, large computational resource demands, and the existence of rejection regions. Experimental results show that the boundary-nearest-neighbor LS-SVM classifier improves both recognition accuracy and recognition speed.
15.
JING Ling. Computer Engineering and Applications, 2006, 42(4): 7-9, 41
Constructing a B-spline curve from a given set of spatial data points is an important research topic in CAGD, and the commonly used approximation methods are essentially least squares approximation in the sense of "empirical risk". This paper discusses approximating a B-spline curve globally with least squares support vector regression in the sense of "structural risk": the starting point is to minimize structural risk rather than the empirical risk of traditional learning, which theoretically guarantees good generalization, so the method approximates the original curve rather than merely the measured data points. A mathematical model for B-spline curve fitting is established, and a special kernel function is constructed to guarantee the B-spline representation of the curve. The method offers a new approach to curve fitting, and numerical experiments confirm its feasibility.
16.
A recursive orthogonal least squares (ROLS) algorithm for multi-input, multi-output systems is developed in this paper and is applied to updating the weighting matrix of a radial basis function network. An illustrative example is given, to demonstrate the effectiveness of the algorithm for eliminating the effects of ill-conditioning in the training data, in an application of neural modelling of a multi-variable chemical process. Comparisons with results from using standard least squares algorithms, in batch and recursive form, show that the ROLS algorithm can significantly improve the neural modelling accuracy. The ROLS algorithm can also be applied to a large data set with much lower requirements on computer memory than the batch OLS algorithm.
17.
An Image Denoising Method Based on LS-SVM (cited 3 times: 0 self-citations, 3 by others)
The support vector machine is a machine learning method based on statistical learning theory that has been widely used for classification and regression problems. This paper applies the least squares support vector machine to image denoising and compares it with wavelet denoising and median filtering. Simulation results show that the method preserves image detail well and has good generalization ability.
18.
To address the problem that the kernel matrix constructed by the incremental least squares twin support vector regression machine cannot approximate the original kernel matrix well, an incremental reduced least squares twin support vector regression (IRLSTSVR) algorithm is proposed. The algorithm first uses a reduction method to judge the correlation between the column vectors of the kernel matrix and selects the samples that form those columns as support vectors, reducing the correlation among the columns so that the constructed kernel matrix can better approximate the original ker…
19.
To address the insufficient sparseness of the least squares support vector machine model, an LS-SVM sparsification method based on reducing the kernel matrix is proposed. Rows (columns) of the kernel matrix that are close in Euclidean distance are found and merged according to specific rules, shrinking the kernel matrix and thereby yielding a sparse LS-SVM model. Taking the Gaussian radial basis kernel as an example, the implementation steps of the improved method are described in detail, and simulations show that the sparse LS-SVM model obtained this way generalizes well.
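A greedy sketch of the row-merging idea (the tolerance, the toy matrix, and the keep-the-first-row merging rule are illustrative assumptions, not the paper's specific rules): rows of the kernel matrix closer than a tolerance in Euclidean distance are collapsed onto the first one kept, and the matching columns are dropped as well.

```python
import numpy as np

def reduce_kernel(K, tol=0.05):
    # Keep a row only if it is farther than `tol` (Euclidean distance)
    # from every row already kept; drop the matching columns too.
    keep = []
    for i in range(K.shape[0]):
        if all(np.linalg.norm(K[i] - K[j]) > tol for j in keep):
            keep.append(i)
    return K[np.ix_(keep, keep)], keep

K = np.array([[1.00, 0.99, 0.10],
              [0.99, 1.00, 0.10],
              [0.10, 0.10, 1.00]])
K_small, kept = reduce_kernel(K)   # rows 0 and 1 merge; kept == [0, 2]
```

Nearly identical rows of a kernel matrix correspond to nearly identical training samples, so the reduced model keeps one representative per group of near-duplicates, which is what produces the sparsity.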
20.
A Simple Decomposition Method for Support Vector Machines (cited 21 times: 0 self-citations, 21 by others)
The decomposition method is currently one of the major methods for solving support vector machines. An important issue in this method is the selection of working sets. In this paper, through the design of decomposition methods for bound-constrained SVM formulations, we demonstrate that working set selection is not a trivial task. From experimental analysis we then propose a simple working set selection that leads to faster convergence in difficult cases. Numerical experiments on different types of problems are conducted to demonstrate the viability of the proposed method.
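The working-set idea can be illustrated on a toy box-constrained QP. This simplification is mine, not the paper's method: it uses a working set of size one and ignores the equality constraint of the full SVM dual. At each iteration the variable with the largest projected-gradient violation is selected and optimized exactly.

```python
import numpy as np

def box_qp_decomposition(Q, p, C, iters=200):
    # Minimize 1/2 x^T Q x - p^T x subject to 0 <= x <= C by repeatedly
    # picking the single most-violating variable and solving for it.
    n = len(p)
    x = np.zeros(n)
    for _ in range(iters):
        g = Q @ x - p                         # gradient of the objective
        # Projected gradient: zero out components blocked by the box.
        pg = np.where((x <= 0) & (g > 0), 0.0,
                      np.where((x >= C) & (g < 0), 0.0, g))
        i = int(np.argmax(np.abs(pg)))
        if abs(pg[i]) < 1e-10:                # KKT conditions satisfied
            break
        # Exact one-variable minimization, clipped to the box.
        x[i] = np.clip(x[i] - g[i] / Q[i, i], 0.0, C)
    return x

Q = np.array([[2.0, 0.0], [0.0, 2.0]])
p = np.array([1.0, 5.0])
x = box_qp_decomposition(Q, p, C=1.0)   # converges to [0.5, 1.0]
```

Even in this toy setting, which variable is selected at each step changes how many iterations are needed, which is the non-triviality the abstract points at.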