Similar Literature
19 similar documents found (search time: 203 ms)
1.
The mathematical regression model of the standard least squares support vector machine (LSSVM) is presented, and a multi-kernel LSSVM algorithm is proposed to improve regression accuracy on non-flat functions. Hierarchical clustering is used to address the lack of sparseness in the multi-kernel LSSVM solution, and partial least squares regression is applied to make the multi-kernel LSSVM robust. Simulation examples confirm the effectiveness of the proposed method.
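The key property this entry relies on is that the LSSVM dual is a single linear system rather than a quadratic program. A minimal single-kernel LSSVR sketch in NumPy (the multi-kernel and hierarchical-clustering steps of the paper are omitted; the RBF kernel and the parameter values are illustrative assumptions):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise squared distances -> Gaussian (RBF) kernel matrix
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def lssvr_fit(X, y, C=10.0, gamma=1.0):
    # LSSVR dual: solve [[0, 1^T], [1, K + I/C]] [b; alpha] = [0; y]
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / C
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]          # alpha, bias b

def lssvr_predict(X_train, alpha, b, X_new, gamma=1.0):
    # f(x) = sum_i alpha_i k(x, x_i) + b
    return rbf_kernel(X_new, X_train, gamma) @ alpha + b
```

Note that every training point ends up with a nonzero alpha, which is exactly the non-sparseness that the clustering step of the paper is meant to mitigate.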

2.
Online robust least squares support vector regression modeling   (Cited by: 5; self-citations: 0, others: 5)
Industrial processes are time-varying, and field data are usually nonlinear and contain outliers, so least squares support vector regression (LSSVR) models are easily distorted by outliers. To address this, an online robust LSSVR modeling method is proposed that combines LSSVR with a robust learning algorithm (RLA). The method first predicts the process output with an LSSVR model and compares the prediction with the true output to obtain the prediction error; it then trains the LSSVR weights with the RLA to build a robust LSSVR model; finally, an incremental learning method updates the robust LSSVR model online. Simulation studies verify the effectiveness of the proposed method.
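A common way to realize the robust-learning step described above is reweighting: fit a plain LSSVR, score residuals against a robust scale estimate, down-weight suspect points, and re-solve the weighted system. A hedged sketch (the thresholds c1, c2 and the MAD-based scale are conventional choices, not taken from this paper):

```python
import numpy as np

def rbf(A, B, g=0.5):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-g * d2)

def lssvr_solve(K, y, C, w=None):
    # Weighted LSSVR dual: diagonal regularization 1/(C*w_i); w=None -> all ones
    n = len(y)
    w = np.ones(n) if w is None else w
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.diag(1.0 / (C * w))
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]          # alpha, bias b

def robust_weights(residuals, c1=2.5, c2=3.0):
    # Down-weight points whose residual is large relative to a robust (MAD) scale
    s = 1.483 * np.median(np.abs(residuals - np.median(residuals)))
    r = np.abs(residuals) / s
    return np.where(r <= c1, 1.0,
                    np.where(r <= c2, (c2 - r) / (c2 - c1), 1e-4))
```

Re-solving with `lssvr_solve(K, y, C, w=robust_weights(res))` then gives the robust fit; an online variant would additionally update the solution incrementally as new samples arrive.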

3.
陈圣磊  陈耿  薛晖 《计算机工程》2011,37(22):145-147
The least squares support vector machine improves solution efficiency but loses sparseness, which slows prediction on new samples. A sparse least squares support vector machine classification algorithm is therefore proposed: an approximately linearly independent set of vectors is found in the feature space to construct a sparse representation of the discriminant function, and the corresponding LSSVM optimization problem is solved as a system of linear equations, yielding the optimal discriminant function. Experimental results show that the algorithm achieves faster prediction than the standard LSSVM without loss of classification accuracy.

4.
The fuzzy least squares twin support vector machine combines a fuzzy membership function with the least squares twin support vector machine to handle outlier noise in the training set and low computational efficiency. Following the structural risk minimization principle of statistical learning, the model is improved with L2-norm regularization for the regression process; to address training efficiency on large-scale datasets, an L1-norm regularized variant of the original model is also given. Exploiting incremental learning, training proceeds by incremental selection and accumulation over the dataset to speed up training. The advantages of the improved algorithms are verified on UCI datasets.

5.
The least squares twin support vector regression machine (LSTSVR) replaces the quadratic programming problems of the twin support vector regression machine (TSVR) with two systems of linear equations by introducing a least squares loss, greatly reducing training time. However, minimizing the empirical risk under the least squares loss has two drawbacks: (1) overfitting; (2) a non-sparse solution, which makes large-scale training difficult. For (1), a structured least squares twin support vector regression machine (S-LSTSVR) is proposed to improve generalization; for (2), an incomplete Cholesky decomposition is further used to form a low-rank approximation of the kernel matrix, yielding a sparse algorithm, SS-LSTSVR, that trains efficiently on large-scale data. Experiments on synthetic and UCI datasets show that SS-LSTSVR avoids overfitting and handles large-scale training problems efficiently.

6.
A fast least squares support vector machine classification algorithm   (Cited by: 1; self-citations: 1, others: 0)
The least squares support vector machine obtains the optimal separating surface by solving a system of linear equations rather than a convex quadratic program, but it loses sparseness, so the computational cost becomes very large for large training sets. A fast LSSVM algorithm is proposed that preserves the generalization ability of the support vector machine while improving speed, especially when the number of training samples is large. The new algorithm selects samples with large support values as the training set, reducing its size and speeding up training, and then obtains an approximately optimal solution with the LSSVM algorithm. Experimental results show that the new algorithm indeed trains faster.
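In LSSVM the support values satisfy alpha_i = C*e_i, so |alpha_i| ranks how strongly each sample constrains the solution; the selection-and-refit idea can be sketched as follows (a hedged illustration, treating the binary classifier as least squares regression on ±1 labels, a common simplification; the 30% kept is an arbitrary choice):

```python
import numpy as np

def rbf(A, B, g=1.0):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-g * d2)

def lssvm_fit(X, y, C=10.0):
    # LSSVM as least squares regression on +/-1 labels
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X) + np.eye(n) / C
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]          # alpha, bias b

def prune_and_refit(X, y, keep=0.3, C=10.0):
    # Keep the samples with the largest |support value| and refit on them
    alpha, _ = lssvm_fit(X, y, C)
    idx = np.argsort(-np.abs(alpha))[: max(2, int(keep * len(y)))]
    alpha_s, b_s = lssvm_fit(X[idx], y[idx], C)
    return X[idx], alpha_s, b_s
```

The refit model predicts with only the retained samples, so both the linear system and the prediction sum shrink accordingly.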

7.
Robust-learning-based least squares support vector machine and its application   (Cited by: 3; self-citations: 1, others: 2)
The least squares support vector machine has higher computational efficiency and fitting accuracy than the standard support vector machine but lacks its robustness: when the sampled data contain outliers, or the Gaussian assumption on the error variable fails, the estimates become unreliable. A robust least squares support vector machine method is therefore proposed that introduces robust learning on top of LSSVM to obtain robust estimates. Simulation analysis and an application at a hydrometallurgical plant verify the feasibility and effectiveness of the method.

8.
Aerodynamic parameter fitting based on a robust least squares support vector machine   (Cited by: 1; self-citations: 0, others: 1)
The least squares support vector machine (LS-SVM) is computationally more efficient than the standard support vector machine, but it loses the sparseness of the standard machine, and when outliers are present or the Gaussian assumption on the error variable fails, the estimates become unreliable. To overcome these two drawbacks, a robust least squares support vector machine (RLS-SVM) is introduced for aerodynamic parameter fitting of aircraft: robust estimates are obtained via a weighted support vector machine, and a sparse solution is obtained by pruning the spectrum of support values. Simulation results show that RLS-SVM is simple, fast to learn, accurate, and robust, making it a method well worth adopting in aircraft trajectory computation.

9.
A sparse least squares support vector classification machine   (Cited by: 1; self-citations: 0, others: 1)
A general support vector classifier requires solving a quadratic program, while the least squares support vector machine only solves a system of linear equations but lacks sparseness. To improve the least squares support vector classifier, this paper combines the center-distance-ratio criterion with incremental learning and proposes a sparse least squares support vector machine based on pre-selecting and then filtering support vectors. The method restores sparseness, reduces storage and computation, and speeds up both training and decision-making; it also corrects the shift of the separating surface caused by imbalanced training data, without degrading the classification ability of the least squares support vector machine. Three sets of experiments confirm these points.

10.
Sparse least squares support vector machines and their applications   (Cited by: 2; self-citations: 0, others: 2)
A method for constructing a sparse least squares support vector machine is proposed. The kernel matrix is first reduced by Gram-Schmidt orthogonalization to obtain a basis for the kernel matrix; kernel partial least squares regression is then applied to the least squares support vector machine, giving it a degree of sparseness. A nonlinear dynamic prediction model based on the sparse LSSVM is built to provide rolling predictions of blowing time in the slag-making stage of a copper converter. Simulation results show that the sparse LSSVM identified by kernel partial least squares offers high computational efficiency and good prediction accuracy.

11.
The least squares twin support vector machine (LSTSVM) generates two non-parallel hyperplanes by directly solving a pair of linear systems, as opposed to the two quadratic programming problems (QPPs) of the conventional twin support vector machine (TSVM), which makes LSTSVM faster to train than TSVM. However, LSTSVM fails to exploit the underlying similarity information among samples, which can be important for classification performance. To address this, we incorporate sample similarity into LSTSVM to build a novel non-parallel plane classifier, the K-nearest neighbor based least squares twin support vector machine (KNN-LSTSVM). The proposed method retains the simplicity and speed of LSTSVM while incorporating inter-class and intra-class graphs into the model to improve classification accuracy and generalization ability. Experimental results on several synthetic and benchmark datasets demonstrate the efficiency of the proposed method, and we further investigate the effectiveness of the classifier in a human action recognition application.

12.
Twin support vector machine (TSVM), least squares TSVM (LSTSVM) and energy-based LSTSVM (ELS-TSVM) satisfy only the empirical risk minimization principle, and the matrices in their formulations are only positive semi-definite. To overcome these problems, we propose a robust energy-based least squares twin support vector machine algorithm, RELS-TSVM for short. Unlike TSVM, LSTSVM and ELS-TSVM, our RELS-TSVM maximizes the margin with a positive definite matrix formulation and implements the structural risk minimization principle, which embodies the marrow of statistical learning theory. Furthermore, RELS-TSVM utilizes energy parameters to reduce the effect of noise and outliers. Experimental results on several synthetic and real-world benchmark datasets show that RELS-TSVM not only yields better classification performance but also requires less training time than ELS-TSVM, LSPTSVM, LSTSVM, TBSVM and TSVM.

13.

The classical support vector machine (SVM) and its twin variant, the twin support vector machine (TWSVM), use the Hinge loss, which grows linearly, whereas the least squares version of SVM (LSSVM) and the twin least squares support vector machine (LSTSVM) use the L2-norm of the error, which grows quadratically. The robust Huber loss function can be viewed as a generalization of the Hinge loss and the L2-norm loss: it behaves like the quadratic L2-norm loss for small errors and like the linear Hinge loss beyond a specified distance. Three functional iterative approaches based on a generalized Huber loss function are proposed in this paper for support vector classification: one based on SVM (the generalized Huber support vector machine) and two in the spirit of TWSVM (the generalized Huber twin support vector machine and a regularized variant of it). The proposed approaches find their solutions iteratively and eliminate the need to solve any quadratic programming problem (QPP), unlike SVM and TWSVM. Their main advantages are, first, the robust Huber loss, which gives better generalization and less sensitivity to noise and outliers than the quadratic loss, and second, the functional iterative scheme, which avoids QPPs and makes the approaches faster. The efficacy of the proposed approaches is established by numerical experiments on several real-world datasets and comparison with related methods, viz. SVM, TWSVM, LSSVM and LSTSVM; the classification results are convincing.
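The Huber loss described above can be written down directly; a small sketch showing the quadratic-then-linear behaviour (the transition parameter `delta` plays the role of the "specified distance"; the two pieces and their first derivatives agree at |e| = delta):

```python
import numpy as np

def huber(e, delta=1.0):
    # Quadratic for |e| <= delta (like the L2-norm loss of LSSVM/LSTSVM),
    # linear beyond it (like the Hinge-style losses of SVM/TWSVM)
    a = np.abs(e)
    return np.where(a <= delta, 0.5 * e**2, delta * (a - 0.5 * delta))
```

Because large residuals are penalized only linearly, their gradient contribution is bounded, which is what reduces sensitivity to noise and outliers compared with a purely quadratic loss.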


14.
Neural Processing Letters - Least squares twin support vector machine (LSTSVM) is a new machine learning method, as opposed to solving two quadratic programming problems in twin support vector...

15.
As a new version of the support vector machine (SVM), the least squares SVM (LS-SVM) involves equality instead of inequality constraints and works with a least squares cost function. A well-known drawback in LS-SVM applications is that sparseness is lost. In this paper, we develop an adaptive pruning algorithm based on a bottom-to-top strategy to deal with this drawback. In the proposed algorithm, incremental and decremental learning procedures are used alternately, and a small support vector set that covers most of the information in the training set is formed adaptively; the final classifier is constructed from this set. In general, the number of elements in the support vector set is much smaller than in the training set, so a sparse solution is obtained. To test the efficiency of the proposed algorithm, we apply it to eight UCI datasets and one benchmarking dataset. The experimental results show that the algorithm adaptively obtains sparse solutions while losing only a little generalization performance on classification problems with or without noise, and that its training speed is much faster than the sequential minimal optimization (SMO) algorithm on large-scale noise-free classification problems.

16.
In this paper we formulate a least squares version of the recently proposed projection twin support vector machine (PTSVM) for binary classification. This formulation leads to an extremely simple and fast algorithm, called the least squares projection twin support vector machine (LSPTSVM), for generating binary classifiers. Unlike PTSVM, we add a regularization term, ensuring that the optimization problems in our LSPTSVM are positive definite and yielding better generalization ability. Instead of solving two dual problems, we solve two modified primal problems via two systems of linear equations, whereas PTSVM needs to solve two quadratic programming problems along with two systems of linear equations. Our experiments on publicly available datasets indicate that LSPTSVM has classification accuracy comparable to that of PTSVM but with remarkably less computational time.

17.
The twin support vector machine (TWSVM) is a recent hot topic in machine learning. TWSVM offers high classification accuracy and fast training, but it does not fully exploit the statistical information of the samples. As an improvement, the Mahalanobis-distance-based twin support vector machine (TMSVM) takes the covariance of each class into account during classification and works well in many practical problems. However, TMSVM's training speed needs improvement, and it applies only to binary classification. To address these two issues, the least squares idea is introduced into TMSVM: the inequality constraints are replaced with equality constraints, reducing the quadratic programs to two systems of linear equations and yielding the Mahalanobis-distance-based least squares twin support vector machine (LSTMSVM); combined with a directed acyclic graph (DAG) strategy, a Mahalanobis-distance-based least squares twin multi-class support vector machine is then designed. To reduce error accumulation in the DAG structure, a Mahalanobis-distance-based measure of between-class separability is constructed. Experiments on both synthetic and UCI datasets show that the proposed algorithms are not only effective but also clearly outperform traditional multi-class SVMs in classification performance.
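The class-covariance information that TMSVM/LSTMSVM exploits enters through the Mahalanobis distance; a minimal sketch (the ridge term `eps` is an assumption added here to keep the sample covariance invertible):

```python
import numpy as np

def mahalanobis(x, X_class, eps=1e-6):
    # Distance from point x to the class described by samples X_class,
    # scaled by the inverse class covariance
    mu = X_class.mean(axis=0)
    cov = np.cov(X_class, rowvar=False) + eps * np.eye(X_class.shape[1])
    d = x - mu
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))
```

Unlike the Euclidean distance, this measure is invariant under linear rescaling of the features, which is why it captures the per-class spread that plain TWSVM ignores.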

18.
To address the problem that the kernel matrix constructed by the incremental least squares twin support vector regression machine cannot approximate the original kernel matrix well, an incremental reduced least squares twin support vector regression (IRLSTSVR) algorithm is proposed. The algorithm first uses a reduction method to judge the correlation between the column vectors of the kernel matrix and selects the samples whose columns form the kernel matrix as support vectors, lowering the correlation among columns so that the constructed kernel matrix better approximates the original one and the solution stays sparse. The inverse matrix is then updated incrementally and efficiently via the block matrix inversion lemma, further shortening training time. Finally, the feasibility and effectiveness of the algorithm are verified on benchmark datasets. Experimental results show that, compared with representative existing algorithms, IRLSTSVR obtains sparse solutions and generalization performance closer to that of the offline algorithm.
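The incremental step via the block matrix inversion lemma can be sketched directly: given the inverse of the current (symmetric) matrix, appending one row/column costs O(n^2) instead of the O(n^3) of a fresh inverse. A generic sketch of the lemma, not the paper's exact update:

```python
import numpy as np

def append_inverse(A_inv, b, d):
    # Inverse of the bordered matrix [[A, b], [b^T, d]] (A symmetric),
    # built from the known A_inv via the Schur complement s = d - b^T A^{-1} b
    u = A_inv @ b
    s = d - b @ u                    # Schur complement (a scalar here)
    n = A_inv.shape[0]
    M_inv = np.empty((n + 1, n + 1))
    M_inv[:n, :n] = A_inv + np.outer(u, u) / s
    M_inv[:n, n] = -u / s
    M_inv[n, :n] = -u / s
    M_inv[n, n] = 1.0 / s
    return M_inv
```

For a positive definite kernel-plus-ridge matrix the Schur complement s stays positive, so the update is well defined at every incremental step.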

19.
The fuzzy twin support vector machine (FTSVM) remains sensitive to noise, is prone to overfitting, and cannot effectively distinguish support vectors from outliers. An improved robust fuzzy twin support vector machine (IRFTSVM) is therefore proposed. A new hybrid membership function is constructed by combining an improved k-nearest-neighbor membership function with a membership function based on intra-class hyperplanes. A regularization term and additional constraints are introduced into the FTSVM objective, achieving structural risk minimization, avoiding inverse-matrix computation, and allowing the nonlinear case to be extended directly from the linear one, as in the classical SVM. Finally, the hinge loss is replaced with the pinball loss to reduce sensitivity to noise. The algorithm is evaluated on UCI and synthetic datasets and compared with five algorithms: SVM, TWSVM, FTSVM, PTSVM and TBSVM. Experimental results show that its classification results are satisfactory.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号