Similar Literature
19 similar documents found (search time: 78 ms)
1.
An Online Learning Algorithm for the Least Squares Twin Support Vector Machine. Total citations: 1 (self: 0, others: 1)
For the least squares twin support vector machine, which classifies with two nonparallel hyperplanes, an online learning algorithm is proposed. By applying the matrix inversion lemma, the algorithm makes full use of previous training results and avoids inverting large matrices, thereby reducing computational complexity. Simulation results verify the effectiveness of the proposed learning algorithm.
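The matrix-inversion-lemma trick mentioned in the abstract above can be sketched as follows. This is a minimal illustration of the rank-one (Sherman-Morrison) form of the lemma, not the paper's exact update; all names are illustrative.

```python
import numpy as np

def sherman_morrison_update(A_inv, u, v):
    """Update a cached inverse after a rank-one change:
    (A + u v^T)^{-1} = A^{-1} - (A^{-1} u v^T A^{-1}) / (1 + v^T A^{-1} u).
    This avoids re-inverting the (potentially large) matrix A."""
    Au = A_inv @ u
    vA = v @ A_inv
    return A_inv - np.outer(Au, vA) / (1.0 + v @ Au)

# Sanity check against direct inversion on a small example.
rng = np.random.default_rng(0)
A = np.eye(4) + 0.1 * rng.standard_normal((4, 4))
u = 0.1 * rng.standard_normal(4)
v = 0.1 * rng.standard_normal(4)
updated = sherman_morrison_update(np.linalg.inv(A), u, v)
direct = np.linalg.inv(A + np.outer(u, v))
```

In an online setting the cached inverse is updated in O(n^2) per new sample instead of the O(n^3) cost of a fresh inversion.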

2.
Research on Online Sparse Least Squares Support Vector Machine Regression. Total citations: 6 (self: 0, others: 6)
王定成, 姜斌. Control and Decision (《控制与决策》), 2007, 22(2): 132-137
Existing least squares support vector machine regression takes a long time to train and to compute model outputs, making it unsuitable for online real-time training. An online sparse least squares support vector machine regression is therefore proposed, whose training algorithm uses a sample dictionary to reduce the computational load. Training samples are added sequentially, which suits online acquisition, and the algorithm is provably convergent. Simulation results show that the algorithm has good sparsity and real-time performance, so it can be applied further to modeling and real-time control.

3.
A least squares twin support vector regression machine is proposed, built on the twin support vector regression machine. It departs from the standard support vector regression idea of constructing the ε-band with two parallel hyperplanes: instead, the ε-band is built from two hyperplanes that need not be parallel, each determining one half of the ε-band, and the final regression function is derived from them. The resulting regression function better matches the distribution of the data, and the algorithm generalizes better. Moreover, the least squares twin machine only requires solving two small linear systems to obtain the final regression function, so its computational complexity is relatively low. Numerical experiments also show that the algorithm has certain advantages in generalization ability and computational efficiency.

4.

A general least squares support vector machine trains slowly on large data sets, is computationally expensive, and is hard to train online. Combining a corrected forgetting-factor rectangular-window method with the support vector machine, an online least squares support vector machine regression algorithm based on the improved forgetting-factor rectangular-window scheme is proposed. It emphasizes the data in the current window while still accounting for the influence of historical data, reduces the computational load, and improves online identification accuracy. A simulation example demonstrates the effectiveness of the method.
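The windowing idea described above can be sketched in miniature. This is a plain weighted linear regression, not the paper's LS-SVM formulation; the function name and parameters are illustrative, with the rectangular window keeping only recent samples and the forgetting factor down-weighting the older ones among them.

```python
import numpy as np

def forgetting_window_fit(X, y, window=50, lam=0.95):
    """Weighted least squares over a rectangular window of the most recent
    samples, with older samples inside the window down-weighted by the
    forgetting factor lam: current data dominate, history still contributes."""
    Xw, yw = X[-window:], y[-window:]
    age = np.arange(len(yw) - 1, -1, -1)   # age 0 = newest sample
    w = np.sqrt(lam ** age)                # sqrt so weights apply to squared errors
    theta, *_ = np.linalg.lstsq(Xw * w[:, None], yw * w, rcond=None)
    return theta

# Noise-free linear data: the weighted fit recovers the true coefficients.
X = np.column_stack([np.ones(200), np.linspace(0.0, 1.0, 200)])
y = X @ np.array([1.0, 2.0])
theta = forgetting_window_fit(X, y)
```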


5.
郭辉, 刘贺平. Information and Control (《信息与控制》), 2005, 34(4): 403-407
Kernel partial least squares (KPLS) is proposed for feature extraction. The initial inputs are first mapped into a high-dimensional feature space, where score vectors are computed to reduce the sample dimensionality; least squares support vector machine regression is then performed. Experiments show that this method outperforms regression without feature extraction, and that KPLS extracts features more effectively than linear PLS.

6.
Incremental and Online Learning Algorithms for Regression Least Squares Support Vector Machines. Total citations: 40 (self: 0, others: 40)
The mathematical model of least squares support vector machine regression is first given and its properties analyzed. On this basis, incremental and online learning algorithms are designed using block-matrix computation formulas and the structure of the kernel matrix itself. The algorithms make full use of previous training results, reducing both storage space and computation time. Simulation experiments demonstrate the effectiveness of both learning methods.
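The mathematical model referred to above is the standard LS-SVM regression KKT system, which reduces training to a single linear solve. Below is a minimal sketch under assumed hyperparameters (gamma, sigma); it is not the paper's incremental variant, only the base model that the incremental block-matrix updates build on.

```python
import numpy as np

def lssvm_train(X, y, gamma=100.0, sigma=0.5):
    """Solve the LS-SVM regression KKT system in one linear solve:
        [ 0   1^T         ] [ b     ]   [ 0 ]
        [ 1   K + I/gamma ] [ alpha ] = [ y ]
    where K is the RBF kernel matrix of the training inputs."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2.0 * sigma ** 2))
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]              # bias b, dual coefficients alpha

def lssvm_predict(Xtr, b, alpha, Xte, sigma=0.5):
    sq = np.sum((Xte[:, None, :] - Xtr[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2)) @ alpha + b

# Fit a smooth 1-D function and predict back on the training inputs.
X = np.linspace(0.0, 3.0, 40).reshape(-1, 1)
y = np.sin(X).ravel()
b, alpha = lssvm_train(X, y)
pred = lssvm_predict(X, b, alpha, X)
```

The incremental algorithms in the paper exploit the block structure of the (n+1)×(n+1) matrix so that adding a sample updates the solution rather than rebuilding and re-solving the whole system.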

7.
The fuzzy least squares twin support vector machine model combines a fuzzy membership function with the least squares twin support vector machine to handle outlier noise in the training set and low computational efficiency. Based on the structural risk minimization principle of statistical learning, an L2-norm regularized variant of the model is derived for regression; with the training efficiency of large-scale data sets in mind, an L1-norm regularized variant of the original model is also given. Exploiting incremental learning, the training process adds data incrementally to speed up training. The superiority of the improved algorithms is verified on UCI data sets.

8.
In submerged-arc smelting of silicomanganese alloy, the alloy composition is difficult to measure, offline assays lag badly, and real-time control is hard to achieve. An improved online least squares support vector machine (IOLSSVM) model is proposed to predict the alloy composition. The model learns incrementally from each new sample, deletes the sample in the training set contributing least to the model, and uses recursive computation to strengthen its online learning capability. Applied to alloy composition prediction for a 30 MVA silicomanganese submerged-arc furnace, actual production data demonstrate the effectiveness of the method.

9.
Traditional online least squares support vector machines track time-varying processes with limited model accuracy and insufficiently sparse support vectors. Combining an iterative strategy with a reduction technique, an online adaptive iterative reduced least squares support vector machine is proposed. The method accounts for the joint constraint that new samples and historical data impose on the current model, and selects the sample contributing most to the objective function as the new support vector, achieving support vector sparsity and improving online prediction accuracy and speed. Comparative simulations show that the method is feasible and effective, with higher regression accuracy than traditional methods and the fewest support vectors among the compared approaches.

10.
A Fast Least Squares Support Vector Machine Classification Algorithm. Total citations: 1 (self: 1, others: 0)
The least squares support vector machine obtains the optimal separating surface by solving a set of linear equations rather than a convex quadratic program, but it loses the sparsity of the solution, and its computational cost becomes very large when the training set is big. A fast least squares support vector machine algorithm is proposed that speeds up training while preserving the generalization ability, with the speed advantage growing as the training set grows. The new algorithm selects the samples with larger support values as the training set, reducing its size and accelerating training, and then applies the least squares support vector machine to obtain an approximately optimal solution. Experimental results confirm that the new algorithm trains noticeably faster.
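The selection step described above (keep the samples with the largest support values) can be sketched as follows, using the standard LS-SVM classifier system. The kernel, gamma, and keep ratio are assumed for illustration; the paper's exact selection rule may differ.

```python
import numpy as np

def top_support_indices(K, y, gamma=10.0, keep=0.5):
    """Solve the full LS-SVM classifier KKT system once:
        [ 0   y^T             ] [ b     ]   [ 0 ]
        [ y   y y^T * K + I/g ] [ alpha ] = [ 1 ]
    then keep the fraction of samples with the largest support values
    |alpha| as the reduced training set."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = np.outer(y, y) * K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], np.ones(n))))
    alpha = sol[1:]
    m = max(1, int(keep * n))
    return np.sort(np.argsort(-np.abs(alpha))[:m])

# Two well-separated Gaussian blobs with a linear kernel.
rng = np.random.default_rng(1)
X = np.vstack([rng.standard_normal((10, 2)) + 3.0,
               rng.standard_normal((10, 2)) - 3.0])
y = np.concatenate([np.ones(10), -np.ones(10)])
idx = top_support_indices(X @ X.T, y, keep=0.5)
```

Retraining on `idx` then gives the approximately optimal solution at a fraction of the cost.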

11.
A sparse approximation algorithm based on projection is presented in this paper to overcome the non-sparsity of least squares support vector machines (LS-SVM). New inputs are projected into the subspace spanned by the previous basis vectors (BV); those whose squared distance from the subspace exceeds a threshold are added to the BV set, while the others are rejected, which yields the sparse approximation. In addition, a recursive approach to deleting an existing vector from the BV set is proposed. The online LS-SVM, sparse approximation, and BV removal are then combined into a sparse online LS-SVM algorithm whose memory footprint is controlled irrespective of the amount of processed data. The suggested algorithm is applied to the online modeling of a pH neutralizing process and the isomerization plant of a refinery. A detailed comparison of computing time and precision with the non-sparse algorithm is also given; the results show that the proposed algorithm greatly improves sparsity at little cost in precision.
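The projection test at the heart of the BV selection above can be sketched as follows. For clarity this works with explicit feature vectors rather than the kernel-space recursions the paper uses; function names and the threshold are illustrative.

```python
import numpy as np

def squared_distance_to_span(basis, phi):
    """Squared residual of projecting phi onto span(columns of basis)."""
    coef, *_ = np.linalg.lstsq(basis, phi, rcond=None)
    r = phi - basis @ coef
    return float(r @ r)

def select_basis(Phi, threshold=1e-3):
    """Greedy basis-vector (BV) selection: a new sample joins the BV set
    only if its squared distance from the current subspace exceeds the
    threshold; otherwise it is already well represented and is rejected."""
    bv = [0]
    for i in range(1, Phi.shape[0]):
        if squared_distance_to_span(Phi[bv].T, Phi[i]) > threshold:
            bv.append(i)
    return bv

# Rows 2 and 3 lie in the span of rows 0 and 1, so only [0, 1] are kept.
Phi = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 0.0]])
bv = select_basis(Phi)
```

In the kernel setting the same squared distance is computed from kernel evaluations alone, so the feature map never has to be formed explicitly.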

12.
The recursive nature of the recursive least squares support vector machine makes the system of partial differential equations arising in modeling difficult to solve. Solving this system analytically is proposed, yielding a complete recursive least squares support vector machine model. The correlations among the parameters are analyzed first; analytical expressions for the partial differential equations are then derived and solved. Simulation examples show that, for dynamic system modeling, this model is more accurate and performs better than the commonly used series-parallel model and the existing incomplete recursive least squares support vector machine model.

13.
Combining a reduction technique with an iterative strategy, we propose a recursive reduced least squares support vector regression. The proposed algorithm chooses the data that contribute most to the target function as support vectors, while considering all the constraints generated by the whole training set. It thus acquires fewer support vectors, whose number can be predefined arbitrarily, to construct a model with similar generalization performance. In comparison with other methods, our algorithm also achieves excellent parsimony. Numerical experiments on benchmark data sets confirm the validity and feasibility of the presented algorithm, which can also be extended to classification.

14.
A Predictive Control Algorithm Based on Least Squares Support Vector Machines. Total citations: 24 (self: 0, others: 24)
刘斌, 苏宏业, 褚健. Control and Decision (《控制与决策》), 2004, 19(12): 1399-1402
For the nonlinear plants common in industrial processes, a predictive control algorithm based on least squares support vector machine modeling is proposed. First, an LS-SVM with an RBF kernel builds a nonlinear model of the plant offline; then, while the system runs, the offline model is linearized about the current sampling point in each sampling period, and generalized predictive control is applied to the controlled system. Simulation results demonstrate the effectiveness and superiority of the algorithm.

15.
A predictive control algorithm based on a least squares support vector machine (LS-SVM) model is presented for a class of complex systems with strong nonlinearity. The nonlinear offline model of the controlled plant is built by an LS-SVM with a radial basis function (RBF) kernel. While the system runs, the offline model is linearized at each sampling instant, and the generalized predictive control (GPC) algorithm implements predictive control of the controlled plant. The resulting algorithm is applied to a boiler temperature control system with complicated nonlinearity and a large time delay; the experimental results verify its effectiveness and merit.

16.
A Fast Sparse Least Squares Support Vector Regression Machine. Total citations: 4 (self: 0, others: 4)
赵永平, 孙健国. Control and Decision (《控制与决策》), 2008, 23(12): 1347-1352
Directly applying Jiao's method to the least squares support vector regression machine gives unsatisfactory results, so an improved Jiao method adopting an incomplete-discarding strategy is proposed and applied to least squares support vector regression. Tests on data sets show that the sparse least squares support vector regression machine based on the improved Jiao method has an advantage in both the number of support vectors and the training time. Compared with other pruning algorithms, the improved Jiao method greatly shortens training time without losing regression accuracy. Moreover, the improved method is equally applicable to classification problems.

17.
刘渊, 王鹏. Application Research of Computers (《计算机应用研究》), 2009, 26(6): 2229-2231
To improve the accuracy of network traffic prediction, a method combining the wavelet transform with a Bayesian LS-SVM is studied. The original traffic time series is first decomposed by wavelets, and the approximation component and each detail component are individually reconstructed back to the original level. Each reconstructed series is predicted with a least squares support vector machine, with the Bayesian evidence framework applied to select the LS-SVM model parameters; the individual predictions are then recombined to give the prediction of the original series. Comparative experiments show that the model not only runs fast but also achieves high prediction accuracy.

18.
A least squares support vector fuzzy regression model (LS-SVFR) is proposed to estimate uncertain and imprecise data by applying the fuzzy set principle to the weight vectors. The model requires only a set of linear equations to obtain the weight vector and the bias term, unlike the complicated quadratic programming problem solved in existing support vector fuzzy regression models. Moreover, the proposed LS-SVFR is a model-free method: the underlying model function does not need to be predefined. Numerical examples and a fault detection application demonstrate the effectiveness and applicability of the proposed model.

19.
In this paper we discuss sparse least squares support vector machines (sparse LS SVMs) trained in the empirical feature space, which is spanned by the mapped training data. First, we show that the kernel associated with the empirical feature space gives the same value as the kernel associated with the feature space if one of the arguments of the kernels is mapped into the empirical feature space by the mapping function associated with the feature space. Using this fact, we show that training and testing of kernel-based methods can be done in the empirical feature space, and that training LS SVMs there reduces to solving a set of linear equations. We then derive sparse LS SVMs by restricting training to the linearly independent training data in the empirical feature space, selected by Cholesky factorization. Support vectors correspond to the selected training data and do not change even if the value of the margin parameter is changed; thus, for linear kernels, the number of support vectors is at most the number of input variables. Computer experiments show that the number of support vectors can be reduced without deteriorating the generalization ability.
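The Cholesky-based selection of linearly independent training data described above can be sketched as follows. This is a minimal incremental-Cholesky illustration on a plain kernel matrix, with an assumed tolerance, rather than the paper's full empirical-feature-space formulation.

```python
import numpy as np

def select_independent(K, tol=1e-8):
    """Incremental Cholesky factorization of the kernel matrix: sample i is
    kept only if its residual diagonal d = K[i,i] - z^T z exceeds tol,
    i.e. it is linearly independent of the already-kept samples in the
    empirical feature space."""
    selected = []
    L = np.zeros((0, 0))
    for i in range(K.shape[0]):
        if not selected:
            if K[i, i] > tol:
                selected.append(i)
                L = np.array([[np.sqrt(K[i, i])]])
            continue
        z = np.linalg.solve(L, K[selected, i])   # forward substitution step
        d = K[i, i] - z @ z                      # residual diagonal element
        if d > tol:
            m = len(selected)
            L_new = np.zeros((m + 1, m + 1))
            L_new[:m, :m] = L
            L_new[m, :m] = z
            L_new[m, m] = np.sqrt(d)
            L = L_new
            selected.append(i)
    return selected

# Linear kernel of rank-2 data: rows 2 and 3 depend on rows 0 and 1.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 2.0]])
kept = select_independent(X @ X.T)
```

For a linear kernel the number of kept samples is bounded by the input dimension, matching the support-vector bound stated in the abstract.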

Shigeo Abe received the B.S. degree in Electronics Engineering, the M.S. degree in Electrical Engineering, and the Dr. Eng. degree, all from Kyoto University, Kyoto, Japan, in 1970, 1972, and 1984, respectively. After 25 years in industry, he was appointed full professor of Electrical Engineering at Kobe University in April 1997. He is now a professor in the Graduate School of Science and Technology, Kobe University. His research interests include pattern classification and function approximation using neural networks, fuzzy systems, and support vector machines. He is the author of Neural Networks and Fuzzy Systems (Kluwer, 1996), Pattern Classification (Springer, 2001), and Support Vector Machines for Pattern Classification (Springer, 2005). Dr. Abe was awarded an outstanding paper prize from the Institute of Electrical Engineers of Japan in 1984 and 1995. He is a member of IEEE, INNS, and several Japanese societies.
