1.
2.
Research on online sparse least squares support vector machine regression
Existing least squares support vector machine (LS-SVM) regression requires considerable time for training and for computing the model output, which makes it unsuitable for online real-time training. To address this, an online sparse LS-SVM regression method is proposed whose training algorithm maintains a sample dictionary, reducing the computation over the training samples. Samples are added sequentially, which suits online acquisition, and the algorithm is provably convergent. Simulation results show that the algorithm achieves good sparsity and real-time performance and can be applied further to modeling and real-time control.
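The abstract leaves the model implicit; for reference, the batch LS-SVM regression solve that such online variants build on, plus a simple dictionary-admission (novelty) test, can be sketched as follows. This is a generic illustration, not the authors' algorithm; the RBF width `sigma`, regularization `gamma`, and admission threshold `nu` are illustrative assumptions.

```python
import numpy as np

def rbf(X, Z, sigma=1.0):
    """RBF kernel matrix between row-sample arrays X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Solve the standard LS-SVM regression system
       [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]            # bias b, support values alpha

def predict(Xdict, b, alpha, Xnew, sigma=1.0):
    return rbf(Xnew, Xdict, sigma) @ alpha + b

def novel(Xdict, x, sigma=1.0, nu=1e-2):
    """Admit x to the dictionary only if its kernel least-squares
       residual w.r.t. the current dictionary exceeds nu."""
    K = rbf(Xdict, Xdict, sigma)
    k = rbf(Xdict, x[None, :], sigma).ravel()
    a = np.linalg.solve(K + 1e-8 * np.eye(len(K)), k)
    return rbf(x[None, :], x[None, :], sigma)[0, 0] - k @ a > nu
```

A streaming loop would call `novel` on each incoming sample, append the sample to the dictionary when the test fires, and refit on the dictionary only; refitting on the small dictionary rather than the full history is what keeps the online cost bounded.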
3.
4.
A least squares twin support vector regression machine is proposed. Built on the twin support vector regression machine, it breaks with the standard support vector regression idea of constructing the ε-band from two parallel hyperplanes: the ε-band is instead constructed from two hyperplanes that need not be parallel, each determining one half of the ε-band, and the final regression function is obtained from the two. This makes the regression function follow the actual distribution of the data more closely and gives the algorithm better generalization ability. Moreover, the least squares twin support vector machine obtains the final regression function by solving only two small linear systems of equations, so its computational complexity is comparatively low. Numerical experiments also show that the algorithm has advantages in generalization ability and computational efficiency.
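As a rough illustration of the two-nonparallel-hyperplane idea, the sketch below fits, with a linear kernel, a down-bound function on y − ε₁ and an up-bound function on y + ε₂ by two small regularized least squares solves, and averages them into the final regressor. This is a simplified reading of the abstract, not the paper's exact formulation; `C` and the two ε values are illustrative.

```python
import numpy as np

def lstsvr_fit(X, y, eps1=0.1, eps2=0.1, C=10.0):
    """Least squares twin SVR, linear-kernel sketch: two regularized
       least squares solves give the two non-parallel bound functions."""
    G = np.hstack([X, np.ones((len(X), 1))])             # augment with bias
    R = np.eye(G.shape[1]) / C                           # ridge term ~ 1/C
    u1 = np.linalg.solve(G.T @ G + R, G.T @ (y - eps1))  # down-bound plane
    u2 = np.linalg.solve(G.T @ G + R, G.T @ (y + eps2))  # up-bound plane
    return u1, u2

def lstsvr_predict(X, u1, u2):
    G = np.hstack([X, np.ones((len(X), 1))])
    return 0.5 * (G @ u1 + G @ u2)   # final regressor: mean of the two bounds
```

Each solve here is only (d+1)×(d+1) for d input features, which is where the low computational cost claimed in the abstract comes from.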
5.
Incremental and online learning algorithms for least squares support vector machine regression
The mathematical model of LS-SVM regression is first given and its properties are analyzed; on this basis, an incremental learning algorithm and an online learning algorithm are designed using the block matrix inversion formula and the structure of the kernel matrix. The algorithms make full use of earlier training results, reducing storage and computation time. Simulation experiments demonstrate the effectiveness of both learning methods.
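The incremental step alluded to is typically an application of the block matrix inversion (Schur complement) identity: when one sample is added, the bordered system matrix grows by one row and column, and the new inverse follows from the old one in O(n²) rather than O(n³). A minimal sketch of that update (my reading, not necessarily the paper's exact recursion):

```python
import numpy as np

def grow_inverse(Ainv, b, d):
    """Given Ainv = A^{-1}, return the inverse of [[A, b], [b^T, d]]
       via the Schur complement s = d - b^T A^{-1} b."""
    Ab = Ainv @ b
    s = d - b @ Ab                       # scalar Schur complement
    n = Ainv.shape[0]
    M = np.empty((n + 1, n + 1))
    M[:n, :n] = Ainv + np.outer(Ab, Ab) / s
    M[:n, n] = -Ab / s
    M[n, :n] = -Ab / s
    M[n, n] = 1.0 / s
    return M

# quick self-check on a random SPD matrix
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
M0 = B @ B.T + 5 * np.eye(5)
A, b, d = M0[:4, :4], M0[:4, 4], M0[4, 4]
assert np.allclose(grow_inverse(np.linalg.inv(A), b, d), np.linalg.inv(M0))
```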
6.
Least squares support vector machine regression with kernel partial least squares feature extraction
Kernel partial least squares (KPLS) is proposed for feature extraction: the inputs are first mapped into a high-dimensional feature space, score vectors are computed there to reduce the dimensionality of the samples, and LS-SVM regression is then applied. Experiments show that this approach outperforms regression without feature extraction, and that KPLS extracts features more effectively than linear PLS.
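A compact way to reproduce the described pipeline: extract a few KPLS score vectors by NIPALS-style deflation of the centered kernel matrix, then regress on the scores. The sketch below follows the widely used Rosipal–Trejo KPLS recursion for a univariate response and, for brevity, uses a plain regularized least squares fit on the scores in place of the paper's LS-SVM stage; the component count and `lam` are illustrative.

```python
import numpy as np

def center_kernel(K):
    """Center the kernel matrix in feature space."""
    n = len(K)
    J = np.eye(n) - np.ones((n, n)) / n
    return J @ K @ J

def kpls_scores(K, y, n_comp=3):
    """KPLS score extraction (Rosipal-Trejo recursion, univariate y):
       t = K r normalized, then deflate K and the residual response."""
    Kc, r = K.astype(float).copy(), y.astype(float).copy()
    n = len(y)
    T = np.zeros((n, n_comp))
    for i in range(n_comp):
        t = Kc @ r
        t /= np.linalg.norm(t)
        T[:, i] = t
        P = np.eye(n) - np.outer(t, t)   # project out the extracted direction
        Kc = P @ Kc @ P
        r = r - t * (t @ r)
    return T

def fit_scores(T, y, lam=1e-3):
    """Regularized least squares on the extracted scores (stand-in for
       the LS-SVM regression stage of the paper)."""
    return np.linalg.solve(T.T @ T + lam * np.eye(T.shape[1]), T.T @ y)
```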
7.
《计算机应用与软件》 (Computer Applications and Software), 2018(4)
The fuzzy least squares twin support vector machine model fuses a fuzzy membership function with the least squares twin support vector machine to cope with outlier noise in the training set and low computational efficiency. Following the structural risk minimization principle of statistical learning theory, the regression model is improved with L_2-norm regularization; to improve training efficiency on large-scale data sets, an L_1-norm regularized variant of the original model is also developed. Exploiting incremental learning, samples are incrementally selected and accumulated during training to speed it up. The superiority of the improved algorithms is verified on UCI data sets.
8.
To address the difficulty of measuring alloy composition during submerged arc smelting of silicomanganese alloy, where offline assays lag badly and real-time control is hard, an alloy composition prediction model based on an improved online least squares support vector machine (IOLSSVM) is proposed. The model learns incrementally from each new sample, deletes from the sample set the sample that contributes least to the model, and uses recursive computation to strengthen its online learning capability. The model was applied to alloy composition prediction for a 30 MVA silicomanganese submerged arc furnace; actual production data demonstrate the effectiveness of the method.
9.
When traditional online least squares support vector machines are applied to regression for time-varying plants, the tracking accuracy of the model is limited and the support vectors are not sparse enough. Combining an iterative strategy with a reduction technique, an online adaptive iterative reduced least squares support vector machine is proposed. The method considers the constraints that a new sample, together with the historical data, imposes on the current model, and selects the sample contributing most to the objective function as the new support vector; this sparsifies the support vectors and improves online prediction accuracy and speed. Comparative simulations show that the method is feasible and effective, with higher regression accuracy than traditional methods and the fewest support vectors.
10.
A fast least squares support vector machine classification algorithm
The least squares support vector machine obtains the optimal separating surface by solving a set of linear equations instead of a convex quadratic program; however, it loses the sparsity of the solution, and when the number of training samples is large the computational cost becomes very high. A fast LS-SVM algorithm is proposed that increases speed while preserving generalization ability; the speed advantage is especially clear for large training sets. The new algorithm selects the samples with larger support values as training samples, reducing the number of training samples and speeding up the algorithm; an approximate optimal solution is then obtained with the LS-SVM algorithm. Experimental results show that the new algorithm indeed trains faster.
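The selection rule described amounts to: solve once for the support values, keep only the samples with the largest |α|, and re-solve on that reduced set. A minimal self-contained sketch; it uses the regression-form LS-SVM system on ±1 labels for brevity, and the kept fraction `keep` is an illustrative parameter, not the paper's setting.

```python
import numpy as np

def rbf(X, Z, sigma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_solve(X, y, gamma=10.0, sigma=1.0):
    """LS-SVM on +/-1 labels via the regression-form linear system."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.r_[0.0, y.astype(float)])
    return sol[0], sol[1:]

def fast_lssvm(X, y, keep=0.2, gamma=10.0, sigma=1.0):
    """Keep only the samples with the largest |alpha|, then re-solve."""
    b, alpha = lssvm_solve(X, y, gamma, sigma)
    m = max(1, int(keep * len(y)))
    idx = np.argsort(-np.abs(alpha))[:m]     # largest support values
    b, alpha = lssvm_solve(X[idx], y[idx], gamma, sigma)
    return idx, b, alpha   # predict: sign(rbf(Xnew, X[idx]) @ alpha + b)
```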
11.
Combining a reduction technique with an iterative strategy, we propose a recursive reduced least squares support vector regression. The proposed algorithm chooses the data that contribute most to the target function as support vectors, while considering all the constraints generated by the whole training set. It thus needs fewer support vectors, whose number can be predefined arbitrarily, to construct a model with similar generalization performance. In comparison with other methods, our algorithm also attains excellent parsimony. Numerical experiments on benchmark data sets confirm the validity and feasibility of the presented algorithm. In addition, the algorithm can be extended to classification.
12.
The recursive nature of the recursive least squares support vector machine tends to make the system of partial differential equations arising in modeling difficult to solve. To address this, an analytic method for solving the system is proposed, realizing a complete recursive LS-SVM model. The correlations among the parameters are analyzed first; analytic expressions for the partial differential equations are then derived and solved. A simulation example shows that, in dynamic system modeling, this model achieves higher accuracy and better performance than the commonly used series-parallel model and the existing incomplete recursive LS-SVM model.
13.
A predictive control algorithm based on a least squares support vector machine (LS-SVM) model is presented for a class of complex systems with strong nonlinearity. The nonlinear off-line model of the controlled plant is built by an LS-SVM with a radial basis function (RBF) kernel. While the system runs, the off-line model is linearized at each sampling instant, and generalized predictive control (GPC) is employed to implement predictive control of the controlled plant. The algorithm is applied to a boiler temperature control system with complicated nonlinearity and large time delay, and the experimental results verify its effectiveness and merit.
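For an RBF-kernel LS-SVM model f(x) = Σᵢ αᵢ k(x, xᵢ) + b, the per-instant linearization has a closed form, since ∂k(x, xᵢ)/∂x = −(x − xᵢ)/σ² · k(x, xᵢ). The sketch below shows only that step (the GPC layer itself is omitted); the resulting local model y ≈ f(x₀) + gᵀ(x − x₀) is what gets handed to the predictive controller.

```python
import numpy as np

def rbf_vec(x, Xsv, sigma=1.0):
    """Kernel values k(x, x_i) of one point x against support data Xsv."""
    return np.exp(-((x - Xsv) ** 2).sum(1) / (2 * sigma ** 2))

def linearize(x0, Xsv, alpha, b, sigma=1.0):
    """First-order model of the LS-SVM at x0: returns f(x0) and gradient g."""
    k = rbf_vec(x0, Xsv, sigma)
    f0 = k @ alpha + b
    # d k(x, x_i) / d x = -(x - x_i) / sigma^2 * k(x, x_i)
    g = -((x0 - Xsv) / sigma ** 2 * k[:, None]).T @ alpha
    return f0, g
```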
14.
Shigeo Abe, Pattern Analysis & Applications, 2007, 10(3): 203-214
In this paper we discuss sparse least squares support vector machines (sparse LS SVMs) trained in the empirical feature space, which is spanned by the mapped training data. First, we show that the kernel associated with the empirical feature space gives the same value as the kernel associated with the feature space when one of the kernel arguments is mapped into the empirical feature space by the mapping function associated with the feature space. Using this fact, we show that training and testing of kernel-based methods can be done in the empirical feature space, and that training LS SVMs in the empirical feature space amounts to solving a set of linear equations. We then derive sparse LS SVMs by restricting the model to the linearly independent training data in the empirical feature space, found by Cholesky factorization. Support vectors correspond to the selected training data, and they do not change even if the value of the margin parameter changes. Thus, for linear kernels, the number of support vectors is at most the number of input variables. Computer experiments show that the number of support vectors can be reduced without deteriorating the generalization ability.
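The Cholesky-based selection can be imitated with a standard pivoted incomplete Cholesky decomposition of the kernel matrix: greedily pick the column with the largest residual diagonal until the residual drops below a tolerance; the picked indices are the numerically linearly independent training points that become the support vectors. A generic sketch, not necessarily the paper's exact procedure:

```python
import numpy as np

def pivoted_ichol(K, tol=1e-6):
    """Pivoted incomplete Cholesky of a PSD kernel matrix K.
       Returns indices of the selected (linearly independent) columns."""
    n = K.shape[0]
    d = K.diagonal().astype(float).copy()   # residual diagonal of K - L L^T
    L = np.zeros((n, n))
    piv = []
    for j in range(n):
        i = int(np.argmax(d))
        if d[i] <= tol:                      # remaining columns are dependent
            break
        piv.append(i)
        L[:, j] = (K[:, i] - L[:, :j] @ L[i, :j]) / np.sqrt(d[i])
        d -= L[:, j] ** 2
    return piv
```

Restricting the LS-SVM linear system to the indices in `piv` then yields the sparse model; since the selection depends only on K, the chosen support vectors indeed do not change with the margin parameter, as the abstract notes.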
Shigeo Abe received the B.S. degree in Electronics Engineering, the M.S. degree in Electrical Engineering, and the Dr. Eng. degree, all from Kyoto University, Kyoto, Japan in 1970, 1972, and 1984, respectively. After 25 years in the industry, he was appointed as full professor of Electrical Engineering, Kobe University in April 1997. He is now a professor of Graduate School of Science and Technology, Kobe University. His research interests include pattern classification and function approximation using neural networks, fuzzy systems, and support vector machines. He is the author of Neural Networks and Fuzzy Systems (Kluwer, 1996), Pattern Classification (Springer, 2001), and Support Vector Machines for Pattern Classification (Springer, 2005). Dr. Abe was awarded an outstanding paper prize from the Institute of Electrical Engineers of Japan in 1984 and 1995. He is a member of IEEE, INNS, and several Japanese Societies.
15.
Least squares support vector machine (LS-SVM) is a successful method for classification and regression problems, in which the margin and the sum of squared errors (SSE) on the training samples are minimized simultaneously. However, LS-SVM only considers the errors on the response variable. In this paper, a novel normal least squares support vector machine (NLS-SVM) is proposed which effectively accounts for noise on both the input and response variables. A two-stage learning method is introduced to solve NLS-SVM. More importantly, a fast iterative updating algorithm is presented that reaches the NLS-SVM solution with lower computational complexity than directly adopting the two-stage learning method. Experiments on artificial and real-world datasets show that NLS-SVM outperforms LS-SVM.
16.
Glauber Souto dos Santos, Expert Systems with Applications, 2012, 39(5): 4805-4812
In the past decade, support vector machines (SVMs) have gained the attention of many researchers. SVMs are non-parametric supervised learning schemes that rely on statistical learning theory, which enables learning machines to generalize well to unseen data. SVMs are kernel-based methods that were introduced as a robust approach to classification and regression problems and have lately also been applied to nonlinear identification, through so-called support vector regression. In SVM designs for nonlinear identification, a nonlinear model is represented by an expansion in terms of nonlinear mappings of the model input. The nonlinear mappings define a feature space, which may have infinite dimension. In this context, a relevant identification approach is least squares support vector machines (LS-SVMs). Compared to other identification methods, LS-SVMs possess prominent advantages: their generalization performance (i.e. error rates on test sets) either matches or is significantly better than that of competing methods, and, more importantly, the performance does not depend on the dimensionality of the input data. Formulated as a constrained quadratic programming problem with a regularized cost function, training an LS-SVM involves selecting the kernel parameters and the regularization parameter of the objective function; a good choice of these parameters is crucial for the performance of the estimator. In this paper, the proposed LS-SVM design combines LS-SVM with a new chaotic differential evolution optimization approach based on the Ikeda map (CDEK). CDEK is adopted to tune the regularization parameter and the radial basis function bandwidth. Simulations using LS-SVMs on NARX (Nonlinear AutoRegressive with eXogenous inputs) models for the identification of a thermal process show the effectiveness and practicality of the proposed CDEK algorithm compared with the classical DE approach.
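The abstract pairs standard differential evolution with a chaotic sequence from the Ikeda map. A compact sketch of that idea, tuning two hyperparameters (e.g. log₁₀γ and log₁₀σ) against a user-supplied validation loss; the Ikeda parameter u = 0.9, the population settings, and the way the chaotic stream scales the mutation factor are illustrative assumptions, and the paper's exact CDEK update rule may differ.

```python
import numpy as np

def ikeda_stream(u=0.9, x=0.1, y=0.1):
    """Chaotic sequence in [0, 1) generated by the Ikeda map."""
    while True:
        t = 0.4 - 6.0 / (1.0 + x * x + y * y)
        x, y = (1 + u * (x * np.cos(t) - y * np.sin(t)),
                u * (x * np.sin(t) + y * np.cos(t)))
        yield abs(x) % 1.0

def cdek(loss, bounds, pop=20, iters=50, cr=0.9, seed=0):
    """DE/rand/1/bin over the hyperparameters; the mutation factor F
       is drawn from the Ikeda-map chaotic stream at every step."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    P = lo + rng.random((pop, len(lo))) * (hi - lo)
    f = np.array([loss(*p) for p in P])
    chaos = ikeda_stream()
    for _ in range(iters):
        for i in range(pop):
            a, b, c = rng.choice([j for j in range(pop) if j != i],
                                 3, replace=False)
            F = 0.4 + 0.6 * next(chaos)          # chaotic mutation factor
            v = np.clip(P[a] + F * (P[b] - P[c]), lo, hi)
            mask = rng.random(len(lo)) < cr
            mask[rng.integers(len(lo))] = True   # ensure one gene crosses
            trial = np.where(mask, v, P[i])
            ft = loss(*trial)
            if ft <= f[i]:
                P[i], f[i] = trial, ft
    return P[np.argmin(f)], f.min()
```

Here `loss` would be, for instance, a cross-validated MSE of an LS-SVM fitted with the candidate (γ, σ); only the source of F distinguishes this from classical DE.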
17.
18.
Regularized least squares support vector regression for the simultaneous learning of a function and its derivatives
Jayadeva, Information Sciences, 2008, 178(17): 3402-3414
In this paper, we propose a regularized least squares approach based support vector machine for simultaneously approximating a function and its derivatives. The proposed algorithm is simple and fast, as no quadratic programming solver needs to be employed; effectively, only the solution of a structured system of linear equations is needed.
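A hedged reconstruction of the kind of problem the abstract refers to (notation mine, consistent with standard LS-SVR, not necessarily the paper's exact formulation): with f(x) = wᵀφ(x) + b and observed derivative values y′ᵢ, add squared losses on both the function values and the derivatives,

```latex
\min_{w,b}\;\frac{1}{2}\lVert w\rVert^{2}
  + \frac{C_{0}}{2}\sum_{i=1}^{n}\bigl(y_{i}-w^{\top}\phi(x_{i})-b\bigr)^{2}
  + \frac{C_{1}}{2}\sum_{i=1}^{n}\bigl\lVert y'_{i}
      -\nabla_{x}\bigl(w^{\top}\phi(x)\bigr)\big|_{x=x_{i}}\bigr\rVert^{2}.
```

Setting the gradient of the objective to zero eliminates w and leaves a single structured linear system whose blocks are the kernel values k(xᵢ, xⱼ), its first derivatives ∂k/∂x, and the cross-derivatives ∂²k/∂x∂x′, which is why no quadratic programming solver is needed.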
19.
To address the difficulty of directly measuring the output of the double-inlet double-outlet ball mill direct-fired pulverizing system in thermal power plants, this paper builds on Suykens' sparsification algorithm for the least squares support vector machine (LS-SVM) and proposes an improved version; the improved LS-SVM algorithm is used to build a soft-sensing model of the pulverizing system output. Comparative simulations of the models before and after the improvement show that the improved LS-SVM algorithm converges faster during learning, has smaller error, and is better suited to online learning, laying a good foundation for online optimal control of the pulverizing system.
20.
Online measurement of the average particle size is typically unavailable in the industrial cobalt oxalate synthesis process, so soft-sensor prediction of this important quality variable is required. Cobalt oxalate synthesis is a complex, multivariable, highly nonlinear process. In this paper, an effective soft sensor based on least squares support vector regression (LSSVR) with dual updating is developed for predicting the average particle size. In this soft-sensor model, moving-window LSSVR (MWLSSVR) updating and updating of the model output offset are activated based on model performance assessment. The feasibility and efficiency of the proposed soft sensor are demonstrated through application to an industrial cobalt oxalate synthesis process.
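The dual-updating logic reads as a supervisory loop: keep a fixed-length window model, monitor recent prediction error, refit the LSSVR on the window when the error is large, and otherwise only correct a constant output offset from recent residuals. A schematic sketch of that loop; the LSSVR itself is abstracted behind `fit`/`predict` callables, and the assessment rule (mean absolute error over the last 20 samples vs. `thresh`) and the offset gain `eta` are illustrative assumptions, since the abstract does not spell out its performance-assessment criterion.

```python
from collections import deque
import numpy as np

def dual_update_soft_sensor(stream, fit, predict,
                            win=100, thresh=0.5, eta=0.1):
    """stream yields (x, y) pairs; fit(X, Y) -> model; predict(model, x) -> yhat.
       Dual updating: refit the moving-window model when recent error is
       large, otherwise slowly correct the output offset."""
    X, Y = deque(maxlen=win), deque(maxlen=win)
    errs = deque(maxlen=20)
    model, offset = None, 0.0
    for x, y in stream:
        if model is not None:
            e = y - (predict(model, x) + offset)
            errs.append(e)
            offset += eta * e                       # output-offset updating
        X.append(x)
        Y.append(y)
        if model is None or (len(errs) == errs.maxlen
                             and np.mean(np.abs(errs)) > thresh):
            model = fit(np.array(X), np.array(Y))   # moving-window refit
            offset = 0.0
            errs.clear()
        yield model, offset
```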