Similar Documents
20 similar documents retrieved (search time: 62 ms)
1.
Real-time prediction of furnace temperature is important for blast furnace operation. In ironmaking, hot metal silicon content is the usual proxy for the thermal state of the blast furnace. To address the limited efficiency and accuracy of existing silicon-content prediction, a method combining principal component analysis (PCA) with a particle-swarm-optimized extreme learning machine (ELM) is proposed for predicting blast furnace hot metal silicon content. Because many mutually interacting factors influence silicon content, PCA is first used to reduce the dimensionality of the input variables. Particle swarm optimization (PSO) is then used to optimize the ELM's weights and thresholds, with root mean square error as the fitness function of the prediction model. The extracted principal components serve as model inputs and hot metal silicon content as the output. A comparison between the plain ELM and the PSO-improved ELM shows that the improved model predicts silicon content more accurately and can serve as a reference for blast furnace operation.
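The entry above tunes an ELM's weights and thresholds with particle swarm optimization, using RMSE as the fitness. A minimal stdlib-only sketch of that PSO loop, with a toy quadratic standing in for the ELM's RMSE (all names and constants are illustrative, not taken from the paper):

```python
import random

def pso(fitness, dim, n_particles=20, iters=100, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5):
    """Minimise `fitness` over a dim-dimensional box with basic PSO."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

random.seed(0)
# Toy fitness standing in for the ELM's validation RMSE: minimum 0 at (1, 2).
best, best_f = pso(lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2, dim=2)
```

In the paper's setting, `fitness` would decode a particle into ELM input weights and hidden thresholds, solve the output layer, and return the validation RMSE.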

2.
A chaotic particle swarm support vector machine method for predicting hot metal silicon content   Total citations: 5 (self-citations: 1, others: 5)
A support vector regression (SVR) parameter optimization algorithm based on chaotic particle swarm optimization (CPSO) is proposed and used to build a hot metal silicon content prediction model (CPSO-SVR). Applied to silicon content data collected from a blast furnace at a large steel plant, the SVR model built with CPSO-tuned parameters predicts well. Compared with least squares SVR (LS-SVR) and a neural network trained by particle swarm optimization (PSO-NN), the CPSO-SVR model keeps the absolute prediction error below 0.03 for more than 90% of the test samples, clearly outperforming PSO-NN and proving more stable than LS-SVR. The model is therefore usable for practical hot metal silicon content prediction, and chaotic particle swarm optimization is shown to be an effective way to select SVR parameters.
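What distinguishes CPSO from plain PSO is the use of a chaotic sequence, commonly the logistic map, in place of uniform random draws when generating candidate solutions. A sketch of that ingredient, mapping a logistic-map sequence onto SVR's (C, gamma) search ranges (the ranges and the seed value are illustrative assumptions):

```python
def logistic_map(x0, n, mu=4.0):
    """Generate n values of the logistic map x_{k+1} = mu * x_k * (1 - x_k)."""
    seq, x = [], x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        seq.append(x)
    return seq

def chaotic_candidates(n, ranges, x0=0.345):
    """Map a chaotic sequence in (0, 1) onto each parameter's search range."""
    dims = len(ranges)
    seq = logistic_map(x0, n * dims)
    cands = []
    for i in range(n):
        point = []
        for d, (lo, hi) in enumerate(ranges):
            point.append(lo + (hi - lo) * seq[i * dims + d])
        cands.append(point)
    return cands

# Candidate (C, gamma) pairs for an SVR parameter search (ranges illustrative).
cands = chaotic_candidates(5, [(0.1, 100.0), (0.001, 1.0)])
```

Each candidate would then be scored by cross-validated SVR error, with the chaotic sequence also usable to perturb stagnating particles.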

3.
Because hot metal silicon content cannot be measured online directly, this paper proposes a prediction method based on an optimized extreme learning machine (ELM). The composite differential evolution algorithm (CoDE), with its ability to locate the global optimum quickly, is used to optimize the ELM's input weights and hidden-node thresholds, yielding a CoDE-ELM prediction model for blast furnace hot metal silicon content. Tested on data collected from a 2650 m³ blast furnace at a steel plant, the model achieves a hit rate of 89% at an absolute error tolerance of 0.1, a root mean square error of 0.047, and a correlation coefficient of 0.851 between the actual and predicted series. Its predictions outperform those of a support vector machine (SVM), a feed-forward neural network (BP-NN), a plain ELM, and a differential-evolution-optimized ELM (DE-ELM), offering useful guidance for the practical regulation of blast furnace temperature.
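CoDE combines several differential-evolution trial-vector strategies; the sketch below implements only the classic DE/rand/1/bin building block that CoDE layers together, minimizing a toy sphere function in place of the ELM training error (all constants are illustrative):

```python
import random

def de(fitness, dim, np_=20, iters=100, f=0.5, cr=0.9, lo=-5.0, hi=5.0):
    """Classic DE/rand/1/bin; CoDE runs several such strategies in parallel."""
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(np_)]
    fit = [fitness(x) for x in pop]
    for _ in range(iters):
        for i in range(np_):
            a, b, c = random.sample([j for j in range(np_) if j != i], 3)
            jrand = random.randrange(dim)          # at least one mutated gene
            trial = pop[i][:]
            for d in range(dim):
                if random.random() < cr or d == jrand:
                    v = pop[a][d] + f * (pop[b][d] - pop[c][d])
                    trial[d] = min(hi, max(lo, v))
            tf = fitness(trial)
            if tf <= fit[i]:                       # greedy selection
                pop[i], fit[i] = trial, tf
    best = min(range(np_), key=lambda i: fit[i])
    return pop[best], fit[best]

random.seed(1)
best, best_f = de(lambda x: sum(v * v for v in x), dim=3)
```

In the paper's setting the decision vector would encode ELM input weights and thresholds, and the fitness would be the prediction error on validation data.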

4.
To address the low prediction accuracy caused by the fluctuating, non-stationary nature of wind energy, a short-term wind power prediction model is proposed that combines variational mode decomposition (VMD) with a kernel extreme learning machine (KELM) optimized by an improved grey wolf optimizer (DIGWO). The power signal is decomposed into several mode components of different bandwidths, and a KELM prediction model is built for each component. To strengthen the KELM's search capability, the improved grey wolf optimizer tunes its parameters; the component predictions are then summed to give the final wind power forecast. Simulations on real wind power data show an RMSE of 1.5% and an MAE of 1.16%, improving prediction accuracy over the comparison models.
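The pipeline above follows a decompose-predict-aggregate pattern: split the signal into components, forecast each, and sum the forecasts. A stdlib-only sketch of that pattern, with a crude two-component split (moving-average trend plus residual) standing in for VMD and a naive persistence forecaster standing in for the tuned KELMs (all names are illustrative):

```python
def moving_average(x, k=5):
    """Crude low-frequency component standing in for one VMD mode."""
    half = k // 2
    out = []
    for i in range(len(x)):
        window = x[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def decompose(x):
    """Two-component split: slow trend + fast residual (sums back to x)."""
    trend = moving_average(x)
    residual = [v - t for v, t in zip(x, trend)]
    return [trend, residual]

def persistence_forecast(component):
    """Naive per-component forecaster: predict the last observed value."""
    return component[-1]

signal = [10.0, 12.0, 11.0, 13.0, 15.0, 14.0, 16.0, 18.0]
components = decompose(signal)
# Aggregate step: the final forecast is the sum of the component forecasts.
forecast = sum(persistence_forecast(c) for c in components)
```

The point of the decomposition is that each component is smoother than the raw signal, so each per-component model has an easier learning problem.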

5.
For high-dimensional complex function problems, a hybrid shuffled frog leaping–grey wolf optimization algorithm (SFL-GWO) is proposed. The algorithm initializes the GWO population with an improved logistic map to increase population diversity; a new nonlinear adjustment strategy for the distance control parameter then strengthens the population's exploration and exploitation; finally, a worst-position update borrowed from an improved shuffled frog leaping algorithm lets SFL-GWO escape local optima. Its performance is verified on 10 high-dimensional complex benchmark functions against three basic algorithms, particle swarm optimization (PSO), grey wolf optimization (GWO), and the whale optimization algorithm (WOA), as well as eight improved algorithms. Simulation results show that SFL-GWO improves both convergence accuracy and search speed, demonstrating its efficiency in solving high-dimensional complex functions.
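The SFL-GWO improvements sit on top of the basic grey wolf optimizer, in which each wolf moves toward positions dictated by the three best wolves (alpha, beta, delta) while an exploration factor a decays from 2 to 0. A stdlib-only sketch of that baseline GWO on a toy sphere function, without the paper's logistic-map initialization, nonlinear parameter strategy, or frog-leaping jump:

```python
import random

def gwo(fitness, dim, n_wolves=15, iters=100, lo=-5.0, hi=5.0):
    """Basic grey wolf optimizer: average the pulls of alpha/beta/delta."""
    wolves = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    for t in range(iters):
        ranked = sorted(wolves, key=fitness)
        alpha, beta, delta = ranked[0], ranked[1], ranked[2]
        a = 2.0 - 2.0 * t / iters            # exploration factor decays 2 -> 0
        for i, w in enumerate(wolves):
            new = []
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    r1, r2 = random.random(), random.random()
                    A = 2.0 * a * r1 - a
                    C = 2.0 * r2
                    x += leader[d] - A * abs(C * leader[d] - w[d])
                new.append(min(hi, max(lo, x / 3.0)))
            wolves[i] = new
    best = min(wolves, key=fitness)
    return best, fitness(best)

random.seed(2)
best, best_f = gwo(lambda x: sum(v * v for v in x), dim=5)
```

When |A| > 1 a wolf is pushed away from the leaders (exploration); as a decays, |A| shrinks and the pack converges (exploitation), which is exactly the balance the paper's nonlinear strategy retunes.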

6.
To counter the effect of imbalanced data on the accuracy of transformer fault diagnosis models, a diagnosis model is proposed that combines adaptive synthetic oversampling (ADAptive SYNthetic, ADASYN) with a kernel extreme learning machine optimized by an improved whale optimization algorithm. First, ADASYN balances the transformer fault data, removing the bias that between-class imbalance introduces into the model. Second, a multi-strategy combination improves the whale optimization algorithm's (improved whale optimization algorithm, IWOA) search speed, convergence, and ability to escape local extrema. Finally, the improved whale algorithm tunes the regularization coefficient and kernel parameter of the kernel extreme learning machine (kernel based extreme learning machine, KELM), yielding an IWOA-KELM fault diagnosis model. Applied to transformer fault diagnosis, the model improves diagnostic accuracy by 14.17%, 12.5%, and 8.34% over the PSO-KELM, GWO-KELM, and WOA-KELM models respectively, demonstrating higher accuracy and better generalization.
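ADASYN generates more synthetic minority samples near minority points that have many majority-class neighbours, i.e. near the decision boundary. A minimal stdlib-only sketch with Euclidean k-NN and linear interpolation (the uniform fallback when no minority point has majority neighbours is an added safeguard for the toy data, not part of the original method):

```python
import random

def adasyn(minority, majority, k=5, beta=1.0):
    """Minimal ADASYN: synthesise more points near 'hard' minority samples."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    data = [(p, 0) for p in minority] + [(p, 1) for p in majority]
    G = int((len(majority) - len(minority)) * beta)   # samples to synthesise
    ratios = []
    for p in minority:
        neigh = sorted(data, key=lambda q: dist2(p, q[0]))[1:k + 1]
        ratios.append(sum(lbl for _, lbl in neigh) / k)  # majority share of k-NN
    total = sum(ratios)
    if total == 0:                                     # safeguard: no hard points
        ratios, total = [1.0] * len(minority), float(len(minority))
    synthetic = []
    for p, r in zip(minority, ratios):
        g = round(G * r / total)                       # per-point quota
        m_neigh = sorted(minority, key=lambda q: dist2(p, q))[1:k + 1]
        for _ in range(g):
            q = random.choice(m_neigh)
            lam = random.random()
            synthetic.append([a + lam * (b - a) for a, b in zip(p, q)])
    return synthetic

random.seed(3)
minority = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(10)]
majority = [[random.gauss(3, 1), random.gauss(3, 1)] for _ in range(40)]
syn = adasyn(minority, majority)
```

The balanced set `minority + syn` plus `majority` would then feed the classifier, here the IWOA-tuned KELM.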

7.
任瑞琪, 李军. 《测控技术》, 2018, 37(6): 15-19
For electric load forecasting, an optimized kernel extreme learning machine (O-KELM) method is proposed. The KELM method represents the unknown nonlinear feature mapping of the hidden layer solely through a kernel function, requires no choice of hidden-node count, and computes the network's output weights by regularized least squares. Optimization algorithms are then applied to KELM: three variants based on the genetic algorithm, differential evolution, and simulated annealing select the kernel parameters and the regularization coefficient, further improving KELM's learning performance. To verify the approach, the O-KELM methods are applied to medium-term peak load forecasting for a region and compared under identical conditions with optimized extreme learning machine (O-ELM) methods, SVM, and others. Experimental results show that O-KELM predicts well, with GA-KELM achieving the highest modeling accuracy.
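A KELM needs no explicit hidden layer: with kernel matrix K over the training set, the output weights are alpha = (K + I/C)^-1 y and predictions are kernel expansions over the training points. A stdlib-only sketch with an RBF kernel and a small Gaussian-elimination solver (the toy data and hyperparameters are illustrative; in the paper, GA/DE/SA would tune C and the kernel parameter):

```python
import math

def rbf(a, b, gamma=1.0):
    """Gaussian RBF kernel."""
    return math.exp(-gamma * sum((x - y) ** 2 for x, y in zip(a, b)))

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

class KELM:
    """Kernel ELM: alpha = (K + I/C)^-1 y, f(x) = sum_j alpha_j k(x, x_j)."""
    def __init__(self, C=100.0, gamma=1.0):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        n = len(X)
        K = [[rbf(X[i], X[j], self.gamma) + (1.0 / self.C if i == j else 0.0)
              for j in range(n)] for i in range(n)]
        self.X, self.alpha = X, solve(K, y)
        return self

    def predict(self, x):
        return sum(a * rbf(x, xj, self.gamma) for a, xj in zip(self.alpha, self.X))

# Toy regression: fit y = x^2 on five points, then query between the nodes.
X = [[-2.0], [-1.0], [0.0], [1.0], [2.0]]
y = [4.0, 1.0, 0.0, 1.0, 4.0]
model = KELM(C=1000.0).fit(X, y)
pred = model.predict([0.5])
```

The regularization coefficient C and kernel width gamma are exactly the two quantities the abstract's GA-, DE-, and SA-based variants search over.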

8.
Hot metal silicon content is a key indicator of hot metal quality and furnace condition, but frequent fluctuation of key smelting parameters and large time delays make its prediction challenging. A prediction method based on optimal operating-condition transfer is proposed. First, to handle frequently fluctuating process variables, an adaptive density peaks clustering algorithm based on the Bonferroni index partitions the blast furnace process variables into operating conditions, and a silicon-content prediction sub-model is built for each condition. Second, to handle the large time delay, a transfer cost function between silicon-content operating conditions at adjacent time steps is defined, and a multi-source path optimization algorithm solves for the optimal condition-transfer path and the optimal current silicon-content prediction. Finally, industrial field data verify the method's effectiveness and accuracy.
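The condition-partitioning step builds on density peaks clustering: points with both high local density and a large distance to any denser point are taken as cluster centres. The sketch below is the plain Rodriguez-Laio scheme, not the paper's Bonferroni-index adaptive variant; the cutoff distance and centre count are fixed by hand:

```python
import math

def density_peaks(points, d_c=1.5, n_centers=2):
    """Plain density-peaks clustering: centres have high rho and high delta."""
    n = len(points)
    dist = [[math.dist(points[i], points[j]) for j in range(n)] for i in range(n)]
    # rho_i: number of points within the cutoff distance d_c
    rho = [sum(1 for j in range(n) if j != i and dist[i][j] < d_c) for i in range(n)]
    # delta_i: distance to the nearest denser point (max distance for the peak)
    delta = []
    for i in range(n):
        higher = [dist[i][j] for j in range(n) if rho[j] > rho[i]]
        delta.append(min(higher) if higher else max(dist[i]))
    order = sorted(range(n), key=lambda i: rho[i] * delta[i], reverse=True)
    centers = order[:n_centers]
    labels = [min(centers, key=lambda c: dist[i][c]) for i in range(n)]
    return centers, labels

# Two obvious blobs; each point is labelled by its nearest density peak.
pts = [(0, 0), (0.5, 0), (0, 0.5), (0.3, 0.4),
       (5, 5), (5.5, 5), (5, 5.4), (5.2, 5.3)]
centers, labels = density_peaks(pts)
```

In the paper's pipeline each resulting cluster corresponds to one operating condition, with its own silicon-content sub-model.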

9.
A metaheuristic grey wolf optimization algorithm based on improved self-organized criticality optimization   Total citations: 1 (self-citations: 0, others: 1)
To address the tendency of the grey wolf optimizer (GWO), a recent metaheuristic, to fall into local optima during search, and to improve its ability to find the global optimum, this paper reviews the algorithm's basic principles and modeling process, then draws on the strengths of self-organized criticality theory to propose an improved extremal optimization (IEO) algorithm. IEO is embedded in the GWO model, yielding an improved GWO based on self-organized criticality (SOC) optimization (IEO-GWO). A comprehensive comparison with traditional optimizers on 23 benchmark functions verifies IEO-GWO's superiority in locating global optima.
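Extremal optimization, the SOC-inspired ingredient here, evolves a single solution by repeatedly re-randomizing its worst-ranked components instead of recombining whole solutions. A stdlib-only tau-EO sketch on a toy sphere problem (this is generic EO, not the paper's IEO variant; all constants are illustrative):

```python
import random

def extremal_optimization(dim=8, iters=2000, tau=1.5, lo=-5.0, hi=5.0):
    """tau-EO sketch: mutate a poorly-ranked component, chosen with
    probability proportional to rank^(-tau) (rank 1 = worst)."""
    x = [random.uniform(lo, hi) for _ in range(dim)]
    best, best_f = x[:], sum(v * v for v in x)
    weights = [(k + 1) ** -tau for k in range(dim)]
    for _ in range(iters):
        ranked = sorted(range(dim), key=lambda i: -abs(x[i]))  # worst first
        i = random.choices(ranked, weights=weights)[0]
        x[i] = random.uniform(lo, hi)          # unconditionally replace it
        f = sum(v * v for v in x)
        if f < best_f:                         # keep the best-so-far solution
            best, best_f = x[:], f
    return best, best_f

random.seed(6)
best, best_f = extremal_optimization()
```

Because EO always accepts the mutation, it keeps punching through local optima, which is exactly the property the paper borrows to unstick GWO.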

10.
Building on kernel extreme learning machine (Kernel Based Extreme Learning Machine, KELM) classification, and exploiting the strong global search ability and fast convergence of the lion swarm optimization (Lion Swarm Optimization, LSO) algorithm, an LSO-optimized KELM is proposed. Test accuracy serves as the fitness function for LSO's optimization of KELM, and positions are updated to obtain the optimal...

11.
Kearns, Michael; Seung, H. Sebastian. 《Machine Learning》, 1995, 18(2-3): 255-276
We introduce a new formal model in which a learning algorithm must combine a collection of potentially poor but statistically independent hypothesis functions in order to approximate an unknown target function arbitrarily well. Our motivation includes the question of how to make optimal use of multiple independent runs of a mediocre learning algorithm, as well as settings in which the many hypotheses are obtained by a distributed population of identical learning agents.
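The simplest intuition for why independent mediocre hypotheses can be combined effectively is the majority vote: if each hypothesis is correct with probability p > 1/2 independently, the vote's error decays as the number of voters grows. A small simulation of that effect (the accuracy value 0.6 is illustrative):

```python
import random

def majority_vote_accuracy(p_correct, n_voters, trials=2000):
    """Empirical accuracy of a majority vote over independent hypotheses,
    each correct with probability p_correct."""
    wins = 0
    for _ in range(trials):
        votes = sum(1 for _ in range(n_voters) if random.random() < p_correct)
        if votes > n_voters // 2:
            wins += 1
    return wins / trials

random.seed(4)
single = majority_vote_accuracy(0.6, 1)      # one mediocre hypothesis
ensemble = majority_vote_accuracy(0.6, 25)   # vote of 25 independent copies
```

The paper's model is more general than plain voting, but this Condorcet-style amplification is the phenomenon it formalizes.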

12.
Transfer in variable-reward hierarchical reinforcement learning   Total citations: 2 (self-citations: 1, others: 1)
Transfer learning seeks to leverage previously learned tasks to achieve faster learning in a new task. In this paper, we consider transfer learning in the context of related but distinct Reinforcement Learning (RL) problems. In particular, our RL problems are derived from Semi-Markov Decision Processes (SMDPs) that share the same transition dynamics but have different reward functions that are linear in a set of reward features. We formally define the transfer learning problem in the context of RL as learning an efficient algorithm to solve any SMDP drawn from a fixed distribution after experiencing a finite number of them. Furthermore, we introduce an online algorithm to solve this problem, Variable-Reward Reinforcement Learning (VRRL), that compactly stores the optimal value functions for several SMDPs, and uses them to optimally initialize the value function for a new SMDP. We generalize our method to a hierarchical RL setting where the different SMDPs share the same task hierarchy. Our experimental results in a simplified real-time strategy domain show that significant transfer learning occurs in both flat and hierarchical settings. Transfer is especially effective in the hierarchical setting where the overall value functions are decomposed into subtask value functions which are more widely amenable to transfer across different SMDPs.
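The initialization step can be sketched with plain dictionaries: each stored task keeps a per-state vector of values, one entry per reward feature, and a new task with reward weights w starts from the best stored value under w. A toy sketch of that step only (state names and numbers are made up, and this omits the rest of the VRRL algorithm):

```python
def init_value_function(stored, w_new):
    """Initialise a new task's value function from stored ones.
    Each stored entry maps state -> vector of values, one per reward
    feature; the new V(s) is the best stored value under the new weights."""
    states = {s for vf in stored for s in vf}
    V = {}
    for s in states:
        V[s] = max(
            sum(w * v for w, v in zip(w_new, vf.get(s, [0.0] * len(w_new))))
            for vf in stored
        )
    return V

# Value vectors learned on two previous SMDPs (numbers are made up).
stored = [{"s0": [1.0, 0.0], "s1": [0.5, 0.2]},
          {"s0": [0.0, 1.0], "s1": [0.3, 0.6]}]
V0 = init_value_function(stored, w_new=[0.5, 0.5])
```

Because rewards are linear in the features, each stored vector re-priced under the new weights is the true value of that stored policy, so the max is an informed starting point rather than an arbitrary guess.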

13.
Auer, Peter; Long, Philip M.; Maass, Wolfgang; Woeginger, Gerhard J. 《Machine Learning》, 1995, 18(2-3): 187-230
The majority of results in computational learning theory are concerned with concept learning, i.e. with the special case of function learning for classes of functions with range {0, 1}. Much less is known about the theory of learning functions with a larger range such as ℝ or ℕ. In particular relatively few results exist about the general structure of common models for function learning, and there are only very few nontrivial function classes for which positive learning results have been exhibited in any of these models. We introduce in this paper the notion of a binary branching adversary tree for function learning, which allows us to give a somewhat surprising equivalent characterization of the optimal learning cost for learning a class of real-valued functions (in terms of a max-min definition which does not involve any learning model). Another general structural result of this paper relates the cost for learning a union of function classes to the learning costs for the individual function classes. Furthermore, we exhibit an efficient learning algorithm for learning convex piecewise linear functions from ℝ^d into ℝ. Previously, the class of linear functions from ℝ^d into ℝ was the only class of functions with multidimensional domain that was known to be learnable within the rigorous framework of a formal model for online learning. Finally we give a sufficient condition for an arbitrary class C of functions from ℝ into ℝ that allows us to learn the class of all functions that can be written as the pointwise maximum of k functions from C. This allows us to exhibit a number of further nontrivial classes of functions from ℝ into ℝ for which there exist efficient learning algorithms.

14.
This article studies self-directed learning, a variant of the on-line (or incremental) learning model in which the learner selects the presentation order for the instances. Alternatively, one can view this model as a variation of learning with membership queries in which the learner is only charged for membership queries for which it could not predict the outcome. We give tight bounds on the complexity of self-directed learning for the concept classes of monomials, monotone DNF formulas, and axis-parallel rectangles in {0, 1, …, n − 1}^d. These results demonstrate that the number of mistakes under self-directed learning can be surprisingly small. We then show that learning complexity in the model of self-directed learning is less than that of all other commonly studied on-line and query learning models. Next we explore the relationship between the complexity of self-directed learning and the Vapnik-Chervonenkis (VC-)dimension. We show that, in general, the VC-dimension and the self-directed learning complexity are incomparable. However, for some special cases, we show that the VC-dimension gives a lower bound for the self-directed learning complexity. Finally, we explore a relationship between Mitchell's version space algorithm and the existence of self-directed learning algorithms that make few mistakes.

15.
In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct explanation. We give a number of positive and negative results that provide an initial outline of the possibilities for agnostic learning. Our results include hardness results for the most obvious generalization of the PAC model to an agnostic setting, an efficient and general agnostic learning method based on dynamic programming, relationships between loss functions for agnostic learning, and an algorithm for a learning problem that involves hidden variables.

16.
刘晓, 毛宁. 《数据采集与处理》, 2015, 30(6): 1310-1317
A learning automaton (LA) is an adaptive decision-maker that learns, through continual interaction with a random environment, to select the optimal action from an allowed action set. In most traditional LA models the action set is taken to be finite, so continuous-parameter learning problems require discretizing the action space, and learning precision depends on the discretization granularity. This paper proposes a new continuous action-set learning automaton (CALA) whose action set is a variable interval and which selects output actions uniformly from that interval. The learning algorithm uses binary feedback from the environment to adaptively update the interval's endpoints. A simulation on a multimodal learning problem demonstrates the new algorithm's advantage over three existing CALA algorithms.
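The abstract specifies uniform action selection over a variable interval updated by binary feedback, but not the exact endpoint-update rule; the sketch below uses an assumed shrink-towards-rewarded-actions / widen-after-penalty rule purely for illustration, with an invented toy environment:

```python
import random

def cala(reward, lo=0.0, hi=1.0, shrink=0.1, grow=0.005, steps=3000):
    """Continuous action-set learning automaton over a variable interval.
    The update rule is an illustrative assumption: contract towards
    rewarded actions, widen slightly after penalties to keep exploring."""
    for _ in range(steps):
        x = random.uniform(lo, hi)      # uniform action selection
        if reward(x):                   # binary environment feedback
            lo += shrink * (x - lo)     # pull both endpoints towards x
            hi -= shrink * (hi - x)
        else:
            w = hi - lo
            lo -= grow * w              # widen symmetrically
            hi += grow * w
    return lo, hi

# Toy environment: actions near 0.7 are rewarded (noise-free for clarity).
random.seed(5)
lo, hi = cala(lambda x: abs(x - 0.7) < 0.1)
```

Under this rule the interval contracts whenever rewarded draws dominate, so it collapses onto a point inside the rewarded region, which is the behaviour the interval-based CALA aims for.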

17.
We study a model of probably exactly correct (PExact) learning that can be viewed either as the Exact model (learning from equivalence queries only) relaxed so that counterexamples to equivalence queries are distributionally drawn rather than adversarially chosen or as the probably approximately correct (PAC) model strengthened to require a perfect hypothesis. We also introduce a model of probably almost exactly correct (PAExact) learning that requires a hypothesis with negligible error and thus lies between the PExact and PAC models. Unlike the Exact and PExact models, PAExact learning is applicable to classes of functions defined over infinite instance spaces. We obtain a number of separation results between these models. Of particular note are some positive results for efficient parallel learning in the PAExact model, which stand in stark contrast to earlier negative results for efficient parallel Exact learning.

18.
Applying different degrees of supervision to automatic text classification   Total citations: 1 (self-citations: 0, others: 1)
Automatic text classification draws on information retrieval, pattern recognition, and machine learning. Organized by degree of supervision, this paper surveys several methods spanning fully supervised, unsupervised, and semi-supervised learning strategies, NBC (Naive Bayes Classifier), FCM (Fuzzy C-Means), SOM (Self-Organizing Map), ssFCM (semi-supervised Fuzzy C-Means), and gSOM (guided Self-Organizing Map), and applies them to text classification. Among these, gSOM is a semi-supervised form we developed from SOM. Using the Reuters-21578 corpus, the influence of the degree of supervision on classification performance is studied, leading to recommendations for practical text classification work.
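Of the surveyed methods, the fully supervised NBC is the easiest to sketch: a multinomial naive Bayes text classifier with add-one smoothing over a bag-of-words representation (the corpus and labels below are toy stand-ins for Reuters-21578):

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Multinomial naive Bayes with add-one (Laplace) smoothing."""
    def fit(self, docs, labels):
        self.vocab = {w for d in docs for w in d.split()}
        self.counts = defaultdict(Counter)    # label -> word counts
        self.doc_counts = Counter(labels)     # label -> number of docs
        for d, y in zip(docs, labels):
            self.counts[y].update(d.split())
        return self

    def predict(self, doc):
        scores = {}
        for y, n_docs in self.doc_counts.items():
            total = sum(self.counts[y].values())
            score = math.log(n_docs / sum(self.doc_counts.values()))  # prior
            for w in doc.split():
                # add-one smoothed likelihood P(w | y)
                score += math.log((self.counts[y][w] + 1) /
                                  (total + len(self.vocab)))
            scores[y] = score
        return max(scores, key=scores.get)

docs = ["stock market shares rise", "market trade profit",
        "football match goal", "team wins match"]
labels = ["finance", "finance", "sport", "sport"]
clf = NaiveBayes().fit(docs, labels)
pred = clf.predict("shares profit market")
```

The semi-supervised methods in the survey (ssFCM, gSOM) differ in that only part of the corpus carries labels like these, with the rest exploited through clustering structure.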

19.
Ram, Ashwin. 《Machine Learning》, 1993, 10(3): 201-248
This article describes how a reasoner can improve its understanding of an incompletely understood domain through the application of what it already knows to novel problems in that domain. Case-based reasoning is the process of using past experiences stored in the reasoner's memory to understand novel situations or solve novel problems. However, this process assumes that past experiences are well understood and provide good lessons to be used for future situations. This assumption is usually false when one is learning about a novel domain, since situations encountered previously in this domain might not have been understood completely. Furthermore, the reasoner may not even have a case that adequately deals with the new situation, or may not be able to access the case using existing indices. We present a theory of incremental learning based on the revision of previously existing case knowledge in response to experiences in such situations. The theory has been implemented in a case-based story understanding program that can (a) learn a new case in situations where no case already exists, (b) learn how to index the case in memory, and (c) incrementally refine its understanding of the case by using it to reason about new situations, thus evolving a better understanding of its domain through experience. This research complements work in case-based reasoning by providing mechanisms by which a case library can be automatically built for use by a case-based reasoning program.

20.
Massive Open Online Courses (MOOCs) require individual learners to self-regulate their own learning, determining when, how and with what content and activities they engage. However, MOOCs attract a diverse range of learners, from a variety of learning and professional contexts. This study examines how a learner's current role and context influences their ability to self-regulate their learning in a MOOC: Introduction to Data Science offered by Coursera. The study compared the self-reported self-regulated learning behaviour between learners from different contexts and with different roles. Significant differences were identified between learners who were working as data professionals or studying towards a higher education degree and other learners in the MOOC. The study provides an insight into how an individual's context and role may impact their learning behaviour in MOOCs.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号