Similar Literature
1.
陈华东  蒋平 《控制与决策》2002,17(11):715-718
For a class of single-input single-output uncertain nonlinear systems performing repetitive tracking tasks, an adaptive iterative learning control scheme based on a completely unknown high-frequency feedback gain is proposed. Unlike ordinary iterative learning control, which requires a stability condition on the learning gain as a prerequisite, the adaptive scheme achieves convergence by continually updating a Nussbaum-type high-frequency learning gain. It is proved that, as the iteration number i→∞, the repetitive tracking error converges uniformly to an arbitrarily small bound δ. Simulation results demonstrate the effectiveness of the proposed control method.

2.
Adaptive iterative feedback control for nonlinear systems with unknown control gain
For a class of single-input single-output uncertain nonlinear systems performing repetitive tracking tasks, an adaptive iterative feedback control scheme based on a completely unknown control gain is proposed. Unlike ordinary iterative learning control, which requires a stability condition on the learning gain as a prerequisite, the proposed adaptive iterative feedback control law achieves convergence by continually updating a Nussbaum-type feedback gain. It is proved that, as the iteration number i→∞, the repetitive tracking error converges uniformly to an arbitrarily small bound δ. Simulations illustrate the effectiveness of the proposed method.
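Both entries above hinge on a Nussbaum-type gain. As a generic sketch of that construction (illustrative symbols, not necessarily the authors' exact law), the control and gain updates can be written as:

$$u_i(t) = N\big(k_i(t)\big)\,e_i(t), \qquad N(s) = s^2\cos(s), \qquad \dot{k}_i(t) = e_i^2(t), \qquad k_{i+1}(0) = k_i(T),$$

where the gain estimate k is carried across iterations. The convergence argument exploits the Nussbaum property $\limsup_{s\to\infty}\frac{1}{s}\int_0^s N(\tau)\,d\tau = +\infty$ and $\liminf_{s\to\infty}\frac{1}{s}\int_0^s N(\tau)\,d\tau = -\infty$, which lets the controller recover the correct gain sign without knowing it in advance.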

3.
Iterative learning control for a class of linear continuous-time switched systems
The iterative-learning-based tracking control problem is considered for linear continuous-time switched systems that perform repetitive control tasks over a finite time interval. Assuming that the switching law of the system is arbitrary in the time domain, a D-type iterative learning control algorithm is proposed for this class of systems. Theoretical analysis shows that, when the learning gain matrices satisfy certain conditions, the D-type algorithm guarantees that the actual output of the switched system converges uniformly to the desired output over the whole operation interval, achieving perfect tracking. Numerical simulations further verify the effectiveness of the method.
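For reference, a D-type law of the kind proposed here takes the generic form (a sketch with assumed symbols):

$$u_{k+1}(t) = u_k(t) + \Gamma(t)\,\dot{e}_k(t), \qquad e_k(t) = y_d(t) - y_k(t),$$

and for a switched linear plant $\dot{x} = A_{\sigma}x + B_{\sigma}u$, $y = C_{\sigma}x$, a commonly quoted sufficient condition is $\|I - C_{\sigma}B_{\sigma}\Gamma(t)\| \le \rho < 1$ for every subsystem index σ, which matches the abstract's requirement that the learning gain matrices satisfy certain conditions uniformly over the switching.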

4.
Variable-gain iterative learning control with initial state learning
曹伟  丛望  李金  郭媛 《控制与决策》2012,27(3):473-476
A new learning control algorithm is proposed for a class of nonlinear systems. On top of a variable-learning-gain iterative learning control law, the algorithm adds an iterative learning law for the system's initial state. Using operator theory, it is proved that, even in the presence of initial state shifts, the system output can perfectly track the desired trajectory after iterative learning, and a convergence condition in spectral-radius form is obtained. Compared with traditional iterative learning control, the proposed algorithm converges considerably faster and resolves the initial-state-shift problem of variable-gain iterative learning control. Simulation results verify the effectiveness of the algorithm.
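One plausible form of the pair of update laws described here (an illustrative sketch with assumed symbols) is:

$$u_{k+1}(t) = u_k(t) + \Gamma_k(t)\,\dot{e}_k(t), \qquad x_{k+1}(0) = x_k(0) + L\,e_k(0),$$

where the first law uses an iteration-varying gain $\Gamma_k(t)$ and the second drives the initial state toward the desired initial condition, removing the initial-shift error that a control update alone cannot correct.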

5.
Iterative learning control is an effective control method for plants that perform repetitive motions. For the exact trajectory tracking problem of a class of discrete nonlinear time-varying systems over a finite time horizon, an open-closed-loop PI-type iterative learning control law is proposed. This law corrects the control action using both the current tracking error and the tracking error of the previous iteration. A necessary and sufficient condition for convergence of the proposed learning law is given and proved by induction. Finally, simulation results verify the convergence condition.
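A generic open-closed-loop PI-type law consistent with this description (a sketch with assumed gains) is:

$$u_{k+1}(t) = u_k(t) + \Gamma_P\,e_k(t) + \Gamma_I\sum_{j=0}^{t} e_k(j) + L_P\,e_{k+1}(t) + L_I\sum_{j=0}^{t} e_{k+1}(j),$$

where the $e_k$ terms form the open-loop (learning) part taken from the previous iteration and the $e_{k+1}$ terms form the closed-loop (feedback) part from the current run; since the system is discrete, the integral action appears as a running sum.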

6.
An iterative learning control law with variable learning gain
Based on iterative learning control theory, an iterative learning law with a variable learning gain is proposed for tracking a desired trajectory in nonlinear systems; compared with iterative learning control using a fixed gain, the convergence speed is greatly improved. A rigorous mathematical proof of convergence yields a sufficient condition for the learning law to converge. The control law is applied to excitation control of a synchronous generator in a single-machine infinite-bus system, and simulation results show that it is effective, improves the dynamic behavior of the control, and helps to enhance power system stability.
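To illustrate the mechanics on a toy problem (a hypothetical scalar plant and an assumed gain schedule, not the paper's synchronous-generator example), a P-type law with an iteration-varying gain can be simulated as follows:

```python
import numpy as np

# Hypothetical plant: y[t+1] = a*y[t] + b*u[t] (not the paper's model).
a, b = 0.9, 1.0
T = 50                                   # samples per trial
yd = np.sin(np.linspace(0, 2 * np.pi, T))  # desired trajectory

def run_trial(u):
    """Simulate one trial from the same zero initial state and return the output."""
    y = np.zeros(T)
    for t in range(T - 1):
        y[t + 1] = a * y[t] + b * u[t]
    return y

u = np.zeros(T)
for k in range(30):                      # learning iterations (trials)
    e = yd - run_trial(u)
    gamma = 1.2 / (1 + 0.1 * k)          # iteration-varying learning gain (assumed schedule)
    u[:-1] += gamma * e[1:]              # P-type update: u_{k+1}(t) = u_k(t) + gamma*e_k(t+1)

print("final max tracking error:", np.abs(yd - run_trial(u)).max())
```

With b known to satisfy |1 − γb| < 1 for the early gains, the error contracts each trial; the decaying schedule stands in for the paper's variable gain.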

7.
To address the slow convergence, excessive iteration counts, and poor control accuracy of traffic-subzone perimeter control based on fixed-gain iterative learning, a perimeter control scheme combining iterative learning with an improved wolf pack algorithm is proposed. First, a vehicle balance equation for the subzone road network is established from macroscopic fundamental diagram (MFD) theory, and the system's iterative learning control law is designed. Second, the influence of iterative learning control on the MFD is analyzed and a wolf pack algorithm with adaptive step size is introduced: taking the previous batch's MFD as its model, it searches offline for the optimal proportional and derivative gain coefficients of the iterative learning controller, and the optimum is then used in the next control cycle's iterative learning control, improving convergence speed and accuracy. Finally, a mathematical proof of convergence is given, and simulation results show that, compared with a fixed-gain iterative learning controller, the algorithm converges faster and tracks the system's desired trajectory more accurately, demonstrating its feasibility and effectiveness.

8.
Convergence analysis of fractional-order iterative learning control
This paper extends the traditional time-domain and frequency-domain analysis of iterative learning control to a time-domain analysis of fractional-order iterative learning control for a class of fractional-order nonlinear systems. A new fractional-order iterative learning control framework is proposed with simplified convergence conditions, and the equivalence of the convergence conditions of two classes of fractional-order iterative learning control in the constant-gain case is proved. The discussion further yields two results: the learnable region of fractional-order adaptive iterative learning control for fractional-order uncertain systems, and a framework for ideal band-stop fractional-order iterative learning control. All results are verified by simulation.
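For concreteness, a representative $D^{\alpha}$-type update law in this family (a sketch with assumed notation) is:

$$u_{k+1}(t) = u_k(t) + \Gamma\,{}_0D_t^{\alpha}\,e_k(t), \qquad 0 < \alpha < 1,$$

where ${}_0D_t^{\alpha}$ denotes a fractional derivative (e.g. Caputo) of the tracking error; setting α = 1 recovers the classical D-type law, which is what makes the constant-gain convergence conditions comparable across the integer and fractional cases.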

9.
For the state tracking problem of a class of uncertain nonlinear systems performing repetitive tasks over a finite time interval, this paper proposes an adaptive sliding-mode iterative learning control method that achieves full convergence to the reference trajectory even in the presence of initial shifts. Fully saturated adaptive iterative learning update laws are designed to estimate the parametric and nonparametric uncertainties as well as the unknown desired control input, confining the estimates within prescribed bounds and preventing their monotone accumulation. The method requires little model information: when estimating the upper bound of the nonparametric uncertainty, no known Lipschitz bounding function is needed. A rigorous theoretical analysis proves the uniform boundedness of all closed-loop signals and the uniform convergence of the tracking error, and simulations verify the effectiveness of the proposed method.
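As a point of reference, fully saturated update laws of the kind described here are often written in the form below (an illustrative sketch with assumed symbols, not the paper's exact law):

$$\hat{\theta}_k(t) = \operatorname{sat}_{\bar{\theta}}\!\Big(\hat{\theta}_{k-1}(t) + \gamma\,\sigma_k(t)\,\xi\big(x_k(t),t\big)\Big), \qquad \hat{\theta}_{-1}(t) \equiv 0,$$

where $\sigma_k$ is the sliding variable, $\xi$ the regressor, and $\operatorname{sat}_{\bar{\theta}}$ clips each component to the prescribed bound $\pm\bar{\theta}$; the clipping at every iteration is what keeps the estimates from accumulating beyond their limits.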

10.
Iterative learning control has attracted wide attention because it can achieve perfect tracking of a desired trajectory, but results on sampled-data iterative learning control remain relatively scarce. For a class of nonlinear sampled-data systems with a well-defined relative degree and output delay, a higher-order iterative learning control algorithm is studied. Using the Newton-Leibniz formula, the Bellman lemma, and the Lipschitz condition, it is proved that if the sampling period is sufficiently small, the initial state is strictly repeated at each iteration, and the learning gains satisfy the required conditions, then the system output converges to the desired output at the sampling instants. Simulations of first-order and second-order learning algorithms show that the higher-order algorithm converges markedly faster than the first-order one.
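An m-th-order sampled-data law of the kind studied here (a generic sketch with assumed symbols) combines data from the previous m iterations at each sampling instant i:

$$u_{k+1}(i) = \sum_{j=0}^{m-1}\Big[P_j\,u_{k-j}(i) + \Gamma_j\,e_{k-j}(i+1)\Big], \qquad \sum_{j=0}^{m-1} P_j = 1,$$

so m = 1 reduces to the ordinary first-order P-type law; the extra memory across iterations is what buys the faster convergence reported in the simulations.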

11.
Auer, Peter; Long, Philip M.; Maass, Wolfgang; Woeginger, Gerhard J. 《Machine Learning》1995,18(2-3):187-230
The majority of results in computational learning theory are concerned with concept learning, i.e. with the special case of function learning for classes of functions with range {0, 1}. Much less is known about the theory of learning functions with a larger range such as ℕ or ℝ. In particular, relatively few results exist about the general structure of common models for function learning, and there are only very few nontrivial function classes for which positive learning results have been exhibited in any of these models. We introduce in this paper the notion of a binary branching adversary tree for function learning, which allows us to give a somewhat surprising equivalent characterization of the optimal learning cost for learning a class of real-valued functions (in terms of a max-min definition which does not involve any learning model). Another general structural result of this paper relates the cost for learning a union of function classes to the learning costs for the individual function classes. Furthermore, we exhibit an efficient learning algorithm for learning convex piecewise linear functions from ℝ^d into ℝ. Previously, the class of linear functions from ℝ^d into ℝ was the only class of functions with multidimensional domain that was known to be learnable within the rigorous framework of a formal model for online learning. Finally, we give a sufficient condition for an arbitrary class F of functions from ℝ into ℝ that allows us to learn the class of all functions that can be written as the pointwise maximum of k functions from F. This allows us to exhibit a number of further nontrivial classes of functions from ℝ into ℝ for which there exist efficient learning algorithms.

12.
Kearns, Michael; Seung, H. Sebastian 《Machine Learning》1995,18(2-3):255-276
We introduce a new formal model in which a learning algorithm must combine a collection of potentially poor but statistically independent hypothesis functions in order to approximate an unknown target function arbitrarily well. Our motivation includes the question of how to make optimal use of multiple independent runs of a mediocre learning algorithm, as well as settings in which the many hypotheses are obtained by a distributed population of identical learning agents.

13.
In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct explanation. We give a number of positive and negative results that provide an initial outline of the possibilities for agnostic learning. Our results include hardness results for the most obvious generalization of the PAC model to an agnostic setting, an efficient and general agnostic learning method based on dynamic programming, relationships between loss functions for agnostic learning, and an algorithm for a learning problem that involves hidden variables.

14.
This article studies self-directed learning, a variant of the on-line (or incremental) learning model in which the learner selects the presentation order for the instances. Alternatively, one can view this model as a variation of learning with membership queries in which the learner is only charged for membership queries for which it could not predict the outcome. We give tight bounds on the complexity of self-directed learning for the concept classes of monomials, monotone DNF formulas, and axis-parallel rectangles in {0, 1, …, n − 1}^d. These results demonstrate that the number of mistakes under self-directed learning can be surprisingly small. We then show that learning complexity in the model of self-directed learning is less than that of all other commonly studied on-line and query learning models. Next we explore the relationship between the complexity of self-directed learning and the Vapnik-Chervonenkis (VC-)dimension. We show that, in general, the VC-dimension and the self-directed learning complexity are incomparable. However, for some special cases, we show that the VC-dimension gives a lower bound for the self-directed learning complexity. Finally, we explore a relationship between Mitchell's version space algorithm and the existence of self-directed learning algorithms that make few mistakes.

15.
Transfer in variable-reward hierarchical reinforcement learning
Transfer learning seeks to leverage previously learned tasks to achieve faster learning in a new task. In this paper, we consider transfer learning in the context of related but distinct Reinforcement Learning (RL) problems. In particular, our RL problems are derived from Semi-Markov Decision Processes (SMDPs) that share the same transition dynamics but have different reward functions that are linear in a set of reward features. We formally define the transfer learning problem in the context of RL as learning an efficient algorithm to solve any SMDP drawn from a fixed distribution after experiencing a finite number of them. Furthermore, we introduce an online algorithm to solve this problem, Variable-Reward Reinforcement Learning (VRRL), that compactly stores the optimal value functions for several SMDPs, and uses them to optimally initialize the value function for a new SMDP. We generalize our method to a hierarchical RL setting where the different SMDPs share the same task hierarchy. Our experimental results in a simplified real-time strategy domain show that significant transfer learning occurs in both flat and hierarchical settings. Transfer is especially effective in the hierarchical setting where the overall value functions are decomposed into subtask value functions which are more widely amenable to transfer across different SMDPs.
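A minimal sketch of the storage-and-initialization idea behind VRRL as summarized above (the data structures and names here are illustrative assumptions, not the paper's implementation): since rewards are linear in features, each stored policy's value function can be kept per reward feature and re-weighted for a new task.

```python
from typing import List
import numpy as np

class VRRLLibrary:
    """Stores per-feature value tables for learned policies and reuses them
    to initialize the value function of a new task (illustrative sketch)."""

    def __init__(self, n_states: int, n_features: int):
        self.n_states = n_states
        self.n_features = n_features
        self.feature_values: List[np.ndarray] = []  # one (n_features, n_states) table per stored policy

    def store(self, psi: np.ndarray) -> None:
        """psi[f, s]: expected discounted accumulation of reward feature f from state s."""
        self.feature_values.append(psi)

    def init_value(self, w: np.ndarray) -> np.ndarray:
        """For new reward weights w, initialize V with the best value any
        stored policy achieves per state under the new reward weighting."""
        if not self.feature_values:
            return np.zeros(self.n_states)
        candidates = np.stack([w @ psi for psi in self.feature_values])  # (n_policies, n_states)
        return candidates.max(axis=0)

# Usage: lib.store(psi_old); V0 = lib.init_value(w_new); then run RL starting from V0.
```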

16.
刘晓  毛宁 《数据采集与处理》2015,30(6):1310-1317
A learning automaton (LA) is an adaptive decision-maker that learns to choose the optimal action from an allowed action set through continual interaction with a random environment. In most traditional LA models the action set is taken to be finite, so continuous-parameter learning problems require discretizing the action space, and the learning accuracy depends on the granularity of the discretization. This paper proposes a new continuous action-set learning automaton (CALA) whose action set is an adjustable interval and which selects output actions according to a uniform distribution. The learning algorithm adaptively updates the endpoints of the action interval using a binary feedback signal from the environment. A simulation on a multimodal learning problem demonstrates the superiority of the new algorithm over three existing CALA algorithms.
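A minimal sketch of an interval-based CALA in the spirit described above: actions are drawn uniformly from an adjustable interval whose endpoints adapt to binary feedback. The endpoint update rules below are illustrative assumptions, not the paper's exact scheme.

```python
import random

def cala(environment, lo=0.0, hi=1.0, gain=0.05, steps=5000):
    """Interval-based CALA sketch with assumed contract/widen endpoint updates."""
    for _ in range(steps):
        a = random.uniform(lo, hi)      # uniform action selection over [lo, hi]
        if environment(a):              # success: contract the interval toward a
            lo += gain * (a - lo)
            hi -= gain * (hi - a)
        else:                           # failure: widen slightly to keep exploring
            width = hi - lo
            lo -= 0.1 * gain * width
            hi += 0.1 * gain * width
    return 0.5 * (lo + hi)              # point estimate of the optimal action

# Toy environment: reward probability peaks at a = 0.3.
est = cala(lambda a: random.random() < max(0.0, 1 - 4 * abs(a - 0.3)))
print(round(est, 2))  # typically close to 0.3
```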

17.
Massive Open Online Courses (MOOCs) require individual learners to self-regulate their own learning, determining when, how and with what content and activities they engage. However, MOOCs attract a diverse range of learners, from a variety of learning and professional contexts. This study examines how a learner's current role and context influences their ability to self-regulate their learning in a MOOC: Introduction to Data Science offered by Coursera. The study compared the self-reported self-regulated learning behaviour between learners from different contexts and with different roles. Significant differences were identified between learners who were working as data professionals or studying towards a higher education degree and other learners in the MOOC. The study provides an insight into how an individual's context and role may impact their learning behaviour in MOOCs.

18.
19.
We study a model of probably exactly correct (PExact) learning that can be viewed either as the Exact model (learning from equivalence queries only) relaxed so that counterexamples to equivalence queries are distributionally drawn rather than adversarially chosen or as the probably approximately correct (PAC) model strengthened to require a perfect hypothesis. We also introduce a model of probably almost exactly correct (PAExact) learning that requires a hypothesis with negligible error and thus lies between the PExact and PAC models. Unlike the Exact and PExact models, PAExact learning is applicable to classes of functions defined over infinite instance spaces. We obtain a number of separation results between these models. Of particular note are some positive results for efficient parallel learning in the PAExact model, which stand in stark contrast to earlier negative results for efficient parallel Exact learning.

20.
Applying varying degrees of supervision in automatic text classification
Automatic text classification draws on information retrieval, pattern recognition, and machine learning. Taking the degree of supervision as its organizing thread, this paper surveys several methods belonging to the fully supervised, unsupervised, and semi-supervised learning strategies, namely NBC (Naive Bayes Classifier), FCM (Fuzzy C-Means), SOM (Self-Organizing Map), ssFCM (semi-supervised Fuzzy C-Means), and gSOM (guided Self-Organizing Map), and applies them to text classification. Among these, gSOM is a semi-supervised variant we developed from SOM. Using the Reuters-21578 corpus, the influence of the degree of supervision on classification performance is studied, leading to recommendations for practical text classification work.
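For orientation, a minimal sketch of FCM, one of the surveyed methods; the semi-supervised variant ssFCM differs by clamping the membership rows of labeled documents to their known classes after each update (an illustrative reading, not the paper's exact formulation).

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, seed=0):
    """Standard Fuzzy C-Means: alternate center and membership updates."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)             # fuzzy memberships, rows sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))            # inverse-distance memberships
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Toy usage: two well-separated Gaussian blobs.
X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 4])
centers, U = fcm(X)
print(centers.round(2))
```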

