8 search results.
1.
For a class of linear time-invariant systems, a PD-type (proportional-derivative-type) iterative learning control law with feedback information is proposed. Using the generalized Young inequality of the convolution integral, the monotone convergence of the control law is analyzed in the sense of the Lebesgue-p norm. The analysis shows that the convergence is determined not only by the system input and output matrices and the derivative learning gain of the control law, but also by the system state matrix and the proportional learning gain. Furthermore, when the feedback gain is chosen appropriately, the feedback information can accelerate the monotone convergence of the typical PD-type iterative learning control law. Numerical simulations verify the correctness of the theoretical analysis and the effectiveness of the control law.
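For orientation, a feedback-assisted PD-type update is commonly written in the following form, with convergence assessed in the Lebesgue-p norm (a sketch only; the gain symbols Γ, L, F and the exact structure are illustrative assumptions, not the paper's formulation):

    % e_k(t) = y_d(t) - y_k(t): tracking error at iteration k
    % Gamma, L: derivative and proportional learning gains; F: feedback gain
    u_{k+1}(t) = u_k(t) + \Gamma \dot{e}_k(t) + L\, e_k(t) + F\, e_{k+1}(t)
    % Lebesgue-p norm in which monotone convergence is measured:
    \| e \|_{L^p} = \Big( \int_0^T \| e(t) \|^p \, dt \Big)^{1/p}, \qquad 1 \le p < \infty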
2.
This paper discusses first-order and second-order fractional-order PID-type iterative learning control strategies for a class of Caputo-type fractional-order linear time-invariant systems. First, the additivity of the fractional-order derivative operators is exploited via the Laplace transform of the convolution integral, the absolute convergence of the Mittag-Leffler function on the infinite time interval is established, and some properties of the state transition function of the fractional-order system are derived from the characteristics of the Gamma and Beta functions. Second, by using these properties and the generalized Young inequality of the convolution integral, the monotone convergence of the developed first-order learning strategy is analyzed, and the monotone convergence of the second-order learning scheme is derived after finitely many iterations, when the tracking errors are assessed in the Lebesgue-p norm. The resulting conditions show that the convergence is dominated not only by the system input and output matrices and the fractional-order derivative learning gain, but also by the system state matrix, the proportional learning gain, and the fractional-order integral learning gain. Numerical simulations illustrate the validity and effectiveness of the results.
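As background, the objects involved can be sketched as follows; the order α, the gain symbols, and the exact update structure are illustrative assumptions rather than the paper's notation:

    % Caputo-type fractional-order LTI system, 0 < alpha < 1:
    {}^{C}\!D^{\alpha} x(t) = A x(t) + B u(t), \qquad y(t) = C x(t)
    % One-parameter Mittag-Leffler function, which enters the state transition function:
    E_{\alpha}(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k + 1)}
    % A first-order fractional PID-type update; I^alpha is the fractional integral:
    u_{k+1}(t) = u_k(t) + \Gamma_P e_k(t) + \Gamma_I\, (I^{\alpha} e_k)(t) + \Gamma_D\, {}^{C}\!D^{\alpha} e_k(t)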
3.
In hierarchical steady-state optimization of large-scale industrial processes, a feasible approach is to use information measured from the real system to correct the model-based optimal solution. In this setting, a sequence of step-type control values with unequal magnitudes is generated, and this sequence excites the real system in turn. This paper embeds a set of iterative learning controllers, in a decentralized manner, into the hierarchical steady-state optimization procedure of a class of nonlinear industrial processes. The iterative learning controller of each subsystem generates a reinforced control signal sequence to replace the corresponding step-type control value sequence of differing magnitudes, with the aim of continually improving the transient performance of the system. By means of the Hausdorff-Young inequality of the convolution integral, the convergence of the learning control law is analyzed in the sense of the Lebesgue-p norm, and the influence of the system nonlinearity and the subsystem interconnections on the convergence is discussed. Finally, numerical simulations verify the correctness and effectiveness of the learning control mechanism under study.
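The convolution inequality invoked here, and in several of the other papers, is the generalized Young inequality; a standard statement of it (a known fact, not specific to this paper) is:

    % Generalized Young inequality for the convolution of f and g:
    \| f * g \|_{r} \le \| f \|_{q} \, \| g \|_{p},
    \qquad \frac{1}{r} + 1 = \frac{1}{p} + \frac{1}{q}, \quad p, q, r \ge 1
    % In ILC analysis f is typically an impulse-response term such as C e^{A t} B
    % and g the tracking-error signal.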
4.
In traditional iterative learning control, integral compensation is one of the typical strategies, but its tracking benefit has remained unclear. For continuous linear time-invariant systems, this paper embeds integral compensation into the traditional PD-type iterative learning control law and, using integration by parts and the generalized Young inequality of the convolution, analyzes the convergence behavior of first-order and second-order PID-type iterative learning control laws in the sense of the Lebesgue-p norm. The results show that, when the proportional, integral, and derivative learning gains satisfy appropriate conditions, the first-order PID-type law converges monotonically and the second-order PID-type law converges monotonically in a double-iteration sense. Numerical simulations verify that integral compensation can effectively improve the tracking performance of the system.
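A plausible shape for the two schemes is sketched below; all gain symbols and the convex second-order combination are illustrative assumptions, not the paper's exact laws:

    % First-order PID-type update:
    u_{k+1}(t) = u_k(t) + \Gamma_P e_k(t) + \Gamma_I \int_0^{t} e_k(\tau)\, d\tau + \Gamma_D \dot{e}_k(t)
    % A typical second-order scheme also reuses the previous iteration
    % (beta in [0,1] weights iterations k and k-1):
    u_{k+1}(t) = (1-\beta)\, u_k(t) + \beta\, u_{k-1}(t)
                 + \Gamma_P e_k(t) + \Gamma_I \int_0^{t} e_k(\tau)\, d\tau + \Gamma_D \dot{e}_k(t)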
5.
The rectangular pulse function is adopted to incorporate feed-forward compensation into various proportional-derivative-type iterative learning control updating laws applied to a class of linear time-invariant systems with an initial state shift. The objective of the pulse compensation is to suppress the tracking discrepancy incurred by the initial state shift. By means of the generalised Young inequality of the convolution integral, the tracking performance of the pulse-based learning updating laws is analysed, and the suppressive effect of the pulse compensation is evaluated by measuring the tracking error in the sense of the Lebesgue-p norm. The derivation clarifies that, although the upper bound of the asymptotic tracking error is determined not only by the proportional and derivative learning gains but also by the system state, input, and output matrices, it can be improved by tuning the compensation gain properly. Numerical simulations show that pulse compensation can effectively suppress the tracking error caused by the initial state shift.
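One way such a pulse-compensated PD-type law can be written is sketched below; the pulse width Δ, the compensation gain β, and the scaling by the initial error are assumptions for illustration:

    % Rectangular pulse of width Delta:
    p(t) = \begin{cases} 1, & 0 \le t \le \Delta \\ 0, & \Delta < t \le T \end{cases}
    % Pulse-compensated PD-type update; beta is the compensation gain and
    % e_k(0) reflects the initial state shift:
    u_{k+1}(t) = u_k(t) + \Gamma \dot{e}_k(t) + L\, e_k(t) + \beta\, p(t)\, e_k(0)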
6.
For nonlinear time-invariant systems with initial state uncertainty, a rectangular pulse signal is adopted to compensate the traditional proportional-derivative-type first-order and second-order iterative learning control laws. With the tracking error measured in the Lebesgue-p norm, the tracking performance of the learning control laws is analyzed by means of the generalized Young inequality of the convolution. The analysis shows that, provided the proportional learning gain, the derivative learning gain, and the Lipschitz constant of the nonlinear state function are chosen appropriately so that the convergence factor is less than 1, the asymptotic tracking error is caused by the initial state uncertainty and can be reduced by tuning the compensation factor. Numerical simulations verify the effectiveness of the compensation strategy and the correctness of the theoretical analysis.
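Since each of these papers closes with a numerical simulation, a minimal sketch of that kind of experiment is given below; the system matrices, learning gains, reference trajectory, and initial shift are all invented for illustration and are not taken from any of the papers:

    import numpy as np

    # Illustrative LTI plant x' = A x + B u, y = C x (all values invented).
    A = np.array([[0.0, 1.0], [-2.0, -3.0]])
    B = np.array([0.0, 1.0])
    C = np.array([0.0, 1.0])          # C @ B = 1, so a PD-type law can converge

    T, dt = 1.0, 1e-3
    t = np.arange(0.0, T, dt)
    yd = np.sin(2.0 * np.pi * t)      # desired output trajectory
    u = np.zeros_like(t)              # zero initial control input

    gamma_p, gamma_d = 0.5, 0.8       # proportional / derivative learning gains
    x0_shift = np.array([0.0, 0.05])  # small initial state shift

    def simulate(u):
        # Forward-Euler integration of the plant over one iteration.
        x = x0_shift.copy()
        y = np.zeros_like(u)
        for i, ui in enumerate(u):
            y[i] = C @ x
            x = x + dt * (A @ x + B * ui)
        return y

    for k in range(15):
        y = simulate(u)
        e = yd - y
        e_dot = np.gradient(e, dt)
        # Lebesgue-p norm of the tracking error (p = 2 here):
        lp = (np.sum(np.abs(e) ** 2) * dt) ** 0.5
        print(f"iteration {k:2d}: ||e||_L2 = {lp:.6f}")
        # PD-type update: u_{k+1} = u_k + gamma_p * e_k + gamma_d * e'_k
        u = u + gamma_p * e + gamma_d * e_dot

Under these invented settings the printed L2 error norm should shrink over the iterations but settle at a nonzero floor, mirroring the residual error that the papers attribute to the initial state shift.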
7.
For a class of linear time-invariant systems, the learning control problem in the presence of a fixed initial shift is discussed. A proportional-derivative (PD)-type learning control algorithm with a feedback auxiliary term is proposed, its monotone convergence in the sense of the Lebesgue-p norm is analyzed, and asymptotic tracking of the desired trajectory is obtained. Further, to achieve complete tracking of the desired trajectory by the system output, a proportional-multiple-integral-derivative (PMID)-type learning law with an initial rectifying strategy is given, and the monotone convergence performance of the proposed learning algorithm is analyzed. Finally, numerical results verify the tracking performance and monotone convergence of the proposed learning algorithms.
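The "multiple integral" in the PMID-type law plausibly refers to stacked integral terms; a sketch under that assumption (the number of terms m and all gains are illustrative, not the paper's definition):

    % PMID-type update with m nested integral terms, I^j denoting the j-fold integral:
    u_{k+1}(t) = u_k(t) + \Gamma_P e_k(t)
                 + \sum_{j=1}^{m} \Gamma_{I_j} \big( I^{\,j} e_k \big)(t)
                 + \Gamma_D \dot{e}_k(t)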
8.
This paper addresses the convergence of first-order and second-order PD-type iterative learning control schemes for a class of partially known linear time-invariant systems. By taking advantage of the generalized Young inequality of the convolution integral, the convergence is analyzed in the sense of the Lebesgue-p norm and the convergence speed is discussed in terms of Qp factors. Specifically, we find that: (1) the sufficient condition for convergence is dominated not only by the derivative learning gains, together with the system input and output matrices, but also by the proportional learning gains and the system state matrix; (2) strictly monotone convergence is guaranteed for the first-order rule while, for the second-order scheme, monotonicity is attained after a finite number of iterations; and (3) the iterative learning process driven by the second-order scheme can be Qp-faster, Qp-equivalent, or Qp-slower than that driven by the first-order rule if the learning gains are appropriately chosen. Several numerical simulations manifest the validity and effectiveness of the results. Copyright © 2011 John Wiley and Sons Asia Pte Ltd and Chinese Automatic Control Society
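Under the usual Q-factor convention carried over to the Lebesgue-p norm (an assumption about notation, not a quote from the paper), the Qp factor used to compare convergence speed reads:

    % Q-factor of the error sequence measured in the Lebesgue-p norm:
    Q_p = \limsup_{k \to \infty} \frac{\| e_{k+1} \|_{L^p}}{\| e_k \|_{L^p}}
    % A smaller Q_p means a Q_p-faster iterative learning process.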