17 similar documents found (search time: 171 ms)
1.
2.
3.
4.
In adaptive predictive function control of systems with unknown parameters, an inaccurate model, caused either by identification that has not yet converged or by external disturbances, can severely degrade control performance and produce large overshoot and oscillation. To address this, a Dual Adaptive Predictive Function Control (DAPFC) algorithm is proposed. During model identification, a dual-control method adjusts the original adaptive control law according to the magnitude of the identification error, so as to acquire as much information about the unknown parameters as possible while suppressing fluctuations of the control signal caused by model mismatch. The algorithm improves control performance under model mismatch and exhibits strong robustness. Simulation results show that it achieves good control quality.
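The idea of scaling the excitation of the adaptive law by the identification error can be sketched roughly as follows. This is a hypothetical illustration: the function name `dapfc_probe`, the parameters `err_scale` and `probe_amp`, and the alternating-sign dither are all our own constructions, not the paper's actual dual law, which is derived from its predictive-control formulation.

```python
def dapfc_probe(u_nominal, ident_error, err_scale, probe_amp, k):
    """Hypothetical sketch: superimpose an excitation term whose amplitude
    grows with the current identification error, so the controller probes
    harder while the model is still poorly identified and backs off as the
    estimate converges (suppressing mismatch-induced fluctuations)."""
    w = min(1.0, abs(ident_error) / err_scale)              # normalized model-quality weight
    dither = probe_amp * w * (1.0 if k % 2 == 0 else -1.0)  # alternating-sign probe signal
    return u_nominal + dither
```

When the identification error is zero the probe vanishes and the nominal control is applied unchanged.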
5.
An Information-Theoretic Approach to Stochastic Adaptive Control
From the standpoint of Shannon information theory, the minimum-entropy method and the maximum-mutual-information method are applied and compared for the adaptive control of stochastic systems with uncertain model parameters. For this class of systems, the adaptive control law derived from the maximum-mutual-information method inherently exhibits dual-control behavior.
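A scalar example shows why a maximum-mutual-information law probes. For the one-step channel y = θu + v with θ ~ N(0, σθ²) and v ~ N(0, σv²), the mutual information between the unknown parameter and the observation is I(θ; y) = ½ ln(1 + u²σθ²/σv²), which grows with |u|: the control must excite the system to learn. This scalar model is our illustration, not the paper's system.

```python
import math

def param_obs_mutual_info(u, var_theta, var_v):
    """I(theta; y) in nats for the scalar channel y = theta*u + v,
    theta ~ N(0, var_theta), v ~ N(0, var_v), independent.
    I = 0.5*ln(Var(y)/Var(y|theta)) = 0.5*ln(1 + u^2*var_theta/var_v)."""
    return 0.5 * math.log(1.0 + (u * u) * var_theta / var_v)
```

With u = 0 no information is gained at all, which is the tension between regulation (small u) and estimation (large |u|) that dual control resolves.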
6.
7.
8.
9.
10.
The information fusion estimation problem is studied for multisensor linear discrete-time stochastic systems with unknown model parameters and fading measurement rates. For the case where both the model parameters and the fading measurement rates are unknown, a distributed fusion identifier for the unknown model parameters is proposed by applying the recursive extended least squares (RELS) algorithm and a weighted fusion estimation algorithm; correlation functions are applied to the mathematical expectation and variance of the random variables describing the fading measurement phenomenon...
11.
In this paper, we examine the problem of optimal state estimation, or filtering, in stochastic systems using an approach based on information-theoretic measures. In this setting, the traditional minimum mean-square measure is compared with information-theoretic measures, Kalman filtering theory is reexamined, and some new interpretations are offered. We show that for a linear Gaussian system, the Kalman filter is the optimal filter not only for the mean-square error measure but also for several information-theoretic measures introduced in this work. For nonlinear systems, these same measures are generally in conflict with one another, and the feedback control policy has a dual role with regard to regulation and estimation. For linear stochastic systems with general noise processes, a lower bound on the achievable mutual information between the estimation error and the observation is derived. The properties of an optimal (probing) control law and the associated optimal filter, which achieve this lower bound, and their relationships are investigated. It is shown that for a linear stochastic system with an affine linear filter for the homogeneous system, under some reachability and observability conditions, zero mutual information between estimation error and observations can be achieved only when the system is Gaussian.
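The Kalman filter's minimum-mean-square property is easy to check numerically in the scalar case: the error-covariance recursion is independent of the data and converges to the Riccati fixed point, the smallest achievable prediction variance. A minimal sketch (the scalar model and the variable names are ours):

```python
def kalman_1d(ys, a, c, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for x_{k+1} = a*x_k + w_k, y_k = c*x_k + v_k,
    with Var(w) = q and Var(v) = r. Returns the filtered estimates and the
    final one-step-ahead (predicted) error variance."""
    x, p = x0, p0
    estimates = []
    for y in ys:
        k = p * c / (c * c * p + r)   # Kalman gain
        x = x + k * (y - c * x)       # measurement update
        p = (1.0 - k * c) * p
        estimates.append(x)
        x = a * x                     # time update
        p = a * a * p + q
    return estimates, p
```

For a stable scalar system the covariance converges geometrically, so after a few hundred steps p satisfies the discrete Riccati equation p = a²(p − p²c²/(c²p + r)) + q to high accuracy.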
12.
13.
An adaptive dual control algorithm is presented for linear stochastic systems with constant but unknown parameters. The system parameters are assumed to belong to a finite set on which a prior probability distribution is available. The tool used to derive the algorithm is preposterior analysis: a probabilistic characterization of the future adaptation process allows the controller to take advantage of the dual effect. The resulting actively adaptive control, called model adaptive dual (MAD) control, is compared with two passively adaptive control algorithms: the heuristic certainty equivalence (HCE) and the Deshpande-Upadhyay-Lainiotis (DUL) model-weighted controllers. An analysis technique developed for the comparison of different controllers is used to show a statistically significant improvement in the performance of the MAD algorithm over those of the HCE and DUL.
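The contrast with passive adaptation can be made concrete. A certainty-equivalence controller in the spirit of the HCE baseline estimates the parameter by least squares and then controls as if the estimate were exact; whatever it learns comes only from the excitation the regulator happens to supply. A noise-free scalar sketch (the model and all names below are ours, not the paper's):

```python
def hce_simulate(a_true, x0, steps):
    """Certainty-equivalent regulation of x_{k+1} = a*x_k + u_k with unknown a.
    A least-squares estimate a_hat is formed from past (x, u) data and then
    used as if exact: u_k = -a_hat * x_k (deadbeat for the estimated model).
    Learning here is "accidental": no term in the control rewards probing."""
    num = den = 0.0              # least-squares accumulators for a_hat
    x, u = x0, 0.0
    traj = [x]
    for _ in range(steps):
        x_next = a_true * x + u
        num += x * (x_next - u)  # regress x_next - u on x to estimate a
        den += x * x
        a_hat = num / den if den > 0.0 else 0.0
        x = x_next
        u = -a_hat * x           # certainty equivalence: treat a_hat as true
        traj.append(x)
    return traj
```

In the noise-free case a single informative transition identifies a exactly, so the state is driven to zero from the second step onward.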
14.
In this paper, using a polynomial transformation technique, we derive a mathematical model for dual-rate systems. Based on this model, we use a stochastic gradient algorithm to estimate the unknown parameters directly from the dual-rate input-output data, and then establish an adaptive control algorithm for dual-rate systems. We prove that the parameter estimation error converges to zero under persistent excitation, and that the parameter-estimation-based control algorithm achieves virtually asymptotically optimal control and ensures that the closed-loop system is stable and globally convergent. Simulation results are included.
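The stochastic gradient estimator referred to above has the standard recursive form r_k = r_{k−1} + ‖φ_k‖², θ̂_k = θ̂_{k−1} + (φ_k / r_k)(y_k − φ_kᵀθ̂_{k−1}). A minimal sketch of that recursion follows; the sinusoidal test signal used to exercise it below is our own persistently exciting example, not the paper's dual-rate data.

```python
def sg_step(theta, r, phi, y):
    """One stochastic-gradient step for the regression y = phi^T theta + noise:
    r <- r + ||phi||^2,  theta <- theta + (phi / r) * (y - phi^T theta)."""
    err = y - sum(p * t for p, t in zip(phi, theta))  # prediction error
    r = r + sum(p * p for p in phi)                   # growing normalization
    theta = [t + p * err / r for t, p in zip(theta, phi)]
    return theta, r
```

Because the gain 1/r_k decays like 1/k, convergence is slower than recursive least squares, but each informative step provably shrinks the noiseless estimation error.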
15.
S. Tan, International Journal of Control, 2013, 86(10): 1676-1689
The problem of sampled-data (SD) based adaptive linear quadratic (LQ) optimal control is considered for linear stochastic continuous-time systems with unknown parameters and disturbances. To overcome the difficulties caused by the unknown parameters and incompleteness of the state information, and to probe into the influence of sample size on system performance, a cost-biased parameter estimator and an adaptive control design method are presented. Under the assumption that the unknown parameter belongs to a known finite set, some sufficient conditions ensuring the convergence of the parameter estimate are obtained. It is shown that when the sample step size is small, the SD-based adaptive control is LQ optimal for the corresponding discretized system, and sub-optimal compared with that of the case where the parameter is known and the information is complete.
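The "corresponding discretized system" above is the zero-order-hold discretization over the sample step h. For a scalar plant dx/dt = a·x + b·u this has the closed form a_d = e^{ah}, b_d = b(e^{ah} − 1)/a (with b_d = bh when a = 0); as h → 0, (a_d, b_d) → (1 + ah, bh), which is why small sample steps recover near-optimal LQ performance. A sketch (the scalar case and the names are ours):

```python
import math

def zoh_discretize(a, b, h):
    """Zero-order-hold discretization of dx/dt = a*x + b*u with step h,
    giving the exact sampled model x_{k+1} = a_d*x_k + b_d*u_k for
    piecewise-constant input."""
    a_d = math.exp(a * h)
    b_d = b * (a_d - 1.0) / a if a != 0.0 else b * h
    return a_d, b_d
```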
16.
The problem of finding a cost functional for which an adaptive control law is optimal is discussed. The system under consideration is a partially observed linear stochastic system with unknown parameters. It is well known that an optimal finite-dimensional filter for this problem can be derived when the parameters belong to a finite set. Since the optimal filter involves the evaluation of a finite set of a posteriori probabilities for each of the parameter values given the observations, a natural adaptive control scheme is: (i) develop the optimal linear feedback law for each parameter; (ii) use the a posteriori probabilities to form the weighted average (convex combination) of the individual control policies; and (iii) use the weighted average as the control law. A quadratic cost functional is devised for which this strategy is optimal in a general case, and it is shown that the probing effect identified with dual control problems is inherent in the standard linear-quadratic-Gaussian problem with parameter uncertainty.
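Steps (i)-(iii) can be sketched for a scalar system x_{k+1} = θ·x_k + u_k + w_k with θ drawn from a finite set: update the a posteriori model probabilities by Bayes' rule after each transition, then apply the probability-weighted average of the per-model feedback laws. The scalar model and the example gains below are our own illustration:

```python
import math

def posterior_update(probs, thetas, x_prev, u_prev, x_new, var_w):
    """Bayes update of a posteriori model probabilities given the observed
    transition x_new = theta*x_prev + u_prev + w, with w ~ N(0, var_w)."""
    likes = [math.exp(-(x_new - (t * x_prev + u_prev)) ** 2 / (2.0 * var_w))
             for t in thetas]                       # Gaussian likelihoods
    z = sum(p * l for p, l in zip(probs, likes))    # normalizing constant
    return [p * l / z for p, l in zip(probs, likes)]

def weighted_control(probs, gains, x):
    """Steps (ii)-(iii): convex combination of per-model laws u_i = -k_i*x."""
    return -sum(p * k for p, k in zip(probs, gains)) * x
```

A transition generated by the true parameter raises that model's posterior, so the weighted law gradually concentrates on the correct feedback gain.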
17.
An actively adaptive control for linear systems with random parameters via the dual control approach
A new method is presented for controlling a discrete-time linear system with possibly time-varying random parameters in the presence of input and output noise. The cost is assumed to be quadratic in the state and control. Previous algorithms for the above problem when the system had both zeros and poles unknown were of the open-loop feedback type, i.e., they did not take into account that future observations will be made. Therefore, even though these schemes were adaptive, their learning was "accidental." In contrast to this, the new approach uses an expression of the optimal cost-to-go that exhibits the dual purpose of the control, i.e., learning and control. The effect of the present control on the future estimation ("learning") appears explicitly in the cost used in the stochastic dynamic programming equation. The resulting sequence of controls, which is of the closed-loop type, is shown via simulations to appropriately divide its energy between the learning and the control purposes. Therefore, this control is called actively adaptive because it regulates the speed and amount of learning as required by the performance index. The simulations on a third-order system with six unknown parameters also demonstrate the computational feasibility of the proposed algorithm.
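The explicit appearance of learning in the cost can be illustrated with a one-step scalar toy problem y = θu + v, θ ~ N(0, σθ²), v ~ N(0, σv²): the post-observation variance of θ is (1/σθ² + u²/σv²)^{-1}, so a cost that adds this term to the control effort rewards probing. The toy model and the weight below are ours, not the paper's performance index:

```python
def one_step_dual_cost(u, var_theta, var_v, lam):
    """Control effort plus residual parameter uncertainty after observing
    y = theta*u + v: larger |u| costs energy but sharpens the estimate,
    mirroring the learning term in a dual cost-to-go."""
    post_var = 1.0 / (1.0 / var_theta + u * u / var_v)  # posterior Var(theta)
    return lam * u * u + post_var
```

With a small effort weight, a nonzero (probing) input strictly beats u = 0, which is the "active" part of actively adaptive control.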