Similar Literature
1.
This paper presents a solution to the optimal control problem for discrete-time stochastic nonlinear polynomial systems corrupted by white Poisson noises, subject to a quadratic criterion. The solution is obtained in two steps: a nonlinear optimal controller is first developed for polynomial systems, assuming the state vector is completely available for control design. Then, based on the solution of the state estimation problem for polynomial systems with white Poisson noises, the state estimate vector is substituted into the control law to obtain a closed-form solution. Performance of this controller is compared to that of a controller employing the extended Kalman filter with the linear-quadratic regulator, and to that of the controller designed for polynomial systems corrupted by white Gaussian noises.
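For orientation, a generic form of the kind of setup described here (polynomial dynamics plus a quadratic criterion), written in illustrative notation that is not taken from the paper:

```latex
% Illustrative setup only -- notation is generic, not the paper's
\[
  x_{k+1} = f(x_k) + B_k u_k + w_k, \qquad
  J = \mathbb{E}\!\left[ x_N^{\top} F x_N
      + \sum_{k=0}^{N-1}\left( x_k^{\top} Q_k x_k + u_k^{\top} R_k u_k \right)\right],
\]
% f is a polynomial of the state and w_k a white (here Poisson-driven) noise;
% certainty equivalence replaces x_k in the feedback law by the filter estimate \hat{x}_k.
```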

2.
Consider a discrete-time nonlinear system with random disturbances appearing in both the real plant and the output channel, where the randomly perturbed output is measurable. An iterative procedure based on the linear quadratic Gaussian optimal control model is developed for solving the optimal control problem of this stochastic system. The optimal state estimate provided by Kalman filtering theory and the optimal control law obtained from the linear quadratic regulator problem are integrated into the dynamic integrated system optimisation and parameter estimation algorithm. Once convergence is achieved, the iterates of the model-based optimal control problem converge to the solution of the original optimal control problem of the discrete-time nonlinear system, despite model-reality differences. An illustrative example is solved using the proposed method, and the results demonstrate the effectiveness of the algorithm.
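A minimal Python sketch of the certainty-equivalence building blocks mentioned in this abstract — a Kalman measurement/time update feeding an LQR law — with all matrices invented for illustration (the iterative DISOPE procedure itself is not reproduced):

```python
# Certainty-equivalence sketch: LQR gain + Kalman update feeding the control law.
# The matrices below are illustrative placeholders, not the system from the paper.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 1.0]])        # linearised plant (assumed)
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
Q, R = np.eye(2), np.array([[0.1]])           # LQR weights (assumed)
W, V = 0.01 * np.eye(2), np.array([[0.05]])   # process / measurement noise covariances

# LQR gain from the discrete algebraic Riccati equation
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

def kalman_lqr_step(x_hat, Pcov, y):
    """One measurement update + control computation + time update."""
    S = C @ Pcov @ C.T + V
    L = Pcov @ C.T @ np.linalg.inv(S)          # Kalman gain
    x_upd = x_hat + L @ (y - C @ x_hat)        # measurement update
    P_upd = (np.eye(2) - L @ C) @ Pcov
    u = -K @ x_upd                             # certainty-equivalence LQR law
    x_pred = A @ x_upd + B @ u                 # time update
    P_pred = A @ P_upd @ A.T + W
    return u, x_pred, P_pred

u0, x_next, P_next = kalman_lqr_step(np.zeros(2), np.eye(2), np.array([1.0]))
```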

3.
A modified optimal algorithm for multirate output feedback controllers of linear stochastic periodic systems is developed. By combining the discrete-time linear quadratic regulation (LQR) control problem and the discrete-time stochastic linear quadratic regulation (SLQR) control problem into an extended linear quadratic regulation (ELQR) control problem, a general optimal algorithm is derived that balances the optimal transient response of the LQR problem against the optimal steady-state regulation of the SLQR problem. In general, the solution of this algorithm is obtained by solving a set of coupled matrix equations. Special cases for which the coupled matrix equations reduce to a discrete-time algebraic Riccati equation are discussed. One reducible case is the optimal algorithm derived by H.M. Al-Rahmani and G.F. Franklin (1990), where the system has complete state information and the discrete-time quadratic performance index is transformed from a continuous-time one.
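In those special cases, the coupled matrix equations collapse to the standard discrete-time algebraic Riccati equation, which in generic LQR notation (not the paper's) reads:

```latex
\[
  P = A^{\top} P A
      - A^{\top} P B \left(R + B^{\top} P B\right)^{-1} B^{\top} P A + Q,
  \qquad
  K = \left(R + B^{\top} P B\right)^{-1} B^{\top} P A,
\]
% with the stationary optimal feedback u_k = -K x_k.
```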

4.
The transformation into discrete-time equivalents of digital optimal control problems, involving continuous-time linear systems with white stochastic parameters, and quadratic integral criteria, is considered. The system parameters have time-varying statistics. The observations available at the sampling instants are in general nonlinear and corrupted by discrete-time noise. The equivalent discrete-time system has white stochastic parameters. Expressions are derived for the first and second moment of these parameters and for the parameters of the equivalent discrete-time sum criterion, which are explicit in the parameters and statistics of the original digital optimal control problem. A numerical algorithm to compute these expressions is presented. For each sampling interval, the algorithm computes the expressions recursively, forward in time, using successive equidistant evaluations of the matrices which determine the original digital optimal control problem. The algorithm is illustrated with three examples. If the observations at the sampling instants are linear and corrupted by multiplicative and/or additive discrete-time white noise, then, using recent results, full and reduced-order controllers that solve the equivalent discrete-time optimal control problem can be computed.
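A crude Python sketch of the forward, recursive computation over one sampling interval, restricted to the deterministic special case (no stochastic parameters and no criterion weights); A(t), B(t), the sampling period and the step count are illustrative assumptions:

```python
# Forward recursion over one sampling interval with equidistant evaluations of A(t), B(t),
# for the deterministic special case only. With zero-order-hold inputs, the returned pair
# satisfies x[k+1] = Phi x[k] + Gam u[k].
import numpy as np

def discrete_equivalents(A_of_t, B_of_t, h, n_steps=100):
    """Integrate dPhi/dt = A(t) Phi, dGam/dt = A(t) Gam + B(t) forward in time."""
    n = A_of_t(0.0).shape[0]
    m = B_of_t(0.0).shape[1]
    Phi = np.eye(n)
    Gam = np.zeros((n, m))
    dt = h / n_steps
    for i in range(n_steps):
        t = i * dt
        A_t, B_t = A_of_t(t), B_of_t(t)
        Phi = Phi + dt * (A_t @ Phi)          # forward Euler step
        Gam = Gam + dt * (A_t @ Gam + B_t)
    return Phi, Gam

# Example: time-varying scalar system xdot = -(1 + 0.5 sin t) x + u, sampling period h = 0.1
Phi, Gam = discrete_equivalents(lambda t: np.array([[-(1 + 0.5 * np.sin(t))]]),
                                lambda t: np.array([[1.0]]), h=0.1)
```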

5.
This article presents the optimal quadratic-Gaussian controller for uncertain stochastic polynomial systems with unknown coefficients and matched deterministic disturbances over linear observations and a quadratic criterion. The optimal closed-form controller equations are obtained through the separation principle, whose applicability to the considered problem is substantiated. As intermediate results, this article gives closed-form solutions of the optimal regulator, controller and identifier problems for stochastic polynomial systems with linear control input and a quadratic criterion. The original problem for uncertain stochastic polynomial systems with matched deterministic disturbances is solved using the integral sliding mode algorithm. Performance of the obtained optimal controller is verified in the illustrative example against the conventional quadratic-Gaussian controller that is optimal for stochastic polynomial systems with known parameters and without deterministic disturbances. Simulation graphs demonstrating overall performance and computational accuracy of the designed optimal controller are included.

6.
This paper designs a discrete-time filter for nonlinear polynomial systems driven by additive white Gaussian noises over linear observations. The solution is obtained by computing the time-update and measurement-update equations for the state estimate and the error covariance matrix. A closed form of this filter is obtained by expressing the conditional expectations of polynomial terms as functions of the estimate and the error covariance. As a particular case, a third-degree polynomial is considered to obtain the finite-dimensional filtering equations. Numerical simulations are performed for a third-degree polynomial system and an induction motor model. Performance of the designed filter is compared with that of the extended Kalman filter to verify its effectiveness.
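Schematically, and in generic notation rather than the paper's, such a closed-form filter has the familiar predictor–corrector structure for linear observations $y_k = C x_k + v_k$:

```latex
\[
  \begin{aligned}
  \text{time update:}\quad &
     m_{k|k-1} = \mathbb{E}\!\left[f(x_{k-1}) \mid Y_{k-1}\right], \qquad
     P_{k|k-1} = \mathbb{E}\!\left[(x_k - m_{k|k-1})(x_k - m_{k|k-1})^{\top} \mid Y_{k-1}\right],\\
  \text{measurement update:}\quad &
     K_k = P_{k|k-1} C^{\top}\!\left(C P_{k|k-1} C^{\top} + R\right)^{-1}, \qquad
     m_k = m_{k|k-1} + K_k\left(y_k - C\, m_{k|k-1}\right),
  \end{aligned}
\]
% the polynomial form of f is what allows the conditional expectations above to be
% written in closed form in terms of the estimate m and the covariance P.
```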

7.
Feedforward and feedback optimal control for disturbed nonlinear discrete-time systems
The linear-quadratic feedforward and feedback optimal control problem for nonlinear discrete-time systems with external disturbances is studied using the successive approximation approach. First, the optimal control problem of the system is transformed into a family of nonlinear two-point boundary value problems. Second, a solution sequence for this family, composed of an exact linear term and a nonlinear compensation term, is constructed, and the sequence is proved to converge uniformly to the optimal solution of the system. Finally, by truncating the nonlinear compensation term of the optimal control sequence to finitely many terms, a feedforward and feedback suboptimal control (FFSOC) law and its design algorithm are obtained. A simulation example shows that the algorithm is easy to implement and is more robust in rejecting external disturbances than the classical feedback suboptimal control (FSOC).

8.
This paper studies the optimal control problem for a class of discrete-time nonlinear stochastic systems containing both Markov jump processes and multiplicative noise, and states and proves the corresponding maximum principle. First, using the smoothing property of conditional expectation and introducing backward stochastic difference equations with adapted solutions, a representation of linear functionals constrained by linear difference equations is given, and its uniqueness is proved via the Riesz theorem. Second, for the nonlinear stochastic control system with Markov jumps, the spike variation technique is applied to take the first-order variation of the state equation, yielding the linear difference equation satisfied by the variation. Then, on the basis of an introduced Hamiltonian function, the maximum principle for the discrete-time nonlinear stochastic optimal control problem with Markov jumps is stated and proved through a pair of adjoint equations characterized by backward stochastic difference equations; a sufficient condition for this optimal control problem and the corresponding Hamilton-Jacobi-Bellman equation are also given. Finally, a practical example illustrates the applicability and feasibility of the proposed theory.
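As a schematic reminder only, the deterministic discrete-time analogue of the Hamiltonian and adjoint recursion reads as follows (generic notation, without the Markov-jump and backward-stochastic structure the paper actually handles):

```latex
\[
  H_k(x,u,p) = \ell_k(x,u) + p_{k+1}^{\top} f_k(x,u), \qquad
  p_k = \frac{\partial H_k}{\partial x}\!\left(x_k^{*},u_k^{*},p_{k+1}\right), \qquad
  p_N = \frac{\partial \phi}{\partial x}\!\left(x_N^{*}\right),
\]
% with the optimal control extremising H_k along the optimal trajectory; in the
% stochastic Markov-jump setting the adjoint pair is instead characterised by
% backward stochastic difference equations, as described in the abstract.
```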

9.
This paper aims at characterizing the most destabilizing switching law for discrete-time switched systems governed by a set of bounded linear operators. The switched system is embedded in a special class of discrete-time bilinear control systems. This allows us to apply the variational approach to the bilinear control system associated with a Mayer-type optimal control problem, and a second-order necessary optimality condition is derived. Optimal equivalence between the bilinear system and the switched system is analyzed, which shows that any optimal control law can be equivalently expressed as a switching law. This specific switching law is most unstable for the switched system, and thus can be used to determine stability under arbitrary switching. Based on the second-order moment of the state, the proposed approach is applied to analyze uniform mean-square stability of discrete-time switched linear stochastic systems. Numerical simulations are presented to verify the usefulness of the theoretical results.
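The following Python fragment is only a greedy heuristic illustration of "most destabilising switching via second moments"; it is not the variational, second-order construction of the paper, and the mode matrices are invented:

```python
# Greedy heuristic: at every step, pick the mode that inflates the second moment of the
# state the most. Sustained growth of the moment suggests lack of uniform mean-square
# stability under arbitrary switching. Mode matrices are illustrative (each is Schur stable).
import numpy as np

modes = [np.array([[0.5, 1.2], [0.0, 0.5]]),
         np.array([[0.5, 0.0], [1.2, 0.5]])]

def greedy_worst_switching(Q0, steps=50):
    """Evolve Q <- A Q A^T, choosing at each step the mode with the largest resulting trace."""
    Q, sigma = Q0, []
    for _ in range(steps):
        candidates = [A @ Q @ A.T for A in modes]
        i = int(np.argmax([np.trace(C) for C in candidates]))
        sigma.append(i)
        Q = candidates[i]
    return sigma, Q

sigma, Q_final = greedy_worst_switching(np.eye(2))
print("final second-moment trace:", np.trace(Q_final))
```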

10.
A new approach for the solution of the regulator problem for linear discrete-time dynamical systems with non-Gaussian disturbances is proposed. This approach generalizes a previous result concerning the definition of the quadratic optimal regulator. It consists of the definition of a polynomial optimal algorithm of order ν for the solution of the linear quadratic non-Gaussian stochastic regulator problem for systems with partial state information. The validity of the separation principle has also been proved in this case. Numerical simulations show the high performance of the proposed method with respect to classical linear regulation techniques.

11.
A new solution to the quadratic optimal tracking problem in linear multivariable discrete-time systems is presented. The approach is based on polynomial input-output models. The optimal control law consists of a plant output feedback and a reference feedforward. It is obtained by performing spectral factorization and then solving two matrix polynomial equations. The design procedure is simple and well suited for systems with inaccessible states.

12.
This work presents a polynomial version of the well-known extended Kalman filter (EKF) for the state estimation of nonlinear discrete-time stochastic systems. The proposed filter, denoted polynomial EKF (PEKF), consists in the application of the optimal polynomial filter of a chosen degree μ to the Carleman approximation of a nonlinear system. When μ=1 the PEKF algorithm coincides with the standard EKF. For the filter implementation the moments of the state and output noises up to order 2μ are required. Numerical simulations compare the performances of the PEKF with those of some other existing filters, showing significant improvements.

13.
A finite horizon linear quadratic (LQ) optimal control problem is studied for a class of discrete-time linear fractional systems (LFSs) affected by multiplicative, independent random perturbations. Based on the dynamic programming technique, two methods are proposed for solving this problem. The first one seems to be new and uses a linear, expanded-state model of the LFS; the LQ optimal control problem reduces to a similar one for stochastic linear systems, and the solution is obtained by solving Riccati equations. The second method appeals to the principle of optimality and provides an algorithm for the computation of the optimal control and cost by using the fractional system directly. As expected, in both cases the optimal control is a linear function of the state and can be computed by a computer program. A numerical example and comparative simulations of the optimal trajectory demonstrate the effectiveness of the two methods. Further simulations are provided for different values of the fractional order.

14.
A design method for optimal tracking control is proposed for discrete-time systems with time delays in both the state and the control input. By introducing a new state vector, the discrete-time system with state and input delays is transformed into a delay-free discrete-time system containing a virtual disturbance term. Based on optimal control theory, sequences of discrete Riccati matrix equations and discrete Stein matrix equations are constructed, and the solution sequence is proved to converge uniformly to the optimal tracking control strategy of the transformed discrete-time system. Using the successive approximation design method of optimal control, an approximate solution of the optimal tracking control is obtained, and an algorithm for computing the optimal tracking control law is given. A simulation example demonstrates the effectiveness of the proposed optimal tracking control method.
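A small Python sketch of the delay-removal step for the simplest case of one step of state delay and one step of input delay; matrices are illustrative, and the virtual disturbance term used in the paper is omitted:

```python
# State augmentation: a system with one step of state delay and one step of input delay
# becomes delay-free in the augmented variable z[k] = [x[k]; x[k-1]; u[k-1]].
import numpy as np

def augment_one_step_delays(A, A1, B0, B1):
    """x[k+1] = A x[k] + A1 x[k-1] + B0 u[k] + B1 u[k-1]
       ->  z[k+1] = Aa z[k] + Ba u[k]."""
    n, m = A.shape[0], B0.shape[1]
    Aa = np.block([[A,                A1,               B1],
                   [np.eye(n),        np.zeros((n, n)), np.zeros((n, m))],
                   [np.zeros((m, n)), np.zeros((m, n)), np.zeros((m, m))]])
    Ba = np.vstack([B0, np.zeros((n, m)), np.eye(m)])
    return Aa, Ba

Aa, Ba = augment_one_step_delays(np.array([[0.9, 0.1], [0.0, 0.8]]),
                                 0.1 * np.eye(2),
                                 np.array([[0.0], [1.0]]),
                                 np.array([[0.1], [0.0]]))
```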

15.
This paper proposes an optimal control algorithm for a polynomial system with a quadratic criterion over an infinite horizon. The designed regulator gives a closed-form solution to the infinite-horizon optimal control problem for a polynomial system with a quadratic criterion. The obtained solution consists of a feedback control law found by solving a state-dependent algebraic Riccati equation. Numerical simulations in an example show the advantages of the developed algorithm.
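A hedged Python sketch of a state-dependent Riccati design in this spirit: the scalar polynomial system, the weights and the repeated numerical DARE solves are illustrative stand-ins for the paper's analytical closed-form regulator:

```python
# State-dependent Riccati sketch: factor the polynomial dynamics as x[k+1] = A(x) x + B u
# and re-solve a discrete algebraic Riccati equation at the current state.
import numpy as np
from scipy.linalg import solve_discrete_are

B = np.array([[1.0]])
Q, R = np.eye(1), np.array([[1.0]])

def A_of_x(x):
    # polynomial dynamics x[k+1] = 0.8*x + 0.1*x**3 written as A(x) * x
    return np.array([[0.8 + 0.1 * float(x) ** 2]])

def sdre_control(x):
    A = A_of_x(x)
    P = solve_discrete_are(A, B, Q, R)                    # state-dependent DARE
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return float(-K @ np.atleast_1d(x))

x = 1.5
for _ in range(20):                                       # closed-loop simulation
    u = sdre_control(x)
    x = 0.8 * x + 0.1 * x ** 3 + u
print("final state:", x)
```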

16.
The stochastic model considered is a linear jump diffusion process X for which the coefficients and the jump processes depend on a Markov chain Z with finite state space. First, we study the optimal filtering and control problem for these systems with non-Gaussian initial conditions, given noisy observations of the state X and perfect measurements of Z. We derive a new sufficient condition which ensures the existence and the uniqueness of the solution of the nonlinear stochastic differential equations satisfied by the output of the filter. We study a quadratic control problem and show that the separation principle holds. Next, we investigate an adaptive control problem for a state process X defined by a linear diffusion for which the coefficients depend on a Markov chain, the processes X and Z being observed in independent white noises. Suboptimal estimates for the processes X and Z and an approximate control law are investigated for a large class of probability distributions of the initial state. Asymptotic properties of these filters and of this control law are obtained. Upper bounds for the corresponding errors are given.

17.
In this paper, we develop a unified framework to address the problem of optimal nonlinear analysis and feedback control for nonlinear stochastic dynamical systems. Specifically, we provide a simplified and tutorial framework for stochastic optimal control and focus on connections between stochastic Lyapunov theory and stochastic Hamilton–Jacobi–Bellman theory. In particular, we show that asymptotic stability in probability of the closed‐loop nonlinear system is guaranteed by means of a Lyapunov function that can be seen to be the solution to the steady‐state form of the stochastic Hamilton–Jacobi–Bellman equation, hence guaranteeing both stochastic stability and optimality. In addition, we develop optimal feedback controllers for affine nonlinear systems using an inverse optimality framework tailored to the stochastic stabilization problem. These results are then used to provide extensions of the nonlinear feedback controllers obtained in the literature that minimize general polynomial and multilinear performance criteria.
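For reference, the steady-state stochastic Hamilton–Jacobi–Bellman equation referred to above can be written, in generic notation for Itô dynamics, as:

```latex
\[
  0 \;=\; \min_{u}\Big[\, L(x,u) \;+\; V_x(x)^{\top} f(x,u)
      \;+\; \tfrac{1}{2}\,\mathrm{tr}\!\left( g(x,u)^{\top} V_{xx}(x)\, g(x,u) \right) \Big],
  \qquad V(0)=0,
\]
% for d x = f(x,u) dt + g(x,u) dw; the minimising u = phi(x) is the optimal feedback
% and V doubles as the stochastic Lyapunov function.
```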

18.
In this paper, a stochastic linear quadratic optimal tracking scheme is proposed for unknown linear discrete-time (DT) systems based on an adaptive dynamic programming (ADP) algorithm. First, an augmented system composed of the original system and the command generator is constructed, and an augmented stochastic algebraic equation is derived from the augmented system. Next, to obtain the optimal control strategy, the stochastic case is converted into a deterministic one by system transformation, and an ADP algorithm is proposed with convergence analysis. To realize the ADP algorithm, three back-propagation neural networks, namely the model network, the critic network and the action network, are devised to approximate the unknown system model, the optimal value function and the optimal control strategy, respectively. Finally, the obtained optimal control strategy is applied to the original stochastic system, and two simulations are provided to demonstrate the effectiveness of the proposed algorithm.
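A minimal model-based baseline for this tracking setup, assuming invented plant, command-generator and weight matrices: build the augmented system and solve the tracking LQR through a Riccati equation (this is the kind of solution the ADP scheme is meant to learn from data, not the ADP algorithm itself):

```python
# Augmented LQ tracking baseline: X = [x; r], r[k+1] = F r[k], cost on e = Cz x - r.
import numpy as np
from scipy.linalg import solve_discrete_are, block_diag

A = np.array([[1.0, 0.1], [0.0, 0.9]])   # plant (assumed)
B = np.array([[0.0], [0.1]])
F = np.array([[0.98]])                    # command generator, slightly damped so the DARE is solvable
Cz = np.array([[1.0, 0.0]])               # tracked output z = Cz x

T  = block_diag(A, F)                     # augmented dynamics for X = [x; r]
B1 = np.vstack([B, np.zeros((1, 1))])
E  = np.hstack([Cz, -np.eye(1)])          # tracking error e = E X
Qe, R = 10.0 * np.eye(1), np.array([[1.0]])

Qa = E.T @ Qe @ E                         # augmented state weight
P  = solve_discrete_are(T, B1, Qa, R)
K  = np.linalg.solve(R + B1.T @ P @ B1, B1.T @ P @ T)   # u = -K [x; r]
print("tracking feedback/feedforward gain:", K)
```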

19.
李金娜, 尹子轩. 《控制与决策》, 2019, 34(11): 2343-2349
For the tracking control problem of networked control systems with packet loss, an off-policy Q-learning method is proposed that relies entirely on measurable data and enables the system to track the target in a near-optimal manner when the system model parameters are unknown and the network communication suffers from data loss. First, networked control systems with packet loss are characterized, and the tracking control problem for linear discrete-time networked control systems is formulated. Then, a Smith predictor is designed to compensate for the effect of packet loss on the performance of the networked control system, and the optimal tracking control problem for networked control systems with packet-loss compensation is constructed. Finally, by combining dynamic programming and reinforcement learning, an off-policy Q-learning algorithm is proposed. The advantages of the algorithm are that it does not require the system model parameters to be known, it learns the optimal tracking control policy based on predictor state feedback from the measurable data of the networked control system, and it guarantees the unbiasedness of the solution of the Q-function-based iterative Bellman equation. Simulations verify the effectiveness of the proposed method.
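A rough Python sketch of the packet-loss compensation idea only (model-based prediction when a measurement packet is dropped); the matrices, the loss probability and the feedback gain are assumptions, and the off-policy Q-learning part is not reproduced:

```python
# When the measurement packet is lost, a model-based predictor propagates the last
# available state so the state-feedback law always has something to act on.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
K = np.array([[0.5, 1.0]])              # some stabilising feedback gain (assumed)

x  = np.array([1.0, 0.0])               # true plant state
xp = x.copy()                           # predictor state used by the controller
for k in range(100):
    u = -K @ xp                                  # control based on predictor state
    x = A @ x + (B @ u).ravel()                  # plant update
    if rng.random() > 0.2:                       # packet received (80% of the time)
        xp = x.copy()                            # predictor reset by the measurement
    else:                                        # packet lost
        xp = A @ xp + (B @ u).ravel()            # model-based prediction step
print("final state norm:", np.linalg.norm(x))
```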

20.
Some observations on, and improvements to, the conventional Kalman filtering scheme are presented so that it functions properly. The improvements are achieved using the minimal-principle evolutionary programming (EP) technique. A new linearization methodology is presented to obtain exact linear models of a class of discrete-time nonlinear time-invariant systems at operating states of interest, so that the conventional Kalman filter can work for these nonlinear stochastic systems. Furthermore, a Kalman innovation filtering algorithm, and a version of this algorithm based on the evolutionary programming optimal-search technique, are proposed for discrete-time time-invariant nonlinear stochastic systems with unknown-but-bounded plant and noise uncertainties, in order to find a practically implementable "best" Kalman filter. The worst-case realization of the discrete-time nonlinear stochastic uncertain systems, represented in interval form, with respect to the implemented "best" nominal filter is also found, demonstrating the effectiveness of the proposed filtering scheme.
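A small Python sketch of obtaining a local linear model at an operating state so that a conventional Kalman filter can be applied around it; the finite-difference Jacobian is a generic stand-in, not the paper's exact linearization methodology, and the plant map is invented:

```python
# Local linear model of a discrete-time nonlinear time-invariant map at an operating state.
import numpy as np

def f(x):                                   # illustrative nonlinear plant map
    return np.array([0.9 * x[0] + 0.1 * np.sin(x[1]),
                     0.8 * x[1] + 0.05 * x[0] ** 2])

def jacobian(f, x0, eps=1e-6):
    """Finite-difference Jacobian of f at the operating state x0."""
    n = x0.size
    fx0 = f(x0)
    J = np.zeros((fx0.size, n))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        J[:, i] = (f(x0 + dx) - fx0) / eps
    return J

x_op = np.array([0.5, 0.2])                 # operating state of interest
A_lin = jacobian(f, x_op)                   # x+ ≈ f(x_op) + A_lin (x - x_op)
print(A_lin)
```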
