20 similar documents found (search time: 0 ms)
1.
2.
This paper is concerned with the infinite-horizon linear quadratic optimal control of discrete-time stochastic systems with both state- and control-dependent noise. Under assumptions of stabilization and exact observability, it is shown that the optimal control law and optimal value exist, and the properties of the associated discrete generalized algebraic Riccati equation (GARE) are also discussed. Copyright © 2008 John Wiley and Sons Asia Pte Ltd and Chinese Automatic Control Society
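The abstract above does not state the GARE itself. As a minimal numpy sketch, under the assumed dynamics x_{k+1} = (A x_k + B u_k) + (A1 x_k + B1 u_k) w_k with scalar zero-mean, unit-variance i.i.d. noise w_k and cost Σ (xᵀQx + uᵀRu), the GARE can be solved by fixed-point (value) iteration. All matrices below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Hypothetical 2-state, 1-input example: A, B are the drift matrices;
# A1, B1 scale the multiplicative (state- and control-dependent) noise.
A  = np.array([[1.0, 0.1], [0.0, 0.9]])
B  = np.array([[0.0], [1.0]])
A1 = 0.1 * np.eye(2)           # state-dependent noise intensity
B1 = np.array([[0.0], [0.1]])  # control-dependent noise intensity
Q  = np.eye(2)
R  = np.array([[1.0]])

def solve_gare(A, B, A1, B1, Q, R, iters=500, tol=1e-12):
    """Fixed-point iteration on the generalized algebraic Riccati equation
    P = Q + A'PA + A1'PA1 - M (R + B'PB + B1'PB1)^{-1} M',
    with cross term M = A'PB + A1'PB1."""
    P = Q.copy()
    for _ in range(iters):
        S = R + B.T @ P @ B + B1.T @ P @ B1   # effective control weighting
        M = A.T @ P @ B + A1.T @ P @ B1       # cross term
        P_next = Q + A.T @ P @ A + A1.T @ P @ A1 - M @ np.linalg.solve(S, M.T)
        if np.max(np.abs(P_next - P)) < tol:
            return P_next
        P = P_next
    return P

P = solve_gare(A, B, A1, B1, Q, R)
# Optimal state feedback u_k = -K x_k built from the GARE solution:
K = np.linalg.solve(R + B.T @ P @ B + B1.T @ P @ B1,
                    B.T @ P @ A + B1.T @ P @ A1)
```

Under the stabilizability/observability assumptions discussed in the abstract, the iteration converges monotonically from P = Q to the stabilizing solution.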
3.
W.L. De Koning, Automatica, 1982, 18(4): 443-453
The infinite-horizon optimal control problem is considered in the general case of linear discrete-time systems and quadratic criteria, both with stochastic parameters that are independent with respect to time. A stronger stabilizability property and a weaker observability property than usual for deterministic systems are introduced. It is shown that the infinite-horizon problem has a solution if the system has the first property. If, in addition, the system has the second property, the solution is unique and the control system is stable in the mean-square sense. A simple necessary and sufficient condition, explicit in the system matrices, is given for the system to have the stronger stabilizability property. This condition also holds for deterministic systems to be stabilizable in the usual sense. The stronger stabilizability and weaker observability properties coincide with the usual ones if the parameters are deterministic.
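A standard condition of this explicit, matrix-based kind (not necessarily the one in the paper) is the mean-square stability test for systems with i.i.d. random parameters: for x_{k+1} = A_k x_k with A_k = A0 + w_k A1, E[w_k] = 0, E[w_k²] = 1, the second moment X_k = E[x_k x_kᵀ] evolves linearly via vec(X_{k+1}) = (A0⊗A0 + A1⊗A1) vec(X_k), so mean-square stability holds iff the spectral radius of that Kronecker matrix is below one. A sketch with assumed illustrative matrices:

```python
import numpy as np

def mean_square_stable(A0, A1):
    """Mean-square stability of x_{k+1} = (A0 + w_k*A1) x_k, w_k i.i.d.,
    zero mean, unit variance: spectral radius of E[A_k (x) A_k] < 1."""
    T = np.kron(A0, A0) + np.kron(A1, A1)   # second-moment transition matrix
    return bool(np.max(np.abs(np.linalg.eigvals(T))) < 1.0)

# Illustrative (assumed) matrices: stable mean dynamics, moderate noise.
A0 = np.array([[0.5, 0.2], [0.0, 0.7]])
A1 = 0.3 * np.eye(2)
print(mean_square_stable(A0, A1))  # → True
```

Note that mean-square stability is strictly stronger than stability of the mean dynamics A0 alone: large enough A1 destabilizes the second moment even when A0 is Schur stable.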
4.
To address a computationally intractable optimal control problem for a class of stochastic hybrid systems, this paper proposes a near-optimal state feedback control scheme, which is constructed by using a statistical prediction method based on an approximate numerical solution that samples over the entire state space. A numerical example illustrates the potential of the approach.
5.
The author considers a class of discrete-time nonlinear stochastic control systems whose nonlinearities are described by statistical means. By introducing the proper mean-square stabilizability and detectability properties of such systems and giving a characterization of these properties in terms of system parameters, he examines the steady-state properties of quadratically optimal controllers. A robustness property of these controllers is pointed out.
6.
Koichi Kobayashi, Koichiro Matou, Kunihiko Hiraishi, International Journal of Control, Automation and Systems, 2012, 10(5): 897-904
Stochastic hybrid systems have several applications, such as biological systems and communication networks, but it is difficult to consider control of general stochastic hybrid systems. In this paper, a class of discrete-time stochastic hybrid systems, in which only the discrete dynamics are stochastic, is considered. For this system, a solution method for the optimal control problem with probabilistic constraints is proposed. Probabilistic constraints guarantee that the probability that the continuous state reaches a given unsafe region is less than a given constant. In the proposed method, first, the continuous state regions from which the state can reach a given unsafe region are computed by a backward-reachability graph. Next, mixed integer quadratic programming problems with constraints derived from the backward-reachability graph are solved. The proposed method can be applied to model predictive control.
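The backward-reachability computation in the method above operates on continuous state regions, but its discrete skeleton is an ordinary reverse reachability search over a region-transition graph. A minimal sketch on a hypothetical graph (not the paper's construction):

```python
from collections import deque

def backward_reach(edges, unsafe):
    """Return the set of nodes from which some path reaches the unsafe set.

    edges  : dict mapping node -> iterable of successor nodes
    unsafe : iterable of unsafe nodes
    """
    # Invert the transition relation, then BFS backwards from the unsafe nodes.
    preds = {}
    for u, succs in edges.items():
        for v in succs:
            preds.setdefault(v, []).append(u)
    reach, queue = set(unsafe), deque(unsafe)
    while queue:
        v = queue.popleft()
        for u in preds.get(v, []):
            if u not in reach:
                reach.add(u)
                queue.append(u)
    return reach

# Toy region graph: region 3 is unsafe; 0 -> 1 -> 3, while 2 loops safely.
g = {0: [1], 1: [3], 2: [2], 3: []}
print(sorted(backward_reach(g, [3])))  # → [0, 1, 3]
```

States outside the returned set can then be left unconstrained in the MIQP, which is what keeps the probabilistic-constraint formulation tractable.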
7.
We consider a class of time-varying control models with possibly unbounded costs. The processes evolve according to the system equation x_{n+1} = G_n(x_n, a_n) + ξ_n, where {ξ_n} are i.i.d. random vectors and {G_n} is a sequence of known functions converging to some function G_∞. Under suitable hypotheses, we show the existence of an α-discount optimal policy for the limiting system x_{n+1} = G_∞(x_n, a_n) + ξ_n.
8.
In this article, the problem of H2-control of a discrete-time linear system subject to Markovian jumping and independent random perturbations is considered. Different H2 performance criteria (often called H2-norms) are introduced and characterised via solutions of some suitable linear equations on certain spaces of symmetric matrices. Some aspects specific to the discrete-time framework are revealed. The problem of optimisation of H2-norms is solved under the assumption that the full state vector is available for measurement. It is shown that among all stabilising controllers of higher dimension, the best performance is achieved by a zero-order controller. The corresponding feedback gain of the optimal controller is constructed based on the stabilising solution of a system of discrete-time generalised Riccati equations.
9.
10.
Composite hierarchical antidisturbance control for a class of discrete-time stochastic systems
In this paper, the antidisturbance control and estimation problems are discussed for a class of discrete-time stochastic systems with nonlinearity and multiple disturbances, which include a disturbance with partially known information and a sequence of random vectors. A disturbance observer is constructed to estimate the disturbance with partially known information. A composite hierarchical antidisturbance control scheme is proposed by combining the disturbance observer and H∞ control. It is proved that the two different disturbances can be rejected and attenuated, respectively, and that the corresponding desired performances can be guaranteed for discrete-time stochastic systems with known and unknown nonlinear dynamics. Simulation examples are given to demonstrate the effectiveness of the proposed scheme.
11.
In this paper, we investigate the adaptive output-feedback stabilisation for a class of stochastic non-linear systems with time-varying time delays. First, we give some sufficient conditions to ensure the existence and uniqueness of the solution process for stochastic non-linear systems with time delays, and introduce a new stability notion and the related criterion. Then, for a class of stochastic non-linear systems with time-varying time delays, uncertain parameters in both drift and diffusion terms, and general constant virtual control coefficients, we present a systematic design procedure for a memoryless adaptive output-feedback control law by using the backstepping method. It is shown that under the control law based on a memoryless observer, the closed-loop equilibrium of interest is globally stable in probability, and moreover, the solution process can be regulated to the origin almost surely.
12.
The problem considered in this paper deals with the control of linear discrete-time stochastic systems with unknown (possibly time-varying and random) gain parameters. The philosophy of control is based on the use of an open-loop feedback optimal (OLFO) control using a quadratic index of performance. It is shown that the OLFO system consists of 1) an identifier that estimates the system state variables and gain parameters and 2) a controller described by an "adaptive" gain and correction term. Several qualitative properties and asymptotic properties of the OLFO adaptive system are discussed. Simulation results dealing with the control of stable and unstable third-order plants are presented. The key quantitative result is the precise variation of the control system adaptive gains as a function of the future expected uncertainty of the parameters; thus, in this problem the ordinary "separation theorem" does not hold.
13.
14.
This paper discusses the discrete-time stochastic linear quadratic (LQ) problem in the infinite horizon with state- and control-dependent noise, where the weighting matrices in the cost function are allowed to be indefinite. The problem gives rise to a generalized algebraic Riccati equation (GARE) that involves equality and inequality constraints. The well-posedness of the indefinite LQ problem is shown to be equivalent to the feasibility of a linear matrix inequality (LMI). Moreover, the existence of a stabilizing solution to the GARE is equivalent to the attainability of the LQ problem. All optimal controls are obtained in terms of the solution to the GARE. Finally, we give an LMI-based approach to solve the GARE via semidefinite programming.
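The LMI in question collects the GARE blocks into one matrix. As a numpy-only sketch under assumed dynamics x_{k+1} = (A x_k + B u_k) + (A1 x_k + B1 u_k) w_k with illustrative (definite, for simplicity) weights, the block matrix L(P) = [[Q + AᵀPA + A1ᵀPA1 − P, AᵀPB + A1ᵀPB1], [BᵀPA + B1ᵀPA1, R + BᵀPB + B1ᵀPB1]] is the quantity required to be positive semidefinite, and at a Riccati fixed point its Schur complement vanishes:

```python
import numpy as np

# Illustrative (assumed) data, not taken from the paper.
A, B = np.array([[1.0, 0.1], [0.0, 0.9]]), np.array([[0.0], [1.0]])
A1, B1 = 0.1 * np.eye(2), np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[1.0]])

def lmi_block(P):
    """Block matrix whose positive semidefiniteness is the LMI feasibility test."""
    top = Q + A.T @ P @ A + A1.T @ P @ A1 - P
    M = A.T @ P @ B + A1.T @ P @ B1
    S = R + B.T @ P @ B + B1.T @ P @ B1
    return np.block([[top, M], [M.T, S]])

# Riccati fixed-point iteration: drives the Schur complement of lmi_block(P)
# with respect to its lower-right block to zero.
P = Q.copy()
for _ in range(1000):
    S = R + B.T @ P @ B + B1.T @ P @ B1
    M = A.T @ P @ B + A1.T @ P @ B1
    P = Q + A.T @ P @ A + A1.T @ P @ A1 - M @ np.linalg.solve(S, M.T)

print(np.linalg.eigvalsh(lmi_block(P)).min() >= -1e-8)  # PSD at the solution
```

In the indefinite setting of the paper, one instead searches over all P satisfying L(P) ⪰ 0 (a semidefinite program); the sketch above only illustrates the block structure being constrained.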
15.
This paper presents a solution to the discrete-time optimal control problem for stochastic nonlinear polynomial systems over linear observations and a quadratic criterion. The solution is obtained in two steps: first, the optimal control algorithm is developed for nonlinear polynomial systems by assuming complete information when generating a control law; then, the state estimate equations for a discrete-time stochastic nonlinear polynomial system over linear observations are employed. The closed-form solution is finally obtained by substituting the state estimates into the obtained control law. The designed optimal control algorithm can be applied to both distributed and lumped systems. To show the effectiveness of the proposed controller, an illustrative example is presented for a second-degree polynomial system. The obtained results are compared to the optimal control for the linearized system.
16.
Hui Zhang, International Journal of Control, 2018, 91(2): 253-265
In this paper, the problem of adaptive tracking is considered for a class of stochastic switched systems. As a preliminary, the criterion of global asymptotic practical stability in probability is first presented with the aid of the common Lyapunov function method. Based on this stability criterion, adaptive backstepping controllers are designed to guarantee that the closed-loop system has a unique global solution, which is globally asymptotically practically stable in probability, and that the tracking error converges in the fourth moment to an arbitrarily small neighbourhood of zero. Simulation examples are given to demonstrate the efficiency of the proposed schemes.
17.
Xiufeng Miao, International Journal of Computer Mathematics, 2015, 92(11): 2251-2260
In this paper, the adaptive state estimation and state-feedback stabilization problems for a class of nonlinear stochastic systems with unknown constant parameters are studied. Sequential design methods are proposed to construct the adaptive controllers. Adaptive state and parameter estimators are designed by using a stochastic Lyapunov method and a separation approach to the design of the state-feedback gain and observer gain, which guarantees that the closed-loop system is asymptotically stable in the mean-square sense. Sufficient conditions for the existence of the parameter estimator are given in terms of linear matrix inequalities. Finally, numerical examples are provided to illustrate the feasibility of the proposed theoretical results.
18.
In this contribution, we obtain a nonlinear controller for a class of nonlinear time-delay systems by using the inverse optimality approach. We avoid solving the Hamilton-Jacobi-Bellman-type equation and determining the Bellman functional by extending the inverse optimality approach for delay-free nonlinear systems to time-delay nonlinear systems. This is achieved by combining the control Lyapunov function framework and Lyapunov-Krasovskii functionals of complete type. Explicit formulas for an optimal control are obtained. The efficiency of the proposed method is illustrated via experimental results applied to a dehydration process whose model includes a delayed-state linear part and a delayed nonlinear part. To give evidence of the good performance of the proposed control law, experimental comparisons against an industrial proportional-integral-derivative (PID) controller and an optimal linear controller are presented. Additionally, experimental robustness tests are presented.
19.
Bellman's principle of optimality and dynamic programming are shown to be the basis for solution of a physically significant class of nonlinear stochastic control problems. Various previous results are integrated into a survey here, and a new result which extends the separation principle is presented. Certain bilinear and linear-in-control systems are included in the analysis.