Similar Literature
20 similar documents retrieved (search time: 0 ms)
1.
2.
The transformation of digital optimal control problems, involving continuous-time linear systems with white stochastic parameters and quadratic integral criteria, into discrete-time equivalents is considered. The system parameters have time-varying statistics. The observations available at the sampling instants are in general nonlinear and corrupted by discrete-time noise. The equivalent discrete-time system has white stochastic parameters. Expressions are derived for the first and second moments of these parameters and for the parameters of the equivalent discrete-time sum criterion, which are explicit in the parameters and statistics of the original digital optimal control problem. A numerical algorithm to compute these expressions is presented. For each sampling interval, the algorithm computes the expressions recursively, forward in time, using successive equidistant evaluations of the matrices which determine the original digital optimal control problem. The algorithm is illustrated with three examples. If the observations at the sampling instants are linear and corrupted by multiplicative and/or additive discrete-time white noise, then, using recent results, full- and reduced-order controllers that solve the equivalent discrete-time optimal control problem can be computed.
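For the deterministic special case, the construction of an equivalent discrete-time sum criterion from a continuous-time integral criterion can be sketched as below: under a zero-order hold, the cost over one sampling interval becomes x0'Q1x0 + 2 x0'Q12 u + u'Q2 u, and the weighting matrices are computed by trapezoidal quadrature at equidistant evaluation points, echoing the recursive forward-in-time evaluation described in the abstract. This is a minimal sketch assuming time-invariant A, B, Q, R; the function name and the quadrature density N are illustrative, and the stochastic-parameter moments treated in the paper are not modeled.

```python
import numpy as np
from scipy.linalg import expm

def equivalent_discrete_cost(A, B, Q, R, h, N=200):
    """Discrete-time equivalent (Q1, Q12, Q2) of the integral criterion
    int_0^h (x'Qx + u'Ru) dt under a zero-order hold, computed by
    trapezoidal quadrature at N+1 equidistant evaluation points."""
    n, m = A.shape[0], B.shape[1]
    Q1 = np.zeros((n, n))
    Q12 = np.zeros((n, m))
    Q2 = np.zeros((m, m))
    for i in range(N + 1):
        t = h * i / N
        Phi = expm(A * t)                             # state transition at time t
        # Gamma(t) = (int_0^t e^{As} ds) B, via an augmented matrix exponential
        M = expm(np.block([[A, np.eye(n)],
                           [np.zeros((n, 2 * n))]]) * t)
        Gamma = M[:n, n:] @ B
        w = (h / N) * (0.5 if i in (0, N) else 1.0)   # trapezoid weight
        Q1 += w * Phi.T @ Q @ Phi
        Q12 += w * Phi.T @ Q @ Gamma
        Q2 += w * (Gamma.T @ Q @ Gamma + R)
    return Q1, Q12, Q2
```

For a scalar integrator (A = 0, B = 1) with Q = R = 1 and h = 1, the exact values are Q1 = 1, Q12 = 1/2, and Q2 = 4/3, which the quadrature reproduces closely.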

3.
We describe locally convergent algorithms for discrete-time optimal control problems which are amenable to multiprocessor implementation. Parallelism is achieved both through concurrent evaluation of the component functions and their derivatives, and through the use of a parallel solver which solves a linear system to find the step at each iteration. Results from an implementation on the Alliant FX/8 are described.
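The paper targets the Alliant FX/8; as a language-agnostic illustration of the first source of parallelism mentioned (concurrent evaluation of component functions and their derivatives for a separable objective), here is a Python sketch. The function names and the thread-pool choice are illustrative, not the paper's implementation.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def value_and_gradient(components, gradients, x, max_workers=4):
    """Evaluate the component functions f_i and gradients of a separable
    objective sum_i f_i(x) concurrently, then reduce the results."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        vals = list(pool.map(lambda f: f(x), components))
        grads = list(pool.map(lambda g: g(x), gradients))
    return sum(vals), np.sum(grads, axis=0)
```

A process pool (or, on the FX/8, vector/concurrent Fortran loops) would play the same role; the key point is that the component evaluations are independent and only the final reduction is sequential.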

4.
5.
6.
The problem of simultaneously determining an optimal control strategy and an optimal observation strategy for a linear system is considered. Quadratic costs on state and control and an "on-off" type observation cost are assumed. Dynamic programming is used to obtain a solution. An example is provided that shows some interesting relations between the optimal observation strategy and various system parameters.
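The paper solves the joint problem by dynamic programming. For a scalar linear-Gaussian model, the observation side of the trade-off can be illustrated directly: the error-variance recursion is deterministic, so the optimal "on-off" observation schedule over a short horizon can be found by exhaustive enumeration. The model below (variance weight w, per-observation cost c) is an illustrative stand-in, not the paper's formulation.

```python
import itertools

def best_observation_schedule(a, q, r, p0, w, c, T):
    """Enumerate all on-off observation schedules for a scalar
    linear-Gaussian model x+ = a x + noise(var q) with measurement
    noise variance r.  Each observation costs c; each step pays w
    times the posterior error variance p.  Since the variance
    recursion is deterministic, small horizons solve exactly."""
    best_J, best_sched = float('inf'), None
    for sched in itertools.product((0, 1), repeat=T):
        p, J = p0, 0.0
        for observe in sched:
            p = a * a * p + q            # time update of error variance
            if observe:
                p = p * r / (p + r)      # measurement update
                J += c
            J += w * p
        if J < best_J:
            best_J, best_sched = J, sched
    return best_J, best_sched
```

With free observations the optimal schedule observes at every step; with a prohibitive observation cost it never observes, reproducing the qualitative dependence on system parameters that the example in the paper explores.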

7.
The problem of optimal control of the solution of a stochastic differential equation with a fractional Wiener sheet is investigated.

8.
The solution of the dual-criterion linear quadratic stochastic optimal control problem is obtained by following a Wiener-type solution procedure. A stabilizing solution is guaranteed by parameterizing the controller using the Desoer fractional representation approach. The dual criterion includes sensitivity and complementary sensitivity weighting terms, which provide a means of varying the robustness characteristics of the multivariable system.

9.
The robust maximum principle applied to the minimax linear quadratic problem is derived for stochastic differential equations containing a control-dependent diffusion term. The parametric families of the first- and second-order adjoint stochastic processes are obtained to construct the corresponding Hamiltonian formalism. The Hamiltonian function used for the construction of the robust optimal control is shown to be equal to the sum of the standard stochastic Hamiltonians corresponding to each value of the uncertain parameter from a given finite set. The cost function is considered on a finite horizon (containing the mathematical expectation of both an integral and a terminal term) and on an infinite one (a time-averaged loss function). These problems belong to the class of minimax stochastic optimization problems. It is shown that the construction of the minimax optimal controller can be reduced to an optimization problem on a finite-dimensional simplex, consisting in the analysis of the dependence of the Riccati equation solution on the weight parameters to be found.
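The paper reduces the controller construction to an optimization over a finite-dimensional simplex of weights on Riccati solutions. As a much cruder illustration of the same minimax idea (not the paper's algorithm), the sketch below grid-searches a single scalar feedback gain against the worst plant in a finite parameter set; the model, cost normalization, and function name are all illustrative assumptions.

```python
import numpy as np

def minimax_gain(a_values, rho, ks=np.linspace(-2.0, 2.0, 4001)):
    """Grid-search a feedback gain u = -k x minimizing the worst-case
    infinite-horizon cost over a finite set of scalar plants
    x+ = a x + u with stage cost x^2 + rho u^2 (unit initial state)."""
    def cost(k, a):
        cl = a - k                       # closed-loop pole
        if abs(cl) >= 1.0:
            return np.inf                # not stabilizing for this plant
        # sum_t cl^{2t} (1 + rho k^2) = (1 + rho k^2) / (1 - cl^2)
        return (1.0 + rho * k * k) / (1.0 - cl * cl)
    worst = [max(cost(k, a) for a in a_values) for k in ks]
    i = int(np.argmin(worst))
    return ks[i], worst[i]
```

For the symmetric parameter set {-0.5, 0.5} with rho = 1, symmetry forces the minimax gain to zero, with worst-case cost 1/(1 - 0.25) = 4/3.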

10.
An optimal feedback control problem for a partially observed linear system with noise with fixed-size jumps occurring at random times driven by a Poisson process is extended to include noise with random-size jumps. The control structure is appropriately modified to compensate for the mean behavior of the system jumps via an additional deterministic term. Editor: N.U. Ahmed

11.
In this paper the solution of a stochastic optimal control problem described by linear equations of motion and a nonquadratic performance index is presented. The theory is then applied to the dynamics of a single foil and a hydrofoil boat flying on rough water. The random disturbances caused by sea waves are represented as the response of an auxiliary system to a white noise input. The control objective is formulated as an integral performance index containing a quadratic acceleration term and a nonquadratic term in the submergence deviation of the foil from calm-water submergence. The stochastic version of the maximum principle is used in the formulation of a feedback control law. The Riccati equations and the feedback gains associated with a nonquadratic performance index are nonlinear functions of the state and auxiliary state variables. These equations are integrated forward with the state equations for the steady-state solution of the problem. The controller for a nonquadratic performance index contains computing elements which perform the integration of the Riccati equations to generate the instantaneous values of the feedback gains. The effect of a nonquadratic penalty on the submergence deviation and the effect of a nonquadratic control penalty on the response of the system are investigated. A comparison between an optimal nonlinear control law and a suboptimal linear control law is presented.

12.
A numerical method is proposed for solving optimal control problems for objects described by systems of ordinary differential equations, in the class of piecewise-constant controls. Both the piecewise-constant values of the controls and the intervals over which these values remain constant are optimized. Analytic formulas for the gradient of the functional with respect to the optimized parameters are obtained. The gradient formulas obtained for the initial continuous optimal control problem and for the corresponding discretized optimal control problem are compared, and results of numerical experiments are presented.
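The paper derives analytic gradient formulas with respect to both the control values and the constancy intervals. The toy sketch below illustrates the same joint parameterization on the scalar system xdot = u, but with a derivative-free optimizer instead of the paper's gradients; the model, the terminal penalty weight 100, and the rescaling of intervals to fill the horizon are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def optimize_piecewise(x0, x_target, n_pieces, T):
    """Jointly optimize piecewise-constant control values and the
    lengths of their constancy intervals for xdot = u on [0, T]:
    J = int u^2 dt + 100 (x(T) - x_target)^2."""
    def unpack(z):
        u = z[:n_pieces]
        h = np.abs(z[n_pieces:])
        h = h * (T / h.sum())            # intervals rescaled to fill [0, T]
        return u, h
    def J(z):
        u, h = unpack(z)
        xT = x0 + np.sum(u * h)          # exact integration of xdot = u
        return np.sum(u * u * h) + 100.0 * (xT - x_target) ** 2
    z0 = np.concatenate([np.zeros(n_pieces), np.ones(n_pieces)])
    res = minimize(J, z0, method='Powell')
    u, h = unpack(res.x)
    return (u, h), res.fun
```

For x0 = 0, target 1, T = 1, the optimum uses a constant control u = 100/101 on the whole horizon, giving J = 100/101, regardless of how the intervals are split.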

13.
In this paper, the learning gain for a selected learning algorithm is derived by minimizing the trace of the input error covariance matrix for linear time-varying systems. It is shown that, if the product of the input/output coupling matrices has full column rank, then the input error covariance matrix converges uniformly to zero in the presence of uncorrelated random disturbances, whereas the state error covariance matrix converges uniformly to zero in the presence of measurement noise. Moreover, it is shown that, if a certain condition is met, the knowledge of the state coupling matrix is not needed to apply the proposed stochastic algorithm. The proposed algorithm is shown to suppress a class of nonlinear and repetitive state disturbances. The application of this algorithm to a class of nonlinear systems is also considered. A numerical example is included to illustrate the performance of the algorithm.
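The paper's gain is derived from the input-error covariance. As a simpler deterministic illustration of the trial-to-trial contraction idea behind iterative learning control, the sketch below uses a lifted (trial-wise matrix) formulation with a transpose-type learning gain; the gain gamma P' is an assumption for illustration, not the paper's covariance-minimizing gain.

```python
import numpy as np

def ilc_trial_errors(P, y_d, gamma, n_trials):
    """Lifted-form iterative learning control: trial output y_j = P u_j,
    update u_{j+1} = u_j + gamma P' e_j with e_j = y_d - y_j.
    The error obeys e_{j+1} = (I - gamma P P') e_j, so it contracts
    whenever all eigenvalues of (I - gamma P P') lie in the unit disk."""
    u = np.zeros_like(y_d)
    errors = []
    for _ in range(n_trials):
        e = y_d - P @ u
        errors.append(float(np.linalg.norm(e)))
        u = u + gamma * (P.T @ e)
    return errors
```

When P has full column rank (the rank condition quoted in the abstract), P P' restricted to the reachable output space is positive definite and a sufficiently small gamma drives the tracking error to zero over trials.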

14.
In the paper by Fujita and Fukao [1], a proof of the separation theorem for discrete-time linear systems with interrupted observations was presented. The theorem is based heavily on [1, Lemma]. This note points out errors of the proof of [1, Lemma] and disproves the separation theorem.

15.
In this paper, we consider an optimal control problem for the stochastic system described by stochastic differential equations with delay. We obtain the maximum principle for the optimal control of this problem by virtue of the duality method and the anticipated backward stochastic differential equations. Our results can be applied to a production and consumption choice problem. The explicit optimal consumption rate is obtained.

16.
In an earlier paper by the author (2001), the learning gain for a D-type learning algorithm is derived by minimizing the trace of the input error covariance matrix for linear time-varying systems. It is shown that, if the product of the input/output coupling matrices has full column rank, then the input error covariance matrix converges uniformly to zero in the presence of uncorrelated random disturbances, whereas the state error covariance matrix converges uniformly to zero in the presence of measurement noise. However, in general, the proposed algorithm requires knowledge of the state matrix. In this note, it is shown that equivalent results can be achieved without knowledge of the state matrix. Furthermore, the convergence rate of the input error covariance matrix is shown to be inversely proportional to the number of learning iterations.

17.
This paper gives a self-contained presentation of minimax control for discrete-time time-varying stochastic systems under finite- and infinite-horizon expected total cost performance criteria. Suitable conditions for the existence of minimax strategies are proposed. Also, we prove that the values of the finite-horizon problems converge to the values of the infinite-horizon problems. Moreover, for finite-horizon problems, an algorithm for computing minimax strategies is developed and tested on time-varying stochastic systems.
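The finite-to-infinite-horizon convergence can be illustrated on a finite toy model. The sketch below runs minimax value iteration with a discount factor beta < 1 (an added assumption to keep the toy contractive; the paper works with expected total cost criteria and general conditions); the array layout and names are illustrative.

```python
import numpy as np

def minimax_value_iteration(cost, succ, beta, n_iter):
    """Minimax dynamic programming on a finite model: cost[x, a, w] is
    the stage cost, succ[x, a, w] the successor state, w an adversarial
    disturbance, beta < 1 a discount.  Iterating
    V <- min_a max_w (cost + beta V(succ)) produces the finite-horizon
    values, which converge to the infinite-horizon value."""
    V = np.zeros(cost.shape[0])
    history = []
    for _ in range(n_iter):
        Qxaw = cost + beta * V[succ]       # shape (nx, na, nw)
        V = Qxaw.max(axis=2).min(axis=1)   # adversary maximizes, controller minimizes
        history.append(V.copy())
    return V, history
```

On a one-state model the iteration reduces to V <- g + beta V with g the one-step minimax cost, so the values converge geometrically to g / (1 - beta).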

18.
Peng Cui, Huanshui Zhang, Automatica, 2009, 45(10): 2458-2461
An indefinite linear quadratic (ILQ) optimal control problem is discussed for singular discrete time-varying linear systems with multiple input delays. The problem is transformed into one for standard systems by normalizability decomposition. An explicit controller is obtained by computing the gain of the smoothing estimator of the dual system. Necessary and sufficient conditions guaranteeing the existence of a unique solution are given simultaneously. A numerical example illustrates the presented method.

19.
The problem of adaptive dual control of discrete-time distributed-parameter stochastic systems is examined. It is shown that there exists an important difference between feedback and closed-loop control policies for this type of system, as in the lumped-parameter case. This difference is based on the adaptivity feature of the control: when the control policy affects both the state and its uncertainty (the dual effect), it possesses the so-called feature of active adaptivity and can only be a characteristic of a closed-loop policy, whereas a feedback policy can only be passively adaptive. These results can be used to develop a control algorithm for nonlinear problems in which the realization of optimal control laws involves control strategies with both learning and control features.

20.
K. N. Swamy, T. J. Tarn, Automatica, 1979, 15(6): 677-682
Optimal control of a class of time invariant single-input, discrete bilinear systems is investigated in this paper. Both deterministic and stochastic problems are considered.

In the deterministic problem, for an initial state in a certain set Σ0, the solution is the same as the solution to the associated linear system. The optimal path may be a regular path or a singular path.

The stochastic control problem is considered with perfect state observation and additive and multiplicative noise in the state equation. It is demonstrated that the presence of noise simplifies the analysis compared with the deterministic case.



Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号