Similar documents
20 similar documents retrieved (search time: 15 ms)
1.
The purpose of this paper is to compare analytically the properties of suboptimal dual adaptive stochastic control with those of the optimal control when the plant dynamics contain multiplicative white-noise parameters. A simple scalar example is used for this analysis.

2.
In this work, strong consistency results for a class of stochastic gradient algorithms are developed where the system is not necessarily minimum-phase or stable and is assumed to be of the CARMA form. The analysis uses only standard martingale results, and strong consistency is shown to hold under certain persistent-excitation conditions.

3.
An optimal indirect stochastic adaptive control is obtained explicitly for linear time-varying discrete-time systems with general delay and white noise perturbation, while minimizing the variance of the output around a desired value one step ahead. The resulting controller incorporates parameter uncertainties. The solution unifies and generalizes previously known results, which become special cases. The results are compared with a certainty equivalence controller in an example.
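The abstract compares the optimal adaptive controller against a certainty equivalence controller. As a point of reference, here is a minimal sketch of the certainty-equivalence baseline for a scalar system y(t+1) = a·y(t) + b·u(t) + e(t+1), with parameters estimated by recursive least squares. This is not the paper's algorithm; all constants and the division guard are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true, noise_std = 0.8, 1.0, 0.1   # unknown to the controller
theta = np.zeros(2)                          # estimates of [a, b]
P = 100.0 * np.eye(2)                        # RLS covariance
y, y_ref = 0.0, 1.0

for t in range(200):
    # certainty-equivalence minimum variance: pick u so the predicted
    # one-step-ahead output equals the reference
    b_hat = theta[1] if abs(theta[1]) > 1e-3 else 1e-3   # guard the division
    u = (y_ref - theta[0] * y) / b_hat

    y_next = a_true * y + b_true * u + noise_std * rng.standard_normal()

    # recursive least-squares update of the parameter estimates
    phi = np.array([y, u])
    k = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + k * (y_next - phi @ theta)
    P = P - np.outer(k, phi @ P)
    y = y_next

print("estimated [a, b]:", theta)
```

Note that the division guard matters here: as abstract 4 below discusses, the minimum variance law divides by an estimated parameter at every step.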

4.
Stochastic adaptive minimum variance control algorithms require a division by a function of a recursively computed parameter estimate at each instant of time. In order that the analysis of these algorithms is valid, zero divisions must be events of probability zero. This property is established for the stochastic gradient adaptive control algorithm under the condition that the initial state of the system and all finite segments of its random disturbance process have a joint distribution which is absolutely continuous with respect to Lebesgue measure. This result is deduced from the following general result established in this paper: the distribution of a non-constant rational function of a finite set of random variables {x1,…,xn} is absolutely continuous with respect to Lebesgue measure if the joint distribution of {x1,…,xn} has this property.

5.
Design and Implementation of an Adaptive Consistency-Replacement Algorithm
Addressing the fact that proxy-cache consistency policies and replacement policies have not yet been well integrated, a new adaptive consistency-replacement algorithm for optimizing proxy caches (the ACR algorithm) is proposed, designed, and implemented on the basis of an optimization model. The algorithm consists of two parts: the consistency policy uses an adaptive TTL mechanism, and the replacement policy is based on a cost/value model. Trace-driven simulation results show that the ACR algorithm outperforms several traditional replacement algorithms in stale hit ratio.
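The abstract gives only the outline of ACR (adaptive TTL for consistency, a cost/value model for replacement). A hypothetical sketch of how the two parts might fit together is below, assuming the classic age-fraction adaptive-TTL heuristic and a hits-times-refetch-cost eviction score; both are stand-ins, not the paper's exact formulas.

```python
from dataclasses import dataclass

@dataclass
class Entry:
    value: object
    fetched_at: float       # time the copy was fetched from the origin
    last_modified: float    # origin's last-modified timestamp
    cost: float = 1.0       # cost to re-fetch from the origin server
    hits: int = 0

class ACRCache:
    """Sketch of a combined consistency + replacement proxy cache:
    adaptive TTL for freshness, a cost/value score for eviction."""

    def __init__(self, capacity, alpha=0.2):
        self.capacity, self.alpha = capacity, alpha
        self.store = {}

    def _ttl(self, e):
        # adaptive TTL: a fixed fraction of the object's age at fetch time
        return self.alpha * (e.fetched_at - e.last_modified)

    def is_fresh(self, key, now):
        e = self.store[key]
        return (now - e.fetched_at) < self._ttl(e)

    def put(self, key, entry):
        if key not in self.store and len(self.store) >= self.capacity:
            # evict the cached entry with the lowest value (hits x cost);
            # the incoming entry is always admitted in this sketch
            victim = min(self.store,
                         key=lambda k: self.store[k].hits * self.store[k].cost)
            del self.store[victim]
        self.store[key] = entry

cache = ACRCache(capacity=2)
cache.put("a", Entry(1, fetched_at=90.0, last_modified=40.0, cost=2.0, hits=5))
cache.put("b", Entry(2, fetched_at=95.0, last_modified=94.0, hits=1))
cache.put("c", Entry(3, fetched_at=99.0, last_modified=98.0))   # evicts "b"
```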

6.
This paper is concerned with the extension of the well-known minimum variance control strategy to the multi-input-multi-output case. It is shown that this extension is straightforward when the system interactor matrix is diagonal but presents some unexpected difficulties in the general case. We develop a suitable stochastic controller for the general case as a logical extension of the single-input algorithm and explore its properties in detail. We also address the question of adaptive control of multivariable stochastic systems and investigate one possible strategy for overcoming the requirement of knowing the system interactor matrix a priori.

7.
An adaptive dual control algorithm is presented for linear stochastic systems with constant but unknown parameters. The system parameters are assumed to belong to a finite set on which a prior probability distribution is available. The tool used to derive the algorithm is preposterior analysis: a probabilistic characterization of the future adaptation process allows the controller to take advantage of the dual effect. The resulting actively adaptive control, called model adaptive dual (MAD) control, is compared to two passively adaptive control algorithms: the heuristic certainty equivalence (HCE) and the Deshpande-Upadhyay-Lainiotis (DUL) model-weighted controllers. An analysis technique developed for the comparison of different controllers is used to show statistically significant improvement in the performance of the MAD algorithm over those of the HCE and DUL.

8.
The authors establish global convergence and asymptotic properties of a direct adaptive controller for continuous-time stochastic linear systems by presenting a direct adaptive control algorithm and an associated proof of convergence. This result is comprehensive and covers many other existing results as special cases. It has practical implications for the discrete-time case since it reveals how the existing discrete-time results must be modified so that they have meaningful limits as the sampling period decreases.

9.
A variational approach is taken to derive optimality conditions for a discrete-time linear quadratic adaptive stochastic optimal control problem. These conditions lead to an algorithm for computing optimal control laws which differs from the dynamic programming algorithm.

10.
Originally, adaptive control theory was developed for ideal system models, i.e., linear system models under the assumption that the relative degree and upper bounds on the order of the systems are known. In the early 1980s, adaptive control algorithms designed for such ideal system models were strongly attacked by many researchers due to "lack of robustness" in the presence of unmodeled dynamics and external disturbances. The purpose of the present paper is to relieve this long-standing pressure on the adaptive control algorithms originally designed for ideal system models. It is shown that such adaptive control algorithms are globally stable and robust with respect to unmodeled dynamics and external disturbances without any modifications, such as σ-modification, ε1-modification, relative dead-zone, projection of the parameter estimates, etc. Global stability of the unmodified algorithms is established by requiring the reference signal to be persistently exciting.

11.
Traditionally, stochastic approximation (SA) schemes have been popular choices for solving stochastic optimization problems. However, the performance of standard SA implementations can vary significantly based on the choice of the steplength sequence, and in general, little guidance is provided about good choices. Motivated by this gap, we present two adaptive steplength schemes for strongly convex differentiable stochastic optimization problems, equipped with convergence theory, that aim to overcome some of the reliance on user-specific parameters. The first scheme, referred to as a recursive steplength stochastic approximation (RSA) scheme, optimizes the error bounds to derive a rule that expresses the steplength at a given iteration as a simple function of the steplength at the previous iteration and certain problem parameters. The second scheme, termed a cascading steplength stochastic approximation (CSA) scheme, maintains the steplength sequence as a piecewise-constant decreasing function, with the reduction in the steplength occurring when a suitable error threshold is met. Then, we allow for nondifferentiable objectives but with bounded subgradients over a certain domain. In such a regime, we propose a local smoothing technique, based on random local perturbations of the objective function, that leads to a differentiable approximation of the function. Assuming a uniform distribution on the local randomness, we establish a Lipschitzian property for the gradient of the approximation and prove that the obtained Lipschitz bound grows at a modest rate with problem size. This facilitates the development of an adaptive steplength stochastic approximation framework, which now requires sampling in the product space of the original measure and the artificially introduced distribution.
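The recursive-steplength idea in abstract 11 can be sketched in a few lines: the steplength at each iteration is a simple function of the previous steplength and the strong-convexity constant. The exact RSA recursion is not given in the abstract; the rule and constants below are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

def sgd_recursive_steplength(grad, x0, eta, steps, gamma0):
    """Stochastic gradient descent with a recursively updated steplength
    in the RSA spirit: gamma_{k+1} = gamma_k * (1 - c * eta * gamma_k),
    where eta is the strong-convexity constant (c = 1/2 is illustrative)."""
    x, gamma = np.asarray(x0, dtype=float), gamma0
    for _ in range(steps):
        x = x - gamma * grad(x)
        gamma = gamma * (1.0 - 0.5 * eta * gamma)   # stays positive while
                                                    # 0 < gamma0 < 2/eta
    return x

# strongly convex test problem: minimize 0.5*eta*||x - 1||^2 from
# noisy gradient samples
eta = 1.0
noisy_grad = lambda x: eta * (x - 1.0) + 0.01 * rng.standard_normal(x.shape)
x_star = sgd_recursive_steplength(noisy_grad, np.zeros(3), eta, 2000, gamma0=0.5)
```

The recursion makes the steplength decay roughly like O(1/k) without the user hand-picking a decay schedule, which is the gap the abstract describes.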

12.
Recent papers on adaptive stochastic control have established global convergence for the general delay-colored noise case. However, for delays k greater than unity they require the implementation of k interlaced adaptation algorithms. Using an indirect adaptive control approach, we show that in the white noise case a single adaptation algorithm suffices to establish that, with probability one, the system's input, output, and output tracking error are sample mean-square bounded. Moreover, the conditional variance of the output tracking error is shown to converge to its global minimum value with probability one.

13.
The principal techniques used up to now for the analysis of stochastic adaptive control systems have been 1) super-martingale (often called stochastic Lyapunov) methods and 2) methods relying upon the strong consistency of some parameter estimation scheme. Optimal stochastic control and filtering methods have also been employed. Although there have been some successes, the extension of these techniques to a broader class of adaptive control problems, including the case of time-varying parameters, has been difficult. In this paper a new approach is adopted: if an underlying Markovian state-space system for the controlled process is available, and if this process possesses stationary transition probabilities, then the powerful ergodic theory of Markov processes may be applied. Subject to technical conditions, such as stability, one may deduce 1) the existence of an invariant measure for the process and 2) the convergence almost surely of the sample averages of a function of the state process (and of its expectation) to its conditional expectation. The technique is illustrated by an application to a previously unsolved problem involving a linear system with unbounded random time-varying parameters.

14.
The replacement in an implicit predictive adaptive controller of recursive least squares (RLS) identifiers with stochastic gradient (SG) identifiers is considered. By an ordinary differential equation analysis, local convergence properties of the new algorithm are investigated. The conclusions, supported by simulation results, are that, in contrast with the one-step-ahead self-tuning regulator, the convergence properties of the resulting adaptive controller deteriorate if RLS identifiers are replaced with SG identifiers.
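The two identifier updates being compared in abstract 14 can be sketched side by side in their generic textbook forms (not the paper's exact implementation). RLS carries a full covariance matrix; SG replaces it with a single scalar gain, which is cheaper but slower to converge.

```python
import numpy as np

def rls_step(theta, P, phi, y):
    """One recursive least-squares (RLS) identifier update."""
    k = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + k * (y - phi @ theta)
    P = P - np.outer(k, phi @ P)
    return theta, P

def sg_step(theta, r, phi, y):
    """One stochastic-gradient (SG) identifier update; the scalar r
    accumulates regressor energy and acts as a decreasing gain."""
    r = r + phi @ phi
    theta = theta + (phi / r) * (y - phi @ theta)
    return theta, r

# run both identifiers on the same noiseless data stream
theta_true = np.array([0.5, -0.3])
theta_rls, P = np.zeros(2), 100.0 * np.eye(2)
theta_sg, r = np.zeros(2), 1.0
rng = np.random.default_rng(3)
for _ in range(500):
    phi = rng.standard_normal(2)
    y = phi @ theta_true
    theta_rls, P = rls_step(theta_rls, P, phi, y)
    theta_sg, r = sg_step(theta_sg, r, phi, y)
```

On this toy stream the RLS estimate converges much faster than the SG estimate, consistent with the intuition behind the abstract's conclusion.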

15.
The discrete-time stochastic control system with complete observation is considered with quadratic loss function when the constant coefficients of the system are unknown. The parameter estimates given by the recursive least-squares method are used to define the feedback gain, and the adaptive control is taken to be either a linear-state feedback disturbed by a sequence of random vectors with variances tending to zero or only the disturbance without the feedback term in accordance with stopping times defined on the trajectory of the system. It is proved that the parameter estimates are strongly consistent and the loss function reaches its minimum, i.e. the adaptive control is optimal.

16.
Zheng  Yongjun  Huang  Ming  Lu  Yi  Li  Wenjun 《Neural computing & applications》2020,32(22):16807-16818

The output effect of a fractional-order stochastic resonance (FOSR) system is affected by many factors such as input system parameters and noise intensity. In practice, many tests are needed to adjust parameters to achieve the optimal effect, and this "trial and error" approach greatly limits the application prospects of FOSR. Based on a genetic algorithm, a suitable fitness function was established to jointly adjust multiple parameters, including the fractional order, system parameters, and the input noise intensity of the fractional bistable system. Simulation results showed that the algorithm can achieve joint optimization of these parameters. It was proved that this algorithm is conducive to the real-time adaptive adjustment of the FOSR system in practical applications and to the application and extension of FOSR in weak signal detection and other fields. The proposed algorithm has certain practical value.

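Abstract 16 describes a genetic algorithm that jointly tunes the fractional order, system parameters, and noise intensity. A generic real-coded GA can be sketched as below; since the FOSR simulation itself is omitted, a toy fitness with a known optimum stands in for the output objective, and all population sizes and rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def genetic_search(fitness, bounds, pop=30, gens=40, mut=0.1):
    """Generic real-coded genetic algorithm: truncation selection keeps
    the top half each generation (so the best individual always survives),
    and children are mutated copies of randomly chosen parents."""
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(pop, len(bounds)))
    for _ in range(gens):
        scores = np.array([fitness(x) for x in X])
        parents = X[np.argsort(scores)[::-1][: pop // 2]]
        picks = rng.integers(0, len(parents), pop - len(parents))
        noise = rng.standard_normal((len(picks), len(bounds)))
        children = parents[picks] + mut * (hi - lo) * noise
        X = np.clip(np.vstack([parents, children]), lo, hi)
    scores = np.array([fitness(x) for x in X])
    return X[np.argmax(scores)]

# toy stand-in for the FOSR objective over two parameters (e.g. fractional
# order and noise intensity), with a known optimum at (0.7, 1.0)
best = genetic_search(lambda x: -((x[0] - 0.7) ** 2 + (x[1] - 1.0) ** 2),
                      bounds=[(0.0, 2.0), (0.0, 2.0)])
```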

17.
18.
Conclusions. The obtained results constitute an identification method for the analysis of optimal decisions under uncertainty. Within this approach we have constructed a recursive procedure that has a number of fairly useful properties. First of all, it is sequential, so it is possible to use each observation for improving the estimates and for adaptive control. In this way we can construct fairly good estimates and a strategy with a small number of observations; in particular, the improvement of the strategy begins with n=2 observations. Moreover, it makes it possible to construct an optimal true stationary strategy after finitely many steps of adaptive control. Finally, the above methods make it possible to solve problems of fairly large dimension. Translated from Kibernetika, No. 6, pp. 88-94, November-December, 1981.

19.
This paper presents a model-reference-based adaptive control approach for the control of the output probability density function for unknown linear time-invariant stochastic systems. Different from most existing models used in stochastic control, it is assumed here that the measured control input directly affects the distribution of the system output in the probability sense. As such, the purpose of control is to make the shape of the probability density function of the system output as close as possible to a prespecified one. Using the weighted integration of the measurable output probability density functions, two adaptive on-line updating rules are developed which guarantee global stability of the closed-loop adaptive control system under certain conditions. It has been shown that, when there is no external disturbance, the so-formed closed-loop system also realizes perfect tracking (i.e., the probability density function of the system output approaches a class of given distributions asymptotically). A simulated example is included to illustrate the use of the developed control algorithm, and the desired results have been obtained.

20.
In this paper, the learning gain for a selected learning algorithm is derived based on minimizing the trace of the input error covariance matrix for linear time-varying systems. It is shown that, if the product of the input/output coupling matrices has full column rank, then the input error covariance matrix converges uniformly to zero in the presence of uncorrelated random disturbances; the state error covariance matrix, however, converges uniformly to zero in the presence of measurement noise. Moreover, it is shown that, if a certain condition is met, the knowledge of the state coupling matrix is not needed to apply the proposed stochastic algorithm. The proposed algorithm is shown to suppress a class of nonlinear and repetitive state disturbances. The application of this algorithm to a class of nonlinear systems is also considered. A numerical example is included to illustrate the performance of the algorithm.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号