Found 20 similar documents (search time: 15 ms)
1.
This paper examines how considerations of model uncertainty can affect policy design. Without such considerations one may expect that the choice of policy control rules for a macroeconomic model would depend on some welfare criterion based on the model as given. However, if there is uncertainty in the structure of the model or in the values of particular model parameters, then it is argued that the choice of policy should take this into account. We introduce and define some measures of robustness which describe how well a particular control rule performs when the model is uncertain. These can only be evaluated using Monte-Carlo simulations; in that sense they are ex post. Then we define a number of indicators which may be of use in predicting robustness, and which do not require simulations to calculate. In that sense they are ex ante. Lastly we evaluate the ex ante indicators on a small macromodel by comparing their predictions with the actual robustness outturn for the range of possible control rules. We find that use of the indicators in choosing rules yields some improvement on the ordinary welfare criterion, especially when the shocks hitting the system are unknown.
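The ex post robustness measures described in this abstract can be sketched in a few lines: simulate a candidate rule under Monte-Carlo draws of the uncertain parameter and average the welfare loss. The scalar model, the rule u = -k*x, and all numerical values below are illustrative assumptions, not the paper's model.

```python
import random

def simulate_loss(a, b, k, horizon=50, noise=0.1, seed=0):
    """Quadratic loss of the rule u = -k*x on the system x' = a*x + b*u + w."""
    rng = random.Random(seed)
    x, loss = 1.0, 0.0
    for _ in range(horizon):
        u = -k * x
        loss += x * x + u * u
        x = a * x + b * u + rng.gauss(0.0, noise)
    return loss

def robustness(k, a_draws, b=1.0):
    """Ex post robustness: average loss over Monte-Carlo draws of the
    uncertain parameter a (a worst-case measure could be used instead)."""
    losses = [simulate_loss(a, b, k) for a in a_draws]
    return sum(losses) / len(losses)

# The nominal model has a = 0.9; uncertainty puts a anywhere in [0.7, 1.1].
rng = random.Random(1)
a_draws = [rng.uniform(0.7, 1.1) for _ in range(200)]
scores = {k: robustness(k, a_draws) for k in (0.3, 0.6, 0.9)}
best_k = min(scores, key=scores.get)
```

An ex ante indicator would instead rank the candidate gains from model quantities alone, without running the simulations above.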
2.
Vijaykumar Gullapalli, Robotics and Autonomous Systems, 1995, 15(4): 237-246
Complexity and uncertainty in modern robots and other autonomous systems make it difficult to design controllers for such systems that achieve desired levels of precision and robustness. Therefore, learning methods are being incorporated into controllers for such systems, providing the adaptability necessary to meet the performance demands of the task. We argue that for learning tasks arising frequently in control applications, the most useful methods in practice are probably those we call direct associative reinforcement learning methods. We describe direct reinforcement learning methods and illustrate with an example their utility for learning skilled robot control under uncertainty.
3.
Model predictive control (MPC) has become one of the most popular control techniques in the process industry, mainly because of its ability to deal with multiple-input-multiple-output plants and with constraints. However, in the presence of model uncertainties and disturbances its performance can deteriorate. The development of robust MPC techniques has therefore been widely discussed in recent years, but they have rarely, if at all, been applied in practice due to the conservativeness or the computational complexity of the approaches. In this paper, we present multi-stage NMPC as a promising robust, non-conservative nonlinear model predictive control scheme. The approach is based on representing the evolution of the uncertainty by a scenario tree, and it leads to non-conservative robust control of the uncertain plant because the adaptation of future inputs to new information is taken into account. Simulation results show that multi-stage NMPC outperforms standard and min-max NMPC in the presence of uncertainties for a semi-batch polymerization benchmark problem. In addition, the advantages of the approach are illustrated for the case where only noisy measurements are available and the unmeasured states and uncertainties have to be estimated using an observer. It is shown that better performance can be achieved than by estimating the unknown parameters online and adapting the plant model.
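The scenario-tree idea can be sketched minimally: branch over the uncertain parameter for a few stages, then hold the realization constant so the tree stays tractable. The branching values, the "robust horizon" device, and both horizons below are illustrative assumptions; the paper's NMPC optimizes inputs over such a tree subject to non-anticipativity constraints.

```python
from itertools import product

def scenario_tree(branch_values, robust_horizon, total_horizon):
    """Enumerate the scenarios of a tree that branches over the uncertain
    parameter for `robust_horizon` steps and then keeps it constant."""
    scenarios = []
    for prefix in product(branch_values, repeat=robust_horizon):
        # after the robust horizon, the last realization is held constant
        tail = (prefix[-1],) * (total_horizon - robust_horizon)
        scenarios.append(prefix + tail)
    return scenarios

# 3 branching values over 2 robust stages -> 9 scenarios of length 4
tree = scenario_tree((0.8, 1.0, 1.2), robust_horizon=2, total_horizon=4)
```

Non-anticipativity would then require that scenarios sharing a prefix share the corresponding inputs, which is what makes the resulting control non-conservative.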
4.
Consider a dynamic decision making model under risk with a fixed planning horizon, namely the dynamic capacity control model. The model describes a firm, operating in a monopolistic setting, that sells a range of products consuming a single resource. Demand for each product is time-dependent and modeled by a random variable. The firm controls the revenue stream by accepting or denying customer requests for product classes. We investigate risk-sensitive policies in this setting, where risk concerns are important for many non-repetitive events and short-term considerations. Numerically analysing several risk-averse capacity control policies in terms of standard deviation and conditional value-at-risk, we show that only a slight modification of the risk-neutral solution is needed to apply a risk-averse policy. In particular, risk-averse policies whose decision rules are functions depending only on the marginal values of the risk-neutral policy perform well. From a practical perspective, the advantage is that a decision maker does not need to compute any risk-averse dynamic program. Risk sensitivity can be achieved simply by implementing risk-averse functional decision rules based on a risk-neutral solution.
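The risk-neutral dynamic program underlying such policies can be sketched as below; the fares, arrival probabilities, capacity, and horizon are hypothetical. A risk-averse functional rule of the kind the abstract describes would simply rescale the marginal values V[t+1][c] - V[t+1][c-1] used in the acceptance test.

```python
def capacity_control_values(fares, arrival_probs, capacity, horizon):
    """Risk-neutral DP for the capacity control model: V[t][c] is the
    expected revenue-to-go with c units left at time t.  A request of
    class j is accepted iff fares[j] >= V[t+1][c] - V[t+1][c-1]."""
    V = [[0.0] * (capacity + 1) for _ in range(horizon + 1)]
    for t in range(horizon - 1, -1, -1):
        for c in range(capacity + 1):
            v = V[t + 1][c]
            gain = 0.0
            for f, p in zip(fares, arrival_probs):
                if c > 0:
                    accept = f + V[t + 1][c - 1]
                    gain += p * max(accept - v, 0.0)
            V[t][c] = v + gain
    return V

V = capacity_control_values(fares=(100.0, 50.0),
                            arrival_probs=(0.3, 0.4),
                            capacity=2, horizon=5)
```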
5.
Self-optimizing adaptive dynamic surface control
For the tracking problem of a class of uncertain nonlinear systems, a controller is designed using neural networks and the dynamic surface technique, and a self-optimizing strategy for the controller parameters is proposed. In each subsystem, a radial basis function (RBF) neural network is applied to approximate the uncertain term; at each recursive step, a filter is introduced to overcome the explosion-of-terms drawback of the backstepping technique. By defining an optimization objective function, a gradient method searches the feasible set for an optimal group of controller parameters. Numerical simulations show that the scheme is feasible.
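As a rough sketch of the RBF approximation step: a weighted sum of Gaussian basis functions is fitted to an unknown function. The target function, centers, width, and the offline gradient-descent fit below are illustrative assumptions; the paper instead uses an online adaptive law inside the dynamic surface controller.

```python
import math

def rbf_features(x, centers, width):
    """Gaussian radial basis functions phi_i(x) = exp(-(x - c_i)^2 / width^2)."""
    return [math.exp(-((x - c) ** 2) / width ** 2) for c in centers]

def fit_weights(xs, ys, centers, width, lr=0.3, epochs=300):
    """Gradient-descent fit of the output weights so that w . phi(x)
    approximates the unknown function on the given samples."""
    w = [0.0] * len(centers)
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            phi = rbf_features(x, centers, width)
            err = sum(wi * pi for wi, pi in zip(w, phi)) - y
            w = [wi - lr * err * pi for wi, pi in zip(w, phi)]
    return w

# approximate the "uncertain term" f(x) = sin(x) on [0, 3] with 7 centers
centers = [0.5 * i for i in range(7)]
xs = [0.1 * i for i in range(31)]
ys = [math.sin(x) for x in xs]
w = fit_weights(xs, ys, centers, width=0.6)
approx = sum(wi * pi for wi, pi in zip(w, rbf_features(1.5, centers, 0.6)))
```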
6.
Adrian M. Thompson, Automatica, 2005, 41(5): 767-778
Practical exploitation of optimal dual control (ODC) theory continues to be hindered by the difficulties involved in numerically solving the associated stochastic dynamic programming (SDP) problems. In particular, high-dimensional hyper-states coupled with the nesting of optimizations and integrations within these SDP problems render their exact numerical solution computationally prohibitive. This paper presents a new stochastic dynamic programming algorithm that uses a Monte Carlo approach to circumvent the need for numerical integration, thereby dramatically reducing computational requirements. Also, being a generalization of iterative dynamic programming (IDP) to the stochastic domain, the new algorithm exhibits reduced sensitivity to the hyper-state dimension and, consequently, is particularly well suited to the solution of ODC problems. A convergence analysis of the new algorithm is provided, and its benefits are illustrated on the problem of ODC of an integrator with unknown gain, originally presented by Åström and Helmersson (Computers and Mathematics with Applications 12A (1986) 653-662).
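The core device is replacing the integration step inside the Bellman recursion with a sample average. The integrator dynamics below echo the Åström-Helmersson example, but the noise level, inputs, and terminal cost are illustrative assumptions.

```python
import random

def expected_cost_to_go(x, u, cost_to_go, n_samples=2000, seed=0):
    """Monte-Carlo replacement for the integration step of stochastic DP:
    estimate E[J(x + u + w)] by sampling the disturbance w instead of
    using quadrature (integrator model x' = x + u + w as a stand-in)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        w = rng.gauss(0.0, 0.2)
        total += cost_to_go(x + u + w)
    return total / n_samples

# terminal cost J(x) = x^2: the exact expectation is (x + u)^2 + sigma^2,
# i.e. (1.0 - 0.5)^2 + 0.2^2 = 0.29 for the inputs below
est = expected_cost_to_go(1.0, -0.5, lambda x: x * x)
```

In the full algorithm this estimate would sit inside the minimization over u at every grid point of the hyper-state, which is where the computational savings matter.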
7.
Journal of Process Control, 2014, 24(8): 1247-1259
In recent years, the use of an economic cost function for model predictive control (MPC) has been widely discussed in the literature. The main motivation for this choice is that the real goal of control is often to maximize the profit or the efficiency of a system, rather than to track a predefined set-point as in typical MPC approaches, which can even be counter-productive. Since the economically optimal operation of a system under economic model predictive control drives the system to its constraints, explicit consideration of the uncertainties becomes crucial in order to avoid constraint violations. Although robust MPC has been studied for years, little attention has been devoted to this topic in the context of economic nonlinear model predictive control, especially when analyzing the performance of the different MPC approaches. In this work, we present multi-stage scenario-based nonlinear model predictive control as a promising strategy to deal with uncertainties in the context of economic NMPC. Based on simulations, we compare the advantages of the proposed approach with an open-loop NMPC controller in which no feedback is introduced into the prediction, and with an NMPC controller that optimizes over affine control policies. The approach is efficiently implemented using CasADi, which makes it possible to achieve real-time computations for an industrial batch polymerization reactor model provided by BASF SE. Finally, a novel algorithm inspired by tube-based MPC is proposed in order to achieve a trade-off between the variability of the controlled system and the economic performance under uncertainty. Simulation results show that a closed-loop approach to robust NMPC increases performance and that enforcing low variability of the controlled system under uncertainty can cause a significant performance loss.
8.
9.
10.
An iterative learning control scheme is presented for a class of nonlinear dynamic systems that includes holonomic systems as a subset. The control scheme combines two types of control methodology: a linear feedback mechanism and a feedforward learning strategy. At each iteration, the linear feedback provides stability of the system and keeps its state errors within uniform bounds. The iterative learning rule, on the other hand, tracks the entire span of a reference input over a sequence of iterations. The proposed learning control scheme takes the dominant system dynamics into account in its update algorithm in the form of scaled feedback errors. In contrast to many other learning control techniques, the proposed learning algorithm neither uses derivative terms of feedback errors nor assumes external input perturbations as a prerequisite. A convergence proof of the proposed learning scheme is given under minor conditions on the system parameters.
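A scalar sketch of the two-part scheme: feedback stabilizes each trial, and between trials the feedforward is updated from scaled feedback errors only, with no error derivatives. The first-order plant, gains, and reference below are illustrative assumptions, not the paper's system class.

```python
def run_trial(u_ff, ref, a=0.8, kp=0.5):
    """One trial of x' = a*x + u with u = learned feedforward + feedback."""
    x, errors = 0.0, []
    for t in range(len(ref)):
        u = u_ff[t] + kp * (ref[t] - x)   # feedback keeps errors bounded
        x = a * x + u
        errors.append(ref[t] - x)         # tracking error after the step
    return errors

def ilc(ref, gamma=0.7, trials=30):
    """Between trials, update the feedforward from scaled feedback errors."""
    u_ff = [0.0] * len(ref)
    for _ in range(trials):
        errors = run_trial(u_ff, ref)
        u_ff = [u + gamma * e for u, e in zip(u_ff, errors)]
    return u_ff

ref = [1.0] * 10
u_ff = ilc(ref)
final_errors = run_trial(u_ff, ref)
```

With these gains the trial-to-trial error map is a contraction, so the feedback-only error of the first trial shrinks to near zero over the iterations.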
11.
A cooperative game of a pair of learning automata
A cooperative game played sequentially by a pair of learning automata is investigated in this paper. The automata operate in an unknown random environment which gives a common pay-off to the automata. Necessary and sufficient conditions on the functions in the reinforcement scheme are given for absolute monotonicity, which ensures that the expected pay-off is monotonically increasing in any arbitrary environment. As each participating automaton operates with no information regarding its partner, the results of the paper are relevant to decentralized control.
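One classical scheme in this family is the linear reward-inaction update, sketched below for two automata sharing a common binary pay-off; each automaton updates on its own action and the common reward only, with no information about its partner. The pay-off probabilities, learning rate, and trial count are illustrative assumptions, not from the paper.

```python
import random

def choose(p, rng):
    """Sample an action from the automaton's probability vector."""
    return 0 if rng.random() < p[0] else 1

def reward_inaction(p, action, reward, lr=0.03):
    """Linear reward-inaction: shift probability toward the chosen action
    only on success; on failure the vector is left unchanged."""
    if reward:
        p = [pi * (1 - lr) for pi in p]
        p[action] += lr
    return p

# common pay-off probabilities for each joint action, unknown to the players
payoff = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

rng = random.Random(0)
pa, pb = [0.5, 0.5], [0.5, 0.5]
for _ in range(5000):
    a, b = choose(pa, rng), choose(pb, rng)
    r = 1 if rng.random() < payoff[(a, b)] else 0
    pa = reward_inaction(pa, a, r)
    pb = reward_inaction(pb, b, r)
```

With a small learning rate both action probabilities drift toward the joint action with the highest common pay-off, illustrating the monotonically increasing expected pay-off the abstract refers to.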
12.
This paper introduces a model-free reinforcement learning technique that is used to solve a class of dynamic games known as dynamic graphical games. The graphical game results from multi-agent dynamical systems in which pinning control is used to make all the agents synchronize to the state of a command generator or leader agent. Novel coupled Bellman equations and Hamiltonian functions are developed for the dynamic graphical games, and Hamiltonian mechanics are used to derive the necessary conditions for optimality. The Nash equilibrium solution for the dynamic graphical game is given in terms of the solution to a set of coupled Hamilton-Jacobi-Bellman equations developed herein. An online model-free policy iteration algorithm is developed to learn this Nash solution; the algorithm does not require any knowledge of the agents' dynamics. A proof of convergence for this multi-agent learning algorithm is given under mild assumptions about the inter-connectivity properties of the graph. A gradient descent technique with critic network structures is used to implement the policy iteration algorithm and solve the graphical game online in real time.
13.
John McFarland, Sankaran Mahadevan, Computer Methods in Applied Mechanics and Engineering, 2008, 197(29-32): 2467
The importance of modeling and simulation in the scientific community has drawn interest towards methods for assessing the accuracy and uncertainty associated with such models. This paper addresses the validation and calibration of computer simulations using, for illustration, the thermal challenge problem developed at Sandia National Laboratories. The objectives of the challenge problem are to use hypothetical experimental data to validate a given model, and then to use the model to make predictions in an untested domain. With regard to assessing the accuracy of the given model (validation), we illustrate the use of Hotelling's T² statistic for multivariate significance testing, with emphasis on the formulation and interpretation of such an analysis for validation assessment. In order to use the model for prediction, we then employ the Bayesian calibration method introduced by Kennedy and O'Hagan. Towards this end, we discuss how inherent variability can be reconciled with "lack-of-knowledge" and other uncertainties, and we illustrate a procedure that allows the uncertainty in characterizing probability distributions to be included in the overall uncertainty analysis of the Bayesian calibration process.
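For a two-dimensional response, the T² computation reduces to a few lines. The data layout (one row of model-experiment differences per experiment, tested against a zero-mean hypothesis) and the direct 2x2 inversion are illustrative assumptions; the statistic would then be compared against a scaled F quantile.

```python
def hotelling_t2(samples, mu0):
    """Hotelling's T^2 statistic for testing whether the mean of a
    bivariate sample equals mu0: T^2 = n (xbar-mu0)' S^{-1} (xbar-mu0)."""
    n, p = len(samples), len(mu0)
    xbar = [sum(col) / n for col in zip(*samples)]
    # sample covariance S (p = 2 here, so invert directly)
    d = [[x[i] - xbar[i] for i in range(p)] for x in samples]
    S = [[sum(di[i] * di[j] for di in d) / (n - 1) for j in range(p)]
         for i in range(p)]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    Sinv = [[S[1][1] / det, -S[0][1] / det],
            [-S[1][0] / det, S[0][0] / det]]
    diff = [xbar[i] - mu0[i] for i in range(p)]
    return n * sum(diff[i] * Sinv[i][j] * diff[j]
                   for i in range(p) for j in range(p))

# toy data: mean (0, 0), so testing against (0, 0) gives T^2 = 0
samples = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
t2_null = hotelling_t2(samples, (0.0, 0.0))
t2_shift = hotelling_t2(samples, (1.0, 1.0))
```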
14.
Stefano Moretti, S. Zeynep Alparslan Gök, Rodica Branzei, Stef Tijs, Computers & Operations Research, 2011
This paper deals with cost allocation problems arising from connection situations where edge costs are closed intervals of real numbers. To solve such problems, we extend the obligation rules from the theory of minimum cost spanning tree problems to the interval uncertainty setting, and study their cost monotonicity and stability properties. We also present an application to a simulated ad hoc wireless network using a software implementation of an appealing obligation rule, the P-value.
15.
Tom Oomen, Jeroen van de Wijdeven, Automatica, 2009, 45(4): 981-1666
Iterative Learning Control (ILC) is a control strategy for improving the performance of digital batch repetitive processes. Due to their digital implementation, discrete time ILC approaches do not guarantee good intersample behavior. In fact, common discrete time ILC approaches may deteriorate the intersample behavior, thereby reducing the performance of the sampled-data system. In this paper, a generally applicable multirate ILC approach is presented that enables balancing the at-sample performance against the intersample behavior. Furthermore, key theoretical issues regarding multirate systems are addressed, including the time-varying nature of the multirate ILC setup. The proposed multirate ILC approach is shown to outperform discrete time ILC in realistic simulation examples.
16.
This paper presents a design method for robust two degree-of-freedom (DOF) controllers that optimize the control performance with respect to both model uncertainty and signal measurement uncertainty. In many situations, non-causal feedforward is a welcome control addition when closed-loop feedback bandwidth limitations exist due to plant dynamics such as delays, non-minimum phase zeros, and poorly placed zeros and poles (Xie, Alleyne, Greer, & Deneault, 2013; Xie, 2013). However, feedforward control is sensitive to both model uncertainty and signal measurement uncertainty; the latter is particularly true when the feedforward responds to pre-measured disturbance signals. The combined sensitivity will deteriorate the feedforward controller's performance if care is not taken in the design. In this paper, a two-DOF design is introduced that optimizes the performance based on a given estimate of the uncertainties. The controller design uses H∞ tools to balance the controlled system's bandwidth against the increased sensitivity to signal measurement uncertainties. A successful case study on an experimental header height control system for a combine harvester illustrates the approach.
17.
We consider discrete time control problems where only the support sets of the initial condition and of the disturbances are known. We study the applicability of the dynamic programming (DP) method and, as a counterpart of the LQ problem of the stochastic setting, we present a problem that admits an explicit analytic solution. In particular, we characterize a compatibility property between the system dynamics and the norms of the spaces that is crucial for obtaining the analytic solution in the general multidimensional case as well.
18.
A Bayesian network is a stochastic model that represents the qualitative dependence between two or more random variables by a graph structure, and the quantitative relations between individual variables by conditional probabilities. This paper deals with production and inventory control using a dynamic Bayesian network. The probabilistic values of the amount of delivered goods and the production quantities change in the real environment, and hence the total stock also changes randomly. The probability distribution of the total stock is calculated by propagating the probabilities through the Bayesian network. Moreover, an adjusting rule for the production quantities is given that maintains the probabilities of the lower and upper bounds of the total stock at certain values.
This work was presented in part at the 13th International Symposium on Artificial Life and Robotics, Oita, Japan, January 31-February 2, 2008.
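One time-slice of such a propagation can be sketched by convolving discrete distributions: next-period stock is current stock plus production minus delivered goods. All the distributions and the three-period horizon below are illustrative assumptions; an adjusting rule would shift the production distribution whenever a tail probability like `prob_low` drifts past its target.

```python
from collections import defaultdict

def propagate(stock_dist, production_dist, demand_dist):
    """One time-slice of the dynamic Bayesian network: distribution of
    next-period stock = stock + production - demand, with independent
    random production and delivered (demand) quantities."""
    nxt = defaultdict(float)
    for s, ps in stock_dist.items():
        for q, pq in production_dist.items():
            for d, pd in demand_dist.items():
                nxt[max(s + q - d, 0)] += ps * pq * pd
    return dict(nxt)

stock = {10: 1.0}
production = {4: 0.5, 6: 0.5}
demand = {3: 0.3, 5: 0.7}
for _ in range(3):
    stock = propagate(stock, production, demand)

# probability that the total stock falls below a lower bound of 8
prob_low = sum(p for s, p in stock.items() if s < 8)
```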
19.
LI Heng-jie, HAO Xiao-hong, XU Wei-tao, Journal of Communication and Computer, 2008, 5(1): 58-62
A clonal selection algorithm is improved and proposed as a method to implement a nonlinear optimal iterative learning control algorithm. In the method, prior information is coded into the clonal selection algorithm to decrease the size of the search space and to handle constraints on the input. Another clonal selection algorithm is used as a model-modifying device to cope with uncertainty in the plant model. Finally, simulations show that the convergence speed is satisfactory regardless of the nature of the plant and of whether or not the plant model is precise.
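A toy clonal selection loop for a one-dimensional search space is sketched below: the best antibodies are cloned, clones are mutated (more strongly for lower-ranked parents), and the best candidates survive. The test function, bounds, population sizes, and mutation schedule are illustrative assumptions; the paper couples such algorithms to the learning controller and the plant model.

```python
import random

def clonal_selection(fitness, lo, hi, pop_size=20, generations=60, seed=0):
    """Minimal clonal selection sketch: clone the elite, mutate the
    clones rank-proportionally, and keep the best (minimizing fitness)."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[:pop_size // 2]
        clones = []
        for rank, x in enumerate(elite):
            # worse-ranked parents get larger mutation steps
            step = 0.1 * (hi - lo) * (rank + 1) / len(elite)
            for _ in range(2):
                clones.append(min(max(x + rng.gauss(0.0, step), lo), hi))
        pop = sorted(elite + clones, key=fitness)[:pop_size]
    return pop[0]

# minimize the hypothetical objective (x - 2)^2 on [-5, 5]
best = clonal_selection(lambda x: (x - 2.0) ** 2, -5.0, 5.0)
```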
20.
Constrained robust model predictive control via parameter-dependent dynamic output feedback
B. Ding, Automatica, 2010, 46(9): 1517-1523
This paper considers output feedback robust model predictive control for the quasi-linear parameter varying (quasi-LPV) system with bounded disturbance. Quasi-LPV means that the varying parameters of the linear system are known at the current time but unknown in the future. The control law is parameterized as a parameter-dependent dynamic output feedback, and closed-loop stability is specified by the notion of quadratic boundedness. An iterative algorithm is proposed for the on-line synthesis of the control law via convex optimization. A numerical example illustrates the effectiveness of the controller.