20 similar documents found; search took 17 ms.
1.
We present an application of stochastic Concurrent Constraint Programming (sCCP) to modeling biological systems. We provide a library of sCCP processes that can be used to describe biological networks straightforwardly. At the same time, we show that sCCP is a general and extensible framework that can describe a wide class of dynamical behaviours and kinetic laws.
2.
Probabilistic reachability and safety for controlled discrete time stochastic hybrid systems (Total citations: 2; self-citations: 0; citations by others: 2)
In this work, probabilistic reachability over a finite horizon is investigated for a class of discrete time stochastic hybrid systems with control inputs. A suitable embedding of the reachability problem in a stochastic control framework reveals that it is amenable to two complementary interpretations, leading to dual algorithms for reachability computations. In particular, the set of initial conditions providing a certain probabilistic guarantee that the system will keep evolving within a desired ‘safe’ region of the state space is characterized in terms of a value function, and ‘maximally safe’ Markov policies are determined via dynamic programming. These results are of interest not only for safety analysis and design, but also for solving those regulation and stabilization problems that can be reinterpreted as safety problems. The temperature regulation problem presented in the paper as a case study is one such case.
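The dynamic programming recursion sketched in this abstract can be illustrated on a finite-state abstraction. The following is a minimal sketch, not the paper's algorithm: the states, transition matrices, and safe set below are invented for illustration.

```python
import numpy as np

def max_safety_probability(P, safe, N):
    """Maximal probability of staying in `safe` for N steps.
    P: dict action -> (n x n) transition matrix; safe: boolean mask."""
    V = safe.astype(float)  # terminal condition: V_N(x) = 1_{safe}(x)
    for _ in range(N):
        # Bellman backup: V_k(x) = 1_{safe}(x) * max_a sum_y P_a(x,y) V_{k+1}(y)
        V = safe * np.max([Pa @ V for Pa in P.values()], axis=0)
    return V

# 3-state example: state 2 is unsafe; action "b" never leads into it.
P = {"a": np.array([[0.8, 0.1, 0.1], [0.2, 0.7, 0.1], [0.0, 0.0, 1.0]]),
     "b": np.array([[0.9, 0.1, 0.0], [0.1, 0.9, 0.0], [0.0, 0.0, 1.0]])}
safe = np.array([True, True, False])
V = max_safety_probability(P, safe, N=5)
```

With action "b" the two safe states form a closed set, so the maximal 5-step safety probability from either is 1.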
3.
Modeling and simulation of hybrid systems in the MATLAB environment (Total citations: 1; self-citations: 0; citations by others: 1)
Hybrid systems are complex dynamical systems that combine continuous dynamics with discrete events, and they have become a popular topic in control theory research in recent years. Because a hybrid system contains both continuous variables and discrete events, handling such systems is complicated. Common modeling approaches for hybrid systems include hybrid automata, hybrid Petri nets, and duration calculus and its extensions. Building on an overview of the concepts and characteristics of hybrid systems, this paper introduces modeling and simulation issues in hybrid systems research. As a case study, a supermarket freezer system is modeled as a hybrid automaton and simulated with SIMULINK and STATEFLOW in MATLAB. The simulation results show good performance, providing a powerful tool for system analysis and design.
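A thermostat-style hybrid automaton like the freezer example can be sketched in a few lines without SIMULINK/STATEFLOW. This is a hedged illustration only: the mode dynamics, set points, and rate constants below are made up, not taken from the paper.

```python
def simulate_freezer(T0=-2.0, t_end=50.0, dt=0.01):
    """Two-mode hybrid automaton: 'cooling' vs 'off', Euler-integrated."""
    T, mode = T0, "cooling"        # continuous state + discrete mode
    low, high = -5.0, 0.0          # guard thresholds for mode switches
    history, t = [], 0.0
    while t < t_end:
        # continuous flow: cool toward -8 C, or warm toward ambient 20 C
        dT = -0.5 * (T + 8.0) if mode == "cooling" else 0.3 * (20.0 - T)
        T += dT * dt
        # discrete transitions fire when a guard is crossed
        if mode == "cooling" and T <= low:
            mode = "off"
        elif mode == "off" and T >= high:
            mode = "cooling"
        history.append(T)
        t += dt
    return history

temps = simulate_freezer()
```

The trajectory chatters between the two thresholds, which is exactly the limit-cycle behavior a Stateflow chart of this automaton would exhibit.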
4.
We develop stochastic optimal control results for nonlinear discrete-time systems driven by disturbances modeled by a Markov chain. A characterization and a computational procedure for a control law which maximizes a cost functional, related to expected time-to-violate specified constraints or to expected total yield before constraint violation occurs, are discussed. Such an optimal control law may be viewed as providing drift counteraction and is, therefore, referred to as drift counteraction stochastic optimal control. Two simulation examples highlight opportunities for applications of these results to hybrid electric vehicle (HEV) powertrain management and to oil extraction.
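The expected time-to-violation objective mentioned here can be sketched by value iteration on a small Markov chain: V(x) = 1 + max_a E[V(x')] inside the admissible set, V = 0 at violation. The 3-state chain and both transition matrices below are illustrative assumptions, not from the paper.

```python
import numpy as np

# state 2 is the (absorbing) constraint-violation state
P = {"a": np.array([[0.5, 0.4, 0.1],
                    [0.3, 0.4, 0.3],
                    [0.0, 0.0, 1.0]]),
     "b": np.array([[0.7, 0.2, 0.1],
                    [0.5, 0.3, 0.2],
                    [0.0, 0.0, 1.0]])}
admissible = np.array([True, True, False])

V = np.zeros(3)
for _ in range(1000):  # value iteration; contraction, so this converges
    # V(x) = 1 + max_a sum_y P_a(x,y) V(y) on admissible states, else 0
    V = admissible * (1.0 + np.max([Pa @ V for Pa in P.values()], axis=0))
```

Here action "b" is optimal in both admissible states (it counteracts the drift toward violation), giving V(0) = 90/11 expected steps to violation.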
5.
A stochastic games framework for verification and control of discrete time stochastic hybrid systems
Jerry Ding, Maryam Kamgarpour, Sean Summers, Alessandro Abate, John Lygeros, Claire Tomlin, 《Automatica》, 2013
We describe a framework for analyzing probabilistic reachability and safety problems for discrete time stochastic hybrid systems within a dynamic games setting. In particular, we consider finite horizon zero-sum stochastic games in which a control has the objective of reaching a target set while avoiding an unsafe set in the hybrid state space, and a rational adversary has the opposing objective. We derive an algorithm for computing the maximal probability of achieving the control objective, subject to the worst-case adversary behavior. From this algorithm, sufficient conditions for optimality are also derived for the synthesis of optimal control policies and worst-case disturbance strategies. These results are then specialized to the safety problem, in which the control objective is to remain within a safe set. We illustrate our modeling framework and computational approach using both a tutorial example with jump Markov dynamics and a practical application in the domain of air traffic management.
6.
Effectiveness of customers’ loyalty programs has been the focal point of some recent studies. While empirical research shows mixed findings, analytical studies on the efficacy of loyalty programs are in their early stages. In this paper, we develop an analytical model of the profitability of loyalty programs in which customers’ valuations, along with their satisfaction levels, are incorporated as stochastic variables. The model consists of a revenue-maximizing firm selling a product over two periods. A loyalty reward is offered to two-period buyers in the form of an absolute-value discount on the price in the second period. The satisfaction level is represented by the difference between a customer’s original and post-purchase valuation. The formulation yields a stochastic programming problem with a nonlinear non-convex objective function. The problem is solved in terms of the model parameters. The results reveal that, depending on the mean and variance of the satisfaction levels, the firm may be better off not offering a loyalty reward. Specifically, if the mean satisfaction level is positive with a coefficient of variation below a certain threshold, not adopting the loyalty program is optimal.
7.
This paper investigates the small-gain type conditions on stochastic iISS (SiISS) systems and makes full use of these conditions in the design and analysis of the controller. The contributions are as follows: (1) A new proof of the stochastic LaSalle theorem is provided; (2) The small-gain type conditions on SiISS are developed and their relationship is discussed; (3) Based on the stochastic LaSalle theorem and SiISS small-gain type conditions, the adaptive controllers are designed to guarantee that all of the closed-loop signals are bounded almost surely and the stochastic closed-loop systems are globally (asymptotically) stable in probability.
8.
Toivo Henningsson, Erik Johannesson, 《Automatica》, 2008, 44(11): 2890-2895
The standard approach in computer-controlled systems is to sample and control periodically. In certain applications, such as networked control systems or energy-constrained systems, it could be advantageous to instead use event-based control schemes. Aperiodic event-based control of first-order stochastic systems has been investigated in previous work. In any real implementation, however, it is necessary to have a well-defined minimum inter-event time. In this paper, we explore two such sporadic control schemes for first-order linear stochastic systems and compare the achievable performance to both periodic and aperiodic control. The results show that sporadic control can give better performance than periodic control in terms of both reduced process state variance and reduced control action frequency.
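The sporadic-control idea (event triggering plus an enforced minimum inter-event time) can be sketched for a noise-driven first-order integrator. This is an illustrative toy, not the paper's schemes: the threshold, dwell time, and impulse-reset control law are assumptions.

```python
import random

def simulate_sporadic(threshold=1.0, t_min=0.5, dt=0.01, steps=20000, seed=1):
    """First-order integrator dx = dw with event-triggered resets.
    An event (reset to 0) fires only if |x| >= threshold AND at least
    t_min time has elapsed since the previous event."""
    rng = random.Random(seed)
    x, since_last = 0.0, t_min
    events, xs = 0, []
    for _ in range(steps):
        x += rng.gauss(0.0, dt ** 0.5)   # Brownian disturbance increment
        since_last += dt
        if abs(x) >= threshold and since_last >= t_min:
            x, since_last, events = 0.0, 0.0, events + 1
        xs.append(x)
    return xs, events

xs, events = simulate_sporadic()
```

Counting `events` against the state variance of `xs` is the kind of trade-off (control action frequency vs. process variance) the abstract compares across periodic, aperiodic, and sporadic schemes.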
9.
When designing optimal controllers for any system, it is often the case that the true state of the system is unknown to the controller. Imperfect state information must be taken into account in the controller’s design in order to preserve its optimality. The same is true when performing reachability calculations. To estimate the probability that the state of a stochastic system reaches, or stays within, some set of interest in a given time horizon, it is necessary to find a controller that drives the system to that set with maximum probability, given the controller’s knowledge of the true state of the system. To date, little work has been done on stochastic reachability calculations with partially observable states. The work that has been done relies on converting the reachability optimization problem to one with an additive cost function, for which theoretical results are well known. Our approach is to preserve the multiplicative cost structure when deriving a sufficient statistic that reduces the problem to one of perfect state information. Our transformation includes a change of measure that simplifies the distribution of the sufficient statistic conditioned on its previous value. We develop a dynamic programming recursion for the solution of the equivalent perfect information problem, proving that the recursion is valid, that an optimal solution exists, and that it yields the same solution as the original problem. We also show that our results are equivalent to those for the reformulated additive cost problem, so such a reformulation is not required.
10.
This paper considers the robustness of stochastic stability of Markovian jump linear systems in continuous- and discrete-time with respect to their transition rates and probabilities, respectively. The continuous-time (discrete-time) system is described via a continuous-valued state vector and a discrete-valued mode which varies according to a Markov process (chain). By using a stochastic Lyapunov function approach and Kronecker product transformation techniques, sufficient conditions are obtained for the robust stochastic stability of the underlying systems, in terms of upper bounds on the perturbed transition rates and probabilities. Analytical expressions are derived for scalar systems, which are straightforward to use. Numerical examples are presented to show the potential of the proposed techniques.
11.
This paper presents some enhancements associated with stochastic decomposition (SD). Specifically, we study two issues: (a) Are there any conditions under which the regularized version of SD generates a unique solution? (b) Is there a way to modify the SD algorithm so that a user can trade-off solution times with solution quality? The second issue addresses the scalability of SD for very large scale problems for which computational resources may be limited and the user may be willing to accept solutions that are “nearly optimal”. We show that by using bootstrapping (re-sampling) the regularized SD algorithm can be accelerated without significant loss of optimality. We report computational results that demonstrate the viability of this approach.
12.
The solution of a stochastic control problem depends on the underlying model. The actual real-world model may not be known precisely, so one solves the problem for a hypothetical model that is, in general, different from but close to the real one; the optimal (or nearly optimal) control of the hypothetical model is then used as the solution for the real problem. In this paper, we assume that what is not precisely known is the underlying probability measure that determines the distribution of the random quantities driving the model. We investigate two ways to derive a bound on the suboptimality of the optimal control of the hypothetical problem when this control is used in the real problem. Both bounds are in terms of the Radon–Nikodym derivative of the underlying real-world measure with respect to the hypothetical one. We finally investigate how the bounds compare to each other.
13.
In this paper, the optimal viability decision problem of linear discrete-time stochastic systems with probability criterion is investigated. Under the condition of sequence-reachable discrete-time dynamic systems, the existence theorem of optimal viability strategy is given and the solving procedure of the optimal strategy is provided based on dynamic programming. A numerical example shows the effectiveness of the proposed methods.
14.
COPS, a concurrent constraint programming language, and its execution model (Total citations: 1; self-citations: 0; citations by others: 1)
Constraint programming, and in particular constraint logic programming and concurrent constraint programming, plays an increasingly important role in AI programming. The "computation as theorem proving" style of traditional logic programming yields concise and elegant operational semantics, but at the cost of low execution efficiency: as application systems grow in scale, performance degrades severely and may even collapse. To address this scalability problem of traditional logic programming, we designed COPS, a declarative language based on concurrent constraint programming concepts, which aims to reduce the nondeterminism of declarative programs at both the language-design and execution-model levels, improving search and runtime efficiency. In the language design, deterministic language constructs are introduced to avoid the overhead wasted when nondeterministic computation is applied to deterministic goals. In the execution model, on top of concurrent interleaved goal execution and a data-driven concurrency synchronization mechanism, an "execute deterministic goals first" strategy and a "minimal assumption" strategy are realized as extensions of constraint propagation, pruning the search space as much as possible and reducing search complexity. The knowledge representation, reasoning, and concurrency mechanisms provided by COPS make it an ideal language for constructing agent programs. The paper gives the syntax of COPS and an operational-semantics description of its execution model.
15.
16.
Yu. Nesterov, 《Automatica》, 2008, 44(6): 1559-1568
We propose an alternative approach to stochastic programming based on Monte-Carlo sampling and stochastic gradient optimization. The procedure is inherently probabilistic and the computed solution is a random variable. We propose a solution concept in which the probability that the random algorithm produces a solution with an expected objective value departing from the optimal one by more than ε is small enough. We derive complexity bounds on the number of iterations of this process. We show that by repeating the basic process on independent samples, one can significantly reduce the number of iterations.
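The Monte-Carlo sampling plus stochastic gradient idea can be sketched on a toy problem: minimize E[(x − ξ)²] with ξ ~ N(1, 1), whose true minimizer is x* = E[ξ] = 1. This is a hedged illustration of the general technique, not the paper's algorithm; the step-size schedule and sample counts are assumptions.

```python
import random

def sgd_expected_square(iters=20000, seed=0):
    """Stochastic gradient descent on f(x) = E[(x - xi)^2], xi ~ N(1, 1)."""
    rng = random.Random(seed)
    x = 10.0                                # deliberately poor start
    for k in range(1, iters + 1):
        xi = rng.gauss(1.0, 1.0)            # one Monte-Carlo scenario
        grad = 2.0 * (x - xi)               # unbiased stochastic gradient
        x -= grad / (k + 10)                # diminishing (Robbins-Monro) steps
    return x

x_star = sgd_expected_square()
```

The returned solution is itself a random variable (it depends on the drawn samples), which is exactly the point of the probabilistic solution concept the abstract proposes.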
17.
This paper considers the problem of deciding multi-period investments for maintenance and upgrade of electrical energy distribution networks. After describing the network as a constrained hybrid dynamical system, optimal control theory is applied to optimize profit under a complex incentive/penalty mechanism imposed by public authorities. The dynamics of the system and the cost function are translated into a mixed integer optimization model, whose solution gives the optimal investment policy over the multi-period horizon. While for a reduced-size test problem the pure mixed-integer approach provides the best optimal control policy, for real-life large-scale scenarios a heuristic solution is also introduced. Finally, the uncertainty associated with the dynamical model of the network is taken care of by adopting ideas from stochastic programming.
18.
19.
A simulation-and-regression approach for stochastic dynamic programs with endogenous state variables
We investigate the optimal control of a stochastic system in the presence of both exogenous (control-independent) stochastic state variables and endogenous (control-dependent) state variables. Our solution approach relies on simulations and regressions with respect to the state variables, but also grafts the endogenous state variable into the simulation paths. That is, unlike most other simulation approaches found in the literature, no discretization of the endogenous variable is required. The approach is meant to handle several stochastic variables, offers a high level of flexibility in their modeling, and should be at its best in non-time-homogeneous cases, when the optimal policy structure changes with time. We provide numerical results for a dam-based hydropower application, where the exogenous variable is the stochastic spot price of power, and the endogenous variable is the water level in the reservoir.
20.
We consider an optimization problem in which the cost of a feasible solution depends on a set of unknown parameters (a scenario) that will be realized. In order to assess the cost of implementing a given solution, its performance is compared with the optimal one under each feasible scenario. The positive difference between the objective values of the two solutions defines the regret corresponding to a fixed scenario. The proposed optimization model seeks a compromise solution by minimizing the expected regret, where the expectation is taken with respect to a probability distribution that depends on the same solution that is being evaluated, called a solution-dependent probability distribution. We study the optimization model obtained by applying a specific family of solution-dependent probability distributions to the shortest path problem, where the unknown parameters are the arc lengths of the network. This approach can be used to generate new models for robust optimization where the degree of conservatism is calibrated by using different families of probability distributions for the unknown parameters.
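The scenario-regret construction can be sketched on a two-path network. For simplicity this sketch uses a uniform, fixed scenario distribution rather than the solution-dependent family the abstract studies; the paths and per-scenario arc lengths below are invented for illustration.

```python
# two candidate s-t paths, described by the indices of the arcs they use
paths = {"upper": [0, 1], "lower": [2, 3]}

# scenarios: realized lengths for arcs 0..3
scenarios = [
    [1, 1, 2, 2],   # scenario where "upper" is optimal (cost 2 vs 4)
    [3, 3, 1, 1],   # scenario where "lower" is optimal (cost 2 vs 6)
]

def path_cost(path, lengths):
    return sum(lengths[arc] for arc in paths[path])

def expected_regret(path):
    """Average, over scenarios, of (cost of `path`) - (scenario-optimal cost)."""
    total = 0.0
    for lengths in scenarios:
        best = min(path_cost(p, lengths) for p in paths)
        total += path_cost(path, lengths) - best
    return total / len(scenarios)

best_path = min(paths, key=expected_regret)
```

Here "upper" has expected regret 2.0 and "lower" has 1.0, so the min-expected-regret compromise is "lower" even though each path is optimal in one scenario.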