1.
We consider the behaviour of a stochastic system composed of several identically distributed, but not independent, discrete-time absorbing Markov chains competing at each instant for a transition. The competition consists of determining at each instant, according to a given probability distribution, the single Markov chain allowed to make a transition. We analyse the first time at which one of the Markov chains reaches its absorbing state. When the number of Markov chains goes to infinity, we analyse the asymptotic behaviour of the system for an arbitrary probability mass function governing the competition. We give conditions that ensure the existence of the asymptotic distribution and show how these results apply to cluster-based distributed storage when the competition is handled using a geometric distribution.
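The competition mechanism is easy to simulate. The sketch below (a made-up 3-state absorbing chain and a truncated geometric competition law, not the paper's model or asymptotic analysis) estimates the first time at which any of the competing chains is absorbed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative absorbing chain: states 0 and 1 are transient, state 2 is absorbing.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.0, 0.0, 1.0]])
ABSORBING = 2

def first_absorption_time(n_chains, p_geom=0.3, max_steps=10_000):
    """Time at which the first of n competing chains hits the absorbing state."""
    states = np.zeros(n_chains, dtype=int)            # all chains start in state 0
    # Competition weights: truncated geometric distribution over chain indices.
    w = (1 - p_geom) ** np.arange(n_chains) * p_geom
    w /= w.sum()
    for t in range(1, max_steps + 1):
        k = rng.choice(n_chains, p=w)                 # the only chain allowed to move
        states[k] = rng.choice(3, p=P[states[k]])
        if states[k] == ABSORBING:
            return t
    return max_steps

samples = [first_absorption_time(n_chains=20) for _ in range(2000)]
print("mean first-absorption time:", np.mean(samples))
```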
2.
Asymptotic stability of the optimal filter with respect to its initial conditions is investigated in this paper. Under the assumption that the observation function is one-to-one and the observation noise is sufficiently small, it is shown that exponential stability of the nonlinear filter holds for a large class of denumerable Markov chains, including all finite Markov chains. Throughout this paper, ergodicity of the signal process is not assumed.
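The stability property can be illustrated numerically with a standard discrete-time HMM filter: run the same filter recursion from two different priors on one observation sequence and watch the total-variation gap shrink. The three-state chain, the one-to-one observation function h and the noise level below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

P = np.array([[0.90, 0.08, 0.02],        # signal chain (illustrative)
              [0.05, 0.90, 0.05],
              [0.02, 0.08, 0.90]])
h = np.array([0.0, 1.0, 2.0])            # one-to-one observation function
sigma = 0.1                              # small observation noise

def filter_step(pi, y):
    """One step of the exact HMM filter: predict with P, then Bayes update."""
    pred = pi @ P
    lik = np.exp(-0.5 * ((y - h) / sigma) ** 2)
    post = pred * lik
    return post / post.sum()

# Simulate the signal and observations.
T, x, ys = 200, 0, []
for _ in range(T):
    x = rng.choice(3, p=P[x])
    ys.append(h[x] + sigma * rng.normal())

# Two filters that differ only in their initial condition.
pi_a = np.array([1.0, 0.0, 0.0])
pi_b = np.array([0.0, 0.0, 1.0])
for t, y in enumerate(ys):
    pi_a, pi_b = filter_step(pi_a, y), filter_step(pi_b, y)
    if t % 50 == 0:
        print(f"t={t:3d}  total-variation gap = {0.5 * np.abs(pi_a - pi_b).sum():.2e}")
```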
3.
An optimal control problem with constraints is considered on a finite interval for a non-stationary Markov chain with a finite state space. The constraints are given as a set of inequalities. The existence of an optimal solution is proved under the natural assumption that the set of admissible controls is non-empty. The stochastic control problem is reduced to a deterministic one, and it is shown that the optimal solution satisfies the maximum principle; moreover, it can be chosen within the class of Markov controls. On the basis of this result an approach to the numerical solution is proposed, and its implementation is illustrated by examples.
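The reduction to a deterministic problem can be illustrated with the standard occupation-measure linear program for a finite-horizon constrained MDP; this is a generic sketch on made-up data, not the paper's construction. Decision variables rho_t(x,a) satisfy the flow constraints, the expected constraint cost is bounded, and a Markov policy is recovered by normalising rho.

```python
import numpy as np
from scipy.optimize import linprog

S, A, T = 3, 2, 4                                   # states, actions, horizon (illustrative)
rng = np.random.default_rng(2)
P = rng.dirichlet(np.ones(S), size=(T, S, A))       # P[t, x, a, y], non-stationary transitions
c = rng.uniform(0, 1, size=(T, S, A))               # running cost to minimise
d = rng.uniform(0, 1, size=(T, S, A))               # constraint cost, E[sum d] <= budget
d[..., 0] = 0.0                                     # action 0 is "free", so the LP is always feasible
mu0 = np.array([1.0, 0.0, 0.0])                     # initial distribution
budget = 1.0

n = T * S * A                                       # variables rho[t, x, a] flattened in C order
idx = lambda t, x, a: (t * S + x) * A + a

# Flow constraints: sum_a rho_0(x,a) = mu0(x); sum_a rho_{t+1}(y,a) = sum_{x,a} rho_t(x,a) P_t(y|x,a).
A_eq, b_eq = [], []
for x in range(S):
    row = np.zeros(n); row[[idx(0, x, a) for a in range(A)]] = 1.0
    A_eq.append(row); b_eq.append(mu0[x])
for t in range(T - 1):
    for y in range(S):
        row = np.zeros(n)
        row[[idx(t + 1, y, a) for a in range(A)]] = 1.0
        for x in range(S):
            for a in range(A):
                row[idx(t, x, a)] -= P[t, x, a, y]
        A_eq.append(row); b_eq.append(0.0)

res = linprog(c=c.ravel(), A_ub=[d.ravel()], b_ub=[budget],
              A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
rho = res.x.reshape(T, S, A)
policy = rho / np.maximum(rho.sum(axis=2, keepdims=True), 1e-12)   # Markov policy pi_t(a|x)
print("LP status:", res.message)
print("optimal expected cost:", res.fun)
```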
4.
Motivated by biological swarms occurring in nature, there is recent interest in developing swarms composed entirely of engineered agents. The main challenge for developing swarm guidance laws compared to earlier formation flying and multi‐vehicle coordination approaches is the sheer number of agents involved. While formation flying applications might involve up to 10 to 20 agents, swarms are desired to contain hundreds to many thousands of agents. In order to deal with this scale, the present paper makes a break with past deterministic methods, and considers the swarm as a statistical ensemble for which guidance can be performed from a probabilistic point of view. The probability‐based approach takes advantage of the law of large numbers, and leads to computationally tractable and implementable swarm guidance laws. Agents following a probabilistic guidance algorithm make statistically independent probabilistic decisions based solely on their own state, which ultimately guides the swarm to the desired density distribution in the configuration space. Two different synthesis methods are introduced for designing probabilistic guidance laws. The first is based on the Metropolis‐Hastings (M‐H) algorithm, and the second is based on using linear matrix inequalities (LMIs). The M‐H approach ensures convergent swarm behavior subject to enforced desired motion constraints, while the LMI approach additionally ensures exponential convergence with a prescribed decay rate, and allows minimization of a cost function that reflects fuel expenditure. In addition, both algorithms endow the swarm with the property of self‐repair, and the capability to strictly enforce zero‐probability keep‐out regions. This last property requires a slight generalization of the Perron‐Frobenius theory, and can be very useful in swarm applications that contain regions where no agents are allowed to go. Simulation examples are given to illustrate the methods and demonstrate desired properties of the guided swarm.
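The Metropolis‐Hastings synthesis step can be sketched directly: given a desired stationary density over bins and a symmetric proposal that respects motion constraints (here, moves between adjacent bins on a line), the M‐H acceptance rule yields a row-stochastic transition matrix whose stationary distribution is the target. The bin layout and target density are illustrative; this is the generic M‐H construction, not the paper's exact guidance law.

```python
import numpy as np

n = 10                                               # bins along a line (illustrative)
target = np.arange(1, n + 1, dtype=float)            # desired swarm density (illustrative)
target /= target.sum()

# Symmetric proposal on the path graph: try to move to each adjacent bin with probability 1/2.
P = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i + 1):
        if 0 <= j < n:
            P[i, j] = 0.5 * min(1.0, target[j] / target[i])   # Metropolis-Hastings acceptance
    P[i, i] = 1.0 - P[i].sum()                                # rejected moves stay put

# Each agent applies P to its own bin; the ensemble density follows x_{t+1} = x_t P.
x = np.zeros(n); x[0] = 1.0                          # whole swarm starts in bin 0
for t in range(1, 201):
    x = x @ P
    if t % 50 == 0:
        print(f"t={t:3d}  TV distance to target = {0.5 * np.abs(x - target).sum():.3e}")
```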
5.
Rolando Cavazos-Cadena, Systems & Control Letters, 1995, 24(5):373-383
This work considers denumerable state Markov decision processes with discrete time parameter. The performance of a control policy is measured by the (lim sup) expected average cost criterion, the action sets are compact metric spaces, and the cost function is continuous and bounded. Within this framework, necessary and sufficient conditions are given so that the vanishing interest rate (VIR) method, also known as the vanishing discount approach, yields an average optimal stationary policy.
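The vanishing-discount idea can be seen on a tiny finite model (a generic sketch, not the paper's denumerable-state setting): solve the alpha-discounted problem for discount factors approaching 1 and watch (1 − alpha)·V_alpha approach the optimal average cost while the discounted-optimal policy stabilises. All model data below are made up.

```python
import numpy as np

# Illustrative 3-state, 2-action MDP: transition tensor P[x, a, y] and costs c[x, a].
P = np.array([[[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]],
              [[0.3, 0.6, 0.1], [0.2, 0.2, 0.6]],
              [[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]]])
c = np.array([[1.0, 2.0],
              [0.5, 1.5],
              [2.0, 0.2]])

def discounted_solution(alpha):
    """Value iteration for the alpha-discounted cost; returns V_alpha and a greedy policy."""
    V = np.zeros(3)
    for _ in range(int(20 / (1 - alpha))):           # enough sweeps for this contraction factor
        Q = c + alpha * (P @ V)                      # Q[x, a]
        V = Q.min(axis=1)
    return V, Q.argmin(axis=1)

for alpha in (0.9, 0.99, 0.999, 0.9999):
    V, policy = discounted_solution(alpha)
    print(f"alpha={alpha}:  (1-alpha)*V = {np.round((1 - alpha) * V, 4)}  policy = {policy}")
```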
6.
Aristotle Arapostathis, Steven I. Marcus, Mathematics of Control, Signals, and Systems (MCSS), 1990, 3(1):1-29
We investigate an algorithm applied to the adaptive estimation of partially observed finite-state Markov chains. The algorithm utilizes the recursive equation characterizing the conditional distribution of the state of the Markov chain, given the past observations. We show that the process “driving” the algorithm has a unique invariant measure for each fixed value of the parameter, and, following the ordinary differential equation method for stochastic approximations, establish almost sure convergence of the parameter estimates to the solutions of an associated differential equation. The performance of the adaptive estimation scheme is analyzed by examining the induced controlled Markov process with respect to a long-run average cost criterion.
This research was supported in part by the Air Force Office of Scientific Research under Grant AFOSR-86-0029, in part by the National Science Foundation under Grant ECS-8617860, and in part by the DoD Joint Services Electronics Program through the Air Force Office of Scientific Research (AFSC) Contract F49620-86-C-0045.
7.
8.
Shun‐Pin Hsu, International Journal of Robust and Nonlinear Control, 2012, 22(5):492-503
In this work the controlled continuous‐time finite‐state Markov chain with safety constraints is studied. The constraints are expressed as a finite number of inequalities whose intersection forms a polyhedron. A probability distribution vector is called safe if it lies in the polyhedron. Under the assumptions that the controlled Markov chain is completely observable and the controller induces a unique stationary distribution in the interior of the polyhedron, the author identifies the supreme invariant safety set (SISS), where a set is called an invariant safety set if any probability distribution in the set is initially safe and remains safe as time evolves. In particular, the necessary and sufficient condition for the SISS to be the polyhedron itself is given via linear programming formulations. A closed‐form expression for the condition is also derived when the constraints impose only upper and/or lower bounds on the components of the distribution vectors. If the condition is not satisfied, a finite time bound is identified and used to characterize the SISS. Numerical examples are provided to illustrate the results. Copyright © 2011 John Wiley & Sons, Ltd.
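The safety question can be made concrete with a small numerical check, which is only an illustration of the setup and not the paper's SISS characterisation: given a generator whose stationary distribution lies strictly inside a box of lower and upper bounds, propagate an initially safe distribution with the matrix exponential and test whether it ever leaves the box. The generator and bounds are invented for the example.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative CTMC generator (rows sum to zero).
Q = np.array([[-1.0,  0.7,  0.3],
              [ 0.5, -0.9,  0.4],
              [ 0.2,  0.8, -1.0]])
lower = np.array([0.10, 0.20, 0.15])     # safety polyhedron: lower <= x(t) <= upper
upper = np.array([0.60, 0.60, 0.60])

def stays_safe(x0, horizon=20.0, dt=0.05):
    """Check safety of the trajectory x(t) = x0 expm(Q t) on a time grid of step dt."""
    step = expm(Q * dt)
    x = x0.copy()
    for k in range(int(horizon / dt)):
        if np.any(x < lower - 1e-12) or np.any(x > upper + 1e-12):
            return False, k * dt        # first grid time at which the distribution left the box
        x = x @ step
    return True, horizon

# Two candidate initial distributions, both safe at t = 0.
print(stays_safe(np.array([0.30, 0.40, 0.30])))
print(stays_safe(np.array([0.55, 0.25, 0.20])))
```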
9.
This paper presents a novel method for modelling the spatio-temporal movements of tourists at the macro-level using Markov chains methodology. Markov chains are used extensively in modelling random phenomena which result in a sequence of events linked together under the assumption of first-order dependence. In this paper, we utilise Markov chains to analyse the outcome and trend of events associated with spatio-temporal movement patterns. A case study was conducted on Phillip Island, situated in the state of Victoria, Australia, to test whether a stationary discrete absorbing Markov chain could be effectively used to model the spatio-temporal movements of tourists. The results obtained showed that this methodology can indeed be used effectively to provide information on tourist movement patterns. One significant outcome of this research is that it will assist park managers in developing better packages for tourists, and will also assist in tracking tourists’ movements through simulation based on the model.
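Standard absorbing-chain formulas show the kind of output such a model produces. The sketch below uses made-up attractions and transition probabilities (not the Phillip Island data): with transient block Q, the fundamental matrix N = (I − Q)⁻¹ gives the expected number of visits to each attraction and N·1 the expected number of moves before the tourist departs.

```python
import numpy as np

# Illustrative attractions (transient states) plus an absorbing "departed" state.
sites = ["beach", "wildlife park", "lookout"]
Q = np.array([[0.10, 0.50, 0.20],        # transient -> transient
              [0.30, 0.10, 0.30],
              [0.25, 0.25, 0.10]])
R = np.array([[0.20],                    # transient -> absorbing (leave the island)
              [0.30],
              [0.40]])

N = np.linalg.inv(np.eye(3) - Q)         # fundamental matrix: N[i, j] = expected visits to j from i
expected_steps = N @ np.ones(3)          # expected number of moves before departure
absorption = N @ R                       # should be all ones (single absorbing state)

start = 0                                # tourists entering at the beach
for j, s in enumerate(sites):
    print(f"expected visits to {s:13s} starting from {sites[start]}: {N[start, j]:.2f}")
print("expected number of moves before leaving:", round(expected_steps[start], 2))
```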
10.
This work is concerned with controlled Markov chains with bounded costs. Assuming that the transition probabilities satisfy a simultaneous Doeblin condition, it is shown that Schweitzer’s transformation on the transition law yields a strong ergodicity condition that implies that the solution to the average cost optimality equation can be approximated, at a geometric rate, via the value iteration scheme.
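Schweitzer's transformation replaces each transition law P by P_tau = (1 − tau)I + tau·P, which preserves the stationary distribution of every stationary policy (and hence the optimal average cost) while making the chains aperiodic, so value iteration converges geometrically. The sketch below applies this to an invented two-action chain and reports the span of successive value-iteration differences as an average-cost estimate; it illustrates the mechanism, not the paper's proofs.

```python
import numpy as np

# Illustrative controlled chain: P[x, a, y] and bounded costs c[x, a].
P = np.array([[[0.0, 1.0, 0.0], [0.5, 0.0, 0.5]],
              [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]],
              [[0.0, 1.0, 0.0], [0.3, 0.3, 0.4]]])
c = np.array([[2.0, 1.0],
              [0.5, 3.0],
              [1.0, 0.1]])
tau = 0.5

P_tau = (1 - tau) * np.eye(3)[:, None, :] + tau * P   # Schweitzer's transformation (costs unchanged)

V = np.zeros(3)
for k in range(1, 501):
    TV = (c + (P_tau @ V)).min(axis=1)                # value-iteration operator on the transformed model
    diff = TV - V                                     # V_{k+1} - V_k converges to the average cost
    V = TV
    if k % 100 == 0:
        print(f"sweep {k}: average-cost estimate in [{diff.min():.4f}, {diff.max():.4f}], "
              f"span {diff.max() - diff.min():.1e}")
```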
11.
This paper presents a novel method for solving the multi-objective problem in the case of a metric state space using the Manhattan distance. The problem is restricted to a class of ergodic controllable finite Markov chains. This optimization approach is developed for converging to an optimal solution that corresponds to a strong Pareto optimal point on the Pareto front. The method consists of a two-step iterated procedure: (a) the first step computes an approximation to a strong Pareto optimal point, and (b) the second step refines the previous approximation. We formulate the problem with Tikhonov's regularization added to ensure convergence of the cost functions to a unique strong point on the Pareto front. We prove that there exists an optimal solution that is a strong Pareto optimal solution and that it is the closest solution to the utopian point of the Pareto front. The proposed solution is validated theoretically and by a numerical example considering the vehicle routing planning problem.
12.
Ariel D. Procaccia, Information Processing Letters, 2008, 108(6):390-393
Given an unknown tournament over {1,…,n}, we show that the query complexity of the question “Is there a vertex with outdegree n−1?” (known as a Condorcet winner in social choice theory) is exactly 2n−⌊log(n)⌋−2. This stands in stark contrast to the evasiveness of this property in general digraphs.
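The upper bound comes from a balanced single-elimination argument, which is easy to act out in code: the bracket costs n−1 queries, only its champion can have outdegree n−1, and the champion only needs to be checked against the opponents it has not already beaten. The sketch below (with an invented hidden tournament and a query counter) follows that scheme; pairings are not optimised, so the count matches the stated bound only up to rounding.

```python
from collections import defaultdict
import numpy as np

rng = np.random.default_rng(3)

def make_tournament(n, plant_winner=False):
    """Hidden tournament: beats[i, j] is True iff edge i -> j; optionally plant vertex 0 as Condorcet winner."""
    beats = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < 0.5:
                beats[i, j] = True
            else:
                beats[j, i] = True
    if plant_winner:
        beats[0, :], beats[:, 0] = True, False
    return beats

def has_condorcet_winner(beats):
    """Decide existence of an outdegree-(n-1) vertex using about 2n - log2(n) - 2 edge queries."""
    n = len(beats)
    queries = 0
    won_against = defaultdict(set)

    def query(i, j):
        nonlocal queries
        queries += 1
        if beats[i, j]:
            won_against[i].add(j)
            return True
        won_against[j].add(i)
        return False

    # Balanced single-elimination bracket: a Condorcet winner, if any, must emerge as champion.
    alive = list(range(n))
    while len(alive) > 1:
        nxt = [alive[-1]] if len(alive) % 2 else []
        for k in range(0, len(alive) - 1, 2):
            i, j = alive[k], alive[k + 1]
            nxt.append(i if query(i, j) else j)
        alive = nxt
    champ = alive[0]

    # Verify the champion only against vertices it has not already beaten inside the bracket.
    for v in range(n):
        if v != champ and v not in won_against[champ]:
            if not query(champ, v):
                return False, queries
    return True, queries

n = 64
print(has_condorcet_winner(make_tournament(n, plant_winner=True)))    # planted winner is found
print(has_condorcet_winner(make_tournament(n)))                       # random tournament, usually no winner
print("bound 2n - floor(log2 n) - 2 =", 2 * n - int(np.log2(n)) - 2)
```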
13.
Holger Hermanns, Joost-Pieter Katoen, Joachim Meyer-Kayser, Markus Siegle, International Journal on Software Tools for Technology Transfer (STTT), 2003, 4(2):153-172
Markov chains are widely used in the context of the performance and reliability modeling of various systems. Model checking of such chains with respect to a given (branching) temporal logic formula has been proposed for both discrete [34, 10] and continuous time settings [7, 12]. In this paper, we describe a prototype model checker for discrete and continuous-time Markov chains, the Erlangen–Twente Markov Chain Checker E⊢MC², where properties are expressed in appropriate extensions of CTL. We illustrate the general benefits of this approach and discuss the structure of the tool. Furthermore, we report on successful applications of the tool to some examples, highlighting lessons learned during the development and application of E⊢MC².
Published online: 19 November 2002
Correspondence to: Holger Hermanns
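The numerical core behind checking time-bounded properties of a CTMC is transient analysis, usually done by uniformisation. The sketch below computes pi(t) for an invented three-state generator and the probability of sitting in a goal set at time t; it is a stand-in for the idea, not the E⊢MC² implementation.

```python
import numpy as np
from math import exp

# Illustrative CTMC generator (rows sum to zero) and goal set.
Q = np.array([[-2.0,  1.5,  0.5],
              [ 1.0, -3.0,  2.0],
              [ 0.0,  0.5, -0.5]])
goal = [2]

def transient(pi0, t, eps=1e-10):
    """pi(t) = pi0 * exp(Q t) via uniformisation: Poisson-weighted powers of the uniformised DTMC."""
    lam = max(-Q.diagonal())                  # uniformisation rate
    P = np.eye(len(Q)) + Q / lam              # embedded stochastic matrix
    weight = exp(-lam * t)                    # Poisson(lam*t) probability of k = 0 jumps
    term, acc, k = pi0.copy(), weight * pi0, 0
    while weight > eps or k < lam * t:        # truncate the Poisson sum once the tail is negligible
        k += 1
        term = term @ P
        weight *= lam * t / k
        acc += weight * term
    return acc

pi_t = transient(np.array([1.0, 0.0, 0.0]), t=2.0)
print("pi(2.0) =", np.round(pi_t, 6), "  P(in goal at t=2) =", round(pi_t[goal].sum(), 6))
```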
14.
We propose a technique for the analysis of infinite-state graph transformation systems, based on the construction of finite structures approximating their behaviour. Following a classical approach, one can construct a chain of finite under-approximations (k-truncations) of the Winskel-style unfolding of a graph grammar. More interestingly, a chain of finite over-approximations (k-coverings) of the unfolding can also be constructed. The fact that k-truncations and k-coverings approximate the unfolding with arbitrary accuracy is formalised by showing that both chains converge (in a categorical sense) to the full unfolding. We discuss how the finite over- and under-approximations can be used to check properties of systems modelled by graph transformation systems, illustrating this with some small examples. We also describe the Augur tool, which provides a partial implementation of the proposed constructions and has been used for the verification of larger case studies.
15.
Degradable fault-tolerant systems can be evaluated using rewarded continuous-time Markov chain (CTMC) models. In that context, a useful measure to consider is the distribution of the cumulative reward over a time interval [0,t]. All currently available numerical methods for computing that measure tend to be very expensive when the product of the maximum output rate of the CTMC model and t is large and, in that case, their application is limited to CTMC models of moderate size. In this paper, we develop two methods for computing bounds for the cumulative reward distribution of CTMC models with reward rates associated with states: BT/RT (Bounding Transformation/Regenerative Transformation) and BT/BRT (Bounding Transformation/Bounding Regenerative Transformation). The methods require the selection of a regenerative state, are numerically stable and compute the bounds with well-controlled error. For one class of rewarded CTMC models and a particular, natural selection for the regenerative state, the BT/BRT method allows us to trade off bound tightness with computational cost and will provide bounds at a moderate computational cost in many cases of interest. For a slightly wider class of models and a particular, natural selection for the regenerative state, the BT/RT method will yield tighter bounds at a higher computational cost. Under additional conditions, the bounds obtained using the less expensive version of BT/BRT and using BT/RT seem to be tight for any value of t, or for all but small values of t, depending on the initial probability distribution of the model. Models in these classes satisfying the additional conditions include both exact and bounding versions of typical failure/repair performability models of fault-tolerant systems with exponential failure and repair time distributions, repair in every state with failed components, and a reward rate structure that is a non-increasing function of the collection of failed components. We illustrate both the applicability and the performance of the methods using a large CTMC performability example of a fault-tolerant multiprocessor system.
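The measure being bounded can also be estimated by brute force. The sketch below runs plain Monte Carlo on a toy failure/repair rewarded CTMC (invented rates and reward structure, not the BT/RT or BT/BRT methods): simulate the chain over [0, t], accumulate reward rate times sojourn time, and read off the empirical distribution of the cumulative reward.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy failure/repair CTMC: state index 0/1/2 means 2/1/0 working units; reward = performance rate.
Q = np.array([[-0.2,  0.2,  0.0],
              [ 1.0, -1.1,  0.1],
              [ 0.0,  2.0, -2.0]])
reward_rate = np.array([1.0, 0.6, 0.0])

def cumulative_reward(t_end, x0=0):
    """Accumulated reward over [0, t_end] for one simulated trajectory."""
    x, t, acc = x0, 0.0, 0.0
    while True:
        rate = -Q[x, x]
        dwell = rng.exponential(1.0 / rate) if rate > 0 else np.inf
        if t + dwell >= t_end:
            return acc + reward_rate[x] * (t_end - t)
        acc += reward_rate[x] * dwell
        t += dwell
        jump_probs = np.maximum(Q[x], 0.0) / rate     # embedded jump distribution
        x = rng.choice(3, p=jump_probs)

samples = np.array([cumulative_reward(t_end=10.0) for _ in range(20_000)])
for level in (6.0, 8.0, 9.5):
    print(f"P(cumulative reward over [0,10] <= {level}) ~ {np.mean(samples <= level):.4f}")
```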
16.
The valued constraint satisfaction problem (VCSP) is an optimisation framework that originated in Artificial Intelligence and generalises the classical constraint satisfaction problem (CSP). The VCSP is powerful enough to describe many important classes of problems. In order to investigate the complexity and expressive power of valued constraints, a number of algebraic tools have been developed in the literature. In this note we present alternative proofs of some known results without using the algebraic approach, but by representing valued constraints explicitly as combinations of other valued constraints.
17.
18.
19.
In this note we prove that the equations satisfied by one-letter regular languages are exactly those satisfied by commutative regular languages. This answers a problem raised by Arto Salomaa.
20.
To support the multi-service nature of 3G systems, a radio channel capacity planning method based on a multi-dimensional Markov state model is proposed. The key to the method is to build accurate traffic models for the voice service and each class of data service, and to map the service requirements of each class onto its actual demand for radio channels. The relationships among call blocking probability, call arrival rate, and radio channel configuration are analysed in depth. By analysing the relationship between radio channel capacity and call arrival rate, a basis for system capacity expansion is given.
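For the single-service special case of such a model, the relationship between offered traffic, channel count and blocking probability is the classical Erlang B formula; the multi-service 3G case in the paper generalises this to a multi-dimensional state space. The sketch below uses the standard Erlang B recursion to find the smallest channel count meeting a blocking target; all traffic figures are illustrative.

```python
def erlang_b(channels: int, offered_erlangs: float) -> float:
    """Blocking probability of an M/M/c/c system via the numerically stable Erlang B recursion."""
    b = 1.0
    for c in range(1, channels + 1):
        b = offered_erlangs * b / (c + offered_erlangs * b)
    return b

def channels_needed(offered_erlangs: float, target_blocking: float) -> int:
    """Smallest number of radio channels whose call-blocking probability meets the target."""
    c = 1
    while erlang_b(c, offered_erlangs) > target_blocking:
        c += 1
    return c

# Illustrative planning table: call arrival rate x mean holding time = offered load in Erlangs.
for calls_per_hour, mean_hold_s in [(360, 90), (720, 90), (1440, 120)]:
    load = calls_per_hour / 3600.0 * mean_hold_s
    c = channels_needed(load, target_blocking=0.02)
    print(f"{calls_per_hour:5d} calls/h, {mean_hold_s:3d} s hold -> {load:5.1f} Erl, "
          f"{c} channels for <=2% blocking")
```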