Similar Literature
1.
We consider the behaviour of a stochastic system composed of several identically distributed, but non-independent, discrete-time absorbing Markov chains competing at each instant for a transition. The competition consists in selecting at each instant, according to a given probability distribution, the only Markov chain allowed to make a transition. We analyse the first time at which one of the Markov chains reaches its absorbing state. When the number of Markov chains goes to infinity, we analyse the asymptotic behaviour of the system for an arbitrary probability mass function governing the competition. We give conditions that ensure the existence of the asymptotic distribution and show how these results apply to cluster-based distributed storage when the competition is handled using a geometric distribution.
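As a hedged illustration of the setting (not the paper's analysis), the following Python sketch simulates several identically distributed absorbing chains where, at each instant, a truncated geometric draw selects the single chain allowed to move, and records the first time any chain is absorbed. The transition matrix, the geometric parameter, and the chain count are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 3-state absorbing chain: state 2 is absorbing.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.0, 0.0, 1.0]])
ABSORBING = 2

def first_absorption_time(n_chains=50, q=0.3, max_steps=10_000):
    """Time until the first of n_chains competing chains is absorbed.

    At each instant a truncated geometric(q) draw selects the one
    chain allowed to make a transition; all other chains stay put.
    """
    states = np.zeros(n_chains, dtype=int)           # all start in state 0
    for t in range(1, max_steps + 1):
        k = min(rng.geometric(q) - 1, n_chains - 1)  # winner of the competition
        states[k] = rng.choice(3, p=P[states[k]])
        if states[k] == ABSORBING:
            return t
    return max_steps

samples = [first_absorption_time() for _ in range(2000)]
print("mean first-absorption time ≈", np.mean(samples))
```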

2.
In this paper we present data structures and distributed algorithms for CSL model-checking-based performance and dependability evaluation. We show that all the necessary computations are composed of series or sums of matrix-vector products. We discuss sparse storage structures for the required matrices and present efficient sequential and distributed disk-based algorithms for performing these matrix-vector products. We illustrate the effectiveness of our approach in a number of case studies in which continuous-time Markov chains (generated in a distributed way from stochastic Petri net specifications) with several hundreds of millions of states are solved on a workstation cluster with 26 dual-processor nodes. We give details of the memory consumption, the solution times, and the speedup. The distributed message-passing algorithms have been implemented in a tool called PARSECS, which also takes care of the distributed Markov chain generation and can also be used for distributed CTL model checking of Petri nets.
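The kernel everything above reduces to is the sparse matrix-vector product. A minimal hand-rolled compressed-sparse-row (CSR) sketch follows; the tiny matrix is illustrative, and the paper's actual implementation is distributed and disk-based rather than in-memory.

```python
import numpy as np

# Hand-rolled CSR storage: values and column indices packed per row,
# plus offsets marking where each row starts.
vals = np.array([0.5, 0.5, 0.3, 0.7, 1.0])
cols = np.array([0,   1,   1,   2,   2])
rowptr = np.array([0, 2, 4, 5])   # row i occupies vals[rowptr[i]:rowptr[i+1]]

def spmv(vals, cols, rowptr, x):
    """y = A x for a CSR-stored matrix A -- the product that series and
    sums of matrix-vector operations in CSL model checking are built from."""
    y = np.zeros(len(rowptr) - 1)
    for i in range(len(y)):
        lo, hi = rowptr[i], rowptr[i + 1]
        y[i] = np.dot(vals[lo:hi], x[cols[lo:hi]])
    return y

x = np.array([1.0, 2.0, 3.0])
print(spmv(vals, cols, rowptr, x))   # same as the dense product A @ x
```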

3.
We consider the following decision problem: given a finite Markov chain with distinguished source and target states, and given a rational number r, does there exist an integer n such that the probability of reaching the target from the source in n steps equals r? This problem, which is not known to be decidable, lies at the heart of many model-checking questions on Markov chains. We provide evidence of the hardness of the problem by giving a reduction from the Skolem Problem, a number-theoretic decision problem whose decidability has been open for many decades.
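For any fixed n the probability in question is easy to compute exactly; what is hard is deciding whether some n attains r. A hedged sketch with exact rational arithmetic (the chain and the value of r are illustrative assumptions):

```python
from fractions import Fraction

# Illustrative chain with rational entries; source = 0, target = 2.
P = [[Fraction(1, 2), Fraction(1, 2), Fraction(0)],
     [Fraction(0),    Fraction(1, 2), Fraction(1, 2)],
     [Fraction(0),    Fraction(0),    Fraction(1)]]

def n_step_probability(P, source, target, n):
    """Exact probability of being in `target` after n steps from `source`."""
    v = [Fraction(int(i == source)) for i in range(len(P))]
    for _ in range(n):
        v = [sum(v[i] * P[i][j] for i in range(len(P)))
             for j in range(len(P))]
    return v[target]

# We can only ever test a finite prefix of n values -- which is exactly
# why the existential question is hard. Here n = 3 happens to hit r.
r = Fraction(1, 2)
hits = [n for n in range(1, 30) if n_step_probability(P, 0, 2, n) == r]
print("n with p_n == 1/2 among 1..29:", hits)
```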

4.
We investigate an algorithm applied to the adaptive estimation of partially observed finite-state Markov chains. The algorithm utilizes the recursive equation characterizing the conditional distribution of the state of the Markov chain, given the past observations. We show that the process “driving” the algorithm has a unique invariant measure for each fixed value of the parameter and, following the ordinary differential equation method for stochastic approximations, establish almost sure convergence of the parameter estimates to the solutions of an associated differential equation. The performance of the adaptive estimation scheme is analyzed by examining the induced controlled Markov process with respect to a long-run average cost criterion. This research was supported in part by the Air Force Office of Scientific Research under Grant AFOSR-86-0029, in part by the National Science Foundation under Grant ECS-8617860, and in part by the DoD Joint Services Electronics Program through the Air Force Office of Scientific Research (AFSC) Contract F49620-86-C-0045.
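The recursive equation referred to is the standard filter for the conditional state distribution given past observations. A minimal sketch under an assumed two-state chain and observation model (both are illustrative, not the paper's):

```python
import numpy as np

# Illustrative two-state chain and noisy observation likelihoods.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # state transition matrix
B = np.array([[0.7, 0.3],
              [0.4, 0.6]])          # B[x, y] = Prob(observe y | state x)

def filter_update(pi, y):
    """One step of the conditional-distribution recursion:
    predict with P, then correct with the likelihood of observation y."""
    unnormalized = (pi @ P) * B[:, y]
    return unnormalized / unnormalized.sum()

pi = np.array([0.5, 0.5])
for y in [0, 0, 1, 1, 0]:           # an assumed observation sequence
    pi = filter_update(pi, y)
print("conditional state distribution:", pi)
```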

5.
In this paper we present an explicit disk-based verification algorithm for probabilistic systems defining discrete-time, finite-state Markov chains. Given a Markov chain and an integer k (the horizon), our algorithm checks whether the probability of reaching an error state in at most k steps is below a given threshold. We present an implementation of our algorithm within a suitable extension of the Murϕ verifier. We call the resulting probabilistic model checker FHP-Murϕ (Finite Horizon Probabilistic Murϕ). We present experimental results comparing FHP-Murϕ with (a finite-horizon subset of) PRISM, a state-of-the-art symbolic model checker for Markov chains. Our experimental results show that FHP-Murϕ can handle systems that are out of reach for PRISM, namely those involving arithmetic operations on the state variables (e.g. hybrid systems). This research has been partially supported by MURST projects MEFISTO and SAHARA. This paper is a journal version of the conference paper [16].
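A hedged sketch of the finite-horizon check itself, using explicit in-memory vectors rather than the paper's disk-based layout: make error states absorbing and iterate the step equation k times, so the error-state mass equals the probability of reaching an error within k steps.

```python
import numpy as np

# Illustrative DTMC; state 3 is the error state (made absorbing).
P = np.array([[0.5, 0.4, 0.1, 0.0],
              [0.0, 0.6, 0.3, 0.1],
              [0.2, 0.0, 0.7, 0.1],
              [0.0, 0.0, 0.0, 1.0]])
ERROR = 3

def error_within_k(P, init, k):
    """Probability of reaching the (absorbing) error state in <= k steps."""
    pi = init.copy()
    for _ in range(k):
        pi = pi @ P
    return pi[ERROR]

init = np.array([1.0, 0.0, 0.0, 0.0])
threshold = 0.3
p = error_within_k(P, init, k=5)
print(f"P(error within 5 steps) = {p:.4f}; below {threshold}? {p < threshold}")
```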

6.
The probability distribution of a Markov chain is viewed as the information state of an additive optimization problem. This optimization problem is then generalized to a product form whose information state gives rise to a generalized notion of probability distribution for Markov chains. The evolution and the asymptotic behavior of this generalized, or “risk-sensitive”, probability distribution is studied in this paper, and a conjecture regarding the asymptotic periodicity of the risk-sensitive probability is proposed and proved in the two-dimensional case. The relation between a set of simultaneous non-linear equations and the set of periodic attractors is analyzed.

7.
We study model checking based on interactive Markov chains (IMCs). An IMC is a model of concurrent systems that integrates functional description with performance characterization, and model checking is a technique for automated functional verification and performance evaluation. The model-checking algorithm proposed in this paper combines traditional functional verification with performance evaluation and is consistent with existing algorithms. Experimental analysis shows that the algorithm performs well and is suitable for the verification and evaluation of large, complex systems.

8.
In model checking, Markov decision processes can model systems with uncertainty, but the state-space explosion problem affects both the success and the efficiency of verification; bisimulation equivalence can be used to reduce the system's state space. Building on the strong bisimulation relation, we introduce the notion of weak bisimulation equivalence for Markov decision process models and derive the intrinsic connection between the bisimulation equivalences of a continuous-time Markov decision process and its embedded discrete-time Markov decision process. Building on the logical-characterization preservation of strong bisimulation equivalence, we then establish the logic-preservation property under weak bisimulation equivalence, proving that two weakly bisimilar states satisfy the same continuous stochastic logic formulas without the next-step operator. The verification problem on the original model can thus be transformed into one on the reduced model, improving the efficiency of verification.

9.
We prove that the optimal lumping quotient of a finite Markov chain can be constructed in O(m lg n) time, where n is the number of states and m is the number of transitions. Our proof relies on the use of splay trees (designed by Sleator and Tarjan [J. ACM 32 (3) (1985) 652-686]) to sort transition weights.
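A hedged sketch of what the lumping quotient computes, in a naive form: repeatedly split any block whose states disagree on their cumulative transition weight into some block. This is ordinary-lumpability partition refinement at roughly O(n·m) per pass; the paper's splay-tree algorithm reaches the same quotient in O(m lg n). The chain and the initial labelling are illustrative assumptions.

```python
# Naive ordinary-lumpability refinement over an illustrative DTMC.
P = {0: {1: 0.5, 2: 0.5},
     1: {3: 1.0},
     2: {3: 1.0},
     3: {0: 1.0}}
labels = {0: 'a', 1: 'b', 2: 'b', 3: 'a'}   # assumed initial labelling

def refine_once(partition):
    for block in partition:
        for splitter in partition:
            # cumulative transition weight from each state into `splitter`
            w = {s: sum(p for t, p in P[s].items() if t in splitter)
                 for s in block}
            groups = {}
            for s, weight in w.items():
                groups.setdefault(weight, set()).add(s)
            if len(groups) > 1:              # states disagree: split block
                rest = [b for b in partition if b is not block]
                return rest + list(groups.values()), True
    return partition, False

partition = [{s for s in P if labels[s] == l}
             for l in sorted(set(labels.values()))]
changed = True
while changed:
    partition, changed = refine_once(partition)
print("lumping quotient blocks:", partition)   # states 1 and 2 get lumped
```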

10.
A Markov chain is controlled by a decision maker receiving his observations of the state via a noisy memoryless channel. That information is encoded causally. The encoder is assumed to have perfect channel feedback information. Separation results are derived and used to prove that encoding is useless for a class of symmetric channels. This paper extends the results of the authors (1983) by using methods similar to those of that paper.

11.
An optimal control problem with constraints is considered on a finite interval for a non-stationary Markov chain with a finite state space. The constraints are given as a set of inequalities. The existence of an optimal solution is proved under the natural assumption that the set of admissible controls is non-empty. The stochastic control problem is reduced to a deterministic one, and it is shown that the optimal solution satisfies the maximum principle; moreover, it can be chosen within the class of Markov controls. On the basis of this result an approach to the numerical solution is proposed and its implementation is illustrated by examples.

12.
The (approximate) regenerative block-bootstrap for bootstrapping general Harris Markov chains has recently been developed. It is built on the renewal properties of the chain, or of a Nummelin extension of it, and has theoretical properties that surpass those of other existing methods within the Markovian framework. The practical issues related to the implementation of this specific resampling method are discussed. Various simulation studies are presented that investigate its performance and compare it with other bootstrap resampling schemes standing as natural candidates in the Markov setting.
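A minimal sketch of the regenerative scheme for the simple case of a chain with a known atom (here state 0): cut the observed trajectory into blocks at successive returns to the atom, then resample whole blocks with replacement. The chain and the bootstrapped statistic are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.50, 0.00, 0.50]])

# Simulate a trajectory started at the atom, state 0.
n = 5000
traj = [0]
for _ in range(n - 1):
    traj.append(rng.choice(3, p=P[traj[-1]]))

# Cut into regeneration blocks: each block starts at a visit to the atom
# (the leftover tail after the last visit is discarded, as is standard).
starts = [i for i, s in enumerate(traj) if s == 0]
blocks = [traj[a:b] for a, b in zip(starts, starts[1:])]

# Bootstrap the mean state by resampling whole blocks with replacement.
boot_means = []
for _ in range(1000):
    resampled = [x for k in rng.integers(len(blocks), size=len(blocks))
                 for x in blocks[k]]
    boot_means.append(np.mean(resampled))
print("bootstrap s.e. of the mean state:", np.std(boot_means))
```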

13.
A Stochastic L-System Based on a Markov Chain Model of Inheritance and Reproduction
The inheritance of biological genes from parent to offspring has the Markov property. A stochastic L-system based on a Markov chain model of inheritance and self-reproduction combines the morphogenesis model of virtual organisms with genetic biology and, at the same time, offers a useful attempt at realizing artificial life.
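A hedged sketch of the idea (the symbols, productions, and transition probabilities are all illustrative, not the paper's model): an L-system in which the production applied to each symbol is drawn from a Markov chain over the alternative productions, so each rule choice depends on the previous one in the Markovian, "inherited" fashion.

```python
import random

random.seed(0)

# Two alternative productions for the symbol 'F'.
RULES = ["F[+F]F", "F[-F]F"]

# Markov transition over rule indices: the rule applied next depends
# on the rule applied previously (the 'inheritance' step).
TRANS = [[0.8, 0.2],
         [0.3, 0.7]]

def rewrite(word, prev_rule):
    out, rule = [], prev_rule
    for ch in word:
        if ch == 'F':
            rule = random.choices([0, 1], weights=TRANS[rule])[0]
            out.append(RULES[rule])
        else:
            out.append(ch)
    return ''.join(out), rule

word, rule = 'F', 0
for _ in range(3):               # three generations of growth
    word, rule = rewrite(word, rule)
print(word[:60], '...')
```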

14.
In this work the controlled continuous-time finite-state Markov chain with safety constraints is studied. The constraints are expressed as a finite number of inequalities whose intersection forms a polyhedron; a probability distribution vector is called safe if it lies in the polyhedron. Under the assumptions that the controlled Markov chain is completely observable and that the controller induces a unique stationary distribution in the interior of the polyhedron, the author identifies the supreme invariant safety set (SISS), where a set is called an invariant safety set if any probability distribution in the set is initially safe and remains safe as time evolves. In particular, the necessary and sufficient condition for the SISS to be the polyhedron itself is given via linear programming formulations. A closed-form expression for the condition is also derived when the constraints impose only upper and/or lower bounds on the components of the distribution vectors. If the condition is not satisfied, a finite time bound is identified and used to characterize the SISS. Numerical examples are provided to illustrate the results.
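A hedged sketch of the safety notion, using a discrete-time surrogate for the paper's continuous-time setting (the chain, the polyhedron, and the horizon are illustrative assumptions): propagate the distribution and test the defining inequalities at every step.

```python
import numpy as np

P = np.array([[0.8, 0.2],
              [0.3, 0.7]])

# Safety polyhedron {pi : A pi <= b}: here pi[0] must stay in [0.2, 0.9].
A = np.array([[ 1.0, 0.0],
              [-1.0, 0.0]])
b = np.array([0.9, -0.2])

def stays_safe(pi, horizon=100):
    """Check that pi remains in the polyhedron along its evolution."""
    for t in range(horizon + 1):
        if not np.all(A @ pi <= b + 1e-12):
            return False, t          # first violation time
        pi = pi @ P
    return True, horizon

print(stays_safe(np.array([0.85, 0.15])))   # safe, and stays safe
print(stays_safe(np.array([0.95, 0.05])))   # starts outside the polyhedron
```

The stationary distribution of this chain, (0.6, 0.4), lies in the interior of the polyhedron, matching the standing assumption of the paper.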

15.
This paper presents a novel method for modelling the spatio-temporal movements of tourists at the macro level using Markov chain methodology. Markov chains are used extensively in modelling random phenomena that result in sequences of events linked under the assumption of first-order dependence. In this paper, we utilise Markov chains to analyse the outcome and trend of events associated with spatio-temporal movement patterns. A case study was conducted on Phillip Island, situated in the state of Victoria, Australia, to test whether a stationary discrete absorbing Markov chain could be effectively used to model the spatio-temporal movements of tourists. The results obtained showed that this methodology can indeed provide useful information on tourist movement patterns. One significant outcome of this research is that it will assist park managers in developing better packages for tourists, and will also assist in tracking tourists' movements using simulation based on the model.
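The standard quantities behind such an absorbing-chain model are the fundamental matrix and the absorption statistics derived from it. A minimal sketch with an illustrative "attractions plus exit" chain (the paper's estimated transition probabilities are not reproduced here):

```python
import numpy as np

# Transient attractions 0..2, absorbing 'exit' state 3 (illustrative).
P = np.array([[0.1, 0.4, 0.3, 0.2],
              [0.3, 0.1, 0.4, 0.2],
              [0.2, 0.3, 0.1, 0.4],
              [0.0, 0.0, 0.0, 1.0]])

Q = P[:3, :3]                       # transient-to-transient block
N = np.linalg.inv(np.eye(3) - Q)    # fundamental matrix N = (I - Q)^-1

# N[i, j]: expected number of visits to attraction j starting from i;
# row sums: expected number of moves before the tourist exits.
print("expected visits:\n", N)
print("expected trip length from each start:", N.sum(axis=1))
```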

16.
In this paper, pattern recognition algorithms for controlled Markov chains under different control actions are presented. It is shown how to use these algorithms in sequential diagnosis when classifying sequences of medical tests of patients under treatment.

17.
Recent developments in the analysis of large Markov models facilitate the fast approximation of transient characteristics of the underlying stochastic process. Fluid analysis makes it possible to consider previously intractable models whose underlying discrete state space grows exponentially as model components are added. In this work, we show how fluid-approximation techniques may be used to extract passage-time measures from performance models. We focus on two types of passage measure: passage times involving individual components, and passage times which capture the time taken for a population of components to evolve. Specifically, we show that for models of sufficient scale, global passage-time distributions can be well approximated by a deterministic fluid-derived passage-time measure. Where models are not of sufficient scale, we are able to generate upper and lower approximations for the entire cumulative distribution function of these passage-time random variables, using moment-based techniques. Additionally, we show that, for passage-time measures involving individual components, the cumulative distribution function can be directly approximated by fluid techniques. Finally, using the GPA tool, we take advantage of the rapid fluid computation of passage times to show how a multi-class client-server system can be optimised to satisfy multiple service level agreements.
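A hedged sketch of the fluid idea on a toy client-server population model (the rates, population sizes, and structure are assumptions, not the paper's models): replace the population CTMC by its mean-field ODEs and read off a deterministic global passage time as the first time the served population crosses a target level.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy fluid model: n_servers servers drain a pool of waiting clients.
r_s, n_servers = 1.0, 50.0

def drift(t, x):
    waiting, served = x
    rate = r_s * min(waiting, n_servers)   # service capacity in use
    return [-rate, rate]

# 200 waiting clients; global passage: time for 150 of them to be served.
sol = solve_ivp(drift, (0.0, 50.0), [200.0, 0.0], dense_output=True,
                max_step=0.1)
ts = np.linspace(0.0, 50.0, 5001)
served = sol.sol(ts)[1]
passage = ts[np.argmax(served >= 150.0)]
print(f"fluid passage time to 150 served ≈ {passage:.2f}")   # about 3.0
```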

18.
Markov chains are widely used in the context of the performance and reliability modeling of various systems. Model checking of such chains with respect to a given (branching) temporal logic formula has been proposed for both discrete [34, 10] and continuous time settings [7, 12]. In this paper, we describe a prototype model checker for discrete- and continuous-time Markov chains, the Erlangen–Twente Markov Chain Checker E⊢MC2, in which properties are expressed in appropriate extensions of CTL. We illustrate the general benefits of this approach and discuss the structure of the tool. Furthermore, we report on successful applications of the tool to some examples, highlighting lessons learned during the development and application of E⊢MC2.

19.
Matrix exponential distributions and rational arrival processes have been proposed as an extension to pure Markov models. The paper presents an approach where these process types are used to describe the timing behavior in quantitative models like queueing networks, stochastic Petri nets or stochastic automata networks. The resulting stochastic process, which is called a rational process, is defined, and it is shown that the matrix governing the behavior of the process has a structured representation which allows one to represent it in a very compact form.
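One concrete instance of such a structured representation is the Kronecker form used for composed Markovian models, where the matrix of two independent components is the Kronecker sum of the component matrices and is never stored explicitly. The sketch below is an assumption-laden illustration of that general idea, not the paper's rational-process construction; the component generators are made up.

```python
import numpy as np

# Generators of two independent components (rows sum to 0).
Q1 = np.array([[-1.0,  1.0],
               [ 2.0, -2.0]])
Q2 = np.array([[-0.5,  0.5],
               [ 0.3, -0.3]])

# Explicit composed generator: the Kronecker sum Q1 (+) Q2.
Q = np.kron(Q1, np.eye(2)) + np.kron(np.eye(2), Q2)

def apply_kron_sum(pi, Q1, Q2):
    """Compute pi @ (Q1 (+) Q2) without ever forming the big matrix."""
    n1, n2 = Q1.shape[0], Q2.shape[0]
    X = pi.reshape(n1, n2)          # row-major layout matches np.kron
    return (Q1.T @ X + X @ Q2).reshape(-1)

pi = np.array([0.4, 0.1, 0.3, 0.2])
print(np.allclose(pi @ Q, apply_kron_sum(pi, Q1, Q2)))   # True
```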

20.
We propose a new approximate numerical algorithm for the steady-state solution of general structured ergodic Markov models. The approximation uses a state-space encoding based on multiway decision diagrams and a transition-rate encoding based on a new class of edge-valued decision diagrams. The new method retains the favorable properties of a previously proposed Kronecker-based approximation, while eliminating the need for a Kronecker-consistent model decomposition. Removing this restriction allows for greater utilization of event locality, which facilitates the generation of both the state space and the transition rate matrix, thus extending the applicability of the algorithm to larger and more complex models.
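Separated from the decision-diagram encodings that make the paper's method scale, the numerical core of such steady-state solvers is an iteration like the power method on the uniformized chain. A hedged sketch on a small explicit generator (the matrix is illustrative; the paper never stores it explicitly):

```python
import numpy as np

# Small explicit CTMC generator (the paper encodes these symbolically).
Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -3.0,  2.0],
              [ 0.5,  0.5, -1.0]])

# Uniformize: P = I + Q/Lambda with Lambda exceeding the largest exit rate.
Lam = 1.1 * np.max(-np.diag(Q))
P = np.eye(3) + Q / Lam

pi = np.full(3, 1.0 / 3.0)          # start from the uniform distribution
for _ in range(10_000):
    nxt = pi @ P
    if np.max(np.abs(nxt - pi)) < 1e-12:
        break
    pi = nxt
print("steady-state distribution:", pi)
print("check pi @ Q ≈ 0:", pi @ Q)
```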
