Similar Literature
20 similar documents found (search time: 250 ms)
1.
This paper discusses trace semantics of interactive Markov chains (IMCs), a generalisation of continuous-time Markov chains. In IMCs, besides probabilistic alternatives, there may be a nondeterministic choice between several action-labelled transitions in a state. We analyse several variants of trace equivalences that arise from the different ways of resolving nondeterministic branching. Button-pushing testing scenarios are used to motivate each abstraction level induced by the trace semantics, and the associated equivalences are sorted according to their distinguishing power.

2.
Equivalence relations can be used to reduce the state space of a system model, thereby permitting more efficient analysis. We study backward stochastic bisimulation in the context of model checking continuous-time Markov chains against continuous stochastic logic (CSL) properties. While there are simple CSL properties that are not preserved when reducing the state space of a continuous-time Markov chain using backward stochastic bisimulation, we show that the equivalence can nevertheless be used in the verification of a practically significant class of CSL properties. We consider an extension of these results to Markov reward models and continuous stochastic reward logic. Furthermore, we identify the logical properties for which the requirement of equal state-labeling sets (normally imposed on state equivalences in a model-checking context) can be omitted from the definition of the equivalence, resulting in a better state-space reduction.

3.
In this paper, we study the complexity of deciding readiness and failure equivalences for finite-state processes and recursively defined processes specified by normed context-free grammars (CFGs) in Greibach normal form (GNF). The results are as follows: (1) Readiness and failure equivalences for processes specified by normed GNF CFGs are both undecidable. For this class of processes, the regularity problem with respect to failure or readiness equivalence is also undecidable. Moreover, all these undecidability results hold even for locally unary processes. In the unary case, these problems become decidable; in fact, they are Π₂ᵖ-complete. We also show that, with respect to bisimulation equivalence, regularity for processes specified by normed GNF CFGs is NL-complete. (2) Readiness and failure equivalences for finite-state processes are PSPACE-complete. This holds even for locally unary finite-state processes. These two equivalences are co-NP-complete for unary finite-state processes. Further, for acyclic finite-state processes, readiness and failure equivalences are co-NP-complete, and they are NL-complete in the unary case. (3) For finite tree processes, we show that finite trace, readiness, and failure equivalences are all L-complete. Further, these results remain true for the unary case. Our results provide a complete characterization of the computational complexity of deciding readiness and failure equivalences for several important classes of processes.

4.
Several approaches have been proposed to relax behavioral equivalences for fine-grained models including probabilities and time. All of them face two problems behind the notion of approximation: the lack of transitivity and the efficiency of the verification algorithm. While the typical equivalence under approximation is bisimulation, we present a relaxation of Markovian testing equivalence in a process-algebraic framework. In this coarser setting, we show that it is particularly intuitive to manage separately three different dimensions of the approximation (execution time, event probability, and observed behavior), illustrating in each case results concerning the two problems mentioned above. Finally, a unified definition combining the three orthogonal aspects is provided in order to favor trade-off analyses.

5.
Consideration is given to the control of continuous-time linear systems that possess randomly jumping parameters which can be described by finite-state Markov processes. The relationship between appropriately defined controllability and stabilizability properties and the solution of the infinite-time jump linear quadratic (JLQ) optimal control problem is also examined. Although the solution of the continuous-time Markov JLQ problem with finite or infinite time horizons is known, only sufficient conditions for the existence of finite-cost, constant, stabilizing controls for the infinite-time problem appear in the literature. In this paper, necessary and sufficient conditions are established. These conditions are based on new definitions of controllability, observability, stabilizability, and detectability that are appropriate for continuous-time Markovian jump linear systems. These definitions play the same role for the JLQ problem as the deterministic properties do for the linear quadratic regulator (LQR) problem.

6.
A continuous-time Markov decision process (CTMDP) is a generalization of a continuous-time Markov chain in which both probabilistic and nondeterministic choices co-exist. This paper presents an efficient algorithm to compute the maximum (or minimum) probability to reach a set of goal states within a given time bound in a uniform CTMDP, i.e., a CTMDP in which the delay time distribution per state visit is the same for all states. It furthermore proves that these probabilities coincide for (time-abstract) history-dependent and Markovian schedulers that resolve nondeterminism either deterministically or in a randomized way.
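The kind of computation this abstract describes can be sketched as a greedy backward dynamic program over the uniformized DTMDP with Poisson-weighted steps. The encoding below (state names, data structures, truncation scheme) is an illustrative assumption, not the paper's implementation:

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def max_timed_reachability(P, goal, E, t, eps=1e-6):
    """Approximate max probability to reach `goal` within time t in a
    uniform CTMDP with exit rate E per state.
    P: state -> list of action distributions, each {successor: prob}
    (the embedded/uniformized DTMDP). Goal states are treated as absorbing."""
    lam = E * t
    # Truncate the Poisson sum once cumulative mass exceeds 1 - eps.
    K, cum = 0, poisson_pmf(0, lam)
    while cum < 1 - eps:
        K += 1
        cum += poisson_pmf(K, lam)
    # Backward pass: u[s] accumulates Poisson-weighted absorption mass.
    u = {s: 0.0 for s in P}
    for i in range(K, -1, -1):
        psi = poisson_pmf(i, lam)
        u = {s: psi + u[s] if s in goal
             else max(sum(p * u[s2] for s2, p in dist.items())
                      for dist in P[s])
             for s in P}
    return u
```

For a two-state example where the only action leads from `s` straight to the goal, the result reduces to the probability of at least one jump by time t, i.e. 1 - e^(-Et).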

7.
Markov chains are a well-known tool to model temporal properties of many phenomena, from text structure to fluctuations in economics. Because they are easy to generate, Markovian sequences, i.e. temporal sequences having the Markov property, are also used for content-generation applications, such as text or music generation, that imitate a given style. However, Markov sequences are traditionally generated using greedy, left-to-right algorithms. While this approach is computationally cheap, it is fundamentally unsuited for interactive control. This paper addresses the issue of generating steerable Markovian sequences. We target interactive applications such as games, in which users want to control, through simple input devices, the way the system generates a Markovian sequence, such as a text, a musical sequence, or a drawing. To this end, we propose to revisit Markov sequence generation as a branch-and-bound constraint satisfaction problem (CSP). We propose a CSP formulation of the basic Markovian hypothesis as elementary Markov constraints (EMCs). We propose algorithms that achieve domain consistency for the propagators of EMCs in an event-based implementation of CSP. We show how EMCs can be combined to estimate the global Markovian probability of a whole sequence, and how they accommodate different variants of Markov generation, such as fixed-order, variable-order, or smoothed models. Such a formulation, although more costly than traditional greedy generation algorithms, yields the immense advantage of being naturally steerable, since control specifications can be represented by arbitrary additional constraints, without any modification of the generation algorithm. We illustrate our approach on simple yet combinatorial chord sequence and melody generation problems and give some performance results.
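The contrast drawn in this abstract can be illustrated with two toy generators: a greedy left-to-right sampler, and a naive backtracking search that enforces a constraint on the final symbol, the kind of control the CSP formulation supports. The transition table and symbols are invented for illustration; the paper's EMC propagators are far more efficient than this backtracking stand-in:

```python
import random

def greedy_markov(transitions, start, length, seed=0):
    """Greedy left-to-right generation from an order-1 Markov model.
    transitions: symbol -> {next_symbol: probability}."""
    rng = random.Random(seed)
    seq = [start]
    for _ in range(length - 1):
        succs = transitions[seq[-1]]
        seq.append(rng.choices(list(succs), weights=list(succs.values()))[0])
    return seq

def constrained_markov(transitions, start, length, end):
    """Find a sequence of the given length from `start` ending at `end`
    that respects the Markov transition structure; returns None if none
    exists. Greedy generation cannot guarantee such end constraints."""
    def search(seq):
        if len(seq) == length:
            return seq if seq[-1] == end else None
        for nxt in transitions.get(seq[-1], {}):
            res = search(seq + [nxt])
            if res:
                return res
        return None
    return search([start])
```

With a small chord-like alphabet, the constrained search always returns a valid path to the required final symbol, whereas the greedy sampler only follows local transition probabilities.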

8.
We analyze the tracking performance of the least mean square (LMS) algorithm for adaptively estimating a time-varying parameter that evolves according to a finite-state Markov chain. We assume the Markov chain jumps infrequently between the finite states, at the same rate of change as the LMS algorithm. We derive mean-square estimation error bounds for the tracking error of the LMS algorithm using perturbed Lyapunov function methods. Then, combining results on two-time-scale Markov chains with weak convergence methods for stochastic approximation, we derive the limit dynamics satisfied by the continuous-time interpolation of the estimates. Unlike most previous analyses of stochastic approximation algorithms, the limit we obtain is a system of ordinary differential equations with regime switching controlled by a continuous-time Markov chain. Next, to analyze the rate of convergence, we take a continuous-time interpolation of a scaled error sequence and derive its diffusion limit. Somewhat remarkably, for correlated regression vectors we obtain a jump Markov diffusion. Finally, two novel examples of the analysis are given for state estimation of hidden Markov models (HMMs) and adaptive interference suppression in wireless code division multiple access (CDMA) networks.
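A minimal scalar sketch of the setting analysed here: LMS tracking a parameter that is piecewise constant across regimes, as with a Markov-jump parameter. The step size, noise level, and regime schedule are illustrative assumptions, not values from the paper:

```python
import random

def lms_track(true_params, mu=0.05, steps_per_regime=400, seed=1):
    """Scalar LMS tracking a piecewise-constant parameter.
    Observations: y_k = theta_true * x_k + noise.
    Update: theta <- theta + mu * x_k * (y_k - theta * x_k).
    Returns the absolute estimation error at the end of each regime."""
    rng = random.Random(seed)
    theta = 0.0
    errs = []
    for theta_true in true_params:          # one regime per list entry
        for _ in range(steps_per_regime):
            x = rng.gauss(0, 1)             # regression variable
            y = theta_true * x + 0.1 * rng.gauss(0, 1)
            theta += mu * x * (y - theta * x)
        errs.append(abs(theta - theta_true))
    return errs
```

With regimes lasting much longer than the LMS time constant (roughly 1/mu steps here), the estimate re-converges after each jump, which is the "infrequent jumps" regime the abstract assumes.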

9.
On designing of sliding-mode control for stochastic jump systems
In this note, we consider the problems of stochastic stability and sliding-mode control for a class of linear continuous-time systems with stochastic jumps, in which the jumping parameters are modeled as a continuous-time, discrete-state homogeneous Markov process with right-continuous trajectories taking values in a finite set. By using a linear matrix inequality (LMI) approach, sufficient conditions are proposed to guarantee the stochastic stability of the underlying system. Then, a reaching-motion controller is designed such that the resulting closed-loop system can be driven onto the desired sliding surface in finite time. It is shown that the sliding-mode control problem for Markovian jump systems is solvable if a set of coupled LMIs has solutions. A numerical example is given to show the potential of the proposed techniques.

10.
The trace equivalence of BPP was shown to be undecidable by Hirshfeld. We show that all the preorders and equivalences except bisimulation in van Glabbeek's linear time-branching time spectrum are undecidable for BPP. The results are obtained by extending Hirshfeld's encoding of Minsky machines into BPP. We also show that these preorders and equivalences are undecidable even for a restriction of BPP to 2 labels.

11.
In this paper, robust H∞ control for a class of uncertain stochastic Markovian jump systems (SMJSs) with interval and distributed time-varying delays is investigated. The jumping parameters are modelled as a continuous-time, finite-state Markov chain. By employing a Lyapunov-Krasovskii functional and stochastic analysis theory, some novel sufficient conditions in terms of linear matrix inequalities are derived to guarantee the mean-square asymptotic stability of the equilibrium point. Numerical simulations are given to demonstrate the effectiveness and superiority of the proposed method compared with some existing results.

12.
Summary. In this paper we extend the theory of processes with durational actions proposed in [1,2] to describe and reason about the performance of systems. We associate basic actions with lower and upper time bounds that specify their possible different durations. Depending on how the lower and upper time bounds are fixed, eager actions (those which happen as soon as they can), lazy actions (those which can wait arbitrarily long before firing), as well as patient actions (those which can be delayed for a while) can be modelled. Processes are equipped with a (soft) operational semantics which is consistent with the original one and is well-timed (observation traces are ordered with respect to time). The bisimulation-based equivalence defined on top of the new operational semantics, timed equivalence, turns out to be a congruence and, within the lazy fragment of the algebra, refines untimed equivalences. Decidability and automatic checking of timed equivalence are also obtained by resorting to a finite alternative characterization which is amenable to automatic treatment using standard algorithms. The relationships with other timed calculi and equivalences proposed in the literature are also established. Received: 22 May 1998 / 8 November 2000

13.
Markov chains are widely used in the context of the performance and reliability modeling of various systems. Model checking of such chains with respect to a given (branching) temporal logic formula has been proposed for both discrete-time [34, 10] and continuous-time [7, 12] settings. In this paper, we describe a prototype model checker for discrete- and continuous-time Markov chains, the Erlangen-Twente Markov Chain Checker E⊢MC², where properties are expressed in appropriate extensions of CTL. We illustrate the general benefits of this approach and discuss the structure of the tool. Furthermore, we report on successful applications of the tool to some examples, highlighting lessons learned during the development and application of E⊢MC². Published online: 19 November 2002 Correspondence to: Holger Hermanns

14.
This paper is concerned with the stability analysis and stabilization of networked discrete-time and sampled-data linear systems with random packet losses. Asymptotic stability, mean-square stability, and stochastic stability are considered. For networked discrete-time linear systems, the packet-loss period is assumed to be a finite-state Markov chain. We establish that the mean-square stability of a related discrete-time system which evolves in random time implies the mean-square stability of the system in deterministic time, by using the equivalence of stability properties of Markovian jump linear systems in random time. We also establish the equivalence of asymptotic stability for the systems in deterministic discrete time and in random time. For networked sampled-data systems, a binary Markov chain is used to characterize the packet-loss phenomenon of the network. In this case, the packet-loss period between two transmission instants is driven by an independent and identically distributed sequence taking arbitrary positive values. Two approaches, namely the Markov jump linear system approach and the randomly sampled system approach, are introduced. Based on the stability results derived, we present methods for the stabilization of networked sampled-data systems in terms of matrix inequalities. Numerical examples are given to illustrate the design methods of the stabilizing controllers.
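The mean-square stability of a discrete-time Markov jump linear system x_{k+1} = A_{r_k} x_k, the kind of property invoked above, can be checked numerically with the classical spectral-radius test for MJLS (a standard result due to Costa and Fragoso, not taken from this paper). A sketch, assuming a transition matrix Pi with Pi[i, j] = P(r_{k+1} = j | r_k = i):

```python
import numpy as np

def mjls_mss(A_list, Pi):
    """Mean-square stability test for x_{k+1} = A_{r_k} x_k, where r_k is
    a Markov chain with transition matrix Pi. Builds the second-moment
    operator with blocks T[j, i] = Pi[i, j] * kron(A_i, A_i); the system
    is mean-square stable iff its spectral radius is below 1."""
    N = len(A_list)
    n2 = A_list[0].shape[0] ** 2
    T = np.zeros((N * n2, N * n2))
    for i in range(N):
        Ki = np.kron(A_list[i], A_list[i])
        for j in range(N):
            T[j * n2:(j + 1) * n2, i * n2:(i + 1) * n2] = Pi[i, j] * Ki
    return max(abs(np.linalg.eigvals(T))) < 1
```

Note that an individual mode may be non-contractive (e.g. a scalar mode with |a| = 1) while the switched system remains mean-square stable, provided the chain spends enough time in the stabilizing mode.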

15.
This paper is devoted to investigating delay-dependent robust exponential stability for a class of Markovian jump impulsive stochastic reaction-diffusion Cohen-Grossberg neural networks (IRDCGNNs) with mixed time delays and uncertainties. The jumping parameters, determined by a continuous-time, discrete-state Markov chain, are assumed to be norm bounded. The delays are assumed to be time-varying and to belong to a given interval, which means that the lower and upper bounds of the interval time-varying delays are available. By constructing a Lyapunov-Krasovskii functional and using the Poincaré inequality and mathematical induction, several novel sufficient criteria ensuring the delay-dependent exponential stability of IRDCGNNs with Markovian jumping parameters are established. Our results include reaction-diffusion effects. Finally, a numerical example is provided to show the efficiency of the proposed results.

16.
Defining an operational semantics for a process algebra is often based either on labeled transition systems that account for interaction with a context or on so-called reduction semantics: we assume to have a representation of the whole system and we compute unlabeled reduction transitions (leading to a distribution over states in the probabilistic case). In this paper we consider mixed models with states where the system is still open (towards interaction with a context) and states where the system is already closed. The idea is that (open) parts of a system "P" can be closed via an operator "PG" that turns already synchronized actions whose "handle" is specified inside "G" into prioritized reduction transitions (and, therefore, states performing them into closed states). We show that we can use the operator "PG" to express multi-level priorities and external probabilistic choices (by assigning weights to handles inside G), and that, by considering reduction transitions as the only unobservable τ transitions, the proposed technique is compatible, for process algebras with general recursion, with both standard (probabilistic) observational congruence and a notion of equivalence which aggregates reduction transitions in a (much more aggregating) trace-based manner. We also observe that the trace-based aggregated transition system can be obtained directly in operational semantics, and we present this "aggregating" semantics. Finally, we discuss how the open/closed approach can be used to also express discrete and continuous (exponential probabilistic) time, and we show that, in such timed contexts, the trace-based equivalence can aggregate more than traditional lumping-based equivalences over Markov chains.

17.
The use of process algebras for the performance modeling and evaluation of concurrent systems has turned out to be convenient due to their compositionality. A particularly simple and elegant solution in this field is the calculus of Interactive Markov Chains (IMCs), where the behavior of processes is represented by continuous-time Markov chains extended with action transitions representing process interaction. The main advantage of IMCs with respect to other existing approaches is that a notion of bisimulation which abstracts from τ-transitions ("complete" interactions) can be defined which is a congruence. However, in the original definition of the calculus of IMCs, the high potential for compositionally minimizing the system state space, given by the use of a "weak" notion of equivalence, and the elegance of the approach are somewhat limited by the fact that the equivalence adopted over action transitions is a finer variant of Milner's observational congruence that distinguishes τ-divergent "Zeno" processes from non-divergent ones. In this paper we show that it is possible to reformulate the calculus of IMCs in such a way that we can rely on simple standard observational congruence. Moreover, we show that the new calculus is the first Markovian process algebra allowing for a new notion of Markovian bisimulation equivalence which is coarser than the standard one.

18.
We establish the existence of optimal scheduling strategies for time-bounded reachability in continuous-time Markov decision processes, and of co-optimal strategies for continuous-time Markov games. Furthermore, we show that optimal control does not only exist, but has a surprisingly simple structure: the optimal schedulers from our proofs are deterministic and timed positional, and the bounded time can be divided into a finite number of intervals, in which the optimal strategies are positional. That is, we demonstrate the existence of finite optimal control. Finally, we show that these pleasant properties of Markov decision processes extend to the more general class of continuous-time Markov games, and that both early and late schedulers show this behaviour.

19.
Markov chains are extensively used in modeling different aspects of engineering and scientific systems, such as the performance of algorithms and the reliability of systems. Different techniques have been developed for analyzing Markovian models, for example, Markov Chain Monte Carlo based simulation, Markov analyzers, and, more recently, probabilistic model checking. However, these techniques either do not guarantee accurate analysis or are not scalable. Higher-order-logic theorem proving is a formal method that can overcome the above-mentioned limitations; however, it is not yet mature enough to handle all sorts of Markovian models. In this paper, we propose a formalization of discrete-time Markov chains (DTMCs) that facilitates formal reasoning about time-homogeneous finite-state discrete-time Markov chains. In particular, we provide formal verification of some of their important properties, such as joint probabilities, the Chapman-Kolmogorov equation, and the reversibility property, using higher-order logic. To demonstrate the usefulness of our work, we analyze two applications: a simplified binary communication channel and the Automatic Mail Quality Measurement protocol.
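The properties verified formally in the paper can be checked numerically for a concrete chain. A small sketch of the Chapman-Kolmogorov equation and a joint path probability; the matrix, initial distribution, and path are invented for illustration:

```python
import numpy as np

# A two-state time-homogeneous DTMC (rows sum to 1).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Chapman-Kolmogorov: the (m+n)-step transition matrix factors as
# P^(m+n) = P^m @ P^n.
m, n = 2, 3
lhs = np.linalg.matrix_power(P, m + n)
rhs = np.linalg.matrix_power(P, m) @ np.linalg.matrix_power(P, n)
assert np.allclose(lhs, rhs)

# Joint probability of a path s0, s1, s2 under initial distribution pi:
# Pr(path) = pi[s0] * P[s0, s1] * P[s1, s2].
pi = np.array([0.5, 0.5])
path = [0, 1, 1]
prob = pi[path[0]] * P[path[0], path[1]] * P[path[1], path[2]]
```

The theorem-proving approach establishes these identities once and for all in higher-order logic, whereas a numeric check like this only confirms them for one chain and one pair of step counts.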

20.
Motivated by various control applications, this paper develops stability analysis of discrete-time systems with regime switching, in which the dynamics are modulated by Markov chains with two time scales. Using the high contrast among the transition rates of the Markovian states, singularly perturbed Markov chains are used in the formulation. Taking into consideration the regime changes and time-scale separation makes the stability analysis a difficult task. In this work, asymptotic stability analysis is carried out using perturbed Lyapunov function techniques. It is demonstrated that if the limit system is stable, then the original system is also stable. In addition, we examine path excursions, derive bounds on the mean recurrence time, and obtain the associated probability bounds.
