Similar Documents
Found 20 similar documents (search time: 125 ms)
1.
Markov chain usage models support test planning, test automation, and analysis of test results. In practice, transition probabilities for Markov chain usage models are often specified using a cycle of assigning, verifying, and revising specific values for individual transition probabilities. For large systems, such an approach can be difficult for a variety of reasons. We describe an improved approach that represents transition probabilities by explicitly preserving the information concerning test objectives and the relationships between transition probabilities in a format that is easy to maintain and easy to analyze. Using mathematical programming, transition probabilities are automatically generated to satisfy test management objectives and constraints. A more mathematical treatment of this approach is given in References [1] (Poore JH, Walton GH, Whittaker JA. A constraint-based approach to the representation of software usage models. Information and Software Technology 2000; in press) and [2] (Walton GH. Generating transition probabilities for Markov chain usage models. PhD Thesis, University of Tennessee, Knoxville, TN, May 1995). In contrast, this paper is targeted at the software engineering practitioner, software development manager, and test manager. This paper also adds to the published literature on Markov chain usage modeling and model-based testing by describing and illustrating an iterative process for usage model development and optimization and by providing some recommendations for embedding model-based testing activities within an incremental development process. Copyright © 2000 John Wiley & Sons, Ltd.
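As a concrete illustration of the constraint-based idea, the following minimal sketch (not the authors' tool; the four-state usage model, the management constraint, and the objective are invented for illustration) generates transition probabilities for a toy usage model with an off-the-shelf linear-programming solver:

from scipy.optimize import linprog

# Hypothetical usage model: Invoke -> {Browse, Search}, Browse -> {Search, Exit},
# Search -> {Browse, Exit}.  Decision variables: x = [p_IB, p_IS, p_BS, p_BE, p_SB, p_SE].
c = [0, 0, 0, -1, 0, -1]           # objective: maximize p_BE + p_SE (shorter test cases)
A_eq = [[1, 1, 0, 0, 0, 0],        # outgoing probabilities from Invoke sum to 1
        [0, 0, 1, 1, 0, 0],        # ... from Browse
        [0, 0, 0, 0, 1, 1]]        # ... from Search
b_eq = [1, 1, 1]
A_ub = [[-1, 2, 0, 0, 0, 0]]       # test-management constraint: p_IB >= 2 * p_IS
b_ub = [0]
bounds = [(0.05, 0.95)] * 6        # keep every arc likely enough to be exercised

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(dict(zip(["p_IB", "p_IS", "p_BS", "p_BE", "p_SB", "p_SE"], res.x.round(3))))

Revising a test objective then amounts to editing a constraint and re-solving, rather than hand-tuning individual probabilities.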

2.
Learning automata (LA) were recently shown to be valuable tools for designing multiagent reinforcement learning algorithms. One of the principal contributions of LA theory is that a set of decentralized, independent LA is able to control a finite Markov chain with unknown transition probabilities and rewards. In this paper, we propose to extend this algorithm to Markov games, a straightforward extension of single-agent Markov decision problems to distributed multiagent decision problems. We show that under the same ergodic assumptions of the original theorem, the extended algorithm will converge to a pure equilibrium point between agent policies.
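A hedged sketch of the linear reward-inaction (L_R-I) update that this line of work builds on, for a single automaton with two actions in an environment with assumed reward probabilities (the multiagent Markov-game construction itself is not reproduced here):

import numpy as np

rng = np.random.default_rng(1)
reward_prob = [0.3, 0.8]      # assumed environment: reward probability per action
p = np.array([0.5, 0.5])      # the automaton's action probabilities
lr = 0.05                     # learning rate

for _ in range(2000):
    a = rng.choice(2, p=p)
    if rng.random() < reward_prob[a]:   # reward-inaction: update only when rewarded
        p = p - lr * p                  # shrink every action probability ...
        p[a] += lr                      # ... and give the freed mass to the chosen action
print(p)                                # mass concentrates on the better action (index 1)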

3.
The optimization problems of Markov control processes (MCPs) with exact knowledge of the system parameters, in the form of transition probabilities or infinitesimal transition rates, can be solved by using the concept of the Markov performance potential, which plays an important role in the sensitivity analysis of MCPs. In this paper, using an equivalent infinitesimal generator, we first introduce a definition of discounted Poisson equations for semi-Markov control processes (SMCPs), similar to that for MCPs, and define the performance potentials of SMCPs as the solution of these equations. Related optimization techniques based on performance potentials for MCPs can be extended to the optimization of SMCPs if the system parameters are known with certainty. Unfortunately, exact values of the sojourn-time distributions at some states or of the transition probabilities of the embedded Markov chain for a large-scale SMCP are generally difficult or impossible to obtain, which leads to uncertainty in the semi-Markov kernel and thereby in the equivalent infinitesimal transition rates. Similar to the optimization of uncertain MCPs, a potential-based policy iteration method is proposed in this work to search for the optimal robust control policy for SMCPs with uncertain infinitesimal transition rates represented as compact sets. In addition, convergence of the algorithm is discussed.
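As background, a minimal numerical sketch of performance potentials for the plain discrete-time case with fully known parameters (the two-state chain and cost vector are invented; the semi-Markov and uncertain-rate machinery of the paper is not reproduced): the potentials g solve the Poisson equation (I - P) g = f - eta*1, where eta = pi.f is the average performance and pi is the stationary distribution.

import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])                    # known transition probabilities
f = np.array([1.0, 3.0])                      # per-state performance (cost)

# stationary distribution: solve pi P = pi with sum(pi) = 1
A = np.vstack([P.T - np.eye(2), np.ones(2)])
pi = np.linalg.lstsq(A, np.array([0.0, 0.0, 1.0]), rcond=None)[0]
eta = pi @ f                                  # average performance

# potentials, with the additive constant fixed by pi @ g = 0
g = np.linalg.solve(np.eye(2) - P + np.outer(np.ones(2), pi), f - eta)
print(eta, g)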

4.
张德平  徐宝文 《计算机科学》2011,38(12):135-138
Markov chain usage models for statistical testing provide an effective approach to reliability estimation of safety-critical systems. Using importance sampling, and subject to keeping the estimate unbiased, the method minimizes the variance of the reliability estimate: the Ali-Silvey distance is used to measure the difference between two distributions, the transition probabilities between states are adjusted to revise the test profile, and the traversal probabilities of critical operations are increased. An iterative algorithm for generating the optimal test profile for software reliability estimation is given. Simulation results show that the method markedly reduces the estimation variance and, while improving estimation accuracy, effectively accelerates statistical testing.
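The importance-sampling idea itself can be sketched in a few lines (a toy chain with made-up failure probabilities; the paper's Ali-Silvey-optimized profile is not reproduced): test cases are drawn under an adjusted profile Q and re-weighted by the likelihood ratio against the original profile P, which keeps the reliability estimate unbiased while exercising the critical state more often.

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 3-state usage chain: 0 = invoke, 1 = critical operation, 2 = terminate.
P = np.array([[0.0, 0.1, 0.9],        # original usage profile
              [0.0, 0.2, 0.8],
              [0.0, 0.0, 1.0]])
Q = np.array([[0.0, 0.6, 0.4],        # adjusted test profile: visits state 1 more often
              [0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0]])
p_fail = np.array([0.0, 0.02, 0.0])   # assumed failure probability per executed state

def run_case():
    """Simulate one test case under Q; return (failed, likelihood ratio of the path)."""
    s, w, failed = 0, 1.0, False
    while s != 2:
        nxt = rng.choice(3, p=Q[s])
        w *= P[s, nxt] / Q[s, nxt]            # importance weight keeps the estimate unbiased
        failed = failed or (rng.random() < p_fail[nxt])
        s = nxt
    return failed, w

samples = [run_case() for _ in range(20000)]
print(np.mean([f * w for f, w in samples]))   # failure probability under the original profile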

5.
Common sense sometimes predicts events to be likely or unlikely rather than merely possible. We extend methods of qualitative reasoning to predict the relative likelihoods of possible qualitative behaviors by viewing the dynamics of a system as a Markov chain over its transition graph. This involves adding qualitative or quantitative estimates of transition probabilities to each of the transitions and applying the standard theory of Markov chains to distinguish persistent states from transient states and to calculate recurrence times, settling times, and probabilities for ending up in each state. Much of the analysis depends solely on qualitative estimates of transition probabilities, which follow directly from theoretical considerations and which lead to qualitative predictions about entire classes of systems. Quantitative estimates for specific systems are derived empirically and lead to qualitative and quantitative conclusions, most of which are insensitive to small perturbations in the estimated transition probabilities. The algorithms are straightforward and efficient.
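A minimal sketch of the persistent/transient classification step, on an invented four-state chain where only reachability matters (recurrence and settling times are not computed here):

import numpy as np

P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.2, 0.3, 0.5, 0.0],
              [0.0, 0.0, 0.4, 0.6],
              [0.0, 0.0, 0.7, 0.3]])

# reach[i, j] is True when state j is reachable from state i
reach = np.linalg.matrix_power((P > 0).astype(int) + np.eye(4, dtype=int), 4) > 0
# a state is persistent (recurrent) when every state it can reach can reach it back
persistent = [i for i in range(4)
              if all(reach[j, i] for j in range(4) if reach[i, j])]
print("persistent:", persistent,
      "transient:", [i for i in range(4) if i not in persistent])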

6.
Constrained Markov decision problems (CMDPs) with the average cost criterion and a single ergodic chain, or with the discounted cost criterion and a general multichain structure, are considered. Conditions for stability of the optimal value and control with respect to changes in the parameters of the problem, such as immediate costs, transition probabilities, and the discount factor, are established. Singular constrained problems, for which the optimal value and controls exhibit discontinuities, are also studied.

7.
Accelerating statistical software testing with importance sampling  (Cited by: 2; self-citations: 0; by others: 2)
This paper proposes an importance-sampling method for accelerating statistical software testing. By adjusting the transition probabilities of the software's Markov chain usage model, while still obtaining an unbiased reliability estimate from the statistical test results, the method effectively improves the testing efficiency of safety-critical software and partly alleviates the excessive time and cost of statistical testing for such software. A simulated annealing algorithm for computing the optimized transition probabilities is also given. Simulation results show that the method effectively improves the efficiency of statistical testing of safety-critical software.
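The simulated-annealing component can be sketched only generically (placeholder names; the variance estimate and neighbourhood move used in the paper are not reproduced): variance(q) is assumed to estimate the variance of the reliability estimator for a candidate transition-probability assignment q, and neighbour(q) to return a perturbed, renormalized candidate.

import math
import random

def anneal(q0, variance, neighbour, T0=1.0, alpha=0.95, steps=500):
    """Generic simulated-annealing skeleton: minimize variance(q) over candidates."""
    q, best, T = q0, q0, T0
    for _ in range(steps):
        cand = neighbour(q)
        delta = variance(cand) - variance(q)
        if delta < 0 or random.random() < math.exp(-delta / T):
            q = cand                          # accept downhill, uphill with prob exp(-delta/T)
            if variance(q) < variance(best):
                best = q
        T *= alpha                            # geometric cooling schedule
    return best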

8.
Stabilization of stochastic Markov jump systems with partially unknown transition probabilities  (Cited by: 1; self-citations: 0; by others: 1)
盛立  高明 《控制与决策》2011,26(11):1716-1720
The stability and stabilization of a class of stochastic Markov jump systems are studied. The transition probabilities of the jump process are partially unknown, covering both the completely known and the completely unknown cases, and the setting is therefore more general. First, a sufficient criterion guaranteeing mean-square asymptotic stability of the system is given and a corresponding state-feedback stabilizing controller is designed. Then, based on the singular value decomposition of a matrix, a design method for a static output-feedback stabilizing controller is given and reduced to the feasibility of a set of linear matrix inequalities (LMIs). Finally, numerical simulations verify the correctness of the results.
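A hedged numerical sketch for the fully known transition-probability case only (invented system matrices; the partially unknown case and the LMI controller synthesis are not reproduced): a discrete-time Markov jump linear system x_{k+1} = A(r_k) x_k is mean-square stable exactly when the linear map that propagates its second moments has spectral radius below one.

import numpy as np

A = [np.array([[0.8, 0.1], [0.0, 0.7]]),
     np.array([[0.5, 0.4], [0.1, 0.6]])]    # mode dynamics (assumed)
Pr = np.array([[0.7, 0.3],
               [0.4, 0.6]])                 # p_ij = Prob(r_{k+1} = j | r_k = i), fully known

n, N = 2, 2
M = np.zeros((N * n * n, N * n * n))
for i in range(N):
    for j in range(N):
        # second-moment recursion: vec(Q_j(k+1)) += p_ij * (A_i kron A_i) vec(Q_i(k))
        M[j*n*n:(j+1)*n*n, i*n*n:(i+1)*n*n] = Pr[i, j] * np.kron(A[i], A[i])

print(max(abs(np.linalg.eigvals(M))) < 1)   # True -> mean-square stable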

9.
A single-channel retrial queuing system with an input flow of demands of different types is considered. The sojourn time in the orbit of a demand of any type is exponentially distributed. An embedded Markov chain is set up. Explicit and approximate formulas for the transition probabilities of the chain are derived and are used to determine the steady-state probabilities.
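The last step, obtaining steady-state probabilities from the transition probabilities of an embedded chain, is generic and can be sketched on a toy three-state matrix (not the paper's retrial-queue chain): the steady-state vector is the normalized left eigenvector of the transition matrix for eigenvalue 1.

import numpy as np

P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2],
              [0.3, 0.3, 0.4]])              # toy embedded-chain transition probabilities

w, v = np.linalg.eig(P.T)                    # left eigenvectors of P
pi = np.real(v[:, np.argmin(np.abs(w - 1))]) # eigenvector for eigenvalue closest to 1
pi /= pi.sum()                               # normalize to a probability vector
print(pi)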

10.
Global convergence analysis of particle swarm optimization via Markov chains  (Cited by: 6; self-citations: 0; by others: 6)
This paper analyzes the global convergence of the particle swarm optimization algorithm. One-step transition probabilities for particle velocities and positions are derived. Starting from the Markov chain formed by the particle states, a series of properties of this chain is analyzed: the particle state space is shown to be reducible and the chain non-homogeneous, and the particle states are shown to be non-recurrent (transient). Finally, it is shown that the chain does not satisfy the conditions for a stationary process, and hence, from the viewpoint of transition probabilities, that the algorithm is not globally convergent.

11.
Adaptive multiple-model diagnosis based on Monte Carlo methods  (Cited by: 3; self-citations: 0; by others: 3)
In a multiple-model hybrid system, model switching follows a finite-state Markov chain whose transition probabilities are usually assumed to be known. For the case of unknown model transition probabilities, this paper gives an adaptive algorithm for hybrid-system state estimation based on Monte Carlo particle filtering. The algorithm assumes a Dirichlet prior for the unknown transition probabilities: a set of random samples of model sequences is first obtained by sampling, the transition counts in these samples are used to compute prior transition probabilities, and after updating and selecting the samples with the measurement information, an iterative posterior estimate of the model transition probabilities is obtained, while the particle filter simultaneously yields posterior estimates of the system state and the model probabilities. The method is applied to condition monitoring and fault diagnosis of hybrid systems, and simulation results demonstrate its effectiveness.
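The Dirichlet-prior ingredient can be shown in isolation (a toy two-model sequence; the particle filter and the measurement-based sample selection are not reproduced): transition counts extracted from a sampled model sequence are added to the prior pseudo-counts, and the posterior mean gives the updated transition-probability estimate.

import numpy as np

alpha = np.ones((2, 2))            # Dirichlet prior pseudo-counts for 2 models
seq = [0, 0, 1, 1, 1, 0, 1, 1]     # one sampled model sequence (hypothetical)

counts = np.zeros((2, 2))
for a, b in zip(seq[:-1], seq[1:]):
    counts[a, b] += 1              # observed model-to-model transitions

post = alpha + counts              # Dirichlet posterior pseudo-counts, row by row
P_hat = post / post.sum(axis=1, keepdims=True)   # posterior-mean transition matrix
print(P_hat)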

12.
Stochastic image processing tools have been widely used in digital image processing to improve the quality of images. The Markov process is one of the best-known mathematical modelling tools in stochastic theory. In this study, a Markov chain model has been developed and applied to image denoising. The transition probabilities were obtained from the Fokker–Planck diffusion equation. According to the results, the proposed Markov chain model delivers very good peak signal-to-noise ratio values along with low computational cost.

13.
This article proposes a methodology for designing a partially mode-delay-dependent H∞ controller for discrete-time systems with random communication delays. Communication delays between sensors and the controller are modelled by a finite-state Markov chain whose transition probability matrix is partially known. Stability criteria are obtained based on a Lyapunov–Krasovskii functional, and a novel methodology for designing a partially mode-delay-dependent state feedback controller is proposed. The controller is obtained by solving linear matrix inequality optimisation problems using the cone complementarity linearisation algorithm. A numerical example is provided to illustrate the effectiveness of the proposed controller.

14.
Software usage models are the basis for statistical testing. They derive their structure from specifications and their probabilities from evolving knowledge about the intended use of the software product. The evolving knowledge comes from developers, customers and testers of the software system in the form of relationships that should hold among the parameters of a model. When software usage models are encoded as Markov chains, their structure can be represented by a system of linear constraints, and many of the evolving relationships among model parameters can be represented by convex constraints. Given a Markov chain usage model as a system of convex constraints, mathematical programming can be used to generate the Markov chain transition probabilities that represent a specific software usage model.

15.
Discrete-event systems modeled as continuous-time Markov processes and characterized by some integer-valued parameter are considered. The problem addressed is that of estimating performance sensitivities with respect to this parameter by directly observing a single sample path of the system. The approach is based on transforming the nominal Markov chain into a reduced augmented chain, the stationary-state probabilities of which can be easily combined to obtain stationary-state probability sensitivities with respect to the given parameter. Under certain conditions, the reduced augmented chain state transitions are observable with respect to the state transitions of the system itself, and no knowledge of the nominal Markov chain's state transition rates is required. Applications to some queueing systems are included. The approach incorporates estimation of unknown transition rates when needed and is extended to real-valued parameters.

16.
We derive expressions for computing the parameters of the resulting discrete channel formed by frequency hopping between an arbitrary number of original channels defined by simple Markov chains, for any hopping slot lengths. In obtaining the expressions, we aggregate the graph that describes the hopping process. The expressions define the transition probabilities of the aggregated graph, a Markov chain reduced to two states.

17.
This short paper is concerned with the Bayesian estimation problem for a linear system with an interrupted observation mechanism expressed in terms of a stationary two-state Markov chain with unknown transition probabilities. An approximate minimum-variance adaptive estimator algorithm is derived, coupled with estimation of the unknown transition probabilities.

18.
Experiments in text recognition with the modified Viterbi algorithm  (Cited by: 1; self-citations: 0; by others: 1)
In this paper a modification of the Viterbi algorithm is formally described and a measure of its complexity is derived. The modified algorithm uses a heuristic to limit the search through a directed graph or trellis. The effectiveness of the algorithm is investigated via exhaustive experimentation on an input of machine-printed text. The algorithm assumes language to be a Markov chain and uses transition probabilities between characters. The results empirically answer the long-standing question of what is the benefit, if any, of using transition probabilities that depend on the length of a word and on the position of a character within it.
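A minimal sketch of the general idea, a Viterbi search whose breadth is limited by a heuristic beam, on an invented three-letter alphabet with a uniform prior over the first letter (the paper's specific heuristic and its length- and position-dependent probabilities are not reproduced):

import numpy as np

A = np.array([[0.1, 0.6, 0.3],      # P(next letter | current letter)
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])
obs = np.array([[0.7, 0.2, 0.1],    # P(observation | letter) at each text position,
                [0.2, 0.5, 0.3],    # e.g. from a character classifier
                [0.1, 0.3, 0.6]])
BEAM = 2                            # heuristic: keep only the BEAM best partial paths

beams = sorted(((np.log(obs[0, s]), [s]) for s in range(3)), reverse=True)[:BEAM]
for t in range(1, len(obs)):
    cand = [(score + np.log(A[path[-1], s]) + np.log(obs[t, s]), path + [s])
            for score, path in beams for s in range(3)]
    beams = sorted(cand, reverse=True)[:BEAM]
print(beams[0][1])                  # most likely letter sequence found by the beam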

19.
Semi-supervised learning of promoter sequences based on EM  (Cited by: 1; self-citations: 0; by others: 1)
Promoter prediction is important for locating genes. Many prediction algorithms exist, involving strategies such as signal search, content search, and CpG-island search. Markov-model-based promoter classification has also been studied, with the transition probabilities obtained directly from counts over labelled training sequences. This work introduces semi-supervised learning into promoter sequence analysis and derives maximum-likelihood estimation formulas for the transition probabilities and other parameters. In the experiments, the gene sequence fragments to be tested are mixed with the labelled training samples, and the resulting parameter values are used to classify the fragments; good promoter recognition results are obtained with only a small amount of labelled data.
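A hedged toy sketch of the semi-supervised idea (invented 8-base sequences, a fixed class prior, and plain first-order Markov models; not the paper's derivation): labelled sequences give initial transition counts for each class, and EM adds soft counts from the unlabelled sequences.

import numpy as np

IDX = {c: i for i, c in enumerate("ACGT")}

def counts(seq, w=1.0):
    """Weighted first-order transition counts for one sequence."""
    c = np.zeros((4, 4))
    for a, b in zip(seq[:-1], seq[1:]):
        c[IDX[a], IDX[b]] += w
    return c

def loglik(seq, P):
    return sum(np.log(P[IDX[a], IDX[b]]) for a, b in zip(seq[:-1], seq[1:]))

labelled = {1: ["ATATAAGC", "TTATAATG"],      # toy "promoter" examples
            0: ["GGCGCGCC", "CCGGCGGC"]}      # toy "non-promoter" examples
unlabelled = ["TAATATGC", "GCCGCGGG"]
prior = {0: 0.5, 1: 0.5}                      # class prior held fixed for simplicity

# initialise transition probabilities from the labelled data (add-one smoothing)
C = {k: sum(counts(s) for s in v) + 1.0 for k, v in labelled.items()}
P = {k: c / c.sum(axis=1, keepdims=True) for k, c in C.items()}

for _ in range(10):                           # EM iterations
    C = {k: sum(counts(s) for s in labelled[k]) + 1.0 for k in P}
    for s in unlabelled:                      # E-step: soft class memberships
        w = {k: prior[k] * np.exp(loglik(s, P[k])) for k in P}
        z = sum(w.values())
        for k in P:
            C[k] += counts(s, w[k] / z)       # add soft transition counts
    P = {k: c / c.sum(axis=1, keepdims=True) for k, c in C.items()}   # M-step

print(P[1].round(2))                          # estimated "promoter" transition probabilities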

20.
张凯  刘京菊 《计算机科学》2021,48(5):294-300
Analyzing network intrusion paths from the attacker's perspective is important for guiding network security defense. To address two problems in existing absorbing-Markov-chain-based analyses, namely incomplete consideration of state-transition situations and unreasonable computation of state-transition probabilities, an intrusion path analysis method based on absorbing Markov chains is proposed. Starting from a generated attack graph, and using the exploitability scores of the vulnerabilities that realize each state transition, the method fully accounts for the possibility that a transition out of a non-absorbing node fails, proposes a new way to compute state-transition probabilities, and maps the attack graph onto an absorbing Markov chain model. The properties of the chain's transition probability matrix are then used to compute a threat ranking of the nodes on intrusion paths and the expected intrusion path length. Experimental results show that the method computes node threat rankings and expected path lengths effectively; comparative analysis shows that its results match the realities of network attack and defense better than existing methods.
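The absorbing-chain machinery itself is compact (a toy three-node attack graph with made-up probabilities; the paper's transition-probability formula based on exploitability scores and its exact ranking are not reproduced): with Q holding the transition probabilities among non-absorbing nodes, the fundamental matrix N = (I - Q)^{-1} gives expected visit counts, from which the expected intrusion path length and a node threat ranking follow.

import numpy as np

Q = np.array([[0.0, 0.6, 0.2],       # transitions among 3 non-absorbing (intermediate) nodes;
              [0.0, 0.0, 0.7],       # the remaining probability mass in each row goes to
              [0.1, 0.0, 0.0]])      # absorbing states (attack goals or failure)

N = np.linalg.inv(np.eye(3) - Q)     # fundamental matrix: expected visits to each node
expected_len = N.sum(axis=1)[0]      # expected path length starting from entry node 0
threat_rank = np.argsort(-N[0])      # nodes ranked by expected visits from the entry node
print(expected_len, threat_rank)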
