Similar Articles (20 results)
1.
We present a new model, derived from the hidden Markov model (HMM), to learn Boolean vector sequences. Our HMM with patterns (HMMP) is a simple, hybrid, and interpretable model that uses Boolean patterns to define the emission probability distributions attached to states. Vectors consistent with a given pattern are equally probable, while inconsistent ones have zero probability of being emitted. We define an efficient learning algorithm for this model, which relies on the maximum likelihood principle and proceeds by iteratively simplifying the structure and updating the parameters of an initial specific HMMP that represents the learning sequences. HMMPs and our learning algorithm are applied to built-in self-test (BIST) for integrated circuits, a key problem in microelectronics. An HMMP is learned from a test sequence set that covers most of the potential faults of the circuit at hand; this HMMP is then used as a test sequence generator. The experiments carried out show that the learned HMMPs achieve very high fault coverage.
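The emission model described above (consistent vectors equally probable, inconsistent ones impossible) can be written down directly. The sketch below assumes a pattern is encoded as a tuple that fixes some coordinates to 0 or 1 and leaves the others free (None); this encoding is an illustrative assumption, not the paper's exact formalism.

```python
from typing import Optional, Sequence

def emission_prob(pattern: Sequence[Optional[int]], vector: Sequence[int]) -> float:
    """Probability of emitting `vector` from a state whose Boolean pattern
    fixes some coordinates (0 or 1) and leaves the rest free (None)."""
    assert len(pattern) == len(vector)
    free = 0
    for p, v in zip(pattern, vector):
        if p is None:
            free += 1                  # unconstrained coordinate
        elif p != v:
            return 0.0                 # inconsistent vectors are never emitted
    return 1.0 / (2 ** free)           # uniform over the 2**free consistent vectors

# Pattern (1, None, 0) accepts (1, 0, 0) and (1, 1, 0), each with probability 0.5.
print(emission_prob((1, None, 0), (1, 1, 0)))  # 0.5
print(emission_prob((1, None, 0), (0, 1, 0)))  # 0.0
```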

2.
In this paper, a simulation system for pseudo-random testing is first described to investigate the characteristics of pseudo-random testing, and several interesting experimental results are obtained. It is found that the initial states of pseudo-random sequences have little effect on fault coverage. A fixed connection between LFSR outputs and circuit inputs in which the number of LFSR stages m is less than the number of circuit inputs n leads to low fault coverage, and the fault coverage is reduced as m decreases. The local unrandomness of pseudo-random sequences is exposed clearly. Generally, when an LFSR is employed as a pseudo-random generator, there should be at least as many LFSR stages as circuit inputs. However, for large circuits under test with hundreds of inputs, using an LFSR with hundreds of stages has drawbacks. In this paper, a new design for a pseudo-random pattern generator is therefore proposed in which m can be smaller than n.
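As a companion to the discussion of LFSR-based pattern generation above, here is a minimal sketch of a Fibonacci-style LFSR emitting pseudo-random test patterns. The width, seed, and tap positions are illustrative choices (a maximal-length sequence additionally requires taps corresponding to a primitive polynomial); none of them are taken from the paper.

```python
from typing import Iterator, List

def lfsr(seed: int, taps: List[int], width: int) -> Iterator[int]:
    """Yield successive LFSR states, each usable as a pseudo-random test pattern."""
    mask = (1 << width) - 1
    state = seed & mask
    assert state != 0, "the all-zero state locks up an XOR-feedback LFSR"
    while True:
        yield state
        feedback = 0
        for t in taps:                       # XOR the tapped stages
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & mask

# Example: a 16-stage generator; print the first five patterns.
gen = lfsr(seed=0xACE1, taps=[15, 13, 12, 10], width=16)
print([f"{next(gen):016b}" for _ in range(5)])
```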

3.
Many activities in business process management, such as process retrieval, process mining, and process integration, need to determine the similarity or the distance between two processes. Although several approaches have recently been proposed to measure the similarity between business processes, neither the definitions of the similarity notion between processes nor the measurement methods have gained wide recognition. In this paper, we define the similarity and the distance based on firing sequences in the context of workflow nets (WF-nets) as the unified reference concepts. However, for many WF-nets, either the number of full firing sequences or the length of a single firing sequence is infinite. Since transition adjacency relations (TARs) can be seen as the genes of the firing sequences, describing the transition orders that appear in all possible firing sequences, we propose a practical similarity definition based on the TAR sets of two processes. It is formally shown that the corresponding distance measure between processes is a metric. An algorithm using model reduction techniques for the efficient computation of the measure is also presented. Experimental results involving comparison of different measures on artificial processes and evaluations on clustering real-life processes validate our approach.
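To make the TAR idea concrete, here is a small sketch that extracts transition adjacency relations from (a sample of) firing sequences and compares two processes with a Jaccard-style ratio over their TAR sets. The ratio is one natural formulation assumed here for illustration; it is not necessarily the paper's exact definition, which is computed on the WF-net itself.

```python
from typing import Iterable, Set, Tuple

def tar_set(firing_sequences: Iterable[Iterable[str]]) -> Set[Tuple[str, str]]:
    """Collect all pairs (a, b) such that transition b directly follows a."""
    tars: Set[Tuple[str, str]] = set()
    for seq in firing_sequences:
        seq = list(seq)
        for a, b in zip(seq, seq[1:]):
            tars.add((a, b))
    return tars

def tar_similarity(seqs1, seqs2) -> float:
    """Jaccard-style ratio of shared adjacency relations."""
    t1, t2 = tar_set(seqs1), tar_set(seqs2)
    if not t1 and not t2:
        return 1.0
    return len(t1 & t2) / len(t1 | t2)

# Example: two small processes sharing half of their adjacency relations.
p1 = [["a", "b", "c"], ["a", "c", "b"]]
p2 = [["a", "b", "c"]]
print(tar_similarity(p1, p2))  # 0.5
```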

4.
The transition set semantics (Wang and Jiao, LNCS 6128:84–103, 2010) partitions Petri net behaviors in a canonical way such that behaviors in an equivalence class have the same canonical transition set sequence. This article extends the semantics in two ways: first, the semantics is parameterized by the basic relation on the structural transitions in order to define different variants; second, the semantics is defined for the infinite firing sequences of the net. We prove that these extensions still preserve the well-definedness, soundness, and completeness of the semantics. Furthermore, we show how to recognize certain infinite sequences, called back-loops, from the viewpoint of this new semantics.

5.
Context: The generation of dynamic test sequences from a formal specification complements traditional testing methods in finding errors in the source code. Objective: In this paper we extend one specific combinatorial test approach, the Classification Tree Method (CTM), with transition information to generate test sequences. Although we use CTM, this extension is also possible for any combinatorial testing method. Method: The generation of minimal test sequences that fulfill the demanded coverage criteria is an NP-hard problem; therefore, search-based approaches are required to find such (near-)optimal test sequences. Results: The experimental analysis compares the search-based technique with a greedy algorithm on a set of 12 hierarchical concurrent models of programs extracted from the literature. Our proposed search-based approaches (GTSG and ACOts) are able to generate test sequences by finding the shortest valid path that achieves full class (state) and transition coverage. Conclusion: The extended classification tree is useful for generating test sequences. Moreover, the experimental analysis reveals that our search-based approaches outperform the greedy deterministic approach, especially on the most complex instances. All presented algorithms are integrated into a professional tool for functional testing.
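For intuition about the greedy baseline mentioned above, the sketch below covers every transition of a toy state machine by repeatedly walking (via breadth-first search) to the nearest still-uncovered transition. The state-machine encoding and the toy model are assumptions for illustration; the paper's classification-tree extension and its GTSG/ACOts algorithms are not reproduced here.

```python
from collections import deque
from typing import Dict, List, Tuple

# A toy state machine: state -> list of (event, next_state).
FSM = Dict[str, List[Tuple[str, str]]]

def greedy_transition_cover(fsm: FSM, start: str) -> List[str]:
    """Greedy baseline: repeatedly extend the test sequence with a shortest
    walk (found by BFS) that ends in a still-uncovered transition."""
    uncovered = {(s, e, t) for s, outs in fsm.items() for e, t in outs}
    sequence: List[str] = []
    state = start
    while uncovered:
        queue = deque([(state, [])])   # each entry carries the walk taken so far
        seen = {state}
        extension = None
        while queue and extension is None:
            s, walk = queue.popleft()
            for e, t in fsm.get(s, []):
                step = walk + [(s, e, t)]
                if (s, e, t) in uncovered:
                    extension = step
                    break
                if t not in seen:
                    seen.add(t)
                    queue.append((t, step))
        if extension is None:          # the rest is unreachable from here
            break
        for s, e, t in extension:      # every traversed transition is covered
            uncovered.discard((s, e, t))
        sequence.extend(e for _, e, _ in extension)
        state = extension[-1][2]
    return sequence

toy = {"idle": [("start", "run")],
       "run": [("pause", "idle"), ("stop", "end")],
       "end": []}
print(greedy_transition_cover(toy, "idle"))  # ['start', 'pause', 'start', 'stop']
```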

6.
Structural code coverage criteria have been studied since the early seventies, and they are now well supported by commercial and open-source tools and commonly embedded in several advanced industrial processes. Most industrial applications still refer to simple criteria, like statement and branch coverage, and consider complex criteria, like modified condition/decision coverage, only rarely, often driven by the requirements of certification agencies. The industrial value of structural criteria is limited by the difficulty of achieving high coverage, due both to the complexity of deriving test cases that execute specific uncovered elements and to the presence of many infeasible elements in the code. In this paper, we propose a technique that both generates test cases that execute yet-uncovered branches and identifies infeasible branches that can be eliminated from the computation of branch coverage. In this way, we can increase branch coverage to closely approximate full coverage, thus improving its industrial value. The algorithm combines symbolic analysis, abstraction refinement, and a novel technique named coarsening to execute unexplored branches, identify infeasible ones, and mitigate the state space explosion problem. In the paper, we present the technique and illustrate its effectiveness through a set of experimental results obtained with a prototype implementation.

7.
Simulations of DNA Computing with In Vitro Selection
An attractive feature of DNA-based computers is the large number of possible sequences (4^n) of a given length n with which to represent information. The problem, however, is that any given sequence is not necessarily independent of the other sequences, and thus reactions among them can interfere with the reliability and efficiency of the computation. Independent sequences might be manufactured in the test tube using evolutionary methods. To this end, an in vitro selection has been developed that selects maximally mismatched DNA sequences. In order to understand the behavior of the protocol, a computer simulation of the protocol was carried out, the results of which showed that Watson-Crick pairs of independent oligonucleotides were preferentially selected. In addition, to explore the computational capability of the selection protocol, a design is presented that generates the Fibonacci sequence of numbers.
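As a purely software analogue of "maximally mismatched" sequence selection, the toy sketch below greedily keeps candidates whose mismatch count against every already-kept sequence stays above a threshold. The pool, threshold, and greedy rule are illustrative assumptions; they stand in for neither the in vitro protocol nor its simulation, and Watson-Crick complementarity is ignored.

```python
from typing import List

def mismatches(a: str, b: str) -> int:
    """Number of positions at which two equal-length DNA sequences differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def select_independent(candidates: List[str], min_mismatch: int) -> List[str]:
    """Greedily keep sequences that are sufficiently mismatched from all kept ones."""
    kept: List[str] = []
    for seq in candidates:
        if all(mismatches(seq, other) >= min_mismatch for other in kept):
            kept.append(seq)
    return kept

pool = ["ACGTACGT", "ACGTACGA", "TGCATGCA", "TTTTCCCC"]
print(select_independent(pool, min_mismatch=4))
# ['ACGTACGT', 'TGCATGCA', 'TTTTCCCC']  (the second candidate is too similar to the first)
```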

8.
We present a technique for refining the design of relational storage for XML data. The technique is based on XML key propagation: given a set of keys on XML data and a mapping (transformation) from the XML data to relations, what functional dependencies must hold on the relations produced by the mapping? With the functional dependencies, one can then convert the relational design into, e.g., 3NF or BCNF, and thus develop efficient relational storage for XML data. We provide several algorithms for computing XML key propagation. One algorithm checks whether a functional dependency is propagated from a set of XML keys via a predefined mapping; this allows one to determine whether or not the relational design is in a normal form. The others compute a minimum cover for all functional dependencies that are propagated from a set of XML keys and hold on a universal relation; these provide guidance on how to design a relational schema for storing XML data. These algorithms show that XML key propagation and its associated minimum cover can be computed in polynomial time. Our experimental results verify that these algorithms are efficient in practice. We also investigate the complexity of propagating other XML constraints to relations. The ability to compute XML key propagation is a first step toward establishing a connection between XML data and its relational representation at the semantic level.

9.
This paper introduces a simple but nontrivial set of local transformation rules for designing controlled-NOT (CNOT)-based combinatorial circuits. We also provide a proof that the rule set is complete; namely, for any two equivalent circuits S1 and S2, there is a sequence of transformations, each of them in the rule set, which changes S1 into S2. Two applications of the rule set are also presented. One is to simulate Resolution with only polynomial overhead using the rule set; we can therefore conclude that the rule set is reasonably powerful. The other is to reduce the cost of CNOT-based circuits by using the transformations in the rule set, which implies that the rule set might be used for practical circuit design.
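One of the simplest local transformations one could include in such a rule set follows from CNOT being its own inverse: two adjacent CNOT gates with the same control and target cancel. The sketch below applies only this single cancellation rule to a gate list; the gate encoding is an assumption for illustration, and the paper's complete rule set is much richer.

```python
from typing import List, Tuple

Gate = Tuple[int, int]            # (control qubit, target qubit)

def cancel_adjacent_duplicates(circuit: List[Gate]) -> List[Gate]:
    """Remove adjacent identical CNOT pairs (CNOT; CNOT == identity),
    including cascading cancellations exposed by earlier removals."""
    out: List[Gate] = []
    for gate in circuit:
        if out and out[-1] == gate:
            out.pop()             # the pair cancels
        else:
            out.append(gate)
    return out

# Example: the middle pair (1, 2), (1, 2) disappears; the cost drops from 4 gates to 2.
print(cancel_adjacent_duplicates([(0, 1), (1, 2), (1, 2), (2, 0)]))
# [(0, 1), (2, 0)]
```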

10.
The prediction of coding sequences has received a lot of attention during the last decade. We can distinguish two kinds of methods: those that rely on training with sets of example and counter-example sequences, and those that exploit the intrinsic properties of the DNA sequences to be analyzed. The former are generally more powerful, but their domains of application are limited by the availability of a training set. The latter avoid this drawback but can only be applied to sequences that are long enough to allow computation of the statistics. Here, we present a method that fills the gap between the two approaches. A learning step is applied using a set of sequences that are assumed to contain coding and non-coding regions, but with the boundaries of these regions unknown. A test step then uses the discriminant function obtained during learning to predict coding regions in sequences from the same organism. The learning relies upon correspondence analysis, and the prediction is presented on a graphical display. The method has been evaluated on a sample of yeast sequences, and the analysis of a set of expressed sequence tags from the Eucalyptus globulus-Pisolithus tinctorius ectomycorrhiza illustrates the relevance of the approach in its biological context.

11.
靳立运, 邝继顺, 王伟征. Computer Engineering (计算机工程), 2011, 37(12): 268-269, 272
Research on applying the self-feedback test method TVAC to sequential circuits is still at an early stage. This paper therefore studies its application to the testing of synchronous full-scan sequential circuits, proposes two test structures, and carries out experiments on the ISCAS89 benchmark circuits. The experimental results show that, compared with the weighted pseudo-random method and the circular self-test method, the proposed approach achieves higher fault coverage with fewer test vectors.

12.
The regular model-checking approach is a set of techniques aimed at symbolically exploring infinite state spaces. These techniques proceed by representing sets of configurations of the system under analysis by regular languages, and the transition relation between these configurations by a transformation over such languages. The set of reachable configurations can then be computed by repeatedly applying the transition relation, starting from a representation of the initial set of configurations, until a fixed point is reached. In order for this computation to terminate, it is generally necessary to introduce so-called acceleration operators, whose purpose is to explore, in one computation step, infinitely many paths in the transition graph of the system. A simple form of acceleration operator is one associated with a cycle in the transition graph, computing the set of states that can be obtained by following this cycle arbitrarily many times. The computation of acceleration operators is strongly dependent on the type of the data values that are manipulated by the system, and on the symbolic representation chosen for handling sets of such values. In this survey, we describe acceleration operators suited for the regular state-space exploration of systems relying on FIFO communication channels, as well as those based on integer and real variables.

13.
This paper presents a library based on improving sequences and demonstrates that they are effective for pruning unnecessary computations while retaining program clarity. An improving sequence is a monotonic sequence of approximations of a final value that are improved gradually according to some ordering relation. A computation using improving sequences proceeds by demanding the next approximation value. If an approximation value in the middle of the improving sequence has sufficient information to yield the result of some part of the program, the computations that would produce the remaining values can be pruned. By combining suitable improving sequences and primitive functions defined for the sequences, we can write efficient programs in the same form as simple and naive programs. We give examples that show the effectiveness of improving sequences and show, by program calculation, that a simple minimax-like program using improving sequences implements a well-known branch-and-bound searching algorithm.
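In the spirit of the minimax example above, here is a minimal Python sketch of an improving sequence: each branch value is delivered as a non-increasing run of upper bounds, and a branch is abandoned as soon as its current bound can no longer beat the best exact value found so far. The function names and the max-of-mins setting are assumptions for illustration; the paper's library itself is not reproduced.

```python
from typing import Iterable, Iterator, List

def shrinking_min(values: Iterable[int]) -> Iterator[int]:
    """Improving sequence for a minimum: a non-increasing run of upper bounds,
    the last of which is the exact minimum."""
    bound = None
    for v in values:
        bound = v if bound is None else min(bound, v)
        yield bound

def max_of_mins(branches: List[List[int]]) -> int:
    """Max over branches of the min of each branch, pruning a branch as soon
    as its current upper bound can no longer beat the best branch so far."""
    best = None
    for leaves in branches:
        assert leaves, "branches are assumed non-empty"
        abandoned = False
        for bound in shrinking_min(leaves):
            if best is not None and bound <= best:
                abandoned = True      # remaining leaves of this branch are pruned
                break
        if not abandoned:
            best = bound if best is None else max(best, bound)
    return best

# Example: the second branch is abandoned after its first leaf (2 <= 3).
print(max_of_mins([[3, 5], [2, 9, 7], [4, 6]]))  # 4
```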

14.
Some design-for-testability techniques, such as level-sensitive scan design, scan path, and scan/set, reduce test pattern generation of sequential circuits to that of combinational circuits by enhancing the controllability and/or observability of all the memory elements. However, even for combinational circuits, 100 percent test coverage of large-scale circuits is generally very difficult to achieve. This article presents DFT methods aimed at achieving total coverage. Two methods are compared: One, based on testability analysis, involves the addition of test points to improve testability before test pattern generation. The other method employs a test pattern generation algorithm (the FAN algorithm). Results show that 100 percent coverage within the allowed limits is difficult with the former approach. The latter, however, enables us to generate a test pattern for any detectable fault within the allowed time limits, and 100 percent test coverage is possible.

15.
We present an architecture for query processing in the relational model extended with transaction time. The architecture integrates standard query optimization and computation techniques with new differential computation techniques. Differential computation computes a query incrementally or decrementally from the cached and indexed results of previous computations. The use of differential computation techniques is essential in order to provide efficient processing of queries that access very large temporal relations. Alternative query plans are integrated into a state transition network, where the state space includes backlogs of base relations, cached results from previous computations, a cache index, and intermediate results; the transitions include standard relational algebra operators, operators for constructing differential files, operators for differential computation, and combined operators. A rule set is presented to prune away parts of state transition networks that are not promising, and dynamic programming techniques are used to identify the optimal plans from the remaining state transition networks. An extended logical access path serves as a structuring index on the cached results and contains, in addition, vital statistics for the query optimization process, including statistics about base relations, backlogs, and queries that were previously computed and cached, previously computed, or just previously estimated.

16.
DNA computation exploits the computational power inherent in molecules for information processing. However, in order to perform the computation correctly, a set of good DNA sequences is crucial, and a lot of work has been carried out on designing good DNA sequences to achieve reliable molecular computation. In this article, the ant colony system (ACS) is introduced as a new tool for DNA sequence design. In this approach, DNA sequence design is modeled as a path-finding problem, which consists of four nodes, to enable the implementation of the ACS. The results of the proposed approach are compared with other methods such as the genetic algorithm.

17.
This paper presents a method of generating test sequences for concurrent programs and communication protocols that are modeled as communicating nondeterministic finite-state machines (CNFSMs). A conformance relation, called trace equivalence, is defined within this model, serving as a guide to test generation. A test generation method for a single nondeterministic finite-state machine (NFSM) is developed, which is an improved and generalized version of the Wp-method that generates test sequences only for deterministic finite-state machines. It is applicable to both nondeterministic and deterministic finite-state machines. When applied to deterministic finite-state machines, it usually yields smaller test suites with full fault coverage than the existing methods that also provide full fault coverage, provided that the number of states in the implementation NFSMs is bounded by a known integer. For a system of CNFSMs, the test sequences are generated in the following manner: the system of CNFSMs is first reduced to a single NFSM by reachability analysis; then the test sequences are generated from the resulting NFSM using the generalized Wp-method.

18.
Context: The current validation tests for nuclear software are routinely performed by random testing, which leads to uncertain test coverage. Moreover, validation tests should directly verify the system's compliance with the original user's needs. Unlike current model-based testing methods, which are generally based on requirements or design models, the proposed model is derived from the original user's needs expressed in text through a domain-specific ontology, and is then used to generate validation tests systematically. Objective: Our first goal is to develop an objective, repeatable, and efficient systematic validation test scheme that is effective for large systems, with analyzable test coverage. Our second goal is to provide a new model-based validation testing method that reflects the user's original safety needs. Method: A model-based scenario test case generation approach for nuclear digital safety systems was designed. This was achieved by converting the scenarios described in natural language in a Safety Analysis Report (SAR), prepared by the power company for licensing review, into Unified Modeling Language (UML) sequence diagrams based on a proposed ontology of a related regulatory standard. Next, we extracted the initial environmental parameters and the described operational sequences. We then performed variations on these data to systematically generate a sufficient number of scenario test cases. Results: The test coverage criteria, namely the equivalence-partition coverage of the initial environment, the condition coverage, the action coverage, and the scenario coverage, were met using our method. Conclusion: The proposed model-based scenario testing can provide better test coverage than random testing, and a test suite based on user needs can be provided.

19.
Some systems interact with their environment at physically distributed interfaces called ports, and we separately observe sequences of inputs and outputs at each port. As a result, we cannot reconstruct the global sequence that occurred, and this reduces our ability to distinguish different systems in testing or in use. In this paper we explore notions of conformance for an input output transition system that has multiple ports, adapting the widely used ioco implementation relation to this situation. We consider two different scenarios. In the first scenario, the agents at the different ports are entirely independent. Alternatively, it may be feasible for some external agent to receive information from more than one of the agents at the ports of the system, these local behaviours potentially being brought together, and here we require a stronger implementation relation. We define implementation relations for these scenarios and prove that in the case of a single-port system the new implementation relations are equivalent to ioco. In addition, we define what it means for a test case to be controllable and give an algorithm that decides whether this condition holds. We give a test generation algorithm to produce sound and complete test suites. Finally, we study two implementation relations to deal with partially specified systems.

20.
Test vector generation for multi-scan-chain built-in self-test (基于多扫描链的内建自测试技术中的测试向量生成)
A test vector generation method is proposed for multi-scan-chain-based built-in self-test. The method uses a single linear feedback shift register (LFSR) as the pseudo-random test vector generator and feeds test vectors to all scan chains simultaneously, and it overcomes the impact of inter-chain correlation on fault coverage by constructing multiple scan chains with minimal correlation. In addition, the set of hard-to-detect faults is determined by fault simulation, and a minimal deterministic test vector set is generated for these faults with ATPG. Finally, bit-flipping logic is designed from the resulting minimal test vector set; this logic flips specific values on the scan chains so that the hard-to-detect faults are also detected, achieving complete fault detection for the circuit under test.
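To visualize the single-LFSR, multiple-scan-chain arrangement described above, the sketch below lets each scan chain observe one designated LFSR stage per shift cycle. The width, taps, seed, and stage assignment are illustrative assumptions; the paper's low-correlation chain construction and bit-flipping logic are not modeled.

```python
from typing import Dict, List

def load_scan_chains(seed: int, taps: List[int], width: int,
                     chain_stage: Dict[str, int], chain_length: int) -> Dict[str, List[int]]:
    """Shift one LFSR `chain_length` times, pushing one bit per cycle into
    every scan chain (each chain observes a different LFSR stage)."""
    mask = (1 << width) - 1
    state = seed & mask
    assert state != 0, "the all-zero state locks up an XOR-feedback LFSR"
    chains: Dict[str, List[int]] = {name: [] for name in chain_stage}
    for _ in range(chain_length):
        for name, stage in chain_stage.items():
            chains[name].append((state >> stage) & 1)   # broadcast this cycle's bits
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & mask
    return chains

filled = load_scan_chains(seed=0b1011, taps=[3, 2], width=4,
                          chain_stage={"chain0": 0, "chain1": 2}, chain_length=6)
print(filled)
```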

