Similar Literature
20 similar documents found.
1.
The trace assertion method is a module interface specification method based on the finite state machine model. To support this method, we plan to develop a specification simulation tool, a trace simulator, that symbolically interprets trace assertions of trace specifications and simulates the externally observable behavior of the modules specified. We first present the trace assertion method. Then we formally define trace rewriting systems and show how trace rewriting, a technique similar to term rewriting, can be applied to implement trace simulation.
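As a rough illustration of the idea, here is a minimal, hypothetical Python sketch (not the planned trace simulator; the stack module and rule are illustrative): traces of a toy stack module are reduced to a canonical form by a rewrite rule, and the externally observable result of an output call is read off the normal form.

```python
# Hypothetical sketch of trace rewriting for a toy stack module.

def rewrite(trace):
    """Apply the rule PUSH(x).POP -> empty trace until no redex remains."""
    changed = True
    while changed:
        changed = False
        for i in range(len(trace) - 1):
            if trace[i][0] == "PUSH" and trace[i + 1][0] == "POP":
                trace = trace[:i] + trace[i + 2:]   # delete the redex
                changed = True
                break
    return trace                                    # canonical (normal-form) trace

def top(trace):
    """Externally observable value of TOP, read off the canonical trace."""
    canonical = rewrite(list(trace))
    if not canonical:
        return "error: empty stack"
    return canonical[-1][1]   # argument of the last call left in the canonical trace

t = [("PUSH", 1), ("PUSH", 2), ("POP", None), ("PUSH", 3)]
print(rewrite(list(t)))   # [('PUSH', 1), ('PUSH', 3)]
print(top(t))             # 3
```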

2.
Verifying data refinements using a model checker
In this paper, we consider how refinements between state-based specifications (e.g., written in Z) can be checked by use of a model checker. Specifically, we are interested in the verification of downward and upward simulations, which are the standard approach to verifying refinements in state-based notations. We show how downward and upward simulations can be checked using existing temporal logic model checkers. In particular, we show how the branching time temporal logic CTL can be used to encode the standard simulation conditions. We do this both for a blocking, or guarded, interpretation of operations (often used when specifying reactive systems) and for the more common non-blocking interpretation of operations used in many state-based specification languages (for modelling sequential systems). The approach is general enough to use with any state-based specification language, and we illustrate how refinements between Z specifications can be checked with the SAL CTL model checker on a small example.
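To make the simulation conditions concrete, here is a small, hypothetical Python sketch (the names and encoding are illustrative, and it checks the conditions directly on finite data types rather than via CTL and a model checker): it tests the initialization, applicability and correctness conditions of a downward simulation with respect to a retrieve relation.

```python
def downward_simulation(abstract, concrete, retrieve):
    """abstract/concrete: (initial states, {op: set of (pre, post) pairs})."""
    a_init, a_ops = abstract
    c_init, c_ops = concrete

    # Initialization: every concrete initial state represents some abstract initial state.
    if not all(any((a, c) in retrieve for a in a_init) for c in c_init):
        return False

    for op in a_ops:
        a_dom = {pre for pre, _ in a_ops[op]}
        c_dom = {pre for pre, _ in c_ops[op]}
        for (a, c) in retrieve:
            # Applicability: where the abstract operation is enabled, so is the concrete one.
            if a in a_dom and c not in c_dom:
                return False
            # Correctness: every concrete step from c is matched by an abstract step from a.
            if a in a_dom:
                for (pre, post) in c_ops[op]:
                    if pre == c and not any(p == a and (q, post) in retrieve
                                            for (p, q) in a_ops[op]):
                        return False
    return True

# Toy example: identical one-operation counters with the identity retrieve relation.
A = ({0}, {"inc": {(0, 1), (1, 2)}})
C = ({0}, {"inc": {(0, 1), (1, 2)}})
print(downward_simulation(A, C, {(0, 0), (1, 1), (2, 2)}))   # True
```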

3.
In this paper, we investigate the connection between fragments of associative-commutative term rewriting and fragments of Mobile Ambients, a powerful model for mobile and distributed computations. The connection can be used to transfer decidability and undecidability results for important computational properties like reachability from one formalism to the other. Furthermore, it can be viewed as a vehicle to apply tools based on rewriting for the simulation and validation of specifications given in Mobile Ambients.
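Reachability is the kind of property the connection transfers; the following hypothetical Python sketch (toy rules, not the paper's encoding) treats rewriting modulo associativity and commutativity as multiset rewriting and decides reachability for a small rule set by breadth-first search.

```python
from collections import Counter

def applicable(state, lhs):
    return all(state[sym] >= n for sym, n in lhs.items())

def apply_rule(state, lhs, rhs):
    out = state.copy()
    out.subtract(lhs)
    out.update(rhs)
    return +out                      # unary + drops zero and negative counts

def reachable(start, target, rules, max_states=10_000):
    """Breadth-first search over multisets (terms modulo AC)."""
    seen = {frozenset(start.items())}
    frontier = [start]
    while frontier and len(seen) < max_states:
        nxt = []
        for state in frontier:
            if state == target:
                return True
            for lhs, rhs in rules:
                if applicable(state, lhs):
                    succ = apply_rule(state, lhs, rhs)
                    key = frozenset(succ.items())
                    if key not in seen:
                        seen.add(key)
                        nxt.append(succ)
        frontier = nxt
    return any(state == target for state in frontier)

# Toy AC rules: a + a -> b,  b + c -> d
rules = [(Counter("aa"), Counter("b")), (Counter("bc"), Counter("d"))]
print(reachable(Counter("aac"), Counter("d"), rules))   # True: aac -> bc -> d
```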

4.
We present an overview of the Java PathExplorer runtime verification tool, in short referred to as JPAX. JPAX can monitor the execution of a Java program and check that it conforms to a set of user-provided properties formulated in temporal logic. JPAX can in addition analyze the program for concurrency errors such as deadlocks and data races. The concurrency analysis requires no user-provided specification. The tool facilitates automated instrumentation of a program's bytecode, which when executed will emit an event stream, the execution trace, to an observer. The observer dispatches the incoming event stream to a set of observer processes, each performing a specialized analysis, such as temporal logic verification, deadlock analysis, and data race analysis. Temporal logic specifications can be formulated by the user in the Maude rewriting logic, where Maude is a high-speed rewriting system for equational logic, here extended with executable temporal logic. The Maude rewriting engine is then activated as an event-driven monitoring process. Alternatively, temporal specifications can be translated into automata or algorithms that can efficiently check the event stream. JPAX can be used during program testing to gain increased information about program executions, and can potentially also be applied during operation to monitor safety-critical systems.
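As a hypothetical illustration of the monitoring idea (this is not JPAX or its Maude encoding; event names are invented), the Python sketch below plays the role of an observer process: it checks a simple response property, "every request is eventually followed by an ack", over the finite event trace emitted by an instrumented program.

```python
def monitor_response(trace, trigger="request", response="ack"):
    """Check 'every trigger is eventually followed by a response' on a finite trace."""
    pending = 0                            # triggers still awaiting a response
    for event in trace:
        if event == trigger:
            pending += 1
        elif event == response and pending > 0:
            pending -= 1
    return pending == 0

# Event stream an instrumented program might emit at runtime:
trace = ["init", "request", "work", "ack", "request", "shutdown"]
print(monitor_response(trace))             # False: the second request is never acknowledged
```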

5.
We present a general framework for studying equational specifications with pre-defined structures. The axioms of the specifications are meant to define new structures in addition to the given ones. In particular, they may define a new operator only partially over some given domain. Our approach allows one to easily assign semantics to such specifications in both a denotational and an operational fashion. In order to enable functional-style computations, we introduce a semantically enriched notion of term rewriting. This rewrite relation also allows us to infer the consistency of the specification. For the latter purpose one has to show confluence modulo the given structures. We outline how criteria for confluence and termination of the rewrite relation in question can be obtained easily by generalizing results of classical syntactic rewriting theory.

6.
Symbolic simulation and uninterpreted functions have long been staple techniques for formal hardware verification. In recent years, we have adapted these techniques for the automatic, formal verification of low-level embedded software—specifically, checking the equivalence of different versions of assembly language programs. Our approach, though limited in scalability, has proven particularly promising for the intricate code optimizations and complex architectures typical of high-performance embedded software, such as for DSPs and VLIW processors. Indeed, one of our key findings was how easy it was to create or retarget our verification tools to different, even very complex, machines. The resulting tools automatically verified or found previously unknown bugs in several small sequences of industrial and published example code. This paper provides an introduction to these techniques and a review of our results.
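A minimal, hypothetical Python sketch of the underlying idea (a toy two-instruction machine, not the authors' tool): both instruction sequences are simulated symbolically, the ALU operation is left uninterpreted, and equivalence is checked by comparing the resulting symbolic expressions for the output register.

```python
def symexec(program, env):
    """Symbolically execute MOV/OP instructions; 'OP' applies an uninterpreted function."""
    env = dict(env)
    for instr in program:
        if instr[0] == "MOV":                        # ('MOV', dst, src)
            _, dst, src = instr
            env[dst] = env.get(src, src)
        elif instr[0] == "OP":                       # ('OP', dst, f, src1, src2)
            _, dst, f, s1, s2 = instr
            env[dst] = (f, env.get(s1, s1), env.get(s2, s2))
    return env

init = {"r1": "a", "r2": "b"}                        # symbolic initial register values
# Two versions of the same computation, e.g. before and after an optimization:
p1 = [("OP", "r3", "add", "r1", "r2"), ("MOV", "r0", "r3")]
p2 = [("MOV", "r4", "r1"), ("OP", "r0", "add", "r4", "r2")]

# Equivalence of r0: both reduce to the symbolic term ('add', 'a', 'b').
print(symexec(p1, init)["r0"] == symexec(p2, init)["r0"])   # True
```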

7.
8.
Test purposes have been presented as a solution to avoid the state space explosion when selecting test cases from formal models. Although such techniques work very well with regard to the speed of the test derivation, they leave the tester with one important task that influences the quality of the overall testing process: test purposes have to be formulated manually. In this paper, we present an approach that assists a test engineer with test purpose design in two ways: it allows automatic generation of coverage-based test suites and can be used to automatically exercise those aspects of the system that are missed by hand-crafted test purposes. We consider coverage of Lotos specifications, and show how labeled transition systems derived from such specifications have to be extended in order to allow the application of logical coverage criteria to Lotos specifications. We then show how existing tools can be used to efficiently derive test cases and suggest how to use the coverage information to minimize test suites while generating them.
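The coverage-driven generation can be pictured with a small, hypothetical Python sketch (a toy labelled transition system, not Lotos or the actual toolchain): for every transition it derives, by breadth-first search, a shortest test case that exercises that transition, yielding a transition-coverage test suite.

```python
from collections import deque

def shortest_trace(lts, init, target_edge):
    """Shortest sequence of actions from init whose last step is the target edge."""
    queue = deque([(init, [])])
    visited = {init}
    while queue:
        state, trace = queue.popleft()
        for (src, act, dst) in lts:
            if src != state:
                continue
            if (src, act, dst) == target_edge:
                return trace + [act]
            if dst not in visited:
                visited.add(dst)
                queue.append((dst, trace + [act]))
    return None                                   # target edge unreachable

def transition_coverage_suite(lts, init):
    """One test case per transition of the LTS."""
    return [shortest_trace(lts, init, edge) for edge in lts]

lts = [(0, "coin", 1), (1, "coffee", 0), (1, "tea", 0)]
print(transition_coverage_suite(lts, 0))
# [['coin'], ['coin', 'coffee'], ['coin', 'tea']]
```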

9.
We explore the features of rewriting logic and, in particular, of the rewriting logic language Maude as a logical and semantic framework for representing and executing inference systems. In order to illustrate the general ideas we consider two substantial case studies. In the first one, we represent both the semantics of Milner's CCS and a modal logic for describing local capabilities of CCS processes. Although a rewriting logic representation of the CCS semantics is already known, it cannot be directly executed in the default interpreter of Maude. Moreover, it cannot be used to answer questions such as which are the successors of a process after performing an action, information that is needed to define the semantics of the Hennessy-Milner modal logic. Basically, the problems are the existence of new variables in the right-hand side of the rewrite rules and the nondeterministic application of the semantic rules, inherent to CCS. We show how these problems can be solved in a general, CCS-independent way by controlling the rewriting process by means of reflection. This executable specification plus the reflective control of rewriting can be used to analyze CCS processes. The same techniques are also used to implement a symbolic semantics for LOTOS in our second case study. The good properties of Maude as a metalanguage allow us to implement a whole formal tool where LOTOS specifications without restrictions in their data types (given as ACT ONE specifications) can be executed. In summary, we present Maude as an executable semantic framework by providing easy tool-building techniques for a language given its operational semantics. Research supported by CICYT projects Desarrollo Formal de Sistemas Distribuidos (TIC97-0669-C03-01) and Desarrollo Formal de Sistemas Basados en Agentes Móviles (TIC2000-0701-C02-01).
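The successor question can be stated concretely; the following hypothetical Python sketch (the paper's implementation is in Maude, and synchronization, restriction and relabelling are omitted here) computes the processes reachable from a CCS term by performing a given action, which is exactly the ingredient needed for the Hennessy-Milner semantics.

```python
def successors(proc, action):
    """Processes reachable from proc by performing 'action' (no synchronization/restriction)."""
    kind = proc[0]
    if kind == "nil":
        return set()
    if kind == "prefix":                             # ('prefix', a, P) is a.P
        _, a, cont = proc
        return {cont} if a == action else set()
    if kind == "choice":                             # ('choice', P, Q) is P + Q
        _, p, q = proc
        return successors(p, action) | successors(q, action)
    if kind == "par":                                # ('par', P, Q) is P | Q
        _, p, q = proc
        return ({("par", p2, q) for p2 in successors(p, action)} |
                {("par", p, q2) for q2 in successors(q, action)})
    raise ValueError(f"unknown process form: {kind}")

# a.0 + b.0 can perform 'a' and become 0 (represented here as ('nil',)):
P = ("choice", ("prefix", "a", ("nil",)), ("prefix", "b", ("nil",)))
print(successors(P, "a"))                            # {('nil',)}
```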

10.
State-based, formal methods have been successfully applied to the automatic verification of cache coherence in sequentially consistent systems. However, coherence in shared memory multiprocessors under a relaxed memory model is much more complex to verify automatically. With relaxed memory models, incoming invalidations and outgoing updates can be delayed in each cache while processors are allowed to race ahead. This buffering of memory accesses considerably increases the amount of state in each cache and the complexity of protocol interactions. Moreover, because caches can hold inconsistent copies of the same data for long periods of time, coherence cannot be verified by simply checking that cached copies are identical at all times. This paper makes two major contributions. First, we demonstrate how to model and verify cache coherence under a relaxed memory model in the context of state-based verification methods. Frameworks for modeling the hardware and for generating correct memory access sequences driving the hardware model are developed. We also show correctness properties which must be verified on the hardware model. Second, we demonstrate a successful application of a state-based verification tool called SSM to the verification of the delayed protocol, an aggressive protocol for relaxed memory models. SSM is based on an abstraction technique preserving the properties to verify. We show that with classical, explicit approaches the verification of cache coherence is practically infeasible because of the state space explosion problem, whereas SSM is able to verify protocols at both the behavioral and message-passing levels.

11.
Cache coherence in shared-memory multiprocessor systems has been studied mostly from an architecture viewpoint, often by means of aggregate metrics. In many cases, aggregate events provide insufficient information for programmers to understand and optimize the coherence behavior of their applications. A better understanding would be given by source code correlations of not only aggregate events, but also finer-granularity metrics directly linked to high-level source code constructs, such as source lines and data structures. In this paper, we explore a novel application-centric approach to studying coherence traffic. We develop a coherence analysis framework based on incremental coherence simulation of actual reference traces. We provide tool support to extract these reference traces and synchronization information from OpenMP threads at runtime using dynamic binary rewriting of the application executable. These traces are fed to ccSIM, our cache-coherence simulator. The novelty of ccSIM lies in its ability to relate low-level cache coherence metrics (such as coherence misses and their causative invalidations) to high-level source code constructs, including source code locations and data structures. We explore the degree of freedom in interleaving data traces from different processors and assess simulation accuracy in comparison to metrics obtained from hardware performance counters. Our quantitative results show that: 1) cache coherence traffic can be simulated with a considerable degree of accuracy for SPMD programs, as the invalidation traffic closely matches the corresponding hardware performance counters; and 2) detailed, high-level coherence statistics are very useful in detecting, isolating, and understanding coherence bottlenecks. We use ccSIM with several well-known benchmarks and find coherence optimization opportunities leading to significant reductions in coherence traffic and savings in wall-clock execution time.
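To illustrate the kind of analysis this enables, here is a greatly simplified, hypothetical Python sketch (not ccSIM; the trace format and source-line labels are invented): it replays an interleaved reference trace under a write-invalidate protocol and charges each coherence miss to the source line of the access that suffers it.

```python
from collections import defaultdict

def simulate(trace):
    """trace: list of (cpu, 'R'|'W', cache_line, source_line)."""
    holders = defaultdict(set)          # cache line -> cpus holding a valid copy
    invalidated_by = {}                 # (cpu, line) -> source line that invalidated the copy
    miss_at = defaultdict(int)          # source line -> coherence misses charged to it
    for cpu, op, line, src in trace:
        if cpu not in holders[line] and (cpu, line) in invalidated_by:
            miss_at[src] += 1           # the copy was lost to an earlier invalidation
            del invalidated_by[(cpu, line)]
        holders[line].add(cpu)
        if op == "W":                   # a write invalidates all other copies
            for other in holders[line] - {cpu}:
                invalidated_by[(other, line)] = src
            holders[line] = {cpu}
    return dict(miss_at)

trace = [(0, "R", 0x40, "a.c:10"), (1, "W", 0x40, "a.c:22"), (0, "R", 0x40, "a.c:10")]
print(simulate(trace))   # {'a.c:10': 1}: the re-read at a.c:10 is a coherence miss
```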

12.
Recent accounts of accidents draw attention to “automation surprises” that arise in safety critical systems. An automation surprise can occur when a system behaves differently from the expectations of the operator. Interface mode changes are one class of such surprises that have a significant impact on the safety of a dynamic interactive system. They may take place implicitly as a result of other system actions. Formal specifications of interactive systems provide an opportunity to analyse problems that arise in such systems. In this paper we consider the role that an interactor-based specification has as a partial model of an interactive system, so that mode consequences can be checked early in the design process. We show how interactor specifications can be translated into the SMV model checker input language and how we can use such specifications in conjunction with the model checker to analyse the potential for mode confusion in a realistic case. Our final aim is to develop a general-purpose methodology for the automated analysis of interactive systems. This verification process can be useful in raising questions that have to be addressed in a broader context of analysis.

13.
Formal specifications can help resolve both ambiguity issues and correctness problems in verifying complex hardware designs. This new methodology shows how specifications can also improve design productivity by automating many procedures that are now done manually. Input sequences, output assertions, and a simulation coverage metric for the design under verification are all generated directly from the specification.

14.
Real-time (RT) systems include hardware and software components interacting in a tight fashion. Although formal methods for RT systems development have advanced, they are sometimes difficult to apply in practical applications, and their scalability is compromised as the complexity of the system scales up. Instead, modeling and simulation (M&S) methods and tools have proven useful for verifying practical aspects of RT systems (with the added advantage of being able to include models of the physical environment they interact with). Although several efforts exist in M&S of RT systems, none of them has considered problems of transient overloading in the RT system specifications. Here, we introduce a new theoretical framework called I-DEVS (imprecise discrete event systems specification) with the goal of guaranteeing responses to inputs within specified time constraints under such transient overloading conditions. The solution presented here has the advantages of a formal specification and the practicality of an M&S-based approach. We also discuss how to define hierarchical models running in RT, and we present a set of tools that can be applied to develop RT embedded applications and RT simulations.
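The imprecise-computation idea underlying such a framework can be sketched in a few lines of hypothetical Python (this is only the scheduling intuition, not the I-DEVS formalism; the timings and handler are invented): each handler has a mandatory part and an optional refinement, and under transient overload the refinement is dropped so that some response still arrives within the time budget.

```python
def handle_event(event, budget_ms, mandatory_ms=2, optional_ms=8):
    """Return (response, time_used); degrade gracefully when the budget is short."""
    if budget_ms < mandatory_ms:
        return ("dropped", 0)                             # cannot meet even the mandatory part
    if budget_ms < mandatory_ms + optional_ms:
        return (f"approximate({event})", mandatory_ms)    # overload: skip the refinement
    return (f"precise({event})", mandatory_ms + optional_ms)

for budget in (20, 5, 1):
    print(budget, "->", handle_event("sensor_update", budget))
# 20 -> ('precise(sensor_update)', 10)
#  5 -> ('approximate(sensor_update)', 2)
#  1 -> ('dropped', 0)
```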

15.
Multi-Processor Systems on Chip (MPSoC) run multiple independent applications, often developed by different parties. The applications share the hardware resources, e.g. processors, memories and interconnect. The sharing typically causes interference between the applications, which severely complicates system integration and verification. Even if the applications are verified in isolation, the system designer must verify the combined behaviour, leading to an explosion in design complexity. Composable MPSoCs have no interference between applications, thus allowing independent design and verification. For an MPSoC to be composable, all the hardware resources must offer composability. A particularly challenging resource is the processors, often purchased as off-the-shelf intellectual property. In this work we present the design and implementation of CompOSe, a light-weight (only 1500 lines of code) composable operating system for MPSoCs. CompOSe uses fixed-size time slices, coupled with a composable scheduler, to enable composable processor sharing. Using instances of ARM7, ARM11 and the Xilinx MicroBlaze, we experimentally demonstrate the ability to provide temporal composability, even in the presence of dynamic application behaviour and multiple use cases. We do so using a diverse set of processor architectures, without requiring any hardware modifications. We also show how CompOSe allows slack to be distributed within and between applications through a novel two-level scheduler and slack-distribution system.
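The time-slicing idea can be illustrated with a small, hypothetical Python sketch (a simulation of the policy, not CompOSe itself; the TDM table and task names are invented): a fixed top-level TDM table allocates slices to applications regardless of what their tasks do, and each application schedules its own tasks round-robin within its slices, so one application cannot perturb another's timing.

```python
from itertools import cycle

def run(tdm_table, app_tasks, num_slices):
    """tdm_table: application id per slice; app_tasks: app id -> round-robin task list."""
    task_iters = {app: cycle(tasks) for app, tasks in app_tasks.items() if tasks}
    schedule = []
    for slot, app in zip(range(num_slices), cycle(tdm_table)):
        task = next(task_iters[app]) if app in task_iters else "idle"
        schedule.append((slot, app, task))
    return schedule

tdm = ["A", "B", "A", "C"]                 # fixed allocation, independent of task behaviour
tasks = {"A": ["a1", "a2"], "B": ["b1"], "C": []}
for slot, app, task in run(tdm, tasks, 8):
    print(f"slice {slot}: app {app} runs {task}")
```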

16.
17.
18.
The aim of the open distributed processing (ODP) information viewpoint is to describe the semantics of the information and of the information processing in a system, from a global point of view, without having to worry about other considerations, such as how the information will eventually be distributed or implemented, or the technology used to achieve such an implementation. Although several notations have been proposed to model this ODP viewpoint, they are not expressive enough to faithfully represent all the information concepts, or they tend to suffer from a lack of (formal) support, or both. In this paper, we explore the use of Maude as a formal notation for writing ODP information specifications. Maude is an executable rewriting logic language especially well suited for the specification of object-oriented open and distributed systems. We show how Maude offers a simple, natural, and accurate way of modeling the ODP information viewpoint concepts, allows the execution of the specifications produced, and offers good tool support for reasoning about them.

19.
3-D data visualization is very useful for medical imaging and computational fluid dynamics. Volume rendering can be used to exhibit the shape and volumetric properties of 3-D objects. However, volume rendering requires a considerable amount of time to process the large volume of data. To deliver the necessary rendering rates, parallel hardware architectures such as distributed memory multicomputers offer viable solutions. The challenge is to design efficient parallel algorithms that utilize the hardware parallelism effectively. In this paper, we present two efficient parallel volume rendering algorithms, the 1D-partition and 2D-partition methods, based on the shear-warp factorization for distributed memory multicomputers. The 1D-partition method has a performance bound that depends on the size of the volume data. If the number of processors is below this threshold, the 1D-partition method can deliver a good rendering rate; if the number of processors is above the threshold, the 2D-partition method can be used. To evaluate the performance of these two algorithms, we implemented the proposed methods along with the slice data partitioning, volume data partitioning, and sheared volume data partitioning methods on an IBM SP2 parallel machine. Six volume data sets were used as the test samples. The experimental results show that the proposed methods outperform other comparable algorithms for all test samples. When the number of processors exceeds the threshold, the experimental results also demonstrate that the 2D-partition method is better than the 1D-partition method.
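As a hypothetical illustration of the 1D-partition idea (a data-distribution sketch only, not the actual shear-warp renderer), the Python snippet below assigns contiguous blocks of intermediate-image rows to processors; each processor would then composite only the sheared voxel data that projects onto its rows.

```python
def partition_1d(num_rows, num_procs):
    """Assign contiguous, near-equal blocks of intermediate-image rows to processors."""
    base, extra = divmod(num_rows, num_procs)
    assignment, start = {}, 0
    for p in range(num_procs):
        count = base + (1 if p < extra else 0)   # spread the remainder over the first procs
        assignment[p] = range(start, start + count)
        start += count
    return assignment

print(partition_1d(10, 3))   # {0: range(0, 4), 1: range(4, 7), 2: range(7, 10)}
```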

20.
In this paper we show how string rewriting methods can be applied to give a new method of computing double cosets. Previous methods for double cosets were enumerative and thus restricted to finite examples. Our rewriting methods do not suffer this restriction, and we present some examples of infinite double coset systems which can now easily be solved using our approach. Even when both enumerative and rewriting techniques are available, our rewriting methods will be competitive because (i) they do not require the preliminary calculation of cosets; and (ii) as with single coset problems, there are many examples for which rewriting is more effective than enumeration. Automata provide the means for identifying expressions for normal forms in infinite situations, and we show how they may be constructed in this setting. Further, related results on logged string rewriting for monoid presentations are exploited to show how witnesses for the computations can be provided and how information about the subgroups and the relations between them can be extracted. Finally, we discuss how the double coset problem is a special case of the problem of computing induced actions of categories, which demonstrates that our rewriting methods are applicable to a much wider class of problems than just the double coset problem.
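At the core of such methods is reduction to normal form by a terminating, confluent string rewriting system; the hypothetical Python sketch below (a toy monoid presentation, not the paper's double coset systems) computes normal forms by leftmost rewriting, so that two words represent the same element exactly when their normal forms coincide.

```python
def normal_form(word, rules):
    """Rewrite the leftmost redex until no rule applies (rules assumed terminating and confluent)."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            i = word.find(lhs)
            if i >= 0:
                word = word[:i] + rhs + word[i + len(lhs):]
                changed = True
                break
    return word

# Complete rewriting system for the monoid presentation < a, b | ba = ab, aa = 1 >:
rules = [("ba", "ab"), ("aa", "")]
print(normal_form("ababa", rules))                            # 'abb' (a^3 b^2 = a b^2 since aa = 1)
print(normal_form("aab", rules) == normal_form("b", rules))   # True: same element
```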
