Similar Documents
20 similar records found.
1.
Recent years have seen increasingly widespread use of highly concurrent data structures in both multi-core and distributed computing environments, escalating the priority of verifying their correctness. Quasi linearizability is a quantitative variation of the standard linearizability correctness condition that allows more implementation freedom for performance optimization. However, ensuring that an implementation satisfies the quantitative aspect of this new correctness condition is often an arduous task. In this paper, we propose the first automated method for formally verifying quasi linearizability of the implementation model of a concurrent data structure with respect to its sequential specification. The method is based on checking a relaxed version of the refinement relation between the implementation model and the specification model through explicit-state model checking. Our method can directly handle concurrent systems where each thread or process makes infinitely many method calls. Furthermore, unlike many existing verification methods, it does not require the user to supply annotations of the linearization points. We have implemented the new method in the PAT verification framework. Our experimental evaluation shows that the method is effective in verifying the new quasi linearizability requirement and detecting violations.
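The quantitative relaxation being verified can be pictured with a small example (an illustrative Python sketch, not the paper's PAT-based refinement check; the per-element displacement view of quasi linearizability used here is a simplification): a k-relaxed FIFO queue may answer dequeues out of order, provided no element is returned more than k positions away from its strict FIFO position.

    # Sketch: is an observed dequeue order quasi-legal for a k-relaxed queue?
    def quasi_legal(enqueued, dequeued, k):
        pos = {v: i for i, v in enumerate(enqueued)}   # strict FIFO position
        return (sorted(dequeued) == sorted(enqueued) and
                all(abs(i - pos[v]) <= k for i, v in enumerate(dequeued)))

    assert quasi_legal([1, 2, 3, 4], [2, 1, 4, 3], k=1)       # each off by one
    assert not quasi_legal([1, 2, 3, 4], [3, 1, 2, 4], k=1)   # 3 jumped two places

A verifier in the spirit of the paper checks that every behaviour of the implementation model matches, under such a relaxed criterion, some behaviour of the sequential specification.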

2.
We present a comprehensive refinement calculus for the development of sequential, real-time programs from real-time specifications. A specification may include not only execution time limits, but also requirements on the behaviour of outputs over the duration of the execution of the program. The approach allows refinement steps that separate timing constraints and functional requirements. New rules are provided for handling timing constraints, but the refinement of components implementing functional requirements is essentially the same as in the standard refinement calculus. The product of the refinement process is a program in the target programming language extended with timing deadline directives. The extended language is a machine-independent, real-time programming language. To provide valid machine code for a particular model of machine, the machine code produced by a compiler must be analysed to guarantee that it meets the specified timing deadlines. Received: 27 September 1997 / 13 June 2000
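A hedged sketch of the flavour of such a development step (the notation below is illustrative, in the style of a specification statement with a timed postcondition; it is not the paper's notation):

    x : [ true ,  x = x_0 + 1  and  tau <= tau_0 + 5ms ]    (timed specification)
      is refined by
    x := x + 1 ; deadline (tau_0 + 5ms)

The assignment is introduced exactly as in the standard refinement calculus; the deadline directive carries the timing constraint forward, to be discharged later by timing analysis of the compiled machine code.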

3.
Although many programming languages contain exception handling mechanisms, their formal treatment, necessary for rigorous development, can be complex. Nevertheless, this paper presents a simple incorporation of exit commands and exception blocks into a rigorous program development method. The refinement calculus, chosen for the exercise, is a method of developing imperative programs. It is based on weakest preconditions, although they are not used explicitly during program construction; they merely justify the general method. In the style of the refinement calculus, program development laws are given that introduce and allow the manipulation of exits. The soundness of the new laws is shown using weakest preconditions (as for the existing refinement calculus laws). The extension of weakest preconditions needed to handle exits is a variation on earlier work of Cristian; the variation is necessary to handle nondeterminism.
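The needed extension can be pictured with weakest preconditions taken over a pair of postconditions, one for normal and one for exceptional (exit) termination; the following equations are a standard sketch of this idea, not quoted from the paper:

    wp(skip, (P, Q))         =  P
    wp(exit, (P, Q))         =  Q
    wp(S ; T, (P, Q))        =  wp(S, (wp(T, (P, Q)), Q))
    wp(block S end, (P, Q))  =  wp(S, (P, P))

so executing exit inside a block abandons the rest of the block and establishes the block's normal postcondition.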

4.
A new technique for proving timing properties of timing-based algorithms is described; it is an extension of the mapping techniques previously used in proofs of safety properties for asynchronous concurrent systems. The key to the method is a way of representing a system with timing constraints as an automaton whose state includes predictive timing information. Timing assumptions and timing requirements for the system are both represented in this way. A multi-valued mapping from the assumptions automaton to the requirements automaton is then used to show that the given system satisfies the requirements. One type of mapping is based on a collection of progress functions providing measures of progress toward timing goals. The technique is illustrated with two examples, a simple resource manager and a two-process race system.

Nancy A. Lynch received the B.S. degree in mathematics from Brooklyn College, Brooklyn, NY, in 1968, and the Ph.D. degree in mathematics from the Massachusetts Institute of Technology, Cambridge, MA, in 1972. She is presently a professor of computer science and electrical engineering at the Massachusetts Institute of Technology. She has also been on the computer science faculty at Georgia Institute of Technology and on the mathematics faculty at Tufts University and the University of Southern California. Her research interests are in distributed and real-time computing and theoretical computer science. In particular, she has worked on formal models and verification methods, on algorithm design and analysis, and on impossibility results. She also likes to hike and ski.

Hagit Attiya received the B.Sc. degree in Mathematics and Computer Science from the Hebrew University of Jerusalem in 1981, and the M.Sc. and Ph.D. degrees in Computer Science from the Hebrew University of Jerusalem in 1983 and 1987, respectively. She is presently a senior lecturer at the department of Computer Science at the Technion, Israel Institute of Technology. Prior to this, she was a post-doctoral research associate at the Laboratory for Computer Science at M.I.T. Her general research interests are distributed computation and theoretical computer science. More specific interests include fault-tolerance, timing-based and asynchronous algorithms.

This work was supported by ONR contracts N00014-85-K-0168 and N00014-91-J-1046, by NSF grants CCR-8611442 and CCR-8915206, and by DARPA contracts N00014-87-K-0825 and N00014-89-J-1988.
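The progress-function idea can be read as follows (an informal gloss on the abstract, not the paper's formal definitions): for each timing requirement one exhibits a function f from automaton states to nonnegative reals such that f(s) bounds the time remaining before the requirement's deadline, no discrete step of the assumptions automaton increases f, and f decreases by at least the elapsed time as time passes; exhibiting such a function for every requirement then witnesses the multi-valued mapping into the requirements automaton.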

5.
6.
7.
Algorithms designed for VLSI implementation are commonly described by directed graphs, in which the nodes represent functional units and the arcs indicate communication links. We give a denotational semantics for such a graph in terms of the least fixed point of a set of (mutually recursive) function definitions, describing the outputs produced at each node as a function of time. This semantics is consistent with the conventional clocked operational semantics of the system. A retiming is a systematic modification of the internode delays of a design, often used to convert an algorithm design into a systolic form. The utility of such retimings in optimizing the behavior of designs is well known. We use fixed-point semantics to provide simple proofs of the correctness of certain retiming transformations from the literature and to justify other design transformations such as pipelining.
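The correctness of a basic retiming move can be checked directly in the stream semantics: pulling one unit of delay off every input of a node and placing it on the node's output leaves the output stream unchanged, provided the initial register contents agree. A small Python sketch of this check (the clocked model and names here are illustrative assumptions, not the paper's formalism):

    def delay(stream, d, init=0):
        # d-cycle register chain with initial contents `init`
        return [init] * d + stream[:len(stream) - d]

    def node(f, inputs):
        # combinational node applying f pointwise at each clock tick
        return [f(*vals) for vals in zip(*inputs)]

    xs, ys = [1, 2, 3, 4, 5], [10, 20, 30, 40, 50]
    add = lambda a, b: a + b

    before = node(add, [delay(xs, 1), delay(ys, 1)])  # registers on both inputs
    after  = delay(node(add, [xs, ys]), 1)            # single register on the output
    assert before == after   # identical stream behaviour, as the retiming lemma predicts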

8.
The advantage of COOZ (Complete Object-Oriented Z) is its ability to specify large-scale software, but it does not support refinement calculus, which confines its use in software development. Incorporating refinement calculus into COOZ overcomes this disadvantage during design and implementation, and removes the separation in structure and notation between design and implementation, so that software can be developed smoothly within a single framework. Combining COOZ with refinement calculus yields an object-oriented framework in which a COOZ specification is refined stepwise to code by calculation. This paper establishes a development model based on COOZ and refinement calculus. Data refinement is harder to handle in a refinement tool than ordinary algorithmic refinement, since data refinement usually has to be applied to a large program component at once. On the implementation side, a data refinement calculator is constructed, and an approach to data refinement based on the data refinement calculus and program window inference is offered.
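For orientation, the obligations such a data refinement calculator discharges are usually of the simulation form below, stated here generically rather than in COOZ notation: given a coupling invariant I relating abstract state a to concrete state c, a concrete operation C refines an abstract operation A when

    I(a, c) and pre_A(a)                implies  pre_C(c)
    I(a, c) and pre_A(a) and C(c, c')   implies  exists a'. A(a, a') and I(a', c')

that is, the concrete operation is applicable whenever the abstract one is, and every concrete step can be matched by an abstract step that re-establishes the invariant.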

9.
In the refinement calculus, program statements are modelled as predicate transformers. A product operator for predicate transformers was introduced by Martin [18] and Naumann [25] using category theoretic considerations. In this paper, we look more closely at the refinement-oriented properties of this operator and at its applications. We also generalise the definition of the product operator to form what we call a fusion operator. Together, the fusion and product operators provide us with algebraic ways of composing program statements in the refinement calculus in order to model effects such as conjunction of specifications, simultaneous execution, and embedding of smaller programs into larger contexts. Received: 12 February 1996 / 25 August 1998
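One natural expectation of any such product (a hedged reading for orientation; the paper's category-theoretic definition is more general) is that on rectangular postconditions it acts componentwise:

    (S × T)(q1 × q2)  =  S(q1) × T(q2)

so S and T each act on their own half of a product state space, which is what makes the operator a model of simultaneous execution and of embedding a small program into a larger context.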

10.
Essentially, a connection graph is merely a data structure for a set of clauses indicating possible refutations. The graph itself is not an inference system. To use the graph, one has to introduce operations on the graph. In this paper, we shall describe a method to obtain rewriting rules from the graph, and then to show that these rewriting rules can be used to generate a refutation plan that may correspond to a large number of linear resolution refutations. Using this method, many redundant resolution steps can be avoided.
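The underlying data structure is easy to picture; a toy Python construction of the links of a connection graph (illustrative only; the paper's contribution is the rewriting rules extracted from such a graph, not the graph itself):

    clauses = [{"P", "-Q"}, {"Q", "-R"}, {"R"}, {"-P"}]

    def complement(lit):
        return lit[1:] if lit.startswith("-") else "-" + lit

    # A link joins complementary literals in two different clauses.
    links = [(i, a, j, b)
             for i, ci in enumerate(clauses) for a in ci
             for j, cj in enumerate(clauses) if j > i
             for b in cj if b == complement(a)]
    print(links)   # e.g. (0, 'P', 3, '-P'), (0, '-Q', 1, 'Q'), (1, '-R', 2, 'R')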

11.
A digital communication network can be modeled as an adversarial queueing network. An adversarial queueing network is defined to be stable if the number of packets stays bounded over time. A central question is to determine which adversarial queueing networks are stable under every work-conserving packet routing policy. Our main result is that stability of an adversarial queueing network is implied by stability of an associated fluid queueing network.
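In the usual formulation of this model (from adversarial queueing theory; a sketch for orientation, not quoted from the paper), an adversary of rate r <= 1 with window w may, for every edge e and every interval of w consecutive time steps, inject at most r·w packets whose routes use e; the network is stable under a scheduling policy if, against every such adversary, the number of packets present in the system stays bounded over time.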

12.
The algebra of M-nets, a high-level class of labelled Petri nets, was introduced in order to cope with the size problem of the low-level Petri box calculus, especially when the calculus is applied as a semantic domain for parallel programming languages. General, unrestricted and parameterised refinement and recursion operators, allowing one to represent the (possibly recursive and concurrent) procedure call mechanism, are introduced into the M-net calculus.

13.
Henninger, S. IEEE Software, 1994, 11(5): 48-59
Component libraries are the dominant paradigm for software reuse, but they suffer from a lack of tools that support the problem-solving process of locating relevant components. Most retrieval tools assume that retrieval is a simple matter of matching well-formed queries to a repository. But forming queries can be difficult. A designer's understanding of the problem evolves while searching for a component, and large repositories often use an esoteric vocabulary. CodeFinder is a retrieval system that combines retrieval by reformulation (which supports incremental query construction) with spreading activation (which retrieves items related to the query) to help users find information. I designed it to investigate the hypothesis that this combination makes for a more effective retrieval system. My study confirmed that it was more helpful than other query systems to users seeking relevant information under ill-defined tasks and vocabulary mismatches. The study supports the hypothesis that combining the two techniques effectively satisfies the kinds of information needs typically encountered in software design.
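The spreading-activation half of the design is easy to sketch; the graph, names, and weights below are illustrative assumptions, not CodeFinder's actual network or parameters:

    # Toy spreading activation over a term-component graph.
    graph = {
        "sort": {"quicksort": 0.9, "heap": 0.4},
        "heap": {"priority_queue": 0.8},
    }

    def spread(seeds, hops=2, decay=0.5):
        act, frontier = dict(seeds), dict(seeds)
        for _ in range(hops):
            nxt = {}
            for node, a in frontier.items():
                for nbr, w in graph.get(node, {}).items():
                    nxt[nbr] = nxt.get(nbr, 0.0) + a * w * decay
            for node, a in nxt.items():
                act[node] = act.get(node, 0.0) + a
            frontier = nxt
        return sorted(act.items(), key=lambda kv: -kv[1])

    # Querying "sort" also surfaces priority_queue, which never matched the
    # query text: this is how vocabulary mismatches are bridged.
    print(spread({"sort": 1.0}))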

14.
In this paper, we give a numerical algorithm able to prove whether a set S described by nonlinear inequalities is path-connected or not. To our knowledge, no other algorithm (numerical or symbolic) is able to deal with this type of problem. The proposed approach uses interval arithmetic to build a graph which has exactly the same number of connected components as S. Examples illustrate the principle of the approach.
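The principle can be imitated with a crude grid version (a Python sketch only: it samples box centers, whereas the paper uses interval arithmetic to make the component count provably exact):

    def components(inside, lo=-2.0, hi=2.0, n=40):
        h = (hi - lo) / n
        keep = {(i, j) for i in range(n) for j in range(n)
                if inside(lo + (i + .5) * h, lo + (j + .5) * h)}
        parent = {c: c for c in keep}
        def find(c):                       # union-find with path halving
            while parent[c] != c:
                parent[c] = parent[parent[c]]
                c = parent[c]
            return c
        for (i, j) in keep:                # connect adjacent kept boxes
            for nbr in ((i + 1, j), (i, j + 1)):
                if nbr in keep:
                    parent[find(nbr)] = find((i, j))
        return len({find(c) for c in keep})

    # The annulus 1 <= x^2 + y^2 <= 2 is path-connected: one component.
    print(components(lambda x, y: 1 <= x*x + y*y <= 2))                       # -> 1
    # Two disjoint disks of radius 0.5 around (1,0) and (-1,0): two components.
    print(components(lambda x, y: min((x-1)**2, (x+1)**2) + y*y <= 0.25))     # -> 2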

15.
This paper discusses alternative methods for constructing a 0–1 integer programming problem from a propositional calculus problem and the use of the resulting mathematical program to solve the related logic problem. This paper also identifies some special structures associated with the constraint sets and discusses several fundamental results concerning methods of preprocessing the logical inferences into constraints.
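The core construction is simple to illustrate (the standard clause encoding, shown as an example rather than taken verbatim from the paper): a clause becomes a linear 0–1 constraint by summing its literals, with a negated variable x contributing (1 - x). For instance,

    x1 or (not x2) or x3   becomes   x1 + (1 - x2) + x3 >= 1,   i.e.   x1 - x2 + x3 >= 0,

with x1, x2, x3 in {0, 1}; the clause set is satisfiable exactly when the resulting system of constraints has a feasible 0–1 point.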

16.
This paper is about specification and verification of processes, modelled as CCS agents. We show, by means of examples, that Hennessy-Milner Logic (HML) with recursion is a suitable language for expressing implicit or partial specifications. By extending this specification language with refinement operators, i.e. operators that describe the internal structure of a system, we obtain a calculus for stepwise refinement of agents from a specification in HML to a realisation in CCS. The method is demonstrated by proving the alternating-bit protocol correct under weak assumptions about the unreliable media. This paper was also presented at the BCS-FACS workshop on Specification and Verification of Concurrent Systems, University of Stirling, July 1988, under the title: Hennessy-Milner logic with recursion as a specification language, and a refinement calculus based on it.
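For orientation, the specification language has the familiar grammar (a standard presentation of HML with recursion, sketched here rather than copied from the paper):

    F ::= tt | ff | F ∧ F | F ∨ F | ⟨a⟩F | [a]F | X

where ⟨a⟩F holds of a process with some a-transition into a state satisfying F, [a]F requires every a-transition to do so, and the variables X are bound by recursive definitions X = F_X read as greatest fixed points, which is what lets the logic express ongoing properties of nonterminating systems such as protocols.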

17.
A uniform treatment of specifications, programs, and programming is presented. The treatment is based on adding a specification statement to a given procedural language and defining its semantics. The extended language is thus a specification language and programs are viewed as a subclass of specifications. A partial ordering on specifications/programs corresponding to ‘more defined’ is defined. In this partial ordering the program/specification hybrids that arise in the construction of a program by stepwise refinement form a monotonic sequence. We show how Dijkstra's calculus for the derivation of programs corresponds to constructing this monotonic sequence. Formalizing the calculus thus gives some insight into the intellectual activity it demands and allows us to hint at further developments.
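In weakest-precondition terms the ordering is standardly rendered as follows (a sketch; the paper's own notation may differ):

    S ⊑ T   iff   wp(S, R) ⇒ wp(T, R)   for every postcondition R

so a program/specification hybrid is 'more defined' when it honours every obligation of its predecessor, and a stepwise development is precisely a ⊑-monotonic sequence whose first element is the specification statement and whose last is code.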

18.
Visual measurements of modeled 3-D landmarks provide strong constraints on the location and orientation of a mobile robot. To make the landmark-based robot navigation approach widely applicable, it is necessary to automatically build the landmark models. A substantial amount of effort has been invested by computer vision researchers over the past 10 years on developing robust methods for computing 3-D structure from a sequence of 2-D images. However, robust computation of 3-D structure, with respect to even small amounts of input image noise, has remained elusive. The approach adopted in this article is one of model extension and refinement. A partial model of the environment is assumed to exist and this model is extended over a sequence of frames. As will be shown in the experiments, prior knowledge of the small partial model greatly enhances the robustness of the 3-D structure computations. The initial 3-D model may have errors and these are also refined over the sequence of frames.

19.
As the Internet grows, it is becoming less feasible for customers and merchants to manually visit each web site, analyze the information there, and make sound business decisions regarding the trading of goods or services. To cope with this evolution, software agents can be designed to automate the more routine, tedious, and time-consuming tasks involved in current trading processes. At a higher level, agents may also be able to negotiate and make autonomous decisions and commitments on behalf of their owners.

This paper describes an agent implementation using the situation calculus, which offers a possibly unifying paradigm for dynamic agents. Interesting applications are currently being developed. Our contribution is a situation calculus agent system developed for e-business. Ongoing work is focused on implementing this system in an open marketplace environment.
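For orientation, change in the situation calculus is captured by successor state axioms of the familiar Reiter form; the trading fluent and actions below are hypothetical illustrations, not this system's actual axiomatisation:

    Owns(agt, g, do(a, s))  ≡  a = buy(agt, g)  ∨  (Owns(agt, g, s) ∧ a ≠ sell(agt, g))

read: after performing action a in situation s, the agent owns good g exactly when a was a purchase of g, or the agent already owned g and a did not sell it.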


20.
Using typed lambda calculus to implement formal systems on a machine
Much research has been devoted to building computer systems for checking proofs or for interactively developing correct proofs in specific logical systems. However, implementing a proof environment for a specific logical system is both complex and time-consuming; this, together with the proliferation of logics, suggests that a uniform and reliable alternative is desirable. One such alternative is the Edinburgh Logical Framework (LF), developed in the late eighties at the LFCS (Laboratory for Foundations of Computer Science). The LF is a logic-independent tool which, given a specification for a logical system, synthesizes a proof editor and checker for that system. Its specification language is based on a general theory of logics, which enables one to capture uniformities and idiosyncrasies of a large class of logics without sacrificing generality for tractability. Peculiarities (such as side conditions on rule application, variable occurrence or formula formation) are expressed at the level of the specification. In this paper we provide a broad illustration of its applicability and discuss to what extent it is successful. The analysis (of the formal presentation) of a system carried out through encoding often illuminates the system itself; this paper also deals with this phenomenon.
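A taste of the LF specification style (a standard textbook fragment assumed for illustration, not drawn from this paper): the implicational fragment of propositional logic is specified by declaring its syntax and its provability judgement as types,

    o     : Type                          (formulae)
    imp   : o -> o -> o
    true  : o -> Type                     (the judgement "A is provable")
    imp_i : {A:o} {B:o} (true A -> true B) -> true (imp A B)
    imp_e : {A:o} {B:o} true (imp A B) -> true A -> true B

after which proof checking for the encoded logic is exactly type checking in the framework's dependently typed lambda calculus, with binding and side conditions handled by the framework rather than by ad hoc code in each proof checker.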
