Similar Documents
20 similar documents retrieved.
1.
Concurrent programs and concurrent systems can achieve very high execution efficiency and faster response times than their sequential counterparts, and are therefore widely used in practice. However, it is often difficult to guarantee the correctness of their implementations, and errors in deployed applications can have serious consequences. Moreover, the nondeterminism of concurrent execution makes correctness verification extremely difficult. Among formal verification methods, interactive theorem provers allow concurrent programs to be verified rigorously. This paper surveys the verification goals that can be used in interactive theorem proving to characterize the correctness of concurrent programs, including Hoare triples, linearizability, contextual refinement, and logical atomicity. Interactive theorem proving commonly relies on program logics for such verification; this paper analyzes a series of theoretical results and corresponding formalizations based on concurrent separation logic, rely-guarantee logic, and relational Hoare logic, and summarizes the program verification tools and verification results that employ these methods.
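As a point of reference for the first of these verification goals, a Hoare triple $\{P\}\,C\,\{Q\}$ asserts that if precondition $P$ holds before command $C$ runs, then postcondition $Q$ holds when $C$ terminates. A minimal sequential instance (our own illustration, not drawn from the surveyed work) is:

$$\{x = n\}\;\; x := x + 1 \;\;\{x = n + 1\}$$

Concurrent program logics such as rely-guarantee strengthen this judgment with explicit assumptions about interference from other threads.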

2.
Linearizability is widely accepted as the correctness criterion for concurrent objects, but it has been shown to be inadequate as a correctness criterion for concurrent objects containing randomized statements. To address this, Golab et al. proposed the notion of strong linearizability, which adds a prefix-preservation property to the definition of linearizability and thus imposes stronger constraints on concurrent objects. Research on strong linearizability has concentrated on the feasibility of constructing strongly linearizable concurrent objects from particular base objects; work on checking and verifying strong linearizability for common concurrent objects remains relatively scarce.
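For orientation, the prefix-preservation requirement mentioned here is commonly paraphrased as the existence of a single linearization function that commutes with taking history prefixes; the formulation below is our own sketch of that idea, not the paper's definition:

$$\exists f.\ \forall H, H'.\ f(H) \in \mathrm{lin}(H)\ \wedge\ \big(H \sqsubseteq_{\mathrm{prefix}} H' \Rightarrow f(H) \sqsubseteq_{\mathrm{prefix}} f(H')\big),$$

where $\mathrm{lin}(H)$ denotes the legal linearizations of history $H$.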

3.
We present a framework for the specification and verification of reactive concurrent programs using general-purpose mechanical theorem proving. We define specifications for concurrent programs by formalizing a notion of refinements analogous to stuttering trace containment. The formalization supports the definition of intuitive specifications of the intended behavior of a program. We present a collection of proof rules that can be effectively orchestrated by a theorem prover to reason about complex programs using refinements. The proof rules systematically reduce the correctness proof for a concurrent program to the definition and proof of an invariant. We include automated support for discharging this invariant proof with a predicate abstraction tool that leverages the existing theorems proven about the components of the concurrent programs. The framework is integrated with the ACL2 theorem prover and we demonstrate its use in the verification of several concurrent programs in ACL2.

4.
Concurrent data structures are usually designed to satisfy correctness conditions such as sequential consistency or linearizability. In this paper, we consider the following fundamental question: What guarantees are provided by these conditions for client programs? We formally show that these conditions can be characterized in terms of observational refinement. Our study also provides a new understanding of sequential consistency and linearizability in terms of abstraction of dependency between computation steps of client programs.
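The observational refinement relation referred to here is usually stated along the following lines (a schematic paraphrase, not the paper's notation): an implementation $O_c$ observationally refines a specification $O_a$ when no client program can observe more behaviours with $O_c$ than with $O_a$,

$$O_c \sqsubseteq_{\mathrm{obs}} O_a \;\iff\; \forall P.\ \mathrm{Obs}(P[O_c]) \subseteq \mathrm{Obs}(P[O_a]),$$

and the characterization relates linearizability (and sequential consistency) of $O_c$ with respect to $O_a$ to inclusions of this form.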

5.
Recent years have seen increasingly widespread use of highly concurrent data structures in both multi-core and distributed computing environments, thereby escalating the priority for verifying their correctness. Quasi linearizability is a quantitative variation of the standard linearizability correctness condition that allows more implementation freedom for performance optimization. However, ensuring that the implementation satisfies the quantitative aspect of this new correctness condition is often an arduous task. In this paper, we propose the first automated method for formally verifying quasi linearizability of the implementation model of a concurrent data structure with respect to its sequential specification. The method is based on checking a relaxed version of the refinement relation between the implementation model and the specification model through explicit state model checking. Our method can directly handle concurrent systems where each thread or process makes infinitely many method calls. Furthermore, unlike many existing verification methods, it does not require the user to supply annotations of the linearization points. We have implemented the new method in the PAT verification framework. Our experimental evaluation shows that the method is effective in verifying the new quasi linearizability requirement and detecting violations.
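To make the quantitative relaxation concrete, the toy sketch below shows a k-relaxed queue whose dequeue may return any of the first k+1 pending elements, so its histories stay within a bounded distance of strict FIFO order. It only illustrates the kind of behaviour quasi linearizability permits; it is not thread-safe and is not the implementation model verified in the paper.

```python
import random
from collections import deque

class RelaxedQueue:
    """Toy k-relaxed FIFO queue: dequeue picks one of the first k+1 items."""

    def __init__(self, k):
        self.k = k
        self.items = deque()

    def enqueue(self, value):
        self.items.append(value)

    def dequeue(self):
        if not self.items:
            return None
        # Relaxation: choose among the first k+1 elements rather than the head,
        # so the returned order can deviate from FIFO by at most k positions.
        i = random.randrange(min(self.k + 1, len(self.items)))
        value = self.items[i]
        del self.items[i]
        return value

q = RelaxedQueue(k=2)
for v in range(5):
    q.enqueue(v)
print([q.dequeue() for _ in range(5)])  # e.g. [1, 0, 3, 2, 4]
```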

6.
It is shown that a combination of specification and program refinement may be applied to deriving efficient concurrent rule-based programs. Specification refinement is used to generate an initial rule-based program that is refined into a program which is highly concurrent and efficient. This program derivation strategy is divided into two major tasks. The first task relies on specification refinement. Techniques similar to those employed in the derivation of UNITY programs are used to produce a correct rule-based program having a static knowledge base. The second task involves program refinement and is specific to the development of concurrent rule-based programs. It relies heavily on the availability of a computational model, such as Swarm, that has the ability to dynamically restructure the knowledge base. The ways in which a Swarm program can be translated to OPS5 specifically, given some restrictions, while maintaining the correctness criteria are also discussed.

7.
Contextual refinement is a compositional approach to the verification of concurrent objects. There has been much work designing program logics to prove the contextual refinement between the object implementation and its abstract specification. However, these program logics for contextual refinement verification cannot support objects with resource ownership transfer, which is a common pattern in many concurrent objects, such as the memory management module in OS kernels, which transfers allocated memory blocks between the object and its clients. In this paper, we propose a new approach to give abstract, implementation-independent specifications to concurrent objects with ownership transfer. We also design a program logic to verify contextual refinement of concurrent objects with respect to their abstract specifications. We have successfully applied our logic to verifying an implementation of the memory management module, where the implementation is an appropriately simplified version of the original from a real-world preemptive OS kernel.
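For reference, contextual refinement between an object implementation $\Pi$ and its abstract specification $\Pi_A$ is typically phrased as a quantification over all client contexts $C$ (our paraphrase of the standard definition, not the paper's exact formulation):

$$\Pi \sqsubseteq_{\mathrm{ctx}} \Pi_A \;\iff\; \forall C.\ \mathrm{ObsBeh}(C[\Pi]) \subseteq \mathrm{ObsBeh}(C[\Pi_A]).$$

The difficulty addressed here is giving $\Pi_A$ a sensible meaning when clients and the object pass ownership of resources, such as memory blocks, back and forth.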

8.
General logic programs are those that contain both positive and negative subgoals in their clause bodies. For such programs Fitting proposed an elegant 3-valued minimum model semantics that avoids some impracticalities of previous approaches. Here we present a method to compute this Fitting model for deductive databases. We introduce partial relations, which are the semantic objects associated with predicate symbols, and define algebraic operators over them. The first step in our model computation method is to convert the database rules into partial relation definitions involving these operators. The second step is to build the minimum model iteratively. We give algorithms for both steps and show their termination and correctness. We also suggest extensions to our method for computing the well-founded model proposed by van Gelder, Ross and Schlipf.
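As an illustration of the iterative step, the sketch below computes the Kripke-Kleene/Fitting model of a ground (propositional) general program by iterating a 3-valued immediate-consequence operator from the empty interpretation. The rule representation and helper names are our own; this is not the partial-relation algebra developed in the paper.

```python
def fitting_model(atoms, rules):
    """rules maps an atom to a list of bodies; each body is a list of
    (atom, is_positive) literals.  Returns (true, false, undefined) sets."""
    true_set, false_set = set(), set()
    while True:
        def lit_true(a, pos):
            return a in true_set if pos else a in false_set
        def lit_false(a, pos):
            return a in false_set if pos else a in true_set
        # An atom becomes true if some rule body is entirely true, and false
        # if every rule body contains a false literal (so atoms with no
        # defining rules are false).
        new_true = {a for a in atoms
                    if any(all(lit_true(b, p) for b, p in body)
                           for body in rules.get(a, []))}
        new_false = {a for a in atoms
                     if all(any(lit_false(b, p) for b, p in body)
                            for body in rules.get(a, []))}
        if new_true == true_set and new_false == false_set:
            return true_set, false_set, atoms - true_set - false_set
        true_set, false_set = new_true, new_false

# p <- not q, q <- q: the Fitting model leaves both p and q undefined, whereas
# the well-founded model mentioned at the end would make q false and p true.
print(fitting_model({"p", "q"}, {"p": [[("q", False)]], "q": [[("q", True)]]}))
```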

9.
A calculus of refinements for program derivations
A calculus of program refinements is described, to be used as a tool for the step-by-step derivation of correct programs. A derivation step is considered correct if the new program preserves the total correctness of the old program. This requirement is expressed as a relation of (correct) refinement between nondeterministic program statements. The properties of this relation are studied in detail. The usual sequential statement constructors are shown to be monotone with respect to this relation, and it is shown how refinement between statements can be reduced to a proof of total correctness of the refining statement. Special emphasis is put on the correctness of replacement steps, where some component of a program is replaced by another component. A method by which assertions can be added to statements to justify replacements in specific contexts is developed. The paper extends the weakest precondition technique of Dijkstra to proving the correctness of larger program derivation steps, thus providing a unified framework for the axiomatic, stepwise refinement, and transformational approaches to program construction and verification.
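In weakest-precondition terms, the refinement relation studied here is conventionally written as follows (standard notation, not a quotation from the paper): statement $S$ is correctly refined by $S'$ when $S'$ establishes every postcondition that $S$ does,

$$S \sqsubseteq S' \;\iff\; \forall Q.\ \mathrm{wp}(S, Q) \Rightarrow \mathrm{wp}(S', Q),$$

and monotonicity of the sequential constructors means, for example, that $S_1 \sqsubseteq S_1'$ and $S_2 \sqsubseteq S_2'$ together imply $S_1; S_2 \sqsubseteq S_1'; S_2'$, which is what licenses replacing a component inside a larger program.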

10.
Program transformation techniques have been extensively studied in the framework of functional and logic languages, where they were applied mainly to obtain more efficient and readable programs. All these works are based on the Unfold/Fold program transformation method developed by Burstall and Darlington in the context of their recursive equational language. The use of Unfold/Fold based transformations for concurrent languages is a relevant issue that has not yet received adequate attention. In this paper we define a transformation methodology for CCS. We give a set of general rules which are a specialization of classical program transformation rules, such as Fold and Unfold. Moreover, we define the general form of other rules, "oriented" to the goal of a transformation strategy, and we give conditions for the correctness of these rules. We prove that a strategy using the general rules and a set of goal-oriented rules is sound, i.e. it transforms CCS programs into equivalent ones. We show an example of application of our method. We define a strategy to transform, if possible, a full CCS program into an equivalent program whose semantics is a finite transition system. We show that, by means of our methodology, we are able to find finite representations for a class of CCS programs which is larger than the ones handled by other existing methods. Our transformational approach can be seen as unifying in a common framework a set of different techniques of program analysis. A further advantage of our approach is that it is based only on syntactic transformations, and thus it does not require any semantic information.
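As a minimal illustration of an Unfold step in a process-calculus setting (our own toy example, not one from the paper): given the definitions

$$A \stackrel{\mathrm{def}}{=} a.B \qquad B \stackrel{\mathrm{def}}{=} b.A,$$

unfolding the occurrence of $B$ in the body of $A$ replaces it with its definition, yielding the equivalent single definition $A \stackrel{\mathrm{def}}{=} a.b.A$, whose semantics is a finite transition system with two states. The goal-oriented rules and strategies of the paper aim to reach such finite representations for a larger class of CCS programs.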

11.
We give an overview of correctness criteria specific to concurrent shared-memory programs and runtime verification techniques for verifying these criteria. We cover a spectrum of criteria, from ones focusing on low-level thread interference such as races to higher-level ones such as linearizability. We contrast these criteria in the context of runtime verification. We present the key ideas underlying the runtime verification techniques for these criteria and summarize the state of the art. Finally, we discuss the issue of coverage for runtime verification for concurrency and present techniques that improve the set of covered thread interleavings.

12.
Algebras of imperative programming languages have been successful in reasoning about programs. In general an algebra of programs is an algebraic structure with programs as elements and with program compositions (sequential composition, choice, skip) as algebra operations. Various versions of these algebras were introduced to model partial correctness, total correctness, refinement, demonic choice, and other aspects. We introduce here an algebra which can be used to model total correctness, refinement, and demonic and angelic choice. The basic model of our algebra is that of monotonic Boolean transformers (monotonic functions from a Boolean algebra to itself).
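For orientation, with $S.q$ denoting the application of a transformer $S$ to a predicate (Boolean algebra element) $q$, the operations mentioned here are conventionally defined pointwise (standard definitions, not the paper's axiomatization):

$$(S \sqcap T).q = S.q \wedge T.q, \qquad (S \sqcup T).q = S.q \vee T.q, \qquad S \sqsubseteq T \iff \forall q.\ S.q \le T.q,$$

where $\sqcap$ models demonic choice, $\sqcup$ models angelic choice, and $\sqsubseteq$ is the refinement order.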

13.
Fidge C., Kearney P., Utting M. IEEE Software, 1997, 14(2): 99-106
Developing concurrent real-time programs is one of computer science's greatest challenges. Not only is such software expensive to manufacture, but its role in safety-critical systems demands that it be correct. Formal methods of program specification and refinement could strengthen the mathematical precision used to develop such software. Nevertheless, formalisms that embrace both real-time and concurrency requirements are only just emerging. The Quartz method treats time and functional behavior with equal importance in the development process. The authors argue that by modeling program development in a unified framework, we can increase our confidence in the correctness of real-time concurrent code.

14.
We present a Theory of Specifications based on Martin-Löf's type theory, with rules for simultaneously constructing programs and their correctness proofs. The theory contains types for representing specifications whose corresponding notion of implementation is that of a pair formed by a program and a correctness proof. The rules of the theory are such that in implementations the program parts appear mixed together with the proof parts. A confluent and normalizing computational relation performs the task of separating programs from proofs. As a consequence, every implementation computes to a pair composed of a program and a proof of its correctness, and so the program extraction procedure is immediate.
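In type-theoretic terms, the pairing of programs with correctness proofs is often rendered with a strong sum: a specification such as "compute $f : A \to B$ satisfying $P$" can be read as the type below (a generic rendering, not the paper's exact rules):

$$\mathrm{Spec} \;\equiv\; \Sigma f : A \to B.\ \Pi x : A.\ P(x, f\,x),$$

so an implementation is a pair $\langle f, p \rangle$ of a program and a proof, and program extraction amounts to the first projection. The contribution described here is a computational relation that performs this separation even when program and proof parts are interleaved inside the term.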

15.
We describe how the HOL theorem prover can be used to check and apply rules of program refinement. The rules are formulated in the refinement calculus, which is a theory of correctness-preserving program transformations. We embed a general command notation with a predicate transformer semantics in the logic of the HOL system. Using this embedding, we express and prove rules for data refinement and superposition refinement of initialized loops. Applications of these proof rules to actual program refinements are checked using the HOL system, with the HOL system generating the required verification conditions. We also indicate how the HOL system is used to prove these verification conditions. Thus, the HOL system can provide a complete mechanized environment for proving program refinements.

16.
Program slicing is an effective technique for analyzing concurrent programs. However, when a conventional closure-based slicing algorithm for sequential programs is applied to a concurrent interprocedural program, the slice is usually imprecise owing to the intransitivity of interference dependence. Interference dependence arises when a statement uses a variable defined in another statement executed concurrently. In this study, we propose a global dependence analysis approach based on a program reachability graph, and construct a novel dependence graph called the marking-statement dependence graph (MSDG), in which each vertex is a 2-tuple of program state and statement. In contrast to the conventional program dependence graph where each vertex is a statement, the dependence relation in MSDG is transitive. When traversing MSDG, a precise slice will be obtained. To enhance the slicing efficiency without loss of precision, our slicing algorithm adopts a hybrid strategy. The procedures containing interaction statements between threads are inlined and sliced by the slicing algorithm based on program reachability graphs, while other procedures are sliced as sequential programs. We have implemented our algorithm and three other representative slicing algorithms, and conducted an empirical study on concurrent Java programs. The experimental results show that our algorithm computes more precise slices than the other algorithms. Using partial-order reduction techniques, which are effective for reducing the size of a program reachability graph without loss of precision, our algorithm is optimized, thereby improving its performance to some extent.
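The intransitivity of interference dependence can be seen on a three-statement example like the following sketch (our own illustration with hypothetical variable names, not code from the study):

```python
import threading

x = y = z = 0

def thread_a():
    global z, y
    z = x      # s1: uses x (written by s3 in the other thread)
    y = 1      # s2: defines y (read by s3), runs after s1 in this thread

def thread_b():
    global x
    x = y      # s3: uses y (from s2) and defines x (used by s1)

# s1 is interference dependent on s3 (via x), and s3 on s2 (via y).
# A transitive closure would make s1 depend on s2, yet no interleaving
# lets s2's value reach s1 through s3: that would require s2 -> s3 -> s1,
# while s1 always executes before s2 in thread_a. Closure-based slicing
# therefore overapproximates the slice.
ta, tb = threading.Thread(target=thread_a), threading.Thread(target=thread_b)
ta.start(); tb.start(); ta.join(); tb.join()
```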

17.
A key index of the performance of a rule-based program used in real-time monitoring and control is its response time, defined by the longest program execution time before a fixed point of the program is reached from a start state. Previous work on computing response time bounds for rule-based programs effectively assumes that all rules take the same amount of firing time. It is also assumed that if two rules are enabled, then either one of them may be scheduled first for firing. These assumptions can result in loose bounds, especially when programmers choose to impose a priority structure on the set of rules. We remove the uniform firing cost assumption and discuss how to get tighter bounds by taking rule priority information into account. We show that the rule suppression relation we previously introduced can be extended to incorporate rule priority information. A bound derivation algorithm for programs whose potential trigger relations satisfy an acyclicity condition is presented, followed by its correctness proof and an analysis example.

18.
This paper is concerned with a sufficient condition under which a concept class is learnable in Gold's classical model of identification in the limit from positive data. The standard principle of learning algorithms working under this model is called the MINL strategy, which is to conjecture a hypothesis representing a minimal concept among the ones consistent with the given positive data. The minimality of a concept is defined with respect to the set-inclusion relation – the strategy is semantics-based. On the other hand, refinement operators have been developed in the field of learning logic programs, where a learner constructs logic programs as hypotheses consistent with given logical formulae. Refinement operators have syntax-based definitions – they are defined based on inference rules in first-order logic. This paper investigates the relation between the MINL strategy and refinement operators in inductive inference. We first show that if a hypothesis space admits a refinement operator with certain properties, the concept class will be learnable by an algorithm based on the MINL strategy. We then present an additional condition that ensures the learnability of the class of unbounded finite unions of concepts. Furthermore, we show that under certain assumptions a learning algorithm runs in polynomial time.

19.
This paper explores locality in proofs of global safety properties of concurrent programs. Model checking on the full state space is often infeasible due to state explosion. A local proof, in contrast, is a collection of per-process invariants, which together imply the desired global safety property. Local proofs can be more compact than global proofs, but local reasoning is also inherently incomplete. In this paper, we present an algorithm for safety verification that combines local reasoning with gradual refinement. The algorithm gradually exposes facts about the internal state of components, until either a local proof or a real error is discovered. The refinement mechanism ensures completeness. Experiments show that local reasoning can have significantly better performance than the traditional reachability computation. Moreover, for some parameterized protocols, a local proof can be used as the basis of a correctness proof over all instances.
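Schematically, a local proof for processes $1, \ldots, n$ and a global safety property $\mathit{Safe}$ can be pictured as a tuple of per-process invariants $I_1, \ldots, I_n$ such that (our rendering of the idea, not the paper's formal definition)

$$\Big(\bigwedge_{i=1}^{n} I_i\Big) \Rightarrow \mathit{Safe},$$

where each $I_i$ must be preserved both by the steps of process $i$ and by the steps of the other processes that are visible to it; the refinement loop described here exposes more internal state when no such tuple exists at the current level of abstraction.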

20.
Procedural models are a powerful tool for quickly creating a variety of computer graphics content. However, authoring them is challenging, requiring both programming and artistic expertise. In this paper, we present a method for learning procedural models from a small number of example objects. We focus on the modular design setting, where objects are constructed from a common library of parts. Our procedural representation is a probabilistic program that models both the discrete, hierarchical structure of the examples as well as the continuous variability in their spatial arrangements of parts. We develop an algorithm for learning such programs from examples, using combinatorial search over program structures and variational inference to estimate continuous program parameters. We evaluate our method by demonstrating its ability to learn programs from examples of ornamental designs, spaceships, space stations, and castles. Experiments suggest that our learned programs can reliably generate a variety of new objects that are perceptually indistinguishable from hand-crafted examples.
