Similar Documents
20 similar documents found (search time: 15 ms)
1.
This paper presents a new theoretical result concerning Hoare Logic. It is shown that the verification conditions that support a Hoare Logic program derivation are themselves sufficient to construct a correct implementation of the given pre- and post-condition specification. This property is mainly of theoretical interest, though it may have some practical use, for example if a predicative programming methodology is adopted. The result is shown to hold both for the original, partial-correctness Hoare logic and for a variant for total-correctness derivations.
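As an illustration (not drawn from the paper): for the pre-/post-condition pair x = a and x = a + 1, choosing the assignment x := x + 1 and applying the assignment axiom with the rule of consequence leaves a single verification condition:

    \{x = a\}\ \ x := x + 1\ \ \{x = a + 1\}
    \qquad \text{VC: }\; x = a \;\Rightarrow\; x + 1 = a + 1

The result summarised above says that, conversely, the verification conditions collected from a completed derivation already suffice to reconstruct such a correct implementation.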

2.
We introduce a new algebraic model for program variables, suitable for reasoning about recursive procedures with parameters and local variables in a mechanical verification setting. We give a predicate transformer semantics to recursive procedures and prove refinement rules for introducing recursive procedure calls, procedure parameters, and local variables. Based on the refinement rules, we also prove Hoare total correctness rules for recursive procedures and parameters. We introduce a special form of Hoare specification statement which alone is enough to fully specify a procedure, and we prove that this Hoare specification statement is equivalent to a refinement specification. We implemented this theory in the PVS theorem prover. This work is based on earlier work: Reasoning about Recursive Procedures with Parameters, in Proceedings of the Workshop on Mechanized Reasoning about Languages with Variable Binding, Uppsala, Sweden, ACM Press, 2003. Received March 2004; revised October 2004; accepted February 2005 by C. B. Jones.
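For reference, the classical rule this line of work builds on (not the paper's exact formulation with parameters and local variables) is Hoare's partial-correctness rule for a recursive procedure p with body S_p:

    \frac{\{P\}\ \mathbf{call}\ p\ \{Q\} \;\vdash\; \{P\}\ S_p\ \{Q\}}{\{P\}\ \mathbf{call}\ p\ \{Q\}}

Read: if the body satisfies the specification under the assumption that recursive calls do, then calls to p satisfy it; total-correctness variants additionally require a decreasing variant.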

3.
Inspired by Hoare's rule for recursive procedures, we present three proof rules for the equivalence of recursive programs. The first rule can be used for proving partial equivalence of programs; the second for proving their mutual termination; the third for proving the equivalence of reactive programs. Such rules have various applications, for example proving the equivalence of programs after refactoring and proving backward compatibility.
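A minimal, hypothetical illustration of the two notions (not an example from the paper): the two recursive functions below are partially equivalent if they return the same result on every input on which both terminate, and mutually terminating if they terminate on exactly the same inputs; here both properties hold.

    def sum_down(n):
        # computes 0 + 1 + ... + n by decrementing n
        return 0 if n <= 0 else n + sum_down(n - 1)

    def sum_up(n, i=1):
        # computes the same sum by incrementing an auxiliary counter i
        return 0 if i > n else i + sum_up(n, i + 1)

    assert sum_down(5) == sum_up(5) == 15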

4.
Main result: the actual termination problem for finitely interpreted non-deterministic ALGOL-like programs without procedure self-application and without global variables is algorithmically solvable. This result offers a new and substantial application of a theorem of Lipton: the above-mentioned programs, restricted to deterministic ones, have a sound and relatively complete Hoare logic. We therefore conjecture that ALGOL-like programs (even non-deterministic ones with formal sharing of variables) without procedure self-application and without global variables have a sound and relatively complete Hoare deduction system whose axioms and inference rules reflect the syntactical structure of programs.

5.
In this paper we generalize the while-rule of the Hoare calculus to an infinite rule and then present a sufficient condition, much weaker than expressiveness, for Cook's relative completeness theorem with respect to the new axiomatic system. Using the extended Hoare calculus we can derive true Hoare formulas that contain while statements without supplying loop invariants. It is also pointed out that the weak condition is a first-order property and therefore provides a possible approach to the characterization of relative completeness, which is also a first-order property.
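For reference, the standard finite while-rule for partial correctness (the rule the paper generalizes) is

    \frac{\{P \land b\}\ S\ \{P\}}{\{P\}\ \mathbf{while}\ b\ \mathbf{do}\ S\ \{P \land \lnot b\}}

where the invariant P must be supplied by hand; the infinite generalization is what allows while statements to be handled without explicit loop invariants.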

6.
Using a predicate transformer semantics of programs, we introduce statements for heap operations and separation logic operators for specifying programs that manipulate pointers. We prove a powerful Hoare total correctness rule for mutually recursive procedures manipulating pointers. The rule combines earlier proof rules for (mutually) recursive procedures with the frame rule for pointer programs. The theory, including the proofs, is implemented in the theorem prover PVS. In this implementation, program variables and addresses can store values of almost any type of the theorem prover.
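For reference, the standard separation-logic frame rule that is being combined with recursion here (not the paper's combined rule) is

    \frac{\{P\}\ C\ \{Q\}}{\{P \ast R\}\ C\ \{Q \ast R\}}
    \qquad\text{(no variable modified by } C \text{ occurs free in } R\text{)}

It lets a local specification of C be extended by any assertion R describing heap that C leaves untouched.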

7.
We propose a user-centric rule filtering method that allows one to identify association rules that exhibit a certain user-specified temporal behavior with respect to rule evaluation measures. The method can considerably reduce the number of association rules that have to be assessed manually after rule induction. This is especially necessary if the rule set contains many rules, as is the case for the task of finding rare patterns in the data. For the proposed method, we reuse former work on the visualization of association rules [M. Steinbrecher, R. Kruse, Visualization of possibilistic potentials, in: Foundations of Fuzzy Logic and Soft Computing, Lecture Notes in Comput. Sci., vol. 4529, Springer-Verlag, Berlin/Heidelberg, 2007, pp. 295–303] and use an extension of it to motivate and assess the presented filtering technique. We put the focus on rules induced from a data set that contains a temporal variable and build our approach on the requirement that temporally ordered sets of association rules are available, i.e., one set for every time frame; to illustrate this, we propose an ad-hoc learning method along the way. The actual rule filtering is accomplished by means of fuzzy concepts. These concepts use linguistic variables to partition rule-related domains of interest, such as the confidence change rate. The original rule sets are then matched against these user concepts, returning only those rules that match the respective concepts to a predefined extent. We provide empirical evidence by applying the proposed methods to hand-crafted as well as real-world data sets and critically discuss the current state and further prospects.
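A minimal sketch of the filtering idea (hypothetical names, thresholds, and data, not the authors' code): a linguistic term such as "strongly increasing confidence" is modelled as a fuzzy set over the confidence change rate, and a rule is kept only if its observed rate matches the term to a sufficient degree.

    def triangular(x, a, b, c):
        # membership of x in a triangular fuzzy set with support [a, c] and peak at b
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    # hypothetical linguistic term over the per-frame confidence change rate
    strongly_increasing = lambda rate: triangular(rate, 0.05, 0.20, 0.40)

    def filter_rules(confidence_history, concept, cut=0.7):
        # confidence_history: {rule_id: [confidence in frame 1, frame 2, ...]},
        # assuming at least two time frames per rule
        kept = []
        for rule_id, confs in confidence_history.items():
            changes = [later - earlier for earlier, later in zip(confs, confs[1:])]
            rate = sum(changes) / len(changes)   # average change per frame
            if concept(rate) >= cut:             # match against the user concept
                kept.append(rule_id)
        return kept

    print(filter_rules({"r1": [0.40, 0.55, 0.80], "r2": [0.60, 0.58, 0.61]},
                       strongly_increasing))     # -> ['r1']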

8.
This paper develops the theory that a set of rules is consistent if it is not possible that (1) the conditions of the rules in the set are all satisfied, (2) there is no exception to any of the rules, and (3) the consequences of the rules are incompatible. To this purpose, the notion of consistency is generalised to cover rules and is relativised to a background of constraints. It is argued that a similar theory is also useful for characterising the consistency of deontic sentences of the ought-to-do type. The theory of rule consistency is formalised by means of Rule Logic, in which rules are treated as constraints on the possible worlds in which they exist. Rule Logic itself is introduced by giving a model theory for it, characterised by means of constraints on the worlds that are possible according to Rule Logic. The formal theory is refined by disallowing ungrounded exceptions to rules; to that purpose an additional constraint is imposed on the worlds that are possible according to Rule Logic.

9.
SCOOP is a concurrent programming language with a new semantics for contracts that applies equally well in concurrent and sequential contexts. SCOOP eliminates race conditions and atomicity violations by construction. However, it is still vulnerable to deadlocks. In this paper we describe how far contracts can take us in verifying interesting properties of concurrent systems using modular Hoare rules, and we show how theorem proving methods developed for sequential Eiffel can be extended to the concurrent case. Some safety and liveness properties, however, depend upon the environment and cannot be proved using the Hoare rules. To deal with such system properties, we outline a SCOOP Virtual Machine (SVM) as a fair transition system. The SVM makes it feasible to use model-checking and theorem proving methods for checking global temporal logic properties of SCOOP programs, and it uses the Hoare rules where applicable to reduce the number of steps in a computation. P. J. Brooke, R. F. Paige and Dong Jin Song. This work was conducted under an NSERC Discovery grant.

10.
To apply axiom systems from philosophical logic to the study of the Temporal Logic of Actions (TLA), this paper formally defines the semantics of TLA formulas and studies, from both the semantic and the syntactic side, the relationship between the TLA axiom system and an axiom system for reflexive linear temporal logic. A theorem for translating TLA formulas into reflexive linear temporal logic formulas is proposed and proved. Following the ideas of set theory and model theory, the notions of terms and atomic formulas of TLA are defined, and the semantics of TLA formulas as proposed by Leslie Lamport is defined. It is proved that the reflexive linear temporal logic axiom system is applicable to the TLA axiom system, and on this basis the simple rules, basic rules, and additional rules of TLA are proved.
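For orientation, the standard TLA notation (due to Lamport, not specific to this paper) already builds stuttering into action formulas:

    [A]_v \;\triangleq\; A \lor (v' = v),
    \qquad
    \Box[A]_v \;\triangleq\; \Box\bigl(A \lor (v' = v)\bigr)

which gives a feel for why a reflexive linear temporal logic is a natural target for translating such formulas.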

11.
The increasing demand for high-confidence software has made the verification of pointer programs a recent research focus. Pointer logic, an extension of Hoare logic, supports precise analysis of pointer programs. This paper presents the design and implementation of an automated theorem prover for pointer logic and describes several of its algorithms. Experimental results show that the prover can fully automatically prove the verification conditions of pointer programs written in a C-like language that manipulate singly linked lists, doubly linked lists, and binary trees, and that it generates machine-checkable proofs.

12.
13.
14.
Implementing temporal integrity constraints using an active DBMS
The paper proposes a general architecture for implementing temporal integrity constraints by compiling them into a set of active DBMS rules. The modularity of the design allows easy adaptation to different environments: both differences in the specification languages and in the target rule systems can be easily accommodated. The advantages of this architecture are demonstrated on a particular temporal constraint compiler. This compiler allows automatic translation of integrity constraints formulated in Past Temporal Logic into rules of an active DBMS (in the current version of the compiler two active DBMSs are supported: Starburst and INGRES). During the compilation the set of constraints is checked for the safe evaluation property. The result is a set of SQL statements that includes all the rules needed for enforcing the original constraints. The rules are optimized to reduce the space overhead introduced by the integrity checking mechanism, and there is no need for an additional runtime constraint monitor. When the rules are activated, all updates to the database that violate any of the constraints are automatically rejected (i.e., the corresponding transaction is aborted). In addition to its straightforward implementation, this approach offers a clean separation of application programs and the integrity checking code.
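A minimal sketch of the underlying idea (hypothetical constraint and names; the compiler's actual output consists of active DBMS rules in SQL, not Python): a past-temporal constraint such as "an order may only be shipped if it was paid at some earlier point" is enforced by rules that maintain a small auxiliary history relation and abort any violating transaction.

    # auxiliary relation that the generated history-maintenance rule would keep
    paid_history = set()

    def on_update(event, order_id):
        # stands in for the triggers an active DBMS would fire on each update
        if event == "paid":
            paid_history.add(order_id)            # history-maintenance rule
        elif event == "shipped":
            if order_id not in paid_history:      # constraint-checking rule
                raise RuntimeError("temporal constraint violated: abort transaction")

    on_update("paid", 42)
    on_update("shipped", 42)     # accepted
    # on_update("shipped", 7)    # would raise: order 7 was never paid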

15.
Solving a quantified constraint satisfaction problem (QCSP) is usually a hard task due to its computational complexity. Exact algorithms play an important role in solving this problem, among which backtracking algorithms are effective. In a backtracking algorithm, an important step is assigning a chosen value to a variable when exploring a branch, so a good value selection rule may greatly speed up the search. In this paper, we propose two value selection rules, for existentially and universally quantified variables respectively, to avoid unnecessary search. The rule for universally quantified variables gives priority to values that failed in previous branches, and the rule for existentially quantified variables selects the most promising values first. The two rules are integrated into the state-of-the-art QCSP solver QCSPSolve, an exact solver based on backtracking. We perform a number of experiments to evaluate the improvements brought by our rules. From the computational results, we conclude that the new value selection rules speed up the solver by a factor of 5 on average and by up to a factor of 30. We also show that both rules perform particularly well on instances in which existentially and universally quantified variables occur alternately.
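A small, hypothetical sketch of the two orderings (not QCSPSolve's actual implementation): for a universal variable, values that already caused a failure in a sibling branch are tried first, since a single failing value refutes the branch; for an existential variable, the heuristically most promising values are tried first.

    def order_universal(domain, failed_elsewhere):
        # try previously failing values first: if one also fails here,
        # the whole universal branch can be refuted immediately
        failed = [v for v in domain if v in failed_elsewhere]
        rest = [v for v in domain if v not in failed_elsewhere]
        return failed + rest

    def order_existential(domain, promise):
        # try the most promising values (highest heuristic score) first
        return sorted(domain, key=promise, reverse=True)

    # usage with a toy domain and heuristic
    print(order_universal([1, 2, 3, 4], failed_elsewhere={3}))             # [3, 1, 2, 4]
    print(order_existential([1, 2, 3, 4], promise=lambda v: -abs(v - 2)))  # [2, 1, 3, 4]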

16.
This paper distinguishes several different approaches to organising a weakest pre-condition (WP) calculus in a theorem prover. The implementation of two of these approaches for Java within the LOOP project is described. This involves the WP-infrastructures in the higher-order logic of the theorem prover PVS, together with associated rules and strategies for automatically proving JML specifications of Java implementations. The soundness of all WP-rules has been proven on the basis of the underlying Java semantics. These WP-calculi are integrated with the existing Hoare logic, and together they form a verification toolkit in PVS: typically one uses Hoare logic rules to break a large verification task into smaller parts that can be handled automatically by one of the WP-strategies.
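For reference, a weakest pre-condition calculus in the classical Dijkstra style is generated by equations such as the following (the LOOP rules for Java/JML are considerably richer):

    wp(x := e,\; Q) \;=\; Q[e/x]
    wp(S_1 ; S_2,\; Q) \;=\; wp(S_1,\; wp(S_2,\; Q))
    wp(\mathbf{if}\ b\ \mathbf{then}\ S_1\ \mathbf{else}\ S_2,\; Q) \;=\; (b \Rightarrow wp(S_1, Q)) \land (\lnot b \Rightarrow wp(S_2, Q))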

17.
Probabilistic annotations generalise standard Hoare Logic [20] to quantitative properties of probabilistic programs. They can be used to express critical expected values over program variables that must be maintained during program execution. As for standard program development, probabilistic assertions can be checked mechanically relative to an appropriate program semantics. If a mechanical prover is unable to complete such validity checks, a counterexample showing that the annotation is incorrect can provide useful diagnostic information. In this paper, we give a definition of counterexamples as failure traces for probabilistic assertions within the context of the pB language [19], an extension of the standard B method [1] to cope with probabilistic programs. In addition, we propose algorithmic techniques to find counterexamples where they exist, and suggest a ranking mechanism to return 'the most useful diagnostic information' to the pB developer to aid the resolution of the problem.
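As an illustrative aside in standard pGCL-style notation (common in the probabilistic weakest pre-condition literature, not taken from the paper): assertions become expectations, and for a fair probabilistic choice between two assignments

    wp\bigl(x := 0\ {}_{1/2}\!\oplus\ x := 1,\;\; E\bigr) \;=\; \tfrac{1}{2}\,E[0/x] + \tfrac{1}{2}\,E[1/x]

Taking E = x gives pre-expectation 1/2, i.e., the annotation "the expected final value of x is 1/2" holds from any initial state.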

18.
Logic programs can be evaluated bottom-up by repeatedly applying all rules, in "iterations", until the fixpoint is reached. However, it is often desirable, and in some cases (e.g., programs with stratified negation) even necessary to guarantee the semantics, to apply the rules in some order. We present two algorithms that apply rules in a specified order without repeating inferences. One of them (GSN) is capable of dealing with a wide range of rule orderings, but with a little more overhead than the well-known seminaive algorithm (which we call BSN). The other (PSN) handles a smaller class of rule orderings, but with no overheads beyond those in BSN. We also demonstrate that by choosing a good ordering we can reduce the number of rule applications (and thus the number of joins). We present a theoretical analysis of rule orderings and identify orderings that minimize the number of rule applications (for all possible instances of the base relations) with respect to a class of orderings called fair orderings. We also show that although nonfair orderings may do a little better on some data sets, they can do much worse on others. The analysis is supplemented by performance results.
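A compact sketch of the seminaive baseline ("BSN"-style evaluation) for the usual transitive-closure program path(X,Y) :- edge(X,Y); path(X,Y) :- path(X,Z), edge(Z,Y); the GSN/PSN rule-ordering machinery itself is not shown:

    def transitive_closure(edges):
        # seminaive bottom-up evaluation: only facts derived in the previous
        # iteration (the "delta") are joined with edge, so no inference is repeated
        path = set(edges)
        delta = set(edges)
        while delta:
            new = {(x, w) for (x, y) in delta for (z, w) in edges if y == z}
            delta = new - path           # keep only genuinely new facts
            path |= delta
        return path

    print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))
    # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]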

19.
Researchers have developed visual discrimination models (VDMs) that can predict a human observer's ability to detect a target object superposed on an image. These models incorporate sophisticated knowledge of the properties of the human visual system. In the predictive approach, termed conventional VDM usage, two input images, with and without a target, are analyzed by an algorithm that calculates a just-noticeable-difference (JND) index, which is taken as a measure of the detectability of the target. A new method of using the VDM is described, termed channelized VDM, which involves finding the linear combination of the VDM-generated channels (which are not used in conventional VDM analysis) that has optimal classification ability between normal and abnormal images. The classification ability can be measured using receiver operating characteristic (ROC) or two-alternative forced choice (2AFC) experiments, and in special cases it can also be predicted by signal detection theory (SDT) based model-observer methods. In this study, simulated background and nodule-containing regions were used to validate the new method. It was found that the channelized VDM predictions were in excellent qualitative agreement with human-observer-validated SDT predictions. Either VDM method (conventional or channelized) has potential applicability to soft-copy display optimization. An advantage of any VDM-based approach is that complex effects, such as visual masking, are automatically accounted for, effects that are usually not included in SDT-based methods.

20.
Analysis and verification of pointer programs remain difficult problems. This paper uses a shape graph logic and a shape system to address them in two stages. First, shape graphs at every program point are constructed using an analysis tool. Then they are used to support the verification of other properties (e.g., orderedness). Our prototype supports automatic verification of programs manipulating complex data structures such as splay trees, treaps, AVL trees, and AA trees. The proposed shape graph logic, an extension of Hoare logic, uses shape graphs directly as assertions. It can be used in the analysis and verification of programs manipulating mutable data structures. The benefit of using shape graphs as assertions is that it is convenient to obtain the relations between pointers in the verification stage. The proposed shape system requires programmers to provide lightweight shape declarations in recursive structure type declarations. It can help rule out programs that construct shapes deviating from what programmers expect (as reflected in the shape declarations) in the analysis stage. As a benefit, programmers need not provide specifications (e.g., pre-/post-conditions, loop invariants) about pointers. Moreover, we present a method for doing verification in the second stage using traditional Hoare logic rules directly, by eliminating aliasing with the aid of shape graphs; verification conditions can thus be discharged by general theorem provers.
