Similar Documents
 20 similar documents found (search time: 31 ms)
1.
A method for analysing the inverse of a first-order functional program is proposed. This method is based on denotational semantics: we analyse the inverse image of a Scott open set under the continuous function which the program denotes. Inverse image analysis is one possible way of extending strictness analysis to languages with lazy data structures and could perhaps be used to optimise code in implementations of such languages.
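As a rough illustration of the kind of question such an analysis answers (our own toy Haskell example, not taken from the paper): for a first-order function over lazy lists, the inverse image of the open set "the result is defined" tells us how much of the argument must be defined.

    -- Toy example (hypothetical): 'len' forces the spine of its argument
    -- but never the elements.
    len :: [Int] -> Int
    len []       = 0
    len (_ : xs) = 1 + len xs

    -- Inverse image analysis would report that 'len xs' is defined exactly
    -- when the spine of 'xs' is fully defined, even if every element is not:
    demo :: Int
    demo = len [undefined, undefined, undefined]   -- evaluates to 3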

2.
A graphics software standard has to specify precisely what the software is expected to do. For this purpose, the paper exhibits a formal framework for the specification of software modules which may be structured hierarchically and which may be based on abstract data types. An important aspect concerns the special mathematical semantics of the data types: The semantics of a type is its class of all finitely generated models. This semantics enables a uniform definition of a standard which, nevertheless, may cover a large variety of implementations on very different graphics hardware devices. The kernel of this approach is some axiomatic characterization of the notion of finitely generated images. Another important aspect is that, from such a formal software specification, an implementation may be derived. At least in principle, the careful documentation of such a program derivation could serve as a basis for a verification-oriented certification of graphics standard implementations.

3.
Most programming languages ignore the problem of undefined variables and permit compilers to allow the leftover contents of the memory cells belonging to such variables to be referenced. Although efficient, this type of semantics does not support software engineering, because defects might arise in subtle ways and might be difficult to locate. Two other types of semantics that better support software engineering are to either initialize all variables to some default, e.g. zero, or to require that all references to undefined variables be treated as errors. However, these types of semantics are obviously more expensive than simply ignoring the undefined variables entirely. In this paper, we propose a simple technique that works equally well for both of these latter two types of semantics, and whose efficiency, for certain realistic programs, compares favorably with more traditional implementations of these semantics. Furthermore, we provide a mechanism for using the technique through Ada implementations of two abstract data types whose undefined variables exhibit, respectively, these two types of semantics. These abstract data types allow the technique to be used selectively, in exactly those situations where its cost is justified. We provide practical examples illustrating such situations.
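A hypothetical Haskell analogue of the two semantics (the paper's actual mechanism is packaged as Ada abstract data types; the names below are ours):

    -- A wrapper type that records whether a variable has been assigned, so a
    -- read of an undefined variable can either fall back to a default or be
    -- reported as an error.
    data Var a = Undefined | Defined a

    assign :: a -> Var a -> Var a
    assign x _ = Defined x

    -- "Default" semantics: an undefined variable reads as a fixed default value.
    readDefault :: a -> Var a -> a
    readDefault d Undefined   = d
    readDefault _ (Defined x) = x

    -- "Error" semantics: reading an undefined variable is an error.
    readChecked :: Var a -> Either String a
    readChecked Undefined   = Left "reference to undefined variable"
    readChecked (Defined x) = Right x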

4.
Logical relations are a fundamental and powerful tool for reasoning about programs in languages with parametric polymorphism. Logical relations suitable for reasoning about observational behavior in polymorphic calculi supporting various programming language features have been introduced in recent years. Unfortunately, the calculi studied are typically idealized, and the results obtained for them offer only partial insight into the impact of such features on observational behavior in implemented languages. In this paper we show how to bring reasoning via logical relations to bear more directly on real languages by deriving results that are pertinent to an intermediate language, such as GHC Core, for the (mostly) lazy functional language Haskell. To provide a more fine-grained analysis of program behavior than is possible by reasoning about program equivalence alone, we work with an abstract notion of relating observational behavior of computations which has among its specializations both observational equivalence and observational approximation. We take selective strictness into account, and we consider the impact of different kinds of computational failure, e.g., divergence versus failed pattern matching, because such distinctions are significant in practice. Once distinguished, the relative definedness of the different failure causes needs to be considered, because different orders here induce different observational relations on programs (including the choice between equivalence and approximation). Our main contribution is the construction of an entire family of logical relations, parameterized over a definedness order on failure causes, each member of which characterizes the corresponding observational relation. Although we deal with properties very much tied to types, we base our results on a type-erasing semantics, since this is more faithful to actual implementations.
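A small Haskell illustration (ours, not the paper's) of selective strictness and two of the failure causes mentioned above; `seq` makes the failure cause of its first argument observable even though the result does not otherwise depend on it.

    {-# LANGUAGE ScopedTypeVariables #-}
    import Control.Exception (PatternMatchFail, evaluate, try)

    diverge :: Int
    diverge = diverge                              -- nontermination

    patFail :: Int
    patFail = case ([] :: [Int]) of (x : _) -> x   -- failed pattern match

    -- Selective strictness: the first argument is forced, the result ignores it.
    observe :: Int -> Int
    observe x = x `seq` 0

    main :: IO ()
    main = do
      r <- try (evaluate (observe patFail))
      case r of
        Left (_ :: PatternMatchFail) -> putStrLn "pattern-match failure observed"
        Right v                      -> print v
      -- 'observe diverge' would loop forever, so we do not run it here.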

5.
The semantics of PROLOG programs is usually given in terms of the model theory of first-order logic. However, this does not adequately characterize the computational behavior of PROLOG programs. PROLOG implementations typically use a sequential evaluation strategy based on the textual order of clauses and literals in a program, as well as nonlogical features like cut. In this work we develop a denotational semantics that captures the computational behavior of PROLOG. We present a semantics for “cut-free” PROLOG, which is then extended to PROLOG with cut. For each case we develop a congruence proof that relates the semantics to a standard operational interpreter. As an application of our denotational semantics, we show the correctness of some standard “folk” theorems regarding transformations on PROLOG programs.
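A compressed Haskell sketch, under our own simplifications (no unification, answers modelled as a plain stream), of the continuation-style treatment that makes clause order and cut explicit; it is in the spirit of such semantics rather than the paper's actual construction.

    type Ans  = [String]        -- stream of answers, in textual search order
    type Fail = Ans             -- failure continuation: the remaining answers
    type Succ = Fail -> Ans     -- success continuation

    -- A goal takes a success continuation, the current failure continuation,
    -- and the failure continuation in force when the clause was entered
    -- (the "cut barrier").
    type Goal = Succ -> Fail -> Fail -> Ans

    true', fail', cut :: Goal
    true' sk fk _   = sk fk     -- succeed, keeping the backtracking point
    fail' _  fk _   = fk        -- fail immediately
    cut   sk _  cfk = sk cfk    -- succeed, discarding choices made since entry

    andG, orG :: Goal -> Goal -> Goal             -- conjunction; clause order
    andG g1 g2 sk fk cfk = g1 (\fk' -> g2 sk fk' cfk) fk cfk
    orG  g1 g2 sk fk cfk = g1 sk (g2 sk fk cfk) cfk

    call :: Goal -> Goal                          -- entering a predicate
    call g sk fk _ = g sk fk fk                   -- installs a new cut barrier

    solve :: Goal -> Ans
    solve g = g ("yes" :) [] []

    -- solve (call (orG (andG true' cut) true'))  ==>  ["yes"]        (cut prunes)
    -- solve (call (orG true'            true'))  ==>  ["yes","yes"]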

6.
In aspect-oriented programming (AOP) languages, advice evaluation is usually considered as part of the base program evaluation. This is also the case for certain pointcuts, such as if pointcuts in AspectJ, or simply all pointcuts in higher-order aspect languages like AspectScheme. While viewing aspects as part of base level computation clearly distinguishes AOP from reflection, it also comes at a price: because aspects observe base level computation, evaluating pointcuts and advice at the base level can trigger infinite regression. To avoid these pitfalls, aspect languages propose ad-hoc mechanisms, which increase the complexity for programmers while being insufficient in many cases. After shedding light on the many facets of the issue, this paper proposes to clarify the situation by introducing levels of execution in the programming language, thereby allowing aspects to observe and run at specific, possibly different, levels. We adopt a defensive default that avoids infinite regression, and gives advanced programmers the means to override this default using level-shifting operators. We then study execution levels both in practice and in theory. First, we study the relevance of the issues addressed by execution levels in existing aspect-oriented programs. We then formalize the semantics of execution levels and prove that the default semantics is indeed free of a certain form of infinite regression, which we call aspect loops. Finally, we report on existing implementations of execution levels for aspect-oriented extensions of Scheme, JavaScript and Java, discussing their implementation techniques and current applications.
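A toy Haskell model (our own; not the paper's semantics or any AspectJ/AspectScheme API) of the defensive default: advice is woven only at level 0, and while it runs the level is raised, so computation performed by the advice itself does not retrigger it.

    import Data.IORef

    -- Weave 'advice' around 'body' only when executing at level 0.
    runWithAdvice :: IORef Int -> IO () -> IO () -> IO ()
    runWithAdvice level advice body = do
      lvl <- readIORef level
      if lvl == 0
        then do
          modifyIORef level (+ 1)        -- advice observes level 0, runs at level 1
          advice
          modifyIORef level (subtract 1)
          body
        else body

    main :: IO ()
    main = do
      level <- newIORef 0
      -- Advice that itself performs an advised call: without the level check
      -- this would regress infinitely; with it, the nested weave is skipped.
      let logCall = runWithAdvice level logCall (putStrLn "call observed")
      runWithAdvice level logCall (putStrLn "base computation")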

7.
Backward compatibility is the property that an old version of a library can safely be replaced by a new version without breaking existing clients. Formal reasoning about backward compatibility requires an adequate semantic model to compare the behavior of two library implementations. In the object-oriented setting with inheritance and callbacks, such a model must account for the complex interface between library implementations and clients. In this paper, we develop a fully abstract trace-based semantics for class libraries in object-oriented languages, in particular for Java-like sealed packages. Our approach enhances a standard operational semantics such that the change of control between the library and the client context is made explicit in terms of interaction labels. By using traces over these labels, we abstract from the data representation in the heap, support class hiding, and provide fully abstract package denotations. Soundness and completeness of the trace semantics are proven using specialized simulation relations on the enhanced operational semantics. The simulation relations also provide a proof method for reasoning about backward compatibility.

8.
Contracts are a proven tool in software development. They provide specifications for operations that may be statically verified or dynamically validated by contract monitoring. We investigate the properties of contract monitoring for languages with contracts and effects using a monadic semantics. We study three combinations of evaluation orders and contract monitoring styles: call-by-value with eager monitoring, call-by-name with eager monitoring, and call-by-name with delayed monitoring. In each case, an effect system ensures that contract monitoring does not change the meaning of a program and guarantees that contract monitoring is idempotent. The monadic semantics enables us to study design choices, to formalize implementations, to pinpoint the differences between contracts in the three combinations, and to verify algebraic laws.
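A minimal Haskell sketch (our own; the paper's calculi and effect system are richer) of flat-contract monitoring in an error monad, together with the idempotence law mentioned above.

    type Contract a = a -> Bool
    type M a = Either String a          -- contract violations as the only effect

    monitor :: Contract a -> a -> M a
    monitor c v
      | c v       = Right v
      | otherwise = Left "contract violation"

    -- Monitoring a computation rather than a value (a crude stand-in for the
    -- delayed, call-by-name style).
    monitorM :: Contract a -> M a -> M a
    monitorM c mv = mv >>= monitor c

    positive :: Contract Int
    positive = (> 0)

    -- Idempotence, one of the algebraic laws of interest (it holds for this sketch):
    --   monitorM c (monitorM c mv) == monitorM c mv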

9.
Relational program reasoning is concerned with formally comparing pairs of executions of programs. Prominent examples of relational reasoning are program equivalence checking (which considers executions from different programs) and detecting illicit information flow (which considers two executions of the same program). The abstract logical foundations of relational reasoning are, by now, sufficiently well understood. In this paper, we address some of the challenges that remain to make the reasoning practicable. Two major ones are dealing with the feature richness of programming languages such as C and with the weakly structured control flow that many real-world programs exhibit. A popular approach to control this complexity is to define the analyses on the level of an intermediate program representation (IR) such as one generated by modern compilers. In this paper we describe the ideas and insights behind IR-based relational verification. We present a program equivalence checker for C programs that operates on LLVM IR. To extend the reach of the approach and to make it more efficient, we show how dynamic analyses can be employed to support and strengthen the static verification. The effectiveness of the approach is demonstrated by automatically verifying equivalence of functions from different implementations of the standard C library.

10.
In several areas, including Temporal DataBases (TDB), Presburger arithmetic has been chosen as a standard reference for the semantics of languages representing periodic time, and to study their expressiveness. On the other hand, the proposal of most symbolic languages in the AI literature has not been paired with an adequate semantic counterpart, making the task of studying the expressiveness of such languages and of comparing them a very complex one. In this paper, we first define a representation language which enables us to handle each temporal point as a complex object enriched with all the structure it is immersed in, and then we use it in order to provide a Presburger semantics for classes of symbolic languages coping with periodicity. Finally, we use the semantics to compare a few AI and TDB symbolic approaches.
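For intuition (an illustrative formula of ours, not one taken from the paper): a periodic set of time points such as "every seventh day, starting at day 3" is definable in Presburger arithmetic,

    P(t) \;\equiv\; \exists k.\; k \ge 0 \;\land\; t = 7k + 3

and Boolean combinations of such sets remain definable, which is what makes Presburger arithmetic a convenient common yardstick for comparing symbolic calendar and periodicity languages.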

11.
Ken Slonneger. Software, 1993, 23(12): 1379-1397
Several authors have suggested translating denotational semantics into prototype interpreters written in high-level programming languages to provide evaluation tools for language designers. These implementations have generally been understandable when restricted to direct denotational semantics. This paper considers using two declarative programming languages, Prolog and Standard ML, to implement an interpreter that follows the continuation semantics of a small imperative programming language, called Gull. Each of the two declarative languages presents certain difficulties related to evaluation strategies and expressiveness. The implementations are compared in terms of their ease of use for prototyping, their resemblance to the denotational definitions, and their efficiency.
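To make the flavour concrete, here is a small continuation-semantics fragment in Haskell for a simplified imperative core (our own sketch, not Gull itself; the paper's implementations are in Prolog and Standard ML):

    import qualified Data.Map as M

    type Store = M.Map String Int
    type Cont  = Store -> Store        -- command continuations

    data Cmd = Skip
             | Assign String Int       -- expressions simplified to constants
             | Seq Cmd Cmd
             | While (Store -> Bool) Cmd

    -- The meaning of a command maps "the rest of the program" (a continuation)
    -- to a continuation for the whole program.
    cmd :: Cmd -> Cont -> Cont
    cmd Skip         k = k
    cmd (Assign x n) k = \s -> k (M.insert x n s)
    cmd (Seq c1 c2)  k = cmd c1 (cmd c2 k)
    cmd (While b c)  k = loop
      where loop s = if b s then cmd c loop s else k s

    -- Example: run a program with the identity continuation and an empty store.
    -- cmd (Seq (Assign "x" 1) (Assign "y" 2)) id M.empty
    --   ==> fromList [("x",1),("y",2)]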

12.
The plethora of concurrent declarative language families, each with subtly different semantics, makes the design and implementation of static analyses for these languages a demanding task. However, many of the languages share underlying structure, and if this structure can be exploited, static analysis techniques can be shared across language families. These techniques can thus provide a common kernel for the implementation of quality compilers for this entire language class. The purpose of this paper is to exploit the similarities of non-strict functional and concurrent logic languages in the design of a common intermediate language (CIL). The CIL is introduced incrementally, giving at each step the rationale for its extension. As an application, we present, in CIL form, some state-of-the-art static partitioning algorithms from the literature. This allows us to “uncover” the relative advantages and disadvantages of the analyses, and determine promising directions for improving static partitioning.

13.
One of the major purposes of a high-level language is to provide a large measure of machine-independence in the specification of algorithms. Definitions of languages such as FORTRAN IV and ALGOL 60 encourage compatibility between various implementations. Language specifications are inadequate in that they normally underdefine a language. In particular, the specifications do not normally demand a response to a language violation. The freedom normally given to an implementor to decide the degree and nature of error detection and response hinders portability and may lead to unexpected results when moving code from one machine to another or even when changing implementations on the same machine. To support the contention that languages should specify a response to violations, an analysis of four FORTRAN IV implementations and a FORTRAN IV verifier was conducted. The study showed that different implementations often lead to different results for the same illegal program. A study of programmers also revealed that they cannot be relied upon to avoid language violations without compiler aids.

14.
Static analysis of declarative languages deals with the detection, at compile time, of program properties that can be used to better understand the program semantics and to improve the efficiency of program evaluation. In logical update languages, an interesting problem is the detection of conflicting updates (those inserting and deleting the same fact) for transactions based on set-oriented updates and active rules. In this paper, we investigate this topic in the context of the U-Datalog language, a set-oriented update language for deductive databases, based on a deferred semantics. We first formally define relevant properties of U-Datalog programs, mainly related to update conflicts. Then, we prove that the defined properties are decidable and we propose an algorithm to detect such conditions. Finally, we show how the proposed techniques can be applied to other logical update languages. Our results are based on the concept of labeling and query-tree.

15.
16.
The formalisation of object-oriented languages is essential for describing the implementation details of specific programming languages or for developing program verification techniques. However, there has been relatively little formalisation work aimed at abstractly describing the fundamental concepts of object-oriented programming, separate from specific language considerations or suitability for a particular verification style. In this paper we address this issue by formalising a language that includes the core object-oriented programming language concepts of field tests and updates, methods, constructors, subclassing, multithreading, and synchronisation, built on top of standard sequential programming constructs. The abstract syntax is relatively close to the core of typical object-oriented programming languages such as Java. A novel aspect of the syntax is that objects and classes are encapsulated within a single syntactic term, including their fields and methods. Furthermore, class terms are structured according to the class hierarchy, and objects appear as subterms of their class (and method instances as subterms of the relevant object). This helps to narrow the gap between how a programmer thinks about their code and the underlying mathematical objects in the semantics. The semantics is defined operationally, so that all actions a program may take, such as testing or setting local variables and fields, or invoking methods on other objects, appear on the labels of the transitions. A process-algebraic style of interprocess communication is used for object and class interactions. A benefit of this label-based approach to the semantics is that a separation of concerns can be made when defining the rules of the different constructs, and the rules tend to be more concise. The basic rules for individual commands may be composed into more powerful rules that operate at the level of classes and objects. The traces generated by the operational semantics are used as the basis for establishing equivalence between classes.

17.
One approach to model checking program source code is to view a model checker as a target machine. In this setting, program source code is translated to a model checker’s input language using a process that shares much in common with program compilation. For example, well-defined intermediate program representations are used to stage the translation through a series of analyses and optimizing transformations, and target-specific details are isolated in code generation modules. In this paper, we present the Bandera Intermediate Representation (BIR), a guarded-assignment transformation system language that has been designed to support the translation of Java programs to a variety of model checkers. BIR includes constructs, such as inheritance, dynamic creation of data, and locking primitives, that are designed to model the semantics of Java primitives. BIR also includes several non-deterministic choice constructs that support abstraction in modeling and specification of properties of dynamic heap structures. We have developed a BIR-based tool infrastructure that has been applied to develop customized analysis frameworks for several different input languages using different model checking tools. We present BIR’s type system and operational semantics in sufficient detail to support similar applications by other researchers. This semantics details several state space reductions and state space search variations. We describe the translation of Java to BIR and how BIR is translated to the input languages of several model checkers.

18.
A Study of the Algebraic Semantics of Verilog
This paper presents an algebraic semantics for Verilog: a system of equational axioms that expresses the semantic features of Verilog concisely and precisely through algebraic rules. The algebraic semantics is sound with respect to a previously developed operational semantic model, in the sense that, for every algebraic rule, the processes on its two sides are bisimilar under the observational model of the operational semantics. The paper also studies the relative completeness of the algebraic semantics: with respect to that operational model, the semantics is complete for a subset of an extended Verilog language, meaning that any two programs in this subset that are bisimulation-equivalent can also be proved equal in the proposed algebraic system. The completeness proof uses a normal-form method: a syntactically restricted class of programs is constructed such that every program in the subset can be transformed into a normal form by the algebraic rules, and two normal-form programs are bisimilar under the operational model if and only if they are syntactically identical. These results are of theoretical significance, because existing process-algebra theory has mainly been developed for concurrent languages based on channel communication, whereas complex concurrent languages such as Verilog, which communicate through shared variables, have received comparatively little study; this work offers a general and effective method for developing a process-algebra theory for such shared-variable concurrent languages.
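As an indication of the general shape of such equational laws (illustrative only; these particular equations are not quoted from the paper), an axiom system of this kind typically contains laws such as associativity of sequential composition and a unit law for skip:

    (P \,;\, Q) \,;\, R \;=\; P \,;\, (Q \,;\, R), \qquad \mathit{skip} \,;\, P \;=\; P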

19.
20.