Similar Documents
20 similar documents found.
1.
Recent deductive approaches to reasoning about action and change allow us to model objects and methods in a deductive framework. In these approaches, inheritance of methods comes for free, whereas overriding of methods is unsupported. In this paper, we present an equational logic framework for objects, methods, inheritance and overriding of methods. Overriding is achieved via the concept of specificity, which states that more specific methods are preferred to less specific ones. Specificity is computed with the help of negation as failure. We specify equational logic programs and show that their completed versions behave as intended. Furthermore, we prove that SLDENF-resolution is complete if the equational theory is finitary, the completed programs are consistent, and no derivation flounders or is infinite. Moreover, we give syntactic conditions which guarantee that no derivation flounders or is infinite. Finally, we discuss how the approach can be extended to reasoning about the past in the context of incompletely specified objects or situations. It turns out that constructive negation is needed to solve these problems.
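As a loose illustration of the specificity idea described above (more specific method definitions are preferred, and a less specific one applies only when no more specific one is applicable), here is a minimal Python sketch of specificity-based dispatch. The class hierarchy, method table and all names are invented for the example; the paper itself works with equational logic programs and negation as failure, not with Python classes.

```python
# Illustrative sketch only (not the paper's equational-logic encoding):
# method overriding by specificity, where a less specific definition is
# used only if the lookup for every more specific class has failed --
# the role played by negation as failure in the paper.

CLASS_PARENT = {"penguin": "bird", "bird": "animal", "animal": None}

# method definitions, keyed by (class, method name); all names are made up
METHODS = {
    ("animal", "moves"): "somehow",
    ("bird", "moves"): "flies",
    ("penguin", "moves"): "walks",
}

def specificity_chain(cls):
    """Yield classes from most to least specific for an instance of cls."""
    while cls is not None:
        yield cls
        cls = CLASS_PARENT[cls]

def dispatch(cls, method):
    """Return the most specific applicable definition of the method."""
    for c in specificity_chain(cls):
        if (c, method) in METHODS:
            return METHODS[(c, method)]
    raise LookupError(f"no definition of {method} for {cls}")

assert dispatch("penguin", "moves") == "walks"    # overriding
assert dispatch("bird", "moves") == "flies"       # inherited, overrides animal
assert dispatch("animal", "moves") == "somehow"
```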

2.
This paper presents a mathematical theory underlying a systematic method for constructing Prolog programs called stepwise enhancement. Stepwise enhancement dictates building a program starting with a skeleton program which constitutes the basic control flow for the problem to be solved, and adding extra computations to the skeleton program by using well-understood programming techniques. Each extra computation can be developed independently, and the separate enhancements combined to produce the final program. While intuition and motivation have focused on Prolog, the methods are applicable to logic programming languages more generally. The central concept in our mathematical theory for stepwise enhancement is that of a program map between two logic programs. Our definition of a program map from an enhancement to its skeleton guarantees the lifting of computations, the essence of the enhancement methodology. In this paper, we give definitions of program maps and extensions, show that the definitions preserve the property of computations lifting, give examples of extensions and programming techniques which generate them, and point to directions for future work.
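To convey the flavour of "skeleton plus enhancements" outside Prolog, the following Python sketch uses a plain list traversal as the skeleton and adds two independent enhancements that are then combined. This is only an analogy made for illustration; the paper's actual development is in terms of Prolog clauses and program maps between logic programs.

```python
# A toy, language-shifted illustration of "skeleton + enhancements".

def skeleton(xs):
    """Skeleton: the bare control flow -- walk the list, compute nothing."""
    for _ in xs:
        pass

def count(xs):                      # enhancement 1: add a counter
    n = 0
    for _ in xs:
        n += 1
    return n

def total(xs):                      # enhancement 2: add an accumulator
    s = 0
    for x in xs:
        s += x
    return s

def count_and_total(xs):            # both enhancements combined on the
    n, s = 0, 0                     # same skeleton traversal
    for x in xs:
        n += 1
        s += x
    return n, s

assert count_and_total([3, 4, 5]) == (count([3, 4, 5]), total([3, 4, 5]))
```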

3.
Meta-level compositions of object logic programs are naturally implemented by means of meta-programming techniques. Meta-interpreters defining program compositions, however, suffer from a computational overhead that is due partly to the interpretation layer present in all meta-programs, and partly to the specific interpretation layer needed to deal with program compositions. We show that meta-interpreters implementing compositions of object programs can be fruitfully specialised w.r.t. meta-level queries of the form Demo(E, G), where E denotes a program expression and G denotes a (partially instantiated) object-level query. More precisely, we describe the design and implementation of a declarative program specialiser that suitably transforms such meta-interpreters so as to significantly reduce, if not completely remove, the overhead due to the handling of program compositions. In many cases the specialiser also succeeds in eliminating the overhead due to meta-interpretation. Antonio Brogi, Ph.D.: He is currently an assistant professor in the Department of Computer Science at the University of Pisa, Italy. He received his Laurea Degree in Computer Science (1987) and his Ph.D. in Computer Science (1993) from the University of Pisa. His research interests include programming language design and semantics, logic programming, deductive databases, and software coordination. Simone Contiero: He is currently a Ph.D. student at the Department of Computer Science, University of Pisa, Italy. He received his Laurea Degree in Computer Science from the University of Pisa in 1994. His research interests are in high-level programming languages, meta-programming and logic-based coordination of software.

4.
Although modularisation is basic to modern computing, it has been little studied for logic-based programming. We treat modularisation for equational logic programming using the institution of category-based equational logic in three different ways: (1) to provide a generic satisfaction condition for equational logics; (2) to give a category-based semantics for queries and their solutions; and (3) as an abstract definition of compilation from one (equational) logic programming language to another. Regarding (2), we study soundness and completeness for equational logic programming queries and their solutions. This can be understood as ordinary soundness and completeness in a suitable “non-logical” institution. Soundness holds for all module imports, but completeness only holds for conservative module imports. Category-based equational signatures are seen as modules, and morphisms of such signatures as module imports. Regarding (3), completeness corresponds to compiler correctness. The results of this research apply to languages based on a wide class of equational logic systems, including Horn clause logic, with or without equality; all variants of order-sorted and many-sorted equational logic, including working modulo a set of axioms; constraint logic programming over arbitrary user-defined data types; and any combination of the above. Most importantly, due to its level of abstraction, this research makes it possible to give semantics to, and to study modularisation for, equational logic programming developed over non-conventional structures. Received April 15, 1994 / April 12, 1995
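For reference, the generic satisfaction condition mentioned in (1) specialises the usual satisfaction condition of institutions. Its standard, textbook form (not the paper's category-based-equational instance) is sketched below.

```latex
% Institution satisfaction condition (Goguen--Burstall), standard form:
% for every signature morphism \varphi : \Sigma \to \Sigma', every
% \Sigma'-model M', and every \Sigma-sentence e,
\[
  M' \models_{\Sigma'} \varphi(e)
  \quad\Longleftrightarrow\quad
  M'{\upharpoonright}_{\varphi} \models_{\Sigma} e
\]
```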

5.
The paradigm of disjunctive logic programming (DLP) greatly enhances the expressive power of normal logic programming (NLP), and many (declarative) semantics have been defined for DLP to cope with various problems of knowledge representation in artificial intelligence. However, the expressive ability of these semantics and the soundness of program transformations for DLP have rarely been explored. This paper defines an immediate consequence operator T^GP for each disjunctive program and shows that T^GP has a least fixpoint Lft(P), which is computable. Lft is, in fact, a program transformation for DLP which transforms all disjunctive programs into negative programs. It is shown that Lft preserves many key semantics, including the disjunctive stable models, the well-founded model, disjunctive argument semantics DAS, three-valued models, etc. This means that every disjunctive program P has a unique canonical form Lft(P) with respect to these semantics. As a result, the work in this paper provides a unifying framework for studying the expressive ability of various semantics for DLP. On the other hand, computing the above semantics for negative programs is just a trivial task; therefore, Lft(P) is also an optimization method for DLP. Another application of Lft is to derive some interesting semantic results for DLP.
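The operator above acts on disjunctive programs; as background, the Python sketch below shows only the classical least-fixpoint iteration of the immediate consequence operator T_P for a ground definite program, which is the construction that operators such as T^GP generalise. The program encoding (pairs of head and body atoms) is invented for the sketch.

```python
# Least-fixpoint iteration of the classical immediate consequence operator
# T_P for a ground definite program (background for the disjunctive case).

def t_p(program, interp):
    """One application of T_P.
    program: list of (head, [body atoms]) ground clauses."""
    return {head for head, body in program if all(b in interp for b in body)}

def least_fixpoint(program):
    """Iterate T_P from the empty interpretation until it stabilises."""
    interp = set()
    while True:
        nxt = t_p(program, interp)
        if nxt == interp:
            return interp
        interp = nxt

# p.   q :- p.   r :- q, s.
program = [("p", []), ("q", ["p"]), ("r", ["q", "s"])]
assert least_fixpoint(program) == {"p", "q"}   # r is never derivable
```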

6.
Spatial logics have been proposed to reason locally and modularly on algebraic models of distributed systems. In this paper we define the spatial equational logic AπL, whose models are processes of the applied π-calculus. This extension of the π-calculus allows term manipulation and records communications as aliases in a frame, thus augmenting the predefined underlying equational theory. Our logic allows one to reason locally either on frames or on processes, thanks to static and dynamic spatial operators. We study the logical equivalences induced by various relevant fragments of AπL, and show in particular that the whole logic induces a coarser equivalence than structural congruence. We give characteristic formulae for some of these equivalences and for static equivalence. Going further into the exploration of AπL's expressivity, we also show that it can eliminate standard term quantification.

7.
The problem of unifying pairs of terms with respect to an equational theory (as well as detecting the unsatisfiability of a system of equations) is, in general, undecidable. In this work, we define a framework based on abstract interpretation for the (static) analysis of the unsatisfiability of equation sets. The main idea behind the method is to abstract the process of semantic unification of equation sets based on narrowing. The method consists of building an abstract narrower for equational theories, and executing, in the approximated narrower, the sets of equations to be checked for unsatisfiability. As an instance of our framework, we define a new analysis whose accuracy is enhanced by a simple loop-checking technique. This analysis can also be actively used for pruning the search tree of an incremental equational constraint solver, and can be integrated with other methods in the literature. Standard methods are shown to be instances of our framework. To the best of our knowledge, this is the first framework proposed for approximating equational unification.

8.
Reasoning about the termination of equational programs in sophisticated equational languages such as Elan, Maude, OBJ, CafeOBJ, Haskell, and so on, requires support for advanced features such as evaluation strategies, rewriting modulo, use of extra variables in conditions, partiality, and expressive type systems (possibly including polymorphism and higher-order features). However, many of those features are, at best, only partially supported by current term rewriting termination tools (for instance mu-term, CiME, AProVE, TTT, Termptation, etc.), while they may be essential to ensure termination. We present a sequence of theory transformations that can be used to bridge the gap between expressive membership equational programs and such termination tools, and prove the correctness of these transformations. We also discuss a prototype tool that performs the transformations on Maude equational programs and sends the resulting transformed theories to some of the aforementioned standard termination tools.

9.
The relationship between Reiter's default logic and general logic programs, which may contain negative subgoals in rule bodies, has been discussed in the literature by translating logic programs to default logic theories. Here, we present a method to translate some default logic theories to general logic programs, and study the extensions of default logic theories with the stable model semantics of logic programming. Based on the translation method, we show that another semantics of logic programming, the well-founded semantics, can be used to define a new version of default logic, which is more cautious than the original one. This enables the existing proof procedures for the well-founded semantics to perform default inference. We also study the property of cumulative monotonicity for both default logic theories and general logic programs under the two different semantics. As a direct application of the translation method, logic programs can be used to make default inference for semantic networks with exceptions.
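Translations of this kind build on the classical correspondence between program rules and default rules (due to Bidoit and Froidevaux, Gelfond and Lifschitz, and Marek and Truszczyński). Stated in its usual form, and not necessarily in the exact shape used in this paper, it reads:

```latex
% Classical rule-to-default correspondence: the program rule on the left
% is read as the default on the right, and the stable models of a program
% then coincide with (the atomic parts of) the Reiter extensions of the
% translated default theory.
\[
  c \leftarrow a_1, \dots, a_m,\ \mathit{not}\ b_1, \dots, \mathit{not}\ b_n
  \qquad\leadsto\qquad
  \frac{a_1 \wedge \dots \wedge a_m \;:\; \neg b_1, \dots, \neg b_n}{c}
\]
```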

10.
A theory for a type system for logic programs is developed which addresses the questions of well-typing, type inference, and compile-time and run-time type checking. A type is a recursively enumerable set of ground atoms which is tuple-distributive. The association of a type with a program is intended to mean that only ground atoms that are elements of the type may be derived from the program. A declarative definition of well-typed programs is formulated, based on an intuitive approach related to the fixpoint semantics of logic programs. Whether a program is well-typed is undecidable in general. We define a restricted class of types, called regular types, for which type checking is decidable. Regular unary logic programs are proposed as a specification language for regular types. An algorithm for type-checking a logic program with respect to a regular type definition is described, and its complexity is analyzed. Finally, the practicality of the type system is discussed, and some examples are shown. The type system has been implemented in FCP for FCP and is incorporated in the Logix system.

11.
12.
We introduce a reasoning infrastructure for proving statements about resource consumption in a fragment of the Java Virtual Machine Language (JVML). The infrastructure is based on a small hierarchy of program logics, with increasing levels of abstraction: at the top there is a type system for a high-level language that encodes resource consumption. The infrastructure is designed to be used in a proof-carrying code (PCC) scenario, where mobile programs can be equipped with formal evidence that they have predictable resource behaviour.

13.
Consider the class of programs P for which the greatest fixpoint of T_P is equal to the complement of the finite failure set of P. Programs in this class possess some important properties which others do not. The main result of this paper proves that this class is representative of all programs. Specifically, we call the programs in this class canonical, and we prove that for any program P1 there exists a semantically equivalent program P2 which is canonical.
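To make the defining property concrete, the following is a standard example from the logic programming literature (not taken from this paper) of a program that is not canonical; by the result above it nevertheless has a semantically equivalent canonical counterpart.

```latex
% "Canonical" as defined above:  gfp(T_P) = B_P \setminus FF(P).
% A standard non-canonical example: over a language with one constant a
% and one unary function symbol f, take
\[
  P = \{\ q \leftarrow p(x), \qquad p(f(x)) \leftarrow p(x)\ \}.
\]
% Here T_P \downarrow \omega = \{q\}, so q \notin FF(P) (its fair SLD tree
% is infinite), while gfp(T_P) = T_P \downarrow (\omega + 1) = \emptyset;
% hence gfp(T_P) \neq B_P \setminus FF(P), and P is not canonical.
```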

14.
15.
We present a technique for the compilation of bottom-up and mixed logic derivations into PROLOG programs. It is obtained as an extension of a program transformation technique called Compiling Control. We illustrate its applications in three different domains: solving numerical problems, integrity checking in deductive databases, and theorem proving. The aim is to obtain efficient PROLOG programs for problems in which a non-top-down control is most appropriate. Work partly supported by ESPRIT BRA COMPULOG (project 3012), by the Belgian I.W.O.N.L.-I.R.S.I.A. under contract number 5203, and by the Belgian National Fund for Scientific Research.

16.
17.
A parallel-execution model that can concurrently exploit AND and OR parallelism in logic programs is presented. This model employs a combination of techniques in an approach to executing logic programs in parallel, making tradeoffs among the number of processes, the degree of parallelism, and the combination bandwidth. For interpreting a nondeterministic logic program, this model (1) performs frame inheritance for newly created goals, (2) creates data-dependency graphs (DDGs) that represent relationships among the goals, and (3) constructs appropriate process structures based on the DDGs. (1) The use of frame inheritance serves to increase modularity. In contrast to most previous parallel models that have a large single process structure, frame inheritance facilitates the dynamic construction of multiple independent process structures, and thus permits further manipulation of each process structure. (2) The dynamic determination of data dependency serves to reduce computational complexity. In comparison to models that exploit brute-force parallelism and models that have fixed execution sequences, this model can reduce the number of unification and/or merging steps substantially. In comparison to models that exploit only AND parallelism, this model can selectively exploit demand-driven computation, according to the binding of the query and optional annotations. (3) The construction of appropriate process structures serves to reduce communication complexity. Unlike other methods that map DDGs directly onto process structures, this model can significantly reduce the number of data items sent to a process and/or the number of communication channels connected to a process.
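As a much simplified illustration of step (2), the Python sketch below builds a data-dependency graph over the body goals of a clause by connecting goals that share a variable; independent goals can then run AND-parallel. The goal and variable encoding is invented for the sketch, and the paper's model additionally takes query bindings and optional annotations into account when determining dependencies dynamically.

```python
# Simplified data-dependency graph (DDG) construction: two goals are
# linked when they share a variable, i.e. one may produce a binding the
# other consumes.
from itertools import combinations

def build_ddg(goals):
    """goals: dict mapping a goal (string) to the set of variables it mentions.
    Returns the set of undirected edges between goals sharing a variable."""
    edges = set()
    for (g1, v1), (g2, v2) in combinations(goals.items(), 2):
        if v1 & v2:                       # shared variable => data dependency
            edges.add(frozenset((g1, g2)))
    return edges

# body of a clause like  grandparent(X,Z) :- parent(X,Y), parent(Y,Z).
goals = {"parent(X,Y)": {"X", "Y"}, "parent(Y,Z)": {"Y", "Z"}}
assert build_ddg(goals) == {frozenset(("parent(X,Y)", "parent(Y,Z)"))}
# Goals with no edge between them are candidates for AND-parallel execution;
# dependent goals are ordered or combined by merging their bindings.
```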

18.
Artificial neural networks can be trained to perform excellently in many application areas. Whilst they can learn from raw data to solve sophisticated recognition and analysis problems, the acquired knowledge remains hidden within the network architecture and is not readily accessible for analysis or further use: trained networks are black boxes. Recent research efforts therefore investigate the possibility of extracting symbolic knowledge from trained networks, in order to analyze, validate, and reuse the structural insights gained implicitly during the training process. In this paper, we study how knowledge in the form of propositional logic programs can be obtained in such a way that the programs are as simple as possible, where "simple" is understood in some clearly defined and meaningful way.
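To give a concrete (if deliberately naive) picture of what extracting knowledge in the form of propositional logic programs can mean, the Python sketch below reads rules off a single trained perceptron by exhaustively testing its boolean inputs, a simple pedagogical-style extraction. This is an assumed toy setting for illustration only; it is not the extraction method, network class, or simplicity criterion studied in the paper.

```python
# Pedagogical-style rule extraction from one perceptron with boolean inputs:
# treat the unit as a black box and emit one rule per activating input vector.
from itertools import product

def perceptron(weights, bias, x):
    """Threshold unit: fires iff the weighted sum plus bias is positive."""
    return sum(w * xi for w, xi in zip(weights, x)) + bias > 0

def extract_rules(weights, bias, names):
    """Return one propositional rule (as a string) per input vector that
    activates the unit:  head :- l1, ..., ln."""
    rules = []
    for x in product([0, 1], repeat=len(weights)):
        if perceptron(weights, bias, x):
            body = [n if v else f"not {n}" for n, v in zip(names, x)]
            rules.append("head :- " + ", ".join(body) + ".")
    return rules

# A unit that behaves like the program rule  head :- a, b.
print(extract_rules([1.0, 1.0], -1.5, ["a", "b"]))
# ['head :- a, b.']
```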

19.
The paper gives an overview of the DSPL programming environment, an integrated approach to automating the system design and implementation of applications run on dedicated parallel systems. The programming environment consists of a data-flow language and an integrated set of tools. The tools automatically derive a software model from the given application program. Based on the model, design decisions such as the network topology, the task mapping and schedule, as well as the optimal use of buffers, are computed. Finally, the design decisions are automatically implemented by transforming the application program into executable code for the chosen processor network. The DSPL programming environment integrates model-based optimization techniques and program transformation techniques. The integration allows new aspects to be included in the optimization process; in particular, optimizations crucial to the semantics of the program can be included. The most important examples of such optimizations are the enforcement of the schedule in the case of data-dependent execution of tasks and the transformation of buffered communication into unbuffered communication. Both aspects are crucial to the generation of efficient parallel implementations. The integration of the two aspects is supported by a formal framework, which makes it possible to formally prove the correctness of the program optimizations performed by the programming environment.

20.
This article introduces the notion of CAS-equivalent logic programs: logic programs with identical correct answer substitutions. It is shown that the notions of CAS-equivalence, refutational equivalence, and logical equivalence do not coincide in the case of definite clause logic programs. Least model criteria for refutational and CAS-equivalence are suggested and their correctness is proved. The least model approach is illustrated by two proofs of CAS-equivalence. It is shown that the symmetric extension of a logic program subsumes the symmetry axiom, and that the symmetric homogeneous form of a logic program with equality subsumes the symmetry, transitivity, and predicate substitutivity axioms of equality. These results contribute towards the goal of building equality into standard Prolog without introducing additional inference rules.

