Similar literature
A total of 20 similar documents were retrieved.
1.
Recently, the well-founded semantics of a logic program P has been strengthened to the well-founded semantics-by-case (WFC), which in turn has been strengthened to the extended well-founded semantics (WFE). Both WFC(P) and WFE(P) have the logical consequence property: if an atom Aj is true in the theory Th(P), then Aj is true in the semantics as well. However, neither WFC nor WFE has the GCWA property, i.e., if an atom Aj is false in all minimal models of P, Aj may not be false in WFC(P) (resp. WFE(P)). We extend the ideas in WFC and WFE to define a strong well-founded semantics WFS which has the GCWA property. The strong semantics WFS(P) is defined by combining GCWA with the notion of derived rules. Here we use a new type of derived rules (Type III) in addition to those used in WFC and WFE. The relationship between WFS and WFC is also clarified.
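As a hedged illustration of the GCWA gap described above (the program below is invented for this note, not taken from the paper), consider a small normal program under the well-founded semantics:

```prolog
% Illustrative program only (invented, not from the paper).
% Its minimal models are {p} and {q}; s is false in both, so GCWA concludes
% that s is false.  Under the well-founded semantics, however, p and q are
% undefined and therefore s is undefined as well, so the GCWA property fails.
% (Loading this in standard Prolog is fine, but querying p under plain SLDNF
% would loop; an engine that computes the well-founded semantics, e.g. a
% tabled system, reports p, q and s as undefined.)
p :- \+ q.
q :- \+ p.
s :- p, q.
```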

2.
In this paper we present and compare some classical problem-solving methods for computing the stable models of logic programs with negation. Using a graph theoretic representation of logic programs and their stable models, we discuss and compare linear programming, propositional satisfiability, constraint satisfaction, and graph methods.
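As a rough sketch of the satisfiability route (this is the generic textbook reduction, not the paper's particular encoding): for a tight program, the stable models are exactly the models of Clark's completion, so a SAT solver can enumerate them.

```prolog
% Generic sketch, not the paper's encoding: a tight program whose stable
% models coincide with the models of its Clark completion.
p :- \+ q.
q :- \+ p.
r :- p.
% Clark completion (propositional):  p <-> ~q,   q <-> ~p,   r <-> p.
% Its two models, {p, r} and {q}, are exactly the stable models of the
% program, so feeding the completion to a SAT solver enumerates them.
```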

3.
4.
During the last decade, the integration of functional and logic languages has attracted widespread attention, with the aim of offering two different programming styles in one system simultaneously. The main goal is to incorporate the characteristics of the two paradigms coherently, without degrading the performance of the whole system. However, few languages have achieved this goal. Some of them, even though they have a rich set of functions, perform poorly. Others are efficient, but lose some important facilities. The paper proposes a functional logic language, Lazy Aflog, and its abstract machine, FWAM-II, as an expressive and efficient mechanism for this incorporation. Lazy Aflog is an extension of a logic language in which functions are reduced within an extended unification, called E-unification with lazy evaluation. This extended unification allows Lazy Aflog to process infinite data structures and higher-order functions naturally. FWAM-II is an extension of the Warren Abstract Machine (WAM) to which instructions and run-time structures providing the suspension/reactivation of functional closures are added. These facilities enable FWAM-II to support not only resolution but also infinite data structures and higher-order functions efficiently. In addition, experimental results show that Lazy Aflog and FWAM-II can be a good compromise between the expressiveness and efficiency of the integration.
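The abstract does not show Lazy Aflog syntax, so the sketch below illustrates the underlying idea, demand-driven evaluation over an infinite structure, in plain Prolog using freeze/2 coroutining; the predicates nats/2 and take/3 are invented names, and this is not Lazy Aflog code.

```prolog
% Sketch of lazy evaluation over an infinite data structure in plain Prolog
% using freeze/2 (SWI-Prolog); NOT Lazy Aflog syntax, names are invented.
nats(N, [N|T]) :-
    freeze(T, (M is N + 1, nats(M, T))).   % build the tail only when demanded

take(0, _, []) :- !.
take(K, [X|Xs], [X|Ys]) :-
    K > 0,
    K1 is K - 1,
    take(K1, Xs, Ys).

% ?- nats(0, L), take(5, L, Prefix).
% Prefix = [0, 1, 2, 3, 4]   (L stays a partial list with a delayed tail).
```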

5.
It is desirable to answer queries posed to deductive databases by computing fixpoints because such computations are directly amenable to set-oriented fact processing. However, the classical fixpoint procedures based on bottom-up processing — the naive and semi-naive methods — are rather primitive and often inefficient. In this article, we rely on bottom-up meta-interpretation for formalizing a new fixpoint procedure that performs a different kind of reasoning: We specify a top-down query answering method, which we call the Backward Fixpoint Procedure. Then, we reconsider query evaluation methods for recursive databases. First, we show that the methods based on rewriting on the one hand, and the methods based on resolution on the other hand, implement the Backward Fixpoint Procedure. Second, we interpret the rewritings of the Alexander and Magic Set methods as specializations of the Backward Fixpoint Procedure. Finally, we argue that such a rewriting is also needed in a database context for implementing efficiently the resolution-based methods. Thus, the methods based on rewriting and the methods based on resolution implement the same top-down evaluation of the original database rules by means of auxiliary rules processed bottom-up.
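To make the kind of rewriting discussed above concrete, here is the textbook Magic Set rewriting of the usual ancestor example for a query whose first argument is bound; the facts and constants are invented, and this is not code from the article.

```prolog
% Textbook Magic Set rewriting for the query anc(ann, Y) (adornment bf);
% illustration only, not code from the article.
% Original rules:
%   anc(X, Y) :- par(X, Y).
%   anc(X, Y) :- par(X, Z), anc(Z, Y).
par(ann, bob).
par(bob, carl).

magic_anc(ann).                                % seed: the bound query argument
magic_anc(Z) :- magic_anc(X), par(X, Z).       % pass bindings into the recursion

anc(X, Y) :- magic_anc(X), par(X, Y).          % original rules guarded by magic facts
anc(X, Y) :- magic_anc(X), par(X, Z), anc(Z, Y).

% Bottom-up (semi-naive) evaluation of the rewritten program derives only the
% ancestor facts that are relevant to the query anc(ann, Y).
```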

6.
Logic Programs with Annotated Disjunctions (LPADs) provide a simple and elegant framework for representing probabilistic knowledge in logic programming. In this paper we consider the problem of learning ground LPADs starting from a set of interpretations annotated with their probability. We present the system ALLPAD for solving this problem. ALLPAD modifies the previous system LLPAD in order to tackle real world learning problems more effectively. This is achieved by looking for an approximate solution rather than a perfect one. A number of experiments have been performed on real and artificial data for evaluating ALLPAD, showing the feasibility of the approach.
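The abstract does not show LPAD syntax; the tiny ground program below, written in the ';'-separated annotated-head notation used by systems in the cplint family, is the standard biased-coin illustration and is not taken from the paper's data sets.

```prolog
% A tiny ground LPAD (standard biased-coin illustration, not from the paper):
% each clause has a disjunction of annotated heads whose probabilities sum to 1.
heads(coin) : 0.5 ; tails(coin) : 0.5 :- toss(coin), \+ biased(coin).
heads(coin) : 0.6 ; tails(coin) : 0.4 :- toss(coin), biased(coin).
toss(coin).
```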

7.
In the past we developed a semantics for a restricted annotated logic language for inheritance reasoning. Here we generalize it to annotated Horn logic programs. We first provide a formal account of the language, describe its semantics, and provide an interpreter written in Prolog for it. We then investigate its relationship to Belnap's 4-valued logic, Gelfond and Lifschitz's semantics for logic programs with negation, Brewka's prioritized default logics and other annotated logics due to Kifer et al.
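For orientation, annotated Horn clauses in the generalized-annotation style of Kifer et al. look roughly as sketched below; the bird/penguin clauses are invented here and are not taken from the paper.

```latex
% Sketch only; the bird/penguin clauses are invented, not the paper's examples.
% General form: an annotated head follows from a conjunction of annotated atoms.
\[
  A{:}\rho \;\leftarrow\; B_1{:}\mu_1 \wedge \cdots \wedge B_k{:}\mu_k
\]
% Example with annotations drawn from Belnap's four truth values
% \(\{\bot, \mathbf{t}, \mathbf{f}, \top\}\), ordered by information content:
\[
  \mathit{flies}(x){:}\mathbf{t} \leftarrow \mathit{bird}(x){:}\mathbf{t}
  \qquad
  \mathit{flies}(\mathit{tweety}){:}\mathbf{f} \leftarrow \mathit{penguin}(\mathit{tweety}){:}\mathbf{t}
\]
```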

8.
A method known as closed environments can be used to represent variable bindings for OR-parallel logic programs without relying on a shared memory or common address space. The representation is based on a procedure that transforms stack frames after unification, taking into account problems with common unbound ancestors and shared instances of complex terms. Closed environments were developed for the AND/OR Process Model, but may be applicable to other OR-parallel models.

9.
We consider the notion of strong equivalence [V. Lifschitz, D. Pearce, A. Valverde, Strongly equivalent logic programs, ACM Transactions on Computational Logic 2 (4) (2001) 526-541] of normal propositional logic programs under the infinite-valued semantics [P. Rondogiannis, W.W. Wadge, Minimum model semantics for logic programs with negation-as-failure, ACM Transactions on Computational Logic 6 (2) (2005) 441-467] (which is a purely model-theoretic semantics that is compatible with the well-founded one). We demonstrate that two such programs are strongly equivalent under the infinite-valued semantics if and only if they are logically equivalent in the corresponding infinite-valued logic. In particular, we show that strong equivalence of normal propositional logic programs is decidable, and more specifically coNP-complete. Our results have a direct implication for the well-founded semantics since, as we demonstrate, if two programs are strongly equivalent under the infinite-valued semantics, then they are also strongly equivalent under the well-founded semantics.
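A standard textbook example (not tied to the infinite-valued setting of the paper) shows why strong equivalence is stricter than ordinary equivalence:

```prolog
% Textbook illustration, not from the paper.
%
%   P1:  p :- \+ q.          P2:  p.
%
% P1 and P2 both have {p} as their single stable (and well-founded) model,
% so they are ordinarily equivalent.  Adding the same fact q to both breaks
% this: P1 + {q} has the model {q}, while P2 + {q} has the model {p, q}.
% Hence P1 and P2 are not strongly equivalent: strong equivalence demands
% identical behaviour under every added context.
```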

10.
Continuations are used to define the flow of messages between low level tasks in a parallel logic programming language. A combination of compiler and runtime operations reduces message traffic by up to 50% when success continuations are passed as parameters in messages that start new processes. Continuations are also the key to fast task switching, a critical operation in this fine grain parallel system. Data from sample programs shows the effectiveness of continuations in reducing message traffic and the speed with which task switches are performed on a typical host architecture. Supported by NSF Grant CCR-8707177 and grants from Motorola, Inc., and Hewlett-Packard Corp.

11.
Model transformation by example is a novel approach in model-driven software engineering to derive model transformation rules from an initial prototypical set of interrelated source and target models, which describe critical cases of the model transformation problem in a purely declarative way. In the current paper, we automate this approach using inductive logic programming (Muggleton and Raedt in J Logic Program 19-20:629–679, 1994) which aims at the inductive construction of first-order clausal theories from examples and background knowledge.

12.
More specific versions of definite logic programs are introduced. These are versions of a program in which each clause is further instantiated or removed and which have an equivalent set of successful derivations to those of the original program, but a possibly increased set of finitely failed goals. They are better than the original program because failure in a non-successful derivation may be detected more quickly. Furthermore, information about allowed variable bindings which is hidden in the original program may be made explicit in a more specific version of it. This allows better static analysis of the program's properties and may reveal errors in the original program. A program may have several more specific versions but there is always a most specific version which is unique up to variable renaming. Methods to calculate more specific versions are given and it is characterized when they give the most specific version.
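A small invented example (not from the article) of what a more specific version looks like:

```prolog
% Invented illustration of a more specific version.
% Original program:
%   p(X) :- q(X), r(X).
%   q(a).  q(b).
%   r(b).
% A more specific version instantiates the first clause to the only binding
% that can actually succeed:
p(b) :- q(b), r(b).
q(a).  q(b).
r(b).
% The set of successful derivations is unchanged (only p(b) succeeds), but a
% goal such as ?- p(a). now fails at head unification instead of after
% calling q(a) and r(a), and the allowed binding X = b has become explicit.
```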

13.
Checking whether a program has an answer set and, if so, computing its answer sets are some of the important problems in answer set logic programming. Solving these problems using Gelfond and Lifschitz's original definition of answer sets is not an easy task. Alternative characterizations of answer sets for nested logic programs by Erdem and Lifschitz, Lee and Lifschitz, and You et al. are based on the completion semantics and various notions of tightness. However, tightness is a local notion in the sense that for different answer sets there are, in general, different level mappings capturing their tightness. This makes it hard to use in the design of algorithms for computing answer sets. This paper proposes a characterization of answer sets based on sets of generating rules. From this characterization new algorithms are derived for computing answer sets and for performing some other reasoning tasks. As an application of the characterization, a sufficient and necessary condition for the equivalence between the answer set semantics and the completion semantics is proven, and a basic theorem is shown on computing answer sets for nested logic programs based on an extended notion of loop formulas. These results on tightness and loop formulas are more general than those in You and Lin's work.
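As a small illustration of the mismatch between completion and answer sets, and of loop formulas in the classical (non-nested) Lin-Zhao style (the paper's extended notion is not reproduced here):

```prolog
% Illustration in the classical Lin-Zhao style, not the paper's extended notion.
p :- q.
q :- p.
% Clark completion:  p <-> q,  q <-> p.   Its models are {} and {p, q}.
% Answer sets:       only {}.  The program is not tight because of the
% positive loop L = {p, q}, which has no support from outside the loop.
% The loop formula of L,  (p \/ q) -> false,  added to the completion rules
% out the spurious model {p, q} and restores the correspondence; for tight
% programs (no such loops) the two semantics already coincide.
```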

14.
In ‘multi-adjoint logic programming’, MALP for short, each fuzzy logic program is associated with its own ‘multi-adjoint lattice’ for modelling truth degrees beyond the simpler case of true and false, on which a large set of fuzzy connectives can be defined. Within this wide repertoire, it is crucial to connect each implication symbol with a proper conjunction, thus forming constructs of the form (←i, &i) called ‘adjoint pairs’, whose use directly affects both the declarative and the operational semantics of the MALP framework. In this work, we first show how the strong dependence on adjoint pairs can be largely weakened for an interesting ‘sub-class’ of MALP programs. Then, we reason in a similar way to conceive a ‘super-class’ of fuzzy logic programs beyond MALP, which definitively drops the need for using adjoint pairs, since the new semantic behaviour relies on much more relaxed lattices than multi-adjoint ones.
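For readers unfamiliar with adjoint pairs, the three pairs most commonly used on the unit interval lattice ([0,1], ≤) are recalled below; these are standard background definitions, not results of the paper.

```latex
% Standard adjoint pairs on ([0,1], \le); background definitions, not results
% of the paper.  Each pair (\leftarrow_i, \&_i) satisfies the adjoint
% condition:  &_i(x, y) \le z  iff  x \le (z \leftarrow_i y).
\[
  \text{G\"odel: } \&_{\mathrm G}(x,y)=\min(x,y),\qquad
  z\leftarrow_{\mathrm G} y = \begin{cases}1 & \text{if } y\le z\\ z & \text{otherwise}\end{cases}
\]
\[
  \text{Product: } \&_{\mathrm P}(x,y)=x\cdot y,\qquad
  z\leftarrow_{\mathrm P} y = \min(1,\, z/y)\quad(\text{with } z/0 \text{ read as } 1)
\]
\[
  \text{\L{}ukasiewicz: } \&_{\mathrm L}(x,y)=\max(0,\, x+y-1),\qquad
  z\leftarrow_{\mathrm L} y = \min(1,\, 1-y+z)
\]
```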

15.
This paper concerns the exploitation of user-transparent inherent parallelism of pure Prolog programs using program transformation. We describe a novel paradigm, enumerate-and-filter, for transforming generate-and-test programs for execution under the committed-choice model extended to incorporate multiple solutions based on set enumeration. The paradigm simulates OR-parallelism by stream AND-parallelism, integrating OR-parallelism, AND-parallelism, and stream parallelism. Generate-and-test programs are classified into three categories: simple generate-and-test, recursively embedded generate-and-test, and deeply intertwined generate-and-test. The intermediate programs are further transformed to reduce structure copying and metacalls. Algorithms are presented and demonstrated by transforming representative examples of the different classes of generate-and-test programs into Flat Concurrent Prolog equivalents. Statistics show that the techniques are efficient. Funded in part by the Cleveland Advanced Manufacturing Program through the State of Ohio as a part of its core research program grant to the Center for Automation and Intelligent Systems Research, Case Western Reserve University, and NSF equipment grant CDA-8820390 to Kent State University.
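A minimal example of the 'simple generate-and-test' class mentioned above, in plain Prolog (the transformed Flat Concurrent Prolog code is not reproduced here); permutation_sort/2 and sorted/1 are invented names, and permutation/2 is the SWI-Prolog library predicate.

```prolog
:- use_module(library(lists)).   % provides permutation/2

% A simple generate-and-test program: permutation/2 is the generator,
% sorted/1 the test.  Invented example, not from the paper.
permutation_sort(Xs, Ys) :-
    permutation(Xs, Ys),   % generate a candidate ordering
    sorted(Ys).            % test it

sorted([]).
sorted([_]).
sorted([X, Y | Zs]) :-
    X =< Y,
    sorted([Y | Zs]).

% ?- permutation_sort([3, 1, 2], S).
% S = [1, 2, 3].
```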

16.
17.
The paradigm of disjunctive logic programming (DLP) greatly enhances the expressive power of normal logic programming (NLP), and many (declarative) semantics have been defined for DLP to cope with various problems of knowledge representation in artificial intelligence. However, the expressive ability of these semantics and the soundness of program transformations for DLP have rarely been explored. This paper defines an immediate consequence operator T^GP for each disjunctive program and shows that T^GP has a least and computable fixpoint Lft(P). Lft is, in fact, a program transformation for DLP, which transforms all disjunctive programs into negative programs. It is shown that Lft preserves many key semantics, including the disjunctive stable models, the well-founded model, the disjunctive argument semantics DAS, three-valued models, etc. This means that every disjunctive program P has a unique canonical form Lft(P) with respect to these semantics. As a result, the work in this paper provides a unifying framework for studying the expressive ability of various semantics for DLP. On the other hand, computing the above semantics for negative programs is just a trivial task; therefore, Lft(P) also serves as an optimization method for DLP. Another application of Lft is to derive some interesting semantic results for DLP.

18.
We present a generic scheme for the declarative debugging of programs that are written in rewriting-based languages that are equipped with narrowing. Our aim is to provide an integrated development environment in which it is possible to debug a program and then correct it automatically. Our methodology is based on the combination (in a single framework) of a semantics-based diagnoser that identifies those parts of the code that contain errors and an inductive learner that tries to repair them, once the bugs have been located in the program. We develop our methodology in several steps. First, we associate with our programs a semantics that is based on a (continuous) immediate consequence operator, T_R, which models the answers computed by narrowing and is parametric w.r.t. the evaluation strategy, which can be eager or lazy. Then, we show that, given the intended specification of a program R, it is possible to check the correctness of R by a single step of T_R. In order to develop an effective debugging method, we approximate the computed answers semantics of R and derive a finitely terminating bottom-up abstract diagnosis method, which can be used statically. Finally, a bug-correction program synthesis methodology attempts to correct the erroneous components of the wrong code. We propose a hybrid, top-down (unfolding-based) as well as bottom-up (induction-based), correction approach that is driven by a set of evidence examples which are automatically produced as an outcome by the diagnoser. The resulting program is proven to be correct and complete w.r.t. the considered example sets. Our debugging framework does not require the user to provide error symptoms in advance or to answer difficult questions concerning program correctness. An implementation of our debugging system has been undertaken which demonstrates the workability of our approach.

19.
This article introduces the notion of CAS-equivalent logic programs: logic programs with identical correct answer substitutions. It is shown that the notions CAS-equivalence, refutational equivalence, and logical equivalence do not coincide in the case of definite clause logic programs. Least model criteria for refutational and CAS-equivalence are suggested and their correctness is proved. The least model approach is illustrated by two proofs of CAS-equivalence. It is shown that the symmetric extension of a logic program subsumes the symmetry axiom and the symmetric homogeneous form of a logic program with equality subsumes the symmetry, transitivity, and predicate substitutivity axioms of equality. These results contribute towards the goal of building equality into standard Prolog without introducing additional inference rules.

20.
In this paper, an optimal method to handle cyclic and acyclic data relations in linear recursive queries is proposed. High efficiency is achieved by integrating graph traversal mechanisms into a top-down evaluation. In this way, the subsumption checks and the identification of cyclic data can be done very efficiently. First, based on the subsumption checks, the search space can be reduced drastically by avoiding any redundant expansion operation. In fact, in the case of non-cyclic data, the proposed algorithm requires only linear time for evaluating a linear recursive query. On the other hand, in the case of cyclic data, by using the technique of isolating strongly connected components, many answers can be generated directly from the intermediate results and the relevant path information instead of being evaluated by performing algebraic operations. Since the cost of generating an answer is much less than that of evaluating an answer by algebraic operations, the time consumption for cyclic data can be reduced by an order of magnitude or more.
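The paper's graph-traversal algorithm is not reproduced here; as a simple point of comparison, the sketch below shows how a tabled Prolog system copes with a linear recursive query over cyclic data, terminating where plain top-down evaluation would loop. The edge/reach predicates are invented.

```prolog
% Not the paper's algorithm: a standard tabled-evaluation baseline for a
% linear recursive query over cyclic data (SWI-Prolog / XSB style tabling).
:- table reach/2.

edge(a, b).
edge(b, c).
edge(c, a).          % cycle a -> b -> c -> a
edge(c, d).

reach(X, Y) :- edge(X, Y).
reach(X, Y) :- edge(X, Z), reach(Z, Y).      % linear recursion

% ?- reach(a, W).
% enumerates W = b, c, a, d (in some order) and terminates despite the cycle.
```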
