Similar Documents
20 similar documents found (search time: 62 ms)
1.
SATCHMORE was introduced as a mechanism to integrate relevancy testing with the model-generation theorem prover SATCHMO. This made it possible to avoid invoking some clauses that appear in no refutation, which was a major drawback of the SATCHMO approach. SATCHMORE relevancy, however, is driven by the entire set of negative clauses, and no distinction is accorded to the query negation. Under unfavorable circumstances, such as in the presence of large amounts of negative data, this can reduce the efficiency of SATCHMORE. In this paper we introduce a further refinement of SATCHMO called SATCHMOREBID: SATCHMORE with BIDirectional relevancy. SATCHMOREBID uses only the negation of the query for relevancy determination at the start. Other negative clauses are introduced on demand and only if a refutation is not possible using the current set of negative clauses. The search for the relevant negative clauses is performed in a forward chaining mode, as opposed to relevancy propagation in SATCHMORE, which is based on backward chaining. SATCHMOREBID is shown to be refutationally sound and complete. Experiments on a prototype SATCHMOREBID implementation point to its potential to enhance the efficiency of the query answering process in disjunctive databases. Donald Loveland, Ph.D.: He is Emeritus Professor of Computer Science at Duke University. He received his Ph.D. in mathematics from New York University and taught at NYU and CMU prior to joining Duke in 1973. His research in automated deduction includes defining the model elimination proof procedure and the notion of linear resolution. He is author of one book and editor/co-editor of two other books on automated theorem proving. He has done research in the areas of algorithms, complexity, expert systems and logic programming. He is an AAAI Fellow, an ACM Fellow and a winner of the Herbrand Award in Automated Reasoning. Adnan H. Yahya, Ph.D.: He is an associate professor at the Department of Electrical Engineering, Birzeit University, Palestine. He received his Diploma and Ph.D. degrees from St. Petersburg Electrotechnical University and Northwestern University in 1979 and 1984, respectively. His research interests are in Artificial Intelligence in general and in the areas of Deductive Databases, Logic Programming and Nonmonotonic Reasoning in particular. He has held several visiting appointments at universities and research labs in the US, Germany, France and the UK. Adnan Yahya is a member of the ACM, IEEE and IEEE Computer Society.
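As a toy illustration of the forward-chaining mode mentioned above (a minimal sketch with an assumed ground, propositional encoding; it is not SATCHMOREBID itself), the following Python fragment forward-chains facts through Horn rules and reports which negative clauses become fully matched, i.e., candidates for participating in a refutation:

```python
# Hypothetical encoding: Horn rules are (body, head) pairs of ground atoms;
# a negative clause ~a1 v ... v ~an is represented by its atom set.

def relevant_negative_clauses(facts, rules, negative_clauses):
    """Forward-chain from the facts; a negative clause becomes relevant
    once all of its atoms are derivable."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in derived and set(body) <= derived:
                derived.add(head)
                changed = True
    return [nc for nc in negative_clauses if nc <= derived]

facts = {"p"}
rules = [(["p"], "q"), (["q"], "r")]
negatives = [{"r"}, {"s"}]                                   # ~r and ~s
print(relevant_negative_clauses(facts, rules, negatives))    # [{'r'}]
```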

2.
I-SATCHMORE: An Improvement of A-SATCHMORE
This paper presents an improvement of A-SATCHMORE (SATCHMORE with Availability). A-SATCHMORE incorporates relevancy testing and availability checking into SATCHMO to prune away irrelevant forward chaining. However, by regarding every consequent atom of the non-Horn clauses as derivable, A-SATCHMORE may suffer from a potential explosion of the search space when some of these consequent atoms are actually underivable. This paper introduces a solution for this problem and shows its correctness.

3.
Although Prolog is a programming language based on techniques from theorem proving, its use as a base for a theorem prover has not been explored until recently (Stickel, 1984). In this paper, we introduce a Prolog-based deductive theorem proving method for proving theorems in a first-order inductive theory representable in Horn clauses. The method has the following characteristics:
  • 1.It automatically partitions the domains over which the variables range into subdomains according to the manner in which the predicate symbols in the theorem are defined.
  • 2.For each of the subdomains the prover returns a lemma. If the lemma is true, then the target theorem is true for this subdomain. The lemma could also be an induction hypothesis for the theorem.
  • 3.The method does not explicitly use any inductive inference rule. The induction hypothesis, if needed for a certain subdomain, will sometimes be generated from a (limited) forward chaining mechanism in the prover and not from employing any particular inference rule.
In addition to the backward chaining and backtracking facilities of Prolog, our method introduces three new mechanisms: skolemization by need, suspended evaluation, and limited forward chaining. These new mechanisms are simple enough to be easily implemented or even incorporated into Prolog. We describe how the theorem prover can be used to prove properties of Prolog programs by showing two simple examples.
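Of the three mechanisms, suspended evaluation is the easiest to convey in isolation. The sketch below (Python, with an assumed goal representation; it is not the paper's Prolog machinery) defers a goal until the variables it needs are bound, then wakes it up:

```python
def solve(goals, bindings):
    """Run goals, suspending any whose required variables are unbound.
    Each goal is (needed_variables, action); suspended goals are retried
    whenever some other goal has made progress."""
    pending = list(goals)
    progress = True
    while pending and progress:
        progress, still = False, []
        for needs, action in pending:
            if all(v in bindings for v in needs):
                action(bindings)                   # evaluable now: run it
                progress = True
            else:
                still.append((needs, action))      # suspend for later
        pending = still
    return bindings, pending                       # pending: never woken

goals = [(("X",), lambda b: b.__setitem__("Y", b["X"] + 1)),
         ((), lambda b: b.__setitem__("X", 1))]
print(solve(goals, {}))   # ({'X': 1, 'Y': 2}, [])
```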

4.
Binary-clause reasoning has been shown to reduce the size of the search space on many satisfiability problems, but has often been so expensive that run-time was higher than that of a simpler procedure that explored a larger space. The method of Sharir for detecting strongly connected components in a directed graph can be adapted to performing lean resolution on a set of binary clauses. Beyond simply detecting unsatisfiability, the goal is to find implied equivalent literals, implied unit clauses, and implied binary clauses.
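A rough sketch of the underlying technique (an assumed propositional encoding, not the paper's procedure): each binary clause (a v b) contributes the implication edges ~a -> b and ~b -> a; literals in the same strongly connected component are equivalent, and the clause set is unsatisfiable when a literal shares a component with its own negation.

```python
from itertools import count

def implication_graph(clauses):
    """Literals are nonzero ints, -x is the negation of x; a binary
    clause (a, b) yields the edges -a -> b and -b -> a."""
    graph = {}
    for a, b in clauses:
        for lit in (a, b, -a, -b):
            graph.setdefault(lit, [])
        graph[-a].append(b)
        graph[-b].append(a)
    return graph

def tarjan_scc(graph):
    """Tarjan's strongly-connected-components algorithm."""
    index, low, on_stack, stack = {}, {}, set(), []
    sccs, counter = [], count()
    def strongconnect(v):
        index[v] = low[v] = next(counter)
        stack.append(v); on_stack.add(v)
        for w in graph[v]:
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of a component
            comp = set()
            while True:
                w = stack.pop(); on_stack.discard(w); comp.add(w)
                if w == v:
                    break
            sccs.append(comp)
    for v in graph:
        if v not in index:
            strongconnect(v)
    return sccs

# (x1 v x2), (~x1 v x2), (~x2 v x1): x1 and x2 are equivalent literals.
# Implied unit clauses would additionally need reachability between each
# literal's component and its complement's (not computed here).
for comp in tarjan_scc(implication_graph([(1, 2), (-1, 2), (-2, 1)])):
    if any(-lit in comp for lit in comp):
        print("unsatisfiable")
    elif len(comp) > 1:
        print("equivalent literals:", comp)
```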

5.
Non-Horn clause logic programming without contrapositives
We present an extension of Prolog-style Horn clause logic programming to full first order logic. This extension has some advantages over other such extensions that have been proposed. We compare this method with the model elimination strategy which Stickel has recently implemented very efficiently, and with Loveland's extension of Prolog to near-Horn clauses. This new method is based on the author's simplified problem reduction format but permits a better control of the splitting rule than does the simplified problem reduction format. In contrast to model elimination, this new method does not require the use of contrapositives of clauses, permitting a better control of the search. This method has been implemented in C Prolog and has turned out to be a respectable and surprisingly compact first-order theorem prover. This implementation uses depth-first iterative deepening and caching of answers to avoid repeated solution of the same subgoal. We show that the time and space used by this method are polynomial functions of certain natural parameters of the search space, unlike other known methods. We discuss the relation of these upper bounds to Savitch's theorem relating nondeterministic time to deterministic space. This research was supported in part by the National Science Foundation under grant DCR-8516243.
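The combination of depth-first iterative deepening with caching of answers can be conveyed by a small sketch (Python, with an assumed propositional goal/rule format rather than the paper's first-order procedure): each depth bound gets a fresh cache, and a subgoal once proved is never re-derived within that bound.

```python
def prove(goal, rules, depth, cache):
    """Try to prove `goal` within `depth` steps; rules map a goal to
    alternative lists of subgoals, and proved goals are cached."""
    if goal in cache:
        return True
    if depth == 0:
        return False
    for subgoals in rules.get(goal, []):
        if all(prove(g, rules, depth - 1, cache) for g in subgoals):
            cache.add(goal)
            return True
    return False

def iterative_deepening(goal, rules, max_depth=10):
    for depth in range(1, max_depth + 1):
        if prove(goal, rules, depth, set()):   # fresh cache per bound
            return depth
    return None

rules = {"p": [["q", "r"]], "q": [[]], "r": [["q"]]}
print(iterative_deepening("p", rules))   # 2: the cached q re-proves r cheaply
```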

6.
Binary-clause reasoning has been shown to reduce the size of the search space on many satisfiability problems, but has often been so expensive that run-time was higher than that of a simpler procedure that explored a larger space. The method of Sharir for detecting strongly connected components in a directed graph can be adapted to performing lean resolution on a set of binary clauses. Beyond simply detecting unsatisfiability, the goal is to find implied equivalent literals, implied unit clauses, and implied binary clauses.

7.
The Model Elimination (ME) calculus is a refutationally complete, goal-oriented calculus for first-order clause logic. In this article, we introduce a new variant called disjunctive positive ME (DPME); it improves on Plaisted's positive refinement of ME in that reduction steps are allowed only with positive literals stemming from clauses having at least two positive literals (so-called disjunctive clauses). DPME is motivated by its application to various kinds of subsumption deletion: in order to apply subsumption deletion in ME equally successfully as in resolution, it is crucial to employ a version of ME that minimizes ancestor context (i.e., the necessary A-literals to find a refutation). DPME meets this demand. We describe several variants of ME with subsumption, the most important ones being ME with backward and forward subsumption and the T*-Context Check. We compare their pruning power, also taking into consideration the well-known regularity restriction. All proofs are supplied. The practicability of our approach is demonstrated with experiments.

8.
We prove hardness results for approximating set splitting problems and also instances of satisfiability problems which have no mixed clauses, i.e., every clause has either all its literals unnegated or all of them negated. Results of Håstad imply tight hardness results for set splitting when all sets have exactly k ≥ 4 elements, and also for non-mixed satisfiability problems with exactly k literals in each clause for k ≥ 4. We consider the case k = 3. For the MAX E3-SET SPLITTING problem, in which all sets have size exactly 3, we prove an NP-hardness result for approximating within any factor better than 19/20. This result holds even for satisfiable instances of MAX E3-SET SPLITTING, and is based on a PCP construction due to Håstad. For non-mixed MAX 3SAT, we give a PCP construction which is a slight variant of the one given by Håstad for MAX 3SAT, and use it to prove the NP-hardness of approximating within a factor better than 11/12.

9.
One of the major limitations of relational learning is due to the complexity of verifying hypotheses on examples. In this paper we investigate this task in light of recently published results, which show that many hard problems exhibit a narrow phase transition with respect to some order parameter, coupled with a large increase in computational complexity. First we show that matching a class of artificially generated Horn clauses on ground instances presents a typical phase transition in solvability with respect to both the number of literals in the clause and the number of constants occurring in the instance to match. Then, we demonstrate that phase transitions also appear in real-world learning problems, and that learners tend to generate inductive hypotheses lying exactly on the phase transition. On the other hand, extensive experimentation revealed that not every matching problem inside the phase transition region is intractable. Unfortunately, however, identifying those that are feasible cannot be done solely on the basis of the order parameters. To face this problem, we propose a method, based on a Monte Carlo algorithm, to estimate on-line the likelihood that the current matching problem will exceed a given amount of computational resources. The impact of the above findings on relational learning is discussed.
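The Monte Carlo idea can be illustrated with a Knuth-style random-probe estimator (an assumed stand-in for the paper's specific algorithm): repeated random root-to-leaf descents estimate the size of the backtracking search tree, so a matching problem whose predicted cost exceeds the resource bound can be abandoned before the full search is attempted.

```python
import random

def estimate_tree_size(children, root, probes=1000):
    """Knuth's estimator: on a random path from the root, the running
    product of branching factors is an unbiased estimate of the number
    of nodes at each level; averaging over probes estimates tree size."""
    total = 0.0
    for _ in range(probes):
        node, weight, size = root, 1.0, 1.0
        while True:
            succ = children(node)
            if not succ:
                break
            weight *= len(succ)
            size += weight
            node = random.choice(succ)
        total += size
    return total / probes

# Toy search tree: complete binary tree of depth 10 (2**11 - 1 = 2047 nodes).
def children(depth):
    return [depth + 1, depth + 1] if depth < 10 else []

print(round(estimate_tree_size(children, 0)))   # 2047
```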

10.
11.
A combination hardware/software mechanism is presented which supports very general capabilities for the protection of and controlled access to sharable information structures. It is defined through symbolic algorithms in terms of the dedicated model hardware. The model centers on two key concepts: that of the tenant, who is a storage-holding entity, and that of the domain, which is an information-accessing entity. The domain, defined as a capsular collection of mutually accessible information structures having a single common external protective interface, is an integral part of the hardware logic. It is contended that the definition of a mechanism to enforce access authorizations must include an underlying philosophy specifying the conditions under which such access authorizations may be granted. Such a philosophy is suggested. It is based on the principle of ownership, according to which any area of storage is at all times held by a single tenant who has the exclusive right to grant/revoke access privileges to his proprietary information structures, i.e., information residing in proprietary storage. This is an extensively revised version of a paper presented under the title "A Computer System Model for Controlled Sharing of Information" at ONLINE72, September 1972, Brunel University, Uxbridge, Middlesex, England. Republished by permission of ONLINE72. Work reported in this paper is of a theoretical nature and may not be construed to imply any product commitment by the Digital Equipment Corporation, Maynard, Massachusetts.
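A toy rendering of the ownership principle (hypothetical API, far simpler than the paper's hardware model): every storage area is held by exactly one tenant, and only that tenant can grant or revoke access for other domains.

```python
class Area:
    """A storage area held by a single owning tenant."""
    def __init__(self, owner):
        self.owner = owner
        self.acl = {owner: {"read", "write", "grant"}}

    def grant(self, by, to, rights):
        if by != self.owner:       # the principle of ownership
            raise PermissionError("only the owning tenant may grant access")
        self.acl.setdefault(to, set()).update(rights)

    def revoke(self, by, to):
        if by != self.owner:
            raise PermissionError("only the owning tenant may revoke access")
        self.acl.pop(to, None)

area = Area("tenant1")
area.grant("tenant1", "domain7", {"read"})
print(area.acl["domain7"])   # {'read'}
```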

12.
In this paper we introduce the notion of an F-program, where F is a collection of formulas. We then study the complexity of computing with F-programs. F-programs can be regarded as a generalization of standard logic programs. Clauses (or rules) of F-programs are built of formulas from F. In particular, formulas other than atoms are allowed as building blocks of F-program rules. Typical examples of F are the set of all atoms (in which case the class of ordinary logic programs is obtained), the set of all literals (in this case, we get the class of logic programs with classical negation [9]), the set of all Horn clauses, the set of all clauses, the set of all clauses with at most two literals, the set of all clauses with at least three literals, etc. The notions of minimal and stable models [16, 1, 7] of a logic program have natural generalizations to the case of F-programs. The resulting notions are called in this paper minimal and stable answer sets. We study the complexity of reasoning involving these notions. In particular, we establish the complexity of determining the existence of a stable answer set, and the complexity of determining the membership of a formula in some (or all) stable answer sets. We study the complexity of the existence of minimal answer sets, and that of determining the membership of a formula in all minimal answer sets. We also list several open problems. This work was partially supported by the National Science Foundation under grants IRI-9012902 and CCR-9110721.
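For the special case of ordinary propositional programs (F = the set of atoms), the stable-answer-set test reduces to the classical Gelfond-Lifschitz construction, sketched below; the rule encoding is an assumption of this illustration.

```python
def least_model(rules):
    """Least model of a negation-free program: forward chaining."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos, _ in rules:
            if head not in model and set(pos) <= model:
                model.add(head)
                changed = True
    return model

def is_stable(candidate, rules):
    """Gelfond-Lifschitz test: drop rules whose negative body meets the
    candidate, delete remaining negative bodies, compare least models."""
    reduct = [(h, p, ()) for h, p, n in rules if not set(n) & candidate]
    return least_model(reduct) == candidate

rules = [("p", (), ("q",)),    # p :- not q.
         ("q", (), ("p",))]    # q :- not p.
for cand in [set(), {"p"}, {"q"}, {"p", "q"}]:
    print(sorted(cand), is_stable(cand, rules))
# Only {p} and {q} are stable answer sets of this program.
```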

13.
CLIN-S is an instance-based, clause-form first-order theorem prover. CLIN-S employs three inference procedures: semantic hyper-linking, which uses semantics to guide the proof search and performs well on non-Horn parts of the proofs involving small literals; rough resolution, which removes large literals in the proofs; and UR resolution, which proves the Horn parts of the proofs. A semantic structure for the input clauses is given as input. During the search for the proof, ground instances of the input clauses are generated and new semantic structures are built based on the input semantics and a model of the ground clause set. A proof is found if the ground clause set is unsatisfiable. In this article, we describe the system architecture and major inference rules used in CLIN-S.

14.
The evolution of logic programming semantics has included the introduction of a new explicit form of negation, beside the older implicit (or default) negation typical of logic programming. The richer language has been shown adequate for a spate of knowledge representation and reasoning forms. The widespread use of such extended programs requires the definition of a correct top-down querying mechanism, much as for Prolog wrt. normal programs. One purpose of this paper is to present and exploit an SLDNF-like derivation procedure, SLX, for programs with explicit negation under well-founded semantics (WFSX) and prove its soundness and completeness. (Its soundness wrt. the answer-sets semantics is also shown.) Our choice of WFSX as the base semantics is justified by the structural properties it enjoys, which are paramount for top-down query evaluation. Of course, introducing explicit negation requires dealing with contradiction. Consequently, we allow for contradiction to appear, and show moreover how it can be removed by freely changing the truth values of some subset of a set of predefined revisable literals. To achieve this, we introduce a paraconsistent version of WFSX, WFSXp, that allows contradictions and for which our SLX top-down procedure is proven correct as well. This procedure can be used to detect the existence of pairs of complementary literals in WFSXp simply by detecting the violation of integrity rules f ← L, ¬L introduced for each L in the language of the program. Furthermore, integrity constraints of a more general form are allowed, whose violation can likewise be detected by SLX. Removal of contradiction or integrity violation is accomplished by a variant of the SLX procedure that collects, in a formula, the alternative combinations of revisable literals' truth values that ensure the said removal. The formulas, after simplification, can then be satisfied by a number of truth-value changes in the revisables, among true, false, and undefined. A notion of minimal change is defined as well that establishes a closeness relation between a program and its revisions. Forthwith, the changes can be enforced by introducing or deleting program rules for the revisable literals. To illustrate the usefulness and originality of our framework, we applied it to obtain a novel logic programming approach, and results, in declarative debugging and model-based diagnosis problems.

15.
I-SATCHMO: An Improvement of SATCHMO
We introduce a method for reducing the redundant search space for SATCHMO's model generation approach by means of intelligent backtracking. During the reasoning, we mark an asserted consequent atom as useful whenever it has been used as an antecedent atom for forward chaining. We show that a splitting of the consequence of a non-Horn clause is unnecessary if one of its consequent atoms is found not to be useful at the time it is retracted from the database on backtracking, and therefore the remaining splitting over the clause's consequence can be immediately abandoned. In this way, much of the redundant search space can be eliminated. Our method is simple in principle, easy to implement in Prolog, independent of other refinements, and effective for model generation theorem proving.
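A heavily simplified propositional sketch of the pruning idea (assumed encoding; the paper's implementation is in Prolog and would track usefulness per assertion, while this sketch shares one usage set for brevity): if a split atom never fired any rule body before its branch failed, the failure did not depend on it, and the remaining alternatives of the split are abandoned.

```python
def generate_model(rules, model, used):
    """SATCHMO-style model generation over ground rules (body, heads);
    empty heads encode a negative clause. Returns a model or None."""
    for body, heads in rules:
        if set(body) <= model and not set(heads) & model:   # violated rule
            used |= set(body)            # its body atoms were useful
            if not heads:                # negative clause: dead end
                return None
            for atom in heads:           # split on the disjunctive head
                result = generate_model(rules, model | {atom}, used)
                if result is not None:
                    return result
                if atom not in used:     # atom never fired anything, so
                    return None          # prune the remaining alternatives
            return None
    return model                         # nothing violated: a model

rules = [((), ("a", "b")),   # true -> a v b
         ((), ("c",)),       # true -> c
         (("c",), ())]       # c -> false
print(generate_model(rules, frozenset(), set()))
# None: the contradiction on c is independent of the a/b split,
# so the b-branch is pruned instead of being searched.
```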

16.
Clause graphs, as they were defined in the 1970s, are graphs representing first-order formulas in conjunctive normal form: the nodes are labelled with literals and the edges (links) connect complementary unifiable literals, i.e. they provide an explicit representation of the resolution possibilities. This report describes a generalization of this concept, called abstract clause graphs. The nodes of abstract clause graphs are still labelled with literals; the links, however, connect literals that are unifiable relative to a given relation between literals. This relation is not explicitly defined; only certain abstract properties are required. For instance, the existence of a special-purpose unification algorithm is assumed, which computes substitutions whose application makes the relation hold for two literals. When instances of already existing literals are added to the graph (e.g. due to resolution or factoring), the links to the new literals are derived from the links of their ancestors. An inheritance mechanism for such links is presented which operates only on the attached substitutions and does not have to unify the literals. The definition of abstract clause graphs and the theory about link inheritance is general enough to provide a framework so that, as new ideas are proposed for graph-based theorem provers, the methodology for both implementing links and proving their properties will be readily available. This research was supported by the Sonderforschungsbereich 314, Künstliche Intelligenz, of the German Research Agency.

17.
Unification, the heart of resolution, was formulated to work in the Herbrand universe and hence does not incorporate any function evaluation. Matching is completely syntactic. In this paper, we study the replacement of unification by a constraints solver in automatic theorem proving systems using Prolog as our example. The generalization of unification into a constraint satisfaction algorithm allows the (limited) incorporation of function evaluation into unification. Constraints are also allowed as literals in the clause. We discuss the enhanced expressive power that results from incorporating an efficient constrained unifier into an automatic theorem proving system. An interpreter for the extended Prolog system (written in Prolog) incorporating a constraint solver is presented along with examples illustrating its capabilities.
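A minimal sketch of unification extended with (limited) function evaluation (assumed term encoding: uppercase strings are variables, tuples are compound terms; this is not the paper's interpreter): ground arithmetic subterms are evaluated before syntactic unification.

```python
OPS = {"plus": lambda a, b: a + b, "times": lambda a, b: a * b}

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    while is_var(t) and t in s:
        t = s[t]
    return t

def evaluate(t, s):
    """Evaluate ground arithmetic subterms; leave the rest symbolic."""
    t = walk(t, s)
    if isinstance(t, tuple) and t[0] in OPS:
        args = [evaluate(a, s) for a in t[1:]]
        if all(isinstance(a, int) for a in args):
            return OPS[t[0]](*args)        # ground: call the function
        return (t[0], *args)               # non-ground: residual term
    return t

def unify(x, y, s):
    x, y = evaluate(x, s), evaluate(y, s)
    if x == y:
        return s
    if is_var(x):
        return {**s, x: y}
    if is_var(y):
        return {**s, y: x}
    if (isinstance(x, tuple) and isinstance(y, tuple)
            and len(x) == len(y) and x[0] == y[0]):
        for a, b in zip(x[1:], y[1:]):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None

print(unify(("plus", 1, 2), "X", {}))   # {'X': 3}
print(unify(("plus", "X", 2), 5, {}))   # None here; a real constraint
# solver would record X + 2 = 5 as a constraint and derive X = 3.
```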

18.
The minimal entailment ⊨_Min has been characterized elsewhere by the criterion

(1)  P ⊨_Min φ iff Cn(P ∪ {¬φ}) ∩ Pos ⊆ Cn(P),

where Cn is the first-order consequence operation, P is a set of clauses (an indefinite deductive data base; in short: a data base), φ is a clause (a query), and Pos is the set of positive (that is, bodiless) ground clauses. In this paper, we address the problem of the computational feasibility of criterion (1). Our objective is to find a query evaluation algorithm that decides P ⊨_Min φ by what we call indefinite modeling, without actually computing all ground positive consequences of P and P ∪ {¬φ}. For this purpose, we introduce the concept of the minimal indefinite Herbrand model M_P of P, which is defined as the set of subsumption-minimal ground positive clauses provable from P. The algorithm first computes M_P by finding the least fixed point of an indefinite consequence operator T_P^I. Next, the algorithm verifies whether every ground positive clause derivable from M_P ∪ {¬φ} by one application of the parallel positive resolution rule (in short: the PPR rule) is subsumed by an element of M_P. We prove that the PPR rule, which can derive only positive clauses, is positively complete, that is, every positive clause provable from a data base P is derivable from P by means of subsumption and finitely many applications of PPR. From this we conclude that the presented algorithm is partially correct and that it eventually halts if both P and M_P are finite. Moreover, we indicate how the algorithm can be modified to handle data bases with infinite indefinite Herbrand models. This modification leads to a concept of universal model that allows for nonground clauses in its Herbrand base and appears to be a good candidate for the representation of indefinite deductive data bases.
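For ground programs, the fixpoint computation of M_P can be sketched directly (Python, with an assumed rule encoding body → head-disjunction; the PPR-based query check is not shown): each step resolves every body atom of a rule against clauses already derived, and only subsumption-minimal positive clauses are kept.

```python
from itertools import product

def minimize(clauses):
    """Keep only subsumption-minimal ground positive clauses."""
    return {c for c in clauses if not any(d < c for d in clauses)}

def tp_step(rules, S):
    """One application of the indefinite consequence operator."""
    derived = set(S)
    for body, head in rules:
        support = [[c for c in S if a in c] for a in body]
        for combo in product(*support):     # pick a supporting clause
            new = frozenset(head).union(    # per body atom, and join
                *(c - {a} for c, a in zip(combo, body)))
            derived.add(new)
    return minimize(derived)

def indefinite_model(rules):
    S = set()
    while True:
        S2 = tp_step(rules, S)
        if S2 == S:
            return S
        S = S2

rules = [((), ("a", "b")),   # a v b
         (("a",), ("c",)),   # a -> c
         (("b",), ("c",))]   # b -> c
print(indefinite_model(rules))   # M_P = { {a, b}, {c} }
```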

19.
曹锋, 徐扬, 钟建, 宁欣然. 计算机科学 (Computer Science), 2020, 47(3): 217-221
First-order logic theorem proving is a core foundation of artificial intelligence, and research on the theory and efficient implementation of first-order logic automated theorem provers is of significant academic value. Current first-order provers first reduce the size of the clause set by preprocessing and then decide the theorem by deduction. The clause-set preprocessing methods currently used in provers generally start only from the symbol relevance between clauses and the goal clause, and fail to reflect the deduction between clauses through complementary pairs of literals. To characterize the relations between clauses from the perspective of deduction during preprocessing, this paper defines the notion of goal deduction distance, gives a method for computing it, and proposes a first-order clause-set preprocessing method based on goal deduction distance. First, the original clause set is reduced by eliminating subsumption-redundant clauses and applying the pure-literal deletion rule; then the goal deduction distances of the literals and clauses in the remaining clause set are computed from the goal clause, and further preprocessing is achieved by setting a threshold on the clause deduction distance. The method was applied to the leading prover Vampire and tested on the standard first-order problems of the 2017 international competition for first-order automated theorem provers. Within the standard 300 s, Vampire4.1 extended with the proposed preprocessing proved 4 more theorems than the original Vampire4.1 and proved 10 theorems that Vampire4.1 could not, i.e., 13.5% of the theorems it failed to prove. Among the proved theorems, the proposed preprocessing reduced 77.2% of the clause sets, with a maximum reduction of 51.7% of a clause set's size. The experimental results show that the proposed first-order clause-set preprocessing method is effective: it reduces the size of first-order clause sets and improves the proving capability of first-order automated theorem provers.
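Since the paper's exact definition is not reproduced in the abstract, the following is only a hedged reconstruction of the distance computation (Python, propositional literals as signed strings): a literal complementary to one at distance d is one deduction step further away, literals sharing a clause inherit the smaller distance, and a clause's distance is the minimum over its literals; clauses beyond the threshold would be discarded.

```python
from collections import deque

def comp(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def deduction_distances(goal_clause, clauses):
    """BFS-style relaxation of assumed goal deduction distances."""
    dist = {lit: 0 for lit in goal_clause}
    queue = deque(goal_clause)
    while queue:
        lit = queue.popleft()
        # one resolution step reaches the complementary literal ...
        updates = [(comp(lit), dist[lit] + 1)]
        # ... and clause-mates travel along in the resolvent
        updates += [(m, dist[lit])
                    for c in clauses if lit in c for m in c]
        for m, d in updates:
            if d < dist.get(m, float("inf")):
                dist[m] = d
                queue.append(m)
    return dist

clauses = [["p", "q"], ["-q", "r"], ["s"]]
dist = deduction_distances(["-p"], clauses)
print({tuple(c): min(dist.get(l, float("inf")) for l in c)
       for c in clauses})
# ('p','q') and ('-q','r') get finite distances; ('s',) is
# unreachable from the goal and would be pruned by any threshold.
```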

20.
Subsumption is an important redundancy elimination method in automated deduction. A clause D is subsumed by a set 𝒞 of clauses if there is a clause C ∈ 𝒞 and a substitution σ such that the literals of Cσ are included in D. In the field of automated model building, subsumption has been modified to an even stronger redundancy elimination method, namely the so-called clausal H-subsumption. Atomic H-subsumption emerges from clausal H-subsumption by restricting D to an atom and 𝒞 to a set of atoms. Both clausal and atomic H-subsumption play an indispensable key role in automated model building. Moreover, problems equivalent to atomic H-subsumption have been studied with different terminologies in many areas of computer science. Both clausal and atomic H-subsumption are known to be intractable, i.e., Σ₂ᵖ-complete and NP-complete, respectively. In this paper, we present a new approach to deciding (clausal and atomic) H-subsumption that is based on a reduction to QSAT₂ and SAT, respectively.
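For contrast with the H-subsumption variants studied in the paper, ordinary clause subsumption is easy to state in code (assumed term encoding: atoms as tuples, uppercase strings as variables; sign handling is omitted): C subsumes D if some substitution σ maps every literal of C onto a literal of D.

```python
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def match(pat, tgt, s):
    """One-way matching: bind variables of `pat` only."""
    if is_var(pat):
        if pat in s:
            return s if s[pat] == tgt else None
        return {**s, pat: tgt}
    if (isinstance(pat, tuple) and isinstance(tgt, tuple)
            and len(pat) == len(tgt) and pat[0] == tgt[0]):
        for a, b in zip(pat[1:], tgt[1:]):
            s = match(a, b, s)
            if s is None:
                return None
        return s
    return s if pat == tgt else None

def subsumes(C, D, s=None):
    """Does some substitution map every literal of C into D?"""
    s = {} if s is None else s
    if not C:
        return True
    return any(subsumes(C[1:], D, s2)
               for lit in D
               for s2 in [match(C[0], lit, s)]
               if s2 is not None)

C = [("p", "X"), ("q", "X")]
D = [("p", "a"), ("q", "a"), ("r", "b")]
print(subsumes(C, D))   # True, via {X -> a}
```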
