Similar Documents
20 similar documents found
1.
CLIN-S is an instance-based, clause-form first-order theorem prover. CLIN-S employs three inference procedures: semantic hyper-linking, which uses semantics to guide the proof search and performs well on the non-Horn parts of proofs involving small literals; rough resolution, which removes large literals from the proofs; and UR resolution, which proves the Horn parts of the proofs. A semantic structure for the input clauses is given as input. During the search for a proof, ground instances of the input clauses are generated and new semantic structures are built from the input semantics and a model of the ground clause set. A proof is found if the ground clause set is unsatisfiable. In this article, we describe the system architecture and the major inference rules used in CLIN-S.

2.
张立明, 欧阳丹彤, 赵毅. 软件学报 (Journal of Software), 2015, 26(9): 2250-2261
Theorem proving based on the extension rule is, in a certain sense, dual to resolution: satisfiability is decided by checking whether the clause set can derive all maxterms. The IER (improved extension rule) algorithm is incomplete: when it determines that a subspace of the maxterm space is unsatisfiable, it cannot decide the satisfiability of the clause set and must fall back on the ER (extension rule) algorithm, which lowers its efficiency. By studying the maxterm space of a clause set, this paper gives a method for solving the subspaces obtained by decomposing the maxterm space, and, building on the extension rule, proposes PSER (partial semi-extension rule), a method for deciding the satisfiability of a partial maxterm space. When IER determines that a subspace is unsatisfiable, PSER can be invoked to decide the satisfiability of the complementary space and hence the satisfiability of the whole clause set; this avoids having to re-invoke ER when the satisfiability of a maxterm subspace cannot be decided, and remedies the incompleteness of IER. On this basis, the DPSER (degree partial semi-extension rule) theorem proving method is further proposed. Experimental results show that DPSER and IPSER clearly outperform the resolution-based directional resolution algorithm DR, as well as IER and NER.
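To make the extension-rule idea above concrete, here is a minimal sketch (in Python, with my own clause encoding and function names) of the basic ER satisfiability check that IER, PSER and DPSER refine: a clause C over n variables covers exactly the 2^(n-|C|) maxterms containing it, and the clause set is unsatisfiable iff the union of these covered sets is the whole maxterm space, which can be counted by inclusion-exclusion. This illustrates only the underlying criterion, not the paper's algorithms.

```python
from itertools import combinations

def count_covered_maxterms(clauses, num_vars):
    """Count maxterms covered by the union of the clauses, via inclusion-exclusion.

    A clause is a frozenset of non-zero ints: +v for variable v, -v for its negation.
    A maxterm (a clause mentioning every variable) is covered by clause C iff C is a subset of it.
    """
    total = 0
    for k in range(1, len(clauses) + 1):
        for subset in combinations(clauses, k):
            union = frozenset().union(*subset)
            if any(-lit in union for lit in union):   # complementary literals: empty intersection
                continue
            covered = 2 ** (num_vars - len(union))
            total += covered if k % 2 == 1 else -covered
    return total

def unsatisfiable_by_er(clauses, num_vars):
    """Extension-rule criterion: unsatisfiable iff all 2^num_vars maxterms are covered."""
    return count_covered_maxterms(clauses, num_vars) == 2 ** num_vars

# (x1 v x2) & (-x1) & (-x2) is unsatisfiable; dropping (-x2) makes it satisfiable.
assert unsatisfiable_by_er([frozenset({1, 2}), frozenset({-1}), frozenset({-2})], 2)
assert not unsatisfiable_by_er([frozenset({1, 2}), frozenset({-1})], 2)
```

The naive check enumerates exponentially many clause subsets; reducing that cost is what the refinements described in these abstracts are about.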

3.
New sequent forms* of the famous Herbrand theorem are proved for first-order classical logic without equality. These forms use the original notion of an admissible substitution and a certain modification of the Herbrand universe, which is constructed from constants, special variables, and functional symbols occurring only in the signature of an initial theory. Other well-known forms of the Herbrand theorem are obtained as special cases of the sequent ones. In addition, the sequent forms give an approach to the construction and theoretical investigation of computer-oriented calculi for efficient logical inference search in the signature of an initial theory. In a comparatively simple way, they provide a technique for proving the completeness and soundness of the calculi. *A part of this investigation was performed during a visit to the University of Liverpool supported by grant NAL/00841/G from the Nuffield Foundation.

4.
The importance of models within automated deduction is generally acknowledged both in constructing countermodels (rather than just giving the answer "NO" if a given formula is found not to be a theorem) and in speeding up the deduction process itself (e.g. by the semantic resolution refinement). However, so far little attention has been paid to the efficiency of algorithms that actually work with models. There are two fundamental decision problems as far as models are concerned, namely the equivalence of two models and the truth evaluation of an arbitrary clause within a given model. This paper focuses on the efficiency of algorithms for these problems in the case of Herbrand models given through atomic representations. Both problems have been shown to be coNP-hard by Gottlob and Pichler (1999), so there is a certain limit to the efficiency that we can possibly expect. Nevertheless, what we can do is find the real "source" of complexity and use this theoretical result to devise an algorithm which, in general, has a considerably smaller upper bound on the complexity than previously known algorithms, e.g. the partial saturation method in Fermüller and Leitsch (1996) and the transformation into equational problems in Caferra and Zabel (1991). The main results of this paper are algorithms for these two decision problems, where the complexity depends non-polynomially on the number of atoms (rather than on the total size) of the input model equivalence problem or clause evaluation problem, respectively. Hence, in contrast to the above-mentioned algorithms, the complexity of the expressions involved (e.g. the arity of the predicate symbols and, in particular, the term depth of the arguments) has only polynomial influence on the overall complexity of the algorithms.

5.
Unification, the heart of resolution, was formulated to work in the Herbrand universe and hence does not incorporate any function evaluation. Matching is completely syntactic. In this paper, we study the replacement of unification by a constraint solver in automatic theorem proving systems, using Prolog as our example. The generalization of unification into a constraint satisfaction algorithm allows the (limited) incorporation of function evaluation into unification. Constraints are also allowed as literals in the clause. We discuss the enhanced expressive power that results from incorporating an efficient constrained unifier into an automatic theorem proving system. An interpreter for the extended Prolog system (written in Prolog) incorporating a constraint solver is presented along with examples illustrating its capabilities.

6.
Local-search Extraction of MUSes
SAT is probably one of the most-studied constraint satisfaction problems. In this paper, a new hybrid technique based on local search is introduced in order to approximate and extract minimally unsatisfiable subformulas (in short, MUSes) of unsatisfiable SAT instances. It is based on an original counting heuristic grafted onto a local search algorithm, which explores the neighborhood of the current interpretation in an original manner, making use of a critical clause concept. Intuitively, a critical clause is a falsified clause that becomes true thanks to a local search flip only when some other clauses become false at the same time. In the paper, the critical clause concept is investigated. It is shown to be the cornerstone of the efficiency of our approach, which outperforms competing approaches for computing MUSes, inconsistent covers, and sets of MUSes most of the time.
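As an illustration of the critical-clause notion (not code from the cited paper; one plausible reading of the definition, with my own encoding): a falsified clause is critical under the current assignment if every single flip that would satisfy it also falsifies at least one currently satisfied clause.

```python
def satisfied(clause, assignment):
    """clause: iterable of non-zero ints (+v / -v); assignment: dict var -> bool."""
    return any(assignment[abs(l)] == (l > 0) for l in clause)

def critical_clauses(clauses, assignment):
    """Falsified clauses that no single flip can satisfy without breaking another clause."""
    result = []
    for c in clauses:
        if satisfied(c, assignment):
            continue
        every_flip_breaks = True
        for lit in c:
            v = abs(lit)
            flipped = dict(assignment)
            flipped[v] = not flipped[v]
            breaks_something = any(satisfied(d, assignment) and not satisfied(d, flipped)
                                   for d in clauses)
            if not breaks_something:
                every_flip_breaks = False
                break
        if every_flip_breaks:
            result.append(c)
    return result

# With x1 = x2 = False, (x1 v x2) is falsified and critical: flipping x1 breaks (-x1),
# flipping x2 breaks (-x2).
cls = [(1, 2), (-1,), (-2,)]
assert critical_clauses(cls, {1: False, 2: False}) == [(1, 2)]
```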

7.
A Fuzzy Proof Theory
Based on first-order predicate logic, this paper presents a new approach that generalizes the syntax of ordinary Horn clause rules to establish a fuzzy proof theory. First, each Horn clause rule is associated with a numerical implication strength f, yielding f-Horn clause rules. Second, Herbrand interpretations are generalized to fuzzy subsets of the Herbrand base in the sense of Zadeh. As a result, the proof theory for Horn clause rules can be developed in much the same way for f-Horn clause rules.

8.
A minimal correction set (MCS) of a Boolean formula is a set of clauses: for an unsatisfiable formula, removing an MCS makes the remaining formula satisfiable, whereas keeping any clause of the MCS leaves it unsatisfiable. By computing MCSes and adjusting the constraint set, one can solve minimal unsatisfiable cores, MaxSAT, and maximal (minimal) satisfiable-solution problems; MCSes also apply to practical problems such as fault localization and configuration optimization in model checking. This paper proposes an MCS extraction algorithm based on causes of unsatisfiability and implements the corresponding tool, CUC. A comparison with LBX, the best current MCS extraction tool, shows that CUC outperforms LBX: it solves on average 5% (65) more formulas, and on the formulas solved by both tools its average solving time is 2.5 times faster than LBX's.
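To make the MCS definition concrete, here is a brute-force sketch for tiny formulas (exponential truth-table checks; this is not the CUC algorithm, and all names are mine): an MCS is a minimal set of clauses whose removal leaves a satisfiable formula.

```python
from itertools import combinations, product

def satisfiable(clauses, variables):
    """Truth-table satisfiability check; clauses are iterables of non-zero ints."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def minimal_correction_sets(clauses):
    """Enumerate all MCSes (as index tuples) of a small CNF, by increasing size."""
    variables = sorted({abs(l) for c in clauses for l in c})
    mcses = []
    for k in range(len(clauses) + 1):
        for removed in combinations(range(len(clauses)), k):
            # A superset of an already found (smaller) correction set is not minimal.
            if any(set(m) <= set(removed) for m in mcses):
                continue
            rest = [c for i, c in enumerate(clauses) if i not in removed]
            if satisfiable(rest, variables):
                mcses.append(removed)
    return mcses

# (x1) & (-x1) & (x2): the MCSes are {(x1)} and {(-x1)}, i.e. clause indices 0 and 1.
print(minimal_correction_sets([(1,), (-1,), (2,)]))   # [(0,), (1,)]
```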

9.
A formula (in conjunctive normal form) is said to be minimal unsatisfiable if it is unsatisfiable and deleting any clause makes it satisfiable. The deficiency of a formula is the difference between the number of clauses and the number of variables. It is known that every minimal unsatisfiable formula has positive deficiency. Until recently, polynomial-time algorithms were known to recognize minimal unsatisfiable formulas with deficiency 1 and 2. We present an algorithm that recognizes minimal unsatisfiable formulas with any fixed deficiency in polynomial time.
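A brute-force sketch of these definitions for tiny formulas (exponential; it is not the paper's polynomial-time recognition algorithm, and the encoding is mine): it checks minimal unsatisfiability directly and reports the deficiency, which by the result quoted above must be positive whenever the check succeeds.

```python
from itertools import product

def satisfiable(clauses):
    """Truth-table satisfiability check; clauses are tuples of non-zero ints."""
    variables = sorted({abs(l) for c in clauses for l in c})
    for values in product([False, True], repeat=len(variables)):
        a = dict(zip(variables, values))
        if all(any(a[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def is_minimal_unsatisfiable(clauses):
    """Unsatisfiable, but deleting any single clause makes it satisfiable."""
    if satisfiable(clauses):
        return False
    return all(satisfiable(clauses[:i] + clauses[i + 1:]) for i in range(len(clauses)))

def deficiency(clauses):
    """Number of clauses minus number of variables."""
    return len(clauses) - len({abs(l) for c in clauses for l in c})

# (x1) & (-x1) is minimal unsatisfiable with deficiency 2 - 1 = 1.
f = [(1,), (-1,)]
print(is_minimal_unsatisfiable(f), deficiency(f))   # True 1
```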

10.
We describe a complete theorem proving procedure for higher-order logic that uses SAT-solving to do much of the heavy lifting. The theoretical basis for the procedure is a complete, cut-free, ground refutation calculus that incorporates a restriction on instantiations. The refined nature of the calculus makes it conceivable that one can search in the ground calculus itself, obtaining a complete procedure without resorting to meta-variables and a higher-order lifting lemma. Once one commits to searching in a ground calculus, a natural next step is to consider ground formulas as propositional literals and the rules of the calculus as propositional clauses relating the literals. With this view in mind, we describe a theorem proving procedure that primarily generates relevant formulas along with their corresponding propositional clauses. The procedure terminates when the set of propositional clauses is unsatisfiable. We prove soundness and completeness of the procedure. The procedure has been implemented in a new higher-order theorem prover, Satallax, which makes use of the SAT-solver MiniSat. We also describe the implementation and give several examples. Finally, we include experimental results of Satallax on the higher-order part of the TPTP library.

11.
Logic programming provides a model for rule-based reasoning in expert systems. The advantage of this formal model is that it makes available many results from the semantics and proof theory of first-order predicate logic. A disadvantage is that in expert systems one often wants to use, instead of the usual two truth values, an entire continuum of “uncertainties” in between. That is, instead of the usual “qualitative” deduction, a form of “quantitative” deduction is required. We present an approach to generalizing the Tarskian semantics of Horn clause rules to justify a form of quantitative deduction. Each clause receives a numerical attenuation factor. Herbrand interpretations, which are subsets of the Herbrand base, are generalized to subsets which are fuzzy in the sense of Zadeh. We show that as a result the fixpoint method in the semantics of Horn clause rules can be developed in much the same way for the quantitative case. As for proof theory, the interesting phenomenon is that a proof should be viewed as a two-person game. The value of the game turns out to be the truth value of the atomic formula to be proved, evaluated in the minimal fixpoint of the rule set. The analog of the PROLOG interpreter for quantitative deduction becomes a search of the game tree (= proof tree) using the alpha-beta heuristic well known in game theory.
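A minimal sketch of the quantitative fixpoint semantics described above, under assumptions of my own: ground rules are encoded as (head, body atoms, attenuation factor f in [0, 1]), the value contributed by a rule is f times the minimum of its body values, and an atom takes the maximum over its rules. The game-tree proof procedure is not shown.

```python
def least_fixpoint(rules, atoms, max_rounds=100):
    """Iterate the quantitative immediate-consequence operator on ground rules.

    rules: list of (head, body_atoms, attenuation) with attenuation in [0, 1].
    Returns a dict mapping each atom to a truth value in [0, 1].
    """
    value = {a: 0.0 for a in atoms}
    for _ in range(max_rounds):
        new = dict(value)
        for head, body, f in rules:
            candidate = f * min([value[b] for b in body], default=1.0)
            new[head] = max(new[head], candidate)
        if new == value:          # fixpoint reached
            break
        value = new
    return value

rules = [
    ("wet", [], 0.9),                # a fact asserted with certainty 0.9
    ("slippery", ["wet"], 0.8),      # slippery <- wet, attenuation 0.8
    ("dangerous", ["slippery"], 0.5),
]
print(least_fixpoint(rules, ["wet", "slippery", "dangerous"]))
# wet: 0.9, slippery: 0.8 * 0.9 = 0.72, dangerous: 0.5 * 0.72 = 0.36 (up to float rounding)
```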

12.
A New Theorem Proving Algorithm Based on the Extension Rule
Theorem proving based on the extension rule is a new approach complementary to resolution. Through an in-depth study of the extension rule, this paper first establishes an important property of the rule and designs and implements a decision algorithm for that property; the time and space complexity of the decision algorithm is analyzed and proved theoretically. On this basis, a new extension-rule-based theorem proving algorithm, NER, is proposed, which transforms the decision of clause-set satisfiability into a series of literal-set inclusion problems rather than counting problems. Experimental results show that NER clearly outperforms the original extension-rule algorithm IER and the resolution-based directional resolution algorithm DR, on some problems by two orders of magnitude.

13.
The Replacement Rule Theorem Prover (RRTP) is an instance-based, refutational, first-order clausal theorem prover. The prover is motivated by the idea of selectively replacing predicates by their definitions, and operates by selecting relevant instances of the input clauses. The relevant instances are grounded, if necessary, and tested for unsatisfiability by using a fast propositional calculus decision procedure.

14.
Applications in software verification often require determining the satisfiability of first-order formulae with respect to background theories. During development, conjectures are usually false. Therefore, it is desirable to have a theorem prover that terminates on satisfiable instances. Satisfiability Modulo Theories (SMT) solvers have proven to be highly scalable, efficient and suitable for integrated theory reasoning. Inference systems with resolution and superposition are strong at reasoning with equalities, universally quantified variables, and Horn clauses. We describe a theorem-proving method that tightly integrates a superposition-based inference system with an SMT solver. The combination is refutationally complete if background theory symbols only occur in ground formulae and non-ground clauses are variable-inactive. Termination is enforced by introducing additional axioms as hypotheses. The system detects any unsoundness introduced by these speculative inferences and recovers from it.

15.
Fuzzy sets and fuzzy logic are important mathematical tools for dealing with the pervasive uncertainty and vagueness of information, and are widely applied in fields such as approximate reasoning. This paper extends the fuzzy logic with a similarity relation proposed by Wang Jiabing et al., whose truth values are taken in the interval [0,1], to a rather general complemented, completely distributive lattice-valued logic, and generalizes many of their conclusions. It first extends the semantic description of fuzzy logic with a similarity relation, then discusses the validity of resolvents and mediation deductions in this kind of fuzzy reasoning, and finally, by proving that the unsatisfiability of a clause set in the extended fuzzy logic is equivalent to its unsatisfiability in two-valued logic with equality, establishes the completeness of the resolution and mediation method for this generalized fuzzy calculus.

16.
We continue our study, initiated in [9], of the following computational problem proposed by Nilsson: Several clauses (Boolean functions of several variables) are given, and for each clause the probability that the clause is true is specified. We are asked whether these probabilities are consistent. They are if there is a probability distribution on the truth assignments such that the probability of each clause is the measure of its satisfying set of assignments. Since this is a generalization of the satisfiability problem of propositional calculus, it is immediately NP-hard. In [9] we showed certain restricted cases of the problem to be NP-complete, and used the Ellipsoid Algorithm to show that a certain special case is in P. In this paper we use the Simplex method, column generation techniques, and variable-depth local search to derive an effective heuristic for the general problem. Experiments show that our heuristic performs successfully on instances with many dozens of variables and clauses. We also prove several interesting complexity results that answer open questions in [9] and motivate our approach.
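For tiny instances the consistency question can be written directly as a linear-programming feasibility problem over all truth assignments, as sketched below (the paper's point is precisely to avoid this enumeration via column generation and local search; names and encoding are mine, and scipy is assumed to be available).

```python
from itertools import product
from scipy.optimize import linprog

def psat_consistent(clauses, probabilities, num_vars):
    """Is there a distribution over assignments giving each clause its stated probability?

    clauses: list of clauses, each an iterable of non-zero ints (+v for x_v, -v for its negation).
    """
    assignments = list(product([False, True], repeat=num_vars))

    def sat(clause, assignment):
        return any(assignment[abs(l) - 1] == (l > 0) for l in clause)

    # One column per truth assignment, one equality row per clause, plus normalization.
    A_eq = [[1.0 if sat(c, a) else 0.0 for a in assignments] for c in clauses]
    A_eq.append([1.0] * len(assignments))
    b_eq = list(probabilities) + [1.0]
    res = linprog(c=[0.0] * len(assignments), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * len(assignments))
    return res.success

# P(x1) = 0.7 and P(-x1 v x2) = 0.9 force P(x1 & x2) >= 0.6, hence P(x2) >= 0.6.
print(psat_consistent([(1,), (-1, 2), (2,)], [0.7, 0.9, 0.6], 2))  # True
print(psat_consistent([(1,), (-1, 2), (2,)], [0.7, 0.9, 0.5], 2))  # False
```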

17.
In this paper we revise Muggleton’s theory of inverse entailment, which is the logical foundation of Progol, one of the most famous ILP systems. We first point out that the theory is incomplete in general. Secondly we prove that the theory is complete if the background knowledge given to the system is a ground reduced program, every training example is a ground unit clause, and the hypothesis space is the set of all definite clauses. The proof is obtained by showing that every ground reduced logic program is logically equivalent to the conjunction of all atoms in its least Herbrand model. As a corollary to this equivalence, we are finally able to improve the logical foundation of the GOLEM system.
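To illustrate the least Herbrand model used in the equivalence above, here is a sketch that computes it for a ground definite program by iterating the immediate-consequence operator until a fixpoint (encoding and names are mine; the inverse-entailment machinery itself is not shown).

```python
def least_herbrand_model(program):
    """program: list of (head, body) pairs of ground atoms; facts have an empty body list."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, body in program:
            # Fire the rule once all its body atoms are already in the model.
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

# p(a).   q(a) :- p(a).   r(b) :- s(b).   (s(b) is underivable, so r(b) is not in the model)
program = [("p(a)", []), ("q(a)", ["p(a)"]), ("r(b)", ["s(b)"])]
print(least_herbrand_model(program))   # {'p(a)', 'q(a)'} (set order may vary)
```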

18.
We introduce a new method for checking satisfiability of conjunctive normal forms (CNFs). The method is based on the fact that if no clause of a CNF contains a satisfying assignment in its 1-neighborhood, then this CNF is unsatisfiable. (The 1-neighborhood of a clause is the set of all assignments satisfying only one literal of this clause.) The idea of 1-neighborhood exploration allows one to prove unsatisfiability without generating an empty clause. The reason for avoiding the generation of an empty clause is that we believe that no deterministic algorithm can efficiently reach a global goal (deducing an empty clause) using an inherently local operation (resolution). At the same time, when using 1-neighborhood exploration, a global goal is replaced with a set of local subgoals, which makes it possible to optimize steps of the proof. We introduce two proof systems formalizing 1-neighborhood exploration. An interesting open question is whether there exists a class of CNFs for which the introduced systems have proofs that are exponentially shorter than the ones that can be obtained by general resolution.
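A brute-force sketch of the stated criterion for tiny formulas (my own encoding; the paper's proof systems are not reproduced): it reports unsatisfiability exactly when no clause has a CNF-satisfying assignment in its 1-neighborhood.

```python
from itertools import product

def unsat_by_one_neighborhood(clauses, num_vars):
    """Sufficient test for unsatisfiability via 1-neighborhood exploration."""
    def true_literals(clause, assignment):
        return sum(assignment[abs(l) - 1] == (l > 0) for l in clause)

    for assignment in product([False, True], repeat=num_vars):
        if all(true_literals(c, assignment) >= 1 for c in clauses):      # satisfies the CNF
            if any(true_literals(c, assignment) == 1 for c in clauses):  # lies in some 1-neighborhood
                return False   # criterion's hypothesis fails: this test is inconclusive
    # No clause has a satisfying assignment in its 1-neighborhood;
    # by the cited result, the CNF is unsatisfiable.
    return True

# (x1) & (-x1): unsatisfiable, and the criterion certifies it.
print(unsat_by_one_neighborhood([(1,), (-1,)], 1))   # True
# (x1): satisfiable, so the criterion cannot certify unsatisfiability.
print(unsat_by_one_neighborhood([(1,)], 1))          # False
```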

19.
This paper presents a formal specification and a proof of correctness for the widely-used Force-Directed List Scheduling (FDLS) algorithm for resource-constrained scheduling of data flow graphs in high-level synthesis systems. The proof effort is conducted using a higher-order logic theorem prover. During the proof effort many interesting properties of the FDLS algorithm are discovered. These properties are formally stated and proved in a higher-order logic theorem proving environment. They constitute a detailed set of formal assertions and invariants that should hold at various steps of the FDLS algorithm. They are then inserted as programming assertions in the implementation of the FDLS algorithm in a production-strength high-level synthesis system. When turned on, the programming assertions (1) certify whether a specific run of the FDLS algorithm produced correct schedules and (2), in the event of failure, help discover and isolate programming errors in the FDLS implementation. We present a detailed example and several experiments to demonstrate the effectiveness of these assertions in discovering and isolating errors. Based on this experience, we discuss the role of the formal theorem proving exercise in developing a useful set of assertions for embedding in the scheduler code and argue that in the absence of such a formal proof checking effort, discovering such a useful set of assertions would have been an arduous if not impossible task.

20.
We review the quantum adiabatic approximation for closed systems, and its recently introduced generalization to open systems (M.S. Sarandy and D.A. Lidar, eprint quant-ph/0404147). We also critically examine a recent argument claiming that there is an inconsistency in the adiabatic theorem for closed quantum systems (K.P. Marzlin and B.C. Sanders, Phys. Rev. Lett. 93, 160408 (2004)), and point out how an incorrect manipulation of the adiabatic theorem may lead one to obtain such an inconsistent result. PACS: 03.65.Ta, 03.65.Yz, 03.67.-a, 03.65.Vf.
