Similar Documents
20 similar documents found (search time: 31 ms)
1.
Quantification in first-order logic always involves all elements of the universe. However, it is often more natural to quantify only over those elements of the universe which satisfy a certain condition. Constrained logics therefore provide restricted quantifiers ∀X:R F and ∃X:R F, where X is a set of variables; the former can be read as "F holds for all elements satisfying the restriction R", the latter as "F holds for some element which satisfies R". In order to test satisfiability of a set of such formulas by means of an appropriately extended resolution principle, one needs a procedure which transforms them into a set of clauses with constraints. Such a procedure differs essentially from the classical transformation of first-order formulas into a set of clauses, in particular since quantification over the empty set may occur and since the required Skolemization procedure has to take the restrictions of the restricted quantifiers into account. In the first part of this article we present a procedure that transforms formulas with restricted quantifiers into a set of clauses with constraints while preserving satisfiability. The clauses obtained in this way are of the form C ∥ R, where C is an ordinary clause and R is a restriction, and can be read as "C holds if R holds". These clauses can then be tested for unsatisfiability via the existing constrained resolution principle. In the second part we generalize the constrained resolution principle in such a way that it allows for further optimization, and we introduce a combination of unification and constraint solving that can be employed to instantiate this kind of optimization.
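A minimal worked illustration of this clause form (predicates chosen here for illustration, not taken from the article): the restricted formula ∀x:Even(x) P(x), read as "P holds for every element satisfying Even", yields the constrained clause P(x) ∥ Even(x), i.e., "P(x) holds whenever Even(x) holds"; resolving it against a clause ¬P(y) ∥ Prime(y) produces the empty clause under the combined constraint Even(x) ∧ Prime(x), which is then handed to the constraint solver.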

2.
A CNF formula is called matched if its associated bipartite graph (whose vertices are the clauses and the variables) has a matching that covers all clauses. Matched CNF formulas are satisfiable and can be recognized efficiently by matching algorithms. We generalize this concept and cover clauses by collections of bicliques (complete bipartite graphs). It turns out that this generalization indeed gives rise to larger classes of satisfiable CNF formulas, which we term biclique satisfiable. We show, however, that the recognition of biclique satisfiable CNF formulas is NP-complete, and remains NP-hard if the size of the bicliques is bounded. A satisfiable CNF formula is called var-satisfiable if it remains satisfiable under arbitrary replacement of literals by their complements. Var-satisfiable CNF formulas can be viewed as the best possible generalization of matched CNF formulas, as every matched CNF formula and every biclique satisfiable CNF formula is var-satisfiable. We show that recognition of var-satisfiable CNF formulas is Π₂ᵖ-complete, answering a question posed by Kleine Büning and Zhao.
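A minimal sketch of the efficient recognition step mentioned above (illustrative, not taken from the paper): build the clause-variable incidence graph of the formula and look for a matching covering every clause with a standard augmenting-path routine.

def is_matched(cnf):
    """cnf: list of clauses, each clause an iterable of non-zero ints
    (DIMACS-style literals). Returns True iff the clause-variable
    incidence graph has a matching that covers every clause."""
    clauses = [sorted({abs(lit) for lit in c}) for c in cnf]
    match_of_var = {}  # variable -> index of the clause it is currently matched to

    def augment(ci, visited):
        # Try to match clause ci, re-matching variables along an augmenting path.
        for v in clauses[ci]:
            if v in visited:
                continue
            visited.add(v)
            if v not in match_of_var or augment(match_of_var[v], visited):
                match_of_var[v] = ci
                return True
        return False

    return all(augment(ci, set()) for ci in range(len(clauses)))

# Example: {x1 v x2, -x1 v x3, x2 v -x3} is matched and hence satisfiable.
print(is_matched([[1, 2], [-1, 3], [2, -3]]))  # True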

3.
We investigate the complexity of deciding whether for minimal unsatisfiable formulas F and H there exists a variable renaming, a literal renaming, or a homomorphism φ such that φ(F) = H. A variable renaming is a permutation of the variables. A literal renaming is a permutation of the variables which additionally replaces some of the variables by their complements. A homomorphism can be considered as a literal renaming which can map different literals to one literal.
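For a small illustration (formulas chosen here, not taken from the paper): the literal renaming φ with φ(x) = ¬y and φ(y) = x maps F = {x ∨ y, ¬x ∨ ¬y} to φ(F) = {¬y ∨ x, y ∨ ¬x}; a variable renaming may only permute x and y, while a homomorphism could additionally identify literals, e.g. map both x and y to the same literal z.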

4.
A CNF formula is called matched if its associated bipartite graph (whose vertices are the clauses and the variables) has a matching that covers all clauses. Matched CNF formulas are satisfiable and can be recognized efficiently by matching algorithms. We generalize this concept and cover clauses by collections of bicliques (complete bipartite graphs). It turns out that this generalization indeed gives rise to larger classes of satisfiable CNF formulas, which we term biclique satisfiable. We show, however, that the recognition of biclique satisfiable CNF formulas is NP-complete, and remains NP-hard if the size of the bicliques is bounded. A satisfiable CNF formula is called var-satisfiable if it remains satisfiable under arbitrary replacement of literals by their complements. Var-satisfiable CNF formulas can be viewed as the best possible generalization of matched CNF formulas, as every matched CNF formula and every biclique satisfiable CNF formula is var-satisfiable. We show that recognition of var-satisfiable CNF formulas is Π₂ᵖ-complete, answering a question posed by Kleine Büning and Zhao.

5.
We investigate the complexity of deciding whether for minimal unsatisfiable formulas F and H there exists a variable renaming, a literal renaming, or a homomorphism φ such that φ(F) = H. A variable renaming is a permutation of the variables. A literal renaming is a permutation of the variables which additionally replaces some of the variables by their complements. A homomorphism can be considered as a literal renaming which can map different literals to one literal.

6.
It is shown that a variety of problems have computational complexity equivalent to that of finding a path through a directed graph. These results, which parallel those of Karp at a lower complexity level, concern satisfiability of propositional formulas with two literals per clause, generation of elements by an associative binary operation, solution of linear equations with two variables per equation, equivalence of generalized sequential machines with final states, and deciding the LL(k) and LR(k) conditions for context-free grammars. Finally, several problems are shown equivalent to reachability in undirected graphs, including bipartiteness and satisfiability of formulas with the exclusive-or connective. The work of this author was partly supported by Kansas General Research Grant 3756-5038. A preliminary version appears in the Proceedings of the Eighth Princeton Conference on Information Sciences and Systems, Princeton University, pp. 184–188, March 1974.
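The equivalence between two-literals-per-clause satisfiability and directed-graph reachability can be made concrete with the standard implication-graph construction; the sketch below (illustrative, not taken from the paper) decides 2-SAT by checking whether some variable and its negation fall into the same strongly connected component.

from collections import defaultdict

def two_sat(num_vars, clauses):
    """clauses: list of pairs of non-zero ints (DIMACS-style 2-CNF literals).
    Returns True iff satisfiable. Each clause (a v b) contributes the
    implication edges -a -> b and -b -> a."""
    graph, rgraph = defaultdict(list), defaultdict(list)
    for a, b in clauses:
        for u, v in ((-a, b), (-b, a)):
            graph[u].append(v)
            rgraph[v].append(u)
    nodes = [lit for v in range(1, num_vars + 1) for lit in (v, -v)]

    # Kosaraju's algorithm: order literals by finish time, then assign
    # strongly connected components on the reversed graph.
    order, seen = [], set()
    def dfs1(u):
        seen.add(u)
        for w in graph[u]:
            if w not in seen:
                dfs1(w)
        order.append(u)
    for u in nodes:
        if u not in seen:
            dfs1(u)

    comp = {}
    def dfs2(u, c):
        comp[u] = c
        for w in rgraph[u]:
            if w not in comp:
                dfs2(w, c)
    for c, u in enumerate(reversed(order)):
        if u not in comp:
            dfs2(u, c)

    # Unsatisfiable iff some variable and its negation share a component.
    return all(comp[v] != comp[-v] for v in range(1, num_vars + 1))

# Clauses 2 and 3 force x1 = x2, clauses 1 and 4 force them to differ: unsatisfiable.
print(two_sat(2, [(1, 2), (-1, 2), (-2, 1), (-1, -2)]))  # False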

7.
This paper addresses complexity issues for important problems arising in disjunctive logic programming. In particular, the complexity of deciding whether a disjunctive logic program is consistent is investigated for a variety of well-known semantics, as well as the complexity of deciding whether a propositional formula is satisfied by all models according to a given semantics. We concentrate on finite propositional disjunctive programs with as well as without integrity constraints, i.e., clauses with empty heads; the problems are located in appropriate slots of the polynomial hierarchy. In particular, we show that the consistency check is Σ₂ᵖ-complete for the disjunctive stable model semantics (in the total as well as the partial version), the iterated closed world assumption, and the perfect model semantics, and we show that the inference problem for these semantics is Π₂ᵖ-complete; analogous results are derived for the answer set semantics of extended disjunctive logic programs. Besides, we generalize previously derived complexity results for the generalized closed world assumption and other more sophisticated variants of the closed world assumption. Furthermore, we use the close ties between the logic programming framework and other nonmonotonic formalisms to provide new complexity results for disjunctive default theories and disjunctive autoepistemic literal theories. Parts of the results in this paper appeared in the form of an abstract in the Proceedings of the Twelfth ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems (PODS-93), pp. 158–167. Other parts appeared in shortened form in the Proceedings of the International Logic Programming Symposium, Vancouver, October 1993 (ILPS-93), pp. 266–278, MIT Press.

8.
Checking whether a program has an answer set and, if so, computing its answer sets are two of the important problems in answer set logic programming. Solving these problems using Gelfond and Lifschitz's original definition of answer sets is not an easy task. Alternative characterizations of answer sets for nested logic programs by Erdem and Lifschitz, Lee and Lifschitz, and You et al. are based on the completion semantics and various notions of tightness. However, tightness is a local notion in the sense that for different answer sets there are, in general, different level mappings capturing their tightness. This makes it hard to use in the design of algorithms for computing answer sets. This paper proposes a characterization of answer sets based on sets of generating rules. From this characterization new algorithms are derived for computing answer sets and for performing some other reasoning tasks. As an application of the characterization, a necessary and sufficient condition for the equivalence between answer set semantics and completion semantics is proven, and a basic theorem is shown on computing answer sets for nested logic programs based on an extended notion of loop formulas. These results on tightness and loop formulas are more general than those in You and Lin's work.

9.
Recognition of minimal unsatisfiable CNF formulas (unsatisfiable CNF formulas which become satisfiable if any clause is removed) is a classical DP-complete problem. It was shown recently that minimal unsatisfiable formulas with n variables and n+k clauses can be recognized in polynomial time for each fixed k. We improve this result and present an algorithm whose running time is bounded by a function of k alone times a polynomial in the formula size; hence the problem turns out to be fixed-parameter tractable (FPT) in the sense of Downey and Fellows (Parameterized Complexity, 1999). Our algorithm gives rise to a fixed-parameter tractable parameterization of the satisfiability problem: if for a given set of clauses F the number of clauses in each of its subsets exceeds the number of variables occurring in the subset by at most k, then we can decide within the same fixed-parameter time bound whether F is satisfiable; k is called the maximum deficiency of F and can be computed efficiently by means of graph matching algorithms. Previously known parameters for fixed-parameter tractable satisfiability decision are tree-width or parameters related to tree-width. Tree-width and maximum deficiency are incomparable in the sense that we can find formulas with constant maximum deficiency and arbitrarily high tree-width, and formulas where the converse prevails.
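A small illustration of the parameter (formula chosen here, not from the paper): F = {x ∨ y, ¬x ∨ y, x ∨ ¬y, ¬x ∨ ¬y} has 4 clauses over 2 variables, and no subset F′ has |F′| − |var(F′)| larger than 2, so the maximum deficiency of F is 2; equivalently it equals the number of clauses minus the size of a maximum matching of the clause-variable incidence graph (here 4 − 2 = 2), which is what makes the matching-based computation possible.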

10.
The evolution of logic programming semantics has included the introduction of a new explicit form of negation, beside the older implicit (or default) negation typical of logic programming. The richer language has been shown adequate for a spate of knowledge representation and reasoning forms. The widespread use of such extended programs requires the definition of a correct top-down querying mechanism, much as for Prolog wrt. normal programs. One purpose of this paper is to present and exploit an SLDNF-like derivation procedure, SLX, for programs with explicit negation under the well-founded semantics (WFSX), and to prove its soundness and completeness. (Its soundness wrt. the answer-sets semantics is also shown.) Our choice of WFSX as the base semantics is justified by the structural properties it enjoys, which are paramount for top-down query evaluation. Of course, introducing explicit negation requires dealing with contradiction. Consequently, we allow contradiction to appear, and moreover show how it can be removed by freely changing the truth values of some subset of a set of predefined revisable literals. To achieve this, we introduce a paraconsistent version of WFSX, WFSXp, that allows contradictions and for which our SLX top-down procedure is proven correct as well. This procedure can be used to detect the existence of pairs of complementary literals in WFSXp simply by detecting the violation of the integrity rules f ⇐ L, ¬L introduced for each literal L in the language of the program. Furthermore, integrity constraints of a more general form are allowed, whose violation can likewise be detected by SLX. Removal of contradiction or integrity violation is accomplished by a variant of the SLX procedure that collects, in a formula, the alternative combinations of truth values of revisable literals that ensure the said removal. The formulas, after simplification, can then be satisfied by a number of truth-value changes in the revisables, among true, false, and undefined. A notion of minimal change is defined as well, which establishes a closeness relation between a program and its revisions. The changes can then be enforced by introducing or deleting program rules for the revisable literals. To illustrate the usefulness and originality of our framework, we applied it to obtain a novel logic programming approach, and results, in declarative debugging and model-based diagnosis problems.

11.
The purpose of this paper is to expand the syntax and semantics of logic programs and disjunctive databases to allow for the correct representation of incomplete information in the presence of multiple extensions. The language of logic programs with classical negation, epistemic disjunction, and negation by failure is further expanded by the new modal operators K and M (where, for a set of rules T and a formula F, K F stands for "F is known to be true" by a reasoner with the set of premises T, and M F means "F may be believed to be true" by the same reasoner). Sets of rules in the extended language are called epistemic specifications. We define the semantics of epistemic specifications (which expands the semantics of disjunctive databases) and demonstrate their applicability to the formalization of various forms of commonsense reasoning. In particular, we suggest a new formalization of the closed world assumption which seems to correspond better to the assumption's intuitive meaning.

12.
Subsumption is an important redundancy elimination method in automated deduction. A clause D is subsumed by a set C of clauses if there is a clause C′ ∈ C and a substitution σ such that the literals of C′σ are included in D. In the field of automated model building, subsumption has been modified into an even stronger redundancy elimination method, namely the so-called clausal H-subsumption. Atomic H-subsumption emerges from clausal H-subsumption by restricting D to an atom and C to a set of atoms. Both clausal and atomic H-subsumption play an indispensable key role in automated model building. Moreover, problems equivalent to atomic H-subsumption have been studied under different terminologies in many areas of computer science. Both clausal and atomic H-subsumption are known to be intractable: clausal H-subsumption is complete for the second level of the polynomial hierarchy and atomic H-subsumption is NP-complete. In this paper, we present a new approach to deciding (clausal and atomic) H-subsumption that is based on a reduction to QSAT₂ and SAT, respectively.
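A small example of the ordinary subsumption defined above (clauses chosen here for illustration): with C′ = P(x) ∨ Q(x) and the substitution σ = {x ↦ a}, the literals of C′σ = P(a) ∨ Q(a) are included in D = P(a) ∨ Q(a) ∨ R(b), so D is subsumed by any clause set containing C′ and can be deleted as redundant.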

13.
Reasoning by Cases in Clausal Default Logic
许道云  丁德成  张明义 《软件学报》2001,12(8):1140-1146
A tree-style method is introduced for studying Roos extensions under reasoning by cases in default logic. The computation of Roos extensions is discussed in depth, and the relationship between Roos extensions and Reiter extensions is analyzed. To compute Roos extensions, an algorithm that decomposes minimal literal sets from a clause set is introduced. The method is useful for computing Roos extensions in default logic and for analyzing the computational complexity of reasoning by cases.

14.
We consider the parameterized problems of whether a given set of clauses can be refuted within k resolution steps, and whether a given set of clauses contains an unsatisfiable subset of size at most k. We show that both problems are complete for the class W[1], the first level of the W-hierarchy of fixed-parameter intractable problems. Our results remain true if restricted to 3-SAT instances and/or to various restricted versions of resolution, including tree-like resolution, input resolution, and read-once resolution. Applying a metatheorem of Frick and Grohe, we show that, restricted to classes of sets of clauses of locally bounded treewidth, the considered problems are fixed-parameter tractable. For example, the problems are fixed-parameter tractable for planar CNF formulas.

15.
Disjunctive logic programs have become a powerful tool in knowledge representation and commonsense reasoning. This paper focuses on stable model semantics, currently the most widely acknowledged semantics for disjunctive logic programs. After presenting a new notion of unfounded sets for disjunctive logic programs, we provide two declarative characterizations of stable models in terms of unfounded sets. One shows that the set of stable models coincides with the family of unfounded-free models (i.e., a model is stable iff it contains no unfounded atoms). The other proves that stable models can be defined equivalently by a property of their false literals, as a model is stable iff the set of its false literals coincides with its greatest unfounded set. We then generalize the well-founded operator to disjunctive logic programs, give a fixpoint semantics for disjunctive stable models and present an algorithm for computing the stable models of function-free programs. The algorithm's soundness and completeness are proved and some complexity issues are discussed.
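A tiny illustration of the unfounded-free characterization (program chosen here, not from the paper): for the program consisting of the single rule a ∨ b ←, the model {a, b} is not stable because {a} is an unfounded set with respect to it (the rule's body is true, but its head is already satisfied by the atom b, which lies in {a, b} outside the set), whereas the minimal models {a} and {b} contain no unfounded atoms and are exactly the stable models.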

16.
One of the main issues when using inductive logic programming (ILP) in practice remains the long running time needed by ILP systems to induce a hypothesis. We explore the possibility of reducing the induction running times of systems that use asymmetric relative minimal generalisation (ARMG) by analysing the bottom clauses of the examples that serve as inputs to the generalisation operator. Using the fact that the ARMG covers all of the examples and that it is a subset of the variabilization of one of the examples, we identify literals that cannot appear in the ARMG and remove them prior to computing the generalisation. We apply this procedure to the ProGolem ILP system and test its performance on several real-world data sets. The experimental results show an average speedup of 36% compared to the base ProGolem system and 12% compared to ProGolem extended with caching, both without a decrease in the accuracy of the produced hypotheses. We also observe that the gain from using the proposed method varies greatly, depending on the structure of the data set.

17.
Propositional satisfiability (SAT) is a success story in Computer Science and Artificial Intelligence: SAT solvers are currently used to solve problems in many different application domains, including planning and formal verification. The main reason for this success is that modern SAT solvers can successfully deal with problems having millions of variables. All these solvers are based on the Davis–Logemann–Loveland procedure (dll). In its original version, dll is a decision procedure, but it can very easily be modified in order to return one or all assignments satisfying the input set of clauses, assuming at least one exists. However, in many cases it is not enough to compute assignments satisfying all the input clauses: the returned assignments also have to be "optimal" in some sense, e.g., they have to satisfy as many other constraints (expressed as preferences) as possible. In this paper we start with qualitative preferences on literals, defined as a partially ordered set (poset) of literals. Such a poset induces a poset on total assignments and leads to the definition of an optimal model for a formula ψ as a minimal element of the poset on the models of ψ. We show (i) how dll can be extended in order to return one or all optimal models of ψ (once ψ is converted into clauses, and assuming it is satisfiable), and (ii) how the same procedures can be used to compute optimal models wrt a qualitative preference on formulas and/or wrt a quantitative preference on literals or formulas. We implemented our ideas and tested the resulting system on a variety of very challenging structured benchmarks. The results indicate that our implementation has performance comparable to other state-of-the-art systems tailored for the specific problems we consider.
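A toy sketch of the general idea (illustrative, not the paper's algorithm): enumerate the models of a clause set with a naive dll-style search and keep one that satisfies the largest number of a given set of preferred literals, i.e., an optimal model under a simple quantitative preference.

def models(clauses, assignment=None):
    """Naive DPLL-style enumeration. clauses: list of lists of non-zero ints.
    Yields partial assignments (dict var -> bool) whose every extension
    satisfies all clauses."""
    assignment = assignment or {}
    open_clauses = []
    for clause in clauses:
        if any(assignment.get(abs(lit)) == (lit > 0) for lit in clause):
            continue                      # clause already satisfied
        rest = [lit for lit in clause if abs(lit) not in assignment]
        if not rest:                      # clause falsified: dead branch
            return
        open_clauses.append(rest)
    if not open_clauses:                  # every clause satisfied
        yield dict(assignment)
        return
    var = abs(open_clauses[0][0])         # branch on a variable of an open clause
    for value in (True, False):
        yield from models(open_clauses, {**assignment, var: value})

def optimal_model(clauses, preferred_literals):
    """Return a model satisfying as many of the preferred literals as possible."""
    def score(model):
        return sum(1 for lit in preferred_literals
                   if model.get(abs(lit)) == (lit > 0))
    return max(models(clauses), key=score, default=None)

# We prefer both x1 and x2 true, but the clauses allow at most one of them.
print(optimal_model([[1, 2], [-1, -2]], preferred_literals=[1, 2]))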

18.
Dr. G. Veenker 《Computing》1967,2(3):263-283
Summary  Starting from a special formulation of the first-order functional calculus, two procedures are presented that produce refutations of inconsistent formulas. Both procedures are based on the notion of interconnecting literals. The first one goes back to Herbrand's Theorem. Let F_n be the conjunction of n copies of the formula F. An algorithm is introduced to decide whether, for a fixed n, there is an assignment for the variables of F_n making the formula F_n completely interconnected. By applying this algorithm for n = 1, 2, ... a truth-functionally inconsistent formula of minimal length is constructed. The basic idea of the second procedure is to produce from F new one-literal clauses (which are consistent with F). In many cases this proof procedure, while not complete, is very effective in comparison with other programs; some examples of machine refutations are given in the appendix. The method may be generalized; in particular it suggests a possibility of man-machine interaction in theorem proving.



This contribution is an excerpt from the author's dissertation [19].

19.
The existence of sets that are not ≤_tt^P-reducible to low sets is investigated for several complexity classes such as UP, NP, the polynomial-time hierarchy, PSPACE, and EXPTIME. The p-selective sets are mainly considered as a class of low sets. Such investigations were done in many earlier works, but almost all of these have dealt with positive reductions in order to imply the strongest consequence, such as P=NP under the assumption that all sets in NP are polynomial-time reducible to low sets. Currently, there seems to be some difficulty in obtaining the same strong results under nonpositive reducibilities. The purpose of this paper is to develop a useful technique to show, for many complexity classes, that if each set in the class is polynomial-time reducible to a p-selective set via a nonpositive reduction, then the class is already contained in P. The following results are shown in this paper. (1) If each set in UP is ≤_tt^P-reducible to a p-selective set, then P=UP. (2) If each set in NP is ≤_tt^P-reducible to a p-selective set, then P=FewP and R=NP. (3) If each set in Δ₂ᵖ is ≤_tt^P-reducible to a p-selective set, then P=NP. (4) If each set in PSPACE is ≤_tt^P-reducible to a p-selective set, then P=PSPACE. (5) There is a set in EXPTIME that is not ≤_tt^P-reducible to any p-selective set. It remains open whether P=NP follows from the weaker assumption that each set in NP is ≤_tt^P-reducible to a p-selective set.
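A standard example of a p-selective set (not specific to this paper): for a fixed real number α, the left cut L = {x : the dyadic rational 0.x is at most α} is p-selective, because the selector f(x, y) that returns whichever of its two arguments denotes the smaller rational runs in polynomial time and, whenever at least one of x and y belongs to L, outputs an element of L.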

20.
We analyze and compare two solvers for Boolean optimization problems: WMaxSatz, a solver for Partial MaxSAT, and MinSatz, a solver for Partial MinSAT. MaxSAT and MinSAT are similar problems, but previous results indicate that when solving optimization problems with the two solvers, performance differs considerably in some cases. To gain insight into the differences in the performance of the two solvers, we analyze their behaviour when solving 2SAT-MaxOnes problem instances, given that 2SAT-MaxOnes is probably the simplest, but still NP-hard, optimization problem we can solve with them. The analysis is based first on the study of the bounds computed by both algorithms on some particular 2SAT-MaxOnes instances, characterized by the presence of certain structures. We find that the fraction of positive literals in the clauses is an important factor regarding the quality of the bounds computed by the algorithms. We then also study the importance of this factor for the typical-case complexity of Random-p 2SAT-MaxOnes, a variant of the problem where instances are randomly generated with a probability p of having positive literals in the clauses. For the case p=0, the performance results indicate a clear advantage of MinSatz with respect to WMaxSatz, but as we consider positive values of p, WMaxSatz starts to show better performance, although at the same time the typical complexity of Random-p 2SAT-MaxOnes decreases as p increases. We also study the typical value of the bound computed by the two algorithms on these sets of instances, showing that the behaviour is consistent with our analysis of the bounds computed on the particular instances we studied first.
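For reference, 2SAT-MaxOnes asks for a satisfying assignment of a 2-CNF formula with as many true variables as possible; the brute-force sketch below (illustrative only, exponential in the number of variables) makes the optimization criterion explicit.

from itertools import product

def max_ones_2sat(num_vars, clauses):
    """clauses: list of pairs of non-zero ints (2-CNF literals).
    Returns (number of true variables, assignment) for an optimal
    satisfying assignment, or None if the formula is unsatisfiable."""
    best = None
    for bits in product((False, True), repeat=num_vars):
        assignment = {v + 1: bits[v] for v in range(num_vars)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            ones = sum(bits)
            if best is None or ones > best[0]:
                best = (ones, assignment)
    return best

# x1 v x2 together with -x1 v -x2 allows exactly one true variable, so the optimum is 1.
print(max_ones_2sat(2, [(1, 2), (-1, -2)]))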

