1.
Answer set programming (ASP) emerged in the late 1990s as a new logic programming paradigm that has been successfully applied
in various application domains. Also motivated by the availability of efficient solvers for propositional satisfiability (SAT),
various reductions from logic programs to SAT were introduced. All these reductions, however, are limited to a subclass of
logic programs or introduce new variables or may produce exponentially bigger propositional formulas. In this paper, we present
a SAT-based procedure, called ASPSAT, that (1) deals with any (nondisjunctive) logic program, (2) works on a propositional
formula without additional variables (except for those possibly introduced by the clause form transformation), and (3) is
guaranteed to work in polynomial space. From a theoretical perspective, we prove soundness and completeness of ASPSAT. From
a practical perspective, we have (1) implemented ASPSAT in Cmodels, (2) extended the basic procedures in order to incorporate the most popular SAT reasoning strategies, and (3) conducted an
extensive comparative analysis involving other state-of-the-art answer set solvers. The experimental analysis shows that our
solver is competitive with the other solvers we considered and that the reasoning strategies that work best on ‘small but
hard’ problems are ineffective on ‘big but easy’ problems and vice versa.
2.
Checking whether a program has an answer set and, if so, computing its answer sets are just some of the important problems in answer set logic programming. Solving these problems using Gelfond and Lifschitz's original definition of answer sets is not an easy task. Alternative characterizations of answer sets for nested logic programs by Erdem and Lifschitz, Lee and Lifschitz, and You et al. are based on the completion semantics and various notions of tightness. However, tightness is a local notion in the sense that different answer sets are, in general, captured by different level mappings. This makes it hard to use in the design of algorithms for computing answer sets. This paper proposes a characterization of answer sets based on sets of generating rules. From this characterization, new algorithms are derived for computing answer sets and for performing some other reasoning tasks. As an application of the characterization, a necessary and sufficient condition for the equivalence between answer set semantics and completion semantics is proven, and a basic theorem is shown on computing answer sets for nested logic programs based on an extended notion of loop formulas. These results on tightness and loop formulas are more general than those in You and Lin's work.
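The Gelfond–Lifschitz definition referenced above can be made concrete with a small brute-force sketch (the toy program, atom names, and helper functions below are illustrative, not from the paper): a set of atoms is a stable model exactly when it equals the least model of its own reduct.

```python
from itertools import chain, combinations

# Hypothetical toy program:  a :- not b.   b :- not a.   c :- a.
rules = [("a", [], ["b"]),       # (head, positive_body, negative_body)
         ("b", [], ["a"]),
         ("c", ["a"], [])]
atoms = {"a", "b", "c"}

def reduct(rules, candidate):
    """Gelfond-Lifschitz reduct: drop rules whose negative body meets the
    candidate set, then erase the remaining negative literals."""
    return [(h, pos) for (h, pos, neg) in rules if not (set(neg) & candidate)]

def least_model(positive_rules):
    """Least model of a negation-free program by fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for h, pos in positive_rules:
            if set(pos) <= model and h not in model:
                model.add(h)
                changed = True
    return model

def stable_models(rules, atoms):
    """Exhaustive check of every candidate set (exponential; for illustration)."""
    for cand in chain.from_iterable(combinations(sorted(atoms), r)
                                    for r in range(len(atoms) + 1)):
        cand = set(cand)
        if least_model(reduct(rules, cand)) == cand:
            yield cand

print(sorted(map(sorted, stable_models(rules, atoms))))  # [['a', 'c'], ['b']]
```

The exponential enumeration is exactly what the paper's generating-rules characterization and the SAT-based approaches aim to avoid.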
3.
Robert W. Numrich 《The Journal of supercomputing》2010,52(2):119-134
This paper describes how to interpret a program’s performance in terms of its computational energy spectrum. High spikes in
the spectrum correspond to important events during execution, such as cache misses, for example, and their positions show
when they happen and how they affect other events. The area under the spectrum measures a program’s size in terms of the computational
action norm, a measure of how efficiently it moves through computational phase space. The distance from one program to another
is the area between their action curves. The measured energy spectra for a set of real programs executing on real hardware
support the conjecture that the best program generates the least action, the Principle of Computational Least Action.
4.
Allen Van Gelder Fumiaki Okushi 《Annals of Mathematics and Artificial Intelligence》1999,26(1-4):87-112
Classical STRIPS-style planning problems are formulated as theorems to be proven from a new point of view: that the problem
is not solvable. The result for a refutation-based theorem prover may be a propositional formula that is to be proven unsatisfiable.
This formula is identical to the formula that may be derived directly by various “SAT compilers”, but the theorem-proving
view provides valuable additional information not in the formula, namely, the theorem to be proven. Traditional satisfiability
methods, most of which are based on model search, are unable to exploit this additional information. However, a new algorithm
called “Modoc” is able to exploit this information and has achieved performance comparable to the fastest known satisfiability
methods, including stochastic search methods, on planning problems that have been reported by other researchers, as well as
formulas derived from other applications. Unlike most theorem provers, Modoc performs well on both satisfiable and unsatisfiable
formulas. Modoc works by a combination of back-chaining from the theorem clauses and forward-chaining on tractable subformulas.
In some cases, Modoc is able to solve a planning problem without finding a complete assignment because the back-chaining methodology
is able to ignore irrelevant clauses. Although back-chaining is well known in the literature, a high level of search redundancy
existed in previous methods; Modoc incorporates a new technique called “autarky pruning”, which reduces search redundancy
to manageable levels, permitting the benefits of back-chaining to emerge, for certain problem classes. Experimental results
are presented for planning problems and formulas derived from other applications.
This revised version was published online in June 2006 with corrections to the Cover Date.
5.
Michael Huth Orna Grumberg 《International Journal on Software Tools for Technology Transfer (STTT)》2009,11(2):85-94
Reachability analysis asks whether a system can evolve from legitimate initial states to unsafe states. It is thus a fundamental
tool in the validation of computational systems—be they software, hardware, or a combination thereof. We recall a standard
approach for reachability analysis, which captures the system in a transition system, forms another transition system as an
over-approximation, and performs an incremental fixed-point computation on that over-approximation to determine whether unsafe
states can be reached. We show this method to be sound for proving the absence of errors, and discuss its limitations for
proving the presence of errors, as well as some means of addressing them. We then sketch how program annotations
for data integrity constraints and interface specifications—as in Bertrand Meyer’s paradigm of Design by Contract—can facilitate
the validation of modular programs, e.g., by obtaining more precise verification conditions for software verification supported
by automated theorem proving. Then we recap how the decision problem of satisfiability for formulae of logics with theories—e.g.,
bit-vector arithmetic—can be used to construct an over-approximating transition system for a program. Programs with data types
composed of bit-vectors of finite width require bespoke decision procedures for satisfiability. Finite-width data types challenge
the reduction of that decision problem to one that off-the-shelf tools can solve effectively, e.g., SAT solvers for propositional
logic. In that context, we recall the Tseitin encoding which converts formulae from that logic into conjunctive normal form—the
standard format for most SAT solvers—with only a linear blow-up in the size of the formula, but a linear increase in the number
of variables. Finally, we discuss the contributions that the three papers in this special section make in the areas that we
sketched above.
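The Tseitin encoding recalled above can be sketched as follows: one fresh variable per connective and three clauses per binary gate, so the CNF grows linearly in the formula size (the AST shape and function names are illustrative):

```python
import itertools

# Formula AST: ('var', name) | ('not', f) | ('and', f, g) | ('or', f, g)
fresh = itertools.count(1)

def tseitin(f, clauses, names):
    """Return a literal (signed int) equisatisfiable with f; CNF clauses
    accumulate in `clauses` as DIMACS-style lists of ints."""
    kind = f[0]
    if kind == 'var':
        if f[1] not in names:
            names[f[1]] = next(fresh)
        return names[f[1]]
    if kind == 'not':
        return -tseitin(f[1], clauses, names)
    a = tseitin(f[1], clauses, names)
    b = tseitin(f[2], clauses, names)
    x = next(fresh)              # one fresh variable per connective
    if kind == 'and':            # clauses for x <-> (a AND b)
        clauses += [[-x, a], [-x, b], [x, -a, -b]]
    else:                        # clauses for x <-> (a OR b)
        clauses += [[-x, a, b], [x, -a], [x, -b]]
    return x

# Encode the toy formula (p OR q) AND (NOT p).
clauses, names = [], {}
root = tseitin(('and', ('or', ('var', 'p'), ('var', 'q')),
                ('not', ('var', 'p'))), clauses, names)
clauses.append([root])           # assert the whole formula
print(len(clauses))              # 7 clauses: 3 per binary gate + 1 unit
```

Negation costs no clauses here because it is folded into the literal's sign; the resulting CNF is equisatisfiable with the original formula, not equivalent to it.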
6.
Ilkka Niemelä 《Annals of Mathematics and Artificial Intelligence》2008,53(1-4):313-329
The paper studies the relationship between logic programs with the stable model semantics and difference logic, recently considered in the Satisfiability Modulo Theories framework. Characterizations of stable models in terms of level rankings are developed, building on the simple linear integer constraints allowed in difference logic. Based on these characterizations, translations are devised which map normal programs to difference logic formulas, capturing the stable models of a program as the satisfying valuations of the resulting formula. The translations make it possible to use a solver for difference logic to compute stable models of logic programs.
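As a rough illustration of the level-ranking idea (a toy brute-force search over levels, not the paper's actual translation into difference logic; the program and names are hypothetical): an atom is justified only by a rule whose positive body atoms sit at strictly smaller levels, which rules out circular support.

```python
from itertools import product

# Toy program with circular support:  a :- b.   b :- a.
# Rules are (head, positive_body, negative_body).
rules = [("a", ["b"], []), ("b", ["a"], [])]

def has_level_ranking(model, rules):
    """Search for integer levels such that every atom in the model is derived
    by some rule whose body the model satisfies and whose positive body atoms
    all have strictly smaller levels (difference constraints lvl(b) < lvl(h))."""
    atoms = sorted(model)
    for levels in product(range(len(atoms)), repeat=len(atoms)):
        lvl = dict(zip(atoms, levels))
        ok = all(
            any(set(pos) <= model and not (set(neg) & model)
                and all(lvl[b] < lvl[h] for b in pos)
                for (head, pos, neg) in rules if head == h)
            for h in model)
        if ok:
            return True
    return False

print(has_level_ranking({"a", "b"}, rules))  # False: a and b support only each other
print(has_level_ranking(set(), rules))       # True (vacuously)
```

The paper's contribution is to express exactly these `lvl(b) < lvl(h)` constraints in difference logic, so an SMT solver can do the search that the brute-force loop does here.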
7.
Yannis Dimopoulos 《Journal of Automated Reasoning》1996,17(3):259-289
In this paper we present and compare some classical problem-solving methods for computing the stable models of logic programs with negation. Using a graph theoretic representation of logic programs and their stable models, we discuss and compare linear programming, propositional satisfiability, constraint satisfaction, and graph methods.
8.
Douglas Cenzer Jeffrey B. Remmel Amy Vanderbilt 《Annals of Mathematics and Artificial Intelligence》2004,40(3-4):225-262
In general, the set of stable models of a recursive logic program can be quite complex. For example, it follows from results of Marek, Nerode, and Remmel [Ann. Pure and Appl. Logic (1992)] that there exist finite predicate logic programs and recursive propositional logic programs which have stable models but no hyperarithmetic stable models. In this paper, we shall define several conditions which ensure that a recursive logic program P has a stable model of low complexity, e.g., a recursive stable model, a polynomial time stable model, or a stable model which lies in a low level of the polynomial time hierarchy.
10.
Yi Zhou Fangzhen Lin Yan Zhang 《Annals of Mathematics and Artificial Intelligence》2009,57(2):125-160
In this paper, we propose a new nonmonotonic logic called general default logic. On the one hand, it generalizes Reiter’s
default logic by adding to it rule-like operators used in logic programming. On the other hand, it extends logic programming
by allowing arbitrary propositional formulas. We show that with this new logic, one can naturally formalize rule constraints,
generalized closed world assumptions, and conditional defaults. We show that under a notion of strong equivalence, sentences
of this new logic can be converted to a normal form. We also investigate the computational complexity of various reasoning
tasks in the logic, and relate it to some other nonmonotonic formalisms such as Lin and Shoham’s logic of GK and Moore’s autoepistemic
logic.
11.
Robert W. Numrich 《The Journal of supercomputing》2008,43(3):281-298
We define a normed metric space for computer programs and derive from it the Principle of Computational Least Action. In our
model, programs follow trajectories determined by Newton’s equation of motion in an abstract computational phase space and
generate computational action as they evolve. A program’s action norm is the L1-norm of its action function, and its distance from other programs is the distance derived from the action norm. The Principle of Computational Least Action states the goal of performance optimization as finding the program with the smallest action norm. We illustrate this principle by analyzing a simple program.
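The action norm and distance defined above can be sketched numerically (a hedged illustration: the sampled "action curves" below are made-up data, not measurements from the paper):

```python
def action_norm(curve, dt=1.0):
    """L1 norm of a sampled action function: area under |curve|."""
    return dt * sum(abs(a) for a in curve)

def l1_distance(curve_a, curve_b, dt=1.0):
    """Area between two equally sampled action curves."""
    assert len(curve_a) == len(curve_b)
    return dt * sum(abs(a - b) for a, b in zip(curve_a, curve_b))

prog1 = [0.0, 2.0, 3.0, 1.0]     # hypothetical per-step action samples
prog2 = [0.0, 1.0, 1.0, 1.0]
print(action_norm(prog1))        # 6.0
print(l1_distance(prog1, prog2)) # 3.0
```

Under the Principle of Computational Least Action, `prog2` (the smaller norm, 3.0) would be the preferred implementation of the two.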
12.
We focus on automated addition of masking fault-tolerance to existing fault-intolerant distributed programs. Intuitively,
a program is masking fault-tolerant, if it satisfies its safety and liveness specifications in the absence and presence of faults. Masking fault-tolerance
is highly desirable in distributed programs, as the structure of such programs is fairly complex and they are often subject
to various types of faults. However, the problem of synthesizing masking fault-tolerant distributed programs from their fault-intolerant
version is NP-complete in the size of the program’s state space, casting the practicality of the synthesis problem into doubt.
In this paper, we show that in spite of the high worst-case complexity, synthesizing moderate-sized masking distributed programs
is feasible in practice. In particular, we present and implement a BDD-based synthesis heuristic for adding masking fault-tolerance
to existing fault-intolerant distributed programs automatically. Our experiments validate the efficiency and effectiveness
of our algorithm in the sense that synthesis is possible in a reasonable amount of time and memory. We also identify several
bottlenecks in synthesis of distributed programs depending upon the structure of the program at hand. We conclude that unlike
verification, in program synthesis, the most challenging barrier is not the state explosion problem by itself, but the time
complexity of the decision procedures.
13.
The concept of truth degrees of formulas in Łukasiewicz n-valued propositional logic L_n is proposed. A limit theorem is obtained, which says that the truth function τ_n induced by truth degrees converges to the integrated truth function τ as n tends to infinity. Hence this limit theorem builds a bridge between the discrete-valued Łukasiewicz logic and the continuous-valued Łukasiewicz logic. Moreover, the results obtained in the present paper are a natural generalization of the corresponding results obtained in two-valued propositional logic.
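The convergence described above can be sketched numerically (a hedged illustration of the standard truth-degree idea, not the paper's construction; names are illustrative): in n-valued Łukasiewicz logic atoms take values in {0, 1/(n-1), ..., 1}, and τ_n averages a formula's value over all valuations, approaching the integral over [0,1]^m as n grows.

```python
from itertools import product
from fractions import Fraction

def lukasiewicz_impl(a, b):
    """Lukasiewicz implication: v(p -> q) = min(1, 1 - v(p) + v(q))."""
    return min(Fraction(1), 1 - a + b)

def truth_degree(n, formula, num_atoms):
    """Average value of `formula` over the n-valued valuation space."""
    vals = [Fraction(i, n - 1) for i in range(n)]
    total = sum(formula(*valuation)
                for valuation in product(vals, repeat=num_atoms))
    return Fraction(total) / (n ** num_atoms)

print(truth_degree(3, lambda p: lukasiewicz_impl(p, p), 1))  # 1: tautology
for n in (2, 5, 50):
    print(n, float(truth_degree(n, lukasiewicz_impl, 2)))    # 0.75, 0.8, 0.83
```

The values for p -> q climb toward the integrated truth degree 5/6, mirroring the limit theorem τ_n → τ.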
16.
Based on a given probability distribution on the real unit interval [0,1], the concept of truth degree of propositional logic formulas in the random fuzzy sense is introduced, and it is shown that the random truth degree is a common generalization of the various truth degrees of propositional logic found in the existing literature. The random fuzzy similarity degree between formulas is defined in terms of the random fuzzy truth degree, from which a pseudo-metric on the set of all formulas, the random fuzzy logic pseudo-metric, is derived; it is proved that the random fuzzy logic pseudo-metric space has no isolated points. Using the integral convergence theorems of probability theory, a limit theorem on random fuzzy truth degrees is proved. The connections among the existing truth degrees are studied. The continuity of the logical operations in the random logic pseudo-metric space is proved, and the fundamental theorem of probability logic is extended to many-valued propositional logic. Two different types of approximate reasoning schemes in the random logic pseudo-metric space are proposed and applied to approximate reasoning in practical problems. 相似文献
17.
Pascal Nicolas Laurent Garcia Igor Stéphan Claire Lefèvre 《Annals of Mathematics and Artificial Intelligence》2006,47(1-2):139-181
In this work, we introduce a new framework able to deal with reasoning that is at the same time nonmonotonic and uncertain. In order to take into account a certainty level associated with each piece of knowledge, we use possibility theory to extend the nonmonotonic semantics of stable models for logic programs with default negation. By means of a possibility distribution, we define a clear semantics for such programs by introducing what a possibilistic stable model is. We also propose a syntactic process based on a fixpoint operator to compute these particular models, representing the deductions of the program and their certainty. Then, we show how this introduction of a certainty level on each rule of a program can be used to restore its consistency in case the program has no model at all. Furthermore, we explain how possibilistic stable models can be computed using available software for Answer Set Programming, and we describe the main lines of the system that we have developed to achieve this goal.
18.
Amir M. Ben-Amram 《Acta Informatica》2009,46(1):57-72
To prove that a program terminates, we can employ a ranking function argument, where program states are ranked so that every
transition decreases the rank. Alternatively, we can use a set of ranking functions with the property that every cycle in
the program’s flow-chart can be ranked with one of the functions. This “local” approach has gained interest recently on the
grounds that local ranking functions would be simpler and easier to find. The current study is aimed at better understanding
the tradeoffs involved, in a precise quantitative sense. We concentrate on a convenient setting, the Size-Change Termination
framework (SCT). In SCT, programs are replaced by an abstraction whose termination is decidable. Moreover, sufficient classes
of ranking functions (both global and local) are known. Our results show a tradeoff: either exponentially many local functions
of certain simple forms, or an exponentially complex global function may be required for proving termination.
19.
Karin Quaas 《Formal Methods in System Design》2011,38(3):193-222
We aim to generalize Büchi’s fundamental theorem on the coincidence of recognizable and MSO-definable languages to a weighted
timed setting. For this, we investigate weighted timed automata and show how we can extend Wilke’s relative distance logic
with weights taken from an arbitrary semiring. We show that every formula in our logic can effectively be transformed into
a weighted timed automaton, and vice versa. The results indicate the robustness of weighted timed automata and may also be
used for specification purposes.