Similar Documents
20 similar documents were found (search time: 82 ms).
1.
Bridging the gap between OWL and relational databases
Despite similarities between the Web Ontology Language (OWL) and the schema languages traditionally used in relational databases, systems based on these languages exhibit quite different behavior in practice. The schema statements in relational databases are usually interpreted as integrity constraints and are used to check whether the data is structured according to the schema. OWL allows for axioms that resemble integrity constraints; however, these axioms are interpreted under the standard first-order semantics and not as checks. This often leads to confusion and is inappropriate in certain data-centric applications. To explain the source of this confusion, in this paper we compare OWL and relational databases w.r.t. their schema languages and basic computational problems. Based on this comparison, we extend OWL with integrity constraints that capture the intuition behind similar statements in relational databases. We show that, if the integrity constraints are satisfied, they need not be considered while answering a broad range of positive queries. Finally, we discuss several algorithms for checking integrity constraint satisfaction, each of which is suited to a different type of OWL knowledge base.
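To make the contrast concrete, here is a minimal Python sketch of the two readings of the same axiom (a hypothetical employee/worksFor example, not the paper's formalism or algorithms): as an integrity constraint the axiom flags missing facts, while under standard first-order OWL semantics it merely entails that an unnamed filler exists.

```python
# Hypothetical data and the axiom "every Employee has some worksFor value", read two ways.
facts = {("rdf:type", "anna", "Employee")}            # no worksFor fact for anna

def violated_as_integrity_constraint(facts):
    """Relational-style reading: the required fact must be explicitly present."""
    employees = {s for (p, s, o) in facts if p == "rdf:type" and o == "Employee"}
    with_employer = {s for (p, s, o) in facts if p == "worksFor"}
    return employees - with_employer                  # individuals violating the check

def flagged_under_standard_owl_semantics(facts):
    """First-order reading: the axiom only entails that some (possibly unnamed)
    employer exists, so no violation is ever reported."""
    return set()

print(violated_as_integrity_constraint(facts))        # {'anna'} -> data rejected
print(flagged_under_standard_owl_semantics(facts))    # set()    -> nothing to reject
```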

2.
In this paper, it is shown that the stable model semantics, perfect model semantics, and partial stable model semantics of disjunctive logic programs have the same expressive power with respect to polynomial-time model-equivalent reduction. That is, taking perfect model semantics and stable model semantics as an example, any logic program P can be transformed in polynomial time into another logic program P' such that the perfect models (resp. stable models) of P are in one-to-one correspondence with the stable models (resp. perfect models) of P', and the correspondence can also be computed in polynomial time. The minimal model semantics, however, is strictly less expressive than the other semantics mentioned, unless the polynomial hierarchy collapses to NP.

3.
We show that stable models of logic programs may be viewed as minimal models of programs that satisfy certain additional constraints. To do so, we transform normal programs into disjunctive logic programs and sets of integrity constraints. We show that the stable models of the normal program coincide with the minimal models of the disjunctive program that satisfy the integrity constraints. As a consequence, the stable model semantics can be characterized using the extended generalized closed world assumption for disjunctive logic programs. Using this result, we develop a bottom-up algorithm for function-free logic programs that finds all stable models of a normal program by computing the perfect models of a disjunctive stratified logic program and checking them for consistency with the integrity constraints. The integrity constraints provide a rationale as to why some normal logic programs have no stable models.
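For readers unfamiliar with the stable model condition the abstract starts from, the following brute-force Python sketch checks it directly via the Gelfond-Lifschitz reduct on a tiny propositional program; it is illustrative only and does not perform the paper's transformation into a disjunctive program with integrity constraints.

```python
from itertools import chain, combinations

# A normal rule is (head, positive_body, negative_body), all over propositional atoms.
program = [
    ("p", set(), {"q"}),   # p :- not q.
    ("q", set(), {"p"}),   # q :- not p.
]
atoms = {"p", "q"}

def least_model(definite_rules):
    """Least model of a negation-free program, computed by fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos, _ in definite_rules:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(candidate):
    """candidate is stable iff it equals the least model of its reduct."""
    reduct = [(h, pos, set()) for (h, pos, neg) in program if not (neg & candidate)]
    return least_model(reduct) == candidate

subsets = chain.from_iterable(combinations(atoms, r) for r in range(len(atoms) + 1))
print([set(s) for s in subsets if is_stable(set(s))])   # the stable models {'p'} and {'q'}
```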

4.
The paper presents a new approach to the problem of completeness of SLDNF-resolution. We propose a different reference theory that we call strict completion. This new concept of completion (comp*(P)) is based on a program transformation that transforms any program into a strict one (with the same computational behaviour) and on the usual notion of program completion. We consider it a reasonable reference theory for discussing program semantics and completeness results. Standard 2-valued logic is used. The new comp*(P) is always consistent, and the completeness of all allowed programs and goals w.r.t. comp*(P) is proved.
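As a point of reference, the ordinary Clark completion that comp*(P) refines can be computed mechanically; the toy propositional sketch below (hypothetical program, ordinary completion only, no strictness transformation) prints one if-and-only-if definition per atom.

```python
from collections import defaultdict

# Propositional program: (head, body), where the body is a list of literal strings.
program = [("p", ["q", "not r"]), ("p", ["s"]), ("q", [])]

heads = defaultdict(list)
for head, body in program:
    heads[head].append(" & ".join(body) if body else "true")

atoms = {"p", "q", "r", "s"}
for a in sorted(atoms):
    # Atoms that head no rule are completed to false.
    definition = " | ".join(f"({b})" for b in heads[a]) if a in heads else "false"
    print(f"{a} <-> {definition}")
# p <-> (q & not r) | (s)
# q <-> (true)
# r <-> false
# s <-> false
```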

5.
An answer set program with variables is first-order definable on finite structures if the set of its finite answer sets can be captured by a first-order sentence. Characterizing classes of programs that are first-order definable on finite structures is theoretically challenging and of practical relevance to answer set programming. In this paper, we identify a non-trivial class of answer set programs called loop-separable programs and show that they are first-order definable on finite structures.

6.
The objective of this paper is to provide a theoretical foundation for program extraction from inductive and coinductive proofs geared to practical applications. The novelties consist in the addition of inductive and coinductive definitions to a realizability interpretation for first-order proofs, a soundness proof for this system, and applications to the synthesis of non-trivial provably correct programs in the area of exact real number computation. We show that realizers, although per se untyped, can be assigned polymorphic recursive types and hence represent valid programs in a lazy functional programming language such as Haskell. Programs extracted from proofs using coinduction can be understood as perpetual processes producing infinite streams of data. Typical applications of such processes are computations in exact real arithmetic. As an example we show how to extract a program computing the average of two real numbers w.r.t. the binary signed digit representation.
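To illustrate what such a perpetual process looks like, here is a small Python sketch of the standard carry-based average on signed binary digit streams (generators stand in for Haskell's lazy lists; this is a hand-written illustration, not the program extracted in the paper).

```python
from itertools import islice, repeat

# A stream of digits d_i in {-1, 0, 1} represents the real number
# sum of d_i * 2^(-i-1), a value in [-1, 1].

def average(xs, ys):
    """Lazily produce the signed-digit stream of (x + y) / 2."""
    a, b = next(xs), next(ys)
    return _avg(a + b, xs, ys)                # carry a+b lies in {-2, ..., 2}

def _avg(carry, xs, ys):
    # Invariant: the stream produced here represents (carry + x + y) / 4.
    while True:
        a, b = next(xs), next(ys)
        e = 2 * carry + a + b                 # e in {-6, ..., 6}
        if e >= 2:
            digit = 1
        elif e <= -2:
            digit = -1
        else:
            digit = 0
        carry = e - 4 * digit                 # carry stays in {-2, ..., 2}
        yield digit

def value(stream, precision=20):
    """Approximate the real represented by a finite prefix of the stream."""
    return sum(d * 2.0 ** -(i + 1) for i, d in enumerate(islice(stream, precision)))

half = lambda: iter([1] + [0] * 100)          # 1/2 as a (long enough) digit stream
print(value(average(half(), half())))         # ~0.5
print(value(average(repeat(1), repeat(-1))))  # ~0.0, the average of 1 and -1
```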

7.
In this paper, we address the problem of managing inconsistent databases, i.e., databases violating integrity constraints. We propose a general logic framework for computing repairs and consistent answers over inconsistent databases. A repair for a possibly inconsistent database is a minimal set of insert and delete operations which makes the database consistent, whereas a consistent answer is a set of tuples derived from the database that satisfies all integrity constraints. In our framework, different types of rules are considered: rules defining general integrity constraints, repair constraints (i.e., rules defining conditions on the insertion or deletion of atoms), and prioritized constraints (i.e., rules defining priorities among updates and repairs). We propose a technique based on rewriting the constraints into (prioritized) extended disjunctive rules with two different forms of negation (negation as failure and classical negation). The disjunctive program can be used for two different purposes: to compute repairs for the database and to produce consistent answers, i.e., a maximal set of atoms which do not violate the constraints. We show that our technique is sound and complete (each preferred stable model defines a repair and each repair is derived from a preferred stable model) and more general than previously proposed techniques.
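The following brute-force Python sketch illustrates the two notions on a toy instance (delete-only repairs for a single denial constraint; the framework in the paper, with insertions, repair constraints, and priorities, is far more general).

```python
from itertools import combinations

# Toy relation mgr(employee, manager); denial constraint: no employee has two managers.
db = {("mgr", "ann", "bob"), ("mgr", "ann", "carl"), ("mgr", "dan", "eve")}

def consistent(facts):
    managers = {}
    for (_, emp, mgr) in facts:
        if emp in managers and managers[emp] != mgr:
            return False
        managers[emp] = mgr
    return True

def repairs(db):
    """Maximal consistent subsets, i.e. repairs reached by minimal sets of deletions."""
    candidates = [set(s) for r in range(len(db), -1, -1)
                  for s in combinations(db, r) if consistent(set(s))]
    return [c for c in candidates if not any(c < d for d in candidates)]

rs = repairs(db)
print(rs)                          # two repairs: each drops one of ann's manager facts
print(set.intersection(*rs))       # consistent answer: {('mgr', 'dan', 'eve')}
```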

8.
9.
10.
We present a generic scheme for the declarative debugging of programs that are written in rewriting-based languages that are equipped with narrowing. Our aim is to provide an integrated development environment in which it is possible to debug a program and then correct it automatically. Our methodology is based on the combination (in a single framework) of a semantics-based diagnoser that identifies those parts of the code that contain errors and an inductive learner that tries to repair them, once the bugs have been located in the program. We develop our methodology in several steps. First, we associate with our programs a semantics that is based on a (continuous) immediate consequence operator T_R, which models the answers computed by narrowing and is parametric w.r.t. the evaluation strategy, which can be eager or lazy. Then, we show that, given the intended specification of a program R, it is possible to check the correctness of R by a single step of T_R. In order to develop an effective debugging method, we approximate the computed answers semantics of R and derive a finitely terminating bottom-up abstract diagnosis method, which can be used statically. Finally, a bug-correction program synthesis methodology attempts to correct the erroneous components of the wrong code. We propose a hybrid, top-down (unfolding-based) as well as bottom-up (induction-based), correction approach that is driven by a set of evidence examples which are automatically produced as an outcome by the diagnoser. The resulting program is proven to be correct and complete w.r.t. the considered example sets. Our debugging framework does not require the user to provide error symptoms in advance or to answer difficult questions concerning program correctness. An implementation of our debugging system has been undertaken which demonstrates the workability of our approach.
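A propositional toy version of the single-step correctness check mentioned above can be written in a few lines (this uses the classical immediate consequence operator for propositional definite programs, not the paper's narrowing-based T_R; the program and intended interpretation are made up for illustration).

```python
def t_p(program, interpretation):
    """Immediate consequences of `interpretation` under rules (head, body_atoms)."""
    return {head for head, body in program if all(b in interpretation for b in body)}

intended = {"parent_ac", "parent_cb", "grandparent_ab"}

good = [("parent_ac", []), ("parent_cb", []),
        ("grandparent_ab", ["parent_ac", "parent_cb"])]
buggy = good + [("grandparent_ca", ["parent_ac"])]   # a wrong rule sneaks in

# A program is correct w.r.t. the intended interpretation I iff T_P(I) is a subset of I.
print(t_p(good, intended) <= intended)    # True
print(t_p(buggy, intended) <= intended)   # False -> some rule derives an unintended atom
```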

11.
This paper is devoted to the problem of consistency enforcement for logical databases. The enforcement methods we propose compute the minimal real change in a given DB state that is sufficient to accomplish the input update and preserve the integrity constraints (IC). For IC expressed in the form of a generalized logic program, this problem is proven to be hard. Meanwhile, we show that it is solvable in linear time under partial interpretations. We propose a method of computing DB-state-independent correct expansions of the input update and simultaneous optimization of IC with respect to this expansion. We show that under partial interpretations, optimal pairs (greatest correct update expansion/simplest equivalent IC) always exist and can be incrementally computed in quadratic time. Since this partial solution is correct with respect to total interpretations, we use it as an approximation in the total case. Moreover, for the class of IC without negation in clause bodies, we prove that this approximation constitutes the optimal pair (greatest correct update expansion/simplest equivalent IC).

12.
We show that termination is a first-order notion if approached via Nonstandard Logics of Programs (NLP). We give explicit first-order characterizations of the program-verifying power of the well-known Manna-Cooper method for proving total correctness assertions (tca's). A similar characterization is given for the Intermittent Assertions Method (w.r.t. tca's). A comparison of the tca-proving powers of distinguished methods (or logics of programs) is also attempted. We also show that NLP provides new methods which are strictly stronger than the Manna-Cooper method. Finally, we turn to partial correctness issues related to the main body of the paper.

13.
Set constraints (SC) are logical formulae whose atoms are inclusions between set expressions. These set expressions are built over a signature, variables, and various set operators. Semantically, set constraints are interpreted over sets of trees built from the signature, and the inclusion symbol is interpreted as the subset relation over those sets. By restricting the syntax of the formulae and/or the set of operators that may occur in set expressions, different classes of set constraints are obtained. Several classes have been proposed and studied for problems such as satisfiability and entailment. Among those classes, we focus in this article on the class of definite SCs introduced by Heintze and Jaffar, and on the class of co-definite SCs studied by Charatonik and Podelski. In spite of their names, these two classes are not dual to each other, neither through inclusion inversion nor through complementation. In this article, we propose an extension of each of the two classes by means of an intensional set construction, the so-called membership expression. A membership expression is an expression {x | φ(x)}, where φ(x) is a positive first-order formula built from membership atoms t ∈ S in which S is a set expression. We name the two resulting classes generalized definite and generalized co-definite set constraints, respectively. A key point about these extended classes is that they turn out to be dual to each other through complementation. We first prove that the class of generalized definite set constraints is a proper extension of the definite class, as it is more expressive in terms of sets of solutions. We also show that the extensions preserve the main properties of the definite and co-definite classes; for instance, like definite set constraints, generalized definite SCs have a least solution, whereas generalized co-definite SCs have a greatest solution, just as co-definite ones do. Furthermore, we devise an algorithm based on tree automata that solves the satisfiability problem for generalized definite set constraints; by dualization, it solves the satisfiability problem for generalized co-definite set constraints as well. This algorithm shows that, for the generalized classes, the satisfiability problem remains DEXPTIME-complete, and it also yields a proof that the least solution of a generalized definite constraint is regular and hence, by dualization, that the greatest solution of a generalized co-definite SC is regular.

14.
15.
Correctness testing of distributed programs based on event constraints
顾庆  陈道蓄  于勐  谢立  孙钟秀 《软件学报》2000,11(8):1035-1040
Because of concurrency and nondeterminism, the internal states of a distributed program during execution must be taken into account when its correctness is tested against a specification. These internal states are exposed at ports as event sequences, and the program specification must stipulate the dependency relations among the events in a sequence, i.e., define a set of event constraints. This paper proposes E-CSPE (extended constraints on succeeding and preceding events) to describe such event constraints formally; it consists of three basic description rules, corresponding to three different types of event constraints. By checking the consistency of the event sequences produced during program execution against these event constraint sets, as well as the coverage of the constraint sets …
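The abstract does not spell out the three E-CSPE rule types, so the Python sketch below only illustrates the general idea: checking an observed event sequence against one hypothetical precedence constraint.

```python
def satisfies_precedes(trace, earlier, later):
    """Every occurrence of `later` must be preceded by some occurrence of `earlier`."""
    seen_earlier = False
    for event in trace:
        if event == earlier:
            seen_earlier = True
        elif event == later and not seen_earlier:
            return False
    return True

traces = [["open", "write", "close"], ["write", "open", "close"]]
constraint = ("open", "write")                                # writes require a prior open
print([satisfies_precedes(t, *constraint) for t in traces])   # [True, False]
```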

16.
17.
Implementing temporal integrity constraints using an active DBMS
The paper proposes a general architecture for implementing temporal integrity constraints by compiling them into a set of active DBMS rules. The modularity of the design allows easy adaptation to different environments: both differences in the specification languages and in the target rule systems can be easily accommodated. The advantages of this architecture are demonstrated on a particular temporal constraint compiler. This compiler allows automatic translation of integrity constraints formulated in Past Temporal Logic into rules of an active DBMS (in the current version of the compiler two active DBMSs are supported: Starburst and INGRES). During the compilation, the set of constraints is checked for the safe evaluation property. The result is a set of SQL statements that includes all the rules needed for enforcing the original constraints. The rules are optimized to reduce the space overhead introduced by the integrity checking mechanism. There is no need for an additional runtime constraint monitor: when the rules are activated, all updates to the database that violate any of the constraints are automatically rejected (i.e., the corresponding transaction is aborted). In addition to its straightforward implementation, this approach offers a clean separation between application programs and the integrity checking code.
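As a rough illustration of the run-time behaviour the compiled rules provide, the Python stand-in below incrementally maintains the history needed for one hypothetical past-temporal constraint ("salaries never decrease") and rejects violating updates; the actual compiler emits active rules in SQL for Starburst or INGRES, not Python code.

```python
class SalaryTable:
    def __init__(self):
        self._current = {}              # auxiliary state the "rules" keep up to date

    def update(self, employee, new_salary):
        old = self._current.get(employee)
        if old is not None and new_salary < old:
            # The compiled rule would abort the transaction here.
            raise ValueError(f"update rejected: salary of {employee} would decrease")
        self._current[employee] = new_salary

t = SalaryTable()
t.update("ann", 3000)
t.update("ann", 3200)                   # allowed
try:
    t.update("ann", 2500)               # violates the past-temporal constraint
except ValueError as e:
    print(e)
```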

18.
This paper addresses the problem of representing the set of repairs of a possibly inconsistent database by means of a disjunctive database. Specifically, the class of denial constraints is considered. We show that, given a database and a set of denial constraints, there exists a (unique) disjunctive database, called canonical, which represents the repairs of the database w.r.t. the constraints and is contained in any other disjunctive database with the same set of minimal models. We propose an algorithm for computing the canonical disjunctive database. Finally, we study the size of the canonical disjunctive database in the presence of functional dependencies for both subset-based repairs and cardinality-based repairs.

19.
In this paper, we extend the language we proposed in [11] to make it possible to program arc-consistency algorithms. We also propose a hybrid algorithm that integrates the interval-consistency (IC) and arc-consistency (AC) algorithms. For a constraint, the algorithm checks IC while the constraint is non-binary and switches to AC once the constraint becomes binary. The algorithm is well balanced: its reduction cost is close to that of the IC checking algorithm, while its reduction power is close to that of the AC checking algorithm for most problems. The experimental results show that the hybrid algorithm may be slightly slower than the IC checking algorithm for programs that do not require strong reduction power, but may be an order of magnitude faster than IC for other programs.
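For reference, the core of any arc-consistency check is the classical revise step, sketched below in Python on a made-up binary constraint (generic AC pruning, not the paper's hybrid IC/AC algorithm or its host language).

```python
def revise(domains, x, y, constraint):
    """Remove values of x that have no supporting value in the domain of y."""
    pruned = {vx for vx in domains[x]
              if not any(constraint(vx, vy) for vy in domains[y])}
    domains[x] -= pruned
    return bool(pruned)                                 # True if x's domain shrank

domains = {"x": {1, 2, 3, 4}, "y": {1, 2}}
revise(domains, "x", "y", lambda vx, vy: vx < vy)       # constraint x < y
print(domains["x"])                                     # {1}: only 1 has support in {1, 2}
```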

20.
This paper presents a formal approach to specifying and verifying object-oriented programs written in the "programming to interfaces" paradigm. In this approach, besides the methods to be invoked by its clients, an interface also declares a set of abstract and polymorphic function/predicate symbols, together with a set of constraints on these symbols. The methods declared in the interface are specified using these abstract symbols. A class implementing the interface can give its own definitions for the abstract symbols, as long as all the constraints are satisfied, and it implements all the methods declared in the interface such that the method specifications declared in the interface are satisfied w.r.t. the function symbol definitions in this class. Based on the constraints on the abstract symbols, client code using the interfaces can be specified and verified precisely without knowing which classes implement the interfaces. Given more information about the implementing classes, the specifications of the client code can be specialized into more precise ones without re-verifying the client code.
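An informal Python analogue of this discipline (hypothetical names and interface; Python's abc module stands in for the interface language, and an assert stands in for verification against the declared constraint):

```python
from abc import ABC, abstractmethod

class Bag(ABC):
    @abstractmethod
    def add(self, item):
        """Constraint clients may rely on: after add(x), x is in self.contents()."""

    @abstractmethod
    def contents(self) -> set:
        """Abstract symbol: the mathematical set of items currently stored."""

class ListBag(Bag):
    def __init__(self):
        self._items = []

    def add(self, item):
        self._items.append(item)

    def contents(self):
        return set(self._items)

def client(bag: Bag, item):
    # Reasoned about using only the interface constraint, not ListBag's code.
    bag.add(item)
    assert item in bag.contents()
    return bag

print(client(ListBag(), "x").contents())    # {'x'}
```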
