Similar Documents
20 similar documents found (search time: 31 ms)
1.
The view update problem is considered in the context of deductive databases, where the update of an intensional predicate is accomplished by appropriately modifying the underlying relations in the extensional database. Two classes of disjunctive databases are considered. The first class contains those disjunctive databases which allow only definite rules in the intensional database and disjunctive facts in the extensional database. The second class contains stratified disjunctive databases: in addition to the first class, negation is allowed in the bodies of rules, but the database must be stratified. Algorithms are given both for the insertion of an intensional predicate into the database and for the deletion of an intensional predicate from it. The algorithms use SLD resolution and the concept of minimal models of the extensional database. The algorithms are proved to be correct and best according to the criterion of causing minimal change to the database, where first priority is given to minimizing deletions. (Research supported by the National Science Foundation under grant numbers IRI-8916059, IRI-8921591, IRI-9200898, and IRI-9210220.)
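To make the role of minimal models concrete, here is a minimal sketch (ours, not the paper's algorithm), assuming a propositional encoding in which the extensional database is a set of disjunctive facts, each represented as a frozenset of atoms read as a disjunction; it enumerates the subset-minimal Herbrand models by brute force.

```python
from itertools import chain, combinations

def minimal_models(disjunctive_facts):
    """Enumerate the minimal Herbrand models of a set of disjunctive facts.

    Each fact is a frozenset of atoms read as a disjunction a1 v ... v an.
    A model is a set of atoms satisfying every disjunction; we keep only
    the subset-minimal ones. Brute force: fine for small atom sets only.
    """
    atoms = sorted(set(chain.from_iterable(disjunctive_facts)))
    models = []
    # Generate candidates in order of increasing size, so a new model can
    # only be non-minimal if an earlier (smaller) model is contained in it.
    for k in range(len(atoms) + 1):
        for cand in combinations(atoms, k):
            m = set(cand)
            if all(m & fact for fact in disjunctive_facts) and \
               not any(prev <= m for prev in models):
                models.append(m)
    return models

# Example: the EDB { a v b, b v c } has minimal models {b} and {a, c}.
edb = [frozenset({"a", "b"}), frozenset({"b", "c"})]
print(minimal_models(edb))  # -> [{'b'}, {'a', 'c'}]
```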

2.
3.
A top-down query-processing method for first-order deductive databases under the disjunctive well-founded semantics (DWFS) is presented. The method is based on a characterization of the DWFS in terms of the Gelfond–Lifschitz transformation and employs a hyperresolution-like operator and quasi-cyclic trees to handle minimal model processing. The method is correct and complete and can be guaranteed to terminate given certain mild constraints on the format of database rules. The efficiency of the method is enhanced by the fact that large parts of the search tree are naturally grounded, even for first-order queries and databases. In the case of a grounded yes/no answer, the search tree becomes nongrounded only if processing enters the definite part of the database. For finite propositional databases the method runs in polynomial space. Efficiency may be enhanced by the application of partial compilation.

4.
By the sometimes so-called Main Theorem of Recursive Analysis, every computable real function is necessarily continuous. We wonder whether, and which kinds of, hypercomputation allow for the effective evaluation of also discontinuous real functions f. More precisely, the present work considers the following three super-Turing notions of real function computability: (i) relativized computation, specifically given oracle access to the Halting Problem ∅′ or its jump ∅″; (ii) encoding the input x and/or the output y = f(x) in weaker ways also related to the Arithmetic Hierarchy; (iii) nondeterministic computation. It turns out that any f computable in the first or second sense is still necessarily continuous, whereas the third type of hypercomputation provides the required power to evaluate, for instance, the discontinuous Heaviside function.
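The continuity barrier can be illustrated directly. Below is a minimal sketch under one standard Type-2 convention we assume here: a real x is presented as an oracle approx(n) returning a rational within 2^-n of x. Evaluating the Heaviside function from such an oracle succeeds for every x ≠ 0 but can never commit at x = 0, which is exactly the discontinuity the Main Theorem rules out.

```python
from fractions import Fraction

def heaviside(approx, max_precision=64):
    """Try to evaluate H(x) (0 for x < 0, 1 for x >= 0) from a Cauchy-style
    oracle: approx(n) is a rational within 2**-n of the unknown real x.

    If |x| > 2**-n for some probed n, the sign of x is certain and we can
    answer; at x == 0 no finite precision ever settles it, so an ordinary
    (non-hyper) machine would loop forever. We cut off instead.
    """
    for n in range(max_precision):
        q, eps = approx(n), Fraction(1, 2**n)
        if q > eps:        # then x >= q - eps > 0
            return 1
        if q < -eps:       # then x <= q + eps < 0
            return 0
    raise RuntimeError("undecided at this precision (x may be 0)")

# x = 1/3: decided after a few probes.  x = 0: never decided.
print(heaviside(lambda n: Fraction(1, 3)))   # -> 1
try:
    heaviside(lambda n: Fraction(0))
except RuntimeError as e:
    print(e)
```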

5.
We consider the problems of enumerating all minimal strongly connected subgraphs and all minimal dicuts of a given strongly connected directed graph G = (V, E). We show that the first of these problems can be solved in incremental polynomial time, while the second problem is NP-hard: given a collection of minimal dicuts for G, it is NP-hard to tell whether it can be extended. The latter result implies, in particular, that for a given set of points in ℝ^n, it is NP-hard to generate all maximal subsets of it contained in a closed half-space through the origin. We also discuss the enumeration of all minimal subsets of the point set whose convex hull contains the origin as an interior point, and show that this problem includes as a special case the well-known hypergraph transversal problem. (This research was supported by the National Science Foundation, Grant IIS-0118635. The third and fourth authors are also grateful for the partial support of DIMACS, the National Science Foundation's Center for Discrete Mathematics and Theoretical Computer Science. Our friend and co-author, Leonid Khachiyan, tragically passed away on April 29, 2005.)
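Edge-wise minimality, the notion used above, is easy to verify even though enumeration is the hard part. A minimal sketch with our own plain-dict graph representation (all names are ours):

```python
def strongly_connected(vertices, edges):
    """Reachability of all vertices from one start, in both directions."""
    def reach(es):
        adj = {v: [] for v in vertices}
        for u, w in es:
            adj[u].append(w)
        seen, stack = set(), [next(iter(vertices))]
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                stack.extend(adj[v])
        return seen == set(vertices)
    return reach(edges) and reach([(w, u) for u, w in edges])

def is_minimal_scs(vertices, edges):
    """A spanning subgraph is a minimal strongly connected subgraph iff it
    is strongly connected and dropping any single edge breaks that."""
    return strongly_connected(vertices, edges) and all(
        not strongly_connected(vertices, [e for e in edges if e != f])
        for f in edges)

V = {1, 2, 3}
print(is_minimal_scs(V, [(1, 2), (2, 3), (3, 1)]))          # True: a cycle
print(is_minimal_scs(V, [(1, 2), (2, 3), (3, 1), (2, 1)]))  # False: (2,1) is redundant
```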

6.
Generalized queries are defined as sets of clauses in implication form. They cover several tasks of practical importance for database maintenance, such as answering positive queries, computing database completions, and integrity constraint checking. We address the issue of answering generalized queries under the minimal model semantics for the class of disjunctive deductive databases (DDDBs). The approach advanced here is based on having the query induce an order on the models returned by a sound and complete minimal model generating procedure. We consider answers that are true in all, and those that are true in some, minimal models of the theory. We address the issue of answering positive queries through the construction of the minimal model state of the DDDB, using a minimal model generating procedure. The refinements allowed by the procedure include isolating a minimal component of a disjunctive answer, specifying possible updates to the theory to enable the derivability of certain queries, and deciding the monotonicity properties of answers to different classes of queries.
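The distinction between answers true in all minimal models and answers true in some can be made concrete. A minimal brute-force sketch under a propositional encoding of our own (not the paper's model generating procedure):

```python
from itertools import chain, combinations

def minimal_models(facts):
    """Subset-minimal models of disjunctive facts (each a frozenset of atoms)."""
    atoms = sorted(set(chain.from_iterable(facts)))
    models = []
    for k in range(len(atoms) + 1):
        for cand in combinations(atoms, k):
            m = set(cand)
            if all(m & f for f in facts) and not any(p <= m for p in models):
                models.append(m)
    return models

def answer(query_atom, facts):
    """Classify an atomic query under the minimal model semantics."""
    ms = minimal_models(facts)
    if all(query_atom in m for m in ms):
        return "true in all minimal models"   # certain / cautious answer
    if any(query_atom in m for m in ms):
        return "true in some minimal models"  # possible / brave answer
    return "false in all minimal models"

db = [frozenset({"a"}), frozenset({"b", "c"})]  # minimal models: {a,b}, {a,c}
for atom in ("a", "b", "d"):
    print(atom, "->", answer(atom, db))
```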

7.
Krivine presents an abstract machine which produces weak head normal form results. Sestoft introduces several call-by-need variants of this machine that implement result sharing by pushing update markers on the stack, in a way similar to the TIM and the STG machine. When a sequence of consecutive markers appears on the stack, all but the first cause redundant updates. Improvements related to these sequences have dealt with either the consumption of the markers or the removal of the markers once they appear. Here we present an improvement that eliminates the production of marker sequences of length greater than one, resulting in a more space- and time-efficient variant of the machine. We then apply the classic optimization of short-circuiting operand variable dereferences to create a further call-by-need machine. Finally, we combine the two improvements in a single machine. On our benchmarks this machine uses half the stack space, performs one quarter as many updates, and executes between 27% faster and 17% slower than our ℒ variant of Sestoft's lazy Krivine machine. More interesting is that on one benchmark ℒ and the two intermediate machines consume unbounded space, but the combined machine consumes constant space. Our comparisons to Sestoft's Mark 2 machine are not exact, however, since we restrict ourselves to unpreprocessed closed lambda terms. Our variant of his machine does no environment trimming or conversion to De Bruijn-style variable access, and does not provide basic constants, data type constructors, or the recursive let. (The Y combinator is used instead.)
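For orientation, here is a minimal call-by-name Krivine machine of the textbook kind that this family of machines refines; the Python representation and names are ours, and it implements neither sharing nor update markers, only reduction to weak head normal form.

```python
# Terms: ("var", i) with a de Bruijn index, ("lam", body), ("app", f, a).
# State: (term, env, stack). The env is a tuple of closures (term, env),
# and the stack holds argument closures. Weak head normal form is reached
# when a lambda faces an empty stack.

def krivine(term, env=(), stack=()):
    while True:
        tag = term[0]
        if tag == "app":                 # push the argument as a closure
            stack = ((term[2], env),) + stack
            term = term[1]
        elif tag == "lam":
            if not stack:                # whnf: a lambda with no arguments left
                return term, env
            env = (stack[0],) + env      # bind the top of the stack, enter body
            term, stack = term[1], stack[1:]
        else:                            # variable: enter its closure
            term, env = env[term[1]]

# (\x. x) (\y. y)  reduces to  \y. y
I = ("lam", ("var", 0))
print(krivine(("app", I, I)))   # -> (('lam', ('var', 0)), ())
```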

8.
Hash tables on external memory are commonly used for indexing in database management systems. In this paper we present an algorithm that, in an asymptotic sense, achieves the best possible I/O and space complexities. Let B denote the number of records that fit in a block, and let N denote the total number of records. Our hash table uses an asymptotically optimal expected number of I/Os for looking up a record (whether or not it is present). Inserting, deleting, or changing a record that has just been looked up likewise takes an asymptotically optimal amortized expected number of I/Os, including the I/Os for reorganizing the hash table when the size of the database changes. The expected external space usage is within a constant factor of the optimum of N/B blocks, and just O(1) blocks of internal memory are needed.
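As a toy illustration of the cost model (B records per block, one I/O per block touched), here is a minimal sketch of a blocked hash table with overflow chaining. It is our own simplification for exposition, not the paper's asymptotically optimal construction.

```python
class BlockedHashTable:
    """Toy external-memory hash table: records hash to a primary block of
    capacity B; overflowing records go to chained overflow blocks. One
    simulated I/O is counted per block touched. (Illustrative only: naive
    chaining has far weaker guarantees than the paper's structure.)"""

    def __init__(self, num_buckets=8, B=4):
        self.B = B
        self.buckets = [[[]] for _ in range(num_buckets)]  # chain of blocks
        self.io_count = 0

    def _chain(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def lookup(self, key):
        for block in self._chain(key):   # each block read = 1 I/O
            self.io_count += 1
            for k, v in block:
                if k == key:
                    return v
        return None

    def insert(self, key, value):
        chain = self._chain(key)
        self.io_count += 1               # write the tail block (reads omitted)
        if len(chain[-1]) >= self.B:     # tail block full: add an overflow block
            chain.append([])
        chain[-1].append((key, value))

t = BlockedHashTable()
for i in range(20):
    t.insert(i, i * i)
print(t.lookup(7), "simulated I/Os so far:", t.io_count)
```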

9.
In this paper, we address the problem of managing inconsistent databases, i.e., databases violating integrity constraints. We propose a general logic framework for computing repairs and consistent answers over inconsistent databases. A repair for a possibly inconsistent database is a minimal set of insert and delete operations which makes the database consistent, whereas a consistent answer is a set of tuples derived from the database satisfying all integrity constraints. In our framework, different types of rules are considered, defining general integrity constraints, repair constraints (i.e., rules defining conditions on the insertion or deletion of atoms), and prioritized constraints (i.e., rules defining priorities among updates and repairs). We propose a technique based on the rewriting of constraints into (prioritized) extended disjunctive rules with two different forms of negation (negation as failure and classical negation). The disjunctive program can be used for two different purposes: to compute repairs for the database, and to produce consistent answers, i.e., a maximal set of atoms which do not violate the constraints. We show that our technique is sound and complete (each preferred stable model defines a repair and each repair is derived from a preferred stable model), and more general than techniques previously proposed.
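The notion of a repair is easy to pin down operationally. The following minimal sketch (our formulation, not the paper's rewriting into disjunctive programs) computes all minimal repairs of a small propositional database under denial constraints, i.e., sets of facts forbidden from holding together; with denials only, repairs reduce to minimal deletion sets.

```python
from itertools import chain, combinations

def repairs(db, denials):
    """All minimal repairs (as sets of facts to delete) of a database that
    violates denial constraints. A denial is a set of facts that must not
    all be present; a repair is a subset-minimal deletion set after which
    no denial is fully contained in the database. Brute force."""
    db = set(db)
    violated = [d for d in denials if set(d) <= db]
    candidates = sorted(set(chain.from_iterable(violated)))
    found = []
    for k in range(len(candidates) + 1):        # smallest deletion sets first
        for dele in combinations(candidates, k):
            d = set(dele)
            if all(not set(v) <= db - d for v in violated) and \
               not any(prev <= d for prev in found):
                found.append(d)
    return found

# Denial constraint: John cannot be both a manager and an intern.
db = {"mgr(john)", "intern(john)", "emp(john)"}
denials = [{"mgr(john)", "intern(john)"}]
print(repairs(db, denials))  # -> [{'intern(john)'}, {'mgr(john)'}]
```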

10.
Integrity constraints were initially defined to verify the correctness of the data stored in a database. They were used to restrict the modifications that can be applied to a database. However, there are many other applications in which integrity constraints can play an important role. For example, the semantic query optimization method developed by Chakravarthy, Grant, and Minker for definite deductive databases uses integrity constraints during query processing to prevent the exploration of search space that is bound to fail. In this paper, we generalize the semantic query optimization method to apply to negated atoms. The generalized method is referred to as semantic compilation. This exploration has led to two significant results. First, semantic compilation provides an alternative search space for negative query literals. The alternative search space can find answers in cases for which negation-as-finite-failure and constructive negation cannot. Second, we show how semantic compilation can be used to transform a disjunctive database (with or without functions) and denial constraints without negation into a new disjunctive database that complies with the integrity constraints.
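The pruning idea behind semantic query optimization can be shown in miniature. A minimal sketch with invented relations and a single denial constraint; it captures only the pruning flavor, not Chakravarthy, Grant, and Minker's method or the compilation of negated atoms.

```python
# A query is a set of (attribute, op, value) conjuncts over one relation.
# A denial integrity constraint is likewise a set of conjuncts that can
# never all hold. If the query's conjuncts include some denial, every
# tuple matching the query would violate the constraint, so the answer
# is empty and the table scan can be skipped entirely.

def subsumed_by_denial(query, denials):
    return any(d <= query for d in denials)

def run_query(table, query, denials):
    if subsumed_by_denial(query, denials):
        return []                       # semantic pruning: no scan needed
    return [row for row in table        # naive eval (equality conjuncts only)
            if all(row.get(a) == v for (a, _, v) in query)]

# Constraint: nobody is both an intern and a manager.
denials = [frozenset({("role", "=", "intern"), ("level", "=", "manager")})]
table = [{"name": "ann", "role": "intern", "level": "junior"},
         {"name": "bob", "role": "staff", "level": "manager"}]
q = frozenset({("role", "=", "intern"), ("level", "=", "manager")})
print(run_query(table, q, denials))     # -> [] without touching the table
```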

11.
The typechecking problem for transformations of relational data into tree data is the following: given a relational-to-XML transformation P and an XML type d, decide whether for every database instance D the result of applying P to D satisfies d. TreeQL programs with projection-free conjunctive queries (see Alon et al. in ACM Trans. Comput. Log. 4(3):315–354, 2003) are considered as transformations, and DTDs with arbitrary regular expressions as XML types. A non-elementary upper bound for the typechecking problem was already given by Alon et al. (ACM Trans. Comput. Log. 4(3):315–354, 2003), although in a more general setting, where equality and negation in projection-free conjunctive queries and additional universal integrity constraints are allowed. In this paper we show that the typechecking problem is coNEXPTIME-complete. As an intermediate step we consider the following problem, which can be formulated independently of XML notions: given a set of triples of the form (φ, k, j), where φ is a projection-free conjunctive query and k, j are natural numbers, decide whether there exists a database D such that, for each triple (φ, k, j) in the set, there exists a natural number α such that there are exactly k + j·α tuples satisfying the query φ in D. Our main technical contribution consists of a NEXPTIME algorithm for this last problem. (Partially supported by Polish Ministry of Science and Higher Education research project N206 022 31/3660, 2006/2009. This paper is an extended version of [20], where the coNEXPTIME upper bound was shown.)

12.
Haptic devices allow a user to feel either reaction forces from virtual interactions or reaction forces reflected from a remote site during a bilateral teleoperation task. Guiding forces can also be exerted, to train the user in the performance of a virtual task or to assist him/her in safely teleoperating a robot. The generation of guiding forces relies on the existence of a motion plan that provides the direction to be followed to reach the goal from any free configuration of the configuration space (C-space). This paper proposes a method to obtain such a plan that interleaves a sampling-based exploration of C-space with an efficient computation of harmonic functions. A deterministic sampling sequence (with a bias based on harmonic function values) is used to obtain a hierarchical cell decomposition model of C-space. A harmonic function is iteratively computed over the partially known model using a novel approach; this harmonic function is the navigation function used as the motion plan. The approach has been implemented in a planner (called the Kautham planner) that, given an initial and a goal configuration, provides: (a) a channel of cells connecting the cell that contains the initial configuration with the cell that contains the goal configuration; (b) two harmonic functions over the whole C-space, one that guides motions towards the channel and another that guides motions within the channel towards the goal; and (c) a path computed over a roadmap built with the free samples of the channel. The harmonic functions and the solution path are then used to generate the guiding forces for the haptic device. The planning approach is illustrated with examples on 2D and 3D workspaces. (This work was partially supported by the CICYT projects DPI2005-00112 and DPI2007-63665.)
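The use of a harmonic function as a navigation function is easy to prototype on a grid. A minimal sketch assuming a discretized 2D C-space, with obstacle cells clamped to potential 1 and the goal clamped to 0: Jacobi relaxation approximates the harmonic function, and steepest descent on it yields a path. This shows the general principle only, not the paper's hierarchical cell decomposition or sampling scheme.

```python
import numpy as np

def harmonic_grid(free, goal, iters=5000):
    """Approximate a harmonic function on a grid by Jacobi relaxation.
    free: boolean array, True for free cells. Obstacles are clamped to 1
    and the goal cell to 0; interior free cells relax to the average of
    their 4 neighbours, which leaves no local minima in free space."""
    u = np.ones(free.shape)
    u[goal] = 0.0
    for _ in range(iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(free, avg, 1.0)   # relax free cells, clamp obstacles
        u[goal] = 0.0                  # keep the goal pinned
    return u

def descend(u, start):
    """Follow the steepest descent of u from start to the goal."""
    path, (r, c) = [start], start
    while u[r, c] > 0:
        nbrs = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        r, c = min(nbrs, key=lambda p: u[p])
        path.append((r, c))
    return path

free = np.ones((12, 12), dtype=bool)
free[:, 0] = free[:, -1] = free[0, :] = free[-1, :] = False  # boundary walls
free[2:9, 6] = False                                          # an obstacle wall
print(descend(harmonic_grid(free, goal=(6, 9)), start=(6, 2)))
```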

13.
Disjoint NP-pairs are a well studied complexity-theoretic concept with important applications in cryptography and propositional proof complexity. In this paper we introduce a natural generalization of the notion of disjoint NP-pairs to disjoint k-tuples of NP-sets for k ≥ 2. We define subclasses of the class of all disjoint k-tuples of NP-sets. These subclasses are associated with a propositional proof system and possess complete tuples which are defined from the proof system. In our main result we show that complete disjoint NP-pairs exist if and only if complete disjoint k-tuples of NP-sets exist for all k ≥ 2. Further, this is equivalent to the existence of a propositional proof system in which the disjointness of all k-tuples is shortly provable. We also show that a strengthening of these conditions characterizes the existence of optimal proof systems. (An extended abstract of this paper appeared in the proceedings of the conference CSR 2006, Lecture Notes in Computer Science 3967, 80–91, 2006. Supported by DFG grant KO 1053/5-1.)

14.
COMPUTING PERFECT AND STABLE MODELS USING ORDERED MODEL TREES
Ordered model trees were introduced as a normal form for disjunctive deductive databases. They were also used to facilitate the computation of minimal models for disjunctive theories by exploiting the order imposed on the Herbrand base of the theory. In this work we show how the order on the Herbrand base can be used to compute perfect models of a disjunctive stratified finite theory. We are able to compute the stable models of a general finite theory by combining the order on the elements of the Herbrand base with previous results showing that the stable models of a theory T can be computed as the perfect models of a corresponding disjunctive theory ε_T, obtained by applying the so-called evidential transformation to T. While other methods consider many models that are rejected at the end, the use of atom ordering allows us to guarantee that every model generated belongs to the class of models being computed. As for negation-free databases, the ordered tree serves as the canonical representation of the database.
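For orientation, the stable models the paper targets can be computed naively as follows. A minimal brute-force sketch of the Gelfond–Lifschitz construction for ground normal programs (our encoding, with no ordered trees and no evidential transformation):

```python
from itertools import chain, combinations

def stable_models(rules):
    """Stable models of a ground normal program by brute force.
    A rule is (head, positive_body, negative_body). The GL reduct w.r.t. a
    candidate set M drops rules whose negative body meets M and strips the
    remaining negation; M is stable iff it equals the reduct's least model."""
    atoms = sorted({r[0] for r in rules} |
                   set(chain.from_iterable(r[1] | r[2] for r in rules)))
    models = []
    for k in range(len(atoms) + 1):
        for cand in combinations(atoms, k):
            m = set(cand)
            reduct = [(h, pos) for h, pos, neg in rules if not (neg & m)]
            lm = set()                       # least model of the positive reduct
            changed = True
            while changed:
                changed = False
                for h, pos in reduct:
                    if pos <= lm and h not in lm:
                        lm.add(h)
                        changed = True
            if lm == m:
                models.append(m)
    return models

# p :- not q.   q :- not p.   (two stable models: {p} and {q})
prog = [("p", frozenset(), frozenset({"q"})),
        ("q", frozenset(), frozenset({"p"}))]
print(stable_models(prog))   # -> [{'p'}, {'q'}]
```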

15.
Goal separation is often a fruitful approach when solving complex problems. It provides a way to focus on relevant aspects in a stepwise fashion and hence to bound the problem-solving scope along a specific direction at any point. This work applies goal separation to the problem of synthesizing robust schedules. The problem is addressed by separating the phase of problem solution, which may pursue a standard optimization criterion (e.g., minimal makespan), from a subsequent phase of solution robustification in which a more flexible set of solutions is obtained and compactly represented through a temporal graph, called a Partial Order Schedule (POS). The key advantage of a POS is that it provides the capability to promptly respond to temporal changes (e.g., activity duration changes or activity start-time delays) and to hedge against further changes (e.g., new activities to perform or unexpected variations in resource capacity). On the one hand, the paper focuses on specific heuristic algorithms for the synthesis of POSs, starting from a pre-existing schedule (hence the name Solve-and-Robustify). Different extensions of a technique called chaining, which progressively introduces temporal flexibility into the representation of the solution, are introduced and evaluated. These extensions follow from the fact that in multi-capacitated resource settings more than one POS can be derived from a specific fixed-times solution via chaining, and they carry out a search for the most robust alternative. On the other hand, an additional analysis is performed to investigate the performance gain possible by further broadening the search process to consider multiple initial seed solutions. A detailed experimental analysis using state-of-the-art RCPSP/max benchmarks is carried out to demonstrate the performance advantage of these more sophisticated solve-and-robustify procedures, corroborating prior results obtained on smaller problems and also indicating how this leverage increases as problem size is increased.
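The basic chaining step is simple to sketch. A minimal version for a single resource of given capacity, assuming a fixed-times schedule as input: activities are dispatched in start-time order onto one chain per unit of capacity, and consecutive activities on a chain become precedence constraints of the resulting POS. This illustrates plain chaining only, not the paper's robustness-seeking extensions.

```python
def chain_schedule(activities, capacity):
    """Turn a fixed-times schedule on one resource of the given capacity
    into a Partial Order Schedule: assign each activity (name, start, end),
    in start-time order, to the chain that frees up first; consecutive
    activities on a chain become precedence constraints."""
    chains = [[] for _ in range(capacity)]        # one chain per capacity unit
    for act in sorted(activities, key=lambda a: a[1]):
        # pick a chain whose last activity ends by this activity's start
        free = min(chains, key=lambda ch: ch[-1][2] if ch else 0)
        last_end = free[-1][2] if free else 0
        assert last_end <= act[1], "input schedule exceeds resource capacity"
        free.append(act)
    precedences = [(a[0], b[0])                   # a must finish before b starts
                   for ch in chains for a, b in zip(ch, ch[1:])]
    return precedences

acts = [("A", 0, 3), ("B", 0, 2), ("C", 2, 5), ("D", 3, 6)]
print(chain_schedule(acts, capacity=2))
# -> [('A', 'D'), ('B', 'C')]  (one valid chaining of this schedule)
```

Note that with capacity greater than one, different tie-breaking rules in the `min` step yield different POSs from the same fixed-times solution, which is exactly the degree of freedom the paper's extensions search over.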

16.
In 1999, Nakano, Olariu, and Schwing [20] showed that the permutation routing of n items pretitled on a mobile ad hoc network (MANET for short) of p stations (p known) and k channels (a MANET(n, p, k)), with k < p, can be carried out in a bounded number of broadcast rounds provided k is suitably bounded with respect to p and each station has sufficiently many memory locations; a similar round bound holds under a complementary condition on k and station memory. They used two assumptions: first, that each station of the mobile ad hoc network has an identifier beforehand; secondly, that the stations are partitioned into k groups of equal size, although it was not shown how this partition can be obtained. In this paper, the stations have no identifiers beforehand and p is unknown. We develop a protocol which first names the stations, then determines the value of p, and partitions the stations into k groups. Finally, we show that the permutation routing problem can then be solved in a bounded number of broadcast rounds in the worst case, and in fewer rounds in the best case. Note that our approach does not impose any restriction on k.

17.
A homogeneous set is a non-trivial module of a graph, i.e. a non-empty, non-unitary, proper subset of a graph's vertices such that all its elements present exactly the same outer neighborhood. Given two graphs G1 = (V, E1) and G2 = (V, E2) with E1 ⊆ E2, the Homogeneous Set Sandwich Problem (HSSP) asks whether there exists a sandwich graph G = (V, E), with E1 ⊆ E ⊆ E2, which has a homogeneous set. In 2001, Tang et al. published a fast algorithm which was recently proven wrong, so that the HSSP's known upper bound would have been reset thereafter to the former one, determined by Cerioli et al. in 1998. We present, notwithstanding, new deterministic algorithms which establish a better upper bound. We give as well two even faster randomized algorithms, whose simplicity might lend them didactic usefulness. We believe that, besides providing efficient easy-to-implement procedures to solve it, the study of these new approaches allows a fairly thorough understanding of the problem.
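The defining property is straightforward to test. A minimal sketch with our own adjacency-set representation: a vertex set H is homogeneous iff it is non-trivial and every vertex outside H is adjacent either to all of H or to none of it.

```python
def is_homogeneous(adj, H):
    """Check whether H is a homogeneous set of the graph given as a dict
    mapping each vertex to its set of neighbours: H must be a non-empty,
    non-unitary, proper vertex subset, and every outside vertex must see
    either all of H or none of H."""
    H = set(H)
    if not (2 <= len(H) < len(adj)):       # non-empty, non-unitary, proper
        return False
    for v in set(adj) - H:
        seen = adj[v] & H
        if seen and seen != H:             # v distinguishes members of H
            return False
    return True

adj = {"a": {"b", "c", "d"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": {"a"}}
print(is_homogeneous(adj, {"b", "c"}))   # True: a sees both, d sees neither
print(is_homogeneous(adj, {"c", "d"}))   # False: b sees c but not d
```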

18.
We show that for arbitrary positive integers, with constant probability, the gcd of two linear combinations of these integers with rather small random integer coefficients coincides with the gcd of the integers themselves. This naturally leads to a probabilistic algorithm for computing the gcd of several integers via just one gcd of two numbers of about the same size as the initial data (namely, the above linear combinations). The algorithm can be repeated to achieve any desired confidence level.
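The resulting algorithm is a few lines to prototype. A minimal sketch in which the coefficient bound and number of rounds are our own knobs, not the paper's analysed parameters; since each linear combination is a multiple of the true gcd, the gcd of the combinations can only over-approximate it, and repetition drives the failure probability down.

```python
import math
import random

def gcd_via_combinations(nums, coeff_bound=2**16, rounds=3):
    """Probabilistic gcd of several integers: gcd(sum c_i*a_i, sum d_i*a_i)
    for small random c_i, d_i is always a multiple of gcd(a_1,...,a_m) and
    with good probability equals it. Folding a few independent rounds
    together with gcd makes failure very unlikely."""
    g = 0
    for _ in range(rounds):
        x = sum(random.randrange(1, coeff_bound) * a for a in nums)
        y = sum(random.randrange(1, coeff_bound) * a for a in nums)
        g = math.gcd(g, math.gcd(x, y))   # gcd(0, n) == n on the first round
    return g

nums = [n * 12 for n in (5, 7, 11, 13)]             # true gcd is 12
print(gcd_via_combinations(nums), math.gcd(*nums))  # both 12 (w.h.p.)
```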

19.
20.
In this paper, we consider the $(\in_{\gamma},\in_{\gamma}\vee q_{\delta})$-fuzzy and $(\overline{\in}_{\gamma},\overline{\in}_{\gamma}\vee \overline{q}_{\delta})$-fuzzy subnear-rings (ideals) of a near-ring. Some new characterizations are also given. In particular, we introduce the concepts of (strong) prime $(\in_{\gamma},\in_{\gamma}\vee q_{\delta})$-fuzzy ideals of near-rings and discuss the relationship between strong prime $(\in_{\gamma},\in_{\gamma}\vee q_{\delta})$-fuzzy ideals and prime $(\in_{\gamma},\in_{\gamma}\vee q_{\delta})$-fuzzy ideals of near-rings.
