Similar documents
20 similar documents found (search time: 46 ms)
1.
The view update problem is considered in the context of deductive databases where the update of an intensional predicate is accomplished by modifying appropriately the underlying relations in the extensional database. Two classes of disjunctive databases are considered. The first class contains those disjunctive databases which allow only definite rules in the intensional database and disjunctive facts in the extensional database. The second class contains stratified disjunctive databases so that in addition to the first class, negation is allowed in the bodies of the rules, but the database must be stratified. Algorithms are given both for the insertion of an intensional predicate into and the deletion of an intensional predicate from the database. The algorithms use SLD resolution and the concept of minimal models of the extensional database. The algorithms are proved to be correct and best according to the criterion of causing minimal change to the database, where we give first priority to minimizing deletions. Research supported by the National Science Foundation under grant numbers IRI-8916059, IRI-8921591, IRI-9200898, and IRI-9210220.

2.
Model trees were conceived as a structure-sharing approach to represent information in disjunctive deductive databases. In this paper we introduce the concept of ordered minimal model trees as a normal form for disjunctive deductive databases. These are model trees in which an order is imposed on the elements of the Herbrand base. The properties of ordered minimal model trees are investigated as well as their possible utilization for efficient manipulation of disjunctive deductive databases. Algorithms are presented for constructing and performing operations on ordered model trees. The complexity of ordered model tree processing is addressed. Model forests are presented as an approach to reduce the complexity of ordered model tree construction and processing. This research was supported by the National Science Foundation under grant No. IRI-89-16059, the Air Force Office of Scientific Research under grant No. AFOSR-91-0350, and the Fulbright Scholar Program. This work was done while visiting the University of Maryland Institute for Advanced Computer Studies.

3.
We propose an approach with feasible space requirements to maintain the transitive closure of a class of hypergraphs called OR-graphs. OR-graphs are equivalent to disjunctive deductive databases where disjunctions are limited to one attribute in each OR-table. It has been shown that query processing in disjunctive deductive databases becomes CoNP-hard even for very simple examples, but few attempts have been made, as is done in this paper, to identify classes of disjunctive databases and queries for which efficient algorithms exist. Polynomial time algorithms are presented to compute the transitive closure of OR-graphs and to handle dynamic insertions and deletions. With algorithms for insertions and deletions, we provide a simple but efficient technique to solve the failure set problem in reliability models, which is equivalent to finding the closure of an arbitrary non-empty set of simple nodes. We also show that a minimal extension to OR-graphs makes the computational complexity of the transitive closure CoNP-complete. Research supported in part by NSF under IRI-9210220 and IRI-9111988, Omron Corporation, and Omron Management Center of America.
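To make the incremental part concrete, here is a minimal Python sketch of the standard transitive-closure update for an ordinary digraph under edge insertions; the OR-graph (disjunctive-edge) semantics and the deletion handling described in the abstract are not modeled, and all names are illustrative.

```python
# Sketch: maintaining the transitive closure of an ordinary digraph under edge
# insertions only. The OR-graph semantics and deletions from the abstract are
# beyond this sketch; it only shows the incremental update idea.

class TransitiveClosure:
    def __init__(self, nodes):
        # reach[u] = set of nodes reachable from u (including u itself)
        self.reach = {u: {u} for u in nodes}

    def insert_edge(self, u, v):
        # Every node that already reaches u now also reaches everything v reaches.
        new_targets = self.reach[v]
        for x, rx in self.reach.items():
            if u in rx:
                rx |= new_targets

tc = TransitiveClosure(["a", "b", "c", "d"])
tc.insert_edge("a", "b")
tc.insert_edge("b", "c")
tc.insert_edge("c", "d")
print(tc.reach["a"])   # {'a', 'b', 'c', 'd'}
```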

4.
Learning Binary Relations Using Weighted Majority Voting
In this paper we demonstrate how weighted majority voting with multiplicative weight updating can be applied to obtain robust algorithms for learning binary relations. We first present an algorithm that obtains a nearly optimal mistake bound but at the expense of using exponential computation to make each prediction. However, the time complexity of our algorithm is significantly reduced from that of previously known algorithms that have comparable mistake bounds. The second algorithm we present is a polynomial time algorithm with a non-optimal mistake bound. Again the mistake bound of our second algorithm is significantly better than previous bounds proven for polynomial time algorithms. A key contribution of our work is that we define a non-pure or noisy binary relation and then, by exploiting the robustness of weighted majority voting with respect to noise, we show that both of our algorithms can learn non-pure relations. These provide the first algorithms that can learn non-pure binary relations. The first author was supported in part by NSF grant CCR-91110108 and NSF National Young Investigator Grant CCR-9357707 with matching funds provided by Xerox Corporation, Palo Alto Research Center and WUTA. The second author was supported by ONR grant N00014-91-J-1162 and NSF grant IRI-9123692.
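For reference, a minimal Python sketch of deterministic weighted majority voting with multiplicative updates follows; the expert pool tailored to binary relations used in the paper is not reproduced, and the generic experts below are an illustrative assumption.

```python
# Sketch of deterministic Weighted Majority voting with multiplicative weight
# updates (beta = 1/2). The experts here are generic prediction functions,
# an illustrative assumption; the paper uses expert pools built for binary
# relations.

def weighted_majority(experts, examples, beta=0.5):
    weights = [1.0] * len(experts)
    mistakes = 0
    for x, label in examples:
        preds = [e(x) for e in experts]
        votes = {0: 0.0, 1: 0.0}
        for w, p in zip(weights, preds):
            votes[p] += w
        guess = 1 if votes[1] >= votes[0] else 0
        if guess != label:
            mistakes += 1
        # Multiplicatively demote every expert that was wrong on this example.
        weights = [w * (beta if p != label else 1.0)
                   for w, p in zip(weights, preds)]
    return mistakes

experts = [lambda x: 0, lambda x: 1, lambda x: x % 2]
examples = [(i, i % 2) for i in range(20)]
print(weighted_majority(experts, examples))  # few mistakes: the parity expert dominates
```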

5.
In this paper two algorithms are presented which compute a set of generators of minimal cardinality for a finite soluble group given by a polycyclic presentation. The first can be used when a chief series is available. The second algorithm is less simple, but nevertheless efficient, and can be used when it is difficult or too expensive to compute a chief series. The problem of determining the minimal number d(G) of generators when G is a solvable group has been discussed and solved by Gaschütz, and the ideas for these algorithms are essentially suggested by the work of Gaschütz. In the appendices, CAYLEY V.3.7.2 procedures are listed.

6.
A challenging problem that confronts unstructured peer-to-peer (P2P) computing systems is how to provide efficient support to locate desired files. This paper addresses this problem by using quantitative information in the form of probabilistic knowledge. Two types of probabilistic knowledge are considered: overlap between topics shared in the network and coverage of topics at each individual peer. Based on the probabilistic knowledge, this paper proposes an adaptive probabilistic search algorithm that can efficiently support the file locating operation in an unstructured P2P network. Then, an update algorithm is devised to keep the probabilistic knowledge of individual peers fresh by taking advantage of feedback from previous user queries. Finally, extensive experiments are conducted to evaluate the efficiency and effectiveness of the proposed method. This work is partially supported by the National Natural Science Foundation of China under grants No. 60496325 and 60496327 and the MoE Doctorate Subject Program under project No. 20030246023. This work was done while the first author was visiting the University of California at Berkeley.
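A small Python sketch of the general idea follows: bias query forwarding toward neighbors with high estimated topic coverage and refresh the estimates from query feedback. The exponential-smoothing update and all names are illustrative assumptions, not the paper's exact algorithm.

```python
import random

# Sketch: forward queries to neighbors in proportion to their estimated topic
# coverage, and refresh the estimate from feedback on whether the neighbor
# actually returned hits. The smoothing rule and names are illustrative.

class Peer:
    def __init__(self, neighbors, topics, alpha=0.3):
        # coverage[n][t]: estimated probability that neighbor n can answer topic t
        self.coverage = {n: {t: 0.5 for t in topics} for n in neighbors}
        self.alpha = alpha

    def choose_neighbor(self, topic):
        neighbors = list(self.coverage)
        weights = [self.coverage[n][topic] for n in neighbors]
        return random.choices(neighbors, weights=weights, k=1)[0]

    def feedback(self, neighbor, topic, hit):
        old = self.coverage[neighbor][topic]
        self.coverage[neighbor][topic] = (1 - self.alpha) * old + self.alpha * (1.0 if hit else 0.0)

peer = Peer(neighbors=["n1", "n2", "n3"], topics=["music", "video"])
target = peer.choose_neighbor("music")
peer.feedback(target, "music", hit=True)
```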

7.
The computational complexity of a number of problems relating to minimal models of non-Horn deductive databases is considered. In particular, the problem of determining minimal model membership is shown to be NP-complete for non-recursive propositional databases. The structure of minimal models is also examined using the notion of a cyclic tree, and methods of determining minimal model membership, minimality of models, and compiling the GCWA are presented. The handling of negative premises is also considered using perfect model semantics, and methods for computing perfect model membership are presented.
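To spell out what minimal-model membership asks, here is a brute-force Python sketch for propositional clause sets; it enumerates all interpretations and is exponential in the number of atoms, consistent with the NP-completeness result quoted above.

```python
from itertools import product

# Brute-force sketch of minimal-model membership for a propositional clause
# set. Each clause is (negative_atoms, positive_atoms); a model is the set of
# atoms assigned true. Exponential in the number of atoms.

def models(atoms, clauses):
    for bits in product([False, True], repeat=len(atoms)):
        m = {a for a, b in zip(atoms, bits) if b}
        # A clause holds if some negated atom is false in m or some positive atom is true.
        if all(set(neg) - m or (set(pos) & m) for neg, pos in clauses):
            yield m

def minimal_models(atoms, clauses):
    ms = list(models(atoms, clauses))
    return [m for m in ms if not any(other < m for other in ms)]

def in_some_minimal_model(atom, atoms, clauses):
    return any(atom in m for m in minimal_models(atoms, clauses))

# Example: the disjunctive fact a ∨ b, plus the rule c <- a.
atoms = ["a", "b", "c"]
clauses = [([], ["a", "b"]), (["a"], ["c"])]
print(minimal_models(atoms, clauses))              # [{'b'}, {'a', 'c'}]
print(in_some_minimal_model("c", atoms, clauses))  # True
```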

8.
The class of decision problems for which finite, special string-rewriting systems that are confluent on some congruence class effectively provide algorithms is compared to the class of decision problems for which finite, monadic, and confluent string-rewriting systems effectively yield algorithms. Among the decision problems solved are the word problem, the power problem, the left- and right-divisibility problems, the finiteness problem, the group problem, the problem of torsion-freeness, the inclusion problem, and the generalized word problem. In particular, it is shown that the technique of linear sentences of Book [7] applies to finite, special string-rewriting systems that are confluent on some congruence class. This work was prepared while the first author was visiting the Fachbereich Informatik, Universität Kaiserslautern, Kaiserslautern, Federal Republic of Germany. Some of the results presented here were first announced at the 8th International Conference on Applied Algebra, Algebraic Algorithms and Error Correcting Codes, Tokyo, Japan, August 1990.

9.
The use of Gibbs random fields (GRF) to model images poses the important problem of the dependence of the patterns sampled from the Gibbs distribution on its parameters. Sudden changes in these patterns as the parameters are varied are known as phase transitions. In this paper we concentrate on developing a general deterministic theory for the study of phase transitions when a single parameter, namely the temperature, is varied. This deterministic framework is based on a technique known as the mean-field approximation, which is widely used in statistical physics. Our mean-field theory is general in that it is valid for any number of gray levels, any pairwise interaction potential, any neighborhood structure or size, and any set of constraints imposed on the desired images. The mean-field approximation is used to compute closed-form estimates of the critical temperatures at which phase transitions occur for two texture models widely used in the image modeling literature: the Potts model and the autobinomial model. The mean-field model allows us to gain insight into the Gibbs model behavior in the neighborhood of these temperatures. These analytical results are verified by computer simulations that use a novel mean-field descent algorithm. An important spinoff of our mean-field theory is that it allows us to compute approximations for the correlation functions of GRF models, thus bridging the gap between neighborhood-based and correlation-based a priori image models. The work of I.M. Elfadel was supported in part by the National Science Foundation under grant MIP-91-17724. The work of A.L. Yuille was supported by the Brown, Harvard, and MIT Center for Intelligent Control Systems under U.S. Army Research Office grant DAAL03-86-C-0171, by the Defense Advanced Research Projects Agency under contract AFOSR-89-0506, and by the National Science Foundation under grant IRI-9003306.
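As a minimal illustration of the mean-field idea for one of the two texture models named above, here is a Python sketch of the standard self-consistent mean-field update for a Potts model on a 4-neighbour grid; the parameter values and initialization are illustrative, and the paper's closed-form critical-temperature estimates are not reproduced.

```python
import numpy as np

# Sketch of the standard mean-field (self-consistent) update for a Potts model
# with Q gray levels on a 4-neighbour grid at temperature T. Parameter values
# and initialization are illustrative only.

def mean_field_potts(height, width, Q, T, beta=1.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    q = rng.random((height, width, Q))
    q /= q.sum(axis=2, keepdims=True)          # q[i,j,k] ~ P(pixel ij has level k)
    for _ in range(iters):
        # Sum of neighbouring marginals (zero padding at the image border).
        field = np.zeros_like(q)
        field[1:, :, :] += q[:-1, :, :]
        field[:-1, :, :] += q[1:, :, :]
        field[:, 1:, :] += q[:, :-1, :]
        field[:, :-1, :] += q[:, 1:, :]
        logits = beta * field / T
        logits -= logits.max(axis=2, keepdims=True)   # numerical stability
        q = np.exp(logits)
        q /= q.sum(axis=2, keepdims=True)
    return q

q = mean_field_potts(16, 16, Q=4, T=0.5)
print(q.argmax(axis=2))   # at low T, neighbouring pixels tend to agree on a level
```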

10.
We study the problem of decentralization of flow control in packet-switching networks under the isarithmic scheme. An incoming packet enters the network only if there are permits available at the entry port when it arrives. The actions of the controllers refer to the routing of permits in the network, and the control variables are the corresponding probabilities. We study the behavior of adaptive algorithms implemented at the controllers to update these probabilities and seek optimal performance. This problem can be stated as a routing problem in a closed queueing network. The centralized version of a learning automaton provides a general framework, which is presented along with a proof of asymptotic optimality. Decentralization of the controller gives rise to non-uniqueness of the optimal control parameters. Non-uniqueness can be exploited to construct asymptotically optimal learning algorithms that exhibit different behavior. We implement two different algorithms for the parallel operation and discuss their differences. Convergence is established using the weak convergence methodology. In addition to our theoretical results, we illustrate the main results using the flow control problem as a model example and verify the predicted behavior of the two proposed algorithms through computer simulations, including an example of tracking. The work of this author was partially supported by a grant from the Canadian Institute for Telecommunications Research under the NCE program of the Government of Canada, and partially supported by NSERC grant WFA 0139015 and FCAR-Québec grant 95-NC-1375. The work of this author was supported by a grant from the CITR under the NCE program of the Government of Canada.
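For readers unfamiliar with learning automata, the following Python sketch shows the classic linear reward-inaction (L_R-I) probability update often used for this kind of adaptive routing choice; the environment model (a fixed success probability per action) and the step size are illustrative assumptions, not the paper's setting.

```python
import random

# Sketch of a linear reward-inaction (L_R-I) learning automaton. On a reward,
# probability mass moves toward the chosen action; on a penalty, nothing
# changes. The reward model below (fixed success probability per action) is
# illustrative.

def l_ri(success_prob, steps=20000, a=0.01, seed=1):
    random.seed(seed)
    n = len(success_prob)
    p = [1.0 / n] * n                      # action probabilities (e.g., routing choices)
    for _ in range(steps):
        i = random.choices(range(n), weights=p, k=1)[0]
        rewarded = random.random() < success_prob[i]
        if rewarded:
            p = [pj + a * (1.0 - pj) if j == i else pj * (1.0 - a)
                 for j, pj in enumerate(p)]
        # inaction: probabilities are left unchanged on a penalty
    return p

print(l_ri([0.2, 0.5, 0.8]))   # probability mass concentrates on the best action
```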

11.
An exact subexponential-time lattice algorithm for Asian options
Asian options are popular financial derivative securities. Unfortunately, no exact pricing formulas exist for their price under continuous-time models. Asian options can also be priced on the lattice, which is a discretized version of the continuous-time model. But only exponential-time algorithms exist if the options are priced on the lattice without approximations. Although efficient approximation methods are available, they lack accuracy guarantees in general. This paper proposes a novel lattice structure for pricing Asian options. The resulting pricing algorithm is exact (i.e., without approximations), converges to the value under the continuous-time model, and runs in subexponential time. This is the first exact, convergent lattice algorithm to break the long-standing exponential-time barrier. An early version of this paper appeared in the Proceedings of the 15th Annual ACM-SIAM Symposium on Discrete Algorithms, 2004. T.-S. Dai was supported in part by NSC grant 94-2213-E-033-024. Y.-D. Lyuu was supported in part by NSC grant 94-2213-E-002-088.
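The subexponential lattice itself is not reproduced here; instead, the Python sketch below shows the exponential-time baseline the abstract refers to: an exact arithmetic-average Asian call priced by enumerating every path of a CRR binomial lattice. All parameter values are illustrative.

```python
from itertools import product
from math import exp, sqrt

# Sketch of the exponential-time baseline: an exact arithmetic-average Asian
# call priced by enumerating every path of a CRR binomial lattice. Runs in
# O(2^n), so n must stay tiny; parameters are illustrative.

def asian_call_exact(S0, K, r, sigma, T, n):
    dt = T / n
    u = exp(sigma * sqrt(dt))
    d = 1.0 / u
    p = (exp(r * dt) - d) / (u - d)            # risk-neutral up probability
    total = 0.0
    for path in product([0, 1], repeat=n):     # 1 = up move, 0 = down move
        S, prices = S0, [S0]
        prob = 1.0
        for step in path:
            S *= u if step else d
            prob *= p if step else (1.0 - p)
            prices.append(S)
        avg = sum(prices) / len(prices)
        total += prob * max(avg - K, 0.0)
    return exp(-r * T) * total

print(asian_call_exact(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, n=12))
```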

12.
Przymusinski extended the notion of stratified logic programs, developed by Apt, Blair and Walker, and by van Gelder, to stratified databases that allow both negative premises and disjunctive consequents. However, he did not provide a fixpoint theory for this class of databases. On the other hand, although a fixpoint semantics has been developed by Minker and Rajasekar for non-Horn logic programs, it amounts to traditional minimal model semantics, which is not sufficient to capture the intended meaning of negation in the premises of clauses in stratified databases. In this paper, a fixpoint approach to stratified databases is developed which corresponds to the perfect model semantics. Moreover, algorithms are proposed for computing the set of perfect models of a stratified database.
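To illustrate the fixpoint machinery that underlies this line of work, here is a Python sketch of stratified evaluation for a propositional program without disjunctive heads: strata are evaluated bottom-up and the immediate-consequence operator is iterated within each stratum. The disjunctive-consequent case treated in the paper (sets of minimal/perfect models) is not covered by this sketch.

```python
# Sketch of stratified fixpoint evaluation for a propositional program without
# disjunctive heads. Each rule is (head, positive_body, negative_body); strata
# are evaluated lowest-first, so every negated atom refers to a stratum whose
# truth values are already fixed.

def stratified_fixpoint(strata):
    true_atoms = set()
    for rules in strata:                 # strata listed from lowest to highest
        changed = True
        while changed:                   # iterate the T_P operator within this stratum
            changed = False
            for head, pos, neg in rules:
                if head not in true_atoms \
                        and set(pos) <= true_atoms \
                        and not (set(neg) & true_atoms):
                    true_atoms.add(head)
                    changed = True
    return true_atoms

# p.   q <- p, not r.   s <- not q.
strata = [
    [("p", [], [])],                     # stratum 0
    [("q", ["p"], ["r"])],               # stratum 1: "not r" refers below
    [("s", [], ["q"])],                  # stratum 2: "not q" refers below
]
print(stratified_fixpoint(strata))       # {'p', 'q'}  (s fails because q holds)
```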

13.
Many algorithms in distributed systems assume that the size of a single message depends on the number of processors. In this paper, we assume in contrast that messages consist of a single bit. Our main goal is to explore how the one-bit translation of unbounded message algorithms can be sped up by pipelining. We consider two problems. The first is routing between two processors in an arbitrary network and in some special networks (ring, grid, hypercube). The second problem is coloring a synchronous ring with three colors. The routing problem is a very basic subroutine in many distributed algorithms; the three coloring problem demonstrates that pipelining is not always useful.
Amotz Bar-Noy received his B.Sc. degree in Mathematics and Computer Science in 1981, and his Ph.D. degree in Computer Science in 1987, both from the Hebrew University of Jerusalem, Israel. Between 1987 and 1989 he was a post-doctoral fellow in the Department of Computer Science at Stanford University. He is currently a visiting scientist at the IBM Thomas J. Watson Research Center. His current research interests include the theoretical aspects of distributed and parallel computing, computational complexity and combinatorial optimization.
Joseph (Seffi) Naor received his B.A. degree in Computer Science in 1981 from the Technion, Israel Institute of Technology. He received his M.Sc. in 1983 and Ph.D. in 1987 in Computer Science, both from the Hebrew University of Jerusalem, Israel. Between 1987 and 1988 he was a post-doctoral fellow at the University of Southern California, Los Angeles, CA. Since 1988 he has been a post-doctoral fellow in the Department of Computer Science at Stanford University. His research interests include combinatorial optimization, randomized algorithms, computational complexity and the theoretical aspects of parallel and distributed computing.
Moni Naor received his B.A. in Computer Science from the Technion, Israel Institute of Technology, in 1985, and his Ph.D. in Computer Science from the University of California at Berkeley in 1989. He is currently a visiting scientist at the IBM Almaden Research Center. His research interests include computational complexity, data structures, cryptography, and parallel and distributed computation.
Supported in part by a Weizmann fellowship and by contract ONR N00014-85-C-0731. Supported by contract ONR N00014-88-K-0166 and by a grant from Stanford's Center for Integrated Systems. This work was done while the author was a post-doctoral fellow at the University of Southern California, Los Angeles, CA. This work was done while the author was with the Computer Science Division, University of California at Berkeley, and supported by NSF grant DCR 85-13926.
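The speed-up that pipelining buys for one-bit messages can be seen on a simple path. The Python sketch below forwards a k-bit message one bit per synchronous step over d links, so the whole message arrives after k + d - 1 steps instead of the roughly k·d steps of a store-and-forward scheme; the paper's actual routing and coloring protocols are richer than this toy.

```python
from collections import deque

# Sketch of pipelining a k-bit message over a path of d links: each processor
# forwards one bit per synchronous step, so the message arrives after
# k + d - 1 steps. This is only an illustration of the speed-up, not the
# paper's routing or coloring protocols.

def pipeline_path(message_bits, d):
    queues = [deque(message_bits)] + [deque() for _ in range(d)]   # nodes 0..d
    steps = 0
    while len(queues[-1]) < len(message_bits):
        # Synchronous step: every node forwards the bit at the head of its queue.
        moved = [q.popleft() if q else None for q in queues[:-1]]
        for i, bit in enumerate(moved):
            if bit is not None:
                queues[i + 1].append(bit)
        steps += 1
    return steps, list(queues[-1])

steps, received = pipeline_path([1, 0, 1, 1, 0, 0, 1, 0], d=5)
print(steps, received)    # 12 steps = k + d - 1; bits arrive in order
```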

14.
The state-space model is a general, powerful, and elegant representation of problem solving. Nevertheless, state-spaces have rarely been used to model realistic environments because conventional state-spaces are inherently deterministic, while the world is not. This paper extends the model to probabilistic state-spaces (PSS), which elegantly capture many of the world's complexities by regarding state values as generated by random variables. These PSS models, when combined with decision analytic techniques for knowledge elicitation and encoding, should yield realistic representations. The first investigation of states based on random variables derived the expected-outcome model of two-player games, which led to some powerful results about game evaluators. Most of the work done on expected-outcome was quite general, and should extend easily beyond game-trees to arbitrary state-spaces. The second potential application domain, strategic planning for robot controllers in automated assembly plants, is currently under investigation. This research was supported in part by NSF grants IST-8513989 and IRI-8910173 and by the University of Southern California Faculty Research Initiation Fund.

15.
Summary. The efficient parallel algorithms proposed for many fundamental problems, such as list ranking, integer sorting and computing preorder numberings on trees, are very sensitive to processor failures. The requirement of efficiency (commonly formalized using Parallel-time × Processors as a cost measure) has led to the design of highly tuned PRAM algorithms which, given the additional constraint of simple processor failures, unfortunately become inefficient or even incorrect. We propose a new notion of robustness that combines efficiency with fault tolerance. For the common case of fail-stop errors, we develop a general and easy to implement technique to make robust many efficient parallel algorithms, e.g., algorithms for all the problems listed above. More specifically, for any dynamic pattern of fail-stop errors on a CRCW PRAM with at least one surviving processor, our method increases the original algorithm cost by at most a log² multiplicative factor. Our technique is based on a robust solution of the problem of Write-All, i.e., using P processors, write 1's in all locations of an N-sized array. In addition we show that at least a log/log log multiplicative overhead will be incurred for certain patterns of failures by any algorithm that implements robust solutions to Write-All with P=N. However, by exploiting parallel slackness, we obtain an optimal cost algorithm when
Paris C. Kanellakis is a professor of computer science at Brown University. His primary research interests are in the application of logic to computer science, such as high-level query languages for database systems, parallel evaluation of logic programs, and type inference for programming languages. He has published many articles on these subjects and is the author of the chapter Elements of Relational Database Theory in the Handbook of Theoretical Computer Science (Elsevier, 1990). He has also contributed to the theory of distributed computing and to combinatorial optimization.
Alex Allister Shvartsman is an engineer at Digital Equipment Corporation. His professional interests include design and development of efficient distributed systems, distributed resource management, and theoretical foundations of fault-tolerant parallel computation. At Digital he architected and managed the development of distributed control systems that automated several of Digital's manufacturing processes. He is currently on an academic leave at Brown University.
An extended abstract of a part of this work appears as: Kanellakis and Shvartsman (1989), in the Proceedings of the 8th ACM Symposium on Principles of Distributed Computing, Edmonton, 1989. This author was supported by NSF grant IRI-8617344, an Alfred P. Sloan fellowship and ONR grant N00014-83-K-0146, ARPA Order No. 4786. This author was supported by the DEC GEEP Doctoral program and ONR grant N00014-91-J-1613.

16.
Summary. It is shown how to use efficient mergeable heaps to improve the running time of two algorithms that solve optimization problems on trees. The work of the author was supported in part by the Israel Commission for Basic Research and National Science Foundation grant MCS78-25301.
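A mergeable heap is one whose meld operation runs in O(log n); the Python sketch below implements a leftist heap, a standard such structure, to make the operation concrete. The tree-optimization algorithms the note improves are not reproduced.

```python
# Sketch of a mergeable heap (a leftist min-heap): merging two heaps costs
# O(log n), the operation exploited to speed up the tree algorithms. The tree
# optimization algorithms themselves are not shown here.

class Node:
    def __init__(self, key):
        self.key, self.left, self.right, self.rank = key, None, None, 1

def merge(a, b):
    if a is None: return b
    if b is None: return a
    if b.key < a.key:
        a, b = b, a                              # keep the smaller root on top
    a.right = merge(a.right, b)
    if (a.left.rank if a.left else 0) < (a.right.rank if a.right else 0):
        a.left, a.right = a.right, a.left        # restore the leftist property
    a.rank = (a.right.rank if a.right else 0) + 1
    return a

def insert(h, key):  return merge(h, Node(key))
def pop_min(h):      return h.key, merge(h.left, h.right)

h1 = h2 = None
for k in [5, 1, 9]:  h1 = insert(h1, k)
for k in [4, 7]:     h2 = insert(h2, k)
h = merge(h1, h2)                                # meld two heaps in O(log n)
key, h = pop_min(h)
print(key)                                       # 1
```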

17.
Data access scheduling in firm real-time database systems
A major challenge addressed by conventional database systems has been to efficiently implement the transaction model, which provides the properties of atomicity, serializability, and permanence. Real-time applications have added a complex new dimension to this challenge by placing deadlines on the response time of the database system. In this paper, we examine the problem of real-time data access scheduling, that is, the problem of scheduling the data accesses of real-time transactions in order to meet their deadlines. In particular, we focus on firm deadline real-time database applications, where transactions that miss their deadlines are discarded and the objective of the real-time database system is to minimize the number of missed deadlines. Within this framework, we use a detailed simulation model to compare the performance of several real-time locking protocols and optimistic concurrency control algorithms under a variety of real-time transaction workloads. The results of our study show that in moving from the conventional database system domain to the real-time domain, there are new performance-related forces that come into effect. Our experiments demonstrate that these factors can cause performance recommendations that were valid in a conventional database setting to be significantly altered in the corresponding real-time setting. This work was done while the author was with the Computer Sciences Department, University of Wisconsin-Madison. This research was partially supported by the National Science Foundation under grant IRI-8657323.
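The firm-deadline policy itself is simple to state: a transaction whose deadline has passed is discarded rather than run late. The Python sketch below simulates that policy with earliest-deadline-first service; the locking and optimistic concurrency-control protocols compared in the paper are not modeled, and the workload parameters are illustrative.

```python
import heapq
import random

# Toy simulation of firm-deadline scheduling: ready transactions are served
# earliest-deadline-first, and any transaction whose deadline has passed is
# discarded rather than executed late. Concurrency control is not modeled;
# arrival times, service demands and deadline slacks are illustrative.

def simulate(num_txn=200, seed=7):
    random.seed(seed)
    txns = sorted((random.uniform(0, 100),              # arrival time
                   random.uniform(1, 3),                # service demand
                   random.uniform(5, 15))               # deadline slack
                  for _ in range(num_txn))
    ready, clock, missed, done, i = [], 0.0, 0, 0, 0
    while i < len(txns) or ready:
        while i < len(txns) and txns[i][0] <= clock:
            arrival, service, slack = txns[i]
            heapq.heappush(ready, (arrival + slack, service))   # key = deadline
            i += 1
        if not ready:
            clock = txns[i][0]
            continue
        deadline, service = heapq.heappop(ready)
        if clock > deadline:
            missed += 1                                 # firm deadline: discard
        else:
            clock += service                            # run to completion
            done += 1
    return done, missed

print(simulate())   # (transactions committed, transactions discarded)
```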

18.
Several extensions of the logic programming language Prolog to non-Horn clauses use case analysis to handle non-Horn clauses. In this paper, analytical and empirical evidence is presented to show that, by making a set of clauses less "non-Horn" using predicate renaming, the performance of these case-analysis based procedures can be improved significantly. In addition, the paper investigates the problem of efficiently constructing a predicate renaming that maximally reduces the degree of "non-Hornness" of a clause set. It is shown that this problem of finding a predicate renaming to achieve minimal "non-Hornness" is NP-complete.
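To make the notion concrete: renaming a predicate flips the sign of all of its literals, and a clause is non-Horn when it has more than one positive literal. The Python sketch below greedily flips whichever predicate most reduces the number of non-Horn clauses; since finding an optimal renaming is NP-complete (as the abstract states), this is only a heuristic illustration.

```python
# Sketch of predicate renaming to reduce "non-Hornness". A clause is a set of
# signed literals (sign, predicate); renaming predicate p flips the sign of
# every p-literal. The greedy pass flips whichever predicate most reduces the
# number of clauses with more than one positive literal. Optimal renaming is
# NP-complete, so this is only a heuristic.

def non_horn_count(clauses, flipped):
    def positive(sign, pred):
        return sign if pred not in flipped else not sign
    return sum(1 for c in clauses
               if sum(positive(s, p) for s, p in c) > 1)

def greedy_renaming(clauses):
    preds = {p for c in clauses for _, p in c}
    flipped = set()
    improved = True
    while improved:
        improved = False
        current = non_horn_count(clauses, flipped)
        for p in sorted(preds):
            trial = flipped ^ {p}
            if non_horn_count(clauses, trial) < current:
                flipped, improved = trial, True
                break
    return flipped

# (True, 'p') is a positive literal p, (False, 'p') is a negated literal ¬p.
clauses = [{(True, 'p'), (True, 'q')},       # p ∨ q   : non-Horn
           {(False, 'p'), (True, 'q')}]      # ¬p ∨ q  : Horn
print(greedy_renaming(clauses))              # {'q'} -> both clauses become Horn
```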

19.
Summary. We propose hot-potato (or, deflection) packet routing algorithms on the two-dimensional mesh. The algorithms are strongly greedy in the sense that they attempt to send packets in good directions whenever possible. Furthermore, the routing operations are simple and independent of the time that has elapsed. The first algorithm gives the best evacuation time known for delivering all the packets to their destinations. A batch of k packets with maximal source-to-destination distance d_max is delivered in 2(k-1)+d_max steps. The second algorithm improves this bound to k+d_max when all packets are destined to the same node. This also implies a new bound for the multitarget case, which is the first to take into account the number of in-edges of a node. The third algorithm is designed for routing permutations with source-to-destination distance at most three, in which case the algorithm terminates in at most seven steps. We also show a lower bound of five steps for this problem.
Ishai Ben-Aroya received the B.A. and M.Sc. in computer science from the Technion (Israel Institute of Technology). He is currently working with the Microsoft Israel R&D group. His main interests include Routing Algorithms, Cryptography and Computer Security.
Tamar Eilam received the B.A. degree in Computer Science from the Technion in 1995, and is currently studying towards her M.A. degree.
Assaf Schuster received his B.A., M.A. and Ph.D. degrees in Computer Science from the Hebrew University of Jerusalem (the last one in 1991). He is currently a lecturer at the Technion. His main interests include Networks and Routing Algorithms, Parallel and Distributed Computation, Optical Computation and Communication, Dynamically Reconfiguring Networks, and Greedy Hot Potato Routing.
This work was supported in part by the French-Israeli grant for cooperation in Computer Science, and by a grant from the Israeli Ministry of Science. An extended abstract appeared in the proceedings of the 2nd European Symposium on Algorithms, September 1994.
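The defining rule of hot-potato routing is that every packet that arrives at a node must leave on some outgoing link in the same step, preferably on a link that reduces its distance to its destination. The Python sketch below shows only that per-node decision; the evacuation-time analysis and the specific greedy algorithms of the paper are not reproduced, and the function and link names are illustrative.

```python
# Sketch of the per-node decision in greedy hot-potato (deflection) routing on
# a 2D mesh: each packet gets a distance-reducing link if one is free, and
# leftover packets are deflected onto whatever links remain. Names are
# illustrative; this is not the paper's algorithm.

def route_step(node, packets, links=("N", "S", "E", "W")):
    """packets: list of (packet_id, destination); returns {link: packet_id}."""
    x, y = node
    free = set(links)
    assignment = {}
    deflected = []
    # Packets already at their destination would be absorbed before this step.
    for pid, (dx, dy) in packets:
        good = set()
        if dx > x: good.add("E")
        if dx < x: good.add("W")
        if dy > y: good.add("N")
        if dy < y: good.add("S")
        usable = good & free
        if usable:
            link = usable.pop()              # greedy: take any distance-reducing link
            assignment[link] = pid
            free.discard(link)
        else:
            deflected.append(pid)
    for pid in deflected:                    # hot potato: everything must go somewhere
        assignment[free.pop()] = pid
    return assignment

print(route_step((2, 2), [("a", (5, 2)), ("b", (0, 2)), ("c", (4, 2))]))
# e.g. {'E': 'a', 'W': 'b', 'N': 'c'}: 'c' also wants E, but E is taken, so it is deflected
```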

20.
Reconstructing convex sets from support line measurements
Algorithms are proposed for reconstructing convex sets given noisy support line measurements. It is observed that a set of measured support lines may not be consistent with any set in the plane. A theory of consistent support lines, which serves as a basis for reconstruction algorithms that take the form of constrained optimization algorithms, is developed. The formal statement of the problem and constraints reveals a rich geometry that makes it possible to include prior information about object position and boundary smoothness. The algorithms, which use explicit noise models and prior knowledge, are based on maximum-likelihood and maximum a posteriori estimation principles and are implemented using efficient linear and quadratic programming codes. Experimental results are presented. This research sets the stage for a more general approach to the incorporation of prior information concerning the estimation of object shape.
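As background, the naive reconstruction the constrained approach improves on simply intersects adjacent measured support lines to obtain candidate polygon vertices; with noisy measurements the result need not be convex or even consistent, which is exactly the issue the paper's ML/MAP formulation addresses. The Python sketch below shows that baseline only, with illustrative inputs.

```python
from math import cos, sin, pi

# Naive baseline for reconstruction from support line measurements: the
# support line at angle theta_i is x*cos(theta_i) + y*sin(theta_i) = h_i, and
# adjacent lines are intersected to get candidate polygon vertices. With noisy
# h_i this can be inconsistent or non-convex; the paper's constrained
# estimators address exactly that, and are not reproduced here.

def vertices_from_support(angles, h):
    n = len(angles)
    verts = []
    for i in range(n):
        j = (i + 1) % n
        a1, a2 = cos(angles[i]), sin(angles[i])
        b1, b2 = cos(angles[j]), sin(angles[j])
        det = a1 * b2 - a2 * b1
        x = (h[i] * b2 - h[j] * a2) / det
        y = (a1 * h[j] - b1 * h[i]) / det
        verts.append((x, y))
    return verts

# Support values of the unit disk (h = 1 at every angle) sampled at 8 angles:
angles = [2 * pi * k / 8 for k in range(8)]
h = [1.0] * 8
print(vertices_from_support(angles, h))   # vertices of a regular octagon around the disk
```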
