Similar Documents
20 similar documents found (search time: 31 ms)
1.
MAAN: A Multi-Attribute Addressable Network for Grid Information Services   (cited 14 times: 0 self-citations, 14 by others)
Recent structured Peer-to-Peer (P2P) systems such as Distributed Hash Tables (DHTs) offer scalable key-based lookup for distributed resources. However, they cannot be simply applied to grid information services because grid resources need to be registered and searched using multiple attributes. This paper proposes a Multi-Attribute Addressable Network (MAAN) that extends Chord to support multi-attribute and range queries. MAAN addresses range queries by mapping attribute values to the Chord identifier space via uniform locality-preserving hashing. It uses an iterative or single-attribute-dominated query routing algorithm to resolve multi-attribute based queries. Each node in MAAN has only O(log N) neighbors for N nodes. The number of routing hops to resolve a multi-attribute range query is O(log N + N × s_min), where s_min is the minimum range selectivity over all attributes. When s_min = ε, the hop count is logarithmic in the number of nodes, which scales to a large number of nodes and attributes. We also measured the performance of our MAAN implementation, and the experimental results are consistent with our theoretical analysis.
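The key ingredient above is the locality-preserving hash that maps an attribute's value range onto the Chord identifier space so that numeric order is preserved and a range query becomes a contiguous identifier interval. The sketch below is a minimal illustration of that idea, not MAAN's actual code; the attribute name, its bounds, and the 160-bit identifier width are assumptions, and MAAN's hash is additionally required to be uniform (load-balancing), which a plain linear map only achieves for uniformly distributed values.

```python
HASH_BITS = 160  # Chord-style identifier width (assumed)

def locality_preserving_hash(value, lo, hi, bits=HASH_BITS):
    """Map value in [lo, hi] monotonically onto [0, 2^bits).

    Because the mapping preserves order, a range query [a, b] on the
    attribute becomes one contiguous identifier interval, which Chord
    can walk with successor pointers.
    """
    if not lo <= value <= hi:
        raise ValueError("value outside attribute bounds")
    fraction = (value - lo) / (hi - lo)
    return int(fraction * ((1 << bits) - 1))

# Example: a hypothetical 'cpu-load' attribute registered in [0.0, 100.0].
low_id = locality_preserving_hash(20.0, 0.0, 100.0)
high_id = locality_preserving_hash(35.0, 0.0, 100.0)
print(low_id <= high_id)  # True: the range maps to a single identifier interval
```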

2.
Let B be a Banach space of R^n-valued continuous functions on [0, ∞) with f ∈ B. Consider the nonlinear Volterra integral equation (*) x(t) + ∫_0^t K(t, s, x(s)) ds = f(t). We use the implicit function theorem to give sufficient conditions on B and K(t, s, x) for the existence of a unique solution x ∈ B to (*) for each f ∈ B with ‖f‖_B sufficiently small. Moreover, there is a constant M > 0, independent of f, with ‖x‖_B ≤ M‖f‖_B. Part of this work was done while the author was visiting at Wright State University.

3.
The simple rational partial functions accepted by generalized sequential machines are shown to coincide with compositions involving P⁻¹, where P consists of the prefix codings. The rational functions accepted by generalized sequential machines are proved to coincide with compositions that additionally involve the family of endmarkers and the family of removals of endmarkers (the compositions are read from left to right). We also show that a composition of this kind yields the family of the subsequential functions. This work was partially supported by the Esprit Basic Research Action Working Group No. 3166 ASMICS, the CNRS, and the Academy of Finland.

4.
A first-order system F has the Kreisel length-of-proof property if the following statement is true for all formulas φ(x): if there is a k ≥ 1 such that for all n ≥ 0 there is a proof of φ(n̄) in F with at most k lines, then there is a proof of ∀x φ(x) in F. We consider this property for Parikh systems, which are first-order axiomatic systems that contain a finite number of axiom schemata (including individual axioms) and a finite number of rules of inference. We prove that any usual Parikh system formulation of Peano arithmetic has the Kreisel length-of-proof property if the underlying logic of the system is formulated without a schema for universal instantiation, in either one of two ways. (In one way, the formula to be instantiated is built up in steps, and in the other way, the term to be substituted is built up in steps.) Our method of proof uses techniques and ideas from unification theory.

5.
For compact Euclidean bodies P, Q, we define λ(P, Q) to be the smallest ratio r/s where r > 0, s > 0 satisfy sQ′ ⊆ P ⊆ rQ″. Here sQ denotes a scaling of Q by the factor s, and Q′, Q″ are some translates of Q. This function gives us a new distance function between bodies which, unlike previously studied measures, is invariant under affine transformations. If homothetic bodies are identified, the logarithm of this function is a metric. (Two bodies are homothetic if one can be obtained from the other by scaling and translation.) For integer k ≥ 3, define λ(k) to be the minimum value such that for each convex polygon P there exists a convex k-gon Q with λ(P, Q) ≤ λ(k). Among other results, we prove that 2.118... ≤ λ(3) ≤ 2.25 and λ(k) = 1 + Θ(k⁻²). We give an O(n² log² n)-time algorithm which, for any input convex n-gon P, finds a triangle T that minimizes λ(T, P) among triangles. However, in linear time we can find a triangle t with λ(t, P) ≤ 2.25. Our study is motivated by the attempt to reduce the complexity of the polygon containment problem, and also the motion-planning problem. In each case we describe algorithms which run faster when certain implicit slackness parameters of the input are bounded away from 1. These algorithms illustrate a new algorithmic paradigm in computational geometry for coping with complexity. Work of all authors was partially supported by the ESPRIT II Basic Research Actions Program of the EC under Contract No. 3075 (project ALCOM). Rudolf Fleischer and Kurt Mehlhorn also acknowledge DFG (Grant SPP Me 620/6). Chee Yap also acknowledges DFG (Grant Be 142/46-1) and NSF (Grants DCR-84-01898 and CCR-87-03458). This research was performed when Günter Rote and Chee Yap were at the Freie Universität Berlin.
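A compact restatement of the reconstructed definition follows. The symbol λ and the exact placement of r and s are inferred from the surrounding prose (a scaled translate of Q inside P, another outside), so treat this as a reading of the abstract rather than the paper's verbatim formula.

```latex
\[
  \lambda(P, Q) \;=\; \min\Bigl\{\, \tfrac{r}{s} \;:\; r, s > 0,\
  \exists\ \text{translates } Q', Q'' \text{ of } Q
  \ \text{with}\ sQ' \subseteq P \subseteq rQ'' \,\Bigr\}.
\]
```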

6.
We consider the half-space range-reporting problem: given a set S of n points in R^d, preprocess it into a data structure so that, given a query half-space, all k points of S lying in it can be reported efficiently. We extend previously known static solutions to dynamic ones, supporting insertions and deletions of points of S. For a given parameter m, n ≤ m ≤ n^⌊d/2⌋, and an arbitrarily small positive constant ε, we achieve O(m^(1+ε)) space and preprocessing time, O((n/m^(1/⌊d/2⌋)) log n + k) query time, and O(m^(1+ε)/n) amortized update time (d ≥ 3). We present, among others, the following applications: an O(n^(1+ε))-time algorithm for computing convex layers in R³, and an output-sensitive algorithm for computing a level in an arrangement of planes in R³, whose time complexity is O((b + n) n^ε), where b is the size of the level. Work by the first author has been supported by National Science Foundation Grant CCR-91-06514. A preliminary version of this paper appeared in Agarwal et al. [2], which also contains the results of [20] on dynamic bichromatic closest pair and minimum spanning trees.

7.
Summary For a family of languages L, CAL(L) is defined as the family of images of L under nondeterministic two-way finite state transducers, while FINITE·VISIT(L) is the closure of L under deterministic two-way finite state transducers; CAL⁰(L) = L and, for n ≥ 0, CAL^(n+1)(L) = CALⁿ(CAL(L)). For any semiAFL L, if FINITE·VISIT(L) ⊆ CAL(L), then CALⁿ(L) forms a proper hierarchy and, for every n ≥ 0, FINITE·VISIT(CALⁿ(L)) ⊆ CAL^(n+1)(L) ⊆ FINITE·VISIT(CAL^(n+1)(L)). If L is a SLIP semiAFL or a weakly k-iterative full semiAFL or a semiAFL contained in any full bounded AFL, then FINITE·VISIT(L) ⊆ CAL(L), and in the last two cases L ≠ FINITE·VISIT(L). If L is a substitution-closed full principal semiAFL and L ≠ FINITE·VISIT(L), then FINITE·VISIT(L) ⊆ CAL(L). If L is a substitution-closed full principal semiAFL generated by a language without an infinite regular set and L₁ is a full semiAFL, then L is contained in CAL^m(L₁) if and only if it is contained in L₁. Among the applications of these results are the following. For the following families L, CALⁿ(L) forms a proper hierarchy: L = INDEXED, L = ETOL, and any semiAFL contained in CF. The family CF is incomparable with CAL^m(NESA), where NESA is the family of one-way nonerasing stack languages, and INDEXED is incomparable with CAL^m(STACK), where STACK is the family of one-way stack languages. This work was supported in part by the National Science Foundation under Grants No. DCR74-15091 and MCS-78-04725.

8.
We study the approximation of the smallest eigenvalue of a Sturm–Liouville problem in the classical and quantum settings. We consider a univariate Sturm–Liouville eigenvalue problem with a nonnegative function q from the class C²([0,1]) and study the minimal number n(ε) of function evaluations or queries that are necessary to compute an ε-approximation of the smallest eigenvalue. We prove that n(ε) = Θ(ε^(-1/2)) in the (deterministic) worst case setting, and n(ε) = Θ(ε^(-2/5)) in the randomized setting. The quantum setting offers a polynomial speedup with bit queries and an exponential speedup with power queries. Bit queries are similar to the oracle calls used in Grover's algorithm, appropriately extended to real-valued functions. Power queries are used for a number of problems including phase estimation. They are obtained by considering the propagator of the discretized system at a number of different time moments. They allow us to use powers of the unitary matrix exp((1/2) iM), where M is an n × n matrix obtained from the standard discretization of the Sturm–Liouville differential operator. The quantum implementation of power queries by a number of elementary quantum gates that is polylog in n is an open issue. In particular, we show how to compute an ε-approximation with probability 3/4 using n(ε) = O(ε^(-1/3)) bit queries. For power queries, we use the phase estimation algorithm as a basic tool and present an algorithm that solves the problem using n(ε) = Θ(log ε^(-1)) power queries, log² ε^(-1) quantum operations, and (3/2) log ε^(-1) quantum bits. We also prove that the minimal number of qubits needed for this problem (regardless of the kind of queries used) is at least roughly (1/2) log ε^(-1). The lower bound on the number of quantum queries is proven in Bessen (in preparation). We derive a formula that relates the Sturm–Liouville eigenvalue problem to a weighted integration problem. Many computational problems may be recast as this weighted integration problem, which allows us to solve them with a polylog number of power queries. Examples include Grover's search, the approximation of the Boolean mean, NP-complete problems, and many multivariate integration problems. In this paper we only provide the relationship formula. The implications are covered in a forthcoming paper (in preparation). PACS: 03.67.Lx, 02.60.-x.
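To make the classical object concrete: the matrix M referred to above comes from the standard finite-difference discretization of −u″ + q(x)u on [0,1] with zero boundary conditions. The sketch below builds that matrix and computes its smallest eigenvalue classically; the coefficient q(x) = x² and the grid size are assumptions, the exact normalization of M used for the power queries exp((1/2) iM) may differ, and nothing here implements the quantum algorithm itself.

```python
import numpy as np

def sturm_liouville_matrix(q, n):
    """Standard 3-point discretization of -u'' + q(x) u on (0, 1)
    with zero Dirichlet boundary conditions and n interior grid points."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    main = 2.0 / h**2 + q(x)             # diagonal entries
    off = -np.ones(n - 1) / h**2          # sub/super-diagonal entries
    return np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# Assumed example coefficient q(x) = x^2 (any nonnegative C^2 function works).
M = sturm_liouville_matrix(lambda x: x**2, n=500)
smallest = np.linalg.eigvalsh(M)[0]
print(smallest)   # classical estimate of the smallest eigenvalue
```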

9.
Transformation of programs for fault-tolerance   (cited 2 times: 0 self-citations, 2 by others)
In this paper we describe how a program constructed for a fault-free system can be transformed into a fault-tolerant program for execution on a system which is susceptible to failures. A program is described by a set of atomic actions which perform transformations from states to states. We assume that a fault environment is represented by a program F. Interference by the fault environment F on the execution of a program P can then be described as a fault-transformation 𝓕 which transforms P into a program 𝓕(P). This is proved to be equivalent to the program P ∪ P_F, where P_F is derived from P and F, and defines the union of the sets of actions of P and P_F. A recovery transformation 𝓡 transforms P into a program 𝓡(P) = P ∪ R by adding a set of recovery actions R, called a recovery program. If the system is fail-stop and faults do not affect recovery actions, we have 𝓕(𝓡(P)) = 𝓕(P) ∪ R = P ∪ P_F ∪ R. We illustrate this approach to fault-tolerant programming by considering the problem of designing a protocol that guarantees reliable communication from a sender to a receiver in spite of faults in the communication channel between them.

10.
Exploratory data mining and analysis requires a computing environment which provides facilities for the user-friendly expression and rapid execution of scientific queries. In this paper, we address research issues in the parallelization of scientific queries containing complex user-defined operations. In a parallel query execution environment, parallelizing a query execution plan involves determining how input data streams to evaluators implementing logical operations can be divided to be processed by clones of the same evaluator in parallel. We introduce the concept of a relevance window that characterizes the data lineage and data partitioning opportunities available for a user-defined evaluator. In addition, we develop a query parallelization framework by extending relational parallel query optimization algorithms to allow the parallelization characteristics of user-defined evaluators to guide the process of query parallelization in an extensible query processing environment. We demonstrate the utility of our system by performing experiments mining cyclonic activity, blocking events, and upward wave-energy propagation features from several observational and model simulation datasets.

11.
Domain truncation is the simple strategy of solving problems on y ∈ (-∞, ∞) by using a large but finite computational interval, [-L, L]. Since u(y) is not a periodic function, spectral methods have usually employed a basis of Chebyshev polynomials, T_n(y/L). In this note, we show that because u(±L) must be very, very small if domain truncation is to succeed, it is always more efficient to apply a Fourier expansion instead. Roughly speaking, it requires about 100 Chebyshev polynomials to achieve the same accuracy as 64 Fourier terms. The Fourier expansion of a rapidly decaying but nonperiodic function on a large interval is also a dramatic illustration of the care that is necessary in applying asymptotic coefficient analysis. The behavior of the Fourier coefficients in the limit n → ∞ for fixed interval L is never relevant or significant in this application.
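A small numerical illustration of this comparison is sketched below, under assumed choices of the test function and interval (u(y) = sech(y), L = 20, chosen here for convenience and not taken from the note). It only compares coefficient magnitudes, which is the crude proxy for truncation error that the argument above relies on.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

L = 20.0                          # assumed half-width of the truncated domain
u = lambda y: 1.0 / np.cosh(y)    # assumed rapidly decaying, nonperiodic test function

# Chebyshev coefficients of u(L*t) on t in [-1, 1], via interpolation.
cheb = C.chebinterpolate(lambda t: u(L * t), 128)

# Fourier coefficients of u on [-L, L], treating it as 2L-periodic.
N = 1024
y = -L + 2 * L * np.arange(N) / N
four = np.fft.rfft(u(y)) / N

print("|Chebyshev coeff| at degree 100:", abs(cheb[100]))
print("|Fourier coeff|   at mode    64:", abs(four[64]))
```

With these choices the Fourier coefficient at mode 64 is several orders of magnitude smaller than the Chebyshev coefficient at degree 100, in line with the rough "100 Chebyshev ≈ 64 Fourier" claim.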

12.
Chen Ying, Zhu Qiang, Wang Nengbin. World Wide Web, 1998, 1(4): 241–255
Recent research on integrating database and World Wide Web (WWW) technologies has changed the navigation approach to searching for information in the Web. People now can issue queries via a simple query interface or a database-like query language to retrieve information from semi-structured WWW data sources. However, the quality of query processing in the WWW is still low due to many factors such as unpredictable response time, irrelevant results, and out-of-date data. Such low-quality query processing is intolerable to either users or service providers. In this paper, we present a quality-controlled query processing method in the WWW. Quality parameters that users can specify with their queries are introduced. Distance functions that are used to evaluate the goodness of query quality parameters are defined. A query processing model with quality control is introduced. A quality control protocol in query processing is presented. Quality-controlled query scheduling algorithms, including admission scheduling, promotion/demotion scheduling, and execution scheduling, are proposed. Other relevant issues such as query classification, system parameter estimation, and query queue management are also discussed. Query processing with quality control is a promising way to solve the uncertain and low-quality query processing problems in the WWW.

13.
Many database applications and environments, such as mediation over heterogeneous database sources and data warehousing for decision support, lead to complex queries. Queries are often nested, defined over previously defined views, and may involve unions. There are good reasons why one might want to remove pieces (sub-queries or sub-views) from such queries: some sub-views of a query may be effectively cached from previous queries, or may be materialized views; some may be known to evaluate empty, by reasoning over the integrity constraints; and some may match protected queries, which for security cannot be evaluated for all users. In this paper, we present a new evaluation strategy with respect to queries defined over views, which we call tuple-tagging, that allows for an efficient removal of sub-views from the query. Other approaches to this are to rewrite the query so the sub-views to be removed are effectively gone, and then to evaluate the rewritten query. With tuple-tagging evaluation, no rewrite of the original query is necessary. We describe formally a discounted query (a query with sub-views marked that are to be considered as removed), present the tuple-tagging algorithm for evaluating discounted queries, provide an analysis of the algorithm's performance, and present some experimental results. These results strongly support the tuple-tagging algorithm both as an efficient means to effectively remove sub-views from a view query during evaluation, and as a viable optimization strategy for certain applications. The experiments also suggest that rewrite techniques for this may perform worse than the evaluation of the original query, and much worse than the tuple-tagging approach.

14.
We consider the parallel time complexity of logic programs without function symbols, called logical query programs, or Datalog programs. We give a PRAM algorithm for computing the minimum model of a logical query program, and show that for programs with the polynomial fringe property, this algorithm runs in time that is logarithmic in the input size, assuming that concurrent writes are allowed if they are consistent. As a result, the linear and piecewise linear classes of logic programs are in NC. Then we examine several nonlinear classes in which the program has a single recursive rule that is an elementary chain. We show that certain nonlinear programs are related to GSM mappings of a balanced-parentheses language, and that this relationship implies the polynomial fringe property; hence such programs are in NC. Finally, we describe an approach for demonstrating that certain logical query programs are log-space complete for P, and apply it to both elementary single-rule programs and nonelementary programs. Supported by NSF Grant IST-84-12791, a grant of IBM Corporation, and ONR contract N00014-85-C-0731.
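As a concrete anchor for the terms above, the snippet below computes the minimum model of the classic linear Datalog program path(X,Y) :- edge(X,Y); path(X,Y) :- path(X,Z), edge(Z,Y) by naive bottom-up iteration. The edge facts are made up, and this sequential fixpoint only illustrates what "minimum model" and "linear program" mean; it is not the paper's PRAM algorithm and says nothing about its NC bound.

```python
def minimum_model_path(edges):
    """Naive bottom-up fixpoint for:
         path(X,Y) :- edge(X,Y).
         path(X,Y) :- path(X,Z), edge(Z,Y).
    Returns the path relation of the minimum model."""
    path = set(edges)                  # first rule: every edge fact is a path fact
    while True:
        new = {(x, w) for (x, z) in path for (z2, w) in edges if z == z2}
        if new <= path:                # no new facts derivable: fixpoint reached
            return path
        path |= new

# Hypothetical edge facts.
edges = {("a", "b"), ("b", "c"), ("c", "d")}
print(sorted(minimum_model_path(edges)))
# [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]
```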

15.
We show that a mixed state ρ = Σ_{mn} a_{mn} |m⟩⟨n| can be realized by an ensemble of pure states {p_k, |ψ_k⟩}. Employing this form, we discuss the relative entropy of entanglement of Schmidt-correlated states. Also, we calculate the distillable entanglement of a class of mixed states. PACS: 03.67.-a; 03.65.Bz; 03.65.Ud

16.
This paper describes a conceptual framework for nested relational query processing using tuple identifiers. It does this by introducing extended nested relations, relations in which tuple identifiers are allowed as attribute values, and by defining two primitives, Index and Lookup, that allow query processing based on extensive use of tuple identifiers. Some properties of Index and of the flat relational database that it induces are studied in detail. This research was partially supported by Sandia National Lab's SURP program.

17.
N. Young. Algorithmica, 1994, 11(6): 525–541
Weighted caching is a generalization of paging in which the cost to evict an item depends on the item. We study both of these problems as restrictions of the well-known k-server problem, which involves moving servers in a graph in response to requests so as to minimize the distance traveled. We give a deterministic on-line strategy for weighted caching that, on any sequence of requests, given a cache holding k items, incurs a cost within a factor of k/(k−h+1) of the minimum cost possible given a cache holding h items. The strategy generalizes least recently used, one of the best paging strategies in practice. The analysis is a primal-dual analysis, the first for an on-line problem, exploiting the linear programming structure of the k-server problem. We introduce loose competitiveness, motivated by Sleator and Tarjan's complaint [ST] that the standard competitive ratios for paging strategies are too high. A k-server strategy is loosely c(k)-competitive if, for any sequence, for almost all k, the cost incurred by the strategy with k servers either is no more than c(k) times the minimum cost or is insignificant. We show that certain paging strategies (including least recently used, and first in first out) that are k-competitive in the standard model are loosely c(k)-competitive provided c(k)/ln k → ∞ and both k/c(k) and c(k) are nondecreasing. We show that the marking algorithm, a randomized paging strategy that is Θ(ln k)-competitive in the standard model, is loosely c(k)-competitive provided c(k) − 2 ln ln k → ∞ and both 2 ln k − c(k) and c(k) are nondecreasing. This paper is the journal version of "On-line Caching as Cache Size Varies", which appeared in the Proceedings of the 2nd Annual ACM-SIAM Symposium on Discrete Algorithms (1991). Details beyond those in this paper may be found in "Competitive Paging and Dual-Guided Algorithms for Weighted Caching and Matching", which is the author's thesis and is available as Technical Report CS-TR-348-91 from the Computer Science Department at Princeton University. This research was performed while the author was at the Computer Science Department, Princeton University, Princeton, NJ 08544, USA, and was supported by the Hertz Foundation.
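The deterministic strategy referred to above belongs to the family of dual-guided ("Greedy-Dual") algorithms. The sketch below is the textbook form of Greedy-Dual for weighted caching, written from general knowledge of that algorithm rather than from this paper, so details such as how credit is restored on a hit should be checked against the original.

```python
def greedy_dual(requests, cost, k):
    """Greedy-Dual weighted caching (textbook form).
    requests: iterable of item ids; cost: dict item -> fetch/eviction cost;
    k: cache capacity. Returns the total cost paid on faults."""
    credit = {}   # cached item -> remaining credit
    total = 0
    for item in requests:
        if item in credit:
            credit[item] = cost[item]          # hit: restore full credit (LRU-like aging)
            continue
        if len(credit) >= k:
            delta = min(credit.values())       # age every cached item down uniformly
            for cached in credit:
                credit[cached] -= delta
            victim = next(p for p, c in credit.items() if c == 0)
            del credit[victim]                 # evict some item whose credit hit zero
        credit[item] = cost[item]              # fault: pay, cache with full credit
        total += cost[item]
    return total

# With unit costs the behaviour reduces to a paging strategy close to LRU.
print(greedy_dual(list("abcabd"), {c: 1 for c in "abcd"}, k=2))
```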

18.
This paper investigates the optimization problem of executing a join in a distributed database environment. The minimization of the communication cost for sending data through links is adopted as the optimization criterion. We explore in this paper the approach of judiciously using join operations as reducers in distributed query processing. In general, this problem is computationally intractable. Restricting the execution of a join to a pre-defined combinatorial order leads to a possible solution in polynomial time. An algorithm for chain query computation has been proposed in [21]. The time complexity of that algorithm is O(m²n² + m³n), where n is the number of sites in the network and m is the number of relations (fragments) involved in the join. In this paper, we first present a proof of the intuitively well understood fact that the eigenorder of a chain join is the best pre-defined combinatorial order under which to implement the algorithm in [21]. Second, we give a sufficient and necessary condition for a chain query with the eigenordering to be a simple query. For the processing of the class of simple queries, we show a significant reduction of the time complexity from O(m²n² + m³n) to O(mn + m²). It is encouraging that, in practice, the most frequent queries belong to the category of simple queries. Editor: Peter Apers

19.
This paper deals with the implementation of logic queries in which array structures are manipulated. Both top-down and bottom-up implementations of the presented logic language, called Datalog_A, are considered. Indeed, SLD-resolution is generalized to realize Datalog_A top-down query answering. Further, a fixpoint-based evaluation of Datalog_A queries is introduced, which forms the basis for an efficient bottom-up implementation of queries obtained by generalizing rewriting techniques such as the magic set method to the case of Datalog_A programs. Work partially supported by a European Union grant under the EC-US project "DEUS EX MACHINA: nondeterminism for deductive databases" and by a MURST grant (40% share) under the project "Sistemi formali e strumenti per basi di dati evolute".

20.
Zusammenfassung This paper is mainly concerned with scaling a given system of differential equations in such a way that, in the corresponding analogue computer set-up, the voltages at the outputs of the integrators neither exceed the bound set by the reference voltage nor fall below the bound set by the resolution. Estimation theorems are derived which allow this question to be settled a priori, i.e., without knowing the solution of the system of differential equations. The estimates first use norms, then Kamke norms. Perron's theorem mentioned in the title is obtained by a special choice of norms and by dispensing with the lower estimate. The analysis is complicated by the relative weakness of the requirement that the right-hand side of the system dx/dt = f(x,t) satisfy the condition that ‖x‖ ≤ a implies ‖f(x,t)‖ ≤ v(t)‖x‖ (‖·‖ := norm, a a positive real constant). As a consequence, it no longer seems possible, when estimating with Kamke norms, to make use of the methods customary in the literature on existence proofs and estimation theorems. To resolve this question, a conditional form of the well-known theorem of Gronwall (also called Bellman's theorem) is developed.
A conditional version of the integral inequality of Gronwall, a slight generalization of a stability theorem of Perron, and overflow-free scaling of analogue computer set-ups
Summary The main subject of this paper is the scaling of a given set of differential equations in such a way that the output voltages of the integrators of the associated analogue computer set-up do not exceed certain upper and lower bounds imposed by the reference voltage and the limited power of resolution of the elements of the analogue computer. The paper gives a priori bounds on the solution of the differential set. Some of these bounds work with norms, others with Kamke norms. Perron's stability theorem mentioned in the title of this paper results by inserting special norms and neglecting lower bounds. A difficulty arises from the relative weakness of the condition "‖x‖ ≤ a implies ‖f(x,t)‖ ≤ v(t)‖x‖" on the right-hand side of the set dx/dt = f(x,t), where ‖·‖ is any norm and a is a positive real constant. As a consequence of this, it seems no longer possible to use the usual techniques known from the literature on existence theorems and bounds for the solutions of differential equations. To cope with this situation, a conditional version of the well-known theorem of Gronwall (also known by the name of the Lemma of Bellman) will be derived.
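For reference, the classical (unconditional) Gronwall–Bellman inequality that the paper's conditional version refines reads as follows; this is the standard textbook statement, not the paper's conditional variant.

```latex
% Classical Gronwall--Bellman lemma (standard form, for comparison):
% if u is continuous, \alpha \ge 0 is a constant, \beta \ge 0 is continuous, and
\[
  u(t) \;\le\; \alpha + \int_{t_0}^{t} \beta(s)\,u(s)\,ds \qquad (t \ge t_0),
\]
% then
\[
  u(t) \;\le\; \alpha \,\exp\!\Bigl(\int_{t_0}^{t} \beta(s)\,ds\Bigr) \qquad (t \ge t_0).
\]
```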

This work is part of a dissertation prepared at the Institut für Angewandte Mathematik of the Technische Hochschule München under the supervision of Prof. Dr. rer. nat. habil. J. Heinhold.
