20 similar documents retrieved (search time: 919 ms)
1.
Optimizing multiple dimensional queries simultaneously in multidimensional databases
Weifa Liang Maria E. Orlowska Jeffrey X. Yu 《The VLDB Journal The International Journal on Very Large Data Bases》2000,8(3-4):319-338
Some significant progress related to multidimensional data analysis has been achieved in the past few years, including the
design of fast algorithms for computing datacubes, selecting some precomputed group-bys to materialize, and designing efficient
storage structures for multidimensional data. However, little work has been carried out on multidimensional query optimization
issues. In particular, the response time (or evaluation cost) for answering several related dimensional queries simultaneously
is crucial to OLAP applications. Recently, Zhao et al. first explored this problem by presenting three heuristic algorithms.
In this paper we first consider in detail two cases of the problem in which all the queries are either hash-based star joins
or index-based star joins only. In the case of the hash-based star join, we devise a polynomial approximation algorithm which
delivers a plan whose evaluation cost is $O(n^{\epsilon})$ times the optimal, where $n$ is the number of queries and $\epsilon$ is a fixed constant. We also present an exponential algorithm which delivers a plan with the optimal evaluation cost. In the case of the index-based
star join, we present a heuristic algorithm which delivers a plan whose evaluation cost is n times the optimal, and an exponential algorithm which delivers a plan with the optimal evaluation cost. We then consider
a general case in which both hash-based star-join and index-based star-join queries are included. For this case, we give a
possible improvement on the work of Zhao et al., based on an analysis of their solutions. We also develop another heuristic
and an exact algorithm for the problem. We finally conduct a performance study by implementing our algorithms. The experimental
results demonstrate that the solutions delivered for the restricted cases are always within twice the optimal, which
confirms our theoretical upper bounds. In fact, the experiments produce much better results than our theoretical estimates.
To the best of our knowledge, this is the only work to develop polynomial algorithms for the first two cases that deliver
plans with deterministic performance guarantees on the quality of the plans generated. Previous
approaches, including that of [ZDNS98], may generate a feasible plan for the problem in these two cases, but they provide
no performance guarantee; i.e., the plans generated by their algorithms can be arbitrarily far from the optimal one.
Received: July 21, 1998 / Accepted: August 26, 1999
2.
R. Braumandl J. Claussen A. Kemper D. Kossmann 《The VLDB Journal The International Journal on Very Large Data Bases》2000,8(3-4):156-177
Inter-object references are one of the key concepts of object-relational and object-oriented database systems. In this work,
we investigate alternative techniques to implement inter-object references and make the best use of them in query processing,
i.e., in evaluating functional joins. We will give a comprehensive overview and performance evaluation of all known techniques
for simple (single-valued) as well as multi-valued functional joins. Furthermore, we will describe special order-preserving\/ functional-join techniques that are particularly attractive for decision support queries that require ordered results. While
most of the presentation of this paper is focused on object-relational and object-oriented database systems, some of the results
can also be applied to plain relational databases because index nested-loop joins along key/foreign-key relationships, as they are frequently found in relational databases, are just one particular way to
execute a functional join.
Received February 28, 1999 / Accepted September 27, 1999
3.
Approximate query processing using wavelets
Kaushik Chakrabarti Minos Garofalakis Rajeev Rastogi Kyuseok Shim 《The VLDB Journal The International Journal on Very Large Data Bases》2001,10(2-3):199-223
Approximate query processing has emerged as a cost-effective approach for dealing with the huge data volumes and stringent
response-time requirements of today's decision support systems (DSS). Most work in this area, however, has so far been limited
in its query processing scope, typically focusing on specific forms of aggregate queries. Furthermore, conventional approaches
based on sampling or histograms appear to be inherently limited when it comes to approximating the results of complex queries
over high-dimensional DSS data sets. In this paper, we propose the use of multi-dimensional wavelets as an effective tool
for general-purpose approximate query processing in modern, high-dimensional applications. Our approach is based on building
wavelet-coefficient synopses of the data and using these synopses to provide approximate answers to queries. We develop novel query processing algorithms
that operate directly on the wavelet-coefficient synopses of relational tables, allowing us to process arbitrarily complex
queries entirely in the wavelet-coefficient domain. This guarantees extremely fast response times since our approximate query execution engine
can do the bulk of its processing over compact sets of wavelet coefficients, essentially postponing the expansion into relational
tuples until the end-result of the query. We also propose a novel wavelet decomposition algorithm that can build these synopses
in an I/O-efficient manner. Finally, we conduct an extensive experimental study with synthetic as well as real-life data sets
to determine the effectiveness of our wavelet-based approach compared to sampling and histograms. Our results demonstrate
that our techniques: (1) provide approximate answers of better quality than either sampling or histograms; (2) offer query
execution-time speedups of more than two orders of magnitude; and (3) guarantee extremely fast synopsis construction times
that scale linearly with the size of the data.
Received: 7 August 2000 / Accepted: 1 April 2001 Published online: 7 June 2001
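The synopsis idea in the abstract above can be illustrated with a toy one-dimensional sketch (our own minimal version: a non-normalized Haar transform and a keep-the-k-largest thinning rule; the paper's multi-dimensional decomposition and in-domain query algebra are far more involved):

```python
# Sketch of a wavelet-coefficient synopsis (illustrative, not the paper's
# algorithm): Haar-transform a frequency vector, keep only the k
# largest-magnitude coefficients, and reconstruct approximate answers.

def haar_transform(data):
    """Non-normalized Haar decomposition; len(data) must be a power of two."""
    coeffs = list(data)
    details = []
    while len(coeffs) > 1:
        pairs = [(coeffs[2 * i], coeffs[2 * i + 1]) for i in range(len(coeffs) // 2)]
        # details of the current (finer) level go after coarser-level details
        details = [(a - b) / 2 for a, b in pairs] + details
        coeffs = [(a + b) / 2 for a, b in pairs]
    return coeffs + details  # [overall average, details from coarsest to finest]

def inverse_haar(coeffs):
    """Rebuild the original vector from Haar coefficients."""
    values = [coeffs[0]]
    pos = 1
    while pos < len(coeffs):
        detail = coeffs[pos:pos + len(values)]
        pos += len(values)
        # average a and detail d expand to the pair (a + d, a - d)
        values = [w for v, d in zip(values, detail) for w in (v + d, v - d)]
    return values

def top_k_synopsis(coeffs, k):
    """Zero out all but the k largest-magnitude coefficients."""
    keep = set(sorted(range(len(coeffs)), key=lambda i: -abs(coeffs[i]))[:k])
    return [c if i in keep else 0.0 for i, c in enumerate(coeffs)]
```

Data with few sharp transitions compresses well: a step-shaped vector of length 8 is recovered exactly from just two retained coefficients.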
4.
Query processing over object views of relational data
Gustav Fahl Tore Risch 《The VLDB Journal The International Journal on Very Large Data Bases》1997,6(4):261-281
This paper presents an approach to object view management for relational databases. Such a view mechanism makes it possible for users to transparently work with data in
a relational database as if it was stored in an object-oriented (OO) database. A query against the object view is translated
to one or several queries against the relational database. The results of these queries are then processed to form an answer
to the initial query. The approach is not restricted to a ‘pure’ object view mechanism for the relational data, since the
object view can also store its own data and methods. Therefore it must be possible to process queries that combine local data
residing in the object view with data retrieved from the relational database. We discuss the key issues when object views
of relational databases are developed, namely: how to map relational structures to sub-type/supertype hierarchies in the view,
how to represent relational database access in OO query plans, how to provide the concept of object identity in the view,
how to handle the fact that the extension of types in the view depends on the state of the relational database, and how to
process and optimize queries against the object view. The results are based on experiences from a running prototype implementation.
Edited by: M.T. Özsu. Received April 12, 1995 / Accepted April 22, 1996
5.
Approximate query mapping: Accounting for translation closeness
Kevin Chen-Chuan Chang Héctor García-Molina 《The VLDB Journal The International Journal on Very Large Data Bases》2001,10(2-3):155-181
In this paper we present a mechanism for approximately translating Boolean query constraints across heterogeneous information
sources. Achieving the best translation is challenging because sources support different constraints for formulating queries,
and often these constraints cannot be precisely translated. For instance, a query [score>8] might be “perfectly” translated
as [rating>0.8] at some site, but can only be approximated as [grade=A] at another. Unlike other work, our general framework
adopts a customizable “closeness” metric for the translation that combines both precision and recall. Our results show that
for query translation we need to handle interdependencies among both query conjuncts as well as disjuncts. As the basis, we
identify the essential requirements of a rule system for users to encode the mappings for atomic semantic units. Our algorithm
then translates complex queries by rewriting them in terms of the semantic units. We show that, under practical assumptions,
our algorithm generates the best approximate translations with respect to the closeness metric of choice. We also present
a case study to show how our technique may be applied in practice.
Received: 15 October 2000 / Accepted: 15 April 2001 Published online: 28 June 2001
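As a concrete (hypothetical) instance of a "closeness" metric combining precision and recall, one could score a candidate translation by the F-measure of the answer set it retrieves against the intended answer set; the paper's metric is customizable, so this is only one possible choice:

```python
# Hypothetical closeness score for a query translation: the F-measure of the
# translated constraint's answer set versus the intended one.

def closeness(intended, translated):
    """intended / translated: sets of item ids satisfying each constraint."""
    if not intended or not translated:
        return 0.0
    hit = len(intended & translated)
    precision = hit / len(translated)   # fraction of retrieved items wanted
    recall = hit / len(intended)        # fraction of wanted items retrieved
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

A perfect translation scores 1.0; the example [grade=A] approximation of [score>8] would score lower whenever the two answer sets diverge.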
6.
Fast joins using join indices
Zhe Li Kenneth A. Ross 《The VLDB Journal The International Journal on Very Large Data Bases》1999,8(1):1-24
Two new algorithms, “Jive join” and “Slam join,” are proposed for computing the join of two relations using a join index.
The algorithms are duals: Jive join range-partitions input relation tuple ids and then processes each partition, while Slam
join forms ordered runs of input relation tuple ids and then merges the results. Both algorithms make a single sequential
pass through each input relation, in addition to one pass through the join index and two passes through a temporary file,
whose size is half that of the join index. Both algorithms require only that the number of blocks in main memory is of the
order of the square root of the number of blocks in the smaller relation. By storing intermediate and final join results in
a vertically partitioned fashion, our algorithms need to manipulate less data in memory at a given time than other algorithms.
The algorithms are resistant to data skew and adaptive to memory fluctuations. Selection conditions can be incorporated into
the algorithms. Using a detailed cost model, the algorithms are analyzed and compared with competing algorithms. For large
input relations, our algorithms perform significantly better than Valduriez's algorithm, the TID join algorithm, and hash
join algorithms. An experimental study is also conducted to validate the analytical results and to demonstrate the performance
characteristics of each algorithm in practice.
Received July 21, 1997 / Accepted June 8, 1998
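The range-partitioning idea behind Jive join can be shown with a much-simplified, purely in-memory sketch (partition count, tid layout, and names are our assumptions; the actual algorithm is disk-based and stores results vertically partitioned):

```python
# Simplified in-memory illustration of the Jive-join idea: join-index entries
# are range-partitioned on the second relation's tuple ids, so each partition
# can then be resolved with a single tid-ordered pass over that range.

def jive_join(join_index, r1, r2, num_partitions):
    """join_index: list of (tid1, tid2) pairs; r1, r2: dicts tid -> tuple."""
    max_tid2 = max(t2 for _, t2 in join_index)
    width = max_tid2 // num_partitions + 1        # size of each tid2 range
    partitions = [[] for _ in range(num_partitions)]
    for t1, t2 in join_index:                     # phase 1: range-partition on tid2
        partitions[t2 // width].append((t1, t2))
    results = []
    for part in partitions:                       # phase 2: tid2-ordered pass each
        for t1, t2 in sorted(part, key=lambda e: e[1]):
            results.append(r1[t1] + r2[t2])
    return results
```

In the real algorithm the per-partition sort is replaced by sequential I/O over the partition's tid range, which is what yields the single pass over each input relation.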
7.
Chiang Lee Chi-Sheng Shih Yaw-Huei Chen 《The VLDB Journal The International Journal on Very Large Data Bases》2001,9(4):327-343
Traditional algorithms for optimizing the execution order of joins are no longer valid when selections and projections involve
methods and become very expensive operations. Selections and projections can be even more costly than joins, so costly that they
are pulled above joins rather than pushed down in a query tree. In this paper, we take a fundamental look at how to approach
query optimization from a top-down design perspective, rather than trying to force one model to fit into another. We present
a graph model which is designed to characterize execution plans. Each edge and each vertex of the graph is assigned a weight
to model execution plans. We also design algorithms that use these weights to optimize the execution order of operations.
A cost model of these algorithms is developed. Experiments are conducted on the basis of this cost model. The results show
that our algorithms are superior to similar work proposed in the literature.
Received 20 April 1999 / Accepted 9 August 2000 Published online 20 April 2001
8.
Boris Chidlovskii Uwe M. Borghoff 《The VLDB Journal The International Journal on Very Large Data Bases》2000,9(1):2-17
In meta-searchers accessing distributed Web-based information repositories, performance is a major issue. Efficient query
processing requires an appropriate caching mechanism. Unfortunately, standard page-based as well as tuple-based caching mechanisms
designed for conventional databases are not efficient on the Web, where keyword-based querying is often the only way to retrieve
data. In this work, we study the problem of semantic caching of Web queries and develop a caching mechanism for conjunctive
Web queries based on signature files. Our algorithms handle both semantic containment and intersection relations between a query and the corresponding cache
items. We also develop a cache replacement strategy for situations where cached items differ in size and contribution
when providing partial query answers. We report results of experiments and show how the caching mechanism is realized in the
Knowledge Broker system.
Received June 15, 1999 / Accepted December 24, 1999
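Signature-file matching for conjunctive keyword queries can be sketched as follows (the hash function and signature width are arbitrary choices of ours; a signature match is only a cheap pre-filter and can yield false positives that must be verified):

```python
import zlib

def signature(keywords, bits=64):
    """Superimpose one hash bit per keyword into a single integer signature."""
    sig = 0
    for w in keywords:
        sig |= 1 << (zlib.crc32(w.encode()) % bits)
    return sig

def may_contain(cached_sig, query_sig):
    """Pre-filter for semantic containment: a cached (broader) conjunctive
    query may contain a new (narrower) one only if every bit of the cached
    signature also appears in the new query's signature."""
    return cached_sig & query_sig == cached_sig
```

Adding a conjunct narrows a conjunctive query's answer, so a cached query whose keywords are a subset of the new query's can serve as a superset answer to filter; the bitwise test screens cache items before any exact keyword comparison.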
9.
Secure buffering in firm real-time database systems
Binto George Jayant R. Haritsa 《The VLDB Journal The International Journal on Very Large Data Bases》2000,8(3-4):178-198
Many real-time database applications arise in electronic financial services, safety-critical installations and military systems
where enforcing security is crucial to the success of the enterprise. We investigate here the performance implications, in terms of killed transactions,
of guaranteeing multi-level secrecy in a real-time database system supporting applications with firm deadlines. In particular, we focus on the buffer management aspects of this issue.
Our main contributions are the following. First, we identify the importance and difficulties of providing secure buffer management
in the real-time database environment. Second, we present SABRE, a novel buffer management algorithm that provides covert-channel-free security. SABRE employs a fully dynamic one-copy allocation policy for efficient usage of buffer resources. It also incorporates
several optimizations for reducing the overall number of killed transactions and for decreasing the unfairness in the distribution
of killed transactions across security levels. Third, using a detailed simulation model, the real-time performance of SABRE
is evaluated against unsecure conventional and real-time buffer management policies for a variety of security-classified transaction
workloads and system configurations. Our experiments show that SABRE provides security with only a modest drop in real-time
performance. Finally, we evaluate SABRE's performance when augmented with the GUARD adaptive admission control policy. Our
experiments show that this combination provides close to ideal fairness for real-time applications that can tolerate covert-channel
bandwidths of up to one bit per second (a limit specified in military standards).
Received March 1, 1999 / Accepted October 1, 1999
10.
We consider the problem of scheduling a set of pages on a single broadcast channel using time-multiplexing. In a perfectly periodic schedule, time is divided into equal-size slots, and each page is transmitted in a time slot precisely every fixed interval of time (the period of the page). We study the case in which each page i has a given demand probability, and the goal is to design a perfectly periodic schedule that minimizes the average time a random client waits until its
page is transmitted. We seek approximate polynomial solutions. Approximation bounds are obtained by comparing the cost of
a solution provided by an algorithm with the cost of a solution to a relaxed (non-integral) version of the problem. A key quantity in our
methodology is a fraction that depends on the maximum demand probability, and approximation bounds are expressed in terms of it. In this paper, we develop a tree-based methodology for perfectly periodic scheduling, and using new techniques, we derive
algorithms with better bounds than the best previously known polynomial algorithm. For small values of this fraction, our best algorithm guarantees a tighter approximation ratio. On the other hand, we show a lower bound on the integrality gap between the cost of any perfectly periodic schedule and the cost of
the fractional problem. We also provide algorithms with good performance guarantees for large values of this fraction.
Received: December 2001 / Accepted: September 2002
11.
Bogdan S. Chlebus Leszek Gasieniec Alan Gibbons Andrzej Pelc Wojciech Rytter 《Distributed Computing》2002,15(1):27-38
We consider the problem of distributed deterministic broadcasting in radio networks of unknown topology and size. The network
is synchronous. If a node u can be reached from two nodes which send messages in the same round, none of the messages is received by u. Such messages block each other and node u either hears the noise of interference of messages, enabling it to detect a collision, or does not hear anything at all, depending on the model. We assume that nodes know neither the topology nor the size of
the network, nor even their immediate neighborhood. The initial knowledge of every node is limited to its own label. Such
networks are called ad hoc multi-hop networks. We study the time of deterministic broadcasting under this scenario.
For the model without collision detection, we develop a linear-time broadcasting algorithm for symmetric graphs, which is
optimal, and an algorithm for arbitrary n-node graphs. Next we show that broadcasting with acknowledgement is not possible in this model at all.
For the model with collision detection, we develop efficient algorithms for broadcasting and for acknowledged broadcasting
in strongly connected graphs.
Received: January 2000 / Accepted: June 2001
12.
R. Braumandl M. Keidl A. Kemper D. Kossmann A. Kreutz S. Seltzsam K. Stocker 《The VLDB Journal The International Journal on Very Large Data Bases》2001,10(1):48-71
We present the design of ObjectGlobe, a distributed and open query processor for Internet data sources. Today, data is published
on the Internet via Web servers which have, if at all, very localized query processing capabilities. The goal of the ObjectGlobe
project is to establish an open marketplace in which data and query processing capabilities can be distributed and used by any kind of Internet application. Furthermore, ObjectGlobe integrates cycle providers (i.e., machines) which carry out query processing operators. The overall goal is to make it possible to execute a query
with, in principle, unrelated query operators, cycle providers, and data sources. Such an infrastructure can serve as enabling
technology for scalable e-commerce applications, e.g., B2B and B2C market places, to be able to integrate data and data processing
operations of a large number of participants. One of the main challenges in the design of such an open system is to ensure
privacy and security. We discuss the ObjectGlobe security requirements, show how basic components such as the optimizer and
runtime system need to be extended, and present the results of performance experiments that assess the additional cost for
secure distributed query processing. Another challenge is quality of service management so that users can constrain the costs
and running times of their queries.
Received: 30 October 2000 / Accepted: 14 March 2001 Published online: 7 June 2001
13.
In this paper we describe continuing work being carried out as part of the Bristol Wearable Computing Initiative. We are
interested in the use of context sensors to improve the usefulness of wearable computers. A CyberJacket incorporating a Tourist Guide application has been built, and we have experimented with location and movement sensing devices
to improve its performance. In particular, we have researched processing techniques for data from accelerometers which enable
the wearable computer to determine the user’s activity. We have experimented with, and review, techniques already employed
by others; and then propose new methods for analysing the data delivered by these devices. We try to minimise the number of
devices needed, and use a single X-Y accelerometer device. Using our techniques we have adapted our CyberJacket and Tourist
Guide to include a multimedia presentation which gives the user information using different media depending on the user’s
activity as well as location.
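A toy version of activity detection from a single X-Y accelerometer might look like the following (thresholds, labels, and the variance-of-magnitude feature are invented for illustration; the paper's processing techniques are more elaborate):

```python
# Toy activity classifier: the variance of acceleration magnitude over a
# window separates still, walking, and running (thresholds are invented).

def classify_activity(samples, still_thresh=0.05, walk_thresh=1.0):
    """samples: list of (x, y) acceleration readings for one time window."""
    mags = [(x * x + y * y) ** 0.5 for x, y in samples]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    if var < still_thresh:
        return 'still'          # nearly constant magnitude: user not moving
    return 'walking' if var < walk_thresh else 'running'
```

The point of the sketch is only that a single two-axis device can drive the media selection described above; real systems would also filter gravity and sample at a fixed rate.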
14.
K. Selçuk Candan Eric Lemar V.S. Subrahmanian 《The VLDB Journal The International Journal on Very Large Data Bases》2000,9(2):131-153
Though there has been extensive work on multimedia databases in the last few years, there is no prevailing notion of a multimedia
view, nor are there techniques to create, manage, and maintain such views. Visualizing the results of a dynamic multimedia
query or materializing a dynamic multimedia view corresponds to assembling and delivering an interactive multimedia presentation
in accordance with the visualization specifications. In this paper, we suggest that a non-interactive multimedia presentation
is a set of virtual objects with associated spatial and temporal presentation constraints. A virtual object is either an object, or the result of a query.
As queries may have different answers at different points in time, scheduling the presentation of such objects is nontrivial.
We then develop a probabilistic model of interactive multimedia presentations, extending the non-interactive model described
earlier. We also develop a probabilistic model of interactive visualization where the probabilities reflect the user profiles,
or the likelihood of certain user interactions. Based on this probabilistic model, we develop three types of utility-theoretic
prefetching algorithms that anticipate how users will interact with the presentation. These prefetching algorithms
allow efficient visualization of the query results in accordance with the underlying specification. We have built a prototype
system that incorporates these algorithms. We report on the results of experiments conducted on top of this implementation.
Received June 10, 1998 / Accepted November 10, 1999
15.
Fast techniques for the optimal smoothing of stored video
Work-ahead smoothing is a technique whereby a server, transmitting stored compressed video to a client, utilizes client buffer
space to reduce the rate variability of the transmitted stream. The technique requires the server to compute a schedule of
transfer under the constraints that the client buffer neither overflows nor underflows. Recent work established an optimal
off-line algorithm (which minimizes peak, variance and rate variability of the transmitted stream) under the assumptions of
fixed client buffer size, known worst case network jitter, and strict playback of the client video. In this paper, we examine
the practical considerations of heterogeneous and dynamically variable client buffer sizes, variable worst case network jitter
estimates, and client interactivity. These conditions require on-line computation of the optimal transfer schedule. We focus on techniques for reducing on-line computation time. Specifically,
(i) we present an algorithm for precomputing and storing the optimal schedules for all possible client buffer sizes in a compact
manner; (ii) we show that it is theoretically possible to precompute and store compactly the optimal schedules for all possible
estimates of worst case network jitter; (iii) in the context of playback resumption after client interactivity, we show convergence
of the recomputed schedule with the original schedule, implying greatly reduced on-line computation time; and (iv) we propose
and empirically evaluate an “approximation scheme” that produces a schedule close to optimal but takes much less computation
time.
16.
The GMAP: a versatile tool for physical data independence
Odysseas G. Tsatalos Marvin H. Solomon Yannis E. Ioannidis 《The VLDB Journal The International Journal on Very Large Data Bases》1996,5(2):101-118
Physical data independence is touted as a central feature of modern
database systems. It allows users to frame queries in terms of the logical
structure of the data, letting a query processor automatically translate
them into optimal plans that access physical storage structures. Both
relational and object-oriented systems, however, force users to frame their
queries in terms of a logical schema that is directly tied to physical
structures. We present an approach that eliminates this dependence. All
storage structures are defined in a declarative language based on
relational algebra as functions of a logical schema. We present an
algorithm, integrated with a conventional query optimizer, that translates
queries over this logical schema into plans that access the storage
structures. We also show how to compile update requests into plans that
update all relevant storage structures consistently and optimally.
Finally, we report on experiments with a prototype implementation of our
approach that demonstrate how it allows storage structures to be tuned to
the expected or observed workload to achieve significantly better
performance than is possible with conventional techniques.
Edited by Matthias Jarke, Jorge Bocca, Carlo Zaniolo. Received September 15, 1994 / Accepted September 1, 1995
17.
In packet audio applications, packets are buffered at a receiving site and their playout delayed in order to compensate for
variable network delays. In this paper, we consider the problem of adaptively adjusting the playout delay in order to keep
this delay as small as possible, while at the same time avoiding excessive “loss” due to the arrival of packets at the receiver
after their playout time has already passed. The contributions of this paper are twofold. First, given a trace of packet audio
receptions at a receiver, we present efficient algorithms for computing a bound on the achievable performance of any playout delay adjustment algorithm. More precisely, we compute upper and lower bounds (which are shown to be tight for the
range of loss and delay values of interest) on the optimum (minimum) average playout delay for a given number of packet losses
(due to late arrivals) at the receiver for that trace. Second, we present a new adaptive delay adjustment algorithm that tracks
the network delay of recently received packets and efficiently maintains delay percentile information. This information, together
with a “delay spike” detection algorithm based on (but extending) our earlier work, is used to dynamically adjust talkspurt
playout delay. We show that this algorithm outperforms existing delay adjustment algorithms over a number of measured audio
delay traces and performs close to the theoretical optimum over a range of parameter values of interest.
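A percentile-based delay tracker of the kind described in the abstract above might look roughly like this (the window size, percentile, and class name are our invention, not the paper's algorithm, which maintains percentile information more efficiently and adds spike detection):

```python
from collections import deque

class PlayoutEstimator:
    """Track recent network delays and pick a talkspurt playout delay so that
    roughly (1 - percentile) of packets would arrive too late to play."""

    def __init__(self, window=100, percentile=0.95):
        self.delays = deque(maxlen=window)   # sliding window of recent delays
        self.percentile = percentile

    def record(self, network_delay):
        self.delays.append(network_delay)

    def playout_delay(self):
        ordered = sorted(self.delays)
        idx = min(int(self.percentile * len(ordered)), len(ordered) - 1)
        return ordered[idx]
```

Raising the percentile trades higher average playout delay for fewer late losses, which is exactly the delay/loss trade-off the paper's bounds quantify.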
18.
Speeding up construction of PMR quadtree-based spatial indexes
Gisli R. Hjaltason Hanan Samet 《The VLDB Journal The International Journal on Very Large Data Bases》2002,11(2):109-137
Spatial indexes, such as those based on the quadtree, are important in spatial databases for efficient execution of queries
involving spatial constraints, especially when the queries involve spatial joins. In this paper we present a number of techniques
for speeding up the construction of quadtree-based spatial indexes, specifically the PMR quadtree, which can index arbitrary
spatial data. We assume a quadtree implementation using the “linear quadtree”, a disk-resident representation that stores
objects contained in the leaf nodes of the quadtree in a linear index (e.g., a B-tree) ordered based on a space-filling curve.
We present two complementary techniques: an improved insertion algorithm and a bulk-loading method. The bulk-loading method
can be extended to handle bulk-insertions into an existing PMR quadtree. We make some analytical observations about the I/O
cost and CPU cost of our PMR quadtree bulk-loading algorithm, and conduct an extensive empirical study of the techniques presented
in the paper. Our techniques are found to yield significant speedup compared to traditional quadtree building methods, even
when the size of a main memory buffer is very small compared to the size of the resulting quadtrees.
Edited by R. Sacks-Davis. Received: July 10, 2001 / Accepted: March 25, 2002 Published online: September 25, 2002
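The space-filling-curve ordering used by a linear quadtree can be illustrated with the Z-order (Morton) code, one common choice (the paper's exact curve may differ): interleaving the bits of a cell's x and y coordinates yields a single key under which quadtree leaves can be stored in a B-tree.

```python
# Morton (Z-order) code: interleave the bits of x and y so that cells close
# in 2-D space tend to be close in the 1-D key order of the linear index.

def morton_code(x, y, bits=16):
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)       # x bits at even positions
        code |= ((y >> i) & 1) << (2 * i + 1)   # y bits at odd positions
    return code
```

The four cells of a 2x2 grid receive codes 0-3 in quadrant order, which is what lets a B-tree over these keys act as a disk-resident quadtree.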
19.
Ranking queries, also known as top-k queries, produce results that are ordered on some computed score. Typically, these queries involve joins, where users are usually interested only in the top-k join results. Top-k queries are dominant in many emerging applications, e.g., multimedia retrieval by content, Web databases, data mining, middlewares, and most information retrieval applications. Current relational query processors do not handle ranking queries efficiently, especially when joins are involved. In this paper, we address supporting top-k join queries in relational query processors. We introduce a new rank-join algorithm that makes use of the individual orders of its inputs to produce join results ordered on a user-specified scoring function. The idea is to rank the join results progressively during the join operation. We introduce two physical query operators based on variants of ripple join that implement the rank-join algorithm. The operators are nonblocking and can be integrated into pipelined execution plans. We also propose an efficient heuristic designed to optimize a top-k join query by choosing the best join order. We address several practical issues and optimization heuristics to integrate the new join operators in practical query processors. We implement the new operators inside a prototype database engine based on PREDATOR. The experimental evaluation of our approach compares recent algorithms for joining ranked inputs and shows superior performance.
Received: 23 December 2003 / Accepted: 31 March 2004 / Published online: 12 August 2004
Edited by: S. Abiteboul. Extended version of the paper published in the Proceedings of the 29th International Conference on Very Large Databases, VLDB 2003, Berlin, Germany, pp 754-765
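The progressive rank-join idea in the abstract above can be sketched in a few lines (ours, not the paper's operators: we score by the sum of input scores, assume both inputs non-empty with unique keys per input, and alternate pulls in a simple ripple pattern):

```python
import heapq

def rank_join(left, right, k):
    """left/right: (key, score) lists sorted by score descending. Returns the
    top-k join results (key, combined score), scoring by the sum of scores."""
    seen_l, seen_r = {}, {}
    results = []                          # heap of (-score, key)
    i = j = 0
    while i < len(left) or j < len(right):
        if i < len(left) and (i <= j or j >= len(right)):
            key, s = left[i]; i += 1      # pull next tuple from the left input
            seen_l.setdefault(key, s)
            if key in seen_r:
                heapq.heappush(results, (-(s + seen_r[key]), key))
        else:
            key, s = right[j]; j += 1     # pull next tuple from the right input
            seen_r.setdefault(key, s)
            if key in seen_l:
                heapq.heappush(results, (-(seen_l[key] + s), key))
        # upper bound (threshold) on the score of any join result not yet seen
        last_l = left[i - 1][1] if i else left[0][1]
        last_r = right[j - 1][1] if j else right[0][1]
        threshold = max(last_l + right[0][1], left[0][1] + last_r)
        if len(results) >= k and -sorted(results)[k - 1][0] >= threshold:
            break                         # k buffered results already beat it
    return [(key, -neg) for neg, key in sorted(results)[:k]]
```

The early-stop threshold is what makes the operator progressive: it can emit the top-k results after consuming only a prefix of each sorted input.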
20.
J. Claussen A. Kemper D. Kossmann C. Wiesner 《The VLDB Journal The International Journal on Very Large Data Bases》2000,9(3):190-213
Decision support queries typically involve several joins, a grouping with aggregation, and/or sorting of the result tuples.
We propose two new classes of query evaluation algorithms that can be used to speed up the execution of such queries. The
algorithms are based on (1) early sorting and (2) early partitioning, or a combination of both. The idea is to push the sorting and/or the partitioning to the leaves, i.e., the base relations,
of the query evaluation plans (QEPs) and thereby avoid sorting or partitioning large intermediate results generated by the
joins. Both early sorting and early partitioning are used in combination with hash-based algorithms for evaluating the join(s)
and the grouping. To enable early sorting, the sort order generated at an early stage of the QEP is retained through an arbitrary
number of so-called order-preserving hash joins. To make early partitioning applicable to a large class of decision support queries, we generalize the so-called hash teams
proposed by Graefe et al. [GBC98]. Hash teams make it possible to perform several hash-based operations (join and grouping) on the same
attribute in one pass without repartitioning intermediate results. Our generalization consists of indirectly partitioning
the input data. Indirect partitioning means partitioning the input data on an attribute that is not directly needed for the
next hash-based operation, and it involves the construction of bitmaps to approximate the partitioning for the attribute that
is needed in the next hash-based operation. Our performance experiments show that such QEPs based on early sorting, early partitioning, or both in combination perform significantly better than conventional strategies for many common classes of decision support
queries.
Received April 4, 2000 / Accepted June 23, 2000
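Early sorting relies on joins that retain the probe side's order. A minimal in-memory sketch of such an order-preserving hash join (our own toy version, not the paper's operator) shows why a sort pushed down to a base relation survives the join:

```python
# Toy order-preserving hash join: hash the build side, stream the probe side
# in its existing sort order, and emit matches in probe order.

def order_preserving_hash_join(build, probe, build_key, probe_key):
    table = {}
    for row in build:                      # build phase: hash the smaller input
        table.setdefault(row[build_key], []).append(row)
    out = []
    for row in probe:                      # probe phase: preserves probe order
        for match in table.get(row[probe_key], []):
            out.append({**row, **match})
    return out
```

Because the probe input is consumed as a stream, sorting the base relation once lets an arbitrary chain of such joins deliver the final result already sorted, avoiding a sort of the large intermediate results.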