Sort by: 4 results found, search time 125 ms
1.
Extract-transform-load (ETL) workflows model the population of enterprise data warehouses with information gathered from a large variety of heterogeneous data sources. ETL workflows are complex design structures that run under strict performance requirements and their optimization is crucial for satisfying business objectives. In this paper, we deal with the problem of scheduling the execution of ETL activities (a.k.a. transformations, tasks, operations), with the goal of minimizing ETL execution time and allocated memory. We investigate the effects of four scheduling policies on different flow structures and configurations and experimentally show that the use of different scheduling policies may improve ETL performance in terms of memory consumption and execution time. First, we examine a simple, fair scheduling policy. Then, we study the pros and cons of two other policies: the first opts for emptying the largest input queue of the flow and the second for activating the operation (a.k.a. activity) with the maximum tuple consumption rate. Finally, we examine a fourth policy that combines the advantages of the latter two in synergy with flow parallelization.
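The two queue-based policies in the abstract above can be sketched as selection rules over the operations of a flow. This is a minimal illustration, not the paper's implementation; the operation list, queue sizes, and rates are invented for the example.

```python
# Hypothetical sketch of two ETL scheduling policies: pick the operation
# with the largest input queue, or the one with the highest tuple
# consumption rate. All data below is illustrative.

def pick_max_queue(ops):
    """Select the operation whose input queue holds the most tuples."""
    return max(ops, key=lambda op: len(op["queue"]))

def pick_max_rate(ops):
    """Select the operation with the highest tuple-consumption rate."""
    return max(ops, key=lambda op: op["rate"])

ops = [
    {"name": "filter", "queue": [1] * 500, "rate": 120.0},
    {"name": "join",   "queue": [1] * 900, "rate": 40.0},
    {"name": "lookup", "queue": [1] * 300, "rate": 200.0},
]

print(pick_max_queue(ops)["name"])  # the operation draining the largest queue
print(pick_max_rate(ops)["name"])   # the fastest consumer
```

A real scheduler would re-evaluate the chosen rule after every scheduling quantum as queues fill and drain.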
2.
State-space optimization of ETL workflows
Simitsis A., Vassiliadis P., Sellis T. IEEE Transactions on Knowledge and Data Engineering, 2005, 17(10):1404-1419
Extraction-transformation-loading (ETL) tools are pieces of software responsible for the extraction of data from several sources, their cleansing, customization, and insertion into a data warehouse. In this paper, we delve into the logical optimization of ETL processes, modeling it as a state-space search problem. We consider each ETL workflow as a state and fabricate the state space through a set of correct state transitions. Moreover, we provide an exhaustive algorithm and two heuristic algorithms for minimizing the execution cost of an ETL workflow. The heuristic algorithm with greedy characteristics significantly outperforms the other two algorithms for a large set of experimental cases.
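The state-space idea above can be illustrated with a toy model: each workflow (here, an ordering of operations) is a state, adjacent swaps are transitions, and a greedy search keeps the cheapest neighbor. The cost model below is invented for the sketch and is not the paper's.

```python
# Toy state-space search over ETL operation orderings. A state is a list of
# (cost_per_tuple, selectivity) pairs; swapping adjacent operations is the
# only transition. Greedy hill-climbing keeps the cheapest neighbor.

def cost(state):
    # Invented cost model: cheap, selective operations should run early.
    total, size = 0.0, 1000.0
    for op_cost, selectivity in state:
        total += op_cost * size
        size *= selectivity
    return total

def greedy(state):
    while True:
        neighbors = [state[:i] + [state[i + 1], state[i]] + state[i + 2:]
                     for i in range(len(state) - 1)]
        best = min(neighbors, key=cost)
        if cost(best) >= cost(state):
            return state
        state = best

workflow = [(5.0, 0.9), (1.0, 0.1), (3.0, 0.5)]
optimized = greedy(workflow)
print(optimized, cost(optimized))
```

Greedy search may stop at a local minimum, which is why the paper also considers an exhaustive algorithm as a baseline.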
3.
Alkis Simitsis, Georgia Koutrika, Yannis Ioannidis. The VLDB Journal, 2008, 17(1):117-149
Précis queries represent a novel way of accessing data, which combines ideas and techniques from the fields of databases and information retrieval. They are free-form, keyword-based queries on top of relational databases that generate entire multi-relation databases, which are logical subsets of the original ones. A logical subset contains not only items directly related to the given query keywords but also items implicitly related to them in various ways, with the purpose of providing to the user much greater insight into the original data. In this paper, we lay the foundations for the concept of logical database subsets that are generated from précis queries under a generalized perspective that removes several restrictions of previous work. In particular, we extend the semantics of précis queries considering that they may contain multiple terms combined through the AND, OR, and NOT operators. On the basis of these extended semantics, we define the concept of a logical database subset, we identify the one that is most relevant to a given query, and we provide algorithms for its generation. Finally, we present an extensive set of experimental results that demonstrate the efficiency and benefits of our approach.
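The "logical subset" notion described above can be sketched in miniature: start from tuples that match a keyword, then pull in tuples implicitly related to them through link relations, so the subset carries context beyond the direct hits. The schema and data below are invented for illustration.

```python
# Minimal sketch of a précis-style logical subset over an invented schema:
# a keyword hit in one relation is expanded along foreign-key links to
# implicitly related tuples in another relation.

movies = {1: "Casablanca", 2: "Alien"}
actors = {10: "Bogart", 11: "Weaver"}
cast = [(1, 10), (2, 11)]  # (movie_id, actor_id) links

def logical_subset(keyword):
    hit_movies = {mid for mid, title in movies.items() if keyword in title}
    # Expand along the cast relation to implicitly related actors.
    related_actors = {aid for mid, aid in cast if mid in hit_movies}
    return {"movies": hit_movies, "actors": related_actors}

print(logical_subset("Alien"))  # the movie hit plus its related actor
```

The paper generalizes this to multiple keywords combined with AND, OR, and NOT, and to choosing the most relevant subset among the candidates.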
4.
Polyzotis N., Skiadopoulos S., Vassiliadis P., Simitsis A., Frantzell N. IEEE Transactions on Knowledge and Data Engineering, 2008, 20(7):976-991
Active data warehousing has emerged as an alternative to conventional warehousing practices in order to meet the high demand of applications for up-to-date information. In a nutshell, an active warehouse is refreshed online and thus achieves a higher consistency between the stored information and the latest data updates. The need for online warehouse refreshment introduces several challenges in the implementation of data warehouse transformations, with respect to their execution time and their overhead to the warehouse processes. In this paper, we focus on a frequently encountered operation in this context, namely, the join of a fast stream S of source updates with a disk-based relation R, under the constraint of limited memory. This operation lies at the core of several common transformations such as surrogate key assignment, duplicate detection, or identification of newly inserted tuples. We propose a specialized join algorithm, termed mesh join (MESHJOIN), which compensates for the difference in the access cost of the two join inputs by 1) relying entirely on fast sequential scans of R and 2) sharing the I/O cost of accessing R across multiple tuples of S. We detail the MESHJOIN algorithm and develop a systematic cost model that enables the tuning of MESHJOIN for two objectives: maximizing throughput under a specific memory budget or minimizing memory consumption for a specific throughput. We present an experimental study that validates the performance of MESHJOIN on synthetic and real-life data. Our results verify the scalability of MESHJOIN to fast streams and large relations and demonstrate its numerous advantages over existing join algorithms.
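The core amortization trick of MESHJOIN can be sketched as follows: each stream tuple stays buffered for exactly one full sequential scan of R, so one pass over R serves many stream tuples. This is a simplified, in-memory illustration under invented data and page size, not the paper's disk-based implementation.

```python
from collections import deque

# Simplified sketch of the MESHJOIN idea: stream tuples remain in a window
# until they have been probed against every page of the relation R, sharing
# the cost of one sequential scan of R across many stream tuples.

R = [("k%d" % i, i) for i in range(8)]  # relation as (key, payload) tuples
PAGE = 2                                # invented page size
pages = [R[i:i + PAGE] for i in range(0, len(R), PAGE)]
n_pages = len(pages)

def meshjoin(stream):
    window = deque()  # [stream_key, remaining_pages] entries
    out, page_no = [], 0
    stream = iter(stream)
    while True:
        # Admit a batch of stream tuples; each must see all pages of R.
        batch = [t for _, t in zip(range(PAGE), stream)]
        if not batch and not window:
            break
        for t in batch:
            window.append([t, n_pages])
        # Probe the current page of R with every buffered stream tuple.
        for key, payload in pages[page_no]:
            for entry in window:
                if entry[0] == key:
                    out.append((key, payload))
        page_no = (page_no + 1) % n_pages
        # Expire tuples that have now met every page of R.
        for entry in window:
            entry[1] -= 1
        while window and window[0][1] == 0:
            window.popleft()
    return out

print(meshjoin(["k3", "k6", "k1"]))
```

In the real algorithm the probe uses a hash table over the window rather than a linear scan, and the cost model trades window size against throughput.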