Similar Documents
1.
This article presents a method for adaptively representing multidimensional data cubes using wavelet view elements in order to more efficiently support data analysis and querying involving aggregations. The proposed method decomposes the data cubes into an indexed hierarchy of wavelet view elements. The view elements differ from traditional data cube cells in that they correspond to partial and residual aggregations of the data cube. The view elements provide highly granular building blocks for synthesizing the aggregated and range-aggregated views of the data cubes. We propose a strategy for selectively materializing alternative sets of view elements based on the patterns of access of views. We present a fast and optimal algorithm for selecting a non-expansive set of wavelet view elements that minimizes the average processing cost for supporting a population of queries of data cube views. We also present a greedy algorithm for allowing the selective materialization of a redundant set of view element sets which, for measured increases in storage capacity, further reduces processing costs. Experiments and analytic results show that the wavelet view element framework performs better in terms of lower processing and storage cost than previous methods that materialize and store redundant views for online analytical processing (OLAP).
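The core idea — answering aggregate and range-aggregate queries from partial and residual aggregations — can be illustrated with a plain one-dimensional Haar decomposition. This is only a minimal sketch of the general wavelet principle, not the article's multidimensional view elements or their selective materialization:

```python
# Minimal 1-D illustration of answering range-sum queries from a wavelet-style
# hierarchy of partial/residual aggregations (illustrative only; the article's
# view elements are multidimensional and selectively materialized).

def haar_decompose(values):
    """Unnormalized Haar transform. Returns (overall average, per-level detail
    coefficients, original length). len(values) must be a power of two."""
    n = len(values)
    assert n and n & (n - 1) == 0, "length must be a power of two"
    details = []                       # details[level][i] = (left_avg - right_avg) / 2
    level = list(values)
    while len(level) > 1:
        avgs = [(level[i] + level[i + 1]) / 2 for i in range(0, len(level), 2)]
        dets = [(level[i] - level[i + 1]) / 2 for i in range(0, len(level), 2)]
        details.append(dets)
        level = avgs
    details.reverse()                  # details[0] is the coarsest level
    return level[0], details, n

def range_sum(root_avg, details, n, lo, hi):
    """Sum of values[lo:hi], reconstructed top-down; only the O(log n) nodes
    overlapping [lo, hi) are visited."""
    def rec(avg, level, start, size):
        if start >= hi or start + size <= lo:
            return 0.0                             # node disjoint from the query
        if lo <= start and start + size <= hi:
            return avg * size                      # fully covered: partial aggregate
        d = details[level][start // size]          # residual needed to split the node
        half = size // 2
        return rec(avg + d, level + 1, start, half) + \
               rec(avg - d, level + 1, start + half, half)
    return rec(root_avg, 0, 0, n)

data = [3, 1, 4, 1, 5, 9, 2, 6]
root, dets, n = haar_decompose(data)
print(range_sum(root, dets, n, 2, 7), sum(data[2:7]))   # 21.0 21
```

Only a logarithmic number of coefficients is touched per range query, which is what makes selectively materialized wavelet building blocks attractive for range aggregation.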

2.
High Performance OLAP and Data Mining on Parallel Computers
On-Line Analytical Processing (OLAP) techniques are increasingly being used in decision support systems to provide analysis of data. Queries posed on such systems are quite complex and require different views of data. Analytical models need to capture the multidimensionality of the underlying data, a task for which multidimensional databases are well suited. Multidimensional OLAP systems store data in multidimensional arrays on which analytical operations are performed. Knowledge discovery and data mining require complex operations on the underlying data, which can be very expensive in terms of computation time. High performance parallel systems can reduce this analysis time. Precomputed aggregate calculations in a Data Cube can provide efficient query processing for OLAP applications. In this article, we present algorithms for the construction of data cubes on distributed-memory parallel computers. Data is loaded from a relational database into a multidimensional array. We present two methods, sort-based and hash-based, for loading the base cube and compare their performance. Data cubes are used to perform consolidation queries used in roll-up operations along dimension hierarchies. Finally, we show how data cubes are used for data mining using Attribute Focusing techniques. We present results for these on the IBM-SP2 parallel machine. Results show that our algorithms and techniques for OLAP and data mining on parallel systems are scalable to a large number of processors, providing a high performance platform for such applications.
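As an illustration of the loading step, a hash-based load of the base cuboid can be sketched as follows. This is a simplified single-process sketch; the article's sort-based and hash-based loaders and the IBM SP2 experiments are distributed-memory parallel:

```python
# Single-process sketch of hash-based loading of the base cuboid from relational
# tuples (illustrative; the article's loaders target distributed-memory machines).
from collections import defaultdict

def load_base_cube(rows):
    """rows: list of (dim_1, ..., dim_k, measure).
    Returns a sparse base cuboid keyed by dense indices, plus the per-dimension
    hash maps that translate dimension values into array indices."""
    rows = list(rows)
    n_dims = len(rows[0]) - 1                       # last column is the measure
    encodings = [{} for _ in range(n_dims)]         # value -> dense index, per dimension
    cube = defaultdict(float)
    for row in rows:
        *dims, measure = row
        idx = tuple(encodings[d].setdefault(v, len(encodings[d]))
                    for d, v in enumerate(dims))
        cube[idx] += measure                        # duplicates aggregate on the fly
    return cube, encodings

rows = [("2023", "east", "tv", 100.0),
        ("2023", "west", "tv",  80.0),
        ("2023", "east", "tv",  20.0)]
cube, enc = load_base_cube(rows)
print(dict(cube))   # {(0, 0, 0): 120.0, (0, 1, 0): 80.0}
```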

3.
On-line analytical processing (OLAP) typically involves complex aggregate queries over large datasets. The data cube has been proposed as a structure that materializes the results of such queries in order to accelerate OLAP. A significant fraction of the related work has been on Relational-OLAP (ROLAP) techniques, which are based on relational technology. Existing ROLAP cubing solutions mainly focus on “flat” datasets, which do not include hierarchies in their dimensions. Nevertheless, as shown in this paper, the nature of hierarchies introduces several complications into the entire lifecycle of a data cube including the operations of construction, storage, indexing, query processing, and incremental maintenance. This fact renders existing techniques essentially inapplicable in a significant number of real-world applications and mandates revisiting the entire cube lifecycle under the new perspective. In order to overcome this problem, the CURE algorithm has been recently proposed as an efficient mechanism to construct complete cubes over large datasets with arbitrary hierarchies and store them in a highly compressed format, compatible with the relational model. In this paper, we study the remaining phases in the cube lifecycle and introduce query-processing and incremental-maintenance algorithms for CURE cubes. These are significantly different from earlier approaches, which have been proposed for flat cubes constructed by other techniques and are inadequate for CURE due to its high compression rate and the presence of hierarchies. Our methods address issues such as cube indexing, query optimization, and lazy update policies. Especially regarding updates, such lazy approaches are applied for the first time on cubes. We demonstrate the effectiveness of CURE in all phases of the cube lifecycle through experiments on both real-world and synthetic datasets. Among the experimental results, we distinguish those that have made CURE the first ROLAP technique to complete the construction and usage of the cube of the highest-density dataset in the APB-1 benchmark (12 GB). CURE was in fact quite efficient on this, showing great promise with respect to the potential of the technique overall.

4.
View materialization is one of the most important techniques applied in multidimensional databases. The problem of selecting a set of views for materialization that minimizes query response time under a storage space constraint has received significant attention over the last twenty years. Many researchers concentrate on designing better view selection methods with respect to the running time or the cost of the solution. This paper summarizes our research on the problem of how much space should be allocated for view materialization to ensure good query performance. In order to investigate the problem comprehensively and minimize the influence of atypical cases, the experiments described in this paper were done on a large data set, including large data cubes rarely considered in previous papers. In particular, we analyse the relation between the number of data cube views and the space limit, expressed both as a percentage of the fully materialized data cube size and as a multiple of the base view size. According to our experimental results, allocating a large amount of space for view materialization is not cost effective.
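For context, the usual baseline in view-selection studies is the benefit-per-unit-space greedy heuristic of Harinarayan, Rajaraman and Ullman. A minimal sketch under the common simplifying assumption that a query costs as much as the smallest materialized view that can answer it (the lattice and sizes below are toy values, not this paper's experimental setup):

```python
# Sketch of the classical benefit-per-unit-space greedy heuristic for view
# selection under a storage budget (toy lattice and sizes; query cost assumed
# equal to the size of the smallest materialized view able to answer it).

def greedy_select(candidates, size, answers, budget, base):
    """candidates: selectable views; size[v]: rows of view v; answers[v]: queries
    view v can answer; base: the always-materialized base cuboid."""
    chosen, used = {base}, size[base]
    queries = set().union(*answers.values())
    def cost(q, mat):                          # cheapest materialized view answering q
        return min(size[v] for v in mat if q in answers[v])
    while True:
        best, best_ratio = None, 0.0
        for v in candidates:
            if v in chosen or used + size[v] > budget:
                continue
            benefit = sum(max(0, cost(q, chosen) - size[v])
                          for q in queries if q in answers[v])
            if benefit / size[v] > best_ratio:
                best, best_ratio = v, benefit / size[v]
        if best is None:
            return chosen
        chosen.add(best)
        used += size[best]

# Toy two-dimensional lattice over {A, B}; "AB" is the base cuboid.
size    = {"AB": 1000, "A": 200, "B": 150, "none": 1}
answers = {"AB": {"AB", "A", "B", "none"}, "A": {"A", "none"},
           "B": {"B", "none"}, "none": {"none"}}
print(greedy_select(["A", "B", "none"], size, answers, budget=1350, base="AB"))
# e.g. {'AB', 'none', 'B'} - view 'A' no longer fits in the 1350-row budget
```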

5.
OLAP cubes provide exploratory query capabilities combining joins and aggregations at multiple granularity levels. However, cubes cannot intuitively or directly show the relationship between measures aggregated at different grouping levels. One prominent example is the percentage, which is widely used in most analytical applications. Considering this limitation, we introduce the percentage cube as a generalized data cube that takes percentages as its basic measure. More precisely, a percentage cube shows the fractional relationship in every cuboid between each aggregated measure on several dimensions and its rolled-up measure aggregated by fewer dimensions. We propose the syntax and introduce query optimizations to materialize the percentage cube. We justify that percentage cubes are significantly harder to evaluate than standard data cubes because, in addition to the exponential number of cuboids, there is an additional exponential number of grouping column pairs (grouping columns at the individual level and the total level) on which percentages are computed. We propose alternative methods to prune the cube to identify interesting percentages, including a row count threshold, a percentage threshold, and selecting the top k percentages. We study percentage aggregations within the classification of distributive, algebraic, and holistic functions. Finally, we also consider the problem of incremental computation of the percentage cube. Experiments compare our query optimizations with existing SQL functions, evaluate the impact and speed of lattice pruning methods, and study the effectiveness of the incremental computation.
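A single grouping-column pair of such a cube can be computed in one pass over the data: aggregate the measure at the individual level, roll it up to the total level, and divide. The sketch below shows only that one pair (toy data and column names are assumptions); the full percentage cube repeats this over exponentially many pairs, which is exactly the difficulty the authors point out:

```python
# One grouping-column pair of a percentage cube: aggregate the measure at the
# individual level, roll it up to the total level, and divide (toy data and
# column names are assumptions; a full percentage cube does this for every
# valid pair of grouping-column sets in every cuboid).
from collections import defaultdict

def percentages(rows, individual_cols, total_cols, measure):
    """total_cols must be a subset of individual_cols (the roll-up)."""
    ind, tot = defaultdict(float), defaultdict(float)
    for r in rows:
        ind[tuple(r[c] for c in individual_cols)] += r[measure]
        tot[tuple(r[c] for c in total_cols)] += r[measure]
    roll = [individual_cols.index(c) for c in total_cols]
    return {cell: value / tot[tuple(cell[i] for i in roll)]
            for cell, value in ind.items()}

sales = [{"region": "east", "product": "tv",    "amount": 60.0},
         {"region": "east", "product": "radio", "amount": 40.0},
         {"region": "west", "product": "tv",    "amount": 25.0},
         {"region": "west", "product": "radio", "amount": 75.0}]
print(percentages(sales, ["region", "product"], ["region"], "amount"))
# {('east', 'tv'): 0.6, ('east', 'radio'): 0.4, ('west', 'tv'): 0.25, ('west', 'radio'): 0.75}
```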

6.
An Efficient Stream Cube Structure
A stream cube is a cube structure implemented on an H-tree, in which each cube cell is computed by the H-cubing algorithm. Because the child nodes of an H-tree are unordered, the limitations of the H-cubing algorithm prevent it from efficiently supporting queries, online analysis, and other advanced operations over data streams. To address this problem, a new stream cube implementation based on the ANH-tree is proposed. Building on the H-tree, it indexes the unordered nodes with balanced binary trees and creates direct links between related nodes to speed up node access and cube-cell computation; construction and query algorithms matching the new structure are given on this basis. Experiments show that the ANH-tree structure greatly outperforms the H-tree in terms of CPU time and memory consumption.

7.
OLAP-based data mining is a new direction in the development of data mining. Addressing how to unify OLAP (on-line analytical processing) and DM (data mining) so that mining can be performed at different levels of a database or data warehouse, an architecture for an OLAP data mining system is proposed. By studying the characteristics of data mining methods and OLAP operations, as well as the construction and materialization of data cubes, traditional DM algorithms are improved, and algorithms better suited to an OLAP data mining engine are designed and implemented.

8.
Online mining of fuzzy multidimensional weighted association rules
This paper addresses the integration of fuzziness with On-Line Analytical Processing (OLAP) based association rule mining. It contributes to the ongoing research on multidimensional online association rule mining by proposing a general architecture that utilizes a fuzzy data cube for knowledge discovery. A data cube is mainly constructed to provide users with the flexibility to view data from different perspectives, as some dimensions of the cube contain multiple levels of abstraction. The first step of the process described in this paper involves introducing the fuzzy data cube as a remedy to the problem of handling quantitative values of dimensional attributes in a cube. This facilitates the online mining of fuzzy association rules at different levels within the constructed fuzzy data cube. Then, we investigate combining the concepts of weight and multiple levels to mine fuzzy weighted multi-cross-level association rules from the constructed fuzzy data cube. For this purpose, three different methods are introduced for single-dimension, multidimensional and hybrid (integrating the other two) fuzzy weighted association rule mining. Each of the three methods utilizes a fuzzy data cube constructed to suit the particular method. To the best of our knowledge, this is the first effort in this direction. We compared the proposed approach to an existing approach that does not utilize fuzziness. Experimental results obtained for each of the three methods on a synthetic dataset and on the adult data of the United States census in the year 2000 demonstrate the effectiveness and applicability of the proposed fuzzy OLAP based mining approach. OLAP is one of the most popular tools for on-line, fast and effective multidimensional data analysis; in the OLAP framework, data is mainly stored in data hypercubes (simply called cubes).

9.
A novel top-down compression technique for data cubes is introduced and experimentally assessed in this paper. This technique considers the previously unrecognized case in which multiple Hierarchical Range Queries (HRQ), a very useful class of OLAP queries, must be evaluated against the target data cube simultaneously. This scenario makes traditional data cube compression techniques ineffective, as, contrary to the aim of our work, these techniques take only one constraint into consideration (e.g., a given storage space bound). Our study introduces an innovative multiple-objective OLAP computational paradigm and a hierarchical multidimensional histogram whose main benefit is an intermediate compression of the input data cube able to simultaneously accommodate even a large family of HRQ of different natures. A complementary contribution of our work is a wide experimental evaluation of the performance of our technique on both benchmark and real-life data cubes, also in comparison with state-of-the-art histogram-based compression techniques.

10.
Spatial Cube Computation Methods
With the wide application of satellite surveying, remote-sensing imagery, GPS and similar systems, organizations in many fields now hold large volumes of geospatial data. Spatial data warehouse technology applies the relatively mature techniques of data warehousing and on-line analytical processing to the spatial-information domain in order to effectively support spatial analysis and decision making. The construction and maintenance of spatial cubes is a core problem in spatial data warehousing and spatial OLAP. After introducing the spatial data warehouse model and the spatial cube, and taking the characteristics of spatial aggregate computation into account, this article presents several effective methods for computing spatial cubes.

11.
刘光明  任艳  李川  杨宁  唐常杰 《软件学报》2017,28(3):732-743
Computing the information-network data cube (InfoNetCube) is the foundation of on-line analytical processing over information networks. Unlike a traditional data cube, however, an InfoNetCube consists of multiple sub-cuboid lattices, and every cell of every cuboid in each lattice contains a topic graph (also called a graph measure), so its space overhead is more than two orders of magnitude larger than that of a traditional data cube. Fast and efficient partial materialization of the InfoNetCube is therefore a highly challenging research problem. This paper proposes an InfoNetCube materialization strategy based on the idea of "dialysis computing": by exploiting the anti-monotonicity of topic-graph measures along both the informational and the topological dimensions, it develops a dialysis-based space-pruning algorithm that quickly filters out subgraph measures, cuboid cells, cuboids and even entire cuboid lattices that cannot possibly be hit. Experimental results show that the proposed dialysis-based partial materialization strategy effectively prunes information-network cuboids, and that it reduces running time by 75% on average compared with a partial materialization strategy based on base cuboids.

12.
Graph OLAP: a multi-dimensional framework for graph data analysis
Databases and data warehouse systems have been evolving from handling normalized spreadsheets stored in relational databases, to managing and analyzing diverse application-oriented data with complex interconnecting structures. Responding to this emerging trend, graphs have been growing rapidly and showing their critical importance in many applications, such as the analysis of XML, social networks, Web, biological data, multimedia data and spatiotemporal data. Can we extend useful functions of databases and data warehouse systems to handle graph structured data? In particular, OLAP (On-Line Analytical Processing) has been a popular tool for fast and user-friendly multi-dimensional analysis of data warehouses. Can we OLAP graphs? Unfortunately, to the best of our knowledge, there are no OLAP tools available that can interactively view and analyze graph data from different perspectives and with multiple granularities. In this paper, we argue that it is critically important to OLAP graph structured data and propose a novel Graph OLAP framework. According to this framework, given a graph dataset with its nodes and edges associated with respective attributes, a multi-dimensional model can be built to enable efficient on-line analytical processing so that any portions of the graphs can be generalized/specialized dynamically, offering multiple, versatile views of the data. The contributions of this work are three-fold. First, starting from basic definitions, i.e., what are dimensions and measures in the Graph OLAP scenario, we develop a conceptual framework for data cubes on graphs. We also look into different semantics of OLAP operations, and classify the framework into two major subcases: informational OLAP and topological OLAP. Second, we show how a graph cube can be materialized by calculating a special kind of measure called aggregated graph and how to implement it efficiently. This includes both full materialization and partial materialization where constraints are enforced to obtain an iceberg cube. As we can see, due to the increased structural complexity of data, aggregated graphs that depend on the underlying “network” properties of the graph dataset are much harder to compute than their traditional OLAP counterparts. Third, to provide more flexible, interesting and informative OLAP of graphs, we further propose a discovery-driven multi-dimensional analysis model to ensure that OLAP is performed in an intelligent manner, guided by expert rules and knowledge discovery processes. We outline such a framework and discuss some challenging research issues for discovery-driven Graph OLAP.
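To make the notion of an aggregated graph concrete, an informational-OLAP roll-up can be sketched as generalizing one node attribute and merging the resulting parallel edges. The node and edge formats below are assumptions for illustration; the framework's topological OLAP and discovery-driven analysis are not covered by this toy example:

```python
# Informational-OLAP roll-up sketch: generalize nodes to one attribute value
# (e.g. author -> institution) and merge the resulting parallel edges, summing
# their weights, to obtain the aggregated graph (node/edge formats assumed).
from collections import defaultdict

def rollup(nodes, edges, attr):
    """nodes: {node_id: {attribute: value}}; edges: {(u, v): weight}."""
    group = {n: props[attr] for n, props in nodes.items()}
    agg = defaultdict(float)
    for (u, v), w in edges.items():
        agg[(group[u], group[v])] += w          # merge edges between the same groups
    return sorted(set(group.values())), dict(agg)

nodes = {"alice": {"inst": "MIT"}, "bob": {"inst": "MIT"}, "carol": {"inst": "UIUC"}}
edges = {("alice", "bob"): 3.0, ("alice", "carol"): 1.0, ("bob", "carol"): 2.0}
print(rollup(nodes, edges, "inst"))
# (['MIT', 'UIUC'], {('MIT', 'MIT'): 3.0, ('MIT', 'UIUC'): 3.0})
```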

13.
A Genetic Selection Algorithm for OLAP Data Cubes
Multidimensional data analysis, as supported by OLAP (online analytical processing) systems, requires the computation of many aggregate functions over a large volume of historically collected data. To decrease the query time and to provide various viewpoints for the analysts, these data are usually organized as a multidimensional data model, called data cubes. Each cell in a data cube corresponds to a unique set of values for the different dimensions and contains the metric of interest. The data cube selection problem is, given the set of user queries and a storage space constraint, to select a set of materialized cubes from the data cubes to minimize the query cost and/or the maintenance cost. This problem is known to be an NP-hard problem. In this study, we examined the application of genetic algorithms to the cube selection problem. We proposed a greedy-repaired genetic algorithm, called the genetic greedy method. According to our experiments, the solution obtained by our genetic greedy method is superior to that found using the traditional greedy method. That is, within the same storage constraint, the solution can greatly reduce the amount of query cost as well as the cube maintenance cost.
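A greedy-repaired genetic search of this kind can be sketched roughly as follows: chromosomes are bit vectors over candidate cuboids, infeasible individuals are repaired greedily until the storage budget holds, and fitness is the total query cost. The encoding, operators, and toy lattice below are simplifying assumptions rather than the article's exact formulation:

```python
# Greedy-repaired genetic algorithm sketch for cube (view) selection.
import random

VIEWS   = ["A", "B", "C", "AB"]                         # candidate cuboids
SIZE    = {"A": 200, "B": 150, "C": 300, "AB": 700}     # rows per cuboid
BASE    = 1000                                          # base cuboid, always available
ANSWERS = {"qA": ["A", "AB"], "qB": ["B", "AB"], "qC": ["C"], "qAB": ["AB"]}
BUDGET  = 900

def repair(bits):
    """Greedy repair: drop the largest selected cuboid until the budget holds."""
    bits = list(bits)
    while sum(SIZE[v] for v, b in zip(VIEWS, bits) if b) > BUDGET:
        worst = max((i for i, b in enumerate(bits) if b), key=lambda i: SIZE[VIEWS[i]])
        bits[worst] = 0
    return bits

def query_cost(bits):
    """Each query is answered by the smallest materialized cuboid that covers it."""
    chosen = {v for v, b in zip(VIEWS, bits) if b}
    return sum(min([SIZE[v] for v in views if v in chosen] + [BASE])
               for views in ANSWERS.values())

def genetic_greedy(pop_size=20, generations=50, p_mut=0.1):
    pop = [repair([random.randint(0, 1) for _ in VIEWS]) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=query_cost)                        # lower cost = fitter
        nxt = pop[:2]                                   # elitism
        while len(nxt) < pop_size:
            a, b = random.sample(pop[:pop_size // 2], 2)
            cut = random.randrange(1, len(VIEWS))
            child = a[:cut] + b[cut:]                   # one-point crossover
            child = [1 - g if random.random() < p_mut else g for g in child]
            nxt.append(repair(child))                   # repair infeasible children
        pop = nxt
    best = min(pop, key=query_cost)
    return [v for v, b in zip(VIEWS, best) if b], query_cost(best)

random.seed(0)
print(genetic_greedy())   # typically (['A', 'B', 'C'], 1650) within the 900-row budget
```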

14.
Query speed is a key performance metric in on-line analytical processing. A common way to improve it is to precompute all possible aggregations, but such full materialization comes at the cost of storage space. Taking the data distribution of the data cube into account and combining it with compression techniques, this paper describes how to perform full materialization while saving as much storage space as possible, and then studies query processing on this basis, aiming at minimal storage space together with good query speed.

15.
A solution for rapidly developing OLAP analysis tools based on Microsoft Analysis Services is presented. The solution covers connecting to data sources, building cubes, and producing a client-side presentation tool. The relevant architecture of Analysis Services is introduced, and a concrete rapid-development scheme is given.

16.
The closed data cube is an effective lossless compression technique: it removes redundant information from the data cube, thereby reducing storage space and speeding up computation while having almost no impact on query performance. Hadoop's MapReduce parallel computing model provides technical support for cube computation, and its distributed file system HDFS provides reliable cube storage. To save storage space and speed up queries, this paper proposes the closed histogram cube on top of the traditional data cube; it further reduces storage through encoding on top of the closed data cube and speeds up queries by building indexes. The Hadoop parallel computing platform guarantees both scalability and load balance for the closed histogram cube. Experiments show that the closed histogram cube compresses the data cube effectively and achieves good query performance, and that, owing to Hadoop's characteristics, computation is clearly accelerated by adding more nodes.

17.
We present a new full cube computation technique and a cube storage representation approach, called the multidimensional cyclic graph (MCG) approach. The data cube relational operator has exponential complexity and therefore its materialization involves both a huge amount of memory and a substantial amount of time. Reducing the size of data cubes, without a loss of generality, thus becomes a fundamental problem. Previous approaches, such as Dwarf, Star and MDAG, have substantially reduced the cube size using graph representations. In general, they eliminate prefix redundancy and some suffix redundancy from a data cube. The MCG differs significantly from previous approaches as it completely eliminates prefix and suffix redundancies from a data cube. A data cube can be viewed as a set of sub-graphs. In general, redundant sub-graphs are quite common in a data cube, but eliminating them is a hard problem. The Dwarf, Star and MDAG approaches only eliminate some specific common sub-graphs. The MCG approach efficiently eliminates all common sub-graphs from the entire cube, based on an exact sub-graph matching solution. We propose a matching function to guarantee one-to-one mapping between sub-graphs. The function is computed incrementally, in a top-down fashion, and its computation uses a minimal amount of information to generate unique results. In addition, it is computed for any measurement type: distributive, algebraic or holistic. MCG performance analysis demonstrates that MCG is 20-40% faster than the Dwarf, Star and MDAG approaches when computing sparse data cubes. Dense data cubes have a small number of aggregations, so there is not enough room for runtime and memory consumption optimization; the MCG approach is therefore not useful for computing such dense cubes. The compact representation of sparse data cubes enables the MCG approach to reduce memory consumption by 70-90% when compared to the original Star approach, proposed in [33]. In the same scenarios, the improved Star approach, proposed in [34], reduces memory consumption by only 10-30%, Dwarf by 30-50% and MDAG by 40-60%, when compared to the original Star approach. The MCG is the first approach that uses an exact sub-graph matching function to reduce cube size, avoiding unnecessary aggregation, i.e. improving cube computation runtime.

18.
To address the excessive storage consumed by full materialization of a data cube, and guided by user interest and actual user query behaviour, this paper proposes, for the first time, MICA, a method for constructing an iceberg cube on top of a matrix, and on that basis ICTU, an incremental update method for the iceberg cube that handles the case where the set of cuboids to be materialized changes as user interest changes. Experiments show that MICA saves a large amount of storage space and supports user queries effectively, and that the incremental method ICTU greatly improves the efficiency of iceberg-cube construction.
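The iceberg idea itself — materialize only the cells whose aggregate clears a threshold — can be sketched naively as below. This illustrates the general principle only; MICA's matrix-based, interest-driven construction and ICTU's incremental update are not reproduced here:

```python
# Naive iceberg-cube sketch: every cuboid is computed, but only cells whose
# aggregate clears the threshold are kept (general principle only).
from itertools import combinations
from collections import defaultdict

def iceberg_cube(rows, dims, measure, threshold):
    """Returns {(group-by columns, cell values): SUM(measure)} for every cell,
    in every cuboid, whose aggregate is >= threshold."""
    cube = {}
    for k in range(len(dims) + 1):
        for group in combinations(dims, k):             # one cuboid per dimension subset
            cells = defaultdict(float)
            for r in rows:
                cells[tuple(r[d] for d in group)] += r[measure]
            for cell, value in cells.items():
                if value >= threshold:                   # the iceberg condition
                    cube[(group, cell)] = value
    return cube

sales = [{"region": "east", "product": "tv",    "amount": 60.0},
         {"region": "east", "product": "radio", "amount": 40.0},
         {"region": "west", "product": "tv",    "amount": 25.0}]
for key, value in sorted(iceberg_cube(sales, ["region", "product"], "amount", 50.0).items()):
    print(key, value)
# ((), ()) 125.0
# (('product',), ('tv',)) 85.0
# (('region',), ('east',)) 100.0
# (('region', 'product'), ('east', 'tv')) 60.0
```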

19.
Parallel data processing is a promising approach for efficiently computing data cubes in relational databases, because most aggregate functions used in OLAP (On-Line Analytical Processing) are distributive functions. This paper studies the issues of handling data skew in parallel data cube computation. We present a fully dynamic partitioning approach that can effectively distribute the workload among processing nodes without prior knowledge of the data distribution. As a supplement, a simple and effective dynamic load balancing mechanism is also incorporated into our algorithm, which further improves the overall performance. Our experimental results indicate that the proposed techniques are effective even when high data skew exists. The results of scale-up and speedup tests are also satisfactory.
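The benefit of pull-based (dynamic) assignment under skew can be seen in a small simulation: tasks of very unequal cost are either fixed to nodes in advance or handed to whichever node finishes first. This is only an illustrative single-machine simulation with made-up costs, not the paper's distributed algorithm:

```python
# Toy comparison of static (round-robin) vs. dynamic (pull-based) assignment of
# skewed cube-computation tasks to nodes (illustrative simulation only).
import heapq

def static_makespan(task_costs, n_nodes):
    loads = [0.0] * n_nodes
    for i, c in enumerate(task_costs):              # fixed round-robin assignment
        loads[i % n_nodes] += c
    return max(loads)

def dynamic_makespan(task_costs, n_nodes):
    finish = [(0.0, node) for node in range(n_nodes)]
    heapq.heapify(finish)                           # min-heap of node finish times
    for c in task_costs:                            # next task goes to the first idle node
        t, node = heapq.heappop(finish)
        heapq.heappush(finish, (t + c, node))
    return max(t for t, _ in finish)

# Skewed workload: two partitions are far more expensive than the other thirty.
tasks = [100.0, 90.0] + [5.0] * 30
print(static_makespan(tasks, 4))    # 135.0 - the node stuck with a heavy partition lags
print(dynamic_makespan(tasks, 4))   # 100.0 - idle nodes absorb the small tasks
```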

20.
Optimization Problems in Data Cube System Design
Designing on-line analytical processing systems that support real-time queries is an important current research problem, and a common approach is to implement them with data cubes. For frequently occurring queries, a corresponding set of cuboids can be provided so that every such query can be answered directly. When designing a cuboid-based system, however, two issues must be considered: (1) the maintenance cost of the cuboids, and (2) the response time for answering frequent queries. Given user-specified upper bounds on maintenance cost and response time, the set of cuboids needs to be optimized so that the system satisfies the user's requirements while answering as many queries as possible. This article defines the optimization problem of data cube system design, shows that it is NP-complete, and proposes approximate algorithms based on greedy deletion and greedy merging. Experiments demonstrate the effectiveness of the algorithms.
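The greedy-deletion idea can be sketched under an assumed cost model: start from all candidate cuboids and, while the maintenance-cost bound is violated, delete the cuboid whose removal sacrifices the fewest queries still answerable within the response-time bound. The cost functions and toy instance below are assumptions, and the greedy-merging variant is not shown:

```python
# Toy sketch of greedy deletion for cuboid-set design under a maintenance-cost
# bound and a response-time bound (assumed cost model for illustration).

def greedy_delete(cuboids, maint, resp, maint_cap, resp_cap):
    """maint[c]: maintenance cost of cuboid c; resp[(q, c)]: response time of
    query q on cuboid c (absent pairs mean c cannot answer q). A query counts
    as answered if some kept cuboid answers it within resp_cap."""
    def answered(kept):
        return {q for (q, c) in resp if c in kept and resp[(q, c)] <= resp_cap}
    kept = set(cuboids)
    while sum(maint[c] for c in kept) > maint_cap:
        # drop the cuboid whose removal sacrifices the fewest answerable queries;
        # break ties by removing the cuboid that is most expensive to maintain
        victim = max(kept, key=lambda c: (len(answered(kept - {c})), maint[c]))
        kept.remove(victim)
    return kept, answered(kept)

cuboids = ["AB", "AC", "BC"]
maint   = {"AB": 50, "AC": 40, "BC": 30}
resp    = {("q1", "AB"): 2, ("q2", "AB"): 5, ("q2", "AC"): 3,
           ("q3", "AC"): 4, ("q4", "BC"): 1}
print(greedy_delete(cuboids, maint, resp, maint_cap=80, resp_cap=4))
# keeps {'AC', 'BC'}; 3 of the 4 frequent queries ({'q2', 'q3', 'q4'}) stay answerable
```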
