Similar Literature
19 similar documents retrieved.
1.
Multi-feature cubes are used to evaluate complex data-mining queries and typically involve multiple dependent aggregate computations at multiple granularities. Existing cube-granularity computation techniques can compute distributive and algebraic multi-feature cubes efficiently, but no effective computation method has yet been proposed for holistic multi-feature cubes. Based on an analysis of the characteristics of holistic multi-feature cube queries, an optimization algorithm is proposed: the cube is first partitioned into chunks, data are then selected dynamically using the iceberg-query technique, and finally query results are reused. Experiments show that the optimization algorithm effectively improves the performance of holistic multi-feature cube queries.
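
A minimal sketch in Python of the iceberg-query idea described above (my own illustration, not the paper's algorithm; the function and threshold names are invented): only cells whose support passes a threshold are retained before the expensive holistic aggregate (here, a median) is computed.

```python
# Iceberg-style cell pruning before a holistic aggregate (illustrative only).
from collections import defaultdict
from statistics import median

def iceberg_holistic_cube(rows, group_dims, measure, min_support=3):
    """rows: list of dicts; group_dims: dimension names; measure: value field."""
    cells = defaultdict(list)
    for row in rows:
        key = tuple(row[d] for d in group_dims)   # cell coordinates
        cells[key].append(row[measure])
    # Iceberg condition: discard sparse cells before the expensive holistic step.
    return {key: median(vals) for key, vals in cells.items() if len(vals) >= min_support}

rows = [{"store": s, "month": m, "sales": v}
        for s, m, v in [("A", 1, 10), ("A", 1, 12), ("A", 1, 9),
                        ("B", 1, 20), ("B", 2, 7), ("B", 2, 8), ("B", 2, 6)]]
print(iceberg_holistic_cube(rows, ["store", "month"], "sales", min_support=3))
```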

2.
An Efficient Algorithm for Complex Cube-Based Queries*   (cited by 2: 0 self-citations, 2 by others)
An efficient algorithm for holistic multi-feature cube queries is proposed. The algorithm first partitions the data cube horizontally into several small data sets, then classifies the aggregate functions in each sub-query and optimizes the distributive and algebraic aggregate functions among them using the distributive aggregation property, so that holistic multi-feature cube queries can locally apply the optimized computation methods of distributive multi-feature cube queries. Experimental results show that the method effectively improves the efficiency of holistic multi-feature cube queries.
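
The partition-then-merge idea can be illustrated with a short sketch (my own simplification; names are invented): distributive aggregates (SUM, COUNT) are computed per horizontal partition and merged, the algebraic AVG is derived from them, and only the holistic MEDIAN still needs the raw values.

```python
# Classify aggregates so distributive parts are computed per partition and merged.
from statistics import median

def per_partition(partition):
    # Distributive pieces: each partition returns a small summary.
    return {"sum": sum(partition), "count": len(partition), "values": partition}

def merge(summaries):
    total = sum(s["sum"] for s in summaries)          # distributive: SUM
    count = sum(s["count"] for s in summaries)        # distributive: COUNT
    avg = total / count                               # algebraic: AVG = SUM / COUNT
    med = median(v for s in summaries for v in s["values"])  # holistic: MEDIAN
    return {"sum": total, "count": count, "avg": avg, "median": med}

partitions = [[3, 5, 7], [2, 8], [4, 6, 10]]
print(merge([per_partition(p) for p in partitions]))
```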

3.
An n-dimensional cube generates 2^n aggregate cuboids, so how to compute the cube while balancing storage space against query time is a key problem in multidimensional analysis applications. Based on a partial-materialization strategy and the characteristics of national water census data, the Minimal cubing method is improved and a hierarchical dimension encoding fragment method, HDEF cubing, is proposed. The method uses short hierarchical dimension encodings and their prefixes to quickly retrieve the encodings that match the query keywords, reducing multi-table join operations and thereby improving query efficiency. Taking water census data as an example, it is verified that the improved cube computation method stores and queries cubes efficiently and is suitable for analyzing water census results.
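
A toy sketch of the hierarchical-encoding idea (the encoding scheme below is my own invention for illustration, not the HDEF format): each level appends a fixed-width code, so every ancestor is a prefix of its descendants and a roll-up becomes a simple prefix scan instead of a multi-table join.

```python
# Prefix-based hierarchical dimension encoding (illustrative scheme).
def encode(path, width=2):
    """path: tuple of level member ids, e.g. (province, city, county)."""
    return "".join(f"{p:0{width}d}" for p in path)

facts = {  # encoded dimension key -> measure
    encode((1, 1, 1)): 10, encode((1, 1, 2)): 7,
    encode((1, 2, 1)): 4,  encode((2, 1, 1)): 9,
}

def rollup(facts, prefix_path, width=2):
    prefix = encode(prefix_path, width)
    return sum(v for k, v in facts.items() if k.startswith(prefix))

print(rollup(facts, (1,)))     # everything under province 1 -> 21
print(rollup(facts, (1, 1)))   # city (1, 1)                  -> 17
```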

4.
Pre-computation of data cubes is crucial for improving on-line analytical processing performance. Drawing on the multi-way array aggregation algorithm for full-cube computation, a full-cube computation algorithm driven by the data result set is proposed. While scanning one cuboid, the algorithm simultaneously computes the aggregate values of the multiple new cuboids obtained by rolling that cuboid up along each dimension, thus completing multi-way cuboid aggregation. The algorithm supports cube computation over large data volumes. Application results show that the algorithm is feasible and easy to implement.
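
A simplified sketch of multi-way aggregation (not the paper's exact algorithm): a single scan of the base cells simultaneously updates every cuboid obtained by rolling up along each subset of dimensions.

```python
# One pass over the base cells updates all roll-up cuboids at once.
from collections import defaultdict
from itertools import combinations

def multiway_aggregate(base_cells, num_dims):
    """base_cells: dict mapping a dimension-value tuple to a measure."""
    cuboids = defaultdict(lambda: defaultdict(float))
    dim_subsets = [g for r in range(num_dims + 1) for g in combinations(range(num_dims), r)]
    for cell, value in base_cells.items():            # single scan of the data
        for kept in dim_subsets:                       # update every roll-up
            key = tuple(cell[d] if d in kept else "*" for d in range(num_dims))
            cuboids[kept][key] += value
    return cuboids

base = {("A", 1): 5.0, ("A", 2): 3.0, ("B", 1): 2.0}
cubes = multiway_aggregate(base, 2)
print(dict(cubes[(0,)]))   # group by dim 0 only: {('A', '*'): 8.0, ('B', '*'): 2.0}
print(dict(cubes[()]))     # grand total:          {('*', '*'): 10.0}
```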

5.
Complex queries over data cubes are a development direction of cube technology. For the three kinds of aggregation dependencies that may arise in complex cube queries, this paper presents three corresponding solutions based on caching and reuse techniques. Experimental results on synthetic and real data sets verify the effectiveness and correctness of the methods.

6.
An Effective Method for Parallel Processing of Multidimensional Join and Aggregation Operations   (cited by 1: 0 self-citations, 1 by others)
As parallel algorithms mature and inexpensive, powerful multiprocessor systems become available, using multiprocessor systems to process joins and aggregations over multidimensional data warehouses in parallel has become the preferred technique for improving OLAP query-processing performance. This paper proposes PJAMDDC (parallel join and aggregation for multi-dimensional data cube), a parallel algorithm that reduces the cost of join and aggregation operations. The algorithm takes full account of the storage scheme of the multidimensional data cube and the structural characteristics of a distributed multiprocessor system. Building on the search-lattice structure used for aggregate computation of the data cube, it employs hierarchy combined surrogates of the multidimensional data warehouse and weights the cube's search lattice so that cube data are distributed among the processors in a near-optimal way; partitioning the multidimensional data in this manner improves the efficiency of parallel join and aggregation processing. Experimental evaluation shows that PJAMDDC processes joins and aggregations over multidimensional data warehouses effectively.
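
One piece of such an approach, distributing weighted cuboids of the search lattice across processors, might look roughly like the following sketch (a hypothetical simplification, not PJAMDDC itself; the weights and greedy rule are assumptions):

```python
# Greedy load balancing of weighted cuboids over workers (illustrative only).
import heapq

def assign_cuboids(cuboid_weights, num_workers):
    """Heaviest cuboid goes to the currently least-loaded worker."""
    heap = [(0.0, w) for w in range(num_workers)]    # (load, worker id)
    heapq.heapify(heap)
    plan = {w: [] for w in range(num_workers)}
    for cuboid, weight in sorted(cuboid_weights.items(), key=lambda x: -x[1]):
        load, worker = heapq.heappop(heap)
        plan[worker].append(cuboid)
        heapq.heappush(heap, (load + weight, worker))
    return plan

weights = {"ABC": 8.0, "AB": 5.0, "AC": 4.0, "BC": 4.0, "A": 2.0, "B": 1.0, "C": 1.0}
print(assign_cuboids(weights, 3))
```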

7.
In data-stream query processing, aggregate queries are a common query type. A system often has to process multiple aggregate queries over the same data source, and executing each query separately causes serious scalability and performance problems, so sharing resources among similar queries becomes essential. For multiple aggregate queries with different time windows, this paper proposes an optimized window aggregation algorithm, OPWA (Optimized Paired Window Aggregation). Aggregate queries are first grouped by their time-window parameters so that similar queries can be scheduled synchronously; the paired technique is then used to slice the data stream. This reduces the number of time slices and thus the space requirement on the one hand, and executes similar queries synchronously to reduce computation cost on the other. Experiments show that OPWA performs well.
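
A minimal sketch of the slice-sharing idea behind paired-window aggregation (my own simplification, assuming window boundaries align with slice boundaries): per-slice partial sums are computed once and shared by every window that covers them, instead of re-scanning tuples per query.

```python
# Shared per-slice partial aggregates answering multiple window lengths.
def slice_stream(stream, slice_len):
    """Return per-slice partial SUMs over a timestamped stream of (t, value)."""
    partials = {}
    for t, v in stream:
        partials[t // slice_len] = partials.get(t // slice_len, 0) + v
    return partials

def window_sum(partials, slice_len, window_len, window_end):
    """Answer a window [window_end - window_len, window_end) from shared slices."""
    first = (window_end - window_len) // slice_len
    last = (window_end - 1) // slice_len
    return sum(partials.get(i, 0) for i in range(first, last + 1))

stream = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 8)]
partials = slice_stream(stream, slice_len=2)
print(window_sum(partials, 2, window_len=4, window_end=8))  # last 4 ticks -> 26
print(window_sum(partials, 2, window_len=8, window_end=8))  # whole stream -> 36
```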

8.
Multi-way array aggregation is mainly used for full-cube computation over data cubes whose underlying data structure is a high-dimensional array. With the arrival of the big-data era, efficient analysis and use of data has become increasingly important, while the sheer volume of data makes the computation challenging. Against this background, a method that combines the widely used MapReduce technique with multi-way array aggregation is described to improve the efficiency of full-cube computation.
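
A toy MapReduce-style sketch of the combination (plain Python, not tied to any Hadoop API; the phase functions are invented names): the map phase emits one record per cuboid cell for every roll-up of a base tuple, and the reduce phase sums the values per cell, yielding the full cube.

```python
# MapReduce-flavoured full-cube computation, simulated in-process.
from collections import defaultdict
from itertools import combinations

def map_phase(record, num_dims):
    dims, measure = record[:num_dims], record[num_dims]
    for r in range(num_dims + 1):
        for kept in combinations(range(num_dims), r):
            cell = tuple(dims[d] if d in kept else "*" for d in range(num_dims))
            yield cell, measure

def reduce_phase(mapped):
    totals = defaultdict(float)
    for cell, value in mapped:
        totals[cell] += value
    return dict(totals)

records = [("A", "x", 5.0), ("A", "y", 3.0), ("B", "x", 2.0)]
mapped = (kv for rec in records for kv in map_phase(rec, 2))
print(reduce_phase(mapped)[("A", "*")])   # 8.0
```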

9.
张延松  张宇  黄伟  王珊  陈红 《软件学报》2009,20(Z1):165-175
Based on the characteristics of OLAP queries and the performance profile of main-memory databases, a parallel OLAP query-processing system composed of multiple main-memory databases is proposed. The multidimensional aggregate queries of an OLAP application are distributed to the computing nodes for parallel aggregate computation, and the aggregation results are merged for output. Compared with other parallel processing approaches, the algorithm exploits the fact that in an OLAP DB schema the dimension tables are far smaller than the fact table: the database is partitioned horizontally according to the size of the fact table and the processing capacity of each node, and the distributive computability of the aggregate functions is used to increase the degree of parallelism and postpone the merge step of parallel query processing. This makes full use of the nodes' parallel processing power, reduces data traffic during parallel query processing, and improves overall parallel query performance. The algorithm is easy to implement, scales well, performs well, and suits enterprise-scale massive data processing.
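
A hedged sketch of the push-down-then-merge pattern (my own minimal version using threads, not the paper's in-memory database system): each node aggregates its horizontal fragment of the fact table, and the coordinator merges the small distributive partial results and derives the final AVG.

```python
# Distributive aggregation pushed down to fragments, merged at a coordinator.
from concurrent.futures import ThreadPoolExecutor
from collections import defaultdict

def node_aggregate(fragment):
    """Runs per node: group-by over the local fact-table fragment."""
    partial = defaultdict(lambda: [0.0, 0])           # key -> [sum, count]
    for dim_key, measure in fragment:
        partial[dim_key][0] += measure
        partial[dim_key][1] += 1
    return dict(partial)

def coordinator(fragments):
    with ThreadPoolExecutor(max_workers=len(fragments)) as pool:
        partials = list(pool.map(node_aggregate, fragments))
    merged = defaultdict(lambda: [0.0, 0])
    for partial in partials:                          # SUM/COUNT are distributive
        for key, (s, c) in partial.items():
            merged[key][0] += s
            merged[key][1] += c
    return {key: s / c for key, (s, c) in merged.items()}   # final AVG per group

fragments = [[("east", 10.0), ("west", 4.0)], [("east", 6.0), ("west", 8.0)]]
print(coordinator(fragments))   # {'east': 8.0, 'west': 6.0}
```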

10.
ROLAP often involves complex ad hoc queries over large amounts of data; from the SQL perspective these queries usually contain multi-table joins and group-by aggregations. This paper proposes a new join and aggregation algorithm, JAMDHBJI, which fully considers the complex multidimensional hierarchies in ROLAP while recognizing that not every dimension carries hierarchical semantics. By effectively combining dimension-hierarchy encoding with bitmap join indexes, the algorithm turns complex join and group-by aggregation operations into range queries on the fact table, greatly improving their efficiency. Theoretical analysis shows that the algorithm is efficient.
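
A small sketch of the general bitmap-join-index idea (my own construction, not JAMDHBJI; the encodings and table are invented): one bitmap per encoded dimension member over the fact table, so a hierarchical selection plus a group filter becomes bitwise OR/AND operations instead of a star join.

```python
# Hierarchy-encoded dimension members plus bitmap join indexes over the fact table.
from collections import defaultdict

fact = [  # (region_code, product_code, sales) -- codes already hierarchy-encoded
    ("0101", "A1", 10.0), ("0102", "A1", 7.0), ("0201", "B2", 4.0), ("0101", "B2", 9.0),
]

def build_bitmaps(fact, col):
    bitmaps = defaultdict(int)
    for row_id, row in enumerate(fact):
        bitmaps[row[col]] |= 1 << row_id            # set the bit for this fact row
    return bitmaps

region_bm, product_bm = build_bitmaps(fact, 0), build_bitmaps(fact, 1)

def prefix_bitmap(bitmaps, prefix):
    """OR together all members under one hierarchy node (encoded as a prefix)."""
    out = 0
    for code, bm in bitmaps.items():
        if code.startswith(prefix):
            out |= bm
    return out

# Sales of product A1 in province '01' (all its cities), without any join:
match = prefix_bitmap(region_bm, "01") & product_bm["A1"]
print(sum(fact[i][2] for i in range(len(fact)) if match >> i & 1))   # 17.0
```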

11.
This paper proposes a computation method for holistic multi-feature cube (MF-Cube) queries based on the characteristics of MF-Cubes. Three simple yet efficient strategies are designed to optimize the dependent complex aggregates at multiple granularities for a complex data-mining query within data cubes. The first strategy computes holistic MF-Cube queries using the PDAP (Part Distributive Aggregate Property). Further efficiency is gained by a second strategy, dynamic subset data selection (the iceberg-query technique), which reduces the size of the materialized data cubes. To extend this efficiency further, a third strategy adopts chunk-based caching, which reuses the output of previous queries. Combining these three strategies, we design an algorithm called PDIC (Part Distributive Iceberg Chunk). We evaluate this algorithm experimentally on synthetic and real-world datasets and demonstrate that our approach delivers up to approximately twice the performance of traditional computation methods.

12.
The closed data cube is an effective lossless compression technique: it removes redundant information from the data cube, which reduces storage space and speeds up computation with almost no impact on query performance. Hadoop's MapReduce parallel computing model provides technical support for cube computation, and its distributed file system HDFS supports cube storage. To save storage space and speed up queries, this paper builds on the traditional data cube and proposes the closed histogram cube, which further reduces storage via encoding on top of the closed data cube and accelerates queries via indexing. The Hadoop platform guarantees both scalability and load balance for the closed histogram cube. Experiments show that the closed histogram cube compresses the data cube effectively, offers high query performance, and, thanks to Hadoop's characteristics, its computation is sped up markedly by adding nodes.

13.
A new data cube structure is proposed that obtains query results through indexes and set intersection/union operations. In particular, for range queries it avoids decomposing the range into points and then issuing point queries one by one, thereby improving range-query performance while keeping disk usage small and point-query response fast. Construction and query algorithms are given, and experiments on synthetic and real data verify the approach.
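
An illustrative sketch of answering a range query with set operations on inverted indexes, as the abstract describes, instead of decomposing the range into individual point queries (the data and index layout are my own assumptions):

```python
# Range query via union within each dimension's index, intersection across dimensions.
from collections import defaultdict

rows = [(2, 10, 5.0), (3, 20, 7.0), (5, 20, 2.0), (7, 30, 4.0)]  # (x, y, measure)

# One inverted index per dimension: value -> set of row ids.
idx_x, idx_y = defaultdict(set), defaultdict(set)
for rid, (x, y, _) in enumerate(rows):
    idx_x[x].add(rid)
    idx_y[y].add(rid)

def range_query(x_range, y_range):
    # Union the posting sets inside each dimension's range, then intersect
    # across dimensions -- no per-point lookups.
    xs = set().union(*(idx_x[v] for v in idx_x if x_range[0] <= v <= x_range[1]))
    ys = set().union(*(idx_y[v] for v in idx_y if y_range[0] <= v <= y_range[1]))
    return sum(rows[rid][2] for rid in xs & ys)

print(range_query((2, 5), (10, 20)))   # rows 0, 1, 2 -> 14.0
```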

14.
OLAP queries involve a lot of aggregation over large amounts of data in data warehouses. To process expensive OLAP queries efficiently, we propose a new method to rewrite a given OLAP query using the various kinds of materialized views that already exist in the data warehouse. We first define the normal forms of OLAP queries and materialized views based on their selection and aggregation granularities, which are derived from the lattice of dimension hierarchies. Conditions for the usability of materialized views in rewriting a given query are specified by relationships between the components of their normal forms. We present a rewriting algorithm for OLAP queries that can effectively utilize materialized views with different selection granularities, selection regions, and aggregation granularities together. We also propose an algorithm to find a set of materialized views that yields a rewritten query which can be executed efficiently. We demonstrate the effectiveness and performance of the algorithms experimentally.
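
A hedged sketch of the usability check (a strong simplification of the normal-form conditions above, with invented field names): a view can rewrite a query if it keeps the query's grouping dimensions, its selection region contains the query's, and any dimension the query restricts further is still present in the view's group-by.

```python
# Simplified usability test of a materialized view for rewriting a query.
def usable(view, query):
    # 1. The view must keep every dimension the query groups by.
    if not set(query["group_by"]) <= set(view["group_by"]):
        return False
    # 2. The view's selection region must contain the query's; on any dimension
    #    where the query restricts further, that dimension must remain in the
    #    view's group-by so the finer filter can be applied on top of the view.
    for d, (lo, hi) in query["select"].items():
        vlo, vhi = view["select"].get(d, (float("-inf"), float("inf")))
        if not (vlo <= lo and hi <= vhi):
            return False
        if (lo, hi) != (vlo, vhi) and d not in view["group_by"]:
            return False
    return True

view = {"group_by": ["store", "month", "year"], "select": {"year": (2000, 2010)}}
q1 = {"group_by": ["store"], "select": {"year": (2003, 2005)}}
q2 = {"group_by": ["store", "product"], "select": {"year": (2003, 2005)}}
print(usable(view, q1), usable(view, q2))   # True False
```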

15.
Building on research into the Dwarf cube, a new semantics-preserving cube structure is proposed. The structure changes how the Dwarf cube stores aggregate data: while preserving the roll-up and drill-down semantics of the base cube, it removes prefix and suffix redundancy as far as possible, saves storage space, and keeps the cube structure clear. It achieves higher storage efficiency and faster query response than the Dwarf cube, returns results quickly for both point and range queries, and supports sparse cubes over large data volumes well.

16.
周龙  郑诚 《微机发展》2006,16(6):101-103
Based on an analysis of the concepts and architecture of data warehouses and OLAP, this paper describes the design of an OLAP application system and its concrete implementation. Queries against a data warehouse are typically ad hoc: complex queries must execute within strict response times, traversing millions or even hundreds of millions of records while performing potentially complicated search, join, and summarization operations. Query throughput and response time are the key criteria for judging data-warehouse performance. Cube computation is the foundation of timely OLAP queries, and improving query speed requires precomputation for OLAP. The paper systematically compares several cube-computation algorithms and applies them in a concrete system.

17.
Quotient Cube and QC-tree attempt to condense the size of a data cube while preserving its semantics; however, the former does not store the semantic relationships at all, and the relationships stored by the latter are obscure. This paper therefore proposes the drill-down cube structure, which for the first time considers cube storage from a semantic perspective: it stores not the contents of the classes but the direct drill-down relationships between classes. The drill-down cube not only greatly reduces the storage size of the data cube but also clearly expresses the drill-down semantics implied by the original cube. In addition, it offers high query-response performance, which is especially evident for range queries. Experiments and analysis show that the drill-down cube clearly outperforms QC-tree in both storage size and query response, making it well suited to organizing and storing data cubes.

18.
The design of an OLAP system for supporting real-time queries is a major research issue. One approach is to use data cubes, which are materialized, precomputed multidimensional views of data in a data warehouse. We can derive a set of data cubes to answer each frequently asked query directly. However, there are two practical problems: (1) the maintenance cost of the data cubes, and (2) the query cost to answer those queries. Maintaining a data cube requires disk storage and CPU computation, so the maintenance cost is related to the total size as well as the total number of data cubes materialized. In most cases, materializing all data cubes is impractical. The maintenance cost may be reduced by merging some data cubes, but the resulting larger data cubes will increase the query cost of answering some queries. If the bounds on the maintenance cost and the query cost are too strict, we help the user decide which queries to sacrifice and leave out of consideration. We define an optimization problem in data cube system design: given a maintenance-cost bound, a query-cost bound, and a set of frequently asked queries, determine a set of data cubes such that the system can answer the largest possible subset of the queries without violating the two bounds. This is an NP-hard problem. We propose approximate greedy algorithms GR, 2GM, and 2GMM, which experiments on a census dataset and a forest-cover-type dataset show to be both effective and efficient.
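
A rough greedy sketch in the spirit of this selection problem (the specific rule, names, and costs are invented for illustration and do not reproduce GR/2GM/2GMM): repeatedly pick the cube that newly answers the most queries per unit of maintenance cost, subject to the maintenance-cost bound.

```python
# Greedy cube selection under a maintenance-cost bound, maximizing answered queries.
def greedy_select(cubes, maintenance_bound):
    """cubes: dict name -> (maintenance_cost, set of query ids it answers)."""
    chosen, answered, spent = [], set(), 0.0
    while True:
        best, best_ratio = None, 0.0
        for name, (cost, queries) in cubes.items():
            if name in chosen or spent + cost > maintenance_bound:
                continue
            gain = len(queries - answered)            # queries newly covered
            if gain and gain / cost > best_ratio:
                best, best_ratio = name, gain / cost
        if best is None:
            return chosen, answered
        chosen.append(best)
        spent += cubes[best][0]
        answered |= cubes[best][1]

cubes = {"c1": (4.0, {1, 2, 3}), "c2": (2.0, {3, 4}), "c3": (3.0, {5})}
print(greedy_select(cubes, maintenance_bound=6.0))   # (['c2', 'c1'], {1, 2, 3, 4})
```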

19.
On-line analytical processing (OLAP) typically involves complex aggregate queries over large datasets. The data cube has been proposed as a structure that materializes the results of such queries in order to accelerate OLAP. A significant fraction of the related work has been on Relational-OLAP (ROLAP) techniques, which are based on relational technology. Existing ROLAP cubing solutions mainly focus on "flat" datasets, which do not include hierarchies in their dimensions. Nevertheless, as shown in this paper, the nature of hierarchies introduces several complications into the entire lifecycle of a data cube including the operations of construction, storage, indexing, query processing, and incremental maintenance. This fact renders existing techniques essentially inapplicable in a significant number of real-world applications and mandates revisiting the entire cube lifecycle under the new perspective. In order to overcome this problem, the CURE algorithm has been recently proposed as an efficient mechanism to construct complete cubes over large datasets with arbitrary hierarchies and store them in a highly compressed format, compatible with the relational model. In this paper, we study the remaining phases in the cube lifecycle and introduce query-processing and incremental-maintenance algorithms for CURE cubes. These are significantly different from earlier approaches, which have been proposed for flat cubes constructed by other techniques and are inadequate for CURE due to its high compression rate and the presence of hierarchies. Our methods address issues such as cube indexing, query optimization, and lazy update policies. Especially regarding updates, such lazy approaches are applied for the first time on cubes. We demonstrate the effectiveness of CURE in all phases of the cube lifecycle through experiments on both real-world and synthetic datasets. Among the experimental results, we distinguish those that have made CURE the first ROLAP technique to complete the construction and usage of the cube of the highest-density dataset in the APB-1 benchmark (12 GB). CURE was in fact quite efficient on this, showing great promise with respect to the potential of the technique overall.
