Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
李翠平  王珊 《计算机科学》2005,32(9):100-102
Although precomputation can improve OLAP query efficiency, precomputing the entire data cube is impractical because of storage constraints. The recently proposed quotient cube addresses this problem by partitioning cube cells into equivalence classes. However, when the data sources change, maintaining such a cube is difficult: even if only one tuple changes, all aggregate values must be recomputed, which is very costly. In practice, some applications care more about query response time and can accept somewhat lower precision in the results. This paper proposes methods for incrementally maintaining an approximate quotient cube. Experiments show that these methods are very effective.

2.
Data cube computation is a well-known expensive operation and has been studied extensively. It is often not feasible to compute a complete data cube due to the huge storage requirement. The recently proposed quotient cube addressed this fundamental issue through a partitioning method that groups cube cells into equivalence classes. The effectiveness and efficiency of the quotient cube for cube compression and computation have been proved. However, as changes are made to the data sources, maintaining such a quotient cube is non-trivial, since its equivalence classes must be split or merged. In this paper, incremental algorithms based on the Galois lattice are designed to update an existing quotient cube efficiently. A performance study shows that these algorithms are efficient and scalable for large databases.
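The listing contains no code, so the following is a hypothetical Python sketch (toy table, invented column names such as `store` and `sales`) of the equivalence partitioning that underlies the quotient cube: cells covering exactly the same set of base tuples share every distributive aggregate, so each class needs to be stored only once. It illustrates the general idea, not this paper's algorithm or its Galois-lattice maintenance scheme.

```python
from itertools import combinations
from collections import defaultdict

# Toy base table: each row has dimension values and a measure.
DIMS = ("store", "product", "season")
ROWS = [
    {"store": "S1", "product": "P1", "season": "spring", "sales": 6},
    {"store": "S1", "product": "P2", "season": "spring", "sales": 12},
    {"store": "S2", "product": "P1", "season": "fall",   "sales": 9},
]

def cells(row):
    """All group-by cells (with '*' wildcards) this row contributes to."""
    for r in range(len(DIMS) + 1):
        for kept in combinations(DIMS, r):
            yield tuple(row[d] if d in kept else "*" for d in DIMS)

# Map each cell to the set of base-row ids it covers.
cover = defaultdict(set)
for tid, row in enumerate(ROWS):
    for cell in cells(row):
        cover[cell].add(tid)

# Quotient classes: cells with identical covers share all aggregate values,
# so one representative aggregate per class suffices.
classes = defaultdict(list)
for cell, tids in cover.items():
    classes[frozenset(tids)].append(cell)

for tids, members in classes.items():
    total = sum(ROWS[t]["sales"] for t in tids)
    print(f"SUM(sales)={total:3d}  class of {len(members)} cells, e.g. {members[0]}")
```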

3.
A comparative study of the data cube lattice and the formal concept lattice shows that both are based on order structures, and that the equivalence feature groups of formal concept analysis (FCA) and the cover equivalence classes of the data cube partition data cells identically. Introducing FCA and concept lattice theory into data cube research, this paper proposes the aggregate concept lattice (ACL) structure for the first time. The ACL is isomorphic to an ordinary concept lattice; it preserves all aggregate results of the cube completely and achieves the same reduction ratio as the quotient cube. Since the ACL structure is still rather complex, a reduced aggregate concept lattice (RACL) is further proposed on top of it, which stores only non-object concepts rather than all concepts. Combined with the base table, the RACL is still a complete cube structure, yet achieves greater reduction. Efficient query methods for the ACL and RACL are given, and experiments are carried out on both synthetic and real data. Both theory and experiments show that the RACL structure saves more space than existing methods while maintaining high query efficiency.

4.
This paper proposes using a dimension hierarchy aggregate tree (DHA-Tree) to maintain an aggregate cube incrementally. When data are inserted into or deleted from the dimension-hierarchy aggregate cube, the dimension hierarchy prefixes stored in the DHA-Tree are exploited to propagate the difference between the values before and after the update bottom-up to every ancestor node affected by it. When new dimension data are inserted, the aggregate cube can be updated incrementally without being rebuilt, which shortens the cube update time. A performance analysis and comparison between the DHA-Tree-based aggregate cube and the traditional cube shows that the proposed incremental update algorithm performs best.
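As a rough illustration of the bottom-up delta propagation described above, here is a hypothetical Python sketch (class and method names are invented; a real DHA-Tree stores far more structure) in which an update to a leaf path adds the before/after difference to every ancestor prefix:

```python
from collections import defaultdict

class HierarchyAggTree:
    """Toy sketch: SUM aggregates keyed by hierarchy prefixes, e.g. ('2023', 'Q1', 'Jan')."""
    def __init__(self):
        self.sums = defaultdict(float)

    def apply_delta(self, path, delta):
        # Propagate the delta bottom-up to every ancestor prefix,
        # including the all-level root ().
        for i in range(len(path), -1, -1):
            self.sums[path[:i]] += delta

    def insert(self, path, value):
        self.apply_delta(path, value)

    def delete(self, path, value):
        self.apply_delta(path, -value)

tree = HierarchyAggTree()
tree.insert(("2023", "Q1", "Jan"), 100.0)
tree.insert(("2023", "Q1", "Feb"), 50.0)
tree.delete(("2023", "Q1", "Jan"), 100.0)                # undo the first insert
print(tree.sums[("2023",)], tree.sums[("2023", "Q1")])   # 50.0 50.0
```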

5.
Answering aggregate queries over data cubes quickly and approximately is one of the core problems in data mining and data warehousing research. Most existing aggregate query algorithms support only one specific type of aggregate query, rather than many types, over the same data cube. This paper presents a new framework, AdenTS, a density-based adaptive tree structure that can answer various kinds of aggregate queries over the same data cube, and proposes several approximation and heuristic techniques that improve query results and accuracy. Experimental results show that this method is better both in the variety of queries supported and in performance.

6.
An Improved Dimension Hierarchy Aggregate Cube Storage Structure for Data Warehouse Systems
This paper proposes using dimension hierarchy aggregation within the cube to build a high-performance dimension hierarchy aggregate cube (DHAC). By fully exploiting the dimension hierarchy information already stored in the DHAC, the efficiency of querying and updating the cube's multidimensional data is optimized, and semantic operations such as roll-up and drill-down are supported. When data are inserted into or deleted from the DHAC, the difference between the old and new values is propagated bottom-up to incrementally update every ancestor node affected by the change. Schema updates are also supported: a new dimension or hierarchy level can be inserted without rebuilding the aggregate cube. Algorithmic performance analysis and comparison between the DHAC and the traditional cube show, both theoretically and experimentally, that the proposed DHAC performs best.

7.
An Effective Method for Parallel Processing of Multidimensional Join and Aggregation Operations
With the maturation of parallel algorithms and of cheap, powerful multiprocessor systems, using multiprocessor systems to process the join and aggregation operations of a multidimensional data warehouse in parallel has become the preferred technique for improving OLAP query performance. This paper proposes PJAMDDC (parallel join and aggregation for multi-dimensional data cube), a parallel algorithm that reduces the cost of join and aggregation operations. The algorithm takes full account of the storage mechanism of the multidimensional data cube and the structural characteristics of distributed multiprocessor systems. Building on the search lattice used for aggregate computation of the data cube, it uses the data warehouse's hierarchy combined surrogates and weights the cube's search lattice so that cube data are distributed optimally among the processors; partitioning the multidimensional data in this way improves the efficiency of parallel join and aggregation. Experimental evaluation shows that PJAMDDC is effective for parallel processing of joins and aggregations in multidimensional data warehouses.

8.
A statistical database (SDB) is a database that provides simple summary statistics (e.g., SUM, COUNT, MAX, MEDIAN) about individuals in the database and that supports statistical data analysis. When SDB users infer protected information in the SDB from responses to queries, we say that the SDB is compromised.

Summary tables are tabular representations of summary data. For a given aggregate function and a set of attributes to specify subsets of individuals in the SDB, all possible summary tables form a lattice. The SDB security problem in the lattice model is defined as preventing the users from obtaining the information that a table element (i.e., cell) is of size one.

In this paper, to solve the SDB security problem in the lattice model, we generalize three cell-level control techniques, namely cell suppression by merging, m-cube, and axis merging, into the lattice model. We define the concept of information loss, derive various properties of the minimum information loss, and then investigate the effectiveness of heuristic algorithms for the minimum information loss in each of the three cell-level control techniques.


9.
Efficient Computation of Iceberg Cubes by Bounding Aggregate Functions
The iceberg cubing problem is to compute the multidimensional group-by partitions that satisfy given aggregation constraints. Pruning unproductive computation for iceberg cubing when nonantimonotone constraints are present is a great challenge because the aggregate functions do not increase or decrease monotonically along the subset relationship between partitions. In this paper, we propose a novel bound prune cubing (BP-Cubing) approach for iceberg cubing with nonantimonotone aggregation constraints. Given a cube over n dimensions, an aggregate for any group-by partition can be computed from aggregates for the most specific n-dimensional partitions (MSPs). The largest and smallest aggregate values computed this way become the bounds for all partitions in the cube. We provide efficient methods to compute tight bounds for base aggregate functions and, more interestingly, arithmetic expressions thereof, from bounds of aggregates over the MSPs. Our methods produce tighter bounds than those obtained by previous approaches. We present iceberg cubing algorithms that combine bounding with efficient aggregation strategies. Our experiments on real-world and artificial benchmark data sets demonstrate that BP-Cubing algorithms achieve more effective pruning and are several times faster than state-of-the-art iceberg cubing algorithms, and that BP-Cubing achieves the best performance with the top-down cubing approach.
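To make the bounding idea concrete, here is a small hypothetical Python sketch for the AVG case: the average of any union of most specific partitions lies between the smallest and largest per-MSP average, so those two values can prune a whole branch against an iceberg threshold. The numbers and the function name are invented, and the paper's bounds for general arithmetic expressions of aggregates are tighter and more involved.

```python
def avg_bounds(msps):
    """msps: list of (count, total) for the most specific partitions in a branch.
    The AVG of any union of these partitions is a weighted mean of the per-MSP
    averages, so it lies between their minimum and maximum."""
    avgs = [total / count for count, total in msps if count > 0]
    return min(avgs), max(avgs)

# Example iceberg constraint: AVG(measure) >= 40
msps = [(3, 60.0), (5, 100.0), (2, 70.0)]   # per-MSP (COUNT, SUM)
lo, hi = avg_bounds(msps)
threshold = 40.0
if hi < threshold:
    print("prune: no group-by over these MSPs can reach the threshold")
else:
    print(f"cannot prune: AVG bound is [{lo:.1f}, {hi:.1f}]")
```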

10.
On-line analytical processing (OLAP) typically involves complex aggregate queries over large datasets. The data cube has been proposed as a structure that materializes the results of such queries in order to accelerate OLAP. A significant fraction of the related work has been on Relational-OLAP (ROLAP) techniques, which are based on relational technology. Existing ROLAP cubing solutions mainly focus on “flat” datasets, which do not include hierarchies in their dimensions. Nevertheless, as shown in this paper, the nature of hierarchies introduces several complications into the entire lifecycle of a data cube including the operations of construction, storage, indexing, query processing, and incremental maintenance. This fact renders existing techniques essentially inapplicable in a significant number of real-world applications and mandates revisiting the entire cube lifecycle under the new perspective. In order to overcome this problem, the CURE algorithm has been recently proposed as an efficient mechanism to construct complete cubes over large datasets with arbitrary hierarchies and store them in a highly compressed format, compatible with the relational model. In this paper, we study the remaining phases in the cube lifecycle and introduce query-processing and incremental-maintenance algorithms for CURE cubes. These are significantly different from earlier approaches, which have been proposed for flat cubes constructed by other techniques and are inadequate for CURE due to its high compression rate and the presence of hierarchies. Our methods address issues such as cube indexing, query optimization, and lazy update policies. Especially regarding updates, such lazy approaches are applied for the first time on cubes. We demonstrate the effectiveness of CURE in all phases of the cube lifecycle through experiments on both real-world and synthetic datasets. Among the experimental results, we distinguish those that have made CURE the first ROLAP technique to complete the construction and usage of the cube of the highest-density dataset in the APB-1 benchmark (12 GB). CURE was in fact quite efficient on this, showing great promise with respect to the potential of the technique overall.

11.
张延松  肖艳芹  王珊  陈红 《软件学报》2010,21(10):2494-2512
What-if analysis can provide more meaningful decision-support information than traditional OLAP (on-line analytical processing) analysis. Hypothetical scenario analysis over historical data requires a more effective mechanism for generating what-if data views. Building on the traditional delta-table merging algorithm, this paper proposes the deltaMap algorithm, based on in-memory record pointers, to improve the merging performance of what-if data views. In line with the characteristics of OLAP workloads, a pre-merge algorithm is proposed to handle aggregate operations that support distributed computation. For different types of hypothetical updates, detailed performance tests and a comprehensive comparison are carried out between the query-rewriting algorithm and the △cube algorithm. On this basis, a cost model for what-if analysis is proposed, parameterized by scenario pattern, hypothetical update rate, hypothetical update complexity, and the cardinality of the query result set; it effectively describes the system's what-if query processing strategy and provides a feasible framework for what-if analysis solutions.

12.
Building on research into the Dwarf cube, this paper proposes a new cube structure that preserves semantics. The structure changes the way the Dwarf cube stores aggregate data: while preserving the roll-up and drill-down semantics of the base cube, it removes prefix and suffix redundancy as far as possible, saving storage space and keeping the cube structure clear. It achieves higher storage efficiency and faster query response than the Dwarf cube, returns results quickly for both point and range queries, and provides good support for sparse cubes over large data volumes.

13.
Web Relation Mining Based on an Extended Concept Lattice
姜峰  范玉顺 《软件学报》2010,21(10):2432-2444
To address the application bottleneck caused by the lack of effective organization and management mechanisms for Web services, this paper introduces an extended concept lattice based on a concept coverage function. By building an extended concept lattice model of a Web service set based on input and output parameters, it gives offline algorithms for mining the equivalence, substitution, and flow relations between Web services, as well as incremental and decremental online update algorithms. Tests on real Web service sets show that the extended concept lattice model is an effective way to organize a Web service set and can be used for automatic mining and maintenance of Web service relations, thereby providing intelligent support for Web service selection, optimization, and composition.

14.
A Genetic Selection Algorithm for OLAP Data Cubes
Multidimensional data analysis, as supported by OLAP (online analytical processing) systems, requires the computation of many aggregate functions over a large volume of historically collected data. To decrease the query time and to provide various viewpoints for the analysts, these data are usually organized as a multidimensional data model, called data cubes. Each cell in a data cube corresponds to a unique set of values for the different dimensions and contains the metric of interest. The data cube selection problem is, given the set of user queries and a storage space constraint, to select a set of materialized cubes from the data cubes to minimize the query cost and/or the maintenance cost. This problem is known to be NP-hard. In this study, we examined the application of genetic algorithms to the cube selection problem. We proposed a greedy-repaired genetic algorithm, called the genetic greedy method. According to our experiments, the solution obtained by our genetic greedy method is superior to that found using the traditional greedy method. That is, within the same storage constraint, the solution greatly reduces both the query cost and the cube maintenance cost.
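A minimal, hypothetical Python sketch of the greedy-repair idea follows: an infeasible chromosome (one bit per candidate cuboid) is repaired by repeatedly dropping the selected cuboid with the worst benefit-to-size ratio until the storage constraint holds, and repaired children compete inside a toy genetic loop. All sizes, benefits, and parameters are invented; the paper's cost model and operators are more elaborate.

```python
import random

# Hypothetical inputs: per-cuboid storage size and query benefit (invented numbers).
SIZES    = [40, 25, 10, 30, 5, 15]
BENEFIT  = [90, 40, 30, 55, 8, 20]
CAPACITY = 70

def repair(chromosome):
    """Greedy repair: while the selection exceeds the storage constraint,
    drop the selected cuboid with the worst benefit-per-unit-size ratio."""
    sel = list(chromosome)
    while sum(s for s, bit in zip(SIZES, sel) if bit) > CAPACITY:
        worst = min((i for i, bit in enumerate(sel) if bit),
                    key=lambda i: BENEFIT[i] / SIZES[i])
        sel[worst] = 0
    return sel

def fitness(chromosome):
    return sum(b for b, bit in zip(BENEFIT, chromosome) if bit)

# A toy GA loop: random initial population, one-point crossover, greedy repair.
random.seed(0)
population = [repair([random.randint(0, 1) for _ in SIZES]) for _ in range(8)]
for _ in range(20):
    a, b = random.sample(population, 2)
    cut = random.randrange(1, len(SIZES))
    child = repair(a[:cut] + b[cut:])
    population.sort(key=fitness)          # worst individual first
    if fitness(child) > fitness(population[0]):
        population[0] = child             # replace the worst individual
print(max(population, key=fitness), max(map(fitness, population)))
```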

15.
Storage and Aggregate Computation of Time-Series Data Cubes
This paper discusses algorithms for building and storing data cubes from time-series data and for computing aggregates over them. The N23 algorithm and its extension, EN23, can conveniently convert an N-dimensional (N > 3) data cube into a three-dimensional data cube, greatly reducing the number of I/O operations and substantially improving runtime efficiency.

16.
Parallel data processing is a promising approach for efficiently computing data cubes in relational databases, because most aggregate functions used in OLAP (On-Line Analytical Processing) are distributive functions. This paper studies the issues of handling data skew in parallel data cube computation. We present a fully dynamic partitioning approach that can effectively distribute workload among processing nodes without prior knowledge of the data distribution. As a supplement, a simple and effective dynamic load balancing mechanism is also incorporated into our algorithm, which further improves the overall performance. Our experimental results indicate that the proposed techniques are effective even when high data skew exists. The results of scale-up and speedup tests are also satisfactory.
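As a generic illustration only (not this paper's partitioning algorithm), the following hypothetical Python sketch shows one way dynamic assignment can absorb skew: each group-by partition is handed to the currently least-loaded worker, largest partitions first, so a single hot key cannot swamp one node.

```python
import heapq
from collections import Counter

def dynamic_assign(group_sizes, n_workers):
    """Greedy 'least-loaded first' assignment of partitions to workers."""
    heap = [(0, w) for w in range(n_workers)]   # (current load, worker id)
    heapq.heapify(heap)
    assignment = {}
    # Assign large partitions first so skewed keys are spread out early.
    for key, size in sorted(group_sizes.items(), key=lambda kv: -kv[1]):
        load, worker = heapq.heappop(heap)
        assignment[key] = worker
        heapq.heappush(heap, (load + size, worker))
    return assignment, sorted(load for load, _ in heap)

# Skewed toy distribution: one very hot group key.
sizes = Counter({"hot": 900, "a": 120, "b": 90, "c": 80, "d": 60, "e": 50})
assignment, loads = dynamic_assign(sizes, n_workers=3)
print(assignment)
print("per-worker load:", loads)
```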

17.
New Algorithm for Computing Cube on Very Large Compressed Data Sets
Data compression is an effective technique to improve the performance of data warehouses. Since the cube operation represents the core of online analytical processing in data warehouses, it is a major challenge to develop efficient algorithms for computing cubes on compressed data warehouses. To our knowledge, very few cube computation techniques have been proposed for compressed data warehouses to date in the literature. This paper presents a novel algorithm to compute cubes on compressed data warehouses. The algorithm operates directly on compressed data sets without the need of first decompressing them. The algorithm is applicable to a large class of mapping-complete data compression methods. The complexity of the algorithm is analyzed in detail. The analytical and experimental results show that the algorithm is more efficient than all other existing cube algorithms. In addition, a heuristic algorithm to generate an optimal plan for computing the cube is also proposed.

18.
View materialization is an important way of improving the performance of query processing. When an update occurs to the source data from which a materialized view is derived, the materialized view has to be updated so that it is consistent with the source data. This update process is called view maintenance. The incremental method of view maintenance, which computes the new view using the old view and the update to the source data, is widely preferred to full view recomputation when the update is small in size. In this paper we investigate how to incrementally maintain views in object-relational (OR) databases. The investigation focuses on maintaining views defined in OR-SQL, a language containing the features of object referencing, inheritance, collection, and aggregate functions including user-defined set aggregate functions. We propose an architecture and algorithms for incremental OR view maintenance. We implement all algorithms and analyze their performance in comparison with full view recomputation. The analysis shows that the algorithms significantly reduce the cost of updating a view when the size of an update to the source data is relatively small.
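A minimal, hypothetical Python sketch of incremental maintenance for a simple aggregate view follows: the view stores SUM and COUNT per group, applies inserted and deleted deltas directly, derives AVG on demand, and drops a group when its count reaches zero. Names are invented, and the OR-SQL features discussed in the paper (object referencing, inheritance, user-defined set aggregates) are not modeled.

```python
from collections import defaultdict

class SumCountView:
    """Toy incremental maintenance of a GROUP BY view storing SUM and COUNT.
    AVG is derived from them, so it needs no separate maintenance."""
    def __init__(self):
        self.state = defaultdict(lambda: [0.0, 0])   # group key -> [sum, count]

    def apply(self, inserted=(), deleted=()):
        for key, value in inserted:
            s = self.state[key]
            s[0] += value
            s[1] += 1
        for key, value in deleted:
            s = self.state[key]
            s[0] -= value
            s[1] -= 1
            if s[1] == 0:                # group disappears from the view
                del self.state[key]

    def avg(self, key):
        total, count = self.state[key]
        return total / count

view = SumCountView()
view.apply(inserted=[("P1", 10.0), ("P1", 20.0), ("P2", 5.0)])
view.apply(deleted=[("P2", 5.0)])
print(view.avg("P1"), "P2" in view.state)   # 15.0 False
```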

19.
For the case where data are deleted from a decision table, this paper first proposes a linked-list storage structure for the equivalence classes of the decision table, introduces a definition of the simplified decision table based on this structure and a definition of core attributes based on the simplified decision table, and proves that these core attributes are equivalent to those of the original decision table. It then studies the theory of core attribute updating from two aspects, deletion of specified objects and deletion of specified information, and gives the corresponding algorithm implementations. Finally, examples verify the effectiveness of the proposed algorithms.

20.
张安珍  李建中  高宏 《软件学报》2020,31(2):406-420
This paper studies aggregate query processing over incomplete data under symbolic semantics. Incomplete data are also called missing data, and missing values fall into two types: imputable and non-imputable. Existing imputation algorithms cannot guarantee the accuracy of query results after imputation, so this paper provides interval estimates of aggregate query results over incomplete data. It extends the traditional relational database model with symbolic semantics and proposes a general incomplete database model that can handle both imputable and non-imputable missing values. Under this model, a new semantics for aggregate query results over incomplete data, the reliable result, is proposed. A reliable result is an interval estimate of the true query result that guarantees, with high probability, that the true result lies within the estimated interval. Linear-time methods are given for computing reliable results of SUM, COUNT, and AVG queries. Extensive experiments on real and synthetic datasets verify the effectiveness of the proposed methods.
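As a simplified illustration of interval answers over incomplete data (not the paper's symbolic model, which also handles non-imputable values), here is a hypothetical Python sketch that computes linear-time bounds for SUM, COUNT, and AVG when the missing cells are only known to lie in a range [vmin, vmax]:

```python
def aggregate_bounds(values, n_missing, vmin, vmax):
    """Linear-time interval estimates for SUM, COUNT, and AVG over a column in
    which `n_missing` cells are unknown but assumed to lie in [vmin, vmax]."""
    s, n = sum(values), len(values)
    sum_lo, sum_hi = s + n_missing * vmin, s + n_missing * vmax
    count = n + n_missing                 # every row exists; only values are missing
    avg_lo, avg_hi = sum_lo / count, sum_hi / count
    return (sum_lo, sum_hi), count, (avg_lo, avg_hi)

observed = [12.0, 7.5, 20.0, 4.5]
print(aggregate_bounds(observed, n_missing=2, vmin=0.0, vmax=25.0))
# ((44.0, 94.0), 6, (7.33..., 15.66...))
```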
