Similar Literature
1.
High-dimensional data are ubiquitous in the real world, yet they typically contain large amounts of redundant and noisy information, which prevents many traditional clustering algorithms from performing well on them. In practice, the cluster structure of high-dimensional data is often embedded in a lower-dimensional subspace, so dimensionality reduction has become a key technique for uncovering that structure. Among the many dimensionality-reduction methods, graph-based approaches are a focus of current research. However, most graph-based algorithms suffer from two problems: (1) they must compute or learn an adjacency graph, which is computationally expensive; and (2) the reduction process ignores how the reduced data will subsequently be used. To address these problems, this paper proposes MEDR, a fast unsupervised dimensionality-reduction algorithm based on maximum entropy. MEDR combines linear projection with a maximum-entropy clustering model and uses an efficient iterative optimization procedure to find the latent optimal cluster structure of high-dimensional data embedded in a low-dimensional subspace. MEDR requires no adjacency graph as input and has time complexity linear in the number of samples. Experiments on real-world datasets show that, compared with traditional dimensionality-reduction methods, MEDR finds projection matrices that map high-dimensional data into low-dimensional subspaces in a way that better supports subsequent clustering.
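The abstract gives no pseudocode, but the alternation it describes, a linear projection W refined jointly with maximum-entropy (softmax-weighted) cluster assignments, can be sketched roughly as follows. Everything here (the Gaussian-kernel responsibilities, the Procrustes-style projection update, the name `medr_sketch`) is an illustrative assumption, not the authors' actual update rules.

```python
import numpy as np

def medr_sketch(X, k=3, d=2, beta=5.0, iters=20, seed=0):
    """Rough sketch of alternating optimization between a linear
    projection W and maximum-entropy (softmax) cluster assignments.
    Hypothetical updates -- not the published MEDR algorithm."""
    rng = np.random.default_rng(seed)
    n, D = X.shape
    W = np.linalg.qr(rng.normal(size=(D, d)))[0]    # random orthonormal projection
    M = (X @ W)[rng.choice(n, k, replace=False)]    # initial centroids in the subspace
    for _ in range(iters):
        Y = X @ W                                   # project all samples
        d2 = ((Y[:, None, :] - M[None, :, :]) ** 2).sum(-1)
        R = np.exp(-beta * d2)                      # maximum-entropy soft assignments
        R /= R.sum(1, keepdims=True)
        M = (R.T @ Y) / R.sum(0)[:, None]           # centroid update
        S = X.T @ (R @ M)                           # cross-term linking data and clusters
        U, _, Vt = np.linalg.svd(S, full_matrices=False)
        W = U @ Vt                                  # orthonormal (Procrustes-style) update
    return W, M, R
```

Note that the loop never builds an n-by-n adjacency graph, which is the property the abstract emphasizes: each iteration costs time linear in the number of samples.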

2.
Many real-life databases are updated by means of incoming business information. In these databases (e.g., transactional data from large retail chains, call-detail records), the content evolves through periodic insertions (or deletions) of data blocks. Since the data evolve over time, algorithms have to be devised to incrementally update data mining models. This paper presents a novel index, called I-Forest, to support itemset mining on incoming data blocks, where new blocks are inserted periodically or old blocks are discarded. The I-Forest structure provides a complete data representation and allows different kinds of analyses (e.g., investigating quarterly data), besides supporting user-defined time and support constraints. The I-Forest index has been implemented in the PostgreSQL open-source DBMS and exploits its physical-level access methods. Experiments, run for both sparse and dense data distributions, show the effectiveness of the I-Forest-based approach to itemset mining with both time and support constraints. The execution time of the I-Forest-based itemset mining technique is often lower than that of the Prefix-Tree algorithm accessing static data on flat files.
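I-Forest itself is a PostgreSQL-level index, but its block-oriented idea, keeping per-block summaries so blocks can be inserted or discarded without rebuilding the rest and answering queries over a chosen time window by combining blocks, can be illustrated with a toy structure (all names here are hypothetical, not the paper's design):

```python
from collections import Counter
from itertools import combinations

class BlockIndex:
    """Toy block-partitioned itemset counts: one Counter per data block,
    so inserting or discarding a block never touches the others."""
    def __init__(self, max_len=2):
        self.blocks = {}          # block_id -> Counter over itemsets
        self.max_len = max_len

    def insert_block(self, block_id, transactions):
        c = Counter()
        for t in transactions:
            for k in range(1, self.max_len + 1):
                c.update(combinations(sorted(set(t)), k))
        self.blocks[block_id] = c

    def discard_block(self, block_id):
        self.blocks.pop(block_id, None)

    def frequent(self, block_ids, min_count):
        """Itemsets frequent over a user-chosen window of blocks."""
        total = Counter()
        for b in block_ids:
            total.update(self.blocks[b])
        return {i: n for i, n in total.items() if n >= min_count}

idx = BlockIndex()
idx.insert_block("2010-Q1", [["a", "b"], ["a", "c"], ["a", "b"]])
idx.insert_block("2010-Q2", [["b", "c"], ["a", "b"]])
print(idx.frequent(["2010-Q1", "2010-Q2"], min_count=3))
# {('a',): 4, ('b',): 4, ('a', 'b'): 3}
```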

3.
Mining very large databases
Ganti V., Gehrke J., Ramakrishnan R. Computer, 1999, 32(8): 38-45
Established companies have had decades to accumulate masses of data about their customers, suppliers, products and services, and employees. Data mining, also known as knowledge discovery in databases, gives organizations the tools to sift through these vast data stores to find the trends, patterns, and correlations that can guide strategic decision making. Traditionally, algorithms for data analysis assume that the input data contain relatively few records. Current databases, however, are much too large to be held in main memory. To be efficient, the data mining techniques applied to very large databases must be highly scalable. An algorithm is said to be scalable if, given a fixed amount of main memory, its runtime increases linearly with the number of records in the input database. Recent work has focused on scaling data mining algorithms to very large data sets. The authors describe a broad range of algorithms that address three classical data mining problems: market basket analysis, clustering, and classification.

4.
We propose a new algorithm, called Stripe-join, for performing a join given a join index. Stripe-join is inspired by an algorithm called 'Jive-join' developed by Li and Ross. Stripe-join makes a single sequential pass through each input relation, in addition to one pass through the join index and two passes through a set of temporary files that contain tuple identifiers but no input tuples. Stripe-join performs this efficiently even when the input relations are much larger than main memory, as long as the number of blocks in main memory is of the order of the square root of the number of blocks in the participating relations. Stripe-join is particularly efficient for self-joins. To our knowledge, Stripe-join is the first algorithm that, given a join index and a relation significantly larger than main memory, can perform a self-join with just a single pass over the input relation and without storing input tuples in intermediate files. Almost all the I/O is sequential, thus minimizing the impact of seek and rotational latency. The algorithm is resistant to data skew. It can also join multiple relations while still making only a single pass over each input relation. Using a detailed cost model, Stripe-join is analyzed and compared with competing algorithms. For large input relations, Stripe-join performs significantly better than Valduriez's algorithm and hash join algorithms. We demonstrate circumstances under which Stripe-join performs significantly better than Jive-join. Unlike Jive-join, Stripe-join makes no assumptions about the order of the join index.
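The memory condition stated above is the algorithm's key requirement and is worth writing out; the constant c below is an assumption, since the abstract only states the order of growth.

```latex
% Stripe-join's buffer requirement: M blocks of main memory suffice for
% input relations of B disk blocks whenever
M \;\ge\; c\,\sqrt{B} \quad \text{for some constant } c.
% e.g., a 10^6-block input relation needs buffers on the order of
% \sqrt{10^6} = 10^3 blocks, three orders of magnitude less than the data.
```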

5.
Fast joins using join indices
Two new algorithms, "Jive join" and "Slam join," are proposed for computing the join of two relations using a join index. The algorithms are duals: Jive join range-partitions input relation tuple ids and then processes each partition, while Slam join forms ordered runs of input relation tuple ids and then merges the results. Both algorithms make a single sequential pass through each input relation, in addition to one pass through the join index and two passes through a temporary file, whose size is half that of the join index. Both algorithms require only that the number of blocks in main memory is of the order of the square root of the number of blocks in the smaller relation. By storing intermediate and final join results in a vertically partitioned fashion, our algorithms need to manipulate less data in memory at a given time than other algorithms. The algorithms are resistant to data skew and adaptive to memory fluctuations. Selection conditions can be incorporated into the algorithms. Using a detailed cost model, the algorithms are analyzed and compared with competing algorithms. For large input relations, our algorithms perform significantly better than Valduriez's algorithm, the TID join algorithm, and hash join algorithms. An experimental study is also conducted to validate the analytical results and to demonstrate the performance characteristics of each algorithm in practice.

6.
7.
We introduce the problem of diverse dimension decomposition in transactional databases, where a dimension is a set of mutually exclusive itemsets. The problem we consider requires finding a decomposition of the itemset space into dimensions that are orthogonal to each other and provide high coverage of the input database. The mining framework we propose can be interpreted as a dimensionality-reducing transformation from the space of all items to the space of orthogonal dimensions. Relying on information-theoretic concepts, we formulate the diverse dimension decomposition problem with a single objective function that simultaneously captures constraints on coverage, exclusivity, and orthogonality. We show that our problem is NP-hard, and we propose a greedy algorithm exploiting the well-known FP-tree data structure. Our algorithm is equipped with strategies for pruning the search space that derive directly from the objective function. We also prove a property that allows assessing the informativeness of newly added dimensions, which yields criteria for terminating the decomposition. We demonstrate the effectiveness of our solution by experimental evaluation on synthetic datasets with known dimensions and on three real-world datasets: flickr, del.icio.us, and dblp. The problem we study is largely motivated by applications in the domain of collaborative tagging; however, the mining task we introduce is useful in other application domains as well.

8.
We propose a new multi-attribute index. Our approach combines the hB-tree, a multi-attribute index, and the Π-tree, an abstract index which offers efficient concurrency and recovery methods. We call the resulting method the hBΠ-tree. We describe several versions of the hBΠ-tree, each using a different node-splitting and index-term-posting algorithm. We also describe a new node deletion algorithm. We have implemented all the versions of the hBΠ-tree. Our performance results show that even the version that offers no performance guarantees actually performs very well in terms of storage utilization, index size (fan-out), and exact-match and range searching, under various data types and distributions. We have also shown that our index is fairly insensitive to increases in dimension, making it suitable for indexing high-dimensional applications. This property, and the fact that all versions of the hBΠ-tree can use the Π-tree concurrency and recovery algorithms, make the hBΠ-tree a promising candidate for inclusion in a general-purpose DBMS.

9.
The fractal dimension effectively characterizes a dataset and reveals regularities hidden in complex data, and data mining algorithms based on fractal theory typically involve computing it. However, existing methods for computing the fractal dimension have high time and space complexity, which greatly reduces algorithm efficiency and makes such algorithms ill-suited to high-speed, high-volume data stream environments. This paper therefore reviews and analyzes several existing methods for computing the fractal dimension and proposes a randomized method that uses a fixed amount of memory to quickly estimate the correlation dimension of a data stream. Comparative experiments against existing algorithms demonstrate the effectiveness of the randomized algorithm.
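The abstract does not describe the randomized estimator itself; for orientation, the correlation dimension it targets is conventionally estimated from the correlation sum, as in the Grassberger-Procaccia algorithm. Below is a plain batch sketch of that quantity, not the paper's fixed-memory streaming version:

```python
import numpy as np

def correlation_dimension(X, radii):
    """Batch Grassberger-Procaccia sketch: estimate the correlation
    dimension as the slope of log C(r) versus log r, where C(r) is the
    fraction of point pairs closer than r. Not the paper's fixed-memory
    streaming estimator -- just the quantity it approximates."""
    n = len(X)
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    pairs = d[np.triu_indices(n, k=1)]              # all pairwise distances
    C = np.array([(pairs < r).mean() for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(C), 1)
    return slope

rng = np.random.default_rng(0)
X = rng.random((500, 2))                            # points filling the unit square
print(correlation_dimension(X, radii=np.geomspace(0.05, 0.3, 8)))  # approx. 2
```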

10.
We describe JGAP, a web-based platform for designing and implementing Java-coded graph algorithms. The platform contains a library of common data structures for implementing graph algorithms, features a 'plug-and-play' modular design for adding new algorithm modules, and includes a performance meter to measure the execution time of implemented algorithms. JGAP is also equipped with a graph editor to generate and modify graphs with specific properties. JGAP's graphical user interface further allows users to compose, in a functional way, computation sequences from existing algorithm modules, so that the output of one algorithm is used as input to another. Hence, JGAP can be viewed as a visual graph calculator for experimenting with and teaching graph algorithm design.

11.
A parallel algorithm for regular triangulations is presented. For the purposes of fully dynamic and kinetic particle simulations, it allows vertex insertion, deletion, movement, and weight changes. We describe new algorithms for the incremental construction of regular triangulations and for parallel vertex deletion and insertion. Finally, a parallel Lawson flip algorithm for vertex displacements is presented. The performance analysis demonstrates significant parallel efficiency for various system sizes and change workloads.

12.
钱雪忠, 惠亮. 《计算机应用》, 2011, 31(5): 1339-1343
Maximal frequent pattern mining algorithms based on the FP-tree are currently among the most efficient frequent pattern mining algorithms, but they must recursively build conditional FP-trees and generate large numbers of candidate maximal frequent itemsets. Building on an analysis of the FPMax and DMFIA algorithms, this paper proposes BDRFI, a maximal frequent pattern mining algorithm based on dimensionality reduction. BDRFI replaces the traditional FP-tree with a digital frequent pattern tree (DFP-tree), which speeds up superset checking; a predictive pruning strategy reduces the number of mining passes; and a mining scheme based on reducing itemset dimensionality cuts the number of candidates and avoids recursively generating conditional frequent pattern trees, improving overall efficiency. Experimental results show that BDRFI is two to eight times as fast as comparable algorithms.
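The superset check that BDRFI accelerates is the standard one in maximal-pattern mining: a newly found frequent itemset is maximal only if no already-kept maximal set contains it. A naive version, without the DFP-tree encoding the paper uses, looks like this:

```python
def update_maximal(maximal, candidate):
    """Keep `maximal` as a list of maximal frequent itemsets (as sets).
    Naive superset checking; BDRFI's DFP-tree exists to avoid exactly
    these linear scans."""
    cand = set(candidate)
    if any(cand <= m for m in maximal):               # subsumed: not maximal
        return maximal
    maximal = [m for m in maximal if not (m < cand)]  # drop newly subsumed sets
    maximal.append(cand)
    return maximal

sets = []
for c in [{"a", "b"}, {"a"}, {"a", "b", "c"}, {"b", "d"}]:
    sets = update_maximal(sets, c)
print(sets)  # [{'a', 'b', 'c'}, {'b', 'd'}] -- element order within sets may vary
```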

13.
董林, 舒红. 《计算机应用》, 2013, 33(11): 3049-3051
Obtaining interesting and valid spatial association rules usually requires running the mining operation many times, so incremental maintenance algorithms can be used to improve efficiency. However, no incremental association rule update algorithm had been available that works directly on spatial data. To fill this gap, this paper analyzes strategies for maintaining rules after mining thresholds change or the spatial dataset is updated, whether by filtering existing rules or by incremental mining, and proposes ISA, an incremental mining algorithm for two cases: a decreased support threshold and the addition of spatial layers. ISA does not depend on building or updating a spatial transaction table and can take spatial layers directly as input. In experiments on real data, ISA produced the same results as an Apriori-style algorithm while reducing run time by 20.0% to 71.0%; in addition, a filtering-based update of 1,372,772 rules took less than 0.1 s. The results show that the proposed incremental maintenance strategies and algorithm for spatial association rules are feasible, correct, and efficient.
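The filtering-based maintenance mentioned at the end is the cheap case: when thresholds are raised, the new rule set is a subset of the old one, so no re-mining is needed. A minimal sketch with hypothetical rule records, not the paper's data structures:

```python
def filter_rules(rules, min_support, min_confidence):
    """Raised thresholds only ever shrink the rule set, so maintenance
    is a single scan over the previously mined rules."""
    return [r for r in rules
            if r["support"] >= min_support and r["confidence"] >= min_confidence]

rules = [
    {"rule": "near(road) -> commercial", "support": 0.12, "confidence": 0.80},
    {"rule": "near(river) -> farmland",  "support": 0.05, "confidence": 0.90},
]
print(filter_rules(rules, min_support=0.10, min_confidence=0.75))
# keeps only the first rule
```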

14.
田华, 何翼. 《计算机应用研究》, 2020, 37(12): 3586-3589
To address the scalability of big-data analysis on massively parallel distributed systems and software platforms, this paper presents a big-data mining technique based on CLUBS (CLustering Using Binary Splitting), a parameter-free centroid-based method. The technique works in a fully unsupervised fashion: divisive clustering driven by a least-quadratic-distance criterion separates data from noise, an intermediate refinement step identifies blocks containing only outliers and produces complete clusters for the remaining blocks, and a parallel version of CLUBS is designed to cluster big data quickly and effectively. Experiments show that the parallel CLUBS algorithm is insensitive to data dimensionality and noise and is faster and more scalable than existing algorithms.
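The binary-splitting core can be pictured as recursive two-way partitioning: split a cluster whenever the split reduces the quadratic (sum-of-squared-distance) cost, and stop otherwise. The sketch below uses k=2 Lloyd iterations as the splitter and a plain relative-SSE stopping test, both assumptions; CLUBS proper is parameter-free and has its own splitting criterion.

```python
import numpy as np

def sse(X):
    """Quadratic cost of one cluster: sum of squared distances to its mean."""
    return ((X - X.mean(0)) ** 2).sum()

def two_means(X, iters=10, seed=0):
    """Tiny k=2 Lloyd's algorithm used only as the splitter."""
    rng = np.random.default_rng(seed)
    M = X[rng.choice(len(X), 2, replace=False)]
    lab = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        lab = ((X[:, None, :] - M[None]) ** 2).sum(-1).argmin(1)
        if (lab == 0).all() or (lab == 1).all():
            break                                    # degenerate split
        M = np.array([X[lab == j].mean(0) for j in (0, 1)])
    return lab

def binary_split(X, min_gain=1e-3, min_size=5):
    """Recursive divisive clustering: keep splitting while a 2-way split
    reduces the quadratic cost enough."""
    if len(X) < 2 * min_size:
        return [X]
    lab = two_means(X)
    parts = [X[lab == 0], X[lab == 1]]
    if min(len(p) for p in parts) < min_size:
        return [X]
    if sse(X) - sum(sse(p) for p in parts) < min_gain * sse(X):
        return [X]                                   # split not worth keeping
    return binary_split(parts[0]) + binary_split(parts[1])

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.3, (100, 2)) for c in ([0, 0], [4, 4], [0, 4])])
print([len(c) for c in binary_split(X)])             # roughly three clusters of ~100
```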

15.
An efficient incremental update algorithm for association rules
Association rules are an important research topic in data mining. Many algorithms have been proposed for efficiently discovering association rules in large databases, but far less work has addressed updating and maintaining rules that have already been discovered. This paper proposes an incremental update algorithm based on the frequent pattern tree (FP-tree) to handle the rule-update problem that arises when new transaction data are added to a transaction database, and analyzes its performance.
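The incremental setting addressed here (and in the next abstract) reduces to merging counts: when a block of new transactions arrives, scan only that block and combine its counts with those already mined, rather than re-mining the union from scratch. A minimal count-merging sketch that ignores the FP-tree machinery and sidesteps the old-data-rescan problem by keeping exhaustive counts up to a small itemset length, an assumption the real algorithms avoid:

```python
from collections import Counter
from itertools import combinations

def count_itemsets(transactions, max_len=2):
    """Count all itemsets up to max_len items per transaction set."""
    c = Counter()
    for t in transactions:
        for k in range(1, max_len + 1):
            c.update(combinations(sorted(set(t)), k))
    return c

# old database: counts already on hand from the previous mining run
old_counts, old_n = count_itemsets([["a", "b"], ["a", "c"], ["b", "c"]]), 3
# increment: only the new block is scanned
new_tx = [["a", "b"], ["a", "b", "c"]]
total, n = old_counts + count_itemsets(new_tx), old_n + len(new_tx)

min_sup = 0.5
print({i: c for i, c in total.items() if c / n >= min_sup})
# {('a',): 4, ('b',): 4, ('c',): 3, ('a', 'b'): 3}
```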

16.
A practical incremental update algorithm for association rules
薛锦, 陈原斌. 《计算机工程与应用》, 2003, 39(13): 212-213, 217
Association rules are an important research topic in data mining. Many algorithms have been proposed for efficiently discovering association rules in large databases, but far less work has addressed updating and maintaining rules that have already been discovered. This paper proposes a practical incremental update algorithm for association rules to handle the rule-update problem that arises when new transaction data are added to a transaction database, and analyzes its performance.

17.
Business intelligence and bioinformatics applications increasingly require mining datasets consisting of millions of data points, or building real-time, enterprise-level decision support systems for large corporations and drug companies. In all cases there needs to be an underlying data mining system, and that system must be highly scalable. To this end, we describe a new rule learner called DataSqueezer. The learner belongs to the family of inductive supervised rule extraction algorithms. DataSqueezer is a simple, greedy rule builder that generates a set of production rules from labeled input data. In spite of its relative simplicity, DataSqueezer is a very effective learner. The rules generated by the algorithm are compact and comprehensible, with accuracy comparable to rules generated by other state-of-the-art rule extraction algorithms. The main advantages of DataSqueezer are its very high efficiency and its resistance to missing data. DataSqueezer exhibits log-linear asymptotic complexity in the number of training examples, and it is faster than other state-of-the-art rule learners. The learner is also robust to large quantities of missing data, as verified by extensive experimental comparison with other learners. DataSqueezer is thus well suited to modern data mining and business intelligence tasks, which commonly involve huge datasets with a large fraction of missing data.
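DataSqueezer's exact procedure is not given here, but "greedy rule builder that generates production rules from labeled data" describes the classic sequential-covering scheme: grow one rule by greedily adding the best attribute test, remove the positives it covers, repeat. A bare-bones sketch of that generic scheme, a stand-in for the idea rather than the published algorithm:

```python
def learn_rules(examples, target):
    """Generic sequential covering: grow one conjunctive rule by greedily
    adding the attribute test covering the most remaining positives, drop
    the positives it covers, repeat. Not the published DataSqueezer."""
    def matches(rule, e):
        return all(e.get(a) == v for a, v in rule.items())

    positives = [e for e in examples if e["label"] == target]
    negatives = [e for e in examples if e["label"] != target]
    rules = []
    while positives:
        rule, covered = {}, positives
        while any(matches(rule, n) for n in negatives):
            cands = {(a, v) for e in covered for a, v in e.items()
                     if a != "label" and a not in rule}
            if not cands:
                break                                 # cannot separate further
            a, v = max(cands, key=lambda t: sum(e.get(t[0]) == t[1] for e in covered))
            rule[a] = v
            covered = [e for e in covered if e.get(a) == v]
        rules.append(rule)
        positives = [e for e in positives if not matches(rule, e)]
    return rules

data = [{"sky": "sunny", "wind": "weak", "label": "play"},
        {"sky": "sunny", "wind": "strong", "label": "play"},
        {"sky": "rainy", "wind": "weak", "label": "stay"}]
print(learn_rules(data, "play"))  # [{'sky': 'sunny'}]
```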

18.
Association rule mining is an important task in data mining. Traditional approaches have high computational complexity and low efficiency, whereas intelligent algorithms maintain population diversity and robustness during search. This paper proposes an association rule mining model based on an immune clonal cultural algorithm, which embeds an immune clonal algorithm within the framework of a cultural algorithm. The global convergence of the immune clonal algorithm is exploited to search the database rapidly for frequent itemsets and extract association rules of interest to the user, while the knowledge structures in the cultural algorithm's belief space guide the evolution of the population, making the search more purposeful and directed. Experiments show that the model runs quickly and improves the accuracy of the resulting association rules.

19.
Set-oriented data mining in relational databases
Data mining is an important real-life application for businesses. It is critical to find efficient ways of mining large data sets. In order to benefit from the experience with relational databases, a set-oriented approach to mining data is needed. In such an approach, the data mining operations are expressed in terms of relational or set-oriented operations. Query optimization technology can then be used for efficient processing.

In this paper, we describe set-oriented algorithms for mining association rules. Such algorithms imply performing multiple joins and thus may appear to be inherently less efficient than special-purpose algorithms. We develop new algorithms that can be expressed as SQL queries, and discuss optimization of these algorithms. After analytical evaluation, an algorithm named SETM emerges as the algorithm of choice. Algorithm SETM uses only simple database primitives, viz., sorting and merge-scan join. Algorithm SETM is simple, fast, and stable over the range of parameter values. It is easily parallelized and we suggest several additional optimizations. The set-oriented nature of Algorithm SETM makes it possible to develop extensions easily and its performance makes it feasible to build interactive data mining tools for large databases.
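The set-oriented idea, expressing candidate generation and support counting as joins and aggregation so the DBMS optimizer does the work, is easy to demonstrate with an embedded-SQL toy. This is a minimal illustration in the SETM spirit, not the paper's actual query sequence:

```python
import sqlite3

# Frequent pairs the set-oriented way: candidates come from a self-join
# on transaction id, support from GROUP BY/HAVING.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (tid INTEGER, item TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(1, "a"), (1, "b"), (2, "a"), (2, "b"), (2, "c"), (3, "a")])

min_count = 2
pairs = con.execute("""
    SELECT x.item, y.item, COUNT(*) AS sup
    FROM t x JOIN t y ON x.tid = y.tid AND x.item < y.item
    GROUP BY x.item, y.item
    HAVING COUNT(*) >= ?""", (min_count,)).fetchall()
print(pairs)  # [('a', 'b', 2)]
```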


20.
Maximal frequent sequence discovery is an important branch of data mining. This paper proposes MAXSeq, an algorithm for discovering the set of maximal frequent sequences. By selectively extending potentially maximal frequent sequences, the algorithm determines directly whether a sequence is maximal, without maintaining a set of candidate maximal sequences, which significantly reduces memory overhead. Appropriate use of optimization strategies also plays a crucial role in reducing CPU time.
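The maximality test underlying any such algorithm is sequence containment: a sequence is not maximal if some other frequent sequence contains it as a subsequence. The standard greedy containment check, shown here as a general illustration independent of MAXSeq's extension strategy:

```python
def is_subsequence(s, t):
    """True if sequence s is contained in sequence t, itemset-wise:
    each element of s must be a subset of some later element of t,
    in order. `any` advances the shared iterator, giving a greedy scan."""
    it = iter(t)
    return all(any(set(e) <= set(x) for x in it) for e in s)

# ('a')('c') is contained in ('a','b')('c'), so it cannot be maximal
print(is_subsequence([("a",), ("c",)], [("a", "b"), ("c",)]))  # True
print(is_subsequence([("c",), ("a",)], [("a", "b"), ("c",)]))  # False (order matters)
```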
