Similar Documents
1.
In this paper, we study the problem of efficiently computing k-medians over high-dimensional and high-speed data streams. The focus of this paper is on minimizing CPU time to handle high-speed data streams on top of the requirements of high accuracy and small memory. Our work is motivated by the following observation: the existing algorithms have similar approximation behaviors in practice, even though they make noticeably different worst-case theoretical guarantees. The underlying reason is that, in order to achieve a high approximation level with the smallest possible memory, they need rather complex techniques to maintain a sketch along the time dimension by using existing off-line clustering algorithms. Those clustering algorithms cannot guarantee optimal clustering results over the data segments in a data stream and accumulate errors over segments, which makes most algorithms behave the same in terms of approximation level in practice. We propose a new grid-based approach which divides the entire data set into cells (not along the time dimension). We can achieve a high approximation level based on a novel concept called (1 - ε)-dominant. We further extend the method to the data stream context by leveraging a density-based heuristic and frequent-item mining techniques over data streams. We only need to apply an existing clustering algorithm once to compute the k-medians, on demand, which reduces CPU time significantly. We conducted extensive experimental studies and show that our approaches outperform other well-known approaches.
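The cell-based, cluster-on-demand idea can be pictured with a small sketch. The GridSketch class below, its cell width, and the use of weighted k-means as a stand-in for the off-line k-median step are all illustrative assumptions, not the authors' algorithm: the stream is summarized into grid cells, and a single clustering pass runs over the weighted cell representatives only when results are requested.

```python
# Illustrative sketch only: a grid summary of a stream, clustered once on demand.
# The cell width and the weighted k-means stand-in for k-medians are assumptions.
import numpy as np
from collections import defaultdict
from sklearn.cluster import KMeans

class GridSketch:
    def __init__(self, cell_width=0.5):
        self.cell_width = cell_width
        self.counts = defaultdict(int)   # cell id -> number of points seen
        self.sums = {}                   # cell id -> coordinate sums

    def insert(self, point):
        """Map a streaming point to its grid cell and update the cell summary."""
        cell = tuple(np.floor(np.asarray(point) / self.cell_width).astype(int))
        if self.counts[cell] == 0:
            self.sums[cell] = np.zeros(len(point))
        self.counts[cell] += 1
        self.sums[cell] += np.asarray(point)

    def k_medians_on_demand(self, k):
        """Run one clustering pass over cell representatives, weighted by cell counts."""
        cells = list(self.counts)
        reps = np.array([self.sums[c] / self.counts[c] for c in cells])
        weights = np.array([self.counts[c] for c in cells], dtype=float)
        km = KMeans(n_clusters=k, n_init=10).fit(reps, sample_weight=weights)
        return km.cluster_centers_

stream = np.random.rand(10000, 3)
sketch = GridSketch(cell_width=0.1)
for p in stream:
    sketch.insert(p)
print(sketch.k_medians_on_demand(k=5))
```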

3.
Clustering analysis is an unsupervised method for finding hidden structures in datasets. Most partitional clustering algorithms are sensitive to the selection of initial exemplars and to outliers and noise. In this paper, a novel technique called the data competition algorithm is proposed to solve these problems. First, the concept of an aggregation field model is defined to describe the partitional clustering problem. Next, exemplars are identified according to data competition. Then, members are assigned to suitable clusters. The data competition algorithm is able to avoid poor solutions caused by unlucky initializations, outliers and noise, and can be used to detect co-expressed genes, cluster images, diagnose diseases, distinguish varieties, etc. The provided experimental results validate the feasibility and effectiveness of the proposed schemes and show that the data competition algorithm is simple, stable and efficient. The experimental results also show that data competition clustering outperforms three of the most well-known clustering algorithms: K-means clustering, affinity propagation clustering and hierarchical clustering.

4.
Generally, abnormal points (noise and outliers) cause cluster analysis to produce low accuracy, especially in fuzzy clustering. Such data not only end up inside clusters but also pull the centroids away from their true positions. Traditional fuzzy clustering such as Fuzzy C-Means (FCM) always assigns data to all clusters, which is not reasonable in some circumstances. By reformulating the objective function in exponential form, the algorithm aggressively selects data into the clusters. However, noisy data and outliers cannot be properly handled by the clustering process; they are forced into a cluster because of the general probabilistic constraint that the sum of the membership degrees across all clusters is one. To address this weakness, the possibilistic approach relaxes this condition to improve membership assignment. Nevertheless, possibilistic clustering algorithms generally suffer from coincident clusters because their membership equations ignore the distance to other clusters. Although there are possibilistic clustering approaches that do not generate coincident clusters, most of them require the right combination of multiple parameters for the algorithm to work. In this paper, we theoretically study Possibilistic Exponential Fuzzy Clustering (PXFCM), which integrates the possibilistic approach with exponential fuzzy clustering. PXFCM has only one parameter, and it not only partitions the data but also filters noisy data or detects them as outliers. Comprehensive experiments show that PXFCM produces high accuracy in both clustering results and outlier detection without generating coincident clusters.
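To make the possibilistic relaxation concrete, the sketch below computes a generic exponential-style typicality in which each cluster's membership is independent of the others, so rows need not sum to one and weakly attached points fall out as outliers. The function names, the bandwidth eta and the threshold are assumptions for illustration; these are not the PXFCM update equations.

```python
# Hedged illustration: a generic possibilistic/exponential-style membership,
# not the actual PXFCM equations from the paper.
import numpy as np

def exponential_typicality(X, centers, eta=1.0):
    """Typicality of each point to each cluster; computed per cluster,
    so the rows need not sum to one (unlike probabilistic FCM)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / eta)

def flag_outliers(typicality, threshold=0.01):
    """Points whose best typicality is low belong strongly to no cluster."""
    return typicality.max(axis=1) < threshold

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5, [[30.0, 30.0]]])
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
T = exponential_typicality(X, centers, eta=2.0)
print("flagged as outliers:", np.where(flag_outliers(T))[0])
```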

5.
Mining streaming data is a hot topic in data mining. When performing classification on data streams, traditional classification algorithms based on decision trees, such as ID3 and C4.5, have relatively poor efficiency in both time and space due to the characteristics of streaming data. Random decision trees offer some advantages in time and space. An incremental algorithm for mining data streams, SRMTDS (Semi-Random Multiple decision Trees for Data Streams), based on random decision trees is proposed in this paper. SRMTDS uses the Hoeffding bound inequality to choose the minimum number of split examples, a heuristic method to compute the information gain for obtaining the split thresholds of numerical attributes, and a Naive Bayes classifier to estimate the class labels of tree leaves. Our extensive experimental study shows that SRMTDS improves on VFDTc, a state-of-the-art decision-tree algorithm for classifying data streams, in time, space, accuracy and anti-noise capability.
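The Hoeffding bound referred to here is a standard tool in stream decision trees: with high probability it bounds how far an observed mean (such as an information gain estimated from n examples) can be from its true value, so a split can be committed once the gap between the two best candidates exceeds the bound. The sketch below shows the bound and a generic split test; how SRMTDS applies it is described in the paper, so the test here is only an illustration.

```python
# Illustrative only: the Hoeffding bound used by many stream decision trees to
# decide when enough examples have been seen to commit to a split. The function
# names and the way the test is applied are assumptions, not SRMTDS's procedure.
import math

def hoeffding_bound(value_range, delta, n):
    """With probability 1 - delta, the observed mean of n samples is within
    this epsilon of the true mean, for a statistic bounded by value_range."""
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

def can_split(best_gain, second_gain, value_range, delta, n):
    """Split once the gap between the two best candidate gains exceeds epsilon."""
    return (best_gain - second_gain) > hoeffding_bound(value_range, delta, n)

print(hoeffding_bound(value_range=1.0, delta=1e-6, n=200))
print(can_split(0.42, 0.25, value_range=1.0, delta=1e-6, n=200))
```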

6.
The K-means method is a well-known clustering algorithm with an extensive range of applications, such as biological classification, disease analysis, data mining, and image compression. However, the plain K-means method is not fast when the number of clusters or the number of data points becomes large. A modified K-means algorithm was presented by Fahim et al. (2006). The modified algorithm produced clusters whose mean square error was very similar to that of the plain K-means, but the execution time was shorter. In this study, we try to further increase its speed. There are two rules in our method: a selection rule, used to acquire a good candidate as the initial center to be checked, and an erasure rule, used to delete one or many unqualified centers each time a specified condition is satisfied. Our clustering results are identical to those of Fahim et al. (2006). However, our method further cuts computation time when the number of clusters increases. The mathematical reasoning used in our design is included.
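For context, the Fahim-style shortcut this work builds on can be sketched as follows: after each centroid update, a point whose own centre has moved closer cannot change its assignment, so the full distance scan over all centres is skipped for that point. The code is an assumption-level illustration of that shortcut, not the selection and erasure rules proposed here.

```python
# Sketch of a classic K-means shortcut in the spirit of Fahim et al. (2006).
import numpy as np

def fast_kmeans(X, centers, n_iter=20):
    # initial full assignment
    d_all = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = d_all.argmin(axis=1)
    dist_to_own = d_all[np.arange(len(X)), labels]
    for _ in range(n_iter):
        for k in range(len(centers)):                  # standard centroid update
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
        for i, x in enumerate(X):
            d_own = np.linalg.norm(x - centers[labels[i]])
            if d_own <= dist_to_own[i]:                # centre moved closer: keep label
                dist_to_own[i] = d_own
                continue
            d = np.linalg.norm(centers - x, axis=1)    # otherwise do the full scan
            labels[i] = int(d.argmin())
            dist_to_own[i] = d[labels[i]]
    return labels, centers

X = np.random.rand(500, 2)
init = X[np.random.choice(len(X), 4, replace=False)].copy()
labels, centers = fast_kmeans(X, init)
print(centers)
```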

7.
Clustering in very large databases based on distance and density — Cited: 8
Clustering in very large databases or data warehouses, with many applications in areas such as spatial computation, web information collection, pattern recognition and economic analysis, is a huge task that challenges data mining researchers. Current clustering methods commonly suffer from three problems: 1) scanning the whole database leads to high I/O cost and expensive index maintenance (e.g., the R*-tree); 2) the uncertain parameter k must be pre-specified, so the clustering can only be refined by many rounds of trial and error; 3) they lack efficiency in handling arbitrarily shaped clusters in very large data sets. In this paper, we first present a new hybrid clustering algorithm to solve these problems. This new algorithm, which combines both distance and density strategies, can handle arbitrarily shaped clusters effectively. It makes full use of statistical information during mining to greatly reduce the time complexity while keeping good clustering quality. Furthermore, this algorithm can easily eliminate noise and identify outliers. An experimental evaluation compares this method with other popular clustering algorithms (CURE and DBSCAN) on a spatial database. The results show that our algorithm outperforms them in terms of efficiency and cost, and achieves an even greater speedup as the data size scales up.

8.
For data sets of arbitrary shapes and densities, existing clustering algorithms leave considerable room for improvement. In this paper, clustering is treated as a cognitive problem, and cognitive features are of vital importance to clustering. In combination with a psychological experiment, we propose three cognitive features of clustering and model them as a flexible similarity measurement. Meanwhile, a new clustering framework is put forward to integrate the cognitive features by employing this similarity measurement. Its two attractive advantages are its low complexity and its fitness for various types of data sets, such as data sets of different shapes and densities. Synthetic and real data sets are employed to exhibit the superiority of the new clustering algorithm.

9.
In many sensor network applications, it is essential to obtain the distribution of an attribute's values over the network. Such a data distribution can be obtained through clustering, which partitions the network into contiguous regions, each of which contains sensor nodes with a range of similar readings. This paper proposes a method named Distributed, Hierarchical Clustering (DHC) for online data analysis and mining in sensor networks. Different from the acquisition and aggregation of raw sensory data, DHC clusters sensor nodes based on their current data values as well as their geographical proximity, and computes a summary for each cluster. Furthermore, these clusters, together with their summaries, are produced in a distributed, bottom-up manner. The resulting hierarchy of clusters and their summaries facilitates interactive data exploration at multiple resolutions. It can also be used to improve the efficiency of data-centric routing and query processing in sensor networks. We also design and evaluate maintenance mechanisms that enable DHC to work on evolving data. Our simulation results on real-world as well as synthetic datasets show the effectiveness and efficiency of our approach.

10.
Clustering has long been an important data processing task in different applications. Typically, it attempts to partition the available data into groups according to their underlying distributions, and each cluster is represented by a center or an exemplar. In this paper, a new clustering algorithm called gravitational-force-based affinity propagation (GAP) is proposed, based on the well-known Newton's law of universal gravitation. It views the available data points as nodes of a network (or planets of a universe), and the clusters and their corresponding exemplars are obtained by transmitting affinity messages based on the gravitational forces between data points in the network. While GAP is inspired by the recently proposed affinity propagation (AP) clustering approach, it provides a new definition of the similarity between data points, which makes the AP process more convincing and at the same time facilitates differentiating the importance of data points. The experimental results show that the GAP clustering algorithm, with comparable clustering accuracy, is even more efficient than the original AP clustering approach.
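A rough way to picture the idea is to replace AP's usual negative-squared-distance similarity with a gravity-style attraction and feed it to a standard affinity propagation implementation. The mass definition (a neighbour count), the radius and the self-preference below are illustrative assumptions; this is not the GAP algorithm described in the paper.

```python
# Hedged sketch: gravitational-force similarities fed into standard affinity
# propagation. The "mass" (a local-density estimate) is an assumption.
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.metrics import pairwise_distances

X = np.vstack([np.random.randn(60, 2), np.random.randn(60, 2) + 6])
D = pairwise_distances(X)

# "Mass" of a point: how many neighbours fall within a fixed radius.
mass = (D < 1.5).sum(axis=1).astype(float)

# Newton-style attraction: larger for heavy, nearby pairs of points.
eps = 1e-9
S = np.outer(mass, mass) / (D ** 2 + eps)
np.fill_diagonal(S, np.median(S))            # self-preference, as in plain AP

ap = AffinityPropagation(affinity="precomputed", random_state=0).fit(S)
print("number of clusters:", len(ap.cluster_centers_indices_))
```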

11.
Density-based clustering algorithms are attractive for the task of class identification in spatial databases. However, in many cases clusters of very different local density exist in different regions of the data space, so the DBSCAN method [M. Ester, H.-P. Kriegel, J. Sander, X. Xu, A density-based algorithm for discovering clusters in large spatial databases with noise, in: E. Simoudis, J. Han, U.M. Fayyad (Eds.), Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, Portland, OR, AAAI, Menlo Park, CA, 1996, pp. 226–231], which uses a global density parameter, is not suitable. Although OPTICS [M. Ankerst, M.M. Breunig, H.-P. Kriegel, J. Sander, OPTICS: ordering points to identify the clustering structure, in: A. Delis, C. Faloutsos, S. Ghandeharizadeh (Eds.), Proceedings of ACM SIGMOD International Conference on Management of Data, Philadelphia, PA, ACM, New York, 1999, pp. 49–60] provides an augmented ordering of the database to represent its density-based clustering structure, it only generates clusters whose local density exceeds certain thresholds, not clusters of similar local density; in addition, it does not produce the clusters of a data set explicitly. Furthermore, the parameters required by almost all major clustering algorithms are hard to determine even though they significantly affect the clustering result. In this paper, a new clustering algorithm, LDBSCAN, relying on a local-density-based notion of clusters is proposed. In this technique, the selection of appropriate parameters is not difficult; compared with other density-based clustering algorithms, it also takes advantage of LOF [M.M. Breunig, H.-P. Kriegel, R.T. Ng, J. Sander, LOF: identifying density-based local outliers, in: W. Chen, J.F. Naughton, P.A. Bernstein (Eds.), Proceedings of ACM SIGMOD International Conference on Management of Data, Dallas, TX, ACM, New York, 2000, pp. 93–104] to detect noise. The proposed algorithm has potential applications in business intelligence.
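As a crude approximation of combining local-density estimates with noise detection (not LDBSCAN itself, which works directly with local reachability densities), one can flag likely noise with LOF first and then cluster the remaining points; the neighbourhood size, eps and min_samples below are illustrative assumptions.

```python
# Crude approximation for illustration only, not the LDBSCAN algorithm.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (100, 2)),    # dense cluster
               rng.normal(5, 1.0, (100, 2)),    # sparser cluster
               rng.uniform(-3, 9, (10, 2))])    # scattered noise

lof = LocalOutlierFactor(n_neighbors=15)
is_inlier = lof.fit_predict(X) == 1             # -1 marks likely noise

labels = np.full(len(X), -1)
labels[is_inlier] = DBSCAN(eps=0.8, min_samples=5).fit_predict(X[is_inlier])
print("clusters found:", len(set(labels[labels >= 0])))
```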

12.
An online density-based clustering algorithm for spatial data streams — Cited: 2
于彦伟, 王沁, 邝俊, 何杰. 《自动化学报》, 2012, 38(6): 1051-1059
To cluster arbitrarily shaped clusters in spatial data streams, this paper proposes an online density-based clustering algorithm for spatial data streams (OLDStream). The algorithm clusters incremental spatial data on top of previous clustering results: it performs a local clustering update only for each newly arrived spatial point and for the neighborhood data satisfying the core-point condition, which lowers the time complexity of cluster updates and achieves online clustering of spatial data streams. OLDStream can process large-scale spatial data streams quickly, obtain globally valid clusters of arbitrary shape in real time, is insensitive to the input order of the data stream, and can detect isolated (outlier) points. Comprehensive experiments on real and synthetic data verify the clustering quality, high efficiency and good scalability of the algorithm; a statistical analysis of the results further shows that only 4% of the spatial points incur the worst-case running time, and the average clustering time per spatial point is about 0.033 ms.
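The local-update idea, where only the new point and its eps-neighbourhood are touched, can be sketched as below. The union-find bookkeeping, the parameters and the noise handling are assumptions made for the illustration; this is not the OLDStream algorithm.

```python
# Hedged sketch of incremental density-based clustering: a new point only looks
# at its eps-neighbourhood; if it is a core point, the clusters its neighbours
# belong to are merged via a tiny union-find over cluster labels.
import numpy as np

class IncrementalDensityClusterer:
    def __init__(self, eps=0.5, min_pts=4):
        self.eps, self.min_pts = eps, min_pts
        self.points, self.labels = [], []      # raw label per point (-1 = noise)
        self.parent = {}                       # union-find over labels

    def _find(self, l):
        while self.parent[l] != l:
            self.parent[l] = self.parent[self.parent[l]]
            l = self.parent[l]
        return l

    def insert(self, p):
        p = np.asarray(p, dtype=float)
        if self.points:
            d = np.linalg.norm(np.asarray(self.points) - p, axis=1)
            neigh = np.where(d <= self.eps)[0]
        else:
            neigh = np.array([], dtype=int)
        if len(neigh) + 1 < self.min_pts:      # not a core point: noise for now
            self.points.append(p); self.labels.append(-1)
            return
        roots = {self._find(self.labels[i]) for i in neigh if self.labels[i] != -1}
        if roots:
            label = min(roots)
            for r in roots:                    # merge every touched cluster
                self.parent[r] = label
        else:
            label = len(self.parent)
            self.parent[label] = label
        for i in neigh:                        # absorb noise neighbours locally
            if self.labels[i] == -1:
                self.labels[i] = label
        self.points.append(p); self.labels.append(label)

    def cluster_of(self, i):
        return -1 if self.labels[i] == -1 else self._find(self.labels[i])

clu = IncrementalDensityClusterer(eps=0.6, min_pts=4)
for p in np.vstack([np.random.randn(200, 2), np.random.randn(200, 2) + 5]):
    clu.insert(p)
print("clusters:", len({clu.cluster_of(i) for i in range(len(clu.points))
                        if clu.cluster_of(i) != -1}))
```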

13.
An improved density-based clustering algorithm — Cited: 10
Density-based clustering is one family of clustering algorithms; its main advantages are that it can discover clusters of arbitrary shape and is insensitive to noise. However, existing algorithms of this kind perform poorly when the spatial data are unevenly distributed. In view of this, this paper proposes an improved density-based clustering algorithm that retains the advantages of density-based clustering, handles unevenly distributed data sets effectively, reduces the time complexity, and is suitable for mining and analyzing large-scale databases.

14.
Discovery and maintenance of subspace clusters over high-dimensional data streams — Cited: 3
With the recent surge of data stream applications, research on data mining algorithms based on the data stream model has become an important frontier topic. This paper proposes SHStream, a Hoeffding-bound-based algorithm for discovering and maintaining subspace clusters over high-dimensional data streams. The algorithm divides the data stream into segments, whose length is determined by the Hoeffding bound, performs subspace clustering on each segment, and iteratively refines the result until the required clustering accuracy is met; to cope with the dynamic nature of data streams, it also adjusts and maintains the clustering results. The algorithm can effectively handle high-dimensional data streams and cluster data with arbitrarily shaped distributions. Experiments on real and synthetic data sets show that the algorithm has good applicability and effectiveness.

15.
Research and implementation of a density-based clustering algorithm for complex clusters — Cited: 3
In applications of clustering algorithms to pattern recognition, data analysis, image processing and market research, a key technical problem is how to effectively cluster various kinds of complex data-object clusters. Building on an analysis of existing clustering algorithms, this paper proposes an improved algorithm based on density and adaptive density-reachability. Experiments show that the algorithm can effectively cluster groups with arbitrary distribution shapes, different densities and different scales, and that its computational complexity is significantly lower than that of traditional density-based clustering algorithms.

16.
To address the CluStream algorithm's weakness in clustering non-spherical clusters, and the fact that most clustering algorithms based on uniform grid partitioning raise clustering efficiency at the cost of clustering accuracy, a new data stream clustering algorithm, GTSClu, is presented: a grid-based minimum spanning tree (MST) clustering algorithm for data streams. The algorithm consists of an online processing part and an offline clustering part, and uses grid-splitting and minimum spanning tree techniques; it can effectively remove noisy data and discover arbitrarily ...
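The general grid-plus-MST pattern can be illustrated as follows: summarize points into grid cells, build an MST over the occupied cells, cut unusually long edges, and read clusters off the connected components. The cell width, the density cut-off and the edge threshold are assumptions for the sketch, not GTSClu's actual online/offline procedure.

```python
# Hedged illustration of grid + MST clustering over cell summaries.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import pdist, squareform

X = np.vstack([np.random.randn(300, 2), np.random.randn(300, 2) + 8])
cell_w = 1.0
cells, counts = np.unique(np.floor(X / cell_w).astype(int), axis=0, return_counts=True)
dense = cells[counts >= 3] * cell_w                  # drop sparse cells as noise

mst = minimum_spanning_tree(squareform(pdist(dense))).toarray()
mst[mst > 2.5 * cell_w] = 0                          # cut unusually long MST edges
n_clusters, cell_label = connected_components(mst, directed=False)
print("clusters over dense cells:", n_clusters)
```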

17.
Existing density-based data stream clustering algorithms use a two-phase approach consisting of an online phase, in which raw data is processed to gather summary statistics, and an offline phase that generates the clusters from the summary data. In this article we propose a data stream clustering method based on a multi-agent system that uses a decentralized, bottom-up self-organizing strategy to group similar data points. Data points are associated with agents and deployed onto a 2D space, where they work simultaneously by applying a heuristic strategy based on a bio-inspired model known as the flocking model. Agents move in the space for a fixed time and, when they encounter other agents within a predefined visibility range, they can decide to form a flock if they are similar. Flocks can join to form swarms of similar groups. This strategy allows the two phases of density-based approaches to be merged, and thus avoids the computationally demanding offline cluster computation, since a swarm represents a cluster. Experimental results show that the bio-inspired approach obtains very good results on real and synthetic data sets.
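A toy version of the flocking mechanism described above is sketched below: each agent carries a data point, wanders on a virtual 2D space, and drifts toward visible agents whose data are similar, so spatially tight flocks emerge in place of an offline clustering step. Every constant here (visibility range, similarity threshold, step sizes) is an illustrative assumption, not the paper's calibrated model.

```python
# Toy flocking sketch: cohesion toward similar, visible agents plus random wander.
import numpy as np

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(6, 1, (50, 4))])
pos = rng.uniform(0, 30, (len(data), 2))             # virtual 2D positions

def step(pos, data, visibility=8.0, sim_thresh=3.0, attract=0.3, noise=0.5):
    new_pos = pos.copy()
    for i in range(len(pos)):
        d_space = np.linalg.norm(pos - pos[i], axis=1)
        visible = (d_space < visibility) & (d_space > 0)
        similar = visible & (np.linalg.norm(data - data[i], axis=1) < sim_thresh)
        if similar.any():
            new_pos[i] += attract * (pos[similar].mean(axis=0) - pos[i])  # cohesion
        new_pos[i] += rng.normal(0, noise, 2)                             # wander
    return new_pos

for _ in range(300):
    pos = step(pos, data)
print("flock 1 spread:", pos[:50].std(axis=0).round(2),
      "flock 2 spread:", pos[50:].std(axis=0).round(2))
```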

18.
Research and implementation of a clustering algorithm for complex clusters — Cited: 2
Effectively clustering various kinds of complex data-object clusters is a key technical problem when clustering algorithms are applied to transaction-object partitioning, image segmentation, machine learning and similar tasks. Building on an analysis of existing clustering algorithms, an improved algorithm based on density and adaptive density-reachability is proposed. Experiments show that the algorithm can effectively cluster groups with arbitrary distribution shapes, different densities and different scales, and that its computational complexity is significantly lower than that of traditional density-based clustering algorithms.

19.
An adaptive subspace clustering algorithm for high-dimensional data streams — Cited: 1
Clustering high-dimensional data streams is a hot research topic in data mining. Because data streams are large in volume, change rapidly and are high-dimensional, many clustering algorithms cannot achieve good clustering quality. This paper proposes SAStream, an adaptive subspace clustering algorithm for high-dimensional data streams. The algorithm improves the micro-cluster structure of HPStream and defines candidate clusters; it computes the distance from a newly arrived data point to a candidate cluster's centroid only within the corresponding subspace, which reduces the number of micro-clusters that must be examined during clustering. The resulting micro-clusters are stored in a pyramidal time frame, and a temporal decay function is used to delete expired micro-clusters. When the stream volume is high, the boundary radius and the cluster selection factor are adjusted automatically according to monitored system resource usage, thereby adapting the clustering granularity. Experimental results show that the algorithm achieves good clustering quality and fast data processing.
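The temporal-decay bookkeeping mentioned above can be illustrated with a minimal micro-cluster summary whose weight fades exponentially with time and which is dropped once the weight falls below a threshold. The decay rate, the expiry threshold and the class layout are assumptions for the sketch, not SAStream's tuned structures.

```python
# Hedged sketch of a decayed micro-cluster summary.
import numpy as np

class MicroCluster:
    def __init__(self, point, t, decay=0.01):
        self.decay, self.last_t = decay, t
        self.weight, self.linear_sum = 1.0, np.asarray(point, dtype=float)

    def _fade(self, t):
        f = 2.0 ** (-self.decay * (t - self.last_t))   # exponential time decay
        self.weight *= f
        self.linear_sum *= f
        self.last_t = t

    def absorb(self, point, t):
        self._fade(t)
        self.weight += 1.0
        self.linear_sum += np.asarray(point, dtype=float)

    def center(self, t):
        self._fade(t)
        return self.linear_sum / self.weight

    def expired(self, t, min_weight=0.1):
        self._fade(t)
        return self.weight < min_weight

mc = MicroCluster([1.0, 2.0], t=0)
mc.absorb([1.2, 1.8], t=5)
print(mc.center(t=50), mc.expired(t=5000))
```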

20.
A survey of mining algorithms for concept-drifting data streams — Cited: 1
丁剑, 韩萌, 李娟. 《计算机科学》, 2016, 43(12): 24-29, 62
A data stream is a new kind of data model that is dynamic, unbounded, high-dimensional, ordered, high-speed and changing. In real data stream environments, some data distributions change over time, i.e., they exhibit concept drift; such streams are called variable data streams or concept-drifting data streams. Methods for the data stream model therefore have to cope with time and space constraints and adapt to concept changes. This paper surveys the concept drift problem and the classification, clustering and pattern mining of concept-drifting data streams. It first introduces the types of concept drift and the commonly used concept change detection methods. To cope with concept drift, data stream mining often uses a sliding window model to process recent transactions. Commonly used classification models for data streams include single classifiers and ensemble classifiers, with methods such as decision trees and classification association rules. Data stream clustering approaches are usually divided into k-means-based and non-k-means-based ones. Pattern mining can provide useful information for classification, clustering and association rules. Patterns in concept-drifting data streams include frequent patterns, sequential patterns, episodes, pattern trees, pattern graphs and high-utility patterns. Finally, the paper describes frequent pattern mining algorithms and high-utility pattern mining algorithms in detail.
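A minimal picture of the sliding-window idea mentioned in the survey: compare the error rate over a recent window against a reference window and flag drift when the gap grows large. The window sizes and the threshold below are assumptions for illustration; established detectors such as DDM or ADWIN use statistically grounded tests instead of a fixed gap.

```python
# Hedged sliding-window drift detection sketch (not a specific published detector).
import random
from collections import deque

class SlidingWindowDriftDetector:
    def __init__(self, window=200, threshold=0.15):
        self.reference = deque(maxlen=window)   # older prediction outcomes
        self.recent = deque(maxlen=window)      # newest prediction outcomes
        self.threshold = threshold

    def add(self, correct):
        """Feed one prediction outcome (True/False); return True if drift is flagged."""
        if len(self.recent) == self.recent.maxlen:
            self.reference.append(self.recent.popleft())
        self.recent.append(bool(correct))
        if len(self.reference) < self.reference.maxlen:
            return False
        err_ref = 1.0 - sum(self.reference) / len(self.reference)
        err_new = 1.0 - sum(self.recent) / len(self.recent)
        return (err_new - err_ref) > self.threshold

random.seed(0)
det = SlidingWindowDriftDetector()
for t in range(2000):
    acc = 0.9 if t < 1200 else 0.6              # accuracy drops: simulated drift
    if det.add(random.random() < acc):
        print("drift flagged at t =", t)
        break
```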
