Similar Documents
1.
Distance-Based Outlier Detection over Distributed RFID Data Streams
RFID technology is widely used in real-time monitoring, object identification, tracking, and similar applications, where promptly detecting abnormal states of the monitored tagged objects is essential. However, because wireless communication is unreliable and environmental factors interfere, the data collected by RFID readers often contain noise. Addressing the massive, volatile, unreliable, and distributed nature of distributed RFID data streams, this paper proposes LSOD, a distance-based local stream outlier detection algorithm, and GSOD, a global stream outlier detection algorithm based on approximate estimation. LSOD maintains a data-stream structure, CSL, to identify safe inliers, and then exploits the properties of safe inliers to save storage space and query time over the stream. Under the distance-based definition of outliers, the global outliers at the central node form a subset of the union of the outlier sets at the distributed nodes. GSOD uses sampling to estimate the global outliers approximately, reducing the communication volume and computational load at the central node. Experiments show that the proposed algorithms run quickly, occupy little memory, and achieve high accuracy.

2.
Building on an analysis of existing outlier detection algorithms, this paper proposes a novel clustering-based algorithm for mining outlier sets. The algorithm not only detects all outliers but also classifies them according to the causes that produced them. Tests on experimental data show that the algorithm has good stability and superior performance.

3.
国刚, 王凯. 《数字社区&智能家居》, 2013, (17): 3907-3908, 3917
This paper introduces the concepts of outliers, outlier mining, and the cell-based outlier extraction algorithm. It focuses on the cell-based algorithm as applied to two-dimensional data sets, and analyzes the implementation of the algorithm and its time complexity.

4.
Outlier detection is a valuable and important knowledge-discovery task. When detecting outliers in large data sets, the technique used to choose the sample data set is critical. This paper proposes a new density-biased sampling technique as a means of data reduction and presents an outlier detection algorithm based on density-biased sampling, which can identify outliers in the low-density regions of the sample data set. The algorithm is evaluated both theoretically and experimentally, and analysis and practice show it to be effective.
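The sampling idea above can be sketched as follows. This is a hypothetical, deterministic simplification (density-biased samplers in the literature are probabilistic): points are bucketed into grid cells and the k points from the least-populated cells are kept, so low-density regions, where outliers live, survive the reduction.

```python
from collections import Counter

def density_biased_sample(points, k, cell_width):
    """Deterministic sketch of density-biased sampling: bucket points
    into uniform grid cells and keep the k points from the least
    populated cells, so sparse regions survive data reduction."""
    cell = lambda p: tuple(int(c // cell_width) for c in p)
    counts = Counter(cell(p) for p in points)
    # Stable sort: points in sparse cells come first.
    return sorted(points, key=lambda p: counts[cell(p)])[:k]

pts = [(0.1, 0.1), (0.2, 0.2), (0.3, 0.1), (9.0, 9.0)]
print(density_biased_sample(pts, k=2, cell_width=1.0))
# [(9.0, 9.0), (0.1, 0.1)] -- the isolated point is retained
```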

5.
Research on an Outlier Detection Algorithm Based on the 2k-Distance
杨臻. 《福建电脑》, 2009, 25(2): 77-78
Outlier detection has long been an active area of data mining, with applications such as credit-card fraud and intrusion detection; studying the anomalous behavior of outliers in these domains can uncover valuable knowledge hidden in the data set. This paper introduces the relevant concepts and analyzes several representative algorithms. Finally, it gives a new definition for judging outliers, performs detection under this definition, and runs experiments on real data. The results show that the algorithm detects outliers effectively.

6.
李光兴. 《计算机科学》, 2016, 43(Z6): 236-238, 280
An outlier is a data object whose attributes are inconsistent with those of most of the data set, while a boundary point is a data object lying on the edge of regions of different density. Based on these observations, this paper proposes OBRD, an algorithm for identifying outliers and boundary points from relative density. To decide whether a data point is a boundary point or an outlier, the algorithm bisects the point's neighborhood of radius r along each dimension into two half-neighborhoods; the relative densities of these half-neighborhoods with respect to the full neighborhood determine the point's outlier degree and boundary degree, which are then compared against thresholds. Experiments show that the algorithm identifies outliers and cluster boundary points in multi-density data sets accurately and effectively.
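The half-neighborhood construction can be illustrated with a small sketch; `half_counts` and the toy points are hypothetical, simplified from the OBRD description (raw counts rather than densities, and no thresholding step):

```python
from math import dist

def half_counts(points, center, r):
    """For each dimension, split the r-neighborhood of `center` into
    two half-neighborhoods and return (lower, upper) point counts.
    A balanced split suggests an interior point; a lopsided split
    suggests a boundary point; an empty neighborhood, an outlier."""
    neigh = [p for p in points if p != center and dist(p, center) <= r]
    counts = []
    for d in range(len(center)):
        lower = sum(1 for p in neigh if p[d] <= center[d])
        counts.append((lower, len(neigh) - lower))
    return counts

# (0, 0) sits on the left edge of a small cluster: every neighbor has
# a larger x-coordinate, so the x-split is maximally lopsided.
pts = [(0, 0), (1, 0), (2, 0), (1, 1), (1, -1)]
print(half_counts(pts, (0, 0), 1.5))  # [(0, 3), (2, 1)]
```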

7.
Design of a Clustering Algorithm Based on Density and Distance in Data Mining
田地, 王世卿. 《微机发展》, 2006, 16(10): 49-51
This paper introduces the basic concepts of cluster analysis and surveys related research. It describes the fundamental notions of clustering, data objects, object density, cluster density, distance, and the ε-neighborhood. On this basis, it proposes and analyzes a clustering algorithm based on density and distance, and compares it with other clustering methods to demonstrate its advantages.

8.
Design of a Clustering Algorithm Based on Density and Distance in Data Mining
This paper introduces the basic concepts of cluster analysis and surveys related research. It describes the fundamental notions of clustering, data objects, object density, cluster density, distance, and the ε-neighborhood. On this basis, it proposes and analyzes a clustering algorithm based on density and distance, and compares it with other clustering methods to demonstrate its advantages.

9.
Clustering is an important research problem in data mining. This paper briefly introduces the basic idea of the CLARANS algorithm, describes in detail the idea and steps of an improved CLARANS algorithm, analyzes it further with experimental data, and briefly outlines its application areas.

10.
A Partition-Based Outlier Detection Algorithm
Outliers are data objects that do not share the general characteristics of the data. Partition-based methods divide the space containing the data points into disjoint sets of hyper-rectangular cells, map data objects into the cells, and then detect outliers from per-cell statistics. Because most real data sets are strongly skewed, partitioning produces many empty cells that degrade algorithm performance. This paper therefore proposes a new index structure, the CD-Tree (cell dimension tree), which indexes only the non-empty cells, together with a partition-based notion of data skew, SOD (skew of data), used to optimize the CD-Tree and guide the partitioning. A new outlier detection algorithm is designed on top of the CD-Tree and SOD. Experiments show that, compared with the cell-based algorithm, it improves markedly in both efficiency and the number of dimensions it can handle effectively.
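A minimal sketch of the cell-mapping step, assuming a uniform grid; only non-empty cells get an entry, which is exactly the skew problem an index over occupied cells (such as the CD-Tree) addresses:

```python
from collections import defaultdict

def cell_counts(points, cell_width):
    """Map each point to its grid cell and keep counts only for
    non-empty cells. On skewed data most cells are empty, so storing
    only occupied cells saves both space and scanning time."""
    cells = defaultdict(int)
    for p in points:
        key = tuple(int(c // cell_width) for c in p)
        cells[key] += 1
    return dict(cells)

pts = [(0.2, 0.3), (0.4, 0.1), (5.5, 5.9)]
print(cell_counts(pts, cell_width=1.0))
# {(0, 0): 2, (5, 5): 1}
```

Outlier candidates are then the points in cells whose own and neighboring counts fall below a threshold, without comparing every pair of points.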

11.
Anomaly detection is considered an important data mining task, aiming at the discovery of elements (known as outliers) that show significant diversion from the expected case. More specifically, given a set of objects the problem is to return the suspicious objects that deviate significantly from the typical behavior. As in the case of clustering, the application of different criteria leads to different definitions for an outlier. In this work, we focus on distance-based outliers: an object x is an outlier if there are fewer than k objects lying at distance at most R from x. The problem offers significant challenges when a stream-based environment is considered, where data arrive continuously and outliers must be detected on-the-fly. There are a few research works studying the problem of continuous outlier detection. However, none of these proposals meets the requirements of modern stream-based applications for the following reasons: (i) they demand a significant storage overhead, (ii) their efficiency is limited, and (iii) they lack flexibility in the sense that they assume a single configuration of the k and R parameters. In this work, we propose new algorithms for continuous outlier monitoring in data streams, based on sliding windows. Our techniques are able to reduce the required storage overhead, are more efficient than previously proposed techniques, and offer significant flexibility with regard to the input parameters. Experiments performed on real-life and synthetic data sets verify our theoretical study.
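The distance-based definition above can be checked naively over a single window; this sketch recomputes neighbor counts from scratch, which is exactly the recomputation the paper's sliding-window algorithms are designed to avoid:

```python
from math import dist

def window_outliers(window, k, R):
    """Flag each point in the current window as an outlier if fewer
    than k other points lie within distance R of it."""
    outliers = []
    for i, x in enumerate(window):
        neighbors = sum(
            1 for j, y in enumerate(window)
            if i != j and dist(x, y) <= R
        )
        if neighbors < k:
            outliers.append(x)
    return outliers

# One-dimensional toy window: 9.0 is far from the cluster near 1.0.
window = [(1.0,), (1.1,), (0.9,), (1.2,), (9.0,)]
print(window_outliers(window, k=2, R=0.5))  # [(9.0,)]
```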

12.
To meet the need for fast outlier detection on large data sets, this paper proposes an outlier detection algorithm based on differentiation distance. The algorithm considers both the density around a data object and the distances between objects when judging outliers: by comparing the differentiation distance between each object and the other objects, it computes the density of neighboring points around the object and thereby mines the outliers hidden in the data set. Experiments show that the algorithm identifies outliers effectively and also reflects how isolated each object is within the data set. Its complexity is low, making it suitable for fast outlier detection on large data sets.

13.
Defining outliers by their distance to neighboring data points has been shown to be an effective non-parametric approach to outlier detection. In recent years, many research efforts have looked at developing fast distance-based outlier detection algorithms. Several of the existing distance-based outlier detection algorithms report log-linear time performance as a function of the number of data points on many real low-dimensional datasets. However, these algorithms are unable to deliver the same level of performance on high-dimensional datasets, since their scaling behavior is exponential in the number of dimensions. In this paper, we present RBRP, a fast algorithm for mining distance-based outliers, particularly targeted at high-dimensional datasets. RBRP scales log-linearly as a function of the number of data points and linearly as a function of the number of dimensions. Our empirical evaluation demonstrates that we outperform the state-of-the-art algorithm, often by an order of magnitude.

14.
Traditional minimum spanning tree-based clustering algorithms only make use of information about edges contained in the tree to partition a data set. As a result, with limited information about the structure underlying a data set, these algorithms are vulnerable to outliers. To address this issue, this paper presents a simple yet efficient MST-inspired clustering algorithm. It works by finding a local density factor for each data point during the construction of an MST and discarding outliers, i.e., those whose local density factor is larger than a threshold, to increase the separation between clusters. This algorithm is easy to implement, requiring an implementation of iDistance as the only k-nearest neighbor search structure. Experiments performed on both small low-dimensional data sets and large high-dimensional data sets demonstrate the efficacy of our method.
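The outlier-discarding step can be sketched as follows; `density_factor` here is a hypothetical stand-in (mean distance to the k nearest neighbors) for the local density factor the paper computes during MST construction:

```python
from math import dist

def density_factor(points, k):
    """Mean distance from each point to its k nearest neighbors.
    Larger values indicate sparser regions (assumes distinct points)."""
    factors = []
    for p in points:
        dists = sorted(dist(p, q) for q in points if q != p)
        factors.append(sum(dists[:k]) / k)
    return factors

def discard_outliers(points, k, threshold):
    """Drop points whose density factor exceeds the threshold before
    clustering, increasing the separation between clusters."""
    f = density_factor(points, k)
    return [p for p, v in zip(points, f) if v <= threshold]

pts = [(0, 0), (0, 1), (1, 0), (10, 10)]
print(discard_outliers(pts, k=2, threshold=2.0))
# [(0, 0), (0, 1), (1, 0)] -- the far point is discarded
```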

15.
Distance-based outliers: algorithms and applications
This paper deals with finding outliers (exceptions) in large, multidimensional datasets. The identification of outliers can lead to the discovery of truly unexpected knowledge in areas such as electronic commerce, credit card fraud, and even the analysis of performance statistics of professional athletes. Existing methods that we have seen for finding outliers can only deal efficiently with two dimensions/attributes of a dataset. In this paper, we study the notion of DB (distance-based) outliers. Specifically, we show that (i) outlier detection can be done efficiently for large datasets, and for k-dimensional datasets with large values of k; and (ii) outlier detection is a meaningful and important knowledge discovery task. First, we present two simple algorithms, both having a complexity of O(kN^2), k being the dimensionality and N being the number of objects in the dataset. These algorithms readily support datasets with many more than two attributes. Second, we present an optimized cell-based algorithm that has a complexity that is linear with respect to N, but exponential with respect to k. We provide experimental results indicating that this algorithm significantly outperforms the two simple algorithms for low dimensionalities. Third, for datasets that are mainly disk-resident, we present another version of the cell-based algorithm that guarantees at most three passes over a dataset. Again, experimental results show that this algorithm is by far the best in these settings. Finally, we discuss our work on three real-life applications, including one on spatio-temporal data (e.g., a video surveillance application), in order to confirm the relevance and broad applicability of DB outliers. Received February 15, 1999 / Accepted August 1, 1999
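The DB(p, D) notion can be stated as a plain nested loop, a sketch of the first simple algorithm; the fraction p and distance threshold D follow the definition of distance-based outliers (an object is an outlier if at least a fraction p of the dataset lies farther than D from it):

```python
from math import dist

def db_outliers(data, p, D):
    """Naive O(N^2) detection of DB(p, D)-outliers: object o is an
    outlier if at least a fraction p of all objects lie at distance
    greater than D from o."""
    n = len(data)
    result = []
    for o in data:
        far = sum(1 for q in data if dist(o, q) > D)
        if far >= p * n:
            result.append(o)
    return result

pts = [(0, 0), (0, 1), (1, 0), (1, 1), (10, 10)]
print(db_outliers(pts, p=0.75, D=3.0))  # [(10, 10)]
```

The cell-based algorithm in the paper reaches the same answer with cost linear in N by counting per cell instead of per pair.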

16.
For many real world problems we must perform classification under widely varying amounts of computational resources. For example, if asked to classify an instance taken from a bursty stream, we may have anywhere from several milliseconds to several minutes to return a class prediction. For such problems an anytime algorithm may be especially useful. In this work we show how we convert the ubiquitous nearest neighbor classifier into an anytime algorithm that can produce an instant classification, or if given the luxury of additional time, can continue computations to increase classification accuracy. We demonstrate the utility of our approach with a comprehensive set of experiments on data from diverse domains. We further show the utility of our work with two deployed applications, in classifying and counting fish, and in classifying insects.
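The anytime idea can be sketched for 1-NN: scan as many training objects as the time budget allows and return the best label so far. The fixed scan order here is a simplification; part of the paper's contribution is ordering the comparisons so that early answers are already good:

```python
from math import dist

def anytime_nn(query, train, budget):
    """Anytime 1-NN sketch: examine at most `budget` training objects
    (in the stored order), track the nearest one seen so far, and
    return its label. A larger budget can only improve the answer."""
    best_d, best_label = float("inf"), None
    for x, label in train[:budget]:
        d = dist(query, x)
        if d < best_d:
            best_d, best_label = d, label
    return best_label

train = [((0.0, 0.0), "a"), ((5.0, 5.0), "b"), ((0.2, 0.1), "a")]
print(anytime_nn((0.1, 0.1), train, budget=1))  # a
print(anytime_nn((4.9, 5.2), train, budget=3))  # b
```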

17.
Efficient and effective processing of the distance-based join query (DJQ) is of great importance in spatial databases due to the wide range of applications that may address such queries (mapping, urban planning, transportation planning, resource management, etc.). The most representative and studied DJQs are the K Closest Pairs Query (KCPQ) and the εDistance Join Query (εDJQ). These spatial queries involve two spatial data sets and a distance function to measure the degree of closeness, along with a given number of pairs in the final result (K) or a distance threshold (ε). In this paper, we propose four new plane-sweep-based algorithms for KCPQs and their extensions for εDJQs in the context of spatial databases, without the use of an index for either of the two disk-resident data sets (since building and using indexes does not always benefit processing performance). They employ a combination of plane-sweep algorithms and space partitioning techniques to join the data sets. Finally, we present results of an extensive experimental study that compares the efficiency and effectiveness of the proposed algorithms for KCPQs and εDJQs. This performance study, conducted on medium and big spatial data sets (real and synthetic), validates that the proposed plane-sweep-based algorithms are very promising in terms of both efficiency and effectiveness when neither input is indexed. Moreover, the best of the new algorithms is experimentally compared to the best algorithm based on the R-tree (a widely accepted access method), for KCPQs and εDJQs, using the same data sets. This comparison shows that the new algorithms outperform R-tree based algorithms in most cases.
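For reference, the result a KCPQ computes can be stated as a brute-force sketch; the paper's plane-sweep algorithms produce the same pairs while pruning comparisons by sweeping along one axis:

```python
import heapq
from math import dist

def k_closest_pairs(P, Q, K):
    """Brute-force K Closest Pairs Query between two point sets:
    return the K pairs (one point from each set) with the smallest
    distances, as (distance, p, q) tuples."""
    return heapq.nsmallest(K, ((dist(p, q), p, q) for p in P for q in Q))

P = [(0, 0), (3, 3)]
Q = [(0, 1), (9, 9)]
print(k_closest_pairs(P, Q, K=1))  # [(1.0, (0, 0), (0, 1))]
```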

18.
Privacy preserving data mining has become increasingly popular because it allows sharing of privacy-sensitive data for analysis purposes. However, existing techniques such as random perturbation do not fare well for simple yet widely used and efficient Euclidean distance-based mining algorithms. Although original data distributions can be reconstructed fairly accurately from the perturbed data, distances between individual data points are not preserved, leading to poor accuracy for the distance-based mining methods. Besides, they do not generally focus on data reduction. Other studies on secure multi-party computation often concentrate on techniques useful to very specific mining algorithms and scenarios, require modification of the mining algorithms, and are often difficult to generalize to other mining algorithms or scenarios. This paper proposes a novel generalized approach using the well-known energy compaction power of Fourier-related transforms to hide sensitive data values and to approximately preserve Euclidean distances in centralized and distributed scenarios to a great degree of accuracy. Three algorithms to select the most important transform coefficients are presented: one for a centralized database, a second for a horizontally partitioned database, and a third for a vertically partitioned database. Experimental results demonstrate the effectiveness of the proposed approach.
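The energy-compaction idea can be illustrated with a toy DFT sketch: with all coefficients, Euclidean distance is preserved exactly (Parseval's theorem); keeping only the low-frequency coefficients yields an approximation. The functions below are illustrative, not the paper's coefficient-selection algorithms:

```python
import cmath
from math import dist, sqrt

def dft(x):
    """Unnormalized discrete Fourier transform of a real sequence."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def truncated_distance(a, b, m):
    """Approximate the Euclidean distance between rows a and b using
    only their first m DFT coefficients; dividing by n compensates for
    the unnormalized transform."""
    n = len(a)
    fa, fb = dft(a)[:m], dft(b)[:m]
    return sqrt(sum(abs(ca - cb) ** 2 for ca, cb in zip(fa, fb)) / n)

a = [1.0, 2.0, 3.0, 4.0]
b = [1.5, 2.5, 2.5, 4.5]
print(dist(a, b))                    # 1.0, the true distance
print(truncated_distance(a, b, 4))   # ~1.0: all coefficients (Parseval)
print(truncated_distance(a, b, 2))   # ~0.707: lossy, half the coefficients
```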

19.
Expressing a cooperative control system requires combining tools from control theory and distributed computation. After reviewing several possible formalisms appropriate for the job, the authors settle on the computation and control language (CCL) and illustrate its main features and advantages with a cooperative tracking task, demonstrating some of the concepts involved on a multirobot task. They also discuss CCL's ability to serve as a programming language as well as a modeling tool, which lets them directly simulate or execute CCL models.

20.
He Xuan-sen, He Fan, Xu Li. Multimedia Tools and Applications, 2021, 80(6): 8281-8308
In underdetermined blind source separation (UBSS), the estimation of the mixing matrix is crucial because it directly affects the performance of UBSS. To improve...

