Similar Articles
20 similar articles retrieved (search time: 31 ms)
1.
Uncertain data are common due to the increasing use of sensors, radio frequency identification (RFID), GPS and similar devices for data collection. The causes of uncertainty include limitations of measurement, inclusion of noise, inconsistent supply voltage, and delay or loss of data in transfer. In order to manage, query or mine such data, data uncertainty needs to be considered. Hence, this paper studies the problem of top-k distance-based outlier detection from uncertain data objects. In this work, an uncertain object is modelled by the probability density function of a Gaussian distribution. The naive approach to distance-based outlier detection uses a nested loop, which is very costly due to the expensive distance function between two uncertain objects. Therefore, a populated-cells list (PC-list) approach to outlier detection is proposed. Using the PC-list, the proposed top-k outlier detection algorithm needs to consider only a fraction of the dataset's objects and hence quickly identifies candidate objects for the top-k outliers. Two approximate top-k outlier detection algorithms are presented to further increase the efficiency of the top-k outlier detection algorithm. An extensive empirical study on synthetic and real datasets is also presented to prove the accuracy, efficiency and scalability of the proposed algorithms.
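The naive nested-loop scheme the abstract contrasts against can be sketched as follows. This is an illustrative baseline, not the paper's PC-list algorithm: uncertain objects are taken as isotropic Gaussians, the expensive distance between two objects is approximated by the closed-form expected squared distance, and each object's outlier score is its distance to its k-th nearest neighbor. All function and parameter names here are hypothetical.

```python
import numpy as np

def expected_sq_dist(mu_a, var_a, mu_b, var_b):
    # E||X - Y||^2 for independent isotropic Gaussians N(mu, var*I):
    # ||mu_a - mu_b||^2 + d * (var_a + var_b)
    d = len(mu_a)
    return float(np.sum((mu_a - mu_b) ** 2) + d * (var_a + var_b))

def topk_outliers(mus, vars_, k_neighbors, top_k):
    # Naive nested loop: score each object by the distance to its
    # k-th nearest neighbor; report the top_k highest-scoring objects.
    n = len(mus)
    scores = []
    for i in range(n):
        dists = sorted(
            expected_sq_dist(mus[i], vars_[i], mus[j], vars_[j])
            for j in range(n) if j != i
        )
        scores.append((dists[k_neighbors - 1], i))
    scores.sort(reverse=True)
    return [i for _, i in scores[:top_k]]
```

With n objects this costs O(n^2) distance evaluations, which is exactly the expense the PC-list approach is designed to avoid.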

2.
To address outlier mining in data streams, an outlier detection algorithm called DOKM is proposed. Built on K-means clustering, it judges outliers among data points using a distance-based criterion, and it adaptively adjusts the sliding-window size according to the results of concept-drift detection on the stream, thereby detecting outliers over the data stream. A series of experiments and comparisons against other outlier algorithms show that DOKM detects outliers effectively on both synthetic and real datasets.

3.
Defining outliers by their distance to neighboring data points has been shown to be an effective non-parametric approach to outlier detection. In recent years, many research efforts have looked at developing fast distance-based outlier detection algorithms. Several of the existing distance-based outlier detection algorithms report log-linear time performance as a function of the number of data points on many real low-dimensional datasets. However, these algorithms are unable to deliver the same level of performance on high-dimensional datasets, since their scaling behavior is exponential in the number of dimensions. In this paper, we present RBRP, a fast algorithm for mining distance-based outliers, particularly targeted at high-dimensional datasets. RBRP scales log-linearly as a function of the number of data points and linearly as a function of the number of dimensions. Our empirical evaluation demonstrates that we outperform the state-of-the-art algorithm, often by an order of magnitude.

4.
In this paper, we consider the problem of efficient computation of distance between uncertain objects. In many real-life applications, data like sensor readings and weather forecasts are usually uncertain when they are collected or produced. An uncertain object has a probability distribution function (PDF) to represent the probability that it is actually located in a particular location. A fast and accurate distance computation between uncertain objects is important to many uncertain query evaluations (e.g., range queries and nearest-neighbor queries) and uncertain data mining tasks (e.g., classification, clustering, and outlier detection). However, existing approaches involve distance computations between samples of two objects, which is very computationally intensive. On one hand, it is expensive to calculate and store the actual distribution of the possible distance values between two uncertain objects. On the other hand, the expected distance (the weighted average of the pairwise distances among samples of two uncertain objects) provides very limited information and also restricts the definitions and usefulness of queries and mining tasks. In this paper, we propose several approaches to calculate the mean of the actual distance distribution and approximate its variance. Based on these, we suggest that the actual distance distribution could be approximated using a standard distribution like the Gaussian or Gamma distribution. Experiments on real data and synthetic data show that our approach produces an approximation in a very short time with acceptable accuracy (about 90%). We suggest that it is practical for the research communities to define and develop more powerful queries and data mining tasks based on the distance distribution instead of the expected distance.
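For the Gaussian special case, the mean of the distance distribution has a closed form that a sample-based estimate (the expensive route the paper avoids) should agree with: for independent Gaussians, E||X - Y||^2 = ||mu_a - mu_b||^2 + tr(Sigma_a) + tr(Sigma_b). A minimal sketch with hypothetical names:

```python
import numpy as np

rng = np.random.default_rng(0)

def analytic_mean_sq_dist(mu_a, cov_a, mu_b, cov_b):
    # Closed-form mean of the squared-distance distribution:
    # E||X - Y||^2 = ||mu_a - mu_b||^2 + tr(cov_a) + tr(cov_b)
    diff = np.asarray(mu_a, dtype=float) - np.asarray(mu_b, dtype=float)
    return float(diff @ diff + np.trace(cov_a) + np.trace(cov_b))

def sampled_mean_sq_dist(mu_a, cov_a, mu_b, cov_b, n=20000):
    # Monte-Carlo estimate over pairwise samples of the two objects
    xs = rng.multivariate_normal(mu_a, cov_a, n)
    ys = rng.multivariate_normal(mu_b, cov_b, n)
    return float(np.mean(np.sum((xs - ys) ** 2, axis=1)))
```

The closed form is O(d) per pair, while the sampling estimate costs O(n d) per pair, which illustrates why avoiding sample-based distance computation matters.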

5.
An Effective New Approach to Visual Outlier Detection and Prediction   Cited by: 1 (self-citations: 1, other citations: 0)
Outlier detection is an important part of data mining and is widely applied to fraud detection in fields such as e-commerce and credit cards. Thanks to their excellent preservation of topological structure and probability distribution, SOMs (Self-Organizing Maps) can serve as an effective dimensionality-reduction tool that lets analysts uncover the distributional structure hidden in data. Building on an analysis of current distance-based outlier detection, a new SOM-based approach to outlier detection and prediction is proposed, featuring scalability, predictability, interactivity and conciseness. Experimental results show that SOM-based outlier detection and prediction is effective.

6.
Distance-based range search is crucial in many real applications. In particular, given a database and a query issuer, a distance-based range search retrieves all the objects in the database whose distances from the query issuer are less than or equal to a given threshold. Often, due to the accuracy of positioning devices, updating protocols or characteristics of applications (for example, location privacy protection), data obtained from the real world are imprecise or uncertain. Therefore, existing approaches over exact databases cannot be directly applied to the uncertain scenario. In this paper, we redefine the distance-based range query in the context of uncertain databases, namely the probabilistic uncertain distance-based range (PUDR) queries, which obtain objects with confidence guarantees. We categorize the topological relationships between uncertain objects and uncertain search ranges into six cases and present the probability evaluation in each case. It is verified by experiments that our approach outperforms the Monte-Carlo method utilized in most existing work in precision and time cost for a uniform uncertainty distribution. This approach approximates the probabilities of objects following other practical uncertainty distributions, such as the Gaussian distribution, with acceptable error. Since the retrieval of a PUDR query requires accessing all the objects in the database, which is quite costly, we propose spatial pruning and probabilistic pruning techniques to reduce the search space. Two metrics, false positive rate and false negative rate, are introduced to measure the quality of query results. An extensive empirical study has been conducted to demonstrate the efficiency and effectiveness of our proposed algorithms under various experimental settings.
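The Monte-Carlo baseline the authors compare against can be sketched in a few lines: an estimate of the probability that an uncertain object falls within the query issuer's range, with both modelled as Gaussians here. Names and constants are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def prob_within_range(mu_o, cov_o, mu_q, cov_q, radius, n=50000):
    # Monte-Carlo estimate of P(||O - Q|| <= radius) for two
    # independent Gaussian-distributed uncertain objects O and Q.
    os_ = rng.multivariate_normal(mu_o, cov_o, n)
    qs = rng.multivariate_normal(mu_q, cov_q, n)
    return float(np.mean(np.linalg.norm(os_ - qs, axis=1) <= radius))
```

Each query-object pair costs n samples and n distance computations, which is the per-pair expense the paper's case-based analytic evaluation and pruning aim to eliminate.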

7.

Enabling information systems to face anomalies in the presence of uncertainty is a compelling and challenging task. In this work the problem of unsupervised outlier detection in large collections of data objects modeled by means of arbitrary multidimensional probability density functions is considered. We present a novel definition of uncertain distance-based outlier under the attribute-level uncertainty model, according to which an uncertain object is an object that always exists but whose actual value is modeled by a multivariate PDF. According to this definition, an uncertain object is declared to be an outlier on the basis of the expected number of its neighbors in the dataset. To the best of our knowledge this is the first work that considers the unsupervised outlier detection problem on data objects modeled by means of arbitrarily shaped multidimensional distribution functions. We present the UDBOD algorithm, which efficiently detects the outliers in an input uncertain dataset by taking advantage of three optimized phases: parameter estimation, candidate selection, and candidate filtering. An experimental campaign is presented, including a sensitivity analysis, a study of the effectiveness of the technique, a comparison with related algorithms, also in the presence of high-dimensional data, and a discussion of the behavior of our technique in real-case scenarios.


8.
Outlier detection is a very useful technique in many applications, where data is generally uncertain and could be described using probability. While it has been studied intensively in the field of deterministic data, outlier detection is still novel in the emerging field of uncertain data. In this paper, we study the semantics of outlier detection on probabilistic data streams and present a new definition of distance-based outlier over a sliding window. We then show the problem of detecting an outlier over a set o...

9.
Research on Distance-Based Outlier Detection   Cited by: 15 (self-citations: 0, other citations: 15)
Outlier detection is an important knowledge-discovery task. Building on an analysis of distance-based outliers and their detection algorithms, this paper proposes a new definition for judging outliers and designs a sampling-based approximate detection algorithm, which is evaluated on real data. Experimental results show that the new definition not only yields the same results as the DB(p,d) outlier definition, but also simplifies what outlier detection requires of the user, while additionally quantifying how isolated each data object is within the dataset.
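The DB(p, D) definition referenced above declares an object an outlier when at least a fraction p of the other objects lie farther than distance D from it. A direct (non-sampling) check of that definition, with hypothetical names, might look like:

```python
import numpy as np

def db_outliers(points, p, D):
    # DB(p, D) outliers: a point is an outlier if at least a fraction p
    # of the remaining points lie farther than distance D from it.
    points = np.asarray(points, dtype=float)
    n = len(points)
    out = []
    for i in range(n):
        dists = np.linalg.norm(points - points[i], axis=1)
        # the point's own distance is 0 <= D, so it never counts as "far"
        far_fraction = np.sum(dists > D) / (n - 1)
        if far_fraction >= p:
            out.append(i)
    return out
```

This exact check is O(n^2); the sampling-based algorithm in the paper approximates it at lower cost.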

10.
Building on an analysis of current distance-based outlier mining algorithms, an SOM-based integrated framework for outlier mining is proposed, featuring scalability, predictability, interactivity, adaptability and conciseness. Experimental results show that SOM-based outlier mining is effective.

11.
Granular computing theory offers a new and effective way to handle uncertain, incomplete and inconsistent knowledge, and knowledge granularity is one of its key tools for measuring uncertain information. Existing outlier mining algorithms mainly target deterministic data; using knowledge granularity to measure uncertain data for outlier mining has not yet been reported. To this end, after introducing the concept of knowledge granularity, relative knowledge granularity and an outlier degree are defined to measure how anomalous data objects are with respect to one another, and a knowledge-granularity-based outlier mining algorithm is proposed that mines outliers effectively. Worked examples verify the algorithm's effectiveness.

12.
Outlier detection algorithms are often computationally intensive because of their need to score each point in the data. Even simple distance-based algorithms have quadratic complexity. High-dimensional outlier detection algorithms such as subspace methods are often even more computationally intensive because of their need to explore different subspaces of the data. In this paper, we propose an exceedingly simple subspace outlier detection algorithm, which can be implemented in a few lines of code, whose complexity is linear in the size of the data set, and whose space requirement is constant. We show that this outlier detection algorithm is much faster than both conventional and high-dimensional algorithms and also provides more accurate results. The approach uses randomized hashing to score data points and has a neat subspace interpretation. We provide a visual representation of this interpretability in terms of outlier sensitivity histograms. Furthermore, the approach can be easily generalized to data streams, where it provides an efficient approach to discover outliers in real time. We present experimental results showing the effectiveness of the approach over other state-of-the-art methods.
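The randomized-hashing idea can be illustrated with a toy sketch, not the paper's exact algorithm: each of several rounds hashes every point into a randomly shifted grid cell over a random subspace, and points that repeatedly land in sparsely populated cells accumulate high outlier scores. Names and constants below are hypothetical.

```python
import random
from collections import Counter

def rs_hash_scores(points, n_hashes=50, grid=3.0, seed=7):
    # Each round: pick a random subspace, shift a coarse grid randomly,
    # hash every point to its cell, and credit each point inversely to
    # its cell's population. Outliers sit in sparse cells round after round.
    rng = random.Random(seed)
    dim = len(points[0])
    scores = [0.0] * len(points)
    for _ in range(n_hashes):
        dims = rng.sample(range(dim), max(1, dim // 2))   # random subspace
        shifts = [rng.uniform(0, grid) for _ in dims]
        keys = [tuple(int((p[d] + s) // grid) for d, s in zip(dims, shifts))
                for p in points]
        counts = Counter(keys)
        for i, key in enumerate(keys):
            scores[i] += 1.0 / counts[key]   # sparse cell -> large contribution
    return scores
```

One pass over the data per round gives total cost linear in the number of points, with only per-round grid state kept in memory.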

13.
Mining Anomalous Slices Based on Singular Value Decomposition   Cited by: 3 (self-citations: 0, other citations: 3)
Slicing is one of the main operations in online analytical processing and plays an important role in decision-support applications. Since manual slicing is highly inefficient and prone to missing important information, an automatic, intelligent method for mining anomalous slices is proposed. The method uses singular value decomposition to extract the data-distribution features of each slice, then applies distance-based outlier detection on the extracted singular-value features to discover anomalous slices. Experiments on both synthetic data and slice data from real applications demonstrate the method's efficiency and feasibility.

14.
This work proposes a method for detecting distance-based outliers in data streams under the sliding-window model. The novel notion of a one-time outlier query is introduced in order to detect anomalies in the current window at arbitrary points in time. Three algorithms are presented. The first algorithm answers outlier queries exactly, but has larger space requirements than the other two. The second algorithm is derived from the exact one, reduces memory requirements and returns an approximate answer based on estimations with a statistical guarantee. The third algorithm is a specialization of the approximate algorithm working with strictly fixed memory requirements. Accuracy properties and memory consumption of the algorithms have been theoretically assessed. Moreover, experimental results have confirmed the effectiveness of the proposed approach and the good quality of the solutions.
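A minimal sketch of the one-time query idea, without the paper's memory optimizations and with hypothetical names: a point in the current window is a distance-based outlier when fewer than min_neighbors other window points lie within a given radius.

```python
from collections import deque

def window_outliers(window, radius, min_neighbors):
    # One-time outlier query on the current window (1-D values here):
    # report points with fewer than min_neighbors neighbors within radius.
    pts = list(window)
    outliers = []
    for i, p in enumerate(pts):
        neighbors = sum(
            1 for j, q in enumerate(pts)
            if j != i and abs(p - q) <= radius
        )
        if neighbors < min_neighbors:
            outliers.append(p)
    return outliers

def process_stream(stream, width, radius, min_neighbors):
    # Sliding window of fixed width; issue a query once the window fills.
    win = deque(maxlen=width)
    results = []
    for x in stream:
        win.append(x)
        if len(win) == width:
            results.append(window_outliers(win, radius, min_neighbors))
    return results
```

Each query here rescans the whole window (quadratic per query); the paper's exact and approximate algorithms maintain neighbor information incrementally instead.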

15.
16.
Because distance-based outlier detection algorithms are constrained by a global threshold and can only detect global outliers, a two-stage outlier detection algorithm based on cluster partitioning is proposed to mine local outliers. First, agglomerative hierarchical clustering is iterated to derive the k value required by K-means, and K-means then partitions the dataset into a number of micro-clusters. Second, to improve mining efficiency, an information-entropy-based cluster filtering mechanism decides whether a micro-cluster contains outliers. Finally, a distance-based method mines the corresponding local outliers from the micro-clusters that contain them. Experimental results show that the algorithm is efficient, achieves high detection accuracy and has low time complexity.

17.
Unsupervised clustering for datasets with severe outliers inside is a difficult task. In this work, we propose a cluster-dependent multi-metric clustering approach that is robust to severe outliers. A dataset is modeled as clusters, each contaminated by noise of an unknown, cluster-dependent level that gives rise to that cluster's outliers. With such a model, a multi-metric Lp-norm transformation is proposed and learnt which maps each cluster to the most Gaussian distribution by minimizing a non-Gaussianity measure. The approach is composed of two consecutive phases: multi-metric location estimation (MMLE) and multi-metric iterative chi-square cutoff (ICSC), and algorithms for both are proposed. It is proved that the MMLE algorithm searches for the solution of a multi-objective optimization problem and in fact learns a cluster-dependent multi-metric Lq-norm distance and/or a cluster-dependent multi-kernel defined in data space for each cluster. Experiments on heavy-tailed alpha-stable mixture datasets, Gaussian mixture datasets with radial and diffuse outliers added respectively, and the real Wisconsin breast cancer and lung cancer datasets show that the proposed method is superior to many existing robust clustering and outlier detection methods in both clustering and outlier detection performance.

18.
洪沙, 林佳丽, 张月良. 《计算机科学》, 2015, 42(5): 230-233, 264
For outlier detection over uncertain datasets, a density-based Uncertain Local Outlier Factor (ULOF) algorithm is designed. A possible-worlds model of the uncertain data is built to determine the probability of each uncertain object across the possible worlds, and ULOF is derived by combining this with the traditional LOF algorithm; the ULOF value then measures the degree to which an uncertain object is a local outlier. The efficiency and accuracy of ULOF are analyzed in detail, and a grid-based pruning strategy and a k-nearest-neighbor query optimization are proposed to shrink the candidate set. Finally, experiments demonstrate the feasibility and efficiency of ULOF on uncertain data: the optimized method effectively improves anomaly detection accuracy, lowers time complexity and improves anomaly detection performance on uncertain data.
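ULOF builds on the classic LOF score. That underlying LOF computation on certain data, not the paper's possible-worlds extension, can be sketched as:

```python
import numpy as np

def lof_scores(X, k):
    # Classic LOF: ratio of the average local reachability density of a
    # point's neighbors to the point's own density; scores near 1 mean
    # inlier, scores well above 1 mean local outlier.
    X = np.asarray(X, dtype=float)
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    np.fill_diagonal(D, np.inf)                   # ignore self-distances
    knn = np.argsort(D, axis=1)[:, :k]            # k nearest neighbors
    k_dist = np.sort(D, axis=1)[:, k - 1]         # distance to k-th neighbor
    # reachability distance: max(k-dist of the neighbor, actual distance)
    reach = np.maximum(k_dist[knn], D[np.arange(n)[:, None], knn])
    lrd = k / reach.sum(axis=1)                   # local reachability density
    return lrd[knn].mean(axis=1) / lrd            # LOF score per point
```

ULOF replaces these crisp distances with probabilities derived from the possible-worlds model, but the density-ratio structure is the same.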

19.
Analysis of Outliers in Data Mining and Its Application in Practice   Cited by: 5 (self-citations: 0, other citations: 5)
This paper introduces the definition of outliers and three mining approaches: statistics-based, distance-based and deviation-based methods. On this basis, outlier detection is applied to data accumulated in an educational administration system, verifying the effectiveness of a distance-sum-based outlier detection algorithm. Experimental analysis shows that the distance-sum-based algorithm reduces the detection process's reliance on user-set thresholds and is slightly better than the nested-loop algorithm in time complexity.

20.
Distance-based detection and prediction of outliers   Cited by: 4 (self-citations: 0, other citations: 0)
A distance-based outlier detection method is proposed that finds the top outliers in an unlabeled data set and provides a subset of it, called the outlier detection solving set, that can be used to predict the outlierness of new unseen objects. The solving set includes a sufficient number of points that permits the detection of the top outliers by considering only a subset of all the pairwise distances from the data set. The properties of the solving set are investigated, and algorithms for computing it, with subquadratic time requirements, are proposed. Experiments on synthetic and real data sets to evaluate the effectiveness of the approach are presented. A scaling analysis of the solving-set size is performed, and the false positive rate, that is, the fraction of new objects misclassified as outliers using the solving set instead of the overall data set, is shown to be negligible. Finally, to investigate the accuracy in separating outliers from inliers, ROC analysis of the method is performed. Results obtained show that using the solving set instead of the data set guarantees a comparable quality of the prediction, but at a lower computational cost.
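The prediction step, scoring a new object against the solving set alone rather than the full data set, can be sketched as follows; the weight here is the common sum-of-k-nearest-neighbor-distances score, and all names are hypothetical.

```python
import numpy as np

def knn_weight(point, reference, k):
    # Weight of a point = sum of distances to its k nearest neighbors
    # in the reference set (a standard distance-based outlier score).
    d = np.linalg.norm(np.asarray(reference, dtype=float)
                       - np.asarray(point, dtype=float), axis=1)
    return float(np.sort(d)[:k].sum())

def predict_outlier(point, solving_set, k, threshold):
    # Predict outlierness of an unseen object using only the solving
    # set, instead of the whole data set, at much lower cost.
    return knn_weight(point, solving_set, k) > threshold
```

Prediction then costs O(|solving set|) distances per new object instead of O(|data set|), which is the point of materializing the solving set.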


Copyright©北京勤云科技发展有限公司  京ICP备09084417号