Similar Documents
18 similar documents found (search time: 46 ms)
1.
To detect local outliers in data streams promptly, existing outlier mining algorithms for data streams are analyzed and an outlier detection algorithm based on wavelet density estimation is proposed. Exploiting the multi-scale, multi-granularity nature of wavelet density estimation, the algorithm uses a wavelet probability threshold to decide whether each data point in the current sliding window is an outlier, and the outlier detection process over the stream is discussed. Simulation results show that, compared with a kernel density estimation algorithm, the proposed algorithm achieves higher detection efficiency and accuracy.
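The wavelet density estimator itself is beyond a short sketch, but the sliding-window density-thresholding scheme this abstract describes can be illustrated with a plain Gaussian kernel density estimate standing in for the wavelet estimate. All names, the window size, bandwidth, and threshold below are illustrative assumptions, not the paper's values:

```python
import math
from collections import deque

def gaussian_kde(point, window, bandwidth):
    """Kernel density estimate of `point` from the samples in `window`."""
    n = len(window)
    coef = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    return coef * sum(math.exp(-((point - x) ** 2) / (2 * bandwidth ** 2)) for x in window)

def stream_outliers(stream, window_size=50, bandwidth=0.5, threshold=0.05):
    """Flag each arriving point as an outlier when its estimated density
    within the current sliding window falls below `threshold`."""
    window = deque(maxlen=window_size)
    outliers = []
    for x in stream:
        if len(window) == window.maxlen and gaussian_kde(x, window, bandwidth) < threshold:
            outliers.append(x)
        window.append(x)
    return outliers

# A stream concentrated near 0 with one far-away value:
data = [0.0, 0.1, -0.1, 0.05] * 20 + [8.0] + [0.0, 0.1] * 5
print(stream_outliers(data, window_size=40))  # [8.0]
```

The wavelet approach replaces the Gaussian kernel with a multi-resolution estimator, but the window-plus-probability-threshold control flow is the same.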

2.
A Fast Outlier Detection Algorithm for Data Streams Based on Dynamic Grids (cited 8 times: 0 self-citations, 8 citations by others)
Outlier detection is an important data mining task with applications in many fields. In recent years, mining algorithms over data stream data have received growing attention. To address outlier detection in data streams, a fast outlier detection algorithm based on dynamic grid partitioning of the data space is proposed. The algorithm uses a dynamic grid to separate dense from sparse regions of the space and filters out the large volume of ordinary data lying in dense regions, effectively reducing the number of data objects that must be examined. For candidate outliers in sparse regions, an approximate method computes their outlier degree, and data with high outlier degrees are reported as outliers. While preserving a given level of accuracy, the running efficiency of the algorithm is greatly improved. Experiments on both synthetic and real data sets confirm the applicability and effectiveness of the algorithm.
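A minimal sketch of the dense/sparse grid filtering idea for 2-D points follows. The cell size and density threshold are arbitrary illustrative choices, and the paper's approximate outlier-degree computation for the surviving candidates is omitted:

```python
import random
from collections import defaultdict

def grid_outlier_candidates(points, cell_size=1.0, dense_threshold=5):
    """Hash 2-D points into grid cells; every point lying in a cell with at
    least `dense_threshold` points is filtered out as bulk data, and only
    points in sparse cells survive as outlier candidates."""
    cells = defaultdict(list)
    for x, y in points:
        cells[(int(x // cell_size), int(y // cell_size))].append((x, y))
    return [p for members in cells.values()
            if len(members) < dense_threshold
            for p in members]

random.seed(0)
cluster = [(random.uniform(0.0, 0.9), random.uniform(0.0, 0.9)) for _ in range(50)]
print(grid_outlier_candidates(cluster + [(7.5, 7.5)]))  # [(7.5, 7.5)]
```

The point of the pre-filter is that the expensive per-object outlier scoring then runs only on the few candidates in sparse cells, not on the whole stream.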

3.
Outlier detection is an important branch of data mining, and outlier detection over data streams is drawing increasing attention. To detect outliers in data streams quickly and accurately, an online outlier detection algorithm named ODDS (outlier detection in online data streams) is proposed. It measures the outlierness of a data item by its degree of deviation from frequent patterns, and by building an ODDS-Tree it can dynamically update the outlier information of candidate outliers in the stream. Experimental results show that the algorithm achieves higher efficiency and better scalability than comparable algorithms.

4.
To mine outliers in data streams, a distance-based outlier detection algorithm, DOKM, built on K-means clustering is proposed, which uses a distance criterion to judge which data points are outliers. The sliding window size is adjusted adaptively according to concept drift detected in the stream, enabling outlier detection over the data stream. A series of experiments and comparisons against other outlier algorithms shows that DOKM detects outliers effectively on both synthetic and real data sets.

5.
Local outlier detection has been one of the hot topics in data mining in recent years. For denoising traffic data, a local outlier detection algorithm based on locally estimated density is proposed. The algorithm uses kernel density estimation to compute a density estimate for each data object as its locally estimated density, incorporating the average distance to the object's k-neighborhood into the kernel's bandwidth function as neighborhood information. The locally estimated density is then used to compute each object's local outlier factor, whose magnitude determines whether the object is an outlier. Experiments show that the algorithm performs well on both UCI benchmark data sets and synthetic data sets.
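A sketch of the local-outlier-factor idea this abstract builds on, using the inverse average k-NN distance as a simple stand-in for the kernel density estimate (the paper's adaptive bandwidth and exact scoring are not reproduced; `k` and the data are illustrative):

```python
import math

def knn(points, i, k):
    """Indices of the k nearest neighbours of points[i] under Euclidean distance."""
    order = sorted((j for j in range(len(points)) if j != i),
                   key=lambda j: math.dist(points[i], points[j]))
    return order[:k]

def local_density(points, i, k):
    """Inverse average distance to the k nearest neighbours (a density proxy)."""
    neigh = knn(points, i, k)
    avg = sum(math.dist(points[i], points[j]) for j in neigh) / k
    return 1.0 / (avg + 1e-12)

def local_outlier_factor(points, i, k=3):
    """Ratio of the neighbours' mean density to the point's own density;
    values well above 1 mark the point as a local outlier."""
    neigh = knn(points, i, k)
    neigh_density = sum(local_density(points, j, k) for j in neigh) / k
    return neigh_density / local_density(points, i, k)

pts = [(0, 0), (0, 1), (1, 0), (1, 1), (10, 10)]
scores = [local_outlier_factor(pts, i) for i in range(len(pts))]
print(scores.index(max(scores)))  # 4 -- the isolated point (10, 10)
```

Points inside the cluster score near 1 because their density matches their neighbours'; the isolated point scores far above 1.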

6.
Continuous outlier detection over sliding windows is an important problem in data stream management, with applications in credit card fraud detection, network intrusion prevention, geological disaster early warning, and many other fields. Most existing algorithms rely on range queries to determine the positional relationships between objects, but range queries are expensive and cannot meet real-time requirements. This paper proposes GBEH (grid-based excepted heap), a query processing framework under the sliding window model. First, it builds a grid-based index, GQBI (grid queue based index), to manage the data stream: the index maintains both the spatial relationships among stream elements and, via queues, their temporal order. Second, GBEH introduces an outlier detection algorithm, PBH (priority based heap), which uses the area of intersection between the query range and each grid cell to compute the expected number of objects in that cell falling within the query range, and on this basis performs range queries with a min-heap, effectively reducing range query cost and achieving efficient detection. Theoretical analysis and experiments verify the efficiency and stability of GBEH.

7.
A Fast Outlier Detection Algorithm for High-Dimensional Categorical Data Streams (cited 2 times: 1 self-citation, 1 citation by others)
An outlier measure for categorical data streams, the weighted frequent pattern outlier factor (WFPOF), is proposed, and on this basis a fast outlier detection algorithm for data streams, FODFP-Stream (fast outlier detection for high-dimensional categorical data streams based on frequent pattern), is given. By dynamically discovering and maintaining frequent patterns to compute outlier degrees, the algorithm can effectively handle high-dimensional categorical data streams and can be further ext…

8.
罗剑 《计算机工程》2011,37(17):46-48,60
The binned kernel density estimation theory based on grid centroids for large-scale data sets is extended to data stream applications. After introducing a density decay technique, it is shown that, for evolving data streams, the approximation error of binned kernel density estimation in which the grid centroid replaces the set of discrete data points in each cell is controllable; on this basis a kernel density estimation algorithm for multi-dimensional evolving data streams is constructed. Experimental results show that the method accurately captures the real-time evolving behavior of data streams while maintaining sufficient computational accuracy.

9.
杨显飞  张健沛  杨静  初妍 《计算机应用》2010,30(11):2949-2951
Traditional outlier mining algorithms cannot effectively mine outliers from data streams. Considering the unbounded arrival and dynamic change of data streams, a new distance-based outlier mining algorithm for data streams is proposed. Using the Hoeffding theorem and the central limit theorem for independent identically distributed variables, the algorithm dynamically detects changes in the stream's probability distribution and uses the detection result to adaptively adjust the sliding window size for outlier mining. Experimental results show that the algorithm mines outliers effectively on both synthetic data sets and the real KDD-CUP99 data set.
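The Hoeffding-bound test that can drive this kind of window adaptation is easy to sketch: two window means are compared, and a distribution change is declared when they differ by more than the combined Hoeffding deviations. The two-window comparison and all parameters below are illustrative assumptions, not the paper's exact procedure:

```python
import math

def hoeffding_bound(value_range, n, delta=0.05):
    """Hoeffding epsilon: with probability at least 1 - delta, the mean of n
    samples bounded in an interval of width `value_range` lies within eps of
    the true mean."""
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

def drift_detected(reference, recent, delta=0.05):
    """Declare a distribution change when the two window means differ by more
    than the sum of the windows' Hoeffding bounds."""
    lo, hi = min(reference + recent), max(reference + recent)
    eps = (hoeffding_bound(hi - lo, len(reference), delta)
           + hoeffding_bound(hi - lo, len(recent), delta))
    diff = abs(sum(reference) / len(reference) - sum(recent) / len(recent))
    return diff > eps

stable = [0.5] * 200
shifted = [5.0] * 200
print(drift_detected(stable, stable))   # False
print(drift_detected(stable, shifted))  # True
```

On a positive test the detector would shrink the sliding window to forget the outdated distribution; otherwise the window can safely grow.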

10.
毛亚琼  田立勤  王艳  毛亚萍  王志刚 《计算机工程》2020,46(11):132-138,147
Existing outlier detection algorithms for data streams generally suffer from excessive running time when facing massive high-dimensional data streams. To address this, a fast outlier detection algorithm for high-dimensional data streams is proposed that introduces a local vector dot-product density. By keeping a small number of intermediate results, it incrementally recomputes only the data points in the window that are affected; two optimization strategies and one pruning rule are designed to reduce the number of pairwise distance computations during detection, lowering the algorithm's time and space overhead and thus improving detection efficiency. Theoretical ana…

11.
Anomaly detection over sliding windows is an important topic in data stream mining. In many applications, data streams are transmitted over a distributed network, and distributed computing techniques are commonly adopted to obtain real-time, high-quality results. The problem of continuous anomaly detection over distributed evolving data streams is formally stated, and two kernel-density-estimation-based anomaly detection definitions and algorithms are proposed. Extensive experiments on real data sets show that the algorithms are efficient and scalable, fully meeting the needs of data stream applications.

12.
Outlier detection is a core problem in data mining and is widely applied in industrial production. Accurate and efficient outlier detection can reflect the running state of an industrial system in time and provide a reference for operators, but traditional methods struggle to detect outliers in data with complex change patterns, small variation ranges, and streaming characteristics. This paper therefore proposes a new outlier detection method for such data: first, the data is partitioned by clustering so that similar data are grouped together, decomposing the originally complex data distribution into a superposition of simple per-cluster distributions; then a kernel-density-estimation hypothesis test is applied to the data under examination to detect outliers. Experimental results on benchmark and real data sets show that the method improves detection accuracy over traditional outlier detection methods.

13.
In many data stream mining applications, traditional density estimation methods such as kernel density estimation and reduced set density estimation cannot be applied to the density estimation of data streams because of their high computational burden, processing time, and intensive memory allocation requirements. In order to reduce the time and space complexity, a novel density estimation method over data streams, Dm-KDE, is proposed, based on the proposed algorithm m-KDE, which can be used to design a KDE estimator with a fixed number of kernel components for a dataset. In this method, Dm-KDE sequence entries are created by algorithm m-KDE instead of all kernels obtained from other density estimation methods. In order to further reduce the storage space, Dm-KDE sequence entries can be merged by calculating their KL divergences. Finally, the probability density functions over arbitrary time or the entire time can be estimated through the obtained estimation model. In contrast to the state-of-the-art algorithm SOMKE, the distinctive advantage of the proposed algorithm Dm-KDE is that it can achieve the same accuracy with a much smaller fixed number of kernel components, making it suitable for scenarios where heavier on-line computation of kernel density estimates over data streams is required. We compare Dm-KDE with SOMKE and M-kernel in terms of density estimation accuracy and running time for various stationary datasets. We also apply Dm-KDE to evolving data streams. Experimental results illustrate the effectiveness of the proposed method.
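The KL-divergence-based merging of kernel components can be sketched for 1-D Gaussian kernels, for which the divergence has a closed form. The greedy pairwise merge, the moment-matched replacement, and the threshold below are illustrative assumptions rather than the Dm-KDE procedure itself:

```python
import math

def kl_gauss(m1, s1, m2, s2):
    """Closed-form KL divergence KL(N(m1, s1^2) || N(m2, s2^2)) for 1-D Gaussians."""
    return math.log(s2 / s1) + (s1 ** 2 + (m1 - m2) ** 2) / (2 * s2 ** 2) - 0.5

def merge_components(components, weights, threshold=0.1):
    """Greedily merge any pair of (mean, std) kernel components whose KL
    divergence is below `threshold`, replacing the pair by its moment-matched
    Gaussian carrying the combined weight."""
    comps, ws = list(components), list(weights)
    merged = True
    while merged:
        merged = False
        for i in range(len(comps)):
            for j in range(i + 1, len(comps)):
                (m1, s1), (m2, s2) = comps[i], comps[j]
                if kl_gauss(m1, s1, m2, s2) < threshold:
                    w = ws[i] + ws[j]
                    m = (ws[i] * m1 + ws[j] * m2) / w
                    v = (ws[i] * (s1 ** 2 + (m1 - m) ** 2)
                         + ws[j] * (s2 ** 2 + (m2 - m) ** 2)) / w
                    comps[i], ws[i] = (m, math.sqrt(v)), w
                    del comps[j], ws[j]
                    merged = True
                    break
            if merged:
                break
    return comps, ws

comps, ws = merge_components([(0.0, 1.0), (0.1, 1.0), (5.0, 1.0)], [0.4, 0.4, 0.2])
print(len(comps))  # 2 -- the two heavily overlapping kernels were merged
```

Merging near-duplicate components is what keeps the number of kernels fixed as the stream grows.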

14.
In many data stream mining applications, traditional density estimation methods such as kernel density estimation and reduced set density estimation cannot be applied to the density estimation of data streams because of their high computational burden, processing time and intensive memory allocation requirement. In order to reduce the time and space complexity, a novel density estimation method Dm-KDE over data streams, based on the proposed algorithm m-KDE which can be used to design a KDE estimator with a fixed number of kernel components for a dataset, is proposed. In this method, Dm-KDE sequence entries are created by algorithm m-KDE instead of all kernels obtained from other density estimation methods. In order to further reduce the storage space, Dm-KDE sequence entries can be merged by calculating their KL divergences. Finally, the probability density functions over arbitrary time or the entire time can be estimated through the obtained estimation model. In contrast to the state-of-the-art algorithm SOMKE, the distinctive advantage of the proposed algorithm Dm-KDE is that it can achieve the same accuracy with a much smaller fixed number of kernel components, such that it is suitable for scenarios where heavier on-line computation of the kernel density estimation over data streams is required. We compare Dm-KDE with SOMKE and M-kernel in terms of density estimation accuracy and running time for various stationary datasets. We also apply Dm-KDE to evolving data streams. Experimental results illustrate the effectiveness of the proposed method.

15.
Distance-Based Outlier Detection over Distributed RFID Data Streams (cited 1 time: 0 self-citations, 1 citation by others)
RFID technology is widely used in real-time monitoring, object identification, tracking, and related fields, making it important to detect abnormal states of monitored tagged objects promptly. However, owing to the unreliability of wireless communication and environmental factors, the data collected by RFID readers often contain noise. Considering the massive, volatile, unreliable, and distributed nature of distributed RFID data streams, a distance-based local stream outlier detection algorithm, LSOD, and an approximate-estimation-based global stream outlier detection algorithm, GSOD, are proposed. LSOD maintains a data stream structure, CSL, to identify safe inliers, and exploits the properties of safe inliers to save storage space and query time over the stream. Under the distance-based outlier definition, the global outliers at the central node are a subset of the union of the outlier sets at the individual distributed nodes. GSOD uses sampling to approximate the global outliers, reducing the communication volume and computational load at the central node. Experiments show that the proposed algorithms run fast, use little memory, and achieve high accuracy.

16.
A neighborhood-density-based anomaly detection method is proposed that can handle outliers in mixed-attribute data. In this method, a sample's outlier index is defined as the weighted sum of the sample's neighborhood size and its average neighborhood density. A series of experiments was conducted to validate the method; the results show that it is well suited to mixed data and more effective than other detection methods.
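A sketch of a neighborhood-based outlier index for mixed data, assuming a simple mixed distance (scaled numeric difference plus a 0/1 categorical mismatch penalty). The distance function, radius, and weighting `alpha` are illustrative choices, not the paper's definitions:

```python
def mixed_distance(a, b, num_idx, cat_idx, num_scale=1.0):
    """Distance for mixed records: scaled absolute difference on numeric
    fields plus a 0/1 mismatch penalty on categorical fields."""
    d = sum(abs(a[i] - b[i]) / num_scale for i in num_idx)
    return d + sum(a[i] != b[i] for i in cat_idx)

def neighborhood(data, i, radius, num_idx, cat_idx):
    """Indices of all records within `radius` of data[i]."""
    return [j for j in range(len(data))
            if j != i and mixed_distance(data[i], data[j], num_idx, cat_idx) <= radius]

def outlier_index(data, i, radius=1.5, alpha=0.5, num_idx=(0,), cat_idx=(1,)):
    """Weighted sum of the record's neighbourhood size and the average
    neighbourhood size of its neighbours; small values indicate outliers."""
    neigh = neighborhood(data, i, radius, num_idx, cat_idx)
    if not neigh:
        return 0.0
    avg_density = sum(len(neighborhood(data, j, radius, num_idx, cat_idx))
                      for j in neigh) / len(neigh)
    return alpha * len(neigh) + (1 - alpha) * avg_density

records = [(1.0, "a"), (1.1, "a"), (0.9, "a"), (1.2, "a"), (9.0, "b")]
scores = [outlier_index(records, i) for i in range(len(records))]
print(scores.index(min(scores)))  # 4 -- the mixed-attribute outlier (9.0, "b")
```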

17.
With the rapid development of communication technology, various complex heterogeneous sensor network applications produce large numbers of high-dimensional dynamic data streams, which makes anomaly detection more difficult than ever before; anomaly detection for high-dimensional dynamic data streams is thus an increasingly challenging problem. This paper proposes a novel method for detecting anomalies in high-dimensional dynamic data streams built from several components. First, it uses a stacked habituation autoencoder, with a habituation physiological mechanism, to detect similarity anomalies more easily and capture feature relationships. Second, a union kernel density estimator with micro-clusters is designed to improve online anomaly detection accuracy by estimating the data density. Last, candidate anomaly sets and a delayed processing approach are utilized to cope with concept drift and evolution in the data stream, allowing the system to adapt to changes in the data over time. Extensive experiments on four high-dimensional dynamic data streams from the Internet of Things show that the proposed method is very effective.

18.
In this work, we focus on distance-based outliers in a metric space, where the status of an entity as an outlier is based on the number of other entities in its neighborhood. In recent years, several solutions have tackled the problem of distance-based outliers in data streams, where outliers must be mined continuously as new elements become available. An interesting research problem is to combine the streaming environment with massively parallel systems to provide scalable stream-based algorithms. However, none of the previously proposed techniques refers to a massively parallel setting. Our proposal fills this gap and investigates the challenges in transferring state-of-the-art techniques to Apache Flink, a modern platform for intensive streaming analytics. We thoroughly present the technical challenges encountered and the alternatives that may be applied, of which a micro-clustering-based one is the most efficient. We show speed-ups of up to 2.27 times over advanced non-parallel solutions, using just an ordinary four-core machine and a real-world dataset. When moving to a three-machine cluster, due to less contention, we achieve both better scalability in terms of the window slide size and the data dimensionality, and even higher speed-ups, e.g., by a factor of more than 11X. Overall, our results demonstrate that outlier mining can be achieved in an efficient and scalable manner. The resulting techniques have been made publicly available as open-source software.
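The classical distance-based streaming definition used by this line of work, namely that a point is an outlier if fewer than k neighbors lie within radius R of it in the current window, can be sketched in a few lines. The sketch uses 1-D data and illustrative parameters; the micro-clustering and Flink parallelization discussed in the abstract are omitted:

```python
from collections import deque

def distance_outliers(stream, radius=1.0, min_neighbors=3, window_size=10):
    """Report an arriving value as a distance-based outlier when fewer than
    `min_neighbors` values in the current full window lie within `radius`."""
    window = deque(maxlen=window_size)
    flagged = []
    for x in stream:
        close = sum(1 for y in window if abs(x - y) <= radius)
        if len(window) == window.maxlen and close < min_neighbors:
            flagged.append(x)
        window.append(x)
    return flagged

data = [1.0, 1.2, 0.8, 1.1, 0.9, 1.0, 1.3, 0.7, 1.1, 1.0, 25.0, 1.0, 0.9]
print(distance_outliers(data))  # [25.0]
```

The parallel versions partition the neighbor counting across workers per window slide; the outlier definition itself is unchanged.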


Copyright©北京勤云科技发展有限公司  京ICP备09084417号