Similar Literature
Found 20 similar documents (search time: 31 ms)
1.
An Outlier Detection Algorithm Based on Reverse K-Nearest Neighbours   Total citations: 3 (self-citations: 0, others: 3)
We propose ODRKNN, an outlier detection algorithm based on reverse k-nearest neighbours (RKNN). ODRKNN measures how much each data point deviates by counting its reverse k-nearest neighbours. Experimental results on both synthetic and real data sets show that the algorithm detects outliers effectively and runs faster than the LOF and LSC algorithms.
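The reverse-k-NN idea behind ODRKNN can be sketched in a few lines: a point that appears in few other points' k-nearest-neighbour lists is likely an outlier. The sketch below is a naive O(n²) illustration of that counting scheme, not the paper's implementation; the helper `knn_indices` and the toy data set are our own.

```python
import math

def knn_indices(points, i, k):
    """Indices of the k nearest neighbours of points[i] (excluding i)."""
    order = sorted((math.dist(points[i], points[j]), j)
                   for j in range(len(points)) if j != i)
    return [j for _, j in order[:k]]

def rknn_counts(points, k):
    """For each point, count how many other points include it in their
    k-NN list; a low count marks a likely outlier (the ODRKNN idea)."""
    counts = [0] * len(points)
    for i in range(len(points)):
        for j in knn_indices(points, i, k):
            counts[j] += 1
    return counts

pts = [(0, 0), (0, 1), (1, 0), (1, 1), (10, 10)]
print(rknn_counts(pts, 2))  # -> [2, 3, 2, 3, 0]; the isolated point has count 0
```

The far-away point at (10, 10) is in nobody's 2-NN list, so its reverse-k-NN count is zero, which is exactly the deviation signal the abstract describes.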

2.
Outlier or anomaly detection is a fundamental data mining task with the aim to identify data points, events, transactions which deviate from the norm. The identification of outliers in data can provide insights about the underlying data generating process. In general, outliers can be of two kinds: global and local. Global outliers are distinct with respect to the whole data set, while local outliers are distinct with respect to data points in their local neighbourhood. While several approaches have been proposed to scale up the process of global outlier discovery in large databases, this has not been the case for local outliers. We tackle this problem by optimising the use of local outlier factor (LOF) for large and high-dimensional data. We propose projection-indexed nearest-neighbours (PINN), a novel technique that exploits extended nearest-neighbour sets in a reduced-dimensional space to create an accurate approximation for k-nearest-neighbour distances, which is used as the core density measurement within LOF. The reduced dimensionality allows for efficient sub-quadratic indexing in the number of items in the data set, where previously only quadratic performance was possible. A detailed theoretical analysis of random projection (RP) and PINN shows that we are able to preserve the density of the intrinsic manifold of the data set after projection. Experimental results show that PINN outperforms the standard projection methods RP and PCA when measuring LOF for many high-dimensional real-world data sets of up to 300,000 elements and 102,600 dimensions. A further investigation into the use of high-dimensionality-specific indexing such as spatial approximate sample hierarchy (SASH) shows that our novel technique holds benefits over even these types of highly efficient indexing. 
We cement the practical applications of our novel technique with insights into what it means to find local outliers in real data, including image and text data, and discuss potential applications of this knowledge.

3.
The density-based local outlier factor (LOF) algorithm must compute a full distance matrix for its k-nearest-neighbour queries, giving it a high time complexity that makes it unsuitable for detection on large data sets. To address this, we propose a grid-based local outlier detection algorithm. It exploits the property that the k points nearest to any point in a target grid cell must lie either in that cell or in its immediately adjacent cells, and uses this to improve the neighbourhood queries of LOF, reducing the computation those queries require. Experimental results show that the proposed LOGD algorithm substantially reduces outlier detection time while achieving essentially the same detection accuracy as the original LOF algorithm.
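Since most entries in this list build on LOF, a minimal reference implementation of the quantity being accelerated may help. This is the textbook LOF definition (k-distance, reachability distance, local reachability density) computed naively from the full pairwise distance matrix that grid-based methods such as LOGD avoid; duplicate points (which make reachability sums zero) are not handled in this sketch.

```python
import math

def lof_scores(points, k):
    """Naive LOF: ratio of each point's local density to that of its
    k nearest neighbours. Scores near 1 are inliers; large scores are
    local outliers. O(n^2) distance matrix, for illustration only."""
    n = len(points)
    dist = [[math.dist(p, q) for q in points] for p in points]
    knn, kdist = [], []
    for i in range(n):
        order = sorted((dist[i][j], j) for j in range(n) if j != i)
        knn.append([j for _, j in order[:k]])
        kdist.append(order[k - 1][0])          # k-distance of point i
    # local reachability density: k / sum of reachability distances
    lrd = []
    for i in range(n):
        reach = [max(kdist[j], dist[i][j]) for j in knn[i]]
        lrd.append(len(reach) / sum(reach))
    # LOF = mean ratio of the neighbours' density to the point's own
    return [sum(lrd[j] for j in knn[i]) / (k * lrd[i]) for i in range(n)]
```

On a tight cluster plus one distant point, the cluster points score 1.0 and the distant point scores far above 1, which is the behaviour every accelerated variant in this listing tries to reproduce more cheaply.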

4.

Outlier detection has become an important research area in stream data mining due to its many applications. Many methods have been proposed in the literature, but they work well only in clearly positive outlier regions and give little attention to boundary regions. Moreover, an algorithm that processes stream data must be efficient and able to handle unbounded data in a single pass or a limited number of passes. These problems motivated us to propose an outlier detection approach for large-scale data streams. The proposed algorithm employs the concept of relative cardinality, the entropy outlier factor from information-based systems, and a size-variant sliding window over the stream. In addition, we propose a new methodology for concept-drift adaptation on evolving data streams. The proposed method is executed on nine benchmark datasets and compared with six existing methods: EXPoSE, iForest, OC-SVM, LOF, KDE, and FastAbod. Experimental results show that the proposed method outperforms the six existing methods in terms of receiver operating characteristic curve, precision-recall, and computational time, for positive as well as boundary regions.

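The sliding-window pattern that entry 4 builds on can be illustrated generically. The sketch below is not the paper's entropy-and-relative-cardinality method; it is the plain fixed-size-window baseline (flag a value whose z-score against the current window is large), which is the structure a size-variant window then adapts.

```python
from collections import deque

def stream_outliers(stream, window=20, thresh=3.0):
    """Generic sliding-window stream detector (a baseline, not the
    paper's method): flag a value whose z-score against the current
    window exceeds `thresh`, then slide the window forward."""
    win, flagged = deque(maxlen=window), []
    for t, x in enumerate(stream):
        if len(win) >= 2:
            mean = sum(win) / len(win)
            var = sum((v - mean) ** 2 for v in win) / len(win)
            if var > 0 and abs(x - mean) / var ** 0.5 > thresh:
                flagged.append(t)
        win.append(x)
    return flagged
```

Because the window is bounded, the detector processes an unbounded stream in a single pass with constant memory, which is the requirement the abstract states for stream algorithms.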

5.
Outlier detection algorithms are widely applied in areas such as network intrusion detection and computer-aided medical diagnosis. To address the long execution times and low detection rates of the LDOF, CBOF, and LOF algorithms on large-scale and high-dimensional data sets, we propose an outlier detection algorithm based on a biased random walk on a graph (BGRW). First, the number of iterations, the damping factor, and the outlier value of every object in the data set are initialised. Next, the walker's transition probabilities between objects are derived from their pairwise Euclidean distances. The outlier value of each object is then computed iteratively, and finally the object with the highest outlier value is reported as an outlier. Experiments on real UCI data sets and synthetic data sets with complex distributions compare BGRW with LDOF, CBOF, and LOF in terms of execution time, detection rate, and false-alarm rate. The results show that BGRW effectively reduces execution time and outperforms the compared algorithms on both detection rate and false-alarm rate.
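The random-walk idea in entry 5 can be sketched with a damped walk on a fully connected similarity graph. The exp(-distance) similarity, the damping value, and the final scoring are assumptions of this sketch, not the exact formulation of BGRW: a walker rarely visits isolated points, so a low stationary visit probability translates into a high outlier value.

```python
import math

def random_walk_outlier(points, damping=0.85, iters=100):
    """Damped random walk on a similarity graph. Similarity is
    exp(-Euclidean distance) (our assumption); transition probabilities
    are row-normalised similarities. Isolated points receive little
    visit probability, so 1 - normalised score is the outlier value."""
    n = len(points)
    sim = [[0.0 if i == j else math.exp(-math.dist(points[i], points[j]))
            for j in range(n)] for i in range(n)]
    trans = [[sim[i][j] / sum(sim[i]) for j in range(n)] for i in range(n)]
    score = [1.0 / n] * n
    for _ in range(iters):  # damped power iteration, PageRank-style
        score = [(1 - damping) / n +
                 damping * sum(score[i] * trans[i][j] for i in range(n))
                 for j in range(n)]
    top = max(score)
    return [1 - s / top for s in score]
```

On a small cluster plus one remote point, the remote point collects almost no visit probability and therefore gets the highest outlier value, matching the decision rule described in the abstract.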

6.
Local outlier factor (LOF) detection handles anomalies effectively under skewed data distributions and performs well in many application areas. For anomaly detection on large data, this paper proposes MTLOF (Multi-granularity upper bound pruning based Top-n LOF detection), a fast Top-n local outlier detection algorithm that combines an index structure with multi-level LOF upper bounds into a multi-granularity pruning strategy. First, four upper bounds that are closer to the true LOF values are proposed to avoid computing LOF values directly, and their computational complexity is analysed theoretically. Second, combining the index structure with the UB1 and UB2 bounds, a two-level cell pruning strategy is proposed: in addition to a global cell pruning strategy, a local strategy based on the distribution of objects inside each cell is introduced, which effectively handles pruning in high-density regions. Third, the UB3 and UB4 bounds yield two further, more effective object-level pruning strategies; being closer to the true LOF values, they prune more objects, and a computation-reuse scheme for evaluating the bounds greatly reduces their cost. Finally, the selection of the initial Top-n outlier candidates is optimised: using region partitioning and the constructed index, initial candidates are drawn from sparse regions, which favours objects with large LOF values, raises the initial pruning threshold, prunes more objects in the initial phase, and further improves detection efficiency. A comprehensive experimental evaluation on six real data sets verifies the efficiency and scalability of MTLOF, whose time efficiency is up to 3.5 times that of the state-of-the-art TOLF (Top-n LOF) algorithm.

7.
IncLOF: An Incremental Algorithm for Mining Local Outliers in Dynamic Environments   Total citations: 12 (self-citations: 1, others: 12)
Outlier detection is one of the most fundamental problems studied in data mining, with wide applications in fraud detection, loan approval, weather forecasting, customer classification, and so on. Previous outlier detection algorithms are suited only to static environments and must recompute everything when the data are updated. Building on the density-based local outlier detection algorithm LOF, we propose IncLOF, an incremental algorithm for mining local outliers in dynamic environments: when the database is updated, only the points affected by the update are recomputed, which greatly speeds up outlier mining. Experiments show that in a dynamic environment the running time of IncLOF is far below that of LOF, and that the smaller the ratio of the user-defined minimum number of objects in a neighbourhood to the number of records, the more pronounced the effect.

8.
To improve the accuracy, sensitivity, and execution efficiency of outlier detection as data volumes grow, this paper proposes MR-DLOF, a parallel outlier detection algorithm based on the MapReduce framework and the Local Outlier Factor (LOF) algorithm. First, the data set stored in the Hadoop distributed file system (HDFS) is logically split into blocks. The blocks are then processed in parallel under the MapReduce model, so that the k-nearest-neighbour distance and LOF value of each data point are computed within a single block, which improves execution efficiency. The definition of the k-nearest-neighbour distance is also revised, avoiding infinite local densities when a data set contains k or more duplicate points. Finally, points with large LOF values are merged and their LOF values recomputed, which improves accuracy and sensitivity. Experiments on real data sets verify the effectiveness, efficiency, and scalability of MR-DLOF.

9.
Detecting outliers in a dataset is an important data mining task with many applications, such as detection of credit card fraud or network intrusions. Traditional methods assume numerical data and compute pair-wise distances among points. Recently, outlier detection methods were proposed for categorical and mixed-attribute data using the concept of Frequent Itemsets (FIs). These methods face challenges when dealing with large high-dimensional data, where the number of generated FIs can be extremely large. To address this issue, we propose several outlier detection schemes inspired by the well-known condensed representation of FIs, Non-Derivable Itemsets (NDIs). Specifically, we contrast a method based on frequent NDIs, FNDI-OD, and a method based on the negative border of NDIs, NBNDI-OD, with their previously proposed FI-based counterparts. We also explore outlier detection based on Non-Almost Derivable Itemsets (NADIs), which approximate the NDIs in the data given a δ parameter. Our proposed methods use a far smaller collection of sets than the FI collection in order to compute an anomaly score for each data point. Experiments on real-life data show that, as expected, methods based on NDIs and NADIs offer substantial advantages in terms of speed and scalability over FI-based outlier detection methods. What is significant is that NDI-based methods exhibit similar or better detection accuracy compared to the FI-based methods, which supports our claim that the NDI representation is especially well suited to the task of detecting outliers. At the same time, the NDI approximation scheme, NADIs, is shown to exhibit accuracy similar to the NDI-based method for various δ values while offering further runtime performance gains. Finally, we offer an in-depth discussion and experimentation regarding the trade-offs of the proposed algorithms and the choice of parameter values.

10.
Distance-based outliers: algorithms and applications   Total citations: 20 (self-citations: 0, others: 20)
This paper deals with finding outliers (exceptions) in large, multidimensional datasets. The identification of outliers can lead to the discovery of truly unexpected knowledge in areas such as electronic commerce, credit card fraud, and even the analysis of performance statistics of professional athletes. Existing methods that we have seen for finding outliers can only deal efficiently with two dimensions/attributes of a dataset. In this paper, we study the notion of DB (distance-based) outliers. Specifically, we show that (i) outlier detection can be done efficiently for large datasets, and for k-dimensional datasets with large values of k; and (ii) outlier detection is a meaningful and important knowledge discovery task. First, we present two simple algorithms, both having a complexity of O(kN²), k being the dimensionality and N being the number of objects in the dataset. These algorithms readily support datasets with many more than two attributes. Second, we present an optimized cell-based algorithm that has a complexity that is linear with respect to N, but exponential with respect to k. We provide experimental results indicating that this algorithm significantly outperforms the two simple algorithms for small values of k. Third, for datasets that are mainly disk-resident, we present another version of the cell-based algorithm that guarantees at most three passes over a dataset. Again, experimental results show that this algorithm is by far the best for small k. Finally, we discuss our work on three real-life applications, including one on spatio-temporal data (e.g., a video surveillance application), in order to confirm the relevance and broad applicability of DB outliers. Received February 15, 1999 / Accepted August 1, 1999
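A DB(p, D) outlier, as defined in entry 10, is a point for which at least fraction p of the dataset lies farther away than distance D. The simplest of the algorithms the abstract mentions, the O(kN²) nested loop, can be sketched directly from that definition (the parameter values in the usage are our own toy choices):

```python
import math

def db_outliers(points, p, D):
    """Nested-loop DB(p, D) outlier test: report every point for which
    at least fraction p of the dataset is farther than distance D."""
    n, out = len(points), []
    for i, a in enumerate(points):
        far = sum(1 for b in points if math.dist(a, b) > D)
        if far / n >= p:
            out.append(i)
    return out
```

The cell-based algorithms in the paper reach the same answer without comparing every pair, by bucketing points into cells whose diagonal is tied to D and pruning whole cells at once.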

11.
Outlier detection has long been an important task in data mining. When applied to high-dimensional data, outlier detection algorithms based on Euclidean distance suffer from unreliable accuracy and long running times. Building on angle-variance-based outlier detection, this paper proposes HODA (Hybrid outlier detection algorithm based on angle variance for High-dimensional data), a multi-level algorithm for high-dimensional data. HODA incorporates rough set theory to analyse interactions between attributes and discard those with little influence; it then analyses the data distribution along each dimension and partitions the data into a grid, locating the cells that may contain outliers; finally, it computes an angle-variance outlier factor for those candidate cells to screen out the anomalous data. Experimental results show that, compared with ABOD, FastVOA, and the classical LOF algorithm, HODA runs significantly faster and scales well while maintaining detection accuracy.

12.
Liang Shaoyi, Han Deqiang. Control and Decision, 2019, 34(7): 1433-1440
A large body of research in outlier detection focuses on a class of "density-based" methods, which overcome many shortcomings of traditional outlier detection approaches. However, most of these methods still estimate the local density of a data point from geometric distances, which yields counter-intuitive results in some situations. To address this problem, a neighbourhood-chain-based approach replaces the traditional way of estimating local density, and a new outlier detection method is designed around it. Experimental results show that, compared with the classical density-based method LOF (Local outlier factor) and several LOF-based improvements, the proposed method distinguishes normal from abnormal data points more accurately and avoids counter-intuitive results.

13.
The main goal of outlier detection is to find abnormal data in massive data sets. It offers two benefits: first, as a data preprocessing step it reduces the influence of noise points on models; second, detecting anomalies in specific scenarios, and mining the anomalous phenomena themselves, is valuable in its own right. Current mainstream methods at home and abroad, such as LOF, KNN, and ORCA, cannot handle complex scenarios in which global outliers, local outliers, and outlier clusters coexist. To address this, a new outlier detection model is proposed. To detect global outliers, local outliers, and outlier clusters as fully as possible, the model exploits the respective high sensitivities of iForest to global outliers, LOF to local outliers, and DBSCAN to outlier clusters; these three are selected as base detectors, their objective functions are modified, the framework's error-rate computation is revised, and they are fused into a new outlier detection model, ILD-BOOST. Experimental results show that the model fully covers global and local outliers as well as outlier clusters, and outperforms current mainstream outlier detection methods.

14.
When scanning an object using a 3D laser scanner, the collected scanned point cloud is usually contaminated by numerous measurement outliers. These outliers can be sparse outliers, isolated or non-isolated outlier clusters. The non-isolated outlier clusters pose a great challenge to the development of an automatic outlier detection method since such outliers are attached to the scanned data points from the object surface and difficult to distinguish from these valid surface measurement points. This paper presents an effective outlier detection method based on the principle of majority voting. The method is able to detect non-isolated outlier clusters as well as the other types of outliers in a scanned point cloud. The key component is a majority voting scheme that can cut the connection between non-isolated outlier clusters and the scanned surface so that non-isolated outliers become isolated. An expandable boundary criterion is also proposed to remove isolated outliers and preserve valid point clusters more reliably than a simple cluster size threshold. The effectiveness of the proposed method has been validated by comparing with several existing methods using a variety of scanned point clouds.

15.

We consider the numerical solution of a phase field model for polycrystallization in the solidification of binary mixtures in a domain \( \varOmega \subset \mathbb {R}^2\). The model is based on a free energy in terms of three order parameters: the local orientation \(\varTheta \) of the crystals, the local crystallinity \(\phi \), and the concentration c of one of the components of the binary mixture. The equations of motion are given by an initial-boundary value problem for a coupled system of partial differential equations consisting of a regularized second order total variation flow in \( \varTheta \), an \(L^2\) gradient flow in \(\phi \), and a \(W^{1,2}(\varOmega )^*\) gradient flow in c. Based on an implicit discretization in time by the backward Euler scheme, we suggest a splitting method such that the three semidiscretized equations can be solved separately and prove existence of a solution. As far as the discretization in space is concerned, the fourth order Cahn–Hilliard type equation in c is taken care of by a \(\hbox {C}^0\) Interior Penalty Discontinuous Galerkin approximation which has the advantage that the same finite element space can be used as well for the spatial discretization of the equations in \( \varTheta \) and \( \phi \). The fully discretized equations represent parameter dependent nonlinear algebraic systems with the discrete time as a parameter. They are solved by a predictor corrector continuation strategy featuring an adaptive choice of the time-step. Numerical results illustrate the performance of the suggested numerical method.


16.
Outlier detection is an important component of data mining, and the density-based method LOF is currently among the most widely used. However, LOF requires the parameters k and MinPts, its detection results are highly sensitive to these parameters, and poor choices easily lead to detection errors. This paper proposes VOD, an outlier detection algorithm based on the Voronoi diagram: the neighbourhood relations between objects are determined from the Voronoi diagram, which resolves the parameter-sensitivity problem of density-based methods and lowers the time complexity of the algorithm from O(N²) to O(N log N).

17.
In complex domains, the difficulty of outlier detection is that abnormal and normal information are highly mixed. To address this problem, we propose VODM (variance-based outlier detection model). The model decomposes the information in a data set into a normal part and an abnormal part, so that, under the objective of minimising the loss of normal information, the outlier set is exactly the k samples carrying the most abnormal information. VODM itself is only a theoretical framework for detecting outliers; principal curves are therefore adopted as its implementing algorithm. Experiments on detecting abnormal returns in the stock market show that VODM and its algorithm are effective.

18.
Application of Density-Based Outlier Mining to Intrusion Detection   Total citations: 1 (self-citations: 0, others: 1)
Yan Shaohua, Zhang Wei, Teng Shaohua. Computer Engineering, 2011, 37(18): 240-242
This paper presents a density-based local outlier mining method. Experiments use the KDD99 data set: features are extracted from its 41 attributes, density-based clustering prunes the statistically preprocessed data set, removing most of the densely packed data objects and retaining the unpruned candidate outlier set, and a local outlier mining method then computes the outlier factor of each candidate to detect attack anomalies. Experimental results show that the method achieves a high detection rate with a low false-alarm rate.

19.
Identification of attacks by a network intrusion detection system (NIDS) is an important task. In signature or rule based detection, the previously encountered attacks are modeled, and signatures/rules are extracted. These rules are used to detect such attacks in future; in an anomaly or outlier detection system, by contrast, the normal network traffic is modeled, and any deviation from the normal model is deemed to be an outlier/attack. Data mining and machine learning techniques are widely used in offline NIDS. Unsupervised and supervised learning techniques differ in the way the NIDS dataset is treated: unsupervised learning characteristically finds patterns in data and detects outliers, while supervised learning determines a learned function for input features and generalizes over data instances. The intuition is that if these two techniques are combined, better performance may be obtained. Hence, in this paper the advantages of unsupervised and supervised techniques are inherited in the proposed hierarchical model, devised in three stages to detect attacks in a NIDS dataset. The NIDS dataset is clustered using Dirichlet process (DP) clustering based on the underlying data distribution. Iteratively on each cluster, local denser areas are identified using the local outlier factor (LOF), which in turn is discretized into four bins of separation based on LOF score. Further, in each bin the normal data instances are modeled using a one-class classifier (OCC). A combination of a density estimation method, a reconstruction method, and boundary methods is used for the OCC model. A product-rule combination of the three methods takes into consideration the strengths of each method in building a stronger OCC model. Any deviation from this model is considered an attack. Experiments are conducted on the KDD CUP'99 and SSENet-2011 datasets. The results show that the proposed model is able to identify attacks with a higher detection rate and low false alarms.

20.
LOF (Local Outlier Factor) is a classical density-based local outlier detection algorithm. To improve its precision and mine local outliers more accurately, this paper proposes an improved LOF outlier detection algorithm based on data fields. Data field theory is applied to the attribute values of each dimension of the data set to compute potential values, from which the notion of an average potential difference is introduced; for any two points in a dimension whose potential difference exceeds the average, a weight is added when computing their distance, which improves the precision of outlier detection. Experimental results show that the algorithm is feasible and achieves higher precision.
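The data-field potential that entry 20 builds its weighting on can be illustrated in one dimension. The Gaussian-decay field below is a common data-field form, but the paper's exact kernel and weighting rule are not given in the abstract, so this sketch only shows the first step: isolated attribute values accumulate low potential, and potential differences between points can then drive the distance weighting.

```python
import math

def potentials(values, sigma=1.0):
    """Data-field potential of each 1-D attribute value: every point
    contributes a Gaussian-decaying field exp(-((v - w) / sigma)^2).
    The kernel choice is an assumption of this sketch."""
    return [sum(math.exp(-((v - w) / sigma) ** 2) for w in values)
            for v in values]
```

A value far from the rest of its dimension receives the lowest potential, so pairs involving it show large potential differences, which is exactly where the improved algorithm applies its extra distance weight.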
