Similar Literature
20 similar records found (search time: 15 ms)
1.
Efficiently detecting outliers or anomalies is an important problem in many areas of science, medicine and information technology. Applications range from data cleaning to clinical diagnosis, from detecting anomalous defects in materials to fraud and intrusion detection. Over the past decade, researchers in data mining and statistics have addressed the problem of outlier detection using both parametric and non-parametric approaches in a centralized setting. However, there are still several challenges that must be addressed. First, most approaches to date have focused on detecting outliers in a continuous attribute space. However, almost all real-world data sets contain a mixture of categorical and continuous attributes. Categorical attributes are typically ignored or incorrectly modeled by existing approaches, resulting in a significant loss of information. Second, there have not been any general-purpose distributed outlier detection algorithms. Most distributed detection algorithms are designed with a specific domain (e.g. sensor networks) in mind. Third, the data sets being analyzed may be streaming or otherwise dynamic in nature. Such data sets are prone to concept drift, and models of the data must be dynamic as well. To address these challenges, we present a tunable algorithm for distributed outlier detection in dynamic mixed-attribute data sets.

2.
In this paper we present a genetic solution to the outlier detection problem. The essential idea behind this technique is to define outliers by examining those projections of the data along which the data points have abnormal or inconsistent behavior (defined in terms of their sparsity values). We use a partitioning method to divide the data set into groups such that all the objects in a group can be considered to behave similarly. We then identify those groups that contain outliers. The algorithm assigns an ‘outlier-ness’ value that gives a relative measure of how strong an outlier group is. An evolutionary search technique is employed to determine those projections of the data over which the outliers can be identified. A new data structure, called the grid count tree (GCT), is used for efficient computation of the sparsity factor. GCT helps in quickly determining the number of points within any grid cell defined over the projected space and hence facilitates faster computation of the sparsity factor. A new crossover operator is also defined for this purpose. The proposed method is applicable to both numeric and categorical attributes. The search complexity of the GCT traversal algorithm is provided. Results are demonstrated on both artificial and real-life data sets, including four gene expression data sets.
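As an illustration of the sparsity computation described above, the following Python sketch grids a 2-D projection and scores each cell; the formula is the Aggarwal–Yu sparsity coefficient, one common definition of a "sparsity value" — the paper's exact definition and its grid count tree are not reproduced here.

```python
# A minimal sketch of a grid-based sparsity coefficient for a 2-D projection,
# assuming the Aggarwal-Yu definition (the paper's GCT structure is omitted).
import numpy as np

def sparsity_coefficient(data, dims, bins=5):
    """S(D) = (n(D) - N*f^k) / sqrt(N*f^k*(1 - f^k)) for each grid cell
    of an equi-width grid over the projection `dims`."""
    N, k = len(data), len(dims)
    f = 1.0 / bins                      # expected fraction per cell per dimension
    proj = data[:, dims]
    # digitize each dimension into `bins` equi-width intervals
    idx = np.stack([np.clip(np.digitize(proj[:, j],
                    np.linspace(proj[:, j].min(), proj[:, j].max(), bins + 1)[1:-1]),
                    0, bins - 1) for j in range(k)], axis=1)
    cells, counts = np.unique(idx, axis=0, return_counts=True)
    expected = N * f ** k
    s = (counts - expected) / np.sqrt(expected * (1 - f ** k))
    return cells, s                     # strongly negative s => sparse (outlier) cell

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
cells, s = sparsity_coefficient(X, dims=[0, 2])
print(cells[np.argsort(s)[:3]], np.sort(s)[:3])   # three sparsest cells
```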

3.
The logical analysis of data (LAD) is one of the most promising data mining methods developed to date for extracting knowledge from data. The key feature of the LAD is its capability of detecting hidden patterns in the data. Because patterns are basically combinations of certain attributes, they can be used to build a decision boundary for classification in the LAD by providing important information that distinguishes observations in one class from those in the other. The use of patterns may result in more stable performance in classifying both positive and negative classes, owing to their robustness to measurement errors. The LAD technique, however, tends to choose too many patterns by solving a set covering problem to build a classifier; this is especially the case when outliers exist in the data set. In the set covering problem of the LAD, each observation must be covered by at least one pattern, even if the observation is an outlier. Thus, existing approaches tend to select too many patterns to cover these outliers, resulting in overfitting. Here, we propose new pattern selection approaches for LAD that take both outliers and the coverage of a pattern into account. The proposed approaches avoid overfitting by building a sparse classifier. The performance of the proposed pattern selection approaches is compared with that of existing LAD approaches on several public data sets. The computational results show that the sparse classifiers built on the patterns selected by the proposed approaches yield improved classification performance compared to the existing approaches, especially when outliers exist in the data set.
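The set-covering step the abstract criticizes can be illustrated with a simple greedy cover, in which every observation — outliers included — must be covered by some pattern; the pattern sets below are hypothetical.

```python
# A minimal sketch of the greedy set-covering baseline for LAD pattern
# selection (the behavior the abstract criticizes: every observation,
# even an outlier, must be covered, which inflates the pattern count).
def greedy_pattern_cover(patterns, n_obs):
    """patterns: list of sets of observation indices each pattern covers."""
    uncovered = set(range(n_obs))
    chosen = []
    while uncovered:
        # pick the pattern covering the most still-uncovered observations
        best = max(range(len(patterns)), key=lambda i: len(patterns[i] & uncovered))
        if not patterns[best] & uncovered:
            break                       # remaining observations are uncoverable
        chosen.append(best)
        uncovered -= patterns[best]
    return chosen

patterns = [{0, 1, 2}, {2, 3}, {4}, {0, 4}]
print(greedy_pattern_cover(patterns, 5))   # e.g. [0, 1, 2]
```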

4.
A fuzzy support vector machine (FSVM) is an improvement on SVMs for dealing with data sets that contain outliers. In FSVM, a key step is to compute a membership value for every training sample. Existing approaches to computing the membership of a sample are motivated by the existence of outliers in data sets and do not take into account the inconsistency between conditional attributes and decision classes. However, this kind of inconsistency affects the membership of every sample and has been studied in fuzzy rough set theory. In this paper, we develop a new method to compute memberships for FSVMs by using a Gaussian kernel-based fuzzy rough set. Furthermore, we employ an attribute reduction technique based on Gaussian kernel-based fuzzy rough sets to perform feature selection for FSVMs. Building on these ideas, we combine FSVMs and fuzzy rough set methods. The experimental results show that the proposed approaches are feasible and effective.
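One plausible reading of the membership computation is sketched below, under assumptions: a sample's membership is taken as a Gaussian-kernel fuzzy lower approximation (one minus the kernel similarity to its closest sample of the other class), and the memberships are passed to an SVM as sample weights. This is illustrative, not the paper's exact construction.

```python
# A minimal sketch of Gaussian-kernel fuzzy-rough memberships for an FSVM:
# a sample's membership in its own class is high when no sample of the other
# class is kernel-similar to it. Illustrative reading of the abstract only.
import numpy as np
from sklearn.svm import SVC

def fuzzy_rough_membership(X, y, sigma=1.0):
    m = np.empty(len(X))
    for i in range(len(X)):
        other = X[y != y[i]]
        k = np.exp(-np.sum((other - X[i]) ** 2, axis=1) / (2 * sigma ** 2))
        m[i] = 1.0 - k.max()            # lower approximation: closest foe dominates
    return m

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
w = fuzzy_rough_membership(X, y)
clf = SVC(kernel="rbf").fit(X, y, sample_weight=w)  # memberships as sample weights
print(w.min(), w.max())
```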

5.
Software is quite often expensive to develop and can become a major cost factor in corporate information systems budgets. Given the variability of software characteristics and the continual emergence of new technologies, the accurate prediction of software development costs is a critical problem within the project management context. To address this issue a large number of software cost prediction models have been proposed. Each model succeeds to some extent, but they all encounter the same problem: the inconsistency and inadequacy of the historical data sets. Often a preliminary data analysis has not been performed, and the data may contain non-dominated or confounded variables. Moreover, some of the project attributes or their values are inappropriately out of date, for example the type of computer used for project development in the COCOMO 81 (Boehm, 1981) data set. This paper proposes a framework composed of a set of clearly identified steps that should be performed before a data set is used within a cost estimation model. The framework is based closely on a paradigm proposed by Maxwell (2002). Briefly, it applies a set of statistical approaches, including correlation coefficient analysis, Analysis of Variance, and the Chi-Square test, to the data set in order to remove outliers and identify dominant variables. To ground the framework within a practical context, the procedure is used to analyze the ISBSG (International Software Benchmarking Standards Group, Release 8) data set, a frequently used, accessible data collection containing information on 2,008 software projects. As a consequence of this analysis, 6 explanatory variables are extracted and evaluated.
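A minimal sketch of the kind of preliminary screening the framework prescribes, assuming hypothetical project variables (effort, size, team); the chi-square step for categorical pairs is omitted for brevity.

```python
# A minimal sketch of preliminary data analysis before cost modeling:
# correlation for related numeric variables, one-way ANOVA for a numeric
# response against a categorical factor, and an IQR rule for outliers.
# All column names and data are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
effort = rng.lognormal(6, 1, 120)                       # response variable
size = effort * rng.normal(1, 0.1, 120)                 # strongly related predictor
team = rng.choice(["small", "large"], 120)

r, p_corr = stats.pearsonr(size, effort)                # correlation analysis
f, p_anova = stats.f_oneway(effort[team == "small"], effort[team == "large"])
q1, q3 = np.percentile(effort, [25, 75])
outliers = (effort < q1 - 1.5 * (q3 - q1)) | (effort > q3 + 1.5 * (q3 - q1))
print(f"corr r={r:.2f} (p={p_corr:.3f}); ANOVA p={p_anova:.3f}; "
      f"{outliers.sum()} IQR outliers flagged")
```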

6.
A spatial outlier is a spatial point that is discontinuous with its neighbors, or an observation that deviates so far that it appears to have been generated by a different mechanism. Spatial outlier detection has wide applications in transportation, ecology, public safety, public health, earthquake and tsunami domains. Traditional methods that judge outliers by a single non-spatial attribute value are prone to misjudgment. Taking multiple attributes into account, the authors propose and study a new spatial outlier detection algorithm in which the spatial dimensions determine neighborhood relations, a distance function is defined over the non-spatial dimensions, and outliers are detected with the Mahalanobis distance. The time complexity of the algorithm is analyzed. Simulation experiments show that the algorithm can effectively find outliers in large-scale spatial data.
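A minimal sketch of the described scheme: spatial coordinates determine the k nearest neighbors, and the Mahalanobis distance of an object's non-spatial attributes from its neighborhood mean flags outliers. The k value and threshold are assumptions.

```python
# A minimal sketch: neighbors from the spatial dimensions, Mahalanobis
# distance over the non-spatial attributes. k and threshold are assumed.
import numpy as np
from scipy.spatial import cKDTree

def spatial_mahalanobis_outliers(coords, attrs, k=5, threshold=3.0):
    tree = cKDTree(coords)
    cov_inv = np.linalg.pinv(np.cov(attrs, rowvar=False))
    scores = np.empty(len(coords))
    for i in range(len(coords)):
        _, idx = tree.query(coords[i], k=k + 1)     # k neighbors + the point itself
        neigh = attrs[idx[1:]]
        diff = attrs[i] - neigh.mean(axis=0)        # deviation from neighborhood mean
        scores[i] = np.sqrt(diff @ cov_inv @ diff)  # Mahalanobis distance
    return scores > threshold, scores

rng = np.random.default_rng(2)
coords = rng.uniform(0, 10, (200, 2))
attrs = rng.normal(0, 1, (200, 3))
attrs[0] += 6                                       # implant one outlier
flags, scores = spatial_mahalanobis_outliers(coords, attrs)
print(flags[0], scores[0])
```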

7.
Social choice deals with aggregating the preferences of a number of voters into a collective preference. We use this idea for software project effort estimation, substituting project attributes for the voters. Instead of supplying numeric values for various project attributes that are then used in regression or similar methods, a new project only needs to be placed into one ranking per attribute, necessitating only ordinal values. Using the resulting aggregate ranking, the new project is again placed between other projects, whose actual expended effort can be used to derive an estimate. In this paper we present this method together with extensions that use weightings derived from genetic algorithms. We detail a validation based on several well-known data sets and show that estimation accuracy similar to that of classic methods can be achieved with considerably lower demands on input data.
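A minimal sketch of the idea, using the Borda count as one common aggregation rule (the paper's exact rule and GA-derived weightings are not reproduced): each attribute ranks the projects, ranks are aggregated, and the new project's neighbors in the aggregate order supply the estimate.

```python
# A minimal sketch of rank aggregation for effort estimation, assuming the
# Borda count; attribute values and efforts below are hypothetical.
import numpy as np

def borda_aggregate(rankings):
    """rankings: (n_attributes, n_projects) array of per-attribute ranks (0 = lowest)."""
    n = rankings.shape[1]
    scores = (n - 1 - rankings).sum(axis=0)   # higher Borda score = better rank
    return np.argsort(-scores)                # aggregate order of projects

attr_values = np.array([[3.0, 1.0, 2.0, 2.5],     # attribute 1 for 4 projects
                        [10.,  5.,  8., 12.]])    # attribute 2
rankings = np.argsort(np.argsort(attr_values, axis=1), axis=1)
order = borda_aggregate(rankings)
efforts = np.array([300, 120, 210, 260])
# estimate for project 2: mean effort of its neighbors in the aggregate ranking
pos = list(order).index(2)
neighbors = [order[max(pos - 1, 0)], order[min(pos + 1, len(order) - 1)]]
print(order, efforts[neighbors].mean())
```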

8.
Spatial outlier detection is an important research problem that has received much attention in recent years. Most existing approaches are designed for numerical attributes and are not applicable to categorical ones (e.g., binary, ordinal, and nominal) that are popular in many applications. The main challenges are the modeling of spatial categorical dependency and computational efficiency. This paper presents the first outlier detection framework for spatial categorical data. Specifically, a new metric, named the Pair Correlation Ratio (PCR), is measured for each pair of category sets based on their co-occurrence frequencies at specific spatial distance ranges. The relevances among spatial objects are then calculated using PCR values with regard to their spatial distances. The outlierness of each object is defined as the inverse of the average relevance between the object and its spatial neighbors. The objects with the highest outlier scores are returned as spatial categorical outliers. A set of algorithms is further designed for single-attribute and multi-attribute spatial categorical data sets. Extensive experimental evaluations on both simulated and real data sets demonstrate the effectiveness and efficiency of the proposed approaches.
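A hedged sketch of the PCR idea as the abstract states it: category-pair co-occurrence within a distance range is normalized by what independence would predict, and an object's outlierness is the inverse of its average relevance to its spatial neighbors. The binning and normalization details are assumptions.

```python
# A minimal sketch of a pair-correlation-ratio outlier score for spatial
# categorical data; one distance range only, normalization assumed.
import numpy as np
from collections import Counter
from scipy.spatial import cKDTree

def pcr_outlier_scores(coords, cats, radius=1.0):
    tree = cKDTree(coords)
    pairs = tree.query_pairs(radius)                     # neighbor pairs within radius
    pair_freq = Counter(tuple(sorted((cats[i], cats[j]))) for i, j in pairs)
    cat_freq = Counter(cats)
    n = len(cats)
    def pcr(a, b):                                       # observed / expected co-occurrence
        expected = len(pairs) * (cat_freq[a] / n) * (cat_freq[b] / n) * (2 if a != b else 1)
        return pair_freq[tuple(sorted((a, b)))] / max(expected, 1e-9)
    scores = np.zeros(n)
    for i in range(n):
        neigh = [j for j in tree.query_ball_point(coords[i], radius) if j != i]
        if neigh:
            rel = np.mean([pcr(cats[i], cats[j]) for j in neigh])
            scores[i] = 1.0 / max(rel, 1e-9)             # inverse of average relevance
    return scores

rng = np.random.default_rng(3)
coords = rng.uniform(0, 5, (100, 2))
cats = np.where(coords[:, 0] < 2.5, "A", "B").tolist()
cats[0] = "B" if cats[0] == "A" else "A"                 # implant a categorical outlier
print(np.argmax(pcr_outlier_scores(coords, cats)))
```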

9.
Due to the extensive usage of data-based techniques in industrial processes, detecting outliers in industrial process data has become increasingly indispensable. This paper proposes an outlier detection scheme that can be directly used for either process monitoring or process control. Based on traditional Gaussian process regression, we develop several detection algorithms whose mean function, covariance function, likelihood function and inference method are specially devised. Compared with traditional detection methods, the proposed scheme makes fewer assumptions and is more suitable for modern industrial processes. The effectiveness of the proposed scheme is verified by experiments on both synthetic and real-life data sets.
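A minimal sketch of GP-based outlier flagging with the stock scikit-learn regressor standing in for the paper's specially devised mean, covariance, likelihood and inference choices: points with large standardized residuals against the predictive distribution are flagged.

```python
# A minimal sketch of outlier flagging via Gaussian process regression:
# large standardized residuals against the GP predictive distribution.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
X = np.linspace(0, 10, 100).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(0, 0.1, 100)
y[30] += 2.0                                      # implant an outlier

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, y)
mean, std = gp.predict(X, return_std=True)
z = np.abs(y - mean) / std                        # standardized residuals
print(np.argmax(z))                               # expect index 30, the implanted outlier
```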

10.
In this study, we propose a novel local outlier detection approach, called LOMA, for mining local outliers in high-dimensional data sets. To improve the efficiency of outlier detection, LOMA prunes irrelevant attributes and objects in the data set by analyzing attribute relevance against a sparse factor threshold; such pruning substantially reduces the size of the data set. The core of LOMA is a sparse-subspace search that applies the particle swarm optimization method to the reduced data set. During the search, a sparse coefficient threshold represents the degree of sparsity of data objects in a subspace; objects in such sparse subspaces are considered local outliers. The attribute relevance analysis also provides guidance for experts and users in identifying attributes that are useless for detecting outliers. In addition, our sparse-subspace-based outlier algorithm is a novel technique for local-outlier detection in a wide variety of applications. Experimental results on both synthetic and UCI data sets validate the effectiveness and accuracy of LOMA. In particular, LOMA achieves high mining efficiency and accuracy when the sparse factor threshold is set to a small value.

11.
To address the problem that irrelevant attributes and redundant data in high-dimensional data sets prevent outliers from being detected, a new sparse-subspace-based local outlier detection algorithm (SSLOD) is proposed. An outlier factor is defined for each object from its local density in each dimension; attributes irrelevant to local outliers and redundant data objects are pruned from the data set according to an outlier factor threshold; an improved particle swarm optimization algorithm then searches the reduced data set for sparse subspaces, and the data objects in those subspaces are the outliers. Comprehensive experiments on both synthetic and real data sets verify the effectiveness and accuracy of the algorithm.

12.
Application of frequent-pattern-based outlier mining to intrusion detection   (Cited: 1; self-citations: 0; cited by others: 1)
王茜  唐锐 《计算机应用研究》2013,30(4):1208-1211
To cope with the high dimensionality of network security data, this work detects details of intrusion behavior in network data that traditional outlier detection cannot effectively discover. A frequent-pattern-based algorithm is proposed: frequent patterns and association rules among data items are mined to separate noise and anomalous points from data streams or security log data, a weighted frequent-pattern outlier factor is computed for the security data to locate outliers precisely, and the anomalous attributes are finally screened out automatically. Experiments show that, with favorable space and time complexity, the method can effectively discover anomalous attributes in high-dimensional security data.

13.
A Simulation Tool for Efficient Analogy Based Cost Estimation   (Cited: 1; self-citations: 0; cited by others: 1)
Estimation of a software project effort, based on project analogies, is a promising method in the area of software cost estimation. Projects in a historical database that are analogous (similar) to the project under examination are detected, and their effort data are used to produce estimates. As in all software cost estimation approaches, important decisions must be made regarding certain parameters in order to calibrate with local data and obtain reliable estimates. In this paper, we present a statistical simulation tool, namely the bootstrap method, which helps the user in tuning the analogy approach before application to real projects. This is an essential step of the method, because if inappropriate values for the parameters are selected in the first place, the estimate will inevitably be wrong. Additionally, we show how measures of accuracy, and in particular confidence intervals, may be computed for the analogy-based estimates using the bootstrap method with different assumptions about the population distribution of the data set. Estimate confidence intervals are necessary in order to assess point estimate accuracy and assist risk analysis and project planning. Examples of bootstrap confidence intervals and a comparison with regression models are presented on well-known cost data sets.
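A minimal sketch of bootstrapping a confidence interval for an analogy-based estimate: resample the historical projects, re-run a k-nearest-analogies estimator, and take percentile bounds. The k value, Euclidean distance, and percentile interval are assumptions.

```python
# A minimal sketch of a bootstrap confidence interval for an analogy-based
# effort estimate; historical data below is synthetic and hypothetical.
import numpy as np

def analogy_estimate(hist_X, hist_effort, x_new, k=3):
    d = np.linalg.norm(hist_X - x_new, axis=1)
    return hist_effort[np.argsort(d)[:k]].mean()     # mean effort of k analogies

def bootstrap_ci(hist_X, hist_effort, x_new, B=2000, alpha=0.05, k=3, seed=0):
    rng = np.random.default_rng(seed)
    n = len(hist_X)
    est = [analogy_estimate(hist_X[idx], hist_effort[idx], x_new, k)
           for idx in (rng.integers(0, n, n) for _ in range(B))]
    return np.percentile(est, [100 * alpha / 2, 100 * (1 - alpha / 2)])

rng = np.random.default_rng(5)
hist_X = rng.normal(size=(40, 3))                    # historical project attributes
hist_effort = 100 + 50 * hist_X[:, 0] + rng.normal(0, 5, 40)
x_new = np.array([0.2, -0.1, 0.4])
print(analogy_estimate(hist_X, hist_effort, x_new),
      bootstrap_ci(hist_X, hist_effort, x_new))      # point estimate and 95% CI
```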

14.
The main goal of outlier detection is to discover anomalous data in massive data sets. It offers two benefits: first, as a data preprocessing step, it reduces the influence of noise points on a model; second, detecting anomalies in specific scenarios and mining the anomalous phenomena themselves is also of great value. Current mainstream methods at home and abroad, such as LOF, KNN and ORCA, cannot handle complex scenarios in which global outliers, local outliers and outlier clusters exist simultaneously. To address this situation, a new outlier detection model is proposed. To detect global outliers, local outliers and outlier clusters as comprehensively as possible, iForest, LOF and DBSCAN, which are highly sensitive to global outliers, local outliers and outlier clusters respectively, are selected as the three base classifiers; their objective functions are modified, the framework's error-rate computation is corrected, and the classifiers are fused, forming the new outlier detection model ILD-BOOST. Experimental results show that the model fully covers the detection of global and local outliers as well as outlier clusters, and outperforms current mainstream outlier detection methods.
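A minimal sketch of fusing the three named detectors with a simple vote, standing in for ILD-BOOST's modified objective functions and boosting-style fusion, which are not reproduced here.

```python
# A minimal sketch: iForest for global outliers, LOF for local outliers,
# DBSCAN noise for outlier clusters, fused by a simple majority vote.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(0, 1, (200, 2)),
               rng.normal(8, 0.2, (4, 2)),          # a small outlier cluster
               [[5.0, -5.0]]])                      # a global outlier

votes = np.zeros(len(X), dtype=int)
votes += IsolationForest(random_state=0).fit_predict(X) == -1
votes += LocalOutlierFactor(n_neighbors=20).fit_predict(X) == -1
votes += DBSCAN(eps=0.8, min_samples=5).fit_predict(X) == -1   # -1 = noise
print(np.where(votes >= 2)[0])                      # flagged by >= 2 detectors
```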

15.
16.
The classification of imbalanced data is a major challenge for machine learning. In this paper, we present a fuzzy total margin based support vector machine (FTM-SVM) method to handle the class imbalance learning (CIL) problem in the presence of outliers and noise. The proposed method incorporates a total margin algorithm, different cost functions and an appropriate fuzzification of the penalty into FTM-SVM and formulates them for the nonlinear case. We consider a well-suited family of fuzzy membership functions for assigning fuzzy membership values, which yields six FTM-SVM settings. We evaluated the proposed FTM-SVM method on two artificial data sets and 16 real-world imbalanced data sets. Experimental results show that the proposed FTM-SVM method achieves higher G_Mean and F_Measure values than several existing CIL methods. Based on the overall results, we conclude that the proposed FTM-SVM method is effective for the CIL problem, especially in the presence of outliers and noise in data sets.
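For reference, the two reported metrics can be computed as below: G_Mean is the geometric mean of the per-class recalls, and F_Measure is the F1 score of the minority class.

```python
# A minimal sketch of the two evaluation metrics used for imbalanced
# classification; the labels below are hypothetical.
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score

def g_mean(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)        # recall on the positive (minority) class
    specificity = tn / (tn + fp)        # recall on the negative (majority) class
    return np.sqrt(sensitivity * specificity)

y_true = np.array([0] * 90 + [1] * 10)
y_pred = np.array([0] * 85 + [1] * 5 + [0] * 4 + [1] * 6)
print(g_mean(y_true, y_pred), f1_score(y_true, y_pred))
```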

17.
A spatial outlier is a spatial object whose attribute values differ markedly from those of its neighbors. A main drawback of existing spatial outlier detection algorithms is that they cause some true outliers to be ignored while labeling some non-outliers as spatial outliers. An iterative algorithm is proposed that detects outliers over multiple iterations and achieves good results. Experiments show that the algorithm is practical.

18.
Existing outlier detection algorithms fall into two main classes. The first class is oriented to statistical data: it treats all data as points in a multi-dimensional space without distinguishing spatial dimensions from non-spatial ones, and may therefore produce wrong judgments or find meaningless outliers. The second class is oriented to spatial data and does distinguish spatial from non-spatial dimensions, but its algorithms are inefficient or cannot find neighborhood outliers. Introducing the concept of entropy weight, a new entropy-weight-based algorithm for measuring spatial neighborhood outliers is proposed. The algorithm is oriented to spatial data and distinguishes spatial from non-spatial dimensions: it partitions spatial neighborhoods with a spatial index and computes a spatial deviation factor from the non-spatial attributes, by which spatial neighborhood outliers are measured. Theoretical analysis shows that the algorithm is sound. Experimental results show that it depends little on user input and offers high detection precision and computational efficiency.
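A minimal sketch of the entropy-weight step under assumptions: attributes whose values are spread evenly have high entropy and get low weight, and the weights scale an object's non-spatial deviation from its neighborhood; the deviation formula is an illustrative stand-in for the paper's spatial deviation factor.

```python
# A minimal sketch of entropy weighting plus a weighted neighborhood
# deviation; the deviation formula is an assumption, not the paper's.
import numpy as np

def entropy_weights(attrs):
    p = attrs / attrs.sum(axis=0)                        # column-normalize
    n = len(attrs)
    e = -(p * np.log(p + 1e-12)).sum(axis=0) / np.log(n) # entropy per attribute
    return (1 - e) / (1 - e).sum()                       # low entropy -> high weight

def deviation_factor(attrs, neighbors, i, w):
    diff = np.abs(attrs[i] - attrs[neighbors].mean(axis=0))
    return (w * diff).sum()                              # entropy-weighted deviation

rng = np.random.default_rng(7)
attrs = rng.uniform(1, 10, (50, 3))
attrs[0, 1] = 100                                        # implant a deviant value
w = entropy_weights(attrs)
print(w, deviation_factor(attrs, list(range(1, 6)), 0, w))
```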

19.
A fast outlier detection algorithm for high-dimensional categorical data streams   (Cited: 1; self-citations: 1; cited by others: 1)
An outlier measure for categorical data streams, the weighted frequent pattern outlier factor (WFPOF), is proposed, and on this basis a fast data stream outlier detection algorithm, FODFP-Stream (fast outlier detection for high dimensional categorical data streams based on frequent pattern), is presented. By dynamically discovering and maintaining frequent patterns to compute the outlier degree, the algorithm can effectively handle high-dimensional categorical data streams and can be further extended…
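A minimal sketch of a frequent-pattern outlier factor in the classic unweighted, static form: a transaction containing few of the data's frequent patterns scores low and ranks as an outlier. WFPOF's weighting and FODFP-Stream's incremental pattern maintenance are not shown.

```python
# A minimal sketch of a frequent-pattern outlier factor for categorical
# transactions (classic FPOF idea; the paper's weighted, stream-maintained
# variant adds weights and incremental pattern upkeep not shown here).
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support=0.3, max_len=2):
    n = len(transactions)
    counts = Counter()
    for t in transactions:
        for k in range(1, max_len + 1):
            for items in combinations(sorted(t), k):
                counts[items] += 1
    return {items: c / n for items, c in counts.items() if c / n >= min_support}

def fpof(transactions, patterns):
    scores = []
    for t in transactions:
        contained = [s for items, s in patterns.items() if set(items) <= set(t)]
        scores.append(sum(contained) / max(len(patterns), 1))
    return scores                                     # low score = likely outlier

data = [{"http", "tcp", "ok"}] * 8 + [{"ftp", "udp", "err"}]
patterns = frequent_itemsets(data)
print(fpof(data, patterns))                           # last transaction scores lowest
```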

20.
Access control policies may contain anomalies such as incompleteness and inconsistency, which can result in security vulnerabilities. Detecting such anomalies in large sets of complex policies automatically is a difficult and challenging problem. In this paper, we propose a novel method for detecting inconsistency and incompleteness in access control policies with the help of data classification tools well known in data mining. Our proposed method consists of three phases: firstly, we perform parsing on the policy data set; this includes ordering of attributes and normalization of Boolean expressions. Secondly, we generate decision trees with the help of our proposed algorithm, which is a modification of the well-known C4.5 algorithm. Thirdly, we execute our proposed anomaly detection algorithm on the resulting decision trees. The results of the anomaly detection algorithm are presented to the policy administrator who will take remediation measures. In contrast to other known policy validation methods, our method provides means for handling incompleteness, continuous values and complex Boolean expressions. In order to demonstrate the efficiency of our method in discovering inconsistencies, incompleteness and redundancies in access control policies, we also provide a proof-of-concept implementation.
