Similar Documents (20 results)
1.
AdaBoost.M2 and AdaBoost.MH are boosting algorithms for learning from multiclass datasets. They have received less attention than other boosting algorithms because they require base classifiers that can handle the pseudoloss or Hamming loss, respectively. The difficulty with these loss functions is that each example is associated with k weights, where k is the number of classes. We address this issue by transforming an m-example dataset with k weights per example into a dataset with km examples and one weight per example. Minimising error on the transformed dataset is equivalent to minimising loss on the original dataset. Resampling the transformed dataset improves time efficiency and supports base classifiers that cannot handle weighted examples. We empirically apply the transformation on several multiclass datasets using naive Bayes and decision trees as base classifiers. Our experiments show that the approach is competitive with AdaBoost.ECC, a boosting algorithm that uses output coding.
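A minimal sketch of the described transformation, assuming the Hamming-loss reading in which each (example, class) pair of the m-example, k-class dataset becomes one binary example carrying a single weight; the function name and array layout are illustrative, not from the paper:

```python
import numpy as np

def expand_multiclass_weights(X, y, W):
    """Turn an (m, d) dataset with k weights per example (W is m x k)
    into a km-example, single-weight binary dataset: each copy asks
    'does example i belong to class c?' and carries weight W[i, c]."""
    m, k = W.shape
    X_out = np.repeat(X, k, axis=0)               # each example copied k times
    cls = np.tile(np.arange(k), m)                # candidate class for each copy
    y_out = (cls == np.repeat(y, k)).astype(int)  # 1 iff candidate class is the true label
    w_out = W.reshape(-1)                         # one scalar weight per (example, class) pair
    return np.column_stack([X_out, cls]), y_out, w_out

# Resampling (as the abstract suggests) for base learners that ignore weights:
# rng = np.random.default_rng(0)
# idx = rng.choice(len(w_out), size=len(w_out), p=w_out / w_out.sum())
```

Minimising weighted binary error on the expanded dataset then corresponds to minimising the multiclass loss on the original one.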

2.

In pattern recognition and machine learning, data preprocessing algorithms are increasingly used to achieve high classification performance. In particular, preprocessing has become indispensable before classifying medical datasets with nonlinear and imbalanced data distributions. In this study, a new data preprocessing method is proposed for the classification of the Parkinson, hepatitis, Pima Indians, single proton emission computed tomography (SPECT) heart, and thoracic surgery medical datasets, all of which have nonlinear and imbalanced data distributions and were taken from the UCI machine learning repository. The proposed method consists of three steps. In the first step, the cluster centers of each attribute are calculated using k-means, fuzzy c-means, and mean shift clustering algorithms. In the second step, the absolute differences between the data in each attribute and the cluster centers are calculated, and the average of these differences is computed for each attribute. In the final step, the weighting coefficients are calculated by dividing the mean value by the mean difference to the cluster centers, and weighting is performed by multiplying the obtained coefficients by the attribute values in the dataset. Three attribute weighting methods are proposed: (1) similarity-based attribute weighting with k-means clustering, (2) similarity-based attribute weighting with fuzzy c-means clustering, and (3) similarity-based attribute weighting with mean shift clustering. The aim is to gather the data in each class together and to reduce the within-class variance; reducing the variance in each class compacts it and at the same time increases the discrimination between classes. For comparison with other methods in the literature, random subsampling is used to handle imbalanced dataset classification. After the attribute weighting process, four classification algorithms (linear discriminant analysis, the k-nearest neighbor classifier, the support vector machine, and the random forest classifier) are used to classify the imbalanced medical datasets. To evaluate the performance of the proposed models, classification accuracy, precision, recall, area under the ROC curve, κ value, and F-measure are used. For training and testing the classifier models, three schemes are used: a 50–50% train–test holdout, a 60–40% train–test holdout, and tenfold cross-validation. The experimental results show that the proposed attribute weighting methods obtain higher classification performance than the random subsampling method in classifying imbalanced medical datasets.
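A hedged sketch of the k-means variant of the weighting step, under one plausible reading of the abstract (per-attribute clustering, with coefficient = attribute mean / mean absolute distance to the assigned cluster center); the function name and the choice of two clusters per attribute are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

def similarity_based_attribute_weights(X, n_clusters=2, random_state=0):
    """For each attribute: cluster its values with k-means, measure each
    value's absolute distance to its assigned cluster center, and derive
    the attribute's weighting coefficient as mean(attribute) / mean(distance)."""
    m, d = X.shape
    weights = np.empty(d)
    for j in range(d):
        col = X[:, j].reshape(-1, 1)
        km = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=random_state).fit(col)
        centers = km.cluster_centers_[km.labels_, 0]   # per-sample assigned center
        mean_abs_diff = np.mean(np.abs(X[:, j] - centers))
        weights[j] = np.mean(X[:, j]) / (mean_abs_diff + 1e-12)
    return weights

# Weighted dataset: X_weighted = X * similarity_based_attribute_weights(X)
```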


3.
Distance-based outliers: algorithms and applications
This paper deals with finding outliers (exceptions) in large, multidimensional datasets. The identification of outliers can lead to the discovery of truly unexpected knowledge in areas such as electronic commerce, credit card fraud, and even the analysis of performance statistics of professional athletes. Existing methods that we have seen for finding outliers can only deal efficiently with two dimensions/attributes of a dataset. In this paper, we study the notion of DB (distance-based) outliers. Specifically, we show that (i) outlier detection can be done efficiently for large datasets, and for k-dimensional datasets with large values of k (e.g., k ≥ 5); and (ii) outlier detection is a meaningful and important knowledge discovery task. First, we present two simple algorithms, both having a complexity of O(kN²), k being the dimensionality and N being the number of objects in the dataset. These algorithms readily support datasets with many more than two attributes. Second, we present an optimized cell-based algorithm that has a complexity that is linear with respect to N, but exponential with respect to k. We provide experimental results indicating that this algorithm significantly outperforms the two simple algorithms for k ≤ 4. Third, for datasets that are mainly disk-resident, we present another version of the cell-based algorithm that guarantees at most three passes over a dataset. Again, experimental results show that this algorithm is by far the best for k ≤ 4. Finally, we discuss our work on three real-life applications, including one on spatio-temporal data (e.g., a video surveillance application), in order to confirm the relevance and broad applicability of DB outliers.
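A minimal sketch of a simple O(kN²) nested-loop algorithm for DB(p, D)-outliers, using the definition implied above (an object is an outlier if at least a fraction p of the dataset lies farther than D from it):

```python
import numpy as np

def db_outliers(X, p=0.95, D=1.0):
    """Naive nested-loop DB(p, D)-outlier detection.
    Equivalent test: fewer than (1 - p) * N objects lie within
    distance D of an outlier."""
    N = len(X)
    max_neighbors = (1.0 - p) * N
    outliers = []
    for i in range(N):
        count = 0
        for j in range(N):
            if i != j and np.linalg.norm(X[i] - X[j]) <= D:
                count += 1
                if count > max_neighbors:   # too many neighbors: not an outlier
                    break
        else:                               # inner loop finished without break
            outliers.append(i)
    return outliers
```

The early exit from the inner loop is the obvious optimization here; the paper's cell-based algorithm avoids the quadratic inner scan altogether, at the cost of exponential dependence on k.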

4.
To cleanse mislabeled examples from a training dataset for efficient and effective induction, most existing approaches adopt a major-set-oriented scheme: the training dataset is separated into two parts (a major set and a minor set), and the classifiers learned from the major set are used to identify noise in the minor set. The drawbacks of such a scheme are twofold: (1) when the underlying data volume keeps growing, it becomes either physically impossible or too time consuming to load the major set into memory for inductive learning; and (2) for multiple or distributed datasets, it can be either technically infeasible or simply forbidden to download data from other sites (for security or privacy reasons). These approaches therefore have severe limitations in conducting effective global data cleansing over large, distributed datasets. In this paper, we propose a solution that bridges local and global analysis for noise cleansing. More specifically, the proposed approach identifies and eliminates mislabeled data items from large or distributed datasets through local analysis and global incorporation. For this purpose, we use distributed datasets directly, or partition a large dataset into subsets, each of which is regarded as a local subset small enough to be processed by an induction algorithm at one time to construct a local model for noise identification. We construct good rules from each subset and use these good rules to evaluate the whole dataset. For a given instance I_k, two error count variables record the number of times it has been identified as noise by the data subsets; instances with higher error counts have a higher probability of being mislabeled. Two threshold schemes, majority and non-objection, are used to identify and eliminate the noisy examples. Experimental results and comparative studies on both real-world and synthetic datasets are reported to evaluate the effectiveness and efficiency of the proposed approach. A preliminary version of this paper was published in the Proceedings of the 20th International Conference on Machine Learning, Washington D.C., USA, 2003, pp. 920–927.
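A hedged sketch of the local-analysis / global-incorporation scheme, substituting a decision tree for the paper's good-rule sets (an assumption); every local model votes on every instance and the error counts are thresholded:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def identify_noise(X, y, n_subsets=5, scheme="majority", seed=0):
    """Split the data into local subsets, learn one model per subset,
    let every model evaluate the whole dataset, and flag instances
    whose error count crosses the chosen threshold:
      majority      -> more than half of the models call it noise
      non-objection -> every model calls it noise"""
    rng = np.random.default_rng(seed)
    subsets = np.array_split(rng.permutation(len(X)), n_subsets)
    error_count = np.zeros(len(X), dtype=int)
    for sub in subsets:
        model = DecisionTreeClassifier(random_state=seed).fit(X[sub], y[sub])
        error_count += (model.predict(X) != y).astype(int)
    threshold = n_subsets // 2 + 1 if scheme == "majority" else n_subsets
    return np.flatnonzero(error_count >= threshold)   # indices of suspected noise
```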

5.
龚奇源  杨明  罗军舟 《软件学报》2016,27(11):2828-2842
When publishing data that contains both relational and transactional attributes (relational-transactional data), both the relational part and the transactional part are vulnerable to linking attacks, so the two parts must be anonymized together. Existing anonymization techniques cause severe information loss when anonymizing relational-transactional data and cannot guarantee data utility. To address this problem, a (k,l)-diversity model is proposed, which protects user privacy through an l-diversity constraint on equivalence classes and a k-anonymity constraint on the transactional data. On this basis, two anonymization algorithms satisfying the model, APA and PAA, are designed and implemented; they anonymize relational-transactional data in different orders, and a corresponding method for evaluating information loss is also proposed. Experimental results on real public datasets show that, compared with existing anonymization techniques, APA and PAA anonymize relational-transactional data with lower information loss and higher efficiency while preserving user privacy.
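A hedged sketch of what a (k,l)-diversity check might look like, under assumed representations (each equivalence class is a list of (sensitive_value, transaction) pairs, and the k-anonymity constraint is read as requiring every transaction pattern to occur at least k times); this is one plausible formalization, not the authors' definition:

```python
from collections import Counter

def satisfies_k_l_diversity(equiv_classes, k, l):
    """Check the two constraints described above on anonymized
    relational-transactional data."""
    for ec in equiv_classes:
        # l-diversity on the relational part: at least l distinct sensitive values
        if len({s for s, _ in ec}) < l:
            return False
        # k-anonymity on the transactional part: every pattern occurs >= k times
        tx_counts = Counter(frozenset(t) for _, t in ec)
        if any(c < k for c in tx_counts.values()):
            return False
    return True
```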

6.
To address the low efficiency of existing distance-based outlier detection algorithms on large-scale data, a distributed outlier detection algorithm based on clustering and indexing (DODCI) is proposed. First, a clustering method is used to partition the large dataset into clusters; then, an index is built for each cluster in parallel at each node of the distributed environment; finally, outlier detection is performed iteratively at each node using two optimization strategies and two pruning rules. Experimental results on synthetic datasets and the preprocessed KDD CUP dataset show that, on large data volumes, the algorithm is nearly an order of magnitude faster than the Orca and iDOoR algorithms. Theoretical and experimental analysis shows that the algorithm can effectively improve the efficiency of outlier detection in large-scale data.

7.
Clustering is widely used across science, technology, the social sciences, and other fields. Clusters in a dataset naturally take arbitrary (non-convex) shapes. Distance-based methods form one important class of clustering, but they usually find clusters of convex shapes. Classical single-link is a distance-based clustering method that can find arbitrarily shaped clusters; however, it scans the dataset multiple times and has a time requirement of O(n²), where n is the size of the dataset, which is potentially a severe problem for a large dataset. In this paper, we propose a distance-based clustering method, l-SL, to find arbitrarily shaped clusters in a large dataset. In this method, the leaders clustering method is first applied to the dataset to derive a set of leaders; subsequently, the single-link method (with a distance stopping criterion) is applied to the leaders set to obtain the final clustering. The l-SL method produces a flat clustering and is considerably faster than the single-link method applied to the dataset directly. The clustering result of l-SL may deviate marginally from the final clustering of the single-link method (with the distance stopping criterion) applied directly to the dataset; to compensate for this deviation, an improvement method is also proposed. Experiments are conducted with standard real-world and synthetic datasets, and the results show the effectiveness of the proposed clustering methods for large datasets.
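A minimal sketch of the two stages, assuming a leader distance threshold tau and a single-link stopping distance h (SciPy's single-link plus fcluster stands in for the paper's implementation):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def leaders(X, tau):
    """One-scan leaders clustering: each point joins the first leader
    within distance tau, otherwise it becomes a new leader."""
    L, members = [], []
    for i, x in enumerate(X):
        for j, lead in enumerate(L):
            if np.linalg.norm(x - lead) <= tau:
                members[j].append(i)
                break
        else:
            L.append(x)
            members.append([i])
    return np.array(L), members

def l_sl(X, tau, h):
    """l-SL sketch: single-link with a distance stopping criterion runs
    on the leaders only; followers inherit their leader's cluster label."""
    L, members = leaders(X, tau)
    leader_labels = fcluster(linkage(pdist(L), method="single"),
                             t=h, criterion="distance")
    labels = np.empty(len(X), dtype=int)
    for j, mem in enumerate(members):
        labels[mem] = leader_labels[j]
    return labels
```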

8.
Conventional clustering ensemble algorithms employ a set of primary results, each of which includes a set of clusters that emerge from the data. Given a large number of available clusters, two questions arise: (a) can we obtain the same quality of results with a smaller number of clusters instead of the full ensemble? (b) If so, which subset of clusters is more efficient to use in the ensemble? In this paper, we answer these two questions. We explore a clustering ensemble approach combined with a cluster stability criterion and a dataset simplicity criterion to discover the finest subset of base clusters for each kind of dataset. We also propose a novel method to accumulate the selected clusters and extract the final partitioning. Although reducing the size of the ensemble would be expected to degrade performance, our experimental results show that our selection mechanism generally leads to superior results.

9.
Multi-label classification exhibits several challenges not present in the binary case. The labels may be interdependent, so the presence of a certain label affects the probability of other labels' presence; exploiting dependencies among the labels can therefore benefit the classifier's predictive performance. Surprisingly, only a few existing algorithms address this issue directly by identifying dependent labels explicitly from the dataset. In this paper we propose new approaches for identifying and modeling existing dependencies between labels. One principal contribution of this work is a theoretical confirmation of the reduction in sample complexity that is gained from unconditional dependence. Additionally, we develop methods for identifying conditionally and unconditionally dependent label pairs; clustering them into several mutually exclusive subsets; and, finally, performing multi-label classification that incorporates the discovered dependencies. We compare these two notions of label dependence (conditional and unconditional) and evaluate their performance on various benchmark and artificial datasets. We also compare and analyze the labels identified as dependent by each of the methods. Moreover, we define an ensemble framework for the new methods and compare it to existing ensemble methods. An empirical comparison of the new approaches to existing baseline and state-of-the-art methods on 12 benchmark datasets demonstrates that in many cases the proposed single-classifier and ensemble methods outperform many multi-label classification algorithms. Perhaps surprisingly, we discover that the weaker notion of unconditional dependence plays the decisive role.
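One common way to identify unconditionally dependent label pairs, sketched here with a chi-square independence test on 2x2 co-occurrence tables (the statistical test is an assumption; the paper's exact criterion may differ):

```python
import numpy as np
from scipy.stats import chi2_contingency

def dependent_label_pairs(Y, alpha=0.01):
    """Y is an (n, q) binary label matrix; returns (i, j, p-value) for
    pairs whose co-occurrence table rejects independence at level alpha."""
    n, q = Y.shape
    pairs = []
    for i in range(q):
        for j in range(i + 1, q):
            table = np.array([[np.sum((Y[:, i] == a) & (Y[:, j] == b))
                               for b in (0, 1)] for a in (0, 1)])
            # skip degenerate tables (a label that is always 0 or always 1)
            if table.sum(axis=0).min() == 0 or table.sum(axis=1).min() == 0:
                continue
            _, p, _, _ = chi2_contingency(table)
            if p < alpha:
                pairs.append((i, j, p))
    return pairs
```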

10.
Communities reveal the structure and dynamic nature of real-world networks and help drive recommendation systems. Social media platforms were initially developed for communication but are now widely used by the business community to extend reach and generate profit, and many companies profit from the vast data these platforms generate. In social media, a giant network of people is grouped into communities based on shared properties. Community detection has become a prominent research topic with the growing use of online social networks: a community is a significant property of a network, a network may contain many communities with similarities among them, and community detection techniques discover similarities among nodes and keep them strongly connected. Similar nodes in a network are grouped into a single community, and communities can be merged to avoid a proliferation of groups when many edges exist between them. Machine learning algorithms use community detection to identify groups with common properties for recommendation systems, healthcare assistance systems, and more. Against this background, this paper presents an alternative community detection method, SimEdge-CD (Similarity and Edge Betweenness based Community Detection). In the first stage, SimEdge-CD finds the similarity among nodes and groups them into communities; in the second stage, it determines the exact affiliations of boundary nodes using edge betweenness to create well-defined communities. Evaluation of the proposed method on synthetic and real datasets shows a better accuracy-efficiency trade-off than other existing methods. The proposed SimEdge-CD achieves the ideal value of 1, higher than existing techniques such as LPA, Attractor, Leiden, and walktrap.
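An illustrative reading of the second stage (not the authors' exact procedure): a boundary node is reattached to the community reachable through its lowest-betweenness edge, on the intuition that high-betweenness edges sit between communities. The sketch uses networkx's edge betweenness:

```python
import networkx as nx

def refine_boundary_nodes(G, communities):
    """communities maps node -> community id. Boundary nodes (nodes with
    neighbors in more than one community) are reassigned to the community
    of the neighbor joined by the least-between edge."""
    eb = nx.edge_betweenness_centrality(G)
    for v in G.nodes:
        if len({communities[u] for u in G.neighbors(v)}) <= 1:
            continue  # not a boundary node
        best = min(G.neighbors(v),
                   key=lambda u: eb.get((v, u), eb.get((u, v), 0.0)))
        communities[v] = communities[best]
    return communities
```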

11.
Objective: Shoeprints are important physical evidence in criminal investigation, and automatically grouping the large accumulated collections of shoeprint pattern images is a pressing problem in forensic science. Unlike other image collections, shoeprint pattern images have many classes whose number is unknown, classes that are unevenly distributed, and few images per class. Because of these characteristics, typical clustering algorithms do not perform well on shoeprint pattern image sets. Based on an analysis of shoeprint pattern images, a K-step-stable automatic clustering algorithm is proposed. Method: Statistics over labeled shoeprint images show that different classes occupy mutually disjoint regions in feature space, separated by what we call isolation belts. The core idea of the algorithm is to find the isolation belts between classes and use them to separate the classes. The procedure: monotonically increase (or decrease) the threshold that decides whether two points belong to the same class, producing a sequence of partitions of the dataset; if the membership of a class does not change over K consecutive partitions, the K adjustments took place inside an isolation belt, so that class is emitted and its members are removed from the dataset; the next threshold is then applied to partition the remaining data, and classes that remain unchanged for K steps are output; this repeats until the remaining dataset is empty. Results: In experiments on two public test datasets and a real shoeprint pattern dataset, the main performance metrics of the algorithm exceed those of typical algorithms; on a real dataset of 5792 shoeprint patterns, clustering accuracy and F-measure reach 99.68% and 95.99%, respectively. Conclusion: Tailored to the characteristics of shoeprint pattern images, the proposed algorithm clusters automatically by finding the isolation belts between classes and works well in practice. Its performance is relatively insensitive to parameter changes and cluster shapes, and the algorithm is equally applicable to automatic clustering of other datasets with similar characteristics.
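A hedged sketch of the K-step-stable loop described above, with single-link partitions standing in for the paper's threshold-based partitioning (an assumption):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def k_step_stable_clusters(X, thresholds, K=3):
    """Sweep a monotonically increasing merge threshold; emit any cluster
    whose membership is unchanged for K consecutive partitions (evidence
    the sweep is crossing an isolation belt), remove it, and repeat on
    the remaining data until nothing is left."""
    remaining = list(range(len(X)))
    found = []
    while remaining:
        if len(remaining) < 2:
            found.append(remaining)
            break
        Z = linkage(pdist(X[remaining]), method="single")
        streaks, emitted = {}, False
        for t in thresholds:
            labels = fcluster(Z, t=t, criterion="distance")
            current = {frozenset(np.asarray(remaining)[labels == c])
                       for c in np.unique(labels)}
            streaks = {c: streaks.get(c, 0) + 1 for c in current}
            stable = [c for c, s in streaks.items() if s >= K]
            if stable:
                for c in stable:
                    found.append(sorted(c))
                    remaining = [i for i in remaining if i not in c]
                emitted = True
                break
        if not emitted:            # no isolation belt found: emit the rest
            found.append(sorted(remaining))
            break
    return found
```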

12.
Exploring spatial datasets with histograms
As online spatial datasets grow both in number and sophistication, it becomes increasingly difficult for users to decide whether a dataset is suitable for their tasks, especially when they do not have prior knowledge of the dataset. In this paper, we propose browsing as an effective and efficient way to explore the content of a spatial dataset. Browsing allows users to view the size of a result set before evaluating the query at the database, thereby avoiding zero-hit/mega-hit queries and saving time and resources. Although the underlying technique supporting browsing is similar to range query aggregation and selectivity estimation, spatial dataset browsing poses some unique challenges. In this paper, we identify a set of spatial relations that need to be supported in browsing applications, namely the contains, contained-in, and overlap relations. We prove a lower bound on the storage required to answer queries about the contains relation accurately at a given resolution. We then present three storage-efficient approximation algorithms which we believe to be the first to estimate query results about these spatial relations. We evaluate these algorithms with both synthetic and real-world datasets and show that they provide highly accurate estimates for datasets with various characteristics. Recommended by Sunil Prabhakar. Work supported by NSF grants IIS 02-23022 and CNF 04-23336. An earlier version of this paper appeared in the 17th International Conference on Data Engineering (ICDE 2001).

13.
杨旭华  朱钦鹏  童长飞 《计算机科学》2018,45(1):292-296, 306
Clustering analysis is an important data mining tool that measures the similarity between data points and assigns them to classes; it is widely applied in pattern recognition, economics, biology, and other fields. This paper proposes a new clustering algorithm. First, the dataset to be classified is converted into a weighted complete graph in which each data point is a node and the distance between two data points is the weight of the edge between the corresponding nodes. Laplacian centrality is then used to compute and evaluate the local importance of each node. Cluster centers are local density centers: they have higher Laplacian centrality than their neighboring nodes and are relatively far from any node with higher Laplacian centrality. The new algorithm is a genuinely parameter-free clustering method that classifies a dataset automatically without any prior parameters. Compared with nine well-known clustering algorithms on six datasets, the algorithm shows good clustering performance.
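A hedged sketch of the center-selection rule, in the spirit of density peaks: score nodes by Laplacian centrality (networkx >= 2.7 provides laplacian_centrality) and by distance to the nearest higher-centrality node, then take the top products. Converting distances to similarity weights and fixing n_centers are both simplifications; the paper selects centers without parameters:

```python
import numpy as np
import networkx as nx

def laplacian_centrality_peaks(X, n_centers):
    """Build a weighted complete graph over the data, then rank points by
    centrality * distance-to-nearest-higher-centrality-point."""
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    G = nx.Graph()
    for i in range(n):
        for j in range(i + 1, n):
            G.add_edge(i, j, weight=1.0 / (1.0 + D[i, j]))  # distance -> similarity
    cent = nx.laplacian_centrality(G, weight="weight")
    c = np.array([cent[i] for i in range(n)])
    delta = np.empty(n)
    for i in range(n):
        higher = np.flatnonzero(c > c[i])
        delta[i] = D[i, higher].min() if len(higher) else D[i].max()
    return np.argsort(-(c * delta))[:n_centers]
```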

14.

In this paper, after reviewing and comparing existing social-network datasets, we introduce the SNEFL dataset: the first social network dataset that includes the level of users' likes (fuzzy likes) in addition to the likes between users. With users' privacy in mind, the data was collected from a social network and includes several additional features: age, gender, marital status, height, weight, educational level, and religiosity of the users. We describe its structure, analyse its features, and evaluate its advantages in comparison with other social network datasets. Moreover, using the unique feature of the SNEFL dataset (fuzzy likes), we develop the first rule-based algorithm for detecting involuntary celibates (Incels) in social networks. Despite Incel activity in online social networks, no computer-science study has previously been performed to identify them; this study is a first step toward addressing that challenge. Experimental results show that the accuracy of the proposed algorithm in identifying Incels is 23.21% among all social network users and 68.75% among users who have fuzzy-like data. Beyond Incel detection, the SNEFL dataset can be used by researchers in fields such as network analysis, frequent pattern mining, classification, and clustering to produce more accurate results.


15.
Density-based clustering techniques like DBSCAN are attractive because they can find arbitrarily shaped clusters along with noisy outliers. DBSCAN's time requirement is O(n²), where n is the size of the dataset, which makes it unsuitable for large datasets. The solution proposed in this paper is to first apply the leaders clustering method to derive prototypes, called leaders, from the dataset in a way that also preserves the density information, and then to use these leaders to derive the density-based clusters. The proposed hybrid clustering technique, called rough-DBSCAN, has a time complexity of only O(n) and is analyzed using rough set theory. Experimental studies on both synthetic and real-world datasets compare rough-DBSCAN with DBSCAN. They show that for large datasets rough-DBSCAN finds a clustering similar to that found by DBSCAN, but is consistently faster. Some properties of the leaders as prototypes are also formally established.

16.
We develop hierarchical, probabilistic models for objects, the parts composing them, and the visual scenes surrounding them. Our approach couples topic models originally developed for text analysis with spatial transformations, and thus consistently accounts for geometric constraints. By building integrated scene models, we may discover contextual relationships, and better exploit partially labeled training images. We first consider images of isolated objects, and show that sharing parts among object categories improves detection accuracy when learning from few examples. Turning to multiple object scenes, we propose nonparametric models which use Dirichlet processes to automatically learn the number of parts underlying each object category, and objects composing each scene. The resulting transformed Dirichlet process (TDP) leads to Monte Carlo algorithms which simultaneously segment and recognize objects in street and office scenes.

17.
Volume datasets grow larger and larger as modern technology advances, imposing a storage constraint on most systems. One general solution is to apply volume compression to volume datasets. However, as volume rendering is often the main reason a volume dataset was generated in the first place, we must take into account how a volume dataset can be efficiently rendered when it is stored in compressed form. Our previous work [21] showed that it is possible to perform on-the-fly direct volume rendering from irregular volume data. In this paper, we extend that work to demonstrate that a similar integration can also be achieved between iso-surface extraction and volume decompression for irregular volume data. In particular, our work involves a dataset decomposition process in which, instead of the coordinate-based decomposition used by conventional out-of-core iso-surface extraction algorithms, we use a layer-based structure. Each layer contains a collection of tetrahedra whose associated scalar values fall within a specific range, and can be compressed independently to reduce its storage requirement. The layer structure is particularly suitable for out-of-core iso-surface extraction, where the required memory exceeds the physical memory capacity of the machine on which the process is running. Furthermore, with this work we can perform on-the-fly iso-surface extraction during decompression, and the computation only involves the layer that contains the query value rather than the entire dataset. Experiments show that our approach can improve performance by up to ten times compared with results based on traditional coordinate-based approaches.
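A minimal sketch of the layer-based decomposition, under assumed array representations (tets holds per-tetrahedron vertex indices, values holds per-vertex scalars, breaks is a sorted list of layer boundaries); an iso-surface query then only needs to decompress the layers whose scalar range contains the isovalue:

```python
import numpy as np

def build_layers(tets, values, breaks):
    """Bucket tetrahedra by the scalar range they span; a tetrahedron
    belongs to every layer its [min, max] value range overlaps."""
    vmin = values[tets].min(axis=1)
    vmax = values[tets].max(axis=1)
    layers = {}
    for k in range(len(breaks) - 1):
        lo, hi = breaks[k], breaks[k + 1]
        layers[(lo, hi)] = np.flatnonzero((vmin < hi) & (vmax >= lo))
    return layers

def layers_for_isovalue(layers, v):
    """Only these layers need decompression and extraction for isovalue v."""
    return [idx for (lo, hi), idx in layers.items() if lo <= v < hi]
```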

18.
Mining spatial colocation patterns: a different framework
Recently, there has been considerable interest in mining spatial colocation patterns from large spatial datasets. Spatial colocation patterns represent subsets of spatial events whose instances are often located in close geographic proximity. Most studies of spatial colocation mining require the specification of two parameter constraints to find interesting colocation patterns: a minimum prevalence threshold for colocations, and a distance threshold to define the spatial neighborhood. However, it is difficult for users to decide appropriate threshold values without prior knowledge of their task-specific spatial data. In this paper, we propose a different framework for spatial colocation pattern mining. To remove the first constraint, we propose the problem of finding the N-most prevalent colocated event sets, where N is the desired number of colocated event sets with the highest interest measure values per pattern size. We develop two alternative algorithms for mining the N-most patterns; they reduce candidate events effectively and use a filter-and-refine strategy to efficiently find colocation instances in a spatial dataset. We prove that the algorithms are correct and complete in finding the N-most prevalent colocation patterns. For the second constraint, the distance threshold for spatial neighborhood determination, we present various methods to estimate appropriate distance bounds from user input data, which can help a user set a distance for conceptualizing a spatial neighborhood. Our experimental results with real and synthetic datasets show that our algorithmic design is computationally effective in finding the N-most prevalent colocation patterns. The discovered patterns differed depending on the distance threshold, which shows that it is important to select appropriate neighbor distances.

19.

Closed-circuit television (CCTV) cameras are widely used for monitoring. This paper presents an intelligent CCTV crowd counting system based on two algorithms that estimate the density of each pixel in each frame and use it as a basis for counting people. One algorithm uses scale-invariant feature transform (SIFT) features and clustering to represent the pixels of frames (the SIFT algorithm); the other uses features from accelerated segment test (FAST) corner points together with SIFT features (the SIFT-FAST algorithm). Each algorithm is designed using a novel combination of pixel-wise analysis, motion regions, a grid map, background segmentation using a Gaussian mixture model (GMM), and edge detection. A fusion technique is proposed and used to validate accuracy by combining the results of the algorithms at the frame level. The proposed system is more practical than state-of-the-art regression methods because it is trained with a small number of frames, making it relatively easy to deploy. In addition, it reduces the training error, set-up time, and cost, and opens the door to developing more accurate people detection methods. The University of California San Diego (UCSD) and Mall datasets were used to test the proposed algorithms. The mean deviation error, mean squared error, and mean absolute error of the proposed system are less than 0.1, 16.5, and 3.1, respectively, for the Mall dataset, and less than 0.07, 5.5, and 1.9, respectively, for the UCSD dataset.


20.