Similar Literature
20 similar documents found (search time: 15 ms)
1.
In recent years, classification learning for data streams has become an important and active research topic. A major challenge posed by data streams is that their underlying concepts can change over time, which requires current classifiers to be revised accordingly and in a timely manner. To detect concept change, a common methodology is to observe the online classification accuracy: if accuracy drops below some threshold value, a concept change is deemed to have taken place. An implicit assumption behind this methodology is that any drop in classification accuracy can be interpreted as a symptom of concept change. Unfortunately, this assumption is often violated in the real world, where data streams carry noise that can also cause a significant reduction in classification accuracy. To compound the problem, traditional noise-cleansing methods are unsuitable for data streams: they normally need to scan the data multiple times, whereas learning from data streams can only afford a one-pass scan because of the data's high speed and huge volume. Another open problem in data stream classification is how to deal with missing values. When new instances containing missing values arrive, how a learning model should classify them and update itself accordingly remains largely unexplored. To solve these problems, this paper proposes a novel classification algorithm, flexible decision tree (FlexDT), which extends fuzzy logic to data stream classification. The advantages are three-fold. First, FlexDT offers a flexible structure to handle concept change effectively and efficiently. Second, FlexDT is robust to noise, so noise does not interfere with classification accuracy and an accuracy drop can safely be attributed to concept change. Third, it deals with missing values in an elegant way. Extensive evaluations compare FlexDT with representative existing data stream classification algorithms using a large suite of data streams and various statistical tests. Experimental results suggest that FlexDT offers a significant benefit to data stream classification in real-world scenarios where concept change, noise and missing values coexist.

2.
It is challenging to use traditional data mining techniques for real-time data stream classification, because existing classifiers need to be updated frequently to adapt to the changes in data streams. To address this issue, in this paper we propose an adaptive ensemble approach for classification and novel class detection in concept-drifting data streams. The proposed approach uses traditional mining classifiers and updates the ensemble model automatically so that it represents the most recent concepts in the stream. For novel class detection, we build on the idea that data points belonging to the same class should be close to each other and far apart from data points belonging to other classes; if a data point is well separated from the existing data clusters, it is identified as a novel class instance. We tested the performance of the proposed stream classification model against existing mining algorithms using real benchmark datasets from the UCI (University of California, Irvine) machine learning repository. The experimental results show that our approach is flexible and robust in novel class detection under concept drift and outperforms traditional classification models in challenging real-life data stream applications.
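A minimal sketch of the separation test described above, assuming existing classes are summarized by centroids and radii; the `margin` factor and the helper name are illustrative choices, not the paper's exact criterion:

    import numpy as np

    def is_novel(x, centroids, radii, margin=1.5):
        """Flag x as a potential novel-class instance when it lies well outside
        the radius of every known class cluster (illustrative rule)."""
        dists = np.linalg.norm(centroids - x, axis=1)
        # x is "well separated" if even the nearest cluster is farther away
        # than margin times that cluster's radius
        nearest = np.argmin(dists)
        return dists[nearest] > margin * radii[nearest]

    # toy usage: two known classes summarized by centroid + radius
    centroids = np.array([[0.0, 0.0], [5.0, 5.0]])
    radii = np.array([1.0, 1.2])
    print(is_novel(np.array([0.5, 0.3]), centroids, radii))    # False: inside class 0
    print(is_novel(np.array([10.0, -8.0]), centroids, radii))  # True: far from both clusters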

3.
Classifying non-stationary and imbalanced data streams involves two important challenges, namely concept drift and class imbalance. Concept drift refers to changes in the underlying function being learnt, and class imbalance to a vast difference between the numbers of instances in different classes; class imbalance is an obstacle to the effectiveness of most classifiers. Previous methods for classifying non-stationary and imbalanced data streams mainly focus on batch solutions, in which the classification model is trained on a chunk of data. Here, we propose two online classifiers, both one-layer neural networks, in which class imbalance is handled with two separate cost-sensitive strategies: the first incorporates a fixed misclassification cost matrix and the second an adaptive one. The proposed classifiers are evaluated on 3 synthetic and 8 real-world datasets. The results show statistically significant improvements on imbalanced-data metrics.
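A minimal sketch of a cost-sensitive online one-layer classifier in the spirit described above; the logistic model, the fixed cost pair, and the update rule are illustrative assumptions, not the authors' exact formulation:

    import numpy as np

    class CostSensitiveOnlinePerceptron:
        """One-layer online classifier whose gradient is scaled by a
        misclassification cost; a higher cost for the minority class makes its
        errors weigh more (illustrative, not the paper's exact model)."""

        def __init__(self, n_features, cost=(1.0, 5.0), lr=0.05):
            self.w = np.zeros(n_features)
            self.b = 0.0
            self.cost = cost          # cost[y]: cost of misclassifying class y
            self.lr = lr

        def predict_proba(self, x):
            z = np.clip(self.w @ x + self.b, -30.0, 30.0)
            return 1.0 / (1.0 + np.exp(-z))

        def partial_fit(self, x, y):
            p = self.predict_proba(x)
            grad = (p - y) * self.cost[y]     # cost-weighted logistic gradient
            self.w -= self.lr * grad * x
            self.b -= self.lr * grad

    # toy imbalanced stream: class 1 is rare but costly to miss
    rng = np.random.default_rng(0)
    clf = CostSensitiveOnlinePerceptron(n_features=2)
    for _ in range(2000):
        y = int(rng.random() < 0.05)
        x = rng.normal(loc=2.0 * y, scale=1.0, size=2)
        clf.partial_fit(x, y)
    print(clf.predict_proba(np.array([2.0, 2.0])))  # typically well above 0.5 for the minority class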

4.
Data stream classification is one of the important research tasks in data mining. Most existing data stream classification algorithms are trained on labeled datasets, whereas in real application domains only a very small fraction of the data in a stream is labeled. Labels can be obtained by manual annotation, but manual annotation is expensive and time-consuming. Since unlabeled data are abundant and carry a great deal of implicit information, this paper proposes a Tri-training-based ensemble classification algorithm for data streams that exploits this unlabeled information while preserving accuracy. The algorithm partitions the stream into blocks with a sliding-window mechanism, trains base classifiers with Tri-training on the first k blocks, which contain both labeled and unlabeled data, and iteratively updates the classifiers by weighted voting until all unlabeled data have been labeled. It then uses the ensemble of k Tri-training models to predict the (k+1)-th block, discarding classifiers with high error rates and rebuilding new classifiers on the current block to update the model. Experimental results on 10 UCI datasets show that, compared with classical algorithms, the proposed algorithm significantly improves classification accuracy on data streams containing 80% unlabeled data.
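A compact sketch of the block-level flow described above (Tri-training-style pseudo-labeling of unlabeled samples, then weighted voting over the models kept in the window), assuming scikit-learn is available; the bootstrap construction, the two-of-three agreement rule, and the binary-class voting are simplifications rather than the paper's exact procedure:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def tri_train_block(X_lab, y_lab, X_unlab, seed=0):
        """Train three trees on bootstrap samples of the labeled part, pseudo-label
        an unlabeled sample when at least two trees agree, then retrain on the
        augmented data (simplified Tri-training for one block)."""
        rng = np.random.default_rng(seed)
        trees = []
        for _ in range(3):
            idx = rng.integers(0, len(X_lab), len(X_lab))
            trees.append(DecisionTreeClassifier(max_depth=5).fit(X_lab[idx], y_lab[idx]))
        votes = np.array([t.predict(X_unlab) for t in trees], dtype=int)   # (3, n_unlab)
        pseudo = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
        agree = (votes == pseudo).sum(axis=0) >= 2
        X_aug = np.vstack([X_lab, X_unlab[agree]])
        y_aug = np.concatenate([y_lab, pseudo[agree]])
        retrained = []
        for _ in range(3):
            idx = rng.integers(0, len(X_aug), len(X_aug))
            retrained.append(DecisionTreeClassifier(max_depth=5).fit(X_aug[idx], y_aug[idx]))
        return retrained

    def ensemble_predict(models, weights, X, n_classes=2):
        """Weighted vote over all Tri-training models kept for the recent blocks."""
        scores = np.zeros((len(X), n_classes))
        for trees, w in zip(models, weights):
            for t in trees:
                preds = t.predict(X).astype(int)
                scores[np.arange(len(X)), preds] += w
        return scores.argmax(axis=1)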

5.
Concept drift is a difficult problem in data stream learning, and class imbalance in data streams also severely degrades classification performance. To address the joint problem of concept drift and class imbalance, this paper introduces an online update mechanism into block-based ensemble methods and, combining resampling with a forgetting mechanism, proposes an incremental weighted ensemble method for imbalanced data stream classification (incremental weighted ensemble for imbalance learning, IWEIL). Built on an ensemble framework, the method uses a forgetting mechanism with a variable-size window to evaluate each base classifier on the most recent instances within the window and to compute its weight; as new instances arrive one by one, every base classifier in IWEIL and its weight are updated online. Meanwhile, an improved adaptive nearest-neighbor SMOTE is used to generate new minority-class instances consistent with the current concept, addressing class imbalance in the stream. Experiments on synthetic and real-world datasets show that, compared with the DWMIL algorithm, IWEIL improves G-mean and recall by 5.77% and 6.28% respectively on the HyperPlane dataset, and by 3.25% and 6.47% on the Electricity dataset. Finally, IWEIL also performs well on an Android application detection task.
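A minimal sketch of the per-instance weight update described above, where each base classifier is weighted by its accuracy on the most recent instances inside a bounded window; the window logic, the weighting formula, and the `process_instance` helper are simplified assumptions (the SMOTE resampling step is omitted):

    from collections import deque
    import numpy as np

    class RecentWindowWeight:
        """Track a classifier's hit/miss record on the last `max_size` instances
        and expose a weight proportional to its recent accuracy (illustrative)."""

        def __init__(self, max_size=200):
            self.window = deque(maxlen=max_size)

        def update(self, correct):
            self.window.append(1 if correct else 0)

        def weight(self, eps=1e-6):
            return (sum(self.window) + eps) / (len(self.window) + eps)

    def process_instance(x, y, base_classifiers, trackers):
        """One online step: weighted vote, then per-classifier weight update based
        only on the recent window (the forgetting mechanism), then incremental fit.
        base_classifiers are assumed to support predict/partial_fit and to have
        been initialised with the full class list already."""
        x = x.reshape(1, -1)
        weights = np.array([t.weight() for t in trackers])
        votes = np.array([int(c.predict(x)[0]) for c in base_classifiers])
        y_hat = int(np.bincount(votes, weights=weights).argmax())
        for c, t, v in zip(base_classifiers, trackers, votes):
            t.update(v == y)
            c.partial_fit(x, [y])
        return y_hat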

6.
After detecting concept drift, existing drift-handling algorithms usually retrain a classifier on the new concept and "forget" previously trained classifiers. In the early stage of a drift, only a few samples of the new concept are available, so the newly built classifier cannot be adequately trained in a short time and its performance is usually poor. Furthermore, existing data stream classification algorithms based on online transfer learning can only use the knowledge of a single classifier to assist learning on the new concept; when historical concepts are dissimilar to the new concept, the classification accuracy of the model is unsatisfactory. To address these problems, this paper proposes CMOL, a data stream classification algorithm that can exploit the knowledge of multiple historical classifiers. CMOL adopts a dynamic classifier-weight adjustment mechanism and updates the classifier pool according to these weights, so that the pool covers as many concepts as possible. Experiments show that, compared with related algorithms, CMOL adapts to new concepts more quickly when concept drift occurs and achieves higher classification accuracy.
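A small sketch of a classifier pool with dynamically adjusted weights in the spirit of the description above; the multiplicative decay factor, the pool capacity, and the eviction rule are illustrative assumptions rather than the published CMOL algorithm, and members are assumed to be scikit-learn-style classifiers:

    import numpy as np

    class WeightedClassifierPool:
        """Keep at most `capacity` historical classifiers; keep a classifier's
        weight when it predicts correctly, decay it otherwise, and evict the
        lowest-weight member when a classifier for a new concept is added."""

        def __init__(self, capacity=10, beta=0.9):
            self.capacity = capacity
            self.beta = beta          # decay factor applied on a mistake
            self.members = []         # list of (classifier, weight) pairs

        def predict(self, x):
            scores = {}
            for clf, w in self.members:
                label = clf.predict(x.reshape(1, -1))[0]
                scores[label] = scores.get(label, 0.0) + w
            return max(scores, key=scores.get)

        def update_weights(self, x, y):
            self.members = [
                (clf, w if clf.predict(x.reshape(1, -1))[0] == y else w * self.beta)
                for clf, w in self.members
            ]

        def add(self, clf):
            if len(self.members) >= self.capacity:
                worst = min(range(len(self.members)), key=lambda i: self.members[i][1])
                del self.members[worst]
            self.members.append((clf, 1.0))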

7.
周胜  刘三民 《计算机工程》2020,46(5):139-143,149
To address concept drift and noise in data stream classification, a multi-source transfer learning method based on sample certainty is proposed. The method stores classifiers trained on multiple source domains and computes, for each sample in a target-domain data block, the class posterior probability and a certainty value under each source classifier. Source classifiers whose sample certainty satisfies the current threshold are then combined online with the target-domain classifier, so that knowledge from multiple source domains is transferred to the target domain. Experimental results show that the method effectively eliminates the adverse influence of noisy data streams on uncertain classifiers, and achieves higher classification accuracy and better noise robustness than accuracy-based selective-ensemble multi-source transfer learning methods.
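A minimal sketch of the certainty-gated combination described above; taking the certainty of a source classifier on a sample to be the margin between its two largest class posteriors is one plausible reading for illustration, not the paper's exact definition, and all classifiers are assumed to expose predict_proba over the same class set:

    import numpy as np

    def certainty(proba):
        """Margin between the top two class posteriors for one sample."""
        top2 = np.sort(proba)[-2:]
        return top2[1] - top2[0]

    def combined_predict(x, source_clfs, target_clf, threshold=0.3):
        """Average the target posterior with the posteriors of only those source
        classifiers whose certainty on x exceeds the current threshold."""
        x = x.reshape(1, -1)
        posteriors = [target_clf.predict_proba(x)[0]]
        for clf in source_clfs:
            p = clf.predict_proba(x)[0]
            if certainty(p) >= threshold:      # gate out uncertain source knowledge
                posteriors.append(p)
        return int(np.mean(posteriors, axis=0).argmax())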

8.
Ensemble data stream mining is an important approach to learning from data streams with concept drift. For applications with severely imbalanced class distributions, however, the block-based learning style of ensemble stream mining yields high accuracy for classes with many samples and low accuracy for classes with few samples, so existing algorithms cannot meet the needs of such applications. To address this problem, the memorizing-based ensemble data stream learning algorithm MAE (Memorizing based Adaptive Ensemble) is improved, and an online data stream learning algorithm for severely imbalanced applications, UMAE (Unbalanced data Learning based on MAE), is proposed. UMAE maintains one sliding window per class; when a new data block arrives, its samples enter the sliding windows of their respective classes, and the samples in all class windows are then used to build the data block for online learning. Comparison with five typical data stream mining algorithms shows that UMAE not only meets real-time requirements and achieves high overall accuracy, but also greatly improves accuracy on small classes with very few samples; for applications with severely imbalanced class distributions such as anomaly detection, its practicality is clearly superior to that of the other algorithms.
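A minimal sketch of the per-class sliding-window buffering described above, assuming one fixed-length deque per class; the window size and the block-assembly step are illustrative choices:

    from collections import deque
    import numpy as np

    class PerClassWindows:
        """Route each arriving sample into its class's sliding window and build a
        training block from all windows, so rare classes are not drowned out by
        the majority class in any single block."""

        def __init__(self, classes, window_size=500):
            self.windows = {c: deque(maxlen=window_size) for c in classes}

        def add_block(self, X, y):
            for xi, yi in zip(X, y):
                self.windows[yi].append(xi)

        def training_block(self):
            X, y = [], []
            for c, win in self.windows.items():
                X.extend(win)
                y.extend([c] * len(win))
            return np.array(X), np.array(y)

    # usage: a new raw block is heavily imbalanced, but the assembled training
    # block still contains every minority sample seen in the recent past
    buf = PerClassWindows(classes=[0, 1])
    buf.add_block(np.random.randn(1000, 3), np.r_[np.zeros(990, int), np.ones(10, int)])
    X_train, y_train = buf.training_block()
    print(np.bincount(y_train))   # minority samples are retained across blocks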

9.
Ensembles of classifiers are among the best performing classifiers available in many data mining applications, including the mining of data streams. Rather than training one classifier, multiple classifiers are trained, and their predictions are combined according to a given voting schedule. An important prerequisite for ensembles to be successful is that the individual models are diverse. One way to vastly increase the diversity among the models is to build a heterogeneous ensemble, comprised of fundamentally different model types. However, most ensembles developed specifically for the dynamic data stream setting rely on only one type of base-level classifier, most often Hoeffding Trees. We study the use of heterogeneous ensembles for data streams and introduce the Online Performance Estimation framework, which dynamically weights the votes of individual classifiers in an ensemble: using an internal evaluation on recent training data, it measures how well each ensemble member has performed on these data and updates the members' weights accordingly. Experiments over a wide range of data streams show performance that is competitive with state-of-the-art ensemble techniques, including Online Bagging and Leveraging Bagging, while being significantly faster. All experimental results from this work are easily reproducible and publicly available online.
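A small sketch of weighting ensemble votes by performance on recent instances; the exponentially faded accuracy estimate used here is a common approximation and only a stand-in for the exact Online Performance Estimation procedure:

    import numpy as np

    class FadedPerformance:
        """Exponentially faded accuracy over the most recent instances, used as
        the voting weight of one heterogeneous ensemble member (illustrative)."""

        def __init__(self, fading=0.995):
            self.fading = fading
            self.correct = 0.0
            self.seen = 0.0

        def update(self, hit):
            self.correct = self.fading * self.correct + (1.0 if hit else 0.0)
            self.seen = self.fading * self.seen + 1.0

        def weight(self):
            return self.correct / self.seen if self.seen else 1.0

    def weighted_vote(predictions, estimators):
        """predictions: one class label per ensemble member; estimators: the
        matching FadedPerformance trackers. Returns the weighted majority label."""
        scores = {}
        for label, est in zip(predictions, estimators):
            scores[label] = scores.get(label, 0.0) + est.weight()
        return max(scores, key=scores.get)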

10.
In open environments, data streams are generated at high speed, are unbounded in volume, and exhibit concept drift. In data stream classification tasks, producing large amounts of training data by manual labeling is expensive and impractical. Streams that contain only a small number of labeled samples, a large number of unlabeled samples, and concept drift pose great challenges for machine learning. However, existing research focuses mainly on supervised data stream classification, and semi-supervised classification of concept-drifting data streams has not yet received sufficient attention....

11.
杜超  王志海  江晶晶  孙艳歌 《软件学报》2017,28(11):2891-2904
Pattern-based Bayesian classification is an effective approach to classification problems in data mining. However, most pattern-based Bayesian classifiers only consider the support of a pattern in the target-class dataset and ignore its support in the opposing-class dataset. Moreover, pattern-based Bayesian classifiers built for static datasets are not applicable to rapidly changing, unbounded data stream environments. To address these problems, EPDS (Bayesian classifier algorithm based on emerging pattern for data stream), a Bayesian classification model for data streams based on emerging patterns, is proposed. The model maintains the itemsets of in-memory transactions in a simple hybrid forest structure and uses a fast pattern-extraction mechanism to improve speed. EPDS continuously updates emerging patterns with a semi-lazy learning strategy and builds a local classification model under each class for the transaction to be classified. Extensive experiments show that the algorithm achieves higher accuracy than other data stream classification models.

12.
刁树民  王永利 《计算机应用》2009,29(6):1578-1581
When making combined decisions, existing combination classification methods require common labeled training samples that are valid for all component classifiers. To enable combined classification decisions on data streams when no labeled samples are available, a fusion strategy for data stream classifier ensembles based on constrained learning is proposed. When deciding on a test sample, a method that satisfies the constraint measure of each local classifier is designed according to transductive learning theory, which guarantees the feasibility of the constraints and solves the transductive extension problem of maximum entropy in distributed classification aggregation. Experiments on test datasets show that, compared with existing transductive learning methods, this approach achieves better decision accuracy and can be applied to the fusion of data stream classifier ensembles.

13.
The One-vs-One strategy is a common and established technique in machine learning for dealing with multi-class classification problems. It consists of dividing the original multi-class problem into easier-to-solve binary subproblems, one for each possible pair of classes. Since several classifiers are learned, their combination becomes crucial for predicting the class of new instances. The division procedure gives rise to a series of difficulties at this stage, such as the non-competence problem: each classifier is learned using only the instances of its corresponding pair of classes, and hence it is not competent to classify instances belonging to the remaining classes; nevertheless, at classification time all classifier outputs are taken into account, because competence cannot be known a priori (otherwise the classification problem would already be solved). On this account, we develop a distance-based combination strategy, which weights the outputs of the base classifiers according to the closeness of the query instance to each of the classes. Our aim is to reduce the effect of the non-competent classifiers, enhancing the results obtained by the state-of-the-art combinations for the One-vs-One strategy. We carry out a thorough experimental study, supported by the proper statistical analysis, showing that the proposed method outperforms the previous combinations for the One-vs-One strategy in terms of both accuracy and kappa.
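A minimal sketch of distance-based down-weighting of non-competent One-vs-One outputs; the score-matrix layout, the inverse-distance closeness, and the centroid representation of classes are illustrative assumptions rather than the exact combination proposed in the paper:

    import numpy as np

    def distance_weighted_ovo(x, score_matrix, centroids):
        """score_matrix[i, j] in [0, 1] is the confidence of the (i, j) binary
        classifier that x belongs to class i (and 1 minus that value for class j).
        Each pairwise vote is scaled by how close x is to the two classes involved,
        so classifiers trained on far-away classes contribute little."""
        n = len(centroids)
        dists = np.linalg.norm(centroids - x, axis=1)
        closeness = 1.0 / (1.0 + dists)            # large when x is near a class
        votes = np.zeros(n)
        for i in range(n):
            for j in range(i + 1, n):
                relevance = closeness[i] + closeness[j]
                votes[i] += relevance * score_matrix[i, j]
                votes[j] += relevance * (1.0 - score_matrix[i, j])
        return int(votes.argmax())

    # toy example with three classes: the query sits near class 2's centroid
    centroids = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
    scores = np.array([[0.0, 0.6, 0.2],
                       [0.0, 0.0, 0.3],
                       [0.0, 0.0, 0.0]])
    print(distance_weighted_ovo(np.array([0.2, 3.8]), scores, centroids))  # -> 2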

14.
In many real applications, data are not all available at the same time, or it is not affordable to process them all in a batch, and instead instances arrive sequentially in a stream. The streaming scenario introduces new challenges to the machine learning community, since difficult decisions have to be made. The problem addressed in this paper is that of classifying incoming instances for which one attribute arrives only after a given delay. In this formulation, many open issues arise, such as how to classify the incomplete instance, whether to wait for the delayed attribute before performing any classification, or when and how to update a reference set. Three different strategies are proposed which address these issues differently. Orthogonally to these strategies, three classifiers with different characteristics are used. Keeping on-line learning strategies independent of the classifiers facilitates system design and contrasts with the common alternative of carefully crafting an ad hoc classifier. To assess how well learning proceeds under these different strategies and classifiers, they are compared using learning curves and final classification errors on fifteen data sets. Results indicate that learning in this stringent context of streaming data and delayed attributes can successfully take place even with simple on-line strategies. Furthermore, active strategies generally behave better than more conservative passive ones. Regarding the classifiers, simple instance-based classifiers such as the well-known nearest neighbor may outperform more elaborate classifiers such as support vector machines, especially if some measure of classification confidence is considered in the process.

15.
Data stream classification with artificial endocrine system
Due to concept drifts, maintaining an up-to-date model is a challenging task for most current classification approaches used in data stream mining. Both incremental classifiers and ensemble classifiers spend most of their time updating their temporary models, and most of them also require a large sample buffer for training. These two drawbacks constrain their further application to data stream classification. In this paper, we present a hormone-based nearest neighbor classification algorithm for data streams, in which the classifier is updated every time a new record arrives. Records can be seen as locations in the feature space, and each location can accommodate only one endocrine cell. The classifier consists of the endocrine cells on the boundaries between different classes. Every time a new record arrives, the cell residing in the most unfit location moves to the newly arrived record; in this way, the changing boundaries between classes are recorded by the locations of the endocrine cells. The main advantages of the proposed method are the saving of the sample buffer and the improvement of classification accuracy, which matters when hardware resources are expensive or main memory is limited. Experiments on synthetic and real-life data sets show that the proposed algorithm is able to classify data streams with less memory and lower classification error.
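A compact sketch of the cell-movement idea in the abstract: a fixed budget of labelled prototype "cells", the least fit of which is relocated to each newly arrived record. The fitness definition used here (distance to the nearest cell of a different class) is an assumption for illustration, not the paper's hormone model:

    import numpy as np

    class EndocrineNN:
        """Fixed-size set of labelled prototype cells; prediction is 1-NN over the
        cells, and each new record replaces the currently least fit cell so the
        prototypes track moving class boundaries (illustrative reconstruction)."""

        def __init__(self, X_init, y_init):
            self.cells = np.array(X_init, dtype=float)
            self.labels = np.array(y_init)

        def predict(self, x):
            return self.labels[np.argmin(np.linalg.norm(self.cells - x, axis=1))]

        def _fitness(self):
            # a cell near the boundary with another class is considered "fit";
            # cells far from every other class are the first to be recycled
            fit = np.empty(len(self.cells))
            for i, (c, lab) in enumerate(zip(self.cells, self.labels)):
                other = self.cells[self.labels != lab]
                fit[i] = -np.min(np.linalg.norm(other - c, axis=1)) if len(other) else -np.inf
            return fit

        def update(self, x, y):
            worst = np.argmin(self._fitness())  # least fit cell moves to the new record
            self.cells[worst] = x
            self.labels[worst] = y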

16.
Existing classification algorithms are usually biased when mining imbalanced data: classification and prediction performance on the positive class (usually the more important class) is worse than on the negative class. This paper therefore proposes a classification method for imbalanced data. The method introduces different penalty parameters for the different classes to flexibly control the upper bounds of the two misclassification rates, and separates the two classes with a hypersphere at the maximum separation ratio, thereby improving classification and prediction performance on the positive class. Experimental results show that the method effectively improves classification performance on imbalanced data.

17.
The rapid growth of information technology drives organizations to generate vast volumes of high-velocity data streams. Concept drift is a crucial issue, and discovering sequential patterns over data streams is even more challenging. Ensemble classifiers learn the data incrementally so as to react quickly to concept drifts, and they must handle both the gradual and the sudden drifts that occur in real-time data streams. A novel ensemble classifier is therefore needed that reacts quickly to various types of concept drift while maintaining classification accuracy. This work proposes SOAR, a model for stream data mining on the fly with an adaptive online learning rule, to handle both gradual and sudden pattern changes and to improve mining accuracy. Simply adding more classifiers fails, because the ensemble tends to include redundant classifiers instead of high-quality ones; SOAR therefore includes classifiers of different diversity levels in the ensemble to recover quickly from both kinds of drift. Moreover, SOAR combines the essential features of block-based and online ensembles and updates the weight of each classifier according to its quality, using adaptive windowing to handle both gradual and sudden concept drifts. To reduce computational cost and analyze the stream quickly, SOAR caches the observed primitive patterns, together with their internal relationships, in a bitmap. Finally, the experimental results show that SOAR achieves better classification accuracy over data streams.

18.
Concept drift and class imbalance in complex data streams degrade classifier performance. Traditional batch learning algorithms must account for memory and running time and do not perform well on massive, fast-arriving data streams, which also contain frequent drift and class imbalance; handling complex data streams with online ensemble algorithms has therefore become an important research topic in data mining. This paper reviews and summarizes, from the perspective of ensemble strategies, the online versions of bagging, boosting, and stacking, and compares the performance of the different models. It gives the first detailed summary and analysis of online ensemble classification algorithms for complex data streams: drift detection and classification algorithms are reviewed from the two perspectives of active detection and passive adaptation; imbalanced data streams are reviewed from the two perspectives of data preprocessing and cost sensitivity; the time and space efficiency of representative algorithms is analyzed; and the performance of algorithms evaluated on the same datasets is compared. Finally, future research directions are proposed for the open challenges in online ensemble classification of complex data streams.

19.
张玉红  陈伟  胡学钢 《计算机科学》2016,43(12):179-182, 194
Real-life applications such as network monitoring, online reviews, and microblogs generate large amounts of text data streams, and the incomplete labeling and frequent concept drift of such data challenge existing data stream classification methods. This paper therefore proposes an adaptive classification algorithm for partially labeled text data streams. The algorithm takes one labeled data block as the initial block; for each unlabeled block, it first extracts the feature set shared between the labeled block and the unlabeled block, detects concept drift from the similarity of these features across the two blocks, and finally computes the polarity of the features in the unlabeled data to predict its labels. Experiments demonstrate the superiority of the algorithm in classification accuracy, especially when labeled information is scarce and concept drift is frequent.
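A minimal sketch of detecting drift from the similarity of the shared feature sets of a labeled block and an incoming unlabeled block; the simple token-frequency feature extraction, the Jaccard similarity, and the fixed threshold are illustrative stand-ins for the measure used in the paper:

    from collections import Counter

    def block_features(docs, min_count=2):
        """Features of a block: tokens occurring in at least `min_count` documents."""
        counts = Counter(tok for doc in docs for tok in set(doc.lower().split()))
        return {tok for tok, c in counts.items() if c >= min_count}

    def drift_detected(labeled_block, unlabeled_block, threshold=0.4):
        """Flag concept drift when the Jaccard similarity between the two blocks'
        feature sets falls below the threshold (illustrative criterion)."""
        f1, f2 = block_features(labeled_block), block_features(unlabeled_block)
        if not f1 or not f2:
            return False
        jaccard = len(f1 & f2) / len(f1 | f2)
        return jaccard < threshold

    labeled = ["the battery life is great", "great screen and battery", "battery lasts long"]
    unlabeled = ["shipping was slow", "slow delivery and poor packaging", "packaging was damaged"]
    print(drift_detected(labeled, unlabeled))   # True: the vocabularies barely overlap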

20.
In recent years, data stream classification has gradually become a research hotspot in data mining. However, most traditional data stream classification algorithms can only handle streams whose attribute values are known and precise, and cannot be applied effectively to the uncertain data streams that are common in real applications. To build a classification model that accommodates data uncertainty and to improve classification accuracy on uncertain data streams, an ensemble classification algorithm for uncertain data streams is proposed. The algorithm represents uncertain data as intervals with probability distribution functions and trains base classifiers with the C4.5 decision tree and naive Bayes methods; while handling the uncertainty in the stream properly, it also effectively addresses the concept drift hidden in the stream. Experimental results show that the proposed algorithm is robust when classifying uncertain data streams and achieves high classification accuracy.
