Similar Documents
Found 20 similar documents (search took 15 ms)
1.
Learning in the Presence of Concept Drift and Hidden Contexts   (cited 43 times: 0 self-citations, 43 by others)
Widmer, Gerhard; Kubat, Miroslav. Machine Learning, 1996, 23(1): 69-101

2.
Extracting Hidden Context   (cited 4 times: 0 self-citations, 4 by others)
Concept drift due to hidden changes in context complicates learning in many domains including financial prediction, medical diagnosis, and communication network performance. Existing machine learning approaches to this problem use an incremental, on-line learning paradigm. Batch, off-line learners tend to be ineffective in domains with hidden changes in context, as they assume that the training set is homogeneous. An off-line, meta-learning approach for the identification of hidden context is presented. The new approach uses an existing batch learner and the process of contextual clustering to identify stable hidden contexts and the associated context-specific, locally stable concepts. The approach is broadly applicable to the extraction of context reflected in time and spatial attributes. Several algorithms for the approach are presented and evaluated. A successful application of the approach to a complex flight simulator control task is also presented.

3.
Tracking Drifting Concepts By Minimizing Disagreements   (cited 3 times: 0 self-citations, 3 by others)
In this paper we consider the problem of tracking a subset of a domain (called the target) which changes gradually over time. A single (unknown) probability distribution over the domain is used to generate random examples for the learning algorithm and to measure the speed at which the target changes. Clearly, the more rapidly the target moves, the harder it is for the algorithm to maintain a good approximation of the target. Therefore we evaluate algorithms based on how much movement of the target can be tolerated between examples while predicting with accuracy ε. Furthermore, the complexity of the class H of possible targets, as measured by d, its VC-dimension, also affects the difficulty of tracking the target concept. We show that if the problem of minimizing the number of disagreements with a sample from among concepts in a class H can be approximated to within a factor k, then there is a simple tracking algorithm for H which can achieve a probability ε of making a mistake if the target movement rate is at most a constant times ε²/(k(d + k) ln(1/ε)), where d is the Vapnik-Chervonenkis dimension of H. Also, we show that if H is properly PAC-learnable, then there is an efficient (randomized) algorithm that with high probability approximately minimizes disagreements to within a factor of 7d + 1, yielding an efficient tracking algorithm for H which tolerates drift rates up to a constant times ε²/(d² ln(1/ε)). In addition, we prove complementary results for the classes of halfspaces and axis-aligned hyperrectangles showing that the maximum rate of drift that any algorithm (even with unlimited computational power) can tolerate is a constant times ε²/d.
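The tracking strategy the abstract analyzes, refitting a hypothesis by minimizing disagreements with recent examples, can be sketched for the simplest case of one-dimensional threshold concepts. This is a hypothetical illustration: the window size, candidate set, and drift schedule are choices made here, not taken from the paper.

```python
import random

def disagreements(threshold, window):
    """Number of window examples mislabeled by the hypothesis x >= threshold."""
    return sum((x >= threshold) != label for x, label in window)

def track_threshold(stream, window_size=50):
    """Predict with the current threshold, then refit it by minimizing
    disagreements with the most recent window of examples."""
    window, predictions = [], []
    threshold = 0.5                      # initial hypothesis
    for x, label in stream:
        predictions.append(x >= threshold)
        window.append((x, label))
        window = window[-window_size:]
        # candidate thresholds induced by the window's points
        candidates = [0.0, 1.0] + [v for v, _ in window]
        threshold = min(candidates, key=lambda t: disagreements(t, window))
    return predictions

# A slowly drifting target: the true threshold moves from 0.3 to 0.7.
random.seed(0)
stream = []
for t in range(400):
    true_thr = 0.3 + 0.4 * t / 400
    x = random.random()
    stream.append((x, x >= true_thr))
preds = track_threshold(stream)
accuracy = sum(p == label for p, (_, label) in zip(preds, stream)) / 400
print("tracking accuracy:", accuracy)
```

Because the drift per example is small relative to the window, the minimizing threshold stays close to the moving target, matching the paper's intuition that slower drift permits better tracking.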

4.
Selecting Examples for Partial Memory Learning   (cited 9 times: 0 self-citations, 9 by others)

5.
With the advent of the big-data era, data stream classification has been applied in many fields, such as spam filtering, market prediction, and weather forecasting. Recurring concepts are an important characteristic of these application domains. To address "negative transfer" and the lag in concept drift detection when learning and classifying streams with recurring concepts, this paper proposes RC-OTL, a data stream classification algorithm for recurring concept drift based on online transfer learning. When concept drift is detected, RC-OTL stores the base classifier just learned; it then computes the domain similarity between the most recent samples and each stored historical classifier in order to select the source classifier best suited to learning the subsequent samples, thereby improving knowledge transfer from the source domain to the target domain. In addition, before drift is detected, RC-OTL selects an appropriate classifier, according to classification accuracy, to classify subsequent samples. A preliminary theoretical analysis explains why RC-OTL can effectively overcome negative transfer, and experimental results further show that RC-OTL does improve classification accuracy and adapts to subsequent samples more quickly after concept drift occurs.

6.
In their unmodified form, lazy-learning algorithms may have difficulty learning and tracking time-varying input/output function maps such as those that occur in concept shift. Extensions of these algorithms, such as Time-Windowed Forgetting (TWF), can permit learning of time-varying mappings by deleting older exemplars, but have decreased classification accuracy when the input-space sampling distribution of the learning set is time-varying. Additionally, TWF suffers from lower asymptotic classification accuracy than equivalent non-forgetting algorithms when the input sampling distributions are stationary. Other shift-sensitive algorithms, such as Locally Weighted Forgetting (LWF), avoid the negative effects of time-varying sampling distributions, but still have lower asymptotic classification accuracy in non-varying cases. We introduce Prediction Error Context Switching (PECS), which allows lazy-learning algorithms to have good classification accuracy under time-varying function mappings and input sampling distributions, while still maintaining their asymptotic classification accuracy in static tasks. PECS works by selecting and re-activating previously stored instances based on their most recent consistency record. The classification accuracy and active learning set sizes for the above algorithms are compared in a set of learning tasks that illustrate the differing time-varying conditions described above. The results show that the PECS algorithm has the best overall classification accuracy over these differing time-varying conditions, while still having asymptotic classification accuracy competitive with unmodified lazy learners intended for static environments.
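The core PECS mechanism, deactivating stored instances whose recent consistency record is poor and reactivating them when the record recovers, can be sketched on a 1-NN lazy learner. The record length, threshold, and switching rule below are illustrative assumptions, not the paper's exact parameters.

```python
import random
from collections import deque

class PECS1NN:
    """PECS-style store switching on a 1-NN lazy learner: every stored
    exemplar keeps a short record of whether its label agreed with the
    labels of recent queries it was nearest to; exemplars whose recent
    consistency falls below a threshold are deactivated, and inactive
    exemplars whose record recovers are reactivated."""
    def __init__(self, history=3, threshold=0.7):
        self.history = history
        self.threshold = threshold
        self.active = []        # entries: [x, label, consistency record]
        self.inactive = []

    def _nearest(self, pool, x):
        return min(pool, key=lambda e: abs(e[0] - x)) if pool else None

    def predict(self, x):
        e = self._nearest(self.active, x)
        return e[1] if e else None

    def learn(self, x, y):
        # mark the nearest exemplar in each store as consistent/inconsistent
        for pool in (self.active, self.inactive):
            e = self._nearest(pool, x)
            if e is not None:
                e[2].append(1 if e[1] == y else 0)
        self.active.append([x, y, deque([1], maxlen=self.history)])
        def consistent(e):
            return sum(e[2]) / len(e[2]) >= self.threshold
        pool = self.active + self.inactive
        self.active = [e for e in pool if consistent(e)]
        self.inactive = [e for e in pool if not consistent(e)]

# Concept flip at t = 100: labels invert. Accuracy over the last 100 steps
# shows the learner has switched to exemplars consistent with the new concept.
random.seed(1)
learner = PECS1NN()
correct_late = 0
for t in range(400):
    x = random.random()
    y = int(x >= 0.5) if t < 100 else int(x < 0.5)
    if t >= 300 and learner.predict(x) == y:
        correct_late += 1
    learner.learn(x, y)
print("late accuracy:", correct_late / 100)
```

Unlike TWF, nothing here is deleted by age alone: stale exemplars leave the active store only after they prove inconsistent, which is what preserves asymptotic accuracy in static tasks.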

7.
We generalize the recent relative loss bounds for on-line algorithms, where the additional loss of the algorithm on the whole sequence of examples over the loss of the best expert is bounded. The generalization allows the sequence to be partitioned into segments, and the goal is to bound the additional loss of the algorithm over the sum of the losses of the best experts for each segment. This models situations in which the examples change and different experts are best for certain segments of the sequence of examples. In the single segment case, the additional loss is proportional to log n, where n is the number of experts and the constant of proportionality depends on the loss function. Our algorithms do not produce the best partition; however, the loss bound shows that our predictions are close to those of the best partition. When the number of segments is k+1 and the sequence is of length ℓ, we can bound the additional loss of our algorithm over the best partition by O(k log n + k log(ℓ/k)). For the case when the loss per trial is bounded by one, we obtain an algorithm whose additional loss over the loss of the best partition is independent of the length of the sequence. The additional loss becomes O(k log n + k log(L/k)), where L is the loss of the best partition with k+1 segments. Our algorithms for tracking the predictions of the best expert are simple adaptations of Vovk's original algorithm for the single best expert case. As in the original algorithms, we keep one weight per expert, and spend O(1) time per weight in each trial.
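The segment-tracking idea can be illustrated with the well-known fixed-share variant of the exponentially weighted forecaster: one weight per expert, a multiplicative loss update, and a small redistribution of weight each trial so that a newly best expert can recover quickly after a segment change. This is a sketch, with the learning rate, share rate, and square loss as illustrative choices rather than the paper's exact algorithm.

```python
import math

def fixed_share(expert_preds, outcomes, eta=2.0, alpha=0.05):
    """Fixed-share forecaster: multiplicative weight update by
    exp(-eta * loss), then every expert shares a fraction alpha of its
    weight with the others, so the algorithm can switch quickly when
    the best expert changes between segments."""
    n = len(expert_preds[0])
    w = [1.0 / n] * n
    total_loss = 0.0
    for preds, y in zip(expert_preds, outcomes):
        total = sum(w)
        forecast = sum(wi * p for wi, p in zip(w, preds)) / total
        total_loss += (forecast - y) ** 2                  # square loss per trial
        # loss update: one multiplicative update per expert weight
        w = [wi * math.exp(-eta * (p - y) ** 2) for wi, p in zip(w, preds)]
        # share update: redistribute a fraction alpha of every weight
        pool = sum(w)
        w = [(1 - alpha) * wi + alpha * (pool - wi) / (n - 1) for wi in w]
    return total_loss

# Two constant experts; the best expert switches at trial 100.
preds = [(0.0, 1.0)] * 200
ys = [0.0] * 100 + [1.0] * 100
tracking_loss = fixed_share(preds, ys)
best_static = min(sum((p - y) ** 2 for y in ys) for p in (0.0, 1.0))
print(tracking_loss, best_static)
```

The best single expert incurs loss 100 on this sequence, while the tracker's loss stays small: the shared weight floor means the second expert's weight needs only a few trials to overtake after the switch.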

8.
The concept drift and class imbalance present in complex data streams degrade classifier performance. Traditional batch learning algorithms must account for memory and running time, and they do not perform well on massive, fast-arriving data streams, which moreover exhibit widespread drift and class imbalance; using online ensemble algorithms to handle complex data streams has therefore become an important research topic in data mining. This paper reviews and summarizes the online versions of the bagging, boosting, and stacking ensemble methods from the perspective of ensemble strategy and compares the performance of the different models. It provides the first detailed survey and analysis of online ensemble classification algorithms for complex data streams: drift detection and classification algorithms are reviewed from the two perspectives of active detection and passive adaptation; imbalanced data streams are covered from the two perspectives of data preprocessing and cost sensitivity; the time and space efficiency of representative algorithms is analyzed; and the performance of algorithms evaluated on the same datasets is compared. Finally, directions for future work on the challenges in online ensemble classification of complex data streams are proposed.

9.
The concept lattice is a powerful tool in knowledge processing and analysis, and reducing it can improve efficiency and simplify problems. Under consistent decision formal contexts, this paper gives judgment theorems for consistent sets and for attribute characteristics, and on this basis derives a method for computing attribute reducts.

10.
A Critical View of Context   (cited 3 times: 0 self-citations, 3 by others)

11.
Starting from the locality principle of language use, this paper introduces the notion of a dynamic context network and proposes a dynamic context learning method for Chinese syllable-to-character (pinyin-to-Chinese-character) conversion, which effectively improves the conversion accuracy.

12.
This paper surveys ensemble classification algorithms from China and abroad in detail, reviewing the two components of ensemble classification (combining base classifiers and dynamically updating the ensemble model), clearly distinguishing the strengths and weaknesses of different ensemble algorithms, and comparing the algorithms and the experimental datasets used. Directions for further research and candidate solutions are also proposed.

13.
Owing to wide application in areas such as credit card fraud analysis, the classification of concept-drifting data streams has attracted scholarly attention. Existing algorithms usually assume that the true label of every instance becomes known once it has been classified, and they use the true labels of all classified instances to detect concept drift in the stream and to adjust the classification model. However, because labeling instances costs substantial time and effort, this solution cannot be realized in practical applications. Accordingly, this paper proposes KnnM-IB, a concept drift detection algorithm based on KNNModel and incremental naive Bayes. The new algorithm retains the advantages of KNNModel, namely high accuracy and speed for instances covered by the model clusters, while classifying hard instances with an incremental naive Bayes classifier, thereby preserving classification quality. It detects concept drift using changes in the size of a variable sliding window together with a small number of instances labeled via active learning. When the data stream is stable, semi-supervised learning is used to enlarge the set of labeled instances and update the model, which better matches the requirements of practical applications. Experimental results show that the method classifies data streams effectively while detecting concept drift and updating the model accordingly.

14.
Auer, Peter; Warmuth, Manfred K. Machine Learning, 1998, 32(2): 127-150
Littlestone developed a simple deterministic on-line learning algorithm for learning k-literal disjunctions. This algorithm (called Winnow) keeps one weight for each of the n variables and does multiplicative updates to its weights. We develop a randomized version of Winnow and prove bounds for an adaptation of the algorithm for the case when the disjunction may change over time. In this case a possible target disjunction schedule is a sequence of disjunctions (one per trial), and the shift size is the total number of literals that are added/removed from the disjunctions as one progresses through the sequence. We develop an algorithm that predicts nearly as well as the best disjunction schedule for an arbitrary sequence of examples. This algorithm, which allows us to track the predictions of the best disjunction, is hardly more complex than the original version. However, the amortized analysis needed for obtaining worst-case mistake bounds requires new techniques. In some cases our lower bounds show that the upper bounds of our algorithm have the right constant in front of the leading term in the mistake bound and almost the right constant in front of the second leading term. Computer experiments support our theoretical findings.
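For reference, the deterministic Winnow algorithm the paper starts from can be sketched in a few lines. The promotion/demotion factor of 2 and the threshold n/2 are common textbook choices, not necessarily the exact parameters used in the paper.

```python
import random

def winnow(examples, n, alpha=2.0):
    """Littlestone's Winnow: one weight per variable, threshold n/2,
    multiplicative promotion on false negatives and demotion on false
    positives; mistake bound O(k log n) for k-literal disjunctions."""
    theta = n / 2.0
    w = [1.0] * n
    mistakes = 0
    for x, y in examples:          # x: 0/1 vector, y: value of the target disjunction
        pred = sum(wi for wi, xi in zip(w, x) if xi) >= theta
        if pred != y:
            mistakes += 1
            factor = alpha if y else 1.0 / alpha
            w = [wi * factor if xi else wi for wi, xi in zip(w, x)]
    return mistakes

# Target: the disjunction x0 v x1 v x2 over n = 100 Boolean variables.
random.seed(2)
n, relevant = 100, (0, 1, 2)
examples = []
for _ in range(500):
    x = [1 if random.random() < 0.05 else 0 for _ in range(n)]
    examples.append((x, any(x[i] for i in relevant)))
m = winnow(examples, n)
print("mistakes:", m)
```

The multiplicative update is what makes the mistake bound logarithmic in n; the paper's tracking variant additionally keeps demoted weights from vanishing so a literal re-entering the disjunction can be recovered quickly.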

15.
This paper proposes a secure fast handover mechanism for FMIPv6 based on context transfer. The context transfer protocol carries the identity authentication information of the mobile node (MN), allowing the new access network to authenticate the MN quickly without involving the home network. Signed certificates and symmetric-key mechanisms are used to make the context transfer secure and reliable. A security analysis shows that the method is secure and feasible.

16.
Most existing data stream classification algorithms assume an ideal, balanced class distribution, yet in real data stream environments the distribution is often imbalanced, and the stream is typically accompanied by concept drift. To address imbalance and concept drift in data streams, this paper proposes a new ensemble-learning-based classification algorithm for imbalanced data streams. First, to handle imbalance, a hybrid sampling method balances the dataset before the model is trained; then a base-classifier weighting and elimination strategy handles concept drift, improving the classifier's performance. Finally, comparative experiments against classical data stream classification algorithms on synthetic and real datasets show that, in data stream environments containing concept drift and imbalance, the overall classification performance of the proposed algorithm is superior to that of the other algorithms.
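The balancing step can be illustrated with a simple hybrid of random oversampling of the minority class and undersampling of the majority class. This is a stand-in sketch: the abstract does not specify the exact mixed-sampling method, and the meet-in-the-middle target size is an assumption made here.

```python
import random

def hybrid_sample(chunk, seed=0):
    """Balance a chunk of (x, label) pairs by oversampling the minority
    class and undersampling the majority class to a common size.
    Assumes both classes (labels 0 and 1) are present in the chunk."""
    rng = random.Random(seed)
    pos = [e for e in chunk if e[1] == 1]
    neg = [e for e in chunk if e[1] == 0]
    minority, majority = (pos, neg) if len(pos) <= len(neg) else (neg, pos)
    target = len(chunk) // 2                     # meet in the middle
    minority = minority + [rng.choice(minority) for _ in range(target - len(minority))]
    majority = rng.sample(majority, target)
    balanced = minority + majority
    rng.shuffle(balanced)
    return balanced

# A 90/10 imbalanced chunk becomes 50/50 after hybrid sampling.
chunk = [(i, 0) for i in range(90)] + [(i, 1) for i in range(10)]
balanced = hybrid_sample(chunk)
counts = {0: 0, 1: 0}
for _, label in balanced:
    counts[label] += 1
print(counts)
```

Balancing each chunk before training means the base classifiers are not dominated by the majority class, which is the precondition for the weighting and elimination strategy to handle drift effectively.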

17.
Based on a study of the concept drift problem in data stream classification, this paper proposes AdBagging (adaptive-lambda bagging), an improvement of the Online Bagging algorithm with an adaptive sampling parameter. The number of misclassified samples observed during classification is used to adjust the parameter of the Poisson distribution from which Online Bagging samples, so that the weight of a new sample in each learner can be tuned dynamically: misclassified samples in the stream receive a higher learning weight factor and correctly classified samples a lower one, and the weight factors are further adjusted according to the samples' order of arrival, allowing the ensemble classifier to adapt its diversity dynamically. The algorithm retains the efficiency and simplicity of Online Bagging while handling concept drift in data streams; experimental results on synthetic and real datasets demonstrate its effectiveness.
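Oza-style Online Bagging with an error-dependent Poisson parameter, the mechanism AdBagging builds on, can be sketched as follows. The specific λ values and the toy nearest-centroid base learner are illustrative assumptions, not the paper's exact weighting schedule.

```python
import math
import random

def poisson(lam, rng):
    """Sample from Poisson(lam) by Knuth's method."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

class Centroid:
    """Toy base learner: nearest class centroid in one dimension."""
    def __init__(self):
        self.sums = {0: 0.0, 1: 0.0}
        self.counts = {0: 0, 1: 0}
    def predict(self, x):
        seen = {c: self.sums[c] / self.counts[c] for c in (0, 1) if self.counts[c]}
        return min(seen, key=lambda c: abs(seen[c] - x)) if seen else 0
    def learn(self, x, y):
        self.sums[y] += x
        self.counts[y] += 1

class OnlineBagging:
    """Oza-style online bagging: each base learner sees each arriving
    example k ~ Poisson(lam) times; here lam is raised for examples the
    ensemble currently misclassifies (illustrative adaptive rule)."""
    def __init__(self, learners, seed=0):
        self.learners = learners
        self.rng = random.Random(seed)
    def predict(self, x):
        votes = [l.predict(x) for l in self.learners]
        return max(set(votes), key=votes.count)
    def learn(self, x, y, lam_correct=1.0, lam_error=2.0):
        lam = lam_correct if self.predict(x) == y else lam_error
        for l in self.learners:
            for _ in range(poisson(lam, self.rng)):
                l.learn(x, y)

# Stream of two well-separated classes around 0.2 and 0.8.
random.seed(3)
ens = OnlineBagging([Centroid() for _ in range(5)])
correct = 0
for t in range(300):
    y = t % 2
    x = (0.2 if y == 0 else 0.8) + random.uniform(-0.1, 0.1)
    if t >= 100 and ens.predict(x) == y:
        correct += 1
    ens.learn(x, y)
print("accuracy:", correct / 200)
```

Drawing k from Poisson(λ) reproduces, in the limit, the with-replacement resampling of batch bagging without storing the stream; raising λ on errors concentrates learning effort on the samples that expose a changed concept.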

18.
Processing data streams imposes new demands not present in static environments. In online learning, the probability distribution of the data can often change over time (concept drift). The prequential assessment methodology is commonly used to evaluate the performance of classifiers in data streams with stationary and non-stationary distributions. It is based on the premise that the purpose of statistical inference is to make sequential probability forecasts for future observations, rather than to express information about the past accuracy achieved. This article empirically evaluates the prequential methodology considering its three common strategies for updating the prediction model, namely Basic Window, Sliding Window, and Fading Factors. Specifically, it aims to identify which of these variations is the most accurate for the experimental evaluation of past results in scenarios where concept drifts occur, with greater interest in the accuracy observed over the total data flow. The prequential accuracy of the three variations and the real accuracy obtained in the learning process on each dataset are the basis for this evaluation. The results of the experiments carried out suggest that Prequential with the Sliding Window variation is the best alternative.
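The three prequential update strategies the article compares can be sketched side by side; the window length and fading factor below are illustrative choices.

```python
from collections import deque

class PrequentialAccuracy:
    """Prequential (test-then-train) accuracy under the three update
    strategies: basic window (all history), sliding window, fading factor."""
    def __init__(self, window=100, fading=0.99):
        self.hits = 0.0
        self.n = 0
        self.recent = deque(maxlen=window)
        self.fading = fading
        self.s = 0.0       # faded count of hits
        self.b = 0.0       # faded count of trials

    def update(self, correct):
        c = 1.0 if correct else 0.0
        self.hits += c
        self.n += 1
        self.recent.append(c)
        self.s = c + self.fading * self.s
        self.b = 1.0 + self.fading * self.b

    def basic(self):
        return self.hits / self.n

    def sliding(self):
        return sum(self.recent) / len(self.recent)

    def faded(self):
        return self.s / self.b

# 200 correct predictions followed by 200 errors: the basic window hides
# the degradation, while the sliding window and fading factor expose it.
pa = PrequentialAccuracy(window=100, fading=0.99)
for t in range(400):
    pa.update(t < 200)
print(pa.basic(), pa.sliding(), pa.faded())
```

After the simulated drift, the basic-window estimate is still 0.5 while the sliding-window estimate has fallen to 0, which illustrates why forgetting-based variants track the classifier's current accuracy more faithfully.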

19.
Combining the features of Mobile IPv6 fast handover with context transfer technology, and drawing on the strengths of both fast handover and hierarchical mobility management, this paper proposes QoSCT, a quality-of-service context transfer scheme for Mobile IPv6, by modifying the Mobile IPv6 fast handover model and its signaling interactions and by extending the Neighbor Discovery protocol. The scheme introduces a functional entity, the handover direction node (HDP), which finds the most suitable access router and directs the corresponding neighboring routers to transfer the QoS context of real-time traffic flows. Using the link-layer trigger mechanism of the fast handover scheme as the trigger for context transfer, the QoS context is transferred while the mobile node completes its handover, avoiding blind handovers and wasted resources. Theoretical analysis and simulations show that QoSCT significantly reduces delay jitter during handover of real-time traffic and achieves seamless handover for mobile nodes.

20.
We have combined competitive and Hebbian learning in a neural network designed to learn and recall complex spatiotemporal sequences. In such sequences, a particular item may occur more than once, or the sequence may share states with another sequence. Processing of repeated/shared states is a hard problem that occurs very often in the domain of robotics. The proposed model consists of two groups of synaptic weights: competitive interlayer and Hebbian intralayer connections, which are responsible for encoding the spatial and temporal features of the input sequence, respectively. Three additional mechanisms allow the network to deal with shared states: context units, neurons disabled from learning, and redundancy used to encode sequence states. The network operates by determining the current and the next state of the learned sequences. The model is simulated over various sets of robot trajectories in order to evaluate its storage and retrieval abilities, its sequence sampling effects, its robustness to noise, and its fault tolerance.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号