Similar Documents
20 similar documents found (search time: 109 ms)
1.
Along with the explosive increase of data and information, incremental learning ability has become more and more important for machine learning approaches. Online algorithms try to forget irrelevant information instead of synthesizing all available information (as opposed to classic batch learning algorithms). Combining classifiers has been proposed as a new direction for improving classification accuracy. However, most ensemble algorithms operate in batch mode. For this reason, we propose an incremental ensemble that combines five incrementally trainable classifiers (the Naive Bayes, Averaged One-Dependence Estimators (AODE), 3-Nearest Neighbors, Non-Nested Generalised Exemplars (NNGE), and KStar algorithms) using the voting methodology. We performed a large-scale comparison of the proposed ensemble with other state-of-the-art algorithms on several datasets, and the proposed method produces better accuracy in most cases.
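The voting scheme described above can be sketched in a few lines. This is a minimal illustration, not the paper's ensemble: the two base learners here (a running nearest-centroid model and a memory-based 1-NN) are simple stand-ins for the five classifiers named in the abstract, chosen only because they can be trained one example at a time.

```python
from collections import Counter

class NearestCentroid:
    """Incremental nearest-centroid: keeps a running sum per class."""
    def __init__(self):
        self.sums, self.counts = {}, Counter()
    def learn_one(self, x, y):
        self.counts[y] += 1
        if y in self.sums:
            self.sums[y] = [s + xi for s, xi in zip(self.sums[y], x)]
        else:
            self.sums[y] = list(x)
    def predict_one(self, x):
        def dist(y):
            c = [s / self.counts[y] for s in self.sums[y]]
            return sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        return min(self.sums, key=dist)

class OneNN:
    """Incremental 1-NN: stores every example it has seen."""
    def __init__(self):
        self.memory = []
    def learn_one(self, x, y):
        self.memory.append((x, y))
    def predict_one(self, x):
        return min(self.memory,
                   key=lambda m: sum((a - b) ** 2 for a, b in zip(m[0], x)))[1]

class VotingEnsemble:
    """Majority vote over incrementally trained members."""
    def __init__(self, members):
        self.members = members
    def learn_one(self, x, y):
        for m in self.members:
            m.learn_one(x, y)
    def predict_one(self, x):
        return Counter(m.predict_one(x) for m in self.members).most_common(1)[0][0]

ens = VotingEnsemble([NearestCentroid(), OneNN()])
for x, y in [((0.0, 0.0), 'a'), ((0.1, 0.2), 'a'),
             ((5.0, 5.0), 'b'), ((5.2, 4.9), 'b')]:
    ens.learn_one(x, y)
print(ens.predict_one((0.05, 0.1)))
```

Because every member supports `learn_one`, the ensemble itself remains incremental: no stored batch is ever refit.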

2.
The ability to predict a student's performance could be useful in a great number of ways in university-level distance learning. Students' marks in a few written assignments can constitute the training set for a supervised machine learning algorithm. Along with the explosive increase of data and information, incremental learning ability has become more and more important for machine learning approaches. Online algorithms try to forget irrelevant information instead of synthesizing all available information (as opposed to classic batch learning algorithms). Combining classifiers has been proposed as a new direction for improving classification accuracy. However, most ensemble algorithms operate in batch mode. We therefore propose an online ensemble of classifiers that combines an incremental version of Naive Bayes, the 1-NN, and the WINNOW algorithms using the voting methodology. Among other significant conclusions, it was found that the proposed algorithm is the most appropriate for constructing a software support tool.
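Of the three members named above, WINNOW is the least widely known; a minimal sketch of it follows. The multiplicative update, the promotion/demotion factor alpha, and the threshold theta = n are the standard textbook formulation, not parameters taken from the paper.

```python
import itertools

class Winnow:
    """Mistake-driven online learner for binary features."""
    def __init__(self, n_features, alpha=2.0):
        self.w = [1.0] * n_features
        self.theta = float(n_features)   # standard threshold choice
        self.alpha = alpha
    def predict(self, x):
        return 1 if sum(wi * xi for wi, xi in zip(self.w, x)) >= self.theta else 0
    def learn(self, x, y):
        if self.predict(x) == y:
            return                        # update only on mistakes
        # promote active weights on a missed positive, demote on a false positive
        factor = self.alpha if y == 1 else 1.0 / self.alpha
        self.w = [wi * factor if xi else wi for wi, xi in zip(self.w, x)]

# Learn the monotone disjunction x0 OR x2 over 4 binary features.
clf = Winnow(4)
for _ in range(30):
    for x in itertools.product([0, 1], repeat=4):
        clf.learn(x, 1 if (x[0] or x[2]) else 0)
print(all(clf.predict(x) == (1 if (x[0] or x[2]) else 0)
          for x in itertools.product([0, 1], repeat=4)))
```

WINNOW's mistake bound grows only logarithmically in the number of irrelevant features, which is what makes it attractive when only a few assignments actually predict the final outcome.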

3.
To address the loss of useful information in typical incremental SVM learning algorithms, and the purely objective pursuit of classifier accuracy in existing ones, this paper introduces the subjectivity of the three-way decision loss function into incremental SVM learning and proposes a three-way-decision-based incremental SVM learning method. First, the conditional probability in the three-way decision is computed as the ratio of the feature distance to the center distance; then the boundary region of the three-way decision is added, as boundary vectors, to the original support vectors and the newly arrived samples for joint training. Finally, simulation experiments show that the method not only makes full use of the available information to improve classification accuracy, but also corrects, to some extent, the objectivity of existing incremental SVM learning algorithms and solves the problem of computing the conditional probability in three-way decisions.

4.
Incremental learning has been used extensively for data stream classification, but most attention has been paid to non-evolutionary methods. In this paper, we introduce new incremental learning algorithms based on harmony search. We first propose a new algorithm for the classification of batch data, called the harmony-based classifier, and then give its incremental version for the classification of data streams, called the incremental harmony-based classifier. Finally, we improve it to reduce its computational overhead in the absence of drifts and to increase its robustness in the presence of noise; this improved version is called the improved incremental harmony-based classifier. The proposed methods are evaluated on several real-world and synthetic data sets. Experimental results show that the proposed batch classifier outperforms some batch classifiers and that the proposed incremental methods effectively address the issues usually encountered in data stream environments. The improved incremental harmony-based classifier captures concept drifts with significantly better speed and accuracy than the non-incremental harmony-based method, and its accuracy is comparable to non-evolutionary algorithms. The experimental results also confirm its robustness.
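For readers unfamiliar with the underlying metaheuristic, here is a minimal harmony-search sketch. It minimizes a toy numeric function rather than optimizing classification rules as the paper does; HMS, HMCR, and PAR are the standard harmony-search parameters, and all values here are illustrative.

```python
import random

def harmony_search(f, dim, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iters=3000, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    # harmony memory: hms candidate solutions and their objective values
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:               # memory consideration
                v = memory[rng.randrange(hms)][d]
                if rng.random() < par:            # pitch adjustment
                    v += rng.uniform(-bw, bw)
            else:                                 # random consideration
                v = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, v)))
        s = f(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:                     # replace the worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]

# Minimize the sphere function; the optimum is at the origin.
x, fx = harmony_search(lambda v: sum(t * t for t in v), dim=3, bounds=(-5, 5))
print(fx)
```

In the paper's setting, a "harmony" would encode a candidate classifier rather than a numeric vector, but the memory-consideration/pitch-adjustment loop is the same.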

5.
袁飞云 《计算机应用》2013,33(7):1976-1979
Codebook-model-based image classification ignores the topological information of images, and incremental learning leads to limited classification accuracy; to address these problems, a codebook generation method based on the Self-Organizing Incremental Neural Network (SOINN) is proposed. Common codebook encoding schemes are first reviewed; the basic codebook model is then improved by exploiting SOINN's two abilities, automatically determining the number of clusters and preserving the topological structure of the data, to find more effective visual words, design a more effective encoding scheme, and produce a more suitable codebook. Experimental results show an improvement in classification accuracy of up to nearly 1% over comparable algorithms across different sample sizes and codebook scales. This indicates that the SOINN-based codebook generation method significantly improves the accuracy of image classification and can be applied to various image classification tasks more efficiently and accurately.

6.
Traditional nonlinear manifold learning methods have achieved great success in dimensionality reduction and feature extraction, most of which operate in batch mode. However, if new samples are observed, batch methods must be recomputed from scratch, which is computationally intensive, especially when the number or dimension of the input samples is large. This paper presents incremental learning algorithms for Laplacian eigenmaps, which computes a low-dimensional representation of a data set by optimally preserving local neighborhood information in a certain sense. A sub-manifold analysis algorithm, together with an alternative formulation of the linear incremental method, is proposed to learn new samples incrementally. A locally linear reconstruction mechanism is introduced to update the embedding results of existing samples. The algorithms are easy to implement and computationally simple. Simulation results testify to the efficiency and accuracy of the proposed algorithms.
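For context, the batch method the paper makes incremental can be sketched compactly. This sketch uses a fully connected heat-kernel graph for brevity (the usual construction keeps only k nearest neighbors) and does not reproduce the paper's incremental update.

```python
import numpy as np

def laplacian_eigenmaps(X, n_components=1, sigma=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    W = np.exp(-d2 / (2 * sigma ** 2))                    # heat-kernel affinities
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W                        # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:1 + n_components]                    # skip the constant eigenvector

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (5, 2)), rng.normal(5, 0.1, (5, 2))])
Y = laplacian_eigenmaps(X)
# Two well-separated clusters take opposite signs in the first embedding coordinate.
print((np.sign(Y[:5, 0]) != np.sign(Y[5:, 0])).all())
```

The cost of the `eigh` call on the full data set is exactly what motivates the incremental updates described in the abstract.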

7.
To counteract the effect of articulated-object deformation on subspace descriptions, this paper proposes an articulated-object tracking algorithm based on incremental learning. The algorithm introduces image segmentation and the fast Fourier transform to effectively eliminate the influence of background pixels on the object description and the error caused by inaccurate localization of the foreground object within the target region; local binary patterns are applied to add geometric position information between pixels to the object description, and an incremental learning method updates the object features online, ultimately providing the tracker with a fairly accurate object description. Experimental results show that the proposed algorithm tracks articulated objects well.

8.
张明洋  闻英友  杨晓陶  赵宏 《控制与决策》2017,32(10):1887-1893
To address the low learning efficiency and poor accuracy of the online sequential extreme learning machine (OS-ELM) on incremental data, a weighted online sequential extreme learning machine (WOS-ELM) algorithm based on incremental weighted averaging is proposed. The weighted combination of the residuals of the model trained on the original data and the model trained on the incremental data is taken as the cost function, from which a training model that balances the original and incremental data is derived; the original data are used to damp fluctuations caused by the incremental data, giving the online extreme learning machine better stability and thereby improving learning efficiency and accuracy. Simulation results show that the proposed WOS-ELM algorithm achieves good prediction accuracy and generalization on incremental data.
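A minimal sketch of the OS-ELM baseline that the abstract improves on may help. The hidden layer is random and fixed; output weights are updated per chunk with the standard recursive least-squares formulas. The paper's residual-weighting (WOS-ELM) cost is not reproduced here, and the toy regression task is illustrative.

```python
import numpy as np

class OSELM:
    def __init__(self, n_in, n_hidden, rng):
        self.W = rng.normal(size=(n_in, n_hidden))   # fixed random hidden weights
        self.b = rng.normal(size=n_hidden)
        self.P = None                                # inverse correlation matrix
        self.beta = None                             # output weights
    def _h(self, X):
        return np.tanh(X @ self.W + self.b)
    def fit_initial(self, X, T):
        H = self._h(X)
        self.P = np.linalg.inv(H.T @ H + 1e-6 * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ T
    def fit_chunk(self, X, T):                       # recursive least-squares update
        H = self._h(X)
        K = np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
        self.P -= self.P @ H.T @ K @ H @ self.P
        self.beta += self.P @ H.T @ (T - H @ self.beta)
    def predict(self, X):
        return self._h(X) @ self.beta

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))
T = np.sin(3 * X)
model = OSELM(1, 40, rng)
model.fit_initial(X[:60], T[:60])
for i in range(60, 200, 35):                         # remaining data in chunks
    model.fit_chunk(X[i:i + 35], T[i:i + 35])
print(np.abs(model.predict(X) - T).mean())
```

Each chunk update costs only a small matrix inversion of chunk size, which is why OS-ELM is attractive for incremental data in the first place.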

9.
Most data-mining algorithms assume static behavior of the incoming data. In the real world, the situation is different: most continuously collected data streams are generated by dynamic processes that may change over time, in some cases even drastically. A change in the underlying concept, also known as concept drift, causes the data-mining model generated from past examples to become less accurate and relevant for classifying the current data. Most online learning algorithms deal with concept drift by generating a new model every time a drift is detected. On the one hand, this solution ensures accurate and relevant models at all times, implying an increase in classification accuracy; on the other hand, it suffers from a major drawback, namely the high computational cost of generating new models. The problem worsens when concept drift is detected frequently, so a compromise between computational effort and accuracy is needed. This work describes a series of incremental algorithms that are shown empirically to produce more accurate classification models than batch algorithms in the presence of concept drift while being computationally cheaper than existing incremental methods. The proposed incremental algorithms are based on an advanced decision-tree learning methodology called "Info-Fuzzy Network" (IFN), which is capable of inducing compact and accurate classification models. The algorithms are evaluated on real-world streams of traffic and intrusion-detection data.
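The drift-detection step that triggers model regeneration can be illustrated with the standard DDM heuristic (this is a generic detector, not the paper's IFN-based method): monitor the running error rate and flag a drift when it rises well above its best observed level.

```python
import math

class DDM:
    def __init__(self):
        self.n = 0
        self.errors = 0
        self.p_min = float('inf')
        self.s_min = float('inf')
    def update(self, error):
        """Feed 1 for a misclassification, 0 otherwise; returns True on drift."""
        self.n += 1
        self.errors += error
        if self.n < 30 or self.errors == 0:    # warm-up period
            return False
        p = self.errors / self.n               # running error rate
        s = math.sqrt(p * (1 - p) / self.n)    # its standard deviation
        if p + s < self.p_min + self.s_min:    # track the best level seen
            self.p_min, self.s_min = p, s
        return p + s > self.p_min + 3 * self.s_min

det = DDM()
drift_at = None
for t in range(2000):
    # deterministic toy stream: 10% errors, jumping to 50% at t = 1000
    err = (t % 10 == 0) if t < 1000 else (t % 2 == 0)
    if det.update(1 if err else 0):
        drift_at = t
        break
print(drift_at)
```

Only when the detector fires does an expensive model rebuild occur, which is exactly the computational compromise the abstract discusses.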

10.
Incremental learning updates a classification model by revising its parameters with the useful information in incremental data; since the naive Bayes algorithm can exploit both prior information and incremental information, it is a natural choice for designing incremental learning algorithms. Three-way decision is a decision theory that matches human cognitive patterns and has a subjective character. This paper integrates the idea of three-way decisions into incremental naive Bayes learning and proposes a three-way-decision-based incremental naive Bayes algorithm. A notion of classification confidence is constructed from the naive Bayes model and combined with a cost function to determine the positive, negative, and boundary regions of three-way decision theory; the useful information in the three regions is then used to build the incremental learning algorithm. Experimental results show that, with suitable choices of the thresholds [α] and [β], both classification accuracy and recall improve markedly.
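The two ingredients above, count-based incremental naive Bayes and an alpha/beta three-way split, can be sketched together. The confidence score (a normalized posterior) and the thresholds are illustrative choices, not the paper's exact definitions.

```python
from collections import Counter, defaultdict

class ThreeWayNB:
    def __init__(self, alpha=0.7, beta=0.3):
        self.alpha, self.beta = alpha, beta
        self.class_counts = Counter()
        self.feat_counts = defaultdict(Counter)  # (class, feature idx) -> value counts
    def learn_one(self, x, y):                   # incremental: just bump the counts
        self.class_counts[y] += 1
        for i, v in enumerate(x):
            self.feat_counts[(y, i)][v] += 1
    def posterior(self, x, y):
        n = sum(self.class_counts.values())
        p = self.class_counts[y] / n
        for i, v in enumerate(x):
            c = self.feat_counts[(y, i)]
            p *= (c[v] + 1) / (self.class_counts[y] + len(c) + 1)  # Laplace smoothing
        return p
    def decide(self, x):
        post = {y: self.posterior(x, y) for y in self.class_counts}
        total = sum(post.values())
        y, p = max(post.items(), key=lambda kv: kv[1])
        p /= total                               # normalized confidence
        if p >= self.alpha:
            return y, 'positive'
        if p <= self.beta:
            return y, 'negative'
        return y, 'boundary'                     # defer the decision

clf = ThreeWayNB()
for x, y in [(('sunny', 'hot'), 'no'), (('sunny', 'mild'), 'no'),
             (('rain', 'mild'), 'yes'), (('rain', 'cool'), 'yes')]:
    clf.learn_one(x, y)
print(clf.decide(('rain', 'mild')))
```

Because the model is nothing but counts, an incremental batch is absorbed by calling `learn_one` on each new example; no stored data are revisited.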

11.
Because of its wide application in areas such as credit card fraud analysis, the classification of concept-drifting data streams has attracted growing attention. Existing algorithms usually assume that true labels become available once instances are classified, and use the true labels of all classified instances to detect concept drift and adjust the classification model. Since labeling instances costs considerable time and effort, this solution is infeasible in practice. Accordingly, a concept drift detection algorithm based on KNNModel and incremental Bayes, called KnnM-IB, is proposed. The new algorithm retains KNNModel's advantages of high accuracy and speed for instances covered by model clusters, while classifying hard-to-handle samples with an incremental Bayes algorithm, thereby guaranteeing classification quality. Concept drift is detected using changes in the size of a variable sliding window together with a small number of samples labeled by active learning. When the data stream is stable, semi-supervised learning is used to enlarge the set of labeled instances for model updating, which better matches the requirements of practical applications. Experimental results show that the method classifies data streams effectively while detecting concept drift and updating the model accordingly.

12.
Data complexity measures quantify the separability of classes in a data set and can provide effective guidance for data preprocessing, classifier selection, and related steps. However, existing methods for computing data complexity lack incremental learning capability, which limits their practical application. The results of this work show that four data complexity measures can be made incremental by computing sufficient statistics, and experimental results verify the effectiveness of the incremental computation of these measures.
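The sufficient-statistics idea can be illustrated on one common complexity measure, Fisher's discriminant ratio F1 (the abstract does not name its four measures, so this pairing is an illustrative reconstruction): per-class count, mean, and sum of squared deviations are updated one sample at a time with Welford's formulas, and F1 is read off whenever needed.

```python
class RunningStats:
    """Sufficient statistics (n, mean, M2) maintained incrementally."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
    def update(self, x):                  # Welford's online update
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)
    @property
    def var(self):
        return self.m2 / self.n if self.n else 0.0

def fisher_f1(a, b):
    """(mu1 - mu2)^2 / (var1 + var2) from the two classes' running stats."""
    denom = a.var + b.var
    return (a.mean - b.mean) ** 2 / denom if denom else float('inf')

c0, c1 = RunningStats(), RunningStats()
for x in [1.0, 1.1, 0.9, 1.2]:
    c0.update(x)
for x in [5.0, 5.2, 4.8, 5.1]:
    c1.update(x)
print(fisher_f1(c0, c1))   # large value: the feature separates the classes well
```

Adding a new sample costs O(1), so the complexity measure stays current as data arrive, which is precisely the incremental capability the abstract argues for.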

13.
Incremental learning has been widely addressed in the machine learning literature to cope with learning tasks where the learning environment is ever changing or training samples become available over time. However, most research explores incremental learning with statistical algorithms or neural networks rather than evolutionary algorithms. The work in this paper employs genetic algorithms (GAs) as basic learning algorithms for incremental learning within one or more classifier agents in a multiagent environment. Four new approaches with different initialization schemes are proposed. They keep the old solutions and use an "integration" operation to integrate them with new elements to accommodate new attributes, while biased mutation and crossover operations further evolve a reinforced solution. Simulation results on benchmark classification data sets show that the proposed approaches can deal with the arrival of new input attributes and integrate them with the original input space. It is also shown that the proposed approaches can be used successfully for incremental learning and improve classification rates compared to retraining the GA. Possible applications for continuous incremental training and feature selection are also discussed.

14.
15.
Incremental Learning Vector Quantization Based on Sample Density and Classification Error Rate
李娟  王宇平 《自动化学报》2015,41(6):1187-1200
As a simple and mature classification method, the K-nearest-neighbor (KNN) algorithm is widely used in data mining, pattern recognition, and other fields, but it still suffers from heavy computation, high memory consumption, and long running time. To address these problems, this paper builds on the single-layer competitive learning of incremental learning vector quantization (ILVQ) and incorporates neighborhood notions of sample density and classification error rate, proposing a new incremental learning vector quantization method. Through a competitive learning strategy, representative points are adaptively inserted, deleted, merged, and split in their neighborhoods, quickly yielding a prototype set of the original data set and thus achieving high compression of large-scale data while preserving classification accuracy. In addition, the traditional nearest-neighbor classification rule is improved by incorporating the sample density and classification error rate of the prototype neighbor set into the decision criterion. The proposed algorithm generates an effective set of representative prototypes in a single pass over the training set and generalizes well. Experimental results show that, compared with other algorithms, the method maintains or even improves classification accuracy and compression ratio, and classifies quickly.
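For context, the classic LVQ1 step underlying prototype-based methods like the one above can be sketched in a few lines. The abstract's ILVQ additionally grows, merges, and splits prototypes using sample density and error rate, which is not reproduced here; this is only the basic attract/repel update.

```python
def lvq1_step(prototypes, x, y, lr=0.1):
    """prototypes: list of ([coords], label); updates the winner in place."""
    dists = [sum((pi - xi) ** 2 for pi, xi in zip(p, x)) for p, _ in prototypes]
    w = dists.index(min(dists))                  # closest prototype wins
    coords, label = prototypes[w]
    sign = 1.0 if label == y else -1.0           # attract same class, repel other
    prototypes[w] = ([pi + sign * lr * (xi - pi) for pi, xi in zip(coords, x)],
                     label)
    return w

protos = [([0.0, 0.0], 'a'), ([5.0, 5.0], 'b')]
lvq1_step(protos, [1.0, 1.0], 'a')               # winner 'a' is pulled toward x
lvq1_step(protos, [4.0, 4.0], 'a')               # winner 'b' is pushed away
print(protos)
```

Since classification afterwards only consults the small prototype set instead of all training samples, the compression directly addresses KNN's memory and runtime costs mentioned in the abstract.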

16.
To address the unstable recognition performance of incremental learning models during the update phase, a kernel incremental learning method based on a balanced objective is proposed. By adding an optimization term that minimizes the mean empirical risk, an objective function that balances the numbers of training data is designed, together with an optimal solution scheme under incremental training conditions; combined with an importance-based strategy for effectively selecting new data, a lightweight incremental classification model is constructed. Experimental results on a public fall-detection dataset show that the proposed method maintains over 95% accuracy even when the accuracy of existing representative methods drops below 60%, while a model update costs only 3 ms. The results indicate that the algorithm significantly stabilizes recognition during the update phase of incremental learning while greatly reducing time consumption, enabling intelligent wearable-device applications on cloud service platforms.

17.
To make effective use of large numbers of unlabeled images for classification, an image classification method based on semi-supervised learning is proposed. The method bridges a small number of labeled images and a large number of unlabeled images through shared latent topics, and uses the must-link and cannot-link constraints of the labeled images to improve the classification accuracy of the unlabeled ones. Experimental results show that the method improves classification accuracy by about 10% on the Caltech-101 dataset and a 7-class image set. In addition, since most current semi-supervised image classification methods lack incremental learning ability, an incremental learning model of the method is also proposed; experimental results show that the incremental model improves computational efficiency by nearly 90% compared with the non-incremental model.

18.
陈晓琪  谢振平  刘渊  詹千熠 《软件学报》2021,32(12):3884-3900
Data sampling is an important means of quickly extracting useful information from large-scale data sets. To better meet the demand for efficient processing of ever larger data, a new method is implemented that leverages the excellent performance of the affinity propagation algorithm and, by introducing hierarchical incremental processing and a dynamic sample-weighting strategy, balances processing efficiency and sampling quality very effectively. The hierarchical incremental strategy processes the original large-scale data set in batches and then combines the results, while dynamic weighting assigns reasonable weights to sample points during affinity propagation to obtain better global consistency over the sampled data space. In experiments on synthetic data sets, UCI benchmark data sets, and image data sets, the new method matches existing related methods in sampling partition quality while greatly improving computational efficiency. When the new method is further applied to data augmentation for deep learning, the results show that combining the original augmentation method with efficient incremental sampling yields significantly better model performance while keeping the total training set size unchanged.

19.
李成  赵海琳 《测控技术》2018,37(11):50-54
Attribute reduction is an important application of rough set theory in pattern recognition. Traditional attribute reduction algorithms are only suitable for static information systems and face great challenges on continuously and dynamically updated ones. For incomplete information systems, an incremental attribute reduction algorithm is proposed: the rough-set concept of the positive region is introduced into incomplete information systems, and an incremental positive-region-based attribute reduction algorithm is given for the case where attributes are added. Experimental results show that the proposed incremental algorithm is more efficient than non-incremental algorithms and outperforms other algorithms of the same type.
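The positive region at the heart of the algorithm above can be sketched directly from its definition: the objects whose equivalence class under the chosen condition attributes is consistent in its decision label. The paper's incremental update when new attributes arrive is omitted; this is only the static computation.

```python
from collections import defaultdict

def positive_region(objects, attrs):
    """objects: list of (condition_values_tuple, decision); attrs: indices used."""
    blocks = defaultdict(list)
    for idx, (cond, dec) in enumerate(objects):
        blocks[tuple(cond[a] for a in attrs)].append(idx)   # equivalence classes
    pos = set()
    for members in blocks.values():
        decisions = {objects[i][1] for i in members}
        if len(decisions) == 1:          # consistent block -> in the positive region
            pos.update(members)
    return pos

data = [(('m', 'high'), 'yes'),
        (('m', 'low'),  'no'),
        (('f', 'high'), 'yes'),
        (('f', 'high'), 'no')]           # objects 2 and 3 conflict
print(sorted(positive_region(data, (0, 1))))
```

An attribute subset whose positive region equals that of the full attribute set preserves the classification ability, which is the criterion reduction algorithms test repeatedly, and hence what the incremental variant seeks to avoid recomputing from scratch.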

20.
王玲  穆志纯  郭辉 《计算机工程》2007,33(10):19-21
For the common industrial situation in which data arrive in batches, and to improve model accuracy and handle model updating, a batch incremental learning method based on support vector regression is proposed. In an industrial case study on modeling the prediction of the mechanical properties of steel, the results show that, compared with the traditional incremental SVM learning algorithm, the method improves model accuracy and shows good application potential.
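A minimal sketch in the spirit of batchwise updating follows. It trains a linear regressor under a (smoothed) epsilon-insensitive loss, the loss used by SVR, by SGD, and absorbs each new batch by continuing from the current weights instead of retraining from scratch. This is a simplification: the paper uses kernel SVR, not a linear SGD model, and all parameter values here are illustrative.

```python
class OnlineLinearSVR:
    def __init__(self, n_features, lr=0.02, eps=0.01, reg=1e-4):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr, self.eps, self.reg = lr, eps, reg
    def predict(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
    def fit_batch(self, X, Y, epochs=300):   # incremental: continues from current w
        for _ in range(epochs):
            for x, y in zip(X, Y):
                err = self.predict(x) - y
                if abs(err) <= self.eps:
                    g = 0.0                  # inside the epsilon tube: no penalty
                else:                        # smoothed epsilon-insensitive gradient
                    g = err - self.eps * (1.0 if err > 0 else -1.0)
                self.w = [wi - self.lr * (g * xi + self.reg * wi)
                          for wi, xi in zip(self.w, x)]
                self.b -= self.lr * g

model = OnlineLinearSVR(1)
model.fit_batch([[0.0], [1.0], [2.0]], [1.0, 3.0, 5.0])   # initial batch
model.fit_batch([[3.0], [4.0]], [7.0, 9.0])               # new batch arrives
print(model.predict([5.0]))   # the underlying target function is y = 2x + 1
```

The key property mirrored from the abstract is that the second `fit_batch` call refines the existing model with the new batch only, rather than rebuilding it from the union of all data.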
