20 similar documents found; search took 15 ms
1.
Most existing class-incremental learning methods rely on storing data or expanding the network structure, but under memory constraints they cannot effectively mitigate catastrophic forgetting. To address this problem, a novel brain-inspired generative replay method is proposed. First, a VAE-ACGAN is used to simulate a self-organizing memory system, improving the quality of the generated pseudo-samples; next, shared-parameter and private-parameter modules are introduced to protect already-extracted features; finally, a Gaussian mixture model is applied to the generator's latent variables to sample task-specific replay pseudo-samples. Experimental results on the MNIST, Permuted MNIST, and CIFAR-10 datasets show classification accuracies of 92.91%, 91.44%, and 40.58% respectively, significantly outperforming other class-incremental learning methods. In addition, on MNIST the backward-transfer and forward-transfer metrics reach 3.32% and 0.83%, demonstrating that the method balances task stability against plasticity and effectively prevents catastrophic forgetting.
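The latent-variable sampling step above can be sketched as follows. This is a minimal illustration, not the paper's model: the component weights, means, standard deviations, and the idea of one mixture component per learned task are hypothetical placeholders, and a real generator would decode the sampled latent vector into a pseudo-sample.

```python
import random

def sample_gmm_latent(weights, means, stds, rng):
    """Draw one latent vector from a diagonal Gaussian mixture:
    pick a component in proportion to its weight, then sample
    each latent dimension from that component's Gaussian."""
    k = rng.choices(range(len(weights)), weights=weights)[0]
    z = [rng.gauss(mu, sd) for mu, sd in zip(means[k], stds[k])]
    return z, k

# Two hypothetical mixture components, e.g. one per learned task.
weights = [0.5, 0.5]
means = [[-2.0, -2.0], [2.0, 2.0]]
stds = [[0.1, 0.1], [0.1, 0.1]]

rng = random.Random(0)
z, k = sample_gmm_latent(weights, means, stds, rng)
# A trained decoder would now map z to a replay pseudo-sample.
```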
2.
3.
4.
5.
An improved incremental learning algorithm for support vector machines is presented. After new samples arrive, it analyses which of the original and new samples may turn into new support vectors. Based on this analysis, an improved learning algorithm is proposed that discards samples useless for the final classification while retaining the useful ones. Experimental results on standard datasets show that the algorithm greatly reduces training time while preserving classification accuracy.
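The sample-selection idea, keeping only points that can still become support vectors, can be sketched with a margin-band filter. This is a hedged illustration: the linear decision function, the band width, and the data are invented for the example, not taken from the paper.

```python
def margin_candidates(samples, labels, w, b, band=1.0):
    """Keep only samples on or inside the margin band
    y * (w.x + b) <= band; these are the ones that may become
    support vectors after retraining with new data."""
    kept = []
    for x, y in zip(samples, labels):
        score = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
        if score <= band:
            kept.append((x, y))
    return kept

# Hypothetical linear decision boundary w.x + b = 0.
w, b = [1.0, 0.0], 0.0
samples = [[0.5, 0.0], [3.0, 0.0], [-0.2, 1.0], [-4.0, 0.0]]
labels = [1, 1, -1, -1]
kept = margin_candidates(samples, labels, w, b)
# Samples far on the correct side of the margin are discarded.
```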
6.
In incremental learning, when a new example is added to a decision table, the usual way to obtain a minimal set of decision rules is to recompute over all the data in the table. This is clearly inefficient, and also unnecessary. Starting from rough set theory, this paper proposes a criterion for minimal recomputation and, on that basis, gives an improved incremental learning algorithm that outperforms traditional incremental learning algorithms to a certain extent.
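The minimal-recomputation criterion can be illustrated with a toy decision-table update: recomputation is needed only when the new example conflicts with an existing rule over the same condition attributes. This is a drastic simplification of the rough-set machinery; the dictionary-based rule store and the three outcomes are assumptions made purely for illustration.

```python
def update_rules(rules, condition, decision):
    """Incrementally maintain decision rules: `rules` maps a tuple
    of condition-attribute values to a decision. Only the entry
    touched by the new example is ever recomputed; the rest of
    the table is left alone."""
    cond = tuple(condition)
    if cond not in rules:
        rules[cond] = decision     # new rule, no recomputation
        return "added"
    if rules[cond] == decision:
        return "unchanged"         # already covered
    rules[cond] = None             # conflict: mark for refinement
    return "conflict"

rules = {("sunny", "hot"): "no"}
r1 = update_rules(rules, ["rainy", "mild"], "yes")
r2 = update_rules(rules, ["sunny", "hot"], "no")
r3 = update_rules(rules, ["sunny", "hot"], "yes")
```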
7.
An elimination algorithm for SVM incremental learning (cited: 1; self-citations: 1; other citations: 1)
Based on the KKT conditions of the SVM optimization problem and the relations among samples, this paper analyses how the support vector set changes after samples are added and how support vectors behave during incremental learning, and proposes a new forgetting mechanism for SVM incremental learning: a counter-based elimination algorithm. With only a single parameter to set, the algorithm effectively forgets and eliminates training data. Experimental results on standard datasets show that incremental learning with this method preserves training accuracy while effectively increasing training speed and reducing storage consumption.
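The counter-based forgetting mechanism can be sketched as follows, under the assumption (not stated in the abstract) that a sample's counter resets whenever it reappears as a support vector; the single threshold parameter mirrors the algorithm's one tunable parameter.

```python
class ForgettingCounter:
    """Counter-based forgetting: each sample that is not selected
    as a support vector in an incremental round has its counter
    incremented; once the counter exceeds the threshold the
    sample is discarded from the working set."""

    def __init__(self, threshold=2):
        self.threshold = threshold
        self.counts = {}

    def update(self, working_set, support_ids):
        survivors = []
        for sid in working_set:
            if sid in support_ids:
                self.counts[sid] = 0   # active again: reset counter
            else:
                self.counts[sid] = self.counts.get(sid, 0) + 1
            if self.counts[sid] <= self.threshold:
                survivors.append(sid)
        return survivors

fc = ForgettingCounter(threshold=2)
ws = [0, 1, 2, 3]
ws = fc.update(ws, support_ids={0, 1})   # 2 and 3 miss once
ws = fc.update(ws, support_ids={0})      # 1 misses once; 2, 3 twice
ws = fc.update(ws, support_ids={0})      # 2 and 3 exceed the threshold
```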
8.
9.
10.
In this paper we propose a new probability update rule and sampling procedure for population-based incremental learning. These proposed methods are based on the concept of opposition as a means for controlling the amount of diversity within a given sample population. We prove that under this scheme we are able to asymptotically guarantee a higher diversity, which allows for a greater exploration of the search space. The presented probabilistic algorithm is specifically for applications in the binary domain. The benchmark data used for the experiments are commonly used deceptive and attractor basin functions as well as 10 common travelling salesman problem instances. Our experimental results focus on the effect of parameters and problem size on the accuracy of the algorithm as well as on a comparison to traditional population-based incremental learning. We show that the new algorithm is able to effectively utilize the increased diversity of opposition which leads to significantly improved results over traditional population-based incremental learning.
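A single generation of PBIL with opposition-based sampling might look like the sketch below. The learning rate, sample size, and OneMax fitness are placeholder choices; the paper's actual update rule and diversity guarantees are more involved.

```python
import random

def pbil_step(p, fitness, lr=0.1, n=20, rng=None):
    """One PBIL generation with opposition-based sampling: every
    sampled bit-string is paired with its opposite (all bits
    flipped), and the probability vector then shifts toward the
    best individual seen in the generation."""
    rng = rng or random.Random()
    best, best_fit = None, float("-inf")
    for _ in range(n):
        x = [1 if rng.random() < pi else 0 for pi in p]
        opp = [1 - bit for bit in x]      # opposite individual
        for cand in (x, opp):
            f = fitness(cand)
            if f > best_fit:
                best, best_fit = cand, f
    # Standard PBIL update toward the best individual.
    return [(1 - lr) * pi + lr * bi for pi, bi in zip(p, best)]

onemax = sum  # fitness: number of ones in the bit-string
p0 = [0.5] * 8
p1 = pbil_step(p0, onemax, rng=random.Random(1))
```

Because an individual and its opposite together always cover at least half the bits, the pair guarantees the selection pool stays diverse even late in the run.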
11.
12.
This paper reviews support vector machines and the state of research on SVM incremental learning algorithms, and analyses how support vectors and non-support vectors interconvert after new samples join the support vector set. To address the low efficiency of existing elimination mechanisms, an improved SVM incremental learning elimination algorithm, the double-elimination algorithm, is proposed. After two effective elimination passes, samples useless for classification are discarded, so each new incremental training round runs on the reduced effective dataset rather than on the whole unwieldy training set, which markedly shortens subsequent training time. Theoretical analysis and experimental results show that the algorithm effectively speeds up training while preserving classification accuracy.
13.
Feature-subspace-based target tracking adapts to changes in target state and is insensitive to external conditions such as illumination, but it usually assumes that the subspace basis vectors are fixed, which not only requires offline training but also degrades tracking accuracy when the target pose changes substantially. This paper proposes a Rao-Blackwellized particle filter based on incremental learning: the subspace basis vectors are obtained by online learning, and the target's projection coefficients in the subspace are updated online analytically. Experiments show that the new algorithm maintains high tracking accuracy and strong robustness under large deformations, pose changes, and illumination variation.
14.
Experimental study on population-based incremental learning algorithms for dynamic optimization problems (cited: 4; self-citations: 4; other citations: 4)
Shengxiang Yang, Xin Yao. Soft Computing - A Fusion of Foundations, Methodologies and Applications, 2005, 9(11): 815-834
Evolutionary algorithms have been widely used for stationary optimization problems. However, the environments of real-world problems are often dynamic, which seriously challenges traditional evolutionary algorithms. In this paper, the application of population-based incremental learning (PBIL) algorithms, a class of evolutionary algorithms, to dynamic problems is investigated. Inspired by the complementarity mechanism in nature, a Dual PBIL is proposed, which operates on two probability vectors that are dual to each other with respect to the central point in the genotype space. A diversity-maintaining technique that combines the central probability vector into PBIL is also proposed to improve PBIL's adaptability in dynamic environments. A new dynamic problem generator that can create the required dynamics from any binary-encoded stationary problem is also formalized. Using this generator, a series of dynamic problems was systematically constructed from several benchmark stationary problems, and an experimental study was carried out to compare the performance of several PBIL algorithms and two variants of the standard genetic algorithm. Based on the experimental results, we analysed the weaknesses and strengths of the studied PBIL algorithms and identified several potential improvements to PBIL for dynamic optimization problems.
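The dual-probability-vector idea from the abstract can be sketched as follows; this shows only the sampling side, with hypothetical probabilities, and omits the central probability vector and the selection logic of the full Dual PBIL.

```python
import random

def dual_pbil_sample(p, rng):
    """Dual PBIL keeps two probability vectors that are dual with
    respect to the centre of genotype space: the dual of p is
    1 - p. Each generation both vectors produce samples, so when
    the environment flips, the dual vector is already adapted."""
    primal = [1 if rng.random() < pi else 0 for pi in p]
    dual_p = [1.0 - pi for pi in p]
    dual = [1 if rng.random() < pi else 0 for pi in dual_p]
    return primal, dual, dual_p

# Hypothetical probability vector biased toward the pattern 1100.
rng = random.Random(42)
p = [0.9, 0.9, 0.1, 0.1]
primal, dual, dual_p = dual_pbil_sample(p, rng)
# `dual` tends to produce the complementary pattern 0011.
```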
15.
To enable a classifier to learn from new samples, an incremental learning algorithm based on the nearest-neighbour rule is proposed. The algorithm first computes the matching degree between a new sample and the stored reference samples to find the best and second-best matches, then compares them with a matching-degree threshold to decide between within-class learning and class learning. The algorithm was tested on standard UCI datasets and applied to a vehicle-recognition simulation, and the results verify its effectiveness. Further experiments examine how the choice of the matching-degree threshold and the number of initial samples affect classification accuracy.
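The decision between within-class learning and class learning can be sketched like this. The inverse-distance matching degree, the prototype-averaging update, and the threshold value are assumptions made for illustration; the paper also uses the second-best match, which is omitted here.

```python
import math

def classify_or_learn(prototypes, x, threshold=0.8):
    """Incremental nearest-neighbour learning: compute a matching
    degree between x and every stored prototype; above the
    threshold, the best-matching class absorbs x (within-class
    learning); below it, x founds a new class (class learning)."""
    def match(a, b):
        # Matching degree: inverse-distance similarity in (0, 1].
        return 1.0 / (1.0 + math.dist(a, b))

    best_label, best_score = None, -1.0
    for label, proto in prototypes.items():
        s = match(proto, x)
        if s > best_score:
            best_label, best_score = label, s
    if best_score >= threshold:
        # Within-class learning: nudge the prototype toward x.
        proto = prototypes[best_label]
        prototypes[best_label] = [(pi + xi) / 2
                                  for pi, xi in zip(proto, x)]
        return best_label
    # Class learning: store x as the prototype of a new class.
    new_label = f"class_{len(prototypes)}"
    prototypes[new_label] = list(x)
    return new_label

protos = {"car": [1.0, 1.0]}
l1 = classify_or_learn(protos, [1.1, 1.0])   # close: within-class
l2 = classify_or_learn(protos, [9.0, 9.0])   # far: founds a new class
```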
16.
17.
This work considers scalable incremental extreme learning machine (I-ELM) algorithms, which could be suitable for big data regression. During the training of I-ELMs, the hidden neurons are presented one by one, and the weights are based solely on simple direct summations, which can be most efficiently mapped on parallel environments. Existing incremental versions of ELMs are the I-ELM, enhanced incremental ELM (EI-ELM), and convex incremental ELM (CI-ELM). We study the enhanced and convex incremental ELM (ECI-ELM) algorithm, which is a combination of the latter two. The main findings are that ECI-ELM is fast, accurate, and fully scalable when it operates in a parallel system of distributed-memory workstations. Experimental simulations on several benchmark data sets demonstrate that the ECI-ELM is the most accurate among the existing I-ELM, EI-ELM, and CI-ELM algorithms. We also analyze the convergence as a function of the hidden neurons and demonstrate that ECI-ELM has the lowest error rate curve and converges much faster than the other algorithms on all of the data sets. The parallel simulations also reveal that the data-parallel training of the ECI-ELM can guarantee simplicity and straightforward mappings and can deliver speedups and scale-ups very close to linear.
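The one-neuron-at-a-time training with direct summations can be sketched for the basic I-ELM case (the ECI-ELM enhancements are omitted). The sigmoid activation, weight ranges, and toy regression target are assumptions; each output weight is the closed-form least-squares fit of the current residual, so the residual norm never increases.

```python
import math
import random

def i_elm_train(xs, ys, n_neurons=50, rng=None):
    """Incremental ELM sketch: hidden neurons are added one at a
    time with random input weights; each output weight beta is
    computed analytically from direct summations over the data,
    and the residual is updated in place."""
    rng = rng or random.Random()
    residual = list(ys)
    model = []
    for _ in range(n_neurons):
        a, b = rng.uniform(-1, 1), rng.uniform(-1, 1)
        # Hidden-neuron outputs (sigmoid activation).
        h = [1.0 / (1.0 + math.exp(-(a * x + b))) for x in xs]
        # Least-squares output weight from two direct summations.
        beta = (sum(e * v for e, v in zip(residual, h))
                / sum(v * v for v in h))
        residual = [e - beta * v for e, v in zip(residual, h)]
        model.append((a, b, beta))
    return model, residual

xs = [i / 10 for i in range(20)]
ys = [math.sin(x) for x in xs]
model, residual = i_elm_train(xs, ys, rng=random.Random(0))
err = math.sqrt(sum(e * e for e in residual) / len(residual))
```

Because each beta only needs two running sums over the data, the per-neuron work is exactly the kind of direct summation that maps cleanly onto distributed-memory workers.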
18.
In many practical data mining scenarios, such as network intrusion detection, Twitter spam detection, and computer-aided diagnosis, a source domain that is distributed differently from, but related to, the target domain is commonly available. Both domains typically contain large numbers of unlabelled samples, and labelling every one of them is difficult, expensive, time-consuming, and sometimes unnecessary. It is therefore important and worthwhile to fully exploit the labelled and unlabelled samples in both domains to solve the classification task in the target domain. Combining inductive transfer learning with semi-supervised learning, this paper proposes a semi-supervised inductive transfer learning framework called Co-Transfer. Co-Transfer first generates three TrAdaBoost classifiers for transfer from the original source domain to the original target domain, and another three TrAdaBoost classifiers for transfer from the original target domain to the original source domain. Both groups are trained on bootstrap samples (drawn with replacement) of the originally labelled data from the two domains. In each iteration of Co-Transfer, each group of TrAdaBoost classifiers is updated on a new training set consisting partly of the original labelled samples, partly of samples labelled by its own group, and partly of samples labelled by the other group. When the iterations terminate, the ensemble of the three source-to-target TrAdaBoost classifiers serves as the target-domain classifier. Experimental results on UCI and text classification datasets show that Co-Transfer can effectively exploit the labelled and unlabelled samples of both domains to improve generalization performance.
19.
Addressing the characteristics of soft-sensor modelling and its main difficulties, a soft-sensor modelling method based on the AdaBoost.RT ensemble learning algorithm is proposed. To remedy AdaBoost.RT's inherent shortcomings and the difficulty of updating a soft-sensor model online, two improvements are introduced: adaptively modifying the threshold and adding incremental learning capability. The method was used to build a molten steel temperature soft-sensor model for a 300 t LF refining furnace at Baosteel, and the model was validated on real production data. The results show that the model achieves good predictive accuracy and supports online updating well.
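One boosting round of AdaBoost.RT can be sketched as follows; a sample counts as misclassified when its relative error exceeds the threshold phi, which is the quantity the paper's improvement adapts automatically. The data, phi value, and power parameter here are illustrative assumptions.

```python
def adaboost_rt_round(weights, preds, targets, phi, power=2):
    """One AdaBoost.RT round for regression: a sample is 'wrong'
    when its relative error exceeds phi; correctly predicted
    samples are down-weighted so that later weak learners focus
    on the hard ones."""
    rel_err = [abs(p - t) / abs(t) for p, t in zip(preds, targets)]
    # Weighted error rate over the misclassified samples.
    eps = sum(w for w, e in zip(weights, rel_err) if e > phi)
    beta = eps ** power
    # Down-weight samples predicted within the phi band.
    new_w = [w * beta if e <= phi else w
             for w, e in zip(weights, rel_err)]
    z = sum(new_w)                      # normalisation constant
    return [w / z for w in new_w], eps

weights = [0.25] * 4
preds = [1.0, 2.2, 3.0, 8.0]
targets = [1.0, 2.0, 3.0, 4.0]
new_w, eps = adaboost_rt_round(weights, preds, targets, phi=0.2)
# Only the last sample exceeds phi, so its weight grows relatively.
```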
20.
Building on previous work on the ridgelet neural network, which uses the ridgelet function as the activation function in a feedforward neural network, this paper proposes a single-hidden-layer regularization ridgelet network (SLRRN). An extra regularization term encoding prior knowledge of the problem to be solved is added to the cost functional to obtain better generalization performance, and a simple, efficient method named the cost-functional-minimized extreme and incremental learning (CFM-EIL) algorithm is proposed. In CFM-EIL-based SLRRN (CFM-EIL-SLRRN), the ridgelet hidden neurons and their parameters are tuned incrementally and analytically, which significantly reduces the computational complexity relative to gradient-based or other iterative algorithms. Simulation experiments on time-series forecasting, with several commonly used regression methods compared under the same conditions, show the superiority of the proposed CFM-EIL-SLRRN over its counterparts in forecasting.