Similar Documents
17 similar documents retrieved.
1.
丁一 《计算机仿真》2007,24(6):142-145
Artificial neural network ensembles are a research focus in neural computation and have already seen mature applications in many fields. A neural network ensemble trains a finite number of neural networks on the same problem; the ensemble's output for a given input is determined jointly by the outputs of its member networks on that input. Negative correlation learning is a training method for neural network ensembles that encourages the individual networks to learn different parts of the training set, so that the ensemble as a whole learns the entire training data better. The improved negative correlation learning method applies a BP algorithm with a momentum term to the error function, combining the advantages of the original negative correlation method and momentum BP, and yields a batch learning algorithm with strong generalization ability and fast learning; a sketch of this penalized-error formulation is given below.
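For orientation, the standard negative correlation learning error (the Liu-and-Yao-style formulation this line of work typically builds on) adds a correlation penalty to each member's squared error, and a momentum-BP update is then applied to the penalized error. This is a hedged sketch with assumed symbols (λ penalty strength, η learning rate, α momentum coefficient), not the paper's exact equations:

    E_i(n) = \frac{1}{2}\bigl(F_i(n) - d(n)\bigr)^2 + \lambda\, p_i(n),
    \qquad
    p_i(n) = \bigl(F_i(n) - \bar{F}(n)\bigr)\sum_{j \neq i}\bigl(F_j(n) - \bar{F}(n)\bigr)

    \Delta w(t) = -\eta\,\frac{\partial E_i(n)}{\partial w} + \alpha\,\Delta w(t-1)

Here F_i(n) is member i's output on pattern n, \bar{F}(n) the ensemble average output, and d(n) the target; setting λ = 0 recovers plain momentum BP for independent networks.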

2.
Starting from the idea of immune clustering, a neural network ensemble method is proposed. Roulette-wheel selection repeatedly draws samples from each immune cluster to build the training set of each individual network in the ensemble, and the ensemble output is obtained by relative-majority (plurality) voting. The immune-clustering-based ensemble is applied to tongue diagnosis in traditional Chinese medicine, with liver-disease syndrome diagnosis used as the simulation task. The experimental results show that the immune-clustering-based ensemble improves generalization more effectively than a Bagging-based neural network ensemble, so the approach is feasible and effective; a sketch of the sampling and voting steps follows.
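A minimal sketch of the sampling-and-voting steps described above, assuming each immune cluster is given as an array of sample indices and that per-sample affinity scores drive the roulette wheel (the abstract does not specify the selection probabilities, so these are placeholder assumptions):

    import numpy as np

    def roulette_sample(cluster_indices, scores, n_draws, rng):
        # Draw indices from one cluster with probability proportional to `scores`.
        probs = np.asarray(scores, dtype=float)
        probs /= probs.sum()
        return rng.choice(cluster_indices, size=n_draws, replace=True, p=probs)

    def build_member_training_set(clusters, affinity, draws_per_cluster, rng):
        # Assemble one member's training indices by sampling from every immune cluster.
        picks = [roulette_sample(c, affinity[c], draws_per_cluster, rng) for c in clusters]
        return np.concatenate(picks)

    def plurality_vote(member_predictions):
        # Relative-majority vote over an (n_members, n_samples) array of
        # class labels, assumed to be non-negative integers.
        preds = np.asarray(member_predictions)
        return np.array([np.bincount(col).argmax() for col in preds.T])

Each member network would be trained on its own index set from build_member_training_set, and plurality_vote combines the members' predicted syndrome labels.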

3.
To address the large errors of traditional data-driven flood forecasting methods and the lack of interaction among the sub-networks of conventional ensemble forecasters, this work starts from single-model prediction, selects heterogeneous BP, CNN, and LSTM networks, and builds a neural network ensemble flood forecasting model based on negative correlation learning. By explicitly adding a regularization term, the model's overall error is analyzed through error-variance and ambiguity decompositions, so the sub-networks are no longer fully independent; this preserves ensemble diversity and improves the accuracy of the final predictions. Experiments on the Tunxi catchment in Anhui show that the negative-correlation-based model forecasts flood processes effectively and is more accurate than a single model; the decomposition it relies on is sketched below.
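The error-ambiguity decomposition underlying this kind of regularized ensemble can be stated, for a weighted-average ensemble f_ens = Σ_i w_i f_i with Σ_i w_i = 1 (a standard result, not this paper's notation):

    E_{\mathrm{ens}} = \bar{E} - \bar{A},
    \qquad \bar{E} = \sum_i w_i\,(f_i - d)^2,
    \qquad \bar{A} = \sum_i w_i\,(f_i - f_{\mathrm{ens}})^2

Because the ambiguity term \bar{A} is non-negative, the ensemble error never exceeds the weighted average member error, which is why explicitly rewarding disagreement among the BP, CNN, and LSTM members can improve the forecast.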

4.
A selective support vector machine ensemble algorithm (total citations: 1; self-citations: 0; citations by others: 1)
陈涛 《计算机工程与设计》2011,32(5):1807-1809,1819
To improve the generalization performance of support vector machines, a selective SVM ensemble based on differential evolution and negative correlation learning is proposed. Multiple independent sub-SVMs are generated and trained with the bootstrap technique; a fitness function built on negative correlation learning theory both improves the generalization of each sub-SVM and enlarges the differences among them. Differential evolution then computes each sub-SVM's optimal weight in a weighted average, and only the SVMs whose weights exceed a threshold take part in the weighted combination. Experimental results show that the algorithm is an effective ensemble method and further improves SVM generalization; a hedged sketch of the weighting and selection step follows.
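A minimal sketch of the bootstrap-train, weight-search, and thresholded-selection steps, using validation mean squared error as a stand-in objective (the paper's fitness function is built from negative correlation learning theory, which is not reproduced here); the threshold value and SVR hyperparameters are placeholder assumptions:

    import numpy as np
    from sklearn.svm import SVR
    from scipy.optimize import differential_evolution

    def train_bootstrap_svms(X, y, n_members, rng):
        # Train independent sub-SVMs on bootstrap resamples of the training set.
        members = []
        for _ in range(n_members):
            idx = rng.integers(0, len(X), size=len(X))
            members.append(SVR().fit(X[idx], y[idx]))
        return members

    def select_and_combine(members, X_val, y_val, threshold=0.05):
        # Search member weights with differential evolution, then keep only
        # members whose normalized weight exceeds the threshold.
        preds = np.stack([m.predict(X_val) for m in members])   # (n_members, n_val)

        def objective(w):
            w = np.abs(w) / (np.abs(w).sum() + 1e-12)            # convex combination
            return np.mean((w @ preds - y_val) ** 2)

        result = differential_evolution(objective, bounds=[(0.0, 1.0)] * len(members), seed=0)
        w = np.abs(result.x) / (np.abs(result.x).sum() + 1e-12)
        keep = w > threshold
        kept = [m for m, k in zip(members, keep) if k]
        return kept, w[keep] / w[keep].sum()

The final prediction would be the weighted average of the kept members' outputs on new data.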

5.
Traditional DSS have long been unable to solve practical decision problems well, and in particular struggle to make effective decisions and solve problems in complex environments. Ensemble learning creates diversity among individual learners through repeated sampling and thereby improves generalization ability; neural network ensemble methods are simple, clearly effective, and can significantly improve a system's generalization. This paper applies neural network ensemble techniques to DSS and constructs an intelligent decision support system architecture based on neural network ensembles.

6.
A cooperative construction algorithm for heterogeneous neural network ensembles (total citations: 4; self-citations: 0; citations by others: 4)
A cooperative construction algorithm for heterogeneous neural network ensembles (HNNECC) is proposed. Evolutionary programming first evolves network topologies and connection weights simultaneously to generate several heterogeneous optimized networks, which are then combined. During ensemble construction, cooperation among the networks maintains negative correlation between them, increasing the diversity among member networks while improving their individual accuracy. An analysis based on statistical learning theory shows that the method has good generalization performance. Experiments on four data sets show performance gains of 17% to 85% over a single network, and the method also outperforms Bagging and other traditional fixed-structure neural network ensembles.

7.
Load forecasting is the basis of power system planning. Traditional neural network forecasting methods are sensitive to the initial network weights, converge slowly, and easily become trapped in local minima. This paper first uses a genetic algorithm to optimize the network's initial values, then trains the network, and finally combines the resulting networks with the Bagging method to improve accuracy. Matlab simulation experiments show that the Bagging ensemble of genetic-algorithm-optimized neural networks overcomes the drawbacks of the conventional BP network: it converges quickly without easily falling into local extrema, generalizes well, and greatly improves forecasting accuracy.

8.
Multimodal particle swarm ensemble neural networks (total citations: 3; self-citations: 0; citations by others: 3)
A neural network ensemble method based on a multimodal particle swarm algorithm is proposed. In every training iteration, an improved fast clustering algorithm dynamically partitions the searching particles in weight space into several classes and finds the best particle of each class; the pairwise output-space dissimilarities between these best individuals are then computed, and classes whose dissimilarity is too low are merged. The result is a set of particle classes that differ both in weight space and in output space. Each class searches the weights of one member network, its best particle corresponding to that member; the best particles of all classes form the neural network ensemble, and the number of members is determined automatically by the algorithm. This controls ensemble diversity more directly and effectively. Compared with negative correlation ensembles, bagging, and boosting, the experimental results show that the algorithm improves the generalization ability of neural network ensembles.

9.
With analog circuit fault diagnosis as the main goal, and given the weaknesses of the single-network fault dictionary method for multi-fault and multi-task diagnosis in complex circuit systems, this paper discusses multi-fault neural network ensemble techniques and uses an ensemble of multiple networks to improve diagnosis speed and accuracy. A multi-network fault dictionary method is proposed to handle multi-fault tasks, a multi-structure neural network based on a hierarchical classification model is studied, and two unified fusion algorithms for fault localization are given. This overcomes the slow learning of a single network under multiple faults and the need to retrain the network whenever a new fault appears. An application example is also given.

10.
A neural network ensemble trains several neural networks and appropriately combines their conclusions, which can significantly improve the generalization ability of a learning system. Designing a good ensemble, however, requires striking a balance between the accuracy of the individual networks and the diversity among them. This paper proposes an improved construction method for neural network ensembles: a noise-spreading neural network ensemble algorithm (NSENN).

11.
A constructive algorithm for training cooperative neural network ensembles (total citations: 13; self-citations: 0; citations by others: 13)
This paper presents a constructive algorithm for training cooperative neural-network ensembles (CNNEs). CNNE combines ensemble architecture design with cooperative training for the individual neural networks (NNs) in an ensemble. Unlike most previous studies on training ensembles, CNNE puts emphasis on both accuracy and diversity among individual NNs in an ensemble. To maintain accuracy, the number of hidden nodes in each individual NN is also determined by a constructive approach. Incremental training based on negative correlation is used in CNNE to train individual NNs for different numbers of training epochs. The use of negative correlation learning and of different training epochs reflects CNNE's emphasis on diversity among individual NNs in an ensemble. CNNE has been tested extensively on a number of benchmark problems in machine learning and neural networks, including the Australian credit card assessment, breast cancer, diabetes, glass, heart disease, letter recognition, soybean, and Mackey-Glass time series prediction problems. The experimental results show that CNNE can produce NN ensembles with good generalization ability.

12.
Evolutionary ensembles with negative correlation learning (total citations: 3; self-citations: 0; citations by others: 3)
Based on negative correlation learning and evolutionary learning, this paper presents evolutionary ensembles with negative correlation learning (EENCL) to address the issues of automatic determination of the number of individual neural networks (NNs) in an ensemble and the exploitation of the interaction between individual NN design and combination. The idea of EENCL is to encourage different individual NNs in the ensemble to learn different parts or aspects of the training data so that the ensemble can learn better the entire training data. The cooperation and specialization among different individual NNs are considered during the individual NN design. This provides an opportunity for different NNs to interact with each other and to specialize. Experiments on two real-world problems demonstrate that EENCL can produce NN ensembles with good generalization ability.

13.
Bagging and boosting negatively correlated neural networks (total citations: 2; self-citations: 0; citations by others: 2)
In this paper, we propose two cooperative ensemble learning algorithms, i.e., NegBagg and NegBoost, for designing neural network (NN) ensembles. The proposed algorithms incrementally train different individual NNs in an ensemble using the negative correlation learning algorithm. Bagging and boosting algorithms are used in NegBagg and NegBoost, respectively, to create different training sets for different NNs in the ensemble. The idea behind using negative correlation learning in conjunction with the bagging/boosting algorithm is to facilitate interaction and cooperation among NNs during their training. Both NegBagg and NegBoost use a constructive approach to automatically determine the number of hidden neurons for NNs. NegBoost also uses the constructive approach to automatically determine the number of NNs for the ensemble. The two algorithms have been tested on a number of benchmark problems in machine learning and NNs, including Australian credit card assessment, breast cancer, diabetes, glass, heart disease, letter recognition, satellite, soybean, and waveform problems. The experimental results show that NegBagg and NegBoost require a small number of training epochs to produce compact NN ensembles with good generalization.

14.
This paper presents a new algorithm for designing neural network ensembles for classification problems with noise. The idea behind this new algorithm is to encourage different individual networks in an ensemble to learn different parts or aspects of the training data so that the whole ensemble can learn the whole training data better. Negatively correlated neural networks are trained with a novel correlation penalty term in the error function to encourage such specialization. In our algorithm, individual networks are trained simultaneously rather than independently or sequentially. This provides an opportunity for different networks to interact with each other and to specialize. Experiments on two real-world problems demonstrate that the new algorithm can produce neural network ensembles with good generalization ability. This work was presented, in part, at the Third International Symposium on Artificial Life and Robotics, Oita, Japan, January 19–21, 1998.

15.
This paper presents a new cooperative ensemble learning system (CELS) for designing neural network ensembles. The idea behind CELS is to encourage different individual networks in an ensemble to learn different parts or aspects of a training data so that the ensemble can learn the whole training data better. In CELS, the individual networks are trained simultaneously rather than independently or sequentially. This provides an opportunity for the individual networks to interact with each other and to specialize. CELS can create negatively correlated neural networks using a correlation penalty term in the error function to encourage such specialization. This paper analyzes CELS in terms of bias-variance-covariance tradeoff. CELS has also been tested on the Mackey-Glass time series prediction problem and the Australian credit card assessment problem. The experimental results show that CELS can produce neural network ensembles with good generalization ability.
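The bias-variance-covariance tradeoff that the CELS analysis refers to is usually stated for a uniformly averaged ensemble of M estimators; in its standard textbook form (not this paper's exact notation) it reads:

    \mathbb{E}\bigl[(f_{\mathrm{ens}} - d)^2\bigr]
    = \overline{\mathrm{bias}}^2
    + \frac{1}{M}\,\overline{\mathrm{var}}
    + \Bigl(1 - \frac{1}{M}\Bigr)\,\overline{\mathrm{covar}}

The covariance term is the only one that can be negative, which is what negative correlation learning exploits: driving members to be anticorrelated can reduce ensemble error beyond what averaging alone achieves.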

16.
Particle swarm optimization and projection pursuit are first used to construct the learning matrix of the neural networks; individual ensemble members are then generated by a sample-reconstruction method based on negative correlation learning, and the members are combined with particle swarm optimization and projection pursuit regression to produce the ensemble output, giving a PSO-projection-pursuit sample-reconstruction neural network ensemble model. Applied to monthly precipitation forecasting for the whole of Guangxi, the method effectively builds the networks' learning matrix from a large number of weather factors, and the ensemble forecasts are accurate, stable, and show some capacity for wider application.

17.
The standard BP neural network corrects its weights and thresholds only along the negative gradient of the prediction error, so learning converges slowly, easily falls into local minima, and generalizes poorly. To address this, an improved RPROP method whose learning rate varies with learning experience is proposed as the weight and threshold update rule for the BP network and combined with principal component analysis (PCA) to form a PCA-improved neural network algorithm. Classification experiments on four classes of music signals in Matlab show that the improved algorithm raises the stable recognition rate by 2.6% over the standard algorithm and, once the stable recognition rate reaches 90%, cuts training time by 75%, indicating that the algorithm speeds up convergence and improves generalization; a minimal sketch of the RPROP-style update follows.
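A minimal sketch of the sign-based RPROP update on which "learning-experience variable learning rate" variants build; the paper's actual modification is not reproduced, and all parameter values are conventional defaults:

    import numpy as np

    def rprop_update(w, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
                     step_min=1e-6, step_max=50.0):
        # One RPROP- update on a weight array: per-weight step sizes grow when
        # successive gradients agree in sign and shrink when they disagree.
        agree = grad * prev_grad
        step = np.where(agree > 0, np.minimum(step * eta_plus, step_max), step)
        step = np.where(agree < 0, np.maximum(step * eta_minus, step_min), step)
        grad = np.where(agree < 0, 0.0, grad)      # suppress the update right after a sign flip
        w = w - np.sign(grad) * step
        return w, grad, step

Because only the sign of the gradient is used, each weight keeps its own adaptive step size instead of a single global learning rate, which is presumably what the paper's learning-experience-based scheme adapts further.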
