Similar Literature
20 similar documents retrieved.
1.
This paper presents a new cooperative ensemble learning system (CELS) for designing neural network ensembles. The idea behind CELS is to encourage the individual networks in an ensemble to learn different parts or aspects of the training data so that the ensemble can learn the whole training data better. In CELS, the individual networks are trained simultaneously rather than independently or sequentially, which gives them the opportunity to interact with each other and to specialize. CELS creates negatively correlated neural networks by adding a correlation penalty term to the error function to encourage such specialization. The paper analyzes CELS in terms of the bias-variance-covariance tradeoff. CELS has also been tested on the Mackey-Glass time series prediction problem and the Australian credit card assessment problem. The experimental results show that CELS can produce neural network ensembles with good generalization ability.
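The correlation penalty at the heart of CELS-style negative correlation learning can be sketched numerically. Below is a minimal NumPy illustration (not the paper's implementation): a few tiny random-feature regressors are trained jointly by gradient descent on the NCL error e_i = (f_i − y)² + λ·p_i, where the penalty p_i = (f_i − f̄)·Σ_{j≠i}(f_j − f̄) = −(f_i − f̄)² pushes each member away from the ensemble mean. The data, features, and hyperparameters are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task standing in for the paper's benchmarks
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# Shared RBF features: each "network" is reduced to a small linear model
centers = np.linspace(-3, 3, 8)
Phi = np.exp(-(X - centers) ** 2)            # (200, 8) design matrix

M, lam, lr = 4, 0.5, 0.05                    # ensemble size, penalty strength, step size
W = 0.1 * rng.normal(size=(M, 8))            # output weights, one row per member

def ensemble_mse(W):
    return float(np.mean(((Phi @ W.T).mean(axis=1) - y) ** 2))

mse_before = ensemble_mse(W)
for _ in range(2000):
    F = Phi @ W.T                            # (200, M) individual outputs
    f_bar = F.mean(axis=1, keepdims=True)    # ensemble (simple-average) output
    # NCL gradient per member, treating f_bar as a constant:
    # 2*(f_i - y) - 2*lam*(f_i - f_bar); the second term decorrelates the members.
    grad = 2 * (F - y[:, None]) - 2 * lam * (F - f_bar)
    W -= lr * (grad.T @ Phi) / len(X)
mse_after = ensemble_mse(W)
```

Setting `lam = 0` recovers independent training of the members; intermediate values trade individual accuracy for diversity.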

2.
丁一 《计算机仿真》2007,24(6):142-145
Artificial neural network ensembles are an active research topic in neural computation and have found mature applications in many fields. A neural network ensemble trains a finite set of neural networks on the same problem; the ensemble's output for a given input is determined jointly by the outputs of its constituent networks on that input. Negative correlation learning is a training method for neural network ensembles that encourages the different individual networks to learn different parts of the training set, so that the ensemble as a whole learns the full training data better. The improved negative correlation learning method presented here applies a BP algorithm with a momentum term to the error function, combining the advantages of the original negative correlation learning method and of BP with momentum, and yielding a batch learning algorithm with strong generalization ability and fast learning speed.
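The "BP with momentum" ingredient is just a velocity term added to each weight update. A minimal sketch on a one-dimensional quadratic stand-in for a network's batch error (function, names, and constants are illustrative, not the paper's code):

```python
import numpy as np

def momentum_step(w, v, grad, lr=0.2, mu=0.8):
    """Gradient descent with momentum: the velocity v accumulates past
    gradients, smoothing the trajectory and speeding up batch learning."""
    v = mu * v - lr * grad
    return w + v, v

# Minimise f(w) = (w - 3)^2 as a stand-in for a network's batch error
w, v = 0.0, 0.0
for _ in range(100):
    w, v = momentum_step(w, v, grad=2.0 * (w - 3.0))
```

In the improved algorithm described above, the same update rule would be applied to each network's weights with the NCL error gradient in place of the quadratic's gradient.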

3.
Evolutionary ensembles with negative correlation learning (cited 3 times: 0 self-citations, 3 by others)
Based on negative correlation learning and evolutionary learning, this paper presents evolutionary ensembles with negative correlation learning (EENCL) to address two issues: the automatic determination of the number of individual neural networks (NNs) in an ensemble, and the exploitation of the interaction between individual NN design and combination. The idea of EENCL is to encourage the different individual NNs in the ensemble to learn different parts or aspects of the training data so that the ensemble can learn the entire training data better. Cooperation and specialization among the individual NNs are considered during their design, giving the NNs the opportunity to interact with each other and to specialize. Experiments on two real-world problems demonstrate that EENCL can produce NN ensembles with good generalization ability.

4.
A constructive algorithm for training cooperative neural network ensembles (cited 13 times: 0 self-citations, 13 by others)
This paper presents a constructive algorithm for training cooperative neural-network ensembles (CNNEs). CNNE combines ensemble architecture design with cooperative training of the individual neural networks (NNs) in the ensemble. Unlike most previous studies on training ensembles, CNNE puts emphasis on both accuracy and diversity among the individual NNs. To maintain accuracy, the number of hidden nodes in each individual NN is also determined by a constructive approach. Incremental training based on negative correlation is used in CNNE to train individual NNs for different numbers of training epochs. The use of negative correlation learning and of different numbers of training epochs reflects CNNE's emphasis on diversity among the individual NNs. CNNE has been tested extensively on a number of benchmark problems in machine learning and neural networks, including the Australian credit card assessment, breast cancer, diabetes, glass, heart disease, letter recognition, and soybean problems, as well as Mackey-Glass time series prediction. The experimental results show that CNNE can produce NN ensembles with good generalization ability.
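CNNE's constructive determination of hidden nodes can be illustrated in miniature: keep enlarging the hidden layer while held-out error improves. The sketch below uses a least-squares RBF model as a stand-in for an NN; the data, stopping rule, and size cap are illustrative, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=300)
y = np.sin(X) + 0.1 * rng.normal(size=300)
X_tr, y_tr = X[:200], y[:200]
X_va, y_va = X[200:], y[200:]

def val_error(n_hidden):
    """Least-squares fit of an RBF model with n_hidden centers; validation MSE."""
    centers = np.linspace(-3, 3, n_hidden)
    Phi_tr = np.exp(-(X_tr[:, None] - centers) ** 2)
    Phi_va = np.exp(-(X_va[:, None] - centers) ** 2)
    w, *_ = np.linalg.lstsq(Phi_tr, y_tr, rcond=None)
    return float(np.mean((Phi_va @ w - y_va) ** 2))

# Constructive loop: keep adding hidden nodes while validation error improves
n_hidden, best = 1, val_error(1)
while n_hidden < 20:
    err = val_error(n_hidden + 1)
    if err >= best:
        break
    best, n_hidden = err, n_hidden + 1
```

The same grow-until-no-improvement loop, applied at the ensemble level, governs how many member networks CNNE adds.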

5.
Application of neural network ensembles to library weeding classification (cited 4 times: 0 self-citations, 4 by others)
徐敏 《计算机工程》2006,32(20):210-212
Building on an analysis of library weeding (the removal of outdated books), this paper argues that intelligent methods are needed for the weeding problem and proposes solving it with neural network ensemble techniques. A method for dynamically constructing a neural network ensemble is given: when training the ensemble's member networks, it not only adjusts each network's connection weights but also dynamically constructs the structure of each member network. This improves the accuracy of the individual networks while increasing the diversity among members, thereby reducing the ensemble's generalization error. Experiments show that the method can be applied effectively to library weeding classification.

6.
This paper analyzes the relationships among the generalization error of a neural network ensemble, the generalization errors of its individual networks, and the diversity among those networks, and proposes an active learning method for the individual networks. The individual networks are trained interactively and simultaneously, satisfying both the accuracy requirement and the diversity requirement for individual networks. In addition, a selective ensemble method is given in which a bias term is added to each individual network, increasing the number of candidate networks and reducing the ensemble's generalization error. Theoretical analysis and experimental results show that this training method and selective ensemble method can build effective neural network ensemble systems.

7.
Ke Minlong, Fernanda L., Xin. 《Neurocomputing》 2009, 72(13-15): 2796
Negative correlation learning (NCL) is a successful approach to constructing neural network ensembles. In batch learning mode, NCL outperforms many other ensemble learning approaches. Recently, NCL has also been shown to be a potentially powerful approach to incremental learning, although its advantages there have not yet been fully exploited. In this paper, we propose a selective NCL (SNCL) algorithm for incremental learning. Concretely, every time a new training data set is presented, the previously trained neural network ensemble is cloned and the clone is trained on the new data set. The new ensemble is then combined with the previous ensemble, and a selection process prunes the whole ensemble to a fixed size. This paper is an extended version of our preliminary paper on SNCL. Compared to that work, it presents a deeper investigation of SNCL, considering different objective functions for the selection process and comparing SNCL with other NCL-based incremental learning algorithms on two more real-world bioinformatics data sets. Experimental results demonstrate the advantage of SNCL. Further, comparisons between SNCL and other existing incremental learning algorithms, such as Learn++ and ARTMAP, are also presented.

8.
Based on the idea of immune clustering, a neural network ensemble method is proposed. Roulette-wheel selection is used to repeatedly draw samples from the immune clusters to form the training set of each individual network in the ensemble, and the ensemble's output is determined by relative majority voting. The immune-clustering-based ensemble is applied to tongue diagnosis in traditional Chinese medicine and simulated on the diagnosis of liver-disease syndromes. The experimental results show that the immune-clustering-based neural network ensemble improves generalization ability more effectively than a Bagging-based ensemble, demonstrating that the proposed algorithm is feasible and effective.
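The two mechanical pieces of the method above, roulette-wheel sampling from clusters and relative majority voting, can be sketched as follows; the cluster contents and sizes are invented for illustration and the clusters would come from immune clustering in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def roulette_pick(weights, rng):
    """Fitness-proportionate (roulette-wheel) choice of one index."""
    p = np.asarray(weights, dtype=float)
    return int(rng.choice(len(p), p=p / p.sum()))

def majority_vote(predictions):
    """Relative majority voting over member class predictions."""
    vals, counts = np.unique(np.asarray(predictions), return_counts=True)
    return vals[np.argmax(counts)]

# Three clusters of training-sample indices; larger clusters are drawn more often
clusters = [list(range(0, 50)), list(range(50, 80)), list(range(80, 100))]
sizes = [len(c) for c in clusters]

# Build one member network's training set: draw a cluster, then a sample from it
train_idx = [int(rng.choice(clusters[roulette_pick(sizes, rng)]))
             for _ in range(100)]
```

Repeating the sampling loop once per member yields differently composed training sets, which is the source of member diversity here.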

9.
A dynamically weighted neural network ensemble method based on individual selection (cited once: 0 self-citations, 1 by others)
Neural network ensemble techniques can effectively improve the prediction accuracy and generalization ability of neural networks and have become an active research topic in machine learning and neural computation. For regression problems, this paper proposes an ensemble construction method that combines genetic-algorithm-based selection of individual networks with dynamic determination of the combination weights. After the individual networks are trained, a genetic algorithm selects a subset of them; a generalized regression network then dynamically determines each selected network's combination weight in a given region of the input space, based on its prediction error on the training samples there. Experimental results show that, compared with methods that use only individual selection or only dynamic weighting, this ensemble method generally achieves better prediction accuracy with comparable stability.

10.
Neural network ensemble techniques can effectively improve the prediction accuracy and generalization ability of neural networks and have become an active research topic in machine learning and neural computation. For regression problems, this paper proposes an ensemble construction method that dynamically determines the combination weights: after the individual networks are trained, a generalized regression network dynamically determines each network's weight in a given region of the input space, based on its prediction error on the training samples there. Experimental results show that, compared with the traditional simple-average and weighted-average methods, this ensemble method achieves better prediction accuracy.
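The idea of weighting each member by its local prediction error can be sketched with a kernel-smoothed (GRNN-style) error estimate. The two hand-crafted "members" below are deliberately accurate on different halves of the input space; everything here, including the kernel width, is illustrative rather than the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(3)
X_tr = rng.uniform(-3, 3, size=200)
y_tr = np.sin(X_tr) + 0.1 * rng.normal(size=200)

# Two crude "member networks": each is accurate on only half the input space
members = [
    lambda x: np.sin(x) * (x < 0),    # good for x < 0, outputs 0 elsewhere
    lambda x: np.sin(x) * (x >= 0),   # good for x >= 0
]

# Each member's squared error on every training sample
errs = np.stack([(m(X_tr) - y_tr) ** 2 for m in members])   # (2, 200)

def dynamic_predict(x, sigma=0.5):
    """Kernel-smooth each member's training error around the query point
    (a GRNN-style local estimate), then weight members by inverse local error."""
    k = np.exp(-((X_tr - x) ** 2) / (2 * sigma ** 2))
    local_err = (errs * k).sum(axis=1) / k.sum()
    w = 1.0 / (local_err + 1e-6)
    w /= w.sum()
    preds = np.array([float(m(np.array([x]))[0]) for m in members])
    return float(w @ preds)
```

At a query point each member's weight depends on where the query falls, which is exactly what distinguishes this scheme from a fixed weighted average.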

12.
In facility-location modeling for logistics centers, a single artificial neural network is hard to parameterize and prone to overfitting. This paper proposes a two-level neural network ensemble model: bootstrap resampling produces different training sets that are used to train different individual networks; particle swarm optimization combines the individual outputs into an ensemble; and on this basis the ensembles themselves are treated as individuals and combined once more. Experimental results show that the model is easy to design and improves generalization ability.
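The bootstrap-resampling step can be sketched as follows. For brevity the sketch combines member outputs by simple averaging where the paper uses particle swarm optimization, and the linear "networks" and toy data are stand-ins:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, size=(150, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.05 * rng.normal(size=150)

def fit_linear(Xb, yb):
    """Stand-in for one individual network: least-squares linear model."""
    A = np.c_[Xb, np.ones(len(Xb))]
    w, *_ = np.linalg.lstsq(A, yb, rcond=None)
    return w

# Bootstrap: each member trains on a same-size resample drawn with replacement
members = []
for _ in range(10):
    idx = rng.integers(0, len(X), size=len(X))
    members.append(fit_linear(X[idx], y[idx]))

def ensemble_predict(Xq):
    A = np.c_[Xq, np.ones(len(Xq))]
    # Simple averaging here; the paper optimizes these weights with PSO
    return np.mean([A @ w for w in members], axis=0)

mse = float(np.mean((ensemble_predict(X) - y) ** 2))
```

Each resample leaves out roughly a third of the original samples, so the members see genuinely different data.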

13.
A speech recognition method based on k-means clustering and an ensemble of BP neural networks is proposed. Built on the ensemble model, it uses the k-means clustering algorithm to select a diverse subset of the individual networks for ensemble learning. This both overcomes the tendency of a single BP network to converge to local optima and be unstable, and addresses the long training times and weak member diversity of traditional ensemble methods. Experiments on speaker-independent isolated-word speech recognition confirm the method's effectiveness.
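Selecting diverse members by clustering their behaviour can be sketched with a tiny k-means over the members' validation-output vectors. The member outputs below are synthetic (three artificial behaviour groups); the paper clusters trained BP networks:

```python
import numpy as np

rng = np.random.default_rng(5)

# Validation outputs of 9 trained members (one row each): three behaviour groups
outputs = np.vstack([base + 0.05 * rng.normal(size=(3, 20))
                     for base in (np.zeros(20), np.ones(20), 2.0 * np.ones(20))])

def kmeans_labels(X, k, iters=20):
    """Tiny k-means: returns a cluster label for each row of X."""
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):          # leave empty clusters in place
                centers[j] = X[labels == j].mean(axis=0)
    return labels

labels = kmeans_labels(outputs, k=3)
# Keep one representative per cluster -> a smaller, more diverse ensemble
selected = [int(np.where(labels == j)[0][0]) for j in np.unique(labels)]
```

Members that behave alike land in the same cluster, so keeping one per cluster discards the near-duplicates that contribute little diversity.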

14.
In traditional neural network ensembles the member networks are highly correlated, which harms the ensemble's generalization ability. To address this, a negative correlation learning algorithm is used to train the ensemble, increasing the diversity among member networks and thereby improving generalization. The negative-correlation-based ensemble is applied to tongue diagnosis in traditional Chinese medicine and simulated on the diagnosis of liver-disease syndromes. The experimental results show that the negative-correlation-based ensemble improves generalization more effectively than either a single member network or a traditional ensemble, demonstrating that the approach is feasible and effective.

15.
Abstract: Neural network ensembles (sometimes referred to as committees or classifier ensembles) are an effective technique for improving the generalization of a neural network system. Combining a set of neural network classifiers whose error distributions are diverse can generate better results than any single classifier. In this paper, methods for creating ensembles are reviewed, including the following approaches: selecting diverse training data from the original source data set, constructing different neural network models, selecting ensemble nets from ensemble candidates, and combining ensemble members' results. In addition, new results on ensemble combination methods are reported.

16.
A selective neural network ensemble method based on clustering (cited 11 times: 0 self-citations, 11 by others)
Neural network ensembles are a popular learning method that generates the final prediction by combining the outputs of individual networks. For an ensemble to be effective, the individual networks must not only be highly accurate but also make uncorrelated errors across the input space. However, most existing ensemble methods simply combine all trained networks into the ensemble, even though the trained networks may in fact be correlated. To further increase the diversity among networks, a clustering-based selective ensemble method, CLU_ENN, is proposed. After the individual networks are obtained, they are not combined directly; instead, a clustering algorithm groups the network models so that a subset of networks with large mutual differences can be extracted, and the ensemble is formed from that subset. Experiments show that CLU_ENN achieves better results than the traditional Bagging ensemble method.

17.
徐敏 《计算机工程》2012,38(6):198-200
An intrusion detection method based on a neural network ensemble trained with artificial examples is proposed. Different member networks are trained on different training sets, which increases the diversity among members. While keeping a sufficient number of member networks, those with larger diversity are selected to form the ensemble, improving the overall performance of the system. Experimental results show that, compared with popular ensemble algorithms, the method maintains a high intrusion detection rate while keeping the false-positive rate low, and also achieves a high detection rate on unknown intrusions.

18.
《Applied Soft Computing》2007,7(1):353-363
For a supervised learning method, the quality of the training data or the training supervisor is very important for generating reliable neural networks. However, for real-world problems it is not always easy to obtain high-quality training data sets. In this research, we propose a learning method for a neural network ensemble model that can be trained with an imperfect training data set, i.e. a data set containing erroneous training samples. With a competitive training mechanism, the ensemble is able to exclude erroneous samples from the training process, thus generating a reliable neural network. Through experiments, we show that the proposed model can tolerate erroneous training samples while still generating a reliable neural network. This tolerance lessens the costly task of analyzing and cleaning the training data, thereby increasing the usability of neural networks for real-world problems.

19.
Extreme learning machine (ELM) is a single-hidden-layer feed-forward neural network with an efficient learning algorithm. Conventionally, an ELM is trained on all the data using the least-squares solution, and thus it may suffer from overfitting. In this paper, we present a data and feature mixed ensemble based extreme learning machine (DFEN-ELM). DFEN-ELM combines a data ensemble with a feature-subspace ensemble to tackle the overfitting problem, and it exploits the fast training speed of ELM when building ensembles of classifiers. Both one-class and two-class ensemble-based ELMs have been studied. Experiments were conducted on computed tomography (CT) data for liver tumor detection and segmentation, and on magnetic resonance imaging (MRI) data for rodent brain segmentation. To improve the ensembles with new training data, sequential kernel learning is further adopted in the CT experiments for speedy retraining and for iteratively enhancing segmentation performance. Results on different testing cases and various testing datasets demonstrate that DFEN-ELM is a robust and efficient algorithm for medical object detection and segmentation.
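The core of an ELM is a random, fixed hidden layer followed by a closed-form least-squares solve for the output weights. Below is a minimal single-ELM sketch on a toy classification task; the ensemble and feature-subspace machinery of DFEN-ELM is omitted, and the data and sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy two-class problem: the label is the sign of x1 + x2
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# ELM: the hidden layer is random and never trained
n_hidden = 50
W_in = rng.normal(size=(2, n_hidden))
b = rng.normal(size=n_hidden)

H = np.tanh(X @ W_in + b)                       # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)    # only the output weights are solved for

acc = float(np.mean((H @ beta > 0.5) == (y > 0.5)))
```

Because training is a single linear solve, retraining many ELMs for an ensemble is cheap, which is the speed advantage the paper exploits.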

20.
This paper presents a cooperative coevolutionary approach for designing neural network ensembles. Cooperative coevolution is a recent paradigm in evolutionary computation that allows the effective modeling of cooperative environments. Although in theory a single neural network with enough neurons in the hidden layer would suffice to solve any problem, in practice many real-world problems are too hard for the appropriate network to be constructed. In such problems, neural network ensembles are a successful alternative. Nevertheless, the design of neural network ensembles is a complex task. This paper proposes a general framework for designing neural network ensembles by means of cooperative coevolution. The model has two main objectives: first, improving the combination of the trained individual networks; second, evolving the networks cooperatively, encouraging collaboration among them instead of training each network separately. To favor cooperation, each network is evaluated throughout the evolutionary process using a multiobjective method; the objectives for each network cover not only its performance on the given problem but also its cooperation with the rest of the networks. In addition, a population of ensembles is evolved, improving the combination of networks and obtaining subsets of networks that form ensembles performing better than the combination of all the evolved networks. The model is applied to ten real-world classification problems of very different natures from the UCI machine learning repository and the Proben1 benchmark set. On all of them, the model outperforms standard ensembles in terms of generalization error, and the obtained ensembles are also smaller.
