Query returned 20 similar documents; search took 585 ms.
1.
This paper presents a new algorithm for designing neural network ensembles for classification problems with noise. The idea
behind this new algorithm is to encourage different individual networks in an ensemble to learn different parts or aspects
of the training data so that the whole ensemble can learn the whole training data better. Negatively correlated neural networks
are trained with a novel correlation penalty term in the error function to encourage such specialization. In our algorithm,
individual networks are trained simultaneously rather than independently or sequentially. This provides an opportunity for
different networks to interact with each other and to specialize. Experiments on two real-world problems demonstrate that
the new algorithm can produce neural network ensembles with good generalization ability.
This work was presented, in part, at the Third International Symposium on Artificial Life and Robotics, Oita, Japan, January 19–21, 1998.
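The correlation penalty idea above can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: the ensemble members are linear models rather than neural networks, and the toy data, penalty strength `lam`, and learning rate are assumed values. Each member's gradient combines its own error with a term that pushes its output away from the ensemble mean, which is the negative-correlation specialization effect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data (illustrative, not the paper's benchmarks)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.normal(size=200)

M, lam, lr = 4, 0.5, 0.05                  # ensemble size, penalty strength, step size
W = rng.normal(scale=0.1, size=(M, 2))     # each member is a linear model w*x + b

def member_outputs(W, X):
    return X * W[:, 0] + W[:, 1]           # shape (n, M): F_i(x) for each member i

mse0 = np.mean((member_outputs(W, X).mean(axis=1) - y) ** 2)  # before training

for _ in range(500):
    F = member_outputs(W, X)               # (200, M)
    Fbar = F.mean(axis=1, keepdims=True)   # ensemble output
    # NCL gradient per member: (F_i - y) - lam * (F_i - Fbar)
    g = (F - y[:, None]) - lam * (F - Fbar)
    W[:, 0] -= lr * (g * X).mean(axis=0)
    W[:, 1] -= lr * g.mean(axis=0)

mse1 = np.mean((member_outputs(W, X).mean(axis=1) - y) ** 2)  # after training
```

Note that the penalty gradients cancel when summed over the ensemble, so the ensemble mean still converges toward the least-squares fit while the individual members are free to differ from one another.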
2.
Evolutionary ensembles with negative correlation learning (Cited 3 times: 0 self-citations, 3 by others)
Based on negative correlation learning and evolutionary learning, this paper presents evolutionary ensembles with negative correlation learning (EENCL) to address the issues of automatic determination of the number of individual neural networks (NNs) in an ensemble and the exploitation of the interaction between individual NN design and combination. The idea of EENCL is to encourage different individual NNs in the ensemble to learn different parts or aspects of the training data so that the ensemble can learn the entire training data better. The cooperation and specialization among different individual NNs are considered during individual NN design. This provides an opportunity for different NNs to interact with each other and to specialize. Experiments on two real-world problems demonstrate that EENCL can produce NN ensembles with good generalization ability.
3.
Artificial neural network ensembles are an active research topic in neural computation and have found mature applications in many fields. A neural network ensemble trains a finite number of neural networks on the same problem; the ensemble's output for a given input is determined jointly by the outputs of its constituent networks on that input. Negative correlation learning is an ensemble training method that encourages the different individual networks in the ensemble to learn different parts of the training set so that the whole ensemble learns the entire training data better. The improved negative correlation learning method applies a BP algorithm with a momentum term to the error function, combining the advantages of the original negative correlation learning method and of BP with momentum, making the improved algorithm a batch learning algorithm with strong generalization ability and fast learning speed.
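The momentum term mentioned above follows the standard heavy-ball update. This is a generic sketch, not the paper's algorithm: the learning rate `eta` and momentum coefficient `alpha` are assumed typical values, applied to a toy quadratic loss rather than an ensemble error function.

```python
import numpy as np

eta, alpha = 0.1, 0.9          # learning rate and momentum coefficient (assumed values)
w = np.array([2.0])            # single parameter, toy quadratic loss L(w) = 0.5 * w**2
dw = np.zeros_like(w)          # previous weight update, reused by the momentum term

def grad(w):
    return w                   # dL/dw for the toy quadratic loss

for _ in range(100):
    dw = -eta * grad(w) + alpha * dw   # delta_w(t) = -eta * g + alpha * delta_w(t-1)
    w = w + dw

final_w = float(abs(w[0]))
```

The momentum term accumulates past gradients, which smooths oscillations and typically speeds up batch learning compared with plain gradient descent at the same learning rate.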
4.
Based on an analysis of library book weeding, this paper argues that intelligent methods are needed to address the weeding problem. It proposes applying neural network ensemble techniques to this task and presents a method for dynamically constructing a neural network ensemble. When training the member networks of the ensemble, the method not only adjusts the connection weights but also dynamically constructs the structure of each member network, which improves the accuracy of individual networks while increasing the diversity among members and reducing the ensemble's generalization error. Experiments show that the method can be applied effectively to book-weeding classification.
5.
6.
Presents a constructive algorithm for training cooperative neural-network ensembles (CNNEs). CNNE combines ensemble architecture design with cooperative training for individual neural networks (NNs) in ensembles. Unlike most previous studies on training ensembles, CNNE puts emphasis on both accuracy and diversity among individual NNs in an ensemble. In order to maintain accuracy among individual NNs, the number of hidden nodes in individual NNs is also determined by a constructive approach. Incremental training based on negative correlation is used in CNNE to train individual NNs for different numbers of training epochs. The use of negative correlation learning and different training epochs for individual NNs reflects CNNE's emphasis on diversity among individual NNs in an ensemble. CNNE has been tested extensively on a number of benchmark problems in machine learning and neural networks, including the Australian credit card assessment, breast cancer, diabetes, glass, heart disease, letter recognition, soybean, and Mackey-Glass time series prediction problems. The experimental results show that CNNE can produce NN ensembles with good generalization ability.
7.
8.
Neural network ensembles: combining multiple models for enhanced performance using a multistage approach (Cited 1 time: 0 self-citations, 1 by others)
Abstract: Neural network ensembles (sometimes referred to as committees or classifier ensembles) are effective techniques for improving the generalization of a neural network system. Combining a set of neural network classifiers whose error distributions are diverse can generate better results than any single classifier. In this paper, some methods for creating ensembles are reviewed, including the following approaches: selecting diverse training data from the original source data set, constructing different neural network models, selecting ensemble nets from ensemble candidates, and combining ensemble members' results. In addition, new results on ensemble combination methods are reported.
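Two of the combination schemes surveyed above, probability averaging and majority voting, can be sketched as follows. The member outputs are made-up numbers for illustration, not results from the paper.

```python
import numpy as np

# Class-probability outputs of three hypothetical member classifiers
# on four samples, two classes (illustrative values only)
probs = np.array([
    [[0.7, 0.3], [0.4, 0.6], [0.2, 0.8], [0.9, 0.1]],   # member 1
    [[0.6, 0.4], [0.3, 0.7], [0.6, 0.4], [0.8, 0.2]],   # member 2
    [[0.8, 0.2], [0.6, 0.4], [0.3, 0.7], [0.7, 0.3]],   # member 3
])

# Averaging: mean of the members' probabilities, then argmax per sample
avg_pred = probs.mean(axis=0).argmax(axis=1)

# Majority vote: each member votes for its own argmax class
votes = probs.argmax(axis=2)                    # shape (3 members, 4 samples)
vote_pred = np.apply_along_axis(
    lambda v: np.bincount(v, minlength=2).argmax(), 0, votes
)
```

The two rules agree here, but they can disagree: averaging uses each member's confidence, while voting discards it, so a single very confident member can sway the average but never the vote.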
9.
Neural network ensemble techniques can effectively improve the prediction accuracy and generalization ability of neural networks and have become an active research topic in machine learning and neural computation. For regression problems, this paper proposes an ensemble construction method that dynamically determines the weights used to combine member outputs: after the individual networks are trained, a generalized regression network is used to determine each member's weight in a specific region of the input space, based on each member's prediction errors on the training samples. Experimental results show that, compared with conventional simple averaging and weighted averaging, this ensemble method achieves better prediction accuracy.
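The dynamic weighting described above can be sketched with a GRNN-style kernel estimate of each member's local error. Everything here is a placeholder assumption: the training inputs, the per-member error table, and the smoothing width `sigma` are invented for illustration, and the inverse-error weighting rule is one plausible way to turn local error estimates into combination weights.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: absolute errors of 3 member networks on 50 training inputs
X_tr = rng.uniform(0.0, 1.0, size=(50, 1))
E = np.abs(rng.normal(size=(50, 3)))       # stand-in per-member training errors
sigma = 0.1                                # GRNN smoothing width (assumed)

def dynamic_weights(x):
    # GRNN-style kernel regression of each member's error at query input x
    k = np.exp(-((X_tr - x) ** 2).sum(axis=1) / (2.0 * sigma ** 2))
    local_err = (k[:, None] * E).sum(axis=0) / k.sum()
    w = 1.0 / (local_err + 1e-12)          # smaller predicted error -> larger weight
    return w / w.sum()                     # normalize so the weights sum to 1

w = dynamic_weights(np.array([0.5]))
```

The key property is that the weights depend on the query point: a member that is accurate only in part of the input space still receives high weight there and low weight elsewhere.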
10.
Research on a regression neural network ensemble method with dynamic weights based on generalized regression networks (Cited 2 times: 0 self-citations, 2 by others)
11.
12.
Research on a dynamic-weight neural network ensemble method based on individual selection (Cited 1 time: 0 self-citations, 1 by others)
Neural network ensemble techniques can effectively improve the prediction accuracy and generalization ability of neural networks and have become an active research topic in machine learning and neural computation. For regression problems, this paper proposes an ensemble construction method that combines individual selection by a genetic algorithm with dynamically determined combination weights. After the individual networks are trained, a genetic algorithm selects a subset of them; a generalized regression network then dynamically determines each selected network's combination weight in a specific region of the input space, based on its prediction errors on the training samples. Experimental results show that, compared with methods that use only individual selection or only dynamic weighting, this ensemble method generally achieves better prediction accuracy with comparable stability.
13.
In conventional neural network ensembles, the member networks are highly correlated with one another, which limits the ensemble's generalization ability. This paper therefore proposes training the ensemble with a negative correlation learning algorithm to increase the diversity among the member networks and thereby improve generalization. The negative-correlation-trained ensemble is applied to tongue diagnosis in traditional Chinese medicine, with simulations on the diagnosis of liver disease patterns. Experimental results show that the ensemble based on negative correlation learning improves generalization more effectively than either a single member network or a conventional ensemble, demonstrating that research on negative-correlation-based ensemble algorithms is feasible and effective.
14.
This paper analyzes the relationships among the generalization error of a neural network ensemble, the generalization errors of its individual networks, and the diversity among those networks, and proposes an active learning method for the individual networks. The individual networks are trained simultaneously and interactively, which satisfies both the accuracy requirement and the diversity requirement for individual networks. In addition, a selective ensemble method is presented: by adding a bias term to each individual network, the number of candidate networks is increased and the ensemble's generalization error is reduced. Theoretical analysis and experimental results show that this individual-network training method and selective ensemble method can be used to build effective neural network ensemble systems.
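The selective-ensemble idea above, combining a chosen subset of members rather than all of them, can be illustrated with a brute-force subset search standing in for the selection procedure. This is not the paper's bias-offset mechanism; the member predictions and targets are random placeholders, and exhaustive search is used only because the member count is tiny.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)

# Hypothetical validation-set predictions of 5 members, plus targets
P = rng.normal(size=(5, 40))               # P[i] = member i's predictions
t = rng.normal(size=40)                    # validation targets

best, best_mse = None, np.inf
# Exhaustive subset search as a small stand-in for a learned selection rule
for r in range(1, 6):
    for idx in combinations(range(5), r):
        mse = ((P[list(idx)].mean(axis=0) - t) ** 2).mean()
        if mse < best_mse:
            best, best_mse = idx, mse

full_mse = ((P.mean(axis=0) - t) ** 2).mean()
```

Since the full ensemble is itself one of the candidate subsets, the selected subset can never do worse on the selection criterion, which is the basic argument for selective ensembles.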
15.
Negative Correlation Learning (NCL) has been successfully applied to construct neural network ensembles. It encourages the neural networks that compose the ensemble to be different from each other and, at the same time, accurate. This diversity is a desirable feature for incremental learning, since some of the networks may be able to adapt faster and better to new data than others; NCL is therefore a potentially powerful approach to incremental learning. With this in mind, this paper presents an analysis of NCL aimed at determining its strengths and weaknesses for incremental learning. The analysis shows that NCL can be used to overcome catastrophic forgetting, an important problem in incremental learning. However, when catastrophic forgetting is very low, no advantage is gained from using more than one network of the ensemble to learn new data, and the test error is high. When all the networks are used to learn new data, some of them can indeed adapt better than others, but catastrophic forgetting increases. A trade-off must therefore be found between overcoming catastrophic forgetting and using the entire ensemble to learn new data. The NCL results are comparable with those of other approaches specifically designed for incremental learning. The study presented in this work thus reveals encouraging results for negative correlation in incremental learning, showing that NCL is a promising approach.
16.
Cooperative coevolution of artificial neural network ensembles for pattern classification (Cited 4 times: 0 self-citations, 4 by others)
Garcia-Pedrajas N., Hervas-Martinez C., Ortiz-Boyer D. IEEE Transactions on Evolutionary Computation, 2005, 9(3): 271–302
This paper presents a cooperative coevolutionary approach for designing neural network ensembles. Cooperative coevolution is a recent paradigm in evolutionary computation that allows the effective modeling of cooperative environments. Although, in theory, a single neural network with a sufficient number of neurons in the hidden layer would suffice to solve any problem, in practice it is often too hard to construct an appropriate network for many real-world problems. In such problems, neural network ensembles are a successful alternative. Nevertheless, the design of neural network ensembles is a complex task. In this paper, we propose a general framework for designing neural network ensembles by means of cooperative coevolution. The proposed model has two main objectives: first, the improvement of the combination of the trained individual networks; second, the cooperative evolution of such networks, encouraging collaboration among them instead of a separate training of each network. In order to favor the cooperation of the networks, each network is evaluated throughout the evolutionary process using a multiobjective method. For each network, different objectives are defined, considering not only its performance on the given problem but also its cooperation with the rest of the networks. In addition, a population of ensembles is evolved, improving the combination of networks and obtaining subsets of networks that form ensembles performing better than the combination of all the evolved networks. The proposed model is applied to ten real-world classification problems of a very different nature from the UCI machine learning repository and the proben1 benchmark set. In all of them, the performance of the model is better than that of standard ensembles in terms of generalization error. Moreover, the size of the obtained ensembles is also smaller.
17.
This paper proposes a neural network ensemble intrusion detection method based on training with artificial examples. Different member networks are trained on different training data sets, which increases the diversity among the member networks. While keeping enough member networks, those with greater diversity are selected to form the ensemble, improving the overall performance of the system. Experimental results show that, compared with current popular ensemble algorithms, the method maintains a low false-positive rate while keeping a high intrusion detection rate, and it also achieves a high detection rate for unknown intrusions.
18.
Monotonicity and concavity play important roles in human cognition, reasoning, and decision making. This paper shows that neural networks can learn monotonic-concave interval concepts based on real-world data. Traditionally, the training of neural networks has been based only on raw data; in cases where the training samples carry statistical fluctuations, the results of training have often suffered. This paper suggests that global knowledge about the monotonicity and concavity of a problem domain can be incorporated into neural network training. It proposes a learning scheme for back-propagation layered neural networks to learn monotonic-concave interval concepts and provides an example to show its application.
19.
20.
A neural network ensemble trains multiple neural networks and appropriately combines their conclusions, which can significantly improve the generalization ability of a learning system. However, designing a good neural network ensemble requires striking a balance between individual accuracy and mutual diversity. This paper proposes an improved method for constructing neural network ensembles: a neural network ensemble algorithm based on noise propagation (NSENN).