Similar Documents
20 similar documents retrieved.
1.
Application of a Neural Network Ensemble to Book Weeding Classification
徐敏 《计算机工程》2006,32(20):210-212
Based on an analysis of library book weeding (deselection) work, this paper points out the need for intelligent methods to solve the book weeding problem and proposes using neural network ensemble techniques for this task. A method for dynamically constructing the ensemble is given: when training the member networks, it not only adjusts their connection weights but also dynamically builds the structure of each member network, improving the accuracy of the individual networks while increasing the diversity among them and reducing the generalization error of the ensemble. Experiments show that the method can be applied effectively to book weeding classification.
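One reading of the construction procedure above, sketched in Python (an illustration of the general idea, not the paper's algorithm): each member network grows its hidden layer from a different starting size until validation accuracy stops improving, so members end up reasonably accurate yet structurally diverse. MLPClassifier stands in for the member networks; all sizes and seeds are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def grow_member(X_tr, y_tr, X_val, y_val, start_hidden=2, max_hidden=64, seed=0):
    """Grow one member's hidden layer until validation accuracy stops improving."""
    best_net, best_acc, hidden = None, -1.0, start_hidden
    while hidden <= max_hidden:
        net = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=1000,
                            random_state=seed).fit(X_tr, y_tr)
        acc = net.score(X_val, y_val)
        if acc <= best_acc:            # stop once adding units no longer helps
            break
        best_net, best_acc, hidden = net, acc, hidden * 2
    return best_net

def build_ensemble(X_tr, y_tr, X_val, y_val, n_members=5):
    # Different starting sizes and seeds yield structurally diverse members.
    return [grow_member(X_tr, y_tr, X_val, y_val, start_hidden=2 + i, seed=i)
            for i in range(n_members)]
```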

2.
This paper analyzes the relationships among the generalization error of a neural network ensemble, the generalization errors of the individual networks, and the diversity among the individual networks, and proposes an active learning method for the individual networks. The individual networks are trained interactively and simultaneously, satisfying both their accuracy requirements and their diversity requirements. In addition, a selective ensemble method is given in which a bias term is added to each individual network, increasing the number of candidate networks and reducing the generalization error of the ensemble. Theoretical analysis and experimental results show that this training method and selective ensemble method can be used to build effective neural network ensemble systems.

3.
To reduce the computational complexity of ensemble feature selection, a neural network ensemble classification method based on rough set reduction is proposed. The method first obtains a stable set of attribute reducts with strong generalization ability through a dynamic reduction technique that combines genetic-algorithm-based reduct computation with resampling; BP networks are then designed on the different reducts as base classifiers and, following the idea of selective ensembles, a search strategy is used to find the ensemble with the best generalization performance; finally, ensemble classification is performed by majority voting. The method was validated on the classification of 7-band Landsat remote sensing imagery of a region. Because rough set reduction filters out a large number of feature subsets with poor classification performance, the method has lower time cost and computational complexity than traditional ensemble feature selection methods while achieving satisfactory classification performance.
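A minimal sketch of the final stage described above, under illustrative assumptions: one BP-style network (here an MLPClassifier) is trained per attribute reduct, given as lists of column indices, and the trained members are combined by majority voting; the rough set reduction and selective search steps are not shown.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier  # BP-style feedforward network

def train_reduct_ensemble(X, y, reducts, hidden=16, seed=0):
    """Train one MLP per attribute reduct (`reducts` is a list of column-index lists)."""
    members = []
    for i, cols in enumerate(reducts):
        net = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=1000,
                            random_state=seed + i)
        net.fit(X[:, cols], y)
        members.append((cols, net))
    return members

def majority_vote(members, X):
    """Combine member predictions by simple majority voting.
    Assumes integer class labels (0, 1, 2, ...)."""
    votes = np.stack([net.predict(X[:, cols]) for cols, net in members])
    # For each sample, pick the most frequent predicted label.
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```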

4.
A Parallel Learning Method for Neural Network Ensembles
This paper analyzes how the generalization error of the member networks and the diversity among them affect the generalization error of a neural network ensemble, and proposes a parallel learning method for constructing ensembles. A parallel training method is given for the member networks that satisfies both each member's own accuracy requirement and its diversity requirement with respect to the other members; in addition, a parallel method for determining the members' combination weights is given. Experimental results show that the proposed member training and combination methods can be used to build effective neural network ensemble systems.

5.
To reduce the computational complexity of ensemble feature selection, a neural network ensemble classification method based on rough set reduction is proposed. The method first obtains a stable set of attribute reducts with strong generalization ability through a dynamic reduction technique that combines genetic-algorithm-based reduct computation with resampling; BP networks are then designed on the different reducts as base classifiers and, following the idea of selective ensembles, a search strategy is used to find the ensemble with the best generalization performance; finally, ensemble classification is performed by majority voting. The method was validated on the classification of 7-band Landsat remote sensing imagery of a region. Because rough set reduction filters out a large number of feature subsets with poor classification performance, the method has lower time cost and computational complexity than traditional ensemble feature selection methods while achieving satisfactory classification performance.

6.
张全平  吴耿锋 《计算机工程》2008,34(23):199-201
AINEN, a neural network ensemble method based on artificial immune networks, is proposed. After an ensemble is generated with Bagging, the principles of artificial immune networks are applied to it, yielding a system that at the micro level consists of individual neural networks and at the macro level behaves as one large artificial immune network. By increasing the heterogeneity among the individuals at the micro level and the fitness of the immune network at the macro level, the generalization error of the ensemble is reduced. Experiments comparing AINEN with GASEN on standard data sets show that AINEN achieves a smaller generalization error.
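A minimal sketch of the Bagging stage that AINEN starts from: each member network is trained on a bootstrap resample of the training set; the members can then be combined by majority voting (as in the sketch under item 3) before the immune-network refinement, which is not shown here. Member count and network size are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def bagging_members(X, y, n_members=10, hidden=16, seed=0):
    """Generate ensemble members by training each one on a bootstrap resample."""
    rng = np.random.default_rng(seed)
    n = len(X)
    members = []
    for i in range(n_members):
        idx = rng.integers(0, n, size=n)   # sample n indices with replacement
        net = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=1000,
                            random_state=seed + i)
        net.fit(X[idx], y[idx])
        members.append(net)
    return members
```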

7.
A Fuzzy Kernel Clustering Neural Network Ensemble Method Based on Kernel Independent Component Analysis
A fuzzy kernel clustering neural network ensemble method based on kernel independent component analysis (KICA) is proposed. The method first applies KICA to extract features from high-dimensional data; a fuzzy kernel clustering algorithm then groups the independently trained individual networks according to their outputs on a validation set, and the generalization error of every individual in each cluster is computed on that set; the individual with the smallest average generalization error is taken as the representative of its cluster; finally, the ensemble output is obtained by plurality voting. Experimental results show that, compared with other ensemble methods, this method achieves higher accuracy and better stability.

8.
Research on a Dynamic-Weight Neural Network Ensemble Method Based on Individual Selection
Neural network ensemble techniques can effectively improve the prediction accuracy and generalization ability of neural networks and have become a research focus in machine learning and neural computation. For regression problems, this paper proposes an ensemble construction method that combines genetic-algorithm-based selection of individuals with dynamic determination of the combination weights. After the individual networks are trained, a genetic algorithm selects a subset of them; then, based on each selected network's prediction errors on the training samples over the input space, a generalized regression neural network is used to dynamically determine each network's combination weight for a particular region of the input space. Experimental results show that, compared with methods that apply only individual selection or only dynamic weighting, this ensemble method generally achieves better prediction accuracy with comparable stability.
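A minimal sketch of the dynamic-weighting idea (the GA selection step is omitted): members' absolute errors on the training set are smoothed over the input space by a Nadaraya-Watson kernel regressor standing in for the GRNN, and each member's weight at a test point is the normalized inverse of its predicted error. The bandwidth and the inverse-error rule are illustrative assumptions, not the paper's formulas.

```python
import numpy as np

def kernel_error_model(X_train, abs_errors, bandwidth=1.0):
    """Nadaraya-Watson smoother standing in for the GRNN. `abs_errors` is an
    (n_train, n_members) array of absolute member errors on the training set;
    the returned function predicts each member's expected error at a point."""
    def predict(x):
        d2 = np.sum((X_train - x) ** 2, axis=1)
        k = np.exp(-d2 / (2.0 * bandwidth ** 2))   # RBF kernel weights
        k = k / (k.sum() + 1e-12)
        return k @ abs_errors                      # (n_members,) expected errors
    return predict

def dynamic_weighted_predict(members, err_model, x):
    """Combine member outputs with weights inversely proportional to the
    errors predicted for this particular input (illustrative rule)."""
    expected_err = err_model(x)                    # one value per member
    w = 1.0 / (expected_err + 1e-6)
    w = w / w.sum()
    preds = np.array([m.predict(x.reshape(1, -1))[0] for m in members])
    return float(w @ preds)
```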

9.
Neural network ensemble techniques can effectively improve the prediction accuracy and generalization ability of neural networks and have become a research focus in machine learning and neural computation. For regression problems, an ensemble construction method that dynamically determines the combination weights is proposed: after the individual networks are trained, a generalized regression neural network is used to dynamically determine each network's weight for a particular region of the input space, based on its prediction errors on the training samples. Experimental results show that, compared with the traditional simple-average and weighted-average methods, this ensemble method achieves better prediction accuracy.

10.
Neural network ensemble techniques can effectively improve the prediction accuracy and generalization ability of neural networks and have become a research focus in machine learning and neural computation. For regression problems, an ensemble construction method that dynamically determines the combination weights is proposed: after the individual networks are trained, a generalized regression neural network is used to dynamically determine each network's weight for a particular region of the input space, based on its prediction errors on the training samples. Experimental results show that, compared with the traditional simple-average and weighted-average methods, this ensemble method achieves better prediction accuracy.

11.
Bagging and boosting negatively correlated neural networks
In this paper, we propose two cooperative ensemble learning algorithms, i.e., NegBagg and NegBoost, for designing neural network (NN) ensembles. The proposed algorithms incrementally train different individual NNs in an ensemble using the negative correlation learning algorithm. Bagging and boosting algorithms are used in NegBagg and NegBoost, respectively, to create different training sets for different NNs in the ensemble. The idea behind using negative correlation learning in conjunction with the bagging/boosting algorithm is to facilitate interaction and cooperation among NNs during their training. Both NegBagg and NegBoost use a constructive approach to automatically determine the number of hidden neurons for NNs. NegBoost also uses the constructive approach to automatically determine the number of NNs for the ensemble. The two algorithms have been tested on a number of benchmark problems in machine learning and NNs, including Australian credit card assessment, breast cancer, diabetes, glass, heart disease, letter recognition, satellite, soybean, and waveform problems. The experimental results show that NegBagg and NegBoost require a small number of training epochs to produce compact NN ensembles with good generalization.
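A toy numpy sketch of the negative correlation learning coupling used by NegBagg/NegBoost, with linear members trained jointly for brevity (the actual algorithms train constructive NNs on bagged/boosted data, which is not reproduced here): each member's output-space gradient combines its own error with a term, scaled by an illustrative lambda, that pushes it away from the ensemble mean.

```python
import numpy as np

def train_ncl_ensemble(X, y, n_members=5, lam=0.5, lr=0.01, epochs=200, seed=0):
    """Toy negative correlation learning with linear members trained jointly by
    gradient descent; lam=0 reduces to independent training."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])          # add bias column
    W = rng.normal(scale=0.1, size=(n_members, d + 1))
    for _ in range(epochs):
        outputs = Xb @ W.T                         # (n, M) member outputs
        F = outputs.mean(axis=1, keepdims=True)    # ensemble output
        # Standard NCL gradient in output space: (F_i - y) - lam * (F_i - F)
        grad_out = (outputs - y[:, None]) - lam * (outputs - F)
        W -= lr * (grad_out.T @ Xb) / n
    return W

def ncl_predict(W, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ W.T).mean(axis=1)                 # simple average of members
```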

12.
A constructive algorithm for training cooperative neural network ensembles
Presents a constructive algorithm for training cooperative neural-network ensembles (CNNEs). CNNE combines ensemble architecture design with cooperative training for individual neural networks (NNs) in ensembles. Unlike most previous studies on training ensembles, CNNE puts emphasis on both accuracy and diversity among individual NNs in an ensemble. In order to maintain accuracy among individual NNs, the number of hidden nodes in individual NNs is also determined by a constructive approach. Incremental training based on negative correlation is used in CNNE to train individual NNs for different numbers of training epochs. The use of negative correlation learning and different training epochs for training individual NNs reflects CNNE's emphasis on diversity among individual NNs in an ensemble. CNNE has been tested extensively on a number of benchmark problems in machine learning and neural networks, including Australian credit card assessment, breast cancer, diabetes, glass, heart disease, letter recognition, soybean, and Mackey-Glass time series prediction problems. The experimental results show that CNNE can produce NN ensembles with good generalization ability.

13.
In multivariate statistical process control (MSPC), most multivariate quality control charts are shown to be effective in detecting out-of-control signals based upon an overall statistic, but these charts do not relieve the need for pinpointing the source(s) of the out-of-control signals. Neural networks (NNs) have excellent noise tolerance and high pattern identification capability in real time, and have been applied successfully in MSPC. This study proposes a selective NN ensemble approach, DPSOEN, in which several selected NNs are jointly used to classify the source(s) of out-of-control signals in multivariate processes. The immediate location of the abnormal source(s) can greatly narrow down the set of possible assignable causes, facilitating rapid analysis and corrective action by quality operators. The performance of DPSOEN is analyzed in multivariate processes: it shows improved generalization performance, outperforming both single NNs and the Ensemble-All approach. The study also proposes a heuristic approach for applying the DPSOEN-based model as an effective and useful tool to identify abnormal source(s) in bivariate statistical process control (SPC), with potential application to MSPC in general.

14.
Both theoretical and experimental studies have shown that combining accurate neural networks (NNs) in an ensemble with negative error correlation greatly improves their generalization abilities. Negative correlation learning (NCL) and mixture of experts (ME), two popular combining methods, each employ different special error functions for the simultaneous training of NNs to produce negatively correlated NNs. In this paper, we review the properties of the NCL and ME methods, discussing their advantages and disadvantages. Characterization of both methods showed that they have different but complementary features, so if a hybrid system can be designed to include features of both NCL and ME, it may be better than either of its basis approaches. In this study, two approaches are proposed to combine the features of both methods in order to offset the weaknesses of one method with the strengths of the other: gated NCL (G-NCL) and mixture of negatively correlated experts (MNCE). In the first approach, G-NCL, the dynamic combiner of ME is used to combine the outputs of the base experts in the NCL method. The suggested combiner provides an efficient tool to evaluate and combine the NCL experts by weights estimated dynamically from the inputs, based on the different competences of each expert on different parts of the problem. In the second approach, MNCE, the capability of a control parameter for NCL is incorporated into the error function of ME, which enables the training algorithm of ME to efficiently adjust the measure of negative correlation between the experts. This control parameter can be regarded as a regularization term added to the error function of ME to establish a better balance in the bias-variance-covariance trade-off and thus improve the generalization ability. The two proposed hybrid ensemble methods, G-NCL and MNCE, are compared with their constituent methods, ME and NCL, on several benchmark problems. The experimental results show that our proposed methods preserve the advantages and alleviate the disadvantages of their basis approaches, offering significantly improved performance over the original methods.
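For reference, the negative correlation learning error that the control parameter discussed above regularizes can be written, following the standard NCL formulation, as (notation generic: F_i is member i's output, F the ensemble average, d the target, and lambda the control parameter):

```latex
e_i(x_n) = \tfrac{1}{2}\bigl(F_i(x_n) - d(x_n)\bigr)^2 + \lambda\, p_i(x_n),
\qquad
p_i(x_n) = \bigl(F_i(x_n) - F(x_n)\bigr)\sum_{j \neq i}\bigl(F_j(x_n) - F(x_n)\bigr)
```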

15.
Neural-Based Learning Classifier Systems
UCS is a supervised learning classifier system that was introduced in 2003 for classification in data mining tasks. The representation of a rule in UCS as a univariate classification rule is straightforward for a human to understand. However, the system may require a large number of rules to cover the input space. Artificial neural networks (NNs), on the other hand, normally provide a more compact representation. However, it is not a straightforward task to understand the network. In this paper, we propose a novel way to incorporate NNs into UCS. The approach offers a good compromise between compactness, expressiveness, and accuracy. By using a simple artificial NN as the classifier's action, we obtain a more compact population size, better generalization, and the same or better accuracy while maintaining a reasonable level of expressiveness. We also apply negative correlation learning (NCL) during the training of the resultant NN ensemble. NCL is shown to improve the generalization of the ensemble.

16.
Evolutionary ensembles with negative correlation learning
Based on negative correlation learning and evolutionary learning, this paper presents evolutionary ensembles with negative correlation learning (EENCL) to address the issues of automatic determination of the number of individual neural networks (NNs) in an ensemble and the exploitation of the interaction between individual NN design and combination. The idea of EENCL is to encourage different individual NNs in the ensemble to learn different parts or aspects of the training data so that the ensemble can learn better the entire training data. The cooperation and specialization among different individual NNs are considered during the individual NN design. This provides an opportunity for different NNs to interact with each other and to specialize. Experiments on two real-world problems demonstrate that EENCL can produce NN ensembles with good generalization ability.

17.
A Composite Neural Network Ensemble Algorithm Based on Cluster Analysis
齐新战  刘丙杰  冀海燕 《计算机仿真》2010,27(1):166-169,192
Neural network ensembles are an effective and practical classification method, and the combination weights are an important factor affecting ensemble performance. To overcome the drawback of fixed ensemble weights, a composite neural network ensemble algorithm based on cluster analysis is proposed. The algorithm first partitions the samples into classes and adds to each class a certain number of center samples from the other classes; different neural networks then learn different classes of samples. The ensemble weights are adjusted adaptively according to the degree of correlation between the input data and each sample class. The algorithm not only adapts the ensemble weights but also serves as a training method for generating the individual networks. Simulation experiments on four data sets confirm the effectiveness of the algorithm.

18.
A Selective Neural Network Ensemble Method Based on a Chaotic PSO Algorithm
田雨波  李正强  朱人杰 《计算机应用》2008,28(11):2844-2846
A selective neural network ensemble (NNE) method based on decimal particle swarm optimization (DePSO) and binary PSO (BiPSO) is proposed. The PSO algorithm selects the networks that make up the ensemble so that large diversity is maintained among the individuals, reducing the influence of multicollinearity and sample noise. To keep the particles diverse, chaotic mutation is added during the iterations. Experiments show that chaotic PSO is an effective method for optimizing the combination weights and, compared with existing methods, can effectively improve the generalization ability of the neural network ensemble.
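A minimal binary-PSO sketch of the selection step (regression with simple averaging; the paper's decimal encoding and chaotic mutation are not reproduced): each particle is a 0/1 mask over already-trained member networks, and its fitness is the validation MSE of the averaged ensemble of the selected members. All swarm parameters are illustrative.

```python
import numpy as np

def bipso_select(members, X_val, y_val, n_particles=20, iters=50,
                 inertia=0.7, c1=1.5, c2=1.5, seed=0):
    """Binary-PSO selection of ensemble members; returns a boolean keep-mask."""
    rng = np.random.default_rng(seed)
    M = len(members)
    preds = np.stack([m.predict(X_val) for m in members])      # (M, n_val)

    def fitness(bits):
        sel = bits.astype(bool)
        if not sel.any():
            return np.inf                                       # empty ensembles are invalid
        return np.mean((preds[sel].mean(axis=0) - y_val) ** 2)  # validation MSE

    P = rng.integers(0, 2, size=(n_particles, M))               # positions: bit masks
    V = rng.normal(scale=0.1, size=(n_particles, M))            # velocities
    pbest = P.copy()
    pbest_f = np.array([fitness(p) for p in P])
    gbest = pbest[pbest_f.argmin()].copy()

    for _ in range(iters):
        r1 = rng.random((n_particles, M))
        r2 = rng.random((n_particles, M))
        V = inertia * V + c1 * r1 * (pbest - P) + c2 * r2 * (gbest - P)
        # Binary PSO update: bit j is 1 with probability sigmoid(V_j).
        P = (rng.random((n_particles, M)) < 1.0 / (1.0 + np.exp(-V))).astype(int)
        f = np.array([fitness(p) for p in P])
        improved = f < pbest_f
        pbest[improved] = P[improved]
        pbest_f[improved] = f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest.astype(bool)
```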

19.
Neural networks for advanced control of robot manipulators
Presents an approach and a systematic design methodology to adaptive motion control based on neural networks (NNs) for high-performance robot manipulators, for which stability conditions and performance evaluation are given. The neurocontroller includes a linear combination of a set of off-line trained NNs, and an update law of the linear combination coefficients to adjust robot dynamics and payload uncertain parameters. A procedure is presented to select the learning conditions for each NN in the bank. The proposed scheme, based on fixed NNs, is computationally more efficient than the case of using the learning capabilities of the neural network to be adapted, as that used in feedback architectures that need to propagate back control errors through the model to adjust the neurocontroller. A practical stability result for the neurocontrol system is given. That is, we prove that the control error converges asymptotically to a neighborhood of zero, whose size is evaluated and depends on the approximation error of the NN bank and the design parameters of the controller. In addition, a robust adaptive controller to NN learning errors is proposed, using a sign or saturation switching function in the control law, which leads to global asymptotic stability and zero convergence of control errors. Simulation results showing the practical feasibility and performance of the proposed approach to robotics are given.

20.
Generalization ability is a fundamental concern in machine learning, and ensemble learning techniques can improve it effectively. This paper proposes a selective ensemble regression method for support vector machines (SVMs). By introducing three thresholds, suitable sub-SVMs can be selected, further improving the efficiency of the whole ensemble. Experimental results show that the proposed selective ensemble method can, to a certain extent, alleviate the model selection problem of SVMs and the problem of learning from large-scale data sets, and that it achieves better generalization ability than the traditional ensemble method Bagging.
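A minimal sketch of a selective SVM ensemble for regression, under stated assumptions: the abstract does not specify the three thresholds, so the selection rule below uses a single validation-error threshold purely for illustration; sub-SVMs are trained on bootstrap resamples and the kept ones are averaged.

```python
import numpy as np
from sklearn.svm import SVR

def selective_svr_ensemble(X_tr, y_tr, X_val, y_val, n_members=15,
                           err_threshold=None, seed=0):
    """Bootstrap-train SVR members, then keep only those whose validation MSE
    is below a threshold (defaults to the members' median MSE). Illustrative
    only -- the paper's three thresholds are not given in the abstract."""
    rng = np.random.default_rng(seed)
    n = len(X_tr)
    members, errs = [], []
    for _ in range(n_members):
        idx = rng.integers(0, n, size=n)            # bootstrap resample
        m = SVR(kernel="rbf", C=1.0).fit(X_tr[idx], y_tr[idx])
        members.append(m)
        errs.append(np.mean((m.predict(X_val) - y_val) ** 2))
    errs = np.array(errs)
    thr = np.median(errs) if err_threshold is None else err_threshold
    return [m for m, e in zip(members, errs) if e <= thr]

def ensemble_predict(kept, X):
    """Average the predictions of the selected sub-SVMs."""
    return np.mean([m.predict(X) for m in kept], axis=0)
```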
