Similar Documents
20 similar documents found (query time: 31 ms)
1.
To address the BP neural network's tendency to become trapped in local minima and its slow convergence, a genetic algorithm is used to optimize a BP neural network for house price prediction. A BP neural network is used to build the price prediction model, and the genetic algorithm optimizes the network's initial weights and thresholds. Using house prices in Guiyang from 1998 to 2011 and their main influencing factors as experimental data, both the conventional BP network and the GA-optimized BP network were trained and simulated. The results show that, compared with the conventional BP prediction model, the GA-optimized BP prediction model converges faster and predicts house prices more accurately.
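The GA stage of this approach can be sketched without the paper's housing data. The toy example below (synthetic data, network size, and GA settings are all illustrative assumptions, not the paper's setup) evolves the flattened weight-and-threshold vector of a small 3-5-1 network by minimizing its mean squared error; the BP fine-tuning that would follow from the evolved initial weights is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the housing data (illustrative only).
X = rng.normal(size=(64, 3))
y = np.tanh(X @ np.array([0.5, -1.0, 0.8])) + 0.1 * rng.normal(size=64)

N_IN, N_HID = 3, 5
N_W = N_IN * N_HID + N_HID + N_HID + 1        # all weights and thresholds of a 3-5-1 net

def mse(v):
    W1 = v[:15].reshape(N_IN, N_HID)           # slices fixed for N_IN=3, N_HID=5
    b1, W2, b2 = v[15:20], v[20:25], v[25]
    h = np.tanh(X @ W1 + b1)
    return float(np.mean((h @ W2 + b2 - y) ** 2))

pop = rng.normal(scale=0.5, size=(40, N_W))    # initial population of weight vectors
for _ in range(100):
    fit = np.array([mse(ind) for ind in pop])
    children = [pop[fit.argmin()].copy()]      # elitism: best individual survives
    while len(children) < len(pop):
        i, j = rng.integers(0, len(pop), size=2)
        p1 = pop[i] if fit[i] < fit[j] else pop[j]   # tournament selection
        i, j = rng.integers(0, len(pop), size=2)
        p2 = pop[i] if fit[i] < fit[j] else pop[j]
        mask = rng.random(N_W) < 0.5                 # uniform crossover
        child = np.where(mask, p1, p2)
        # Gaussian mutation on ~20% of the genes
        child = child + rng.normal(scale=0.1, size=N_W) * (rng.random(N_W) < 0.2)
        children.append(child)
    pop = np.array(children)

best = min(pop, key=mse)    # evolved vector: would seed BP training in the paper's scheme
```

The fitness here is the network's training error itself, so the GA is doing a global search over the weight space that BP would then refine locally.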

2.
Improving the BP Algorithm and Implementing It in Matlab   (Cited by 2; 0 self-citations)
The BP algorithm is currently the most widely used algorithm for training feedforward neural networks. After introducing the BP neural network, this paper addresses two serious drawbacks of the standard BP training algorithm, slow convergence and a tendency to become trapped in local extrema, and proposes several improvements to the learning algorithm; it then describes how the improved algorithms are implemented with functions from the Matlab Neural Network Toolbox. Finally, a worked example programs and simulates both the standard BP algorithm and the improved algorithms in the toolbox. The simulation results show that the improved algorithms behave much better with respect to local extrema and convergence speed.

3.
An optimizing BP neural network algorithm based on genetic algorithm   (Cited by 4; 0 self-citations)
A back-propagation (BP) neural network has good self-learning, self-adapting and generalization ability, but it may easily get stuck in a local minimum, and has a poor rate of convergence. Therefore, a method to optimize a BP algorithm based on a genetic algorithm (GA) is proposed to speed the training of BP, and to overcome BP's disadvantage of being easily stuck in a local minimum. A UCI data set is used for experimental analysis, and the results show that, compared with the plain BP algorithm and with a method that uses the GA only to learn the connection weights, our method that combines GA and BP to train the neural network works better: it is less easily stuck in a local minimum, the trained network generalizes better, and it is more stable.

4.
Learning algorithms for BP networks such as the genetic algorithm and particle swarm optimization still converge prematurely on high-dimensional complex problems and cannot guarantee convergence to the optimal solution. This paper therefore applies quantum-behaved particle swarm optimization (QPSO) to BP network learning and uses the improved BP network for intrusion detection. BP networks based on the different learning algorithms were compared experimentally on the KDD CUP 99 data set. The results show that the algorithm converges faster and can, to some extent, improve the detection accuracy of an intrusion detection system while lowering its false alarm rate.
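The abstract gives no implementation details, but the core QPSO position update (particles drawn toward a per-particle attractor between personal best and global best, with a contraction term around the mean best) can be sketched on a benchmark function. All coefficients and the benchmark are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

def sphere(x):                                  # benchmark objective; minimum 0 at origin
    return np.sum(x * x, axis=-1)

DIM, N = 5, 30
pos = rng.uniform(-5, 5, size=(N, DIM))
pbest = pos.copy()
pbest_val = sphere(pos)
gbest = pbest[pbest_val.argmin()].copy()

for t in range(300):
    beta = 1.0 - 0.5 * t / 300                  # contraction coefficient, linearly decreased
    mbest = pbest.mean(axis=0)                  # mean of the personal bests
    phi = rng.random((N, DIM))
    attractor = phi * pbest + (1 - phi) * gbest # per-particle local attractor
    u = rng.uniform(1e-12, 1.0, (N, DIM))
    sign = np.where(rng.random((N, DIM)) < 0.5, 1.0, -1.0)
    pos = attractor + sign * beta * np.abs(mbest - pos) * np.log(1.0 / u)
    val = sphere(pos)
    better = val < pbest_val
    pbest[better] = pos[better]
    pbest_val[better] = val[better]
    gbest = pbest[pbest_val.argmin()].copy()
```

When used to train a BP network, as in the entry above, the position vector would hold the network's weights and the objective would be the training error instead of `sphere`.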

5.
Fast Learning Algorithms for Feedforward Neural Networks   (Cited by 7; 0 self-citations)
In order to improve the training speed of multilayer feedforward neural networks (MLFNN), we propose and explore two new fast backpropagation (BP) algorithms, obtained (1) by changing the error functions, using the exponent attenuation (or bell impulse) function and the Fourier kernel function as alternatives; and (2) by introducing a hybrid conjugate-gradient algorithm of global optimization with a dynamic learning rate to overcome the conventional BP problems of getting stuck in local minima or converging slowly. Our experimental results demonstrate the effectiveness of the modified error functions, since the training speed is faster than that of existing fast methods. In addition, our hybrid algorithm has a higher recognition rate than the Polak-Ribière conjugate-gradient and conventional BP algorithms, and has less training time, less complication and stronger robustness than the Fletcher-Reeves conjugate-gradient and conventional BP algorithms on real speech data.

6.
Zhu Qingbao, 《计算机工程》 (Computer Engineering), 2000, 26(12): 31-32, 71
This paper proposes an adaptively controlled BP training algorithm for learning nonlinear functions with a neural network. The algorithm determines the learning control value adaptively from the current convergence error and its rate of change, and for certain learning applications a two-stage adaptively controlled one-by-one training algorithm is also proposed. Simulation experiments show that this method converges faster than conventional methods and learns more accurately. Finally, a simulation example of the algorithm is given.
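A minimal sketch of error-driven learning-rate control on a toy linear fit (the grow/shrink factors, cap, and model are illustrative assumptions; the paper's control law based on the convergence error and its rate of change is only approximated):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=100)
y = 2.0 * X + 0.3                       # toy linear target (illustrative)

w, b, lr = 0.0, 0.0, 0.01
prev_err = np.inf
for _ in range(200):
    pred = w * X + b
    err = float(np.mean((pred - y) ** 2))
    # control law: error fell -> grow the rate (capped); error rose -> cut it back
    lr = min(lr * 1.05, 0.5) if err < prev_err else lr * 0.5
    prev_err = err
    w -= lr * float(np.mean(2 * (pred - y) * X))   # gradient descent step
    b -= lr * float(np.mean(2 * (pred - y)))
```

The same control loop applies unchanged when `w`, `b` are the weight matrices of a multilayer network; only the gradient computation grows.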

7.
Error back-propagation (BP) is one of the most popular ideas used in learning algorithms for multilayer neural networks. In BP algorithms, there are two types of learning schemes, online learning and batch learning. Online BP has been applied to various problems in practice because of its simplicity of implementation. However, efficient implementation of online BP usually requires an ad hoc rule for determining the learning rate of the algorithm. In this paper, we propose a new learning algorithm called SPM, which is derived from the successive projection method for solving a system of nonlinear inequalities. Although SPM can be regarded as a modification of online BP, the former algorithm determines the learning rate (step-size) adaptively based on the output for each input pattern. SPM may also be considered a modification of the globally guided back-propagation (GGBP) proposed by Tang and Koehler. Although no theoretical proof of the convergence of SPM is given, simulation results on pattern classification problems indicate that SPM is more effective and robust than standard online BP and GGBP.

8.
This paper introduces the basic structure and principles of the BP neural network and analyzes why it converges slowly. To speed up convergence, a new algorithm (PBBP) combining gradient descent with momentum is proposed: several networks of identical structure but different learning rates are trained in parallel; after each iteration the network in the best state is identified from its error, the training parameters of the other networks are adjusted accordingly, and the next iteration proceeds, until the overall error falls within tolerance or the iteration limit is reached. This accelerates convergence and escapes flat regions well. Simulation experiments programmed in Matlab show that the algorithm is feasible.

9.
The slow convergence of the back-propagation neural network (BPNN) has become a challenge in data-mining and knowledge-discovery applications, due to the drawbacks of the gradient descent (GD) optimization method widely adopted in BPNN learning. To solve this problem, standard optimization techniques such as the conjugate-gradient and Newton methods have been proposed to improve the convergence rate of the BP learning algorithm. This paper presents a heuristic method that adds an adaptive smoothing momentum term to the original BP learning algorithm to speed up convergence. In this improved BP learning algorithm, adaptive smoothing is used to adjust the momentum of the weight-updating formula automatically in terms of "3σ limits theory." Using the adaptive smoothing momentum terms, the improved BP learning algorithm makes network training and convergence faster, and the network's generalization performance stronger, than the standard BP learning algorithm does. To verify the effectiveness of the proposed BP learning algorithm, three typical foreign exchange rates, the British pound (GBP), euro (EUR) and Japanese yen (JPY), are chosen as forecasting targets for illustration purposes. Experimental results from homogeneous algorithm comparisons reveal that the proposed BP learning algorithm outperforms the other comparable BP algorithms in performance and convergence rate. Furthermore, empirical results from heterogeneous model comparisons also show the effectiveness of the proposed BP learning algorithm.
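The baseline this method builds on, gradient descent with a momentum term, can be sketched on a toy classification task. The 3σ-based adaptive adjustment of the momentum coefficient is omitted; the data, learning rate, and fixed momentum coefficient are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 2))
y = (X @ np.array([1.0, -1.0]) > 0).astype(float)   # toy separable labels

w = np.zeros(2); b = 0.0
lr, mu = 0.2, 0.9                        # learning rate and fixed momentum coefficient
dw = np.zeros(2); db = 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid output
    gw = X.T @ (p - y) / len(y)              # cross-entropy gradient w.r.t. weights
    gb = float(np.mean(p - y))
    dw = mu * dw - lr * gw                   # momentum-smoothed weight update
    db = mu * db - lr * gb
    w += dw; b += db

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
acc = float(np.mean((p > 0.5) == (y > 0.5)))
```

In the paper's scheme, `mu` would not stay fixed at 0.9 but would be tuned online by the adaptive smoothing rule.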

10.
提出一种量子BP网络模型及改进学习算法,该BP网络模型首先基于量子学中一位相移门和两位受控非门的通用性,构造出一种量子神经元,然后由该量子神经元构造隐含层,采用梯度下降法进行学习。输出层采用传统神经元构造,采用基于改进的带动量自适应学习率梯度下降法学习。在UCI两个数据集上采用该模型及算法,实验结果表明该方法比传统的BP网络具有较好的收敛速度和正确率。  相似文献   

11.
Particle swarm optimization (PSO) is a heuristic optimization technique based on swarm intelligence, inspired by the behavior of flocking birds. The canonical PSO has the disadvantage of premature convergence. Several improved PSO versions do well at keeping the particles diverse during the search, but at the expense of rapid convergence. This paper proposes an example-based learning PSO (ELPSO) that overcomes these shortcomings by keeping a balance between swarm diversity and convergence speed. Inspired by the social phenomenon that multiple good examples can guide a crowd towards making progress, ELPSO uses an example set of multiple global best particles to update the positions of the particles. In this study, the particles of the example set were selected from the best particles and updated by better particles in first-in-first-out order in each iteration. The particles in the example set are distinct, and are usually of high quality in terms of the target optimization function. Mathematical and numerical results show that ELPSO has better diversity and convergence speed than single-gbest and non-gbest PSO algorithms. Finally, computational experiments on benchmark problems show that ELPSO outperforms all of the tested PSO algorithms in terms of both solution quality and convergence time.
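For contrast with ELPSO's example set, the canonical single-gbest PSO that it modifies can be sketched on a benchmark function (the inertia and acceleration coefficients are common illustrative choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

def sphere(x):                             # benchmark objective; minimum 0 at origin
    return np.sum(x * x, axis=-1)

DIM, N = 5, 30
pos = rng.uniform(-5, 5, size=(N, DIM))
vel = np.zeros((N, DIM))
pbest = pos.copy()
pbest_val = sphere(pos)
gbest = pbest[pbest_val.argmin()].copy()

w_in, c1, c2 = 0.72, 1.49, 1.49            # typical inertia / acceleration coefficients
for _ in range(200):
    r1, r2 = rng.random((N, DIM)), rng.random((N, DIM))
    # velocity pulled toward each particle's best and the single global best
    vel = w_in * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = sphere(pos)
    better = val < pbest_val
    pbest[better] = pos[better]
    pbest_val[better] = val[better]
    gbest = pbest[pbest_val.argmin()].copy()
```

ELPSO's change is in the velocity line: rather than one `gbest`, each particle would be attracted to a member of a FIFO-maintained example set of several good particles.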

12.
A Fast PID-Type Second-Order Learning Algorithm for BP Networks   (Cited by 12; 0 self-citations)
Using ideas from PID control, this paper proposes a second-order fast learning algorithm for BP networks, gives necessary conditions and a preferred region for choosing the learning factors, and carries out a simulation study on a nonlinear sine function. The results show that, compared with the standard BP learning algorithm, this method speeds up learning convergence by a factor of about 22.
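The PID idea can be caricatured as combining a proportional term (the current gradient), an integral term (accumulated past gradients) and a derivative term (the change in the gradient) into one weight update. The sketch below fits a sine function, echoing the paper's test case, but the gains, the network size, and the leaky integral are assumptions of this sketch, not the paper's second-order algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=80)
y = np.sin(np.pi * X)                        # nonlinear sine target

H = 10                                       # hidden tanh units (illustrative)
W1 = rng.normal(scale=1.0, size=H); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=H); b2 = 0.0

def grads():
    h = np.tanh(X[:, None] * W1 + b1)        # hidden activations, shape (80, H)
    e = h @ W2 + b2 - y                      # output error
    gz = (e[:, None] * W2) * (1 - h ** 2)    # backprop through tanh
    g = np.concatenate([(X[:, None] * gz).mean(0), gz.mean(0),
                        h.T @ e / len(X), [e.mean()]])
    return g, float(np.mean(e ** 2))

Kp, Ki, Kd = 0.1, 0.02, 0.05                 # illustrative PID gains, not the paper's values
integ = np.zeros(3 * H + 1)                  # integral of past gradients
prev = np.zeros(3 * H + 1)                   # previous gradient, for the derivative term
for _ in range(3000):
    g, err = grads()
    integ = 0.9 * integ + g                  # leaky integral to avoid windup
    step = Kp * g + Ki * integ + Kd * (g - prev)
    prev = g
    W1 -= step[:H]; b1 -= step[H:2*H]
    W2 -= step[2*H:3*H]; b2 -= step[3*H]
```

With a leaky integral the I term behaves much like a momentum term, while the D term damps abrupt gradient changes; the paper instead derives selection conditions for the learning factors.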

13.
The GA-BP learning algorithm often converges slowly and may become trapped in local extrema. To address this, an adaptive GA-BP (AGA-BP) algorithm is adopted, and jumping genes are added to both GA-BP and AGA-BP, yielding the JG-GA-BP and JG-AGA-BP algorithms for solving classification problems. A jumping-gene operator is added to the genetic algorithm to optimize the structural parameters of the BP neural network and thereby build the corresponding network topology. To verify the classification performance of the learning algorithms with jumping genes, JG-AGA-BP, JG-GA-BP, AGA-BP and GA-BP were compared on classification experiments with random numbers and the iris, wine and abalone data sets. The results show that adding jumping genes improves both the accuracy and the convergence speed of GA-BP to a certain extent.

14.
In the conventional backpropagation (BP) learning algorithm used for training the connecting weights of an artificial neural network (ANN), a sigmoidal activation function with a fixed slope is used. This limitation leads to slower training of the network, because only the weights of the different layers are adjusted by the conventional BP algorithm. To accelerate convergence during the training phase of the ANN, in addition to the weight updates, the slope of the sigmoid function associated with each artificial neuron can also be adjusted by a newly developed learning rule. To achieve this objective, this paper derives new BP learning rules for adjusting the slope of the activation function associated with the neurons. The combined rules, for both the connecting weights and the slopes of the sigmoid functions, are then applied to the ANN structure to achieve faster training. In addition, two benchmark problems, classification and nonlinear system identification, are solved using the trained ANN. The results of simulation-based experiments demonstrate that, in general, the proposed new BP learning rules for slope and weight adjustment provide superior convergence during the training phase, as well as improved root mean square error and mean absolute deviation on the classification and nonlinear system identification problems.
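A minimal sketch of the slope-adjustment idea on a single sigmoid unit with MSE loss (toy data; the paper derives rules for a full ANN, which this sketch does not reproduce, and the clamp on the slope is an added stability guard):

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(60, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # toy separable labels (illustrative)

w = rng.normal(scale=0.1, size=2); b = 0.0
lam = 1.0                                     # sigmoid slope, trained like a weight
lr = 0.5
for _ in range(500):
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-lam * z))        # sigmoid with adjustable slope lam
    d = (p - y) * p * (1 - p)                 # MSE error backpropagated through the sigmoid
    w -= lr * lam * (X.T @ d) / len(y)        # weight rule gains a factor lam (chain rule)
    b -= lr * lam * float(d.mean())
    lam -= lr * float(d @ z) / len(y)         # slope rule: dE/dlam uses z in place of lam*w
    lam = max(lam, 0.1)                       # keep the slope positive (stability guard)

acc = float(np.mean(((X @ w + b) > 0) == (y > 0.5)))
```

The extra update for `lam` is the whole point: both the steepness of the activation and the weights descend the same error surface, which is what speeds up convergence.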

15.
To overcome the low prediction accuracy and slow convergence of the conventional BP neural network (BPNN) in sales forecasting, a forecasting model is proposed in which a BPNN is optimized by an improved immune genetic algorithm (IIGA). The IIGA introduces a new population initialization scheme, a regulation mechanism for antibody concentration, and adaptive crossover and mutation operators, which effectively strengthen its convergence and search ability. The IIGA then optimizes the BPNN's initial weights and thresholds, remedying the unstable output and the tendency toward local extrema caused by randomly chosen network parameters. An empirical study on the historical sales data of a steel enterprise builds BP, IGA-BP and IIGA-BP neural network forecasting models in Matlab for comparative simulation. The experiments show that the IIGA-BP model improves prediction accuracy by 23.82% over the BP model and by 22.02% over the IGA-BP model. The IIGA-BP model generalizes better for steel sales forecasting and predicts more stably, with errors essentially confined to [-0.25, 0.25]; prediction accuracy improves substantially, providing an effective method for enterprise sales forecasting.

16.
The wavelet neural network (WNN) combines wavelet theory with neural network theory; it has strong function-learning and generalization ability and broad application prospects. A WNN-based BP weight-balancing algorithm is used for feature-level fusion of multi-sensor measurements, and the fused result is passed on for decision-level judgment. The fusion algorithm avoids the slow convergence and local optima of BP networks and improves learning speed and accuracy. Simulation results demonstrate the method's effectiveness.

17.
Reasonable siting of a regional logistics center underpins the upgrading and optimization of the regional logistics network and sustained, healthy, stable economic growth. This paper combines simulated annealing with the BP learning algorithm into a new optimization algorithm that solves the problem by learning and iteration. First, the BP learning algorithm is described with a precise mathematical model, and the flow of the simulated-annealing-improved BP algorithm is illustrated graphically; then six siting steps are laid out for the improved algorithm; finally, a concrete siting example verifies the effectiveness of the improved algorithm and the steps. The algorithm performs well in convergence stability, convergence speed and sensitivity to initial values, and proves efficient, practical and concise.

18.
To address the K-means algorithm's dependence on initial cluster centers and its tendency to fall into local optima, an improved differential evolution algorithm for clustering is proposed. Combining the improved differential evolution with K-means iteration reduces the algorithm's sensitivity to initial centers and its chance of getting stuck in local optima, improving stability. Opposition-based learning is introduced into the framework to guide the search into new regions, improving global search ability. For efficiency, a tidying operator is designed, based on the characteristics of the clustering encoding, to eliminate redundancy, and the population update strategy of differential evolution is adjusted. Finally, random individuals are continually injected during iteration to strengthen population diversity. Experimental comparisons with K-means and several evolutionary clustering algorithms show that the algorithm effectively suppresses premature convergence, is highly stable and clusters well.

19.
A new learning algorithm for double-weight neural networks, based on a genetic algorithm and error back-propagation, is proposed; it simultaneously determines the core weights, direction weights, power parameter, learning rate and other parameters. By tuning these parameters appropriately, the network can realize as many different hypersurface characteristics as possible and converge faster. Simulations on practical pattern classification problems compare the method with the momentum BP algorithm, CSFN and other algorithms, verifying its effectiveness. The experimental results show that the proposed method offers high classification accuracy and fast convergence.

20.
The backpropagation (BP) neural networks have been widely applied in scientific research and engineering. The success of the application, however, relies upon the convergence of the training procedure involved in neural network learning. We settle the convergence analysis issue by proving two fundamental theorems on the convergence of the online BP training procedure. One theorem claims that, under mild conditions, the gradient sequence of the error function converges to zero (the weak convergence); the other concludes that the weight sequence defined by the procedure converges to a fixed value at which the error function attains its minimum (the strong convergence). The weak convergence theorem sharpens and generalizes the existing convergence analyses, while the strong convergence theorem provides new results on the convergence of the online BP training procedure. The results reveal that with any analytic sigmoid activation function, the online BP training procedure is always convergent, which underlies the successful application of BP neural networks.
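Schematically, with E the error function and w_n the weight sequence generated by online BP (notation assumed here, not taken from the paper), the two results say:

```latex
% Weak convergence: the gradients vanish along the training sequence
\lim_{n \to \infty} \bigl\| \nabla E(w_n) \bigr\| = 0

% Strong convergence: the weights themselves converge to a minimizer
\lim_{n \to \infty} w_n = w^{*}, \qquad E(w^{*}) = \min_{w} E(w)
```

Weak convergence alone would allow the weights to wander among stationary points; the strong theorem rules that out.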


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号