Similar Documents
20 similar documents found (search time: 31 ms)
1.
To address the difficulty of parameter tuning in PID control, along with the slow convergence and susceptibility to local minima of the basic BP algorithm, a new weight-adjustment method is proposed that exploits the global search ability and strong convergence of the PSO algorithm to improve the BP network, and thereby to optimize the proportional, integral, and derivative actions of the PID controller. Building on the error back-propagation of the basic BP algorithm, the method maps particle position updates onto the adjustment of the BP network's weights and thresholds, fully exploiting PSO's global search ability while largely preserving the back-propagation character of the BP algorithm itself. Simulation results show that PID control optimized by a PSO-based BP neural network achieves good performance as well as self-learning and adaptive capability.
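The weight-adjustment scheme described above, in which each particle's position is mapped onto the BP network's weight/threshold vector and PSO minimizes the network error, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the three-parameter toy "network", the sample data, and the PSO coefficients (omega, c1, c2) are all assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the BP network: each particle position is the flattened
# parameter vector (here just [w1, b1, w2] of a one-hidden-neuron net).
X = np.array([0.0, 0.5, 1.0])
y = np.array([0.1, 0.6, 0.9])

def fitness(w):
    w1, b1, w2 = w
    hidden = np.tanh(w1 * X + b1)            # hidden-layer output
    return np.mean((w2 * hidden - y) ** 2)   # MSE that PSO minimizes

# Standard PSO: v = omega*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
n_particles, dim = 20, 3
x = rng.uniform(-1, 1, (n_particles, dim))
v = np.zeros_like(x)
pbest = x.copy()
pbest_f = np.array([fitness(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

omega, c1, c2 = 0.7, 1.5, 1.5                # assumed, commonly used values
for _ in range(100):
    r1 = rng.random((n_particles, dim))
    r2 = rng.random((n_particles, dim))
    v = omega * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = np.array([fitness(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()   # best weight vector found so far
```

In the paper's scheme, `fitness` would instead run a forward pass of the full BP network and return its training error, with a BP gradient step interleaved with the particle update.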

2.
Network traffic prediction by training a BP network with an improved QPSO*   (cited: 2, self-citations: 0, other: 2)
To improve the accuracy of network traffic prediction, an improved QPSO algorithm is used to train a BP neural network that models and predicts the network-traffic time series. To address the premature convergence from which the standard QPSO algorithm inevitably suffers, a new QPSO variant with adaptive parameters is proposed, which largely avoids premature convergence of the swarm and improves global convergence. Simulation experiments show that, compared with BP networks trained by PSO and by standard QPSO, the proposed prediction model achieves higher prediction accuracy and good stability.
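For reference, the standard QPSO position update that the paper improves upon can be sketched as follows. The sphere function stands in for the BP network's training error, and the linearly decreasing contraction-expansion coefficient beta (1.0 to 0.5) is a commonly used default, assumed here; the paper's contribution is precisely to adapt such parameters, which is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

def sphere(x):            # stand-in fitness; the paper's is the BP training error
    return float(np.sum(x ** 2))

n, dim, iters = 20, 5, 200
x = rng.uniform(-5, 5, (n, dim))
pbest = x.copy()
pbest_f = np.array([sphere(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for t in range(iters):
    beta = 1.0 - 0.5 * t / iters           # contraction-expansion coefficient
    mbest = pbest.mean(axis=0)             # mean of all personal bests
    phi = rng.random((n, dim))
    p = phi * pbest + (1 - phi) * gbest    # per-particle local attractor
    u = rng.random((n, dim))
    sign = np.where(rng.random((n, dim)) < 0.5, 1.0, -1.0)
    # QPSO sampling step: x = p +/- beta * |mbest - x| * ln(1/u)
    x = p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
    f = np.array([sphere(xi) for xi in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()
```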

3.
To improve license-plate recognition accuracy at night, a recognition algorithm based on an improved BP neural network is proposed. To improve the quality and sharpness of night-time plate images, image smoothing and enhancement are applied during preprocessing; edge detection is used to locate the plate correctly, and the characters are then segmented by counting the white pixels in the plate image. On this basis, a BP neural network improved with an additional momentum term and an adaptive learning rate recognizes the plate accurately. Experimental results show that the method is effective for segmenting and recognizing night-time license plates.

4.

To improve the accuracy of predicting the carbon content of spent catalyst, a prediction model is proposed that optimizes a BP neural network with a modified teaching-learning-based optimization algorithm (MTLBO). To remedy the poor global search ability of the basic TLBO algorithm, preview and review phases are added before and after the teacher phase, and the learner phase is updated in a quantum-behaved manner. Benchmark tests show that these modifications improve the algorithm's global exploration and local refinement; the improved algorithm is then used to optimize the weights and thresholds of a BP neural network for predicting spent-catalyst carbon content. Simulation results show that the improved model gains in both prediction accuracy and generalization ability.


5.
To address the difficulty general-purpose BP networks have in converging on high-dimensional, large-volume training data, constrained clustering is introduced, on top of momentum factors and adaptive learning rates, to build an ensemble of neural networks with faster training and better diagnosis. First, a constrained clustering algorithm partitions the training set into several sub-sample sets of comparable size, each used to train a corresponding sub-network. In addition, during diagnosis the membership degree of the input relative to each training subset is fed in alongside the sub-networks' outputs. Finally, a real BP-network diagnosis case on a circuit board, with 25-dimensional samples and 38 fault classes, verifies the feasibility of the algorithm.

6.
To overcome the BP algorithm's tendency to become trapped in local minima, and to reduce the dimensionality of the sample data, a genetic neural network method based on principal component analysis (PCA) is proposed. Dimensionality reduction and decorrelation speed up convergence; an improved genetic algorithm optimizes the network weights, and the network is trained by gradient descent with momentum and an adaptive learning rate. MATLAB simulation results show that the method outperforms the BP algorithm in both accuracy and convergence, and when applied to intrusion detection its detection rate and false-alarm rate are clearly better than those of traditional methods.

7.
A comparative study of the function-approximation performance of improved BP neural networks   (cited: 1, self-citations: 0, other: 1)
To correctly characterize the nonlinear function-approximation ability of six typical improved BP training algorithms commonly used in practice, this paper details the learning procedure of each from a mathematical standpoint, briefly introduces the BP training functions in the MATLAB toolbox, and then designs concrete networks in MATLAB to approximate a specified nonlinear function, comparing the performance of the six algorithms. Simulation results show that, for small and medium-sized networks, the Levenberg-Marquardt (LM) algorithm approximates best, followed by the quasi-Newton method, the conjugate-gradient method, resilient BP, the adaptive-learning-rate algorithm, and momentum BP.

8.
Given that the real-time capacity of a lithium battery is hard to estimate at low temperature, where instantaneous current and voltage strongly affect transient capacity, a deep feed-forward BP network built mainly from dense (fully connected) layers is studied, and the effect of different added layers on the gap between predicted and actual values is analyzed. A three-hidden-layer [11-9-12] BP network is adopted to reach higher accuracy; the network is optimized with Nadam, an SGD extension that combines momentum with adaptive learning rates to accelerate convergence, together with a log-cosh loss, and regularization is applied to reduce overfitting and improve generalization. The model is trained and tested on 0 °C low-temperature HPPC test data, and experiments achieve an SOC prediction error of about 0.04 across different voltage and current conditions.
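A minimal sketch of the Nadam update rule mentioned above (momentum plus adaptive learning rates with a Nesterov-style correction), on a toy quadratic; the objective and the hyperparameter values are assumptions for illustration, not the paper's SOC model.

```python
import numpy as np

# Toy objective: f(theta) = 0.5 * ||theta||^2, so the gradient is theta itself.
theta = np.array([3.0, -4.0])
m = np.zeros(2)    # EMA of gradients (first moment)
v = np.zeros(2)    # EMA of squared gradients (second moment)
lr, beta1, beta2, eps = 0.05, 0.9, 0.999, 1e-8

for t in range(1, 1001):
    g = theta.copy()                        # gradient of 0.5*||theta||^2
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    m_hat = m / (1 - beta1 ** (t + 1))      # look-ahead bias correction (Nadam)
    g_hat = g / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Nesterov-corrected step: blend the look-ahead momentum with the raw gradient.
    theta = theta - lr * (beta1 * m_hat + (1 - beta1) * g_hat) / (np.sqrt(v_hat) + eps)
```

Without a learning-rate schedule, Adam-type updates on a deterministic objective settle into a small oscillation around the optimum at roughly the scale of `lr`, which is why the test below only asserts that theta lands near zero rather than exactly on it.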

9.
Adaptive Moment Estimation (Adam)-type algorithms, which combine momentum with adaptive step sizes, are widely used in deep learning, yet they cannot be optimal in theory and in experiments at the same time. To address this, the paper combines AdaBelief's technique of flexibly adjusting the step size to improve empirical performance with the accelerated convergence of the Heavy-Ball momentum method, which adjusts the step size using only an exponential moving average (EMA) strategy, and proposes a Heavy-Ball momentum method based on AdaBelief. Borrowing the convergence-analysis techniques of AdaBelief and the Heavy-Ball method, the paper carefully selects time-varying step sizes and momentum coefficients and, through the added momentum term and adaptive matrix, proves that the method attains the optimal individual convergence rate for non-smooth general convex optimization problems. Finally, experiments on convex optimization problems and deep neural networks confirm the correctness of the analysis and show that the method improves performance while achieving theoretically optimal convergence.
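A minimal sketch of the Heavy-Ball momentum ingredient discussed above, on a toy quadratic. The constant step size alpha and momentum coefficient beta are hand-picked for this objective; the paper itself derives time-varying coefficients combined with AdaBelief's adaptive scaling, which are not reproduced here.

```python
import numpy as np

# Toy smooth convex objective: f(theta) = 0.5 * theta^T A theta.
A = np.diag([1.0, 10.0])

def f(theta):
    return 0.5 * theta @ A @ theta

def grad(theta):
    return A @ theta

# Heavy-Ball: theta_{t+1} = theta_t - alpha*grad(theta_t) + beta*(theta_t - theta_{t-1})
alpha, beta = 0.2, 0.3          # assumed constants suited to this A
theta = np.array([5.0, 5.0])
theta_prev = theta.copy()
for _ in range(300):
    theta, theta_prev = (theta - alpha * grad(theta)
                         + beta * (theta - theta_prev)), theta
```

The momentum term `beta * (theta - theta_prev)` carries velocity across iterations, which is what damps the zig-zagging of plain gradient descent on ill-conditioned objectives.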

10.
This paper investigates new learning algorithms (LF I and LF II) based on Lyapunov functions for the training of feedforward neural networks. Such algorithms have an interesting parallel with the popular backpropagation (BP) algorithm, in which the fixed learning rate is replaced by an adaptive learning rate computed using a convergence theorem based on Lyapunov stability theory. LF II, a modified version of LF I, has been introduced with the aim of avoiding local minima; this modification also helps improve the convergence speed in some cases. Conditions for achieving the global minimum with this kind of algorithm are studied in detail. The performance of the proposed algorithms is compared with the BP algorithm and extended Kalman filtering (EKF) on three benchmark function-approximation problems: XOR, 3-bit parity, and the 8-3 encoder. The comparisons are made in terms of the number of learning iterations and the computational time required for convergence. The proposed algorithms (LF I and II) are found to converge much faster than the other two algorithms at the same accuracy. Finally, a comparison is made on a complex two-dimensional (2-D) Gabor function, and the effect of the adaptive learning rate on faster convergence is verified. In a nutshell, the investigations made in this paper help better understand the learning procedure of feedforward neural networks in terms of adaptive learning rate, convergence speed, and local minima.

11.
Dynamic learning rate optimization of the backpropagation algorithm   (cited: 12, self-citations: 0, other: 12)
It has been observed by many authors that backpropagation (BP) error surfaces usually consist of a large number of flat regions as well as extremely steep regions; consequently, the BP algorithm with a fixed learning rate has low efficiency. This paper considers dynamic learning-rate optimization of the BP algorithm using derivative information. An efficient method of deriving the first and second derivatives of the objective function with respect to the learning rate is explored; it does not involve explicit calculation of second-order derivatives in weight space, but rather uses the information gathered from the forward and backward propagation. Several learning-rate optimization approaches are subsequently established, based respectively on linear expansion of the actual outputs, on line searches with an acceptable descent value, and on Newton-like methods. Simultaneous determination of the optimal learning rate and momentum is also introduced by showing the equivalence between momentum BP and the conjugate-gradient method. Since these approaches are constructed by simple manipulations of the obtained derivatives, the computational and storage burden scales with network size exactly like the standard BP algorithm, and convergence is accelerated, with a remarkable reduction (typically by a factor of 10 to 50, depending on network architecture and application) in the running time of the overall learning process. Numerous computer simulation results are provided to support the approaches.
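The idea of choosing the learning rate from derivative information can be illustrated on a quadratic stand-in, where the first and second derivatives of the objective with respect to the learning rate give the exact line-search step in closed form. The objective below is an assumption for illustration, not the paper's BP formulation, though its ill-conditioned Hessian mimics the mix of flat and steep regions described above.

```python
import numpy as np

# Quadratic stand-in: f(theta) = 0.5 * theta^T A theta with an
# ill-conditioned Hessian (condition number 100).
A = np.diag([0.1, 10.0])

def f(theta):
    return 0.5 * theta @ A @ theta

def grad(theta):
    return A @ theta

theta = np.array([10.0, 1.0])
for _ in range(500):
    g = grad(theta)
    # d f(theta - eta*g)/d eta = 0 yields eta* = (g.g)/(g.A.g): the per-step
    # optimal learning rate from first- and second-derivative information,
    # analogous to the paper's Newton-like step on the learning rate.
    eta = (g @ g) / (g @ A @ g)
    theta = theta - eta * g
```

A fixed learning rate on this objective would have to stay below 2/10 to remain stable, crawling along the flat direction; the dynamic rate adapts to each step's gradient instead.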

12.
Since neural-network training samples in practical applications usually exhibit intrinsic features and regularities, a BP neural network prediction model based on self-organizing sample clustering is proposed. The clustering behavior of a self-organizing competitive network is used to improve the effect of sample training on BP performance. The BP network is trained with a combined momentum and adaptive-learning-rate algorithm, which converges quickly with high accuracy. Air-quality prediction experiments based on this model show that self-organizing sample clustering speeds up convergence, reduces the chance of falling into local minima, and improves prediction accuracy.

13.
This study mainly focuses on the development of intelligent forecasting structures, via a similar-time method with historical load change rates, for hourly, daily and monthly load forecasting simultaneously, based on the basic frameworks of the fuzzy neural network (FNN) and particle swarm optimization (PSO). For tuning the network parameters, conventional back-propagation (BP) and PSO algorithms are used, and varied learning rates are designed in the sense of discrete-time Lyapunov stability theory. The performance of different intelligent forecasting structures, including a neural network with BP tuning (NN-BP), an FNN with BP tuning (FNN-BP), an FNN with BP tuning and varied learning rates (FNN-BP-V), an FNN with PSO tuning (FNN-PSO), and a newly designed adaptive PSO (APSO) structure, is verified by numerical simulations. To verify the effectiveness of the superior APSO forecasting structure in practical energy-saving load regulation, load forecasting at 15-minute intervals is also given, and its result is used to drive the scheduled unloading control of a real case on a campus in Taiwan.

14.
To address the equilibrium optimizer's low convergence accuracy and its tendency to stagnate in local optima, an improved equilibrium optimizer based on adaptive crossover and covariance learning is proposed. First, an external archive is built to preserve historically superior individuals, increasing population diversity and improving the algorithm's global search ability. Second, an adaptive crossover probability is introduced to balance global exploration against local exploitation, improving accuracy and robustness. Finally, a covariance learning strategy fully exploits the relationships among concentration vectors to strengthen information exchange within the population and avoid local stagnation. Simulations on the CEC2019 benchmark functions, together with combining the improved algorithm with a back-propagation (BP) neural network to predict runoff of the Manas River in Xinjiang, show that the improved algorithm gains markedly in convergence accuracy and robustness and substantially improves the BP network's runoff predictions.

15.
A new learning algorithm for double-weight neural networks, based on a genetic algorithm and error back-propagation, is proposed; it determines the core weights, direction weights, power parameter, learning rate, and other parameters simultaneously. By tuning these parameters appropriately, the network can realize as wide a variety of hypersurface shapes as possible and converge faster. Simulations on real pattern-classification problems compare the method with momentum BP, CSFN, and other algorithms, verifying its effectiveness; the experimental results show high classification accuracy and fast convergence.

16.
To improve the prediction accuracy of strip crown in hot rolling, a prediction model combining particle swarm optimization (PSO), support vector regression (SVR), and a BP neural network (BPNN) is proposed. PSO optimizes the SVR parameters to build a PSO-SVR crown model, which is then combined with a BPNN model of the crown deviation to predict strip crown. Plant data are used to validate the prediction accuracy, and statistical indices evaluate the model's overall performance. Simulation results show that, compared with the PSO-SVR, SVR, BPNN, and GA-SVR models, the PSO-SVR+BPNN model has stronger learning and generalization ability, and it runs faster than the GA-SVR model.

17.
A nonlinear predictive controller based on chaos optimization   (cited: 2, self-citations: 2, other: 2)
For the control of nonlinear systems, this paper organically combines neural-network identification, chaos optimization, and predictive control to propose a new nonlinear predictive controller. The controller uses a neural network as the prediction model and a chaos optimization algorithm as the receding-horizon optimization strategy, avoiding the complicated gradient computation and matrix inversion of nonlinear predictive control. In addition, the neural network is trained with a BP algorithm using a chaos-driven adaptive learning rate, improving its convergence ability and speed. Simulation studies demonstrate the effectiveness and real-time performance of the nonlinear predictive controller.

18.
Since the random initial weights and thresholds of a traditional BP neural network lead to slow learning, entrapment in local solutions, and low accuracy, a parallel ensemble learning algorithm for BP networks based on an improved binary glowworm swarm optimization (IBGSO) is proposed. IBGSO is first constructed with a Gaussian mutation function as the probability mapping function, and its effectiveness is analyzed theoretically. IBGSO is then combined with BP neural networks into a parallel ensemble learning algorithm, which is applied to agricultural drought-disaster assessment. Experiments show that the algorithm outperforms traditional algorithms in both computation speed and accuracy and improves the accuracy of drought-level assessment.

19.
Hybrid back-propagation training with evolutionary strategies   (cited: 1, self-citations: 0, other: 1)
This work presents a hybrid algorithm for neural network training that combines the back-propagation (BP) method with an evolutionary algorithm. In the proposed approach, BP updates the network connection weights, and a (1+1) Evolutionary Strategy (ES) adaptively modifies the main learning parameters. The algorithm can incorporate different BP variants, such as gradient descent with adaptive learning rate (GDA), in which case the learning rate is dynamically adjusted by the stochastic (1+1)-ES as well as the deterministic adaptive rules of GDA; a combined optimization strategy known as memetic search. The proposal is tested on three different domains, time series prediction, classification and biometric recognition, using several problem instances. Experimental results show that the hybrid algorithm can substantially improve upon the standard BP methods. In conclusion, the proposed approach provides a simple extension to basic BP training that improves performance and lessens the need for parameter tuning in real-world problems.

20.
Dong Jingfang, Yang Hui. Computer Engineering (《计算机工程》), 2005, 31(Z1): 154-156
The effect of improved BP parameters on the network's recognition ability is discussed from three aspects: the learning step size of the BP network, the parameters of the adaptive learning-rate adjustment algorithm, and the parameters of the algorithm that combines momentum with an adaptive learning rate. In determining the number of hidden-layer nodes, a self-adaptive learning algorithm for the BP network is proposed so that the hidden nodes are selected dynamically. Simulation experiments show that the improvement is feasible.
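One common concrete form of the "momentum plus adaptive learning rate" combination discussed in several of the abstracts above can be sketched on a toy least-squares problem. This is an assumed illustration, not any one paper's method: the grow/shrink/tolerance constants (1.05, 0.7, 1.04) follow the convention popularized by MATLAB's traingdx training function.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (50, 2))
true_w = np.array([2.0, -1.0])
y = X @ true_w

def loss(w):
    return np.mean((X @ w - y) ** 2)

w = np.zeros(2)
dw_prev = np.zeros(2)
eta, mu = 0.1, 0.9                  # initial learning rate, momentum factor
inc, dec, tol = 1.05, 0.7, 1.04    # assumed traingdx-style constants

prev = loss(w)
for _ in range(200):
    g = 2 * (X.T @ (X @ w - y)) / len(y)    # gradient of the MSE
    dw = -eta * g + mu * dw_prev            # momentum update
    cand = w + dw
    cur = loss(cand)
    if cur > prev * tol:     # error grew beyond tolerance:
        eta *= dec           #   shrink the learning rate,
        dw_prev[:] = 0.0     #   drop the momentum,
        continue             #   and reject the step
    eta *= inc               # error acceptable: grow the learning rate
    w, dw_prev, prev = cand, dw, cur
```

The adaptive rule lets the rate climb while the error keeps falling and backs off only when a step overshoots, so the step size self-tunes instead of being fixed in advance.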
