Similar Documents
20 similar documents found (search time: 31 ms)
1.
This paper proposes a new error function at hidden layers to speed up the training of multilayer perceptrons (MLPs). With this new hidden error function, the layer-by-layer (LBL) algorithm approximately converges to the error backpropagation algorithm with optimum learning rates. In particular, the optimum learning rate for a hidden weight vector appears approximately as the product of two optimum factors, one for minimizing the new hidden error function and the other for assigning hidden targets. The effectiveness of the proposed error function was demonstrated on handwritten-digit recognition and isolated-word recognition tasks: very fast learning convergence was obtained for MLPs, without the stalling problem experienced in conventional LBL algorithms.

2.
Although the backpropagation (BP) scheme is widely used as a learning algorithm for multilayered neural networks, the speed at which BP reaches acceptable errors is unsatisfactory despite improvements such as a momentum factor and an adaptive learning rate in the weight adjustment. To solve this problem, a fast learning algorithm based on the extended Kalman filter (EKF) is presented, whose computational complexity is reduced by several simplifications. In general, however, the Kalman filtering algorithm is known to be sensitive to the nature of the noise, which is usually assumed to be Gaussian. In addition, H∞ theory suggests that the maximum energy gain of the Kalman algorithm from disturbances to the estimation error has no upper bound. EKF-based learning algorithms should therefore be made more robust to variations in the initial values of link weights and thresholds, as well as to the nature of the noise. The paper proposes H∞-learning as a novel learning rule and derives new globally and locally optimized learning algorithms based on H∞-learning. Their learning behavior is analyzed from various points of view using computer simulations. The derived algorithms are also compared, in performance and computational cost, with the conventional BP and EKF learning algorithms.
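The simplified EKF formulation in this abstract is not reproduced here, but the underlying idea, treating the network weights as the state of an extended Kalman filter and the network output as its measurement, can be sketched generically. The network architecture, noise covariances R and Q, and the toy sine-fitting task below are all assumptions for illustration.

```python
# A minimal sketch of EKF-based training for a tiny feedforward network
# (not the paper's simplified EKF): the weight vector is the EKF state,
# the network output is the measurement, and the Jacobian is taken
# numerically. R models the assumed Gaussian measurement noise.
import numpy as np

rng = np.random.default_rng(0)

def net(w, x):
    # 1-3-1 MLP; w packs hidden weights/biases and output weights/bias
    W1, b1, W2, b2 = w[:3], w[3:6], w[6:9], w[9]
    h = np.tanh(W1 * x + b1)
    return float(W2 @ h + b2)

def jacobian(w, x, eps=1e-6):
    # numerical dy/dw, one entry per weight
    J = np.zeros_like(w)
    for i in range(w.size):
        wp = w.copy(); wp[i] += eps
        J[i] = (net(wp, x) - net(w, x)) / eps
    return J

w = rng.normal(scale=0.5, size=10)      # EKF state: the weights
P = np.eye(10) * 10.0                   # state covariance
R, Q = 0.01, 1e-6 * np.eye(10)          # measurement/process noise (assumed)

xs = rng.uniform(-1, 1, 200)
ds = np.sin(np.pi * xs)                 # toy target function

for x, d in zip(xs, ds):
    H = jacobian(w, x)                  # measurement Jacobian (a vector here)
    S = H @ P @ H + R                   # innovation variance (scalar)
    K = P @ H / S                       # Kalman gain
    w = w + K * (d - net(w, x))         # weight (state) update
    P = P - np.outer(K, H @ P) + Q      # covariance update

print("final MSE:", np.mean([(net(w, x) - d) ** 2 for x, d in zip(xs, ds)]))
```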

3.
A hybrid GA + BP learning algorithm for fuzzy logic systems. Cited by: 7 (self-citations: 0, other citations: 7)
A hybrid learning algorithm that embeds the BP algorithm within a GA is proposed to realize self-learning of fuzzy logic systems. The global optimality of the genetic algorithm is used to search for candidate extrema over a wide range, while the error gradient-descent property of the BP algorithm provides fast search in the neighborhood of an extremum, organically combining global optimality with fast search. Simulation results show that the learning efficiency of the hybrid algorithm is significantly higher than that of either GA or BP alone.
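As a rough illustration of the hybrid scheme, the sketch below runs a GA-style global search (truncation selection plus Gaussian mutation stand in for the paper's GA operators) and then refines the best individual by gradient descent. A generic multimodal objective replaces the fuzzy-logic-system parameters, and the population size and rates are assumptions.

```python
# A minimal sketch of the GA + gradient-descent hybrid: GA phase for a
# wide global search, then a BP-style local descent from the best point.
import numpy as np

rng = np.random.default_rng(1)

def loss(p):                       # multimodal: many local minima
    x, y = p
    return (x - 2) ** 2 + (y + 1) ** 2 + 3 * np.sin(3 * x) ** 2 + 3 * np.sin(3 * y) ** 2

def grad(p, eps=1e-6):             # numerical gradient for the descent phase
    g = np.zeros(2)
    for i in range(2):
        d = np.zeros(2); d[i] = eps
        g[i] = (loss(p + d) - loss(p - d)) / (2 * eps)
    return g

# --- GA phase: global search over a wide range ---
pop = rng.uniform(-10, 10, size=(40, 2))
for _ in range(100):
    fit = np.array([loss(p) for p in pop])
    parents = pop[np.argsort(fit)[:20]]            # truncation selection
    children = parents[rng.integers(0, 20, 20)] + rng.normal(0, 0.5, (20, 2))
    pop = np.vstack([parents, children])           # elitism + mutation

best = pop[np.argmin([loss(p) for p in pop])]

# --- BP-style phase: fast local descent near the GA optimum ---
for _ in range(200):
    best = best - 0.05 * grad(best)

print("solution:", best, "loss:", loss(best))
```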

4.
The slow convergence of the back-propagation neural network (BPNN) has become a challenge in data-mining and knowledge-discovery applications because of drawbacks of the gradient-descent (GD) optimization method widely adopted in BPNN learning. To address this, standard optimization techniques such as the conjugate-gradient and Newton methods have been proposed to improve the convergence rate of the BP learning algorithm. This paper presents a heuristic method that adds an adaptive smoothing momentum term to the original BP learning algorithm to speed up convergence. In this improved BP learning algorithm, an adaptive smoothing technique automatically adjusts the momentum of the weight-update formula in terms of the 3σ limits theory. With the adaptive smoothing momentum term, the improved BP learning algorithm trains and converges faster, and generalizes better, than the standard BP learning algorithm. To verify its effectiveness, three typical foreign exchange rates, the British pound (GBP), the euro (EUR), and the Japanese yen (JPY), are chosen as forecasting targets for illustration. Experimental results from homogeneous algorithm comparisons reveal that the proposed BP learning algorithm outperforms the other comparable BP algorithms in performance and convergence rate. Furthermore, empirical results from heterogeneous model comparisons also show its effectiveness.
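The paper's exact 3σ rule is not given in the abstract; one plausible reading, sketched below, keeps a window of recent gradient norms and damps the momentum factor whenever the current gradient falls outside the mean ± 3σ control band. The window length, damping factor, and toy regression task are assumptions.

```python
# A minimal sketch of a momentum term adapted via 3-sigma control limits
# (one plausible reading of the abstract, not the paper's exact rule).
import numpy as np
from collections import deque

rng = np.random.default_rng(2)

def loss_and_grad(w, X, d):
    e = X @ w - d
    return float(e @ e) / len(d), 2 * X.T @ e / len(d)

X = rng.normal(size=(100, 3))
d = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

w = np.zeros(3)
v = np.zeros(3)                       # momentum buffer
eta, mu = 0.05, 0.9
history = deque(maxlen=20)            # recent gradient norms

for step in range(300):
    L, g = loss_and_grad(w, X, d)
    gn = np.linalg.norm(g)
    if len(history) >= 5:
        m, s = np.mean(history), np.std(history) + 1e-12
        mu_t = mu if abs(gn - m) <= 3 * s else 0.1 * mu   # damp outside limits
    else:
        mu_t = mu
    history.append(gn)
    v = mu_t * v - eta * g
    w = w + v

print("weights:", w)
```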

5.
In this paper, fuzzy inference models for pattern classification are developed and fuzzy inference networks based on these models are proposed. Most existing fuzzy rule-based systems have difficulty deriving inference rules and membership functions directly from training data; rules and membership functions are instead obtained from experts. Some approaches use backpropagation (BP)-type learning algorithms to learn the parameters of membership functions from training data, but BP algorithms take a long time to converge and require the number of inference rules to be set in advance, a task that demands considerable experience from the designer. In this paper, self-organizing learning algorithms are proposed for the fuzzy inference networks: the number of inference rules and the membership functions in those rules are determined automatically during training, and the learning speed is fast. The proposed fuzzy inference network (FIN) classifiers possess both the structure and learning ability of neural networks and the fuzzy classification ability of fuzzy algorithms. Simulation results on fuzzy classification of two-dimensional data are presented and compared with those of the fuzzy ARTMAP. The proposed fuzzy inference networks perform better than the fuzzy ARTMAP and need fewer training samples.

6.
A comprehensive backpropagation algorithm for multilayer feedforward neural networks is proposed. The algorithm uses a generalized criterion function that jointly considers absolute and relative errors, and adopts a backpropagation technique that searches in the network's output space.

7.
This paper investigates new learning algorithms (LF I and LF II) based on a Lyapunov function for the training of feedforward neural networks. These algorithms have an interesting parallel with the popular backpropagation (BP) algorithm: the fixed learning rate is replaced by an adaptive learning rate computed using a convergence theorem based on Lyapunov stability theory. LF II, a modified version of LF I, is introduced with the aim of avoiding local minima; the modification also improves convergence speed in some cases. Conditions for achieving the global minimum with this kind of algorithm are studied in detail. The performance of the proposed algorithms is compared with the BP algorithm and extended Kalman filtering (EKF) on three benchmark function-approximation problems: XOR, 3-bit parity, and the 8-3 encoder. The comparisons are made in terms of the number of learning iterations and the computational time required for convergence. The proposed algorithms (LF I and II) converge much faster than the other two algorithms in attaining the same accuracy. Finally, a comparison is made on a complex two-dimensional (2-D) Gabor function, and the effect of the adaptive learning rate on faster convergence is verified. In a nutshell, the investigations in this paper help us better understand the learning procedure of feedforward neural networks in terms of adaptive learning rate, convergence speed, and local minima.
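The abstract does not spell out the LF I/II update, but a common Lyapunov-motivated adaptive learning rate can be sketched: with V = e²/2 and the update Δw = η·J·e, choosing η = μ/‖J‖² with 0 < μ < 2 makes the linearized error contract by the factor (1 − μ) per step. The tiny network and XOR-like task below are assumptions.

```python
# A minimal sketch of a Lyapunov-motivated adaptive learning rate
# (not necessarily the paper's LF I/II rule): eta = mu / ||J||^2,
# 0 < mu < 2, shrinks the linearized error by (1 - mu) each step.
import numpy as np

rng = np.random.default_rng(3)

def forward(w, x):
    W1, b1, W2 = w[:4].reshape(2, 2), w[4:6], w[6:8]
    h = np.tanh(W1 @ x + b1)
    return float(W2 @ h), h

def jacobian(w, x, eps=1e-6):
    J = np.zeros_like(w)
    y0, _ = forward(w, x)
    for i in range(w.size):
        wp = w.copy(); wp[i] += eps
        J[i] = (forward(wp, x)[0] - y0) / eps
    return J

w = rng.normal(scale=0.3, size=8)
mu = 0.5                                          # 0 < mu < 2 for stability

X = rng.uniform(-1, 1, size=(64, 2))
D = np.logical_xor(X[:, 0] > 0, X[:, 1] > 0) * 2.0 - 1.0   # XOR-like targets

for epoch in range(50):
    for x, d in zip(X, D):
        y, _ = forward(w, x)
        J = jacobian(w, x)
        eta = mu / (J @ J + 1e-12)                # adaptive Lyapunov rate
        w = w + eta * J * (d - y)

err = np.mean([(forward(w, x)[0] - d) ** 2 for x, d in zip(X, D)])
print("MSE:", err)
```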

8.
A fast BP learning algorithm based on error amplification. Cited by: 6 (self-citations: 0, other citations: 6)
Current BP learning algorithms based on the gradient-descent principle tend to slow down under the influence of saturation regions. To eliminate this effect in the later stages of training, a new fast BP learning algorithm based on error amplification is proposed. By adaptively amplifying the error term in the weight-update function, the algorithm keeps the weight-update process from stalling in saturation regions, so that BP learning converges quickly to the desired accuracy. Simulations on the 3-parity problem and the Soybean classification problem show that, compared with commonly used methods such as Delta-bar-Delta, the momentum method, and Prime Offset, the proposed method converges to the target accuracy faster without increasing algorithmic complexity or extra CPU time.
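One simple reading of the error-amplification idea, sketched below on the 3-parity problem mentioned in the abstract, scales the error term up as the sigmoid derivative o(1−o) collapses in the saturation regions, so weight updates do not stall there. The amplification formula and its strength k are assumptions, not the paper's exact adaptive rule.

```python
# A minimal sketch of error amplification in sigmoidal BP: the error
# term is scaled up where the derivative o*(1-o) is near zero.
import numpy as np

rng = np.random.default_rng(4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0,0,0],[0,1,1],[1,0,1],[1,1,0],
              [0,0,1],[0,1,0],[1,0,0],[1,1,1]], float)
D = X.sum(axis=1) % 2                      # 3-parity, as in the abstract

n_h = 8
W1 = rng.normal(scale=1.0, size=(3, n_h)); b1 = np.zeros(n_h)
W2 = rng.normal(scale=1.0, size=n_h);      b2 = 0.0
eta, k = 0.5, 4.0                          # k: assumed amplification strength

for epoch in range(5000):
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    E = D - Y
    deriv = Y * (1 - Y)                    # vanishes when Y saturates
    amp = 1.0 + k * (0.25 - deriv) / 0.25  # grows toward 1+k in saturation
    delta_o = (E * amp) * deriv            # amplified output delta
    delta_h = np.outer(delta_o, W2) * H * (1 - H)
    W2 += eta * H.T @ delta_o; b2 += eta * delta_o.sum()
    W1 += eta * X.T @ delta_h; b1 += eta * delta_h.sum(axis=0)

print("outputs:", np.round(Y, 2))
```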

9.
朱慧慧  王耀南 《计算机工程》2012,38(17):182-185,188
Visible foreign particles in health-care wine are tiny and vary widely in shape, which hinders automatic sorting. To address this, a composite neural-network classification method based on the particles' geometric features and invariant-moment features is proposed. A single-layer perceptron performs first-level classification to detect hair-like particles, and a BP network performs second-level classification of the non-hair particles. To speed up BP network training, an improved learning algorithm with an adaptively adjustable momentum factor and learning rate is designed. Experimental results show that the method achieves high recognition accuracy and fast recognition speed.
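A minimal sketch of the two-level cascade follows: a single-layer perceptron first separates one easily separable class (a stand-in for the hair-like particles), and a small BP network then classifies the remainder. The 2-D toy features and network sizes are assumptions; the paper's geometric and invariant-moment features are not reproduced.

```python
# A minimal two-stage classifier: perceptron gate, then a tiny BP net.
import numpy as np

rng = np.random.default_rng(10)

# toy 2-D "features": class h = hair-like stand-in, classes a/b = other types
n = 300
Xh = rng.normal([4, 0], 0.5, (n, 2))        # hair-like: linearly separable
Xa = rng.normal([0, 1], 0.5, (n, 2))
Xb = rng.normal([0, -1], 0.5, (n, 2))

# --- level 1: perceptron, hair vs. rest ---
X1 = np.vstack([Xh, Xa, Xb])
t1 = np.r_[np.ones(n), -np.ones(2 * n)]
w1, b1 = np.zeros(2), 0.0
for _ in range(50):
    for x, t in zip(X1, t1):
        if t * (x @ w1 + b1) <= 0:          # perceptron rule on mistakes
            w1 += t * x; b1 += t

# --- level 2: tiny BP net, type a vs. type b on the "rest" ---
X2 = np.vstack([Xa, Xb])
t2 = np.r_[np.ones(n), np.zeros(n)]
Wh = rng.normal(scale=0.5, size=(2, 4)); bh = np.zeros(4)
wo = rng.normal(scale=0.5, size=4); bo = 0.0
for _ in range(500):
    H = np.tanh(X2 @ Wh + bh)
    y = 1 / (1 + np.exp(-(H @ wo + bo)))
    delta_o = (t2 - y) * y * (1 - y)
    delta_h = np.outer(delta_o, wo) * (1 - H ** 2)
    wo += 0.1 * H.T @ delta_o; bo += 0.1 * delta_o.sum()
    Wh += 0.1 * X2.T @ delta_h; bh += 0.1 * delta_h.sum(axis=0)

def classify(x):
    if x @ w1 + b1 > 0:
        return "hair-like"
    h = np.tanh(x @ Wh + bh)
    return "type-a" if 1 / (1 + np.exp(-(h @ wo + bo))) > 0.5 else "type-b"

print([classify(p) for p in (Xh[0], Xa[0], Xb[0])])
```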

10.
In the conventional backpropagation (BP) learning algorithm used to train the connecting weights of an artificial neural network (ANN), a sigmoidal activation function with a fixed slope is used. This limits training speed because only the weights of the different layers are adjusted by the conventional BP algorithm. To accelerate convergence during the training phase of the ANN, in addition to the weight updates, the slope of the sigmoid function associated with each artificial neuron can also be adjusted by a newly developed learning rule. To this end, new BP learning rules for adjusting the slope of the activation function associated with each neuron are derived in this paper. The combined rules, for both connecting weights and sigmoid slopes, are then applied to the ANN structure to achieve faster training. In addition, two benchmark problems, classification and nonlinear system identification, are solved using the trained ANN. Simulation results demonstrate that, in general, the proposed BP learning rules for slope and weight adjustment provide superior convergence during training, as well as improved root mean square error and mean absolute deviation on the classification and nonlinear system-identification problems.
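The stated rule has a direct gradient form: for o = 1/(1 + exp(−λ·net)), we have ∂o/∂net = λ·o(1−o) and ∂o/∂λ = net·o(1−o), so each neuron's slope λ receives its own gradient update alongside the weights. The single-unit sketch below illustrates this; the learning rates and toy task are assumptions.

```python
# A minimal sketch of joint weight-and-slope learning for one sigmoidal
# unit; the same per-neuron rule extends to an MLP.
import numpy as np

rng = np.random.default_rng(5)

X = rng.normal(size=(200, 2))
D = (X[:, 0] + 2 * X[:, 1] > 0).astype(float)   # toy separable task

w = rng.normal(scale=0.1, size=2)
b, lam = 0.0, 1.0                               # lam: learnable slope
eta_w, eta_lam = 0.5, 0.1

for epoch in range(200):
    net = X @ w + b
    o = 1.0 / (1.0 + np.exp(-lam * net))
    e = D - o
    common = e * o * (1 - o)                    # shared factor of both grads
    w += eta_w * X.T @ (common * lam) / len(D)  # dE/dw uses lam*o*(1-o)
    b += eta_w * (common * lam).mean()
    lam += eta_lam * (common * net).mean()      # dE/dlam uses net*o*(1-o)

print("slope:", round(lam, 3), "accuracy:", ((o > 0.5) == D).mean())
```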

11.
This work presents two novel approaches, backpropagation (BP) with a magnified gradient function (MGFPROP) and deterministic weight modification (DWM), to speed up the convergence rate and improve the global convergence capability of the standard BP learning algorithm. MGFPROP increases the convergence rate by magnifying the gradient function of the activation function, while DWM reduces the system error by changing the weights of a multilayered feedforward neural network in a deterministic way. Simulation results show that both approaches outperform BP and other modified BP algorithms on a number of learning problems. Moreover, integrating the two approaches into a new algorithm called MDPROP further improves on MGFPROP and DWM: in our simulations, MDPROP always outperforms BP and other modified BP algorithms in convergence rate and global convergence capability.
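One published form of a magnified gradient function raises the sigmoid derivative term o(1−o), which never exceeds 0.25, to the power 1/S with S ≥ 1, enlarging it in the flat regions that slow BP down; whether this matches MGFPROP's exact magnification is an assumption. The sketch below shows the effect at a saturated unit.

```python
# A minimal sketch of a magnified gradient function: since o*(1-o) < 1,
# raising it to the power 1/S (S >= 1) enlarges the delta in flat spots.
import numpy as np

def magnified_delta(error, o, S=2.0):
    """Output-layer delta with the derivative term magnified; S=1 is plain BP."""
    deriv = o * (1 - o)
    return error * deriv ** (1.0 / S)

# Quick comparison at a saturated unit (o close to 1):
o = 0.99
for S in (1.0, 2.0, 4.0):
    print(f"S={S}: delta = {magnified_delta(1.0, o, S):.4f}")
```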

12.
An adaptive algorithm based on simulated annealing. Cited by: 2 (self-citations: 0, other citations: 2)
To address the slow convergence of the conventional BP algorithm and its tendency to become trapped in local minima, this paper proposes a new BP algorithm, SASSFBP. The algorithm dynamically adjusts the weight-step factor according to the signs and relative magnitudes of the two most recent gradients during training, which improves the convergence speed of the neural network, and combines this with simulated annealing to avoid local minima. Simulation results show that SASSFBP clearly outperforms the conventional BP algorithm in convergence speed, computational accuracy, and the ability to escape local minima.
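A plausible sketch of the two ingredients follows: per-weight step factors that grow when the last two gradients agree in sign and shrink when they disagree, combined with an annealed random kick accepted under the Metropolis criterion. The exact step-factor rule, the use of relative gradient magnitudes, and SASSFBP's cooling schedule are not given in the abstract, so the constants below are assumptions.

```python
# A minimal sketch: sign-based per-weight step adaptation + simulated
# annealing on a toy multimodal loss surface.
import numpy as np

rng = np.random.default_rng(6)

def loss(w):
    return float(np.sum((w - 1) ** 2 + np.sin(5 * w) ** 2))

def grad(w, eps=1e-6):
    g = np.zeros_like(w)
    for i in range(w.size):
        d = np.zeros_like(w); d[i] = eps
        g[i] = (loss(w + d) - loss(w - d)) / (2 * eps)
    return g

w = rng.uniform(-3, 3, size=4)
step = np.full(4, 0.05)                       # per-weight step factors
g_prev = np.zeros(4)
T = 1.0                                       # annealing temperature

for it in range(500):
    g = grad(w)
    agree = np.sign(g) == np.sign(g_prev)
    step = np.where(agree, step * 1.2, step * 0.5)   # grow/shrink step
    w = w - step * np.sign(g)
    # simulated-annealing kick: accept an uphill move with prob e^(-dE/T)
    cand = w + rng.normal(0, T * 0.1, size=4)
    dE = loss(cand) - loss(w)
    if dE < 0 or rng.random() < np.exp(-dE / T):
        w = cand
    g_prev = g
    T *= 0.99                                  # geometric cooling

print("w:", np.round(w, 3), "loss:", round(loss(w), 4))
```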

13.
姜雷  李新 《计算机时代》2010,(12):29-30
In standard BP neural network training, the error function serves as the basis for weight adjustment and the weights are updated with a fixed learning rate, which often makes learning too slow or even prevents convergence. From the standpoint of convergence stability and speed, this paper analyzes the error function and the weight-update function, discusses the specific role of the learning rate in the algorithm, and proposes a method that dynamically adjusts the learning rate according to changes in the error. The method is simple and practical: it effectively prevents divergence during training and improves the convergence speed and stability of the network.
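A minimal "bold driver"-style sketch of error-driven learning-rate adaptation is given below: grow η while the error keeps falling, and shrink η and retract the step when the error rises, which guards against divergence. Whether this matches the paper's exact rule is an assumption.

```python
# A minimal sketch of error-driven learning-rate adaptation on a toy
# linear regression task.
import numpy as np

rng = np.random.default_rng(7)

X = rng.normal(size=(100, 3))
d = X @ np.array([0.5, -1.0, 2.0])

w = np.zeros(3)
eta, prev_err = 0.01, np.inf

for step in range(200):
    e = X @ w - d
    err = float(e @ e) / len(d)
    g = 2 * X.T @ e / len(d)
    if err < prev_err:
        eta *= 1.05                  # error fell: accelerate
        w_backup, prev_err = w.copy(), err
        w = w - eta * g
    else:
        eta *= 0.5                   # error rose: retract and slow down
        w = w_backup - eta * g

print("eta:", round(eta, 4), "final error:", round(prev_err, 6))
```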

14.
For a recurrent neural network (RNN), the transient response is a critical issue, especially in real-time signal-processing applications. Conventional RNN training algorithms, such as backpropagation through time (BPTT) and real-time recurrent learning (RTRL), have not adequately addressed this problem because they suffer from slow convergence. While increasing the learning rate may improve performance, it can destabilize training through weight divergence; an optimal tradeoff between RNN training speed and weight convergence is therefore desired. In this paper, a robust adaptive gradient-descent (RAGD) training algorithm for RNNs is developed based on a novel hybrid training concept: it switches the training pattern between standard real-time online backpropagation (BP) and RTRL according to derived convergence and stability conditions. The weight convergence and L2-stability of the algorithm are derived via the conic sector theorem. The optimized adaptive learning maximizes the training speed of the RNN for each weight update without violating the stability and convergence criteria. Computer simulations demonstrate the applicability of the theoretical results.

15.
A novel adaptive neural network algorithm for regression estimation. Cited by: 5 (self-citations: 0, other citations: 5)
Combining the advantages of adaptive resonance theory and field theory, and targeting the characteristics of regression estimation problems, a novel neural network regression estimation algorithm, FTART3, is proposed. The algorithm learns quickly and generalizes well; it supports incremental learning and overcomes the drawback of BP-type algorithms that the number of hidden neurons must be set manually. Four experiments (a straight line, a sine wave, and two- and three-dimensional Mexican-hat functions) show that FTART3 outperforms the BP-type algorithms currently used for regression estimation in both function approximation quality and training time cost.

16.
To address the slow training, large error, and susceptibility to local minima of the traditional BP neural network, an improved composite error function is designed to replace the traditional global mean-squared error function and thereby raise the learning rate. In addition, a new BP neural network with improved layer-wise, dynamically adjusted learning rates is adopted to classify images of pavement cracks. Experimental results show that, compared with the traditional method, the improved algorithm achieves clearly higher detection accuracy and speed.

17.
Conventional gradient-descent learning algorithms for soft computing systems suffer from the learning-speed bottleneck and the local-minima problem. To effectively solve both problems, the n-variable constructive granular system with high-speed granular constructive learning is proposed based on granular computing and soft computing, and is proved to be a universal approximator. The fast granular constructive learning algorithm greatly speeds up granular knowledge discovery by directly calculating all parameters of the n-variable constructive granular system from the training data, and then constructs the system to any required accuracy using a small number of granular rules. Simulation results on predictive granular knowledge discovery indicate that the direct-calculation-based granular constructive algorithm is better than the conventional gradient-descent learning algorithm in terms of learning speed, learning error, and prediction error.

18.
A fast learning algorithm for training multilayer feedforward neural networks (FNNs) using a fading-memory extended Kalman filter (FMEKF) is presented first, along with a technique using a self-adjusting, time-varying forgetting factor. A U-D factorization-based FMEKF is then proposed to further improve the learning rate and accuracy of the FNN. In comparison with backpropagation (BP) and existing EKF-based learning algorithms, the proposed U-D factorization-based FMEKF provides much more accurate learning results using fewer hidden nodes. It has an improved convergence rate and numerical stability (robustness), is less sensitive to start-up parameters (e.g., initial weights and covariance matrix) and to randomness in the observed data, has good generalization ability, and needs less training time to achieve a specified learning accuracy. Simulation results on the modeling and identification of nonlinear dynamic systems show the effectiveness and efficiency of the proposed algorithm.

19.
Training recurrent neural networks (RNNs) introduces considerable computational complexity because of the need for gradient evaluations, and achieving fast convergence with low computational complexity remains a challenging, open problem. The transient response of the learning process is also a critical issue, especially for online applications. Conventional RNN training algorithms such as backpropagation through time and real-time recurrent learning have not adequately satisfied these requirements because they often converge slowly, and choosing a large learning rate to improve performance may destabilize training through weight divergence. In this paper, a novel RNN training algorithm, robust recurrent simultaneous perturbation stochastic approximation (RRSPSA), is developed with a specially designed recurrent hybrid adaptive parameter and adaptive learning rates. RRSPSA is a powerful twin-engine simultaneous perturbation stochastic approximation (SPSA)-type algorithm: it uses three specially designed adaptive parameters to maximize training speed for a recurrent training signal while retaining the weight convergence properties of the original SPSA algorithm, requiring only two objective-function measurements per iteration. RRSPSA is proven to guarantee weight convergence and system stability in the sense of a Lyapunov function. Computer simulations demonstrate the applicability of the theoretical results.
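For reference, the plain SPSA gradient estimate that RRSPSA builds on is easy to sketch: each iteration perturbs all weights simultaneously with a random ±1 vector and needs only two loss evaluations regardless of dimension. RRSPSA's three adaptive parameters and stability machinery are not reproduced; the gain constants below are standard textbook choices, not the paper's.

```python
# A minimal sketch of the plain SPSA gradient estimate and update.
import numpy as np

rng = np.random.default_rng(8)

def loss(w):                                   # stand-in for the RNN loss
    return float(np.sum((w - 0.5) ** 2))

w = rng.normal(size=20)
a, c, A, alpha, gamma = 0.2, 0.1, 10.0, 0.602, 0.101   # standard gain choices

for k in range(500):
    ak = a / (k + 1 + A) ** alpha              # decaying step size
    ck = c / (k + 1) ** gamma                  # decaying perturbation size
    delta = rng.choice([-1.0, 1.0], size=w.size)    # Rademacher perturbation
    # two measurements only; for +/-1 entries, dividing by delta_i
    # equals multiplying by delta_i
    ghat = (loss(w + ck * delta) - loss(w - ck * delta)) / (2 * ck) * delta
    w = w - ak * ghat

print("loss:", round(loss(w), 6))
```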

20.
丁一 《计算机仿真》2007,24(6):142-145
Neural network ensembles are a research hotspot in neural computing and have found mature applications in many fields. A neural network ensemble trains a finite number of neural networks on the same problem; the ensemble's output for a given input example is jointly determined by the outputs of its constituent networks on that example. Negative correlation learning is a training method for neural network ensembles that encourages the individual networks in the ensemble to learn different parts of the training set, so that the ensemble as a whole learns the whole training data better. The improved negative correlation learning method applies a BP algorithm with a momentum term to the error function; it combines the advantages of the original negative correlation learning method and of BP with momentum, yielding a batch learning algorithm with strong generalization ability and fast learning.
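A minimal sketch of negative correlation learning with momentum follows, using small linear models as ensemble members so the error signal stands out: with the penalty p_i = (F_i - F_bar) * sum over j != i of (F_j - F_bar), the usual simplified gradient for member i is (F_i - d) - lambda * (F_i - F_bar). The member type, lambda, and learning constants are assumptions.

```python
# A minimal sketch of negative correlation learning with a momentum term.
import numpy as np

rng = np.random.default_rng(9)

X = rng.normal(size=(200, 4))
d = X @ np.array([1.0, -0.5, 0.3, 2.0]) + 0.1 * rng.normal(size=200)

M, lam, eta, mom = 4, 0.5, 0.05, 0.9
W = rng.normal(scale=0.5, size=(M, 4))       # one weight row per member
V = np.zeros_like(W)                          # momentum buffers

for epoch in range(300):
    F = W @ X.T                               # (M, n) member outputs
    Fbar = F.mean(axis=0)                     # ensemble output
    for i in range(M):
        # NCL error signal: fit the target, but stay decorrelated
        err = (F[i] - d) - lam * (F[i] - Fbar)
        g = err @ X / len(d)
        V[i] = mom * V[i] - eta * g           # BP step with momentum
        W[i] += V[i]

print("ensemble MSE:", round(float(np.mean((Fbar - d) ** 2)), 5))
```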

