Similar Documents
 Found 20 similar documents (search time: 31 ms)
1.
Control system implementation is one of the major difficulties in rehabilitation robot design. A newly developed adaptive impedance controller based on an evolutionary dynamic recurrent fuzzy neural network (EDRFNN) is presented, in which the desired impedance between the robot and the impaired limb is regulated in real time according to the impaired limb's physical recovery condition. First, the impaired limb's damping and stiffness parameters, which characterize its physical recovery condition, are estimated online using a slide average least squares (SALS) identification algorithm. Then, hybrid learning algorithms for the EDRFNN impedance controller are proposed, comprising a genetic algorithm (GA), hybrid evolutionary programming (HEP), and a dynamic back-propagation (BP) learning algorithm. GA and HEP optimize the DRFNN parameters offline to obtain suboptimal impedance control parameters, which the dynamic BP learning algorithm then fine-tunes online based on error gradient descent. Moreover, the convergence of the closed-loop system is proven using a discrete-type Lyapunov function, guaranteeing global convergence of the tracking error. Finally, simulation results show that the proposed controller provides good dynamic control performance and robustness with respect to changes in the impaired limb's physical condition.
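As a rough illustration of the SALS identification step (the exact formulation is in the paper; the linear force model, window length, and all numbers below are assumptions for the sketch), a sliding-window least-squares fit can recover a limb's stiffness and damping from motion and force samples:

```python
import numpy as np

# Hypothetical illustration: estimate a limb's stiffness K and damping B from
# force/position/velocity samples with a sliding-window least-squares fit, in
# the spirit of the paper's SALS identification (exact formulation assumed).
def sliding_ls_estimate(x, xdot, f, window=50):
    """Fit f ~ K*x + B*xdot over the most recent `window` samples."""
    X = np.column_stack([x[-window:], xdot[-window:]])   # regressor matrix
    (K, B), *_ = np.linalg.lstsq(X, f[-window:], rcond=None)
    return K, B

# Synthetic data: true stiffness 120 N/m, damping 8 N*s/m, plus sensor noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 5, 500)
x = 0.05 * np.sin(2 * np.pi * t)
xdot = np.gradient(x, t)
f = 120 * x + 8 * xdot + rng.normal(0, 0.01, t.size)
print(sliding_ls_estimate(x, xdot, f))   # approximately (120, 8)
```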

2.
Parameter Incremental Learning Algorithm for Neural Networks
In this paper, a novel stochastic (online) training algorithm for neural networks, named the parameter incremental learning (PIL) algorithm, is proposed and developed. The main idea of the PIL strategy is that the learning algorithm should not only adapt to the newly presented input-output training pattern by adjusting parameters, but also preserve the prior results. A general PIL algorithm for feedforward neural networks is accordingly derived as the first-order approximate solution to an optimization problem whose performance index combines proper measures of preservation and adaptation. PIL algorithms for the multilayer perceptron (MLP) are subsequently derived. Numerical studies show that, for all three benchmark problems used in this paper, the PIL algorithm for the MLP is measurably superior to the standard online backpropagation (BP) algorithm and the stochastic diagonal Levenberg-Marquardt (SDLM) algorithm in terms of convergence speed and accuracy. Another appealing feature of the PIL algorithm is that it is computationally as simple and as easy to use as the BP algorithm. It can therefore be applied, with better performance, to any situation where the standard online BP algorithm is applicable.
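The "preserve while adapting" idea lends itself to a compact sketch. The derivation below is my reading, not the paper's exact update: linearizing the pattern error e(w + dw) ~ e - g.dw and minimizing ||dw||^2 + lam*(e - g.dw)^2 gives a closed-form damped step, shown here on a linear neuron:

```python
import numpy as np

# Minimal sketch of a preservation-plus-adaptation step (assumed form, not the
# paper's exact PIL update): stay close to the current weights while reducing
# the error on the newly presented pattern.
def pil_step(w, g, e, lam=0.5):
    """w: weights, g: gradient of the output w.r.t. w, e: pattern error."""
    return w + (lam * e / (1.0 + lam * (g @ g))) * g

# Toy usage: one linear neuron y = w.x trained pattern by pattern.
rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0])
w = np.zeros(2)
for _ in range(200):
    x = rng.normal(size=2)
    e = w_true @ x - w @ x          # error on the newly presented pattern
    w = pil_step(w, x, e)           # for a linear neuron, g = x
print(w)                            # approximately [2, -1]
```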

3.
This article presents a hardware implementation of a floating-point processor (FPP) on a field programmable gate array (FPGA) chip to realize a radial basis function (RBF) neural network for general-purpose pattern recognition and nonlinear control. The floating-point processor executes the nonlinear functions required in the parallel calculation of the back-propagation algorithm. Internal weights of the RBF network are updated by the online-learning back-propagation algorithm. The online learning process of the RBF chip is verified numerically against an RBF neural network implemented in MATLAB. The performance of the designed RBF neural chip is then tested on real-time pattern classification of XOR logic, with results evaluated against the MATLAB implementation through extensive experimental studies.
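A software rendering of what the chip computes may help: a Gaussian RBF forward pass with an online delta-rule update of the output weights, demonstrated on the XOR task mentioned above (centers, widths, and the learning rate are assumptions made for the sketch; the chip also adapts internal parameters):

```python
import numpy as np

# Sketch of the RBF forward pass and online output-weight update.
def rbf_forward(x, centers, sigma, w):
    phi = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2 * sigma ** 2))
    return phi, phi @ w

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])
centers, sigma = X.copy(), 0.5          # one Gaussian unit per XOR corner
w = np.zeros(4)
for epoch in range(200):
    for x_i, y_i in zip(X, y):
        phi, out = rbf_forward(x_i, centers, sigma, w)
        w += 0.5 * (y_i - out) * phi    # online delta-rule update
print([round(rbf_forward(x_i, centers, sigma, w)[1], 2) for x_i in X])
```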

4.
This work presents two novel approaches, backpropagation (BP) with a magnified gradient function (MGFPROP) and deterministic weight modification (DWM), to speed up the convergence rate and improve the global convergence capability of the standard BP learning algorithm. MGFPROP increases the convergence rate by magnifying the gradient of the activation function, while DWM reduces the system error by changing the weights of a multilayered feedforward neural network in a deterministic way. Simulation results show that both approaches outperform BP and other modified BP algorithms on a number of learning problems. Moreover, integrating the two approaches into a new algorithm, MDPROP, further improves on MGFPROP and DWM: in our simulations, MDPROP always outperforms BP and other modified BP algorithms in terms of convergence rate and global convergence capability.
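The gradient-magnification idea can be sketched in a few lines. The power form below is an assumption standing in for the published MGFPROP formula; the point is only that raising the sigmoid derivative o(1-o) to a fractional power keeps it from vanishing when a unit saturates:

```python
import numpy as np

# When a sigmoid unit saturates, its derivative o*(1-o) vanishes and BP
# stalls. Raising that derivative to a fractional power 1/S (S >= 1) keeps it
# larger near saturation; S = 1 recovers standard BP. The exact MGFPROP
# formula is in the paper; this power form is an assumed stand-in.
def magnified_deriv(o, S=2.0):
    return (o * (1.0 - o)) ** (1.0 / S)

o = np.array([0.01, 0.5, 0.99])   # saturated, mid-range, saturated outputs
print(o * (1 - o))                # standard derivative: [0.0099, 0.25, 0.0099]
print(magnified_deriv(o))         # magnified:           [~0.099, 0.5, ~0.099]
```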

5.
The slow convergence of the back-propagation neural network (BPNN) has become a challenge in data-mining and knowledge-discovery applications, owing to drawbacks of the gradient descent (GD) optimization method widely adopted in BPNN learning. Standard optimization techniques such as the conjugate-gradient and Newton methods have been proposed to improve the convergence rate of the BP learning algorithm. This paper presents a heuristic method that adds an adaptive smoothing momentum term to the original BP learning algorithm to speed up convergence. In this improved BP learning algorithm, an adaptive smoothing technique adjusts the momentum of the weight-updating formula automatically according to "3σ limits theory." With the adaptive smoothing momentum term, the improved algorithm trains and converges faster, and generalizes better, than the standard BP learning algorithm. To verify its effectiveness, three typical foreign exchange rates, the British pound (GBP), euro (EUR), and Japanese yen (JPY), are chosen as forecasting targets. Experimental results from homogeneous algorithm comparisons reveal that the proposed BP learning algorithm outperforms comparable BP algorithms in performance and convergence rate, and empirical results from heterogeneous model comparisons further confirm its effectiveness.
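A loose sketch of the 3σ-limits idea (the paper's exact rule differs; the growth and decay factors and the smoothing constants here are assumptions): track a smoothed mean and variance of the training error, grow the momentum while the newest error stays inside the mean plus or minus 3σ control band, and cut it back otherwise:

```python
import numpy as np

class AdaptiveMomentum:
    """Adapt a momentum coefficient from a smoothed error signal (loose sketch)."""
    def __init__(self, beta=0.9, alpha=0.1):
        self.mu, self.var, self.beta, self.alpha = None, None, beta, alpha

    def update(self, err):
        if self.mu is None:                      # first sample seeds the stats
            self.mu, self.var = err, (0.1 * err) ** 2
            return self.beta
        in_control = abs(err - self.mu) <= 3.0 * np.sqrt(self.var)
        # Grow momentum while training looks statistically "in control";
        # cut it back when the error jumps outside the 3-sigma band.
        self.beta = min(0.99, 1.05 * self.beta) if in_control else 0.5 * self.beta
        self.mu += self.alpha * (err - self.mu)
        self.var += self.alpha * ((err - self.mu) ** 2 - self.var)
        return self.beta

am = AdaptiveMomentum()
print([round(am.update(e), 3) for e in (1.0, 0.95, 0.96, 0.94, 3.0, 0.9)])
# momentum grows while errors stay in-band and is halved at the 3.0 shock
```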

6.
This paper investigates new learning algorithms (LF I and LF II) based on a Lyapunov function for training feedforward neural networks. These algorithms have an interesting parallel with the popular backpropagation (BP) algorithm: the fixed learning rate is replaced by an adaptive learning rate computed using a convergence theorem based on Lyapunov stability theory. LF II, a modified version of LF I, is introduced with the aim of avoiding local minima; this modification also improves convergence speed in some cases. Conditions for achieving the global minimum with this kind of algorithm are studied in detail. The performance of the proposed algorithms is compared with the BP algorithm and extended Kalman filtering (EKF) on three benchmark function approximation problems: XOR, 3-bit parity, and the 8-3 encoder. Comparisons are made in terms of the number of learning iterations and the computational time required for convergence; the proposed algorithms (LF I and II) converge much faster than the other two at the same accuracy. Finally, the comparison is extended to a complex two-dimensional (2-D) Gabor function, verifying the effect of the adaptive learning rate on faster convergence. In a nutshell, these investigations help us better understand the learning procedure of feedforward neural networks in terms of adaptive learning rate, convergence speed, and local minima.
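The Lyapunov-derived adaptive learning rate admits a one-neuron illustration (this is my rendering of the general idea, not the paper's LF I/II): with V = e^2/2 and the linearization e_new ~ e - J.dw, the step below makes V strictly decrease for 0 < mu < 2, so the effective learning rate eta varies with the error and Jacobian instead of being fixed:

```python
import numpy as np

# Adaptive learning rate from a Lyapunov argument, one linear neuron.
def lyapunov_step(w, x, target, mu=0.8):
    e = target - w @ x                 # scalar output error
    J = x                              # Jacobian of the output w.r.t. w
    eta = mu * e / (J @ J + 1e-12)     # error-dependent learning rate
    return w + eta * J                 # gives e_new = (1 - mu) * e exactly here

w = np.zeros(3)
x, t = np.array([1.0, 2.0, -1.0]), 4.0
for _ in range(5):
    w = lyapunov_step(w, x, t)
    print(round(t - w @ x, 4))         # error contracts by (1 - mu) each step
```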

7.
Input data representation strongly affects convergence in neural learning. In this paper, within an analytical and statistical framework, the effect of the distribution characteristics of the input pattern vectors on the performance of the back-propagation (BP) algorithm is established for a function approximation problem in which parameters of an articulatory speech synthesizer are estimated from acoustic input data. The aim is to determine the optimal statistical characteristics of the acoustic input patterns in order to improve neural learning. Improvement is obtained through a modification of the statistical characteristics of the input data, which effectively reduces the occurrence of node saturation in the hidden layer.
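As a minimal illustration of the remedy (the paper derives task-specific optimal statistics for acoustic data; a plain z-score transform is used here as the simplest stand-in), reshaping the input statistics sharply reduces how often hidden units are driven into saturation:

```python
import numpy as np

# Standardizing badly scaled inputs keeps hidden units in their linear region.
def standardize(X):
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)

rng = np.random.default_rng(2)
X = rng.normal(loc=50.0, scale=20.0, size=(1000, 8))   # badly scaled inputs
W = rng.normal(size=(8, 4))
sat = lambda Z: np.mean(np.abs(np.tanh(Z)) > 0.99)     # fraction saturated
print(sat(X @ W), sat(standardize(X) @ W))             # before vs after
```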

8.
Research on Deep Learning Application Techniques
This paper presents a research survey of deep learning application techniques. It elaborates the greedy layer-wise training method for deep learning, in which RBMs (Restricted Boltzmann Machines) are pre-trained layer by layer and then fine-tuned with BP (back-propagation). It compares and analyzes three gradient-descent variants used in the BP algorithm, recommending stochastic gradient descent for online learning systems and stochastic mini-batch gradient descent for static offline learning systems. It summarizes the structural features of deep architectures and recommends the currently most popular five-layer deep network design method. It analyzes why feedforward neural networks require nonlinear activation functions, reviews the advantages of commonly used activation functions, and recommends the ReLU (rectified linear units) activation function. Finally, it briefly summarizes the characteristics and application scenarios of newer deep networks such as deep CNNs (Convolutional Neural Networks), deep RNNs (recurrent neural networks), and LSTM (long short-term memory networks), and outlines likely future directions for deep learning.  相似文献
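Two of the survey's recommendations, ReLU activations and stochastic mini-batch gradient descent for static offline training, fit in a short sketch (the task, layer sizes, learning rate, and batch size below are arbitrary choices for illustration):

```python
import numpy as np

# One hidden ReLU layer trained with mini-batch SGD on a squared loss.
rng = np.random.default_rng(3)
X = rng.normal(size=(512, 10))
y = (X[:, :2].sum(axis=1, keepdims=True) > 0).astype(float)
W1, b1 = 0.1 * rng.normal(size=(10, 32)), np.zeros(32)
W2, b2 = 0.1 * rng.normal(size=(32, 1)), np.zeros(1)
lr = 0.5
for epoch in range(100):
    for i in range(0, len(X), 64):                   # mini-batches of 64
        xb, yb = X[i:i+64], y[i:i+64]
        h = np.maximum(0.0, xb @ W1 + b1)            # ReLU hidden layer
        out = h @ W2 + b2
        d_out = (out - yb) / len(xb)                 # squared-loss gradient
        d_h = (d_out @ W2.T) * (h > 0)               # gradient through ReLU
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * xb.T @ d_h;  b1 -= lr * d_h.sum(axis=0)
h = np.maximum(0.0, X @ W1 + b1)
print(np.mean(((h @ W2 + b2) > 0.5) == (y > 0.5)))   # training accuracy
```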

9.
Implementing online natural gradient learning: problems and solutions
Online natural gradient learning is an efficient algorithm that resolves the slow learning speed and poor performance of the standard gradient descent method; however, several problems arise in implementing it. In this paper, we propose a new algorithm to solve these problems and compare it with other known online learning algorithms, including the Almeida-Langlois-Amaral-Plakhov (ALAP) algorithm, Vario-η, local adaptive learning rates, and learning with momentum, using sample data sets from Proben1 and normalized handwritten digits automatically scanned from envelopes by the U.S. Postal Service. The strengths and weaknesses of these algorithms are analyzed and tested empirically. We find that using the online training error as the criterion for deciding whether the learning rate should change is not appropriate, and that our new algorithm performs better than the other existing online algorithms.
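The crux of implementing natural gradient online is avoiding a matrix inversion at every step. The sketch below uses an Amari-style recursive Sherman-Morrison update of the inverse Fisher estimate for a linear-Gaussian model; the paper's own algorithm differs in detail, and the step sizes and prior below are assumptions:

```python
import numpy as np

# Track the Fisher estimate G <- (1-eps)*G + eps*x x^T directly in inverse
# form via the Sherman-Morrison identity, so no per-step inversion is needed.
def natural_gradient_step(w, x, e, Ginv, eps=0.01, lr=0.2):
    u = Ginv @ x
    Ginv = (Ginv - eps * np.outer(u, u) / ((1 - eps) + eps * (x @ u))) / (1 - eps)
    return w + lr * e * (Ginv @ x), Ginv

rng = np.random.default_rng(4)
w_true, w = np.array([1.0, -2.0, 0.5]), np.zeros(3)
Ginv = 0.01 * np.eye(3)                # conservative prior on the Fisher matrix
for _ in range(2000):
    x = rng.normal(size=3) * np.array([1.0, 10.0, 0.1])   # badly scaled inputs
    e = (w_true - w) @ x                                  # per-sample error
    w, Ginv = natural_gradient_step(w, x, e, Ginv)
print(w)   # close to w_true despite the poor input conditioning
```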

10.
A hybrid learning algorithm is proposed that uses a genetic algorithm to optimize both the structure and the regularization coefficient of a feedforward neural network. The method is compared with the BP algorithm with an added momentum term and with a neural network method using a fixed regularization coefficient. Numerical results show that the method offers high accuracy, fast learning convergence, and strong generalization ability.  相似文献
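A toy rendering of the hybrid scheme (ridge regression stands in for the inner network training, and the GA here searches only the regularization coefficient; the operators and rates are assumptions):

```python
import numpy as np

# A genetic algorithm searches log10(lambda); an inner fit scores candidates
# on held-out data. Ridge regression keeps the inner loop short.
rng = np.random.default_rng(5)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, 0.0, 2.0, 0.0, -1.0]) + rng.normal(0, 0.5, 200)
Xtr, ytr, Xva, yva = X[:150], y[:150], X[150:], y[150:]

def fitness(log_lam):
    lam = 10.0 ** log_lam
    w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(5), Xtr.T @ ytr)
    return -np.mean((Xva @ w - yva) ** 2)           # higher is better

pop = rng.uniform(-4, 2, size=20)                   # population of log10(lambda)
for gen in range(30):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]         # truncation selection
    children = rng.choice(parents, 10) + rng.normal(0, 0.3, 10)  # mutation
    pop = np.concatenate([parents, children])
print(10.0 ** pop[np.argmax([fitness(p) for p in pop])])
```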

11.
Previous work on analog VLSI implementation of multilayer perceptrons with on-chip learning has mainly targeted algorithms such as back-propagation. Although back-propagation is efficient, its implementation in analog VLSI requires excessive computational hardware. We show that gradient descent with direct approximation of the gradient, instead of back-propagation, is more economical for parallel analog implementations, and that this technique (called 'weight perturbation') is also suitable for multilayer recurrent networks. A discrete-level analog implementation showing the training of an XOR network is presented as an example.
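Weight perturbation is simple enough to show in full: estimate each weight's gradient by nudging the weight and measuring the change in error, with no backward pass, which is what makes it economical in analog hardware. The sketch below runs the scheme in software on the XOR example (the perturbation size and learning rate are assumed values):

```python
import numpy as np

# Finite-difference gradient estimation, one weight at a time.
def loss(W1, W2, X, y):
    h = np.tanh(X @ W1)
    return np.mean((np.tanh(h @ W2).ravel() - y) ** 2)

rng = np.random.default_rng(6)
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], float)  # bias column
y = np.array([-1.0, 1.0, 1.0, -1.0])
W1, W2 = rng.normal(0, 0.5, (3, 4)), rng.normal(0, 0.5, (4, 1))
pert, lr = 1e-4, 0.5
for step in range(3000):
    base = loss(W1, W2, X, y)
    grads = []
    for W in (W1, W2):
        G = np.zeros_like(W)
        for idx in np.ndindex(*W.shape):
            W[idx] += pert                        # perturb a single weight
            G[idx] = (loss(W1, W2, X, y) - base) / pert
            W[idx] -= pert                        # restore it
        grads.append(G)
    W1 -= lr * grads[0]
    W2 -= lr * grads[1]
print(round(loss(W1, W2, X, y), 4))               # near zero if training succeeded
```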

12.
A Hybrid BP-EP Algorithm for Overcoming Local Minima in Neural Networks
Artificial neural networks have been applied successfully in many fields, and many learning algorithms exist for them. The BP algorithm is the typical algorithm for feedforward multilayer neural networks, but it can become trapped in local minima. Evolutionary programming (EP) is a stochastic optimization technique capable of finding globally optimal solutions. When network learning becomes trapped in a local minimum, EP is used to determine the learning rate of the BP algorithm so that the learning process escapes the local minimum. Concrete implementation steps and experimental results are given for a worked example.  相似文献
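A loose sketch of the escape mechanism (the paper gives the concrete EP schedule; the mutation scale, stall test, and toy loss below are assumptions): when gradient descent stalls, Gaussian mutation plus selection over the learning rate looks for a step size that lowers the error again:

```python
import numpy as np

rng = np.random.default_rng(7)

def ep_pick_lr(loss_fn, w, g, lr, pop=10, sigma=1.5):
    """Mutate the learning rate `pop` times, keep the best-scoring one."""
    candidates = lr * np.exp(rng.normal(0.0, sigma, pop))   # log-normal mutation
    losses = [loss_fn(w - c * g) for c in candidates]
    return candidates[int(np.argmin(losses))]

# Double-well toy loss: plain descent from w = 2 falls into the shallower
# right-hand well; an EP-chosen big step can fling it over the barrier.
f = lambda w: (w ** 2 - 1.0) ** 2 + 0.3 * w
df = lambda w: 4.0 * w * (w ** 2 - 1.0) + 0.3
w, lr, prev = 2.0, 0.01, np.inf
for _ in range(500):
    if prev - f(w) < 1e-5:                     # stalled: let EP choose the step
        lr = ep_pick_lr(f, w, df(w), max(lr, 1.0))
    prev = f(w)
    w -= lr * df(w)
    lr = min(lr, 0.01)                         # fall back to the small BP rate
print(w, f(w))   # with luck, the deeper minimum near w = -1.04
```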

13.
Recent research reports that a dendritic neuron model (DNM) can achieve better performance than traditional artificial neural networks (ANNs) on classification, prediction, and other problems when its parameters are well tuned by a learning algorithm. However, the back-propagation (BP) algorithm, the most commonly used learning algorithm, intrinsically suffers from slow convergence and easily drops into local minima; hence more and more research adopts non-BP learning algorithms to train ANNs. In this paper, a dynamic scale-free network-based differential evolution (DSNDE) is developed to address the demands of convergence speed and the ability to jump out of local minima. The performance of a DSNDE-trained DNM is tested on 14 benchmark datasets and a photovoltaic power forecasting problem. Nine meta-heuristic algorithms are used for comparison, including the champion of the 2017 IEEE Congress on Evolutionary Computation (CEC2017) benchmark competition, the effective butterfly optimizer with covariance matrix adapted retreat phase (EBOwithCMAR). The experimental results reveal that DSNDE outperforms its peers.
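For orientation, here is plain differential evolution (DE/rand/1/bin), the gradient-free building block that DSNDE extends with its dynamic scale-free population topology; it trains a single tanh neuron with no backpropagation at all (the population size and control parameters are conventional defaults, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(8)
X = rng.normal(size=(100, 3))
y = np.tanh(X @ np.array([1.5, -2.0, 0.5]))

def loss(w):
    return np.mean((np.tanh(X @ w) - y) ** 2)

NP, F, CR = 20, 0.7, 0.9
pop = rng.uniform(-3, 3, size=(NP, 3))
for gen in range(200):
    for i in range(NP):
        idx = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        a, b, c = pop[idx]
        mutant = a + F * (b - c)                      # differential mutation
        mask = rng.random(3) < CR
        mask[rng.integers(3)] = True                  # binomial crossover
        trial = np.where(mask, mutant, pop[i])
        if loss(trial) <= loss(pop[i]):               # greedy selection
            pop[i] = trial
best = pop[np.argmin([loss(w) for w in pop])]
print(best, loss(best))
```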

14.
Although the backpropagation (BP) scheme is widely used as a learning algorithm for multilayered neural networks, its learning speed in reaching acceptable errors is unsatisfactory, despite improvements such as a momentum factor and an adaptive learning rate in the weight adjustment. To address this, a fast learning algorithm based on the extended Kalman filter (EKF) has been presented, and its computational complexity reduced through some simplifications. In general, however, the Kalman filtering algorithm is known to be sensitive to the nature of the noise, which is generally assumed to be Gaussian; moreover, H∞ theory suggests that the maximum energy gain of the Kalman algorithm from disturbances to the estimation error has no upper bound. EKF-based learning algorithms should therefore be made more robust to variations in the initial values of link weights and thresholds, as well as to the nature of the noise. This paper proposes H∞-learning as a novel learning rule and derives new globally and locally optimized learning algorithms based on it. Their learning behavior is analyzed from various points of view using computer simulations, and the derived algorithms are compared, in performance and computational cost, with the conventional BP and EKF learning algorithms.
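The EKF view of training can be sketched on a single neuron: the weights are the filter state and the network output is the measurement (the covariances below are assumed values; the paper works with simplified multilayer versions and H∞ variants):

```python
import numpy as np

# Minimal EKF weight estimation for one tanh neuron.
rng = np.random.default_rng(9)
w_true = np.array([1.0, -2.0, 0.5])
w = np.zeros(3)
P = 10.0 * np.eye(3)                 # initial state covariance (assumed)
R = 0.01                             # measurement noise variance (assumed)
for _ in range(300):
    x = rng.normal(size=3)
    z = np.tanh(w_true @ x) + rng.normal(0, 0.1)
    h = np.tanh(w @ x)
    H = (1.0 - h ** 2) * x           # Jacobian of the output w.r.t. weights
    S = H @ P @ H + R                # innovation variance
    K = P @ H / S                    # Kalman gain
    w = w + K * (z - h)              # measurement update of the weights
    P = P - np.outer(K, H) @ P
print(w)                             # close to w_true
```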

15.
Fast Learning Algorithms for Feedforward Neural Networks
To improve the training speed of multilayer feedforward neural networks (MLFNN), we propose and explore two new fast backpropagation (BP) algorithms obtained: (1) by changing the error function, using the exponent-attenuation (bell-impulse) function and the Fourier kernel function as alternatives; and (2) by introducing a hybrid conjugate-gradient algorithm of global optimization with a dynamic learning rate to overcome the conventional BP problems of getting stuck in local minima and slow convergence. Our experimental results demonstrate the effectiveness of the modified error functions, with training faster than existing fast methods. In addition, our hybrid algorithm achieves a higher recognition rate than the Polak-Ribière conjugate-gradient and conventional BP algorithms, and requires less training time, is less complicated, and is more robust than the Fletcher-Reeves conjugate-gradient and conventional BP algorithms on real speech data.
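The mechanism behind approach (1) is easy to isolate: BP touches the error function only through its derivative, so swapping the squared error for another differentiable E(e) simply replaces the error term in the delta. The bell-shaped form below is an assumed stand-in, not the paper's exact exponent-attenuation function:

```python
import numpy as np

# Pluggable error-function derivative: BP's delta uses dE/de in place of e.
def delta(e, kind="squared", beta=1.0):
    if kind == "squared":
        return e                                        # d/de of 0.5*e^2
    if kind == "bell":
        return e * np.exp(-e ** 2 / (2 * beta))         # d/de of beta*(1 - exp(-e^2/(2*beta)))
    raise ValueError(kind)

e = np.linspace(-3, 3, 7)
print(delta(e))                   # standard BP error signal
print(delta(e, kind="bell"))      # reshaped error signal
```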

16.
Hybrid back-propagation training with evolutionary strategies
This work presents a hybrid algorithm for neural network training that combines the back-propagation (BP) method with an evolutionary algorithm. In the proposed approach, BP updates the network connection weights, and a (1+1) evolution strategy (ES) adaptively modifies the main learning parameters. The algorithm can incorporate different BP variants, such as gradient descent with adaptive learning rate (GDA), in which case the learning rate is adjusted both by the stochastic (1+1)-ES and by the deterministic adaptive rules of GDA, a combined optimization strategy known as memetic search. The proposal is tested on three domains (time series prediction, classification, and biometric recognition) using several problem instances. Experimental results show that the hybrid algorithm can substantially improve upon the standard BP methods. In conclusion, the proposed approach provides a simple extension to basic BP training that improves performance and lessens the need for parameter tuning in real-world problems.
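The memetic loop can be sketched compactly: gradient descent performs the weight updates while a (1+1)-ES mutates the learning rate between epochs, keeping a mutation only if it yields a lower loss than the parent rate (the mutation strength, epoch length, and task below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(10)
X = rng.normal(size=(100, 4))
y = X @ np.array([1.0, -1.0, 2.0, 0.5])
w = np.zeros(4)

def epoch_loss(w, lr, steps=20):
    for _ in range(steps):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(X)   # batch GD on squared loss
    return w, np.mean((X @ w - y) ** 2)

lr = 0.001
w, _ = epoch_loss(w, lr)
for _ in range(30):
    child_lr = lr * np.exp(0.5 * rng.normal())        # log-normal mutation
    w_child, loss_child = epoch_loss(w, child_lr)
    w_parent, loss_parent = epoch_loss(w, lr)
    if loss_child <= loss_parent:                     # greedy (1+1) selection
        lr, w = child_lr, w_child
    else:
        w = w_parent
print(round(lr, 4), round(np.mean((X @ w - y) ** 2), 6))
```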

17.
Implementing Improved BP Networks in MATLAB 7.0
BP neural networks are widely used in nonlinear modeling, pattern recognition, and related areas. To address the BP network's drawbacks of slow convergence and heavy computation, this paper introduces improved training algorithms for BP networks. Using the rich set of training functions provided by the Neural Network Toolbox in MATLAB 7.0, the training speeds of several typical BP training algorithms are compared, and application examples and practical notes are given.  相似文献

18.
Feedforward neural networks (FNNs) have been proposed to solve complex problems in pattern recognition, classification, and function approximation. Despite the general success of learning methods for FNNs, such as the backpropagation (BP) algorithm, second-order optimization algorithms, and layer-wise learning algorithms, several drawbacks remain to be overcome, in particular convergence to local minima and long learning times. We propose an efficient learning method for FNNs that combines the BP strategy with layer-by-layer optimization. More precisely, we construct the layer-wise optimization using the Taylor series expansion of the nonlinear operators describing an FNN, and propose to update the weights of each layer by a BP-based Kaczmarz iterative procedure. The experimental results show that the new learning algorithm is stable, reduces learning time, and improves generalization in comparison with other well-known methods.
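The inner building block, the Kaczmarz method, is worth seeing in isolation: it solves a linear system Aw = b by cyclically projecting the iterate onto one equation's hyperplane at a time. The paper applies such sweeps to the Taylor-linearized equations of each layer; the plain version below is only that inner step:

```python
import numpy as np

# Cyclic Kaczmarz iteration for a consistent linear system A w = b.
def kaczmarz(A, b, sweeps=50):
    w = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            a_i = A[i]
            w += (b[i] - a_i @ w) / (a_i @ a_i) * a_i   # project onto row i
    return w

rng = np.random.default_rng(11)
A = rng.normal(size=(30, 5))
w_true = rng.normal(size=5)
print(kaczmarz(A, A @ w_true))   # approximately w_true
print(w_true)
```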

19.
To address the BP neural network learning algorithm's slow convergence and tendency to fall into false-saturation states, a fast-converging BP algorithm is proposed. The algorithm improves convergence by modifying the derivative of the activation function to magnify the error signal. A convergence analysis of the improved algorithm is given, and simulations compare it with both the standard BP algorithm and the improved algorithm of Ng et al. The simulation results show that the algorithm converges far faster than the other two and effectively improves the global convergence capability of BP.  相似文献

20.
Training neural networks (NNs) is a complex task of great importance in supervised learning, and NN performance depends largely on the success of the training process, and therefore on the training algorithm. This paper addresses the application of harmony search algorithms to the supervised training of feed-forward (FF) NNs, which are frequently used for classification problems. Five variants of the harmony search algorithm are studied, with special attention to the Self-adaptive Global Best Harmony Search (SGHS) algorithm, and a structure suitable for the data representation of NNs is adapted to SGHS. The technique is empirically tested and verified by training NNs on six benchmark classification problems and a real-world problem; two of the benchmarks have binary classes and the remaining four are n-ary classification problems, while the real-world problem concerns classifying the most frequently encountered quality defect at a major textile company in Turkey. The overall training time, sum of squared errors, and training and testing accuracies of the SGHS algorithm are compared with the other harmony search algorithms and with the widely used standard back-propagation (BP) algorithm. The experiments show that the SGHS algorithm lends itself very well to the training of NNs and is highly competitive with the compared methods in terms of classification accuracy.
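Basic harmony search is compact enough to sketch; shown below on the weights of a single tanh neuron, it illustrates the moves whose parameters SGHS self-adapts (HMCR, PAR, bandwidth, and memory size here are conventional values, not SGHS's adapted ones):

```python
import numpy as np

rng = np.random.default_rng(12)
X = rng.normal(size=(80, 3))
y = np.tanh(X @ np.array([2.0, -1.0, 0.5]))
loss = lambda w: np.mean((np.tanh(X @ w) - y) ** 2)

HMS, HMCR, PAR, BW, DIM = 10, 0.9, 0.3, 0.2, 3
memory = rng.uniform(-3, 3, size=(HMS, DIM))
scores = np.array([loss(w) for w in memory])
for _ in range(2000):
    new = np.empty(DIM)
    for d in range(DIM):
        if rng.random() < HMCR:                      # memory consideration
            new[d] = memory[rng.integers(HMS), d]
            if rng.random() < PAR:                   # pitch adjustment
                new[d] += BW * rng.uniform(-1, 1)
        else:                                        # random consideration
            new[d] = rng.uniform(-3, 3)
    worst = np.argmax(scores)
    if loss(new) < scores[worst]:                    # replace the worst harmony
        memory[worst], scores[worst] = new, loss(new)
best = memory[np.argmin(scores)]
print(best, loss(best))
```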
