Similar Documents
20 similar documents found (search time: 0 ms)
1.
A general backpropagation algorithm is proposed for feedforward neural network learning with time-varying inputs. The Lyapunov function approach is used to rigorously analyze the convergence of the weights toward minima of the error function under the algorithm. Sufficient conditions that guarantee the convergence of the weights for time-varying inputs are derived. It is shown that the most commonly used backpropagation learning algorithms are special cases of the developed general algorithm.

2.
This paper reports on studies to overcome the difficulties of setting the learning rates of backpropagation neural networks by using fuzzy logic. Building on previous research, a fuzzy control system is designed that can dynamically adjust the individual learning rates of both hidden and output neurons, as well as the momentum term, within a backpropagation network. Results show that the fuzzy controller not only eliminates the effort of configuring a global learning rate, but also increases the rate of convergence compared with a conventional backpropagation network. Comparative studies are presented for a number of different network configurations. The paper also gives a brief overview of fuzzy logic and backpropagation learning, highlighting how the two paradigms can enhance each other.
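The fuzzy learning-rate control described above can be sketched in a few lines. The rule base and membership shapes below are illustrative guesses, not the paper's actual controller: the learning rate is grown while the error keeps falling and shrunk when it rises.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_lr(lr, d_err):
    """Toy Mamdani-style controller: scale the learning rate by a weighted
    average of rule consequents based on the recent change in error d_err.
    Consequents: grow lr (1.2), hold (1.0), shrink sharply (0.5)."""
    mu_neg = tri(d_err, -1.0, -0.5, 0.0)   # error clearly decreasing
    mu_zero = tri(d_err, -0.5, 0.0, 0.5)   # error roughly flat
    mu_pos = tri(d_err, 0.0, 0.5, 1.0)     # error increasing
    den = mu_neg + mu_zero + mu_pos
    if den == 0.0:
        return lr
    return lr * (mu_neg * 1.2 + mu_zero * 1.0 + mu_pos * 0.5) / den

new_lr = fuzzy_lr(0.1, d_err=-0.5)   # error falling fast, so lr grows
```

In a full controller each neuron would carry its own rate, updated this way every epoch; the momentum term can be controlled by an analogous rule base.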

3.
4.
R. S. N. P. Neurocomputing, 2009, 72(16–18): 3771
In a fully complex-valued feed-forward network, the convergence of the Complex-valued Back Propagation (CBP) learning algorithm depends on the choice of the activation function, the learning-sample distribution, the minimization criterion, the initial weights, and the learning rate. The minimization criteria used in existing versions of the CBP algorithm do not approximate the phase of the complex-valued output well in function approximation problems. The phase of a complex-valued output is critical in telecommunications and in reconstruction and source-localization problems in medical imaging. In this paper, the issues related to the convergence of complex-valued neural networks are enumerated through a systematic sensitivity study on existing complex-valued neural networks. In addition, we compare the performance of different types of split complex-valued neural networks. Based on the observations from the sensitivity analysis, we propose a new CBP learning algorithm with a logarithmic performance index for a complex-valued neural network with an exponential activation function. The proposed CBP algorithm directly minimizes both the magnitude and phase errors and provides better convergence characteristics. Performance of the proposed scheme is evaluated on two synthetic complex-valued function approximation problems, the complex XOR problem, and a non-minimum-phase equalization problem. A comparative analysis of the convergence of existing fully complex and split complex networks is also presented.
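One plausible form of a logarithmic performance index with the property described above (the paper's exact criterion may differ): the complex logarithm of the ratio output/target splits into a magnitude term and a phase term, so minimizing its squared modulus penalizes both errors simultaneously.

```python
import cmath

def log_error(y, d):
    """Logarithmic error between complex output y and target d.
    e = ln(y/d) has e.real = ln|y/d| (magnitude error) and
    e.imag = arg(y/d) (phase error), so 0.5*|e|^2 penalizes both."""
    e = cmath.log(y / d)
    return 0.5 * (e.real ** 2 + e.imag ** 2)

zero = log_error(1 + 1j, 1 + 1j)      # perfect output: zero error
quarter = log_error(1j, 1.0)          # right magnitude, 90-degree phase error
```

A squared-magnitude criterion such as 0.5*|y - d|**2, by contrast, can leave a large phase error when |y| is small, which is the failure mode the abstract points out.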

5.
T. S. Neurocomputing, 2009, 72(16–18): 3915
The major drawbacks of the backpropagation algorithm are local minima and slow convergence. This paper presents an efficient technique, ANMBP, for training single-hidden-layer neural networks that improves convergence speed and escapes local minima. The algorithm is a modified backpropagation algorithm for a neighborhood-based neural network in which fixed learning parameters are replaced with adaptive learning parameters. The developed learning algorithm is applied to several problems and performs well on all of them.

6.
A robust backpropagation learning algorithm for function approximation   Cited by 3 (self-citations: 0, others: 3)
The backpropagation (BP) algorithm allows multilayer feedforward neural networks to learn input–output mappings from training samples. Due to the nonlinear modeling power of such networks, the learned mapping may interpolate all the training points. When erroneous training data are employed, the learned mapping can oscillate badly between data points. In this paper we derive a robust BP learning algorithm that is resistant to noise and capable of rejecting gross errors during the approximation process. The spirit of this algorithm comes from the pioneering work in robust statistics by Huber and Hampel. Our work differs from M-estimators in two respects: 1) the shape of the objective function changes with iteration time; and 2) the parametric form of the functional approximator is a nonlinear cascade of affine transformations. In contrast to the conventional BP algorithm, the robust BP algorithm has three advantages: 1) it approximates an underlying mapping rather than interpolating the training samples; 2) it is robust against gross errors; and 3) its rate of convergence is improved, since the influence of incorrect samples is gracefully suppressed.
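The Huber-flavored idea above can be sketched with the classical Huber influence function (this is the textbook form, not the paper's time-varying objective, whose shape changes with iteration as noted): the gradient contribution of a residual is capped, so gross errors cannot dominate a weight update.

```python
def huber_grad(residual, delta=1.0):
    """Huber influence function: identical to the squared-error gradient
    for |residual| <= delta, but clipped to +/-delta beyond that, so
    outliers contribute a bounded amount to each weight update."""
    if abs(residual) <= delta:
        return residual
    return delta if residual > 0 else -delta

# one BP-style weight update on a single weight w with input x (sketch):
# w -= learning_rate * huber_grad(output - target) * x
```

With plain squared error the gradient is the raw residual, so a single gross error of size 100 pulls the weights 100 times harder than a unit error; here it pulls only delta times as hard.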

7.
Incremental backpropagation learning networks   Cited by 2 (self-citations: 0, others: 2)
How to learn new knowledge without forgetting old knowledge is a key issue in designing an incremental-learning neural network. In this paper, we present a new incremental learning method for pattern recognition, called the "incremental backpropagation learning network", which employs bounded weight modification and structural adaptation learning rules and applies initial knowledge to constrain the learning process. The viability of this approach is demonstrated for classification problems including the iris and the promoter domains.

8.
There is no known exact method to determine the optimal topology of a multi-layer neural network for a given problem; usually the designer selects a topology and then trains it. Since determining the optimal topology of a neural network is an NP-hard problem, most existing algorithms are approximate. They fall into four main groups: pruning algorithms, constructive algorithms, hybrid algorithms, and evolutionary algorithms. These algorithms can produce near-optimal solutions, but most use hill climbing and may get stuck at local minima. In this article, we first introduce a learning automaton and study its behaviour, and then present an algorithm based on it, called the survival algorithm, for determining the number of hidden units of three-layer neural networks. The survival algorithm uses learning automata as a global search method to increase the probability of obtaining the optimal topology. It treats topology optimization as object partitioning rather than as search or parameter optimization, as in existing algorithms. In the survival algorithm, training begins with a large network; hidden units are then added and deleted until a near-optimal topology is obtained. The algorithm has been tested on a number of problems, and simulations show that the generated networks are near-optimal.

9.
Neurocomputing, 1999, 24(1–3): 173–189
The real-time recurrent learning (RTRL) algorithm, originally proposed for training recurrent neural networks, requires a large number of iterations to converge because a small learning rate must be used. An obvious remedy is a large learning rate, but this can produce undesirable convergence characteristics. This paper improves the convergence capability and characteristics of the RTRL algorithm by incorporating conjugate gradient computation into its learning procedure. The resulting algorithm, referred to as the conjugate gradient recurrent learning (CGRL) algorithm, is applied to train fully connected recurrent neural networks to simulate a second-order low-pass filter and to predict the chaotic intensity pulsations of an NH3 laser. Results show that the CGRL algorithm yields a substantial improvement in convergence (in terms of the reduction in mean squared error per epoch) compared to the RTRL and batch-mode RTRL algorithms.
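The conjugate-gradient ingredient can be sketched as follows. The abstract does not say which beta formula CGRL uses; the Fletcher–Reeves choice below is one standard option. Instead of stepping along the raw negative gradient, each search direction mixes in the previous direction.

```python
def fr_direction(grad, prev_grad, prev_dir):
    """Fletcher-Reeves conjugate-gradient search direction:
    d_new = -g + beta * d_old, with beta = |g|^2 / |g_old|^2.
    Successive directions are conjugate on a quadratic surface, which
    avoids the zigzagging of plain small-step gradient descent."""
    num = sum(g * g for g in grad)
    den = sum(g * g for g in prev_grad)
    beta = num / den if den else 0.0
    return [-g + beta * d for g, d in zip(grad, prev_dir)]

# first iteration has no history: start with plain steepest descent
d0 = [-g for g in [2.0, 0.0]]
d1 = fr_direction([1.0, 0.0], [2.0, 0.0], d0)
```

In CGRL the gradient itself would come from the RTRL sensitivity equations; only the direction update changes, which is why the per-epoch error reduction improves without altering what is being differentiated.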

10.
This paper presents a fast and computationally simple Successive Over-relaxation Resilient Backpropagation (SORRPROP) learning algorithm, developed by modifying the Resilient Backpropagation (RPROP) algorithm. It uses the latest computed values of the weights between the hidden and output layers to update the remaining weights. The modification adds no extra computation to RPROP and maintains its simplicity. Classification and regression simulation examples are used to compare performance. The test results show that SORRPROP has shorter convergence times and better performance than other first-order learning algorithms.
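For reference, the underlying RPROP update on a single weight looks like this (standard RPROP with conventional default constants; SORRPROP's successive-over-relaxation twist, reusing already-updated output-layer weights when updating the rest, is a scheduling change that a single-weight sketch cannot show):

```python
def rprop_step(w, step, grad, prev_grad,
               eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One RPROP weight update. Only the sign of the gradient is used;
    the per-weight step size adapts multiplicatively: accelerate while
    the gradient keeps its sign, back off after a sign change."""
    if grad * prev_grad > 0:                       # same sign: accelerate
        step = min(step * eta_plus, step_max)
    elif grad * prev_grad < 0:                     # sign flip: overshoot
        step = max(step * eta_minus, step_min)
        grad = 0.0                                 # skip update this time
    if grad > 0:
        w -= step
    elif grad < 0:
        w += step
    return w, step, grad

w, step, g = rprop_step(1.0, 0.1, grad=0.5, prev_grad=0.5)
```

Because the step depends only on gradient signs, RPROP-family methods are insensitive to the error surface's scaling, which is what keeps them computationally cheap.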

11.
Principal component analysis (PCA) by neural networks is one of the most frequently used feature-extraction methods. To process huge data sets, many neural-network-based learning algorithms for PCA have been proposed. However, traditional algorithms are not globally convergent. In this paper, a new PCA learning algorithm based on a cascade recursive least squares (CRLS) neural network is proposed. The algorithm guarantees that the network weight vector converges globally to an eigenvector associated with the largest eigenvalue of the input covariance matrix; a rigorous mathematical proof is given. Simulation results show the effectiveness of the algorithm.
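For contrast with the CRLS approach (whose RLS-style recursion is not reproduced here), the classical gradient-based neural PCA rule, Oja's rule, is the kind of "traditional algorithm" the abstract refers to: a single linear neuron whose weight vector drifts toward the principal eigenvector of the input covariance while staying bounded.

```python
def oja_step(w, x, lr=0.1):
    """One step of Oja's rule for a single linear neuron.
    y = w . x, then w <- w + lr * y * (x - y * w). The -y^2*w term
    keeps |w| bounded, unlike the plain Hebbian update w + lr*y*x."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    return [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]

# a unit vector aligned with the data direction is a fixed point
w_fixed = oja_step([1.0, 0.0], [1.0, 0.0])
```

Convergence of this rule is only local and depends on the learning rate, which is precisely the weakness a globally convergent RLS-based scheme like CRLS is designed to remove.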

12.
Two engineering problems in implementing Group Technology are part-family formation and part classification. Regardless of the approach adopted, a critical problem is how to maintain consistency between the two, and it can be addressed most effectively if formation and classification form a single procedure rather than two separate ones. A feedforward neural network using the backpropagation learning rule is adopted to automatically generate part families during the part-classification process. The spontaneous generalization capability of the neural network is exploited to classify parts into families and to create new families when necessary. A heuristic algorithm using the neural network is described with an illustrative example.

13.
Translated from Kibernetika i Sistemnyi Analiz, No. 4, pp. 156–167, July–August 1994.

14.
Online learning algorithms for feedforward neural networks with limited memory   Cited by 3 (self-citations: 0, others: 3)
The mathematical essence of feedforward neural networks whose hidden-layer activation functions satisfy Mercer's condition is analyzed theoretically, providing guidance for network learning. Three online learning algorithms are proposed that improve the network's online prediction performance by dynamically adjusting the network structure and weights. The algorithms fully conform to the structural risk minimization principle of statistical learning theory and exhibit fast learning convergence and good noise resistance. Numerical experiments verify the feasibility and advantages of the algorithms.

15.
The aim of the paper is to endow a well-known structure for processing time-dependent information, synaptic delay-based ANNs, with a reliable and easy-to-implement algorithm suitable for training temporal decision processes. In fact, we extend the backpropagation algorithm to discrete-time feedforward networks that include adaptable internal time delays in the synapses. The structure of the network is similar to the one presented by Day and Davenport (1993): in addition to the weights modeling the transmission capabilities of the synaptic connections, we model their length by a parameter that indicates the delay a discrete event suffers when going from the origin neuron to the target neuron through a synaptic connection. Like the weights, these delays are trainable, and a training algorithm can be derived that is almost as simple as the backpropagation algorithm and is really an extension of it. We present applications of these networks and the algorithm to the prediction of time series and to the recognition of patterns in electrocardiographic (ECG) signals. In the first case, we exploit the temporal reasoning capability of these networks to predict future values of a benchmark time series: the one governed by the Mackey-Glass chaotic equation. In the second case, we provide a real-life example: identifying different types of beats through two levels of temporal processing, one relating the morphological features that make up a beat in time, and another relating the positions of beats in time, that is, the rhythm characteristics of the ECG signal. The network receives the signal sequentially; no windowing, segmentation, or thresholding is applied.

16.
A fast parameter learning algorithm for fuzzy neural networks   Cited by 9 (self-citations: 0, others: 9)
A new fast parameter learning algorithm for fuzzy neural networks is proposed. With some special processing, all parameters can be adjusted by the recursive least squares (RLS) method. Previous learning algorithms adjusted the centers and widths of the fuzzy membership functions by gradient descent, which easily falls into local minima and converges slowly; the proposed algorithm overcomes these drawbacks. Simulations verify the effectiveness of the algorithm.

17.
Translated from Kibernetika i Sistemnyi Analiz, No. 6, pp. 131–142, November–December, 1993.

18.
Information Fusion, 2008, 9(1): 41–55
Ensemble methods for classification and regression have attracted a great deal of attention in recent years. They have been shown, both theoretically and empirically, to perform substantially better than single models on a wide range of tasks. We have adapted an ensemble method to the problem of predicting future values of time series, using recurrent neural networks (RNNs) as base learners. The improvement comes from combining a large number of RNNs, each generated by training on a different set of examples. The algorithm is based on boosting, in which learning concentrates on the difficult points of the time series; unlike the original algorithm, however, we introduce a new parameter for tuning the influence of boosting on the available examples. We test our boosting algorithm for RNNs on single-step-ahead and multi-step-ahead prediction problems. The results are compared to other regression methods, including different local approaches. On these datasets, the ensemble method is more accurate than the standard method, backpropagation through time, and performs significantly better even when long-range dependencies play an important role.
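The concentration-on-difficult-points mechanism can be sketched as a sampling-weight update. The formula below, including the tuning parameter k, is a stand-in for the paper's (whose exact expression is not given in the abstract): examples with larger loss get larger weight, and k in [0, 1] controls how aggressively boosting focuses on them.

```python
def boost_weights(weights, losses, k=0.5):
    """Re-weight training examples for the next base learner: scale each
    weight by (1 + k * normalized_loss), then renormalize to sum to 1.
    k = 0 leaves the distribution unchanged; larger k concentrates
    probability mass on hard examples."""
    max_loss = max(losses) or 1.0          # guard against all-zero losses
    raw = [w * (1.0 + k * (l / max_loss)) for w, l in zip(weights, losses)]
    total = sum(raw)
    return [r / total for r in raw]

# example 1 was predicted perfectly, example 2 badly: mass shifts to 2
w = boost_weights([0.5, 0.5], [0.0, 1.0], k=1.0)
```

Each RNN in the ensemble would then be trained on a set resampled from this distribution, and the final prediction is a combination of the base learners' outputs.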

19.
Manufacturing feature recognition using backpropagation neural networks   Cited by 3 (self-citations: 0, others: 3)
A backpropagation neural network (BPN) is applied to the problem of feature recognition from a boundary representation (B-rep) solid model to facilitate process planning of manufactured products. The approach uses a face complexity code to represent features and a neural network to perform the recognition. The face complexity code measures the complexity of a feature's faces based on the convexity or concavity of the surrounding geometry. The codes for various features are fed to the network for analysis. A backpropagation network is implemented for feature recognition and tested on published results to measure its performance. Any two or more features having significant differences in face complexity codes were used as exemplars for training the network. A new feature presented to the network is associated with an existing cluster if they are similar; otherwise the network creates a new cluster. Experimental results show that the network is consistent in recognizing features and is therefore appropriate for feature recognition in an automated manufacturing environment.

20.
In this paper the optimization of type-2 fuzzy inference systems using genetic algorithms (GAs) and particle swarm optimization (PSO) is presented. The optimized type-2 fuzzy inference systems are used to estimate the type-2 fuzzy weights of backpropagation neural networks. Simulation results and a comparative study among neural networks with unoptimized type-2 fuzzy weights, with type-2 fuzzy weights optimized by genetic algorithms, and with type-2 fuzzy weights optimized by particle swarm optimization are presented to illustrate the advantages of the bio-inspired methods. The comparative study is based on a benchmark prediction problem: the Mackey-Glass time series (for τ = 17).

