Similar Documents
20 similar documents found.
1.
This paper puts forward a novel recurrent neural network (RNN), referred to as the context layered locally recurrent neural network (CLLRNN), for dynamic system identification. The CLLRNN is a dynamic neural network which is effective in the input–output identification of both linear and nonlinear dynamic systems. The CLLRNN is composed of one input layer, one or more hidden layers, one output layer, and one context layer that improves the ability of the network to capture the linear characteristics of the system being identified. Dynamic memory is provided by feedback connections from nodes in the first hidden layer to nodes in the context layer and, when there are two or more hidden layers, from nodes in a hidden layer to nodes in the preceding hidden layer. In addition to these feedback connections, all nodes of the context and hidden layers have self-recurrent connections. A dynamic backpropagation algorithm with an adaptive learning rate is derived to train the CLLRNN. To demonstrate the properties of the proposed architecture, it is applied to identify both linear and nonlinear dynamic systems, and its efficiency is demonstrated by comparing the results with those of existing recurrent networks and design configurations. In addition, the performance of the CLLRNN is analyzed through an experimental application to a dc motor connected to a load, showing the practicability and effectiveness of the proposed neural network. Results of this experimental application are presented as a quantitative comparison with an existing recurrent network from the literature.
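For orientation, below is a minimal sketch of one time step of the topology described above, for the single-hidden-layer case. All names, initializations, and the choice of tanh are our assumptions; the dynamic backpropagation training the paper derives is omitted.

```python
import numpy as np

class CLLRNNStep:
    """Hypothetical one-step forward pass of a context layered locally
    recurrent network; illustrative only, not the paper's equations."""
    def __init__(self, n_in, n_hid, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_ih = rng.normal(0, 0.1, (n_hid, n_in))   # input -> hidden
        self.W_hc = rng.normal(0, 0.1, (n_hid, n_hid))  # hidden -> context feedback
        self.W_ch = rng.normal(0, 0.1, (n_hid, n_hid))  # context -> hidden
        self.W_ho = rng.normal(0, 0.1, (n_out, n_hid))  # hidden -> output
        self.a_h = np.full(n_hid, 0.5)  # self-recurrent gains, hidden nodes
        self.a_c = np.full(n_hid, 0.5)  # self-recurrent gains, context nodes
        self.h = np.zeros(n_hid)        # hidden state
        self.c = np.zeros(n_hid)        # context state

    def step(self, x):
        # Context nodes: self-recurrence plus feedback from the hidden layer,
        # kept linear to help capture the linear part of the identified system.
        self.c = self.a_c * self.c + self.W_hc @ self.h
        # Hidden nodes: self-recurrence, external input, and context input.
        self.h = np.tanh(self.a_h * self.h + self.W_ih @ x + self.W_ch @ self.c)
        return self.W_ho @ self.h
```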

2.
This paper reports on studies to overcome difficulties associated with setting the learning rates of backpropagation neural networks by using fuzzy logic. Building on previous research, a fuzzy control system is designed which is capable of dynamically adjusting the individual learning rates of both hidden and output neurons, as well as the momentum term, within a backpropagation network. Results show that the fuzzy controller not only eliminates the effort of configuring a global learning rate, but also increases the rate of convergence in comparison with a conventional backpropagation network. Comparative studies are presented for a number of different network configurations. The paper also presents a brief overview of fuzzy logic and backpropagation learning, highlighting how the two paradigms can enhance each other.
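As a rough illustration of the kind of rule base involved, here is a crisp stand-in for a fuzzy learning-rate controller driven by the recent error history. The thresholds, gains, and the use of a single global rate (the paper adjusts individual neuron rates) are simplifying assumptions, not the paper's rules.

```python
def adjust_learning_rate(lr, momentum, err_history,
                         grow=1.1, shrink=0.7, lr_max=1.0, lr_min=1e-5):
    """Crisp approximation of fuzzy rules such as:
    IF error decreases steadily THEN increase the learning rate slightly;
    IF error grows or oscillates THEN decrease learning rate and momentum."""
    if len(err_history) < 3:
        return lr, momentum
    e2, e1, e0 = err_history[-3:]            # oldest -> newest epoch errors
    if e0 < e1 < e2:                         # consistent descent
        lr = min(lr * grow, lr_max)
    elif e0 > e1 or (e2 > e1) != (e1 > e0):  # growth, or sign flip = oscillation
        lr = max(lr * shrink, lr_min)
        momentum = max(momentum * shrink, 0.0)
    return lr, momentum
```

A true fuzzy controller would replace these hard thresholds with membership functions and blend the rule consequents, but the control logic is of this shape.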

3.
To avoid oversized feedforward networks, we propose that after Cascade-Correlation learning the network is fine-tuned with the backpropagation algorithm. Our experiments show that with Cascade-Correlation learning alone the network may require a large number of hidden units to reach the desired error level. However, if the network is additionally fine-tuned with the backpropagation method, the desired error level can be reached with a much smaller number of hidden units. It is also shown that the combined Cascade-Correlation/backpropagation training is a faster scheme than backpropagation training alone.

4.
In this paper, a new efficient learning procedure for training single-hidden-layer feedforward networks is proposed. This procedure trains the output layer and the hidden layer separately. A new optimization criterion for the hidden layer is proposed. Existing methods for finding a fictitious teacher signal for the output of each hidden neuron, a modified standard backpropagation algorithm, and the new optimization criterion are combined to train the feedforward neural networks. The effectiveness of the proposed procedure is shown by simulation results. *The work of P. Thangavel is partially supported by a UGC, Government of India sponsored project.

5.
A backpropagation learning algorithm for feedforward neural networks with an adaptive learning rate is derived. The algorithm is based upon minimising the instantaneous output error and does not include any simplifications encountered in the corresponding Least Mean Square (LMS) algorithms for linear adaptive filters. The backpropagation algorithm with an adaptive learning rate, which is derived based upon the Taylor series expansion of the instantaneous output error, is shown to exhibit behaviour similar to that of the Normalised LMS (NLMS) algorithm. Indeed, the derived optimal adaptive learning rate of a neural network trained by backpropagation degenerates to the learning rate of the NLMS for a linear activation function of a neuron. By continuity, the optimal adaptive learning rate for neural networks imposes additional stabilisation effects on the traditional backpropagation learning algorithm.
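The NLMS special case mentioned above can be written down directly. A minimal sketch for a single linear neuron, with a standard regularization constant `eps` added for numerical safety:

```python
import numpy as np

def nlms_update(w, x, d, mu=0.5, eps=1e-8):
    """One NLMS step: the learning rate is normalized by the input power,
    which is the form the derived adaptive rate degenerates to for a
    linear activation function."""
    e = d - w @ x                # instantaneous output error
    eta = mu / (eps + x @ x)     # normalized, input-dependent learning rate
    return w + eta * e * x, e
```

For nonlinear activations the derived optimal rate differs in detail, but by the continuity argument above it retains this normalizing, stabilizing character.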

6.
Aflatoxin contamination in peanut crops is a problem of significant health and financial importance. Predicting aflatoxin levels prior to crop harvest is useful for minimizing the impact of a contaminated crop and is the goal of our research. Backpropagation neural networks have been used to model problems of this type; however, developing such networks poses the complex problem of setting values for architectural features and backpropagation parameters. Genetic algorithms have been used in other studies to determine parameters for backpropagation neural networks. This paper describes the development of a genetic algorithm/backpropagation neural network hybrid (GA/BPN) in which a genetic algorithm is used to find architectures and backpropagation parameter values simultaneously for a backpropagation neural network that predicts aflatoxin contamination levels in peanuts based on environmental data. Learning rate, momentum, and number of hidden nodes are the parameters set by the genetic algorithm. A three-layer feed-forward network with logistic activation functions is used. Inputs to the network are soil temperature, drought duration, crop age, and accumulated heat units. The project showed that the GA/BPN approach automatically finds highly fit parameter sets for backpropagation neural networks for the aflatoxin problem.
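A sketch of what the outer GA loop could look like under these assumptions: the chromosome holds (learning rate, momentum, hidden nodes), and fitness would come from training a BPN and scoring its validation error. `train_bpn_and_score` is a placeholder surrogate, not the paper's code.

```python
import random

def train_bpn_and_score(lr, momentum, n_hidden):
    # Placeholder fitness: the real system trains a three-layer logistic BPN
    # on soil temperature, drought duration, crop age, and heat units, and
    # returns a validation error. This surrogate just makes the sketch run.
    return (lr - 0.3) ** 2 + (momentum - 0.8) ** 2 + 0.001 * n_hidden

def random_individual():
    return [random.uniform(0.01, 1.0),  # learning rate
            random.uniform(0.0, 0.95),  # momentum
            random.randint(2, 20)]      # number of hidden nodes

def ga_search(pop_size=20, generations=30, p_mut=0.2):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: train_bpn_and_score(*ind))
        survivors = pop[:pop_size // 2]                 # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = [random.choice(g) for g in zip(a, b)]  # uniform crossover
            if random.random() < p_mut:                    # point mutation
                i = random.randrange(3)
                child[i] = random_individual()[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda ind: train_bpn_and_score(*ind))
```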

7.
Backpropagation is extended to continuous-time feedforward networks with internal, adaptable time delays. The new technique is suitable for parallel hardware implementation, with continuous multidimensional training signals. The resulting networks can be used for signal prediction, signal production, and spatiotemporal pattern recognition tasks. Unlike conventional backpropagation networks, they can easily adapt while performing true signal prediction. Simulation results are presented for networks trained to predict future values of the Mackey-Glass chaotic signal, using its present value as an input. For this application, networks with adaptable delays had less than half the prediction error of networks with fixed delays, and about one-quarter the error of conventional networks. After training, the network can be operated in a signal production configuration, where it autonomously generates a close approximation to the Mackey-Glass signal.

8.
Reward-based learning in neural systems is challenging because a large number of parameters that affect network function must be optimized solely on the basis of a reward signal that indicates improved performance. Searching the parameter space for an optimal solution is particularly difficult if the network is large. We show that Hebbian forms of synaptic plasticity applied to synapses between a supervisor circuit and the network it is controlling can effectively reduce the dimension of the space of parameters being searched to support efficient reinforcement-based learning in large networks. The critical element is that the connections between the supervisor units and the network must be reciprocal. Once the appropriate connections have been set up by Hebbian plasticity, a reinforcement-based learning procedure leads to rapid learning in a function approximation task. Hebbian plasticity within the network being supervised ultimately allows the network to perform the task without input from the supervisor.

9.
Neural network models for a resource allocation problem
University admissions and business personnel offices use a limited number of resources to process an ever-increasing quantity of student and employment applications. Application systems are further constrained to identify and acquire, in a limited time period, those candidates who are most likely to accept an offer of enrolment or employment. Neural networks offer a new methodology for this particular domain. Various neural network architectures and learning algorithms are analyzed comparatively to determine the applicability of supervised learning neural networks to the domain problem of personnel resource allocation and to identify optimal learning strategies in this domain. This paper focuses on multilayer perceptron backpropagation, radial basis function, counterpropagation, general regression, fuzzy ARTMAP, and linear vector quantization neural networks. Each neural network predicts the probability of enrolment and non-enrolment for individual student applicants. Backpropagation networks produced the best overall performance. Network performance is measured by the reduction in counsellors' student case loads and corresponding increases in student enrolment. The backpropagation neural networks achieve a 56% reduction in counsellor case load.

10.
Computational models in cognitive neuroscience should ideally use biological properties and powerful computational principles to produce behavior consistent with psychological findings. Error-driven backpropagation is computationally powerful and has proven useful for modeling a range of psychological data but is not biologically plausible. Several approaches to implementing backpropagation in a biologically plausible fashion converge on the idea of using bidirectional activation propagation in interactive networks to convey error signals. This article demonstrates two main points about these error-driven interactive networks: (1) they generalize poorly due to attractor dynamics that interfere with the network's ability to produce novel combinatorial representations systematically in response to novel inputs, and (2) this generalization problem can be remedied by adding two widely used mechanistic principles, inhibitory competition and Hebbian learning, that can be independently motivated for a variety of biological, psychological, and computational reasons. Simulations using the Leabra algorithm, which combines the generalized recirculation (GeneRec), biologically plausible, error-driven learning algorithm with inhibitory competition and Hebbian learning, show that these mechanisms can result in good generalization in interactive networks. These results support the general conclusion that cognitive neuroscience models that incorporate the core mechanistic principles of interactivity, inhibitory competition, and error-driven and Hebbian learning satisfy a wider range of biological, psychological, and computational constraints than models employing a subset of these principles.

11.
Curvature-driven smoothing: a learning algorithm for feedforward networks
The performance of feedforward neural networks in real applications can often be improved significantly if use is made of a priori information. For interpolation problems this prior knowledge frequently includes smoothness requirements on the network mapping, and can be imposed by the addition to the error function of suitable regularization terms. The new error function, however, now depends on the derivatives of the network mapping, and so the standard backpropagation algorithm cannot be applied. In this letter, we derive a computationally efficient learning algorithm, for a feedforward network of arbitrary topology, which can be used to minimize such error functions. Networks having a single hidden layer, for which the learning algorithm simplifies, are treated as a special case.
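For orientation, a typical curvature penalty of the kind described augments the sum-of-squares data error $E$ with a term built from second derivatives of the network mapping, evaluated at the training points; the letter's exact regularizer may differ in detail:

```latex
\tilde{E} \;=\; E \;+\; \nu\,\Omega,
\qquad
\Omega \;=\; \frac{1}{2}\sum_{n}\sum_{k}\sum_{i}
\left( \frac{\partial^{2} y_{k}}{\partial x_{i}^{2}} \right)^{2}_{\mathbf{x}=\mathbf{x}^{n}}
```

Because $\Omega$ depends on derivatives of the outputs $y_k$ with respect to the inputs $x_i$, minimizing $\tilde{E}$ needs the extended algorithm rather than standard backpropagation.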

12.
Few algorithms for supervised training of spiking neural networks exist that can deal with patterns of multiple spikes, and their computational properties are largely unexplored. We demonstrate in a set of simulations that the ReSuMe learning algorithm can successfully be applied to layered neural networks. Input and output patterns are encoded as spike trains of multiple precisely timed spikes, and the network learns to transform the input trains into target output trains. This is done by combining the ReSuMe learning algorithm with multiplicative scaling of the connections of downstream neurons. We show in particular that layered networks with one hidden layer can learn the basic logical operations, including Exclusive-Or, while networks without a hidden layer cannot, mirroring an analogous result for layered networks of rate neurons. While supervised learning in spiking neural networks is not yet fit for technical purposes, exploring the computational properties of spiking neural networks advances our understanding of how computations can be done with spike trains.
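For reference, the core ReSuMe weight dynamics as commonly stated: the weight change is driven by the difference between the desired and actual (learner) output spike trains, gated by recent input spikes. The article's notation may differ, and the exponential window $W(s) = A\,e^{-s/\tau}$ is one standard choice rather than necessarily the one used here.

```latex
\frac{d}{dt}\,w(t) \;=\;
\bigl[\,S^{d}(t) - S^{l}(t)\,\bigr]
\left[\, a \;+\; \int_{0}^{\infty} W(s)\, S^{\mathrm{in}}(t-s)\, ds \right]
```

Here $S^{d}$, $S^{l}$, and $S^{\mathrm{in}}$ are the desired, learner, and input spike trains (sums of Dirac impulses), and $a$ is a non-Hebbian rate term.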

13.
Presents a detailed performance analysis of the minimal resource allocation network (M-RAN) learning algorithm. M-RAN is a sequential learning radial basis function neural network which combines the growth criterion of the resource allocating network (RAN) of Platt (1991) with a pruning strategy based on the relative contribution of each hidden unit to the overall network output. The resulting network leads toward a minimal topology for the RAN. The performance of this algorithm is compared with that of multilayer feedforward networks (MFNs) trained with (1) a variant of the standard backpropagation algorithm known as RPROP and (2) the dependence identification (DI) algorithm of Moody and Antsaklis (1996), on several benchmark problems in the function approximation and pattern classification areas. For all these problems, the M-RAN algorithm is shown to realize networks with far fewer hidden neurons and with better or equal approximation/classification accuracy. Further, the time taken for learning (training) is considerably shorter, as M-RAN does not require repeated presentation of the training data.
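A schematic sketch of the growth-plus-pruning idea for a Gaussian RBF network follows. The threshold names (`e_min`, `eps_min`, `delta`, window `M`) follow the usual RAN/M-RAN descriptions, their values are illustrative, and the parameter-update step (EKF/LMS) that M-RAN applies between structural changes is omitted.

```python
import numpy as np

class MRANSketch:
    def __init__(self, e_min=0.05, eps_min=0.3, kappa=0.9, delta=0.01, M=50):
        self.centers, self.widths, self.weights, self.low_counts = [], [], [], []
        self.e_min, self.eps_min, self.kappa = e_min, eps_min, kappa
        self.delta, self.M = delta, M

    def predict(self, x):
        phi = [np.exp(-np.sum((x - c) ** 2) / (2 * s ** 2))
               for c, s in zip(self.centers, self.widths)]
        return (float(np.dot(self.weights, phi)) if phi else 0.0), phi

    def observe(self, x, y):
        x = np.asarray(x, dtype=float)
        y_hat, phi = self.predict(x)
        e = y - y_hat
        # Pruning: count how long each unit's relative contribution to the
        # output stays below delta; drop it after M consecutive low samples.
        contrib = [abs(w * p) for w, p in zip(self.weights, phi)]
        top = max(contrib, default=0.0)
        keep = []
        for i, c in enumerate(contrib):
            low = (top == 0.0) or (c / top < self.delta)
            self.low_counts[i] = self.low_counts[i] + 1 if low else 0
            keep.append(self.low_counts[i] < self.M)
        for name in ("centers", "widths", "weights", "low_counts"):
            setattr(self, name, [v for v, k in zip(getattr(self, name), keep) if k])
        # Growth (RAN criterion): add a unit when the error is large AND the
        # input lies far from every existing center.
        d_near = min((np.linalg.norm(x - c) for c in self.centers), default=np.inf)
        if abs(e) > self.e_min and d_near > self.eps_min:
            self.centers.append(x.copy())
            self.widths.append(self.kappa * d_near if np.isfinite(d_near) else 1.0)
            self.weights.append(float(e))
            self.low_counts.append(0)
        return y_hat
```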

14.
Animal learning is associated with changes in the efficacy of connections between neurons. The rules that govern this plasticity can be tested in neural networks. Rules that train neural networks to map stimuli onto outputs are given by supervised learning and reinforcement learning theories. Supervised learning is efficient but biologically implausible. In contrast, reinforcement learning is biologically plausible but comparatively inefficient. It lacks a mechanism that can identify units at early processing levels that play a decisive role in the stimulus-response mapping. Here we show that this so-called credit assignment problem can be solved by a new role for attention in learning. There are two factors in our new learning scheme that determine synaptic plasticity: (1) a reinforcement signal that is homogeneous across the network and depends on the amount of reward obtained after a trial, and (2) an attentional feedback signal from the output layer that limits plasticity to those units at earlier processing levels that are crucial for the stimulus-response mapping. The new scheme is called attention-gated reinforcement learning (AGREL). We show that it is as efficient as supervised learning in classification tasks. AGREL is biologically realistic and integrates the role of feedback connections, attention effects, synaptic plasticity, and reinforcement learning signals into a coherent framework.
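A minimal sketch of what an attention-gated update for the hidden layer could look like under the two factors listed above: a global reward-prediction error `delta` and a per-unit feedback signal `fb` from the output layer. The sigmoid-derivative gating `h * (1 - h)` and all names are our assumptions, not the paper's exact equations.

```python
import numpy as np

def agrel_hidden_update(V, x, h, fb, delta, beta=0.05):
    """V: hidden-layer weights (n_hid, n_in); x: input vector; h: sigmoidal
    hidden activities; fb: attentional feedback each hidden unit receives
    from the output layer; delta: scalar reward-prediction error shared by
    the whole network after the trial."""
    gate = fb * h * (1.0 - h)       # attention-gated, locally available factor
    return V + beta * delta * np.outer(gate, x)
```

The key point the abstract makes is visible here: only units that receive feedback (`fb > 0`) change their incoming weights, which is how the credit assignment problem is addressed.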

15.
A new approach to unsupervised learning in a single-layer neural network is discussed. An algorithm for unsupervised learning based upon the Hebbian learning rule is presented. A simple neuron model is analyzed. A dynamic neural model, which contains both feed-forward and feedback connections between the input and the output, has been adopted. The proposed learning algorithm could be more correctly named self-supervised rather than unsupervised. The solution proposed here is a modified Hebbian rule, in which the modification of the synaptic strength is proportional not to pre- and postsynaptic activity, but instead to the presynaptic activity and the averaged value of postsynaptic activity. It is shown that the model neuron tends to extract the principal component from a stationary input vector sequence. The commonly accepted additional decaying terms for the stabilization of the original Hebbian rule are avoided. Implementation of the basic Hebbian scheme does not lead to unrealistic growth of the synaptic strengths, thanks to the adopted network structure.
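The stated rule translates almost directly into an update: the weight change is proportional to the presynaptic input and a running average of the postsynaptic activity, with no decay term. The averaging constant `alpha` is an assumption, and the feedback structure the paper relies on for stability is not reproduced in this fragment.

```python
import numpy as np

def self_supervised_step(w, x, y_bar, eta=0.01, alpha=0.1):
    """One update of the modified Hebbian rule: pre * averaged-post."""
    y = w @ x                                  # postsynaptic activity
    y_bar = (1 - alpha) * y_bar + alpha * y    # averaged postsynaptic activity
    w = w + eta * y_bar * x                    # no explicit decay term
    return w, y_bar
```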

16.
Fiori S. Neural Computation, 2005, 17(4): 779-838.
The Hebbian paradigm is perhaps the best-known unsupervised learning theory in connectionism. It has inspired wide research activity in the artificial neural network field because it embodies some interesting properties such as locality and the capability of being applicable to the basic weight-and-sum structure of neuron models. The plain Hebbian principle, however, also presents some inherent theoretical limitations that make it impractical in most cases. Therefore, modifications of the basic Hebbian learning paradigm have been proposed over the past 20 years in order to design profitable signal and data processing algorithms. Such modifications led to the principal component analysis type class of learning rules along with their nonlinear extensions. The aim of this review is primarily to present part of the existing fragmented material in the field of principal component learning within a unified view and contextually to motivate and present extensions of previous works on Hebbian learning to complex-weighted linear neural networks. This work benefits from previous studies on linear signal decomposition by artificial neural networks, nonquadratic component optimization and reconstruction error definition, neural parameters adaptation by constrained optimization of learning criteria of complex-valued arguments, and orthonormality expression via the insertion of topological elements in the networks or by modifying the network learning criterion. In particular, the learning principles considered here and their analysis concern complex-valued principal/minor component/subspace linear/nonlinear rules for complex-weighted neural structures, both feedforward and laterally connected.

17.
To identify online a quite general class of nonlinear systems, this paper proposes a new stable learning law for multilayer dynamic neural networks. A Lyapunov-like analysis is used to derive this stable learning procedure for the hidden layer as well as for the output layer. An algebraic Riccati equation is considered to construct a bound for the identification error. The suggested learning algorithm is similar to the well-known backpropagation rule of multilayer perceptrons but with an additional term which assures the stability property of the identification error.

18.
A new learning algorithm for the Resource Allocating Network (RAN), called the IRAN algorithm, is proposed. The algorithm adds hidden-layer neurons through a novelty criterion consisting of four parts, removes redundant neurons according to their error reduction rate, and updates the output-layer weights with a recursive least squares algorithm based on Givens QR decomposition. Simulation results on two benchmark problems in the function approximation domain show that, compared with the RAN, RANEKF, and MRAN algorithms, the IRAN algorithm not only learns faster but also produces a more compact network structure.

19.
Multifeedback-Layer Neural Network
The architecture and training procedure of a novel recurrent neural network (RNN), referred to as the multifeedback-layer neural network (MFLNN), are described in this paper. The main difference between the proposed network and available RNNs is that the temporal relations are provided by means of neurons arranged in three feedback layers, not by simple feedback elements, in order to enrich the representation capabilities of recurrent networks. The feedback layers provide local and global recurrences via nonlinear processing elements. In these feedback layers, weighted sums of the delayed outputs of the hidden and output layers are passed through certain activation functions and applied to the feedforward neurons via adjustable weights. Both online and offline training procedures based on the backpropagation through time (BPTT) algorithm are developed. The adjoint model of the MFLNN is built to compute the derivatives with respect to the MFLNN weights, which are then used in the training procedures. The Levenberg-Marquardt (LM) method with a trust region approach is used to update the MFLNN weights. The performance of the MFLNN is demonstrated by applying it to several illustrative temporal problems, including chaotic time series prediction and nonlinear dynamic system identification, where it performed better than several networks available in the literature.

20.
In this paper, different structures for the neurons in the hidden layer of a feed-forward network for forecasting dynamic systems are proposed. Each neuron in the network is a combination of a sigmoidal activation function (SAF) and a wavelet activation function (WAF). The output of the hidden neuron is the product of the outputs of these two activation functions. A delay element is used to feed the outputs of the sigmoidal and wavelet activation functions back to each other. This arrangement leads to five possible configurations of recurrent neurons. Besides proposing these neuron models, the paper compares the performance of the wavelet function with that of the sigmoid function. To guarantee the stability and convergence of the learning process, an upper bound for the learning rates is derived using the Lyapunov stability theorem; a two-phase adaptive learning rate ensures this bound is respected. The universal approximation property of the feed-forward network with the proposed neurons is also investigated. Finally, the applicability and comparative performance of the proposed recurrent networks are evaluated on two benchmark problems covering different types of dynamical systems.
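An illustrative sketch of one such hybrid recurrent neuron: its output is the product of a sigmoid branch and a wavelet branch, and each branch receives the other's delayed output through a feedback weight. This is one of the several feedback configurations the paper enumerates; the Mexican-hat mother wavelet and all names are our assumptions.

```python
import numpy as np

class SigmoidWaveletNeuron:
    def __init__(self, n_in, seed=0):
        rng = np.random.default_rng(seed)
        self.ws = rng.normal(0, 0.5, n_in)   # sigmoid branch input weights
        self.ww = rng.normal(0, 0.5, n_in)   # wavelet branch input weights
        self.fs = 0.1                        # wavelet -> sigmoid feedback gain
        self.fw = 0.1                        # sigmoid -> wavelet feedback gain
        self.s_prev = 0.0                    # delayed sigmoid output
        self.psi_prev = 0.0                  # delayed wavelet output

    @staticmethod
    def _wavelet(z):
        # Mexican-hat mother wavelet (an assumed choice)
        return (1 - z ** 2) * np.exp(-z ** 2 / 2)

    def step(self, x):
        s = 1.0 / (1.0 + np.exp(-(self.ws @ x + self.fs * self.psi_prev)))
        psi = self._wavelet(self.ww @ x + self.fw * self.s_prev)
        self.s_prev, self.psi_prev = s, psi
        return s * psi                       # product of the two activations
```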
