Similar documents
20 similar documents found (search time: 125 ms)
1.
We consider stochastic neural networks whose objective is robust prediction for spatial control. We develop neural structures and operations in which the representations of the environment are preprocessed and provided in quantized format to the prediction layer, and in which the response of each neuron is binary. We also identify the pertinent stochastic network parameters and subsequently develop a supervised learning algorithm for them. The on-line learning algorithm is based on the Kullback-Leibler performance criterion, induces backpropagation, and guarantees fast convergence, with probability one, to the prediction probabilities induced by the environment.
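The Kullback-Leibler criterion described above can be illustrated with a minimal sketch (not the paper's algorithm): a single stochastic binary neuron whose firing probability is driven toward the probability induced by the environment by gradient descent on the KL divergence. The learning rate, step count, and function names are illustrative choices.

```python
import math

# Minimal sketch, not the paper's algorithm: one stochastic binary neuron with
# firing probability p = sigmoid(w), fitted to an environment probability
# p_env by descending KL(p_env || p).

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_firing_probability(p_env, lr=0.5, steps=2000):
    w = 0.0  # single bias-like parameter
    for _ in range(steps):
        p = sigmoid(w)
        # dKL/dw = (-p_env/p + (1-p_env)/(1-p)) * p*(1-p) = p - p_env
        w -= lr * (p - p_env)
    return sigmoid(w)

p_hat = fit_firing_probability(0.8)
```

The gradient of the KL divergence through the sigmoid collapses to the simple residual `p - p_env`, which is why the fitted firing probability converges to the environment's probability.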

2.
This paper presents an approach to learning polynomial feedforward neural networks (PFNNs). The approach suggests, first, finding the polynomial network structure by means of a population-based search technique relying on the genetic programming paradigm, and second, further adjusting the weights of the best discovered network by a specially derived backpropagation algorithm for higher-order networks with polynomial activation functions. These two stages of the PFNN learning process enable us to identify networks with good training as well as generalization performance. Empirical results show that this approach finds PFNNs that considerably outperform some previous constructive polynomial network algorithms on benchmark time series.

3.
A backpropagation learning algorithm for feedforward neural networks with an adaptive learning rate is derived. The algorithm is based upon minimising the instantaneous output error and does not include any simplifications encountered in the corresponding Least Mean Square (LMS) algorithms for linear adaptive filters. The backpropagation algorithm with an adaptive learning rate, which is derived from the Taylor series expansion of the instantaneous output error, is shown to exhibit behaviour similar to that of the Normalised LMS (NLMS) algorithm. Indeed, the derived optimal adaptive learning rate of a neural network trained by backpropagation degenerates to the learning rate of the NLMS for a linear activation function of a neuron. By continuity, the optimal adaptive learning rate for neural networks imposes additional stabilisation effects on the traditional backpropagation learning algorithm.
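The degenerate linear-activation case mentioned above can be sketched directly: for a linear neuron the adaptive step reduces to the NLMS rule `mu / ||x||^2`. The step-size `mu`, the regularisation constant `eps`, and the toy data are illustrative choices, not taken from the paper.

```python
import random

# Hedged sketch of the NLMS connection: a linear neuron y = w . x trained with
# the input-power-normalised learning rate of the NLMS algorithm.

def nlms_train(samples, mu=1.0, eps=1e-8, dim=3):
    w = [0.0] * dim
    for x, d in samples:
        y = sum(wi * xi for wi, xi in zip(w, x))
        e = d - y
        step = mu / (sum(xi * xi for xi in x) + eps)  # adaptive learning rate
        w = [wi + step * e * xi for wi, xi in zip(w, x)]
    return w

random.seed(0)
true_w = [0.5, -1.0, 2.0]
samples = []
for _ in range(200):
    x = [random.uniform(-1.0, 1.0) for _ in range(3)]
    d = sum(ti * xi for ti, xi in zip(true_w, x))
    samples.append((x, d))
w_hat = nlms_train(samples)
```

With noiseless data, each normalised update projects the weight error onto the hyperplane consistent with the current sample, so the estimate converges geometrically to the true weights.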

4.
ATM communications network control by neural networks (cited by: 7)
A learning method that uses neural networks for service quality control in the asynchronous transfer mode (ATM) communications network is described. Because the precise characteristics of the source traffic are not known and the service quality requirements change over time, building an efficient network traffic controller is a difficult task. The proposed ATM network controller uses backpropagation neural networks to learn the relations between the offered traffic and service quality. The neural network is adaptive and easy to implement. A training data selection method called the leaky pattern table method is proposed to learn precise relations. The performance of the proposed controller is evaluated by simulation of basic call admission models.

5.
This paper puts forward a novel recurrent neural network (RNN), referred to as the context layered locally recurrent neural network (CLLRNN), for dynamic system identification. The CLLRNN is a dynamic neural network which proves effective in the input–output identification of both linear and nonlinear dynamic systems. The CLLRNN is composed of one input layer, one or more hidden layers, one output layer, and a context layer that improves the ability of the network to capture the linear characteristics of the system being identified. Dynamic memory is provided by feedback connections from nodes in the first hidden layer to nodes in the context layer and, when there are two or more hidden layers, from nodes in a hidden layer to nodes in the preceding hidden layer. In addition to these feedback connections, all nodes of the context and hidden layers have self-recurrent connections. A dynamic backpropagation algorithm with adaptive learning rate is derived to train the CLLRNN. To demonstrate the superior properties of the proposed architecture, it is applied to identify both linear and nonlinear dynamic systems, and its efficiency is demonstrated by comparing the results to some existing recurrent networks and design configurations. In addition, the performance of the CLLRNN is analyzed through an experimental application to a dc motor connected to a load, showing the practicability and effectiveness of the proposed neural network. Results of the experimental application are presented to make a quantitative comparison with an existing recurrent network in the literature.

6.
A formal selection and pruning technique based on the concept of a local relative sensitivity index is proposed for feedforward neural networks. The mechanism of the backpropagation training algorithm is revisited and the theoretical foundation of the improved selection and pruning technique is presented. The technique is based on parallel pruning of weights which are relatively redundant within a subgroup of a feedforward neural network. Comparative studies with a similar technique proposed in the literature show that the improved technique provides better pruning results in terms of reduction of model residues, improvement of generalization capability, and reduction of network complexity. The effectiveness of the improved technique is demonstrated by developing neural network models of a number of nonlinear systems, including the three-bit parity problem, the Van der Pol equation, a chemical process, and two nonlinear discrete-time systems, using the backpropagation training algorithm with adaptive learning rate.
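The subgroup-wise parallel pruning idea can be sketched as follows. The paper's local relative sensitivity index is not reproduced here; as a stand-in assumption, each weight's sensitivity is approximated by `|w * dE/dw|` normalised within its subgroup, and the least sensitive weights of the subgroup are pruned together.

```python
# Illustrative sketch only: sensitivity index approximated by |w * dE/dw|,
# normalised within a weight subgroup; the lowest-index weights are pruned
# in parallel (set to zero).

def relative_sensitivity(weights, grads):
    raw = [abs(w * g) for w, g in zip(weights, grads)]
    total = sum(raw) or 1.0
    return [r / total for r in raw]

def prune_subgroup(weights, grads, keep_ratio=0.5):
    idx = relative_sensitivity(weights, grads)
    order = sorted(range(len(weights)), key=lambda i: idx[i], reverse=True)
    keep = set(order[: max(1, int(len(weights) * keep_ratio))])
    return [w if i in keep else 0.0 for i, w in enumerate(weights)]

pruned = prune_subgroup([0.9, 0.01, -1.2, 0.05], [0.5, 0.4, 0.6, 0.01])
```

In this toy subgroup, the two weights with the largest weight-gradient products survive and the relatively redundant ones are zeroed.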

7.
In using a neural network for an application, data representation and network structure are critical to performance. While most improvements to networks focus on these aspects, we have found that modification of the error function based on current performance can result in significant advantages. We consider here a multilayered network trained by the backpropagation error reduction rule. We also consider a specific task, namely that of direct recognition of handwriting patterns, without any feature extraction to optimise the representation used. We show that the relaxation of the definition of error improves the final performance and accelerates learning. Since the application used in this study has generic qualities, we believe that the results of this numerical experiment are pertinent to a wide class of applications.

8.
As a learning machine, a neural network using the backpropagation training algorithm is subject to learning bias. This results in unpredictability of boundary generation behavior in pattern recognition applications, especially in the case of small training sample size. It is suggested that in a large class of pattern recognition problems, such as managerial and other problems possessing monotonicity properties, the effect of learning bias can be controlled by using multiarchitecture monotonic function neural networks.

9.
This paper overviews the myths and misconceptions that have surrounded neural networks in recent years. Focusing on backpropagation and the Hopfield network, we discuss the problems that have plagued practical application of these techniques, and review some of the recent progress made. Both real and perceived inadequacies of backpropagation are discussed, as well as the need for an understanding of statistics and of the problem domain in order to apply and assess the neural network properly. We consider alternatives or variants to backpropagation, which overcome some of its real limitations. The Hopfield network's poor performance on the traveling salesman problem in combinatorial optimization has colored its reception by engineers; we describe both new research in this area and promising results in other practical optimization applications. Overall, it is hoped, this paper will aid in a more balanced understanding of neural networks. They seem worthy of consideration in many applications, but they do not deserve the status of a panacea – nor are they as fraught with problems as would now seem to be implied.

10.
Curvature-driven smoothing: a learning algorithm for feedforward networks (cited by: 1)
The performance of feedforward neural networks in real applications can often be improved significantly if use is made of a priori information. For interpolation problems this prior knowledge frequently includes smoothness requirements on the network mapping, and can be imposed by the addition to the error function of suitable regularization terms. The new error function, however, now depends on the derivatives of the network mapping, and so the standard backpropagation algorithm cannot be applied. In this letter, we derive a computationally efficient learning algorithm, for a feedforward network of arbitrary topology, which can be used to minimize such error functions. Networks having a single hidden layer, for which the learning algorithm simplifies, are treated as a special case.
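The regularized error function described above can be illustrated with a simplified sketch (not the paper's derivation): a curvature penalty added to the sum-of-squares error, with the second derivative of the mapping approximated by central finite differences on a uniform grid. The weighting `lam` and the toy mappings are assumptions.

```python
# Simplified illustration: E = sum of squared errors + lam * sum of squared
# second derivatives, the latter estimated by central finite differences.

def regularized_error(xs, ys, f, lam=0.1):
    h = xs[1] - xs[0]                     # uniform grid spacing assumed
    data_term = sum((f(x) - y) ** 2 for x, y in zip(xs, ys))
    curvature = 0.0
    for i in range(1, len(xs) - 1):
        second = (f(xs[i + 1]) - 2.0 * f(xs[i]) + f(xs[i - 1])) / h ** 2
        curvature += second ** 2
    return data_term + lam * curvature

xs = [i * 0.1 for i in range(11)]
ys = list(xs)                             # data lie on the line y = x
smooth = lambda x: x                      # zero error, zero curvature
wiggly = lambda x: x + 0.001 * (-1) ** round(x / 0.1)  # tiny oscillation
e_smooth = regularized_error(xs, ys, smooth)
e_wiggly = regularized_error(xs, ys, wiggly)
```

Even though the oscillating interpolant fits the data almost perfectly, its curvature term dominates, so the regularized error correctly prefers the smooth mapping.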

11.
Interference in neural networks occurs when learning in one area of the input space causes unlearning in another area. Networks that are less susceptible to interference are referred to as spatially local networks. To obtain a better understanding of these properties, a theoretical framework, consisting of a measure of interference and a measure of network localization, is developed. These measures incorporate not only the network weights and architecture but also the learning algorithm. Using this framework to analyze sigmoidal, multilayer perceptron (MLP) networks that employ the backpropagation learning algorithm on the quadratic cost function, we address a familiar misconception that single-hidden-layer sigmoidal networks are inherently nonlocal by demonstrating that given a sufficiently large number of adjustable weights, single-hidden-layer sigmoidal MLPs exist that are arbitrarily local and retain the ability to approximate any continuous function on a compact domain.

12.
Programming the Matlab Neural Network Toolbox and Calling It from Delphi (cited by: 9)
杨敏, 沈春林. 《计算机工程》 (Computer Engineering), 2001, 27(11): 92-94
The oxidation stage of electric-arc-furnace steelmaking is a complex physico-chemical process for which a mechanistic model is difficult to build. A BP neural network prediction model is therefore built with the Matlab Neural Network Toolbox, and the programming of this model is discussed in detail. Using OLE Automation, a human-machine interface written in Delphi guides the smelting process.

13.
Multiple access interference and near-far effect cause the performance of the conventional single user detector in DS/CDMA systems to degrade. Due to the high complexity of the optimum multiuser detector, suboptimal multiuser detectors with less complexity and reasonable performance have received considerable attention. In this paper, we analyse the performance of the multilayer perceptron backpropagation neural network as a multiuser detector of CDMA signals in AWGN and multipath fading channels. Our results show significant improvement over previous research. We compare neural network performance with the other detectors, and apply different neural networks and criteria, such as decision-based, fuzzy decision, discriminative learning, minimum classification, and cross entropy neural networks, and compare their performance. We propose a modified decision-based network which significantly improves the performance.

14.
Learning without local minima in radial basis function networks (cited by: 54)
Learning from examples plays a central role in artificial neural networks. The success of many learning schemes is not guaranteed, however, since algorithms like backpropagation may get stuck in local minima, thus providing suboptimal solutions. For feedforward networks, optimal learning can be achieved provided that certain conditions on the network and the learning environment are met. This principle is investigated for the case of networks using radial basis functions (RBF). It is assumed that the patterns of the learning environment are separable by hyperspheres. In that case, we prove that the attached cost function is local minima free with respect to all the weights. This provides us with some theoretical foundations for a massive application of RBF in pattern recognition.
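A toy version of the setting above can be sketched (with assumed details): Gaussian RBF units with fixed centres, only the output weights trained, and two classes separable by a sphere around the origin. Because the cost is quadratic in the output weights, plain gradient descent cannot be trapped in a poor local minimum; centres, widths, and data generation here are illustrative.

```python
import math
import random

# Toy sketch: fixed Gaussian RBF features plus a bias, output weights trained
# by per-sample LMS. Class 1 lies inside the unit circle, class 0 outside
# radius 2, so the classes are separable by a (hyper)sphere.

random.seed(1)
centers = [(0.0, 0.0), (2.0, 2.0), (-2.0, 2.0), (2.0, -2.0), (-2.0, -2.0)]

def features(x, gamma=0.5):
    f = [math.exp(-gamma * ((x[0] - c[0]) ** 2 + (x[1] - c[1]) ** 2))
         for c in centers]
    return f + [1.0]                      # bias unit

data = []
for _ in range(100):
    t = random.uniform(0.0, 2.0 * math.pi)
    r_in, r_out = random.uniform(0.0, 1.0), random.uniform(2.0, 3.0)
    data.append(((r_in * math.cos(t), r_in * math.sin(t)), 1.0))
    data.append(((r_out * math.cos(t), r_out * math.sin(t)), 0.0))

w = [0.0] * 6
for _ in range(500):                      # LMS on the output layer only
    for x, y in data:
        f = features(x)
        err = sum(wi * fi for wi, fi in zip(w, f)) - y
        w = [wi - 0.1 * err * fi for wi, fi in zip(w, f)]

acc = sum((sum(wi * fi for wi, fi in zip(w, features(x))) > 0.5) == (y == 1.0)
          for x, y in data) / len(data)
```

The origin-centred basis function alone separates the two radii, so the convex output-layer problem is solved reliably by gradient descent.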

15.
This paper presents an application of artificial neural networks (ANNs) to the prediction of traction force, using readily available datasets obtained experimentally from a soil bin with a single-wheel tester. To this end, tests were first carried out on two soil textures and two tire types, varying velocity, slippage, tire inflation pressure, and wheel load. On this basis, the potential of neural modeling was assessed with multilayer perceptron networks trained by various algorithms, among which the standard backpropagation algorithm was compared with backpropagation using a declining learning rate factor, as these two initially yielded superior performance. The results revealed that the latter better achieved the aim of the study in terms of the performance criteria. It was further inferred that ANNs provide a reliable and promising tool for the prediction and modeling of traction force.

16.
In this paper we investigate multi-layer perceptron networks in the task domain of Boolean functions. We demystify the multi-layer perceptron network by showing that it simply divides the input space into regions bounded by hyperplanes. We use this insight to construct minimal training sets. Despite using minimal training sets, the learning time of multi-layer perceptron networks with backpropagation scales exponentially for complex Boolean functions. But modular neural networks which consist of independently trained subnetworks scale very well. We conjecture that the next generation of neural networks will be genetic neural networks which evolve their structure. We confirm Minsky and Papert: “The future of neural networks is tied not to the search for some single, universal scheme to solve all problems at once, but to the evolution of a many-faceted technology of network design.”

17.
Multilayer perceptrons trained with the backpropagation algorithm are tested in detection and classification tasks and are compared to optimal algorithms resulting from likelihood ratio tests. The focus is on the problem of one of M orthogonal signals in a Gaussian noise environment, since both the Bayesian detector and classifier are known for this problem and can provide a measure for the performance evaluation of the neural networks. Two basic situations are considered: detection and classification. For the detection part, it was observed that for the signal-known-exactly case (M=1), the performance of the neural detector converges to the performance of the ideal Bayesian decision processor, while for a higher degree of uncertainty (i.e. for a larger M), the performance of the multilayer perceptron is inferior to that of the optimal detector. For the classification case, the probability of error of the neural network is comparable to the minimum Bayesian error, which can be numerically calculated. Adding noise during the training stage of the network does not affect the performance of the neural detector; however, there is an indication that the presence of noise in the learning process of the neural classifier results in a degraded classification performance.
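The M=1 baseline against which the neural detector is compared can be sketched directly: with the signal known exactly in white Gaussian noise and equal priors, the likelihood-ratio (Bayes) detector reduces to a matched filter that thresholds the correlation of the received vector with the signal at half the signal energy. The signal, noise level, and trial count below are illustrative.

```python
import random

# Matched-filter (likelihood-ratio) detector for the signal-known-exactly
# case in white Gaussian noise with equal priors.

random.seed(42)
s = [1.0, -1.0, 1.0, 1.0]                 # known signal
sigma = 0.5                               # noise standard deviation

def matched_filter_present(x):
    corr = sum(xi * si for xi, si in zip(x, s))
    return corr > sum(si * si for si in s) / 2.0   # threshold = ||s||^2 / 2

trials = 2000
correct = 0
for _ in range(trials):
    present = random.random() < 0.5
    x = [(si if present else 0.0) + random.gauss(0.0, sigma) for si in s]
    correct += matched_filter_present(x) == present
accuracy = correct / trials
```

At this signal-to-noise ratio the theoretical accuracy is about 98%, which is the benchmark a trained neural detector would be expected to approach.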

18.
Backpropagation neural networks have been applied to prediction and classification problems in many real world situations. However, a drawback of this type of neural network is that it requires a full set of input data, and real world data is seldom complete. We have investigated two ways of dealing with incomplete data — network reduction using multiple neural network classifiers, and value substitution using estimated values from predictor networks — and compared their performance with an induction method. On a thyroid disease database collected in a clinical situation, we found that the network reduction method was superior. We conclude that network reduction can be a useful method for dealing with missing values in diagnostic systems based on backpropagation neural networks.
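The value-substitution strategy mentioned above can be sketched in miniature: a missing feature (here `None`) is replaced by an estimate computed from the observed records before the classifier sees it. The paper uses trained predictor networks for the estimates; a per-column mean is substituted here purely for illustration, and the data are made up.

```python
# Hedged sketch of value substitution: fill each missing entry with the mean
# of the observed values in that column (stand-in for a predictor network).

def fit_mean_imputer(records):
    cols = len(records[0])
    means = []
    for j in range(cols):
        vals = [r[j] for r in records if r[j] is not None]
        means.append(sum(vals) / len(vals))
    return means

def impute(record, means):
    return [m if v is None else v for v, m in zip(record, means)]

data = [[1.0, 2.0], [3.0, None], [5.0, 6.0]]
means = fit_mean_imputer(data)
completed = [impute(r, means) for r in data]
```

Network reduction, by contrast, would route the incomplete record to a classifier trained without the missing input rather than filling the gap.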

19.
The supervised training of feedforward neural networks is often based on the error backpropagation algorithm. The authors consider the successive layers of a feedforward neural network as the stages of a pipeline which is used to improve the efficiency of the parallel algorithm. A simple placement rule is used to take advantage of simultaneous executions of the calculations on each layer of the network. The analytic expressions show that the parallelization is efficient. Moreover, they indicate that the performance of this implementation is almost independent of the neural network architecture. Their simplicity assures easy prediction of learning performance on a parallel machine for any neural network architecture. The experimental results are in agreement with analytical estimates.

20.
The use of neural network models for time series forecasting has been motivated by experimental results that indicate high capacity for function approximation with good accuracy. Generally, these models use activation functions with fixed parameters. However, it is known that the choice of activation function strongly influences the complexity and performance of the neural network, and that only a limited number of activation functions have been used in general. We describe the use of a family of asymmetric activation functions with a free parameter for neural networks, and prove that this family satisfies the requirements of the universal approximation theorem. We present a methodology for global optimization of the free parameter of the activation functions and the connections between the processing units of the neural network. The main idea is to optimize simultaneously the weights and the activation function used in a multilayer perceptron (MLP), through an approach that combines the advantages of simulated annealing, tabu search, and a local learning algorithm. We have chosen two local learning algorithms: backpropagation with momentum (BPM) and Levenberg–Marquardt (LM). The overall purpose is to improve performance in time series forecasting.
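The joint search over weights and an activation free parameter can be sketched with assumed details: simulated annealing perturbs both a connection weight `w` and an activation parameter `a` of a single unit `y = 1/(1 + exp(-a*w*x))`, accepting worse moves with a temperature-dependent probability. The paper additionally interleaves tabu search and a local learner (BPM or LM), which are omitted here; the data, cooling schedule, and step sizes are illustrative.

```python
import math
import random

# Toy simulated-annealing sketch: jointly optimize a weight w and a free
# activation parameter a of a single sigmoidal unit against data generated
# with a*w = 1.5.

random.seed(3)
xs = [-2.0, -1.0, 0.0, 1.0, 2.0]

def act(z):
    z = max(-500.0, min(500.0, z))        # clip to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

ys = [act(1.5 * x) for x in xs]

def mse(w, a):
    return sum((act(a * w * x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

w, a = 0.1, 0.1
cur = mse(w, a)
best_e = cur
temp = 1.0
for _ in range(5000):
    nw, na = w + random.gauss(0.0, 0.1), a + random.gauss(0.0, 0.1)
    e = mse(nw, na)
    if e < cur or random.random() < math.exp(-(e - cur) / temp):
        w, a, cur = nw, na, e
        best_e = min(best_e, e)
    temp = max(1e-3, temp * 0.999)        # geometric cooling with a floor
```

Any pair with `a*w = 1.5` is a global minimum, so the annealer only needs to find the right product; in the full method a local gradient step would refine each accepted move.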


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号