Similar Literature
20 similar documents found (search time: 0 ms)
1.
A general backpropagation algorithm is proposed for feedforward neural network learning with time-varying inputs. A Lyapunov function approach is used to rigorously analyze how, under the algorithm, the weights converge toward minima of the error function. Sufficient conditions to guarantee the convergence of weights for time-varying inputs are derived. It is shown that the most commonly used backpropagation learning algorithms are special cases of the developed general algorithm.
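As a point of reference for the special cases mentioned, here is a minimal sketch of plain online backpropagation on a stream of time-varying inputs. The network size, learning rate, and the synthetic input/target signals are illustrative assumptions; the paper's general algorithm and its Lyapunov-based convergence conditions are not reproduced.

```python
# Minimal sketch: online backpropagation for a one-hidden-layer network
# trained on a stream of time-varying inputs. This is the plain SGD
# special case, not the paper's general algorithm.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(8, 1))   # input -> hidden
W2 = rng.normal(scale=0.5, size=(1, 8))   # hidden -> output
eta = 0.05                                # fixed learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for t in range(5000):
    x = np.array([[np.sin(0.01 * t)]])    # time-varying input
    y = np.array([[x[0, 0] ** 2]])        # target: x(t)^2
    h = sigmoid(W1 @ x)                   # forward pass
    y_hat = W2 @ h
    e = y_hat - y                         # output error
    # backward pass (chain rule)
    dW2 = e @ h.T
    dW1 = (W2.T @ e) * h * (1 - h) @ x.T
    W2 -= eta * dW2
    W1 -= eta * dW1
```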

2.
3.
In this paper, a constructive one-hidden-layer network is introduced where each hidden unit employs a polynomial activation function that differs from those of the other units. Specifically, both structure-level and function-level adaptation methodologies are utilized in constructing the network. The function-level adaptation scheme ensures that the "growing" or constructive network has a different activation function for each neuron, so that the network may capture the underlying input-output map more effectively. The activation functions considered are orthonormal Hermite polynomials. It is shown through extensive simulations that the proposed network yields improved performance compared to networks having identical sigmoidal activation functions.
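For concreteness, the sketch below implements orthonormal Hermite functions that could serve as per-unit activations, with unit n using basis function h_n. The recurrence and normalization follow the standard physicists' convention; the paper's growing procedure and training scheme are not reproduced.

```python
# Sketch: orthonormal Hermite activation functions, one per hidden unit,
# h_n(x) = c_n * H_n(x) * exp(-x^2/2) with physicists' Hermite polynomials.
import math
import numpy as np

def hermite_activation(n, x):
    """Orthonormal Hermite function of order n evaluated at x."""
    H_prev, H = np.ones_like(x), 2.0 * x          # H_0, H_1
    if n == 0:
        H = H_prev
    else:
        for k in range(1, n):                     # H_{k+1} = 2x H_k - 2k H_{k-1}
            H_prev, H = H, 2.0 * x * H - 2.0 * k * H_prev
    c = 1.0 / math.sqrt(2.0 ** n * math.factorial(n) * math.sqrt(math.pi))
    return c * H * np.exp(-x ** 2 / 2.0)

x = np.linspace(-3, 3, 7)
hidden_outputs = np.stack([hermite_activation(n, x) for n in range(4)])
```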

4.
This paper presents the use of a neural network and a decision tree, evolved by genetic programming (GP), in thalassaemia classification. The aim is to differentiate between thalassaemic patients, persons with thalassaemia trait, and normal subjects by inspecting characteristics of red blood cells, reticulocytes and platelets. A structured representation on genetic algorithms for non-linear function fitting, or STROGANOFF, is the chosen architecture for the genetic programming implementation. For comparison, multilayer perceptrons are explored for classification via a neural network. The classification results indicate that the performance of the GP-based decision tree is approximately equal to that of the multilayer perceptron with one hidden layer. However, the multilayer perceptron with two hidden layers, which proves to have the most suitable architecture among networks with different numbers of hidden layers, outperforms the GP-based decision tree. Nonetheless, the structure of the decision tree reveals that some input features have no effect on the classification performance. The results confirm that the classification accuracy of the multilayer perceptron with two hidden layers can still be maintained after the removal of the redundant input features. A detailed analysis of the classification errors of the multilayer perceptron with two hidden layers, in which a reduced feature set is used as the network input, is also included. The analysis reveals that classification ambiguity and misclassification between persons with minor thalassaemia trait and normal subjects are the main cause of classification errors. These results suggest that combining a multilayer perceptron with blood cell analysis may provide a guideline for further investigation of thalassaemia classification.
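A hedged sketch of the two-hidden-layer multilayer perceptron setup described above, using scikit-learn. The feature matrix and labels here are synthetic placeholders standing in for the red blood cell, reticulocyte and platelet measurements; none of the paper's data or feature definitions are assumed.

```python
# Hedged sketch: a two-hidden-layer MLP for a three-class problem
# (normal / trait / thalassaemic). Features and labels are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))        # stand-ins for blood-cell features
y = rng.integers(0, 3, size=300)     # synthetic class labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(10, 10), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```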

5.
T. S. Neurocomputing, 2009, 72(16-18): 3915.
The major drawbacks of the backpropagation algorithm are local minima and slow convergence. This paper presents an efficient technique, ANMBP, for training a single-hidden-layer neural network that improves convergence speed and escapes local minima. The algorithm is based on a modified backpropagation algorithm in a neighborhood-based neural network, replacing fixed learning parameters with adaptive learning parameters. The developed learning algorithm is applied to several problems, on all of which it performs well.
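The abstract does not spell out ANMBP's update rule, so the sketch below only illustrates the general idea of replacing a fixed learning rate with per-weight adaptive rates (a delta-bar-delta-style rule) on a toy quadratic loss.

```python
# Sketch of adaptive learning parameters: grow a weight's rate while its
# gradient keeps the same sign, shrink it on a sign flip. Not ANMBP itself.
import numpy as np

def adapt_rates(rates, grad, prev_grad, up=1.05, down=0.5):
    same_sign = grad * prev_grad > 0
    rates = np.where(same_sign, rates * up, rates * down)
    return np.clip(rates, 1e-5, 1.0)

rng = np.random.default_rng(0)
w = rng.normal(size=4)
rates = np.full(4, 0.1)
prev_grad = np.zeros(4)
for step in range(100):
    grad = 2 * (w - 1.0)                 # toy quadratic loss ||w - 1||^2
    rates = adapt_rates(rates, grad, prev_grad)
    w -= rates * grad
    prev_grad = grad
```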

6.
This study presents an explicit demonstration of constructing a multilayer feedforward neural network to approximate polynomials and conduct polynomial fitting. Built on an algebraic analysis of sigmoidal activation functions rather than incremental training, this work reveals the capability of the "universal approximator" by relating the "soft computing tool" to an important class of conventional computing tools widely used in modeling nonlinear dynamic systems and many other scientific computing applications. The authors strive to enable physical interpretations and afford full control when applying the highly adaptive, powerful, yet subjective neural network approach. This work is part of the effort to bridge the gap between black-box and mechanics-based parametric modeling.
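One algebraic construction of this kind is sketched below: a two-hidden-unit sigmoidal network whose output is a scaled second difference approximates x^2, since sig(c+hx) - 2*sig(c) + sig(c-hx) is approximately sig''(c)*(hx)^2 for small h. The bias point c and step h are illustrative choices, not taken from the paper.

```python
# Sketch: recovering x^2 from two shifted sigmoids via a second difference.
import numpy as np

def sig(z):
    return 1.0 / (1.0 + np.exp(-z))

c, h = 1.0, 0.05                        # bias point with sig''(c) != 0
s1 = sig(c) * (1 - sig(c))              # sig'(c)
sig2 = s1 * (1 - 2 * sig(c))            # sig''(c) for the logistic sigmoid

x = np.linspace(-2, 2, 9)
approx = (sig(h * x + c) - 2 * sig(c) + sig(c - h * x)) / (h ** 2 * sig2)
print(np.max(np.abs(approx - x ** 2)))  # small approximation error
```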

7.
This paper reports on studies to overcome difficulties associated with setting the learning rates of backpropagation neural networks by using fuzzy logic. Building on previous research, a fuzzy control system is designed that is capable of dynamically adjusting the individual learning rates of both hidden and output neurons, as well as the momentum term, within a backpropagation network. Results show that the fuzzy controller not only eliminates the effort of configuring a global learning rate, but also increases the rate of convergence in comparison with a conventional backpropagation network. Comparative studies are presented for a number of different network configurations. The paper also presents a brief overview of fuzzy logic and backpropagation learning, highlighting how the two paradigms can enhance each other.
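A hedged single-input sketch of fuzzy learning-rate control: the relative change in error is fuzzified with triangular memberships and defuzzified into a rate multiplier. The paper's controller is richer (per-neuron rates for hidden and output layers plus a momentum term); the membership ranges and rule consequents below are assumptions.

```python
# Sketch: one fuzzy rule set mapping error change to a learning-rate factor.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

def rate_multiplier(delta_e):
    # delta_e: relative error change; negative means the error is falling
    mu_fall = tri(delta_e, -0.2, -0.1, 0.0)
    mu_steady = tri(delta_e, -0.05, 0.0, 0.05)
    mu_rise = tri(delta_e, 0.0, 0.1, 0.2)
    w = np.array([mu_fall, mu_steady, mu_rise])
    out = np.array([1.1, 1.0, 0.7])     # grow, hold, shrink the rate
    s = w.sum()
    return float(np.dot(w, out) / s) if s > 0 else 1.0

eta = 0.1
eta *= rate_multiplier(-0.08)           # error fell 8% -> rate grows
```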

8.
Manufacturing features recognition using backpropagation neural networks
A backpropagation neural network (BPN) is applied to the problem of feature recognition from a boundary representation (B-rep) solid model to facilitate process planning of manufactured products. The approach is based on the face complexity code, which represents features, and a neural network that performs the recognition. The face complexity code is a measure of the face complexity of a feature based on the convexity or concavity of the surrounding geometry. The codes for various features are fed to the network for analysis. A backpropagation network is implemented for recognition of features and tested on published results to measure its performance. Any two or more features having significant differences in face complexity codes were used as exemplars for training the network. A new feature presented to the network is associated with one of the existing clusters if they are similar; otherwise, the network creates a new cluster. Experimental results show that the network was consistent in recognizing features and is therefore appropriate for feature recognition in an automated manufacturing environment.
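The assign-or-create behaviour described can be mirrored by a nearest-centroid rule, sketched below. The actual paper uses a backpropagation network on face complexity codes; the distance threshold and the example codes here are hypothetical.

```python
# Sketch of the cluster logic only: join an existing cluster if the new
# face-complexity code is similar enough, otherwise start a new cluster.
import numpy as np

clusters = []                       # stored face-complexity codes

def assign(code, tol=1.5):
    code = np.asarray(code, dtype=float)
    if clusters:
        dists = [np.linalg.norm(code - c) for c in clusters]
        i = int(np.argmin(dists))
        if dists[i] <= tol:
            return i                # similar: join existing cluster
    clusters.append(code)           # otherwise: create a new cluster
    return len(clusters) - 1

assign([3, 1, 0, 2])                # first feature -> cluster 0
assign([3, 1, 1, 2])                # close code -> cluster 0
assign([7, 5, 4, 6])                # dissimilar -> new cluster 1
```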

9.
The paper demonstrates the efficient use of hybrid intelligent systems for solving the classification problem of bankruptcy. The aim of the study is to obtain classification schemes able to predict business failure. Previous attempts to build efficient classifiers for the same problem using intelligent or statistical techniques are discussed throughout the paper. The application of neural logic networks by means of genetic programming is proposed. This is an advantageous approach that enables the interpretation of the network structure through a set of expert rules, a desirable feature for field experts. These evolutionary neural logic networks result from an innovative hybrid intelligent methodology in which evolutionary programming techniques are used to obtain the best possible topology of a neural logic network. The genetic programming process is guided by a context-free grammar and an indirect encoding of the neural logic networks into the genetic programming individuals. Indicative classification results are presented and discussed in detail in terms of both classification accuracy and solution interpretability.

10.
Feedforward neural networks (FNNs) have been proposed to solve complex problems in pattern recognition, classification, and function approximation. Despite the general success of learning methods for FNNs, such as the backpropagation (BP) algorithm, second-order optimization algorithms and layer-wise learning algorithms, several drawbacks remain to be overcome. In particular, two major drawbacks are convergence to local minima and long learning times. We propose an efficient learning method for an FNN that combines the BP strategy and layer-by-layer optimization. More precisely, we construct the layer-wise optimization method using the Taylor series expansion of the nonlinear operators describing an FNN and propose to update the weights of each layer by a BP-based Kaczmarz iterative procedure. The experimental results show that the new learning algorithm is stable, reduces the learning time, and demonstrates improved generalization results in comparison with other well-known methods.
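The Kaczmarz procedure at the core of the per-layer update is sketched below on a plain linear system: the estimate is projected onto each row's hyperplane in turn. In the paper this iteration is applied to the Taylor-linearized weight equations of each layer, which is not reproduced here.

```python
# Sketch: classical Kaczmarz row iteration for a consistent system Ax = b.
import numpy as np

def kaczmarz(A, b, sweeps=50):
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x += (b[i] - a @ x) / (a @ a) * a   # project onto row i
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 4.0])
print(kaczmarz(A, b))      # converges toward the solution [1, 1]
```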

11.
The mathematical essence and structure of feedforward neural networks are investigated in this paper. The interpolation mechanisms of feedforward neural networks are explored. For example, the well-known result that a neural network is a universal approximator follows naturally from the interpolative representations. Finally, the learning algorithms of feedforward neural networks are discussed.

12.
International Journal of Computer Mathematics, 2012, 89(1-2): 201-222
Much effort has previously been spent in investigating the decision making/object identification capabilities of feedforward neural networks. In the present work we examine the less frequently investigated abilities of such networks to implement computationally useful operations in arithmetic and function evaluation. The approach taken is to employ standard training methods, such as backpropagation, to teach simple three-level networks to perform selected operations ranging from one-to-one mappings to many-to-many mappings. Examples considered cover a wide range, such as performing reciprocal arithmetic on real valued inputs, implementing particle identifier functions for identification of nuclear isotopes in scattering experiments, and locating the coordinates of a charged particle moving on a surface. All mappings are required to interpolate and extrapolate from a small sample of taught exemplars to the general continuous domain of possible inputs. A unifying principle is proposed that looks upon all such function constructions as expansions in terms of basis functions, each of which is associated with a hidden node and is parameterized by such techniques as gradient descent methods.
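One of the mappings mentioned, reciprocal arithmetic, can be reproduced in miniature: train a small three-level network on a few exemplars of 1/x and check interpolation on unseen inputs. The architecture, input range, and solver settings below are illustrative assumptions, not the paper's setup.

```python
# Sketch: teaching a small network the reciprocal from a few exemplars.
import numpy as np
from sklearn.neural_network import MLPRegressor

x_train = np.linspace(1.0, 10.0, 25).reshape(-1, 1)
y_train = 1.0 / x_train.ravel()

net = MLPRegressor(hidden_layer_sizes=(20,), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(x_train, y_train)

x_test = np.array([[1.7], [4.3], [8.9]])       # unseen inputs
print(net.predict(x_test), 1.0 / x_test.ravel())
```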

13.
This paper presents a function approximation to a general class of polynomials by using one-hidden-layer feedforward neural networks (FNNs). The approximations of both algebraic and trigonometric polynomial functions are discussed in detail. For algebraic polynomial functions, a one-hidden-layer FNN with a chosen number of hidden-layer nodes and corresponding weights is established by a constructive method to approximate the polynomials to a remarkably high degree of accuracy. For trigonometric functions, an upper bound on the approximation error is derived for the constructive FNNs. In addition, algorithmic examples are included to confirm the accuracy of the constructive FNN method. The results show that it efficiently improves the approximation of both algebraic and trigonometric polynomials. Consequently, the work is of both theoretical and practical significance in constructing one-hidden-layer FNNs for approximating this class of polynomials. The work also potentially paves the way for extending neural networks to approximate a general class of complicated functions, both in theory and in practice.

14.
Saul LK, Jordan MI. Neural Computation, 2000, 12(6): 1313-1335.
We study probabilistic generative models parameterized by feedforward neural networks. An attractor dynamics for probabilistic inference in these models is derived from a mean field approximation for large, layered sigmoidal networks. Fixed points of the dynamics correspond to solutions of the mean field equations, which relate the statistics of each unit to those of its Markov blanket. We establish global convergence of the dynamics by providing a Lyapunov function and show that the dynamics generate the signals required for unsupervised learning. Our results for feedforward networks provide a counterpart to those of Cohen-Grossberg and Hopfield for symmetric networks.
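A simplified sketch of such attractor dynamics: iterate mean-field equations of the form m = sigmoid(Wm + b) until a fixed point. The paper's equations couple each unit to its full Markov blanket (parents and children); this damped single-matrix iteration only conveys the fixed-point mechanism.

```python
# Sketch: damped fixed-point iteration of mean-field equations.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(6, 6))
b = rng.normal(scale=0.1, size=6)

m = np.full(6, 0.5)                    # initial mean activities
for _ in range(200):
    m_new = sigmoid(W @ m + b)
    if np.max(np.abs(m_new - m)) < 1e-10:
        break                          # fixed point reached
    m = 0.5 * m + 0.5 * m_new          # damping aids convergence
```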

15.
This paper proposes a new method for modeling partially connected feedforward neural networks (PCFNNs) from the identified input type (IT), which refers to whether each input is coupled with, or uncoupled from, the other inputs in generating the output. The identification is done by analyzing how input sensitivities change as the magnitude of the inputs is amplified. The sensitivity changes of the uncoupled inputs are not correlated with the variation of any other input, while those of the coupled inputs are correlated with the variation of any one of the coupled inputs. According to the identified ITs, a PCFNN can be structured. Each uncoupled input does not share hidden-layer neurons with other inputs, so that it contributes to the output independently, while the coupled inputs share neurons with one another. After the mathematical input sensitivity analysis is derived for each IT, several experiments, as well as a real example (blood pressure (BP) estimation), are described to demonstrate how well the method works.
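The identification idea can be demonstrated on a toy map y = x1^2 + x2*x3, where x1 is uncoupled and x2, x3 are coupled: perturb the other inputs and watch whether an input's sensitivity moves. The finite-difference test below is an illustrative stand-in for the paper's analysis of a trained FNN.

```python
# Sketch: an input is uncoupled if its sensitivity ignores the other inputs.
import numpy as np

def f(x):
    return x[0] ** 2 + x[1] * x[2]     # x1 uncoupled; x2, x3 coupled

def sensitivity(x, i, eps=1e-5):
    xp, xm = x.copy(), x.copy()
    xp[i] += eps
    xm[i] -= eps
    return (f(xp) - f(xm)) / (2 * eps)

rng = np.random.default_rng(0)
base = np.array([1.0, 1.0, 1.0])
for i in range(3):
    # vary the *other* inputs and watch whether dy/dx_i moves
    sens = [sensitivity(base + rng.normal(scale=0.5, size=3) * (np.arange(3) != i), i)
            for _ in range(5)]
    print(f"input {i}: sensitivity spread = {np.ptp(sens):.4f}")
# input 0: spread ~ 0 -> uncoupled; inputs 1, 2: nonzero spread -> coupled
```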

16.
The aim of the paper is to endow a well-known structure for processing time-dependent information, synaptic delay-based ANNs, with a reliable and easy-to-implement algorithm suitable for training temporal decision processes. In fact, we extend the backpropagation algorithm to discrete-time feedforward networks that include adaptable internal time delays in the synapses. The structure of the network is similar to the one presented by Day and Davenport (1993): in addition to the weights modeling the transmission capabilities of the synaptic connections, we model their length by means of a parameter that indicates the delay a discrete event suffers when going from the origin neuron to the target neuron through a synaptic connection. Like the weights, these delays are also trainable, and a training algorithm can be derived that is almost as simple as the backpropagation algorithm, and which is really an extension of it. We present examples of the application of these networks and the algorithm to the prediction of time series and to the recognition of patterns in electrocardiographic signals. In the first case, we employ the temporal reasoning characteristics of these networks to predict future values in a benchmark time series: the one governed by the Mackey-Glass chaotic equation. In the second case, we provide a real-life example. The problem consists of identifying different types of beats through two levels of temporal processing: one relating the morphological features which make up the beat in time, and another relating the positions of beats in time, that is, considering the rhythm characteristics of the ECG signal. To do this, the network receives the signal sequentially; no windowing, segmentation, or thresholding is applied.
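The prediction benchmark cited, the Mackey-Glass series, can be generated as below by Euler integration of dx/dt = beta*x(t-tau)/(1 + x(t-tau)^n) - gamma*x(t) with the standard chaotic setting tau = 17. The delay-network training itself is not reproduced.

```python
# Sketch: generating the Mackey-Glass chaotic benchmark series.
import numpy as np

def mackey_glass(n=2000, tau=17, dt=1.0, beta=0.2, gamma=0.1, n_exp=10):
    x = np.full(n + tau, 1.2)                 # constant history as initial condition
    for t in range(tau, n + tau - 1):
        x_tau = x[t - tau]
        x[t + 1] = x[t] + dt * (beta * x_tau / (1 + x_tau ** n_exp) - gamma * x[t])
    return x[tau:]

series = mackey_glass()
```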

17.
On the overtraining phenomenon of backpropagation neural networks
A very important subject for the consolidation of neural networks is the study of their capabilities. In this paper, the relationships between network size, training set size and generalization capabilities are examined. The phenomenon of overtraining in backpropagation networks is discussed and an extension to an existing algorithm is described. The extended algorithm provides a new energy function and its advantages, such as improved plasticity and performance along with its dynamic properties, are explained. The algorithm is applied to some common problems (XOR, numeric character recognition and function approximation) and simulation results are presented and discussed.
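A minimal sketch of detecting the overtraining point: track validation error per epoch and stop at the last minimum once it has failed to improve for a few epochs. The patience value and error trace below are illustrative; the paper's new energy function is not reproduced.

```python
# Sketch: flagging the onset of overtraining from a validation-error trace.
import numpy as np

def best_epoch(val_errors, patience=5):
    """Epoch of the last validation minimum before `patience`
    consecutive epochs without improvement."""
    best, best_ep, waited = np.inf, 0, 0
    for ep, e in enumerate(val_errors):
        if e < best:
            best, best_ep, waited = e, ep, 0
        else:
            waited += 1
            if waited >= patience:
                break
    return best_ep

val = [0.9, 0.6, 0.45, 0.40, 0.39, 0.41, 0.44, 0.48, 0.55, 0.60]
print(best_epoch(val))   # -> 4, where overtraining begins
```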

18.
Artificial neural networks (ANNs) have been used successfully in applications such as pattern recognition, image processing, automation and control. The majority of today's applications use backpropagation-trained feedforward ANNs. In this paper, two methods for learning P patterns in an L-layer ANN on an n × n RMESH are presented. One requires memory space of O(nL) but is conceptually simpler to develop; the other uses a pipelined approach that reduces the memory requirement to O(L). Both algorithms take O(PL) time and are optimal for the RMESH architecture.

19.
Inverting feedforward neural networks using linear and nonlinear programming
The problem of inverting trained feedforward neural networks is to find the inputs which yield a given output. In general, this is an ill-posed problem. We present a method for dealing with the inverse problem using mathematical programming techniques. The principal idea behind the method is to formulate the inverse problem as a nonlinear programming problem, a separable programming (SP) problem, or a linear programming problem, according to the architecture of the network to be inverted or the type of network inversion to be computed. An important advantage of the method over the existing iterative inversion algorithm is that various designated network inversions of multilayer perceptrons and radial basis function neural networks can be obtained by solving the corresponding SP problems, which can be solved by a modified simplex method. We present several examples to demonstrate the proposed method and applications of network inversions to examining and improving the generalization performance of trained networks. The results show the effectiveness of the proposed method.
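The iterative inversion baseline that the mathematical-programming approach improves on can be sketched as gradient descent on the input of a fixed network, driving its output toward a target. The toy network, target value, and step size below are assumptions.

```python
# Sketch: iterative network inversion by gradient descent on the input x,
# with the (here randomly fixed) weights held constant.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(5, 2)), rng.normal(size=(1, 5))   # fixed "trained" net

def forward(x):
    return W2 @ sigmoid(W1 @ x)

target = np.array([[0.3]])
x = rng.normal(size=(2, 1))                  # initial guess for the inverse
for _ in range(2000):
    h = sigmoid(W1 @ x)
    e = forward(x) - target
    grad_x = W1.T @ ((W2.T @ e) * h * (1 - h))   # gradient of 0.5*e^2 w.r.t. x
    x -= 0.1 * grad_x
```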

20.
The problem of training feedforward neural networks is considered. To solve it, new algorithms are proposed. They are based on an asymptotic analysis of the extended Kalman filter (EKF) and on a separable network structure. Linear weights are interpreted as diffusion random variables with zero expectation and a covariance matrix proportional to an arbitrarily large parameter λ. Asymptotic expressions for the EKF are derived as λ→∞. They are called diffusion learning algorithms (DLAs). It is shown that they are robust with respect to the accumulation of rounding errors, in contrast to their prototype, the EKF with a large but finite λ, and that, under certain simplifying assumptions, an extreme learning machine (ELM) algorithm can be obtained from a DLA. A numerical example shows that the accuracy of a DLA may be higher than that of an ELM algorithm.
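The ELM special case mentioned at the end is easy to sketch: draw hidden weights at random and solve only for the linear output weights, here by least squares. The target function and layer sizes are illustrative; the DLA itself (the λ→∞ limit of the EKF) is not reproduced.

```python
# Sketch: extreme learning machine with random hidden weights and a
# least-squares solve for the linear output weights.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))           # inputs
y = np.sin(X).sum(axis=1)                       # target function

W = rng.normal(size=(3, 50))                    # random, untrained hidden weights
b = rng.normal(size=50)
H = np.tanh(X @ W + b)                          # random hidden features
beta, *_ = np.linalg.lstsq(H, y, rcond=None)    # linear output weights

y_hat = H @ beta
print("train RMSE:", np.sqrt(np.mean((y_hat - y) ** 2)))
```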

