Similar Documents
20 similar documents found.
1.
The mathematical essence and structure of feedforward neural networks are investigated in this paper, and their interpolation mechanisms are explored. For example, the well-known result that a neural network is a universal approximator follows naturally from these interpolative representations. Finally, learning algorithms for feedforward neural networks are discussed.
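Below is a minimal numpy sketch of the interpolation view mentioned in the abstract: a one-hidden-layer network with N tanh hidden units can be made to pass exactly through N samples by solving a linear system for its output weights (the architecture and the sine target are illustrative choices, not the paper's).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10
x = np.linspace(-1.0, 1.0, N)          # sample points to interpolate
y = np.sin(3.0 * x)                    # target values

W = rng.normal(size=(N, 1))            # random hidden input weights
b = rng.normal(size=N)                 # random hidden biases
H = np.tanh(x[:, None] * W.T + b)      # N x N matrix of hidden activations

v = np.linalg.solve(H, y)              # output weights solving H v = y
assert np.allclose(H @ v, y)           # the network interpolates all N samples
```

The random hidden layer makes H generically invertible, so exact interpolation, and with it the universal-approximation picture, falls out of plain linear algebra.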

2.
Saul LK, Jordan MI. Neural Computation, 2000, 12(6):1313-1335
We study probabilistic generative models parameterized by feedforward neural networks. An attractor dynamics for probabilistic inference in these models is derived from a mean field approximation for large, layered sigmoidal networks. Fixed points of the dynamics correspond to solutions of the mean field equations, which relate the statistics of each unit to those of its Markov blanket. We establish global convergence of the dynamics by providing a Lyapunov function and show that the dynamics generate the signals required for unsupervised learning. Our results for feedforward networks provide a counterpart to those of Cohen-Grossberg and Hopfield for symmetric networks.
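As a rough illustration of mean-field fixed-point dynamics (this is the classical symmetric-network iteration that the abstract cites as the Cohen-Grossberg/Hopfield counterpart, not Saul and Jordan's layered equations), one can damp the updates and iterate until the means stop changing:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(1)
n = 8
J = rng.normal(scale=0.3, size=(n, n))
J = (J + J.T) / 2.0                    # symmetric couplings
np.fill_diagonal(J, 0.0)
h = rng.normal(scale=0.1, size=n)

mu = np.full(n, 0.5)                   # initial mean-field estimates
for _ in range(500):
    mu_new = 0.5 * mu + 0.5 * sigmoid(J @ mu + h)   # damped update
    if np.max(np.abs(mu_new - mu)) < 1e-10:
        break                          # fixed point of the mean field equations
    mu = mu_new
```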

3.
International Journal of Computer Mathematics, 2012, 89(1-2):201-222
Much effort has previously been spent investigating the decision-making and object-identification capabilities of feedforward neural networks. In the present work we examine the less frequently investigated ability of such networks to implement computationally useful operations in arithmetic and function evaluation. The approach taken is to employ standard training methods, such as backpropagation, to teach simple three-level networks to perform selected operations ranging from one-to-one mappings to many-to-many mappings. The examples considered cover a wide range: performing reciprocal arithmetic on real-valued inputs, implementing particle-identifier functions for identifying nuclear isotopes in scattering experiments, and locating the coordinates of a charged particle moving on a surface. All mappings are required to interpolate and extrapolate from a small sample of taught exemplars to the general continuous domain of possible inputs. A unifying principle is proposed that views all such function constructions as expansions in terms of basis functions, each of which is associated with a hidden node and is parameterized by techniques such as gradient descent.
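A minimal sketch of one of the listed tasks, teaching a three-level network reciprocal arithmetic by plain backpropagation (hidden size, learning rate, and the training interval are toy choices, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.5, 2.0, 12)[:, None]      # taught exemplars
y = 1.0 / x                                 # reciprocal arithmetic target

W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.05

for _ in range(20000):
    h = np.tanh(x @ W1 + b1)                # hidden layer
    out = h @ W2 + b2                       # linear output
    err = out - y
    # backpropagation of the mean squared error
    gW2 = h.T @ err / len(x);  gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h**2)
    gW1 = x.T @ dh / len(x);   gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

x_test = np.array([[0.7], [1.3], [1.9]])    # probe interpolation between exemplars
pred = np.tanh(x_test @ W1 + b1) @ W2 + b2
print(pred.ravel(), (1.0 / x_test).ravel())
```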

4.
This paper proposes a two-stage feedforward neural network (FFNN) based approach for modeling the fundamental frequency (F0) values of a sequence of syllables. In this study, (i) linguistic constraints represented by positional, contextual, and phonological features, (ii) production constraints represented by articulatory features, and (iii) linguistically relevant tilt parameters are proposed for predicting intonation patterns. In the first stage, tilt parameters are predicted using the linguistic and production constraints. In the second stage, the F0 values of the syllables are predicted using the tilt parameters from the first stage together with the basic linguistic and production constraints. The prediction performance of the neural network models is evaluated using objective measures such as average prediction error (μ), standard deviation (σ), and linear correlation coefficient (γ_{X,Y}). The prediction accuracy of the proposed two-stage FFNN model is compared with that of other statistical models such as Classification and Regression Tree (CART) and Linear Regression (LR) models. The prediction accuracy of the intonation models is also analyzed through listening tests that evaluate the quality of synthesized speech after the intonation models are incorporated into the baseline system. The evaluation shows that prediction accuracy is better for the two-stage FFNN models than for the other models.
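A structural sketch of the two-stage idea with linear least squares standing in for the FFNNs (all shapes and the synthetic data are placeholders, not the paper's features or corpus):

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 200, 10
feats = rng.normal(size=(n, d))                      # per-syllable constraint features
tilt = feats @ rng.normal(size=(d, 3)) + 0.1 * rng.normal(size=(n, 3))
f0 = feats @ rng.normal(size=d) + tilt @ rng.normal(size=3) + 0.1 * rng.normal(size=n)

# Stage 1: predict tilt parameters from the constraints.
A1, *_ = np.linalg.lstsq(feats, tilt, rcond=None)
tilt_hat = feats @ A1

# Stage 2: predict F0 from the constraints plus predicted tilt.
X2 = np.hstack([feats, tilt_hat])
A2, *_ = np.linalg.lstsq(X2, f0, rcond=None)
f0_hat = X2 @ A2

err = f0_hat - f0                                    # the paper's objective measures
print(err.mean(), err.std(), np.corrcoef(f0_hat, f0)[0, 1])
```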

5.
Inverting feedforward neural networks using linear and nonlinear programming
The problem of inverting trained feedforward neural networks is to find the inputs that yield a given output; in general, this is an ill-posed problem. We present a method for dealing with the inverse problem using mathematical programming techniques. The principal idea behind the method is to formulate the inverse problem as a nonlinear programming problem, a separable programming (SP) problem, or a linear programming problem, according to the architecture of the network to be inverted or the type of network inversion to be computed. An important advantage of the method over the existing iterative inversion algorithm is that various designated network inversions of multilayer perceptrons and radial basis function neural networks can be obtained by solving the corresponding SP problems, which can be solved by a modified simplex method. We present several examples to demonstrate the proposed method and applications of network inversions to examining and improving the generalization performance of trained networks. The results show the effectiveness of the proposed method.
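For contrast, here is a sketch of the iterative inversion baseline the abstract compares against: gradient descent on the input of a (here randomly weighted) network toward a designated output. The paper's contribution is to replace this with mathematical programming formulations, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
W1 = rng.normal(size=(3, 8)); b1 = rng.normal(size=8)
W2 = rng.normal(size=(8, 2)); b2 = rng.normal(size=2)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2, h

y_target = np.array([0.3, -0.5])             # designated network output
x = np.zeros(3)                              # initial guess for the input
for _ in range(10000):
    y, h = forward(x)
    dh = ((y - y_target) @ W2.T) * (1.0 - h**2)   # backprop to the input, not weights
    x -= 0.02 * (dh @ W1.T)                  # descend on the output mismatch

print(forward(x)[0], y_target)               # may only reach a local solution
```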

6.
The problem of training feedforward neural networks is considered, and new algorithms are proposed to solve it. They are based on an asymptotic analysis of the extended Kalman filter (EKF) and on a separable network structure. Linear weights are interpreted as diffusion random variables with zero expectation and a covariance matrix proportional to an arbitrarily large parameter λ. Asymptotic expressions for the EKF are derived as λ→∞; these are called diffusion learning algorithms (DLAs). It is shown that they are robust with respect to the accumulation of rounding errors, in contrast to their prototype, the EKF with a large but finite λ, and that, under certain simplifying assumptions, an extreme learning machine (ELM) algorithm can be obtained from a DLA. A numerical example shows that the accuracy of a DLA may be higher than that of an ELM algorithm.
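A minimal sketch of the ELM-style training that the abstract relates to the diffusion algorithms: hidden weights are drawn at random and only the linear output weights are fitted (the target function and sizes are toy choices):

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(-2, 2, 50)[:, None]
y = np.sinc(x).ravel()

W = rng.normal(size=(1, 30)); b = rng.normal(size=30)
H = np.tanh(x @ W + b)                      # random, untrained hidden features

v, *_ = np.linalg.lstsq(H, y, rcond=None)   # fit only the linear output weights
print(np.abs(H @ v - y).max())              # worst-case training error
```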

7.
This paper proposes a mathematical model of second-order rational multilayer feedforward neural networks. The idea of rational multilayer neural networks originates from rational approximation in function approximation theory. The rational feedforward neural network model is a generalization of the traditional feedforward neural network model and can solve function approximation problems effectively. A learning algorithm for rational multilayer neural networks, namely an error back-propagation learning algorithm, is given. In terms of computational complexity, this learning algorithm is of the same order as the traditional multilayer back-propagation algorithm. Two application examples, function approximation and pattern recognition, are also given; the experimental results show that second-order rational multilayer neural networks are effective on these traditional problems.
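A sketch of what a second-order rational unit might compute, on one reading of the abstract (the paper's exact parameterization may differ): the output is a ratio of two quadratic forms of the inputs, with the denominator kept positive.

```python
import numpy as np

rng = np.random.default_rng(6)
d = 3
A_num = rng.normal(size=(d, d)); b_num = rng.normal(size=d); c_num = 1.0
A_den = np.eye(d)                            # identity keeps the denominator >= 1
b_den = np.zeros(d); c_den = 1.0

def rational_unit(x):
    num = x @ A_num @ x + b_num @ x + c_num  # second-order numerator
    den = x @ A_den @ x + b_den @ x + c_den  # second-order, strictly positive denominator
    return num / den

print(rational_unit(rng.normal(size=d)))
```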

8.
Regularization parameter estimation for feedforward neural networks
Under the framework of the Kullback-Leibler (KL) distance, we show that a particular case of the Gaussian probability function for feedforward neural networks (NNs) reduces to the first-order Tikhonov regularizer, with the smoothing parameter in kernel density estimation playing the role of the regularization parameter. Under some approximations, a formula is derived for estimating regularization parameters from training data sets. The similarities and differences between the obtained results and other work are discussed. Experimental results show that the estimation formula works well for sparse and small training samples.
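To make the first-order Tikhonov regularizer concrete, here is a sketch that adds a penalty on the network's input-output derivative to the data misfit; lam plays the role of the regularization parameter the paper estimates, and the estimation formula itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)
W1 = rng.normal(size=(1, 10)); b1 = rng.normal(size=10)
W2 = rng.normal(size=(10, 1))

def loss(x, y, lam):
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2
    misfit = np.mean((pred - y) ** 2)
    dfdx = ((1.0 - h**2) * W1) @ W2          # df/dx at each sample (tanh hidden layer)
    penalty = np.mean(dfdx ** 2)             # first-order Tikhonov term
    return misfit + lam * penalty

x = np.linspace(-1, 1, 20)[:, None]
y = np.sin(np.pi * x)
print(loss(x, y, lam=0.1))
```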

9.
Generalization and selection of examples in feedforward neural networks
Franco L, Cannas SA. Neural Computation, 2000, 12(10):2405-2426
In this work, we study how the selection of examples affects the learning procedure in a Boolean neural network, and its relationship to the complexity of the function under study and to the architecture. We analyze the generalization capacity for different target functions with particular architectures through an analytical calculation of the minimum number of examples needed to obtain full generalization (i.e., zero generalization error). Analysis of the training sets associated with this parameter leads us to propose a general, architecture-independent criterion for selecting training examples. The criterion was checked through numerical simulations for various target functions with particular architectures, as well as for random target functions in a nonoverlapping receptive-field perceptron. In all cases, the selection criterion led to an improvement in generalization capacity compared with pure random sampling. We also show that for the parity problem, one of the problems most used for testing learning algorithms, only the use of the whole set of examples ensures global learning in a depth-two architecture. This difficulty can be overcome by considering a tree-structured network of depth 2 log2(N) - 1.
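The tree-structured escape route for parity can be sketched directly: pairwise XOR units arranged in a binary tree compute N-bit parity in log2(N) XOR stages, which threshold-unit implementations realize with depth about 2 log2(N) - 1, as the abstract notes.

```python
def xor_layer(bits):
    # one tree level: XOR adjacent pairs
    return [a ^ b for a, b in zip(bits[0::2], bits[1::2])]

def tree_parity(bits):
    while len(bits) > 1:
        bits = xor_layer(bits)
    return bits[0]

x = [1, 0, 1, 1, 0, 0, 1, 0]                 # N = 8 input bits
assert tree_parity(x) == sum(x) % 2          # parity via a depth-log2(N) XOR tree
```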

10.
In this paper we examine a technique by which fault tolerance can be embedded into a feedforward network, yielding a network tolerant to the loss of a node and its associated weights. The fault-tolerance problem for a feedforward network is formulated as a constrained minimax optimization problem, and two different methods are used to solve it. In the first method, the constrained minimax problem is converted into a sequence of unconstrained least-squares optimization problems whose solutions converge to the solution of the original minimax problem. An efficient gradient-based minimization technique, specially tailored for nonlinear least-squares optimization, is then applied to perform the unconstrained minimization at each step of the sequence, with several modifications made to the basic algorithm to improve its speed of convergence. The second method converts the problem into a single unconstrained minimization problem whose solution very nearly equals that of the original minimax problem. Networks synthesized using these methods, though not always fully fault tolerant, exhibit an acceptable degree of partial fault tolerance.
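The inner maximization of the minimax formulation is easy to sketch: evaluate the training loss under every single-hidden-node failure and take the worst case (random weights stand in for a network being trained; the outer minimization over weights is the paper's subject).

```python
import numpy as np

rng = np.random.default_rng(8)
x = np.linspace(-1, 1, 30)[:, None]
y = np.cos(np.pi * x)
W1 = rng.normal(size=(1, 12)); b1 = rng.normal(size=12)
W2 = rng.normal(size=(12, 1))

def mse(mask):
    h = np.tanh(x @ W1 + b1) * mask          # mask zeroes the failed node
    return np.mean((h @ W2 - y) ** 2)

faults = [np.where(np.arange(12) == k, 0.0, 1.0) for k in range(12)]
worst = max(mse(m) for m in faults)          # worst single-node failure
print(worst, mse(np.ones(12)))               # vs. the fault-free loss
```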

11.
Sensitivity of feedforward neural networks to weight errors
An analysis is made of the sensitivity of feedforward layered networks of Adaline elements (threshold logic units) to weight errors. An approximation is derived that expresses the probability of error for an output neuron of a large network (one with many neurons per layer) as a function of the percentage change in the weights. As would be expected, the probability of error increases with the number of layers in the network and with the percentage change in the weights. The probability of error is essentially independent of the number of weights per neuron and of the number of neurons per layer, as long as these numbers are large (on the order of 100 or more).
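A Monte Carlo version of the analyzed quantity is straightforward to sketch: jitter every weight of a threshold unit by a fixed percentage and estimate the probability that its output flips (the sizes and the 5% error level are illustrative, not the paper's).

```python
import numpy as np

rng = np.random.default_rng(9)
W = rng.normal(size=(20, 1))                 # one threshold output unit, 20 inputs
X = np.sign(rng.normal(size=(2000, 20)))     # random bipolar input patterns

def outputs(weights):
    return np.sign(X @ weights)

nominal = outputs(W)
pct = 0.05                                   # 5% weight error
flips = []
for _ in range(100):
    Wp = W * (1.0 + pct * rng.choice([-1, 1], size=W.shape))
    flips.append(np.mean(outputs(Wp) != nominal))
print(np.mean(flips))                        # estimated probability of error
```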

12.
The affine transformation, which consists of rotation, translation, scaling, and shearing, can be considered an approximation to the perspective transformation. It is therefore important in many applications to find an effective means of establishing point correspondences under affine transformation. In this paper, we treat the point correspondence problem as a subgraph matching problem and develop an energy formulation for affine-invariant matching by a Hopfield-type neural network. A fourth-order network is investigated first; its order is then reduced by incorporating neighborhood information in the data. We can thus use a second-order Hopfield network to perform subgraph isomorphism invariant to affine transformation, which can be applied to affine-invariant shape recognition. Experimental results show the effectiveness and efficiency of the proposed method.
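A generic sketch of the Hopfield mechanism the matching relies on (not the paper's fourth- or second-order matching energy): asynchronous updates of binary units never increase the quadratic energy E = -½ sᵀWs - bᵀs, so the network settles into an energy minimum encoding the solution.

```python
import numpy as np

rng = np.random.default_rng(15)
n = 12
W = rng.normal(size=(n, n)); W = (W + W.T) / 2.0   # symmetric weights
np.fill_diagonal(W, 0.0)
b = rng.normal(size=n)
s = rng.choice([-1.0, 1.0], size=n)                # random initial state

def energy(s):
    return -0.5 * s @ W @ s - b @ s

for _ in range(20):                                # asynchronous sweeps
    for i in rng.permutation(n):
        e0 = energy(s)
        s[i] = 1.0 if (W[i] @ s + b[i]) >= 0 else -1.0
        assert energy(s) <= e0 + 1e-12             # energy is non-increasing
print(energy(s))
```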

13.
Over the last ten years, various interpretative methods have been proposed for analyzing the effect, or importance, of input variables on the output of a feedforward neural network. These methods fall into two groups: analyses based on the magnitudes of weights, and sensitivity analyses. However, as described throughout this study, these methods have a number of limitations. We have defined and validated a new method, called Numeric Sensitivity Analysis (NSA), that overcomes these limitations and proves to be the procedure that, in general terms, best describes the effect or importance of the input variables on the output, independently of the nature (quantitative or discrete) of the variables involved. The interpretative methods used in this study are implemented in the software program Sensitivity Neural Network 1.0, created by our team.
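A sketch in the spirit of a numeric sensitivity analysis (not the NSA procedure itself, whose details are in the paper): perturb one input variable at a time over the data and score it by the mean absolute change in the network output.

```python
import numpy as np

rng = np.random.default_rng(10)
W1 = rng.normal(size=(4, 8)); b1 = rng.normal(size=8)
W2 = rng.normal(size=(8, 1))
X = rng.normal(size=(100, 4))                # data set with 4 input variables

def net(X):
    return np.tanh(X @ W1 + b1) @ W2

base = net(X)
for j in range(4):
    Xp = X.copy()
    Xp[:, j] += 0.1 * X[:, j].std()          # small numeric perturbation of input j
    score = np.mean(np.abs(net(Xp) - base))  # importance score for input j
    print(f"input {j}: sensitivity {score:.4f}")
```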

14.
Adaptation algorithms for 2-D feedforward neural networks
The generalized weight-adaptation algorithms presented by J.G. Kuschewski et al. (1993) and by S.H. Zak and H.J. Sira-Ramirez (1990) are extended to 2-D Madaline and 2-D two-layer feedforward neural networks (FNNs).

15.
An iterative pruning algorithm for feedforward neural networks
The problem of determining the proper size of an artificial neural network is recognized to be crucial, especially for its practical implications for such important issues as learning and generalization. One popular approach to this problem, commonly known as pruning, consists of training a larger-than-necessary network and then removing unnecessary weights or nodes. In this paper, a new pruning method is developed, based on the idea of iteratively eliminating units and adjusting the remaining weights so that network performance does not worsen over the entire training set. The pruning problem is formulated in terms of solving a system of linear equations, and a very efficient conjugate gradient algorithm is used to solve it in the least-squares sense. The algorithm also provides a simple criterion for choosing the units to be removed, which has proved to work well in practice. Results obtained on various test problems demonstrate the effectiveness of the proposed approach.
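The core least-squares step is easy to sketch: delete one hidden unit and re-fit the remaining output weights so the network's outputs over the training set change as little as possible (the paper solves this linear system with a conjugate gradient method; plain lstsq is used here).

```python
import numpy as np

rng = np.random.default_rng(11)
x = np.linspace(-1, 1, 40)[:, None]
W1 = rng.normal(size=(1, 10)); b1 = rng.normal(size=10)
W2 = rng.normal(size=(10, 1))

H = np.tanh(x @ W1 + b1)                     # hidden activations on the training set
y_full = H @ W2                              # outputs of the unpruned network

k = 3                                        # unit chosen for removal
keep = [j for j in range(10) if j != k]
Hk = H[:, keep]
W2_new, *_ = np.linalg.lstsq(Hk, y_full, rcond=None)  # least-squares weight adjustment
print(np.abs(Hk @ W2_new - y_full).max())    # residual change in network outputs
```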

16.
Multilayer feedforward small-world neural networks and their function approximation
Drawing on results from complex network research, this paper investigates a network model whose structure lies between regularly and randomly connected neural networks: the multilayer feedforward small-world neural network. First, the connections of a regular multilayer feedforward neural network are rewired with rewiring probability p to construct the new model; analysis of its characteristic parameters shows that for 0 < p < 1 the network differs from the Watts-Strogatz model in its clustering coefficient. The network is then described by a six-tuple model. Finally, small-world neural networks with different values of p are applied to function approximation. Simulation results show that the network achieves the best approximation performance at p = 0.1, and comparative convergence experiments show that at this value the network outperforms regular and random networks of the same size in convergence behavior and approximation speed.
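A sketch of the rewiring step under an assumed edge-list representation (the paper's six-tuple model is not reproduced): each regular feedforward connection is, with probability p, re-aimed at a random unit in some later layer, creating small-world shortcuts.

```python
import numpy as np

rng = np.random.default_rng(12)
sizes = [4, 8, 8, 1]
p = 0.1
offsets = np.cumsum([0] + sizes)             # unit index ranges per layer

# Regular adjacency: every unit feeds the whole next layer.
edges = []
for l in range(len(sizes) - 1):
    for i in range(offsets[l], offsets[l + 1]):
        for j in range(offsets[l + 1], offsets[l + 2]):
            edges.append((i, j))

rewired = []
for (i, j) in edges:
    if rng.random() < p:
        # re-aim the edge at any unit in a strictly later layer
        src_layer = np.searchsorted(offsets, i, side="right") - 1
        j = rng.integers(offsets[src_layer + 1], offsets[-1])
    rewired.append((i, j))

print(len(rewired), "edges,", sum(a != b for a, b in zip(edges, rewired)), "rewired")
```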

17.
18.
A training algorithm for binary feedforward neural networks
The authors present a new training algorithm to be used on a four-layer perceptron-type feedforward neural network for generating binary-to-binary mappings. This algorithm, called the Boolean-like training algorithm (BLTA), is derived from original principles of Boolean algebra followed by selected extensions, and it can be implemented in analog hardware using a four-layer binary feedforward neural network (BFNN). The BLTA does not constitute a traditional circuit-building technique: the rules that govern it allow for generalization of data in the face of incompletely specified Boolean functions. Compared with techniques that employ descent methods, training times are greatly reduced. Moreover, when the BFNN is used in conjunction with A/D converters, the applicability of the present algorithm extends to real-valued inputs.
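For a feel of the target class, here is a hand-built (not BLTA-trained) binary feedforward network of threshold units realizing a binary-to-binary mapping, XOR:

```python
import numpy as np

step = lambda a: (a >= 0).astype(int)        # threshold unit

W1 = np.array([[1.0, -1.0],
               [1.0, -1.0]])
b1 = np.array([-1.0, 1.5])                   # hidden units: OR and NAND
W2 = np.array([[1.0], [1.0]])
b2 = np.array([-1.5])                        # output unit: AND

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
h = step(X @ W1 + b1)
y = step(h @ W2 + b2)
print(y.ravel())                             # [0 1 1 0] = XOR
```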

19.
A new technique for facial expression recognition is proposed, which uses the two-dimensional (2D) discrete cosine transform (DCT) over the entire face image as a feature detector and a constructive one-hidden-layer feedforward neural network as the facial expression classifier. An input-side pruning technique, proposed previously by the authors, is also incorporated into the constructive learning process to reduce the network size without sacrificing the performance of the resulting network. The proposed technique is applied to a database of images of 60 men, each with five facial expression images (neutral, smile, anger, sadness, and surprise). Images of 40 men are used for network training, and the remaining images of 20 men are used for generalization and testing. Confusion matrices calculated in both network training and generalization for four facial expressions (smile, anger, sadness, and surprise) are used to evaluate the performance of the trained network. The best recognition rates are 100% and 93.75% (without rejection) for the training and generalization images, respectively. Furthermore, the input-side weights of the constructed network are reduced by approximately 30% by our pruning method. In comparison with fixed-structure, backpropagation-based recognition methods in the literature, the proposed technique constructs a one-hidden-layer feedforward neural network with fewer hidden units and weights, while simultaneously providing improved generalization and recognition performance.
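The feature-detector half of the pipeline is simple to sketch with scipy; keeping an 8x8 block of low-frequency coefficients is an assumption here, not the paper's setting:

```python
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(13)
face = rng.random((64, 64))                  # stand-in for a face image
coeffs = dctn(face, norm="ortho")            # 2D discrete cosine transform
features = coeffs[:8, :8].ravel()            # keep 64 low-frequency coefficients
print(features.shape)                        # feature vector fed to the classifier
```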

20.
This paper presents a function approximation for a general class of polynomials using one-hidden-layer feedforward neural networks (FNNs). The approximation of both algebraic and trigonometric polynomial functions is discussed in detail. For algebraic polynomials, a one-hidden-layer FNN with a chosen number of hidden-layer nodes and corresponding weights is established by a constructive method that approximates the polynomials to a remarkably high degree of accuracy. For trigonometric polynomials, an upper bound on the approximation error is derived for the constructive FNNs. Algorithmic examples are included to confirm the accuracy of the constructive FNN method; the results show that it efficiently improves the approximation of both algebraic and trigonometric polynomials. The work is therefore of both theoretical and practical significance in constructing one-hidden-layer FNNs for approximating this class of polynomials, and it potentially paves the way for extending neural networks to approximate a general class of complicated functions, both in theory and in practice.
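As an empirical stand-in for the constructive result (a least-squares fit rather than the paper's explicit weight construction), one can watch the uniform error of a one-hidden-layer FNN on an algebraic polynomial fall as hidden nodes are added:

```python
import numpy as np

rng = np.random.default_rng(14)
x = np.linspace(-1, 1, 400)[:, None]
y = 4 * x**3 - 3 * x                         # Chebyshev polynomial T3

for n in (2, 4, 8, 16, 32):
    W = rng.normal(size=(1, n)); b = rng.normal(size=n)
    H = np.tanh(x @ W + b)                   # n hidden-layer nodes
    v, *_ = np.linalg.lstsq(H, y, rcond=None)
    print(n, "hidden nodes -> max error", np.abs(H @ v - y).max())
```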
