Similar Literature
Found 17 similar documents (search time: 140 ms)
1.
Constructing feedforward neural networks to approximate polynomial functions   Cited by: 1 (self: 0, others: 1)
We first give a constructive proof that, for any multivariate polynomial of degree n, a three-layer feedforward neural network can be constructed that approximates the polynomial to arbitrary accuracy, with the number of hidden-layer nodes depending only on the dimension d and the degree n of the polynomial. We then present a concrete algorithm that realizes this approximation. Finally, two numerical examples further verify the theoretical results. These results provide guidance on the concrete construction of neural networks for approximating multivariate polynomials and on methods for realizing such approximations.
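The divided-difference idea that usually underlies such constructive proofs can be illustrated with a minimal sketch (our own illustration under assumed details, not necessarily this paper's exact construction): a degree-n monomial is recovered from n+1 sigmoid hidden nodes, shown here for n = 2.

```python
import numpy as np
from math import comb

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def d2_sigmoid(t):
    s = sigmoid(t)
    return s * (1.0 - s) * (1.0 - 2.0 * s)   # closed form for sigma''

def x_squared_via_fnn(x, theta=0.5, h=1e-2):
    # Three hidden nodes sigma(theta + k*h*x), k = 0, 1, 2; their
    # second-order forward difference equals (h*x)**2 * sigma''(xi)
    # for some xi near theta, so dividing recovers x**2 approximately.
    acc = sum((-1) ** (2 - k) * comb(2, k) * sigmoid(theta + k * h * x)
              for k in range(3))
    return acc / (h ** 2 * d2_sigmoid(theta))

print(x_squared_via_fnn(2.0))   # ~4.0
print(x_squared_via_fnn(-1.5))  # ~2.25
```

The same pattern with n+1 nodes and an n-th forward difference yields x**n, which is why the hidden-layer size in such constructions scales with the polynomial's degree and dimension only.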

2.
Neural network approximation of polynomial functions: network construction and approximation algorithms   Cited by: 2 (self: 1, others: 2)
The authors first prove constructively that, for a given polynomial of degree r, a three-layer feedforward neural network can be explicitly constructed to approximate the polynomial to arbitrary accuracy; the number of hidden-layer nodes of the constructed network depends only on the degree r of the polynomial and the number of network inputs s, and can be expressed exactly in terms of r. A concrete algorithm realizing this approximation is then given, and two numerical examples further verify the theoretical results. These results provide important guidance on the concrete construction of feedforward networks for approximating the class of polynomial functions and on methods for realizing the approximation.

3.
A constructive proof shows that, for a given polynomial of degree r, a three-layer functional network can be explicitly constructed to approximate the polynomial to arbitrary accuracy; the number of intermediate neurons of the constructed network depends only on the degree r of the polynomial basis functions and can be expressed in terms of r. These results provide theoretical guidance for the concrete construction of functional networks based on polynomial basis functions to approximate arbitrary function classes.

4.
Concerning the optimization of algorithms for polynomial functions, artificial neural networks are an important tool for function approximation. However, traditional learning-based neural networks have inherent drawbacks: they are highly sensitive to the initial weights and easily converge to local minima; convergence is slow or may fail altogether; they suffer from overfitting and overtraining; and the number of hidden nodes is hard to determine. To address these problems, a three-layer functional network for polynomial functions and an approximation algorithm are proposed, together with a rule for determining the number of computing units in the hidden layer. The proposed algorithm can approximate polynomial functions to arbitrary accuracy, converges quickly, performs well, and overcomes the shortcomings of conventional artificial neural networks. Finally, two numerical examples further verify the correctness of the algorithm.

5.
Weights and structure determination of two-input power-activation feedforward neural networks   Cited by: 1 (self: 0, others: 1)
Based on multivariate function approximation and bivariate power-series expansion theory, a two-input power-activation feedforward neural network model is constructed whose hidden-neuron activation functions form a sequence of bivariate power functions. Building on this model, a weights-and-structure-determination algorithm is proposed, based on the weights-direct-determination method and the relationship between the number of hidden neurons and the approximation error. Computer simulations and numerical experiments verify that the constructed network performs well in approximation and denoising, and that the proposed algorithm determines the network weights and the optimal structure quickly and effectively, guaranteeing the network's best approximation capability.
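A minimal sketch of the weights-direct-determination idea, under the assumption that the hidden neurons compute the bivariate powers x1**i * x2**j with i + j <= n (the helper names are ours): the output weights follow in one shot from linear least squares, with no iterative training, and the structure step grows n until a target accuracy is met.

```python
import numpy as np
from itertools import product

def design_matrix(X, n):
    # X: (m, 2) samples; columns are the hidden outputs x1**i * x2**j
    return np.column_stack([X[:, 0] ** i * X[:, 1] ** j
                            for i, j in product(range(n + 1), repeat=2)
                            if i + j <= n])

def direct_weights(X, y, n):
    # one-shot least-squares output weights (pseudoinverse solution)
    w, *_ = np.linalg.lstsq(design_matrix(X, n), y, rcond=None)
    return w

def choose_degree(X, y, tol=1e-9, max_n=10):
    # grow the hidden layer until the approximation error is small enough
    for n in range(1, max_n + 1):
        w = direct_weights(X, y, n)
        if np.max(np.abs(design_matrix(X, n) @ w - y)) < tol:
            return n, w
    return max_n, w

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = 1 + 2 * X[:, 0] * X[:, 1] - X[:, 1] ** 2   # an exact degree-2 target
n, w = choose_degree(X, y)                      # finds n == 2
```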

6.
To overcome the inherent drawbacks of the BP neural network model and its learning algorithms, a multi-input feedforward neural network model that uses Laguerre orthogonal polynomials as the hidden-layer activation functions is constructed on the basis of polynomial interpolation and approximation theory. For this network model, a weights-and-structure-determination method is proposed to determine the optimal weights and optimal structure of the network quickly and automatically. Computer simulations and experimental results show that the algorithm is effective and that the network obtained by it has good approximation performance and denoising capability.
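For reference, the Laguerre basis used as hidden activations can be generated with the classical three-term recurrence; a short sketch (illustration of the general idea only, not the paper's exact formulation):

```python
import numpy as np

def laguerre_basis(x, K):
    """First K+1 Laguerre polynomials L_0..L_K evaluated at x, via
    (k+1) L_{k+1}(x) = (2k+1-x) L_k(x) - k L_{k-1}(x).
    Each column would serve as one hidden neuron's activation."""
    L = [np.ones_like(x), 1.0 - x]
    for k in range(1, K):
        L.append(((2 * k + 1 - x) * L[k] - k * L[k - 1]) / (k + 1))
    return np.column_stack(L[:K + 1])
```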

7.
Research on neural networks based on orthogonal polynomial functions and their properties   Cited by: 5 (self: 0, others: 5)
The nonlinear approximation capability of neural networks is a hot topic in neural network research. This paper proposes a construction theory for neural networks based on orthogonal polynomial functions and, on that basis, a method for constructing such networks. Using the Stone-Weierstrass theorem, it proves that these networks have the global approximation property: they can approximate any continuous function on a compact set to arbitrary accuracy. Finally, methods for selecting and evaluating such networks are proposed; the study shows that, under certain conditions, the network constructed with Chebyshev polynomials performs best.
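A sketch of the Chebyshev hidden layer favored by the paper's evaluation (illustration only; details as in the paper): the polynomials T_0..T_K follow from the recurrence T_{k+1} = 2x T_k - T_{k-1}, and by Stone-Weierstrass their span is dense in C[-1, 1], which underlies the universal-approximation claim above.

```python
import numpy as np

def chebyshev_basis(x, K):
    # First K+1 Chebyshev polynomials of the first kind evaluated at x;
    # each column plays the role of one hidden neuron's activation.
    T = [np.ones_like(x), x]
    for k in range(1, K):
        T.append(2 * x * T[k] - T[k - 1])
    return np.column_stack(T[:K + 1])
```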

8.
A polynomial-function recurrent neural network model and its applications   Cited by: 2 (self: 1, others: 2)
周永权 (Zhou Yongquan), 《计算机学报》 (Chinese Journal of Computers), 2003, 26(9): 1196-1200
Exploiting the fact that recurrent neural networks have both feedforward and feedback paths, this paper sets the activation functions of the hidden-layer neurons to an adjustable sequence of polynomial functions and proposes a new polynomial-function recurrent neural network model. The model retains the characteristics of conventional recurrent networks while offering stronger function approximation capability. A learning algorithm for the model is proposed for recursive computation problems, and the model is applied to the approximate factorization of multivariate polynomials, where the learning algorithm shows clear advantages. Example analysis shows that the algorithm is highly effective, converges quickly, and achieves high accuracy, making it suitable for recursive computation problems. The proposed model and learning algorithm provide important guidance for approximate symbolic-algebraic computation.

9.
Neural network approximation of multivariate periodic functions: estimates of the approximation order   Cited by: 9 (self: 3, others: 6)
This paper proves upper-bound estimates, lower-bound estimates, and a saturation theorem for the rate at which three-layer feedforward neural networks with trigonometric hidden units approximate multivariate periodic functions, revealing the relationship between the number of hidden units, the approximation rate, and the structure of the approximated function. In particular, it shows that the second-order modulus of smoothness gives the essential approximation order of this class of networks, and that when the target function belongs to a second-order Lipschitz class, the approximation capability of the networks is completely determined by the smoothness of the target. The paper also establishes the maximal approximation capability of these networks and a necessary and sufficient condition for attaining it. These results provide important guidance for clarifying the function approximation capability and applications of this class of networks.
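Saturation-type estimates of this kind are typically stated in the following schematic form (generic constant C; the exact norms and constants are as in the paper), where N_n denotes the network with n trigonometric hidden units and omega_2 is the second-order modulus of smoothness:

```latex
\| f - N_n f \| \;\le\; C\, \omega_2\!\left( f, \tfrac{1}{n} \right)
```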

10.
Function approximation theory and learning algorithms for feedforward algebraic neural networks   Cited by: 12 (self: 0, others: 12)
This paper generalizes the MP neuron model, defines polynomial algebraic neurons and polynomial algebraic neural networks, and incorporates polynomial algebra into algebraic neural networks. It analyzes the function approximation capability of feedforward polynomial algebraic neural networks and its theoretical basis, and designs a class of two-input, single-output four-layer feedforward polynomial algebraic neural network models; networks built from this model can approximate a given bivariate polynomial to a prescribed accuracy. A global learning algorithm for function approximation by polynomial algebraic neural networks in the P-adic sense is given, in which no local minima arise during learning. Examples show that the algorithm is effective.

11.
This paper presents a function approximation for a general class of polynomials using one-hidden-layer feedforward neural networks (FNNs). The approximation of both algebraic and trigonometric polynomial functions is discussed in detail. For algebraic polynomials, a one-hidden-layer FNN with a chosen number of hidden-layer nodes and corresponding weights is established by a constructive method to approximate the polynomials to a remarkably high degree of accuracy. For trigonometric polynomials, an upper bound on the approximation error is derived for the constructive FNNs. Algorithmic examples are included to confirm the accuracy of the constructive FNN method, and the results show that it efficiently improves the approximation of both algebraic and trigonometric polynomials. The work is thus of both theoretical and practical significance for constructing one-hidden-layer FNNs that approximate the class of polynomials, and it potentially paves the way for extending such networks to approximate more general classes of complicated functions in both theory and practice.

12.
Ridge polynomial networks   Cited by: 2 (self: 0, others: 2)
This paper presents a polynomial connectionist network called the ridge polynomial network (RPN) that can uniformly approximate any continuous function on a compact set in the multidimensional input space R^d with arbitrary accuracy. The network provides a more efficient and regular architecture than ordinary higher-order feedforward networks while maintaining their fast learning property. The RPN is a generalization of the pi-sigma network and uses a special form of ridge polynomials. It is shown that any multivariate polynomial can be represented in this form and realized by an RPN; the approximation capability of RPNs then follows from this representation theorem and the Weierstrass polynomial approximation theorem. The RPN provides a natural mechanism for incremental network growth. Simulation results on a surface-fitting problem, the classification of high-dimensional data, and the realization of a multivariate polynomial function highlight the capability of the network. In particular, a constructive learning algorithm developed for the network is shown to yield smooth generalization and steady learning.
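A sketch of the RPN forward pass (shapes and names assumed for illustration): the output is a sum of pi-sigma units, the i-th unit being the product of i ridge (affine) functions of the input, so the i-th unit contributes a degree-i polynomial in x.

```python
import numpy as np

def rpn_forward(x, W, b):
    # W[i]: (i+1, d) ridge directions; b[i]: (i+1,) offsets.
    # Each term multiplies i+1 ridge functions w.x + b together.
    return sum(np.prod(Wi @ x + bi) for Wi, bi in zip(W, b))

d = 3
rng = np.random.default_rng(0)
W = [rng.normal(size=(i + 1, d)) for i in range(4)]   # degrees 1..4
b = [rng.normal(size=i + 1) for i in range(4)]
y = rpn_forward(rng.normal(size=d), W, b)
```

Adding one more pi-sigma unit raises the representable degree by one, which is the incremental-growth mechanism the abstract refers to.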

13.
Neural networks are used in many applications such as image recognition, classification, control and system identification. However, the parameters of the identified system are embedded within the neural network architecture and are not identified explicitly. In this paper, a mathematical relationship between the network weights and the transfer function parameters is derived. Furthermore, an easy-to-follow algorithm that can estimate the transfer function models for multi-layer feedforward neural networks is proposed. These estimated models provide an insight into the system dynamics, where information such as time response, frequency response, and pole/zero locations can be calculated and analyzed. In order to validate the suitability and accuracy of the proposed algorithm, four different simulation examples are provided and analyzed for three-layer neural network models.
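One simplified way to see the weight-to-dynamics connection (our own illustration, not the paper's derivation): linearizing a three-layer network y = W2 @ tanh(W1 @ u + b1) + b2 at an operating point u0 yields a local gain matrix from which responses of the identified system can be studied.

```python
import numpy as np

def local_gain(W1, b1, W2, u0):
    # Jacobian of the network at u0; uses d tanh(t)/dt = 1 - tanh(t)**2.
    z = np.tanh(W1 @ u0 + b1)
    return W2 @ np.diag(1.0 - z ** 2) @ W1
```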

14.
This paper presents an approach to learning polynomial feedforward neural networks (PFNNs). The approach first finds the polynomial network structure by means of a population-based search technique relying on the genetic programming paradigm, and then further adjusts the weights of the best discovered network with a specially derived backpropagation algorithm for higher-order networks with polynomial activation functions. These two stages of the PFNN learning process identify networks with good training as well as generalization performance. Empirical results show that the approach finds PFNNs that considerably outperform some previous constructive polynomial network algorithms on benchmark time-series processing.

15.
The ability of feedforward neural networks to identify the number of real roots of univariate polynomials is investigated. Furthermore, their ability to determine whether a system of multivariate polynomial equations has real solutions is examined on a problem of determining the structure of a molecule. The obtained experimental results indicate that neural networks are capable of performing this task with high accuracy even when the training set is very small compared to the test set.

16.
A direct estimation method for the number of hidden-layer neurons in feedforward neural networks   Cited by: 19 (self: 0, others: 19)
李玉鉴 (Li Yujian), 《计算机学报》 (Chinese Journal of Computers), 1999, 22(11): 1204-1208
There is as yet no effective method for directly estimating the number of hidden-layer neurons in feedforward networks. This paper proposes a method that uses a monotone index to directly estimate the number of hidden-layer neurons in three-layer feedforward networks, guaranteeing that the network can approximately fit any given training data. Theoretical analysis and computational experiments show that the method can determine an optimal (minimal) or near-optimal number of hidden neurons before training, so that after training the network not only reflects the trend of the training data well but also achieves satisfactory approximation accuracy.

17.
Convex incremental extreme learning machine   Cited by: 8 (self: 2, others: 6)
Guang-Bin Huang, Lei Chen, Neurocomputing, 2007, 70(16-18): 3056
Unlike conventional neural network theories and implementations, Huang et al. [Universal approximation using incremental constructive feedforward networks with random hidden nodes, IEEE Transactions on Neural Networks 17(4) (2006) 879–892] have recently proposed a new theory showing that single-hidden-layer feedforward networks (SLFNs) with randomly generated additive or radial basis function (RBF) hidden nodes (according to any continuous sampling distribution) can work as universal approximators, and that the resulting incremental extreme learning machine (I-ELM) outperforms many popular learning algorithms. I-ELM randomly generates the hidden nodes and analytically calculates the output weights of SLFNs; however, it does not recalculate the output weights of the existing nodes when a new node is added. This paper shows that, while retaining the same simplicity, the convergence rate of I-ELM can be further improved by recalculating the output weights of the existing nodes with a convex optimization method whenever a new hidden node is randomly added. Furthermore, we show that for a type of piecewise-continuous computational hidden nodes (possibly not neuron-like), if SLFNs with adjustable hidden-node parameters can work as universal approximators, then from a function-approximation point of view the hidden-node parameters of such "generalized" SLFNs (including sigmoid networks, RBF networks, trigonometric networks, threshold networks, fuzzy inference systems, fully complex neural networks, high-order networks, ridge polynomial networks, wavelet networks, etc.) can actually be randomly generated according to any continuous sampling distribution. In theory, the parameters of these SLFNs can then be analytically determined by ELM instead of being tuned.
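A minimal I-ELM-style sketch (simplified, with names of our own choosing): hidden nodes are generated at random one at a time, and only the new node's output weight is computed analytically from the current residual; the convex variant in the paper additionally rescales the existing output weights at each step.

```python
import numpy as np

def ielm_fit(X, y, n_nodes, seed=0):
    rng = np.random.default_rng(seed)
    e = y.astype(float).copy()            # residual of the current model
    nodes = []
    for _ in range(n_nodes):
        a = rng.normal(size=X.shape[1])   # random input weights
        b = rng.normal()                  # random bias
        h = 1.0 / (1.0 + np.exp(-(X @ a + b)))   # new node's outputs
        beta = (h @ e) / (h @ h)          # analytic least-squares weight
        e -= beta * h                     # shrink the residual
        nodes.append((a, b, beta))
    return nodes, e                       # e holds the final residual
```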

