Similar Documents
1.
A Constructive Theory of B-Spline Neural Networks
This paper first discusses the properties of B-spline basis functions. On that basis, a constructive method is used to prove theoretically that B-spline neural networks can approximate, to arbitrary accuracy, any continuous real-valued function defined on a compact interval. Finally, a constructive algorithm is given; under a prescribed error tolerance, it builds a nearly minimal set of B-spline basis functions.
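A minimal sketch of this viewpoint (our illustration under assumed details, not the paper's construction): the hidden layer is a fixed cubic B-spline basis on [0, 1], and only the output-layer weights are fitted by least squares.

```python
import numpy as np
from scipy.interpolate import BSpline

k = 3                                   # cubic basis (assumed)
n_basis = 12                            # number of hidden "neurons" (assumed)
# Clamped uniform knot vector on [0, 1]
t = np.concatenate([np.zeros(k), np.linspace(0, 1, n_basis - k + 1), np.ones(k)])

def design_matrix(x):
    # Column j is the j-th B-spline basis function evaluated at x.
    return np.column_stack([
        BSpline(t, np.eye(n_basis)[j], k)(x) for j in range(n_basis)
    ])

# Toy target function and training data
f = lambda x: np.sin(2 * np.pi * x) + 0.3 * x
x_train = np.linspace(0, 1, 200)
H = design_matrix(x_train)
w, *_ = np.linalg.lstsq(H, f(x_train), rcond=None)   # output weights

x_test = np.linspace(0, 1, 50)
err = np.max(np.abs(design_matrix(x_test) @ w - f(x_test)))
print(f"max error: {err:.2e}")          # shrinks as n_basis grows
```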

2.
曹飞龙  李有梅  徐宗本 《软件学报》2003,14(11):1869-1874
Using a constructive method, it is proved that any Lebesgue integrable function defined on a compact set in multidimensional Euclidean space can be approximated, simultaneously with its derivatives, by a neural network with a single hidden layer. The method naturally yields a design for the hidden layer and an estimate of the convergence rate; the resulting estimate characterizes the relationship between the network's convergence rate and the number of hidden neurons, and it also generalizes existing density results under the uniform metric.

3.
Neural Networks Based on Orthogonal Polynomial Functions and a Study of Their Properties
The nonlinear approximation capability of neural networks is an active research topic. This paper proposes a construction theory for neural networks based on orthogonal polynomial functions and, on that basis, a construction method. Using the Stone-Weierstrass theorem, it is proved that such networks have the global approximation property: they can approximate any continuous function on a compact set to arbitrary accuracy. Finally, methods for selecting and evaluating these networks are proposed; the study shows that, under certain conditions, the network constructed from Chebyshev polynomials performs best.
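A hypothetical sketch of such a network (details are our assumptions, not the paper's algorithm): the hidden units are the Chebyshev polynomials T_0..T_n on [-1, 1], and the only trainable parameters are the output weights.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def fit_chebyshev_net(f, degree, n_samples=256):
    x = np.linspace(-1.0, 1.0, n_samples)
    Phi = C.chebvander(x, degree)            # hidden-layer outputs T_j(x)
    w, *_ = np.linalg.lstsq(Phi, f(x), rcond=None)
    return lambda z: C.chebvander(np.asarray(z), degree) @ w

f = lambda x: np.exp(x) * np.cos(3 * x)      # toy target
net = fit_chebyshev_net(f, degree=10)
x = np.linspace(-1, 1, 1000)
print(f"max error: {np.max(np.abs(net(x) - f(x))):.2e}")
```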

4.
Function Approximation with Artificial Neural Networks
Neural networks can be used to compute complex relationships between sets of inputs and outputs, which gives them a powerful function-approximation capability. Using Cybenko's theorem, this paper shows that a single-hidden-layer feedforward network model can, under certain conditions, approximate any function in C([0,1]^n), and it presents a simulation example of approximating a one-dimensional nonlinear function with good results.
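A minimal sketch in the spirit of Cybenko's theorem (our example, not the paper's simulation): one hidden layer of sigmoid units with random input weights and biases, output weights fitted by least squares on a 1-D target.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def fit_sigmoid_net(f, n_hidden=40, n_samples=300):
    x = np.linspace(0.0, 1.0, n_samples)[:, None]
    W = rng.normal(scale=10.0, size=(1, n_hidden))   # random input weights
    b = rng.uniform(-10.0, 10.0, size=n_hidden)      # random biases
    H = sigmoid(x @ W + b)                           # hidden activations
    c, *_ = np.linalg.lstsq(H, f(x[:, 0]), rcond=None)
    return lambda z: sigmoid(np.atleast_2d(z).T @ W + b) @ c

f = lambda x: np.sin(4 * np.pi * x) * np.exp(-x)     # toy 1-D target
net = fit_sigmoid_net(f)
x = np.linspace(0, 1, 1000)
print(f"max error: {np.max(np.abs(net(x) - f(x))):.3f}")
```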

5.
Since the traditional geometric iteration method converges only at first order, a twice-differentiable energy function is proposed to measure the discrepancy between the current curve and the target point set. First, an initial spline curve is generated from the initial control points and the corresponding basis functions; then the gradient of the energy function with respect to each control point is computed; finally, the L-BFGS algorithm is used to search quickly for the optimal interpolating or approximating curve. Experimental results show that the algorithm converges superlinearly and, at the same accuracy, is tens or even hundreds of times faster than the original geometric iteration; it can be used for interpolation as well as approximation problems, and even for the case where the data-point parameters are allowed to vary.
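A sketch of the general recipe (our simplification: fixed data parameters, a plain least-squares energy, and SciPy's L-BFGS-B, which with no bounds behaves as L-BFGS): optimize B-spline control points so the curve approximates a target point set.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import minimize

k, n_ctrl = 3, 8                                # cubic curve, 8 control points
t = np.concatenate([np.zeros(k), np.linspace(0, 1, n_ctrl - k + 1), np.ones(k)])
u = np.linspace(0, 1, 50)                       # fixed data-point parameters
target = np.column_stack([u, np.sin(2 * np.pi * u)])  # points to fit

def energy(p_flat):
    P = p_flat.reshape(n_ctrl, 2)
    curve = BSpline(t, P, k)(u)                 # curve points C(u_i)
    return np.sum((curve - target) ** 2)        # twice-differentiable energy

p0 = np.column_stack([np.linspace(0, 1, n_ctrl), np.zeros(n_ctrl)])
res = minimize(energy, p0.ravel(), method="L-BFGS-B")
print(f"final energy: {res.fun:.2e}, iterations: {res.nit}")
```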

6.
First, a quasi-subtraction operator is introduced to define the K-integral norm, and the input space of the generalized Mamdani fuzzy system is partitioned uniformly. Next, using properties of piecewise linear functions (PLFs), it is constructively proved that generalized Mamdani fuzzy systems are universal approximators in the sense of the K-integral norm, which extends the approximation capability of these fuzzy systems from spaces of continuous functions to a class of integrable functions. Finally, a simulated example illustrates the universal approximation of a given integrable function by the generalized Mamdani fuzzy system and its realization.
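A toy rendering of the piecewise-linear connection (our stand-in, much simpler than the paper's system): a one-input Mamdani-style system with uniform triangular membership functions, whose weighted-average defuzzification reduces to piecewise-linear interpolation of the rule consequents.

```python
import numpy as np

def mamdani_pl(f, n_rules=16):
    centers = np.linspace(0.0, 1.0, n_rules)        # uniform partition
    heights = f(centers)                            # rule consequents
    def infer(x):
        x = np.asarray(x, dtype=float)
        # Triangular memberships of width 1/(n_rules - 1), a partition of unity
        d = np.maximum(0.0, 1.0 - np.abs(x[:, None] - centers) * (n_rules - 1))
        return (d @ heights) / d.sum(axis=1)        # weighted average
    return infer

f = lambda x: np.sign(x - 0.5) * np.sqrt(np.abs(x - 0.5))   # integrable target
sys_ = mamdani_pl(f, n_rules=64)
x = np.linspace(0, 1, 1000)
print(f"mean |error|: {np.mean(np.abs(sys_(x) - f(x))):.3f}")
```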

7.
This paper studies the estimation of the deviation dist(SF_d, Π_{φ,n,d}) between a class SF_d of multivariate periodic Lebesgue square-integrable functions and a set Π_{φ,n,d} of single-hidden-layer neural networks (with activation function φ, n hidden neurons, and input dimension d). In particular, a lower bound on dist(SF_d, Π_{φ,n,d}) is derived using the Fourier transform and methods from approximation theory. The lower bound obtained depends only on the number of neurons in the hidden layer, not on the target function or the input dimension. The estimate further reveals the relationship between a network's approximation rate and the topological structure of its hidden layer.

8.
To address the slow convergence of BP networks and the inability of networks built on continuous orthogonal bases to approximate discontinuous functions, a class of feedforward neural networks based on the orthogonal V-system (V-basis networks) is constructed, and their convergence conditions and pseudo-inverse learning rule are studied. Since the V-system is a complete orthogonal function system in L²([0,1]) and Fourier-V series converge rapidly, V-basis networks converge rapidly and can effectively approximate a class of strongly discontinuous univariate functions.
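A sketch of the pseudo-inverse learning rule (our stand-in: the V-system is not in standard libraries, so we use the Haar system, a complete orthogonal system in L²([0,1]) that likewise handles discontinuous targets):

```python
import numpy as np

def haar_design(x, levels):
    cols = [np.ones_like(x)]                     # scaling function
    for j in range(levels):
        for k_ in range(2 ** j):
            a, m, b = k_ / 2**j, (k_ + 0.5) / 2**j, (k_ + 1) / 2**j
            h = np.where((x >= a) & (x < m), 1.0,
                np.where((x >= m) & (x < b), -1.0, 0.0)) * 2 ** (j / 2)
            cols.append(h)
    return np.column_stack(cols)

f = lambda x: np.where(x < 0.4, np.sin(6 * x), 2.0 - x)   # strong jump at 0.4
x = np.linspace(0, 1, 512, endpoint=False)
Phi = haar_design(x, levels=6)
w = np.linalg.pinv(Phi) @ f(x)                   # pseudo-inverse rule
print(f"mean |error|: {np.mean(np.abs(Phi @ w - f(x))):.3f}")
```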

9.
With the continuing development of artificial neural network (ANN) theory and techniques, the mapping capability of higher-order neural networks has been studied in increasing depth. This paper investigates, from a theoretical standpoint, the ability of higher-order neural networks to approximate functions from a finite set A ⊂ R^n to a finite set B ⊂ R, and it concludes that higher-order neural networks can realize the approximation of any function from a finite set to a finite set.
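A small sketch of the idea (our illustration, with an assumed second-order architecture): a higher-order unit computes a weighted sum of monomial products of its inputs, and on a finite set the weights can be solved for directly, realizing an arbitrary map exactly when the feature matrix has full row rank.

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(X, degree=2):
    n = X.shape[1]
    cols = [np.ones(len(X))]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(n), d):
            cols.append(np.prod(X[:, list(idx)], axis=1))  # monomial products
    return np.column_stack(cols)

A = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.], [.5, .5]])  # finite set
y = np.array([1., -2., 3., 0.5, 7.])                              # arbitrary values
Phi = poly_features(A, degree=2)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(np.allclose(Phi @ w, y))    # True: the finite map is realized exactly
```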

10.
The paper gives several strong results on neural network representation in an explicit form. Under very mild conditions, a functional defined on a compact set in C[a, b] or L^p[a, b], spaces of infinite dimension, can be approximated arbitrarily well by a neural network with one hidden layer. The results are a significant development beyond earlier work, where theorems on approximating continuous functions defined on a finite-dimensional real space by neural networks with one hidden layer were given. All the results are shown to be applicable to the approximation of the output of dynamic systems at any particular time.
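A toy rendering of the setting (our construction, not the paper's operator): approximate the functional F(u) = ∫₀¹ u(t)² dt by a one-hidden-layer sigmoid network whose input is u sampled at m grid points, trained on random polynomial inputs.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n_hidden = 20, 60
t = np.linspace(0, 1, m)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def sample(n):                        # random cubic inputs u and targets F(u)
    coef = rng.normal(size=(n, 4))
    U = np.stack([np.polyval(c, t) for c in coef])
    return U, np.mean(U**2, axis=1)   # Riemann approximation of the integral

U, F = sample(2000)
W = rng.normal(scale=0.1, size=(m, n_hidden))   # random hidden-layer weights
b = rng.normal(size=n_hidden)
c, *_ = np.linalg.lstsq(sigmoid(U @ W + b), F, rcond=None)

U_test, F_test = sample(200)
pred = sigmoid(U_test @ W + b) @ c
print(f"mean |error|: {np.mean(np.abs(pred - F_test)):.3f}")
```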

11.
Neural networks are widely used in many applications, including astronomical physics, image processing, recognition, robotics, and automated target tracking. Their ability to approximate arbitrary functions is the main reason for this popularity. In this paper, we discuss constructive approximation on the whole real line by neural networks with a sigmoidal activation function and a fixed weight. Using the convolution method, we show neural network approximation with a fixed weight to a continuous function on a compact interval. We also present computational work that shows good agreement with the theory.
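A sketch of a classic fixed-weight construction (our rendering, not the paper's exact operator): approximate f on [0, 1] by a staircase of sigmoids sharing one fixed large weight w, centered between grid nodes.

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def sigmoid_staircase(f, n=50, w=500.0):
    nodes = np.linspace(0.0, 1.0, n + 1)
    jumps = np.diff(f(nodes))                  # step heights f(x_i) - f(x_{i-1})
    mids = 0.5 * (nodes[:-1] + nodes[1:])      # sigmoid centers
    def g(x):
        x = np.asarray(x, dtype=float)
        return f(nodes[0]) + sigmoid(w * (x[:, None] - mids)) @ jumps
    return g

f = lambda x: np.cos(3 * x) + 0.5 * x**2       # toy continuous target
g = sigmoid_staircase(f, n=200, w=2000.0)
x = np.linspace(0, 1, 1000)
print(f"max error: {np.max(np.abs(g(x) - f(x))):.3f}")
```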

12.
This paper presents a function approximation to a general class of polynomials by using one-hidden-layer feedforward neural networks (FNNs). The approximations of both algebraic and trigonometric polynomial functions are discussed in detail. For algebraic polynomials, a one-hidden-layer FNN with a chosen number of hidden-layer nodes and corresponding weights is established by a constructive method to approximate the polynomials to a remarkably high degree of accuracy. For trigonometric polynomials, an upper bound on the approximation error is then derived for the constructive FNNs. In addition, algorithmic examples are included to confirm the accuracy of the constructive FNN method. The results show that it efficiently improves the approximation of both algebraic and trigonometric polynomials. Consequently, the work is of both theoretical and practical significance for constructing one-hidden-layer FNNs that approximate this class of polynomials, and it potentially paves the way for extending neural networks to approximate general classes of complicated functions in both theory and practice.

13.
In this paper, it is shown that four-layer regular fuzzy neural networks (RFNNs) can serve as universal approximators for sendograph-metric-continuous fuzzy-valued functions. The proof is constructive: we propose a principled method for designing a four-layer RFNN that approximates a given target function. In previous work, a step function was used as the activation function; to improve the approximation accuracy, in the present work we also consider a semi-linear sigmoidal activation function. We then show how to design the RFNNs when the activation function is the semi-linear sigmoidal function and the step function, respectively. After analyzing the approximation accuracy of these two classes of RFNNs, we find that the former performs much better than the latter. This conclusion also holds when the target functions satisfy other types of continuity, so the results in this paper can be used to improve related work as well. Finally, we give a simulation example to validate the theoretical results.

14.
In this letter, the capabilities of feedforward neural networks (FNNs) in realizing and approximating functions of the form g: R^l → A, which partition the R^l space into polyhedral sets, each assigned to one of the c classes of A, are investigated. More specifically, a constructive proof is given that FNNs consisting of nodes with sigmoid output functions are capable of approximating any such function g with arbitrary accuracy. The capabilities of FNNs consisting of nodes having the hard limiter as the output function are also reviewed. In both cases, the two-class as well as the multiclass settings are considered.

15.
The learning capability of neural networks is equivalent to modeling physical events that occur in a real environment. Several early works demonstrated that neural networks belonging to certain classes are universal approximators of deterministic input-output functions. Recent works extend this ability to the approximation of random functions, using a class of networks named stochastic neural networks (SNNs). In the language of system theory, the approximation of both deterministic and stochastic functions falls within the identification of memoryless nonlinear systems. However, all results presented so far are restricted to Gaussian stochastic processes (SPs), or to linear transformations that guarantee this property. This paper investigates the ability of stochastic neural networks to approximate nonlinear input-output random transformations, thus widening the range of applicability of these networks to nonlinear systems with memory. In particular, this study shows that networks belonging to a class named non-Gaussian stochastic approximate identity neural networks (SAINNs) are capable of approximating the solutions of large classes of nonlinear random ordinary differential transformations. The effectiveness of this approach is demonstrated and discussed through several application examples.

16.
Many numerical algorithms derived from classical function-approximation theory share common drawbacks: heavy computational cost, poor adaptability, and strict requirements on models and data, which limit their practical application. Neural networks can be used to compute the relationship between complex inputs and outputs, and therefore possess strong function-approximation capability. This paper presents the structure and learning procedure of the radial basis function neural network (RBFNN), focusing on its applications to function approximation, the solution of systems of nonlinear equations, and the interpolation of scattered data. Numerical examples built with the MATLAB Neural Network Toolbox are given and compared with BP networks. The results show that the RBFNN is a powerful tool for numerical computation; compared with traditional methods it is simple to program and practical, and packaging it as software would be of real value in engineering and scientific research.
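A small stand-in for the paper's MATLAB toolbox experiments (our example, in Python): RBF interpolation of scattered 2-D data with a Gaussian kernel.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(200, 2))                   # scattered sites
f = lambda p: np.sin(np.pi * p[:, 0]) * np.cos(np.pi * p[:, 1])
rbf = RBFInterpolator(pts, f(pts), kernel="gaussian", epsilon=3.0)

# Evaluate on a regular grid and report the worst-case error
grid = np.stack(np.meshgrid(np.linspace(-1, 1, 50),
                            np.linspace(-1, 1, 50)), axis=-1).reshape(-1, 2)
print(f"max error on grid: {np.max(np.abs(rbf(grid) - f(grid))):.3f}")
```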

17.
A Three-Layer Feedforward Neural Network Approach to Approximating Multivariate Polynomial Functions
This paper first proves by a constructive method that for any multivariate polynomial of order r there exists a three-layer feedforward neural network, with explicitly determined weights and number of hidden units, that approximates the polynomial to arbitrary accuracy. The weights are determined by the coefficients of the given polynomial and by the activation function, while the number of hidden units is determined by r and the input dimension. An algorithm and worked examples are given, showing that the constructed networks approximate multivariate polynomial functions very efficiently. Specialized to univariate polynomials, the results here are simpler and more efficient than the networks and algorithms proposed by Cao Feilong et al. The results are of theoretical and practical significance for the construction of feedforward networks that approximate classes of multivariate polynomials, and they provide one route toward a constructive theory and methodology for neural network approximation of arbitrary functions.
