Similar documents
1.
Classification ability of single hidden layer feedforward neural networks
Multilayer perceptrons with hard-limiting (signum) activation functions can form complex decision regions. It is well known that a three-layer perceptron (two hidden layers) can form arbitrary disjoint decision regions and a two-layer perceptron (one hidden layer) can form single convex decision regions. This paper further proves that single hidden layer feedforward neural networks (SLFNs) with any continuous bounded nonconstant activation function, or any arbitrary bounded (continuous or not) activation function with unequal limits at infinities (not just perceptrons), can form disjoint decision regions with arbitrary shapes in multidimensional cases. SLFNs with some unbounded activation functions can also form disjoint decision regions with arbitrary shapes.
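A quick empirical illustration of the claim, not the paper's proof: a single hidden layer of hard-limiting (signum) nodes with randomly drawn parameters, combined by regularized least-squares output weights, learning a class that consists of two disjoint square regions in the plane. The random-node plus least-squares shortcut and the toy data are my assumptions for brevity.

import numpy as np

rng = np.random.default_rng(5)

# Two-dimensional points; class +1 is the union of two disjoint squares.
X = rng.uniform(-1, 1, size=(2000, 2))
in_a = (np.abs(X[:, 0] + 0.5) < 0.25) & (np.abs(X[:, 1] + 0.5) < 0.25)
in_b = (np.abs(X[:, 0] - 0.5) < 0.25) & (np.abs(X[:, 1] - 0.5) < 0.25)
y = np.where(in_a | in_b, 1.0, -1.0)

# Single hidden layer of hard-limiting (signum) nodes with random parameters.
m = 400
W = rng.normal(size=(2, m))
b = rng.normal(size=m)
H = np.sign(X @ W + b)

# Output weights by regularized least squares; the decision is the sign of the output.
beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(m), H.T @ y)
acc = np.mean(np.sign(H @ beta) == y)
print("training accuracy on the two-square regions:", acc)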

2.
We present a type of single-hidden-layer feed-forward wavelet neural network. First, we give a new and quantitative proof of the fact that a single-hidden-layer wavelet neural network with n + 1 hidden neurons can interpolate n + 1 distinct samples with zero error. Then, without training, we construct a wavelet neural network X_a(x, A) which can approximately interpolate, with arbitrary precision, any set of distinct data in one or several dimensions. The given wavelet neural network can uniformly approximate any continuous function of one variable.
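To make the zero-error interpolation claim concrete, the sketch below (my own illustration, not the authors' construction) builds a single-hidden-layer network with n + 1 Mexican-hat wavelet neurons with randomly chosen dilations and translations, then solves the (n + 1) x (n + 1) linear system for the output weights so the network passes through all n + 1 samples exactly.

import numpy as np

def mexican_hat(t):
    # Second derivative of a Gaussian, a common mother wavelet.
    return (1.0 - t ** 2) * np.exp(-0.5 * t ** 2)

rng = np.random.default_rng(0)

# n + 1 distinct one-dimensional samples to interpolate.
x = np.linspace(-3.0, 3.0, 11)            # n + 1 = 11 points
y = np.sin(x) + 0.3 * x                   # arbitrary target values

# One hidden wavelet neuron per sample: random dilation a_j and translation b_j.
a = rng.uniform(0.5, 2.0, size=x.size)
b = rng.uniform(-3.0, 3.0, size=x.size)

# Hidden-layer output matrix H[i, j] = psi((x_i - b_j) / a_j).
H = mexican_hat((x[:, None] - b[None, :]) / a[None, :])

# Output weights that interpolate the samples exactly; if H happens to be
# singular, redraw a and b or fall back to np.linalg.lstsq.
w = np.linalg.solve(H, y)

print("max interpolation error:", np.max(np.abs(H @ w - y)))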

3.
According to conventional neural network theories, single-hidden-layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes are universal approximators when all the parameters of the networks are allowed to be adjustable. However, as observed in most neural network implementations, tuning all the parameters of the networks may make learning complicated and inefficient, and it may be difficult to train networks with nondifferentiable activation functions such as threshold networks. Unlike conventional neural network theories, this paper proves with an incremental constructive method that, in order to let SLFNs work as universal approximators, one may simply randomly choose hidden nodes and then only needs to adjust the output weights linking the hidden layer and the output layer. In such SLFN implementations, the activation functions for additive nodes can be any bounded nonconstant piecewise continuous functions g: R → R, and the activation functions for RBF nodes can be any integrable piecewise continuous functions g: R → R with ∫_R g(x)dx ≠ 0. The proposed incremental method is efficient not only for SLFNs with continuous (including nondifferentiable) activation functions but also for SLFNs with piecewise continuous (such as threshold) activation functions. Compared to other popular methods, such a new network is fully automatic and users need not intervene in the learning process by manually tuning control parameters.
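The incremental scheme described above can be sketched in a few lines: each new hidden node is generated with random parameters, its output weight is chosen to best reduce the current residual, and the existing nodes are left untouched. The update beta = <residual, g> / ||g||^2 below follows the I-ELM-style rule implied by the abstract; the choice of sigmoid additive nodes and the toy target are my assumptions.

import numpy as np

rng = np.random.default_rng(1)

# Toy one-dimensional regression target sampled on a grid.
x = np.linspace(-1.0, 1.0, 400)
f = np.sin(3 * np.pi * x) * np.exp(-x ** 2)

residual = f.copy()
nodes = []                       # (input weight a, bias b, output weight beta)

for _ in range(200):
    # Randomly generated additive hidden node g(x) = sigmoid(a * x + b).
    a_w, b_w = rng.uniform(-10, 10), rng.uniform(-10, 10)
    g = 1.0 / (1.0 + np.exp(-(a_w * x + b_w)))

    # Output weight minimizing ||residual - beta * g||^2; old nodes are not retrained.
    beta = residual @ g / (g @ g)
    residual -= beta * g
    nodes.append((a_w, b_w, beta))

print("RMSE after 200 random nodes:", np.sqrt(np.mean(residual ** 2)))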

4.
Convex incremental extreme learning machine
Guang-Bin Huang, Lei Chen 《Neurocomputing》2007,70(16-18):3056
Unlike conventional neural network theories and implementations, Huang et al. [Universal approximation using incremental constructive feedforward networks with random hidden nodes, IEEE Transactions on Neural Networks 17(4) (2006) 879–892] have recently proposed a new theory showing that single-hidden-layer feedforward networks (SLFNs) with randomly generated additive or radial basis function (RBF) hidden nodes (according to any continuous sampling distribution) can work as universal approximators, and that the resulting incremental extreme learning machine (I-ELM) outperforms many popular learning algorithms. I-ELM randomly generates the hidden nodes and analytically calculates the output weights of SLFNs; however, it does not recalculate the output weights of the existing nodes when a new node is added. This paper shows that, while retaining the same simplicity, the convergence rate of I-ELM can be further improved by recalculating the output weights of the existing nodes based on a convex optimization method whenever a new hidden node is randomly added. Furthermore, we show that for a type of piecewise continuous computational hidden nodes (possibly not neuron-like nodes), if SLFNs can work as universal approximators with adjustable hidden node parameters, then from a function approximation point of view the hidden node parameters of such "generalized" SLFNs (including sigmoid networks, RBF networks, trigonometric networks, threshold networks, fuzzy inference systems, fully complex neural networks, high-order networks, ridge polynomial networks, wavelet networks, etc.) can actually be randomly generated according to any continuous sampling distribution. In theory, the parameters of these SLFNs can be analytically determined by ELM instead of being tuned.
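One plausible reading of the convex re-weighting idea, sketched below on the same toy setup as the previous item: when a new node g is added, the previous network output f_net and the new node are mixed as (1 - lam) * f_net + lam * g, with lam chosen by least squares along that segment, which rescales every existing output weight by (1 - lam) instead of leaving them fixed as plain I-ELM does. This is my reading of the abstract, not the paper's exact formula, and lam is left unconstrained here.

import numpy as np

rng = np.random.default_rng(2)

x = np.linspace(-1.0, 1.0, 400)
f = np.sin(3 * np.pi * x) * np.exp(-x ** 2)   # target

f_net = np.zeros_like(x)                      # current network output
betas, params = [], []

for _ in range(200):
    a_w, b_w = rng.uniform(-10, 10), rng.uniform(-10, 10)
    g = 1.0 / (1.0 + np.exp(-(a_w * x + b_w)))

    # Mixing weight minimizing ||f - ((1 - lam) * f_net + lam * g)||^2.
    d = g - f_net
    lam = float((f - f_net) @ d / (d @ d))

    # Rescale every existing output weight, then append the new node's weight.
    betas = [(1.0 - lam) * b for b in betas] + [lam]
    params.append((a_w, b_w))
    f_net = (1.0 - lam) * f_net + lam * g

print("RMSE with convex re-weighting:", np.sqrt(np.mean((f - f_net) ** 2)))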

5.
Di  Xiao-Jun  John A.   《Neurocomputing》2007,70(16-18):3019
Real-world systems usually involve both continuous and discrete input variables. However, in existing learning algorithms for both neural networks and fuzzy systems, these mixed variables are usually treated as continuous, without taking into account the special features of discrete variables. It is inefficient to represent each discrete input variable, which takes only a few fixed values, by one input neuron with full connection to the hidden layer. This paper proposes a novel hierarchical hybrid fuzzy neural network to represent systems with mixed input variables. The proposed model consists of two levels: the lower level consists of fuzzy sub-systems, each of which aggregates several discrete input variables into an intermediate variable as its output; the higher level is a neural network whose inputs consist of the continuous input variables and the intermediate variables. For systems or function approximations with mixed variables, it is shown that the proposed hierarchical hybrid fuzzy neural networks outperform standard neural networks in accuracy with fewer parameters, while providing greater transparency and preserving the universal approximation property (i.e., they can approximate any function with mixed input variables to any degree of accuracy).
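The two-level structure can be pictured with a small sketch: a lower-level sub-system maps a group of discrete inputs to one intermediate value (here a learnable lookup table standing in for a fuzzy rule base, which is my simplification), and the higher level is an ordinary single-hidden-layer network over the continuous inputs plus the intermediate value. The training shortcut (random hidden layer, least-squares output weights) and the toy data are assumptions for illustration, not the authors' algorithm.

import numpy as np

rng = np.random.default_rng(3)

# Toy data: one continuous input, two discrete inputs with 3 and 2 levels.
n = 500
xc = rng.uniform(-1, 1, n)
d1 = rng.integers(0, 3, n)
d2 = rng.integers(0, 2, n)
y = np.sin(2 * xc) + 0.5 * d1 - 0.8 * d1 * d2            # target mixes both kinds

# Lower level: one sub-system aggregates (d1, d2) into an intermediate value,
# here a table fitted by per-cell averaging of the target.
table = np.zeros((3, 2))
for i in range(3):
    for j in range(2):
        mask = (d1 == i) & (d2 == j)
        table[i, j] = y[mask].mean() if mask.any() else 0.0
z = table[d1, d2]                                         # intermediate variable

# Higher level: single-hidden-layer network on [continuous, intermediate],
# with random hidden weights and least-squares output weights.
X = np.column_stack([xc, z])
W = rng.normal(size=(2, 30))
b = rng.normal(size=30)
H = np.tanh(X @ W + b)
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

print("train RMSE:", np.sqrt(np.mean((H @ beta - y) ** 2)))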

6.
曹飞龙  李有梅  徐宗本 《软件学报》2003,14(11):1869-1874
Using a constructive method, it is proved that any Lebesgue-integrable function defined on a compact set of a multidimensional Euclidean space, together with its derivatives, can be simultaneously approximated by a neural network with a single hidden layer. The method naturally yields the design of the hidden layer and an estimate of the convergence rate; the resulting estimate characterizes the relationship between the network's convergence rate and the number of hidden neurons, and it also generalizes existing density results under the uniform metric.
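One way to see what simultaneous approximation means in practice, as an illustrative sketch of mine rather than the constructive proof of the paper: fit a single set of output weights of a single-hidden-layer network against both the function values and the derivative values at sample points, by stacking the hidden activations and their derivatives into one least-squares system.

import numpy as np

rng = np.random.default_rng(4)

# Target function and its derivative on [-1, 1].
x = np.linspace(-1.0, 1.0, 300)
f = np.sin(2 * np.pi * x)
df = 2 * np.pi * np.cos(2 * np.pi * x)

# Single hidden layer with fixed random weights a_j, b_j and tanh activation.
m = 60
a = rng.uniform(-5, 5, m)
b = rng.uniform(-5, 5, m)
H = np.tanh(np.outer(x, a) + b)           # hidden outputs
dH = a * (1.0 - H ** 2)                   # their derivatives with respect to x

# One set of output weights fitted to function values AND derivative values.
A = np.vstack([H, dH])
t = np.concatenate([f, df])
w, *_ = np.linalg.lstsq(A, t, rcond=None)

print("function error:", np.max(np.abs(H @ w - f)))
print("derivative error:", np.max(np.abs(dH @ w - df)))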

7.
In this work, an adaptive mechanism for choosing the activation function is proposed and described. Four bi-modal-derivative sigmoidal adaptive activation functions are used as the activation function at the hidden layer of a single-hidden-layer sigmoidal feedforward artificial neural network. These four bi-modal-derivative activation functions are grouped into asymmetric and anti-symmetric activation functions (two in each group). For the purpose of comparison, the logistic function (an asymmetric function) and the function obtained by subtracting 0.5 from it (an anti-symmetric function) are also used as activation functions for the hidden layer nodes. The resilient backpropagation algorithm with improved weight-tracking (iRprop+) is used to adapt the parameters of the activation functions as well as the weights and/or biases of the network. The learning tasks used to demonstrate the efficacy and efficiency of the proposed mechanism are 10 function approximation tasks and four real benchmark problems taken from the UCI machine learning repository. The obtained results demonstrate that, for both asymmetric and anti-symmetric usage, the proposed adaptive activation functions are demonstrably as good as, if not better than, the sigmoidal function without any adaptive parameter when used as the activation function of the hidden layer nodes.
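Purely illustrative, since the paper's exact activation functions are not reproduced here: one simple sigmoidal function whose derivative has two modes is the average of two logistic sigmoids shifted by an adaptable offset a, and subtracting 0.5 gives an anti-symmetric counterpart. The sketch below only defines the two variants and checks the bi-modal derivative numerically; iRprop+ training of a and the weights is not reimplemented.

import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def bimodal_asym(x, a):
    # Average of two shifted sigmoids; its derivative peaks near -a and +a.
    return 0.5 * (logistic(x - a) + logistic(x + a))

def bimodal_antisym(x, a):
    # Anti-symmetric variant: f(-x) = -f(x).
    return bimodal_asym(x, a) - 0.5

x = np.linspace(-6, 6, 1201)
a = 2.0                                   # adaptable shape parameter
dy = np.gradient(bimodal_asym(x, a), x)   # numerical derivative

# Locations of the derivative's local maxima (two modes expected near -a and +a).
peaks = x[(dy > np.roll(dy, 1)) & (dy > np.roll(dy, -1))]
print("derivative maxima near:", peaks)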

8.
The paper gives several strong results on neural network representation in an explicit form. Under very mild conditions, a functional defined on a compact set in C[a, b] or L^p[a, b], spaces of infinite dimension, can be approximated arbitrarily well by a neural network with one hidden layer. The results are a significant development beyond earlier work, where theorems on approximating continuous functions defined on a finite-dimensional real space by neural networks with one hidden layer were given. All the results are shown to be applicable to the approximation of the output of dynamic systems at any particular time.

9.
Some approximation-theoretic questions concerning a certain class of neural networks are considered. The networks considered are single-input, single-output, single-hidden-layer feedforward neural networks with continuous sigmoidal activation functions, no input weights, but with hidden-layer thresholds and output-layer weights. Specifically, questions of existence and uniqueness of best approximations on a closed interval of the real line under mean-square and uniform approximation error measures are studied. A by-product of this study is a reparametrization of the class of networks considered in terms of rational functions of a single variable. This rational reparametrization is used to apply the theory of Padé approximation to the class of networks considered. In addition, a question related to the number of local minima arising in gradient algorithms for learning is examined.
