Similar Literature
 Found 10 similar documents in total; search time: 78 ms
1.
Neural Computing and Applications - During the last decade, significant research progress has been made in both the theoretical aspects and the applications of Deep Learning Neural Networks....

2.
To enhance the generalization performance of radial basis function (RBF) neural networks, an RBF neural network based on a q-Gaussian function is proposed. The q-Gaussian function is chosen as the radial basis function of the network, and a particle swarm optimization algorithm is employed to select the network parameters. The non-extensive entropic index q is encoded in the particles and adjusted adaptively during the evolution of the population. Simulation results on function approximation indicate that the RBF neural network based on the q-Gaussian function achieves the best generalization performance.
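A minimal sketch of the q-Gaussian radial basis may make the construction concrete. It assumes the standard Tsallis q-exponential form of the basis function and fits only the output weights by least squares on a toy problem; the paper's particle-swarm selection of q, centers, and widths is not reproduced, and all names and values below are illustrative.

```python
import numpy as np

def q_gaussian(r2, q=1.5, sigma=1.0):
    """Tsallis q-exponential of -r^2/sigma^2; reduces to the ordinary Gaussian as q -> 1."""
    if abs(q - 1.0) < 1e-8:
        return np.exp(-r2 / sigma ** 2)
    base = np.maximum(1.0 - (1.0 - q) * r2 / sigma ** 2, 0.0)
    return base ** (1.0 / (1.0 - q))

# Toy fit: q-Gaussian hidden layer with fixed centers, output weights by least squares
# (the paper instead tunes q and the other parameters with particle swarm optimization).
x = np.linspace(-3, 3, 200)[:, None]
y = np.sin(x).ravel()
centers = np.linspace(-3, 3, 15)[:, None]
r2 = np.sum((x[:, None, :] - centers[None, :, :]) ** 2, axis=-1)   # (200, 15) squared distances
H = q_gaussian(r2, q=1.5, sigma=0.8)                               # hidden-layer design matrix
w, *_ = np.linalg.lstsq(H, y, rcond=None)
print("max abs approximation error:", np.max(np.abs(H @ w - y)))
```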

3.
Numerous studies have addressed nonlinear functional approximation by multilayer perceptrons (MLPs) and RBF networks as a special case of the more general mapping problem. The performance of both of these supervised network models depends intimately on the efficiency of their learning process. This paper presents an unsupervised recurrent neural network, based on the recurrent Mean Field Theory (MFT) network model, that finds a least-squares approximation to an arbitrary L2 function, given a set of Gaussian radially symmetric basis functions (RBFs). The essential step is the reformulation of RBF approximation as a constrained optimisation problem. A new concept of adiabatic network organisation is introduced; together with an adaptive mechanism of temperature control, this allows the network to build a hierarchical multiresolution approximation while preserving the global optimisation characteristics. A revised problem mapping results in a position-invariant local interconnectivity pattern, which makes the network attractive for electronic implementation. The dynamics and performance of the network are illustrated by numerical simulation.
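The underlying least-squares problem can be illustrated with a simple gradient-flow sketch: the quadratic cost 1/2*||Phi w - f||^2 is minimized by dynamics that follow its negative gradient, which is the kind of energy descent a recurrent network can emulate. This is only a hypothetical illustration; the MFT dynamics, adiabatic organisation, and temperature control described in the paper are not reproduced.

```python
import numpy as np

def gaussian_basis(x, centers, sigma):
    """Gaussian RBFs evaluated at sample points x (N,) for the given centers (M,)."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * sigma ** 2))

# Target L2 function sampled on a grid
x = np.linspace(-1, 1, 400)
f = np.sign(x) * x ** 2
Phi = gaussian_basis(x, np.linspace(-1, 1, 20), sigma=0.1)

# Least-squares energy E(w) = 1/2 ||Phi w - f||^2 with gradient Phi^T (Phi w - f).
# A recurrent energy-minimizing network can be viewed as gradient-flow dynamics on w.
w = np.zeros(Phi.shape[1])
lr = 1.0 / np.linalg.norm(Phi.T @ Phi, 2)   # step size from the largest eigenvalue of the Gram matrix
for _ in range(5000):
    w -= lr * Phi.T @ (Phi @ w - f)

print("RMS approximation error:", np.sqrt(np.mean((Phi @ w - f) ** 2)))
```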

4.
To enhance the generalization performance of radial basis function (RBF) neural networks, an RBF neural network based on a q-Gaussian function is proposed. The q-Gaussian function is chosen as the radial basis function of the network, and a particle swarm optimization algorithm is employed to select the network parameters. The non-extensive entropic index q is encoded in the particles and adjusted adaptively during the evolution of the population. Simulation results on function approximation indicate that the RBF neural network based on the q-Gaussian function achieves the best generalization performance.

5.
This paper proposes a new nonparametric regression method based on the combination of generalized regression neural networks (GRNNs), density-dependent multiple kernel bandwidths, and regularization. The presented model is generic and substitutes the very large number of bandwidths with a much smaller number of trainable weights that control the regression model. It depends on sets of extracted data-density features that reflect the density properties and distribution irregularities of the training data sets. We provide an efficient initialization scheme and a second-order algorithm to train the model, as well as an overfitting control mechanism based on Bayesian regularization. Numerical results show that the proposed network significantly reduces the computational demands of maintaining individual bandwidths while providing competitive function approximation accuracy in relation to existing methods.
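For orientation, a basic GRNN (Nadaraya-Watson) predictor with a single global bandwidth is sketched below; the density-dependent multiple bandwidths, second-order training, and Bayesian regularization of the proposed model are not reproduced, and the bandwidth value is illustrative.

```python
import numpy as np

def grnn_predict(x_query, x_train, y_train, sigma=0.1):
    """Basic GRNN (Nadaraya-Watson) prediction with one global kernel bandwidth."""
    d2 = np.sum((x_query[:, None, :] - x_train[None, :, :]) ** 2, axis=-1)   # (Q, N)
    k = np.exp(-d2 / (2.0 * sigma ** 2))                                     # Gaussian kernel weights
    return (k @ y_train) / np.clip(k.sum(axis=1), 1e-12, None)               # weighted average of targets

# Toy usage on noisy samples of a sinc function
rng = np.random.default_rng(0)
x_train = rng.uniform(-2, 2, size=(200, 1))
y_train = np.sinc(x_train).ravel() + 0.05 * rng.normal(size=200)
x_query = np.linspace(-2, 2, 50)[:, None]
print(grnn_predict(x_query, x_train, y_train, sigma=0.15)[:5])
```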

6.
This paper presents a simple sequential growing and pruning algorithm for radial basis function (RBF) networks. The algorithm, referred to as growing and pruning (GAP)-RBF, uses the concept of the "significance" of a neuron and links it to the learning accuracy. The "significance" of a neuron is defined as its contribution to the network output averaged over all the input data received so far. Using a piecewise-linear approximation of the Gaussian function, a simple and efficient way of computing this significance is derived for uniformly distributed input data. In the GAP-RBF algorithm, growing and pruning are based on the significance of the "nearest" neuron. In this paper, the performance of the GAP-RBF learning algorithm is compared with other well-known sequential learning algorithms, such as RAN, RANEKF, and MRAN, on an artificial problem with a uniform input distribution and three real-world nonuniform, higher-dimensional benchmark problems. The results indicate that the GAP-RBF algorithm can provide comparable generalization performance with a considerably reduced network size and training time.
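The "significance" notion can be illustrated directly from its definition in the abstract: a hidden unit's contribution to the network output, averaged over the inputs received so far, with units below an accuracy-linked threshold pruned. The sketch below is a hypothetical Python rendering with Gaussian units; the paper's piecewise-linear approximation and its growing criterion are not reproduced.

```python
import numpy as np

def neuron_significance(X, centers, sigmas, weights):
    """Average absolute contribution of each hidden unit over the data seen so far."""
    d2 = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
    phi = np.exp(-d2 / (2.0 * sigmas[None, :] ** 2))           # (N, M) hidden activations
    return np.mean(np.abs(phi * weights[None, :]), axis=0)     # (M,) per-unit significance

def prune(centers, sigmas, weights, X, threshold=1e-3):
    """Drop hidden units whose significance falls below the accuracy threshold."""
    keep = neuron_significance(X, centers, sigmas, weights) >= threshold
    return centers[keep], sigmas[keep], weights[keep]

# Toy usage: two of the ten units have negligible weights and get pruned
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 2))
centers = rng.uniform(-1, 1, size=(10, 2))
sigmas = np.full(10, 0.3)
weights = rng.normal(size=10) * np.array([1.0] * 8 + [1e-4] * 2)
centers, sigmas, weights = prune(centers, sigmas, weights, X)
print("units kept:", len(weights))
```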

7.
An orthogonal neural network for function approximation   (Total citations: 6; self-citations: 0; citations by others: 6)
This paper presents a new single-layer neural network based on orthogonal functions. The network is developed to avoid the problems of traditional feedforward neural networks, such as determining the initial weights and the numbers of layers and processing elements. The desired output accuracy determines the required number of processing elements. Because the weights are unique, training of the network converges rapidly. An experiment on approximating typical continuous and discrete functions is given. The results show that the network has excellent performance in convergence time and approximation error.
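A sketch of the idea, assuming Legendre polynomials as the orthogonal functions (the specific basis used in the paper is not stated in this abstract): because the basis is orthogonal, the least-squares weights are unique and can be obtained in a single linear solve, which is what makes training fast.

```python
import numpy as np
from numpy.polynomial import legendre

def orthogonal_net_fit(x, y, degree=10):
    """Fit a single-layer network whose 'hidden units' are Legendre polynomials.

    The basis is orthogonal on [-1, 1], so the least-squares weights are unique
    and obtained in one linear solve, with no iterative training.
    """
    return legendre.legfit(x, y, degree)

def orthogonal_net_predict(x, coeffs):
    return legendre.legval(x, coeffs)

# Toy usage: approximate a smooth function on [-1, 1]
x = np.linspace(-1, 1, 300)
y = np.exp(x) * np.cos(4 * x)
c = orthogonal_net_fit(x, y, degree=12)
print("max abs error:", np.max(np.abs(orthogonal_net_predict(x, c) - y)))
```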

8.
To overcome the heavy computational burden of generalized predictive control, which requires recursive online solution of the Diophantine equation and matrix inversion, a direct generalized predictive control algorithm based on a radial basis function (RBF) neural network is proposed for multivariable nonlinear systems with unknown parameters. The algorithm converts the multivariable nonlinear system into a multivariable time-varying linear system, approximates the time-varying coefficients in the system's generalized error vector with cubic spline basis functions, and then uses an RBF neural network to approximate the control-increment expression. Based on the estimate of the generalized error, the controller parameter vector, i.e., the network weight vector θu, and the unknown vector θe in the generalized error estimate are adjusted adaptively. Simulation results verify the effectiveness of the algorithm.
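A schematic sketch of the central ingredient may help: an RBF network whose weight vector θu parameterizes the control increment and is adapted online by a normalized gradient step driven by the error estimate. Everything below is illustrative; the Diophantine-free GPC derivation, the cubic-spline approximation of the time-varying coefficients, and the adaptation law for θe are not reproduced.

```python
import numpy as np

class RBFControlIncrement:
    """RBF network mapping a regressor vector to a control increment Δu.

    The weight vector theta_u is adapted online with a normalized gradient step
    driven by an error signal; this is a schematic stand-in for the paper's
    adaptive adjustment and omits its GPC derivation and analysis.
    """
    def __init__(self, centers, sigma=1.0, gamma=0.05):
        self.centers = centers              # (M, d) RBF centers
        self.sigma = sigma                  # common width
        self.gamma = gamma                  # adaptation gain
        self.theta_u = np.zeros(len(centers))

    def phi(self, x):
        d2 = np.sum((self.centers - x) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def control_increment(self, x):
        return float(self.theta_u @ self.phi(x))

    def adapt(self, x, error):
        """Normalized gradient update of theta_u driven by the error estimate."""
        p = self.phi(x)
        self.theta_u += self.gamma * error * p / (1.0 + p @ p)

# Toy usage with an illustrative one-dimensional regressor
ctrl = RBFControlIncrement(centers=np.linspace(-1, 1, 7)[:, None], sigma=0.5)
x = np.array([0.2])                 # regressor (e.g., past inputs/outputs) -- illustrative
du = ctrl.control_increment(x)
ctrl.adapt(x, error=0.1)            # error signal would come from the GPC error estimate
```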

9.
To address the problem that the hidden-layer neurons of a feedforward neural network cannot be adjusted online, an adaptive growing-and-pruning algorithm (AGP) is proposed. It adjusts the hidden-layer neurons by combining growing with pruning, realizing self-organization of the network structure and thereby improving network performance. The algorithm is further applied to soft measurement of biochemical oxygen demand (BOD) in wastewater treatment. Simulation results show that, compared with other self-organizing neural networks, AGP offers better generalization ability and higher fitting accuracy and can predict effluent BOD.
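The abstract does not give AGP's exact growth criterion, so the sketch below uses an illustrative RAN-style test: a hidden unit is added only when a new input is far from every existing center and the current prediction error is large. It is meant only to show how growth complements the pruning step sketched for item 6.

```python
import numpy as np

def should_grow(x_new, error, centers, dist_min=0.3, err_min=0.05):
    """Illustrative RAN-style growth test (not AGP's actual criterion)."""
    if len(centers) == 0:
        return True
    nearest = np.min(np.linalg.norm(centers - x_new, axis=1))
    return nearest > dist_min and abs(error) > err_min

def grow(centers, sigmas, weights, x_new, error, overlap=0.7):
    """Append a unit centered at the new input; its initial weight is the residual error."""
    nearest = np.min(np.linalg.norm(centers - x_new, axis=1)) if len(centers) else 1.0
    centers = np.vstack([centers, x_new])
    sigmas = np.append(sigmas, overlap * nearest)
    weights = np.append(weights, error)
    return centers, sigmas, weights
```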

10.
To avoid solving the Diophantine equation and performing matrix inversion, and thus improve the real-time performance of generalized predictive control, a direct generalized predictive control (GPC) algorithm based on a radial basis function neural network is proposed for a class of multivariable nonlinear systems with unknown parameters. The algorithm converts the multivariable nonlinear system into a multivariable time-varying linear system and then uses an RBF neural network to approximate the control increment; the controller parameter vector, i.e., the unknown vector in the network weights, is adjusted adaptively based on the tracking error. It is proved theoretically that the method makes the tracking error converge to a small neighborhood of the origin. Simulation results verify the effectiveness of the algorithm.
