Similar Documents
20 similar documents found (search time: 31 ms)
1.
This paper presents a systematic approach for constructing reformulated radial basis function (RBF) neural networks, developed to facilitate their training by supervised learning algorithms based on gradient descent. The approach reduces the construction of RBF models to the selection of admissible generator functions, and that selection relies on the concept of the blind spot, which is introduced in the paper. The paper also introduces a new family of reformulated RBF neural networks, referred to as cosine radial basis functions. Cosine radial basis functions are constructed from linear generator functions of a special form, and their use as similarity measures in RBF models is justified by their geometric interpretation. Experiments on a variety of datasets indicate that cosine radial basis functions considerably outperform conventional RBF neural networks with Gaussian radial basis functions. Cosine radial basis functions are also strong competitors to existing reformulated RBF models trained by gradient descent and to feedforward neural networks with sigmoid hidden units.
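For orientation, the sketch below shows one plausible form of a cosine RBF hidden unit consistent with the description above, with response a_j / sqrt(||x - v_j||^2 + a_j^2); the function name and array shapes are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def cosine_rbf(x, centers, a):
    """Possible cosine RBF responses: phi_j(x) = a_j / sqrt(||x - v_j||^2 + a_j^2).

    x       : (d,)   input vector
    centers : (m, d) prototypes v_j of the m hidden units
    a       : (m,)   positive reference parameters, one per hidden unit
    """
    sq_dist = np.sum((centers - x) ** 2, axis=1)   # ||x - v_j||^2 for every unit
    return a / np.sqrt(sq_dist + a ** 2)           # lies in (0, 1]; equals 1 at x = v_j
```

A value of this form can be read as the cosine of an angle in an augmented space, which is one way to see such units acting as similarity measures.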

2.
This paper introduces a learning algorithm that can be used for training reformulated radial basis function neural networks (RBFNNs) capable of identifying uncertainty in data classification. This learning algorithm trains a special class of reformulated RBFNNs, known as cosine RBFNNs, by updating selected adjustable parameters to minimize the class-conditional variances at the outputs of their radial basis functions (RBFs). The experiments verify that quantum neural networks (QNNs) and cosine RBFNNs trained by the proposed learning algorithm are capable of identifying uncertainty in data classification, a property that is not shared by cosine RBFNNs trained by the original learning algorithm and conventional feed-forward neural networks (FFNNs). Finally, this study leads to a simple classification strategy that can be used to improve the classification accuracy of QNNs and cosine RBFNNs by rejecting ambiguous feature vectors based on their responses.

3.
4.
In a radial basis function (RBF) network, the RBF centers and widths can be evolved by a cooperative-competitive genetic algorithm. The set of genetic strings in one generation of the algorithm represents one RBF network, not a population of competing networks. This leads to moderate computation times for the algorithm as a whole. Selection operates on individual RBFs rather than on whole networks. Selection therefore requires a genetic fitness function that promotes competition among RBFs which are doing nearly the same job while at the same time promoting cooperation among RBFs which cover different parts of the domain of the function to be approximated. Niche creation results from a fitness function of the form |w_i|^β / E(|w_i'|^β), 1 …
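The fitness expression above is truncated in this listing; the sketch below is only one way to read it, treating E(·) as an average of |w_i'|^β over the current population of RBFs. The function name and the choice of mean rather than sum are assumptions.

```python
import numpy as np

def rbf_fitness(w, beta=1.0):
    """One reading of the niche-style fitness |w_i|**beta / E(|w_i'|**beta).

    w    : (m,) second-layer weights attached to the m RBFs of the network
    beta : exponent controlling how strongly large weights are rewarded
    """
    contrib = np.abs(w) ** beta
    return contrib / contrib.mean()   # E(.) interpreted here as the population mean
```

Under this reading, an RBF whose output weight is large relative to its peers receives fitness above 1, while nearly unused RBFs are selected against.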

5.
This article presents a new family of reformulated radial basis function (RBF) neural networks that employ adjustable weighted norms to measure the distance between the training vectors and the centers of the radial basis functions. The reformulated RBF model introduced in this article incorporates norm weights that can be updated during learning to facilitate the implementation of the desired input-output mapping. Experiments involving classification and function approximation tasks verify that the proposed RBF neural networks outperform conventional RBF neural networks and reformulated RBF neural networks employing fixed Euclidean norms. Reformulated RBF neural networks with adjustable weighted norms are also strong competitors to conventional feedforward neural networks in terms of performance, implementation simplicity, and training speed.
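As a rough illustration of the adjustable weighted-norm idea, the sketch below uses a diagonal (per-dimension) weighting inside a Gaussian RBF; the abstract does not give the exact parameterisation, so the diagonal form, names, and shapes here are assumptions.

```python
import numpy as np

def weighted_norm_gaussian_rbf(x, centers, norm_weights, widths):
    """Gaussian RBF responses with a trainable weighted norm replacing the fixed
    Euclidean distance: ||x - v_j||_W^2 = sum_k (w_jk * (x_k - v_jk))^2.

    x            : (d,)   input vector
    centers      : (m, d) RBF centers v_j
    norm_weights : (m, d) per-center, per-dimension norm weights (updated during learning)
    widths       : (m,)   RBF widths
    """
    weighted_sq_dist = np.sum((norm_weights * (x - centers)) ** 2, axis=1)
    return np.exp(-weighted_sq_dist / (2.0 * widths ** 2))
```

Setting all norm weights to 1 recovers a conventional Gaussian RBF with the Euclidean norm, which is the baseline the abstract compares against.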

6.
A learning algorithm for RBF neural networks based on Gaussian kernels (cited 15 times: 0 self-citations, 15 by others)
殷勇  邱明 《计算机工程与应用》2002,38(21):118-119,178
Whether the centers and other parameters of an RBF neural network are chosen reasonably directly affects its learning performance. Determining the centers and other parameters by supervised learning is the most general approach, and in this approach parameter initialization is the key issue. Based on an analysis of the mapping properties of RBF neural networks, this paper proposes a method for initializing the centers and other parameters, and derives a gradient-descent learning algorithm for RBF neural networks. Several examples show that the proposed learning algorithm is effective. This work provides a measure of technical support for the broad application of RBF neural networks.
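The abstract does not spell out the update rules; the following is a minimal sketch of supervised gradient-descent training for a single-output Gaussian-kernel RBF network on the squared error, with all function and variable names chosen here for illustration.

```python
import numpy as np

def train_gaussian_rbf(X, y, centers, widths, w, lr=0.01, epochs=100):
    """Gradient-descent updates for output weights, centers and widths of a
    Gaussian RBF network, minimizing 0.5 * (f(x) - t)**2 per sample.

    X: (n, d) inputs; y: (n,) targets; centers: (m, d); widths, w: (m,) float arrays.
    """
    for _ in range(epochs):
        for x, t in zip(X, y):
            diff = x - centers                             # (m, d) offsets to each center
            sq_dist = np.sum(diff ** 2, axis=1)            # squared distances ||x - c_j||^2
            phi = np.exp(-sq_dist / (2.0 * widths ** 2))   # hidden-unit responses
            err = w @ phi - t                              # signed output error
            w       -= lr * err * phi
            centers -= lr * err * (w * phi / widths ** 2)[:, None] * diff
            widths  -= lr * err * w * phi * sq_dist / widths ** 3
    return centers, widths, w
```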

7.
The mixed use of different shapes of radial basis functions (RBFs) in radial basis function neural networks (RBFNNs) is investigated in this paper. For this purpose, we propose the use of a generalised version of the standard RBFNN, based on the generalised Gaussian distribution. The generalised radial basis function (GRBF) proposed in this paper is able to reproduce other radial basis functions (RBFs) by changing a real parameter τ. In the proposed methodology, a hybrid evolutionary algorithm (HEA) is employed to estimate the number of hidden neurons and the centre, type and width of the RBF associated with each radial unit. In order to test the performance of the proposed methodology, an experimental study is presented with 20 datasets from the UCI repository. The GRBF neural network (GRBFNN) was compared to RBFNNs with Gaussian, Cauchy and inverse multiquadratic RBFs in the hidden layer and to other classifiers, including different RBFNN design methods, support vector machines (SVMs), a sparse probabilistic classifier (sparse multinomial logistic regression, SMLR) and other non-sparse (but regularised) probabilistic classifiers (regularised multinomial logistic regression, RMLR). The GRBFNN models were found to be better than the alternative RBFNNs for almost all datasets, producing the highest mean accuracy rank.
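One common way to write a generalised Gaussian basis function with a shape parameter τ is sketched below; the exact parameterisation used in the paper may differ, and the names here are illustrative.

```python
import numpy as np

def generalised_rbf(x, centers, radii, tau):
    """Generalised Gaussian RBF: phi_j(x) = exp(-(||x - c_j|| / r_j) ** tau).

    tau = 2 gives the standard Gaussian shape; smaller or larger values of the
    real parameter tau yield more peaked or flatter basis functions.
    """
    dist = np.linalg.norm(x - centers, axis=1)   # ||x - c_j|| for every center
    return np.exp(-(dist / radii) ** tau)
```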

8.
Research on gradient algorithms for reinforcement learning with neural networks (cited 11 times: 1 self-citation, 11 by others)
徐昕  贺汉根 《计算机学报》2003,26(2):227-233
For Markov decision problems with continuous state spaces and discrete action spaces, a new gradient-descent reinforcement learning algorithm is proposed that uses multilayer feedforward neural networks for value-function approximation. The algorithm adopts a near-greedy, continuously differentiable Boltzmann-distribution action-selection policy and approximates the optimal value function of the Markov decision process by minimizing the sum of squared Bellman residuals under a non-stationary action policy. The convergence of the algorithm and the performance of the resulting near-optimal policy are analyzed theoretically, and simulation studies on the Mountain-Car learning control problem further verify the learning efficiency and generalization performance of the algorithm.

9.
The paper presents novel modifications to radial basis functions (RBFs) and a neural network based classifier for holistic recognition of the six universal facial expressions from static images. The new basis functions, called cloud basis functions (CBFs) use a different feature weighting, derived to emphasize features relevant to class discrimination. Further, these basis functions are designed to have multiple boundary segments, rather than a single boundary as for RBFs. These new enhancements to the basis functions along with a suitable training algorithm allow the neural network to better learn the specific properties of the problem domain. The proposed classifiers have demonstrated superior performance compared to conventional RBF neural networks as well as several other types of holistic techniques used in conjunction with RBF neural networks. The CBF neural network based classifier yielded an accuracy of 96.1%, compared to 86.6%, the best accuracy obtained from all other conventional RBF neural network based classification schemes tested using the same database.

10.
P.A.  C.  M.  J.C.   《Neurocomputing》2009,72(13-15):2731
This paper proposes a hybrid neural network model using a possible combination of different transfer projection functions (sigmoidal unit, SU, product unit, PU) and kernel functions (radial basis function, RBF) in the hidden layer of a feed-forward neural network. An evolutionary algorithm is adapted to this model and applied for learning the architecture, weights and node typology. Three different combined basis function models are proposed with all the different pairs that can be obtained with SU, PU and RBF nodes: product–sigmoidal unit (PSU) neural networks, product–radial basis function (PRBF) neural networks, and sigmoidal–radial basis function (SRBF) neural networks; and these are compared to the corresponding pure models: product unit neural network (PUNN), multilayer perceptron (MLP) and the RBF neural network. The proposals are tested using ten benchmark classification problems from well-known machine learning repositories. Combined functions using projection and kernel functions are found to be better than pure basis functions for the task of classification in several datasets.
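For reference, the sketch below shows simple forms of the three node types being combined (sigmoidal projection units, product projection units, and Gaussian kernel units); these generic textbook forms are assumptions and may not match the paper's exact definitions.

```python
import numpy as np

def sigmoidal_unit(x, w, b):
    """Projection node: logistic sigmoid of the affine projection w . x + b."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

def product_unit(x, w):
    """Projection node: product of inputs raised to real exponents, prod_k x_k ** w_k.
    Assumes strictly positive inputs so that fractional exponents are well defined."""
    return np.prod(np.power(x, w))

def gaussian_rbf_unit(x, c, r):
    """Kernel node: Gaussian response centred at c with radius r."""
    return np.exp(-np.sum((x - c) ** 2) / (2.0 * r ** 2))
```

A combined model, in the spirit described above, places two of these node types side by side in the hidden layer and takes a linear combination of their responses at the output.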

11.
For fault diagnosis of sucker-rod pumping wells, Fourier descriptors, which have strong contour-shape recognition capability, are used to extract the features that form the input vectors of an RBFNN (radial basis function neural network). After analyzing and comparing the characteristics of RBF networks trained by gradient descent and by genetic algorithms, a hierarchical cyclic learning RBFNN algorithm based on Fourier descriptors is proposed. Simulation experiments on nonlinear function approximation show that the proposed algorithm is accurate and effective. Finally, a network model of the hierarchical learning algorithm is built with the MATLAB neural network toolbox to diagnose faults in sucker-rod pumping wells, and simulation tests verify that the proposed fault diagnosis method can accurately identify the fault type.

12.
The approximation properties of the RBF neural networks are investigated in this paper. A new approach is proposed, which is based on approximations with orthogonal combinations of functions. An orthogonalization framework is presented for the Gaussian basis functions. It is shown how to use this framework to design efficient neural networks. Using this method we can estimate the necessary number of the hidden nodes, and we can evaluate how appropriate the use of the Gaussian RBF networks is for the approximation of a given function.
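The paper's own orthogonalisation framework is not reproduced here; the sketch below only illustrates the general idea of orthogonalising Gaussian basis-function columns (via QR) and ranking the contribution each orthogonal component makes to the target, which is one way such a framework can suggest how many hidden nodes are needed. All names are illustrative.

```python
import numpy as np

def gaussian_design_matrix(X, centers, width):
    """Column j holds the Gaussian basis function at center j evaluated on all inputs."""
    sq_dist = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)  # (n, m)
    return np.exp(-sq_dist / (2.0 * width ** 2))

def orthogonal_contributions(Phi, y):
    """Orthogonalise the basis-function columns and measure the fraction of the
    target's energy explained by each orthogonal direction; directions with tiny
    contributions suggest hidden nodes that add little."""
    Q, _ = np.linalg.qr(Phi)          # orthonormal combinations of the basis functions
    proj = Q.T @ y                    # projection of the target onto each direction
    return proj ** 2 / (y @ y)
```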

13.
Numerous studies have addressed nonlinear functional approximation by multilayer perceptrons (MLPs) and RBF networks as a special case of the more general mapping problem. The performance of both these supervised network models intimately depends on the efficiency of their learning process. This paper presents an unsupervised recurrent neural network, based on the recurrent Mean Field Theory (MFT) network model, that finds a least-squares approximation to an arbitrary L2 function, given a set of Gaussian radially symmetric basis functions (RBFs). Essential is the reformulation of RBF approximation as a problem of constrained optimisation. A new concept of adiabatic network organisation is introduced. Together with an adaptive mechanism of temperature control this allows the network to build a hierarchical multiresolution approximation with preservation of the global optimisation characteristics. A revised problem mapping results in a position invariant local interconnectivity pattern, which makes the network attractive for electronic implementation. The dynamics and performance of the network are illustrated by numerical simulation.

14.
Research on deep learning application techniques (cited 2 times: 0 self-citations, 2 by others)
This paper surveys deep learning application techniques. It describes in detail the greedy layer-wise training approach in which RBMs (Restricted Boltzmann Machines) are pre-trained layer by layer and then fine-tuned with BP (back-propagation), and compares the three gradient-descent variants used in the BP algorithm, recommending stochastic gradient descent for online learning systems and stochastic mini-batch gradient descent for static offline learning systems. It summarizes the structural characteristics of deep architectures and recommends the currently most popular five-layer deep network design. It analyzes why feedforward neural networks need nonlinear activation functions, reviews the advantages of commonly used activation functions, and recommends the ReLU (rectified linear unit) activation. Finally, it briefly summarizes the characteristics and application scenarios of newer deep networks such as deep CNNs (Convolutional Neural Networks), deep RNNs (recurrent neural networks) and LSTMs (long short-term memory networks), and outlines possible future directions for deep learning.
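As a compact illustration of two recommendations in this survey (the choice of gradient-descent variant and the ReLU activation), the sketch below shows a generic update step and the ReLU function; grad_fn and all parameter names are placeholders, not anything defined in the survey.

```python
import numpy as np

def gradient_step(params, grad_fn, X, y, lr=0.1, batch_size=None, rng=np.random):
    """One parameter update covering the three gradient-descent variants compared:
    batch_size=None      -> full-batch gradient descent
    batch_size=1         -> pure online stochastic gradient descent
    1 < batch_size < n   -> stochastic mini-batch gradient descent
    """
    if batch_size is None:
        Xb, yb = X, y
    else:
        idx = rng.choice(len(X), size=min(batch_size, len(X)), replace=False)
        Xb, yb = X[idx], y[idx]
    return params - lr * grad_fn(params, Xb, yb)

def relu(z):
    """Rectified linear unit recommended by the survey: max(0, z)."""
    return np.maximum(0.0, z)
```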

15.
To optimize the solving capability of feedforward neural networks more effectively, a new composite transfer function is proposed that combines the multilayer perceptron and the RBF neural network more organically to produce flexible decision boundaries. A corresponding learning algorithm is derived on this basis. Simulations on real pattern classification problems compare the proposed method with the momentum-term BP algorithm, CSFN, RBF and other algorithms, verifying its effectiveness.

16.
A complex radial basis function neural network is proposed for equalization of quadrature amplitude modulation (QAM) signals in communication channels. The network utilizes a sequential learning algorithm referred to as complex minimal resource allocation network (CMRAN) and is an extension of the MRAN algorithm originally developed for online learning in real-valued radial basis function (RBF) networks. CMRAN has the ability to grow and prune the (complex) RBF network's hidden neurons to ensure a parsimonious network structure. The performance of the CMRAN equalizer for nonlinear channel equalization problems has been evaluated by comparing it with the functional link artificial neural network (FLANN) equalizer of J.C. Patra et al. (1999) and the Gaussian stochastic gradient (SG) RBF equalizer of I. Cha and S. Kassam (1995). The results clearly show that CMRAN's performance is superior in terms of symbol error rates and network complexity.

17.
Parameter dynamics of RBF networks under gradient algorithms (cited 2 times: 0 self-citations, 2 by others)
Analyzing how the parameters of a neural network evolve during learning helps in understanding the network's dynamical behavior and in improving its structure and performance. This paper studies the dynamics of the hidden-node parameters of RBF networks when the sum-of-squared-error loss is optimized by gradient descent, that is, the possible values the hidden-node parameters can take after the algorithm converges. The main conclusions are: if the loss is nonzero after convergence, each hidden node settles at a weighted cluster center of the sample inputs; if the loss is zero, redundant hidden nodes in the network shrink, decay, drift outward, or coincide. Further experiments show that, for oversized RBF networks, shrinking, outward drift, decay, and coincidence of redundant hidden nodes occur frequently.

18.
Face features are first extracted by PCA and then projected into the Fisher-optimal subspace, in which the ratio of between-class scatter to within-class scatter is maximized. A novel supervised clustering method is then proposed that uses the limited training data to select the structure and initial parameters of the RBF network. Finally, a hybrid learning algorithm is proposed to train the RBF neural network, greatly reducing the dimensionality of the search space in the gradient-descent optimization. Simulation results on the ORL database show that the method achieves excellent performance in both classification error rate and learning efficiency.

19.
李享梅  赵天昀 《计算机应用》2005,25(12):2789-2791
The gradient descent method used in BP neural networks has strong local search ability but weak global search ability, whereas the genetic algorithm used in genetic neural networks has strong global search ability but weak local search ability. A Hybrid Intelligence learning algorithm (HI algorithm) that combines the advantages of gradient descent and genetic algorithms is therefore proposed and applied to optimizing the connection weights of multilayer feedforward neural networks. The algorithm is designed and implemented, and it is shown both theoretically and experimentally that the hybrid-intelligence neural network achieves better computational performance, faster convergence, and higher accuracy than BP neural networks and genetic-algorithm-based neural networks.

20.
This paper presents a wavelet-based recurrent fuzzy neural network (WRFNN) for prediction and identification of nonlinear dynamic systems. The proposed WRFNN model combines the traditional Takagi-Sugeno-Kang (TSK) fuzzy model and the wavelet neural networks (WNN). This paper adopts the nonorthogonal and compactly supported functions as wavelet neural network bases. Temporal relations embedded in the network are caused by adding some feedback connections representing the memory units into the second layer of the feedforward wavelet-based fuzzy neural networks (WFNN). An online learning algorithm, which consists of structure learning and parameter learning, is also presented. The structure learning depends on the degree measure to obtain the number of fuzzy rules and wavelet functions. Meanwhile, the parameter learning is based on the gradient descent method for adjusting the shape of the membership function and the connection weights of WNN. Finally, computer simulations have demonstrated that the proposed WRFNN model requires fewer adjustable parameters and obtains a smaller rms error than other methods.
