Similar Documents
Found 20 similar documents; search took 109 ms.
1.
蒙西    乔俊飞    李文静   《智能系统学报》2018,13(3):331-338
To address the difficulty of determining the hidden-layer structure of radial basis function (RBF) neural networks, a network structure design algorithm based on fast density clustering is proposed. The algorithm applies the strong clustering behavior of fast density clustering to RBF structure design: it locates the points of highest density and takes them as hidden neurons, thereby fixing the number of hidden neurons and their initial parameters. The properties of the Gaussian function are then exploited to keep every hidden neuron active. Finally, an improved second-order algorithm trains the network, improving convergence speed and generalization ability. Simulations on typical nonlinear function approximation and nonlinear dynamic system identification show that the RBF network designed by fast density clustering has a compact structure, fast learning, and good generalization ability.
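As a rough illustration of the center-selection step described above (not the authors' algorithm; the kernel bandwidth, width, and function names are assumptions), the local density of each sample can be estimated with a Gaussian kernel and the densest samples taken as RBF centers:

```python
import numpy as np

def density_peak_centers(X, n_centers, sigma=1.0):
    """Pick the n_centers highest-density samples as RBF centers.

    Density of each sample is estimated with a Gaussian kernel over
    pairwise distances, loosely following the density-peak idea.
    """
    # pairwise squared Euclidean distances, shape (n, n)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    density = np.exp(-d2 / (2 * sigma ** 2)).sum(axis=1)
    idx = np.argsort(density)[::-1][:n_centers]  # densest first
    return X[idx]

def rbf_design_matrix(X, centers, width):
    """Gaussian hidden-layer outputs for each sample/center pair."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
C = density_peak_centers(X, n_centers=5)
H = rbf_design_matrix(X, C, width=1.0)
print(C.shape, H.shape)
```

The number of centers is fixed here for simplicity; in the abstract it is determined by the clustering itself.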

2.
To address the difficulty of determining the hidden-layer structure of radial basis function (RBF) networks, this paper presents a self-organizing RBF design method based on neuron properties, which combines neuron activation activity, significance, and correlation to design an ASC-RBF network. First, neuron activation activity drives the adaptive addition of hidden neurons; combined with neuron significance and inter-neuron correlation, neurons are adaptively replaced and merged, completing the self-organizing design and improving network compactness. A second-order gradient algorithm then refines the network parameters to guarantee the accuracy of the RBF network. A stability analysis of the structural self-organization mechanism is also given. Finally, the algorithm is validated on two benchmark nonlinear system modeling tasks and on water-quality parameter prediction in a real wastewater treatment process. Comparative results show that, relative to existing self-organizing networks, the ASC-RBF network trains faster and has a more compact structure while preserving generalization performance.

3.

To eliminate the many redundant nodes that reduce the learning efficiency and accuracy of the incremental extreme learning machine (I-ELM), an improved incremental kernel extreme learning algorithm based on the Delta test (DT) and the chaos optimization algorithm (COA) is proposed. The global search capability of COA is used to optimize the hidden-node parameters of I-ELM, and the DT is applied to the model output error to determine the effective number of hidden nodes, reducing network complexity and improving learning efficiency; adding a kernel function strengthens the network's online prediction ability. Simulations show that the proposed DCI-ELMK algorithm achieves good prediction accuracy and generalization ability with a more compact network structure.


4.
This paper proposes a BP network learning algorithm based on improved particle swarm optimization. The algorithm first improves the traditional BP algorithm so that the numbers of input-, hidden-, and output-layer nodes reach an optimum. Particle swarm optimization then replaces the gradient descent of standard BP, making the improved algorithm less prone to local minima and better at generalizing; the algorithm is applied to a stock-prediction design. Results show that it markedly reduces the number of iterations, improves convergence accuracy, and generalizes better than the traditional BP algorithm.

5.
A neural network classifier is constructed with unipolar sigmoid functions as the hidden-neuron activation; the input-to-hidden weights and the hidden-neuron thresholds are generated randomly. Combined with the weights direct determination (WDD) method, which uses the pseudoinverse to compute the hidden-to-output weights in a single step, a structure self-determination method with grow-while-pruning and secondary-deletion strategies is further proposed to determine the network's optimal weights and structure. Numerical experiments show that the algorithm quickly and effectively determines the optimal structure of the unipolar-sigmoid neural network classifier, and the resulting classifier performs well.

6.
This paper presents a performance enhancement scheme for the recently developed extreme learning machine (ELM) for multi-category sparse data classification problems. ELM is a single hidden layer neural network with good generalization capabilities and extremely fast learning capacity. In ELM, the input weights are randomly chosen and the output weights are analytically calculated. The generalization performance of the ELM algorithm for sparse data classification depends critically on three free parameters: the number of hidden neurons, the input weights, and the bias values, all of which need to be optimally chosen. Selecting these parameters for the best performance of ELM involves a complex optimization problem. In this paper, we present a new, real-coded genetic algorithm approach called 'RCGA-ELM' to select the optimal number of hidden neurons, input weights and bias values for better performance. Two new genetic operators called 'network based operator' and 'weight based operator' are proposed to find a compact network with higher generalization performance. We also present an alternate and less computationally intensive approach called 'sparse-ELM', which searches for the best parameters of ELM using K-fold validation. A multi-class human cancer classification problem using micro-array gene expression data (which is sparse) is used for evaluating the performance of the two schemes. Results indicate that the proposed RCGA-ELM and sparse-ELM significantly improve ELM performance for sparse multi-category classification problems.
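The ELM baseline the paper builds on (random input weights, output weights computed analytically via the Moore-Penrose pseudoinverse) can be sketched as follows; the activation, sizes, and toy data are illustrative choices, not the paper's setup:

```python
import numpy as np

def elm_fit(X, Y, n_hidden, rng):
    """Train a basic ELM: random input weights, analytic output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights
    b = rng.normal(size=n_hidden)                # random biases
    H = np.tanh(X @ W + b)                       # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ Y                 # Moore-Penrose pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 1))
Y = np.sin(3 * X)                                # toy regression target
W, b, beta = elm_fit(X, Y, n_hidden=50, rng=rng)
err = np.abs(elm_predict(X, W, b, beta) - Y).mean()
print(err)
```

The genetic search described in the abstract would wrap this training step, evolving the number of hidden neurons, `W`, and `b` rather than drawing them once at random.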

7.
We present two new classifiers for two-class classification problems using a new Beta-SVM kernel transformation and an iterative algorithm to concurrently select the support vectors for a support vector machine (SVM) and the hidden units for a single hidden layer neural network to achieve a better generalization performance. To construct the classifiers, the contributing data points are chosen on the basis of a thresholding scheme of the outputs of a single perceptron trained using all training data samples. The chosen support vectors are used to construct a new SVM classifier that we call Beta-SVN. The number of chosen support vectors is used to determine the structure of the hidden layer in a single hidden layer neural network that we call Beta-NN. The Beta-SVN and Beta-NN structures produced by our method outperformed other commonly used classifiers when tested on a 2-dimensional non-linearly separable data set.

8.
In this letter, we attempt to quantify the significance of increasing the number of neurons in the hidden layer of a feedforward neural network architecture using the singular value decomposition (SVD). Through this, we extend some well-known properties of the SVD in evaluating the generalizability of single hidden layer feedforward networks (SLFNs) with respect to the number of hidden layer neurons. The generalization capability of the SLFN is measured by the degree of linear independency of the patterns in hidden layer space, which can be indirectly quantified from the singular values obtained from the SVD in a postlearning step. A pruning/growing technique based on these singular values is then used to estimate the necessary number of neurons in the hidden layer. More importantly, we describe in detail properties of the SVD in determining the structure of a neural network, particularly with respect to the robustness of the selected model.
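The letter's core quantity, the singular values of the hidden-layer output matrix, is cheap to compute in a post-learning step. A minimal sketch (the network and the rank tolerance are assumptions, not the authors' setup):

```python
import numpy as np

# Hidden-layer output matrix H: rows are input patterns, columns are neurons.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
W = rng.normal(size=(3, 12))
H = np.tanh(X @ W)

s = np.linalg.svd(H, compute_uv=False)  # singular values, descending

# Singular values above a tolerance count the linearly independent
# directions the hidden neurons contribute; neurons beyond this
# effective rank are candidates for pruning.
tol = s.max() * max(H.shape) * np.finfo(H.dtype).eps
eff_rank = int((s > tol).sum())
print(eff_rank)
```

In practice a looser, energy-based threshold on `s` would be used to decide how aggressively to prune; the machine-epsilon tolerance here only detects exact linear dependence.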

9.
The input weight matrix and hidden-layer biases of the traditional extreme learning machine (ELM) are assigned randomly, which can lead to clearly insufficient accuracy in computer-aided diagnosis of breast tumors; an improved fish swarm algorithm is therefore proposed to optimize the ELM. Exploiting ELM's fast training and good generalization ability, and using the improved fish swarm algorithm to optimize the hidden-layer biases, a nonlinear mapping is constructed between breast tumors and ten feature vectors extracted from breast tumor sample data. Simulation results of the proposed method are compared with the AFSA-ELM, ELM, LVQ, and BP methods in terms of recognition accuracy, false-negative rate, and learning speed. The results show that the proposed method achieves high classification accuracy, a low false-negative rate, and a fast learning speed for breast tumor diagnosis.

10.

This paper presents an adaptive technique for obtaining centers of the hidden layer neurons of radial basis function neural network (RBFNN) for face recognition. The proposed technique uses firefly algorithm to obtain natural sub-clusters of training face images formed due to variations in pose, illumination, expression and occlusion, etc. Movement of fireflies in a hyper-dimensional input space is controlled by tuning the parameter gamma (γ) of firefly algorithm which plays an important role in maintaining the trade-off between effective search space exploration, firefly convergence, overall computational time and the recognition accuracy. The proposed technique is novel as it combines the advantages of evolutionary firefly algorithm and RBFNN in adaptive evolution of number and centers of hidden neurons. The strength of the proposed technique lies in its fast convergence, improved face recognition performance, reduced feature selection overhead and algorithm stability. The proposed technique is validated using benchmark face databases, namely ORL, Yale, AR and LFW. The average face recognition accuracies achieved using proposed algorithm for the above face databases outperform some of the existing techniques in face recognition.


11.
张辉  柴毅 《计算机工程与应用》2012,48(20):146-149,157
An improved parameter optimization algorithm for RBF neural networks is proposed. The number of hidden nodes is determined by a resource allocating network algorithm, a pruning strategy removes nodes that contribute little to the network, and an improved particle swarm optimization algorithm tunes the centers, widths, and weights of the RBF network, so that the network obtains both a suitable structure and suitable control parameters. Applied to prediction on a continuous stirred-tank reactor model, the optimized RBF network has a small structure and high generalization ability.
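A hybrid of this kind can be sketched as follows: PSO searches over RBF centers and a shared width, while the output weights are obtained by least squares at each fitness evaluation. This is a simplification of the abstract's scheme (which also prunes nodes and tunes the weights by PSO); all coefficients, sizes, and the toy target are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(120, 1))
Y = np.sin(np.pi * X).ravel()

K = 6  # hidden nodes; each particle encodes K centers + 1 shared width

def fitness(p):
    centers, width = p[:K], abs(p[K]) + 1e-3
    H = np.exp(-((X - centers) ** 2) / (2 * width ** 2))
    w = np.linalg.lstsq(H, Y, rcond=None)[0]  # output weights by least squares
    return np.mean((H @ w - Y) ** 2)

n, dim = 20, K + 1
pos = rng.uniform(-1, 1, size=(n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_f = np.array([fitness(p) for p in pos])
g = pbest[pbest_f.argmin()].copy()           # global best particle

for _ in range(60):                          # standard PSO velocity update
    r1, r2 = rng.random((2, n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = pos + vel
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    g = pbest[pbest_f.argmin()].copy()

print(pbest_f.min())  # mean squared error of the best network found
```

Solving the output weights analytically inside the fitness function keeps the PSO search space small, which is one common way such hybrids are organized.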

12.
陈华伟  年晓玲  靳蕃 《计算机应用》2006,26(5):1106-1108
A new learning algorithm for feedforward neural networks is proposed. The algorithm can adjust the weights between different layers in both the forward and backward phases: in the forward phase, the hidden-to-output weights are determined by the minimum-norm least-squares solution; in the backward phase, the input-to-hidden weights are adjusted by error gradient descent. The algorithm learns and converges quickly and, to a certain extent, preserves the generalization ability of the trained network. Experimental results provide an initial validation of the new algorithm's performance.

13.
Against the background of commercial 5G mobile communication deployments, designing accurate and efficient channel estimation methods is important for wireless network performance optimization. A new wireless propagation-loss prediction method based on an improved GA-Elman algorithm is proposed. The connection weights, thresholds, and hidden neurons of the Elman neural network are real-coded, binary control genes are added to the hidden-neuron encoding, and an adaptive genetic algorithm optimizes the weights, thresholds, and number of hidden neurons, overcoming the network's tendency to fall into local minima and the difficulty of fixing the neuron count, thereby improving prediction performance. Simulations show higher prediction accuracy than both a GA-Elman network that optimizes only the connection weights and thresholds and a standard Elman network.

14.
韩敏  刘晓欣 《控制与决策》2014,29(9):1576-1580

To address variable selection and network structure design in regression problems, a mutual-information-based extreme learning machine (ELM) training algorithm is proposed that simultaneously selects the input variables and optimizes the hidden-layer structure. The algorithm embeds mutual-information input-variable selection into the ELM learning process, uses the network's learning performance as the criterion for whether an input variable is relevant to the output, and determines the hidden-layer size incrementally. Simulations on the Lorenz and Gas Furnace data and on ten benchmark data sets demonstrate the algorithm's effectiveness: it not only simplifies the network structure but also improves generalization performance.


15.
To address the random assignment of input weights and hidden-layer thresholds in the traditional extreme learning machine, an output-value back-allocation algorithm is proposed. Building on the traditional extreme learning machine, the algorithm obtains the optimal output-value allocation coefficients through optimization and determines the network input parameters by least squares. Experiments on common data sets, compared with other improved extreme learning machine algorithms, show that the algorithm has good learning and generalization ability and yields a simple network structure, demonstrating its effectiveness.

16.
冯明琴  张靖  孙政顺 《自动化学报》2003,29(6):1015-1022
A fluid catalytic cracking unit is a complex system that is highly nonlinear, time-varying, long-delay, strongly coupled, distributed-parameter, and uncertain. Based on a study of its process mechanism, a fuzzy neural network is defined for modeling, the model's correctness is checked with an autocorrelation-function test, and steady-state optimization is computed with an improved Frank-Wolfe algorithm; identification, modeling, and steady-state optimizing control are studied experimentally on the catalytic cracking unit of a refinery. This fuzzy neural network offers many hidden layers and hidden nodes, strong generalization and approximation ability, and fast convergence; more notably, derivatives of the outputs with respect to the inputs can be obtained directly, which greatly facilitates steady-state optimization. Combined with the improved Frank-Wolfe algorithm, it is a feasible approach to modeling and steady-state optimizing control of complex nonlinear production processes.

17.
Self-Adaptive Evolutionary Extreme Learning Machine   (Cited: 1; self-citations: 0; other citations: 1)
In this paper, we propose an improved learning algorithm named self-adaptive evolutionary extreme learning machine (SaE-ELM) for single hidden layer feedforward networks (SLFNs). In SaE-ELM, the network hidden node parameters are optimized by the self-adaptive differential evolution algorithm, whose trial vector generation strategies and their associated control parameters are self-adapted in a strategy pool by learning from their previous experiences in generating promising solutions, and the network output weights are calculated using the Moore-Penrose generalized inverse. SaE-ELM outperforms the evolutionary extreme learning machine (E-ELM) and the differential evolutionary Levenberg-Marquardt method in general, as it can self-adaptively determine the suitable control parameters and generation strategies involved in DE. Simulations have shown that SaE-ELM not only performs better than E-ELM with several manually chosen generation strategies and control parameters but also obtains better generalization performance than several related methods.

18.
Evolutionary selection extreme learning machine optimization for regression   (Cited: 2; self-citations: 1; other citations: 1)
A neural network regression model can approximate unknown datasets with low error. As an important global regression method, the extreme learning machine (ELM) is a typical learning method for single-hidden-layer feedforward networks, owing to its good generalization performance and fast implementation. The "randomness" of the input weights lets the nonlinear combination reach arbitrary function approximation. In this paper, we seek an alternative mechanism for the input connections, with an idea derived from evolutionary algorithms. After predefining the number L of hidden nodes, we generate original ELM models, and each hidden node is treated as a gene. To rank these hidden nodes, the larger-weight nodes are reassigned to the updated ELM, and L/2 trivial hidden nodes are placed in a candidate reservoir. We then generate L/2 new hidden nodes and combine them with L hidden nodes drawn from this candidate reservoir; another ranking is used to choose among these hidden nodes. Fitness-proportional selection picks L/2 hidden nodes and recombines the evolutionary-selection ELM. The entire algorithm can be applied to large-scale dataset regression. Verification shows that the regression performance is better than traditional ELM and Bayesian ELM at lower cost.

19.
A sequential orthogonal approach to the building and training of a single hidden layer neural network is presented in this paper. The Sequential Learning Neural Network (SLNN) model proposed by Zhang and Morris [1] is used in this paper to tackle the common problem encountered by the conventional Feed Forward Neural Network (FFNN) in determining the network structure, namely the number of hidden layers and the number of hidden neurons in each layer. The procedure starts with a single hidden neuron and sequentially increases the number of hidden neurons until the model error is sufficiently small. The classical Gram-Schmidt orthogonalization method is used at each step to form a set of orthogonal bases for the space spanned by the output vectors of the hidden neurons. In this approach it is possible to determine the necessary number of hidden neurons required. However, for the problems investigated in this paper, one hidden neuron itself is sufficient to achieve the desired accuracy. The neural network architecture has been trained and tested on two practical civil engineering problems: soil classification, and the prediction of strength and workability of high performance concrete.
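The sequential Gram-Schmidt procedure described above can be sketched as follows: candidate hidden-neuron output vectors are added one at a time, each orthogonalized against the span of those already kept, until the residual error is small. The candidates, tolerance, and toy target are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def sequential_orthogonal_fit(X, y, candidate_outputs, tol=1e-3, max_neurons=10):
    """Grow the hidden layer one neuron at a time via Gram-Schmidt,
    stopping once the residual error is below tol."""
    basis, chosen = [], []
    residual = y.astype(float).copy()
    for j, h in enumerate(candidate_outputs.T):
        v = h.astype(float).copy()
        for q in basis:                      # classical Gram-Schmidt step
            v -= (q @ v) * q
        norm = np.linalg.norm(v)
        if norm < 1e-12:
            continue                         # linearly dependent, skip
        q = v / norm
        residual -= (q @ residual) * q       # remove the explained component
        basis.append(q)
        chosen.append(j)
        if np.linalg.norm(residual) < tol or len(chosen) >= max_neurons:
            break
    return chosen, np.linalg.norm(residual)

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 2))
Hc = np.tanh(X @ rng.normal(size=(2, 15)))   # candidate hidden-neuron outputs
y = Hc[:, [1, 4]] @ np.array([0.5, -2.0])    # target lies in the candidates' span
chosen, err = sequential_orthogonal_fit(X, y, Hc)
print(len(chosen), err)
```

This sketch adds candidates in a fixed order; a forward-selection variant would instead pick, at each step, the candidate that most reduces the residual.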

20.
Inverse kinematics solution based on an immune RBF neural network   (Cited: 1; self-citations: 0; other citations: 1)
魏娟  杨恢先  谢海霞 《计算机工程》2010,36(22):192-194
The inverse kinematics of a manipulator can be solved by building an inverse-kinematics model with a neural network and training its weights with a genetic algorithm or the BP algorithm, but the solution accuracy and convergence speed leave room for improvement. Here, artificial immune principles are used to adjust the hidden-layer structure online according to the generalization ability of the RBF network's training data set, generating the RBF hidden layer; once the network structure is fixed, recursive least squares determines the connection weights. The network structure and connection weights are thus adaptively adjusted and learned. Simulations show that the immune-trained neural network converges quickly, generalizes well, and can greatly improve the accuracy of manipulator inverse-kinematics solutions.
