Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
A speaker recognition system combining a genetic algorithm with a BP neural network   (Cited: 2, self-citations: 0, citations by others: 2)
The BP-neural-network-based speaker recognition system is currently one of the main models for speaker recognition, but a BP neural network makes it difficult to determine the number of hidden-layer units and converges slowly. To remedy these drawbacks, a genetic algorithm (GA) based optimization scheme for the speaker recognition BP neural network is proposed. The scheme uses a hybrid-coded GA to optimize both the connection weights and the structure of the network, so redundant nodes and redundant connection weights can be effectively removed from the whole network. By combining the parallelism of the BP neural network with the global search ability of the GA, the scheme significantly improves the processing capability of the network. Experiments show that the BP neural network based on the hybrid-coded GA learns the network weights quickly and achieves a high recognition rate, making it an effective and feasible new approach to speaker recognition.
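As a rough illustration of the hybrid-coding idea above (not the paper's implementation), the sketch below evolves a chromosome that carries both structure genes (a keep/drop mask over hidden units) and real-valued weight genes, scoring each candidate by classification accuracy; the dimensions, stand-in data, fitness measure and GA operators are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, pop_size = 8, 12, 4, 20             # assumed toy dimensions
X = rng.normal(size=(200, n_in))                         # stand-in for speaker features
y = X[:, :n_out].argmax(axis=1)                          # stand-in for speaker labels

n_genes = n_hid + n_in * n_hid + n_hid * n_out

def decode(chrom):
    mask = chrom[:n_hid] > 0.5                                    # structure genes
    w1 = chrom[n_hid:n_hid + n_in * n_hid].reshape(n_in, n_hid)   # input->hidden weights
    w2 = chrom[n_hid + n_in * n_hid:].reshape(n_hid, n_out)       # hidden->output weights
    return mask, w1, w2

def fitness(chrom):
    mask, w1, w2 = decode(chrom)
    if not mask.any():                       # a network with no hidden units is useless
        return 0.0
    h = np.tanh(X @ w1[:, mask])             # only the retained hidden units are used
    pred = (h @ w2[mask]).argmax(axis=1)
    return float((pred == y).mean())

population = rng.normal(size=(pop_size, n_genes))
for generation in range(30):
    scores = np.array([fitness(c) for c in population])
    parents = population[scores.argsort()[-pop_size // 2:]]       # keep the best half
    children = parents + 0.1 * rng.normal(size=parents.shape)     # Gaussian mutation
    population = np.vstack([parents, children])

best = max(population, key=fitness)
print("best toy accuracy:", fitness(best))
```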

2.
FERNN: An Algorithm for Fast Extraction of Rules from Neural Networks   (Cited: 4, self-citations: 0, citations by others: 4)
Before symbolic rules are extracted from a trained neural network, the network is usually pruned so as to obtain more concise rules. Typical pruning algorithms require retraining the network, which incurs additional cost. This paper presents FERNN, a fast method for extracting rules from trained neural networks without network retraining. Given a fully connected trained feedforward network with a single hidden layer, FERNN first identifies the relevant hidden units by computing their information gains. For each relevant hidden unit, its activation values are divided into two subintervals such that the information gain is maximized. FERNN finds the set of relevant network connections from the input units to this hidden unit by checking the magnitudes of their weights; connections with large weights are identified as relevant. Finally, FERNN generates rules that distinguish the two subintervals of the hidden activation values in terms of the network inputs. Experimental results show that the size and predictive accuracy of the generated tree are comparable to those obtained by another method that prunes and retrains the network.
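The core FERNN steps described above, splitting a hidden unit's activation range at the threshold of maximal information gain and keeping only large-weight input connections, can be sketched as follows; the data, the weights of the single hidden unit and the "large weight" cutoff are illustrative assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))                    # toy inputs
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # toy binary class labels
w = rng.normal(size=5)                           # assumed weights into one hidden unit
act = np.tanh(X @ w)                             # that unit's activation values

def entropy(labels):
    p = np.bincount(labels, minlength=2) / len(labels)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def best_split(act, y):
    """Threshold on the activations that maximizes information gain."""
    base, best_gain, best_t = entropy(y), -1.0, None
    for t in np.unique(act):
        left, right = y[act <= t], y[act > t]
        if len(left) == 0 or len(right) == 0:
            continue
        gain = base - (len(left) * entropy(left) + len(right) * entropy(right)) / len(y)
        if gain > best_gain:
            best_gain, best_t = gain, t
    return best_t, best_gain

t, gain = best_split(act, y)
relevant_inputs = np.flatnonzero(np.abs(w) > np.abs(w).mean())   # "large weight" check (assumed cutoff)
print(f"split activations at {t:.3f} (gain {gain:.3f}); relevant inputs: {relevant_inputs}")
```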

3.
In this paper, a hybrid method is proposed to control a nonlinear dynamic system using a feedforward neural network. The learning procedure applies a different learning algorithm to each part of the network: the weights connecting the input and hidden layers are first adjusted by a self-organized learning procedure, whereas the weights between the hidden and output layers are trained by a supervised learning algorithm such as gradient descent. A comparison with backpropagation (BP) shows that the new algorithm can considerably reduce network training time.
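A minimal sketch of the two-stage training idea, under the assumption that a crude k-means-style step stands in for the self-organized first stage and plain gradient descent trains only the hidden-to-output weights; the RBF-like hidden layer and the toy sine target are illustrative choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(400, 1))
y = np.sin(3 * X[:, 0])                          # toy nonlinear target

# Stage 1: self-organized placement of hidden-unit centres (crude k-means stand-in).
k = 10
centers = X[rng.choice(len(X), k, replace=False)]
for _ in range(20):
    assign = np.abs(X - centers.T).argmin(axis=1)
    for j in range(k):
        if (assign == j).any():
            centers[j] = X[assign == j].mean(axis=0)

H = np.exp(-((X - centers.T) ** 2) / 0.1)        # hidden activations (RBF-like, assumed width)

# Stage 2: supervised gradient descent on the hidden-to-output weights only.
w = np.zeros(k)
for _ in range(500):
    err = H @ w - y
    w -= 0.01 * H.T @ err / len(X)
print("training MSE:", float(np.mean((H @ w - y) ** 2)))
```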

4.
This paper proposes a novel model that evolves partially connected neural networks (EPCNNs) to predict the stock price trend using technical indicators as inputs. The proposed architecture provides several features that differ from conventional artificial neural networks: (1) connections between neurons are random; (2) there can be more than one hidden layer; (3) an evolutionary algorithm is employed to improve the learning algorithm and train the weights. To improve the expressive ability of the network, EPCNN uses random connections between neurons and additional hidden layers to learn the knowledge stored in historic time series data, and the genetically evolved weights mitigate the well-known limitations of the gradient descent algorithm. In addition, the activation function is defined as sin(x) instead of the sigmoid function. Three experiments were conducted. In the first, the predicted values of the trained EPCNN model were compared with the actual values to evaluate prediction accuracy. The second studied the overfitting problem that occurs in neural network training by varying the number of neurons and layers. The third compared the performance of the proposed EPCNN model with other models such as BPN, the TSK fuzzy system and multiple regression analysis, and showed that EPCNN provides very accurate predictions of the stock price index for most of the data. It is therefore a promising tool for forecasting financial time series data.
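To make the "random connections plus sin(x) activation" description concrete, here is a hedged forward-pass sketch with a random connectivity mask per layer; the layer sizes, connection probability and linear output layer are assumptions, and the weights are random placeholders where EPCNN would use evolved values.

```python
import numpy as np

rng = np.random.default_rng(3)
sizes = [6, 10, 8, 1]                            # technical indicators in, price trend out (assumed)
masks = [rng.random((a, b)) < 0.6                # random connectivity between layers
         for a, b in zip(sizes[:-1], sizes[1:])]
weights = [rng.normal(size=m.shape) * m for m in masks]   # absent connections stay exactly zero

def forward(x):
    h = x
    for w in weights[:-1]:
        h = np.sin(h @ w)                        # sin(x) activation instead of sigmoid
    return h @ weights[-1]                       # linear output layer (assumption)

print(forward(rng.normal(size=(4, 6))).ravel())  # four toy samples through the partial network
```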

5.
A method for extracting rules from neural networks and its application   (Cited: 10, self-citations: 1, citations by others: 9)
A two-stage method consisting of preprocessing and rule extraction is proposed for extracting rules from neural networks. The preprocessing stage comprises three parts: dynamic modification, clustering and pruning. Dynamic modification automatically generates, or constructs from an initial rule set, a preliminary fully connected or partially connected network topology; clustering and pruning remove unimportant or redundant hidden nodes and connections respectively, yielding the most concise and compact topology, which serves as the basis for rule extraction. A rule extraction algorithm is then proposed and applied to the pruned network. The method was applied to meteorological cloud-image data from US AD reports, a rule set was extracted, and after testing…

6.
The Extreme Learning Machine (ELM) is a new single-hidden-layer feedforward neural network algorithm. It overcomes the drawbacks of traditional error back-propagation methods, which require many iterations and involve heavy computation and a large search space: one only needs to set a suitable number of hidden-layer nodes and randomly assign the input weights and hidden-layer biases, and training is completed in a single pass without iteration. Research shows that the stock market is a highly complex nonlinear system whose modelling draws on artificial intelligence, statistics and economics. This paper applies the extreme learning machine to stock price prediction and, through comparisons with the Support Vector Machine (SVM) and the back-propagation neural network (BP neural network), analyses the feasibility and advantages of ELM in stock price prediction. The results show that ELM achieves high prediction accuracy and has clear advantages in parameter selection and training speed.
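The single-pass training described above (random input weights and biases, output weights from the Moore-Penrose pseudo-inverse) can be sketched as below; the synthetic price series, lag length and hidden-layer size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
prices = np.cumsum(rng.normal(size=300)) + 100           # synthetic "stock price" series
lag = 5
X = np.array([prices[i:i + lag] for i in range(len(prices) - lag)])
y = prices[lag:]                                          # next-step price as target

n_hidden = 50                                             # the only parameter to choose
W = rng.normal(size=(lag, n_hidden))                      # random input weights, never trained
b = rng.normal(size=n_hidden)                             # random hidden biases
H = np.tanh(X @ W + b)                                    # hidden-layer output matrix
beta = np.linalg.pinv(H) @ y                              # output weights in one shot (no iteration)

pred = H @ beta
print("in-sample RMSE:", float(np.sqrt(np.mean((pred - y) ** 2))))
```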

7.
Feedforward neural networks are the most commonly used function approximation technique in neural networks. By the universal approximation theorem, a single-hidden-layer feedforward neural network (FNN) is sufficient to approximate the desired outputs arbitrarily closely. Some researchers use genetic algorithms (GAs) to explore the globally optimal FNN structure, but using a GA to train an FNN is rather time-consuming. In this paper, we propose a new optimization algorithm for a single-hidden-layer FNN. The method is based on a convex combination algorithm for massaging information in the hidden layer; in effect, this technique explores a continuum idea that combines the classic mutation and crossover strategies of a GA. The proposed method has an advantage over a GA, which requires considerable preprocessing work to break the data down into a sequence of binary codes before learning or mutation can be applied. We also set up a new error function to measure the performance of the FNN and obtain the optimal choice of the connection weights, so the nonlinear optimization problem can be solved directly. Several computational experiments illustrate that the proposed algorithm has good exploration and exploitation capabilities when searching for the optimal weights of single-hidden-layer FNNs.

8.
Yong Binbin, Huang Liang, Li Fucun, Shen Jun, Wang Xin, Zhou Qingguo 《The Journal of supercomputing》2020,76(8):6330-6343

In this paper, we apply the Monte Carlo neural network (MCNN), a type of neural network optimized by a Monte Carlo algorithm, to electricity load forecasting. Deep MCNNs with one, two and three hidden layers are designed. Results demonstrate that, compared with a traditional neural network, the three-layer MCNN improves accuracy by 70.35% on a 7-week electricity load forecast and the five-layer MCNN improves accuracy by 17.24%. This shows that MCNN has great potential in electricity load forecasting.


9.
This study addresses the design and training of a Multi-Layer Perceptron classifier for identification of wood veneer defects from statistical features of wood sub-images. Previous research utilised a neural network structure manually optimised using the Taguchi method, with the connection weights trained using the backpropagation rule. The proposed approach uses the evolutionary Artificial Neural Network Generation and Training (ANNGaT) algorithm to generate the neural network system. The algorithm evolves the neural network topology and the weights simultaneously. ANNGaT optimises the size of the hidden layer(s) of the neural network structure through genetic mutations of the individuals, with the number of hidden layers as a system parameter. Experimental tests show that ANNGaT produces highly compact neural network structures capable of accurate and robust learning, and show no difference in accuracy between architectures using one and two hidden layers of processing units. Compared to the manual approach, the evolutionary algorithm generates equally performing solutions using considerably smaller architectures, and it requires a lower design effort since the process is fully automated.

10.
罗庚合 《计算机应用》2013,33(7):1942-1945
To address the fact that the extreme learning machine (ELM) algorithm chooses its input-layer weights at random, and drawing on the clustering idea of the type-2 extension neural network (ENN-2), an ELM neural network based on extension clustering (EC-ELM) is proposed. The network uses the radial-basis centre vectors of the hidden neurons as the input-layer weights, applies an extension clustering algorithm to dynamically adjust the number of hidden nodes and the radial-basis centres, and then, given the determined input-layer weights, quickly solves for the output-layer weights via the Moore-Penrose generalized inverse. Tests on the standard Friedman#1 regression data set and the Wine classification data set show that EC-ELM provides a simple way to learn both the network structure and its parameters, and achieves higher modelling accuracy and faster learning than the extension-theory-based radial basis function network (ERBF) and the ELM neural network, offering a new approach to modelling complex processes.

11.
Against the background of the commercial deployment of 5G mobile communication systems, designing accurate and efficient channel estimation methods is important for optimizing wireless network performance. Based on an improved GA-Elman algorithm, a new intelligent wireless propagation loss prediction method is proposed. The connection weights, thresholds and hidden neurons of the Elman neural network are encoded as real numbers, binary control genes are added to the hidden-neuron encoding, and an adaptive genetic algorithm is used to optimize the weights, thresholds and the number of hidden neurons. This addresses the network's tendency to fall into local minima and the difficulty of determining the number of neurons, thereby improving prediction performance. Simulation results show that the method achieves higher prediction accuracy than both a GA-Elman neural network that optimizes only the connection weights and thresholds and the standard Elman neural network.
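A sketch of the mixed encoding described above, in which real-valued genes hold the weights and binary control genes switch hidden neurons on or off; the network sizes are assumptions and the adaptive GA loop itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(5)
n_in, n_hid_max, n_out = 4, 8, 1                 # assumed maximum network size

def decode(chrom):
    ctrl = chrom[:n_hid_max] > 0.5               # binary control genes: neuron on/off
    off = n_hid_max
    w_in = chrom[off:off + n_in * n_hid_max].reshape(n_in, n_hid_max)
    off += n_in * n_hid_max
    w_ctx = chrom[off:off + n_hid_max * n_hid_max].reshape(n_hid_max, n_hid_max)  # context (recurrent) weights
    off += n_hid_max * n_hid_max
    w_out = chrom[off:].reshape(n_hid_max, n_out)
    keep = np.flatnonzero(ctrl)                  # only switched-on hidden neurons survive
    return w_in[:, keep], w_ctx[np.ix_(keep, keep)], w_out[keep]

n_genes = n_hid_max + n_in * n_hid_max + n_hid_max ** 2 + n_hid_max * n_out
w_in, w_ctx, w_out = decode(rng.random(n_genes))
print("active hidden neurons:", w_in.shape[1])
```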

12.
Nonlinear system identification using optimized dynamic neural network   (Cited: 1, self-citations: 0, citations by others: 1)
W.F.  Y.Q.  Z.Y.  Y.K.   《Neurocomputing》2009,72(13-15):3277
In this paper, both off-line architecture optimization and on-line adaptation are developed for a dynamic neural network (DNN) in nonlinear system identification. In the off-line architecture optimization, a new and effective encoding scheme, the Direct Matrix Mapping Encoding (DMME) method, is proposed to represent the structure of a neural network through connection matrices. A series of GA operations are applied to the connection matrices to find the optimal number of neurons in each hidden layer and the interconnections between neighboring layers of the DNN. Hybrid training, which combines the GA with modified adaptation laws, is adopted to evolve the architecture and to tune the weights and input delays of the DNN. The modified adaptation laws are subsequently used to tune the input time delays, weights and linear parameters of the optimized DNN-based model in on-line nonlinear system identification. The effectiveness of the architecture optimization and adaptation is extensively tested on two nonlinear system identification examples.

13.
Echo state networks (ESNs) whose output weights are computed by least squares suffer from slow convergence and unstable prediction accuracy when the input weights and hidden-neuron thresholds are chosen at random. To address this, an algorithm that optimizes the echo state network with ant colony optimization (ACO-ESN) is proposed. The algorithm recasts the choice of the ESN's initial input weights and hidden-neuron thresholds as the problem of ants searching for the best path in the ant colony algorithm, while the output weights are still computed by least squares. The ESN is trained through the update, mutation and inheritance operations of the ant colony algorithm, and the input weights and thresholds that minimize the ESN's prediction error are selected, thereby improving its prediction performance. Simulation comparisons of ACO-ESN with ELM, I-ELM, OS-ELM, B-ELM and other neural networks confirm that the ACO-optimized echo state network converges faster, predicts better and increases the sensitivity of the hidden neurons.
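For context, the least-squares readout that the abstract keeps fixed can be sketched with a plain echo state network as below; the reservoir size, spectral radius and toy signal are assumptions, and the ant-colony search over input weights and thresholds is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)
u = np.sin(np.linspace(0, 20 * np.pi, 1000))              # toy input signal
y = np.roll(u, -1)                                        # target: next value of the signal

n_res = 100
W_in = rng.uniform(-0.5, 0.5, size=n_res)                 # input weights (the part ACO would tune)
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))           # keep spectral radius below 1

x, states = np.zeros(n_res), []
for t in range(len(u)):
    x = np.tanh(W_in * u[t] + W @ x)                      # reservoir update
    states.append(x.copy())
S = np.array(states)[100:]                                # discard the washout period
W_out, *_ = np.linalg.lstsq(S, y[100:], rcond=None)       # least-squares readout
print("readout RMSE:", float(np.sqrt(np.mean((S @ W_out - y[100:]) ** 2))))
```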

14.
To address the BP neural network's tendency to get trapped in local minima and its slow convergence, a method combining quantum-behaved particle swarm optimization (QPSO) with the BP neural network is introduced, exploiting both the strong flexibility of the BP network and the strong global search ability of QPSO. By improving the way QPSO computes its mean best position, the method predicts oil-field energy-saving indicators based on the BP neural network and QPSO. Using the unit energy consumption data of water-injection pump units at an oil production plant in Daqing as training data, the prediction results show that the method achieves good accuracy and is feasible.
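The mean-best-position update that the abstract says is modified is, in textbook QPSO, the following; the sketch runs it on a toy objective standing in for the prediction error of the BP network, and the contraction-expansion schedule is an assumed default rather than the paper's improved formulation.

```python
import numpy as np

rng = np.random.default_rng(7)

def objective(x):                                # toy stand-in for the network's prediction error
    return np.sum(x ** 2, axis=-1)

n_part, dim, iters = 20, 5, 100
X = rng.uniform(-5, 5, size=(n_part, dim))
pbest = X.copy()
gbest = pbest[objective(pbest).argmin()]

for it in range(iters):
    beta = 1.0 - 0.5 * it / iters                # contraction-expansion coefficient (assumed schedule)
    mbest = pbest.mean(axis=0)                   # mean best position of the swarm
    phi = rng.random((n_part, dim))
    p = phi * pbest + (1 - phi) * gbest          # stochastic local attractor
    u = rng.random((n_part, dim)) + 1e-12
    sign = np.where(rng.random((n_part, dim)) < 0.5, -1.0, 1.0)
    X = p + sign * beta * np.abs(mbest - X) * np.log(1.0 / u)
    better = objective(X) < objective(pbest)
    pbest[better] = X[better]
    gbest = pbest[objective(pbest).argmin()]

print("best objective value:", float(objective(gbest)))
```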

15.
The relation between the decision trees generated by a machine learning algorithm and the hidden layers of a neural network is described. A continuous ID3 algorithm is proposed that converts decision trees into hidden layers. The algorithm allows self-generation of a feedforward neural network architecture. In addition, it allows interpretation of the knowledge embedded in the generated connections and weights. A fast simulated annealing strategy, known as Cauchy training, is incorporated into the algorithm to escape from local minima. The performance of the algorithm is analyzed on spiral data.
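One way to picture the conversion of decision trees into hidden layers is that each axis-parallel split becomes a steep-sigmoid neuron; the sketch below shows this mapping for a single hand-written split and illustrates the general idea only, not the continuous ID3 algorithm itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def split_to_neuron(feature, threshold, n_features, sharpness=10.0):
    """Turn the tree test "x[feature] > threshold" into a steep-sigmoid neuron."""
    w = np.zeros(n_features)
    w[feature] = sharpness                 # steep slope approximates the hard split
    b = -sharpness * threshold
    return w, b

w, b = split_to_neuron(feature=2, threshold=0.5, n_features=4)   # hand-written split, not learned
x = np.array([0.1, 0.9, 0.7, 0.3])
print("neuron output:", float(sigmoid(x @ w + b)))               # close to 1 because x[2] > 0.5
```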

16.
Sperduti and Starita proposed a new type of neural network which consists of generalized recursive neurons for classification of structures. In this paper, we propose an entropy-based approach for constructing such neural networks for classification of acyclic structured patterns. Given a classification problem, the architecture, i.e., the number of hidden layers and the number of neurons in each hidden layer, and all the values of the link weights associated with the corresponding neural network are automatically determined. Experimental results have shown that the networks constructed by our method can have a better performance, with respect to network size, learning speed, or recognition accuracy, than the networks obtained by other methods.

17.
In this work we investigate how evolving an artificial neural network (ANN) with a genetic algorithm (GA) improves the reliability and predictability of the network. This strategy is applied to predict the permeability of the Mansuri Bangestan reservoir located in Ahwaz, Iran, utilizing available geophysical well log data. Our methodology utilizes a hybrid genetic algorithm-neural network strategy (GA-ANN). The proposed algorithm combines the local searching ability of the gradient-based back-propagation (BP) strategy with the global searching ability of genetic algorithms. Genetic algorithms are used to decide the initial weights of the gradient descent method so that all the initial weights can be searched intelligently, and the genetic operators and parameters are carefully designed and set to avoid premature convergence and permutation problems. For evaluation purposes, the performance and generalization capabilities of GA-ANN are compared with those of models developed with the common BP technique. The results demonstrate that a carefully designed genetic-algorithm-based neural network outperforms the gradient-descent-based neural network.
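A hedged sketch of the hybrid scheme described above: a GA searches over candidate initial weight vectors and each candidate is scored by the error reached after a few gradient-descent steps. The linear toy model standing in for the permeability network, the population size and the mutation scale are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
X = rng.normal(size=(150, 6))                            # stand-in for well-log features
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=150)  # toy target standing in for permeability

def refine_and_score(w0, steps=50, lr=0.05):
    """Score an initial weight vector by the loss after a short gradient-descent refinement."""
    w = w0.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(X)
    return float(np.mean((X @ w - y) ** 2))

pop = rng.normal(size=(30, 6))                           # candidate initial weight vectors
for gen in range(20):
    scores = np.array([refine_and_score(w) for w in pop])
    parents = pop[scores.argsort()[:10]]                                           # keep the best initializations
    children = parents[rng.integers(0, 10, 20)] + 0.2 * rng.normal(size=(20, 6))   # mutate copies
    pop = np.vstack([parents, children])

print("best MSE after GA-selected initialization:", min(refine_and_score(w) for w in pop))
```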

18.
Chaotic short-term traffic flow prediction with a BP neural network optimized by a genetic algorithm   (Cited: 5, self-citations: 0, citations by others: 5)
To improve the accuracy with which a BP neural network prediction model forecasts chaotic time series, an improved chaotic time series prediction method based on a BP neural network optimized by a genetic algorithm is proposed. The genetic algorithm optimizes the weights and thresholds of the BP neural network, the BP prediction model is then trained to obtain the optimal solution, and the method is validated on several typical chaotic time series and on measured short-term traffic flow series. Simulation results show that the method provides better nonlinear fitting ability and higher prediction accuracy for both the typical chaotic time series and the short-term traffic flow.

19.
A new multilayer incremental neural network (MINN) architecture and its performance in classification of biomedical images are discussed. The MINN consists of an input layer, two hidden layers and an output layer. The first stage, between the input and first hidden layer, consists of perceptrons; the number of perceptrons and their weights are determined by defining a fitness function which is maximized by the genetic algorithm (GA). The second stage involves feature vectors, which are the codewords obtained automatically after learning the first stage. The last stage consists of OR gates which combine the nodes of the second hidden layer representing the same class. The comparative performance results of the MINN and the backpropagation (BP) network indicate that the MINN gives faster learning, a much simpler network and equal or better classification performance.

20.
To address the BP neural network's problems of easily falling into local minima and converging slowly, a genetic algorithm is proposed to optimize the BP neural network for housing price prediction. A BP neural network is used to build the housing price prediction model, and a genetic algorithm optimizes the network's initial weights and thresholds. Housing prices in Guiyang from 1998 to 2011, together with their main influencing factors, are taken as experimental data, and both the traditional BP neural network and the GA-optimized BP neural network are trained and simulated. The results show that, compared with the traditional BP prediction model, the GA-optimized BP neural network model converges faster and predicts housing prices more accurately.
