Similar Literature
20 similar documents retrieved.
1.
Multiuser Detection Based on a Genetic-Algorithm-Optimized Neural Network (cited: 1; self-citations: 0; others: 1)
Exploiting the strong global search capability of genetic algorithms and the fast local search of the backpropagation (BP) algorithm, a two-stage training method is adopted that both avoids local minima and speeds up convergence, and a multiuser detection algorithm based on genetic-algorithm optimization of the neural network weights is proposed. Real-valued encoding is used, the energy function of the conventional neural network serves as the fitness function, selection uses the roulette-wheel operator, crossover uses the single-point operator, and mutation uses the normal (Gaussian) operator. Simulation results show that the algorithm improves on the conventional feedforward neural network multiuser detector in bit error rate, signal-to-interference ratio, and channel-tracking ability.
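The operators named in this abstract are all standard, so the GA stage admits a compact sketch. The following is a minimal Python rendering under assumed hyperparameters, with a placeholder `energy` function standing in for the detector's network energy; in the two-stage scheme, the best chromosome found here would seed BP fine-tuning.

```python
import numpy as np

def energy(weights):
    # Hypothetical stand-in for the detector's network energy function
    return np.sum(weights ** 2)

def ga_optimize(dim, pop_size=50, generations=100, pc=0.8, pm=0.05):
    # Real-valued chromosomes encode the network weights
    pop = np.random.uniform(-1.0, 1.0, (pop_size, dim))
    for _ in range(generations):
        fitness = np.array([-energy(ind) for ind in pop])  # lower energy = fitter
        probs = fitness - fitness.min() + 1e-12
        probs /= probs.sum()
        # Roulette-wheel selection
        idx = np.random.choice(pop_size, size=pop_size, p=probs)
        pop = pop[idx].copy()
        # Single-point crossover on consecutive pairs
        for i in range(0, pop_size - 1, 2):
            if np.random.rand() < pc:
                cut = np.random.randint(1, dim)
                pop[i, cut:], pop[i + 1, cut:] = (
                    pop[i + 1, cut:].copy(), pop[i, cut:].copy())
        # Normal (Gaussian) mutation
        mask = np.random.rand(pop_size, dim) < pm
        pop[mask] += np.random.normal(0.0, 0.1, mask.sum())
    return pop[np.argmax([-energy(ind) for ind in pop])]
```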

2.
Optimizing Feedforward Neural Networks with an Influence-Factor Genetic Algorithm (cited: 1; self-citations: 0; others: 1)
An improved genetic algorithm with influence factors is proposed and applied to optimizing feedforward neural networks. Each gene on a chromosome carries an influence factor whose value reflects how strongly that gene affects the chromosome as a whole. During evolution, genetic operations driven by the influence factors optimize the weights, thresholds, and structure of the feedforward network. Simulation experiments show that the algorithm quickly determines the network structure and effectively improves the network's convergence speed.

3.
An RBF Neural Network Optimization Method Based on a Genetic Algorithm (cited: 19; self-citations: 0; others: 19)
A new training method for RBF neural networks is proposed: a genetic algorithm optimizes the center values and widths of the hidden layer, and recursive least squares trains the weights between the hidden and output layers. Simulations on nonlinear function approximation verify the effectiveness of the algorithm.
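With the GA fixing the hidden layer, the linear output weights can be fit by recursive least squares (RLS). Below is a minimal sketch of that second stage; the Gaussian basis, forgetting factor `lam`, and initialization constant `delta` are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def rbf_features(x, centers, widths):
    # Gaussian hidden-layer activations for one input sample x
    return np.exp(-np.sum((x - centers) ** 2, axis=1) / (2.0 * widths ** 2))

def rls_train(X, y, centers, widths, lam=0.99, delta=1e3):
    n_hidden = len(centers)
    w = np.zeros(n_hidden)                 # hidden-to-output weights
    P = np.eye(n_hidden) * delta           # inverse correlation matrix
    for x_t, y_t in zip(X, y):
        h = rbf_features(x_t, centers, widths)
        k = P @ h / (lam + h @ P @ h)      # gain vector
        e = y_t - w @ h                    # a priori error
        w += k * e                         # weight update
        P = (P - np.outer(k, h @ P)) / lam
    return w
```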

4.
To optimize the control parameters of the individual combustion subsystems in a real multivariable furnace-group combustion system, a genetic single-neuron control algorithm based on an improved fitness function is proposed. The algorithm overcomes the slow convergence of neural-network methods and their tendency to fall into local minima during the solution process. Exploiting the global search property of genetic algorithms and the strong nonlinear approximation capability of neural networks, it combines an improved genetic algorithm with single-neuron control to optimize the parameters of a class of nonlinear systems. Simulation experiments and real-world results verify that the method is feasible.

5.
To prevent the loss of good gene segments after crossover, a linear crossover scheme tied to individual fitness is designed on the basis of random non-uniform linear crossover. A method is then constructed that adaptively adjusts the crossover and mutation rates over the course of evolution, effectively suppressing premature convergence of the genetic algorithm. The improved genetic algorithm is applied to optimizing the structure of a feedforward neural network for function approximation, reducing the likelihood that training falls into a local optimum and improving the network's generalization ability.
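One common form of fitness-adaptive rate scheduling consistent with this description is sketched below; the exact formulas in the paper may differ, and the rate bounds here are assumed values. Fitter individuals receive lower crossover and mutation rates, protecting good gene segments, while below-average individuals are perturbed more aggressively.

```python
def adaptive_rates(f, f_avg, f_max,
                   pc_hi=0.9, pc_lo=0.6, pm_hi=0.1, pm_lo=0.01):
    """Return (crossover rate, mutation rate) for an individual of fitness f."""
    if f_max == f_avg:                     # degenerate population: use high rates
        return pc_hi, pm_hi
    if f >= f_avg:                         # above-average individual: protect it
        scale = (f_max - f) / (f_max - f_avg)
        return (pc_lo + (pc_hi - pc_lo) * scale,
                pm_lo + (pm_hi - pm_lo) * scale)
    return pc_hi, pm_hi                    # below-average: explore aggressively
```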

6.
Application of a Genetic BP Neural Network to Debris-Flow Hazard Assessment (cited: 3; self-citations: 0; others: 3)
Debris-flow hazard assessment is an important research topic in geological disaster prediction. Based on debris-flow hazard assessment factors, a genetic BP neural network assessment model is established. The model uses the function-approximation capability of the BP neural network to capture the nonlinear relationship between key assessment indicators and hazard level, thereby assessing debris-flow hazard. To overcome the slow convergence, susceptibility to local minima, and poor generalization of BP networks, a genetic algorithm and comparative analysis are introduced to optimize the weights, thresholds, and structure of the assessment network. Experiments show that the optimized BP network achieves markedly higher fitting precision, accuracy, and efficiency, as well as stronger generalization; the method offers a new way to approach debris-flow hazard assessment.

7.
Simulations comparing support vector machines with feedforward neural networks on nonlinear function approximation show that, for small samples, the SVM depends less on the training sample and outperforms the feedforward network in noise immunity and generalization.

8.
Because the structure of management information systems is increasingly complex and the influencing factors increasingly numerous, studying such systems through human-computer interaction methods has substantial theoretical and practical value. For the human-computer interaction problem in management information systems, a genetic neural network method is proposed: a structured neural network with strong nonlinear mapping capability fits the input-output relationship of the interaction model, and a genetic algorithm with global search capability trains the parameters of the structured network, yielding an optimized network design. The method also extracts knowledge from the genetic algorithm's optimization process and uses that knowledge to guide its subsequent optimization.

9.
Building on an analysis of parallel multi-species genetic algorithms for designing and training neural network topologies, a pseudo-parallel genetic hybrid algorithm (PPGA-MBP) is proposed that combines an improved BP algorithm to optimize the topology of multilayer feedforward neural networks. The encoding is a real-valued hierarchical hybrid scheme that allows two individuals with different network structures to cross over and produce valid offspring. The algorithm is evaluated by simulation on the N-Parity problem, and the influence of the evaluation-function coefficients and population size is analyzed. Experiments show clear optimization gains, improved adaptability and generalization of the network, and fast global convergence.

10.
RFID Modulation Recognition Based on a Genetic Wavelet Neural Network (cited: 1; self-citations: 0; others: 1)
Among modulation recognition methods for radio frequency identification (RFID), the backpropagation algorithm commonly used to train neural networks converges slowly, easily falls into local minima, and forces network parameters to be chosen by experiment and experience. To address these problems, a recognition classifier based on a wavelet neural network optimized by a genetic algorithm is proposed. The classifier draws on the global search capability of the genetic algorithm, the nonlinear approximation power of wavelet analysis, and the self-learning property of neural networks; simulation results show that it improves the system's convergence speed and recognition accuracy.

11.
Most of the cost functions used for blind equalization are nonconvex and nonlinear functions of the tap weights when implemented using linear transversal filter structures. Therefore, a blind equalization scheme with a nonlinear structure that can form nonconvex decision regions is desirable. The efficacy of complex-valued feedforward neural networks for blind equalization of linear and nonlinear communication channels has been confirmed by many studies. In this paper we present a complex-valued neural network for blind equalization with M-ary phase shift keying (PSK) signals. The complex nonlinear activation functions used in the network are defined specifically for handling M-ary PSK signals. A training algorithm based on the constant modulus algorithm (CMA) cost function is derived. The improved performance of the proposed neural network in both stationary and nonstationary environments is confirmed through computer simulations.
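For reference, the CMA cost is J = E[(|y|² − R₂)²], where R₂ = E[|s|⁴]/E[|s|²] for the transmitted constellation. A minimal stochastic-gradient step for a plain linear complex-valued equalizer is sketched below; the paper applies the same cost to a complex-valued neural network instead, and `mu` and `R2` here are illustrative values.

```python
import numpy as np

def cma_step(w, x, R2=1.0, mu=1e-3):
    """One CMA update. w: complex tap weights; x: complex received vector."""
    y = np.vdot(w, x)                      # equalizer output, y = w^H x
    e = y * (np.abs(y) ** 2 - R2)          # CMA error term
    return w - mu * x * np.conj(e)         # gradient step on J = E[(|y|^2 - R2)^2]
```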

12.
Feedforward neural networks are the most commonly used function approximation technique in neural networks. By the universal approximation theorem, a single-hidden-layer feedforward neural network (FNN) is sufficient to approximate the desired outputs arbitrarily closely. Some researchers use genetic algorithms (GAs) to search for the global optimum of the FNN structure, but GA training of an FNN is rather time consuming. In this paper, we propose a new optimization algorithm for a single-hidden-layer FNN based on a convex combination algorithm for massaging information in the hidden layer. This technique explores a continuum idea that merges the classic mutation and crossover strategies of the GA. The proposed method has an advantage over the GA, which requires considerable preprocessing work to break the data down into a sequence of binary codes before learning or mutation can be applied. We also set up a new error function to measure the performance of the FNN and obtain the optimal choice of connection weights, so the nonlinear optimization problem can be solved directly. Several computational experiments illustrate that the proposed algorithm has good exploration and exploitation capabilities in searching for the optimal weights of single-hidden-layer FNNs.

13.
This paper presents a highly effective and precise neural network method for choosing the activation functions (AFs) and tuning the learning parameters (LPs) of a multilayer feedforward neural network using a genetic algorithm (GA). The performance of a neural network depends mainly on the learning algorithm and the network structure. The backpropagation learning algorithm tunes the network connection weights, while the LPs are obtained by the GA to provide fast and reliable learning. The AFs of each neuron in the network are also chosen automatically by the GA, from a pool of 10 different functions, to achieve better convergence of the desired input–output mapping. Test studies solve a set of two-dimensional regression problems with the proposed genetic-based neural network (GNN) and with a conventional neural network having sigmoid AFs and constant learning parameters. The proposed GNN has also been tested on three real problems in the fields of environment, medicine, and economics. The results show that the proposed GNN is more effective and reliable than the classical neural network structure.

14.
Neural networks can be used to develop effective models of nonlinear systems; their main advantage is that they can model the vast majority of nonlinear systems to any arbitrary degree of accuracy. The ability of a neural network to predict the behavior of a nonlinear system accurately ought to improve if there were some mechanism for incorporating first-principles model information into its training. This study proposes to use information obtained from a first-principles model to impart a sense of “direction” to the neural network model estimate. This is accomplished by modifying the objective function to include an additional term: the difference, during training, between the time derivatives of the outputs as predicted by the neural network and those of the first-principles model. The performance of a feedforward neural network trained on this modified objective function is demonstrated on a chaotic process and compared to a conventional feedforward network trained on the usual objective function.
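A hedged sketch of such a modified objective: the usual prediction error plus a penalty on the mismatch between the network's predicted output derivatives and the first-principles model's. The weighting `lam` is an assumed hyperparameter, not taken from the paper.

```python
import numpy as np

def hybrid_loss(y_nn, y_true, dydt_nn, dydt_fp, lam=0.1):
    """Data-fit term plus first-principles 'direction' penalty."""
    data_term = np.mean((y_nn - y_true) ** 2)        # usual prediction error
    prior_term = np.mean((dydt_nn - dydt_fp) ** 2)   # derivative mismatch
    return data_term + lam * prior_term
```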

15.
For the control of nonlinear systems with continuous state spaces, a neural-network Q-learning algorithm based on the residual gradient method is proposed. A multilayer feedforward neural network approximates the Q-value function, and the residual gradient method updates the network parameters so that convergence is guaranteed. An experience replay mechanism enables mini-batch gradient updates of the network parameters, effectively reducing the number of iterations and accelerating learning. Momentum optimization is introduced to further stabilize training. In addition, the Softplus function replaces the common ReLU activation, avoiding the problem that ReLU outputs exactly zero for negative inputs, so some neurons may never be activated and their weights may never be updated. Simulation experiments on the CartPole control task verify the correctness and effectiveness of the proposed algorithm.
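The key mechanical difference from ordinary deep Q-learning is that the residual gradient method differentiates through the TD target as well as the current estimate. A minimal PyTorch sketch of one update follows, assuming a CartPole-sized network and illustrative hyperparameters (the paper's exact settings are not given here).

```python
import torch
import torch.nn as nn

# Q network: 4 state inputs, 2 actions; Softplus instead of ReLU, as described
q_net = nn.Sequential(nn.Linear(4, 64), nn.Softplus(), nn.Linear(64, 2))
opt = torch.optim.SGD(q_net.parameters(), lr=1e-3, momentum=0.9)

def residual_gradient_step(s, a, r, s_next, done, gamma=0.99):
    """One mini-batch Bellman-residual update (s: (B,4), a: (B,1) long)."""
    q_sa = q_net(s).gather(1, a)                       # Q(s, a)
    q_next = q_net(s_next).max(dim=1, keepdim=True).values
    target = r + gamma * (1.0 - done) * q_next         # NOT detached: residual form
    loss = ((target - q_sa) ** 2).mean()               # Bellman residual
    opt.zero_grad()
    loss.backward()                                    # grads flow through target too
    opt.step()
    return loss.item()
```

In the usual semi-gradient rule the target would be computed under `torch.no_grad()` (or detached); leaving it attached is what makes this the residual gradient method with its convergence guarantee.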

16.
This paper proposes a TD (temporal difference) and GA (genetic algorithm)-based reinforcement (TDGAR) learning method and applies it to the control of a real magnetic bearing system. The TDGAR learning scheme is a new hybrid GA, which integrates the TD prediction method and the GA to perform the reinforcement learning task. The TDGAR learning system is composed of two integrated feedforward networks. One neural network acts as a critic network to guide the learning of the other network (the action network) which determines the outputs (actions) of the TDGAR learning system. The action network can be a normal neural network or a neural fuzzy network. Using the TD prediction method, the critic network can predict the external reinforcement signal and provide a more informative internal reinforcement signal to the action network. The action network uses the GA to adapt itself according to the internal reinforcement signal. The key concept of the TDGAR learning scheme is to formulate the internal reinforcement signal as the fitness function for the GA such that the GA can evaluate the candidate solutions (chromosomes) regularly, even during periods without external feedback from the environment. This enables the GA to proceed to new generations regularly without waiting for the arrival of the external reinforcement signal. This can usually accelerate the GA learning since a reinforcement signal may only be available at a time long after a sequence of actions has occurred in the reinforcement learning problem. The proposed TDGAR learning system has been used to control an active magnetic bearing (AMB) system in practice. A systematic design procedure is developed to achieve successful integration of all the subsystems including magnetic suspension, mechanical structure, and controller training. The results show that the TDGAR learning scheme can successfully find a neural controller or a neural fuzzy controller for a self-designed magnetic bearing system.

17.
The problems associated with training feedforward artificial neural networks (ANNs) such as the multilayer perceptron (MLP) network and radial basis function (RBF) network have been well documented. The solutions to these problems have inspired a considerable amount of research, one particular area being the application of evolutionary search algorithms such as the genetic algorithm (GA). To date, the vast majority of GA solutions have been aimed at the MLP network. This paper begins with a brief overview of feedforward ANNs and GAs followed by a review of the current state of research in applying evolutionary techniques to training RBF networks.

18.
A hybrid linear/nonlinear training algorithm for feedforward neural networks (cited: 1; self-citations: 0; others: 1)
This paper presents a new hybrid optimization strategy for training feedforward neural networks. The algorithm combines gradient-based optimization of nonlinear weights with singular value decomposition (SVD) computation of linear weights in one integrated routine. It is described for the multilayer perceptron (MLP) and radial basis function (RBF) networks and then extended to the local model network (LMN), a new feedforward structure in which a global nonlinear model is constructed from a set of locally valid submodels. Simulation results are presented demonstrating the superiority of the new hybrid training scheme compared to second-order gradient methods. It is particularly effective for the LMN architecture where the linear to nonlinear parameter ratio is large.
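A minimal sketch of the linear/nonlinear split for a single-hidden-layer MLP: at each iteration the hidden-to-output weights are re-solved exactly by SVD-based least squares (`np.linalg.lstsq` uses an SVD solver), and only the input-to-hidden weights take a gradient step. The tanh hidden layer, shapes, and learning rate are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def hidden(X, W1):
    # Nonlinear hidden activations; X: (N, d), W1: (d, h)
    return np.tanh(X @ W1)

def hybrid_train_step(X, Y, W1, lr=1e-2):
    H = hidden(X, W1)
    W2, *_ = np.linalg.lstsq(H, Y, rcond=None)   # SVD-based linear solve for W2
    E = H @ W2 - Y                               # residual under the optimal W2
    # Gradient step on the nonlinear weights only, backpropagated through tanh
    dH = (E @ W2.T) * (1.0 - H ** 2)
    W1 -= lr * X.T @ dH / len(X)
    return W1, W2
```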

19.
Recurrent neural network training with feedforward complexity (cited: 1; self-citations: 0; others: 1)
This paper presents a training method of no more than feedforward complexity for fully recurrent networks. The method is not approximate; rather, it depends on an exact transformation that reveals an embedded feedforward structure in every recurrent network. Given any unambiguous training data set, such as samples of the state variables and their derivatives, only this embedded feedforward structure needs to be trained. The necessary recurrent network parameters are then obtained by an inverse transformation consisting only of linear operators. As an example of modeling a representative nonlinear dynamical system, the method is applied to learn Bessel's differential equation, thereby generating Bessel functions within, as well as outside, the training set.

20.
Here, the formation of continuous attractor dynamics in a nonlinear recurrent neural network is used to build a nonlinear speech denoising method for robust phoneme recognition and information retrieval. Attractor dynamics are first formed in the recurrent neural network by training the clean-speech subspace as the continuous attractor; the network is then used to recognize noisy speech under both stationary and nonstationary noise. In this work, the efficiency of a nonlinear feedforward network is compared to the same network with a recurrent connection in its hidden layer. The structure and training of this recurrent connection are designed so that the network learns to denoise the signal step by step, using properties of the attractors it has formed, alongside phone recognition. With these connections, recognition accuracy improves by 21% for the stationary signal and 14% for the nonstationary one at 0 dB SNR, with respect to a reference model, which is a feedforward neural network.
