20 similar articles retrieved (search time: 203 ms)
6.
Debris-flow hazard assessment is an important research topic in geological disaster prediction. Based on a set of debris-flow hazard evaluation factors, a genetic BP neural network assessment model is established. The model exploits the function-approximation capability of a BP neural network to capture the nonlinear relationship between the principal evaluation indices of a debris flow and its degree of hazard, thereby assessing debris-flow hazard. To overcome the slow convergence of BP networks, their tendency to become trapped in local minima, and their poor generalization, a genetic algorithm and comparative analysis are introduced to optimize the weights, thresholds, and structure of the BP assessment network. Experiments show that the BP network optimized in this way achieves markedly better fitting precision, accuracy, and efficiency, as well as stronger generalization, so the method offers a new approach to debris-flow hazard assessment.
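The GA-then-BP pipeline this entry describes can be sketched as follows. Everything concrete here is an assumption for illustration, not the paper's setup: the four "evaluation factors" are random toy data, and the network size, GA settings, and learning rate are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for hazard-assessment data: 4 evaluation factors -> hazard score.
X = rng.uniform(size=(40, 4))
y = (X @ np.array([0.4, 0.3, 0.2, 0.1]))[:, None]  # hypothetical target

H = 6                          # hidden units (assumed)
n_w = 4 * H + H + H * 1 + 1    # total weights and biases

def unpack(v):
    i = 0
    W1 = v[i:i + 4 * H].reshape(4, H); i += 4 * H
    b1 = v[i:i + H]; i += H
    W2 = v[i:i + H].reshape(H, 1); i += H
    b2 = v[i:]
    return W1, b1, W2, b2

def forward(v, X):
    W1, b1, W2, b2 = unpack(v)
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

def mse(v):
    return float(np.mean((forward(v, X) - y) ** 2))

# --- Genetic algorithm over real-valued weight vectors ---
pop = rng.normal(scale=0.5, size=(30, n_w))
for gen in range(40):
    fit = np.array([mse(p) for p in pop])
    parents = pop[np.argsort(fit)[:10]]            # truncation selection
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(10, size=2)]
        alpha = rng.uniform(size=n_w)              # arithmetic crossover
        child = alpha * a + (1 - alpha) * b
        child += rng.normal(scale=0.05, size=n_w)  # Gaussian mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmin([mse(p) for p in pop])].copy()

# --- BP refinement: plain gradient descent from the GA solution ---
lr = 0.1
for _ in range(200):
    W1, b1, W2, b2 = unpack(best)
    h = np.tanh(X @ W1 + b1)
    err = (h @ W2 + b2) - y
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)               # backprop through tanh
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(0)
    best -= lr * np.concatenate([gW1.ravel(), gb1, gW2.ravel(), gb2])
```

The GA explores the weight space globally, then BP refines the best individual locally; optimizing the network *structure* as well, as the entry describes, would add the topology to the chromosome.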
8.
As management information systems grow ever more complex in structure and are affected by ever more factors, studying them from a human-computer interaction perspective has considerable theoretical and practical value. For the human-computer interaction problem in management information systems, a genetic neural network method is proposed: a structured neural network with strong nonlinear mapping capability is used to fit the input-output relationship of the human-computer interaction model, and a genetic algorithm with global search capability is used to train the parameters of the structured network and to optimize the network's design. The method also extracts knowledge from the genetic algorithm's optimization process and uses that knowledge to guide its subsequent optimization.
9.
After analyzing the application of parallel multi-species genetic algorithms to the design and learning of neural network topologies, a pseudo-parallel genetic hybrid algorithm (PPGA-MBP) is proposed, combined with a modified BP algorithm, to optimize the topology of multilayer feedforward neural networks. The encoding uses a hierarchical real-valued scheme that allows two network individuals with different structures to be crossed over to produce valid offspring. The algorithm is evaluated in simulations on the N-Parity problem, and the influence of the coefficients of each part of the fitness function and of the population size is analyzed. Experiments show clear optimization gains, improved adaptability and generalization of the resulting networks, and fast global convergence.
11.
Rajoo Pandey 《Neural computing & applications》2005,14(4):290-298
Most of the cost functions used for blind equalization are nonconvex and nonlinear functions of the tap weights when implemented using linear transversal filter structures. Therefore, a blind equalization scheme with a nonlinear structure that can form nonconvex decision regions is desirable. The efficacy of complex-valued feedforward neural networks for blind equalization of linear and nonlinear communication channels has been confirmed by many studies. In this paper we present a complex-valued neural network for blind equalization with M-ary phase shift keying (PSK) signals. The complex nonlinear activation functions used in the neural network are defined especially for handling M-ary PSK signals. A training algorithm based on the constant modulus algorithm (CMA) cost function is derived. The improved performance of the proposed neural network in both stationary and nonstationary environments is confirmed through computer simulations.
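The CMA cost function mentioned above, J = E[(|y|² − R₂)²], has a simple stochastic-gradient form even for a plain linear equalizer. A hedged sketch follows; the channel taps, equalizer length, and step size are all assumed, and the paper's complex-valued neural network is replaced here by a single linear filter for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

# QPSK source symbols (unit modulus, so the CMA constant R2 = 1).
N = 5000
sym = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(4, size=N)))

# Hypothetical linear channel with mild intersymbol interference (assumed).
h = np.array([1.0, 0.35 + 0.2j, 0.1 - 0.05j])
x = np.convolve(sym, h)[:N]

L = 7                    # equalizer length (assumed)
w = np.zeros(L, dtype=complex)
w[L // 2] = 1.0          # center-tap initialization
mu = 1e-3
R2 = 1.0

cm_err = []
for n in range(L, N):
    xv = x[n - L:n][::-1]               # tap-delay-line input vector
    yv = np.vdot(w, xv)                 # equalizer output y = w^H x
    e = abs(yv) ** 2 - R2               # constant-modulus error
    w = w - mu * e * np.conj(yv) * xv   # stochastic-gradient CMA step
    cm_err.append(e ** 2)

early = float(np.mean(cm_err[:500]))    # CM cost at the start...
late = float(np.mean(cm_err[-500:]))    # ...and after adaptation
```

No training symbols are used anywhere: the update relies only on the known constant modulus of PSK, which is what makes the scheme "blind".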
12.
Feedforward neural networks are the most commonly used function approximation techniques in neural networks. By the universal approximation theorem, a single-hidden-layer feedforward neural network (FNN) is sufficient to approximate the corresponding desired outputs arbitrarily closely. Some researchers use genetic algorithms (GAs) to explore the global optimal solution of the FNN structure. However, it is rather time consuming to use a GA for the training of an FNN. In this paper, we propose a new optimization algorithm for a single-hidden-layer FNN. The method is based on a convex combination algorithm for massaging information in the hidden layer. In fact, this technique explores a continuum idea which combines the classic mutation and crossover strategies of the GA. The proposed method has an advantage over the GA, which requires a great deal of preprocessing work to break the data down into a sequence of binary codes before learning or mutation can be applied. We also set up a new error function to measure the performance of the FNN and obtain the optimal choice of the connection weights, so the nonlinear optimization problem can be solved directly. Several computational experiments illustrate that the proposed algorithm has good exploration and exploitation capabilities in the search for optimal weights of single-hidden-layer FNNs.
13.
This paper presents a highly effective and precise neural network method for choosing the activation functions (AFs) and tuning the learning parameters (LPs) of a multilayer feedforward neural network by using a genetic algorithm (GA). The performance of the neural network mainly depends on the learning algorithms and the network structure. The backpropagation learning algorithm is used for tuning the network connection weights, and the LPs are obtained by the GA to provide both fast and reliable learning. Also, the AFs of each neuron in the network are automatically chosen by a GA. The present study consists of 10 different functions to accomplish a better convergence of the desired input–output mapping. Test studies are performed to solve a set of two-dimensional regression problems for the proposed genetic-based neural network (GNN) and conventional neural network having sigmoid AFs and constant learning parameters. The proposed GNN has also been tested by applying it to three real problems in the fields of environment, medicine, and economics. Obtained results prove that the proposed GNN is more effective and reliable when compared with the classical neural network structure.
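The idea of letting a GA pick per-neuron activation functions can be sketched with an integer-coded chromosome. The activation pool, fitness measure, and GA settings below are assumptions: the paper uses ten candidate functions and backpropagation-trained weights, whereas this toy fitness just solves the linear output layer over a few random hidden layers.

```python
import numpy as np

rng = np.random.default_rng(3)

# Pool of candidate activation functions (four shown here; assumed).
ACTIVATIONS = [np.tanh, np.sin,
               lambda z: 1 / (1 + np.exp(-z)),   # sigmoid
               lambda z: np.maximum(z, 0)]       # ReLU

X = np.linspace(-2, 2, 60)[:, None]
y = np.sin(2 * X).ravel()                        # toy regression target

H = 5  # hidden neurons

def fitness(genes, trials=3):
    """Lower is better. genes[i] selects the activation of hidden neuron i;
    weights here are random-then-least-squares rather than backprop-trained,
    a deliberate simplification."""
    best = np.inf
    for _ in range(trials):
        W1 = rng.normal(size=(1, H)); b1 = rng.normal(size=H)
        Hout = np.column_stack(
            [ACTIVATIONS[g](X @ W1[:, [i]] + b1[i]).ravel()
             for i, g in enumerate(genes)])
        A = np.hstack([Hout, np.ones((len(X), 1))])
        w, *_ = np.linalg.lstsq(A, y, rcond=None)  # linear output layer
        best = min(best, float(np.mean((A @ w - y) ** 2)))
    return best

# Integer-coded GA over activation assignments.
pop = rng.integers(len(ACTIVATIONS), size=(12, H))
for _ in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[:4]]
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(4, size=2)]
        cut = rng.integers(1, H)
        child = np.concatenate([a[:cut], b[cut:]])     # one-point crossover
        if rng.random() < 0.3:                         # mutation: reassign one AF
            child[rng.integers(H)] = rng.integers(len(ACTIVATIONS))
        children.append(child)
    pop = np.vstack([parents, children])

best_genes = pop[np.argmin([fitness(ind) for ind in pop])]
```

Learning parameters such as the step size could be appended to the same chromosome as extra (real-valued) genes, which is how the GNN tunes LPs and AFs jointly.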
14.
Neural networks can be used to develop effective models of nonlinear systems, their main advantage being that they can model the vast majority of nonlinear systems to any arbitrary degree of accuracy. The ability of a neural network to predict the behavior of a nonlinear system accurately ought to improve if there were some mechanism for incorporating first-principles model information into its training. This study proposes to use information obtained from a first-principles model to impart a sense of "direction" to the neural network model estimate. This is accomplished by modifying the objective function to include an additional term: the difference between the time derivative of the outputs, as predicted by the neural network, and that of the outputs of the first-principles model during the training phase. The performance of a feedforward neural network model that uses this modified objective function is demonstrated on a chaotic process and compared to a conventional feedforward network trained on the usual objective function.
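The modified objective — the usual output error plus a penalty on the mismatch between the predicted and first-principles time derivatives — might look like the sketch below. The trajectory, the weighting `lam`, and the finite-difference derivatives are all assumptions for illustration.

```python
import numpy as np

def hybrid_objective(y_pred, y_data, y_fp, dt, lam=0.5):
    """Output error plus a derivative-mismatch penalty against a
    first-principles model output y_fp (lam is an assumed weighting)."""
    data_term = np.mean((y_pred - y_data) ** 2)
    d_pred = np.gradient(y_pred, dt)   # finite-difference time derivative
    d_fp = np.gradient(y_fp, dt)
    deriv_term = np.mean((d_pred - d_fp) ** 2)
    return data_term + lam * deriv_term

t = np.linspace(0, 2 * np.pi, 200)
dt = t[1] - t[0]
y_data = np.sin(t)            # measured trajectory (toy stand-in)
y_fp = np.sin(t) + 0.01       # first-principles model, slightly biased

good = hybrid_objective(np.sin(t), y_data, y_fp, dt)   # matching prediction
bad = hybrid_objective(np.cos(t), y_data, y_fp, dt)    # wrong prediction
```

Note that the derivative term tolerates a constant bias in the first-principles model (as in `y_fp` above): only the *direction* of the trajectory is transferred to the network, which is the point of the scheme.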
15.
For the control of nonlinear systems with continuous state spaces, a neural-network Q-learning algorithm based on the residual gradient method is proposed. The algorithm approximates the Q-value function with a multilayer feedforward neural network and updates the network parameters by the residual gradient method to guarantee convergence. An experience replay mechanism provides mini-batch gradient updates of the network parameters, effectively reducing the number of iterations and accelerating learning. Momentum optimization is introduced to further stabilize training. In addition, the Softplus function replaces the usual ReLU activation, avoiding the problem that ReLU is identically zero for negative inputs, so that some neurons may never be activated and their corresponding weights may never be updated. Simulations on the CartPole control task verify the correctness and effectiveness of the proposed algorithm.
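Two ingredients of this entry, the Softplus activation and the residual-gradient update, can be illustrated on a deliberately tiny example: a linear Q function on a two-state chain MDP. The MDP, features, and step size are assumptions; the paper's multilayer network, replay buffer, and momentum are omitted.

```python
import numpy as np

def softplus(x):
    # Smooth ReLU substitute: log(1 + e^x). Its gradient, sigmoid(x), is
    # nonzero everywhere, so no unit can get permanently stuck at zero.
    return np.log1p(np.exp(x))

# Toy chain MDP (assumed): state 0 -> state 1 (reward 0),
# state 1 -> state 1 (reward 1). True values: Q(1) = 10, Q(0) = 9.
gamma = 0.9
phi = np.eye(2)            # one-hot state features
theta = np.zeros(2)        # linear approximator: Q(s) = theta . phi(s)
alpha = 0.5

for _ in range(5000):
    for s, s2, r in [(0, 1, 0.0), (1, 1, 1.0)]:
        q, q2 = theta @ phi[s], theta @ phi[s2]
        delta = r + gamma * q2 - q            # Bellman error
        # Residual gradient: differentiate the squared Bellman error with
        # respect to BOTH Q(s) and Q(s'), unlike semi-gradient TD, which
        # treats the bootstrap target as a constant.
        grad = gamma * phi[s2] - phi[s]
        theta -= alpha * delta * grad
```

With deterministic transitions, as here, the residual-gradient update descends the true mean squared Bellman error and converges to the exact values; with stochastic transitions it needs double sampling, which is one reason it is usually paired with replay and momentum as in the entry above.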
16.
Chin-Teng Lin Chong-Ping Jou 《IEEE transactions on systems, man, and cybernetics. Part B, Cybernetics》2000,30(2):276-289
This paper proposes a TD (temporal difference) and GA (genetic algorithm)-based reinforcement (TDGAR) learning method and applies it to the control of a real magnetic bearing system. The TDGAR learning scheme is a new hybrid GA, which integrates the TD prediction method and the GA to perform the reinforcement learning task. The TDGAR learning system is composed of two integrated feedforward networks. One neural network acts as a critic network to guide the learning of the other network (the action network) which determines the outputs (actions) of the TDGAR learning system. The action network can be a normal neural network or a neural fuzzy network. Using the TD prediction method, the critic network can predict the external reinforcement signal and provide a more informative internal reinforcement signal to the action network. The action network uses the GA to adapt itself according to the internal reinforcement signal. The key concept of the TDGAR learning scheme is to formulate the internal reinforcement signal as the fitness function for the GA such that the GA can evaluate the candidate solutions (chromosomes) regularly, even during periods without external feedback from the environment. This enables the GA to proceed to new generations regularly without waiting for the arrival of the external reinforcement signal. This can usually accelerate the GA learning since a reinforcement signal may only be available at a time long after a sequence of actions has occurred in the reinforcement learning problem. The proposed TDGAR learning system has been used to control an active magnetic bearing (AMB) system in practice. A systematic design procedure is developed to achieve successful integration of all the subsystems including magnetic suspension, mechanical structure, and controller training. The results show that the TDGAR learning scheme can successfully find a neural controller or a neural fuzzy controller for a self-designed magnetic bearing system.
17.
The problems associated with training feedforward artificial neural networks (ANNs) such as the multilayer perceptron (MLP) network and radial basis function (RBF) network have been well documented. The solutions to these problems have inspired a considerable amount of research, one particular area being the application of evolutionary search algorithms such as the genetic algorithm (GA). To date, the vast majority of GA solutions have been aimed at the MLP network. This paper begins with a brief overview of feedforward ANNs and GAs followed by a review of the current state of research in applying evolutionary techniques to training RBF networks.
18.
McLoone S., Brown M.D., Irwin G., Lightbody A. 《IEEE Transactions on Neural Networks》1998,9(4):669-684
This paper presents a new hybrid optimization strategy for training feedforward neural networks. The algorithm combines gradient-based optimization of nonlinear weights with singular value decomposition (SVD) computation of linear weights in one integrated routine. It is described for the multilayer perceptron (MLP) and radial basis function (RBF) networks and then extended to the local model network (LMN), a new feedforward structure in which a global nonlinear model is constructed from a set of locally valid submodels. Simulation results are presented demonstrating the superiority of the new hybrid training scheme compared to second-order gradient methods. It is particularly effective for the LMN architecture where the linear to nonlinear parameter ratio is large.
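The hybrid split — gradient steps for the nonlinear parameters, SVD-based least squares for the linear weights — can be sketched for a small RBF model. The target function, centre count, width, and step size below are assumptions; `np.linalg.lstsq` stands in for the paper's SVD computation (it is itself SVD-based).

```python
import numpy as np

# Toy 1-D regression target (assumed; the paper's benchmarks differ).
X = np.linspace(-3, 3, 120)[:, None]
y = np.sinc(X).ravel()

C = np.linspace(-3, 3, 10)[:, None]   # RBF centres: the nonlinear parameters
width = 0.8

def design(X, C, width):
    d2 = (X - C.T) ** 2
    Phi = np.exp(-d2 / (2 * width ** 2))
    return np.hstack([Phi, np.ones((len(X), 1))])   # + bias column

for _ in range(30):
    # Linear weights in one shot by SVD-based least squares: this is the
    # "compute the linear layer exactly" half of the hybrid routine.
    Phi = design(X, C, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    resid = Phi @ w - y
    # Gradient step on the nonlinear parameters (the centres) only.
    Phi_rbf = Phi[:, :-1]
    dPhi_dC = Phi_rbf * (X - C.T) / width ** 2      # d phi_j / d c_j
    grad_C = 2 * (resid[:, None] * dPhi_dC * w[:-1]).mean(0)
    C = C - 0.05 * grad_C[:, None]

# Refit the linear weights for the final centre positions.
Phi = design(X, C, width)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
final_mse = float(np.mean((Phi @ w - y) ** 2))
```

Because the linear weights are eliminated exactly at every step, the gradient search runs in a much smaller space, which is why the scheme pays off most when the linear-to-nonlinear parameter ratio is large, as the entry notes for the LMN.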
19.
Recurrent neural network training with feedforward complexity
This paper presents a training method that is of no more than feedforward complexity for fully recurrent networks. The method is not approximate, but rather depends on an exact transformation that reveals an embedded feedforward structure in every recurrent network. It turns out that given any unambiguous training data set, such as samples of the state variables and their derivatives, we need only to train this embedded feedforward structure. The necessary recurrent network parameters are then obtained by an inverse transformation that consists only of linear operators. As an example of modeling a representative nonlinear dynamical system, the method is applied to learn Bessel's differential equation, thereby generating Bessel functions within, as well as outside, the training set.
20.
Louiza Dehyadegary, Seyyed Ali Seyyedsalehi, Isar Nejadgholi 《Neurocomputing》2011,74(17):2716-2724
Here, the formation of continuous attractor dynamics in a nonlinear recurrent neural network is used to achieve nonlinear speech denoising, in order to implement robust phoneme recognition and information retrieval. Attractor dynamics are first formed in the recurrent neural network by training the clean speech subspace as the continuous attractor. The network is then used to recognize noisy speech with both stationary and nonstationary noise. In this work, the efficiency of a nonlinear feedforward network is compared to the same network with a recurrent connection in its hidden layer. The structure and training of this recurrent connection are designed in such a way that the network learns to denoise the signal step by step, using the properties of the attractors it has formed, along with phone recognition. Using these connections, the recognition accuracy is improved by 21% for the stationary signal and 14% for the nonstationary one at 0 dB SNR, relative to a reference model, which is a feedforward neural network.