Similar Articles
19 similar articles found
1.
An Integrated Navigation Filter Based on an Online-Learning Feedforward Neural Network (Cited by 2)
A mathematical model of integrated navigation with feedback correction is first established, and on this basis an online-learning neural network filtering algorithm is proposed. The algorithm requires no prior knowledge of the noise and depends only weakly on the system model. Simulations show that the Kalman filter achieves higher estimation accuracy under ideal conditions, whereas the neural network filter is more accurate under non-ideal conditions and is robust to model errors and to changes in the noise characteristics.
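The abstract does not give the filter's equations; as a rough illustration of an online-learning neural filter of this kind, the sketch below trains a small one-hidden-layer network by per-sample gradient descent to map a measurement innovation to a state correction. All names, shapes, and the training signal are hypothetical, not the paper's design.

```python
# A minimal online-NN-filter sketch (hypothetical, not the paper's filter):
# a one-hidden-layer network learns, sample by sample, to map the
# measurement innovation to a state correction; no noise statistics needed.
import numpy as np

class OnlineNNFilter:
    def __init__(self, n_meas, n_state, n_hidden=16, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.1, (n_hidden, n_meas))
        self.W2 = rng.normal(0, 0.1, (n_state, n_hidden))
        self.lr = lr

    def correction(self, innovation):
        self.h = np.tanh(self.W1 @ innovation)   # hidden activations
        return self.W2 @ self.h                  # state correction

    def update(self, innovation, target_correction):
        # One SGD step on the squared error between predicted and
        # reference corrections (the reference signal is an assumption).
        err = self.correction(innovation) - target_correction
        dW2 = np.outer(err, self.h)
        dh = (self.W2.T @ err) * (1 - self.h ** 2)
        dW1 = np.outer(dh, innovation)
        self.W2 -= self.lr * dW2
        self.W1 -= self.lr * dW1
```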

2.
The concepts of subspace information quantity (SIQ) and the subspace information quantity criterion (SIQC) are proposed theoretically. On this basis, the theory of feedforward neural network design under this criterion is developed, including the hidden-layer information quantity (HLIQ) and existence and approximation theorems, which give guidance for choosing the number of hidden neurons, the weight-vector set, and the hidden-layer activation functions. A feasible suboptimal network design algorithm based on this theory is then proposed. Finally, the network performance indices and the factors affecting them are analyzed in detail. The theory and methods overcome the drawbacks of traditional learning algorithms, enrich the theoretical basis of feedforward network design, and have considerable value for theoretical guidance and practical application; a concrete example verifies their feasibility and advantages.

3.
A New Second-Order Learning Algorithm for Multilayer Feedforward Neural Networks (Cited by 6)
A new second-order recursive learning algorithm for multilayer feedforward neural networks is proposed. The algorithm back-propagates not only the error of each layer but also second-derivative information. The new algorithm is proved to be equivalent to Newton iteration and to have second-order convergence. It realizes recursive computation of the Newton search direction and the inverse of the Hessian matrix, at a computational cost close to that of ordinary recursive least squares. A performance analysis shows that the new algorithm outperforms the second-order learning algorithm of Karayiannis et al.
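As a hedged illustration of recursive second-order updates at near-RLS cost, the sketch below shows a generic recursive Gauss-Newton step that maintains an approximate inverse Hessian with a Sherman-Morrison recursion. It is in the same family as, but not identical to, the paper's algorithm.

```python
# Generic recursive Gauss-Newton step (a sketch in the RLS family, not the
# paper's exact recursion): P tracks an approximate inverse Hessian.
import numpy as np

def recursive_gauss_newton_step(w, P, g, e, lam=0.99):
    """w: weight vector; P: inverse-Hessian estimate; g: per-sample gradient
    of the network output w.r.t. w; e: output error; lam: forgetting factor."""
    Pg = P @ g
    k = Pg / (lam + g @ Pg)           # gain vector
    P = (P - np.outer(k, Pg)) / lam   # Sherman-Morrison update of P
    w = w + k * e                     # Newton-direction weight step
    return w, P
```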

4.
This paper constructs a new hybrid-structure neural network model that combines a linear model with a multilayer feedforward network, and proposes a corresponding non-iterative fast learning algorithm. According to the required fitting accuracy, the algorithm uses linear least squares to determine the optimal network weights and the parameters of the linear part, and automatically determines the optimal number of hidden nodes. Comparison with BP networks shows that the fast learning algorithm for the proposed hybrid feedforward network is significantly better than BP in fitting accuracy, learning speed, generalization ability, and number of hidden nodes.
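A speculative sketch of the general scheme, not the authors' exact algorithm: the model is a linear term plus randomly parameterized tanh hidden nodes, all linear parameters are re-solved non-iteratively by least squares, and hidden nodes are added one at a time until the fitting-accuracy requirement is met.

```python
# Sketch of a hybrid linear + feedforward model y = A.x + sum_j c_j tanh(w_j.x + b_j):
# linear parameters (A, c) come from one least-squares solve; hidden nodes are
# added until the RMSE tolerance is met. Details here are assumptions.
import numpy as np

def fit_hybrid(X, y, tol=1e-3, max_hidden=50, seed=0):
    rng = np.random.default_rng(seed)
    N, d = X.shape
    W, b = np.empty((0, d)), np.empty(0)
    for m in range(max_hidden + 1):
        H = np.tanh(X @ W.T + b) if m else np.empty((N, 0))
        Phi = np.hstack([X, H])                    # linear part + hidden part
        theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        rmse = np.sqrt(np.mean((Phi @ theta - y) ** 2))
        if rmse <= tol or m == max_hidden:
            break
        W = np.vstack([W, rng.normal(size=(1, d))])  # add one hidden node
        b = np.append(b, rng.normal())
    return W, b, theta, rmse
```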

5.
A Fast Learning Algorithm for Multilayer Feedforward Neural Networks and Its Application (Cited by 16)
叶军, 张新华. 《控制与决策》 (Control and Decision), 2002, 17(Z1): 817-819
To address the shortcomings of existing learning algorithms for multilayer feedforward neural networks, a fast learning algorithm is proposed. It conforms to the basic characteristics of biological neural networks, is simple, converges quickly, and achieves high approximation accuracy for both linear and nonlinear mappings. Inverse-kinematics modelling of a two-link manipulator is used as an application example; simulation results show that the method is effective and that its algorithm and convergence speed are superior to the BP network.

6.
陈华伟, 年晓玲, 靳蕃. 《计算机应用》 (Journal of Computer Applications), 2006, 26(5): 1106-1108
A new learning algorithm for feedforward neural networks is proposed, which adjusts the weights of different layers in both the forward and backward phases: in the forward phase the weights connecting the hidden and output layers are determined by the minimum-norm least-squares solution, while in the backward phase the weights connecting the input and hidden layers are adjusted by error gradient descent. The algorithm learns and converges quickly and can, to a certain extent, guarantee the generalization ability of the trained network; experimental results give a preliminary verification of its performance.
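The two-phase idea can be illustrated compactly. The sketch below is an assumption-laden simplification (tanh hidden units, batch updates): the hidden-to-output weights are the minimum-norm least-squares solution via the pseudo-inverse, after which the input-to-hidden weights take one gradient-descent step.

```python
# One epoch of the two-phase scheme (a simplified sketch, not the paper's code).
import numpy as np

def train_epoch(X, Y, W1, lr=0.01):
    H = np.tanh(X @ W1)                  # hidden activations
    W2 = np.linalg.pinv(H) @ Y           # forward phase: min-norm LS solution
    E = H @ W2 - Y                       # output error
    dH = (E @ W2.T) * (1 - H ** 2)       # backprop through tanh
    W1 = W1 - lr * (X.T @ dH) / len(X)   # backward phase: gradient step
    return W1, W2
```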

7.
A perceptron-style adaptive algorithm is proposed for feedforward neural networks with time-varying and/or nonlinear inputs. Its essence is to force the error between the actual and desired outputs to satisfy an asymptotically stable difference equation, rather than to minimize an error function by backpropagation. Singularity of the algorithm can be avoided by suitably arranging the expanded inputs.
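As a concrete, entirely illustrative instance of forcing the error to satisfy a stable difference equation: for a linear-in-parameters output y = w·x, the update below makes the error contract as e ← λe (|λ| < 1) instead of descending a cost function. It is not the paper's algorithm.

```python
# Error-dynamics-driven update (illustration only): after the step, the
# error on the same input x equals lam times the old error.
import numpy as np

def error_dynamics_step(w, x, d, lam=0.5):
    e = d - w @ x
    # w_new.x = w.x + (1 - lam) * e  =>  d - w_new.x = lam * e
    return w + (1 - lam) * e * x / (x @ x + 1e-12)
```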

8.
A Fast Learning Algorithm for Multilayer Feedforward Neural Networks and Its Application (Cited by 4)
叶军, 张新华. 《控制与决策》 (Control and Decision), 2002, 17(11): 817-819
To address the shortcomings of existing learning algorithms for multilayer feedforward neural networks, a fast learning algorithm is proposed that conforms to the basic characteristics of biological neural networks, is simple, converges quickly, and achieves high linear and nonlinear approximation accuracy. Inverse-kinematics modelling of a two-link manipulator is used as an application example; simulation results show that the method is effective and that its algorithm and convergence speed are superior to the BP network.

9.
A Fast Learning Algorithm for Feedforward Neural Networks and Its Application in System Identification (Cited by 14)
王正欧, 林晨. 《自动化学报》 (Acta Automatica Sinica), 1997, 23(6): 728-735
A fast least-squares-based learning algorithm for feedforward neural networks is proposed. Compared with existing algorithms of the same kind, it requires no matrix inversion, has a low computational cost, and is well suited to system identification and other applications that demand fast learning. The algorithm is derived in the paper, and a simpler localized variant is also given. Simulation examples in system identification demonstrate its good performance.
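The derivation itself is not reproduced in the abstract; as a hedged illustration of inversion-free least-squares learning, the sketch below shows a normalized LMS step, the classic stochastic route toward a least-squares solution with no matrix inversion. Applied sample-by-sample to each layer's linear map, such a step keeps the cost per update linear in the number of weights.

```python
# Normalized LMS step (illustration of inversion-free least squares, not the
# authors' exact derivation): drives w toward the LS solution of w.x = d.
import numpy as np

def lms_step(w, x, d, mu=0.05):
    e = d - w @ x
    return w + mu * e * x / (1e-8 + x @ x)
```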

10.
Optimizing Feedforward Neural Networks with the Artificial Fish Swarm Algorithm (Cited by 20)
The artificial fish swarm algorithm (AFSA) is a recently proposed optimization strategy. This paper applies AFSA to the training of three-layer feedforward neural networks, builds the corresponding optimization model, and carries out actual computations, comparing the method with BP with a momentum term, an evolutionary algorithm, and simulated annealing. The results show that AFSA is robust, has good global convergence, and is insensitive to initial values.
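For readers unfamiliar with AFSA, the sketch below is a simplified version of its three behaviours (prey, swarm, follow) applied to a flattened weight vector. All parameter values are illustrative, and the loss function (e.g. the network's training MSE as a function of its weights) is supplied by the caller.

```python
# Simplified AFSA sketch (illustrative parameters, not the paper's settings).
import numpy as np

def afsa_minimize(loss, dim, n_fish=20, visual=1.0, step=0.3,
                  try_number=5, crowding=0.6, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    fish = rng.normal(size=(n_fish, dim))
    for _ in range(iters):
        for i in range(n_fish):
            x, fx = fish[i], loss(fish[i])
            nbrs = [j for j in range(n_fish) if j != i
                    and np.linalg.norm(fish[j] - x) < visual]
            moved = False
            if nbrs:
                center = fish[nbrs].mean(axis=0)       # swarm behaviour
                if loss(center) < fx and len(nbrs) < crowding * n_fish:
                    d = center - x
                    x = x + step * d / (np.linalg.norm(d) + 1e-12)
                    moved = True
                else:                                  # follow behaviour
                    best = min(nbrs, key=lambda j: loss(fish[j]))
                    if loss(fish[best]) < fx:
                        d = fish[best] - x
                        x = x + step * d / (np.linalg.norm(d) + 1e-12)
                        moved = True
            if not moved:                              # prey behaviour
                for _ in range(try_number):
                    cand = x + visual * rng.uniform(-1, 1, dim)
                    if loss(cand) < fx:
                        d = cand - x
                        x = x + step * d / (np.linalg.norm(d) + 1e-12)
                        break
                else:
                    x = x + step * rng.uniform(-1, 1, dim)  # random move
            fish[i] = x
    return min(fish, key=loss)
```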

11.
This paper deals with a fast and computationally simple Successive Over-relaxation Resilient Backpropagation (SORRPROP) learning algorithm, developed by modifying the Resilient Backpropagation (RPROP) algorithm. It uses the latest computed values of the weights between the hidden and output layers to update the remaining weights. The modification adds no extra computation to RPROP and maintains its computational simplicity. Classification and regression simulation examples have been used to compare performance. The test results for these examples show that SORRPROP has short convergence times and performs better than other first-order learning algorithms.
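The RPROP core that SORRPROP modifies can be sketched compactly. The sign-based step adaptation below is standard RPROP; the closing comment notes where the successive over-relaxation idea (using freshly updated hidden-to-output weights when computing the remaining gradients) would enter. The details are a reading of the abstract, not the published pseudocode.

```python
# Standard RPROP per-weight step adaptation (a sketch, not SORRPROP itself).
import numpy as np

def rprop_update(w, grad, state, eta_plus=1.2, eta_minus=0.5,
                 dmax=50.0, dmin=1e-6):
    """state = (previous gradient, per-weight step sizes),
    e.g. (np.zeros_like(w), np.full_like(w, 0.1))."""
    prev_grad, delta = state
    same = np.sign(grad) * np.sign(prev_grad)
    delta = np.where(same > 0, np.minimum(delta * eta_plus, dmax),
            np.where(same < 0, np.maximum(delta * eta_minus, dmin), delta))
    grad = np.where(same < 0, 0.0, grad)   # suppress update after a sign flip
    w = w - np.sign(grad) * delta
    return w, (grad, delta)

# SOR ordering (Gauss-Seidel style): apply rprop_update to the hidden-to-output
# weights first, recompute the input-layer gradient with those new values,
# and only then update the input-to-hidden weights.
```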

12.
A robust training algorithm for a class of single-hidden-layer feedforward neural networks (SLFNs) with linear nodes and an input tapped-delay-line memory is developed in this paper. To remove the effects of input disturbances and reduce both the structural and empirical risks of the SLFN, the input weights are assigned so that the hidden layer acts as a pre-processor, and the output weights are then trained to minimize the weighted sum of output error squares together with the weighted sum of output weight squares. The performance of an SLFN-based signal classifier trained with the proposed robust algorithm is studied in the simulation section to show the effectiveness and efficiency of the new scheme.
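With the hidden layer fixed as a pre-processor, the output-weight training described here reduces to weighted ridge regression. A minimal sketch under that reading follows; C and the per-sample weights are assumed hyper-parameters.

```python
# Output-weight step as weighted ridge regression (a sketch of the stated
# objective: weighted error squares plus weighted output-weight squares).
import numpy as np

def robust_output_weights(H, Y, sample_w, C=1.0):
    """Solve min_B  sum_i w_i ||H_i B - Y_i||^2 + C ||B||^2."""
    Wr = np.sqrt(sample_w)[:, None]
    Hw, Yw = Wr * H, Wr * Y
    A = Hw.T @ Hw + C * np.eye(H.shape[1])
    return np.linalg.solve(A, Hw.T @ Yw)
```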

13.
A Hybrid Chaotic-BP Learning Algorithm for Feedforward Neural Networks (Cited by 7)
The probabilistic and statistical properties of the chaotic numbers generated by the Logistic map, and of different chaotic sequences, are briefly analyzed, providing a basis for global chaotic search. Combining a fast BP algorithm with chaotic optimization, a hybrid chaotic-BP algorithm is proposed. Owing to the ergodicity and randomness of the chaotic Logistic map, the hybrid algorithm converges quickly and searches globally. Simulations on the XOR problem and on a nonlinear function show that the algorithm clearly outperforms both standard BP and fast BP.
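A small sketch of the chaotic ingredient: the Logistic map x ← μx(1−x) with μ = 4 generates an ergodic sequence in (0,1) that can drive global perturbations of the weights when BP stalls. How the chaos phase is interleaved with BP below is an assumption, not the paper's schedule.

```python
# Logistic-map chaos source plus a chaos-driven restart around the current
# weights (the coupling with BP is an illustrative assumption).
import numpy as np

def logistic_sequence(x0, n, mu=4.0):
    xs = np.empty(n)
    for i in range(n):
        x0 = mu * x0 * (1 - x0)
        xs[i] = x0
    return xs

def chaotic_restart(w, loss, radius=0.5, tries=50, seed=0.123):
    """Try chaos-driven perturbations of the flat weight vector w; keep the best."""
    chaos = logistic_sequence(seed, tries * w.size).reshape(tries, w.size)
    best_w, best_f = w, loss(w)
    for row in chaos:
        cand = w + radius * (2 * row - 1)   # map (0,1) -> (-radius, radius)
        f = loss(cand)
        if f < best_f:
            best_w, best_f = cand, f
    return best_w
```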

14.
Online learning algorithms are preferred in many applications because they can learn from sequentially arriving data. One of the effective algorithms recently proposed for training single-hidden-layer feedforward neural networks (SLFNs) is the online sequential extreme learning machine (OS-ELM), which can learn data one-by-one or chunk-by-chunk with fixed or varying chunk sizes. It is based on the extreme learning machine (ELM), in which the input weights and hidden-layer biases are chosen randomly and the output weights are then determined by a pseudo-inverse operation. The learning speed of this algorithm is extremely high. However, it does not yield good generalization models for noisy data, and its parameters are difficult to initialize so as to avoid singular and ill-posed problems. In this paper, we propose an improvement of OS-ELM based on a bi-objective optimization approach that minimizes the empirical error while keeping the norm of the network weight vector small. Singular and ill-posed problems are overcome by Tikhonov regularization. The approach can still learn data one-by-one or chunk-by-chunk. Experimental results show the better generalization performance of the proposed approach on benchmark datasets.
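A compact sketch of the regularized OS-ELM idea described here: the initial inverse matrix encodes the Tikhonov term, and chunks then arrive through the standard recursive least-squares update. The hyper-parameters and initialization choices are illustrative.

```python
# Regularized OS-ELM sketch: random hidden layer, Tikhonov-regularized
# initialization, recursive least-squares chunk updates.
import numpy as np

class RegularizedOSELM:
    def __init__(self, n_in, n_hidden, n_out, lam=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.uniform(-1, 1, (n_in, n_hidden))
        self.b = rng.uniform(-1, 1, n_hidden)
        self.P = np.eye(n_hidden) / lam      # (lam * I)^-1: Tikhonov prior
        self.beta = np.zeros((n_hidden, n_out))

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def partial_fit(self, X, Y):
        H = self._hidden(X)
        # RLS chunk update of P and beta (Sherman-Morrison-Woodbury).
        S = np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
        self.P -= self.P @ H.T @ S @ H @ self.P
        self.beta += self.P @ H.T @ (Y - H @ self.beta)

    def predict(self, X):
        return self._hidden(X) @ self.beta
```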

15.
This paper presents a highly effective and precise neural network method for choosing the activation functions (AFs) and tuning the learning parameters (LPs) of a multilayer feedforward neural network by using a genetic algorithm (GA). The performance of the neural network depends mainly on the learning algorithms and the network structure. The backpropagation learning algorithm is used for tuning the network connection weights, while the LPs are obtained by the GA to provide learning that is both fast and reliable; the AF of each neuron in the network is also chosen automatically by the GA. The study considers 10 different functions to achieve better convergence of the desired input–output mapping. Test studies solve a set of two-dimensional regression problems with the proposed genetic-based neural network (GNN) and with a conventional neural network having sigmoid AFs and constant learning parameters. The proposed GNN has also been tested on three real problems from the fields of environment, medicine, and economics. The results show that the proposed GNN is more effective and reliable than the classical neural network structure.
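A minimal GA sketch over the search space described (one activation-function index per hidden neuron plus a learning rate). The operators, rates, and the four candidate activation functions are illustrative assumptions, and the train-and-validate fitness function is supplied by the caller.

```python
# GA over AF choices and learning rate (a sketch; assumes n_neurons >= 2
# and an even population size; lower fitness = better).
import numpy as np

ACTIVATIONS = [np.tanh, np.sin,
               lambda x: 1 / (1 + np.exp(-x)),
               lambda x: np.maximum(x, 0)]

def ga_search(fitness, n_neurons, pop=20, gens=30, pm=0.1, seed=0):
    rng = np.random.default_rng(seed)
    # chromosome: n_neurons AF indices + log10 learning rate
    genes = np.column_stack([rng.integers(0, len(ACTIVATIONS), (pop, n_neurons)),
                             rng.uniform(-4, -1, (pop, 1))])
    for _ in range(gens):
        f = np.array([fitness(g[:-1].astype(int), 10 ** g[-1]) for g in genes])
        order = np.argsort(f)
        parents = genes[order[:pop // 2]]           # truncation selection
        children = parents.copy()
        cut = rng.integers(1, n_neurons, len(children))
        for c, k in zip(children, cut):             # one-point crossover
            mate = parents[rng.integers(len(parents))]
            c[k:] = mate[k:]
        mut = rng.random(children.shape) < pm       # mutation
        children[:, :-1][mut[:, :-1]] = rng.integers(0, len(ACTIVATIONS),
                                                     mut[:, :-1].sum())
        children[:, -1][mut[:, -1]] += rng.normal(0, 0.3, mut[:, -1].sum())
        genes = np.vstack([parents, children])
    return genes[0]   # best chromosome of the last evaluated generation
```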

16.
To overcome the slow convergence of feedforward neural network (FNN) training, its tendency to fall into local extrema, and its strong dependence on the initial weights, a backpropagation-based infinite-collapse iterative chaotic-map particle swarm optimization (ICMICPSO) algorithm is proposed for training FNN parameters. Making full use of the error backpropagation and gradient information of the BP algorithm, the method introduces an ICMIC chaotic particle swarm (ICMICPS) as the global searcher and gradient-descent information as the local searcher for adjusting the network weights and thresholds, so that the particles can search the whole space while optimizing globally. Simulation comparisons with several algorithms show that the ICMICPSO-BPNN method is clearly superior in training and generalization ability.
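The two ingredients can be sketched briefly: the ICMIC map x ← sin(a/x) as a chaotic source for the PSO's stochastic coefficients, with a gradient-descent step blended in as the local searcher. The coupling and constants below are assumptions, not the paper's exact formulation.

```python
# ICMIC chaotic source + PSO velocity update with an optional gradient step
# as the local searcher (illustrative coupling and constants).
import numpy as np

def icmic(x, a=2.0):
    """Infinite-collapse iterative chaotic map; x must stay nonzero."""
    return np.sin(a / x)

def pso_step(x, v, pbest, gbest, chaos, w=0.7, c1=1.5, c2=1.5,
             grad=None, eta=0.01):
    r1 = np.abs(icmic(chaos))          # chaotic coefficients in [0, 1]
    r2 = np.abs(icmic(r1 + 0.1))       # +0.1 keeps the map argument nonzero
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    if grad is not None:               # local search: one gradient-descent step
        x = x - eta * grad(x)
    return x, v, r2                    # r2 seeds the next chaotic draw
```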

17.
The following learning problem is considered for continuous-time recurrent neural networks having sigmoidal activation functions. Given a "black box" representing an unknown system, measurements of output derivatives are collected for a set of randomly generated inputs, and a network is used to approximate the observed behavior. It is shown that the number of inputs needed for reliable generalization (the sample complexity of the learning problem) is upper-bounded by an expression that grows polynomially with the dimension of the network and logarithmically with the number of output derivatives being matched.

18.
There is no exact method for determining the optimal topology of a multilayer neural network for a given problem; usually the designer selects a topology and then trains the network. Since determining the optimal topology of a neural network is an NP-hard problem, most existing algorithms for this task are approximate. They can be classified into four main groups: pruning algorithms, constructive algorithms, hybrid algorithms, and evolutionary algorithms. These algorithms can produce near-optimal solutions, but most use hill climbing and may get stuck in local minima. In this article, we first introduce a learning automaton and study its behaviour, and then present an algorithm based on it, called the survival algorithm, for determining the number of hidden units of three-layer neural networks. The survival algorithm uses learning automata as a global search method to increase the probability of obtaining the optimal topology, and it treats topology optimization as object partitioning rather than as search or parameter optimization, as in existing algorithms. Training begins with a large network; by adding and deleting hidden units, a near-optimal topology is then obtained. The algorithm has been tested on a number of problems, and simulations show that the generated networks are near optimal.
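As an illustration of the automaton ingredient only (the survival algorithm itself adds and deletes units within a large initial network), the sketch below uses a linear reward-inaction automaton whose actions are candidate hidden-unit counts.

```python
# Linear reward-inaction (L_RI) automaton over candidate hidden-unit counts
# (an illustration of the automaton idea, not the survival algorithm).
import numpy as np

def automaton_select(sizes, evaluate, rounds=100, a=0.1, seed=0):
    """evaluate(n) -> validation error of a trained network with n hidden units."""
    rng = np.random.default_rng(seed)
    p = np.full(len(sizes), 1 / len(sizes))
    best = np.inf
    for _ in range(rounds):
        i = rng.choice(len(sizes), p=p)
        err = evaluate(sizes[i])
        if err < best:                 # reward: shift probability mass to i
            best = err
            p = (1 - a) * p
            p[i] += a                  # p_i <- p_i + a * (1 - p_i)
        # no penalty branch: reward-inaction scheme
    return sizes[np.argmax(p)]
```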

19.
A supervised learning algorithm for quantum neural networks (QNNs), based on a novel quantum neuron node implemented as a very simple quantum circuit, is proposed and investigated. In contrast to QNNs published in the literature, the proposed model can both perform quantum learning and simulate classical models; this is partly because the neural models used elsewhere have weights and nonlinear activation functions. Here a quantum weightless neural network model is proposed as a quantisation of classical weightless neural networks (WNNs), so that theoretical and practical results on WNNs can be inherited by these quantum weightless neural networks (qWNNs). In the proposed quantum learning algorithm, the patterns of the training set are presented concurrently in superposition. This superposition-based learning algorithm (SLA) has a computational cost polynomial in the number of patterns in the training set.
