Similar Documents
A total of 20 similar documents were found (search time: 15 ms).
1.
Modified backpropagation algorithm for fast learning in neural networks (cited 2 times: 0 self-citations, 2 by others)
Reyneri L.M., Filippi E. 《Electronics letters》, 1990, 26(19): 1564-1566
A fast learning rule for artificial neural systems which is based on modifications to a backpropagation algorithm is described. The rule minimises the error function along the direction of the gradient and backpropagates the error pattern according to a constant error energy approach.
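As a rough, self-contained illustration of minimising the error along the gradient direction, the sketch below trains a single sigmoid unit on synthetic data and chooses each step size by a simple one-dimensional search along the negative gradient. It is not the authors' rule (their constant-error-energy backpropagation of the error pattern is not reproduced), and the data, unit and step-size grid are purely hypothetical.

```python
# Illustrative only: a single sigmoid unit trained so that each update
# minimises the error along the negative-gradient direction via a 1-D search.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                              # hypothetical inputs
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)     # hypothetical targets

def forward(W):
    return 1.0 / (1.0 + np.exp(-X @ W))                    # sigmoid unit

def error(W):
    return 0.5 * np.mean((forward(W) - y) ** 2)

def gradient(W):
    out = forward(W)
    delta = (out - y) * out * (1.0 - out)
    return X.T @ delta / len(y)

W = np.zeros(3)
for epoch in range(50):
    g = gradient(W)
    steps = [2.0 ** k for k in range(-10, 4)]              # candidate step sizes
    best = min(steps, key=lambda s: error(W - s * g))      # minimise along -g
    W = W - best * g

print("final mean-squared error:", round(error(W), 4))
```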

2.
A learning algorithm for binary neural networks based on the ant colony algorithm (cited 2 times: 0 self-citations, 2 by others)
This paper proposes a learning algorithm for binary neural networks that realizes arbitrary Boolean functions. The algorithm first uses the ant colony algorithm to select the core nodes and the order in which nodes are visited; it then, following the optimised visiting order, gives the steps for expanding the classification hyperplanes, which reduces the number of hidden-layer neurons, and it gives the expressions for the hidden-layer and output neurons. The convergence of the algorithm is further analysed theoretically. The algorithm remedies shortcomings of existing learning algorithms, and its effectiveness is verified on typical examples.
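The ant-colony component can be pictured with the hedged toy below: a standard Ant System searches for a good node visiting order, here simply a short tour over random 2-D points. The paper's use of the resulting order to select core nodes and expand classification hyperplanes is not reproduced, and all parameter values are illustrative.

```python
# Toy Ant System: search for a short node visiting order over random points.
import numpy as np

rng = np.random.default_rng(7)
n = 10
pts = rng.uniform(size=(n, 2))
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1) + np.eye(n)

tau = np.ones((n, n))                     # pheromone trails
eta = 1.0 / dist                          # heuristic desirability
alpha, beta, rho, n_ants = 1.0, 2.0, 0.5, 20

def tour_length(order):
    return sum(dist[order[i], order[(i + 1) % n]] for i in range(n))

best_order, best_len = None, np.inf
for iteration in range(100):
    tours = []
    for _ in range(n_ants):
        order = [int(rng.integers(n))]
        while len(order) < n:
            cur = order[-1]
            mask = np.ones(n, bool)
            mask[order] = False
            p = (tau[cur] ** alpha) * (eta[cur] ** beta) * mask
            p /= p.sum()
            order.append(int(rng.choice(n, p=p)))
        tours.append((tour_length(order), order))
    tau *= (1 - rho)                      # pheromone evaporation
    for length, order in tours:
        if length < best_len:
            best_len, best_order = length, order
        for i in range(n):                # deposit pheromone along each tour
            a, b = order[i], order[(i + 1) % n]
            tau[a, b] += 1.0 / length
            tau[b, a] += 1.0 / length

print("best visiting order:", best_order, "length:", round(best_len, 3))
```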

3.
We present a conditional distribution learning formulation for real-time signal processing with neural networks based on an extension of maximum likelihood theory, partial likelihood (PL) estimation, which allows for (i) dependent observations and (ii) sequential processing. For a general neural network conditional distribution model, we establish a fundamental information-theoretic connection, the equivalence of maximum PL estimation and accumulated relative entropy (ARE) minimization, and obtain large-sample properties of PL for the general case of dependent observations. As an example, the binary case with the sigmoidal perceptron as the probability model is presented. It is shown that the single and multilayer perceptron (MLP) models satisfy conditions for the equivalence of the two cost functions: ARE and negative log partial likelihood. The practical issue of their gradient descent minimization is then studied within the well-formed cost functions framework. It is shown that these are well-formed cost functions for networks without hidden units; hence, their gradient descent minimization is guaranteed to converge to a solution if one exists on such networks. The formulation is applied to adaptive channel equalization, and simulation results are presented to show the ability of the least relative entropy equalizer to realize complex decision boundaries and to recover during training from convergence at the wrong extreme in cases where the mean-square-error-based MLP equalizer cannot.
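For the binary case mentioned above, the negative log partial likelihood of a sigmoidal perceptron reduces to the familiar cross-entropy cost, so a minimal sketch is ordinary gradient descent on that cost for a single-layer model (where, per the abstract, the cost is well formed). The data, learning rate and iteration count below are hypothetical.

```python
# Minimal sketch of the binary case: a single sigmoidal perceptron trained by
# gradient descent on the negative log-likelihood (cross-entropy) of its outputs.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))                      # hypothetical observations
w_true = np.array([1.5, -1.0, 0.5, 2.0])
y = (rng.random(200) < 1 / (1 + np.exp(-X @ w_true))).astype(float)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

w = np.zeros(4)
lr = 0.5
for it in range(500):
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / len(y)   # gradient of the negative log-likelihood
    w -= lr * grad

nll = -np.mean(y * np.log(sigmoid(X @ w)) + (1 - y) * np.log(1 - sigmoid(X @ w)))
print("negative log-likelihood per sample:", round(nll, 4))
```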

4.
Research on a fast neural network learning algorithm based on an orthogonality-corrected conjugate gradient method (cited 1 time: 0 self-citations, 1 by others)
Because feedforward neural networks can in theory approximate arbitrary nonlinear continuous mappings, they are well suited to nonlinear system modelling and to building adaptive controllers. To improve the efficiency and stability of weight learning in feedforward networks, this paper proposes a fast neural network learning algorithm based on an orthogonality-corrected conjugate gradient optimisation method. Simulation comparisons with other learning algorithms (such as the BP algorithm, variable-metric methods, and Powell's method with difference quotients in place of derivatives) show that the proposed algorithm is an efficient and fast learning algorithm.
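A minimal sketch of conjugate-gradient training is given below, using standard Polak-Ribière directions and a crude line search on a toy one-hidden-layer network; the paper's orthogonality-corrected variant is not reproduced, and the network size, data and search grid are assumptions made only for illustration.

```python
# Conjugate-gradient (Polak-Ribiere) training of a tiny one-hidden-layer net:
# the search direction mixes the new gradient with the previous direction
# instead of following the raw gradient as plain BP does.
import numpy as np

rng = np.random.default_rng(9)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0])                         # toy nonlinear mapping
H = 10                                          # hidden units

def unpack(p):
    return p[:H].reshape(1, H), p[H:2*H], p[2*H:3*H], p[3*H]

def loss_grad(p):
    W1, b1, W2, b2 = unpack(p)
    h = np.tanh(X @ W1 + b1)
    err = h @ W2 + b2 - y
    L = 0.5 * np.mean(err ** 2)
    gb2 = err.mean()
    gW2 = h.T @ err / len(y)
    dh = np.outer(err, W2) * (1 - h ** 2)
    gb1 = dh.mean(axis=0)
    gW1 = (X.T @ dh / len(y)).ravel()
    return L, np.concatenate([gW1, gb1, gW2, [gb2]])

p = rng.normal(scale=0.5, size=3 * H + 1)
_, g = loss_grad(p)
d = -g
for it in range(300):
    # crude line search along the conjugate direction d (alpha = 0 keeps p)
    alphas = [0.0] + [2.0 ** k for k in range(-12, 3)]
    alpha = min(alphas, key=lambda a: loss_grad(p + a * d)[0])
    p = p + alpha * d
    _, g_new = loss_grad(p)
    beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # Polak-Ribiere (+)
    d = -g_new + beta * d
    g = g_new

print("final loss:", round(loss_grad(p)[0], 5))
```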

5.
6.
李爱军, 章卫国, 沈毅. 《电光与控制》, 2003, 10(3): 16-18, 22
A hyperstable adaptive control algorithm for controlling complex nonlinear systems is proposed. The controller is designed using Popov's hyperstability principle. A neural network identifies the system's modelling error and uncertainties online, and the identification result serves as a compensation signal to achieve robust control of the system. Simulation results for a two-input, two-output nonlinear system show that the proposed hyperstable adaptive control algorithm performs well.

7.
A new learning scheme, called projection learning (PL), for self-organizing neural networks is presented. By iteratively subtracting out the projection of the winning neuron onto the null space of the input vector, the neuron is made more similar to the input. By subtracting the projection onto the null space, as opposed to making the weight vector directly aligned with the input, we attempt to reduce the bias of the weight vectors. This reduced bias will improve the generalizing abilities of the network. Such a feature is important in problems where the in-class variance is very high, such as traffic sign recognition. Comparisons of PL with standard Kohonen learning indicate that projection learning is faster. Projection learning is implemented on a new self-organizing neural network model called the reconfigurable neural network (RNN). The RNN is designed to incorporate new patterns online without retraining the network. The RNN is used to recognize traffic signs for a mobile robot navigation system.
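A hedged sketch of the update described above: the winning prototype has a fraction of its component lying in the null space of the input (the part orthogonal to the input) subtracted, which pulls it toward the input direction rather than copying the input outright. The codebook size, learning rate and random inputs are illustrative, and the RNN's online reconfiguration is not reproduced.

```python
# Sketch of the projection-learning update: competitive winner selection plus
# removal of a fraction of the winner's null-space component.
import numpy as np

rng = np.random.default_rng(2)
codebook = rng.normal(size=(8, 16))          # 8 prototype neurons, 16-dim inputs

def projection_update(codebook, x, eta=0.3):
    winner = int(np.argmin(np.linalg.norm(codebook - x, axis=1)))
    w = codebook[winner]
    # component of w lying in the null space of x (i.e. orthogonal to x)
    null_component = w - (w @ x) / (x @ x) * x
    # subtract a fraction of it, instead of moving w directly onto x
    codebook[winner] = w - eta * null_component
    return winner

for _ in range(1000):
    projection_update(codebook, rng.normal(size=16))

print("prototype norms after training:", np.round(np.linalg.norm(codebook, axis=1), 2))
```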

8.
A fast new algorithm for training feedforward neural networks (cited 5 times: 0 self-citations, 5 by others)
A fast algorithm is presented for training multilayer perceptrons as an alternative to the backpropagation algorithm. The number of iterations required by the new algorithm to converge is less than 20% of what is required by the backpropagation algorithm. Also, it is less affected by the choice of initial weights and setup parameters. The algorithm uses a modified form of the backpropagation algorithm to minimize the mean-squared error between the desired and actual outputs with respect to the inputs to the nonlinearities. This is in contrast to the standard algorithm which minimizes the mean-squared error with respect to the weights. Error signals, generated by the modified backpropagation algorithm, are used to estimate the inputs to the nonlinearities, which along with the input vectors to the respective nodes, are used to produce an updated set of weights through a system of linear equations at each node. These systems of linear equations are solved using a Kalman filter at each layer.
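The per-node linear step can be sketched as follows: once a desired pre-activation d has been estimated for a node (the modified-backpropagation step, which is not reproduced here), fitting the node's weights is a linear problem that a per-node Kalman filter, equivalently recursive least squares, can update incrementally. The class and toy data below are illustrative only.

```python
# Recursive least squares for one node: fit w so that x @ w tracks a target d.
import numpy as np

class NodeRLS:
    def __init__(self, n_inputs, p0=1e3):
        self.w = np.zeros(n_inputs)
        self.P = np.eye(n_inputs) * p0     # inverse input-correlation estimate

    def update(self, x, d):
        Px = self.P @ x
        k = Px / (1.0 + x @ Px)            # Kalman gain
        self.w += k * (d - x @ self.w)     # correct weights toward target d
        self.P -= np.outer(k, Px)          # update covariance

# toy usage with synthetic node inputs and desired pre-activations
rng = np.random.default_rng(3)
node = NodeRLS(5)
w_true = rng.normal(size=5)
for _ in range(200):
    x = rng.normal(size=5)
    node.update(x, x @ w_true + 0.01 * rng.normal())

print("weight error:", np.round(node.w - w_true, 3))
```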

9.
10.
A neural network based algorithm for problems in nonconvex optimisation is described. The algorithm restricts the search by exploiting the properties of the parameter space. It keeps the initial point fixed and reshapes the energy function so that, in the neighbourhood of an arbitrary initial condition, basins of attraction are formed corresponding to valid solutions first, and then these basins are made deeper to improve the quality. This algorithm was applied to both the travelling salesman problem (TSP) and the graph mapping problem (GMP) and good results were obtained.

11.
Matsumoto T., Koga M. 《Electronics letters》, 1990, 26(15): 1136-1137
An analogue-neural-network learning method based on hardware is proposed. All the network parameters are oscillated with slightly different frequencies, and the spectra appearing in the error signal are used to change the parameters. Learning can be carried out without knowing the internal conditions of neural networks.
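A software analogue of this scheme (the paper describes analogue hardware) can be sketched as follows: each parameter is dithered by a small sinusoid at its own frequency, and correlating the observed error signal with each tone recovers an estimate of that parameter's gradient without access to the network's internals. The linear "network", frequencies and step size below are hypothetical.

```python
# Multi-frequency dithering as a black-box gradient estimator (toy example).
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(64, 3))
w_true = np.array([0.8, -0.4, 1.2])
y = X @ w_true                               # hypothetical target signal

def error(w):
    return np.mean((X @ w - y) ** 2)         # treated as a black-box measurement

w = np.zeros(3)
freqs = np.array([7.0, 11.0, 13.0])          # one distinct dither frequency per parameter
amp, lr, T = 0.01, 0.5, 256
t = np.arange(T)
tones = np.sin(2 * np.pi * freqs[:, None] * t / T)   # (3, T) dither waveforms

for epoch in range(200):
    errs = np.array([error(w + amp * tones[:, k]) for k in range(T)])
    # correlating the error with each tone isolates that parameter's sensitivity
    grad_est = (2.0 / (amp * T)) * (tones @ errs)
    w -= lr * grad_est

print("recovered weights:", np.round(w, 3))
```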

12.
Recurrent neural networks have become popular models for system identification and time series prediction. Nonlinear autoregressive models with exogenous inputs (NARX) neural network models are a popular subclass of recurrent networks and have been used in many applications. Although embedded memory can be found in all recurrent network models, it is particularly prominent in NARX models. We show that using intelligent memory order selection through pruning and good initial heuristics significantly improves the generalization and predictive performance of these nonlinear systems on problems as diverse as grammatical inference and time series prediction.
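A toy NARX-style predictor is sketched below: the network input is a tapped delay line of past outputs y[t-1..t-dy] and past exogenous inputs u[t-1..t-du], where dy and du are the memory orders the abstract refers to. A small one-hidden-layer network and plain gradient descent stand in for the models actually studied; the plant, sizes and learning rate are illustrative.

```python
# NARX-style one-step predictor built from tapped delay lines of y and u.
import numpy as np

rng = np.random.default_rng(5)
T = 500
u = rng.uniform(-1, 1, size=T)                     # exogenous input
y = np.zeros(T)
for t in range(2, T):                              # hypothetical nonlinear plant
    y[t] = 0.5 * np.tanh(y[t - 1]) + 0.3 * y[t - 2] + 0.4 * u[t - 1]

dy, du = 2, 1                                      # memory orders
rows = range(max(dy, du), T)
Z = np.array([np.concatenate((y[t - dy:t][::-1], u[t - du:t][::-1])) for t in rows])
target = y[max(dy, du):]

H = 8                                              # hidden units
W1 = rng.normal(scale=0.5, size=(Z.shape[1], H))
W2 = rng.normal(scale=0.5, size=H)
lr = 0.05
for epoch in range(2000):
    h = np.tanh(Z @ W1)
    err = h @ W2 - target
    gW2 = h.T @ err / len(target)
    gW1 = Z.T @ (np.outer(err, W2) * (1 - h ** 2)) / len(target)
    W2 -= lr * gW2
    W1 -= lr * gW1

print("training MSE:", round(np.mean((np.tanh(Z @ W1) @ W2 - target) ** 2), 5))
```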

13.
A new dynamic routing algorithm based on chaotic neural networks (cited 4 times: 0 self-citations, 4 by others)
For the routing problem in communication networks, a chaotic neural network realisation of dynamic routing is proposed. The method has several desirable properties, namely transient chaos and smooth convergence, and it effectively avoids the tendency of the conventional Hopfield network to become trapped in local extrema. Through a short reverse-bifurcation process it quickly reaches a stable convergent state. Experiments show that the algorithm performs routing in communication networks effectively and in real time, and that when the traffic in the network changes, the algorithm automatically adjusts the balance between shortest paths and load balancing.
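The transient-chaos-then-convergence behaviour the abstract relies on can be seen in a single transiently chaotic neuron of the Chen-Aihara type, simulated below with commonly quoted illustrative parameters; the routing network built from such neurons is not reproduced.

```python
# Single transiently chaotic neuron: chaotic wandering while the self-feedback
# z is large, settling to a fixed point as z decays (the "reverse bifurcation").
import numpy as np

k, I0, eps = 0.9, 0.65, 1.0 / 250.0   # damping, bias, output steepness
beta = 0.002                          # decay rate of the chaotic self-feedback z
z, y = 0.08, 0.283                    # initial self-feedback and internal state

outputs = []
for t in range(2000):
    x = 1.0 / (1.0 + np.exp(-y / eps))       # neuron output
    y = k * y - z * (x - I0)                 # internal-state update
    z = (1.0 - beta) * z                     # annealed chaotic self-feedback
    outputs.append(x)

# early outputs wander widely; once z has decayed the dynamics settle down
print("output spread, first 300 steps:", round(max(outputs[:300]) - min(outputs[:300]), 3))
print("output spread, last 300 steps :", round(max(outputs[-300:]) - min(outputs[-300:]), 6))
```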

14.
An optimum block-adaptive learning rate (OBALR) backpropagation (BP) algorithm for training feedforward neural networks with an arbitrary number of neuron layers is described. The algorithm uses the block-smoothed gradient as the descent direction and no momentum term, but produces an optimum block-adaptive learning rate which is constant within each block and is updated adaptively at the beginning of each block iteration so that it remains optimum in the sense of minimizing the approximate output mean-square error of the block. Several computer simulations were carried out on learning a deterministic chaotic time-series mapping. The OBALR BP algorithm not only overcame the difficulty of choosing good values of the two parameters, but also provided significant improvement in learning speed and descent capability over the standard BP algorithm.

15.
A self-creating harmonic neural network (HNN) trained with a competitive algorithm effective for on-line learning vector quantisation is presented. It is shown that by employing dual resource counters to record the activity of each node during the training process, the equi-error and equi-probable criteria can be harmonised. Training in HNNs is smooth and incremental, and it not only achieves the biologically plausible on-line learning property, but it can also avoid the stability-plasticity dilemma, the dead-node problem, and the deficiency of the local minimum. Characterising HNNs reveals the great controllability of HNNs in favouring one criterion over the other, when faced with a must-choose situation between equi-error and equi-probable. Comparison studies on learning vector quantisation involving stationary and non-stationary, structured and non-structured inputs demonstrate that the HNN outperforms other competitive networks in terms of quantisation error, learning speed and codeword search efficiency.

16.
Pattern-matching learning for binary neural networks (cited 1 time: 0 self-citations, 1 by others)
Knowledge extraction from binary neural networks requires understanding the logical meaning of each neuron, and in general it is difficult to analyse what a binary neural network has learned. This paper proposes a learning algorithm based on the structural analysis of linearly separable structure families. When a sample set in Boolean space is learned with this method, every hidden-layer neuron of the resulting binary neural network belongs to one or a few classes of linearly separable structure families; as long as the logical meaning of these families is clear, the knowledge content of the entire learning result can be analysed.

17.
Cichocki A., Unbehauen R. 《Electronics letters》, 1991, 27(22): 2026-2028
A new and simple winner-take-all neural subnetwork suitable for VLSI CMOS implementation is proposed. The subnetwork is used for the real-time solving of minimax optimisation problems. In particular, the Letter shows how to solve, in real time, an over-determined system of linear equations by using the Chebyshev L/sub infinity / norm criterion and a neural network approach. The validity and performance of the network have been checked by extensive computer simulation of different minimax optimisation problems.
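A discrete-time software stand-in for the optimisation criterion (not the analogue winner-take-all circuit): solve an over-determined system A x ≈ b in the Chebyshev L-infinity sense by subgradient descent on a smoothed version of max|residual|, where soft winner-take-all weights concentrate on the largest residual. Problem size, smoothing constant and step schedule are illustrative.

```python
# Minimax (Chebyshev) fit of an over-determined linear system via subgradient
# descent on a smoothed max|residual|.
import numpy as np

rng = np.random.default_rng(6)
A = rng.normal(size=(40, 5))                          # 40 equations, 5 unknowns
b = A @ rng.normal(size=5) + 0.05 * rng.uniform(-1, 1, 40)

def smoothed_minimax_grad(x, beta=200.0):
    r = A @ x - b
    a = np.abs(r)
    w = np.exp(beta * (a - a.max()))                  # soft winner-take-all weights
    w /= w.sum()
    return A.T @ (w * np.sign(r))                     # gradient of smoothed max|r_i|

x = np.zeros(5)
for k in range(20000):
    x -= (0.2 / np.sqrt(k + 1)) * smoothed_minimax_grad(x)

print("Chebyshev (L-infinity) residual:", round(float(np.max(np.abs(A @ x - b))), 4))
```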

18.
马霓, 韦岗. 《通信学报》, 2000, 21(10): 31-37
To improve the long-term predictor in predictive vocoders, this paper introduces a fully connected recurrent neural network (FRNN) nonlinear predictor and applies it in a speech coding algorithm. In an FRNN the hidden units receive feedback not only from themselves but also from the output units, so its prediction performance is better than that of conventional predictors. Applying it in a code-excited linear prediction (CELP) speech coding system yields a lower transmission bit rate while also improving coding quality.

19.
The channel assignment problem involves not only assigning channels or frequencies to each radio cell, but also satisfying frequency constraints given by a compatibility matrix. The proposed parallel algorithm is based on an artificial neural network composed of nm processing elements for an n-cell-m-frequency problem. The algorithm runs not only on a sequential machine but also on a parallel machine with up to a maximum of nm processors. The algorithm was tested by solving eight benchmark problems where the total number of frequencies varied from 100 to 533. The algorithm found the solutions in nearly constant time with nm processors. The simulation results showed that the algorithm found better solutions than the existing algorithm in one out of eight problems.

20.
张立斌. 《信息技术》, 2010, (7): 60-64, 68
How to better extract symbolic rules from trained neural networks used for classification is a widely studied problem. As the numbers of network nodes and connections grow geometrically, the earlier approach of exhaustively analysing all connections and output values is no longer applicable. A novel genetic algorithm for extracting symbolic rules from trained neural networks is proposed, and experiments show that the approach is feasible for rule extraction.
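A hedged toy version of the idea: a genetic algorithm searches for a conjunctive rule (each input fixed to 0, fixed to 1, or marked don't-care) that best reproduces a classifier's decisions on sampled inputs. The "trained network" below is a stand-in black box, and the encoding, operators and parameters are assumptions for illustration, not the paper's.

```python
# GA-based rule extraction toy: evolve conjunctive rules that mimic a black-box
# classifier's decisions (fidelity = agreement on sampled inputs).
import numpy as np

rng = np.random.default_rng(8)
n_bits, pop_size, n_gen = 8, 60, 80

def network(x):                       # stand-in for the trained network
    return int(x[0] == 1 and x[3] == 0)

samples = rng.integers(0, 2, size=(300, n_bits))
labels = np.array([network(x) for x in samples])

def rule_fires(rule, x):              # rule gene: 0, 1, or 2 (= don't care)
    return all(r == 2 or r == xi for r, xi in zip(rule, x))

def fitness(rule):
    pred = np.array([rule_fires(rule, x) for x in samples])
    return np.mean(pred == labels)    # fidelity to the network's decisions

pop = rng.integers(0, 3, size=(pop_size, n_bits))
for gen in range(n_gen):
    fit = np.array([fitness(r) for r in pop])
    # tournament selection
    parents = pop[[max(rng.integers(pop_size, size=3), key=lambda i: fit[i])
                   for _ in range(pop_size)]]
    # one-point crossover
    children = parents.copy()
    for i in range(0, pop_size - 1, 2):
        cut = rng.integers(1, n_bits)
        children[i, cut:], children[i + 1, cut:] = \
            parents[i + 1, cut:].copy(), parents[i, cut:].copy()
    # mutation
    mut = rng.random(children.shape) < 0.05
    children[mut] = rng.integers(0, 3, size=mut.sum())
    pop = children

best = max(pop, key=fitness)
print("best rule (0/1 fixed, 2 = don't care):", best, "fidelity:", round(fitness(best), 3))
```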
