Similar Articles
1.
An orthogonal neural network for function approximation
This paper presents a new single-layer neural network based on orthogonal functions. The network is developed to avoid the problems of traditional feedforward neural networks, such as choosing initial weights and the numbers of layers and processing elements. The desired output accuracy determines the required number of processing elements, and because the optimal weights are unique, training converges rapidly. An experiment in approximating typical continuous and discrete functions is given. The results show that the network has excellent performance in both convergence time and approximation error.
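As a hedged illustration of the idea — a single layer of orthogonal basis functions whose output weights are the unique solution of a linear problem — the following sketch uses a Chebyshev basis and a least-squares fit (both are illustrative assumptions, not the paper's exact training rule):

```python
import numpy as np

def chebyshev_features(x, n_terms):
    """Evaluate Chebyshev polynomials T_0..T_{n-1} at x via the recurrence
    T_0 = 1, T_1 = x, T_k = 2x T_{k-1} - T_{k-2} (x assumed scaled to [-1, 1])."""
    T = np.ones((len(x), n_terms))
    if n_terms > 1:
        T[:, 1] = x
    for k in range(2, n_terms):
        T[:, k] = 2 * x * T[:, k - 1] - T[:, k - 2]
    return T

# Training reduces to a linear least-squares problem, so the weights are unique.
x = np.linspace(-1, 1, 200)
y = np.sin(np.pi * x)                      # target function to approximate
Phi = chebyshev_features(x, n_terms=10)    # one "processing element" per basis term
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

print("max approximation error:", np.abs(Phi @ w - y).max())
```

Because the problem is linear, there are no initial weights or layer counts to tune; adding basis terms until the residual meets the desired accuracy mirrors the accuracy-driven choice of processing elements described above.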

2.
T. S. Neurocomputing, 2009, 72(16-18): 3915.
The major drawbacks of the backpropagation algorithm are local minima and slow convergence. This paper presents an efficient technique, ANMBP, for training single-hidden-layer neural networks that improves convergence speed and escapes local minima. The algorithm is a modified backpropagation scheme for neighborhood-based neural networks in which fixed learning parameters are replaced with adaptive learning parameters. The developed learning algorithm is applied to several problems, and in all of them it performs well.
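The abstract does not give ANMBP's exact update rule; the sketch below shows one standard way fixed learning parameters can be replaced by adaptive, per-weight ones (a sign-based heuristic in the spirit of delta-bar-delta/Rprop, offered purely as an assumption about the general mechanism):

```python
import numpy as np

def adaptive_step(w, grad, lr, prev_grad, up=1.05, down=0.5,
                  lr_min=1e-5, lr_max=1.0):
    """One adaptive-learning-parameter update: grow each weight's own step
    size while its gradient keeps the same sign, and shrink it when the
    sign flips (a symptom that the previous step overshot a minimum)."""
    same_sign = np.sign(grad) == np.sign(prev_grad)
    lr = np.clip(np.where(same_sign, lr * up, lr * down), lr_min, lr_max)
    return w - lr * grad, lr
```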

3.
The steepest-descent strategy used by the standard BP algorithm to minimize the mean squared error can suffer from two major problems: (1) it may become trapped in a local minimum rather than converging to the global minimum, i.e., it may fail to converge; (2) its convergence rate is slow. From the training-algorithm perspective, this paper compares the convergence behavior and convergence rate of the standard BP algorithm, the momentum algorithm, the variable-learning-rate algorithm, and the Levenberg-Marquardt algorithm, and verifies the comparison through Matlab simulations.

4.
The orthogonal neural network is a recently developed neural network based on the properties of orthogonal functions. It avoids the drawbacks of traditional feedforward neural networks, such as choosing initial weight values and the number of processing elements, and slow convergence. Nevertheless, it needs many processing elements if a small training error is desired, so numerous data sets are required for training. In this article, a least-squares method is proposed to determine the exact weights from limited data sets. Using the Lagrange interpolation method, the data sets required to solve for the exact weights can be calculated. An experiment in approximating typical continuous and discrete functions is given, with the Chebyshev polynomial chosen to generate the processing elements of the orthogonal neural network. The experimental results show that the numerical method for determining the weights yields approximation error as good as the known training method while requiring less convergence time. © 2004 Wiley Periodicals, Inc. Int J Int Syst 19: 1257-1275, 2004.
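A minimal sketch of the limited-data idea, under the assumption that the network output is a linear combination of n Chebyshev terms: evaluating those terms at n distinct nodes yields a square, nonsingular system, so n samples determine the weights exactly (the paper obtains the needed data sets via Lagrange interpolation; direct sampling at Chebyshev nodes is an illustrative simplification):

```python
import numpy as np

def chebyshev_nodes(n):
    """Chebyshev interpolation nodes on [-1, 1]."""
    k = np.arange(n)
    return np.cos((2 * k + 1) * np.pi / (2 * n))

def exact_weights(f, n):
    """Solve for the n weights of a Chebyshev-basis network exactly from
    only n samples: n basis terms evaluated at n distinct nodes give a
    square, invertible design matrix, so no iterative training is needed."""
    x = chebyshev_nodes(n)
    T = np.polynomial.chebyshev.chebvander(x, n - 1)  # n x n matrix
    return np.linalg.solve(T, f(x)), x

w, nodes = exact_weights(np.tanh, n=8)   # 8 samples pin down 8 weights
```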

5.
An ordered neural network and its application to anode effect prediction
邢杰, 萧德云. 《控制工程》, 2007, 14(1): 27-31.
This paper applies an ordered neural network to predicting anode effects in aluminum electrolysis cells. It first reviews aluminum electrolysis cells and the anode effect, and, given the control difficulties of such cells and the shortcomings of traditional methods, adopts the ordered neural network for predicting the probability of an anode effect. The paper describes the basic structure of the ordered neural network, its differences from the traditional single-hidden-layer BP network, and the resulting improvement in mapping capability, and derives a learning algorithm for the network based on gradient descent. The network is trained and tested on field data from aluminum electrolysis cells; the results show that it predicts anode effects more promptly and accurately than traditional neural networks.

6.
The use of neural network models for time series forecasting has been motivated by experimental results that indicate high capacity for function approximation with good accuracy. Generally, these models use activation functions with fixed parameters. However, the choice of activation function strongly influences the complexity and performance of a neural network, and only a limited number of activation functions have been used in general. We describe the use of a family of asymmetric activation functions with a free parameter for neural networks. We prove that the defined activation function family satisfies the requirements of the universal approximation theorem. We present a methodology for global optimization of the activation function family's free parameter and the connections between the processing units of the network. The main idea is to optimize the weights and the activation function used in a multilayer perceptron (MLP) simultaneously, through an approach that combines the advantages of simulated annealing, tabu search, and a local learning algorithm. We have chosen two local learning algorithms: backpropagation with momentum (BPM) and Levenberg-Marquardt (LM). The overall purpose is to improve performance in time series forecasting.
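The paper's specific activation family is not given in the abstract; the sketch below uses a generalized (Richards-type) sigmoid as a stand-in asymmetric family with one free parameter a, shared by the hidden layer so it can be optimized jointly with the weights:

```python
import numpy as np

def asym_sigmoid(x, a):
    """A family of asymmetric sigmoids with free parameter a > 0: for a = 1
    this is the logistic function; for a != 1 the curve is no longer
    symmetric about its inflection point. (Illustrative stand-in only;
    the paper defines its own family.)"""
    return (1.0 + np.exp(-x)) ** (-a)

def mlp_forward(x, W1, b1, W2, b2, a):
    """One-hidden-layer MLP whose hidden units share the free parameter a,
    so a is a trainable quantity alongside the connection weights."""
    return asym_sigmoid(x @ W1 + b1, a) @ W2 + b2
```

A global optimizer in the spirit of the paper would then treat (W1, b1, W2, b2, a) as one candidate solution, perturbing it with simulated annealing/tabu moves and refining it locally with BPM or LM.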

7.
Multifeedback-Layer Neural Network
This paper describes the architecture and training procedure of a novel recurrent neural network (RNN), referred to as the multifeedback-layer neural network (MFLNN). The main difference between the proposed network and existing RNNs is that the temporal relations are provided by neurons arranged in three feedback layers, rather than by simple feedback elements, in order to enrich the representational capability of the network. The feedback layers provide local and global recurrences via nonlinear processing elements: weighted sums of the delayed outputs of the hidden and output layers are passed through certain activation functions and applied to the feedforward neurons via adjustable weights. Both online and offline training procedures based on the backpropagation through time (BPTT) algorithm are developed. The adjoint model of the MFLNN is built to compute the derivatives with respect to the MFLNN weights, which are then used in the training procedures. The Levenberg-Marquardt (LM) method with a trust-region approach is used to update the MFLNN weights. The performance of the MFLNN is demonstrated by applying it to several illustrative temporal problems, including chaotic time series prediction and nonlinear dynamic system identification, on which it performed better than several networks available in the literature.
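A heavily simplified, one-delay sketch of the feedback-layer idea — delayed hidden and output signals pass through their own nonlinearities and re-enter the feedforward path through adjustable weights — is given below; the real MFLNN uses three feedback layers and BPTT/LM training, none of which is reproduced here:

```python
import numpy as np

def mflnn_step(x_t, h_prev, y_prev, p):
    """One time step of a multifeedback-layer style cell (sketch).
    Unlike a plain RNN, the delayed hidden state and output are not fed
    back directly: each first passes through a nonlinear "feedback layer",
    then re-enters the feedforward neurons via adjustable weights."""
    fb_h = np.tanh(p["Ufh"] @ h_prev)   # feedback layer on delayed hidden state
    fb_y = np.tanh(p["Ufy"] @ y_prev)   # feedback layer on delayed output
    h_t = np.tanh(p["Wx"] @ x_t + p["Wh"] @ fb_h + p["Wy"] @ fb_y + p["bh"])
    y_t = p["Wo"] @ h_t + p["bo"]
    return h_t, y_t
```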

8.
Curvature-driven smoothing: a learning algorithm for feedforward networks
The performance of feedforward neural networks in real applications can often be improved significantly if a priori information is used. For interpolation problems this prior knowledge frequently includes smoothness requirements on the network mapping, which can be imposed by adding suitable regularization terms to the error function. The new error function, however, depends on the derivatives of the network mapping, so the standard backpropagation algorithm cannot be applied. In this letter, we derive a computationally efficient learning algorithm, for a feedforward network of arbitrary topology, that can minimize such error functions. Networks with a single hidden layer, for which the learning algorithm simplifies, are treated as a special case.
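As a hedged sketch of such an error function (not the paper's algorithm, which handles the derivative terms exactly with a backpropagation-style pass), the following adds a curvature penalty estimated by central finite differences, assuming a scalar-input network:

```python
import numpy as np

def smoothness_penalized_loss(net, w, x, y, lam, eps=1e-3):
    """Curvature-penalized error function (sketch): the usual squared error
    plus lam times the mean squared second derivative of the network
    mapping, here approximated by central finite differences. Assumes
    net(w, x) is vectorized over a 1-D input array x."""
    f = lambda x_: net(w, x_)
    err = np.mean((f(x) - y) ** 2)
    curvature = (f(x + eps) - 2 * f(x) + f(x - eps)) / eps ** 2
    return err + lam * np.mean(curvature ** 2)
```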

9.
Design and analysis of maximum Hopfield networks
Since McCulloch and Pitts presented a simplified neuron model in 1943, several neuron models have been proposed. Among them, the binary maximum neuron model was introduced by Takefuji et al. and successfully applied to some combinatorial optimization problems. Takefuji et al. also presented a proof of the local-minimum convergence of the maximum neural network. In this paper we discuss this convergence analysis and show that the model does not guarantee the descent of a large class of energy functions. We also propose a new maximum neuron model, the optimal competitive Hopfield model (OCHOM), which always guarantees and maximizes the decrease of any Lyapunov energy function. Funabiki et al. (1997, 1998) applied the maximum neural network to the n-queens problem and showed that it gave the best overall performance among existing neural networks for this problem. Lee et al. (1992) applied the maximum neural network to the bipartite subgraph problem, showing that its solution quality was superior to that of the best existing algorithm. However, simulation results on the n-queens problem and the bipartite subgraph problem show that the OCHOM is much superior to the maximum neural network in both solution quality and computation time.
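The maximum neuron update itself is easy to state: neurons are partitioned into groups, and within each group only the neuron with the largest input potential fires. A minimal sketch follows (OCHOM's energy-decrease-maximizing rule is not reproduced here):

```python
import numpy as np

def maximum_neuron_update(U, group):
    """Maximum neuron model: within each group, only the neuron with the
    largest input potential outputs 1; all others output 0."""
    V = np.zeros_like(U)
    for g in np.unique(group):
        idx = np.flatnonzero(group == g)
        V[idx[np.argmax(U[idx])]] = 1.0
    return V

# e.g. n-queens: one group per row forces exactly one firing neuron per row
U = np.random.randn(16)                 # input potentials of 16 neurons
group = np.repeat(np.arange(4), 4)      # 4 rows of 4 neurons each
V = maximum_neuron_update(U, group)
```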

10.
This article discusses the application of orthogonal neural networks to detecting collisions between multiple robot manipulators that work in an overlapping space. By applying an expansion/shrinkage algorithm, the problem of collision detection between arms is transformed into one among cylinders (or rectangular solids) and line segments. This mapping simplifies the collision detection problem so that neural networks can be applied to solve it. The parallel-processing property enables neural networks to detect collisions rapidly. A single-layer orthogonal neural network is developed to avoid the problems of conventional multilayer feedforward neural networks, such as initial weights and the numbers of layers and processing elements. This orthogonal neural network can approximate various functions and is used to calculate the forward (kinematic) solution and to detect collisions. An efficient neural network system for collision detection is also developed. © 1995 John Wiley & Sons, Inc.

11.
The optical bench training of an optical feedforward neural network, developed by the authors, is presented. The network uses an optical nonlinear material for neuron processing and a trainable applied optical pattern as the network weights. The nonlinear material, with the applied weight pattern, modulates the phase front of a forward-propagating information beam by dynamically altering the refractive-index profile of the material. To verify that the network can be trained in real time, six logic gates were trained using a reinforcement training paradigm. More importantly, to demonstrate optical backpropagation, three gates were trained via optical error backpropagation: the output error is optically backpropagated and detected with a CCD camera, and the weight pattern is updated and stored on a computer. The results lay the groundwork for implementing multilayer neural networks that are trained using optical error backpropagation and can solve more complex problems.

12.
Chaos synchronization problems are addressed in this paper. For chaotic synchronization systems with uncertainties and external disturbances, an orthogonal function neural network is used to achieve the synchronization of chaotic systems. Legendre orthogonal polynomials are selected as the basis functions of the orthogonal function neural network. Using Lyapunov stability theory, an adaptive learning law is derived to guarantee that the tracking errors are bounded. Simulation results show the efficiency of the proposed scheme. Copyright © 2008 John Wiley and Sons Asia Pte Ltd and Chinese Automatic Control Society.
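Only the basis construction is concrete enough to sketch from the abstract: Legendre polynomials generated by Bonnet's recurrence, whose weighted sum forms the network output (the Lyapunov-derived adaptive law itself is not reproduced):

```python
import numpy as np

def legendre_basis(x, n_terms):
    """Legendre polynomials P_0..P_{n-1} at x via Bonnet's recurrence:
    (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x), x in [-1, 1]."""
    P = np.ones((len(x), n_terms))
    if n_terms > 1:
        P[:, 1] = x
    for k in range(1, n_terms - 1):
        P[:, k + 1] = ((2 * k + 1) * x * P[:, k] - k * P[:, k - 1]) / (k + 1)
    return P

# The network output is a weighted sum of these basis functions; an
# adaptive law (not shown) adjusts the weights online.
```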

13.
This paper proposes an improved neural network method for strip flatness pattern recognition. Based on the structural equivalence between the support vector machine (SVM) and the radial basis function (RBF) network, SVM regression is used to determine good initial parameters for the RBF network, which addresses the long training times and the susceptibility to local minima of traditional neural network pattern recognition methods. In addition, because the standard flatness patterns are pairwise antisymmetric (each pattern has an opposite counterpart), the fuzzy distance differences between the input sample and the basic patterns are used as the network input, halving the number of input nodes and further fixing and simplifying the network structure. Experiments show that the method improves flatness recognition accuracy and speed, and it can be extended to other pattern recognition problems whose standard patterns are pairwise antisymmetric.
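A hedged sketch of the initialization idea, using scikit-learn's SVR as a stand-in for the paper's SVM regression (the data below are placeholders): an RBF-kernel SVR's support vectors, dual coefficients, and intercept map one-to-one onto an RBF network's centers, output-layer weights, and bias.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(200, 2))            # placeholder data
y_train = np.sin(np.pi * X_train[:, 0]) * X_train[:, 1]

gamma = 0.5
svr = SVR(kernel="rbf", gamma=gamma).fit(X_train, y_train)

centers = svr.support_vectors_       # become the RBF hidden-unit centers
weights = svr.dual_coef_.ravel()     # become the initial output weights
bias = svr.intercept_[0]

def rbf_net(X):
    """RBF network whose initial parameters come from the SVR solution;
    it reproduces svr.predict(X) exactly and can then be fine-tuned."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2) @ weights + bias
```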

14.
To overcome the local minima of the energy function of the difference Hopfield network, this paper improves it into a linear difference Hopfield network with iterative learning capability. Theoretical analysis shows that the network is stable and that its stable state attains the unique minimum of its energy function. Exploiting the relation between the stability of the linear difference Hopfield network and the convergence of its energy function, the network is applied to the linear quadratic optimal control problem for multivariable time-varying systems. The theoretical design shows that the steady-state output of the network is exactly the desired optimal control vector. Numerical simulations agree with the theoretical analysis.

15.
Multilayer neural networks with nonmonotonic activation functions
This paper examines the performance of feedforward multilayer neural networks that use nonmonotonic functions as the activation functions of their hidden-layer nodes, and analyzes two long-standing difficulties of neural networks: local minima and slow convergence. The analysis shows that networks with nonmonotonic activation functions can largely avoid the local-minimum problem. Experimental results also show that such networks outperform networks that use sigmoid activation functions.
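A minimal sketch of the key ingredient: a nonmonotonic activation that rises, peaks, and falls again, used as a drop-in replacement for the sigmoid in a hidden layer (the Gaussian-modulated ramp below is an illustrative choice, not the paper's function):

```python
import numpy as np

def nonmonotonic(x):
    """A simple nonmonotonic activation: x * exp(-x^2) increases, peaks,
    and decreases again, unlike the monotonic sigmoid."""
    return x * np.exp(-x ** 2)

def hidden_layer(x, W, b):
    # Drop-in replacement for a sigmoid hidden layer in an MLP.
    return nonmonotonic(x @ W + b)
```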

16.
A gradient descent algorithm suitable for training multilayer feedforward networks of processing units with hard-limiting output functions is presented. The conventional backpropagation algorithm cannot be applied in this case because the required derivatives are not available. However, if the network weights are random variables with smooth distribution functions, the probability that a hard-limiting unit takes one of its two possible values is a continuously differentiable function. In the paper, this fact is used to develop an algorithm similar to backpropagation but applicable to the hard-limiting case. It is shown that the computational framework of this algorithm is similar to standard backpropagation, with an additional computational expense for estimating the gradients; upper bounds on this estimation penalty are given. Two examples are presented which indicate that, when this algorithm is used to train networks of hard-limiting units, its performance is similar to that of conventional backpropagation applied to networks of units with sigmoidal characteristics.
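The key observation can be sketched for a single unit: if the weights are Gaussian, w ~ N(mu, sigma^2 I), then the pre-activation w.x is Gaussian too, so the firing probability of a hard limiter is a smooth function of mu and can be differentiated (a sketch of the principle only; the paper's full algorithm estimates such gradients through an entire network, at the extra cost it bounds):

```python
import numpy as np
from scipy.stats import norm

def firing_probability(x, mu, sigma):
    """With Gaussian weights w ~ N(mu, sigma^2 I), the pre-activation w.x
    is N(mu.x, sigma^2 ||x||^2), so P(step(w.x) = +1) = Phi(mu.x / (sigma ||x||)),
    a continuously differentiable function of mu."""
    scale = sigma * np.linalg.norm(x)
    return norm.cdf(np.dot(mu, x) / scale)

def grad_wrt_mu(x, mu, sigma):
    # Chain rule: d/dmu Phi(mu.x / (sigma||x||)) = phi(z) * x / (sigma||x||),
    # which replaces the non-existent derivative of the hard limiter.
    scale = sigma * np.linalg.norm(x)
    z = np.dot(mu, x) / scale
    return norm.pdf(z) * x / scale
```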

17.
Multilayer perceptrons are successfully used in an increasing number of nonlinear signal processing applications. The backpropagation learning algorithm, or variations thereof, is the standard method applied to the nonlinear optimization problem of adjusting the weights in the network to minimize a given cost function. However, backpropagation, as a steepest-descent approach, is too slow for many applications. In this paper a new learning procedure is presented that is based on a linearization of the nonlinear processing elements and optimization of the multilayer perceptron layer by layer. To limit the introduced linearization error, a penalty term is added to the cost function. The new learning algorithm is applied to the nonlinear prediction of chaotic time series, where it yields accuracy and convergence rates that are orders of magnitude superior to those of conventional backpropagation learning.

18.
The Bartels–Stewart algorithm is an effective and widely used method, with O(n³) time complexity, for solving a static Sylvester equation. When applied to a time-varying Sylvester equation, however, its computational burden grows rapidly as the sampling period decreases and cannot satisfy continuous real-time requirements. Gradient-based recurrent neural networks can solve the time-varying Sylvester equation in real time, but there is always an estimation error. In contrast, the recently proposed Zhang neural network has been proven to converge to the solution of the Sylvester equation exactly as time goes to infinity. However, with the previously suggested activation functions this network never converges to the desired value in finite time, which may limit its applications in real-time processing. To tackle this problem, a sign-bi-power activation function is proposed in this paper to accelerate the Zhang neural network to finite-time convergence. The global convergence and finite-time convergence properties are proven in theory, and the upper bound of the convergence time is derived analytically. Simulations are performed to evaluate the performance of the neural network with the proposed activation function. In addition, the proposed strategy is applied to online computation of the matrix pseudo-inverse and to nonlinear control of an inverted-pendulum system. Both theoretical analysis and numerical simulations validate the effectiveness of the proposed activation function.
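A hedged sketch of the Zhang neural network (ZNN) design with a sign-bi-power activation, assuming A(t), B(t), C(t) and their time derivatives are supplied as callables: imposing dE/dt = -gamma * phi(E) on the error E = AX + XB - C yields a static Sylvester equation in X_dot at each instant, integrated here by a simple Euler step.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def sign_bi_power(E, r=0.5):
    """Sign-bi-power activation, applied elementwise:
    phi(e) = |e|^r sign(e) + |e|^(1/r) sign(e), with 0 < r < 1.
    This is the activation credited with finite-time convergence."""
    return np.sign(E) * (np.abs(E) ** r + np.abs(E) ** (1.0 / r))

def znn_step(X, t, dt, A, B, C, dA, dB, dC, gamma=10.0):
    """One Euler step of the ZNN for A(t)X + XB(t) = C(t) (sketch).
    From E = AX + XB - C and dE/dt = -gamma*phi(E):
        A X_dot + X_dot B = -gamma*phi(E) - (dA X + X dB - dC),
    a static Sylvester equation solved for X_dot at each instant."""
    E = A(t) @ X + X @ B(t) - C(t)
    rhs = -gamma * sign_bi_power(E) - (dA(t) @ X + X @ dB(t) - dC(t))
    X_dot = solve_sylvester(A(t), B(t), rhs)   # solves A X_dot + X_dot B = rhs
    return X + dt * X_dot

# Usage: iterate X = znn_step(X, t, dt, A, B, C, dA, dB, dC) over a time grid.
```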

19.
Neural networks have become an effective class of methods for nonlinear system identification, but the commonly used multilayer perceptron suffers from poor network stability and slow convergence. The Fourier neural network, built on the multilayer perceptron and the Fourier series, offers good generalization and pattern recognition capability, but its learning algorithm is mainly steepest descent, which is prone to local minima and slow learning. This paper proposes a Fourier neural network trained with a double broken-line step method, which avoids the local-minimum problem and achieves second-order convergence. Numerical examples verify the performance of the new algorithm, which is then applied to nonlinear system identification problems and compared with several classical neural network algorithms.
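The Fourier network's hidden layer is concrete enough to sketch: fixed sine/cosine harmonics whose output weights enter linearly (fitted here by least squares purely for illustration; the paper's double broken-line step method is a second-order iterative scheme not reproduced here):

```python
import numpy as np

def fourier_features(x, n_harmonics, omega=np.pi):
    """Hidden layer of a Fourier neural network: fixed sin/cos harmonics.
    The output weights then enter linearly, which is a main source of the
    better-behaved training compared with a generic MLP."""
    k = np.arange(1, n_harmonics + 1)
    return np.hstack([np.ones((len(x), 1)),
                      np.cos(omega * np.outer(x, k)),
                      np.sin(omega * np.outer(x, k))])

x = np.linspace(-1, 1, 300)
y = np.sign(np.sin(2 * np.pi * x))          # a nonlinear map to identify
Phi = fourier_features(x, n_harmonics=12)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
```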

20.
Analysis and improvement of the BP algorithm
贾丽会, 张修如. 《微机发展》, 2006, 16(10): 101-103.
Among artificial neural networks, the BP network is a widely used multilayer feedforward neural network. This paper analyzes the basic principles of the BP algorithm, points out its defects, such as slow convergence and a tendency to fall into local minima, and traces the sources of these defects. To address them, several methods are introduced into the standard BP algorithm: variable step size, momentum terms, genetic algorithms, and simulated annealing. Experimental results show that these methods effectively improve the convergence of the BP algorithm and help it avoid local minima.
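Two of the listed remedies are simple enough to sketch directly; the momentum and variable-step updates below are textbook forms offered as illustrations (the genetic-algorithm and simulated-annealing variants are omitted):

```python
import numpy as np

def bp_momentum_step(w, grad, velocity, lr, mu=0.9):
    """Momentum term: blend the current gradient with the previous step,
    which damps oscillation in narrow valleys and speeds convergence."""
    velocity = mu * velocity - lr * grad
    return w + velocity, velocity

def variable_step(lr, loss, prev_loss, up=1.05, down=0.7):
    """Variable step size: grow the learning rate while the loss keeps
    decreasing, cut it back when the loss rises (step was too large)."""
    return lr * (up if loss < prev_loss else down)
```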
