Similar Documents
20 similar documents found.
1.
Proposed in this paper is a new conjugate gradient method with smoothing \(L_{1/2}\) regularization, based on a modified secant equation, for training neural networks, where a descent search direction is generated by selecting an adaptive learning rate under the strong Wolfe conditions. Two adaptive parameters are introduced such that the new training method possesses both the quasi-Newton property and the sufficient descent property. As the numerical experiments on five benchmark classification problems from the UCI repository show, compared with other conjugate gradient training algorithms, the new algorithm has roughly the same or even better learning capacity, but significantly better generalization capacity and network sparsity. Under mild assumptions, a global convergence result for the proposed training method is also proved.
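
As a concrete illustration, the sketch below trains with conjugate-gradient directions plus a smoothed \(L_{1/2}\) penalty. It is a minimal stand-in, not the paper's method: the surrogate \((w^2+\epsilon)^{1/4}\), the PR+ beta, and the small fixed step are assumptions replacing the paper's own smoothing function, modified-secant beta, and strong-Wolfe line search.

```python
import numpy as np

def smoothed_l12(w, eps=1e-4):
    # smooth surrogate of sum_i |w_i|^(1/2); (w^2+eps)^(1/4) is an assumption
    return np.sum((w**2 + eps) ** 0.25)

def smoothed_l12_grad(w, eps=1e-4):
    return 0.5 * w * (w**2 + eps) ** (-0.75)

def cg_train(loss_grad, w, lam=1e-3, lr=1e-3, iters=500):
    """Conjugate gradient on loss(w) + lam * smoothed_l12(w)."""
    g = loss_grad(w) + lam * smoothed_l12_grad(w)
    d = -g
    for _ in range(iters):
        w = w + lr * d                    # small fixed step stands in for a Wolfe search
        g_new = loss_grad(w) + lam * smoothed_l12_grad(w)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # PR+ rule (stand-in)
        d = -g_new + beta * d
        if g_new @ d > 0:                 # restart to keep a descent direction
            d = -g_new
        g = g_new
    return w

# toy least-squares loss; the penalty pushes small coordinates toward zero
A = np.random.randn(20, 10); b = np.random.randn(20)
w = cg_train(lambda w: 2 * A.T @ (A @ w - b), np.zeros(10))
```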

2.
This paper shows the analysis and design of feedforward neural networks using the coordinate-free system of Clifford or geometric algebra. It is shown that real-, complex-, and quaternion-valued neural networks are simply particular cases of geometric algebra multidimensional neural networks, and that some of them can also be generated using support multivector machines (SMVMs). In particular, the generation of radial basis functions for neurocomputing in geometric algebra is easier using the SMVM, which allows one to find the optimal parameters automatically. The use of support vector machines in the geometric algebra framework expands their sphere of applicability to multidimensional learning. Interesting examples of nonlinear problems show the effect of using an adequate Clifford geometric algebra, which alleviates the training of neural networks and of SMVMs.
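
To make the "special case" claim tangible, here is a quaternion-valued neuron, one of the particular cases the abstract mentions. The split tanh activation is one common convention among several, assumed here for simplicity; the paper's geometric-algebra networks generalize this to arbitrary Clifford algebras.

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = p; w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quaternion_neuron(inputs, weights, bias):
    """s = sum_i w_i * x_i + b with Hamilton products, then a 'split'
    activation applying tanh componentwise (an assumed convention)."""
    s = bias.copy()
    for w, x in zip(weights, inputs):
        s = s + qmul(w, x)
    return np.tanh(s)

rng = np.random.default_rng(0)
x = [rng.standard_normal(4) for _ in range(3)]
w = [rng.standard_normal(4) for _ in range(3)]
print(quaternion_neuron(x, w, np.zeros(4)))
```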

3.
This paper presents a novel neural network model with a hybrid quantized architecture to improve the performance of conventional Elman networks. The quantum gate technique is introduced to resolve the pattern mismatch between the input stream and the one-time-delay state feedback. A quantized back-propagation training algorithm with an adaptive dead-zone scheme is developed to provide an optimal or suboptimal tradeoff between convergence speed and generalization performance. Furthermore, the effectiveness of the new real-time learning algorithm is demonstrated by proving convergence of the quantum gate parameters via the Lyapunov method. Numerical experiments are carried out to confirm the accuracy of the theoretical results.

4.
We introduce an advanced supervised training method for neural networks. It is based on Jacobian rank deficiency and is formulated, in some sense, in the spirit of the Gauss-Newton algorithm. The Levenberg-Marquardt algorithm, as a modified Gauss-Newton method, has been used successfully in solving nonlinear least-squares problems, including neural network training. It significantly outperforms basic backpropagation and its variable-learning-rate variants, but at higher computation and memory cost per iteration. The new method developed in this paper aims at improving convergence properties while reducing the memory and computation complexities of supervised training of neural networks. Extensive simulation results are provided to demonstrate the superior performance of the new algorithm over the Levenberg-Marquardt algorithm.
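
For context, the standard Levenberg-Marquardt step that this paper improves upon is compact enough to sketch; the toy below fits y = exp(a x) with a single parameter. The fixed damping mu and the problem itself are illustrative assumptions (real LM adapts mu each iteration, and the paper's rank-deficient factorization is not reproduced here).

```python
import numpy as np

def lm_step(jacobian, residual, w, mu=1e-2):
    """One Levenberg-Marquardt step: w <- w - (J^T J + mu I)^{-1} J^T r."""
    J, r = jacobian(w), residual(w)
    H = J.T @ J + mu * np.eye(J.shape[1])   # damped Gauss-Newton Hessian
    return w - np.linalg.solve(H, J.T @ r)

# toy problem: recover a = 1.7 in y = exp(a x)
x = np.linspace(0.0, 1.0, 30); y = np.exp(1.7 * x)
residual = lambda w: np.exp(w[0] * x) - y
jacobian = lambda w: (x * np.exp(w[0] * x)).reshape(-1, 1)
w = np.array([0.5])
for _ in range(20):
    w = lm_step(jacobian, residual, w)
```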

5.
Barhen, J., Gulati, S., Zak, M. 《Computer》1989, 22(6): 67-76
Two issues that are fundamental to developing autonomous intelligent robots, namely rudimentary learning capability and dexterous manipulation, are examined. A powerful neural learning formalism is introduced for addressing a large class of nonlinear mapping problems, including redundant-manipulator inverse kinematics, commonly encountered during the design of real-time adaptive control mechanisms. Artificial neural networks with terminal attractor dynamics are used. The rapid network convergence resulting from the infinite local stability of these attractors allows the development of fast neural learning algorithms. Approaches to manipulator inverse kinematics are reviewed, the neurodynamics model is discussed, and the neural learning algorithm is presented.
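
The key property of a terminal attractor, finite-time rather than merely exponential convergence, is easy to see numerically. The sketch below is my illustration with an assumed exponent 1/3 and gain k: it integrates dx/dt = -k sign(x) |x|^(1/3), which reaches zero at t = (3/(2k)) x0^(2/3) instead of only asymptotically.

```python
import numpy as np

def terminal_attractor(x0, k=1.0, dt=1e-3, steps=5000):
    """Euler-integrate dx/dt = -k * sign(x) * |x|^(1/3).
    Unlike dx/dt = -k*x (exponential decay, never exactly zero), this
    trajectory hits x = 0 in finite time -- the property that the fast
    neural learning algorithms exploit."""
    x, traj = x0, [x0]
    for _ in range(steps):
        x += dt * (-k * np.sign(x) * abs(x) ** (1.0 / 3.0))
        traj.append(x)
    return np.array(traj)

traj = terminal_attractor(1.0)
# analytic time-to-zero is 1.5 here; the state is numerically pinned
# near zero well before the 5-second horizon
print(abs(traj[2000]))   # index 2000 corresponds to t = 2.0 > 1.5
```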

6.
This paper presents a synchronization scheme for a class of delayed neural networks, which covers Hopfield neural networks and cellular neural networks with time-varying delays. A feedback control gain matrix is derived to achieve exponential synchronization of the drive-response structure of the neural networks by using Lyapunov stability theory, and the exponential synchronization condition can be verified by checking that a certain Hamiltonian matrix has no eigenvalues on the imaginary axis. This condition avoids solving an algebraic Riccati equation. Both cellular neural networks and Hopfield neural networks with time-varying delays are given as examples for illustration.
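
To illustrate the drive-response idea only (not the paper's Hamiltonian-matrix gain design), the sketch below couples two identical Hopfield-type networks through a simple proportional feedback K = k*I. The scalar gain is a hypothetical choice and delays are omitted; the paper derives K rigorously for the delayed case.

```python
import numpy as np

def sync_error(k=5.0, dt=1e-3, steps=2000, seed=0):
    """Drive x' = -x + W tanh(x); response y' = -y + W tanh(y) + K(x - y).
    With a sufficiently large scalar gain k (an assumption here),
    the error ||x - y|| decays exponentially."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((3, 3))
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    for _ in range(steps):
        x = x + dt * (-x + W @ np.tanh(x))
        y = y + dt * (-y + W @ np.tanh(y) + k * (x - y))
    return np.linalg.norm(x - y)

print(sync_error())   # small residual: the response tracks the drive
```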

7.
This paper introduces ANASA (adaptive neural algorithm of stochastic activation), a new, efficient reinforcement learning algorithm for training neural units and networks with continuous output. The proposed method employs concepts found in self-organizing neural network theory and in reinforcement-estimator learning algorithms to extract and exploit information from previous input pattern presentations. In addition, it uses an adaptive learning-rate function and a self-adjusting stochastic activation to accelerate the learning process. A form of optimal performance of the ANASA algorithm is proved (under a set of assumptions) via strong convergence theorems. Experimentally, the new algorithm yields results superior to existing associative reinforcement learning methods in terms of accuracy and convergence rate. The rapid convergence of ANASA is demonstrated in a simple learning task, when it is used as a single neural unit, and in mathematical function-modeling problems, when it is used to train various multilayered neural networks.

8.
This paper investigates new learning algorithms (LF I and LF II) based on a Lyapunov function for training feedforward neural networks. Such algorithms have an interesting parallel with the popular backpropagation (BP) algorithm: the fixed learning rate is replaced by an adaptive learning rate computed from a convergence theorem based on Lyapunov stability theory. LF II, a modified version of LF I, is introduced with the aim of avoiding local minima; this modification also improves the convergence speed in some cases. Conditions for achieving the global minimum with this kind of algorithm are studied in detail. The performance of the proposed algorithms is compared with the BP algorithm and extended Kalman filtering (EKF) on three benchmark function approximation problems: XOR, 3-bit parity, and the 8-3 encoder. The comparisons are made in terms of the number of learning iterations and the computational time required for convergence. The proposed algorithms (LF I and II) converge much faster than the other two algorithms to the same accuracy. Finally, a comparison is made on a complex two-dimensional (2-D) Gabor function, where the effect of the adaptive learning rate on faster convergence is verified. In a nutshell, the investigations in this paper help us better understand the learning procedure of feedforward neural networks in terms of adaptive learning rate, convergence speed, and local minima.
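
The core idea, choosing a learning rate so that a Lyapunov function V = ½||e||² provably decreases, can be sketched in a few lines. The rate formula below (scale the step so the predicted drop in V is a fraction of V itself) is my stand-in, not the paper's exact LF I / LF II formulas.

```python
import numpy as np

def lf_style_step(w, err, jac, eta0=0.1, eps=1e-8):
    """One step with a Lyapunov-motivated adaptive learning rate.
    V(w) = 0.5 ||e||^2 with e = target - output; d = J^T e is a descent
    direction for V, and eta is scaled by ||e||^2 / ||d||^2 so that for
    small eta0 the decrease of V is guaranteed (assumed rate formula)."""
    e = err(w)
    d = jac(w).T @ e
    eta = eta0 * (e @ e) / (d @ d + eps)
    return w + eta * d

# toy linear model: V shrinks monotonically toward zero
X = np.random.randn(40, 5); w_true = np.random.randn(5); y = X @ w_true
w = np.zeros(5)
for _ in range(300):
    w = lf_style_step(w, lambda w: y - X @ w, lambda w: X)
print(0.5 * np.sum((y - X @ w) ** 2))
```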

9.
An RBF Neural Network Learning Algorithm Based on the Extended Kalman Filter
Radial basis function (RBF) neural networks are widely applied to signal processing and pattern recognition problems, and several learning algorithms exist for determining the RBF centers and training the network. Since determining the RBF center vectors and the network weights can be treated as a single system problem, this paper applies the extended Kalman filter (EKF) as the learning algorithm for multi-input multi-output RBF networks: once the number of hidden nodes is fixed, the EKF estimates the center vectors and the weight matrix simultaneously. To speed up convergence, an EKF with a suboptimal fading factor (SFEKF) is further proposed as the RBF learning algorithm. Simulation results show that using the EKF during learning gives better results than the conventional RBF network, converges markedly faster than gradient descent, and reduces the computational burden.
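
A minimal sketch of the joint-state idea, treating centers and weights as one EKF state vector, follows. Fixed unit RBF widths, a scalar output, and the noise levels q and r are my assumptions, and the fading-factor (SFEKF) refinement is omitted.

```python
import numpy as np

def rbf_output(theta, x, n_centers, dim):
    """Scalar-output RBF net; theta packs centers (n_centers*dim) then weights."""
    c = theta[:n_centers * dim].reshape(n_centers, dim)
    w = theta[n_centers * dim:]
    phi = np.exp(-np.sum((x - c) ** 2, axis=1))   # fixed unit widths (assumption)
    return w @ phi, phi, c, w

def ekf_step(theta, P, x, y, n_centers, dim, q=1e-6, r=1e-2):
    """Joint EKF update of centers and weights as one state vector."""
    yhat, phi, c, w = rbf_output(theta, x, n_centers, dim)
    Hc = (2.0 * (x - c) * (w * phi)[:, None]).ravel()   # d(yhat)/d(centers)
    H = np.concatenate([Hc, phi])                       # d(yhat)/d(theta)
    S = H @ P @ H + r                                   # innovation variance
    K = P @ H / S                                       # Kalman gain
    theta = theta + K * (y - yhat)
    P = P - np.outer(K, H @ P) + q * np.eye(len(theta))
    return theta, P

# fit a toy 2-D target online
n, d = 5, 2
theta = 0.1 * np.random.randn(n * d + n)
P = np.eye(len(theta))
for _ in range(500):
    x = np.random.randn(d)
    y = np.sin(x[0]) * np.cos(x[1])
    theta, P = ekf_step(theta, P, x, y, n, d)
```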

10.
Eduardo, Refugio 《Pattern recognition》2003, 36(12): 2909-2926
This paper shows the analysis and design of feed-forward neural networks using the coordinate-free system of Clifford or geometric algebra. It is shown that real-, complex-, and quaternion-valued neural networks are simply particular cases of geometric algebra multidimensional neural networks, and that they can be generated using Support Multi-Vector Machines (SMVMs). In particular, the generation of RBFs for neurocomputing in geometric algebra is easier using the SMVM, which allows one to find the optimal parameters automatically. The use of SVMs in the geometric algebra framework expands their sphere of applicability to multidimensional learning.

We introduce a novel method of geometric preprocessing utilizing hypercomplex or Clifford moments. This method is applied together with geometric MLPs for tasks of 2D pattern recognition. Interesting examples of nonlinear problems, like grasping an object along a nonlinear curve and 3D pose recognition, show the effect of using adequate Clifford or geometric algebras, which alleviate the training of neural networks and of Support Multi-Vector Machines.


11.
In view of their great potential for parallel processing and ready hardware implementation, neural networks are now often employed to solve online nonlinear matrix equation problems. Recently, a novel class of neural networks, termed the Zhang neural network (ZNN), was formally proposed by Zhang et al. for solving online time-varying problems. Such a neural-dynamic system is designed by defining an indefinite matrix-valued error-monitoring function, called the Zhang function (ZF); the dynamical system is then cast as a first-order differential equation in matrix notation. In this paper, different indefinite ZFs, which lead to different ZNN models, are proposed and developed as error-monitoring functions for finding time-varying matrix square roots. Towards the final goal of field-programmable gate array (FPGA) and application-specific integrated circuit (ASIC) realization, MATLAB Simulink modeling and verification of these ZNN models are further investigated for the online solution of time-varying matrix square roots. Both theoretical analysis and modeling results substantiate the efficacy of the proposed ZNN models.
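
One standard ZF for square roots is E(t) = X(t)² − A(t); imposing dE/dt = −γE turns the design into a Sylvester equation for dX/dt. The sketch below is an illustration under assumptions: a linear activation on E (the papers also study nonlinear ones) and a hypothetical A(t).

```python
import numpy as np
from scipy.linalg import solve_sylvester

def znn_sqrt_step(X, A, Adot, gamma=10.0, dt=1e-3):
    """One Euler step of a ZNN for time-varying matrix square roots.
    ZF: E = X^2 - A(t). Imposing dE/dt = -gamma*E (linear activation)
    gives X*Xdot + Xdot*X = Adot - gamma*E, a Sylvester equation in Xdot."""
    E = X @ X - A
    Xdot = solve_sylvester(X, X, Adot - gamma * E)
    return X + dt * Xdot

# track the root of a hypothetical A(t) = (4 + sin t) * I
n, t, dt = 3, 0.0, 1e-3
X = 2.0 * np.eye(n)                       # near sqrt(A(0)) = 2I
for _ in range(2000):
    A = (4.0 + np.sin(t)) * np.eye(n)
    Adot = np.cos(t) * np.eye(n)
    X = znn_sqrt_step(X, A, Adot, dt=dt)
    t += dt
print(np.linalg.norm(X @ X - A))          # residual decays at rate ~gamma
```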

12.
In many applications a class of optimization problems called quadratic programming with a special quadratic constraint (QPQC) arises, for example in maximum entropy spectral estimation, FIR filter design with time-frequency constraints, and the design of FIR filter banks with the perfect reconstruction property. To deal with this kind of optimization problem, and inspired by the computational virtues of analog or dynamic neural networks, this paper proposes a feedback neural network for solving this class of QPQC problems in real time. The stability, convergence, and computational performance of the proposed neural network are analyzed and proved in detail, theoretically guaranteeing the computational effectiveness and capability of the network. The theoretical analysis shows that the solution of a QPQC problem is exactly the generalized minimum eigenvector of the objective matrix with respect to the constraint matrix. A number of simulation experiments further support the theoretical analysis and illustrate the computational performance of the proposed network.
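
That key fact is easy to check directly with a dense generalized eigensolver (the paper's network computes the same quantity recurrently instead). A small verification sketch with randomly generated matrices:

```python
import numpy as np
from scipy.linalg import eigh

# For min x^T A x subject to x^T B x = 1 (with B positive definite),
# the minimizer is the generalized eigenvector of (A, B) with the
# smallest eigenvalue, and the optimal value is that eigenvalue.
rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5)); A = M + M.T                   # symmetric objective
N = rng.standard_normal((5, 5)); B = N @ N.T + 5 * np.eye(5)   # SPD constraint matrix
vals, vecs = eigh(A, B)      # solves A v = lambda B v, vals ascending
x = vecs[:, 0]               # minimum generalized eigenvector (x^T B x = 1)
print(x @ A @ x, vals[0])    # equal: the objective attains lambda_min
```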

13.
This paper studies the global output convergence of a class of recurrent delayed neural networks with time-varying inputs. We consider non-decreasing activations which may also have jump discontinuities, in order to model the ideal situation where the gain of the neuron amplifiers is very high and tends to infinity. In particular, we drop the assumptions of Lipschitz continuity and boundedness on the activation functions, which are usually required in most existing works. Due to the possible discontinuities of the activation functions, we introduce a suitable notion of limit to study the convergence of the output of the recurrent delayed neural networks. Under suitable assumptions on the interconnection matrices and the time-varying inputs, we establish a sufficient condition for global output convergence of this class of neural networks. The convergence results are useful for solving some optimization problems and for designing recurrent delayed neural networks with discontinuous neuron activations.

14.
Fuzzy Clustering Using A Compensated Fuzzy Hopfield Network
Hopfield neural networks are well known for cluster analysis with an unsupervised learning scheme. This class of networks is a set of heuristic procedures that suffers from several problems, such as convergence that is not guaranteed and output that depends on the order of the input data. In this paper, a Compensated Fuzzy Hopfield Neural Network (CFHNN) is proposed, which integrates a Compensated Fuzzy C-Means (CFCM) model into the learning scheme and updating strategies of the Hopfield neural network. The CFCM, modified from the Penalized Fuzzy C-Means (PFCM) algorithm, is embedded into a Hopfield net to avoid the NP-hard problem and to speed up convergence of the clustering procedure. The proposed network also avoids having to determine values for the weighting factors in the energy function. In addition, its training scheme enables the network to learn more rapidly and more effectively than FCM and PFCM. In experimental results, the CFHNN method shows promising results in comparison with the FCM and PFCM methods.
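
For orientation, the plain fuzzy c-means baseline that PFCM and CFCM extend (with penalty and compensation terms) and that the CFHNN embeds into a Hopfield energy is sketched below; the extensions themselves are not reproduced.

```python
import numpy as np

def fcm(X, k, m=2.0, iters=50, seed=0):
    """Plain fuzzy c-means: alternate weighted-center and membership
    updates. Returns memberships U (n x k, rows sum to 1) and centers V."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(k), size=len(X))
    for _ in range(iters):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]              # fuzzy centers
        d2 = ((X[:, None, :] - V[None]) ** 2).sum(-1) + 1e-12  # squared distances
        w = d2 ** (-1.0 / (m - 1.0))                          # standard FCM rule
        U = w / w.sum(axis=1, keepdims=True)
    return U, V

# two well-separated blobs: memberships become nearly crisp
X = np.vstack([np.random.randn(30, 2), np.random.randn(30, 2) + 6.0])
U, V = fcm(X, k=2)
```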

15.
王作为, 徐征, 张汝波, 洪才森, 王殊 《智能系统学报》2020, 15(5): 835-846
Memory neural networks are well suited to sequential decision problems, and applying them to robot navigation is a promising emerging research area. This paper discusses the research progress of memory neural networks in robot navigation. It presents the working mechanisms of several basic memory neural networks combined with navigation tasks and summarizes the advantages and disadvantages of the different models; it briefly surveys research progress on memory neural networks in the navigation field; it further introduces the development of navigation validation environments; finally, it reviews the complexity challenges that memory neural networks face in navigation problems and predicts future directions for memory neural networks in the navigation field.

16.
This paper studies the output convergence of a class of recurrent neural networks with time-varying inputs. The model studied has a different dynamic structure from the well-known Hopfield model: it does not contain linear terms. Since different structures of differential equations usually result in quite different dynamic behaviors, the convergence of this model is quite different from that of the Hopfield model. This class of neural networks has found many successful applications in solving optimization problems. Some sufficient conditions guaranteeing output convergence of the networks are derived.

17.
Recently, a neutral-type delayed projection neural network (NDPNN) was developed for solving variational inequality problems. This paper addresses the global stability and convergence of the NDPNN and presents new results for solving linear variational inequalities (LVIs) with it. Compared with existing convergence results for neural networks solving LVIs, our results do not require the LVI to be monotone, so the NDPNN can solve a class of non-monotone LVIs. All the results are expressed in terms of linear matrix inequalities, which can be easily checked. Simulation examples demonstrate the effectiveness of the obtained results.

18.
19.
In this paper, a new recurrent neural network is proposed for solving convex quadratic programming (QP) problems. Compared with existing neural networks, the proposed one features a global convergence property under weak conditions, low structural complexity, and no matrix-inverse calculation. It serves as a competitive alternative in the neural network family for solving linear or quadratic programming problems. In addition, it is found that by some variable substitution, the proposed network turns out to be an existing model for solving minimax problems; in this sense, it can also be viewed as a special case of the minimax neural network. Based on this scheme, a k-winners-take-all (k-WTA) network with O(n) complexity is designed, characterized by a simple structure, global convergence, and the capability to deal with some ill cases. Numerical simulations are provided to validate the theoretical results. More importantly, the network design method proposed in this paper has great potential to inspire other competitive inventions along the same line.
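
The k-WTA behavior can be reproduced with a single-threshold recurrent dynamic, sketched below. This simple model with O(n) cost per step is my stand-in for illustration, not necessarily the network proposed in the paper.

```python
import numpy as np

def kwta(u, k, gain=100.0, dt=1e-3, steps=5000):
    """Single-threshold recurrent k-WTA dynamics:
        x_i = clip(gain*(u_i - y), 0, 1),   dy/dt = sum_i x_i - k.
    At equilibrium y settles between the k-th and (k+1)-th largest
    inputs, so exactly the k largest outputs saturate near 1."""
    y = 0.0
    for _ in range(steps):
        x = np.clip(gain * (u - y), 0.0, 1.0)
        y += dt * (x.sum() - k)
    return np.clip(gain * (u - y), 0.0, 1.0)

u = np.array([0.3, 0.9, 0.1, 0.7, 0.5])
print(np.round(kwta(u, k=2)))   # -> [0. 1. 0. 1. 0.]: two largest inputs win
```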

20.
This paper deals with the stability and stabilization of discrete-time neural networks. The neural structures under consideration belong to the class of so-called locally recurrent globally feedforward networks. Each single processing unit possesses dynamic behavior, realized by introducing into the neuron structure a linear dynamic system in the form of an infinite impulse response filter; in this way, a dynamic neural network is obtained. It is well known that the crucial problem with neural networks of the dynamic type is stability, as well as stabilization during learning. The paper formulates stability conditions for the analyzed class of neural networks. Moreover, a stabilization problem is defined and solved as a constrained optimization task. To tackle this problem, two methods are proposed: the first is based on gradient projection (GP) and the second on minimum-distance projection (MDP). It is worth noting that these methods can easily be introduced into an existing learning algorithm as an additional step, and suitable convergence conditions can be developed for them. The efficiency and usefulness of the proposed approaches are justified by a number of experiments, including numerical complexity analysis, stabilization effectiveness, and the identification of an industrial process.
