Similar Articles (20 results)
1.
Presents a recurrent neural network for solving the Sylvester equation with time-varying coefficient matrices. The recurrent neural network with implicit dynamics is deliberately designed so that its trajectory is guaranteed to converge exponentially to the time-varying solution of a given Sylvester equation. Theoretical results on convergence and sensitivity analysis are presented to show the desirable properties of the recurrent neural network. Simulation results on time-varying matrix inversion and on online nonlinear output regulation via pole assignment for the ball-and-beam system and the inverted-pendulum-on-a-cart system are also included to demonstrate the effectiveness and performance of the proposed neural network.
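As a hedged, minimal sketch of the ZNN design recipe described in this abstract (a scalar analogue, not the paper's matrix-valued Sylvester model), one can track the time-varying solution of a(t)x(t) = c(t): define the error e = a·x − c, impose ė = −γe, and integrate the resulting dynamics. The functions a, c and the gain γ below are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

GAMMA = 50.0  # convergence gain (illustrative choice)

def a(t):  return 2.0 + np.sin(t)      # time-varying coefficient, bounded away from 0
def da(t): return np.cos(t)            # its analytic time derivative
def c(t):  return np.cos(2.0 * t)      # time-varying right-hand side
def dc(t): return -2.0 * np.sin(2.0 * t)

def znn_rhs(t, x):
    # e(t) = a(t)*x - c(t); imposing de/dt = -GAMMA*e and solving for dx/dt:
    #   a*dx/dt + da*x - dc = -GAMMA*e  =>  dx/dt = (dc - da*x - GAMMA*e) / a
    e = a(t) * x[0] - c(t)
    return [(dc(t) - da(t) * x[0] - GAMMA * e) / a(t)]

sol = solve_ivp(znn_rhs, (0.0, 5.0), [10.0], rtol=1e-8, atol=1e-10)
x_final = sol.y[0, -1]
exact = c(5.0) / a(5.0)   # theoretical time-varying solution at t = 5
```

Despite starting far from the solution (x(0) = 10), the trajectory locks onto the moving solution exponentially fast, which is the key property the abstract claims for the matrix case.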

2.
Differing from gradient-based neural networks (GNN), a special kind of recurrent neural network has recently been proposed by Zhang et al. for real-time inversion of time-varying matrices. The design of such a recurrent neural network is based on a matrix-valued error function instead of a scalar-valued norm-based energy function. In addition, it is described by implicit dynamics instead of explicit dynamics. This paper investigates the simulation and verification of such a Zhang neural network (ZNN). Four important simulation techniques are employed: (1) the Kronecker product of matrices is introduced to transform the matrix differential equation (MDE) into a vector differential equation (VDE), i.e., finally into a standard ordinary-differential-equation (ODE) formulation; (2) the MATLAB routine “ode45” with a mass-matrix property is used to simulate the transformed initial-value implicit ODE system; (3) matrix derivatives are obtained using the routine “diff” and the Symbolic Math Toolbox; (4) various implementation errors and different types of activation functions are investigated, further demonstrating the advantages of the ZNN model. Three illustrative computer-simulation examples substantiate the theoretical results and the efficacy of the ZNN model for online time-varying matrix inversion.
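Simulation technique (1), the Kronecker-product vectorization, can be sketched as follows using the standard identity vec(AXB) = (Bᵀ ⊗ A) vec(X). The example is a hypothetical 2×2 time-varying inversion problem A(t)X(t) = I with an arbitrarily chosen gain γ; SciPy's solve_ivp stands in for MATLAB's ode45, with the mass matrix solved against explicitly at each step.

```python
import numpy as np
from scipy.integrate import solve_ivp

GAMMA = 100.0
I2 = np.eye(2)

def A(t):   # illustrative time-varying matrix, kept safely invertible
    return np.array([[3.0 + np.sin(t), 0.5],
                     [0.5, 3.0 + np.cos(t)]])

def dA(t):  # its analytic time derivative
    return np.array([[np.cos(t), 0.0],
                     [0.0, -np.sin(t)]])

def vec(M):
    # column-stacking vec(.), matching the identity vec(AXB) = (B' ⊗ A) vec(X)
    return M.reshape(-1, order="F")

def vde_rhs(t, v):
    # Matrix ODE from the ZNN design for A(t)X(t) = I with E = A X - I:
    #   A dX/dt = -dA X - GAMMA * (A X - I)
    # Vectorized: (I ⊗ A) d vec(X)/dt = vec(rhs)  -- now a standard VDE/ODE.
    X = v.reshape(2, 2, order="F")
    rhs = -dA(t) @ X - GAMMA * (A(t) @ X - I2)
    mass = np.kron(I2, A(t))   # plays the role of ode45's mass-matrix option
    return np.linalg.solve(mass, vec(rhs))

sol = solve_ivp(vde_rhs, (0.0, 3.0), vec(np.eye(2)), rtol=1e-8, atol=1e-10)
X_final = sol.y[:, -1].reshape(2, 2, order="F")
err = np.linalg.norm(A(3.0) @ X_final - I2)   # residual of A(t)X(t) = I at t = 3
```

The order="F" reshapes keep the state consistent with the column-stacking convention that the Kronecker identity assumes.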

3.
In addition to their parallel-distributed nature, recurrent neural networks can be implemented physically by designated hardware and have thus found broad applications in many fields. In this paper, a special class of recurrent neural network named Zhang neural network (ZNN), together with its electronic realization, is investigated and exploited for the online solution of time-varying linear matrix equations. By following the idea of the Zhang function (i.e., error function), two ZNN models are proposed and studied, which allow plentiful choices of activation functions (e.g., any monotonically increasing odd activation function). It is theoretically proved that the two ZNN models globally and exponentially converge to the theoretical solution of time-varying linear matrix equations when linear activation functions are used. In addition, a new activation function, named the Li activation function, is exploited. It is theoretically proved that, with the Li activation function, the two ZNN models achieve finite-time convergence to the time-varying theoretical solution, and the upper bound of the convergence time is derived analytically via Lyapunov theory. Extensive simulations with the two ZNN models substantiate the theoretical analysis and the efficacy of the proposed models for solving time-varying linear matrix equations.

4.
Neural Processing Letters - In previous work, a typical recurrent neural network termed Zhang neural network (ZNN) has been developed for solving various time-varying problems. Based on the...

5.
The Bartels–Stewart algorithm is an effective and widely used method with O(n3) time complexity for solving the static Sylvester equation. When applied to the time-varying Sylvester equation, its computational burden increases sharply as the sampling period decreases, and it cannot satisfy continuous real-time requirements. Gradient-based recurrent neural networks can solve the time-varying Sylvester equation in real time, but there always exists an estimation error. In contrast, the recently proposed Zhang neural network has been proven to converge to the solution of the Sylvester equation exactly as time goes to infinity. However, with the previously suggested activation functions this neural network never converges to the desired value in finite time, which may limit its applications in real-time processing. To tackle this problem, a sign-bi-power activation function is proposed in this paper to accelerate the Zhang neural network to finite-time convergence. Global convergence and the finite-time convergence property are proven in theory, and the upper bound of the convergence time is derived analytically. Simulations are performed to evaluate the performance of the neural network with the proposed activation function. In addition, the proposed strategy is applied to online calculation of the pseudo-inverse of a matrix and to nonlinear control of an inverted pendulum system. Both theoretical analysis and numerical simulations validate the effectiveness of the proposed activation function.
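A minimal sketch of the finite-time effect, assuming one common form of the sign-bi-power activation φ(e) = ½|e|^r sgn(e) + ½|e|^{1/r} sgn(e) with 0 < r < 1 (the exact form and all parameters here are illustrative, not taken from the paper): the scalar ZNN error dynamics ė = −γφ(e) are integrated until the error shrinks below a threshold, which happens far sooner than the exponential decay of a linear activation would allow.

```python
import numpy as np
from scipy.integrate import solve_ivp

GAMMA = 1.0
R = 0.25  # design exponent, 0 < R < 1 (illustrative choice)

def sbp(e):
    # one common form of the sign-bi-power activation:
    #   phi(e) = 0.5*sign(e)*|e|^R + 0.5*sign(e)*|e|^(1/R)
    return 0.5 * np.sign(e) * (np.abs(e) ** R + np.abs(e) ** (1.0 / R))

def err_dynamics(t, e):
    # ZNN error dynamics de/dt = -GAMMA * phi(e)
    return [-GAMMA * sbp(e[0])]

def settled(t, e):          # stop integrating once |e| has shrunk to 1e-6
    return abs(e[0]) - 1e-6
settled.terminal = True

e0 = 4.0
sol = solve_ivp(err_dynamics, (0.0, 20.0), [e0],
                events=settled, rtol=1e-9, atol=1e-12)
t_settle = sol.t[-1]
# a linear activation (de/dt = -GAMMA*e) would need ln(e0/1e-6) ~ 15.2 time
# units to shrink the same error; the sign-bi-power one gets there much sooner
```

The |e|^{1/R} term dominates while the error is large and the |e|^R term takes over near zero, which is what makes the settling time finite rather than merely exponential.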

6.
Different from gradient-based neural dynamics (GND), a special kind of recurrent neural dynamics has recently been proposed by Zhang et al. for solving online time-varying problems. Such recurrent neural dynamics are designed based on an indefinite error-monitoring function instead of the usual norm- or square-based energy function. In addition, Zhang neural dynamics (ZND) are generally depicted in implicit dynamics, whereas gradient-based neural dynamics (GND) are associated with explicit dynamics. In this paper, we generalize the ZND design method to the online solution of nonlinear time-varying equations of the form f(x, t) = 0. For comparative purposes, the GND model is also employed to solve such time-varying equations. Computer-simulation results with power-sigmoid activation functions substantiate the theoretical analysis and the efficacy of the ZND model for solving online nonlinear time-varying equations.
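The ZND design for f(x, t) = 0 described above can be sketched in scalar form: imposing ė = −γφ(e) on e = f(x, t) gives (∂f/∂x)·ẋ + ∂f/∂t = −γφ(f), which is solved for ẋ. For simplicity this sketch uses a linear activation φ(e) = e rather than the paper's power-sigmoid, and the equation f(x, t) = x² − (2 + sin t) is an illustrative assumption with known positive root √(2 + sin t).

```python
import numpy as np
from scipy.integrate import solve_ivp

GAMMA = 20.0

# illustrative nonlinear time-varying equation f(x, t) = x^2 - (2 + sin t) = 0,
# whose positive root is x*(t) = sqrt(2 + sin t)
def f(x, t):    return x * x - (2.0 + np.sin(t))
def f_x(x, t):  return 2.0 * x            # partial derivative df/dx
def f_t(x, t):  return -np.cos(t)         # partial derivative df/dt

def znd_rhs(t, x):
    # ZND design with linear activation: impose d f/dt = -GAMMA * f along
    # trajectories, i.e. f_x * dx/dt + f_t = -GAMMA * f, then solve for dx/dt
    return [-(f_t(x[0], t) + GAMMA * f(x[0], t)) / f_x(x[0], t)]

sol = solve_ivp(znd_rhs, (0.0, 6.0), [2.0], rtol=1e-8, atol=1e-10)
x_final = sol.y[0, -1]
exact = np.sqrt(2.0 + np.sin(6.0))   # the moving root at t = 6
```

Starting from x(0) = 2 the state converges onto the moving root and then tracks it, since the residual f(x(t), t) is driven to zero exponentially by design.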

7.
Conventional recurrent-neural-network computation methods employ asymptotically convergent network models: the error function converges to zero only asymptotically, so in theory an infinitely long computation time is required to obtain the exact solution of the problem being solved. This paper proposes a terminal recurrent neural network model. The network takes a novel form and possesses the finite-time convergence property, so that when applied to time-varying matrix computation problems it converges quickly and achieves high accuracy. Another feature of the network is that the right-hand-side function of its dynamic equation is bounded, which eases implementation. First, the drawbacks of asymptotically convergent network models for time-varying computation problems are analyzed, motivating the introduction of the terminal network model. The dynamic equation of the terminal network is then given, and an explicit expression for its convergence time is derived. For time-varying matrix inversion and generalized inversion, an error function is defined and a terminal recurrent neural network is constructed from it, so that the computation converges to the exact solution in finite time. After the trajectory-planning task of a redundant manipulator starting from an arbitrary initial position is converted into a quadratic-programming problem, the proposed neural network is used for the computation; the resulting joint-angle trajectories make the end-effector complete closed-path tracking with the joint angles returning exactly to their initial positions, achieving repeatable motion. MATLAB/Simulink simulations of the time-varying matrix computation problems and the robot trajectory-planning task show that, compared with the asymptotic network model, the terminal network model converges faster and markedly improves computational accuracy. The solution of different time-varying computation problems illustrates the application scope of the proposed neural network.

8.
In view of their great potential for parallel processing and ready hardware implementation, neural networks are now often employed to solve online nonlinear matrix equation problems. Recently, a novel class of neural networks, termed Zhang neural network (ZNN), has been formally proposed by Zhang et al. for solving online time-varying problems. Such a neural-dynamic system is elegantly designed by defining an indefinite matrix-valued error-monitoring function, called the Zhang function (ZF); the dynamical system is then cast in the form of a first-order differential equation using matrix notation. In this paper, different indefinite ZFs, leading to different ZNN models, are proposed and developed as error-monitoring functions for finding time-varying matrix square roots. Towards the final goal of field-programmable gate array (FPGA) and application-specific integrated circuit (ASIC) realization, MATLAB Simulink modeling and verification of such ZNN models are further investigated for the online solution of time-varying matrix square roots. Both theoretical analysis and modeling results substantiate the efficacy of the proposed ZNN models for finding time-varying matrix square roots.

9.
A special class of recurrent neural network termed Zhang neural network (ZNN), described by implicit dynamics, has recently been introduced for the online solution of time-varying convex quadratic programming (QP) problems. Global exponential convergence of such a ZNN model is achieved theoretically in an error-free situation. This paper investigates the performance of the perturbed ZNN model using a special type of activation function (namely, power-sum activation functions) when solving time-varying QP problems. Robustness analysis and simulation results demonstrate the superior characteristics of power-sum activation functions in the presence of large ZNN-implementation errors, compared with linear activation functions. Furthermore, an application to inverse-kinematic control of a redundant robot arm also verifies the feasibility and effectiveness of the ZNN model for solving time-varying QP problems.

10.
Kong  Ying  Hu  Tanglong  Lei  Jingsheng  Han  Renji 《Neural Processing Letters》2022,54(1):125-144
Neural Processing Letters - Zhang neural network (ZNN), a special recurrent neural network, has recently been established as an effective alternative for time-varying linear equations with...

11.
Neural Processing Letters - Several improvements of the Zhang neural network (ZNN) dynamics for solving the time-varying matrix inversion problem are presented. The introduced ZNN dynamical design is...

12.
This paper presents a synchronization scheme for a class of delayed neural networks, which covers Hopfield neural networks and cellular neural networks with time-varying delays. A feedback control gain matrix is derived to achieve exponential synchronization of the drive-response structure of the neural networks by using Lyapunov stability theory, and the exponential synchronization condition can be verified by checking whether a certain Hamiltonian matrix has no eigenvalues on the imaginary axis. This condition avoids solving an algebraic Riccati equation. Both cellular neural networks and Hopfield neural networks with time-varying delays are given as illustrative examples.

13.
A special class of recurrent neural networks (RNN) has recently been proposed by Zhang et al. for solving online time-varying matrix problems. Unlike conventional gradient-based neural networks (GNN), such RNNs (specifically termed Zhang neural networks, ZNN) are designed based on matrix-valued error functions instead of scalar-valued norm-based energy functions. In this paper, we generalize and further investigate the ZNN model for time-varying matrix square root finding. For the purpose of possible hardware (e.g., digital circuit) realization, a discrete-time ZNN model is constructed and developed, which incorporates Newton iteration as a special case. In addition, to obtain an appropriate step-size value in each iteration, a line-search algorithm is employed for the proposed discrete-time ZNN model. Computer-simulation results substantiate the effectiveness of the proposed ZNN model aided by the line-search algorithm, as well as its connection to Newton iteration for matrix square root finding.
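The Newton-iteration special case mentioned above can be sketched as follows, with a fixed unit step rather than the paper's line-search. The simplified iteration X_{k+1} = ½(X_k + X_k⁻¹A) with X₀ = A is a known (though not always numerically stable) scheme for the principal matrix square root; the test matrix below is an illustrative assumption.

```python
import numpy as np

def newton_sqrtm(A, iters=30):
    """Simplified Newton iteration X_{k+1} = (X_k + X_k^{-1} A) / 2, X_0 = A.

    Converges to the principal square root when A has no eigenvalues on the
    closed negative real axis; it can be numerically unstable for
    ill-conditioned A (stable variants exist, e.g. Denman-Beavers).
    """
    X = A.copy()
    for _ in range(iters):
        # solve X_k Y = A for Y = X_k^{-1} A instead of forming the inverse
        X = 0.5 * (X + np.linalg.solve(X, A))
    return X

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])        # illustrative symmetric positive-definite matrix
X = newton_sqrtm(A)
residual = np.linalg.norm(X @ X - A)
```

With X₀ = A every iterate is a rational function of A, so the iterates commute with A; per eigenvalue the recursion is exactly the scalar Newton iteration for the square root, which converges quadratically for positive eigenvalues.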

14.
This paper studies the global output convergence of a class of recurrent delayed neural networks with time-varying inputs. We consider non-decreasing activations which may also have jump discontinuities, in order to model the ideal situation where the gain of the neuron amplifiers is very high and tends to infinity. In particular, we drop the assumptions of Lipschitz continuity and boundedness on the activation functions, which are usually required in most existing works. Due to the possible discontinuities of the activation functions, we introduce a suitable notion of limit to study the convergence of the output of the recurrent delayed neural networks. Under suitable assumptions on the interconnection matrices and the time-varying inputs, we establish a sufficient condition for global output convergence of this class of neural networks. The convergence results are useful in solving some optimization problems and in the design of recurrent delayed neural networks with discontinuous neuron activations.

15.
Following the idea of using first-order time derivatives, this paper presents a general recurrent neural network (RNN) model for online inversion of time-varying matrices. Different kinds of activation functions are investigated to guarantee the global exponential convergence of the neural model to the exact inverse of a given time-varying matrix. The robustness of the proposed neural model is also studied with respect to different activation functions and various implementation errors. Simulation results, including the application to kinematic control of redundant manipulators, substantiate the theoretical analysis and demonstrate the efficacy of the neural model on time-varying matrix inversion, especially when using a power-sigmoid activation function.

16.
Zhengguang  Hongye  Jian  Wuneng   《Neurocomputing》2009,72(13-15):3337
This paper is concerned with the problem of robust exponential stability analysis for uncertain discrete recurrent neural networks with time-varying delays. Using the linear matrix inequality (LMI) approach, some novel stability conditions are proposed via a new Lyapunov function. Neither model transformations nor free-weighting matrices are employed in the theoretical derivation. The established stability criteria significantly improve and simplify some existing stability conditions. Numerical examples are given to demonstrate the effectiveness of the proposed methods.

17.
Ma  Zhisheng  Yu  Shihang  Han  Yang  Guo  Dongsheng 《Neural computing & applications》2021,33(21):14231-14245
Neural Computing and Applications - A typical class of recurrent neural networks called zeroing neural network (ZNN) has been considered as a powerful alternative for time-varying problems solving....

18.
The target of this article is to study almost periodic dynamical behaviors of complex-valued recurrent neural networks with discontinuous activation functions and time-varying delays. We construct an equivalent differential equation with discontinuous right-hand side by decomposing the real and imaginary parts of the complex-valued neural networks. Based on differential-inclusion theory, the diagonal-dominance principle, and nonsmooth analysis with the generalized Lyapunov function method, we establish the existence, uniqueness and global stability of the almost periodic solution of the equivalent delayed network. In particular, we derive a series of results for the equivalent neural networks with discontinuous activation functions and with constant as well as periodic coefficients. Finally, we give a numerical example to demonstrate the effectiveness and feasibility of the derived theoretical results.

19.
This paper is concerned with the stability problem for a class of impulsive neural network models that simultaneously include parameter uncertainties, stochastic disturbances and two additive time-varying delays in the leakage term. By constructing a suitable Lyapunov–Krasovskii functional that fully uses the information on the lower and upper bounds of the delays, a delay-dependent stability criterion is derived via the free-weighting matrices method for such Takagi–Sugeno fuzzy uncertain impulsive stochastic recurrent neural networks. The obtained conditions are expressed as linear matrix inequalities (LMIs), whose feasibility can be checked easily with the MATLAB LMI Control Toolbox. Finally, the theoretical result is validated by simulations.

20.
In this paper, we propose a recurrent neural network for solving nonlinear convex programming problems with linear constraints. The proposed neural network has a simpler structure and lower implementation complexity than existing neural networks for solving such problems. It is shown that the proposed neural network is stable in the sense of Lyapunov and globally convergent to an optimal solution within finite time under the condition that the objective function is strictly convex. Compared with existing convergence results, the present results do not require a Lipschitz continuity condition on the objective function. Finally, examples are provided to show the applicability of the proposed neural network.
