Similar Literature (20 results)
1.
Neural networks are dynamic systems consisting of highly interconnected, parallel nonlinear processing elements that have been shown to be extremely effective in computation. This paper presents an architecture of recurrent neural networks for solving the N-Queens problem. More specifically, a modified Hopfield network is developed and its internal parameters are explicitly computed using the valid-subspace technique. These parameters guarantee the convergence of the network to equilibrium points that represent solutions of the problem. The network is shown to be completely stable and globally convergent to the solutions of the N-Queens problem. A fuzzy logic controller is also incorporated in the network to minimize convergence time. Simulation results are presented to validate the proposed approach.
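A rough illustration, not the paper's construction: the valid-subspace parameters and the fuzzy controller are analytic devices the abstract does not spell out, so the sketch below merely encodes the N-Queens constraints as an energy over a binary board and performs greedy energy descent. The energy definition and the update rule are assumptions made for illustration.

```python
import numpy as np

def nqueens_energy(V):
    """Constraint-violation energy over a binary N x N board V:
    exactly one queen per row and column, at most one per diagonal."""
    N = V.shape[0]
    E = np.sum((V.sum(axis=1) - 1) ** 2) + np.sum((V.sum(axis=0) - 1) ** 2)
    for d in range(-N + 1, N):
        E += max(0.0, np.trace(V, offset=d) - 1) ** 2
        E += max(0.0, np.trace(np.fliplr(V), offset=d) - 1) ** 2
    return E

def solve_nqueens(N, iters=50000, seed=0):
    """Greedy energy descent: move one queen within its row, keep the
    move if the energy does not increase."""
    rng = np.random.default_rng(seed)
    V = np.eye(N)[rng.permutation(N)]   # one queen per row, random columns
    E = nqueens_energy(V)
    for _ in range(iters):
        if E == 0:                      # zero energy = valid placement
            break
        i = rng.integers(N)
        j_old, j_new = int(np.argmax(V[i])), int(rng.integers(N))
        V[i, j_old], V[i, j_new] = 0, 1
        E_new = nqueens_energy(V)
        if E_new <= E:
            E = E_new
        else:                           # reject: restore previous position
            V[i, j_new], V[i, j_old] = 0, 1
    return V, E
```

A Hopfield-style network would instead descend this energy through neuron dynamics; the greedy move here plays the same role at sketch level.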

2.
We propose a linear attractor network based on the observation that similar patterns form a pipeline in the state space, which can be used for pattern association. To model the pipeline in the state space, we present a learning algorithm using a recurrent neural network. A least-squares estimation approach utilizing the interdependency between neurons defines the dynamics of the network. The region of convergence around the line of attraction is defined based on the statistical characteristics of the input patterns. Performance of the learning algorithm is evaluated by conducting several experiments in benchmark problems, and it is observed that the new technique is suitable for multiple-valued pattern association.  相似文献   
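A minimal sketch of the least-squares idea, assuming patterns are stored as columns of a matrix; it realizes only the subspace projection, not the paper's statistically defined region of convergence around the line of attraction.

```python
import numpy as np

def train_linear_attractor(patterns):
    """Least-squares weight matrix that maps each stored pattern to itself.

    patterns: (d, m) array with one pattern per column. The solution
    W = X pinv(X) is the orthogonal projector onto the pattern subspace."""
    X = np.asarray(patterns, dtype=float)
    return X @ np.linalg.pinv(X)

def recall(W, probe):
    """Associate a noisy probe with the stored subspace; W is idempotent,
    so a single application already lands on the attracting subspace."""
    return W @ np.asarray(probe, dtype=float)
```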

3.
This paper presents a design method for finite-dimensional robust H∞ distributed consensus filters (DCFs) for a class of dissipative nonlinear partial differential equation (PDE) systems with a sensor network, for which the eigenvalue spectrum of the spatial differential operator can be partitioned into a finite-dimensional slow part and an infinite-dimensional stable fast complement. Initially, the modal decomposition technique is applied to the PDE system to derive a finite-dimensional ordinary differential equation system that accurately describes the dynamics of the dominant (slow) modes of the PDE system. Then, based on the slow subsystem, a set of finite-dimensional robust H∞ DCFs is developed to enforce the consensus of the slow-mode estimates and state estimates of all local filters for all admissible nonlinear dynamics and observation spillover, while attenuating the effect of external disturbances. The Luenberger and consensus gains of the proposed DCFs can be obtained by solving a set of linear matrix inequalities (LMIs). Furthermore, using existing LMI optimization techniques, a suboptimal design of the robust H∞ DCFs is proposed in the sense of minimizing the attenuation level. Finally, the effectiveness of the proposed DCFs is demonstrated on the state estimation of a one-dimensional Kuramoto–Sivashinsky equation system with a sensor network.

4.
This letter presents a study of the Simultaneous Recurrent Neural network, an adaptive algorithm, as a nonlinear dynamic system for static optimization. Empirical findings recently reported in the literature suggest that the Simultaneous Recurrent Neural network offers superior performance for large-scale instances of combinatorial optimization problems in terms of desirable convergence characteristics, improved solution quality, and computational complexity measures. A theoretical study is carried out that explores initialization properties of the Simultaneous Recurrent Neural network dynamics to facilitate application of a fixed-point training algorithm. Specifically, initialization of the weight matrix entries to induce one or more stable equilibrium points in the state space of the nonlinear network dynamics is investigated, and applicable theoretical bounds are derived. A simulation study confirms the theoretical bounds on initial weight values. The theoretical findings and the correlating simulation study suggest that the Simultaneous Recurrent Neural network dynamics possesses desirable stability characteristics as an adaptive recurrent neural network for addressing static optimization problems.
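A hedged illustration of this kind of initialization bound: for a tanh network, keeping the spectral norm of the weight matrix below 1 makes the relaxation map a contraction and therefore induces a unique, globally attracting equilibrium (a Banach fixed-point argument). The bound below is a generic sufficient condition, not the paper's derived bounds.

```python
import numpy as np

def init_contractive_weights(n, rng, margin=0.9):
    """Random recurrent weights rescaled so x -> tanh(W x + b) is a
    contraction: tanh has slope <= 1, so ||W||_2 < 1 suffices for a
    unique, globally attracting fixed point."""
    W = rng.standard_normal((n, n))
    W *= margin / np.linalg.norm(W, 2)   # spectral norm = largest singular value
    return W

def settle(W, b, x0, tol=1e-10, max_iter=1000):
    """Relaxation phase: iterate the network to its fixed point."""
    x = x0
    for _ in range(max_iter):
        x_new = np.tanh(W @ x + b)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```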

5.
The restoration of digital images and patterns by the splitting-integrating method (SIM) proposed by Li (1993) and Li et al. (1992) is much simpler than other algorithms because no solutions of nonlinear algebraic equations are required. Let a pixel in a 2D image be split into N² subpixels; the convergence rates are O(1/N) and O(1/N²) for pixel greyness under image normalization by SIM. In this paper, an advanced SIM using spline functions raises the convergence rates to O(1/N³) and O(1/N⁴). Error bounds on the pixel greyness obtained are derived by numerical analysis, and numerical experiments are carried out to confirm the high convergence rates of O(1/N³) and O(1/N⁴).
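To make the splitting step concrete, here is a sketch under assumed ingredients: T is a user-supplied geometric transform mapping output coordinates into the source image, greyness is sampled piecewise-constantly (which corresponds to the basic O(1/N) rate; the paper's spline variant would replace this sampling), and images are plain arrays.

```python
import numpy as np

def split_integrate(src, T, out_shape, N=4):
    """Restore an image under a geometric transform T by subpixel averaging.

    Each output pixel is split into N*N subpixels; each subpixel centre is
    mapped through T into the source image, and the sampled greyness values
    are averaged. No nonlinear equations need to be solved."""
    H, W = out_shape
    out = np.zeros(out_shape)
    offs = (np.arange(N) + 0.5) / N          # subpixel centres in [0, 1)
    for i in range(H):
        for j in range(W):
            acc = 0.0
            for di in offs:
                for dj in offs:
                    x, y = T(i + di, j + dj)         # map into source coords
                    xi = min(max(int(x), 0), src.shape[0] - 1)
                    yj = min(max(int(y), 0), src.shape[1] - 1)
                    acc += src[xi, yj]               # piecewise-constant greyness
            out[i, j] = acc / (N * N)
    return out
```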

6.
This paper investigates the possibility of improving the classification capability of single-layer and multilayer perceptrons by incorporating additional output layers. This Multi-Output-Layer Perceptron (MOLP) is a new type of constructive network, though the emphasis is on improving pattern separability rather than network efficiency. The MOLP is trained using the standard back-propagation (BP) algorithm. The studies concentrate on realizations of arbitrary functions mapping from an n-dimensional input space: in the MOLP, a problem existing in the original n-dimensional space of a hidden layer is transformed to a higher, (n+1)-dimensional, space, so that the possibility of linear separability is increased. Experimental investigations show that the classification ability of the MOLP is superior to that of an equivalent MLP. In general, this performance increase can be achieved with shorter training times and simpler network architectures.
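A minimal numpy sketch of a perceptron with one extra (cascaded) output layer trained by standard BP; the layer sizes, the logistic activation, and the plain online gradient update are assumptions, not the paper's exact architecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class MOLP:
    """One hidden layer followed by two cascaded output layers."""
    def __init__(self, n_in, n_hid, n_out, rng, s=0.5):
        self.W1 = rng.uniform(-s, s, (n_hid, n_in))
        self.W2 = rng.uniform(-s, s, (n_out, n_hid))   # first output layer
        self.W3 = rng.uniform(-s, s, (n_out, n_out))   # additional output layer

    def forward(self, x):
        self.h  = sigmoid(self.W1 @ x)
        self.o1 = sigmoid(self.W2 @ self.h)
        self.o2 = sigmoid(self.W3 @ self.o1)
        return self.o2

    def backprop(self, x, t, lr=0.5):
        """Standard BP deltas through the cascade, squared-error loss."""
        y = self.forward(x)
        d3 = (y - t) * y * (1 - y)
        d2 = (self.W3.T @ d3) * self.o1 * (1 - self.o1)
        d1 = (self.W2.T @ d2) * self.h * (1 - self.h)
        self.W3 -= lr * np.outer(d3, self.o1)
        self.W2 -= lr * np.outer(d2, self.h)
        self.W1 -= lr * np.outer(d1, x)
```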

7.
This research is concerned with a gradient descent training algorithm for a target network that makes use of a helper feed-forward network (FFN) to represent the cost function required for training the target network. A helper FFN is trained because the cost relation for the target is not differentiable. The transfer function of the trained helper FFN provides a differentiable cost function of the parameter vector for the target network, allowing gradient search methods to find the optimum parameter values. The method is applied to the training of discrete recurrent neural networks (DRNNs) used as a tool for classifying temporal sequences of characters from some alphabet and for identifying a finite state machine (FSM) that may have produced all the sequences. Classification of sequences input to the DRNN is based on the terminal state of the network after the last element in the input sequence has been processed. If the DRNN is to be used for classifying sequences, the terminal states for class 0 sequences must be distinct from the terminal states for class 1 sequences. The cost value used in training must therefore be a function of this disjointedness and no more. The outcome is a cost relationship that is discrete rather than continuous, so either derivative-free methods must be used or, alternatively, the method suggested in this paper: the transfer function of the helper FFN trained on the cost function is differentiable and can be used to train the DRNN by gradient descent.
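The two-stage idea admits a compact sketch in which everything concrete is hypothetical except the structure: a non-differentiable cost is sampled over parameter vectors, a one-hidden-layer FFN is fitted to those samples, and the FFN's gradient is then descended to choose parameters.

```python
import numpy as np

def true_cost(theta):
    """Hypothetical non-differentiable cost (a counting/step criterion)."""
    return float(np.sum(np.abs(theta) > 0.5))

rng = np.random.default_rng(0)
D, H, M = 4, 32, 2000
Theta = rng.uniform(-1, 1, (M, D))               # sampled parameter vectors
c = np.array([true_cost(t) for t in Theta])      # their (discrete) costs

# --- fit a differentiable surrogate: one-hidden-layer FFN, SGD on squared error
W1 = rng.normal(0, 0.5, (H, D)); b1 = np.zeros(H)
w2 = rng.normal(0, 0.5, H);      b2 = 0.0
for _ in range(3000):
    k = rng.integers(M)
    h = np.tanh(W1 @ Theta[k] + b1)
    err = h @ w2 + b2 - c[k]
    gh = err * w2 * (1 - h**2)                   # backprop through tanh
    W1 -= 0.01 * np.outer(gh, Theta[k]); b1 -= 0.01 * gh
    w2 -= 0.01 * err * h;                b2 -= 0.01 * err

# --- descend the surrogate's gradient to pick target-network parameters
theta = rng.uniform(-1, 1, D)
for _ in range(200):
    h = np.tanh(W1 @ theta + b1)
    grad = W1.T @ (w2 * (1 - h**2))              # d(surrogate)/d(theta)
    theta -= 0.05 * grad
```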

8.
This paper proposes NARX (nonlinear autoregressive model with exogenous input) model structures with functional expansion of input patterns, using a low-complexity ANN (artificial neural network), for nonlinear system identification. Chebyshev polynomials, Legendre polynomials, trigonometric expansions using sine and cosine functions, as well as wavelet basis functions, are used for the functional expansion of input patterns. The past input and output samples are modeled as a nonlinear NARX process, and a robust H∞ filter is proposed as the learning algorithm for the neural network to identify the unknown plants. The H∞ filtering approach is based on state-space modeling of the model parameters and evaluation of Jacobian matrices; it is a robustification of the Kalman filter that exhibits robust characteristics and fast convergence. Comparison results on different nonlinear dynamic plants against forgetting-factor recursive least squares (FFRLS) and extended Kalman filter (EKF) algorithms demonstrate the effectiveness of the proposed approach.
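As an illustration of the functional-expansion step only (the H∞ filtering algorithm itself is omitted), the sketch below builds a Chebyshev-expanded NARX regressor from past inputs and outputs; the clipping to [-1, 1] and the expansion order are assumptions.

```python
import numpy as np

def chebyshev_expand(u, order=3):
    """Chebyshev polynomials T_1..T_order of inputs scaled to [-1, 1],
    via the recurrence T_{k+1}(u) = 2 u T_k(u) - T_{k-1}(u)."""
    u = np.clip(np.asarray(u, dtype=float), -1.0, 1.0)
    T = [np.ones_like(u), u]
    for _ in range(2, order + 1):
        T.append(2.0 * u * T[-1] - T[-2])
    return np.concatenate(T[1:])

def narx_regressor(u_hist, y_hist, order=3):
    """Functionally expanded NARX regressor from past inputs and outputs;
    a linear-in-parameters model y_hat = w @ phi then remains, which can be
    trained by RLS, EKF, or a robust filter."""
    return chebyshev_expand(np.concatenate([u_hist, y_hist]), order)
```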

9.

This paper offers a recurrent neural network for support vector machine (SVM) learning in stochastic support vector regression with probabilistic constraints. The SVM problem is first converted into an equivalent quadratic programming (QP) formulation in the linear and nonlinear cases. An artificial neural network for SVM learning is then proposed. The presented neural network framework is guaranteed to obtain the optimal solution of the SVM problem. The existence and convergence of the trajectories of the network are studied, and Lyapunov stability of the considered neural network is shown. The efficiency of the proposed method is demonstrated by three illustrative examples.
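A sketch of the general recipe, simplified to a box-constrained convex QP (the SVM dual adds an equality constraint that this sketch omits): a projection neural network whose equilibria are the KKT points of the QP, integrated here with forward Euler.

```python
import numpy as np

def projection_network_qp(Q, c, lo, hi, x0, dt=0.01, steps=5000):
    """Projection neural network for min 0.5 x'Qx + c'x, lo <= x <= hi.

    Continuous-time dynamics dx/dt = -x + P(x - (Qx + c)), where P clips
    onto the box; for convex Q the trajectory converges to the optimum."""
    x = x0.astype(float)
    for _ in range(steps):
        z = np.clip(x - (Q @ x + c), lo, hi)
        x = x + dt * (z - x)
    return x

# toy usage: a 2-D positive-definite QP on the unit box
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
c = np.array([-1.0, -1.0])
x_star = projection_network_qp(Q, c, 0.0, 1.0, np.zeros(2))
```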


10.
An improved version of the composite-input dynamic recurrent network (CIDRNN), itself a development of the Elman dynamic recurrent network, is proposed: a new dynamic recurrent neural network structure called the State Delay Input Dynamical Recurrent Neural Network. With this new topology and learning rule, the meaning of each weight matrix becomes explicit, and the weight-training process becomes simpler and clearer. Simulation experiments show that, because the network incorporates the previous step's input and output information, its convergence is faster and real-time control becomes more feasible. The network is then applied to the identification of a robot's unknown nonlinear dynamics: the deviation between the identified actual output and the output of a mechanistic model is used to recover the information lost by the mechanistic or simplified model. This both exploits existing robot modeling methods and reduces the network's computational load, raising identification speed. Simulation results demonstrate the effectiveness of the improvement.

11.
In this paper, a novel robust training algorithm for multi-input multi-output recurrent neural networks and its application to the fault-tolerant control of a robotic system are investigated. The proposed scheme optimizes gradient-type training on the basis of three new adaptive parameters, namely, a dead-zone learning rate, a hybrid learning rate, and a normalization factor. The adaptive dead-zone learning rate is employed to improve the steady-state response. The normalization factor is used to maximize the gradient depth in the training, so as to improve the transient response. The hybrid learning rate switches the training between back-propagation and real-time recurrent learning modes, so that the training is robustly stable. The weight convergence and L₂ stability of the algorithm are proved via a Lyapunov function and Cluett's law, respectively. Based on the theoretical results, we carry out simulation studies of a two-link robot arm position tracking control system. A computed-torque controller is designed to provide a specified closed-loop performance in the fault-free condition, and the RNN compensator with the robust training algorithm is then employed to recover the performance when a fault occurs. Comparisons are given to demonstrate the advantages of the control method and the proposed training algorithm.
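Two of the three adaptive ingredients admit compact sketches. The forms below are generic textbook versions of dead-zone and normalized-gradient adaptation, not the paper's exact laws; eps stands in for an assumed uncertainty bound.

```python
import numpy as np

def dead_zone_update(w, grad, err, eps, eta=0.1):
    """Gradient step with a dead zone: adaptation is frozen once the
    training error is within the assumed noise/uncertainty bound eps,
    which improves the steady-state behaviour under disturbances."""
    if abs(err) <= eps:
        return w                        # inside the dead zone: no update
    return w - eta * grad               # w, grad: 1-D numpy arrays

def normalized_update(w, grad_out, err, eta=0.5):
    """Normalized gradient step for a squared-error cost: grad_out is
    d(output)/dw, and dividing by 1 + ||grad_out||^2 bounds the step size
    regardless of gradient magnitude, improving the transient response."""
    return w - eta * err * grad_out / (1.0 + grad_out @ grad_out)
```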

12.
韩敏  王亚楠 《自动化学报》2010,36(1):169-173
For multivariate nonlinear time series, a new online adaptive prediction method combining the echo state network and Kalman filtering is proposed. The method applies the Kalman filter in the high-dimensional state space of the echo state network's reservoir and updates the network's output weights online directly, eliminating the Jacobian computation required by the extended Kalman filter in traditional recurrent networks; this improves prediction accuracy while broadening the algorithm's range of application. A convergence proof of the proposed algorithm is given for the case where the echo state network is stable. Simulation examples verify the effectiveness of the proposed method.
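A sketch of the core update under two assumptions, a linear readout y = w @ x and a random-walk model for the output weights: the Kalman filter then acts directly on the readout weights in the reservoir's state space, and no Jacobian is needed because the observation model is linear.

```python
import numpy as np

def esn_kalman_step(w, P, x, y, q=1e-6, r=1e-2):
    """One Kalman update of ESN output weights w for a scalar target y.

    x: reservoir state (n,); the weights follow w_k = w_{k-1} + noise(q),
    and the observation is y = w @ x + noise(r)."""
    P = P + q * np.eye(len(w))          # time update of weight covariance
    s = x @ P @ x + r                   # innovation variance
    k = P @ x / s                       # Kalman gain
    w = w + k * (y - w @ x)             # measurement update
    P = P - np.outer(k, x) @ P
    return w, P

def reservoir_step(W, W_in, x, u, leak=1.0):
    """Leaky-tanh echo state reservoir update."""
    return (1 - leak) * x + leak * np.tanh(W @ x + W_in @ u)
```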

13.
In this paper the order of convergence of the evolution operator method used to solve a nonlinear autonomous system of ODEs [2] is investigated. The order is found to be 2N+1, where N comes from the [N+1, N] Padé approximation used in the method. The order is independent of the choice of the weight function.

14.
A recurrent Sigma-Pi-linked back-propagation neural network is presented. The increase of input information is achieved by the introduction of "higher-order" terms generated through functional-link input nodes. Based on the Sigma-Pi-linked model, this network is capable of approximating more complex functions at a much faster convergence rate. The recurrent network is tested intensively by applying it to different types of linear and nonlinear time series. Compared with the conventional feedforward BP network, the training convergence rate is substantially faster. Results indicate that the functional approximation property of this recurrent network is remarkable for time-series applications.
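The functional-link ("pi") expansion can be sketched as follows; restricting to second-order products is an assumption made for brevity.

```python
import numpy as np
from itertools import combinations

def functional_link_expand(x):
    """Augment an input vector with second-order product ('pi') terms,
    as generated by functional-link input nodes."""
    pairs = [x[i] * x[j] for i, j in combinations(range(len(x)), 2)]
    return np.concatenate([x, x**2, np.array(pairs)])

# example: a 3-dimensional input grows to 3 + 3 + 3 = 9 features
z = functional_link_expand(np.array([0.5, -1.0, 2.0]))
```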

15.
In this work, a novel model-based artificial neural network (ANN) training method is developed, supported by optimal control theory. The method augments training labels in order to robustly guarantee training-loss convergence and to improve the training convergence rate. Dynamic label augmentation is proposed within the framework of gradient descent training, where the convergence of the training loss is controlled. First, we capture the training behavior with the help of empirical Neural Tangent Kernels (NTK) and borrow tools from systems and control theory to analyze both the local and global training dynamics (e.g., stability, reachability). Second, we dynamically alter the gradient descent training mechanism via fictitious labels as control inputs and an optimal state feedback policy, thereby enforcing locally optimal and convergent training behavior. The novel algorithm, Controlled Descent Training (CDT), guarantees local convergence. CDT opens new possibilities in the analysis, interpretation, and design of ANN architectures. The applicability of the method is demonstrated on standard regression and classification problems.

16.

To address the limited filtering accuracy and low efficiency of existing nonlinear-filter-based neural network training, a neural network training algorithm based on the cubature Kalman filter (CKF) is proposed. In the implementation, a state-space model of the neural network is first constructed; the network's connection weights are then taken as the state variables of the system, and the cubature points generated by the third-degree spherical-radial rule are used to train the connection weights. Theoretical analysis and simulation results verify the feasibility and effectiveness of the proposed algorithm.
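The third-degree spherical-radial rule admits a short sketch. In CKF-based training these points would be propagated through the network with the weights as the state; that propagation is omitted here.

```python
import numpy as np

def cubature_points(mean, cov):
    """Third-degree spherical-radial cubature points: 2n points at
    mean +/- sqrt(n) * S e_i, where S is the Cholesky factor of cov.
    Every point carries equal weight 1 / (2n)."""
    n = len(mean)
    S = np.linalg.cholesky(cov)
    xi = np.sqrt(n) * np.concatenate([np.eye(n), -np.eye(n)], axis=0)
    return mean + xi @ S.T              # shape (2n, n), one point per row
```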


17.
We propose to fit a recurrent feedback neural network structure to input–output data through prediction error minimization. The recurrent feedback neural network structure takes the form of a nonlinear state estimator, which can compactly represent a multivariable dynamic system with stochastic inputs. The inclusion of the feedback error term as an input to the model allows the user to update the model based on feedback measurements in real-time use. The model can be useful in a variety of applications including software sensing, process monitoring, and predictive control. A dynamic learning algorithm for training the recurrent neural network has been developed. Through several examples, we evaluate the efficacy of the proposed method and the prediction improvement achieved by including the feedback error term.
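The estimator structure can be sketched with an assumed parameterization, matrices A, B, K, C standing in for the trained network: the innovation is fed back as an extra input, which is what lets measurements update the model online.

```python
import numpy as np

class FeedbackPredictor:
    """One-step-ahead recurrent predictor in nonlinear state-estimator form:
        x_{k+1} = tanh(A x_k + B u_k + K e_k),   y_hat_k = C x_k,
    with e_k the feedback (innovation) term. Shapes: A (n,n), B (n,m),
    K (n,), C (n,); scalar output assumed for simplicity."""
    def __init__(self, A, B, K, C):
        self.A, self.B, self.K, self.C = A, B, K, C

    def step(self, x, u, y_meas):
        y_hat = self.C @ x
        e = y_meas - y_hat                  # feedback error input
        x_next = np.tanh(self.A @ x + self.B @ u + self.K * e)
        return x_next, y_hat
```

In the paper these parameters would be identified by prediction-error minimization; the sketch only wires up the feedback path.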

18.
Recurrent neural network training with feedforward complexity
This paper presents a training method that is of no more than feedforward complexity for fully recurrent networks. The method is not approximate, but rather depends on an exact transformation that reveals an embedded feedforward structure in every recurrent network. It turns out that given any unambiguous training data set, such as samples of the state variables and their derivatives, we need only train this embedded feedforward structure. The necessary recurrent network parameters are then obtained by an inverse transformation consisting only of linear operators. As an example of modeling a representative nonlinear dynamical system, the method is applied to learn Bessel's differential equation, thereby generating Bessel functions within, as well as outside, the training set.

19.
The Bartels–Stewart algorithm is an effective and widely used method with O(n³) time complexity for solving a static Sylvester equation. When applied to a time-varying Sylvester equation, the computational burden grows sharply as the sampling period decreases and cannot satisfy continuous real-time calculation requirements. Gradient-based recurrent neural networks are able to solve the time-varying Sylvester equation in real time, but there always exists an estimation error. In contrast, the recently proposed Zhang neural network has been proven to converge to the solution of the Sylvester equation ideally as time goes to infinity. However, this neural network with the suggested activation functions never converges to the desired value in finite time, which may limit its applications in real-time processing. To tackle this problem, a sign-bi-power activation function is proposed in this paper to accelerate the Zhang neural network to finite-time convergence. The global convergence and finite-time convergence properties are proven in theory, and the upper bound of the convergence time is derived analytically. Simulations are performed to evaluate the performance of the neural network with the proposed activation function. In addition, the proposed strategy is applied to online calculation of the pseudo-inverse of a matrix and to nonlinear control of an inverted pendulum system. Both theoretical analysis and numerical simulations validate the effectiveness of the proposed activation function.
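A sketch of the finite-time ZNN design under one common formulation of the time-varying Sylvester equation, A(t)X - XB(t) + C(t) = 0; the sign-bi-power activation follows the form described, while solving the implicit dynamics with scipy's Sylvester solver at each Euler step is a convenience of the sketch, not the neural implementation.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def sbp(E, r=0.5):
    """Sign-bi-power activation, elementwise:
    phi(e) = 0.5 * (|e|^r + |e|^(1/r)) * sign(e), with 0 < r < 1,
    which drives the ZNN error to zero in finite time."""
    return 0.5 * (np.abs(E)**r + np.abs(E)**(1.0 / r)) * np.sign(E)

def znn_sylvester(A, B, C, dA, dB, dC, X0, t_end=2.0, dt=1e-3, gamma=10.0):
    """Euler-integrated Zhang neural network for A(t) X - X B(t) + C(t) = 0.

    A, B, C and dA, dB, dC are callables returning the coefficient matrices
    and their time derivatives. The ZNN design imposes dE/dt = -gamma*sbp(E)
    on the error E = A X - X B + C, which yields a Sylvester equation in
    dX/dt that is solved at each step."""
    X, t = X0.copy(), 0.0
    while t < t_end:
        At, Bt = A(t), B(t)
        E = At @ X - X @ Bt + C(t)
        rhs = -gamma * sbp(E) - dA(t) @ X + X @ dB(t) - dC(t)
        dX = solve_sylvester(At, -Bt, rhs)   # solves At*dX + dX*(-Bt) = rhs
        X, t = X + dt * dX, t + dt
    return X
```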

20.
Convergence of a boundary value method (BVM) from Aceto et al. [Boundary value methods for the reconstruction of Sturm–Liouville potentials, Appl. Math. Comput. 219 (2012), pp. 2960–2974] for computing Sturm–Liouville potentials from two spectra is discussed. In Aceto et al. (2012), a continuous approximation of the unknown potential, belonging to a suitable function space of finite dimension, is obtained by forming an associated set of nonlinear equations and solving them with a quasi-Newton approach. In our paper, convergence of the quasi-Newton approach is established, and the estimate of the unknown potential provided by the exact solution of the nonlinear equations is proved to converge to the true potential. To further investigate the properties of the BVM of Aceto et al. (2012), some other function spaces are introduced. Numerical examples confirm the theoretically predicted convergence properties and show the accuracy and stability of the BVM.
