Similar Literature
20 similar documents retrieved.
1.
Dynamics analysis and analog associative memory of networks with LT neurons   (Cited by: 1; self-citations: 0; citations by others: 1)
The additive recurrent network structure of linear threshold (LT) neurons represents a class of biologically motivated models in which nonsaturating transfer functions are necessary for representing neuronal activities, such as those of cortical neurons. This paper extends existing results on the dynamics of such linear threshold networks by establishing new and milder conditions for boundedness and asymptotic stability, while allowing for multistability. As a condition for asymptotic stability, it is found that boundedness does not require the connection weight matrix to be symmetric or to possess positive off-diagonal entries. The conditions provide an explicit way to design and analyze such networks. Based on the established theory, an alternative approach to studying such networks is through permitted and forbidden sets. One application of the LT network is analog associative memory, for which a simple design method is suggested in this paper. The proposed method resembles a generalized Hebbian approach but is distinguished by additional network parameters for normalization, excitation, and inhibition on both global and local scales. The computational abilities of the network depend on its nonlinear dynamics, which in turn depend on the sparsity of the memory vectors.
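
As a minimal illustration of the LT dynamics discussed above (a sketch of the common additive model dx/dt = -x + W·max(0, x) + h; the weight matrix and input below are illustrative, not taken from the paper):

```python
import numpy as np

def simulate_lt_network(W, h, x0, dt=0.01, steps=5000):
    """Euler integration of the additive LT network dx/dt = -x + W*max(0,x) + h."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (-x + W @ np.maximum(x, 0.0) + h)
    return x

# Illustrative 2-neuron network: W is asymmetric and has a negative
# off-diagonal entry, yet the trajectory stays bounded and converges.
W = np.array([[0.5, -0.4],
              [0.3,  0.2]])
h = np.array([1.0, 0.5])
print(simulate_lt_network(W, h, x0=[0.1, 0.1]))  # approaches a stable equilibrium
```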

2.
Lei Zhang, Zhang Yi, Jiali Yu, Pheng Ann Heng 《Neurocomputing》, 2009, 72(16-18): 3809
Multistability is an important dynamical property of neural networks; it enables certain applications in which monostable networks could be computationally restrictive. This paper studies multistability properties for a class of bidirectional associative memory recurrent neural networks with unsaturating piecewise linear transfer functions. Based on local inhibition, conditions for global exponential attractivity are established. These conditions allow the coexistence of stable and unstable equilibrium points. By constructing energy-like functions, complete convergence is studied.

3.
Yi Z, Tan KK, Lee TH 《Neural Computation》, 2003, 15(3): 639-662
Multistability is a property necessary in neural networks to enable certain applications (e.g., decision making), where monostable networks can be computationally restrictive. This article focuses on the analysis of multistability for a class of recurrent neural networks with unsaturating piecewise linear transfer functions. It deals fully with the three basic properties of a multistable network: boundedness, global attractivity, and complete convergence. The article makes the following contributions: conditions based on local inhibition are derived that guarantee boundedness of some multistable networks; conditions for global attractivity are established; bounds on global attractive sets are obtained; complete convergence conditions for the network are developed using novel energy-like functions; and simulation examples are employed to illustrate the theory.
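
A toy sketch of the multistability phenomenon analyzed in this class of networks, using the unsaturating piecewise linear transfer function max(0, ·) and local (mutual) inhibition; the two-neuron winner-take-all parameters are illustrative assumptions, not taken from the article:

```python
import numpy as np

def settle(x0, W, h, dt=0.01, steps=20000):
    """Integrate dx/dt = -x + W*max(0,x) + h from x0 until (near) convergence."""
    x = np.array(x0, float)
    for _ in range(steps):
        x += dt * (-x + W @ np.maximum(x, 0.0) + h)
    return x

# Self-excitation 0.5 (< 1) plus mutual inhibition 1.0 yields two stable
# winner-take-all equilibria, (2, -1) and (-1, 2); the initial condition
# selects which one is reached.
W = np.array([[ 0.5, -1.0],
              [-1.0,  0.5]])
h = np.array([1.0, 1.0])
print(settle([1.0, 0.1], W, h))  # -> approximately [ 2., -1.]
print(settle([0.1, 1.0], W, h))  # -> approximately [-1.,  2.]
```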

4.
When neural networks are applied to optimization computation, the ideal situation is that the network has a unique, globally asymptotically stable equilibrium point and approaches it at an exponential rate, thereby reducing the computation time the network requires. This paper studies the global asymptotic stability of recurrent neural networks with time-varying delays. The model under study is first transformed into a descriptor system model; then, using the Lyapunov-Krasovskii stability theorem, linear matrix inequality (LMI) techniques, the S-procedure, and algebraic inequality methods, new sufficient conditions ensuring the asymptotic stability of recurrent neural networks with time-varying delays are obtained. Applying these conditions to neural networks with constant delays and to delayed cellular neural network models yields the corresponding global asymptotic stability conditions. Theoretical analysis and numerical simulations show that the results provide new stability criteria for delayed recurrent neural networks.
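
The paper's exact LMI involves the descriptor transformation and the S-procedure; as a simpler illustration of the same LMI-feasibility workflow, here is a standard delay-independent Lyapunov-Krasovskii test for dx/dt = A x(t) + A_d x(t-tau), coded with cvxpy (the matrices are illustrative, and this is not the paper's condition):

```python
import numpy as np
import cvxpy as cp

# Stability of dx/dt = A x(t) + Ad x(t - tau) via the functional
# V = x'Px + integral of x'Qx over the delay window: feasible P, Q
# certify asymptotic stability for every delay tau >= 0.
A = np.array([[-3.0, 0.5], [0.2, -2.5]])   # illustrative system matrices
Ad = np.array([[0.3, -0.1], [0.2, 0.1]])

n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)
eps = 1e-6
lmi = cp.bmat([[A.T @ P + P @ A + Q, P @ Ad],
               [Ad.T @ P,            -Q]])
prob = cp.Problem(cp.Minimize(0),
                  [P >> eps * np.eye(n), Q >> eps * np.eye(n),
                   lmi << -eps * np.eye(2 * n)])
prob.solve(solver=cp.SCS)
print("delay-independent stability certificate found:", prob.status == cp.OPTIMAL)
```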

5.
Stability conditions for multiclass fluid queueing networks   (Cited by: 1; self-citations: 0; citations by others: 1)
We introduce a new method to investigate the stability of work-conserving policies in multiclass queueing networks. The method decomposes feasible trajectories and uses linear programming to test stability. We show that feasibility of this linear program is a necessary and sufficient condition for the stability of all work-conserving policies in multiclass fluid queueing networks with two stations. Furthermore, we find new sufficient conditions for the stability of multiclass queueing networks with any number of stations and conjecture that these conditions are also necessary. Previous research had identified sufficient conditions through the use of a particular class of (piecewise linear convex) Lyapunov functions. Using linear programming duality, we show that for two-station systems the Lyapunov function approach is equivalent to ours and therefore characterizes stability exactly.
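
The paper's linear-programming test is more refined than can be sketched here; for orientation, the code below checks only the standard nominal-load condition rho_j < 1 at each station, which is necessary for stability but, as multiclass counterexamples show, not sufficient (rates, service times, and the class-to-station map are illustrative):

```python
import numpy as np

# Nominal load check for a multiclass network: station j is nominally
# stable if the work arriving per unit time is below its capacity.
lam = np.array([1.0, 1.0, 1.0])      # arrival rate of each class
m = np.array([0.3, 0.4, 0.2])        # mean service time of each class
station = np.array([0, 1, 0])        # station serving each class

for j in range(station.max() + 1):
    rho = np.sum(lam[station == j] * m[station == j])
    print(f"station {j}: rho = {rho:.2f} ({'ok' if rho < 1 else 'overloaded'})")
```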

6.
Zeng Z, Wang J 《Neural Computation》, 2007, 19(8): 2149-2182
In this letter, sufficient conditions are obtained to guarantee that recurrent neural networks with linear saturation activation functions and time-varying delays have multiple equilibria located in the saturation region and on the boundaries of the saturation region. These results on pattern characterization are used to analyze and design autoassociative memories directly in terms of the parameters of the neural networks. Moreover, a formula for the number of spurious equilibria is derived. Four design procedures for recurrent neural networks with linear saturation activation functions and time-varying delays are developed based on the stability results; two of these procedures allow the neural network to learn and forget. Finally, simulation results demonstrate the validity and characteristics of the proposed approach.
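
Not one of the paper's four design procedures, but a generic outer-product (Hebbian-style) design with a linear saturation activation, sketched to show the kind of autoassociative recall at stake (the stored patterns are illustrative):

```python
import numpy as np

def sat(x):
    """Linear saturation activation: identity on [-1, 1], clipped outside."""
    return np.clip(x, -1.0, 1.0)

def recall(W, probe, steps=50):
    x = probe.astype(float)
    for _ in range(steps):
        x = sat(W @ x)           # synchronous update of the recurrent network
    return np.sign(x)

# Store two (orthogonal) bipolar patterns with a scaled outer-product rule.
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1,  1, -1, -1, 1,  1]], dtype=float)
n = patterns.shape[1]
W = patterns.T @ patterns / n

noisy = patterns[0].copy()
noisy[0] *= -1                   # corrupt one component
print(recall(W, noisy))          # recovers the sign pattern of patterns[0]
```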

7.
This article studies the computational power of various discontinuous real computational models that are based on the classical analog recurrent neural network (ARNN). The ARNN consists of a finite number of neurons; each neuron computes a polynomial net function and a sigmoid-like continuous activation function. We introduce arithmetic networks as ARNNs augmented with a few simple discontinuous (e.g., threshold or zero-test) neurons. We argue that even with weights restricted to polynomial-time computable reals, arithmetic networks are able to compute arbitrarily complex recursive functions. We identify many types of neural networks that are at least as powerful as arithmetic nets, some of which are not in fact discontinuous but instead boost other arithmetic operations in the net function (e.g., neurons that can use divisions and polynomial net functions inside sigmoid-like continuous activation functions). These arithmetic networks are equivalent to the Blum-Shub-Smale model when the latter is restricted to a bounded number of registers. With respect to implementation on digital computers, we show that arithmetic networks with rational weights can be simulated with exponential precision, but even with polynomial-time computable real weights, arithmetic networks are not subject to any fixed precision bounds. This contrasts with the ARNN, which is known to demand precision that is linear in the computation time. When nontrivial periodic functions (e.g., fractional part, sine, tangent) are added to arithmetic networks, the resulting networks are computationally equivalent to a massively parallel machine. Thus, these highly discontinuous networks can solve the presumably intractable class of PSPACE-complete problems in polynomial time.

8.
We study the application of neural networks to modeling the blood glucose metabolism of a diabetic. In particular, we consider recurrent neural networks and time-series convolution neural networks, which we compare to linear models and to nonlinear compartment models. We include a linear error model to take into account the uncertainty in the system and to handle missing blood glucose observations. Our results indicate that the best performance is achieved by combining the recurrent neural network with the linear error model.

9.
This paper studies the global output convergence of a class of recurrent delayed neural networks with time-varying inputs. We consider non-decreasing activations that may have jump discontinuities, in order to model the ideal situation where the gain of the neuron amplifiers is very high and tends to infinity. In particular, we drop the assumptions of Lipschitz continuity and boundedness on the activation functions, which are required in most existing works. Because of the possible discontinuities of the activation functions, we introduce a suitable notion of limit to study the convergence of the output of the recurrent delayed neural networks. Under suitable assumptions on the interconnection matrices and the time-varying inputs, we establish a sufficient condition for global output convergence of this class of neural networks. The convergence results are useful in solving some optimization problems and in the design of recurrent delayed neural networks with discontinuous neuron activations.

10.
This paper considers homogeneous networks of general, linear time-invariant, second-order systems. We consider linear feedback controllers and require that the directed graph associated with the network contain a spanning tree and that the systems be stabilisable. We show that consensus with a guaranteed rate of convergence can always be achieved using linear state feedback. To achieve this, we provide a new and simple derivation of the conditions for a second-order polynomial with complex coefficients to be Hurwitz. We apply this result to obtain necessary and sufficient conditions for achieving consensus in networks whose graph Laplacian matrix may have complex eigenvalues. Based on the conditions found, methods to compute feedback gains are proposed. We show that gains can be chosen such that consensus is achieved robustly over a variety of communication structures and system dynamics. We also consider the use of static output feedback.
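
The paper derives closed-form Hurwitz conditions for second-order polynomials with complex coefficients; the sketch below checks the same property numerically via root computation, applied to the double-integrator consensus special case (the gains and Laplacian eigenvalue are illustrative assumptions):

```python
import numpy as np

def is_hurwitz_quadratic(alpha, beta):
    """True iff s^2 + alpha*s + beta (alpha, beta complex) is Hurwitz,
    i.e. both roots lie strictly in the open left half-plane."""
    roots = np.roots([1.0, alpha, beta])
    return bool(np.all(roots.real < 0))

# Complex coefficients arise when the graph Laplacian has complex
# eigenvalues: for double-integrator agents with u = -kp*L*x - kd*L*xdot,
# each nonzero eigenvalue mu contributes the mode s^2 + kd*mu*s + kp*mu.
mu = 1.0 + 0.8j           # illustrative Laplacian eigenvalue
kp, kd = 2.0, 3.0
print(is_hurwitz_quadratic(kd * mu, kp * mu))   # True: this mode is stable
```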

11.
The class of nonlinear systems described by a discrete-time state equation containing a repeated scalar nonlinearity, as in recurrent neural networks, is considered. Sufficient conditions are derived for the stability and induced norm of such systems using positive definite diagonally dominant Lyapunov functions or storage functions satisfying appropriate linear matrix inequalities. Results are also presented for model reduction errors for such systems.

12.
In this paper, we investigate the dynamics of memristor-based recurrent networks with bounded activation functions and bounded time-varying delays in the presence of strong external stimuli. It is shown that global exponential stability of such networks can be achieved when the external stimuli are sufficiently strong, without the need for other conditions. A sufficient condition on the bounds of the stimuli is derived for global exponential stability of memristor-based recurrent networks. All results are in the sense of Filippov solutions. Simulation results illustrate the use of the criteria to ascertain the global exponential stability of specific networks.

13.
《International Journal of Computer Mathematics》, 2012, 89(10): 1313-1322
Several explicit algorithms for tracking the parameters of second-order models have been derived by the authors based on information available from the system time trajectory. In this paper the problem is recast in terms of recurrent integral-hybrid networks used in a hierarchical formation, both for the reduced-order model and to estimate the derivatives needed for parameter tracking. We relax the constant-parameter condition by assuming linear time variation; the additional information is extracted from the system output trajectory by obtaining higher time derivatives, which yield explicit functions for tracking the parameters online.

14.
Lambertian reflectance and linear subspaces   (Cited by: 23; self-citations: 0; citations by others: 23)
We prove that the set of all Lambertian reflectance functions (the mapping from surface normals to intensities) obtained with arbitrary distant light sources lies close to a 9D linear subspace. This implies that, in general, the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace, explaining prior empirical results. We also provide a simple analytic characterization of this linear space. We obtain these results by representing lighting using spherical harmonics and describing the effects of Lambertian materials as the analog of a convolution. These results allow us to construct algorithms for object recognition based on linear methods as well as algorithms that use convex optimization to enforce nonnegative lighting functions. We also show a simple way to enforce nonnegative lighting when the images of an object lie near a 4D linear space. We apply these algorithms to perform face recognition by finding the 3D model that best matches a 2D query image.
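
The 9D subspace is spanned by "harmonic images": the first nine real spherical harmonics evaluated at the surface normals, scaled by albedo. A sketch using the standard order-0/1/2 constants (the least-squares fit at the end is an illustrative usage, with `image` assumed given):

```python
import numpy as np

def harmonic_basis_images(normals, albedo=None):
    """First nine spherical-harmonic basis images from unit surface normals.

    normals: (..., 3) array of unit normals; returns a (..., 9) array."""
    x, y, z = normals[..., 0], normals[..., 1], normals[..., 2]
    basis = np.stack([
        0.282095 * np.ones_like(x),        # Y_0,0
        0.488603 * y,                      # Y_1,-1
        0.488603 * z,                      # Y_1,0
        0.488603 * x,                      # Y_1,1
        1.092548 * x * y,                  # Y_2,-2
        1.092548 * y * z,                  # Y_2,-1
        0.315392 * (3 * z**2 - 1),         # Y_2,0
        1.092548 * x * z,                  # Y_2,1
        0.546274 * (x**2 - y**2),          # Y_2,2
    ], axis=-1)
    if albedo is not None:
        basis = basis * albedo[..., None]
    return basis

# Usage: recognition by least-squares lighting fit to the 9D subspace:
# coeffs, *_ = np.linalg.lstsq(basis.reshape(-1, 9), image.ravel(), rcond=None)
```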

15.
This paper studies multiperiodicity and attractivity for a class of recurrent neural networks (RNNs) with unsaturating piecewise linear transfer functions and variable delays. Using local inhibition, conditions for boundedness and global attractivity are established. These conditions allow the coexistence of stable and unstable trajectories. Moreover, multiperiodicity of the network is investigated using local invariant sets. It is shown that, under certain conditions, each invariant set contains one periodic trajectory that exponentially attracts all trajectories in that set. Simulations are carried out to illustrate the theory.

16.
We investigate the properties of an abstract negotiation framework where agents autonomously negotiate over allocations of indivisible resources. In this framework, reaching an allocation that is optimal may require very complex multilateral deals. Therefore, we are interested in identifying classes of valuation functions such that any negotiation conducted by means of deals involving only a single resource at a time is bound to converge to an optimal allocation whenever all agents model their preferences using these functions. In the case of negotiation with monetary side payments amongst self-interested but myopic agents, the class of modular valuation functions turns out to be such a class. That is, modularity is a sufficient condition for convergence in this framework. We also show that modularity is not a necessary condition. Indeed, there can be no condition on individual valuation functions that would be both necessary and sufficient in this sense. Evaluating conditions formulated with respect to the whole profile of valuation functions used by the agents in the system would be possible in theory, but turns out to be computationally intractable in practice. Our main result shows that the class of modular functions is maximal in the sense that no strictly larger class of valuation functions would still guarantee an optimal outcome of negotiation, even when we permit more general bilateral deals. We also establish similar results in the context of negotiation without side payments.
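
A toy sketch of why modular valuations make single-resource deals sufficient: with side payments, a one-item deal is individually rational exactly when it raises total value, so repeatedly reassigning each item to its highest valuer converges to the optimal allocation (the agents, items, and values below are illustrative assumptions):

```python
# Modular valuations: each agent values a bundle as the sum of item values.
values = {                      # values[agent][item]
    "a1": {"r1": 5, "r2": 1, "r3": 4},
    "a2": {"r1": 2, "r2": 7, "r3": 6},
}
owner = {"r1": "a2", "r2": "a1", "r3": "a1"}   # initial allocation

# A one-resource deal with a side payment is individually rational iff it
# strictly increases total value; for modular valuations such deals suffice.
changed = True
while changed:
    changed = False
    for item, cur in list(owner.items()):
        best = max(values, key=lambda a: values[a][item])
        if values[best][item] > values[cur][item]:
            owner[item] = best      # the side payment splits the surplus
            changed = True

print(owner)   # each resource ends with the agent who values it most
```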

17.
This paper investigates the existence, uniqueness, and global exponential stability (GES) of the equilibrium point for a large class of neural networks with globally Lipschitz continuous activations, including the widely used sigmoidal activations and piecewise linear activations. The sufficient condition provided for GES is mild, and some conditions that are easily checked in practice are also presented. GES of neural networks with locally Lipschitz continuous activations is also obtained under an appropriate condition. The analysis results substantially extend the existing relevant stability results in the literature, and therefore significantly expand the range of applications of neural networks to solving optimization problems. As a demonstration, we apply the obtained results to the design of a recurrent neural network (RNN) for solving the linear variational inequality problem (VIP) defined on any nonempty closed box set, which includes box-constrained quadratic programming and the linear complementarity problem as special cases. It can be inferred that the linear VIP has a unique solution for the class of Lyapunov diagonally stable matrices, and that the synthesized RNN is globally exponentially convergent to the unique solution. Some illustrative simulation examples are also given.
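
A minimal sketch of a projection-type recurrent network commonly used for the box-constrained linear VIP, simulated by Euler integration (this is not necessarily the paper's exact synthesis; M, q, and the bounds are illustrative, with M symmetric positive definite and hence Lyapunov diagonally stable):

```python
import numpy as np

def solve_box_vi(M, q, lo, hi, dt=0.01, steps=20000):
    """Projection network for the linear VI on a box: find x in [lo, hi]
    with (y - x)'(Mx + q) >= 0 for all y in the box.
    Dynamics: dx/dt = P_box(x - (Mx + q)) - x."""
    x = np.zeros_like(q)
    for _ in range(steps):
        x = x + dt * (np.clip(x - (M @ x + q), lo, hi) - x)
    return x

M = np.array([[2.0, 0.5], [0.5, 1.5]])
q = np.array([-1.0, -2.0])
lo, hi = np.zeros(2), 2.0 * np.ones(2)
print(solve_box_vi(M, q, lo, hi))  # unique VI solution (here the QP optimum)
```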

18.
In this paper, a class of piecewise linear Lyapunov functions is used to obtain sufficient conditions for the stability of multiclass fluid networks under priority service disciplines. This extends and generalizes existing work based on linear Lyapunov functions, and complements existing work that uses the piecewise linear Lyapunov function approach to study the global stability of multiclass queueing networks. A three-station network example is used to illustrate the quality of this sufficient condition. Both theoretical and experimental results indicate that the piecewise linear Lyapunov function approach yields better and more effective sufficient conditions.

19.
In short-term memory networks, transient stimuli are represented by patterns of neural activity that persist long after stimulus offset. Here, we compare the performance of two prominent classes of memory networks, feedback-based attractor networks and feedforward networks, in conveying information about the amplitude of a briefly presented stimulus in the presence of Gaussian noise. Using Fisher information as a metric of memory performance, we find that the optimal form of network architecture depends strongly on assumptions about the forms of nonlinearities in the network. For purely linear networks, we find that feedforward networks outperform attractor networks because noise is continually removed from feedforward networks when signals exit the network; as a result, feedforward networks can amplify signals they receive faster than noise accumulates over time. By contrast, attractor networks must operate in a signal-attenuating regime to avoid the buildup of noise. However, if the amplification of signals is limited by a finite dynamic range of neuronal responses, or if noise is reset at the time of signal arrival, as suggested by recent experiments, we find that attractor networks can outperform feedforward ones. Under a simple model in which neurons have a finite dynamic range, we find that the optimal attractor networks are forgetful if there is no mechanism for noise reduction with signal arrival but nonforgetful (perfect integrators) in the presence of a strong reset mechanism. Furthermore, we find that the maximal Fisher information for the feedforward and attractor networks exhibits power-law decay as a function of time and scales linearly with the number of neurons. These results highlight prominent factors that lead to trade-offs in the memory performance of networks with different architectures and constraints, and suggest conditions under which attractor or feedforward networks may be best suited to storing information about previous stimuli.
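
A toy Monte-Carlo version of the Fisher-information comparison for the purely linear case (scalar stages with illustrative parameters; the paper treats full N-neuron networks and nonlinear effects):

```python
import numpy as np

rng = np.random.default_rng(0)
T, sigma, trials = 20, 0.1, 50_000

def fisher(a, s=1.0, ds=1e-3):
    """Monte-Carlo Fisher information (gain^2 / variance, exact for this
    linear-Gaussian toy) about amplitude s after T steps of x <- a*x + noise."""
    noise = sigma * rng.standard_normal((T, trials))
    def run(s0):
        x = np.full(trials, s0)
        for t in range(T):
            x = a * x + noise[t]        # common random numbers for both runs
        return x
    lo, hi = run(s - ds), run(s + ds)
    gain = (hi - lo).mean() / (2 * ds)  # equals a**T exactly in this model
    return gain**2 / hi.var()

# A leaky attractor (a < 1) must attenuate to keep noise bounded; a
# feedforward chain can use a > 1 because the signal exits after T stages,
# so transient amplification never destabilizes the ongoing dynamics.
print("attractor   a=0.9:", fisher(0.9))
print("feedforward a=1.1:", fisher(1.1))   # much larger Fisher information
```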

20.
Determining the architecture of a neural network is an important issue for any learning task. For recurrent neural networks, no general methods exist for estimating the number of hidden layers, the size of the layers, or the number of weights. We present a simple pruning heuristic that significantly improves the generalization performance of trained recurrent networks. We illustrate this heuristic by training a fully recurrent neural network on positive and negative strings of a regular grammar. We also show that rules extracted from networks trained with this pruning heuristic are more consistent with the rules to be learned. This performance improvement is obtained by pruning and retraining the networks. Simulations are shown for training and pruning a recurrent neural network on strings generated by two regular grammars: a randomly generated 10-state grammar and an 8-state triple-parity grammar. Further simulations indicate that this pruning method can achieve generalization performance superior to that obtained by training with weight decay.
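
The paper's specific heuristic is not reproduced here; the sketch below shows the generic magnitude-based prune-and-retrain loop it belongs to (`train_epoch` and `validate` are hypothetical placeholders for whatever training and evaluation routine the task uses):

```python
import numpy as np

def prune_smallest(W, frac=0.1):
    """Zero out the fraction of nonzero recurrent weights with the smallest
    magnitudes; return the pruned weights and the mask of surviving weights."""
    nonzero = np.abs(W[W != 0])
    if nonzero.size == 0:
        return W, W != 0
    threshold = np.quantile(nonzero, frac)
    mask = np.abs(W) >= threshold
    return W * mask, mask

# Prune-and-retrain loop (hypothetical driver code):
# W, mask = prune_smallest(W, frac=0.1)
# while validate(W) is acceptable:
#     W = train_epoch(W) * mask      # retrain, keeping pruned weights at zero
#     W, mask = prune_smallest(W)
```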
