Similar Literature
20 similar documents found (search time: 15 ms)
1.
Continuous attractors of a class of recurrent neural networks   (Cited: 1; self-citations: 0; by others: 1)
Recurrent neural networks (RNNs) may possess continuous attractors, a property that many brain theories have implicated in learning and memory. There is good evidence that continuous stimuli, such as orientation, movement direction, and the spatial location of objects, can be encoded as continuous attractors in neural networks. The dynamical behavior of continuous attractors is an interesting property of RNNs. This paper studies the continuous attractors of a class of RNNs in which inhibition among neurons is realized through a subtractive mechanism. It is shown that if the synaptic connections are Gaussian-shaped and the other parameters are appropriately selected, the network can exactly realize continuous attractor dynamics. Conditions are derived that guarantee the validity of the selected parameters. Simulations are provided for illustration.
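As a toy illustration of the kind of dynamics this abstract describes, here is a minimal one-dimensional "bump" attractor sketch with Gaussian-shaped recurrent excitation and a global subtractive inhibition term. All parameter values and the normalization step are our own illustrative assumptions, not the conditions derived in the paper.

```python
import numpy as np

# Minimal 1-D continuous-attractor sketch: Gaussian recurrent excitation
# plus global subtractive inhibition (parameter values are illustrative).
N = 128
x = np.linspace(-np.pi, np.pi, N, endpoint=False)   # preferred stimulus values
d = x[:, None] - x[None, :]
d = (d + np.pi) % (2 * np.pi) - np.pi               # periodic distance on the ring
W = np.exp(-d**2 / (2 * 0.5**2))                    # Gaussian synaptic profile
k = 0.15                                            # subtractive inhibition strength

r = np.exp(-(x - 1.0)**2)                           # transient cue at position 1.0
for _ in range(100):
    u = W @ r / N - k * r.sum() / N                 # excitation minus inhibition
    r = np.maximum(u, 0.0)                          # firing-rate rectification
    r = r / r.max()                                 # keep amplitude bounded

peak = x[np.argmax(r)]                              # bump persists near the cue
print(peak)
```

After the cue is removed, the activity bump sustains itself at the cued position, which is the defining behavior of a continuous attractor.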

2.
Computing with continuous attractors: stability and online aspects   (Cited: 1; self-citations: 0; by others: 1)
Wu S, Amari S. Neural Computation, 2005, 17(10): 2215–2239
Two issues concerning the application of continuous attractors in neural systems are investigated: the computational robustness of continuous attractors with respect to input noise and the implementation of Bayesian online decoding. In a perfect mathematical model for continuous attractors, decoding results for stimuli are highly sensitive to input noise, and this sensitivity is the inevitable consequence of the system's neutral stability. To overcome this shortcoming, we modify the conventional network model by including extra dynamical interactions between neurons. These interactions vary according to the biologically plausible Hebbian learning rule and have the computational role of memorizing and propagating stimulus information accumulated over time. As a result, the new network model responds to the history of external inputs over a period of time and hence becomes insensitive to short-term fluctuations. Also, since the dynamical interactions provide a mechanism to convey prior knowledge of the stimulus, that is, the information about the stimulus presented previously, the network effectively implements online Bayesian inference. This study also reveals some interesting behavior of neural population coding, such as the trade-off between decoding stability and the speed of tracking time-varying stimuli, and the relationship between neural tuning width and tracking speed.

3.
Advances in understanding the neuronal code employed by cortical networks indicate that networks of parametrically coupled nonlinear iterative maps, each acting as a bifurcation processing element, furnish a potentially powerful tool for the modeling, simulation, and study of cortical networks and the host of higher-level processing and control functions they perform. Such functions are central to understanding and elucidating general principles on which the design of biomorphic learning and intelligent systems can be based. The networks concerned are dynamical in nature, in the sense that they compute not only with static (fixed-point) attractors but also with dynamic (periodic and chaotic) attractors. As such, they compute with diverse attractors and utilize transitions (bifurcations) between attractors, as well as transient chaos, to carry out the functions they perform. An example of a dynamical network, a parametrically coupled net of logistic processing elements, is described and discussed together with some of its behavioural attributes relevant to elucidating the possible roles of coherence, bifurcation, and chaos in higher-level brain functions carried out by cortical networks.
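A small coupled-map lattice illustrates the idea of logistic processing elements whose collective behavior bifurcates with a parameter. This is our own generic sketch (ring topology, diffusive coupling, illustrative parameter values), not the paper's specific network.

```python
import numpy as np

def iterate_cml(r, eps, steps=500, n=16, seed=0):
    """Ring of n logistic maps x -> r*x*(1-x) with diffusive nearest-neighbour coupling."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.1, 0.9, n)
    for _ in range(steps):
        f = r * x * (1 - x)
        x = (1 - eps) * f + 0.5 * eps * (np.roll(f, 1) + np.roll(f, -1))
    return x

fixed = iterate_cml(r=2.8, eps=0.1)    # r = 2.8: each map has a stable fixed point
chaotic = iterate_cml(r=3.9, eps=0.1)  # r = 3.9: each map is chaotic

print(np.ptp(fixed), np.ptp(chaotic))  # spread across elements in each regime
```

Sweeping the map parameter r thus moves the whole network between a homogeneous fixed-point attractor and a spatially disordered chaotic regime, the kind of bifurcation-based computation the abstract refers to.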

4.
Fractal variation of dynamical attractors is observed in complex-valued neural networks in which a negative-resistance nonlinearity is introduced as the neuron activation function. When a parameter of the negative-resistance nonlinearity is changed continuously, the network attractors are found to present a kind of fractal variation in a certain parameter range between the deterministic and non-deterministic attractor ranges. The fractal pattern has a convergence point, which is also the critical point where deterministic attractors change into chaotic attractors. This result suggests that complex-valued neural networks with a negative-resistance nonlinearity exhibit dynamical complexity at the so-called edge of chaos. (The author is also with the Research Center for Advanced Science and Technology (RCAST), University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153, Japan.)

5.
Attractor networks have been one of the most successful paradigms in neural computation and have been used as models of computation in the nervous system. Recently, we proposed a paradigm called 'latent attractors', in which attractors embedded in a recurrent network via Hebbian learning are used to channel network response to external input rather than becoming manifest themselves. This allows the network to generate context-sensitive internal codes in complex situations. Latent attractors are particularly helpful in explaining computations within the hippocampus, a brain region of fundamental significance for memory and spatial learning. Latent attractor networks are a special case of associative memory networks. The model studied here consists of a two-layer recurrent network with attractors stored in the recurrent connections using a clipped Hebbian learning rule. The firing in both layers is competitive: K-winners-take-all firing. The number of neurons allowed to fire, K, is smaller than the size of the active set of the stored attractors. The performance of latent attractor networks depends on the number of such attractors that a network can sustain. In this paper, we use signal-to-noise methods developed for standard associative memory networks to carry out a theoretical and computational analysis of the capacity and dynamics of latent attractor networks. This is an important first step in making latent attractors a viable tool in the repertoire of neural computation. The method developed here leads to numerical estimates of the capacity limits and dynamics of latent attractor networks, and it represents a general approach to analyzing standard associative memory networks with competitive firing. The theoretical analysis is based on estimates of the dendritic sum distributions using a Gaussian approximation. Because of the competitive firing property, the capacity results are estimated only numerically, by iteratively computing the probability of erroneous firings.
The analysis covers two cases: a simple case, which accounts for the correlations between weights due to shared patterns, and a detailed case, which also includes the temporal correlations between the network's present and previous states. The latter better predicts the dynamics of the network state for non-zero initial spurious firing. The theoretical analysis also shows the influence of the model's main parameters on storage capacity.
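The competitive firing rule used above is easy to state concretely. A minimal K-winners-take-all step (our sketch; variable names are ours) turns a vector of dendritic sums into a binary firing vector with exactly the K largest sums active:

```python
import numpy as np

def k_wta(dendritic_sums, K):
    """Return a binary firing vector with exactly the K largest sums active."""
    winners = np.argsort(dendritic_sums)[-K:]   # indices of the K largest sums
    out = np.zeros_like(dendritic_sums)
    out[winners] = 1.0
    return out

sums = np.array([0.2, 1.5, 0.9, 0.1, 1.1, 0.4])
print(k_wta(sums, K=2))   # → [0. 1. 0. 0. 1. 0.]
```

Because only the relative ordering of the dendritic sums matters, the capacity analysis reduces to estimating how often noise pushes a non-pattern neuron's sum above a pattern neuron's, which is what the Gaussian approximation in the abstract quantifies.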

6.
A massively recurrent neural network responds to input stimuli on the one hand and, on the other, is autonomously active in the absence of sensory inputs. Stimulus and information processing depend crucially on the quality of the autonomous-state dynamics of the ongoing neural activity. This default neural activity may be dynamically structured in time and space, showing regular, synchronized, bursting, or chaotic activity patterns. We study the influence of nonsynaptic plasticity on the default dynamical state of recurrent neural networks. The nonsynaptic adaptation considered acts on intrinsic neural parameters, such as the threshold and the gain, and is driven by the optimization of the information entropy. We observe, in the presence of the intrinsic adaptation processes, three distinct and globally attracting dynamical regimes: a regular synchronized regime, an overall chaotic regime, and an intermittent bursting regime. The intermittent bursting regime is characterized by intervals of regular flow, which are quite insensitive to external stimuli, interspersed with chaotic bursts that respond sensitively to input signals. We discuss these findings in the context of self-organized information processing and critical brain dynamics.

7.
Chaotic dynamics in a recurrent neural network model in which limit-cycle memory attractors are stored is investigated by means of numerical methods. In particular, we focus on the quick and sensitive response characteristics of chaotic memory dynamics to an external input that consists of part of an embedded memory attractor. To understand the dynamical mechanisms of these rapid responses, we have calculated the correlation functions between the firing activities of neurons. The results show that quite strong correlations arise very quickly between almost all neurons, within 1–2 updating steps after applying a partial input. They suggest that the existence of dynamical correlations, in other words, transient correlations in chaos, plays a very important role in quick and/or sensitive responses.

8.
Siri B, Berry H, Cessac B, Delord B, Quoy M. Neural Computation, 2008, 20(12): 2937–2966
We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks with a generic Hebbian learning rule, including passive forgetting and different timescales for neuronal activity and learning dynamics. Previous numerical work has reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on neural network evolution. Furthermore, we show that sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.

9.
Suemitsu Y, Nara S. Neural Computation, 2004, 16(9): 1943–1957
Chaotic dynamics introduced into a neural network model is applied to solving two-dimensional mazes, which are ill-posed problems. A moving object moves from its position at time t to t + 1 according to a simply defined motion function calculated from the firing patterns of the neural network model at each time step t. We have embedded in the network several prototype attractors that correspond to simple motions of the object toward several directions in two-dimensional space. Introducing chaotic dynamics into the network yields outputs sampled from intermediate state points between the embedded attractors in state space, and these dynamics enable the object to move in various directions. Switching a system parameter between a chaotic and an attractor regime in the state space of the neural network enables the object to move to a set target in a two-dimensional maze. Computer simulations show that the success rate of this method over 300 trials is higher than that of a random walk. To investigate why the proposed method performs better, we calculate and discuss statistical data concerning its dynamical structure.

10.
In this paper we analyze how supervised learning occurs in ecological neural networks, i.e. networks that interact with an autonomous external environment and, therefore, at least partially determine their own input through their behavior. Using an evolutionary method for selecting good teaching inputs, we surprisingly find that to obtain a desired output X it is better to use a teaching input different from X. To explain this fact we claim that teaching inputs in ecological networks have two different effects: (a) to reduce the discrepancy between the actual output of the network and the teaching input; (b) to modify the network's behavior and, as a consequence, the network's learning experiences. Evolved teaching inputs appear to represent a compromise between these two needs. We finally show that evolved teaching inputs that are allowed to change during the learning process function differently at different stages of learning, first giving more weight to (b) and, later on, to (a).

11.
Support vector machine theory and a programming-based learning algorithm for neural networks   (Cited: 19; self-citations: 3; by others: 19)
Zhang Ling. 计算机学报 (Chinese Journal of Computers), 2001, 24(2): 113–118
In recent years, support vector machine (SVM) theory has received great attention from researchers abroad, who widely regard it as a new research direction for neural network learning, and it has recently begun to attract the attention of Chinese researchers as well. This paper studies the relationship between SVM theory and programming-based algorithms for neural networks. It is first shown that Vapnik's SVM algorithm is equivalent to the programming-based neural network algorithm proposed by the author in 1994: when the sample set is linearly separable, both obtain the maximal-margin solution. They differ in that the complexity of the former (usually solved by the Lagrange multiplier method) grows exponentially with problem size, whereas the complexity of the latter is a polynomial function of the size. Second, the programming problem is recast as finding the projection of a point onto a convex set; exploiting this geometric intuition, a constructive iterative algorithm, the "simplex iteration algorithm", is derived. The new algorithm has strong geometric intuitiveness, which deepens the understanding of neural network learning in the linearly separable case, and from it a necessary and sufficient condition for a sample set to be linearly separable is derived. In addition, for the knowledge-expansion problem, the new algorithm yields a very convenient incremental learning algorithm. Finally, it is pointed out that the principle of "turning conditions that must be satisfied into constraints, taking some performance measure of the network as the objective function, and thus reducing the network learning problem to a programming problem" is a very effective approach to studying neural network learning.

12.
A method is proposed for constructing salient features from a set of features given as input to a feedforward neural network used for supervised learning. Combinations of the original features are formed that maximize the sensitivity of the network's outputs with respect to variations of its inputs. The method exhibits some similarity to Principal Component Analysis, but also takes into account the supervised character of the learning task. It is applied to classification problems, leading to improved generalization ability that originates from alleviating the curse of dimensionality.
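One concrete way to realize "input combinations that maximize output sensitivity" is to take the leading right singular vectors of the network's input-output Jacobian. This is our own sketch of that general idea (random weights, a single linearization point), not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(5, 8))      # hidden weights of a tiny tanh network
W2 = rng.normal(size=(3, 5))      # output weights

x0 = rng.normal(size=8)           # point at which sensitivity is measured
h = np.tanh(W1 @ x0)

# Analytic Jacobian d(output)/d(input) at x0 for out = W2 @ tanh(W1 @ x).
J = W2 @ ((1 - h**2)[:, None] * W1)

U, s, Vt = np.linalg.svd(J)
salient = Vt[:2]                  # two most sensitive input combinations
print(s[:2] / s.sum())            # fraction of total sensitivity they capture
```

Unlike PCA, which looks only at input variance, these directions are ranked by how strongly they move the network's outputs, which is the supervised twist the abstract describes.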

13.
Dynamics and computation of continuous attractors   (Cited: 1; self-citations: 0; by others: 1)
The continuous attractor is a promising model for describing the encoding of continuous stimuli in neural systems. In a continuous attractor, the stationary states of the neural system form a continuous parameter space on which the system is neutrally stable. This property enables the neural system to track time-varying stimuli smoothly, but it also degrades the accuracy of information retrieval, since the stationary states are easily disturbed by external noise. In this work, based on a simple model, we systematically investigate the dynamics and computational properties of continuous attractors. In order to analyze the dynamics of a large network, which is otherwise extremely complicated, we develop a strategy to reduce its dimensionality by exploiting the fact that a continuous attractor eliminates the noise components perpendicular to the attractor space very quickly. We therefore project the network dynamics onto the tangent of the attractor space and successfully simplify it to a one-dimensional Ornstein-Uhlenbeck process. Based on this simplified model, we investigate (1) the decoding error of a continuous attractor driven by external noisy inputs, (2) the tracking speed of a continuous attractor when the external stimulus changes abruptly, (3) the neural correlation structure associated with the specific dynamics of a continuous attractor, and (4) the consequences of asymmetric neural correlations for statistical population decoding. The potential implications of these results for our understanding of neural information processing are also discussed.
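The reduced description mentioned above, a one-dimensional Ornstein-Uhlenbeck process, is easy to simulate with an Euler–Maruyama step. The parameter values below are arbitrary illustrations, not values from the paper; the check against the analytic stationary variance sigma^2/(2*theta) is a standard property of the OU process.

```python
import numpy as np

theta, sigma = 1.0, 0.5            # restoring rate and noise strength (illustrative)
dt, n = 0.01, 200_000
rng = np.random.default_rng(1)
noise = rng.standard_normal(n)
s = sigma * np.sqrt(dt)            # per-step noise scale

x = np.empty(n)
x[0] = 0.0
for t in range(n - 1):
    # Euler-Maruyama step: dx = -theta * x * dt + sigma * dW
    x[t + 1] = x[t] - theta * x[t] * dt + s * noise[t]

var = x[n // 10:].var()            # discard transient, estimate stationary variance
print(var)                         # theory: sigma**2 / (2 * theta) = 0.125
```

In the paper's interpretation, x plays the role of the bump position along the attractor, so this stationary variance is directly the decoding error induced by input noise.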

14.
Real-time algorithms for gradient-descent supervised learning in recurrent dynamical neural networks fail to support scalable VLSI implementation, owing to their complexity, which grows sharply with the network dimension. We present an alternative implementation in analog VLSI, which employs a stochastic perturbation algorithm to observe the gradient of the error index directly on the network in random directions of the parameter space, thereby avoiding the tedious task of deriving the gradient from an explicit model of the network dynamics. The network contains six fully recurrent neurons with continuous-time dynamics, providing 42 free parameters comprising connection strengths and thresholds. The chip implementing the network includes local provisions supporting both the learning and the storage of the parameters, integrated in a scalable architecture that can be readily expanded for applications requiring learning recurrent dynamical networks of larger dimensionality. We describe and characterize the functional elements comprising the implemented recurrent network and integrated learning system, and include experimental results obtained from training the network to represent a quadrature-phase oscillator.
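The random-direction perturbation idea can be sketched in software as SPSA-style weight perturbation: probe the error index at two points along a random +-1 direction and descend along that direction. Here it minimizes a toy quadratic error index rather than a VLSI network's; step sizes, the target vector, and the error function are our own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.array([1.0, -2.0, 0.5])

def error_index(p):
    """Toy quadratic error index standing in for the chip's measured error."""
    return np.sum((p - target)**2)

p = np.zeros(3)                   # "network parameters"
a, c = 0.1, 1e-2                  # learning rate, perturbation size
for _ in range(2000):
    delta = rng.choice([-1.0, 1.0], size=p.size)   # random probe direction
    # Two-sided measurement of the error along the probe direction.
    g = (error_index(p + c * delta) - error_index(p - c * delta)) / (2 * c)
    p -= a * g * delta            # descend along the probed direction

print(np.round(p, 2))             # converges to the minimizer
```

Only two error measurements per update are needed, regardless of the number of parameters, which is exactly why the approach scales to on-chip learning where an explicit gradient model is unavailable.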

15.
This article proposes a neural network model of supervised learning that employs the biologically motivated constraints of local, online, constructive learning. The model possesses two novel learning mechanisms. The first is a network for learning topographic mixtures. The network's internal category nodes are the mixture components, which learn to encode smooth distributions in the input space by taking advantage of topography in the input feature maps. The second mechanism is an attentional biasing feedback circuit. When the network makes an incorrect output prediction, this feedback circuit modulates the learning rates of the category nodes, by amounts based on the sharpness of their tuning, in order to improve the network's prediction accuracy. The network is evaluated on several standard classification benchmarks and shown to perform well in comparison to other classifiers.

16.
Traditionally, associative memory models are based on point-attractor dynamics, where a memory state corresponds to a stationary point in state space. However, biological neural systems seem to display rich and complex dynamics whose function is still largely unknown. We use a neural network model of the olfactory cortex to investigate the functional significance of such dynamics, in particular with regard to learning and associative memory. The model uses simple network units, corresponding to populations of neurons, connected according to the structure of the olfactory cortex. All essential dynamical properties of this system are reproduced by the model, especially oscillations in two separate frequency bands and aperiodic behavior similar to chaos. By introducing neuromodulatory control of gain and connection weight strengths, the dynamics can change dramatically, in accordance with the effects of acetylcholine, a neuromodulator known to be involved in attention and learning in animals. With computer simulations we show that these effects can be used to improve associative memory performance by reducing recall time and increasing fidelity. The system is able to learn and recall continuously as the input changes, mimicking the real-world situation of an artificial or biological system in a changing environment. © 1995 John Wiley & Sons, Inc.

17.
Neural-network-based robust trajectory tracking control for robots   (Cited: 1; self-citations: 0; by others: 1)
Building on neural network identification, a new robust iterative learning control method is proposed. The method uses a neural network to identify the nonlinear system online, generating the feedforward term of the iterative learning control algorithm, which is combined with real-time feedback control to achieve continuous trajectory tracking. Simulation results show that the method can overcome uncertainty in the robot's dynamic model as well as external disturbances, and that it reaches satisfactory tracking performance with very few learning iterations and network training cycles, exhibiting good robustness and control performance.
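The core iterative-learning-control idea, repeat the same trajectory and feed the previous trial's tracking error forward into the next trial's input, can be shown on a scalar toy plant. This is our own generic ILC sketch (first-order plant, simple P-type update), not the paper's neural-network-based scheme.

```python
import numpy as np

T = 50
t = np.linspace(0, 1, T)
y_ref = np.sin(2 * np.pi * t)                 # desired trajectory

def plant(u):
    """First-order discrete plant whose dynamics the learner never models."""
    y = np.zeros_like(u)
    for k in range(1, len(u)):
        y[k] = 0.5 * y[k - 1] + 0.8 * u[k - 1]
    return y

u = np.zeros(T)
gamma = 0.9                                   # learning gain
for trial in range(30):
    e = y_ref - plant(u)
    u[:-1] += gamma * e[1:]                   # feed last trial's error forward

err = np.abs(y_ref - plant(u)).max()
print(err)                                    # tracking error shrinks over trials
```

Because the same trajectory repeats from trial to trial, the repeatable part of the model error and disturbance is learned away, which is the property the paper combines with online neural network identification and feedback.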

18.
Computer simulation of strange attractors in neural networks   (Cited: 5; self-citations: 0; by others: 5)
To simulate chaotic phenomena in neural networks, phase-space reconstruction techniques are described, and methods for computing a system's largest Lyapunov exponent and correlation dimension from a one-dimensional observable are introduced. Using the Lyapunov exponent as a criterion, strange attractors of a three-layer feedback neural network are constructed; the motion characteristics of the strange attractors are analyzed and their correlation dimension is computed. The study shows that chaotic neural networks have complex dynamical features and simultaneously exhibit attractors of every kind: not only fixed points, limit cycles, and tori, but also strange attractors.
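When the map is known analytically, the largest Lyapunov exponent is just the long-run average of ln|f'(x)| along an orbit. A minimal check on the logistic map at r = 4, whose exponent is analytically ln 2 (this direct computation stands in for, but is not, the delay-embedding reconstruction the paper uses on observed data):

```python
import math

r, x = 4.0, 0.3
for _ in range(100):                          # discard the transient
    x = r * x * (1 - x)

n, total = 100_000, 0.0
for _ in range(n):
    total += math.log(abs(r * (1 - 2 * x)))   # ln|f'(x)| along the orbit
    x = r * x * (1 - x)

lyap = total / n
print(lyap)                                   # analytic value: ln 2 ≈ 0.6931
```

A positive exponent like this one is the criterion the abstract applies to distinguish strange attractors from fixed points, limit cycles, and tori.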

19.
A curve reconstruction method based on an orthogonal neural network   (Cited: 2; self-citations: 0; by others: 2)
A curve reconstruction method based on an orthogonal neural network is proposed. The orthogonal network has the same structure as a three-layer feedforward network, except that the hidden units use Tchebycheff orthogonal polynomials rather than sigmoidal functions as their activation functions. The new method can reconstruct a smooth curve with high accuracy from relatively few data points. The network is trained with the Givens orthogonal learning algorithm; since this is not an iterative algorithm, learning is fast, there is no problem of selecting initial network parameters, and training avoids becoming trapped in local minima. …
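The effect of such an orthogonal hidden layer can be imitated with a Chebyshev-basis least-squares fit: the "network" is a linear combination of Tchebycheff polynomials whose weights come from a direct orthogonal solve rather than iterative training. Our sketch uses NumPy's Chebyshev class and an example curve of our own choosing; the paper's Givens-rotation solver differs.

```python
import numpy as np

x = np.linspace(-1, 1, 25)                    # sparse sample of the curve
y = np.sin(np.pi * x) + 0.3 * x               # "curve" to reconstruct

# Direct least-squares solve in the Chebyshev basis (no iterations,
# no initial-parameter choice, no local minima).
fit = np.polynomial.Chebyshev.fit(x, y, deg=9)

x_dense = np.linspace(-1, 1, 500)
err = np.max(np.abs(fit(x_dense) - (np.sin(np.pi * x_dense) + 0.3 * x_dense)))
print(err)                                    # small error from only 25 points
```

The fast decay of Chebyshev coefficients for smooth functions is what lets a low-degree expansion reconstruct the curve accurately from few samples, mirroring the abstract's claim.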

20.
Efficient implementation of a neural network-based strategy for the online adaptive control of complex dynamical systems characterized by an interconnection of several subsystems (possibly nonlinear) centers on the rapidity of the convergence of the training scheme used for learning the system dynamics. For illustration, in order to achieve a satisfactory control of a multijointed robotic manipulator during the execution of high speed trajectory tracking tasks, the highly nonlinear and coupled dynamics together with the variations in the parameters necessitate a fast updating of the control actions. For facilitating this requirement, a multilayer neural network structure that includes dynamical nodes in the hidden layer is proposed, and a supervised learning scheme that employs a simple distributed updating rule is used for the online identification and decentralized adaptive control. Important characteristic features of the resulting control scheme are discussed and a quantitative evaluation of its performance in the above illustrative example is given.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号