Similar Articles (20 results)
1.
Attractor networks have been one of the most successful paradigms in neural computation, and have been used as models of computation in the nervous system. Recently, we proposed a paradigm called 'latent attractors' where attractors embedded in a recurrent network via Hebbian learning are used to channel network response to external input rather than becoming manifest themselves. This allows the network to generate context-sensitive internal codes in complex situations. Latent attractors are particularly helpful in explaining computations within the hippocampus, a brain region of fundamental significance for memory and spatial learning. Latent attractor networks are a special case of associative memory networks. The model studied here consists of a two-layer recurrent network with attractors stored in the recurrent connections using a clipped Hebbian learning rule. The firing in both layers is competitive, with K-winners-take-all firing. The number of neurons allowed to fire, K, is smaller than the size of the active set of the stored attractors. The performance of latent attractor networks depends on the number of such attractors that a network can sustain. In this paper, we use signal-to-noise methods developed for standard associative memory networks to carry out a theoretical and computational analysis of the capacity and dynamics of latent attractor networks. This is an important first step in making latent attractors a viable tool in the repertoire of neural computation. The method developed here leads to numerical estimates of the capacity limits and dynamics of latent attractor networks, and it represents a general approach to analysing standard associative memory networks with competitive firing. The theoretical analysis is based on estimates of the dendritic sum distributions using a Gaussian approximation. Because of the competitive firing property, the capacity results are estimated only numerically, by iteratively computing the probability of erroneous firings. The analysis covers two cases: the simple-case analysis, which accounts for the correlations between weights due to shared patterns, and the detailed-case analysis, which also includes the temporal correlations between the network's present and previous states. The latter case better predicts the dynamics of the network state for non-zero initial spurious firing. The theoretical analysis also shows the influence of the main parameters of the model on the storage capacity.
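As a rough illustration of the two ingredients named above, the sketch below implements a clipped (binary) Hebbian storage rule and K-winners-take-all competitive firing in plain NumPy. The network sizes, sparsity level, and helper names (clipped_hebbian_store, k_winners_take_all) are illustrative assumptions, not the authors' implementation or parameters.

```python
import numpy as np

def clipped_hebbian_store(patterns):
    """Clipped Hebbian rule: a weight is 1 if any stored pattern
    co-activates the two units, 0 otherwise (no self-connections)."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:                      # p is a binary {0,1} vector
        W = np.maximum(W, np.outer(p, p))   # clip accumulated outer products at 1
    np.fill_diagonal(W, 0)
    return W

def k_winners_take_all(dendritic_sums, k):
    """Competitive firing: only the k units with the largest dendritic sums fire."""
    out = np.zeros_like(dendritic_sums)
    out[np.argsort(dendritic_sums)[-k:]] = 1.0
    return out

# Toy run: store sparse random patterns, then recall one from a degraded cue.
rng = np.random.default_rng(0)
n, active, n_patterns, k = 200, 20, 10, 15   # K smaller than the active set, as in the abstract
patterns = np.zeros((n_patterns, n))
for i in range(n_patterns):
    patterns[i, rng.choice(n, active, replace=False)] = 1.0
W = clipped_hebbian_store(patterns)
cue = patterns[0].copy()
cue[rng.choice(np.flatnonzero(cue), 5, replace=False)] = 0.0   # silence 5 of the 20 active units
state = k_winners_take_all(W @ cue, k)
print("active units shared with the stored pattern:", int(state @ patterns[0]))
```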

2.
A class of simplified background neural network models with a large number of neurons is proposed, and continuous attractors of the simplified model are studied. The main results are: (1) when the background inputs are set to zero and the excitatory connections have a Gaussian shape, continuous attractors of the new network are obtained under a certain condition; (2) when the background inputs are nonzero and the excitatory connections are still Gaussian, continuous attractors are obtained under appropriately selected conditions; (3) discussions and examples are given to illustrate the theory developed.

3.
Continuous attractors of a class of recurrent neural networks
Recurrent neural networks (RNNs) may possess continuous attractors, a property that many brain theories have implicated in learning and memory. There is good evidence that continuous stimuli, such as orientation, movement direction, and the spatial location of objects, can be encoded as continuous attractors in neural networks. The dynamical behaviors of continuous attractors are interesting properties of RNNs. This paper studies the continuous attractors of a class of RNNs in which the inhibition among neurons is realized through a kind of subtractive mechanism. It is shown that if the synaptic connections have a Gaussian shape and the other parameters are appropriately selected, the network can exactly realize continuous attractor dynamics. Conditions are derived to guarantee the validity of the selected parameters. Simulations are employed for illustration.
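A minimal sketch of the underlying idea, that Gaussian-shaped recurrent connections can sustain a bump of activity at any position along a continuous stimulus dimension, is given below. It uses a ring of rate neurons with a squared transfer function and divisive global inhibition, a common textbook simplification rather than the subtractive mechanism and conditions of this paper; all parameter values are illustrative.

```python
import numpy as np

def gaussian_ring_weights(n=100, sigma=0.5, amplitude=1.0):
    """Translation-invariant recurrent weights: strength decays with the
    angular distance between the neurons' preferred stimulus values."""
    theta = np.linspace(-np.pi, np.pi, n, endpoint=False)
    d = np.abs(theta[:, None] - theta[None, :])
    d = np.minimum(d, 2 * np.pi - d)                  # wrap-around distance on the ring
    return amplitude * np.exp(-d**2 / (2 * sigma**2)), theta

def relax(W, r, k=0.05, steps=100):
    """Recurrent excitation with divisive global inhibition (a simplified
    stand-in for the subtractive mechanism used in the paper)."""
    for _ in range(steps):
        u = W @ r
        r = u**2 / (1.0 + k * np.sum(u**2))
    return r

W, theta = gaussian_ring_weights()
rng = np.random.default_rng(2)
cue_position = 1.2                                    # any position on the ring works equally well
r0 = np.exp(-(theta - cue_position)**2 / 0.5) + 0.05 * rng.random(len(theta))
bump = relax(W, r0)
print("bump peak at theta =", round(float(theta[np.argmax(bump)]), 2))
```

Because the weights depend only on the distance between preferred values, the sustained bump can sit at any position on the ring, which is what makes the set of stationary states a continuous attractor.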

4.
Chaotic motion and control of neural networks
This paper adopts an associative memory neural network composed of chaotic neurons. Based on this chaotic neural network, its nonlinear dynamical properties, chaotic attractor trajectories, and sensitivity to initial conditions are studied, and dynamic associative memory is realized. When memory is lost because the network input deviates substantially, the pinning feedback method of chaos control for spatiotemporal systems is applied to restore the network's memory. These results are verified by simulation experiments on the dynamic memorization and recovery control of asynchronous (induction) motor faults. The results show that, among current research on chaos control in neural networks, pinning feedback control of spatiotemporal systems is a method worth recommending; chaos control enlarges the fault tolerance of the network and thus improves the practicality of chaotic neural networks, with broad application prospects in engineering tasks such as complex pattern recognition and image processing.
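To make the pinning-feedback idea concrete, the sketch below builds a Hebbian associative memory from Aihara-type chaotic neurons and adds a feedback term that pulls a pinned subset of neurons toward a stored pattern. The neuron model, parameter values, gain, and pinned fraction are illustrative assumptions and this is not the network or the motor-fault application studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
patterns = rng.choice([0.0, 1.0], size=(4, n))               # binary stored patterns
bip = 2 * patterns - 1
W = bip.T @ bip / len(patterns)                               # Hebbian connection weights
np.fill_diagonal(W, 0)

# Aihara-style chaotic neuron parameters (illustrative values)
kf, kr, alpha, a, eps = 0.2, 0.9, 10.0, 2.0, 0.015

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-np.clip(u / eps, -500.0, 500.0)))

def run(target=None, pinned=None, gain=50.0, steps=300):
    """Iterate the chaotic network; if a target pattern and pinned set are given,
    a feedback term pulls the pinned neurons toward the target (pinning control)."""
    eta = np.zeros(n)
    zeta = np.zeros(n)
    x = rng.random(n)
    for _ in range(steps):
        eta = kf * eta + W @ x                                # feedback from other neurons
        zeta = kr * zeta - alpha * x + a                      # refractoriness term
        u = eta + zeta
        if target is not None:
            u[pinned] += gain * (target[pinned] - x[pinned])  # pinning feedback on a subset
        x = sigmoid(u)
    return x

def overlap(x):
    return float(np.mean((x > 0.5) == (patterns[0] > 0.5)))

pinned = rng.choice(n, n // 2, replace=False)                 # control half of the neurons
print("overlap with pattern 0, no control:", overlap(run()))
print("overlap with pattern 0, pinned    :", overlap(run(target=patterns[0], pinned=pinned)))
```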

5.
Different models of attractor networks have been proposed to form cell assemblies. Among them, networks with a fixed synaptic matrix can be distinguished from those including learning dynamics, since the latter adapt the attractor landscape of the lateral connections according to the statistics of the presented stimuli, yielding a more complex behavior. We propose a new learning rule that builds internal representations of input stimuli as attractors of neurons in a recurrent network. The dynamics of activation and synaptic adaptation are analyzed in experiments where representations for different input patterns are formed, focusing on the properties of the model as a memory system. The experimental results are presented along with a survey of different Hebbian rules proposed in the literature for attractor formation. These rules are compared with the help of a new tool, the learning map, in which LTP and LTD, as well as homo- and heterosynaptic competition, can be graphically interpreted.

6.
Training of associative memory neural networks
张承福  赵刚 《自动化学报》1995,21(6):641-648
An optimized training scheme for associative memory neural networks is proposed. It is shown that the basins of attraction of the stored samples can be controlled to some extent by a well-depth parameter, giving the network the best possible fault tolerance. Computations show that the trained network can reach α < 1 (α = M/N, where N is the number of neurons and M the number of stored samples) while retaining good fault tolerance, clearly outperforming common schemes such as the outer-product rule, the orthogonalized outer-product rule, and the pseudo-inverse rule. The symmetry and convergence of the trained network are also discussed.
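The optimized training scheme itself is not spelled out in this abstract, so the sketch below only reproduces the baseline it is compared against: the standard outer-product (Hebbian) rule, with the loading ratio α = M/N swept to show how error-free recall degrades as more samples are stored. The network size, noise level, and update count are illustrative assumptions.

```python
import numpy as np

def outer_product_weights(patterns):
    """Standard outer-product (Hebbian) rule; the paper's optimised training
    scheme is not reproduced here, this is only the baseline it is compared to."""
    N = patterns.shape[1]
    W = patterns.T @ patterns / N
    np.fill_diagonal(W, 0)
    return W

def recall(W, x, steps=20):
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1
    return x

rng = np.random.default_rng(0)
N = 200
for M in (10, 20, 40, 60):                       # loading ratio alpha = M / N
    patterns = rng.choice([-1, 1], size=(M, N))
    W = outer_product_weights(patterns)
    ok = 0
    for p in patterns:
        noisy = p.copy()
        noisy[rng.choice(N, N // 10, replace=False)] *= -1   # flip 10% of the bits
        ok += int(np.array_equal(recall(W, noisy), p))
    print(f"alpha = {M/N:.2f}: {ok}/{M} stored samples recovered")
```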

7.
This technical note studies activity invariant sets and exponentially stable attractors of linear threshold discrete-time recurrent neural networks. An activity invariant set is an invariant set on which the activity of certain neurons remains unchanged for all time. Conditions are obtained for locating activity invariant sets. Under some conditions, it is shown that an activity invariant set can contain one equilibrium point that exponentially attracts all trajectories starting in the set. Since the attractors are located in activity invariant sets, each attractor has a binary pattern and also carries analog information. Such results provide a new perspective on applying attractor networks to applications such as group winner-take-all, associative memory, etc.
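A two-neuron example with hand-picked weights (an illustrative construction of mine, not one taken from the note) shows what such a set looks like: one neuron stays silent on the whole set while the other converges exponentially to an equilibrium, so the attractor carries a binary on/off pattern plus an analog value.

```python
import numpy as np

# Two linear-threshold neurons: x(t+1) = max(0, W x(t) + h).
W = np.array([[0.5, 0.2],
              [-1.0, 0.5]])
h = np.array([1.0, -0.1])

def step(x):
    return np.maximum(0.0, W @ x + h)

# On the set {(x1, 0) : x1 >= 0} neuron 2's net input is -x1 - 0.1 < 0,
# so its activity stays at zero: the set is activity invariant.
# Within it, x1(t+1) = 0.5 x1(t) + 1 contracts to the equilibrium x1* = 2.
x = np.array([0.3, 0.0])
for _ in range(30):
    x = step(x)
print(x)   # approximately [2.0, 0.0]: a binary (on, off) pattern carrying an analog value
```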

8.
Chaotic dynamics in a recurrent neural network model in which limit cycle memory attractors are stored is investigated by means of numerical methods. In particular, we focus on the quick and sensitive response characteristics of chaotic memory dynamics to an external input that consists of part of an embedded memory attractor. We have calculated the correlation functions between the firing activities of neurons to understand the dynamical mechanisms of rapid responses. The results of the latter calculation show that quite strong correlations arise very quickly, between almost all neurons, within 1 to 2 updating steps after applying a partial input. They suggest that the existence of dynamical correlations, in other words transient correlations in chaos, plays a very important role in quick and/or sensitive responses.

9.
Miller P. Neural Computation, 2006, 18(6): 1268-1317
Attractor networks are likely to underlie working memory and integrator circuits in the brain. It is unknown whether continuous quantities are stored in an analog manner or discretized and stored in a set of discrete attractors. In order to investigate the important issue of how to differentiate the two systems, here we compare the neuronal spiking activity that arises from a continuous (line) attractor with that from a series of discrete attractors. Stochastic fluctuations cause the position of the system along its continuous attractor to vary as a random walk, whereas in a discrete attractor, noise causes spontaneous transitions to occur between discrete states at random intervals. We calculate the statistics of spike trains of neurons firing as a Poisson process with rates that vary according to the underlying attractor network. Since individual neurons fire spikes probabilistically and since the state of the network as a whole drifts randomly, the spike trains of individual neurons follow a doubly stochastic (Poisson) point process. We compare the series of spike trains from the two systems using the autocorrelation function, Fano factor, and interspike interval (ISI) distribution. Although the variation in rate can be dramatically different, especially for short time intervals, surprisingly both the autocorrelation functions and Fano factors are identical, given appropriate scaling of the noise terms. Since the range of firing rates is limited in neurons, we also investigate systems for which the variation in rate is bounded either by rigid limits or by leak toward a single attractor state, such as the Ornstein-Uhlenbeck process. In these cases, the time dependence of the variance in rate can be different between discrete and continuous systems, so that in principle, these processes can be distinguished using second-order spike statistics.
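The sketch below mimics this setup at a toy level: trial-by-trial spike counts are drawn from a Poisson process whose rate either diffuses as a random walk (continuous attractor) or jumps between discrete levels (discrete attractors), and the spike-count Fano factor is estimated over different counting windows. The dwell time, noise scale, rate levels, and the very simple counting model are all illustrative assumptions, not the paper's calculations.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.001                                        # time step in seconds

def count_spikes(rate, window):
    """Poisson spike count in [0, window] for one trial's rate path (doubly stochastic)."""
    w = int(window / dt)
    return rng.poisson(np.sum(rate[:w]) * dt)

def random_walk_rate(steps, base=20.0, sigma=5.0):
    """Continuous attractor: the rate diffuses as a random walk (clipped at 0 Hz)."""
    r = base + np.cumsum(sigma * np.sqrt(dt) * rng.normal(size=steps))
    return np.clip(r, 0.0, None)

def jump_rate(steps, levels=(10.0, 20.0, 30.0), dwell=2.0, start=1):
    """Discrete attractors: the rate jumps between fixed levels at random times."""
    levels = np.asarray(levels)
    state = np.empty(steps, dtype=int)
    state[0] = start
    for t in range(1, steps):
        state[t] = rng.integers(len(levels)) if rng.random() < dt / dwell else state[t - 1]
    return levels[state]

def fano(make_rate, window, trials=500):
    """Fano factor across trials; each trial has a fresh rate path and fresh spikes."""
    counts = np.array([count_spikes(make_rate(int(window / dt)), window) for _ in range(trials)])
    return counts.var() / counts.mean()

for window in (0.1, 1.0, 5.0):
    print(f"window {window:>4} s   Fano (random walk) = {fano(random_walk_rate, window):.2f}   "
          f"Fano (jumps) = {fano(jump_rate, window):.2f}")
```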

10.
In this paper, coupled chaotic oscillators are used as individual chaotic neurons to construct a chaotic neural network model, and an improved Hebbian algorithm is used to design the connection weights of the network. On this basis, dynamic associative memory is realized with the chaotic neural network, and the model is applied to diagnosing inter-turn short-circuit faults in generator stator windings. The results show that this method aids the memorization of fault patterns…

11.
An optoelectronic spiking neuron based on a bispin device is described. The neuron has separate optical inputs for excitatory and inhibitory signals, which are represented by pulses of a single polarity. Experimental data are given that demonstrate the similarity of the output pulse shapes and the set of functions of the suggested neuron to those of a biological neuron. An example of a hardware implementation of an optoelectronic pulsed neural network (PNN) based on the proposed neurons is described. The main elements of the neural network are a line of pulsed neurons and a connection array, part of which is made as a spatial light modulator (SLM) with memory. Using the SLM allows the connection weights to be modified during the learning process of the network. It is possible to create adaptive (capable of additional learning and relearning) optoelectronic PNNs with about 2000 neurons.

12.
Simultaneous sparse approximation is a generalization of standard sparse approximation for representing a set of signals with a common sparsity model. The distributed compressive sensing (DCS) framework has used simultaneous sparse approximation to generalize compressive sensing to multiple signals: DCS finds the sparse representation of multiple correlated signals from compressive measurements using the common + innovation signal model. However, DCS is limited for the joint recovery of a large number of signals, since it requires large memory and computational time. In this paper, we propose a new hierarchical algorithm that implements the joint sparse recovery part of DCS more efficiently. The proposed approach is based on partitioning the input set and hierarchically solving for the sparse common component across these partitions. Numerical evaluation shows that the proposed method reduces computational time compared with DCS at the cost of an increase in reconstruction error. The algorithm is evaluated for two different applications. In the first, the proposed method is applied to the video background extraction problem, where the background corresponds to the common sparse activity across frames. In the second, a common network structure is extracted from dynamic functional brain connectivity networks.
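For intuition only, the sketch below recovers a support shared by a group of signals with simultaneous orthogonal matching pursuit (SOMP), then, in the spirit of partition-based processing, splits the signals into partitions, runs SOMP per partition, and intersects the recovered supports. This is a generic stand-in for joint sparse recovery, not the paper's hierarchical algorithm, and it omits the innovation components; the matrix sizes, sparsity, and partition count are illustrative assumptions.

```python
import numpy as np

def somp(Phi, Y, k):
    """Simultaneous OMP: find one support of size k shared by all columns of Y.
    Phi is the m x n sensing matrix, Y the m x L measurement matrix."""
    residual = Y.copy()
    support = []
    for _ in range(k):
        scores = np.linalg.norm(Phi.T @ residual, axis=1)   # correlation pooled over signals
        scores[support] = -np.inf
        support.append(int(np.argmax(scores)))
        A = Phi[:, support]
        coeffs, *_ = np.linalg.lstsq(A, Y, rcond=None)
        residual = Y - A @ coeffs
    X = np.zeros((Phi.shape[1], Y.shape[1]))
    X[support, :] = coeffs
    return X, sorted(support)

# Partitioned use: recover each partition of signals jointly, then take the atoms
# found in every partition as the estimate of the common support.
rng = np.random.default_rng(0)
m, n, L, k = 40, 128, 12, 5
common = rng.choice(n, k, replace=False)
X_true = np.zeros((n, L))
X_true[common, :] = rng.normal(size=(k, L))        # common sparsity model, no innovations
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
Y = Phi @ X_true

supports = []
for part in np.array_split(np.arange(L), 3):
    _, s = somp(Phi, Y[:, part], k)
    supports.append(set(s))
print("true common support     :", sorted(common.tolist()))
print("estimated common support:", sorted(set.intersection(*supports)))
```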

13.
This paper considers the encoding of structured sets into Hopfield associative memories. A structured set is a set of vectors with equal Hamming distance h from one another, and its centroid is an external vector that has distance h/2 from every vector of the set. Structured sets having centroids are not infrequent. When such a set is encoded into a noiseless Hopfield associative memory using a bipolar outer-product connection matrix, and the network operates with synchronous neuronal update, the memory of all encoded vectors is annihilated even for sets with as few as three vectors in dimension n>5 (four for n=5). In such self-annihilating structured sets, the centroid emerges as a stable attractor; we call it an alien attractor. For canonical structured sets, self-annihilation takes place only if h …
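A small self-annihilating example can be built by hand; this construction, with n = 9 and h = 2, is my own illustration rather than one of the paper's examples. Each stored vector flips one distinct bit of an all-ones centroid, so the pairwise Hamming distance is h = 2 and the centroid lies at distance h/2 = 1 from each member. Under synchronous updates, every stored vector is mapped to the centroid, which is itself a fixed point, the alien attractor.

```python
import numpy as np

n = 9
centroid = np.ones(n)                          # the external "centroid" vector
patterns = np.tile(centroid, (3, 1))
for i in range(3):
    patterns[i, i] = -1                        # pairwise distance h = 2, distance 1 to the centroid

W = patterns.T @ patterns                      # bipolar outer-product connection matrix
np.fill_diagonal(W, 0)                         # noiseless (no self-connections)

def sync_update(x, steps=5):
    for _ in range(steps):
        x = np.sign(W @ x)
    return x

for p in patterns:
    print("stored vector converges to the centroid:",
          bool(np.array_equal(sync_update(p.copy()), centroid)))
print("centroid is a fixed point:", bool(np.array_equal(np.sign(W @ centroid), centroid)))
```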

14.
This paper presents a new unsupervised attractor neural network which, contrary to optimal linear associative memory models, is able to develop nonbipolar attractors as well as bipolar attractors. Moreover, the model develops fewer spurious attractors and has better recall performance under random noise than any other Hopfield-type neural network. These performances are obtained by a simple Hebbian/anti-Hebbian online learning rule that directly incorporates feedback from a specific nonlinear transmission rule. Several computer simulations show the model's distinguishing properties.

15.
This letter aims at studying the impact of iterative Hebbian learning algorithms on the recurrent neural network's underlying dynamics. First, an iterative supervised learning algorithm is discussed. An essential improvement of this algorithm consists of indexing the attractor information items by means of external stimuli rather than by using only initial conditions, as Hopfield originally proposed. Modifying the stimuli mainly results in a change of the entire internal dynamics, leading to an enlargement of the set of attractors and potential memory bags. The impact of the learning on the network's dynamics is the following: the more information to be stored as limit cycle attractors of the neural network, the more chaos prevails as the background dynamical regime of the network. In fact, the background chaos spreads widely and adopts a very unstructured shape similar to white noise. Next, we introduce a new form of supervised learning that is more plausible from a biological point of view: the network has to learn to react to an external stimulus by cycling through a sequence that is no longer specified a priori. Based on its spontaneous dynamics, the network decides "on its own" the dynamical patterns to be associated with the stimuli. Compared with classical supervised learning, huge enhancements in storage capacity and computational cost have been observed. Moreover, this new form of supervised learning, by being more "respectful" of the network intrinsic dynamics, maintains much more structure in the obtained chaos. It is still possible to observe the traces of the learned attractors in the chaotic regime. This complex but still very informative regime is referred to as "frustrated chaos."

16.
This paper discusses a model reference adaptive (MRAC) position/force controller using the proposed neural networks for two co-operating planar robots. The proposed neural network is a recurrent hybrid network. Recurrent networks have feedback connections and thus an inherent memory for dynamics, which makes them suitable for representing dynamic systems. A feature of the networks adopted is their hybrid hidden layer, which includes both linear and nonlinear neurons. Results for the case of a single robot under position control alone are also presented for comparison. The results show the superior ability of the proposed neural-network-based model reference adaptive control scheme to adapt to changes in the dynamic parameters of the robots.

17.
In a previous paper, the self-trapping network (STN) was introduced as more biologically realistic than attractor neural networks (ANNs) based on the Ising model. This paper extends the previous analysis of a one-dimensional (1-D) STN storing a single memory to a model that stores multiple memories and that possesses generalized sparse connectivity. The energy, Lyapunov function, and partition function derived for the 1-D model are generalized to the case of an attractor network with only near-neighbor synapses, coupled to a system that computes memory overlaps. Simulations reveal that 1) the STN dramatically reduces intra-ANN connectivity without severely affecting the size of basins of attraction, with fast self-trapping able to sustain attractors even in the absence of intra-ANN synapses; 2) the basins of attraction can be controlled by a single free parameter, providing natural attention-like effects; 3) the same parameter determines the memory capacity of the network, and the latter is much less dependent than a standard ANN on the noise level of the system; 4) the STN serves as a useful memory for some correlated memory patterns for which the standard ANN totally fails; 5) the STN can store a large number of sparse patterns; and 6) a Monte Carlo procedure, a competitive neural network, and binary neurons with thresholds can be used to induce self-trapping.

18.
The brain is not a huge fixed neural network, but a dynamic, changing neural network that continuously adapts to meet the demands of communication and computational needs. In classical neural network approaches, particularly associative memory models, synapses are only adjusted during the training phase; after this phase, synapses are no longer adjusted. In this paper we describe a new dynamical model where the synapses of the associative memory can be adjusted even after the training phase, in response to an input stimulus. We provide some propositions that guarantee perfect and robust recall of the fundamental set of associations. In addition, we describe the behavior of the proposed associative model under noisy versions of the patterns. Finally, we present some experiments that show the accuracy of the proposed model.

19.
To address the large number of parameters of existing 3D-convolution methods for dynamic gesture recognition and the difficulty of extracting spatiotemporal features from long image sequences with 2D convolution, a dynamic gesture recognition method is proposed that combines a 2D convolutional neural network with a long short-term memory (LSTM) network to extract spatiotemporal features. Spatial features are first extracted with the 2D convolutional neural network, and the LSTM network then correlates the image sequence over time to extract information along the temporal dimension. To verify the effectiveness of the algorithm, a self-collected …
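A minimal PyTorch sketch of the described 2D-CNN + LSTM pipeline is given below; the layer sizes, frame count, and class count are illustrative assumptions, not the architecture or dataset of the paper.

```python
import torch
import torch.nn as nn

class CNNLSTMGesture(nn.Module):
    """Per-frame 2D CNN features fed to an LSTM over the frame sequence."""
    def __init__(self, num_classes=10, feat_dim=128, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, clips):                    # clips: (batch, time, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1))    # spatial features, one vector per frame
        feats = feats.view(b, t, -1)
        _, (h, _) = self.lstm(feats)             # temporal aggregation over the sequence
        return self.fc(h[-1])                    # classify from the last hidden state

model = CNNLSTMGesture()
logits = model(torch.randn(2, 16, 3, 64, 64))    # 2 clips of 16 RGB frames
print(logits.shape)                              # torch.Size([2, 10])
```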

20.
The local identical index (LII) associative memory (AM) proposed by the authors in a previous paper is a one-shot feedforward structure designed to exhibit no spurious attractors. In this paper we relax the latter design constraint in exchange for enlarged basins of attraction, and we develop a family of modified LII AM networks that exhibit improved performance, particularly in memorizing highly correlated patterns. The new algorithm meets the requirement of no spurious attractors only in a local sense. Finally, we show that the modified LII family of networks can accommodate composite patterns of any size by storing (memorizing) only the basic (prime) prototype patterns. The latter property translates to low learning complexity and a simple network structure with significant memory savings. Simulation studies and comparisons illustrate and support the theoretical developments.

