Similar Literature (20 results)
1.
Different models of attractor networks have been proposed to form cell assemblies. Among them, networks with a fixed synaptic matrix can be distinguished from those that include learning dynamics, since the latter adapt the attractor landscape of the lateral connections to the statistics of the presented stimuli, yielding more complex behavior. We propose a new learning rule that builds internal representations of input stimuli as attractors of neurons in a recurrent network. The dynamics of activation and synaptic adaptation are analyzed in experiments where representations for different input patterns are formed, focusing on the properties of the model as a memory system. The experimental results are presented along with a survey of different Hebbian rules proposed in the literature for attractor formation. These rules are compared with the help of a new tool, the learning map, on which LTP and LTD, as well as homo- and heterosynaptic competition, can be interpreted graphically.
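To make the ingredients concrete, here is a minimal numpy sketch of a generic Hebbian rule with separate LTP and LTD terms shaping attractors in a recurrent network. The threshold form of the rule, the logistic transfer function, and all parameter values are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

rng = np.random.default_rng(0)
N, eta = 100, 0.05          # network size, learning rate (assumed values)

W = np.zeros((N, N))        # recurrent (lateral) weights
x = rng.random(N)           # unit activities in [0, 1]

def step(x, W, stim):
    """One activation update: recurrent input plus external stimulus,
    squashed to [0, 1] by a logistic transfer function."""
    return 1.0 / (1.0 + np.exp(-(W @ x + stim)))

def hebb_ltp_ltd(W, x, eta, theta=0.5):
    """Generic Hebbian update with LTP and LTD: a synapse is potentiated
    when pre- and postsynaptic activity are both above threshold, and
    depressed when the postsynaptic unit is active but the presynaptic
    unit is not (a heterosynaptic-depression term)."""
    ltp = np.outer(x > theta, x > theta).astype(float)
    ltd = np.outer(x > theta, x <= theta).astype(float)
    W += eta * (ltp - ltd)
    np.fill_diagonal(W, 0.0)         # no self-connections
    return W

stim = (rng.random(N) > 0.8).astype(float)    # one sparse input pattern
for _ in range(50):                           # repeated presentation
    x = step(x, W, 3.0 * stim)
    W = hebb_ltp_ltd(W, x, eta)
```

A learning map in the paper's sense would plot, for each (pre, post) activity pair, which of these LTP/LTD regions the update falls into.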

2.
This paper presents a new unsupervised attractor neural network that, contrary to optimal linear associative memory models, is able to develop nonbipolar as well as bipolar attractors. Moreover, the model develops fewer spurious attractors and has better recall performance under random noise than other Hopfield-type neural networks. This performance is obtained with a simple Hebbian/anti-Hebbian online learning rule that directly incorporates feedback from a specific nonlinear transmission rule. Several computer simulations illustrate the model's distinguishing properties.
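The following numpy sketch shows one way such a rule can be structured: a Hebbian term on the input pattern and an anti-Hebbian term on the network's own output after a few feedback passes through a nonlinear transmission rule. The cubic saturating transmission function, the iteration count, and the parameter values are assumptions for illustration; the paper's exact functions may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
N, eta, delta = 50, 0.01, 0.4    # size, learning rate, nonlinearity (assumed)

W = np.zeros((N, N))

def transmit(x, W, delta):
    """Nonlinear transmission: a cubic saturating rule (one common choice)."""
    a = W @ x
    return np.clip((1 + delta) * a - delta * a**3, -1.0, 1.0)

def learn(W, x, eta, delta, n_iter=3):
    """Hebbian term on the input, anti-Hebbian term on the fed-back output."""
    y = x.copy()
    for _ in range(n_iter):
        y = transmit(y, W, delta)
    return W + eta * (np.outer(x, x) - np.outer(y, y))

pattern = np.sign(rng.standard_normal(N))    # a bipolar training pattern
for _ in range(200):
    W = learn(W, pattern, eta, delta)
```

Because the anti-Hebbian term is computed on the network's actual output, learning stops changing W once the output reproduces the input, which is the self-stabilizing behavior that limits spurious attractors.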

3.
Amit Y, Mascaro M. Neural Computation, 2001, 13(6): 1415-1442.
We describe a system of thousands of binary perceptrons with coarse oriented edges as input that is able to recognize shapes, even in a context with hundreds of classes. The perceptrons have randomized feedforward connections from the input layer and form a recurrent network among themselves. Each class is represented by a prelearned attractor (serving as an associative hook) in the recurrent net, corresponding to a randomly selected subpopulation of the perceptrons. In training, the attractor of the correct class is first activated among the perceptrons; then the visual stimulus is presented at the input layer. The feedforward connections are modified using field-dependent Hebbian learning with positive synapses, which we show to be stable with respect to large variations in feature statistics and coding levels, and which allows the same threshold to be used on all perceptrons. Recognition is based on the visual stimuli alone: these activate the recurrent network, which is then driven by the dynamics to a sustained attractor state concentrated in the correct class subset, providing a form of working memory. We believe this architecture is more transparent than standard feedforward two-layer networks and has stronger biological analogies.
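As a rough illustration of what "field-dependent Hebbian learning with positive synapses" can mean, the sketch below updates a synapse only when the postsynaptic field is inside a learning margin around the threshold, and keeps all weights non-negative. The margin form, the wiring density, and all names here are assumptions; the paper's precise rule should be consulted for details.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_rec, eta = 200, 100, 0.02

C = rng.random((n_rec, n_in)) < 0.1           # random feedforward wiring mask
W = np.where(C, 0.5, 0.0)                     # positive synapses only

def field_dependent_hebb(W, x_in, y_class, theta=1.0, margin=0.5):
    """Potentiate synapses onto class neurons whose field is still below
    theta + margin; depress synapses onto non-class neurons whose field
    exceeds theta - margin. Weights are clipped to stay non-negative."""
    h = W @ x_in                               # local (postsynaptic) fields
    pot = y_class & (h < theta + margin)       # under-driven class units
    dep = (~y_class) & (h > theta - margin)    # over-driven non-class units
    W = W + eta * np.outer(pot, x_in) - eta * np.outer(dep, x_in)
    return np.clip(W, 0.0, None) * C           # keep wiring mask and w >= 0

x = (rng.random(n_in) > 0.7).astype(float)     # coarse edge-feature input
y = rng.random(n_rec) < 0.1                    # attractor subpopulation
W = field_dependent_hebb(W, x, y)
```

The field dependence is what makes a single global threshold workable: units whose field is already adequate simply stop learning.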

4.
This letter studies the impact of iterative Hebbian learning algorithms on a recurrent neural network's underlying dynamics. First, an iterative supervised learning algorithm is discussed. An essential improvement of this algorithm consists of indexing the attractor information items by means of external stimuli rather than only by initial conditions, as Hopfield originally proposed. Modifying the stimuli mainly changes the entire internal dynamics, enlarging the set of attractors and potential memory bags. The impact of learning on the network's dynamics is the following: the more information stored as limit cycle attractors of the neural network, the more chaos prevails as the background dynamical regime of the network. In fact, the background chaos spreads widely and adopts a very unstructured shape similar to white noise. Next, we introduce a new form of supervised learning that is more plausible from a biological point of view: the network has to learn to react to an external stimulus by cycling through a sequence that is no longer specified a priori. Based on its spontaneous dynamics, the network decides "on its own" which dynamical patterns to associate with the stimuli. Compared with classical supervised learning, large improvements in storage capacity and computational cost are observed. Moreover, this new form of supervised learning, by being more "respectful" of the network's intrinsic dynamics, maintains much more structure in the resulting chaos: traces of the learned attractors can still be observed in the chaotic regime. This complex but still very informative regime is referred to as "frustrated chaos."

5.
Hebbian heteroassociative learning is inherently asymmetric. Storing a forward association, from item A to item B, enables recall of B (given A) but does not permit recall of A (given B). Recurrent networks can solve this problem by associating A to B and B back to A. In these recurrent networks, the forward and backward associations can be differentially weighted to account for asymmetries in recall performance. In the special case of equal-strength forward and backward weights, these recurrent networks can be modeled as a single autoassociative network in which A and B are two parts of a single stored pattern. We analyze a general recurrent neural network model of associative memory and examine its ability to fit a rich set of experimental data on human associative learning. The model fits the data significantly better when the forward and backward storage strengths are highly correlated than when they are less correlated. This network-based analysis supports the view that associations between symbolic elements are better conceptualized as a blending of two ideas into a single unit than as separately modifiable forward and backward associations linking representations in memory.
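The equal-strength special case is easy to demonstrate: store the concatenated pattern A‖B in a single Hopfield-style autoassociative net, and recall is automatically symmetric in both directions. This is a minimal sketch under standard Hopfield assumptions, not the paper's full model.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 32                                   # size of each item

A = np.sign(rng.standard_normal(n))      # item A
B = np.sign(rng.standard_normal(n))      # item B
p = np.concatenate([A, B])               # one stored pattern: A||B

W = np.outer(p, p) / len(p)              # Hebbian autoassociative weights
np.fill_diagonal(W, 0.0)

def recall(cue, W, n_iter=10):
    """Synchronous Hopfield-style recall from a partial cue."""
    x = cue.copy()
    for _ in range(n_iter):
        x = np.sign(W @ x + 1e-12)       # tiny bias avoids sign(0)
    return x

cue = np.concatenate([A, np.zeros(n)])   # given A, recover B
out = recall(cue, W)
print(np.array_equal(out[n:], B))        # forward recall works...
print(np.array_equal(recall(np.concatenate([np.zeros(n), B]), W)[:n], A))
```

Because the weight matrix is symmetric, the forward (A to B) and backward (B to A) strengths are identical here; the paper's more general model lets the two directions be weighted differently.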

6.
In this article we revisit the classical neuroscience paradigm of Hebbian learning. We find that it is difficult to achieve effective associative memory storage by Hebbian synaptic learning, since it requires network-level information at the synaptic level or sparse coding. Effective learning can nevertheless be achieved, even with nonsparse patterns, by a neuronal process that maintains a zero sum of the incoming synaptic efficacies. This weight correction improves the memory capacity of associative networks from an essentially bounded capacity to one that scales linearly with network size. It also enables the effective storage of patterns with multiple levels of activity within a single network. Such neuronal weight correction can be carried out by activity-dependent homeostasis of the neuron's synaptic efficacies, which has recently been observed in cortical tissue. Our findings thus suggest that Hebbian synaptic learning should be accompanied by continuous remodeling of neuronally driven regulatory processes in the brain.
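The zero-sum correction itself is a one-line operation: after Hebbian storage, shift each neuron's incoming efficacies so they sum to zero. The sketch below applies it to a plain Hopfield-style weight matrix; the storage rule and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
N, M = 100, 20                            # neurons, stored patterns

patterns = np.sign(rng.standard_normal((M, N)))
W = patterns.T @ patterns / N             # plain Hebbian storage
np.fill_diagonal(W, 0.0)

def zero_sum_correction(W):
    """Neuronal weight correction: shift each neuron's incoming
    (off-diagonal) efficacies so that they sum to zero."""
    n = W.shape[0]
    off_mean = (W.sum(axis=1) - np.diag(W)) / (n - 1)
    W = W - off_mean[:, None]
    np.fill_diagonal(W, 0.0)
    return W

W = zero_sum_correction(W)
print(np.abs(W.sum(axis=1)).max())        # incoming sums are now ~0
```

The correction is purely local to each neuron (it needs only that neuron's own incoming weights), which is what makes an activity-dependent homeostatic implementation plausible.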

7.
This technical note studies the activity invariant sets and exponentially stable attractors of linear threshold discrete-time recurrent neural networks. An activity invariant set is an invariant set on which the activity of some neurons remains invariant for all time. Conditions are obtained for locating activity invariant sets. Under some conditions, an activity invariant set is shown to contain one equilibrium point that attracts exponentially all trajectories starting in the set. Since the attractors are located in activity invariant sets, each attractor has a binary pattern and also carries analog information. These results provide a new perspective on applying attractor networks to problems such as group winner-take-all and associative memory.
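The basic update and the binary-plus-analog nature of its attractors are easy to see in code. Below is a minimal sketch of a linear threshold (LT) discrete-time network; the contraction condition used here (infinity-norm of W below 1) is a simple sufficient condition for convergence chosen for the demo, deliberately stronger than the note's milder conditions.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 10

W = rng.random((N, N))                     # random recurrent weights
np.fill_diagonal(W, 0.0)
W *= 0.9 / np.abs(W).sum(axis=1).max()     # ||W||_inf < 1 => contraction
h = rng.random(N)                          # external input

x = rng.random(N)
for _ in range(200):
    x = np.maximum(0.0, W @ x + h)         # linear-threshold (LT) update

print(x > 0)    # binary pattern: which neurons remain active at the attractor
print(x)        # analog information carried by the same attractor
```

The set of neurons that stay active is the invariant "binary pattern," while their steady-state rates carry graded information, the property the note exploits for group winner-take-all and associative memory.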

8.
Dynamics analysis and analog associative memory of networks with LT neurons
The additive recurrent network structure of linear threshold neurons represents a class of biologically motivated models in which nonsaturating transfer functions are necessary for representing neuronal activities, such as those of cortical neurons. This paper extends existing results on the dynamics of such linear threshold networks by establishing new and milder conditions for boundedness and asymptotic stability, while allowing for multistability. In particular, boundedness is shown not to require the connection matrix to be symmetric or to possess positive off-diagonal entries, and the conditions give an explicit way to design and analyze such networks. Based on the established theory, such networks can alternatively be studied through permitted and forbidden sets. One application of the linear threshold (LT) network is analog associative memory, for which a simple design method is suggested in this paper. The proposed method resembles a generalized Hebbian approach, but with additional network parameters for normalization, excitation, and inhibition, on both a global and a local scale. The computational abilities of the network depend on its nonlinear dynamics, which in turn rely on the sparsity of the memory vectors.

9.
Continuous attractors of a class of recurrent neural networks
Recurrent neural networks (RNNs) may possess continuous attractors, a property that many brain theories have implicated in learning and memory. There is good evidence that continuous stimuli, such as orientation, direction of motion, and the spatial location of objects, can be encoded as continuous attractors in neural networks. The dynamical behaviors of continuous attractors are interesting properties of RNNs. This paper studies the continuous attractors of a class of RNNs in which inhibition among neurons is realized through a subtractive mechanism. It is shown that if the synaptic connections have a Gaussian shape and the other parameters are appropriately selected, the network can exactly realize continuous attractor dynamics. Conditions are derived to guarantee the validity of the selected parameters, and simulations are provided for illustration.
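The ingredients (Gaussian-shaped connections and subtractive inhibition) can be sketched directly in numpy. The parameter values below are illustrative and untuned; the paper derives the exact conditions under which the bump family is invariant, and without them the bump's amplitude is not pinned down (these dynamics are positively homogeneous), so the demo inspects only the bump's shape and position.

```python
import numpy as np

N = 128                                      # neurons on a ring of stimuli
z = np.linspace(-np.pi, np.pi, N, endpoint=False)
a, sigma, k = 0.1, 0.5, 0.05                 # illustrative, untuned parameters

d = np.abs(z[:, None] - z[None, :])
d = np.minimum(d, 2 * np.pi - d)             # periodic distance on the ring
J = a * np.exp(-d**2 / (2 * sigma**2))       # Gaussian-shaped connections

def step(r, dt=0.1):
    """Rate dynamics with inhibition realized by a subtractive mechanism."""
    drdt = -r + np.maximum(0.0, J @ r - k * r.sum())
    return r + dt * drdt

r = np.exp(-z**2 / (2 * sigma**2))           # activity bump centred at 0
for _ in range(300):
    r = step(r)

print(np.argmax(r))                          # bump stays centred (index ~N/2)
```

Because J depends only on the distance between neurons, a bump centred anywhere on the ring evolves the same way, which is the source of the continuum of attractors.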

10.
We study the effect of competition between short-term synaptic depression and facilitation on the dynamic properties of attractor neural networks, using Monte Carlo simulation and a mean-field analysis. Depending on the balance of depression, facilitation, and the underlying noise, the network displays different behaviors, including associative memory and switching of activity between different attractors. We conclude that synaptic facilitation enhances attractor instability in a way that (1) increases the system's adaptability to external stimuli, in agreement with experiments, and (2) favors the retrieval of information with fewer errors during short time intervals.
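The depression/facilitation competition is commonly formalized with two per-synapse variables: available resources x (depleted by each spike) and utilization u (boosted by each spike). This sketch uses one common convention of the Tsodyks-Markram-style model as a stand-in; the paper's exact formulation and parameters may differ.

```python
import numpy as np

def stp_efficacy(spike_times, U=0.2, tau_d=200.0, tau_f=600.0):
    """Release efficacy u*x at each spike under competing short-term
    depression (resource variable x) and facilitation (utilization u).
    Times are in ms; parameters are illustrative."""
    u, x, last = U, 1.0, None
    out = []
    for t in spike_times:
        if last is not None:
            dt = t - last
            x = 1.0 - (1.0 - x) * np.exp(-dt / tau_d)   # resources recover
            u = U + (u - U) * np.exp(-dt / tau_f)       # facilitation decays
        u = u + U * (1.0 - u)        # each spike boosts utilization...
        out.append(u * x)            # ...which sets the released fraction
        x = x * (1.0 - u)            # ...and depletes resources
        last = t
    return np.array(out)

# The printed train shows the net outcome of the competition at 20 Hz:
print(stp_efficacy(np.arange(0.0, 500.0, 50.0)))
```

In an attractor network, this activity-dependent weakening of the synapses sustaining the current attractor is precisely what destabilizes it and enables switching.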

11.
Chaotic dynamics in a recurrent neural network model in which limit cycle memory attractors are stored is investigated by numerical methods. In particular, we focus on the quick and sensitive response of chaotic memory dynamics to an external input consisting of part of an embedded memory attractor. To understand the dynamical mechanism of these rapid responses, we calculated the correlation functions between the firing activities of neurons. The results show that quite strong correlations arise very quickly between almost all neurons, within 1-2 updating steps after a partial input is applied. They suggest that the existence of dynamical correlations, in other words transient correlations in chaos, plays a very important role in quick and/or sensitive responses.

12.
Neurons that sustain elevated firing in the absence of stimuli have been found in many neural systems. In graded persistent activity, neurons can sustain firing at many levels, suggesting a widely found type of network dynamics in which networks can relax to any one of a continuum of stationary states. Reproducing these findings in model networks of nonlinear neurons has turned out to be nontrivial. A particularly insightful model has been the "bump attractor," in which a continuous attractor emerges through an underlying symmetry in the network connectivity matrix. This model, however, cannot account for data in which the persistent firing of neurons is a monotonic, rather than bell-shaped, function of a stored variable. Here, we show that the symmetry used in the bump attractor network can be employed to create a whole family of continuous attractor networks, including those with monotonic tuning. Our design is based on tuning the external inputs to networks whose connectivity matrix has Toeplitz symmetry. In particular, we provide a complete analytical solution of a line attractor network with monotonic tuning and show that for many other networks, the numerical tuning of synaptic weights reduces to the computation of a single parameter.
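The key structural property is Toeplitz symmetry: W[i, j] depends only on i - j, so (away from the boundaries) shifting the network state shifts the recurrent input by the same amount. That translation equivariance is what lets one tuned input vector support a continuum of fixed points. A minimal numpy check, with an arbitrary assumed kernel:

```python
import numpy as np

N = 200
d = np.arange(N)[:, None] - np.arange(N)[None, :]
W = 0.05 * np.exp(-np.abs(d) / 15.0)        # Toeplitz: W[i, j] = f(i - j)

x = np.exp(-(np.arange(N) - 60.0)**2 / 50.0)  # a localized state
shifted = np.roll(x, 7)                        # the same state, shifted

# Shifting the state shifts the recurrent input (exact for a circulant
# matrix; true up to small boundary effects for this decaying Toeplitz W):
lhs = np.roll(W @ x, 7)[20:-20]
rhs = (W @ shifted)[20:-20]
print(np.allclose(lhs, rhs, atol=1e-3))
```

The paper's contribution is then to tune the external input (rather than the weights) so that a whole family of such shifted, or monotonically recruited, states become exact fixed points.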

13.
Zemel RS, Mozer MC. Neural Computation, 2001, 13(5): 1045-1064.
Attractor networks, which map an input space to a discrete output space, are useful for pattern completion: cleaning up noisy or missing input features. However, designing a net to have a given set of attractors is notoriously tricky; training procedures are CPU-intensive and often produce spurious attractors and ill-conditioned attractor basins. These difficulties occur because each connection in the network participates in the encoding of multiple attractors. We describe an alternative formulation of attractor networks in which the encoding of knowledge is local, not distributed. Although localist attractor networks have dynamics similar to their distributed counterparts, they are much easier to work with and interpret. We propose a statistical formulation of localist attractor net dynamics, which yields a convergence proof and a mathematical interpretation of the model parameters. We present simulation experiments showing that localist attractor networks yield few spurious attractors and readily exhibit two desirable properties of psychological and neurobiological models: priming (faster convergence to an attractor if the attractor has been recently visited) and gang effects (in which the presence of an attractor enhances the attractor basins of neighboring attractors).
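A statistical reading of localist dynamics can be sketched as soft clustering: each attractor is one local unit, and the state moves toward the responsibility-weighted mean of the attractors while a width parameter anneals. This mirrors the mixture-model interpretation in spirit; the exact update, priors, and annealing schedule here are assumptions, not the paper's equations.

```python
import numpy as np

rng = np.random.default_rng(7)
D, K = 20, 5
attractors = np.sign(rng.standard_normal((K, D)))   # one local unit per attractor
prior = np.full(K, 1.0 / K)                         # e.g., raise to model priming

def settle(y, attractors, prior, n_iter=30, sigma0=2.0, decay=0.9):
    """Move the state to the mean of the attractors weighted by Gaussian
    responsibilities while sigma anneals, narrowing the basins."""
    sigma = sigma0
    for _ in range(n_iter):
        d2 = ((y - attractors) ** 2).sum(axis=1)
        logq = np.log(prior) - d2 / (2 * sigma**2)
        q = np.exp(logq - logq.max())
        q /= q.sum()                         # responsibilities of local units
        y = q @ attractors                   # condensed state
        sigma = max(0.1, decay * sigma)
    return y, q

noisy = attractors[0] + 0.8 * rng.standard_normal(D)
y, q = settle(noisy.copy(), attractors, prior)
print(q.argmax() == 0)                       # settled on the correct attractor
```

Because each attractor is stored in its own unit rather than spread across shared weights, adding or removing an attractor cannot corrupt the others, which is why spurious attractors are rare in this formulation.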

14.
We present a hybrid learning method bridging the fields of recurrent neural networks, unsupervised Hebbian learning, vector quantization, and supervised learning to implement a sophisticated image and feature segmentation architecture. The architecture is based on the competitive layer model (CLM), a dynamic feature binding model applicable to a wide range of perceptual grouping and segmentation problems. A predefined target segmentation can be realized as attractor states of this linear threshold recurrent network if the lateral weights are chosen by Hebbian learning; the weight matrix is given by the correlation matrix of special pattern vectors whose structure depends on the target labeling. Generalization is achieved by applying vector quantization to pairwise feature relations, such as proximity and similarity, defined by external knowledge. We show the successful application of the method to a number of artificial test examples and to a medical image segmentation problem involving fluorescence microscope images of cells.

15.
Siri B, Berry H, Cessac B, Delord B, Quoy M. Neural Computation, 2008, 20(12): 2937-2966.
We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule that includes passive forgetting and different timescales for neuronal activity and learning dynamics. Previous numerical work has reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on neural network evolution. Furthermore, we show that sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.
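The Jacobian-based quantity at the heart of the analysis, the largest Lyapunov exponent, can be estimated numerically by propagating a tangent vector through the per-step Jacobians and renormalizing. The sketch below does this for a standard random tanh network as a stand-in for the paper's model; the gain value and map are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
N, g = 100, 3.0                              # gain g > 1 puts the net in chaos

W = g * rng.standard_normal((N, N)) / np.sqrt(N)

def largest_lyapunov(W, n_steps=2000):
    """Estimate the largest Lyapunov exponent of the map x -> tanh(W x)
    by evolving a tangent vector with the Jacobian J = diag(1 - x^2) W
    and renormalizing at every step."""
    N = W.shape[0]
    x = 0.1 * rng.standard_normal(N)
    v = rng.standard_normal(N)
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(n_steps):
        x = np.tanh(W @ x)
        J = (1.0 - x**2)[:, None] * W        # Jacobian of the update
        v = J @ v
        nv = np.linalg.norm(v)
        lam += np.log(nv)
        v /= nv
    return lam / n_steps

print(largest_lyapunov(W))                   # > 0 in the chaotic regime
```

Tracking this estimate while Hebbian learning modifies W shows the descent toward the "edge of chaos" (exponent near 0), the regime the paper identifies as maximally sensitive to learned patterns.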

16.
A bidirectional heteroassociative memory for binary and grey-level patterns
Typical bidirectional associative memory (BAM) models use an offline, one-shot learning rule, have poor memory storage capacity, are sensitive to noise, and are subject to spurious steady states during recall. Recent work on BAMs has improved network performance with respect to noisy recall and the number of spurious attractors, but at the cost of increased complexity, and in all cases the networks can recall only bipolar stimuli, making them of limited use for grey-level pattern recall. In this paper, we introduce a new bidirectional heteroassociative memory model that uses a simple self-convergent iterative learning rule and a new nonlinear output function. As a result, the model can learn online without being subject to overlearning. Our simulation results show that the new model produces fewer spurious attractors than other popular BAM networks, with comparable tolerance to noise and storage capacity. In addition, the novel output function enables it to learn and recall grey-level patterns bidirectionally.
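For reference, here is the classic one-shot, bipolar BAM that this paper improves on: Hebbian weights built in one pass, and recall that bounces between the two layers until the pair stabilizes. Sizes and the noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
n_x, n_y, M = 30, 20, 3

X = np.sign(rng.standard_normal((M, n_x)))   # stimuli
Y = np.sign(rng.standard_normal((M, n_y)))   # associated responses
W = Y.T @ X / n_x                             # offline, one-shot Hebbian weights

def recall(x, W, n_iter=10):
    """Bidirectional recall: x drives y through W, y drives x back
    through W.T, until the pair stabilizes."""
    for _ in range(n_iter):
        y = np.sign(W @ x + 1e-12)
        x = np.sign(W.T @ y + 1e-12)
    return x, y

flip = rng.random(n_x) < 0.1                  # ~10% of bits corrupted
x_rec, y_rec = recall(X[0] * np.where(flip, -1.0, 1.0), W)
print(np.array_equal(y_rec, Y[0]))
```

The sign nonlinearity is exactly what restricts this classic model to bipolar patterns; the paper replaces it with a nonlinear output function that preserves graded (grey-level) values, trained by an iterative rather than one-shot rule.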

17.
We investigate through theoretical analysis and computer simulations the consequences of unreliable synapses for fast analog computations in networks of spiking neurons, with analog variables encoded by the current firing activities of pools of spiking neurons. Our results suggest a possible functional role, at the network level, for the well-established unreliability of synaptic transmission. We also investigate computations on time series and Hebbian learning in this context of space-rate coding in networks of spiking neurons with unreliable synapses.
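Unreliable transmission is usually modeled as a Bernoulli gate on each spike: a synapse passes a presynaptic spike only with some release probability. The short sketch below shows why a pool-coded analog value survives this: the expected postsynaptic drive is simply scaled by the reliability. Pool size, weight, and release probability are assumed values.

```python
import numpy as np

rng = np.random.default_rng(10)
pool_size, w, p_release = 500, 0.2, 0.3      # p_release: synaptic reliability

pre_rates = rng.random(pool_size)            # analog values as pool activities
spikes = rng.random(pool_size) < pre_rates   # which pool neurons fire now

# Unreliable synapses: each spike is transmitted with probability p_release.
transmitted = spikes & (rng.random(pool_size) < p_release)
current = w * transmitted.sum()

# The analog value stays linearly decodable in expectation; reliability
# only rescales the gain and sets the trial-to-trial variance.
print(current, w * p_release * pre_rates.sum())
```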

18.
Bipolar spectral associative memories
Nonlinear spectral associative memories are proposed as quantized frequency-domain formulations of nonlinear, recurrent associative memories in which volatile network attractors are instantiated by attractor waves. In contrast to conventional associative memories, attractors encoded in the frequency domain by convolution may be viewed as volatile online inputs rather than nonvolatile, offline parameters. Spectral memories hold several advantages over conventional associative memories, including decoder/attractor separability and linear scalability, which make them especially well suited for digital communications. Bit patterns may be transmitted over a noisy channel in a spectral attractor and recovered at the receiver by recurrent spectral decoding. Massive nonlocal connectivity is realized virtually, maintaining high symbol-to-bit ratios while scaling linearly with pattern dimension. For n-bit patterns, autoassociative memories achieve the highest noise immunity, whereas heteroassociative memories offer the added flexibility of achieving various code rates, or degrees of extrinsic redundancy. Owing to their linear scalability, high noise immunity, and use of conventional building blocks, spectral associative memories hold much promise for robust communication systems. Simulations show bit error rates for various degrees of decoding time, computational oversampling, and signal-to-noise ratio.

19.
In this paper, we analyze the model of recurrent kernel associative memory (RKAM) recently proposed by Garcia and Moreno. We show that this model is a kernelization of the recurrent correlation associative memory (RCAM) of Chiueh and Goodman. In particular, using an exponential kernel we obtain a generalization of the well-known exponential correlation associative memory (ECAM), while using a polynomial kernel we obtain a generalization of higher-order Hopfield networks with Hebbian weights. We show that the RKAM can outperform these associative memory models, becoming equivalent to them when a dominance condition is fulfilled by the kernel matrix. To verify the dominance condition, we propose a statistical measure that can easily be computed from the probability distribution of the interpattern Hamming distance or estimated directly from the memory vectors. Below saturation, the RKAM realizes associative memories with a reduced dynamic range compared to the ECAM and a reduced number of synaptic coefficients compared to higher-order Hopfield networks.
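The RCAM update that gets kernelized has a compact form: weight each stored pattern by a function of its overlap with the current state, then threshold the weighted sum. A minimal sketch, with an ECAM-style exponential weighting and a polynomial alternative standing in for the higher-order Hopfield case; the scaling constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)
N, M = 64, 10
P = np.sign(rng.standard_normal((M, N)))      # stored bipolar patterns

def rkam_step(x, P, kernel):
    """Recurrent correlation/kernel associative memory update:
    x <- sign( sum_i kernel(p_i, x) * p_i )."""
    k = kernel(P, x)                           # one weight per stored pattern
    return np.sign(k @ P + 1e-12)

exp_kernel = lambda P, x: np.exp((P @ x) / np.sqrt(N))   # ECAM-style
poly_kernel = lambda P, x: (P @ x / N) ** 3              # higher-order style

x = P[0].copy()
x[:10] *= -1                                   # corrupt 10 of 64 bits
for _ in range(5):
    x = rkam_step(x, P, exp_kernel)
print(np.array_equal(x, P[0]))                 # pattern recovered
```

The dominance condition in the paper essentially asks that, as here, the kernel value for the correct pattern dominate the sum of the others', which is when the kernel memory behaves exactly like its classical counterpart.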

20.
Some neurons encode information about the orientation or position of an animal and can maintain their response properties in the absence of visual input. Examples include head direction cells in rats and primates, place cells in rats, and spatial view cells in primates. 'Continuous attractor' neural networks model these continuous physical spaces using recurrent collateral connections between neurons that reflect the distance between the neurons in the animal's state space (e.g., head direction space). These networks maintain a localized packet of neuronal activity representing the current state of the animal. We show how the synaptic connections in a one-dimensional continuous attractor network (of, for example, head direction cells) could be self-organized by associative learning. We also show how the activity packet could be moved from one location to another by idiothetic (self-motion) inputs, such as vestibular or proprioceptive signals, and how the synaptic connections could self-organize to implement this. The models use 'trace' associative synaptic learning rules that employ a form of temporal average of recent cell activity to associate the firing of rotation cells with the recent change in the head direction representation in the continuous attractor. We also show how a nonlinear neuronal activation function, which could be implemented by NMDA receptors, could contribute to the stability of the activity packet representing the current state of the animal.
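The core of the trace rule is that the presynaptic term is a decaying temporal average of recent activity rather than the instantaneous rate. Because the trace lags the moving packet, the weights learned while a rotation cell fires become asymmetric in head-direction space, encoding the direction of motion. A minimal sketch of this idea; the bump shape, gating by a single rotation cell, and all constants are illustrative assumptions rather than the paper's full (e.g., sigma-pi) formulation.

```python
import numpy as np

N, eta, lam = 100, 0.01, 0.8          # lam: trace decay per time step

W_rot = np.zeros((N, N))              # HD->HD weights gated by a rotation cell
trace = np.zeros(N)

def bump(c):
    """Gaussian activity packet centred at head direction index c (ring)."""
    d = np.minimum(np.abs(np.arange(N) - c), N - np.abs(np.arange(N) - c))
    return np.exp(-d**2 / 20.0)

r_rot = 1.0                           # clockwise rotation cell is firing
for t in range(60):
    r = bump((50 + t) % N)            # packet moves while the animal rotates
    # Trace rule: associate current postsynaptic firing with a temporal
    # average of *recent* presynaptic firing; the lagging trace makes the
    # learned weights asymmetric along the direction of motion.
    W_rot += eta * r_rot * np.outer(r, trace)
    trace = lam * trace + (1.0 - lam) * r

# Asymmetry check: when the rotation cell later fires, this asymmetric
# recurrent drive pushes the activity packet clockwise.
print(np.linalg.norm(W_rot - W_rot.T) > 0)
```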
