Similar Literature (20 records)
1.
We propose a simple neural network model to understand the dynamics of temporal pulse coding. The model is composed of coincidence detector neurons with uniform synaptic efficacies and random pulse propagation delays. We also assume a global negative feedback mechanism that controls the network activity, so that a fixed number of neurons fire within a certain time window. Under this constraint the network state becomes well defined and the dynamics become equivalent to a piecewise nonlinear map. Numerical simulations of the model indicate that the latency of neuronal firing is crucial to the global network dynamics: when the timing of postsynaptic firing is less sensitive to perturbations in the timing of presynaptic spikes, the network dynamics become stable and periodic, whereas increased sensitivity leads to instability and chaotic dynamics. Furthermore, we introduce a learning rule that decreases the Lyapunov exponent of an attractor and enlarges its basin of attraction.
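The firing-window constraint described in this abstract amounts to a k-winners-take-all rule. The sketch below illustrates that mechanism only; the network size, delay range, and all parameter values are our own illustrative choices, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 200, 20        # network size; neurons allowed to fire per window
D_MAX = 5             # maximum pulse propagation delay, in windows
T = 100               # number of time windows to simulate

delays = rng.integers(1, D_MAX + 1, size=(N, N))  # random propagation delays
arrivals = np.zeros((T + D_MAX + 1, N))           # pulses due per window/neuron

fired = rng.choice(N, size=K, replace=False)      # initial firing pattern

for t in range(T):
    # every firing neuron sends a pulse of unit efficacy to each neuron,
    # arriving after the private random delay of that connection
    for j in fired:
        arrivals[t + delays[j, :], np.arange(N)] += 1.0
    # global negative feedback: only the K neurons receiving the most
    # coincident pulses in this window fire (k-winners-take-all)
    fired = np.argsort(arrivals[t])[-K:]

print("final firing set:", np.sort(fired))
```

Because exactly K neurons fire per window, the state is fully determined by which K they are, which is what makes the dynamics reducible to a map.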

2.
Karsten, Andreas, Bernd, Ana D., Thomas. Neurocomputing, 2008, 71(7-9): 1694-1704
Biologically plausible excitatory neural networks develop a persistent synchronized pattern of activity depending on spontaneous activity and synaptic refractoriness (short-term depression). With fixed synaptic weights, synchronous bursts of oscillatory activity are stable and involve the whole network. In our modeling study we investigate the effect of a dynamic Hebbian-like learning mechanism, spike-timing-dependent plasticity (STDP), on the changes of synaptic weights depending on synchronous activity and network connection strategies (small-world topology). We show that STDP modifies the weights of synaptic connections in such a way that synchronization of neuronal activity is considerably weakened. Networks with a higher proportion of long connections can sustain a higher level of synchronization in spite of the influence of STDP. The resulting distribution of synaptic weights in single neurons depends both on the global statistics of the firing dynamics and on the number of incoming and outgoing connections.
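For reference, a minimal additive pair-based STDP update of the kind such studies apply per synapse might look as follows; the amplitudes and time constants are generic textbook values, not those used in this paper:

```python
import numpy as np

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012,
            tau_plus=20.0, tau_minus=20.0):
    """Additive pair-based STDP: potentiation when the presynaptic spike
    precedes the postsynaptic spike, depression otherwise (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:                                # pre leads post -> LTP
        return a_plus * np.exp(-dt / tau_plus)
    return -a_minus * np.exp(dt / tau_minus)  # post leads pre -> LTD

# in a nearly synchronous burst, millisecond jitter alone decides the sign
# of each update, so tightly correlated weights drift and synchrony erodes
print(stdp_dw(10.0, 15.0))  # pre before post: positive weight change
print(stdp_dw(15.0, 10.0))  # post before pre: negative weight change
```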

3.
Experimental evidence indicates that synaptic modification depends on the timing relationship between the presynaptic inputs and the output spikes that they generate. In this letter, results are presented for models of spike-timing-dependent plasticity (STDP) whose weight dynamics is determined by a stable fixed point. Four classes of STDP are identified on the basis of the time extent of their input-output interactions. The effect on the potentiation of synapses with different rates of input is investigated to elucidate the relationship of STDP with classical studies of long-term potentiation and depression and rate-based Hebbian learning. The selective potentiation of higher-rate synaptic inputs is found only for models where the time extent of the input-output interactions is input restricted (i.e., restricted to time domains delimited by adjacent synaptic inputs) and that have a time-asymmetric learning window with a longer time constant for depression than for potentiation. The analysis provides an account of learning dynamics determined by an input-selective stable fixed point. The effect of suppressive interspike interactions on STDP is also analyzed and shown to modify the synaptic dynamics.

4.
We present a model for spike-driven dynamics of a plastic synapse, suited for analog VLSI (aVLSI) implementation. The synaptic device behaves as a capacitor on short timescales and preserves the memory of two stable states (efficacies) on long timescales. The transitions (LTP/LTD) are stochastic because both the number and the distribution of neural spikes in any finite (stimulation) interval fluctuate, even at fixed pre- and postsynaptic spike rates. The dynamics of the single synapse is studied analytically by extending the solution to a classic problem in queuing theory (the Takacs process). The model of the synapse is implemented in aVLSI with only 18 transistors, and it is also directly simulated. The simulations indicate that the LTP/LTD probabilities versus rates are robust to fluctuations of the electronic parameters over a wide range of rates. The solutions for these probabilities are in very good agreement with both the simulations and the measurements. Moreover, the probabilities are readily manipulable by variations of the chip's parameters, even in ranges where they are very small. The tests of the electronic device cover the range from spontaneous activity (3-4 Hz) to stimulus-driven rates (50 Hz). Low transition probabilities can be maintained in all ranges, even though the intrinsic time constants of the device are short (approximately 100 ms). Synaptic transitions are triggered by elevated presynaptic rates: for low presynaptic rates, there are essentially no transitions. The synaptic device can preserve its memory for years in the absence of stimulation. Stochasticity of learning is a result of the variability of interspike intervals; noise is a feature of the distributed dynamics of the network. The fact that the synapse is binary on long timescales solves the stability problem of synaptic efficacies in the absence of stimulation. Yet stochastic learning theory ensures that this does not affect the collective behavior of the network, provided the transition probabilities are low and LTP is balanced against LTD.

5.
A new approach to unsupervised learning in a single-layer neural network is discussed. An algorithm for unsupervised learning based upon the Hebbian learning rule is presented. A simple neuron model is analyzed. A dynamic neural model, which contains both feed-forward and feedback connections between the input and the output, has been adopted. The proposed learning algorithm could be more correctly named self-supervised rather than unsupervised. The solution proposed here is a modified Hebbian rule, in which the modification of the synaptic strength is proportional not to pre- and postsynaptic activity, but instead to the presynaptic activity and the averaged value of the postsynaptic activity. It is shown that the model neuron tends to extract the principal component from a stationary input vector sequence. The commonly accepted additional decay terms for stabilizing the original Hebbian rule are avoided; thanks to the adopted network structure, the basic Hebbian scheme does not lead to unrealistic growth of the synaptic strengths.
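A rough sketch of the rule's principal-component behavior, under our own simplifications: we use an explicit weight normalization as a stand-in for the paper's stabilizing feed-forward/feedback structure, and all parameter values here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# correlated 2-D input stream whose leading principal component we want
C = np.array([[3.0, 1.5],
              [1.5, 1.0]])                    # input covariance
X = rng.multivariate_normal([0.0, 0.0], C, size=5000)

w = 0.1 * rng.normal(size=2)
y_avg, eta, tau = 0.0, 0.005, 50.0

for x in X:
    y = w @ x                    # linear model neuron
    y_avg += (y - y_avg) / tau   # running average of postsynaptic activity
    w += eta * x * y_avg         # Hebb with the *averaged* postsynaptic term
    w /= np.linalg.norm(w)       # stand-in for the paper's feedback stabilization

_, eig_vecs = np.linalg.eigh(C)
print("learned direction (up to sign):", np.round(w, 3))
print("true principal component:      ", np.round(eig_vecs[:, -1], 3))
```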

6.
In most neural network models, synapses are treated as static weights that change only with the slow time scales of learning. It is well known, however, that synapses are highly dynamic and show use-dependent plasticity over a wide range of time scales. Moreover, synaptic transmission is an inherently stochastic process: a spike arriving at a presynaptic terminal triggers the release of a vesicle of neurotransmitter from a release site with a probability that can be much less than one. We consider a simple model for dynamic stochastic synapses that can easily be integrated into common models for networks of integrate-and-fire neurons (spiking neurons). The parameters of this model have direct interpretations in terms of synaptic physiology. We investigate the consequences of the model for computing with individual spikes and demonstrate through rigorous theoretical results that the computational power of the network is increased through the use of dynamic synapses.
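One way to realize a dynamic stochastic synapse of this family is sketched below, with facilitation and depression state variables shaping a per-spike release probability. The update equations, function name, and parameter values are our guesses at a generic model of this kind, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(2)

def release_train(spike_times, U=0.4, tau_fac=100.0, tau_dep=300.0):
    """Stochastic synapse: each presynaptic spike releases a vesicle with a
    probability raised by facilitation and lowered by use-dependent
    depression after each release (times in ms)."""
    p_fac, p_dep = 0.0, 1.0
    last_t, releases = -np.inf, []
    for t in spike_times:
        dt = t - last_t
        p_fac *= np.exp(-dt / tau_fac)                       # facilitation decays
        p_dep = 1.0 - (1.0 - p_dep) * np.exp(-dt / tau_dep)  # depression recovers
        p = min(1.0, (U + p_fac) * p_dep)                    # momentary release prob.
        if rng.random() < p:                                 # stochastic release
            releases.append(t)
            p_dep *= 1.0 - U                                 # consume resources
        p_fac += U * (1.0 - p_fac)                           # spike adds facilitation
        last_t = t
    return releases

spikes = np.cumsum(rng.exponential(20.0, size=30))  # ~50 Hz Poisson train (ms)
print(f"{len(release_train(spikes))} releases out of {len(spikes)} spikes")
```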

7.
In timing-based neural codes, neurons have to emit action potentials at precise moments in time. We use a supervised learning paradigm to derive a synaptic update rule that optimizes by gradient ascent the likelihood of postsynaptic firing at one or several desired firing times. We find that the optimal strategy of up- and downregulating synaptic efficacies depends on the relative timing between presynaptic spike arrival and desired postsynaptic firing. If the presynaptic spike arrives before the desired postsynaptic spike timing, our optimal learning rule predicts that the synapse should become potentiated. The dependence of the potentiation on spike timing directly reflects the time course of an excitatory postsynaptic potential. However, our approach gives no unique reason for synaptic depression under reversed spike timing. In fact, the presence and amplitude of depression of synaptic efficacies for reversed spike timing depend on how constraints are implemented in the optimization problem. Two different constraints, control of postsynaptic rates and control of temporal locality, are studied. The relation of our results to spike-timing-dependent plasticity and reinforcement learning is discussed.

8.
Animal learning is associated with changes in the efficacy of connections between neurons. The rules that govern this plasticity can be tested in neural networks. Rules that train neural networks to map stimuli onto outputs are given by supervised learning and reinforcement learning theories. Supervised learning is efficient but biologically implausible. In contrast, reinforcement learning is biologically plausible but comparatively inefficient. It lacks a mechanism that can identify units at early processing levels that play a decisive role in the stimulus-response mapping. Here we show that this so-called credit assignment problem can be solved by a new role for attention in learning. There are two factors in our new learning scheme that determine synaptic plasticity: (1) a reinforcement signal that is homogeneous across the network and depends on the amount of reward obtained after a trial, and (2) an attentional feedback signal from the output layer that limits plasticity to those units at earlier processing levels that are crucial for the stimulus-response mapping. The new scheme is called attention-gated reinforcement learning (AGREL). We show that it is as efficient as supervised learning in classification tasks. AGREL is biologically realistic and integrates the role of feedback connections, attention effects, synaptic plasticity, and reinforcement learning signals into a coherent framework.

9.
Siri B, Berry H, Cessac B, Delord B, Quoy M. Neural Computation, 2008, 20(12): 2937-2966
We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule including passive forgetting and different timescales for neuronal activity and learning dynamics. Previous numerical work has reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on neural network evolution. Furthermore, we show that sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.

10.
We study both analytically and numerically the effect of presynaptic noise on the transmission of information in attractor neural networks. The noise occurs on a very short timescale compared to that of the neuron dynamics, and it produces short-time synaptic depression. This is inspired by recent neurobiological findings showing that synaptic strength may either increase or decrease on a short timescale depending on presynaptic activity. We thus describe a mechanism by which fast presynaptic noise enhances the sensitivity of the neural network to an external stimulus. The reason is that, in general, presynaptic noise induces nonequilibrium behavior and, consequently, the space of fixed points is qualitatively modified in such a way that the system can easily escape from an attractor. As a result, the model exhibits, in addition to pattern recognition, class identification and categorization, which may be relevant to the understanding of some of the brain's complex tasks.

11.
Elliott T. Neural Computation, 2008, 20(9): 2253-2307
In a recently proposed stochastic model of spike-timing-dependent plasticity, we derived general expressions for the expected change in synaptic strength, $\Delta S_n$, induced by a typical sequence of precisely $n$ spikes. We found that the rules $\Delta S_n$, $n \ge 3$, exhibit regions of parameter space in which stable, competitive interactions between afferents are present, leading to the activity-dependent segregation of afferents on their targets. The rules $\Delta S_n$, however, allow an indefinite period of time to elapse for the occurrence of precisely $n$ spikes, while most measurements of changes in synaptic strength are conducted over definite periods of time during which a potentially unknown number of spikes may occur. Here, therefore, we derive an expression, $\Delta S(t)$, for the expected change in synaptic strength of a synapse experiencing an average sequence of spikes of typical length occurring during a fixed period of time, $t$. We find that the resulting synaptic plasticity rule $\Delta S(t)$ exhibits a number of remarkable properties. It is an entirely self-stabilizing learning rule in all regions of parameter space. Further, its parameter space is carved up into three distinct, contiguous regions in which the exhibited synaptic interactions undergo different transitions as the time $t$ is increased. In one region, the synaptic dynamics change from noncompetitive to competitive to entirely depressing. In a second region, the dynamics change from noncompetitive to competitive without the second transition to entirely depressing dynamics. In a third region, the dynamics are always noncompetitive. The locations of these regions are not fixed in parameter space but may be modified by changing the mean presynaptic firing rates. Thus, neurons may be moved among these three different regions and so exhibit different sets of synaptic dynamics depending on their mean firing rates.

12.
We introduce a model of generalized Hebbian learning and retrieval in oscillatory neural networks modeling cortical areas such as the hippocampus and olfactory cortex. Recent experiments have shown that synaptic plasticity depends on spike timing, especially at synapses from excitatory pyramidal cells, in the hippocampus and in sensory and cerebellar cortex. Here we study how such plasticity can be used to form memories and input representations when the neural dynamics are oscillatory, as is common in the brain (particularly in the hippocampus and olfactory cortex). Learning is assumed to occur in a phase of neural plasticity in which the network is clamped to external teaching signals. By suitable manipulation of the nonlinearity of the neurons or of the oscillation frequencies during learning, the model can be made, in a retrieval phase, either to categorize new inputs or to map them, in a continuous fashion, onto the space spanned by the imprinted patterns. We identify the first of these possibilities with the function of the olfactory cortex and the second with the observed response characteristics of place cells in the hippocampus. We investigate both kinds of networks analytically and by computer simulations, and we link the models with experimental findings, exploring in particular how the spike-timing dependence of the synaptic plasticity constrains the computational function of the network and vice versa.

13.
Brader JM, Senn W, Fusi S. Neural Computation, 2007, 19(11): 2881-2912
We present a model of spike-driven synaptic plasticity inspired by experimental observations and motivated by the desire to build an electronic hardware device that can learn to classify complex stimuli in a semisupervised fashion. During training, patterns of activity are sequentially imposed on the input neurons, and an additional instructor signal drives the output neurons toward the desired activity. The network is made of integrate-and-fire neurons with constant leak and a floor. The synapses are bistable, and they are modified by the arrival of presynaptic spikes. The sign of the change is determined by both the depolarization and the state of a variable that integrates the postsynaptic action potentials. Following the training phase, the instructor signal is removed, and the output neurons are driven purely by the activity of the input neurons weighted by the plastic synapses. In the absence of stimulation, the synapses preserve their internal state indefinitely. Memories are also very robust to the disruptive action of spontaneous activity. A network of 2000 input neurons is shown to be able to classify correctly a large number (thousands) of highly overlapping patterns (300 classes of preprocessed LaTeX characters, 30 patterns per class, and a subset of the NIST characters data set) and to generalize with performances that are better than or comparable to those of artificial neural networks. Finally, we show that the synaptic dynamics are compatible with many of the experimental observations on the induction of long-term modifications (spike-timing-dependent plasticity and its dependence on both the postsynaptic depolarization and the frequency of pre- and postsynaptic neurons).
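A toy rendition of such a bistable, spike-driven synapse is sketched below. The structure (presynaptic-spike-gated updates conditioned on depolarization and a postsynaptic "calcium" trace, plus a bistable drift) follows the abstract; every threshold, rate, and band is an illustrative placeholder, not the paper's calibrated value:

```python
import numpy as np

rng = np.random.default_rng(3)

X, THETA_X = 0.3, 0.5    # internal synaptic variable and bistability threshold
ca, tau_ca = 0.0, 60.0   # trace integrating postsynaptic spikes ("calcium")
DT = 1.0                 # time step, ms

for step in range(2000):
    post_spike = rng.random() < 0.05            # postsynaptic activity
    ca += -ca / tau_ca * DT + (1.0 if post_spike else 0.0)
    depol = rng.random()                        # stand-in for membrane depolarization
    if rng.random() < 0.03:                     # a presynaptic spike arrives
        if depol > 0.6 and 1.0 < ca < 4.0:      # high depolarization, LTP band
            X = min(1.0, X + 0.05)
        elif depol <= 0.6 and 0.5 < ca < 3.0:   # low depolarization, LTD band
            X = max(0.0, X - 0.05)
    # between spikes X drifts toward the nearest stable state (bistability)
    X += (0.002 if X > THETA_X else -0.002) * DT
    X = min(1.0, max(0.0, X))

print("efficacy settles near", round(X))  # 0 or 1: binary on long timescales
```

The drift term is what makes the synapse binary on long timescales: without ongoing stimulation, X relaxes to one of the two stable states and stays there.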

14.
The dynamics of cortical cognitive maps developed by self-organization must include the aspects of long- and short-term memory. The behavior of such a neural network is characterized by an equation of neural activity as a fast phenomenon and an equation of synaptic modification as the slow part of the neural system. We present a new method, based on the theory of flow invariance, for analyzing the dynamics of a biologically relevant system with different time scales. We derive conditions under which the solutions of such a system are bounded, conditions that are less restrictive than those of K-monotone theory, singular perturbation theory, or approaches based on supervised synaptic learning. We prove the existence and uniqueness of the equilibrium. A strict Lyapunov function for the flow of a competitive neural system with different time scales is given, and on its basis we prove the global exponential stability of the equilibrium point.

15.
We study analytically a model of long-term synaptic plasticity where synaptic changes are triggered by presynaptic spikes, postsynaptic spikes, and the time differences between presynaptic and postsynaptic spikes. The changes due to correlated input and output spikes are quantified by means of a learning window. We show that plasticity can lead to an intrinsic stabilization of the mean firing rate of the postsynaptic neuron. Subtractive normalization of the synaptic weights (summed over all presynaptic inputs converging on a postsynaptic neuron) follows if, in addition, the mean input rates and the mean input correlations are identical at all synapses. If the integral over the learning window is positive, firing-rate stabilization requires a non-Hebbian component, whereas such a component is not needed if the integral of the learning window is negative. A negative integral corresponds to anti-Hebbian learning in a model with slowly varying firing rates. For spike-based learning, a strict distinction between Hebbian and anti-Hebbian rules is questionable since learning is driven by correlations on the timescale of the learning window. The correlations between presynaptic and postsynaptic firing are evaluated for a piecewise-linear Poisson model and for a noisy spiking neuron model with refractoriness. While a negative integral over the learning window leads to intrinsic rate stabilization, the positive part of the learning window picks up spatial and temporal correlations in the input.
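The sign condition on the integral of the learning window is easy to check numerically. The window below is a generic time-asymmetric double-exponential form; its amplitudes and time constants are illustrative, not the paper's:

```python
import numpy as np

def learning_window(s, a_plus=1.0, a_minus=1.1, tau_plus=17.0, tau_minus=34.0):
    """Time-asymmetric learning window W(s), s = t_post - t_pre in ms.
    Positive branch: potentiation; negative branch: depression."""
    return np.where(s >= 0,
                    a_plus * np.exp(-s / tau_plus),
                    -a_minus * np.exp(s / tau_minus))

s = np.linspace(-200.0, 200.0, 4001)
ds = s[1] - s[0]
integral = (learning_window(s) * ds).sum()  # ~ a_plus*tau_plus - a_minus*tau_minus
print(f"integral over the learning window: {integral:.2f}")
# a negative integral gives intrinsic firing-rate stabilization with no
# non-Hebbian component; a positive integral would require one
```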

16.
Lee TP, Buonomano DV. Neural Computation, 2012, 24(10): 2579-2603
The discrimination of complex auditory stimuli relies on the spatiotemporal structure of spike patterns arriving in the cortex. While recordings from auditory areas reveal that many neurons are highly selective to specific spatiotemporal stimuli, the mechanisms underlying this selectivity are unknown. Using computer simulations, we show that selectivity can emerge in neurons in an entirely unsupervised manner. The model is based on recurrently connected spiking neurons and synapses that exhibit short-term synaptic plasticity. During a developmental stage, spoken digits were presented to the network; the only type of long-term plasticity present was a form of homeostatic synaptic plasticity. From an initially unresponsive state, training generated a high percentage of neurons that responded selectively to individual digits. Furthermore, units within the network exhibited a cardinal feature of vocalization-sensitive neurons in vivo: differential responses between forward and reverse stimulus presentations. Direction selectivity deteriorated significantly, however, if short-term synaptic plasticity was removed. These results establish that a simple form of homeostatic plasticity is capable of guiding recurrent networks into regimes in which complex stimuli can be discriminated. In addition, one computational function of short-term synaptic plasticity may be to provide an inherent temporal asymmetry, thus contributing to the characteristic forward-reverse selectivity.

17.
Nearly all neuronal information processing and interneuronal communication in the brain involves action potentials, or spikes, which drive the short-term synaptic dynamics of neurons, but also their long-term dynamics, via synaptic plasticity. In many brain structures, action potential activity is considered to be sparse. This sparseness of activity has been exploited to reduce the computational cost of large-scale network simulations, through the development of event-driven simulation schemes. However, existing event-driven simulation schemes use extremely simplified neuronal models. Here, we implement and critically evaluate an event-driven algorithm (ED-LUT) that uses precalculated look-up tables to characterize synaptic and neuronal dynamics. This approach enables the use of more complex (and realistic) neuronal models or data in representing the neurons, while retaining the advantage of high-speed simulation. We demonstrate the method's application for neurons containing exponential synaptic conductances, thereby implementing shunting inhibition, a phenomenon that is critical to cellular computation. We also introduce an improved two-stage event-queue algorithm, which allows the simulations to scale efficiently to highly connected networks with arbitrary propagation delays. Finally, the scheme readily accommodates implementation of synaptic plasticity mechanisms that depend on spike timing, enabling future simulations to explore issues of long-term learning and adaptation in large-scale networks.
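The core of any event-driven scheme is a spike-event priority queue plus lazy, between-event state updates. The toy below shows that skeleton only; it is not the ED-LUT implementation, and the connectivity, delays, and parameters are invented:

```python
import heapq
import math

TAU_M, V_TH, W = 20.0, 1.0, 1.1       # membrane tau (ms), threshold, weight
synapses = {0: [(1, 2.0), (2, 5.0)],  # presynaptic -> [(postsynaptic, delay ms)]
            1: [(2, 1.0)],
            2: []}
v = {n: 0.0 for n in synapses}        # membrane potentials
last = {n: 0.0 for n in synapses}     # time of each neuron's last update

events = [(0.0, 0)]                   # (arrival time, target neuron)
while events:
    t, n = heapq.heappop(events)
    # lazy update: decay the membrane only when an event touches the neuron
    v[n] = v[n] * math.exp(-(t - last[n]) / TAU_M) + W
    last[n] = t
    if v[n] >= V_TH:                  # threshold crossing -> spike
        v[n] = 0.0
        print(f"t={t:5.1f} ms  neuron {n} fires")
        for post, d in synapses[n]:   # schedule deliveries after axonal delay
            heapq.heappush(events, (t + d, post))
```

Because neuron state is touched only when an event arrives, the simulation cost scales with the number of spikes rather than with the number of time steps, which is the point of the event-driven approach when activity is sparse.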

18.
Neurophysiological experiments show that the strength of synaptic connections can undergo substantial changes on a short time scale. These changes depend on the history of the presynaptic input. Using mean-field techniques, we study how the short-time dynamics of synaptic connections influence the performance of attractor neural networks, in terms of both their memory capacity and their capability to process external signals. For binary discrete-time as well as for firing-rate continuous-time neural networks, the fixed points of the network dynamics are shown to be unaffected by synaptic dynamics. However, the stability of patterns changes considerably. Synaptic depression turns out to reduce the storage capacity. On the other hand, synaptic depression is shown to be advantageous for the processing of pattern sequences. The analytical results on stability, on the size of the basins of attraction, and on the switching between patterns are complemented by numerical simulations.
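A small demonstration of the sequence-processing advantage, under our own toy assumptions: a Hopfield-style network, a simple resource-depletion model of depression on the active units, and an invented cue strength. Depression weakens the drive of the currently retrieved pattern, so a weak cue that would otherwise be ignored should be able to switch the state:

```python
import numpy as np

rng = np.random.default_rng(4)
N, P = 200, 3
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N               # Hebbian couplings
np.fill_diagonal(W, 0.0)

U, TAU_REC = 0.2, 8.0                         # resource usage and recovery time
x = np.ones(N)                                # available presynaptic resources
s = patterns[0].copy()                        # start inside pattern 0's basin

for t in range(60):
    h = W @ (x * (s > 0))                     # drive from active, depressed units
    if t == 30:
        h = h + 0.3 * patterns[1]             # brief weak cue toward pattern 1
    s = np.where(h >= 0, 1, -1)
    x += (1.0 - x) / TAU_REC                  # resources recover ...
    x[s > 0] *= 1.0 - U                       # ... and are consumed by firing
    if t % 10 == 9:
        print(t, np.round(patterns @ s / N, 2))  # overlaps with the 3 patterns
# with x pinned at 1 (no depression), the same cue should fail to switch
```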

19.
We describe and discuss the properties of a binary neural network that can serve as a dynamic neural filter (DNF), which maps regions of input space into spatiotemporal sequences of neuronal activity. Both deterministic and stochastic dynamics are studied, allowing the investigation of the stability of spatiotemporal sequences under noisy conditions. We define a measure of the coding capacity of a DNF and develop an algorithm for constructing a DNF that can serve as a source of given codes. On the basis of this algorithm, we suggest using a minimal DNF capable of generating observed sequences as a measure of complexity of spatiotemporal data. This measure is applied to experimental observations in the locust olfactory system, whose reverberating local field potential provides a natural temporal scale allowing the use of a binary DNF. For random synaptic matrices, a DNF can generate very large cycles, thus becoming an efficient tool for producing spatiotemporal codes. The latter can be stabilized by applying to the parameters of the DNF a learning algorithm with suitable margins.
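A deterministic binary DNF fits in a few lines: iterate a threshold-linear binary update under a fixed input and split the orbit into a transient and a cycle. The update rule and cycle bookkeeping below follow the general construction; the sizes and distributions are our own choices:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 10
W = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))  # random synaptic matrix
theta = rng.normal(scale=0.1, size=N)                # thresholds

def dnf_sequence(x, steps=1200):
    """Iterate s(t+1) = sign(W s(t) + x - theta) from a fixed initial state
    and split the resulting orbit into a transient and a cycle."""
    s = np.ones(N)
    seen, seq = {}, []
    for t in range(steps):               # only 2**N states: a repeat is guaranteed
        key = tuple(s)
        if key in seen:                  # state revisited -> cycle closed
            return seq[:seen[key]], seq[seen[key]:]
        seen[key] = t
        seq.append(s.copy())
        s = np.where(W @ s + x - theta >= 0, 1.0, -1.0)
    return seq, []

x = rng.normal(scale=0.5, size=N)        # one region of input space
transient, cycle = dnf_sequence(x)
print(f"transient length {len(transient)}, cycle length {len(cycle)}")
```

Different inputs x land in different cycles, which is how regions of input space get mapped onto distinct spatiotemporal codes.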

20.
Matsumoto N, Okada M. Neural Computation, 2002, 14(12): 2883-2902
Recent biological experimental findings have shown that synaptic plasticity depends on the relative timing of the pre- and postsynaptic spikes, which determines whether long-term potentiation (LTP) or long-term depression (LTD) is induced. This form of synaptic plasticity has been called temporally asymmetric Hebbian plasticity (TAH). Many authors have numerically demonstrated that neural networks are capable of storing spatiotemporal patterns by means of TAH. However, the mathematical mechanism of the storage of spatiotemporal patterns is still unknown, and the effect of LTD in particular. In this article, we employ a simple neural network model and show that interference between LTP and LTD disappears in a sparse coding scheme. On the other hand, the covariance learning rule is known to be indispensable for the storage of sparse patterns. We also show that TAH has the same qualitative effect as the covariance rule when spatiotemporal patterns are embedded in the network.
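Sequence storage with a temporally asymmetric, covariance-style rule can be illustrated as follows. This is a sketch under our own assumptions (0/1 sparse patterns, cyclic wiring of pattern mu to mu+1, and top-k activity control at the coding level), not the paper's analysis:

```python
import numpy as np

rng = np.random.default_rng(6)
N, P, a = 400, 5, 0.1                        # network size, patterns, coding level
xi = (rng.random((P, N)) < a).astype(float)  # sparse 0/1 patterns

# temporally asymmetric covariance rule: pattern mu is wired to recall mu+1
W = np.zeros((N, N))
for mu in range(P):
    W += np.outer(xi[(mu + 1) % P] - a, xi[mu] - a)
W /= N * a * (1.0 - a)

s = xi[0].copy()
k = int(a * N)                               # keep activity at the coding level
for t in range(10):
    h = W @ s                                # field driven by the current pattern
    s = np.zeros(N)
    s[np.argsort(h)[-k:]] = 1.0              # the a*N most driven units fire
    m = (xi - a) @ s / (N * a * (1.0 - a))   # overlaps with the stored patterns
    print(t, "recalls pattern", int(np.argmax(m)))
```

Starting from pattern 0, the state should step through the stored sequence 1, 2, 3, 4, 0, ... ; the subtraction of the coding level a plays the role of the covariance rule that makes sparse storage work.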
