Similar Articles
20 similar articles found.
1.
We present a spiking neuron model that allows for an analytic calculation of the correlations between pre- and postsynaptic spikes. The neuron model is a generalization of the integrate-and-fire model and is equipped with a probabilistic spike-triggering mechanism. We show that under certain biologically plausible conditions, pre- and postsynaptic spike trains can be described simultaneously as an inhomogeneous Poisson process. Inspired by experimental findings, we develop a model for synaptic long-term plasticity that relies on the relative timing of pre- and postsynaptic action potentials. Given the input statistics, we compute the stationary synaptic weights that result from the temporal correlations between the pre- and postsynaptic spikes. By means of both analytic calculations and computer simulations, we show that such a mechanism of synaptic plasticity is able to strengthen those input synapses that convey precisely timed spikes at the expense of synapses that deliver spikes with a broad temporal distribution. This may be of vital importance for any kind of information processing based on spiking neurons and temporal coding.
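As an illustration of the selection effect described above, the sketch below applies a generic pair-based exponential STDP window (not the paper's analytically derived plasticity model) to inputs with narrow versus broad spike-time jitter; the amplitudes and time constants (a_plus, a_minus, tau_plus, tau_minus) are illustrative assumptions.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair, dt = t_post - t_pre in ms (illustrative constants)."""
    if dt >= 0:
        return a_plus * np.exp(-dt / tau_plus)    # pre before post: potentiation
    return -a_minus * np.exp(dt / tau_minus)      # post before pre: depression

rng = np.random.default_rng(0)
post_times = np.arange(50.0, 1000.0, 50.0)        # regular postsynaptic spikes (ms)
for jitter in (2.0, 20.0):                        # precisely timed vs broadly distributed input
    pre_times = post_times - 5.0 + rng.normal(0.0, jitter, post_times.size)
    dw = sum(stdp_dw(t_post - t_pre) for t_post, t_pre in zip(post_times, pre_times))
    print(f"input jitter {jitter:4.1f} ms -> total weight change {dw:+.4f}")
```

Under such a window, the precisely timed input accumulates net potentiation, while the broadly jittered input gains little or even loses strength, consistent with the selection effect described in the abstract.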

2.
Lüdtke N, Nelson ME. Neural Computation, 2006, 18(12): 2879-2916.
We study the encoding of weak signals in spike trains with interspike interval (ISI) correlations and the signals' subsequent detection in sensory neurons. Motivated by the observation of negative ISI correlations in auditory and electrosensory afferents, we assess the theoretical performance limits of an individual detector neuron receiving a weak signal distributed across multiple afferent inputs. We assess the functional role of ISI correlations in the detection process using statistical detection theory and derive two sequential likelihood ratio detector models: one for afferents with renewal statistics; the other for afferents with negatively correlated ISIs. We suggest a mechanism that might enable sensory neurons to implicitly compute conditional probabilities of presynaptic spikes by means of short-term synaptic plasticity. We demonstrate how this mechanism can enhance a postsynaptic neuron's sensitivity to weak signals by exploiting the correlation structure of the input spike trains. Our model not only captures fundamental aspects of early electrosensory signal processing in weakly electric fish, but may also bear relevance to the mammalian auditory system and other sensory modalities.  相似文献   
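The renewal-afferent case can be illustrated with a textbook sequential probability ratio test on exponential interspike intervals; the firing rates, error levels, and the assumption of independent ISIs below are illustrative, and the negatively correlated ISI variant discussed in the abstract is not covered by this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
r0, r1 = 100.0, 110.0                 # baseline rate vs rate with a weak signal present (Hz)
A, B = np.log(19.0), -np.log(19.0)    # decision thresholds for roughly 5% error rates

def sprt(true_rate, max_isi=10_000):
    """Sequential likelihood-ratio test on exponential ISIs: 'signal' (r1) vs 'no signal' (r0)."""
    llr = 0.0
    for n in range(1, max_isi + 1):
        isi = rng.exponential(1.0 / true_rate)          # seconds
        llr += np.log(r1 / r0) - (r1 - r0) * isi        # per-ISI log-likelihood ratio
        if llr >= A:
            return "signal", n
        if llr <= B:
            return "no signal", n
    return "undecided", max_isi

runs = [sprt(r1) for _ in range(200)]                   # signal actually present
hit_rate = sum(d == "signal" for d, _ in runs) / len(runs)
print(f"hit rate {hit_rate:.2f}, mean ISIs to decision {np.mean([n for _, n in runs]):.0f}")
```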

3.
Pairwise correlations among spike trains recorded in vivo have been frequently reported. It has been argued that correlated activity could play an important role in the brain, because it efficiently modulates the response of a postsynaptic neuron. We show here that a neuron's output firing rate critically depends on the higher-order statistics of the input ensemble. We constructed two statistical models of populations of spiking neurons that fired with the same rates and had identical pairwise correlations, but differed with regard to the higher-order interactions within the population. The first ensemble was characterized by clusters of spikes synchronized over the whole population. In the second ensemble, the size of spike clusters was, on average, proportional to the pairwise correlation. For both input models, we assessed the role of the size of the population, the firing rate, and the pairwise correlation on the output rate of two simple model neurons: a continuous firing-rate model and a conductance-based leaky integrate-and-fire neuron. An approximation to the mean output rate of the firing-rate neuron could be derived analytically with the help of shot noise theory. Interestingly, the essential features of the mean response of the two neuron models were similar. For both neuron models, the three input parameters played radically different roles with respect to the postsynaptic firing rate, depending on the interaction structure of the input. For instance, in the case of an ensemble with small and distributed spike clusters, the output firing rate was efficiently controlled by the size of the input population. In addition to the interaction structure, the ratio of inhibition to excitation was found to strongly modulate the effect of correlation on the postsynaptic firing rate.  相似文献   
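To make the distinction concrete, the sketch below builds two input ensembles in the spirit of the single- and multiple-interaction-process constructions: one with population-wide synchronous events, one with spike clusters whose mean size is proportional to the pairwise correlation. Rates are matched and pairwise correlations approximately matched; the bin size, rate, and correlation values are illustrative rather than the paper's exact parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
N, rate, c, dt = 100, 10.0, 0.1, 1e-3        # neurons, Hz, target pairwise correlation, bin (s)
n_bins = int(10.0 / dt)                      # 10 s of activity

def poisson_bins(r, size):
    return rng.random((size, n_bins)) < r * dt

# Ensemble 1: independent background plus one common source injected into every neuron
# (spike clusters span the whole population).
common = poisson_bins(c * rate, 1)
ens1 = poisson_bins((1 - c) * rate, N) | common

# Ensemble 2: spikes of a high-rate "mother" process copied to each neuron with probability c
# (cluster size is binomial with mean c * N).
mother = poisson_bins(rate / c, 1)
ens2 = mother & (rng.random((N, n_bins)) < c)

for name, pop in (("whole-population clusters", ens1), ("proportional clusters", ens2)):
    mean_rate = pop.sum() / (N * n_bins * dt)
    mean_corr = np.corrcoef(pop)[np.triu_indices(N, 1)].mean()
    print(f"{name}: rate {mean_rate:.1f} Hz, mean pairwise correlation {mean_corr:.3f}")
```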

4.
We propose a simple neural network model to understand the dynamics of temporal pulse coding. The model is composed of coincidence detector neurons with uniform synaptic efficacies and random pulse propagation delays. We also assume a global negative feedback mechanism which controls the network activity, leading to a fixed number of neurons firing within a certain time window. Due to this constraint, the network state becomes well defined and the dynamics equivalent to a piecewise nonlinear map. Numerical simulations of the model indicate that the latency of neuronal firing is crucial to the global network dynamics; when the timing of postsynaptic firing is less sensitive to perturbations in timing of presynaptic spikes, the network dynamics become stable and periodic, whereas increased sensitivity leads to instability and chaotic dynamics. Furthermore, we introduce a learning rule which decreases the Lyapunov exponent of an attractor and enlarges the basin of attraction.  相似文献   

5.
A new approach to unsupervised learning in a single-layer neural network is discussed. An algorithm for unsupervised learning based on the Hebbian learning rule is presented, and a simple neuron model is analyzed. A dynamic neural model containing both feed-forward and feedback connections between the input and the output is adopted. The proposed learning algorithm could more accurately be called self-supervised rather than unsupervised. The solution proposed here is a modified Hebbian rule in which the modification of the synaptic strength is proportional not to pre- and postsynaptic activity, but to the presynaptic activity and the averaged value of the postsynaptic activity. It is shown that the model neuron tends to extract the principal component from a stationary input vector sequence. The decay terms usually added to stabilize the original Hebbian rule are avoided: thanks to the adopted network structure, implementing the basic Hebbian scheme does not lead to unrealistic growth of the synaptic strengths.
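A minimal sketch of the averaging idea: the Hebbian step uses a running average of the postsynaptic activity instead of its instantaneous value. Because the paper's stabilizing feed-forward/feedback structure is not reproduced here, an explicit weight normalization is added as a stand-in; the covariance matrix, learning rate, and averaging constant are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
C = np.array([[3.0, 2.0], [2.0, 3.0]])       # input covariance; leading principal component is [1, 1]/sqrt(2)
X = rng.multivariate_normal([0.0, 0.0], C, size=20_000)

w = rng.normal(0.0, 0.1, 2)
y_avg, eta, tau = 0.0, 0.005, 0.1
for x in X:
    y = w @ x                         # postsynaptic activity
    y_avg += tau * (y - y_avg)        # averaged postsynaptic activity
    w += eta * y_avg * x              # Hebbian step: presynaptic input times averaged postsynaptic activity
    w /= np.linalg.norm(w)            # explicit normalization, standing in for the paper's network structure

pc = np.linalg.eigh(C)[1][:, -1]      # true leading eigenvector
print("learned direction:", np.round(w, 3), "| alignment with principal component:", round(abs(w @ pc), 3))
```

The learned direction aligns (up to sign) with the leading principal component of the input, illustrating the principal-component extraction described in the abstract.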

6.
We studied the hypothesis that synaptic dynamics is controlled by three basic principles: (1) synapses adapt their weights so that neurons can effectively transmit information, (2) homeostatic processes stabilize the mean firing rate of the postsynaptic neuron, and (3) weak synapses adapt more slowly than strong ones, while maintenance of strong synapses is costly. Our results show that a synaptic update rule derived from these principles shares features with spike-timing-dependent plasticity, is sensitive to correlations in the input, and is useful for synaptic memory. Moreover, input selectivity (sharply tuned receptive fields) of postsynaptic neurons develops only if stimuli with strong features are presented. Sharply tuned neurons can coexist with unselective ones, and the distribution of synaptic weights can be unimodal or bimodal. The formulation of synaptic dynamics through an optimality criterion provides a simple graphical argument for the stability of synapses, which is necessary for synaptic memory.

7.
The precise times of occurrence of individual pre- and postsynaptic action potentials are known to play a key role in the modification of synaptic efficacy. Based on stimulation protocols of two synaptically connected neurons, we infer an algorithm that reproduces the experimental data by modifying the probability of vesicle discharge as a function of the relative timing of spikes in the pre- and postsynaptic neurons. The primary feature of this algorithm is an asymmetry with respect to the direction of synaptic modification depending on whether the presynaptic spikes precede or follow the postsynaptic spike. Specifically, if the presynaptic spike occurs up to 50 ms before the postsynaptic spike, the probability of vesicle discharge is upregulated, while the probability of vesicle discharge is downregulated if the presynaptic spike occurs up to 50 ms after the postsynaptic spike. When neurons fire irregularly with Poisson spike trains at constant mean firing rates, the probability of vesicle discharge converges toward a characteristic value determined by the pre- and postsynaptic firing rates. On the other hand, if the mean rates of the Poisson spike trains slowly change with time, our algorithm predicts modifications in the probability of release that generalize Hebbian and Bienenstock-Cooper-Munro rules. We conclude that the proposed spike-based synaptic learning algorithm provides a general framework for regulating neurotransmitter release probability.
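The convergence of the release probability under stationary Poisson firing can be illustrated with a toy version of such a rule; the nearest-neighbour pairing scheme, the ±50 ms window, the saturating update, and the learning rate below are illustrative assumptions rather than the paper's inferred algorithm itself.

```python
import numpy as np

def update_release_prob(p, dt, lr=0.05, window=50.0):
    """Adjust the vesicle-discharge probability after one pre/post pairing (dt = t_post - t_pre, ms)."""
    if 0 < dt <= window:
        p += lr * (1.0 - p)          # presynaptic spike precedes postsynaptic spike: upregulate
    elif -window <= dt < 0:
        p -= lr * p                  # presynaptic spike follows postsynaptic spike: downregulate
    return p

rng = np.random.default_rng(4)
pre = np.cumsum(rng.exponential(1000.0 / 20.0, 2000))    # ~20 Hz Poisson presynaptic spike times (ms)
post = np.cumsum(rng.exponential(1000.0 / 20.0, 2000))   # ~20 Hz Poisson postsynaptic spike times (ms)

p, history = 0.5, []
for t_pre in pre:
    nearest_post = post[np.argmin(np.abs(post - t_pre))]
    p = update_release_prob(p, nearest_post - t_pre)
    history.append(p)
print(f"mean release probability over the last 500 pairings: {np.mean(history[-500:]):.2f}")
```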

8.
Experimental studies have observed synaptic potentiation when a presynaptic neuron fires shortly before a postsynaptic neuron and synaptic depression when the presynaptic neuron fires shortly after. The dependence of synaptic modulation on the precise timing of the two action potentials is known as spike-timing dependent plasticity (STDP). We derive STDP from a simple computational principle: synapses adapt so as to minimize the postsynaptic neuron's response variability to a given presynaptic input, causing the neuron's output to become more reliable in the face of noise. Using an objective function that minimizes response variability and the biophysically realistic spike-response model of Gerstner (2001), we simulate neurophysiological experiments and obtain the characteristic STDP curve along with other phenomena, including the reduction in synaptic plasticity as synaptic efficacy increases. We compare our account to other efforts to derive STDP from computational principles and argue that our account provides the most comprehensive coverage of the phenomena. Thus, reliability of neural response in the face of noise may be a key goal of unsupervised cortical adaptation.  相似文献   

9.
In timing-based neural codes, neurons have to emit action potentials at precise moments in time. We use a supervised learning paradigm to derive a synaptic update rule that optimizes by gradient ascent the likelihood of postsynaptic firing at one or several desired firing times. We find that the optimal strategy of up- and downregulating synaptic efficacies depends on the relative timing between presynaptic spike arrival and desired postsynaptic firing. If the presynaptic spike arrives before the desired postsynaptic spike timing, our optimal learning rule predicts that the synapse should become potentiated. The dependence of the potentiation on spike timing directly reflects the time course of an excitatory postsynaptic potential. However, our approach gives no unique reason for synaptic depression under reversed spike timing. In fact, the presence and amplitude of depression of synaptic efficacies for reversed spike timing depend on how constraints are implemented in the optimization problem. Two different constraints, control of postsynaptic rates and control of temporal locality, are studied. The relation of our results to spike-timing-dependent plasticity and reinforcement learning is discussed.  相似文献   
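A discrete-time sketch of the gradient-ascent idea, using an exponential escape-noise rate and exponential EPSP kernels; all parameters are illustrative, and the paper's constraints (rate control, temporal locality) and refractoriness are omitted, so the depression of the mistimed synapses here is unconstrained.

```python
import numpy as np

dt, T, tau_m = 1.0, 200.0, 10.0
t = np.arange(0.0, T, dt)                           # ms
pre_times = [40.0, 95.0, 150.0]                     # one presynaptic spike per synapse
t_desired = 100.0                                   # desired postsynaptic firing time
eps = np.array([np.where(t >= tp, np.exp(-(t - tp) / tau_m), 0.0) for tp in pre_times])  # EPSP kernels

w = np.full(3, 0.5)
rho0, du, theta, lr = 0.05, 1.0, 1.0, 0.2           # escape-noise parameters and learning rate
S = (t == t_desired).astype(float)                  # target spike train: one spike at the desired time

for _ in range(100):
    u = w @ eps                                     # membrane potential
    rho = rho0 * np.exp((u - theta) / du)           # instantaneous firing rate (escape noise)
    grad = ((S - rho * dt) * eps / du).sum(axis=1)  # gradient of the log-likelihood of S w.r.t. the weights
    w += lr * grad                                  # gradient ascent

print("weights after learning:", np.round(w, 2))    # the synapse firing just before t_desired is potentiated
```

The synapse whose spike arrives shortly before the desired firing time is potentiated, while the others drift toward depression; how strongly they are depressed depends on the additional constraints, as the abstract discusses.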

10.
We propose a model of intrinsic plasticity for a continuous activation model neuron based on information theory. We then show how intrinsic and synaptic plasticity mechanisms interact and allow the neuron to discover heavy-tailed directions in the input. We also demonstrate that intrinsic plasticity may be an alternative explanation for the sliding threshold postulated in the BCM theory of synaptic plasticity. We present a theoretical analysis of the interaction of intrinsic plasticity with different Hebbian learning rules for the case of clustered inputs. Finally, we perform experiments on the "bars" problem, a popular nonlinear independent component analysis problem.  相似文献   

11.
Elliott T. Neural Computation, 2008, 20(9): 2253-2307.
In a recently proposed, stochastic model of spike-timing-dependent plasticity, we derived general expressions for the expected change in synaptic strength, ΔS_n, induced by a typical sequence of precisely n spikes. We found that the rules ΔS_n, n ≥ 3, exhibit regions of parameter space in which stable, competitive interactions between afferents are present, leading to the activity-dependent segregation of afferents on their targets. The rules ΔS_n, however, allow an indefinite period of time to elapse for the occurrence of precisely n spikes, while most measurements of changes in synaptic strength are conducted over definite periods of time during which a potentially unknown number of spikes may occur. Here, therefore, we derive an expression, ΔS(t), for the expected change in synaptic strength of a synapse experiencing an average sequence of spikes of typical length occurring during a fixed period of time, t. We find that the resulting synaptic plasticity rule ΔS(t) exhibits a number of remarkable properties. It is an entirely self-stabilizing learning rule in all regions of parameter space. Further, its parameter space is carved up into three distinct, contiguous regions in which the exhibited synaptic interactions undergo different transitions as the time t is increased. In one region, the synaptic dynamics change from noncompetitive to competitive to entirely depressing. In a second region, the dynamics change from noncompetitive to competitive without the second transition to entirely depressing dynamics. In a third region, the dynamics are always noncompetitive. The locations of these regions are not fixed in parameter space but may be modified by changing the mean presynaptic firing rates. Thus, neurons may be moved among these three different regions and so exhibit different sets of synaptic dynamics depending on their mean firing rates.

12.
The spike count distribution observed when recording from a variety of neurons in many different conditions has a fairly stereotypical shape, with a single mode at zero or close to a low average count, and a long, quasi-exponential tail to high counts. Such a distribution has been suggested to be the direct result of three simple facts: the firing frequency of a typical cortical neuron is close to linear in the summed input current entering the soma, above a threshold; the input current varies on several timescales, both faster and slower than the window used to count spikes; and the input distribution at any timescale can be taken to be approximately normal. The third assumption is violated by associative learning, which generates correlations between the synaptic weight vector on the dendritic tree of a neuron, and the input activity vectors it is repeatedly subject to. We show analytically that for a simple feed-forward model, the normal distribution of the slow components of the input current becomes the sum of two quasi-normal terms. The term important below threshold shifts with learning, while the term important above threshold does not shift but grows in width. These deviations from the standard distribution may be observable in appropriate recording experiments.  相似文献   

13.
Whether cortical neurons act as coincidence detectors or temporal integrators has implications for the way in which the cortex encodes information--by average firing rate or by precise timing of action potentials. In this study, we examine temporal coding by a simple passive-membrane model neuron responding to a full spectrum of multisynaptic input patterns, from highly coincident to temporally dispersed. The temporal precision of the model's action potentials varies continuously along the spectrum, depends very little on the number of synaptic inputs, and is shown to be tightly correlated with the mean slope of the membrane potential preceding the output spikes. These results are shown to be largely independent of the size of postsynaptic potentials, of random background synaptic activity, and of shape of the correlated multisynaptic input pattern. An experimental test involving membrane potential slope is suggested to help determine the basic operating mode of an observed cortical neuron.  相似文献   

14.
Florian RV. Neural Computation, 2007, 19(6): 1468-1502.
The persistent modification of synaptic efficacy as a function of the relative timing of pre- and postsynaptic spikes is a phenomenon known as spike-timing-dependent plasticity (STDP). Here we show that the modulation of STDP by a global reward signal leads to reinforcement learning. We first analytically derive learning rules involving reward-modulated spike-timing-dependent synaptic and intrinsic plasticity, by applying a reinforcement learning algorithm to the stochastic spike response model of spiking neurons. These rules have several features in common with plasticity mechanisms experimentally found in the brain. We then demonstrate, in simulations of networks of integrate-and-fire neurons, the efficacy of two simple learning rules involving modulated STDP. One rule is a direct extension of the standard STDP model (modulated STDP), and the other involves an eligibility trace stored at each synapse that keeps a decaying memory of recent pre- and postsynaptic spike pairings (modulated STDP with eligibility trace). The latter rule permits learning even if the reward signal is delayed. The proposed rules are able to solve the XOR problem with both rate-coded and temporally coded input and to learn a target output firing-rate pattern. These learning rules are biologically plausible, may be used for training generic artificial spiking neural networks regardless of the neural model used, and suggest experimental investigation in animals of the existence of reward-modulated STDP.
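A minimal single-synapse sketch of modulated STDP with an eligibility trace: pairings are accumulated in a slowly decaying, synapse-local trace, and the weight changes only when the (possibly delayed) global reward arrives. The postsynaptic spikes below are random stand-ins rather than the output of the paper's neuron model, the reward is an arbitrary periodic signal, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, steps = 1.0, 2000                    # ms per step, total steps
tau_pre = tau_post = 20.0                # STDP trace time constants (ms)
tau_e = 500.0                            # eligibility-trace time constant (ms)
a_plus, a_minus, gamma = 0.01, 0.012, 0.05

w, elig, x_pre, x_post = 0.5, 0.0, 0.0, 0.0
for step in range(steps):
    pre = rng.random() < 0.02            # ~20 Hz Poisson presynaptic spikes
    post = rng.random() < 0.02           # ~20 Hz Poisson postsynaptic spikes (stand-in for the neuron model)
    x_pre += -dt * x_pre / tau_pre + pre
    x_post += -dt * x_post / tau_post + post
    pairing = a_plus * x_pre * post - a_minus * x_post * pre   # standard STDP pairing term
    elig += -dt * elig / tau_e + pairing                        # synapse-local, decaying memory of pairings
    reward = 1.0 if step % 500 == 499 else 0.0                  # delayed, global reward signal
    w += gamma * reward * elig                                  # weight changes only when reward arrives
print(f"final weight: {w:.3f}")
```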

15.
The temporal precision with which neurons respond to synaptic inputs has a direct bearing on the nature of the neural code. A characterization of the neuronal noise sources associated with different sub-cellular components (synapse, dendrite, soma, axon, and so on) is needed to understand the relationship between noise and information transfer. Here we study the effect of the unreliable, probabilistic nature of synaptic transmission on information transfer in the absence of interaction among presynaptic inputs. We derive theoretical lower bounds on the capacity of a simple model of a cortical synapse under two different paradigms. In signal estimation, the signal is assumed to be encoded in the mean firing rate of the presynaptic neuron, and the objective is to estimate the continuous input signal from the postsynaptic voltage. In signal detection, the input is binary, and the presence or absence of a presynaptic action potential is to be detected from the postsynaptic voltage. The efficacy of information transfer in synaptic transmission is characterized by deriving optimal strategies under these two paradigms. On the basis of parameter values derived from neocortex, we find that single cortical synapses cannot transmit information reliably, but redundancy obtained using a small number of multiple synapses leads to a significant improvement in the information capacity of synaptic transmission.  相似文献   

16.
Spike-timing-dependent plasticity (STDP) is described by long-term potentiation (LTP), when a presynaptic event precedes a postsynaptic event, and by long-term depression (LTD), when the temporal order is reversed. In this article, we present a biophysical model of STDP based on a differential Hebbian learning rule (ISO learning). This rule correlates presynaptically the NMDA channel conductance with the derivative of the membrane potential at the synapse as the postsynaptic signal. The model is able to reproduce the generic STDP weight change characteristic. We find that (1) The actual shape of the weight change curve strongly depends on the NMDA channel characteristics and on the shape of the membrane potential at the synapse. (2) The typical antisymmetrical STDP curve (LTD and LTP) can become similar to a standard Hebbian characteristic (LTP only) without having to change the learning rule. This occurs if the membrane depolarization has a shallow onset and is long lasting. (3) It is known that the membrane potential varies along the dendrite as a result of the active or passive backpropagation of somatic spikes or because of local dendritic processes. As a consequence, our model predicts that learning properties will be different at different locations on the dendritic tree. In conclusion, such site-specific synaptic plasticity would provide a neuron with powerful learning capabilities.  相似文献   
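The core correlation can be sketched with simple waveforms: an NMDA-like conductance with a fast rise and slow decay, and a brief local depolarization standing in for the back-propagating spike. Correlating the conductance with the derivative of the depolarization then yields potentiation for pre-before-post and depression for the reversed order. The kernel shapes and constants are illustrative, and, as the abstract notes, the resulting curve depends strongly on them.

```python
import numpy as np

dt = 0.1
t = np.arange(0.0, 300.0, dt)                      # ms

def nmda_like(t, t_pre, tau=50.0):
    """Presynaptically gated conductance: instantaneous rise, slow decay (illustrative)."""
    s = t - t_pre
    return np.where(s >= 0, np.exp(-s / tau), 0.0)

def depolarization(t, t_post, tau=2.0):
    """Brief local postsynaptic depolarization (e.g., a back-propagating spike), alpha-shaped."""
    s = np.clip(t - t_post, 0.0, None)
    return (s / tau) * np.exp(1.0 - s / tau)

eta = 1e-2
for delta in (+10.0, -10.0):                       # post after pre (LTP) vs post before pre (LTD)
    g = nmda_like(t, 100.0)
    v = depolarization(t, 100.0 + delta)
    dw = eta * np.sum(g * np.gradient(v, dt)) * dt # correlate conductance with dV/dt at the synapse
    print(f"t_post - t_pre = {delta:+.0f} ms -> dw = {dw:+.5f}")
```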

17.
This paper presents the findings of a study evaluating how the signal release probability of a Hebbian presynaptic neuron varies with firing frequency in a dynamic stochastic neural network. A modeled neuron consisted of thousands of artificial units, called ‘transmitters’ or ‘receptors’, which formed dynamic stochastic synaptic connections between neurons. These artificial units were two-state stochastic computational units that updated their states according to signal arrival time and their local excitation. An experiment was conducted in three stages, with the firing frequency of the Hebbian neuron updated at each stage. According to our results, synaptic redistribution improved signal transmission for the first few signals in the signal train by continuously increasing and decreasing the number of postsynaptic ‘active-receptors’ and presynaptic ‘active-transmitters’ within a short time period. In the long run, at low firing frequency, it increased the steady-state efficacy of the synaptic connection between the Hebbian presynaptic and the postsynaptic neuron, measured as the signal release probability of ‘active-transmitters’ in the presynaptic neuron, as observed in biology. The network identified this ‘low’ firing frequency of the presynaptic neuron by comparing it with the network’s ongoing frequency oscillation.

18.
Experimental evidence indicates that synaptic modification depends on the timing relationship between the presynaptic inputs and the output spikes that they generate. In this letter, results are presented for models of spike-timing-dependent plasticity (STDP) whose weight dynamics is determined by a stable fixed point. Four classes of STDP are identified on the basis of the time extent of their input-output interactions. The effect on the potentiation of synapses with different rates of input is investigated to elucidate the relationship of STDP with classical studies of long-term potentiation and depression and rate-based Hebbian learning. The selective potentiation of higher-rate synaptic inputs is found only for models where the time extent of the input-output interactions is input restricted (i.e., restricted to time domains delimited by adjacent synaptic inputs) and that have a time-asymmetric learning window with a longer time constant for depression than for potentiation. The analysis provides an account of learning dynamics determined by an input-selective stable fixed point. The effect of suppressive interspike interactions on STDP is also analyzed and shown to modify the synaptic dynamics.  相似文献   

19.
We study how the location of synaptic input influences the stable firing states of coupled model neurons bursting rhythmically at gamma frequencies (20-70 Hz). The model neuron consists of two compartments and generates one, two, three or four spikes in each burst, depending on the intensity of the input current and the maximum conductance of the M-type potassium current. If the somata are connected by reciprocal excitatory synapses, we find strong correlations between changes in the bursting mode and changes in the stable phase-locked states of the coupled neurons. The stability of the in-phase phase-locked state (the synchronously firing state) tends to change when the individual neurons change their bursting patterns. If, however, the synaptic connections terminate on the dendritic compartments, no such correlated changes occur. In this case, the coupled bursting neurons do not show the in-phase phase-locked state in any bursting mode. These results indicate that the synchronization behaviour of bursting neurons depends significantly on the synaptic location, unlike in a coupled system of regular spiking neurons.

20.
We postulate that a simple, three-state synaptic switch governs changes in synaptic strength at individual synapses. Under this switch rule, we show that a variety of experimental results on timing-dependent plasticity can emerge from temporal and spatial averaging over multiple synapses and multiple spike pairings. In particular, we show that a critical window for the interaction of pre- and postsynaptic spikes emerges as an ensemble property of the collective system, with individual synapses exhibiting only a minimal form of spike coincidence detection. In addition, we show that a Bienenstock-Cooper-Munro-like, rate-based plasticity rule emerges directly from such a model. This demonstrates that two apparently separate forms of neuronal plasticity can emerge from a much simpler rule governing the plasticity of individual synapses.  相似文献   
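One way to picture such a switch (a hypothetical rule consistent with the description above, not necessarily the authors' exact scheme): the first spike of a pair primes the switch, the primed state relaxes stochastically, and the synapse steps up or down only if the second spike arrives while the switch is still primed. Averaging over many synapses and pairings then produces an exponential-looking timing window even though each synapse performs only a minimal form of coincidence detection. The relaxation probability below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(6)
p_relax = 0.05         # per-ms probability that a primed switch relaxes back to its rest state (illustrative)

def pairing(dt_ms):
    """One pre/post pairing through a toy three-state switch (rest / pre-primed / post-primed)."""
    for _ in range(int(abs(dt_ms))):     # wait, in 1 ms steps, for the second spike of the pair
        if rng.random() < p_relax:
            return 0                     # switch relaxed before the second spike: no change
    return 1 if dt_ms >= 0 else -1       # pre-then-post steps strength up, post-then-pre steps it down

for dt in (5, 20, 60, -20):
    mean_change = np.mean([pairing(dt) for _ in range(5000)])
    print(f"t_post - t_pre = {dt:+4d} ms -> mean change per pairing {mean_change:+.3f}")
```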
