Similar Documents (20 matching records found)
1.
We study analytically a model of long-term synaptic plasticity where synaptic changes are triggered by presynaptic spikes, postsynaptic spikes, and the time differences between presynaptic and postsynaptic spikes. The changes due to correlated input and output spikes are quantified by means of a learning window. We show that plasticity can lead to an intrinsic stabilization of the mean firing rate of the postsynaptic neuron. Subtractive normalization of the synaptic weights (summed over all presynaptic inputs converging on a postsynaptic neuron) follows if, in addition, the mean input rates and the mean input correlations are identical at all synapses. If the integral over the learning window is positive, firing-rate stabilization requires a non-Hebbian component, whereas such a component is not needed if the integral of the learning window is negative. A negative integral corresponds to anti-Hebbian learning in a model with slowly varying firing rates. For spike-based learning, a strict distinction between Hebbian and anti-Hebbian rules is questionable since learning is driven by correlations on the timescale of the learning window. The correlations between presynaptic and postsynaptic firing are evaluated for a piecewise-linear Poisson model and for a noisy spiking neuron model with refractoriness. While a negative integral over the learning window leads to intrinsic rate stabilization, the positive part of the learning window picks up spatial and temporal correlations in the input.
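For orientation, a generic way to write such a spike-based rule (a sketch of the standard formulation, not necessarily the exact one used in this work) combines rate terms with a learning-window term:

```latex
\frac{d w_{ij}}{dt} \;=\; a_0 \;+\; a_1^{\mathrm{pre}}\,\nu_j(t) \;+\; a_1^{\mathrm{post}}\,\nu_i(t)
\;+\; \int_{-\infty}^{\infty} W(s)\,\Gamma_{ij}(s)\,\mathrm{d}s
```

Here \nu_j and \nu_i are the presynaptic and postsynaptic rates, \Gamma_{ij}(s) is the correlation between the two spike trains at lag s, W(s) is the learning window, and a_0, a_1^{pre}, a_1^{post} are non-Hebbian terms (symbols chosen here for illustration). In this picture, the sign of the window integral \int W(s)\,ds decides whether an additional non-Hebbian component is required for firing-rate stabilization, as stated above.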

2.
We present a spiking neuron model that allows for an analytic calculation of the correlations between pre- and postsynaptic spikes. The neuron model is a generalization of the integrate-and-fire model and equipped with a probabilistic spike-triggering mechanism. We show that under certain biologically plausible conditions, pre- and postsynaptic spike trains can be described simultaneously as an inhomogeneous Poisson process. Inspired by experimental findings, we develop a model for synaptic long-term plasticity that relies on the relative timing of pre- and postsynaptic action potentials. Given the input statistics, we compute the stationary synaptic weights that result from the temporal correlations between the pre- and postsynaptic spikes. By means of both analytic calculations and computer simulations, we show that such a mechanism of synaptic plasticity is able to strengthen those input synapses that convey precisely timed spikes at the expense of synapses that deliver spikes with a broad temporal distribution. This may be of vital importance for any kind of information processing based on spiking neurons and temporal coding.
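As a rough illustration of a probabilistic spike-triggering mechanism, the sketch below draws output spikes from an escape rate applied to a leaky membrane potential, so that the output spike train can be treated as an inhomogeneous Poisson process; all parameter names and values are assumptions made for illustration, not the paper's.

```python
import numpy as np

def escape_rate_neuron(input_current, dt=1e-3, tau_m=20e-3,
                       rho0=50.0, u_theta=1.0, delta_u=0.2, seed=0):
    """Leaky integrator with probabilistic (escape-rate) spike triggering.

    Minimal sketch: spikes are emitted stochastically with instantaneous
    rate rho(u) = rho0 * exp((u - u_theta) / delta_u), so the output can
    be described as an inhomogeneous Poisson process.
    """
    rng = np.random.default_rng(seed)
    u, spikes = 0.0, []
    for k, current in enumerate(input_current):
        u += dt * (-u + current) / tau_m              # leaky integration
        rho = rho0 * np.exp((u - u_theta) / delta_u)  # instantaneous escape rate
        if rng.random() < 1.0 - np.exp(-rho * dt):    # spike in this time bin?
            spikes.append(k * dt)
            u = 0.0                                   # reset after a spike
    return np.array(spikes)
```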

3.
In timing-based neural codes, neurons have to emit action potentials at precise moments in time. We use a supervised learning paradigm to derive a synaptic update rule that optimizes by gradient ascent the likelihood of postsynaptic firing at one or several desired firing times. We find that the optimal strategy of up- and downregulating synaptic efficacies depends on the relative timing between presynaptic spike arrival and desired postsynaptic firing. If the presynaptic spike arrives before the desired postsynaptic spike timing, our optimal learning rule predicts that the synapse should become potentiated. The dependence of the potentiation on spike timing directly reflects the time course of an excitatory postsynaptic potential. However, our approach gives no unique reason for synaptic depression under reversed spike timing. In fact, the presence and amplitude of depression of synaptic efficacies for reversed spike timing depend on how constraints are implemented in the optimization problem. Two different constraints, control of postsynaptic rates and control of temporal locality, are studied. The relation of our results to spike-timing-dependent plasticity and reinforcement learning is discussed.
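A natural way to make the objective concrete is the standard point-process likelihood, written here generically rather than in the paper's exact notation: if the neuron fires with instantaneous rate \rho(t \mid \mathbf{w}), the log-likelihood of producing spikes exactly at the desired times t^{f} within [0, T] is

```latex
\log L(\mathbf{w}) \;=\; \sum_{f} \log \rho\!\left(t^{f} \mid \mathbf{w}\right) \;-\; \int_{0}^{T} \rho\!\left(t \mid \mathbf{w}\right) \mathrm{d}t ,
\qquad
\Delta w_j \;\propto\; \frac{\partial \log L}{\partial w_j} .
```

Gradient ascent on this quantity increases the firing intensity at the desired times (which is why presynaptic spikes arriving shortly before a desired spike get potentiated, mirroring the EPSP time course) and suppresses the intensity elsewhere.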

4.
A simulation procedure is described for making feasible large-scale simulations of recurrent neural networks of spiking neurons and plastic synapses. The procedure is applicable if the dynamic variables of both neurons and synapses evolve deterministically between any two successive spikes. Spikes introduce jumps in these variables, and since spike trains are typically noisy, spikes introduce stochasticity into both dynamics. Since all events in the simulation are guided by the arrival of spikes, at neurons or synapses, we name this procedure event-driven. The procedure is described in detail, and its logic and performance are compared with conventional (synchronous) simulations. The main impact of the new approach is a drastic reduction of the computational load incurred upon introduction of dynamic synaptic efficacies, which vary organically as a function of the activities of the pre- and postsynaptic neurons. In fact, the computational load per neuron in the presence of the synaptic dynamics grows linearly with the number of neurons and is only about 6% more than the load with fixed synapses. Even the latter is handled quite efficiently by the algorithm. We illustrate the operation of the algorithm in a specific case with integrate-and-fire neurons and specific spike-driven synaptic dynamics. Both dynamical elements have been found to be naturally implementable in VLSI. This network is simulated to show the effects on the synaptic structure of the presentation of stimuli, as well as the stability of the generated matrix to the neural activity it induces.
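A minimal sketch of the event-driven idea for a single integrate-and-fire neuron is shown below; the full procedure in the paper also updates synaptic variables at spike events, and the names and constants here are illustrative only.

```python
import numpy as np

def event_driven_lif(input_events, tau_m=20e-3, v_th=1.0, v_reset=0.0):
    """Advance a LIF neuron only at incoming spike events.

    Between events the membrane potential decays deterministically,
    v(t) = v(t0) * exp(-(t - t0) / tau_m), so no fixed time stepping
    is needed. input_events is an iterable of (time, weight) pairs.
    """
    v, t_last, out_spikes = 0.0, 0.0, []
    for t, w in sorted(input_events):        # process events in time order
        v *= np.exp(-(t - t_last) / tau_m)   # analytic decay since last event
        v += w                               # jump caused by the incoming spike
        if v >= v_th:                        # threshold crossing -> output spike
            out_spikes.append(t)
            v = v_reset
        t_last = t
    return out_spikes
```

In a full network simulation, output spikes would themselves be pushed onto a global event queue (e.g. a heap) so that both neuron and synapse state are touched only when a spike actually arrives at them.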

5.
We demonstrate that the BCM learning rule follows directly from STDP when pre- and postsynaptic neurons fire uncorrelated or weakly correlated Poisson spike trains, and only nearest-neighbor spike interactions are taken into account.
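To see the rate dependence numerically, one can estimate the average drift of a single weight under a nearest-neighbor pairing scheme with independent Poisson pre- and postsynaptic trains; the sketch below uses one common nearest-neighbor convention and placeholder STDP constants, not necessarily those of the paper.

```python
import numpy as np

def nn_stdp_drift(rate_pre, rate_post, T=200.0,
                  A_plus=0.005, A_minus=0.00525,
                  tau_plus=0.020, tau_minus=0.020, seed=0):
    """Average weight drift (per second) under nearest-neighbor STDP for
    independent Poisson pre/post spike trains. Each postsynaptic spike is
    paired only with the nearest preceding presynaptic spike (potentiation),
    and each presynaptic spike only with the nearest preceding postsynaptic
    spike (depression). Illustrative sketch with placeholder parameters."""
    rng = np.random.default_rng(seed)
    pre = np.cumsum(rng.exponential(1.0 / rate_pre, int(rate_pre * T * 2) + 10))
    post = np.cumsum(rng.exponential(1.0 / rate_post, int(rate_post * T * 2) + 10))
    pre, post = pre[pre < T], post[post < T]
    dw = 0.0
    for t in post:                        # LTP: post paired with last pre
        prev = pre[pre < t]
        if prev.size:
            dw += A_plus * np.exp(-(t - prev[-1]) / tau_plus)
    for t in pre:                         # LTD: pre paired with last post
        prev = post[post < t]
        if prev.size:
            dw -= A_minus * np.exp(-(t - prev[-1]) / tau_minus)
    return dw / T
```

Sweeping rate_post at a fixed rate_pre then traces the drift as a function of postsynaptic rate, which, according to the result above, takes the BCM form (depression at low and potentiation at high postsynaptic rates).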

6.
Experimental data have shown that synapses are heterogeneous: different synapses respond with different sequences of amplitudes of postsynaptic responses to the same spike train. Neither the role of synaptic dynamics itself nor the role of the heterogeneity of synaptic dynamics for computations in neural circuits is well understood. We present in this article two computational methods that make it feasible to compute, for a given synapse with known synaptic parameters, the spike train that is optimally fitted to the synapse in a certain sense. With the help of these methods, one can compute, for example, the temporal pattern of a spike train (with a given number of spikes) that produces the largest sum of postsynaptic responses for a specific synapse. Several other applications are also discussed. To our surprise, we find that most of these optimally fitted spike trains match common firing patterns of specific types of neurons that are discussed in the literature. Hence, our analysis provides a possible functional explanation for the experimentally observed regularity in the combination of specific types of synapses with specific types of neurons in neural circuits.

7.
In most neural network models, synapses are treated as static weights that change only with the slow time scales of learning. It is well known, however, that synapses are highly dynamic and show use-dependent plasticity over a wide range of time scales. Moreover, synaptic transmission is an inherently stochastic process: a spike arriving at a presynaptic terminal triggers the release of a vesicle of neurotransmitter from a release site with a probability that can be much less than one. We consider a simple model for dynamic stochastic synapses that can easily be integrated into common models for networks of integrate-and-fire neurons (spiking neurons). The parameters of this model have direct interpretations in terms of synaptic physiology. We investigate the consequences of the model for computing with individual spikes and demonstrate through rigorous theoretical results that the computational power of the network is increased through the use of dynamic synapses.
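The flavor of such a model can be captured in a few lines: a single release site whose vesicle, once used, is replenished with some time constant, and which releases with a fixed probability when a vesicle is available. This is only a sketch in the spirit of the model class described here; parameter names and values are assumptions.

```python
import numpy as np

def single_site_release(spike_times, p_release=0.4, tau_rec=0.8, seed=0):
    """Use-dependent stochastic release at a single release site.

    Minimal sketch (assumed parameterization, not the paper's exact model):
    the site holds at most one vesicle; after a release it is refilled with
    time constant tau_rec, and a presynaptic spike releases an available
    vesicle with probability p_release.
    """
    rng = np.random.default_rng(seed)
    available, t_empty = True, None      # vesicle state and time it was used up
    released = []
    for t in spike_times:
        if not available:
            # chance that the vesicle was replenished since the last check
            if rng.random() < 1.0 - np.exp(-(t - t_empty) / tau_rec):
                available = True
            else:
                t_empty = t              # memoryless refill: reset the clock
        if available and rng.random() < p_release:
            released.append(t)           # successful release depletes the site
            available, t_empty = False, t
    return released
```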

8.
Florian RV (2007). Neural Computation, 19(6): 1468-1502.
The persistent modification of synaptic efficacy as a function of the relative timing of pre- and postsynaptic spikes is a phenomenon known as spike-timing-dependent plasticity (STDP). Here we show that the modulation of STDP by a global reward signal leads to reinforcement learning. We first derive analytically learning rules involving reward-modulated spike-timing-dependent synaptic and intrinsic plasticity, by applying a reinforcement learning algorithm to the stochastic spike response model of spiking neurons. These rules have several features common to plasticity mechanisms experimentally found in the brain. We then demonstrate in simulations of networks of integrate-and-fire neurons the efficacy of two simple learning rules involving modulated STDP. One rule is a direct extension of the standard STDP model (modulated STDP), and the other one involves an eligibility trace stored at each synapse that keeps a decaying memory of the relationships between recent pre- and postsynaptic spike pairs (modulated STDP with eligibility trace). This latter rule permits learning even if the reward signal is delayed. The proposed rules are able to solve the XOR problem with both rate-coded and temporally coded input and to learn a target output firing-rate pattern. These learning rules are biologically plausible, may be used for training generic artificial spiking neural networks, regardless of the neural model used, and suggest the experimental investigation in animals of the existence of reward-modulated STDP.
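A compact sketch of the second kind of rule (STDP written into a decaying eligibility trace, which is converted into a weight change only when a reward signal arrives) is given below; variable names, trace dynamics, and constants are illustrative assumptions, not the paper's exact equations.

```python
def modulated_stdp_step(w, elig, x_pre, x_post, dt,
                        pre_spike, post_spike, reward,
                        A_plus=0.01, A_minus=0.0105,
                        tau_plus=0.02, tau_minus=0.02,
                        tau_elig=0.5, lr=0.1):
    """One time step of reward-modulated STDP with an eligibility trace.

    x_pre / x_post are low-pass-filtered traces of the pre/post spike trains
    (0/1 per time step); elig is the synapse's eligibility trace.
    """
    # decay the spike traces and the eligibility trace
    x_pre += dt * (-x_pre / tau_plus) + pre_spike
    x_post += dt * (-x_post / tau_minus) + post_spike
    elig += dt * (-elig / tau_elig)
    # the standard STDP pairing contribution is accumulated in the trace
    elig += A_plus * x_pre * post_spike - A_minus * x_post * pre_spike
    # the weight changes only in proportion to the (possibly delayed) reward
    w += lr * reward * elig * dt
    return w, elig, x_pre, x_post
```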

9.
This paper presents findings from research evaluating the variability of signal release probability at a Hebbian presynaptic neuron under different firing frequencies in a dynamic stochastic neural network. A modeled neuron consisted of thousands of artificial units, called 'transmitters' or 'receptors', which formed dynamic stochastic synaptic connections between neurons. These artificial units were two-state stochastic computational units that updated their states according to signal arrival time and their local excitation. An experiment was conducted in three stages, with the firing frequency of the Hebbian neuron updated at each stage. According to our results, synaptic redistribution improved signal transmission for the first few signals in a signal train by continuously increasing and decreasing the number of postsynaptic 'active receptors' and presynaptic 'active transmitters' within a short time period. In the long run, at low firing frequencies, it increased the steady-state efficacy of the synaptic connection between the Hebbian presynaptic and postsynaptic neurons, measured as the signal release probability of 'active transmitters' in the presynaptic neuron, as observed in biology. This 'low' firing frequency of the presynaptic neuron was identified by the network by comparing it with the network's ongoing frequency oscillation.

10.
Lüdtke N, Nelson ME (2006). Neural Computation, 18(12): 2879-2916.
We study the encoding of weak signals in spike trains with interspike interval (ISI) correlations and the signals' subsequent detection in sensory neurons. Motivated by the observation of negative ISI correlations in auditory and electrosensory afferents, we assess the theoretical performance limits of an individual detector neuron receiving a weak signal distributed across multiple afferent inputs. We assess the functional role of ISI correlations in the detection process using statistical detection theory and derive two sequential likelihood ratio detector models: one for afferents with renewal statistics; the other for afferents with negatively correlated ISIs. We suggest a mechanism that might enable sensory neurons to implicitly compute conditional probabilities of presynaptic spikes by means of short-term synaptic plasticity. We demonstrate how this mechanism can enhance a postsynaptic neuron's sensitivity to weak signals by exploiting the correlation structure of the input spike trains. Our model not only captures fundamental aspects of early electrosensory signal processing in weakly electric fish, but may also bear relevance to the mammalian auditory system and other sensory modalities.
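For renewal (independent ISI) inputs, a sequential likelihood ratio detector takes the standard Wald form, written generically here rather than in the paper's notation: after observing interspike intervals x_1, ..., x_n the detector accumulates

```latex
\Lambda_n \;=\; \sum_{k=1}^{n} \log \frac{p_1(x_k)}{p_0(x_k)} ,
```

declaring "signal present" when \Lambda_n \ge \log\frac{1-\beta}{\alpha} and "signal absent" when \Lambda_n \le \log\frac{\beta}{1-\alpha}, for target error rates \alpha, \beta (the usual SPRT thresholds). The correlated-ISI variant described above additionally conditions p_1 and p_0 on preceding intervals, which is where the negative ISI correlations enter.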

11.
Pairwise correlations among spike trains recorded in vivo have been frequently reported. It has been argued that correlated activity could play an important role in the brain, because it efficiently modulates the response of a postsynaptic neuron. We show here that a neuron's output firing rate critically depends on the higher-order statistics of the input ensemble. We constructed two statistical models of populations of spiking neurons that fired with the same rates and had identical pairwise correlations, but differed with regard to the higher-order interactions within the population. The first ensemble was characterized by clusters of spikes synchronized over the whole population. In the second ensemble, the size of spike clusters was, on average, proportional to the pairwise correlation. For both input models, we assessed the role of the size of the population, the firing rate, and the pairwise correlation on the output rate of two simple model neurons: a continuous firing-rate model and a conductance-based leaky integrate-and-fire neuron. An approximation to the mean output rate of the firing-rate neuron could be derived analytically with the help of shot noise theory. Interestingly, the essential features of the mean response of the two neuron models were similar. For both neuron models, the three input parameters played radically different roles with respect to the postsynaptic firing rate, depending on the interaction structure of the input. For instance, in the case of an ensemble with small and distributed spike clusters, the output firing rate was efficiently controlled by the size of the input population. In addition to the interaction structure, the ratio of inhibition to excitation was found to strongly modulate the effect of correlation on the postsynaptic firing rate.

12.
Stiber M (2005). Neural Computation, 17(7): 1577-1601.
The effects of spike timing precision and dynamical behavior on error correction in spiking neurons were investigated. Stationary discharges (phase locked, quasiperiodic, or chaotic) were induced in a simulated neuron by presenting pacemaker presynaptic spike trains across a model of a prototypical inhibitory synapse. Reduced timing precision was modeled by jittering presynaptic spike times. Aftereffects of errors (in this communication, missed presynaptic spikes) were determined by comparing postsynaptic spike times between simulations identical except for the presence or absence of errors. Results show that the effects of an error vary greatly depending on the ongoing dynamical behavior. In the case of phase lockings, a high degree of presynaptic spike timing precision can provide significantly faster error recovery. For nonlocked behaviors, isolated missed spikes can have little or no discernible aftereffects (or even serve to paradoxically reduce uncertainty in postsynaptic spike timing), regardless of presynaptic imprecision. This suggests two possible categories of error correction: high-precision locking with rapid recovery and low-precision nonlocked with error immunity.

13.
Unitary event analysis is a new method for detecting episodes of synchronized neural activity (Riehle, Grün, Diesmann, & Aertsen, 1997). It detects time intervals that contain coincident firing at higher rates than would be expected if the neurons fired as independent inhomogeneous Poisson processes; all coincidences in such intervals are called unitary events (UEs). Changes in the frequency of UEs that are correlated with behavioral states may indicate synchronization of neural firing that mediates or represents the behavioral state. We show that UE analysis is subject to severe limitations due to the underlying discrete statistics of the number of coincident events. These limitations are particularly stringent for low (0-10 spikes/s) firing rates. Under these conditions, the frequency of UEs is a random variable with a large variation relative to its mean. The relative variation decreases with increasing firing rate, and we compute the lowest firing rate at which the 95% confidence interval around the mean frequency of UEs excludes zero. This random variation in UE frequency makes interpretation of changes in UEs problematic for neurons with low firing rates. As a typical example, when analyzing 150 trials of an experiment using an averaging window 100 ms wide and a 5 ms coincidence window, firing rates should be greater than 7 spikes per second.
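The core of the statistical issue can be seen from the basic significance calculation for two independent Poisson neurons; the sketch below is a generic version of the unitary-event test, using a Poisson approximation to the coincidence-count distribution, with illustrative argument names.

```python
import math

def ue_significance(n_coinc, rate1, rate2, t_win, bin_w, n_trials):
    """Expected coincidence count and tail probability under independence.

    With independent Poisson neurons, the per-bin joint firing probability is
    approximately (rate1*bin_w)*(rate2*bin_w); the total coincidence count over
    all bins and trials is then approximately Poisson with mean lam.
    """
    n_bins = int(round(t_win / bin_w)) * n_trials
    p_joint = (rate1 * bin_w) * (rate2 * bin_w)
    lam = n_bins * p_joint                         # expected coincidence count
    # Poisson tail probability P(N >= n_coinc) under the independence hypothesis
    p_tail = 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i)
                       for i in range(n_coinc))
    return lam, p_tail
```

For example, at 5 spikes/s per neuron, a 5 ms coincidence window, a 100 ms analysis window, and 150 trials, the expected count is only about 1.9 coincidences, so the observed count fluctuates strongly relative to its mean, which is the limitation discussed above.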

14.
Mikula S, Niebur E (2003). Neural Computation, 15(10): 2339-2358.
In this letter, we extend our previous analytical results (Mikula & Niebur, 2003) for the coincidence detector by taking into account probabilistic frequency-dependent synaptic depression. We present a solution for the steady-state output rate of an ideal coincidence detector receiving an arbitrary number of input spike trains with identical binomial count distributions (which includes Poisson statistics as a special case) and identical arbitrary pairwise cross-correlations, from zero correlation (independent processes) to perfect correlation (identical processes). Synapses vary their efficacy probabilistically according to the observed depression mechanisms. Our results show that synaptic depression, if made sufficiently strong, will result in an inverted U-shaped curve for the output rate of a coincidence detector as a function of input rate. This leads to the counterintuitive prediction that higher presynaptic (input) rates may lead to lower postsynaptic (output) rates where the output rate may fall faster than the inverse of the input rate.

15.
Spike correlations between neurons are ubiquitous in the cortex, but their role is not understood. Here we describe the firing response of a leaky integrate-and-fire neuron (LIF) when it receives a temporally correlated input generated by presynaptic correlated neuronal populations. Input correlations are characterized in terms of the firing rates, Fano factors, correlation coefficients, and correlation timescale of the neurons driving the target neuron. We show that the sum of the presynaptic spike trains cannot be well described by a Poisson process. In fact, the total input current has a nontrivial two-point correlation function described by two main parameters: the correlation timescale (how precise the input correlations are in time) and the correlation magnitude (how strong they are). Therefore, the total current generated by the input spike trains is not well described by a white noise gaussian process. Instead, we model the total current as a colored gaussian process with the same mean and two-point correlation function, leading to the formulation of the problem in terms of a Fokker-Planck equation. Solutions of the output firing rate are found in the limit of short and long correlation timescales. The solutions described here expand and improve on our previous results (Moreno, de la Rocha, Renart, & Parga, 2002) by presenting new analytical expressions for the output firing rate for general IF neurons, extending the validity of the results for arbitrarily large correlation magnitude, and by describing the differential effect of correlations on the mean-driven or noise-dominated firing regimes. Also the details of this novel formalism are given here for the first time. We employ numerical simulations to confirm the analytical solutions and study the firing response to sudden changes in the input correlations. We expect this formalism to be useful for the study of correlations in neuronal networks and their role in neural processing and information transmission.
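A minimal sketch of the modeling step described here: replace the white-noise input of the standard diffusion approximation with an Ornstein-Uhlenbeck (colored gaussian) current whose timescale and magnitude stand in for the input correlation timescale and magnitude. All parameter values below are placeholders.

```python
import numpy as np

def lif_with_ou_input(T=10.0, dt=1e-4, tau_m=0.02, v_th=1.0, v_reset=0.0,
                      mu=0.8, sigma=0.5, tau_c=0.005, seed=0):
    """LIF neuron driven by an Ornstein-Uhlenbeck (colored gaussian) current.

    The OU process has correlation timescale tau_c and stationary standard
    deviation sigma, standing in for the correlation timescale and magnitude
    of the summed presynaptic input.
    """
    rng = np.random.default_rng(seed)
    v, current, spikes = 0.0, 0.0, []
    for k in range(int(T / dt)):
        # OU update: correlation time tau_c, stationary std sigma
        current += (-current / tau_c) * dt \
                   + sigma * np.sqrt(2.0 * dt / tau_c) * rng.standard_normal()
        v += (-v + mu + current) * dt / tau_m   # leaky integration of mean + noise
        if v >= v_th:
            spikes.append(k * dt)
            v = v_reset
    return np.array(spikes)
```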

16.
We present a solution for the steady-state output rate of an ideal coincidence detector receiving an arbitrary number of excitatory and inhibitory input spike trains. All excitatory spike trains have identical binomial count distributions (which includes Poisson statistics as a special case) and arbitrary pairwise cross correlations between them. The same applies to the inhibitory inputs, and the rates and correlation functions of excitatory and inhibitory populations may be the same or different from each other. Thus, for each population independently, the correlation may range from complete independence to perfect correlation (identical processes). We find that inhibition, if made sufficiently strong, will result in an inverted U-shaped curve for the output rate of a coincidence detector as a function of input rates for the case of identical inhibitory and excitatory input rates. This leads to the prediction that higher presynaptic (input) rates may lead to lower postsynaptic (output) rates where the output rate may fall faster than the inverse of the input rate, and shows some qualitative similarities to the case of purely excitatory inputs with synaptic depression. In general, we find that including inhibition invariably and significantly increases the behavioral repertoire of the coincidence detector over the case of pure excitatory input.

17.
Miller P (2006). Neural Computation, 18(6): 1268-1317.
Attractor networks are likely to underlie working memory and integrator circuits in the brain. It is unknown whether continuous quantities are stored in an analog manner or discretized and stored in a set of discrete attractors. In order to investigate the important issue of how to differentiate the two systems, here we compare the neuronal spiking activity that arises from a continuous (line) attractor with that from a series of discrete attractors. Stochastic fluctuations cause the position of the system along its continuous attractor to vary as a random walk, whereas in a discrete attractor, noise causes spontaneous transitions to occur between discrete states at random intervals. We calculate the statistics of spike trains of neurons firing as a Poisson process with rates that vary according to the underlying attractor network. Since individual neurons fire spikes probabilistically and since the state of the network as a whole drifts randomly, the spike trains of individual neurons follow a doubly stochastic (Poisson) point process. We compare the series of spike trains from the two systems using the autocorrelation function, Fano factor, and interspike interval (ISI) distribution. Although the variation in rate can be dramatically different, especially for short time intervals, surprisingly both the autocorrelation functions and Fano factors are identical, given appropriate scaling of the noise terms. Since the range of firing rates is limited in neurons, we also investigate systems for which the variation in rate is bounded by either rigid limits or because of leak to a single attractor state, such as the Ornstein-Uhlenbeck process. In these cases, the time dependence of the variance in rate can be different between discrete and continuous systems, so that in principle, these processes can be distinguished using second-order spike statistics.
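To make the "doubly stochastic" point concrete, the sketch below draws spikes from a rate path that itself fluctuates (for instance, a bounded random walk standing in for drift along a continuous attractor, or a telegraph-like jump process standing in for transitions between discrete attractors) and returns windowed counts from which a Fano factor can be estimated. Everything here is illustrative rather than the paper's analysis.

```python
import numpy as np

def doubly_stochastic_counts(rate_path, dt, window, seed=0):
    """Windowed spike counts from a doubly stochastic (Cox) process.

    Spikes are drawn as an inhomogeneous Poisson process whose rate
    (rate_path, one value per time step dt) fluctuates according to the
    underlying attractor dynamics. The Fano factor of the returned counts
    is var(N) / mean(N).
    """
    rng = np.random.default_rng(seed)
    spikes = rng.random(len(rate_path)) < rate_path * dt   # thin by the rate
    steps_per_win = int(window / dt)
    n_win = len(rate_path) // steps_per_win
    counts = spikes[:n_win * steps_per_win].reshape(n_win, steps_per_win).sum(axis=1)
    return counts

# Passing in a random-walk rate path versus a jump-process rate path lets one
# compare their count statistics, e.g. fano = counts.var() / counts.mean().
```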

18.
Masuda N, Aihara K (2002). Neural Computation, 14(7): 1599-1628.
Interspike intervals of spikes emitted from an integrator neuron model of sensory neurons can encode input information represented as a continuous signal from a deterministic system. If a real brain uses spike timing as a means of information processing, other neurons receiving spatiotemporal spikes from such sensory neurons must also be capable of treating information included in deterministic interspike intervals. In this article, we examine functions of neurons modeling cortical neurons receiving spatiotemporal spikes from many sensory neurons. We show that such neuron models can encode stimulus information passed from the sensory model neurons in the form of interspike intervals. Each sensory neuron connected to the cortical neuron contributes equally to the information collection by the cortical neuron. Although the incident spike train to the cortical neuron is a superimposition of spike trains from many sensory neurons, it need not be decomposed into spike trains according to the input neurons. These results also hold for generalizations of the sensory neurons that include a small amount of leak, noise, inhomogeneity in firing rates, or biases introduced in the phase distributions.

19.
Spiking neurons are very flexible computational modules, which can implement with different values of their adjustable synaptic parameters an enormous variety of different transformations F from input spike trains to output spike trains. We examine in this letter the question to what extent a spiking neuron with biologically realistic models for dynamic synapses can be taught via spike-timing-dependent plasticity (STDP) to implement a given transformation F. We consider a supervised learning paradigm where during training, the output of the neuron is clamped to the target signal (teacher forcing). The well-known perceptron convergence theorem asserts the convergence of a simple supervised learning algorithm for drastically simplified neuron models (McCulloch-Pitts neurons). We show that in contrast to the perceptron convergence theorem, no theoretical guarantee can be given for the convergence of STDP with teacher forcing that holds for arbitrary input spike patterns. On the other hand, we prove that average case versions of the perceptron convergence theorem hold for STDP in the case of uncorrelated and correlated Poisson input spike trains and simple models for spiking neurons. For a wide class of cross-correlation functions of the input spike trains, the resulting necessary and sufficient condition can be formulated in terms of linear separability, analogously to the well-known condition of learnability by perceptrons. However, the linear separability criterion has to be applied here to the columns of the correlation matrix of the Poisson input. We demonstrate through extensive computer simulations that the theoretically predicted convergence of STDP with teacher forcing also holds for more realistic models for neurons, dynamic synapses, and more general input distributions. In addition, we show through computer simulations that these positive learning results hold not only for the common interpretation of STDP, where STDP changes the weights of synapses, but also for a more realistic interpretation suggested by experimental data where STDP modulates the initial release probability of dynamic synapses.

20.
We present a new technique for calculating the interspike intervals of integrate-and-fire neurons. There are two new components to this technique. First, the probability density of the summed potential is calculated by integrating over the distribution of arrival times of the afferent postsynaptic potentials (PSPs), rather than using conventional stochastic differential equation techniques. A general formulation of this technique is given in terms of the probability distribution of the inputs and the time course of the postsynaptic response. The expressions are evaluated in the gaussian approximation, which gives results that become more accurate for large numbers of small-amplitude PSPs. Second, the probability density of output spikes, which are generated when the potential reaches threshold, is given in terms of an integral involving a conditional probability density. This expression is a generalization of the renewal equation, but it holds for both leaky neurons and situations in which there is no time-translational invariance. The conditional probability density of the potential is calculated using the same technique of integrating over the distribution of arrival times of the afferent PSPs. For inputs with a Poisson distribution, the known analytic solutions for both the perfect integrator model and the Stein model (which incorporates membrane potential leakage) in the diffusion limit are obtained. The interspike interval distribution may also be calculated numerically for models that incorporate both membrane potential leakage and a finite rise time of the postsynaptic response. Plots of the relationship between input and output firing rates, as well as the coefficient of variation, are given, and inputs with varying rates and amplitudes, including inhibitory inputs, are analyzed. The results indicate that neurons functioning near their critical threshold, where the inputs are just sufficient to cause firing, display a large variability in their spike timings.
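Generically, a renewal-equation generalization of this kind can be written as an integral equation for the output-spike density in terms of a conditional (possibly non-stationary) interspike-interval density; the form below is one standard way to write it and may differ from the paper's notation:

```latex
\rho_{\mathrm{out}}(t) \;=\; f\!\left(t \mid 0\right) \;+\; \int_{0}^{t} \rho_{\mathrm{out}}(t')\, f\!\left(t \mid t'\right)\, \mathrm{d}t' ,
```

where f(t \mid t') is the probability density of the next output spike at time t given the previous one at t'. Because f is allowed to depend on t' and t separately, the relation does not require time-translational invariance, consistent with the leaky, non-stationary cases described above.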
