Similar articles
20 similar articles found (search time: 46 ms)
1.
Spike correlations between neurons are ubiquitous in the cortex, but their role is not understood. Here we describe the firing response of a leaky integrate-and-fire (LIF) neuron when it receives a temporally correlated input generated by presynaptic correlated neuronal populations. Input correlations are characterized in terms of the firing rates, Fano factors, correlation coefficients, and correlation timescale of the neurons driving the target neuron. We show that the sum of the presynaptic spike trains cannot be well described by a Poisson process. In fact, the total input current has a nontrivial two-point correlation function described by two main parameters: the correlation timescale (how precise the input correlations are in time) and the correlation magnitude (how strong they are). Therefore, the total current generated by the input spike trains is not well described by a white noise gaussian process. Instead, we model the total current as a colored gaussian process with the same mean and two-point correlation function, leading to a formulation of the problem in terms of a Fokker-Planck equation. Solutions for the output firing rate are found in the limits of short and long correlation timescales. The solutions described here expand and improve on our previous results (Moreno, de la Rocha, Renart, & Parga, 2002) by presenting new analytical expressions for the output firing rate for general IF neurons, extending the validity of the results to arbitrarily large correlation magnitudes, and describing the differential effect of correlations in the mean-driven and noise-dominated firing regimes. The details of this formalism are also given here for the first time. We use numerical simulations to confirm the analytical solutions and to study the firing response to sudden changes in the input correlations. We expect this formalism to be useful for the study of correlations in neuronal networks and their role in neural processing and information transmission.
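A minimal, self-contained sketch of the kind of setup this abstract describes: an LIF neuron driven by a colored (Ornstein-Uhlenbeck) gaussian current with a finite correlation timescale rather than white noise. This is not code from the paper; the membrane parameters, input mean, fluctuation size, and correlation time are illustrative assumptions.

import numpy as np

# LIF neuron driven by an Ornstein-Uhlenbeck (colored gaussian) current.
# All parameter values are illustrative, not taken from the paper.
rng = np.random.default_rng(0)

dt, T = 1e-4, 50.0                        # time step (s), simulated time (s)
tau_m, v_th, v_reset = 0.02, 1.0, 0.0     # membrane time constant, threshold, reset
mu, sigma, tau_c = 0.8, 0.6, 0.005        # input mean, fluctuation size, correlation time (s)

n_steps = int(T / dt)
noise = rng.standard_normal(n_steps)
v, I = 0.0, 0.0
n_spikes = 0
for k in range(n_steps):
    # OU update: exponentially correlated fluctuations with stationary std sigma
    I += (-I / tau_c) * dt + sigma * np.sqrt(2 * dt / tau_c) * noise[k]
    # LIF membrane equation driven by the total current mu + I
    v += (-v + mu + I) * dt / tau_m
    if v >= v_th:
        n_spikes += 1
        v = v_reset

print(f"output rate ~ {n_spikes / T:.1f} Hz for correlation time tau_c = {tau_c * 1e3:.0f} ms")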

2.
Correlations between neuronal spike trains affect network dynamics and population coding. Overlapping afferent populations and correlations between presynaptic spike trains introduce correlations between the inputs to downstream cells. To understand network activity and population coding, it is therefore important to understand how these input correlations are transferred to output correlations. Recent studies have addressed this question in the limit of many inputs with infinitesimal postsynaptic response amplitudes, where the total input can be approximated by gaussian noise. In contrast, we address the problem of correlation transfer by representing input spike trains as point processes, with each input spike eliciting a finite postsynaptic response. This approach allows us to naturally model synaptic noise and recurrent coupling and to treat excitatory and inhibitory inputs separately. We derive several new results that provide intuitive insights into the fundamental mechanisms that modulate the transfer of spiking correlations.
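A rough sketch of correlation transfer with point-process inputs: two LIF neurons share a fraction of their Poisson afferents, each input spike causes a finite voltage jump, and the resulting output spike-count correlation is measured. The population size, rates, synaptic jump, shared fraction, and counting window are illustrative assumptions, not values from the paper.

import numpy as np

# Two LIF neurons with partially overlapping Poisson input populations;
# each input spike produces a finite jump w. Illustrative parameters only.
rng = np.random.default_rng(1)

dt, T = 1e-4, 60.0
tau_m, v_th = 0.02, 1.0
n_inputs, rate_in, w = 100, 15.0, 0.03    # inputs per neuron, input rate (Hz), PSP jump
shared = 0.5                              # fraction of afferents common to both neurons

n_shared = int(shared * n_inputs)
n_priv = n_inputs - n_shared
n_steps = int(T / dt)

v = np.zeros(2)
counts = np.zeros((2, n_steps), dtype=int)
for k in range(n_steps):
    s_shared = rng.poisson(n_shared * rate_in * dt)        # spikes on shared afferents
    s_priv = rng.poisson(n_priv * rate_in * dt, size=2)    # spikes on private afferents
    v += -v * dt / tau_m + w * (s_shared + s_priv)
    fired = v >= v_th
    counts[fired, k] = 1
    v[fired] = 0.0

# output spike-count correlation in 50 ms windows
win = int(0.05 / dt)
n_win = n_steps // win
c = counts[:, :n_win * win].reshape(2, n_win, win).sum(axis=2)
rho_out = np.corrcoef(c)[0, 1]
print(f"output rate ~ {counts.sum() / (2 * T):.1f} Hz, output count correlation ~ {rho_out:.2f}")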

3.
Lüdtke N  Nelson ME 《Neural computation》2006,18(12):2879-2916
We study the encoding of weak signals in spike trains with interspike interval (ISI) correlations and the signals' subsequent detection in sensory neurons. Motivated by the observation of negative ISI correlations in auditory and electrosensory afferents, we assess the theoretical performance limits of an individual detector neuron receiving a weak signal distributed across multiple afferent inputs. We analyze the functional role of ISI correlations in the detection process using statistical detection theory and derive two sequential likelihood ratio detector models: one for afferents with renewal statistics, the other for afferents with negatively correlated ISIs. We suggest a mechanism that might enable sensory neurons to implicitly compute conditional probabilities of presynaptic spikes by means of short-term synaptic plasticity. We demonstrate how this mechanism can enhance a postsynaptic neuron's sensitivity to weak signals by exploiting the correlation structure of the input spike trains. Our model not only captures fundamental aspects of early electrosensory signal processing in weakly electric fish, but may also bear relevance to the mammalian auditory system and other sensory modalities.
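The renewal-statistics case can be caricatured with a sequential probability ratio test on pooled Poisson afferent counts: the detector accumulates the log-likelihood ratio bin by bin until it crosses a decision threshold. This is only a hedged sketch of the general idea (the paper's detectors, including the negatively correlated ISI case, use the afferents' actual ISI statistics); the rates, bin size, afferent count, and thresholds are illustrative assumptions.

import numpy as np

# Sequential likelihood-ratio detection of a weak rate increase (r0 -> r1)
# distributed over many Poisson afferents. Illustrative parameters only.
rng = np.random.default_rng(2)

n_afferents, r0, r1 = 30, 100.0, 105.0        # baseline and signal rates (Hz)
dt = 0.001                                    # 1 ms observation bins
log_A, log_B = np.log(100.0), np.log(0.01)    # SPRT decision thresholds

def sprt_decision(signal_present):
    """Accumulate the Poisson log-likelihood ratio over pooled afferent counts."""
    rate = r1 if signal_present else r0
    llr, t = 0.0, 0.0
    while log_B < llr < log_A:
        n = rng.poisson(n_afferents * rate * dt)            # pooled count in this bin
        llr += n * np.log(r1 / r0) - n_afferents * (r1 - r0) * dt
        t += dt
    return ("signal" if llr >= log_A else "no signal"), t

trials = [sprt_decision(True) for _ in range(200)]
hit_rate = np.mean([d == "signal" for d, _ in trials])
mean_t = np.mean([t for _, t in trials])
print(f"hit rate {hit_rate:.2f}, mean decision time {mean_t * 1e3:.0f} ms")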

4.
The coherence between neural spike trains and local field potential recordings, called spike-field coherence, is of key importance in many neuroscience studies. In this work, aside from questions of estimator performance, we demonstrate that the theoretical spike-field coherence for a broad class of spiking models depends on the expected rate of spiking. This rate dependence confounds the phase locking of spike events to field-potential oscillations with overall neuron activity; we demonstrate it analytically for a large class of stochastic models and in simulation. Finally, the relationship between spike-field coherence and intensity field coherence is detailed analytically. The latter quantity is independent of the neuron's firing rate and, under commonly found conditions, is proportional to the probability that the neuron spikes at a specific phase of the field oscillation. Hence, intensity field coherence is a rate-independent measure and a candidate on which to base the appropriate statistical inference of spike-field synchrony.
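The rate dependence described above can be seen in a toy simulation: two Poisson neurons are phase locked to the same 10 Hz field oscillation with identical modulation depth but fire at different mean rates, and the estimated spike-field coherence at 10 Hz differs nonetheless. The field model, rates, modulation depth, and spectral settings are illustrative assumptions, not from the paper.

import numpy as np
from scipy.signal import coherence

# Spike-field coherence of Poisson spikes locked to a 10 Hz oscillation,
# at two different mean rates but identical phase locking. Illustrative values.
rng = np.random.default_rng(3)

fs, T, f_osc, m = 1000.0, 400.0, 10.0, 0.5
t = np.arange(0, T, 1 / fs)
lfp = np.cos(2 * np.pi * f_osc * t) + 0.5 * rng.standard_normal(t.size)

def binned_spikes(r0):
    """Inhomogeneous Poisson spikes (0/1 per 1 ms bin) locked to the oscillation."""
    lam = r0 * (1 + m * np.cos(2 * np.pi * f_osc * t)) / fs
    return (rng.random(t.size) < lam).astype(float)

for r0 in (2.0, 50.0):
    f, coh = coherence(binned_spikes(r0), lfp, fs=fs, nperseg=1024)
    peak = coh[np.argmin(np.abs(f - f_osc))]
    print(f"mean rate {r0:>4.0f} Hz -> spike-field coherence at 10 Hz ~ {peak:.2f}")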

5.
A rate code assumes that a neuron's response is completely characterized by its time-varying mean firing rate. This assumption has successfully described neural responses in many systems. The noise in rate-coding neurons can be quantified by the coherence function or the correlation coefficient between the neuron's deterministic time-varying mean rate and noise-corrupted single spike trains. Because of finite data size, the mean rate cannot be known exactly and must be approximated. We introduce novel unbiased estimators for the coherence and correlation measures that are based on extrapolating the signal-to-noise ratio of the neural response to infinite data size. We then describe how these estimates can be used to validate the class of stimulus-response models that assume the mean firing rate captures all the information embedded in the neural response. We explain how these quantifiers can be used to separate response-prediction errors due to inaccurate model assumptions from errors due to the noise inherent in neuronal spike trains.

6.
Masuda N  Aihara K 《Neural computation》2002,14(7):1599-1628
Interspike intervals of spikes emitted from an integrator neuron model of sensory neurons can encode input information represented as a continuous signal from a deterministic system. If a real brain uses spike timing as a means of information processing, other neurons receiving spatiotemporal spikes from such sensory neurons must also be capable of processing the information contained in deterministic interspike intervals. In this article, we examine the functions of model cortical neurons that receive spatiotemporal spike inputs from many such sensory neurons. We show that these neuron models can encode the stimulus information passed from the sensory model neurons in the form of interspike intervals. Each sensory neuron connected to the cortical neuron contributes equally to the information collected by the cortical neuron. Although the incident spike train to the cortical neuron is a superposition of spike trains from many sensory neurons, it need not be decomposed into the spike trains of the individual input neurons. These results are preserved under generalizations of the sensory neurons such as a small amount of leak, noise, inhomogeneous firing rates, or biases in the phase distributions.

7.
The purpose of this study was to obtain a better understanding of neuronal responses to correlated input, focusing in particular on the synchronization of neuronal activity. The first aim was to obtain an analytical expression for the coherence between the output spike train and correlated input and for the coherence between output spike trains of neurons with correlated input. For Poisson neurons, we derived that the peak of the coherence between the correlated input and multi-unit activity increases proportionally with the square root of the number of neurons in the multi-unit recording. The coherence between two typical multi-unit recordings (2 to 10 single units) with partially correlated input increases proportionally with the number of units in the multi-unit recordings. The second aim of this study was to investigate to what extent the amplitude and signal-to-noise ratio of the coherence between input and output vary for single-unit versus multi-unit activity and how they are affected by the duration of the recording. The same problem was addressed for the coherence between two single-unit spike series and between two multi-unit spike series. The analytical results for the Poisson neuron and numerical simulations for the conductance-based leaky integrate-and-fire neuron and for the conductance-based Hodgkin-Huxley neuron show that the expectation value of the coherence function does not increase for a longer duration of the recording. The only effect of a longer duration of the spike recording is a reduction of the noise in the coherence function. The results of analytical derivations and computer simulations for model neurons show that the coherence for multi-unit activity is larger than that for single-unit activity. This is in agreement with the results of experimental data obtained from monkey visual cortex (V4). Finally, we show that multitaper techniques greatly contribute to a more accurate estimate of the coherence by reducing the bias and variance in the coherence estimate.

8.
Statistical models of neural activity are integral to modern neuroscience. Recently, interest has grown in modeling the spiking activity of populations of simultaneously recorded neurons to study the effects of correlations and functional connectivity on neural information processing. However, any statistical model must be validated by an appropriate goodness-of-fit test. Kolmogorov-Smirnov tests based on the time-rescaling theorem have proven to be useful for evaluating point-process-based statistical models of single-neuron spike trains. Here we discuss the extension of the time-rescaling theorem to the multivariate (neural population) case. We show that even in the presence of strong correlations between spike trains, models that neglect couplings between neurons can be erroneously passed by the univariate time-rescaling test. We present the multivariate version of the time-rescaling theorem and provide a practical step-by-step procedure for applying it to testing the sufficiency of neural population models. Using several simple analytically tractable models and more complex simulated and real data sets, we demonstrate that important features of the population activity can be detected only using the multivariate extension of the test.
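For orientation, here is a hedged sketch of the univariate test that the abstract extends: rescaling interspike intervals by the integrated conditional intensity should yield Exponential(1) variables, which are mapped to [0, 1] and checked against uniformity with a Kolmogorov-Smirnov test. The intensity model, bin size, and rates are illustrative assumptions; the multivariate version discussed in the paper additionally tests the couplings between neurons.

import numpy as np
from scipy.stats import kstest

# Univariate time-rescaling goodness-of-fit test on a simulated spike train.
# Illustrative intensity and parameters; not code from the paper.
rng = np.random.default_rng(4)

dt, T = 0.001, 200.0
t = np.arange(0, T, dt)
lam = 20.0 * (1 + 0.8 * np.sin(2 * np.pi * 2.0 * t))   # true conditional intensity (Hz)
spikes = rng.random(t.size) < lam * dt                  # Bernoulli approximation of the point process

def rescaled_uniforms(intensity):
    """Integrate the model intensity between spikes and map to [0, 1]."""
    z = np.diff(np.cumsum(intensity * dt)[spikes])      # rescaled interspike intervals
    return 1.0 - np.exp(-z)

print("correct model KS p-value:", round(kstest(rescaled_uniforms(lam), "uniform")[1], 3))
print("constant-rate model KS p-value:", round(kstest(rescaled_uniforms(np.full(t.size, lam.mean())), "uniform")[1], 3))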

9.
What causes a neuron to spike?
The computation performed by a neuron can be formulated as a combination of dimensional reduction in stimulus space and the nonlinearity inherent in a spiking output. White noise stimuli and reverse correlation (the spike-triggered average and spike-triggered covariance) are often used in experimental neuroscience to "ask" neurons which dimensions in stimulus space they are sensitive to and to characterize the nonlinearity of the response. In this article, we apply reverse correlation to the simplest model neuron with temporal dynamics, the leaky integrate-and-fire model, and find that even for this simple case, standard techniques do not recover the known neural computation. To overcome this, we develop novel reverse-correlation techniques that selectively analyze only "isolated" spikes and take explicit account of the extended silences that precede these isolated spikes. We discuss the implications of our methods for the characterization of neural adaptation. Although these methods are developed in the context of the leaky integrate-and-fire model, our findings are relevant for the analysis of spike trains from real neurons.
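A hedged sketch of the isolated-spike idea: drive an LIF neuron with white-noise current, compute the spike-triggered average (STA) from all spikes, and recompute it using only spikes preceded by a stretch of silence. The stimulus statistics, membrane parameters, window, and silence criterion are illustrative assumptions, not the paper's settings.

import numpy as np

# Spike-triggered averages from all spikes vs. "isolated" spikes of an LIF
# neuron driven by white-noise current. Illustrative parameters only.
rng = np.random.default_rng(5)

dt, T, tau_m, v_th = 0.001, 500.0, 0.02, 1.0
n_steps = int(T / dt)
stim = 0.9 + 0.8 * rng.standard_normal(n_steps)     # white-noise input current

v, spike_idx = 0.0, []
for k in range(n_steps):
    v += (-v + stim[k]) * dt / tau_m
    if v >= v_th:
        spike_idx.append(k)
        v = 0.0
spike_idx = np.array(spike_idx)

win = int(0.05 / dt)                                 # 50 ms of stimulus before each spike
valid = spike_idx[spike_idx >= win]
sta_all = np.mean([stim[i - win:i] for i in valid], axis=0)

silence = int(0.03 / dt)                             # require 30 ms without a preceding spike
isolated = spike_idx[(np.diff(spike_idx, prepend=-silence - 1) > silence) & (spike_idx >= win)]
sta_iso = np.mean([stim[i - win:i] for i in isolated], axis=0)

print(f"{valid.size} spikes, {isolated.size} isolated")
print(f"STA peak above stimulus mean: all spikes {sta_all.max() - stim.mean():.3f}, "
      f"isolated spikes {sta_iso.max() - stim.mean():.3f}")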

10.
Some sensory tasks in the nervous system require highly precise spike trains to be generated in the presence of intrinsic neuronal noise. Collective enhancement of precision (CEP) can occur when the spike trains of many neurons are pooled into a more precise population discharge. We study CEP in a network of N model neurons connected by recurrent excitation. Each neuron is driven by a periodic inhibitory spike train with independent jitter in the spike arrival times. The network discharge is characterized by σ_W, the dispersion of spike times within one cycle, and σ_B, the jitter in the network-averaged spike time between cycles. In an uncoupled network σ_B ≈ 1/√N and σ_W is independent of N. In a strongly coupled network σ_B ≈ 1/√(log N) and σ_W is close to zero. At intermediate coupling strengths, σ_W is reduced, while σ_B remains close to its uncoupled value. The population discharge then has optimal biophysical properties compared with the uncoupled network.
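The uncoupled-network scaling quoted above is easy to check numerically: if every neuron fires once per cycle at the intended time plus independent jitter of standard deviation sigma, the within-cycle dispersion σ_W stays near sigma while the cycle-to-cycle jitter of the population-averaged spike time σ_B shrinks as sigma/√N. The sketch below only covers this uncoupled case; the jitter value and cycle count are illustrative assumptions.

import numpy as np

# sigma_W and sigma_B for an uncoupled population with independent spike-time jitter.
rng = np.random.default_rng(6)

sigma, n_cycles = 1.0, 5000                  # spike-time jitter (ms), number of cycles
for N in (10, 100, 1000):
    jitter = sigma * rng.standard_normal((n_cycles, N))   # spike-time errors per cycle
    sigma_W = jitter.std(axis=1).mean()                    # within-cycle dispersion
    sigma_B = jitter.mean(axis=1).std()                    # jitter of the network-averaged spike time
    print(f"N = {N:>4d}: sigma_W ~ {sigma_W:.2f} ms, sigma_B ~ {sigma_B:.3f} ms "
          f"(sigma/sqrt(N) = {sigma / np.sqrt(N):.3f} ms)")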

11.
Firing patterns of neurons are highly variable from trial to trial, even when we record a well-specified neuron exposed to identical stimuli under the same experimental conditions. The trial-to-trial variability of neuronal spike trains may represent some sort of information and provide important indications about neuronal properties. We propose a new method for quantifying the trial-to-trial variability of spike trains, and investigate how the characteristics of noisy neural network models affect the proposed measure. This work was presented in part at the 13th International Symposium on Artificial Life and Robotics, Oita, Japan, January 31–February 2, 2008.

12.
Pairwise correlations among spike trains recorded in vivo have been frequently reported. It has been argued that correlated activity could play an important role in the brain, because it efficiently modulates the response of a postsynaptic neuron. We show here that a neuron's output firing rate critically depends on the higher-order statistics of the input ensemble. We constructed two statistical models of populations of spiking neurons that fired with the same rates and had identical pairwise correlations, but differed with regard to the higher-order interactions within the population. The first ensemble was characterized by clusters of spikes synchronized over the whole population. In the second ensemble, the size of spike clusters was, on average, proportional to the pairwise correlation. For both input models, we assessed the effect of the size of the population, the firing rate, and the pairwise correlation on the output rate of two simple model neurons: a continuous firing-rate model and a conductance-based leaky integrate-and-fire neuron. An approximation to the mean output rate of the firing-rate neuron could be derived analytically with the help of shot-noise theory. Interestingly, the essential features of the mean response of the two neuron models were similar. For both neuron models, the three input parameters played radically different roles with respect to the postsynaptic firing rate, depending on the interaction structure of the input. For instance, in the case of an ensemble with small and distributed spike clusters, the output firing rate was efficiently controlled by the size of the input population. In addition to the interaction structure, the ratio of inhibition to excitation was found to strongly modulate the effect of correlation on the postsynaptic firing rate.
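A hedged sketch of the kind of comparison described above: two input ensembles are built with the same rates and the same pairwise correlation c but different higher-order structure, one with population-wide synchronous events plus independent background, and one in which each neuron independently copies spikes from a common "mother" train, giving small distributed clusters. Both drive the same LIF neuron. The constructions, population size, rates, correlation, and synaptic weight are illustrative assumptions rather than the paper's models.

import numpy as np

# Same rates and pairwise correlation, different higher-order structure.
# Illustrative parameters only.
rng = np.random.default_rng(7)

dt, T = 1e-3, 100.0
n, r, c = 100, 10.0, 0.1                  # population size, rate (Hz), pairwise correlation
tau_m, v_th, w = 0.02, 1.0, 0.03
steps = int(T / dt)

def sync_events(n, r, c, steps):
    """Population-wide synchronous events plus independent background spikes."""
    common = rng.random(steps) < r * c * dt
    private = rng.random((steps, n)) < r * (1 - c) * dt
    return private | common[:, None]

def distributed_clusters(n, r, c, steps):
    """Each neuron copies each mother spike independently with probability c."""
    mother = rng.random(steps) < (r / c) * dt
    copies = rng.random((steps, n)) < c
    return mother[:, None] & copies

def lif_rate(inputs):
    v, spikes = 0.0, 0
    drive = w * inputs.sum(axis=1)
    for k in range(steps):
        v += -v * dt / tau_m + drive[k]
        if v >= v_th:
            spikes, v = spikes + 1, 0.0
    return spikes / T

for name, gen in (("population-wide clusters", sync_events),
                  ("small distributed clusters", distributed_clusters)):
    print(f"{name}: output rate ~ {lif_rate(gen(n, r, c, steps)):.1f} Hz")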

13.
Information encoding and computation with spikes and bursts
Neurons compute and communicate by transforming synaptic input patterns into output spike trains. The nature of this transformation depends crucially on the properties of voltage-gated conductances in neuronal membranes. These intrinsic membrane conductances can enable neurons to generate different spike patterns, including the brief, high-frequency bursts that are commonly observed in a variety of brain regions. Here we examine how the membrane conductances that generate bursts affect neural computation and encoding. We simulated a bursting neuron model driven by a random input current signal with superposed noise. We consider two issues: the timing reliability of different spike patterns and the computation performed by the neuron. Statistical analysis of the simulated spike trains shows that the timing of bursts is much more precise than the timing of single spikes. Furthermore, the number of spikes per burst is highly robust to noise. Next we considered the computation performed by the neuron: how different features of the input current are mapped into specific output spike patterns. Dimensional reduction and statistical classification techniques were used to determine the stimulus features triggering different firing patterns. Our main result is that spikes, and bursts of different durations, code for different stimulus features, which can be quantified without a priori assumptions about those features. These findings lead us to propose that the biophysical mechanisms of spike generation enable individual neurons to encode different stimulus features in distinct spike patterns.

14.
Human information processing depends mainly on billions of neurons that constitute a complex neural network, in which information is transmitted in the form of neural spikes. In this paper, we propose a spiking neural network (SNN), named MD-SNN, with three key features: (1) using receptive fields to encode spike trains from images; (2) randomly selecting a subset of spikes as inputs for each neuron to approximate the neuron's absolute refractory period; (3) using groups of neurons to make decisions. We test MD-SNN on the MNIST data set of handwritten digits, and the results demonstrate that: (1) Different sizes of receptive fields influence classification results significantly. (2) When the neuronal refractory period is taken into account in the SNN model, increasing the number of neurons in the learning layer can greatly reduce the training time, effectively reduce the probability of over-fitting, and improve the accuracy by 8.77%. (3) Compared with other SNN methods, MD-SNN achieves better classification; compared with a convolutional neural network (CNN), MD-SNN maintains flip and rotation invariance (the accuracy can remain at 90.44% on the test set) and is more suitable for small-sample learning (the accuracy can reach 80.15% with 1000 training samples, which is 7.8 times that of the CNN).

15.
Stiber M 《Neural computation》2005,17(7):1577-1601
The effects of spike timing precision and dynamical behavior on error correction in spiking neurons were investigated. Stationary discharges (phase locked, quasiperiodic, or chaotic) were induced in a simulated neuron by presenting pacemaker presynaptic spike trains across a model of a prototypical inhibitory synapse. Reduced timing precision was modeled by jittering presynaptic spike times. The aftereffects of errors (in this communication, missed presynaptic spikes) were determined by comparing postsynaptic spike times between simulations identical except for the presence or absence of errors. Results show that the effects of an error vary greatly depending on the ongoing dynamical behavior. In the case of phase lockings, a high degree of presynaptic spike timing precision can provide significantly faster error recovery. For nonlocked behaviors, isolated missed spikes can have little or no discernible aftereffects (or can even paradoxically reduce uncertainty in postsynaptic spike timing), regardless of presynaptic imprecision. This suggests two possible categories of error correction: high-precision locking with rapid recovery, and low-precision nonlocked behavior with error immunity.

16.
Analyzing the dependencies between spike trains is an important step in understanding how neurons work in concert to represent biological signals. Usually this is done for pairs of neurons at a time using correlation-based techniques. Chornoboy, Schramm, and Karr (1988) proposed maximum likelihood methods for the simultaneous analysis of multiple pairwise interactions among an ensemble of neurons. One of these methods is an iterative, continuous-time estimation algorithm for a network likelihood model formulated in terms of multiplicative conditional intensity functions. We devised a discrete-time version of this algorithm that includes a new, efficient computational strategy, a principled method to compute starting values, and a principled stopping criterion. In an analysis of simulated neural spike trains from ensembles of interacting neurons, the algorithm recovered the correct connectivity matrices and interaction parameters. In the analysis of spike trains from an ensemble of rat hippocampal place cells, the algorithm identified a connectivity matrix and interaction parameters consistent with the pattern of conjoined firing predicted by the overlap of the neurons' spatial receptive fields. These results suggest that the network likelihood model can be an efficient tool for the analysis of ensemble spiking activity.
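A hedged, stripped-down sketch of the discrete-time conditional-intensity idea: neuron B's intensity is modeled as lambda_B(t) = exp(b + w * A(t-1)), where A(t-1) indicates a spike of neuron A in the previous bin. Because the only covariate here is binary, the maximum likelihood estimates reduce to conditional means; a realistic network model with many neurons and history terms would be fit by iterating on the same Poisson likelihood. The simulated ground truth and every setting below are illustrative assumptions, not the paper's algorithm.

import numpy as np

# Maximum likelihood estimate of a single coupling term in a discrete-time
# conditional-intensity model. Illustrative parameters only.
rng = np.random.default_rng(8)

dt, T = 0.001, 600.0
steps = int(T / dt)
b_true, w_true = np.log(5.0 * dt), 1.5               # 5 Hz baseline, excitatory coupling

A = (rng.random(steps) < 20.0 * dt).astype(float)     # neuron A: 20 Hz Poisson
A_prev = np.concatenate(([0.0], A[:-1]))              # did A spike in the previous bin?
lam_B = np.exp(b_true + w_true * A_prev)              # expected B spikes per bin
B = (rng.random(steps) < lam_B).astype(float)         # neuron B spike train

# with a single binary covariate, the MLE is given by the two conditional means
lam0 = B[A_prev == 0].mean()                          # intensity when A was silent
lam1 = B[A_prev == 1].mean()                          # intensity right after an A spike
b_hat, w_hat = np.log(lam0), np.log(lam1) - np.log(lam0)

print(f"true      b = {b_true:.2f}, w = {w_true:.2f}")
print(f"estimated b = {b_hat:.2f}, w = {w_hat:.2f}")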

17.
The spiking activity of the neuronal membrane potential is the basis of neural coding. However, a complete understanding of how the electrical activity of neurons encodes neural information has not yet been reached. Traditional coding theory holds that the nervous system represents and transmits information through discrete sequences of action potentials, focusing mainly on firing rates and on the temporal patterns of spiking activity. Based on this theory, several quantitative methods have been developed for computing the information carried by spike trains, but these methods remain difficult to apply to computations in large-scale neuronal networks. Taking the neuronal membrane potential as the object of study, this work shows how to quantify the information carried by membrane-potential traces and compares the results with those obtained by the traditional spike-train methods, finding good agreement. This study provides a new approach for quantitatively computing the information carried by neural activity.

18.
蔡荣太  吴庆祥 《计算机应用》2010,30(12):3327-3330
Mimicking biological information-processing mechanisms, a spiking neural network (SNN) for infrared target extraction is designed. First, input-layer spiking neurons convert the stimulus image into spike trains; next, the density of spikes emitted by intermediate-layer spiking neurons encodes target-contour versus non-target-contour pixels of the infrared image; finally, the infrared target is extracted according to whether the output spike density of the output-layer neurons exceeds a threshold. Experimental results show that the designed spiking neural network extracts infrared targets well and is consistent with biological visual information-processing mechanisms.

19.
Spiking neurons are very flexible computational modules: with different values of their adjustable synaptic parameters, they can implement an enormous variety of transformations F from input spike trains to output spike trains. We examine in this letter to what extent a spiking neuron with biologically realistic models of dynamic synapses can be taught via spike-timing-dependent plasticity (STDP) to implement a given transformation F. We consider a supervised learning paradigm in which, during training, the output of the neuron is clamped to the target signal (teacher forcing). The well-known perceptron convergence theorem asserts the convergence of a simple supervised learning algorithm for drastically simplified neuron models (McCulloch-Pitts neurons). We show that, in contrast to the perceptron convergence theorem, no theoretical guarantee can be given for the convergence of STDP with teacher forcing that holds for arbitrary input spike patterns. On the other hand, we prove that average-case versions of the perceptron convergence theorem do hold for STDP in the case of uncorrelated and correlated Poisson input spike trains and simple models of spiking neurons. For a wide class of cross-correlation functions of the input spike trains, the resulting necessary and sufficient condition can be formulated in terms of linear separability, analogous to the well-known condition for learnability by perceptrons. However, the linear separability criterion has to be applied here to the columns of the correlation matrix of the Poisson input. We demonstrate through extensive computer simulations that the theoretically predicted convergence of STDP with teacher forcing also holds for more realistic models of neurons, dynamic synapses, and more general input distributions. In addition, we show through computer simulations that these positive learning results hold not only for the common interpretation of STDP, where STDP changes the weights of synapses, but also for a more realistic interpretation suggested by experimental data, where STDP modulates the initial release probability of dynamic synapses.
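A hedged toy version of teacher forcing with STDP: the postsynaptic spike train is clamped to a target signal, and a standard pair-based STDP rule with exponential traces is applied to the input synapses; inputs whose spikes tend to precede the clamped output spikes are potentiated relative to independent inputs. This is far simpler than the dynamic-synapse models analyzed in the paper, and all rates, time constants, learning rates, and the 5 ms lead are illustrative assumptions.

import numpy as np

# Pair-based STDP with the postsynaptic train clamped to a target (teacher forcing).
# Inputs 0-4 tend to fire ~5 ms before target spikes; inputs 5-9 are independent.
# Illustrative parameters only.
rng = np.random.default_rng(9)

dt, T = 0.001, 200.0
steps = int(T / dt)
n_rel, n_irr = 5, 5
tau_plus = tau_minus = 0.02
a_plus, a_minus, w_max = 0.01, 0.012, 1.0

post = rng.random(steps) < 10.0 * dt                    # clamped target spikes (10 Hz)
pre = rng.random((steps, n_rel + n_irr)) < 5.0 * dt     # 5 Hz background on every input
lead = np.roll(post, -5)                                # bins 5 ms before each target spike
pre[:, :n_rel] |= lead[:, None] & (rng.random((steps, n_rel)) < 0.5)

w = np.full(n_rel + n_irr, 0.5)
x_pre = np.zeros(n_rel + n_irr)                         # presynaptic traces
x_post = 0.0                                            # postsynaptic trace
for t in range(steps):
    x_pre *= np.exp(-dt / tau_plus)
    x_post *= np.exp(-dt / tau_minus)
    if post[t]:
        w += a_plus * x_pre                             # pre-before-post: potentiation
        x_post += 1.0
    fired = pre[t]
    if fired.any():
        w[fired] -= a_minus * x_post                    # post-before-pre: depression
        x_pre[fired] += 1.0
    np.clip(w, 0.0, w_max, out=w)

print("mean weight, inputs correlated with the target:", round(w[:n_rel].mean(), 2))
print("mean weight, independent inputs:", round(w[n_rel:].mean(), 2))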

20.
Masuda N  Aihara K 《Neural computation》2003,15(6):1341-1372
Neuronal information processing is often studied on the basis of spiking patterns. The relevant statistics, such as firing rates calculated with the peri-stimulus time histogram, are obtained by averaging spiking patterns over many experimental runs. However, in real situations animals must respond to a single presentation of a stimulus, and what is available to the brain is not the trial statistics but the population statistics. Consequently, physiological ergodicity, namely the consistency between trial averaging and population averaging, is implicitly assumed in data analyses, although it does not trivially hold true. In this letter, we investigate how characteristics of noisy neural network models, such as single-neuron properties, external stimuli, and synaptic inputs, affect the statistics of firing patterns. In particular, we show how high sensitivity of the membrane potential to input fluctuations, the inability of neurons to remember past inputs, external stimuli with large variability and temporally separated peaks, and relatively small contributions from synaptic inputs result in spike trains that are reproducible over many trials. The reproducibility of spike trains and synchronous firing are contrasted and related to the ergodicity issue. Several numerical calculations with neural network examples are carried out to support the theoretical results.
