Similar Documents
A total of 20 similar documents were found.
1.
Measuring agreement between a statistical model and a spike train data series, that is, evaluating goodness of fit, is crucial for establishing the model's validity prior to using it to make inferences about a particular neural system. Assessing goodness-of-fit is a challenging problem for point process neural spike train models, especially for histogram-based models such as peristimulus time histograms (PSTH) and rate functions estimated by spike train smoothing. The time-rescaling theorem is a well-known result in probability theory, which states that any point process with an integrable conditional intensity function may be transformed into a Poisson process with unit rate. We describe how the theorem may be used to develop goodness-of-fit tests for both parametric and histogram-based point process models of neural spike trains. We apply these tests in two examples: a comparison of PSTH, inhomogeneous Poisson, and inhomogeneous Markov interval models of neural spike trains from the supplementary eye field of a macaque monkey and a comparison of temporal and spatial smoothers, inhomogeneous Poisson, inhomogeneous gamma, and inhomogeneous inverse gaussian models of rat hippocampal place cell spiking activity. To help make the logic behind the time-rescaling theorem more accessible to researchers in neuroscience, we present a proof using only elementary probability theory arguments. We also show how the theorem may be used to simulate a general point process model of a spike train. Our paradigm makes it possible to compare parametric and histogram-based neural spike train models directly. These results suggest that the time-rescaling theorem can be a valuable tool for neural spike train data analysis.
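The goodness-of-fit procedure described above can be sketched in a few lines. The following is a minimal, hedged illustration (not the authors' code): given an assumed conditional intensity, interspike intervals are rescaled by the integrated intensity and compared against the exponential distribution (equivalently, uniform after a further transform) with a Kolmogorov-Smirnov test. The intensity function and all constants are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): time-rescaling goodness-of-fit check for an
# inhomogeneous Poisson model. The intensity function and all constants are illustrative.
import numpy as np
from scipy import stats

def rescaled_intervals(spike_times, intensity, dt=1e-3):
    """Integrate the model intensity between successive spikes.

    Under a correct model, the rescaled intervals are i.i.d. Exponential(1).
    """
    taus, prev = [], 0.0
    for t in spike_times:
        grid = np.arange(prev, t, dt)
        taus.append(float(np.sum(intensity(grid))) * dt)   # Riemann sum of lambda(t)
        prev = t
    return np.asarray(taus)

rng = np.random.default_rng(0)
rate = lambda t: 20.0 + 15.0 * np.sin(2 * np.pi * t)            # assumed intensity (Hz)
t_grid = np.arange(0.0, 10.0, 1e-3)
spikes = t_grid[rng.random(t_grid.size) < rate(t_grid) * 1e-3]  # Bernoulli approximation

tau = rescaled_intervals(spikes, rate)
z = 1.0 - np.exp(-tau)                     # should be Uniform(0, 1) if the model fits
ks_stat, p_value = stats.kstest(z, "uniform")
print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.3f}")
```

The same recipe applies to any model with a computable conditional intensity; only the `intensity` argument changes.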

2.
Correlated neural activity has been observed at various signal levels (e.g., spike count, membrane potential, local field potential, EEG, fMRI BOLD). Most of these signals can be considered as superpositions of spike trains filtered by components of the neural system (synapses, membranes) and the measurement process. It is largely unknown how the spike train correlation structure is altered by this filtering and what the consequences for the dynamics of the system and for the interpretation of measured correlations are. In this study, we focus on linearly filtered spike trains and particularly consider correlations caused by overlapping presynaptic neuron populations. We demonstrate that correlation functions and statistical second-order measures like the variance, the covariance, and the correlation coefficient generally exhibit a complex dependence on the filter properties and the statistics of the presynaptic spike trains. We point out that both contributions can play a significant role in modulating the interaction strength between neurons or neuron populations. In many applications, the coherence allows a filter-independent quantification of correlated activity. In different network models, we discuss the estimation of network connectivity from the high-frequency coherence of simultaneous intracellular recordings of pairs of neurons.
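As a small, hedged illustration of the filter-independence of coherence mentioned in this abstract (not code from the study), the sketch below builds two spike trains with a shared presynaptic component, filters both with the same exponential "synaptic" kernel, and compares the coherence of the raw and filtered signals. The rates, time constant, and kernel shape are assumptions.

```python
# Hedged sketch: identical linear filtering changes variances and covariances of two
# spike trains but leaves their coherence (essentially) unchanged at low frequencies.
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
dt, T = 1e-3, 200.0
n = int(T / dt)
shared = rng.random(n) < 5.0 * dt                             # common presynaptic spikes (5 Hz)
s1 = (shared | (rng.random(n) < 15.0 * dt)).astype(float)     # train 1: shared + private spikes
s2 = (shared | (rng.random(n) < 15.0 * dt)).astype(float)     # train 2: shared + private spikes

tau = 0.010                                                   # assumed synaptic time constant (s)
kernel = np.exp(-np.arange(0, 5 * tau, dt) / tau)
x1 = np.convolve(s1, kernel)[:n]                              # "synaptically filtered" versions
x2 = np.convolve(s2, kernel)[:n]

f, c_raw = signal.coherence(s1, s2, fs=1 / dt, nperseg=4096)
_, c_filt = signal.coherence(x1, x2, fs=1 / dt, nperseg=4096)
print("low-frequency coherence, raw vs filtered:",
      round(float(c_raw[1:10].mean()), 3), round(float(c_filt[1:10].mean()), 3))
```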

3.
We consider a formal model of stimulus encoding with a circuit consisting of a bank of filters and an ensemble of integrate-and-fire neurons. Such models arise in olfactory systems, vision, and hearing. We demonstrate that bandlimited stimuli can be faithfully represented with spike trains generated by the ensemble of neurons. We provide a stimulus reconstruction scheme based on the spike times of the ensemble of neurons and derive conditions for perfect recovery. The key result calls for the spike density of the neural population to be above the Nyquist rate. We also show that recovery is perfect if the number of neurons in the population is larger than a threshold value. Increasing the number of neurons to achieve a faithful representation of the sensory world is consistent with basic neurobiological thought. Finally we demonstrate that in general, the problem of faithful recovery of stimuli from the spike train of single neurons is ill posed. The stimulus can be recovered, however, from the information contained in the spike train of a population of neurons.
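A minimal sketch of the encoding side of such a circuit, under simplifying assumptions (ideal, non-leaky integrate-and-fire units and no filter bank): each neuron integrates the biased stimulus and emits a spike whenever a threshold is reached. The parameter names `bias`, `kappa`, and `delta` are illustrative, not the paper's; the recovery machinery is not reproduced, but the recovery condition can be checked by comparing the population spike density with the stimulus's Nyquist rate.

```python
# Hedged sketch: encoding a bandlimited stimulus with an ensemble of ideal
# integrate-and-fire neurons. Parameter values are illustrative only.
import numpy as np

def iaf_encode(u, dt, bias, kappa, delta):
    """Integrate (u + bias) / kappa and emit a spike whenever the threshold delta is reached."""
    v, spikes = 0.0, []
    for k, uk in enumerate(u):
        v += (uk + bias) * dt / kappa
        if v >= delta:
            spikes.append(k * dt)
            v -= delta                       # reset by subtraction
    return np.asarray(spikes)

dt = 1e-4
t = np.arange(0, 1.0, dt)
u = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 13 * t)   # bandlimited stimulus

# An ensemble with different biases/thresholds; faithful recovery requires the
# population spike density to exceed the Nyquist rate of the stimulus (here 26 Hz).
params = [(1.2, 1.0, 0.02), (1.5, 1.0, 0.025), (2.0, 1.0, 0.03)]   # (bias, kappa, delta)
trains = [iaf_encode(u, dt, b, k, d) for b, k, d in params]
rates = [len(s) / t[-1] for s in trains]
print("per-neuron spike rates (Hz):", [round(r, 1) for r in rates],
      "| stimulus Nyquist rate = 26 Hz")
```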

4.
Spiking neurons are very flexible computational modules, which can implement with different values of their adjustable synaptic parameters an enormous variety of different transformations F from input spike trains to output spike trains. We examine in this letter the question to what extent a spiking neuron with biologically realistic models for dynamic synapses can be taught via spike-timing-dependent plasticity (STDP) to implement a given transformation F. We consider a supervised learning paradigm where during training, the output of the neuron is clamped to the target signal (teacher forcing). The well-known perceptron convergence theorem asserts the convergence of a simple supervised learning algorithm for drastically simplified neuron models (McCulloch-Pitts neurons). We show that in contrast to the perceptron convergence theorem, no theoretical guarantee can be given for the convergence of STDP with teacher forcing that holds for arbitrary input spike patterns. On the other hand, we prove that average case versions of the perceptron convergence theorem hold for STDP in the case of uncorrelated and correlated Poisson input spike trains and simple models for spiking neurons. For a wide class of cross-correlation functions of the input spike trains, the resulting necessary and sufficient condition can be formulated in terms of linear separability, analogously as the well-known condition of learnability by perceptrons. However, the linear separability criterion has to be applied here to the columns of the correlation matrix of the Poisson input. We demonstrate through extensive computer simulations that the theoretically predicted convergence of STDP with teacher forcing also holds for more realistic models for neurons, dynamic synapses, and more general input distributions. In addition, we show through computer simulations that these positive learning results hold not only for the common interpretation of STDP, where STDP changes the weights of synapses, but also for a more realistic interpretation suggested by experimental data where STDP modulates the initial release probability of dynamic synapses.
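For concreteness, here is a heavily simplified, hedged sketch of pair-based additive STDP under teacher forcing, where the postsynaptic spike train is clamped to a target and presynaptic/postsynaptic spike traces drive potentiation and depression. The window shape, learning constants, and input rates are illustrative assumptions and are much simpler than the dynamic-synapse models studied in the letter.

```python
# Hedged sketch: trace-based pair STDP with the output clamped to a target spike train.
import numpy as np

rng = np.random.default_rng(2)
dt, T = 1e-3, 50.0
n_steps, n_in = int(T / dt), 20
A_plus, A_minus, tau_stdp = 0.005, 0.00525, 0.020   # asymmetric learning window (assumed)
w = rng.uniform(0.2, 0.8, n_in)

pre = rng.random((n_steps, n_in)) < 10.0 * dt        # Poisson inputs, 10 Hz
target = rng.random(n_steps) < 20.0 * dt             # clamped ("taught") output, 20 Hz

x_pre = np.zeros(n_in)       # presynaptic traces (low-pass filtered input spikes)
x_post = 0.0                 # postsynaptic trace
for k in range(n_steps):
    x_pre *= np.exp(-dt / tau_stdp)
    x_post *= np.exp(-dt / tau_stdp)
    x_pre[pre[k]] += 1.0
    if target[k]:                        # teacher forcing: post spike at target times
        w += A_plus * x_pre              # potentiate recently active inputs
        x_post += 1.0
    w[pre[k]] -= A_minus * x_post        # depress inputs that spike after post spikes
    np.clip(w, 0.0, 1.0, out=w)

print("learned weights (first 5):", np.round(w[:5], 3))
```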

5.
Pairwise correlations among spike trains recorded in vivo have been frequently reported. It has been argued that correlated activity could play an important role in the brain, because it efficiently modulates the response of a postsynaptic neuron. We show here that a neuron's output firing rate critically depends on the higher-order statistics of the input ensemble. We constructed two statistical models of populations of spiking neurons that fired with the same rates and had identical pairwise correlations, but differed with regard to the higher-order interactions within the population. The first ensemble was characterized by clusters of spikes synchronized over the whole population. In the second ensemble, the size of spike clusters was, on average, proportional to the pairwise correlation. For both input models, we assessed the role of the size of the population, the firing rate, and the pairwise correlation on the output rate of two simple model neurons: a continuous firing-rate model and a conductance-based leaky integrate-and-fire neuron. An approximation to the mean output rate of the firing-rate neuron could be derived analytically with the help of shot noise theory. Interestingly, the essential features of the mean response of the two neuron models were similar. For both neuron models, the three input parameters played radically different roles with respect to the postsynaptic firing rate, depending on the interaction structure of the input. For instance, in the case of an ensemble with small and distributed spike clusters, the output firing rate was efficiently controlled by the size of the input population. In addition to the interaction structure, the ratio of inhibition to excitation was found to strongly modulate the effect of correlation on the postsynaptic firing rate.
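The following hedged sketch illustrates the kind of construction described above, loosely in the spirit of common "single interaction" versus "multiple interaction" population models (the paper's exact ensembles may differ): both populations have the same rate and approximately the same pairwise count correlation, but all-neuron synchrony is frequent in one and essentially absent in the other.

```python
# Hedged sketch: equal rates and pairwise correlations, different higher-order structure.
import numpy as np

rng = np.random.default_rng(3)
dt, T, N = 1e-3, 100.0, 50
n = int(T / dt)
rate, rho = 10.0, 0.1                       # Hz, target pairwise count correlation

# Model A ("global clusters"): common spikes shared by ALL neurons plus private spikes.
common = rng.random(n) < rho * rate * dt
popA = common[:, None] | (rng.random((n, N)) < (1 - rho) * rate * dt)

# Model B ("distributed clusters"): each neuron copies mother spikes with probability rho.
mother = rng.random(n) < (rate / rho) * dt
popB = mother[:, None] & (rng.random((n, N)) < rho)

for name, pop in [("global clusters", popA), ("distributed clusters", popB)]:
    counts = pop.reshape(-1, int(0.1 / dt), N).sum(axis=1)   # counts in 100 ms windows
    c = np.corrcoef(counts.T)
    off_diag = c[~np.eye(N, dtype=bool)]
    full_sync = (pop.sum(axis=1) == N).mean() / dt           # rate of all-N spike events
    print(f"{name}: rate={pop.mean()/dt:.1f} Hz, pairwise corr~{off_diag.mean():.2f}, "
          f"all-neuron synchrony rate~{full_sync:.3f} Hz")
```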

6.
As multi-electrode and imaging technology begin to provide us with simultaneous recordings of large neuronal populations, new methods for modelling such data must also be developed. We present a model of responses to repeated trials of a sensory stimulus based on thresholded Gaussian processes that allows for analysis and modelling of variability and covariability of population spike trains across multiple time scales. The model framework can be used to specify the values of many different variability measures including spike timing precision across trials, coefficient of variation of the interspike interval distribution, and Fano factor of spike counts for individual neurons, as well as signal and noise correlations and correlations of spike counts across multiple neurons. Using both simulated data and data from different stages of the mammalian auditory pathway, we demonstrate the range of possible independent manipulations of different variability measures, and explore how this range depends on the sensory stimulus. The model provides a powerful framework for the study of experimental and surrogate data and for analyzing dependencies between different statistical properties of neuronal populations.
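A minimal, hedged sketch of the thresholded-Gaussian-process idea (covariance choices and parameters are illustrative, not the paper's): a slow signal process shared across trials plus fast trial-specific noise is thresholded, upward crossings are treated as spikes, and variability measures such as the Fano factor of the counts can then be read off directly.

```python
# Hedged sketch: spikes as threshold crossings of a latent Gaussian process.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(4)
dt, T, n_trials = 1e-3, 2.0, 100
n = int(T / dt)
theta = 1.5                                                # threshold on the latent process

signal = gaussian_filter1d(rng.standard_normal(n), sigma=50)   # slow shared "signal" process
signal /= signal.std()

spikes = np.zeros((n_trials, n), dtype=bool)
for tr in range(n_trials):
    noise = gaussian_filter1d(rng.standard_normal(n), sigma=5)  # fast private "noise" process
    latent = 0.7 * signal + 0.7 * (noise / noise.std())
    above = latent > theta
    spikes[tr] = above & ~np.roll(above, 1)                # keep upward crossings only

counts = spikes.sum(axis=1)
print(f"mean count = {counts.mean():.1f}, Fano factor = {counts.var() / counts.mean():.2f}")
# The noise timescale (sigma above) sets spike-timing precision across trials;
# the threshold and the latent variance set the firing rate.
```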

7.
Correlations between neuronal spike trains affect network dynamics and population coding. Overlapping afferent populations and correlations between presynaptic spike trains introduce correlations between the inputs to downstream cells. To understand network activity and population coding, it is therefore important to understand how these input correlations are transferred to output correlations. Recent studies have addressed this question in the limit of many inputs with infinitesimal postsynaptic response amplitudes, where the total input can be approximated by gaussian noise. In contrast, we address the problem of correlation transfer by representing input spike trains as point processes, with each input spike eliciting a finite postsynaptic response. This approach allows us to naturally model synaptic noise and recurrent coupling and to treat excitatory and inhibitory inputs separately. We derive several new results that provide intuitive insights into the fundamental mechanisms that modulate the transfer of spiking correlations.

8.
Lüdtke N, Nelson ME. Neural Computation, 2006, 18(12): 2879-2916.
We study the encoding of weak signals in spike trains with interspike interval (ISI) correlations and the signals' subsequent detection in sensory neurons. Motivated by the observation of negative ISI correlations in auditory and electrosensory afferents, we assess the theoretical performance limits of an individual detector neuron receiving a weak signal distributed across multiple afferent inputs. We assess the functional role of ISI correlations in the detection process using statistical detection theory and derive two sequential likelihood ratio detector models: one for afferents with renewal statistics; the other for afferents with negatively correlated ISIs. We suggest a mechanism that might enable sensory neurons to implicitly compute conditional probabilities of presynaptic spikes by means of short-term synaptic plasticity. We demonstrate how this mechanism can enhance a postsynaptic neuron's sensitivity to weak signals by exploiting the correlation structure of the input spike trains. Our model not only captures fundamental aspects of early electrosensory signal processing in weakly electric fish, but may also bear relevance to the mammalian auditory system and other sensory modalities.
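As a hedged illustration of the renewal-statistics branch of this approach (not the paper's code), the sketch below runs a sequential log-likelihood-ratio test on gamma-distributed interspike intervals to decide between a baseline rate and a slightly elevated "signal" rate. The shape parameter, rates, and error bounds are assumptions; the negatively correlated ISI case would require conditional ISI densities instead.

```python
# Hedged sketch: sequential likelihood-ratio detection of a weak rate increase from ISIs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
shape = 4.0
rate0, rate1 = 100.0, 110.0                 # baseline vs weak-signal firing rates (Hz)
scale0, scale1 = 1.0 / (shape * rate0), 1.0 / (shape * rate1)
A, B = np.log(19), np.log(1 / 19)           # SPRT bounds for roughly 5% error rates

def sprt(isis):
    llr = 0.0
    for k, x in enumerate(isis, 1):
        llr += stats.gamma.logpdf(x, shape, scale=scale1) \
             - stats.gamma.logpdf(x, shape, scale=scale0)
        if llr >= A:
            return "signal", k
        if llr <= B:
            return "no signal", k
    return "undecided", len(isis)

# Draw ISIs from the signal-present condition and see how many are needed to decide.
isis = stats.gamma.rvs(shape, scale=scale1, size=2000, random_state=rng)
decision, n_isis = sprt(isis)
print(f"decision: {decision} after {n_isis} ISIs (~{n_isis / rate1 * 1000:.0f} ms)")
```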

9.
10.
Koyama S, Kass RE. Neural Computation, 2008, 20(7): 1776-1795.
Mathematical models of neurons are widely used to improve understanding of neuronal spiking behavior. These models can produce artificial spike trains that resemble actual spike train data in important ways, but they are not very easy to apply to the analysis of spike train data. Instead, statistical methods based on point process models of spike trains provide a wide range of data-analytical techniques. Two simplified point process models have been introduced in the literature: the time-rescaled renewal process (TRRP) and the multiplicative inhomogeneous Markov interval (m-IMI) model. In this letter we investigate the extent to which the TRRP and m-IMI models are able to fit spike trains produced by stimulus-driven leaky integrate-and-fire (LIF) neurons. With a constant stimulus, the LIF spike train is a renewal process, and the m-IMI and TRRP models will describe accurately the LIF spike train variability. With a time-varying stimulus, the probability of spiking under all three of these models depends on both the experimental clock time relative to the stimulus and the time since the previous spike, but it does so differently for the LIF, m-IMI, and TRRP models. We assessed the distance between the LIF model and each of the two empirical models in the presence of a time-varying stimulus. We found that while lack of fit of a Poisson model to LIF spike train data can be evident even in small samples, the m-IMI and TRRP models tend to fit well, and much larger samples are required before there is statistical evidence of lack of fit of the m-IMI or TRRP models. We also found that when the mean of the stimulus varies across time, the m-IMI model provides a better fit to the LIF data than the TRRP, and when the variance of the stimulus varies across time, the TRRP provides the better fit.

11.
Jackson BS. Neural Computation, 2004, 16(10): 2125-2195.
Many different types of integrate-and-fire models have been designed in order to explain how it is possible for a cortical neuron to integrate over many independent inputs while still producing highly variable spike trains. Within this context, the variability of spike trains has been almost exclusively measured using the coefficient of variation of interspike intervals. However, another important statistical property that has been found in cortical spike trains and is closely associated with their high firing variability is long-range dependence. We investigate the conditions, if any, under which such models produce output spike trains with both interspike-interval variability and long-range dependence similar to those that have previously been measured from actual cortical neurons. We first show analytically that a large class of high-variability integrate-and-fire models is incapable of producing such outputs based on the fact that their output spike trains are always mathematically equivalent to renewal processes. This class of models subsumes a majority of previously published models, including those that use excitation-inhibition balance, correlated inputs, partial reset, or nonlinear leakage to produce outputs with high variability. Next, we study integrate-and-fire models that have (non-Poissonian) renewal point process inputs instead of the Poisson point process inputs used in the preceding class of models. The confluence of our analytical and simulation results implies that the renewal-input model is capable of producing high variability and long-range dependence comparable to that seen in spike trains recorded from cortical neurons, but only if the interspike intervals of the inputs have infinite variance, a physiologically unrealistic condition. Finally, we suggest a new integrate-and-fire model that does not suffer any of the previously mentioned shortcomings. By analyzing simulation results for this model, we show that it is capable of producing output spike trains with interspike-interval variability and long-range dependence that match empirical data from cortical spike trains. This model is similar to the other models in this study, except that its inputs are fractional-gaussian-noise-driven Poisson processes rather than renewal point processes. In addition to this model's success in producing realistic output spike trains, its inputs have long-range dependence similar to that found in most subcortical neurons in sensory pathways, including the inputs to cortex. Analysis of output spike trains from simulations of this model also shows that a tight balance between the amounts of excitation and inhibition at the inputs to cortical neurons is not necessary for high interspike-interval variability at their outputs. Furthermore, in our analysis of this model, we show that the superposition of many fractional-gaussian-noise-driven Poisson processes does not approximate a Poisson process, which challenges the common assumption that the total effect of a large number of inputs on a neuron is well represented by a Poisson process.

12.
We examine a cascade encoding model for neural response in which a linear filtering stage is followed by a noisy, leaky, integrate-and-fire spike generation mechanism. This model provides a biophysically more realistic alternative to models based on Poisson (memoryless) spike generation, and can effectively reproduce a variety of spiking behaviors seen in vivo. We describe the maximum likelihood estimator for the model parameters, given only extracellular spike train responses (not intracellular voltage data). Specifically, we prove that the log-likelihood function is concave and thus has an essentially unique global maximum that can be found using gradient ascent techniques. We develop an efficient algorithm for computing the maximum likelihood solution, demonstrate the effectiveness of the resulting estimator with numerical simulations, and discuss a method of testing the model's validity using time-rescaling and density evolution techniques.
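For orientation, here is a hedged sketch of the forward (encoding) model only: a linear stimulus filter followed by a noisy, leaky integrate-and-fire spike generator. The filter shape and all constants are illustrative assumptions; the maximum-likelihood estimation of these parameters from spike trains, which is the letter's contribution, is not reproduced here.

```python
# Hedged sketch of the cascade forward model: linear filter -> noisy leaky IAF spiking.
import numpy as np

rng = np.random.default_rng(6)
dt, T = 1e-3, 5.0
n = int(T / dt)
stim = rng.standard_normal(n)

# Assumed (illustrative) biphasic stimulus filter, 150 ms long.
t_k = np.arange(0, 0.150, dt)
k_filt = np.exp(-t_k / 0.02) - 0.5 * np.exp(-t_k / 0.06)
drive = np.convolve(stim, k_filt)[:n]

tau_m, v_reset, v_thresh, sigma = 0.02, 0.0, 1.0, 0.1       # membrane and noise parameters
v, spikes = 0.0, []
for i in range(n):
    v += (-v + drive[i]) * dt / tau_m + sigma * np.sqrt(dt) * rng.standard_normal()
    if v >= v_thresh:                    # threshold crossing -> spike, then reset
        spikes.append(i * dt)
        v = v_reset

print(f"{len(spikes)} spikes in {T:.0f} s; mean rate = {len(spikes) / T:.1f} Hz")
```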

13.
One of the central problems in systems neuroscience is to understand how neural spike trains convey sensory information. Decoding methods, which provide an explicit means for reading out the information contained in neural spike responses, offer a powerful set of tools for studying the neural coding problem. Here we develop several decoding methods based on point-process neural encoding models, or forward models that predict spike responses to stimuli. These models have concave log-likelihood functions, which allow efficient maximum-likelihood model fitting and stimulus decoding. We present several applications of the encoding model framework to the problem of decoding stimulus information from population spike responses: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus, the most probable stimulus to have generated an observed single- or multiple-neuron spike train response, given some prior distribution over the stimulus; (2) a gaussian approximation to the posterior stimulus distribution that can be used to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the spike trains emitted by a neural population; and (4) a framework for the detection of change-point times (the time at which the stimulus undergoes a change in mean or variance) by marginalizing over the posterior stimulus distribution. We provide several examples illustrating the performance of these estimators with simulated and real neural data.
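The sketch below is a hedged, simplified stand-in for application (1): MAP decoding of a time-varying stimulus from population spike counts, here under an instantaneous exponential-Poisson encoding model and a Gaussian smoothness prior rather than the paper's full point-process models. Because the log-likelihood is concave in the stimulus, a generic optimizer finds the MAP estimate; the encoding model, prior, and all constants are assumptions.

```python
# Hedged sketch: MAP stimulus decoding with a Poisson-count encoding model and Gaussian prior.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
n_bins, dt, n_cells = 200, 0.01, 40
t = np.arange(n_bins) * dt

# Gaussian prior with squared-exponential covariance (smooth stimuli are likelier).
C = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 0.1) ** 2) + 1e-4 * np.eye(n_bins)
C_inv = np.linalg.inv(C)

x_true = np.linalg.cholesky(C) @ rng.standard_normal(n_bins)         # sample a stimulus
gains = rng.uniform(0.5, 1.5, n_cells) * np.sign(rng.standard_normal(n_cells))
lam = np.exp(gains[None, :] * x_true[:, None] + 1.0)                  # firing rates (Hz)
y = rng.poisson(lam * dt)                                             # observed spike counts

def neg_log_posterior(x):
    lam_hat = np.exp(gains[None, :] * x[:, None] + 1.0)
    ll = np.sum(y * np.log(lam_hat * dt) - lam_hat * dt)              # Poisson log-likelihood
    grad_ll = ((y - lam_hat * dt) * gains[None, :]).sum(axis=1)
    return -(ll - 0.5 * x @ C_inv @ x), -(grad_ll - C_inv @ x)

res = minimize(neg_log_posterior, np.zeros(n_bins), jac=True, method="L-BFGS-B")
x_map = res.x
print("correlation between MAP estimate and true stimulus:",
      round(float(np.corrcoef(x_map, x_true)[0, 1]), 2))
```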

14.
A widely used signal processing paradigm is the state-space model. The state-space model is defined by two equations: an observation equation that describes how the hidden state or latent process is observed and a state equation that defines the evolution of the process through time. Inspired by neurophysiology experiments in which neural spiking activity is induced by an implicit (latent) stimulus, we develop an algorithm to estimate a state-space model observed through point process measurements. We represent the latent process modulating the neural spiking activity as a gaussian autoregressive model driven by an external stimulus. Given the latent process, neural spiking activity is characterized as a general point process defined by its conditional intensity function. We develop an approximate expectation-maximization (EM) algorithm to estimate the unobservable state-space process, its parameters, and the parameters of the point process. The EM algorithm combines a point process recursive nonlinear filter algorithm, the fixed interval smoothing algorithm, and the state-space covariance algorithm to compute the complete data log likelihood efficiently. We use a Kolmogorov-Smirnov test based on the time-rescaling theorem to evaluate agreement between the model and point process data. We illustrate the model with two simulated data examples: an ensemble of Poisson neurons driven by a common stimulus and a single neuron whose conditional intensity function is approximated as a local Bernoulli process.
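The filtering step at the heart of such an EM algorithm can be sketched compactly. The hedged example below implements a standard Gaussian-approximation point-process filter for a latent AR(1) state driving an exponential conditional intensity; the state model, intensity form, and constants are illustrative assumptions rather than the article's exact specification.

```python
# Hedged sketch: recursive point-process filter for a latent AR(1) state,
# lambda_k = exp(mu + beta * x_k). Predict with the state equation, correct with spikes.
import numpy as np

rng = np.random.default_rng(8)
n, dt = 5000, 1e-3
rho, sig_w, mu, beta = 0.999, 0.02, np.log(20.0), 1.0     # assumed state/observation model

# Simulate a latent state and spike counts from the model.
x = np.zeros(n)
for k in range(1, n):
    x[k] = rho * x[k - 1] + sig_w * rng.standard_normal()
dN = rng.poisson(np.exp(mu + beta * x) * dt)               # spike counts per bin

x_f, v_f = np.zeros(n), np.zeros(n)
x_post, v_post = 0.0, 1.0
for k in range(n):
    x_pred = rho * x_post                                  # one-step prediction
    v_pred = rho ** 2 * v_post + sig_w ** 2
    lam = np.exp(mu + beta * x_pred)                       # predicted intensity
    v_post = 1.0 / (1.0 / v_pred + beta ** 2 * lam * dt)   # posterior variance
    x_post = x_pred + v_post * beta * (dN[k] - lam * dt)   # posterior mean
    x_f[k], v_f[k] = x_post, v_post

rmse = np.sqrt(np.mean((x_f - x) ** 2))
print(f"filter RMSE = {rmse:.3f}, latent-state std = {x.std():.3f}")
```

In the full EM procedure, this forward pass would be followed by a fixed-interval smoother and a parameter update (M-step), and the fitted model would then be checked with the time-rescaling Kolmogorov-Smirnov test described above.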

15.
Neurons in sensory systems convey information about physical stimuli in their spike trains. In vitro, single neurons respond precisely and reliably to the repeated injection of the same fluctuating current, producing regions of elevated firing rate, termed events. Analysis of these spike trains reveals that multiple distinct spike patterns can be identified as trial-to-trial correlations between spike times (Fellous, Tiesinga, Thomas, & Sejnowski, 2004). Finding events in data with realistic spiking statistics is challenging because events belonging to different spike patterns may overlap. We propose a method for finding spiking events that uses contextual information to disambiguate which pattern a trial belongs to. The procedure can be applied to spike trains of the same neuron across multiple trials to detect and separate responses obtained during different brain states. The procedure can also be applied to spike trains from multiple simultaneously recorded neurons in order to identify volleys of near-synchronous activity or to distinguish between excitatory and inhibitory neurons. The procedure was tested using artificial data as well as recordings in vitro in response to fluctuating current waveforms.

16.
A rate code assumes that a neuron's response is completely characterized by its time-varying mean firing rate. This assumption has successfully described neural responses in many systems. The noise in rate coding neurons can be quantified by the coherence function or the correlation coefficient between the neuron's deterministic time-varying mean rate and noise corrupted single spike trains. Because of the finite data size, the mean rate cannot be known exactly and must be approximated. We introduce novel unbiased estimators for the measures of coherence and correlation which are based on the extrapolation of the signal to noise ratio in the neural response to infinite data size. We then describe the application of these estimates to the validation of the class of stimulus-response models that assume that the mean firing rate captures all the information embedded in the neural response. We explain how these quantifiers can be used to separate response prediction errors that are due to inaccurate model assumptions from errors due to noise inherent in neuronal spike trains.

17.
Information encoding and computation with spikes and bursts
Neurons compute and communicate by transforming synaptic input patterns into output spike trains. The nature of this transformation depends crucially on the properties of voltage-gated conductances in neuronal membranes. These intrinsic membrane conductances can enable neurons to generate different spike patterns including brief, high-frequency bursts that are commonly observed in a variety of brain regions. Here we examine how the membrane conductances that generate bursts affect neural computation and encoding. We simulated a bursting neuron model driven by a random current input signal and superposed noise. We consider two issues: the timing reliability of different spike patterns and the computation performed by the neuron. Statistical analysis of the simulated spike trains shows that the timing of bursts is much more precise than the timing of single spikes. Furthermore, the number of spikes per burst is highly robust to noise. Next we considered the computation performed by the neuron: how different features of the input current are mapped into specific output spike patterns. Dimensional reduction and statistical classification techniques were used to determine the stimulus features triggering different firing patterns. Our main result is that spikes, and bursts of different durations, code for different stimulus features, which can be quantified without a priori assumptions about those features. These findings lead us to propose that the biophysical mechanisms of spike generation enable individual neurons to encode different stimulus features into distinct spike patterns.

18.
A simple model of spike generation is described that gives rise to negative correlations in the interspike interval (ISI) sequence and leads to long-term spike train regularization. This regularization can be seen by examining the variance of the kth-order interval distribution for large k (the times between spike i and spike i + k). The variance is much smaller than would be expected if successive ISIs were uncorrelated. Such regularizing effects have been observed in the spike trains of electrosensory afferent nerve fibers and can lead to dramatic improvement in the detectability of weak signals encoded in the spike train data (Ratnam & Nelson, 2000). Here, we present a simple neural model in which negative ISI correlations and long-term spike train regularization arise from refractory effects associated with a dynamic spike threshold. Our model is derived from a more detailed model of electrosensory afferent dynamics developed recently by other investigators (Chacron, Longtin, St.-Hilaire, & Maler, 2000; Chacron, Longtin, & Maler, 2001). The core of this model is a dynamic spike threshold that is transiently elevated following a spike and subsequently decays until the next spike is generated. Here, we present a simplified version, the linear adaptive threshold model, that contains a single state variable and three free parameters that control the mean and coefficient of variation of the spontaneous ISI distribution and the frequency characteristics of the driven response. We show that refractory effects associated with the dynamic threshold lead to regularization of the spike train on long timescales. Furthermore, we show that this regularization enhances the detectability of weak signals encoded by the linear adaptive threshold model. Although inspired by properties of electrosensory afferent nerve fibers, such regularizing effects may play an important role in other neural systems where weak signals must be reliably detected in noisy spike trains. When modeling a neuronal system that exhibits this type of ISI correlation structure, the linear adaptive threshold model may provide a more appropriate starting point than conventional renewal process models that lack long-term regularizing effects.
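A hedged sketch of a model in this spirit (one state variable and a handful of constants, all illustrative; the decay law and parameter values in the paper may differ): the threshold jumps after each spike and relaxes between spikes, which produces the negative lag-1 ISI correlation underlying the long-term regularization.

```python
# Hedged sketch: dynamic-threshold spike generator with negative serial ISI correlations.
import numpy as np

rng = np.random.default_rng(9)
dt, T = 1e-4, 20.0
n = int(T / dt)
decay, increment, noise_sd = 40.0, 1.0, 0.3   # threshold relaxation rate (1/s), jump, input noise

theta, spikes = 1.0, []
for k in range(n):
    x = 1.0 + noise_sd * rng.standard_normal()   # constant drive plus noise
    if x >= theta:
        spikes.append(k * dt)
        theta += increment                       # refractory-like threshold jump after a spike
    theta -= decay * theta * dt                  # threshold relaxes between spikes

isi = np.diff(spikes)
serial_corr = np.corrcoef(isi[:-1], isi[1:])[0, 1]
print(f"CV = {isi.std() / isi.mean():.2f}, lag-1 ISI correlation = {serial_corr:.2f}")
# Long intervals let the threshold decay further, so the next interval tends to be
# short (and vice versa): the signature of spike train regularization.
```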

19.
Niebur E. Neural Computation, 2007, 19(7): 1720-1738.
Recent technological advances as well as progress in theoretical understanding of neural systems have created a need for synthetic spike trains with controlled mean rate and pairwise cross-correlation. This report introduces and analyzes a novel algorithm for the generation of discretized spike trains with arbitrary mean rates and controlled cross-correlation. Pairs of spike trains with any pairwise correlation can be generated, and higher-order correlations are compatible with common synaptic input. Relations between allowable mean rates and correlations within a population are discussed. The algorithm is highly efficient, its complexity increasing linearly with the number of spike trains generated and therefore inversely with the number of cross-correlated pairs.
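The hedged sketch below shows one common way to generate discretized spike trains with given mean rates and a target pairwise correlation, by mixing a shared "source" train with private spikes. It illustrates the goal of such algorithms; the specific algorithm analyzed in the report may differ in construction and generality.

```python
# Hedged sketch: a pair of binary spike trains with specified rates and approximately
# the requested bin-wise correlation, built from a shared source plus private spikes.
import numpy as np

rng = np.random.default_rng(10)

def correlated_pair(rate1, rate2, rho, dt, n_bins, rng):
    """Two binary spike trains with rates rate1/rate2 (Hz) and bin-wise correlation ~rho."""
    p1, p2 = rate1 * dt, rate2 * dt
    p_common = rho * np.sqrt(p1 * p2)            # shared-source probability per bin
    if p_common > min(p1, p2):
        raise ValueError("requested correlation too high for these rates")
    z = rng.random(n_bins) < p_common            # shared source train
    x1 = z | (rng.random(n_bins) < (p1 - p_common) / (1 - p_common))   # add private spikes
    x2 = z | (rng.random(n_bins) < (p2 - p_common) / (1 - p_common))
    return x1, x2

dt, n_bins = 1e-3, 2_000_000
x1, x2 = correlated_pair(30.0, 60.0, 0.2, dt, n_bins, rng)
print("rates (Hz):", round(x1.mean() / dt, 1), round(x2.mean() / dt, 1))
print("bin-wise correlation:", round(float(np.corrcoef(x1, x2)[0, 1]), 3))
```

The approximation is good for small per-bin spike probabilities; for high rates or coarse bins, the exact joint probabilities would need to be solved for instead.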

20.
Masuda N, Aihara K. Neural Computation, 2002, 14(7): 1599-1628.
Interspike intervals of spikes emitted from an integrator neuron model of sensory neurons can encode input information represented as a continuous signal from a deterministic system. If a real brain uses spike timing as a means of information processing, other neurons receiving spatiotemporal spikes from such sensory neurons must also be capable of treating information included in deterministic interspike intervals. In this article, we examine functions of neurons modeling cortical neurons receiving spatiotemporal spikes from many sensory neurons. We show that such neuron models can encode stimulus information passed from the sensory model neurons in the form of interspike intervals. Each sensory neuron connected to the cortical neuron contributes equally to the information collection by the cortical neuron. Although the incident spike train to the cortical neuron is a superimposition of spike trains from many sensory neurons, it need not be decomposed into spike trains according to the input neurons. These results are also preserved for generalizations of sensory neurons such as a small amount of leak, noise, inhomogeneity in firing rates, or biases introduced in the phase distributions.
