Similar Documents
 20 similar documents retrieved (search time: 31 ms)
1.
A simple model of spike generation is described that gives rise to negative correlations in the interspike interval (ISI) sequence and leads to long-term spike train regularization. This regularization can be seen by examining the variance of the kth-order interval distribution for large k (the times between spike i and spike i + k). The variance is much smaller than would be expected if successive ISIs were uncorrelated. Such regularizing effects have been observed in the spike trains of electrosensory afferent nerve fibers and can lead to dramatic improvement in the detectability of weak signals encoded in the spike train data (Ratnam & Nelson, 2000). Here, we present a simple neural model in which negative ISI correlations and long-term spike train regularization arise from refractory effects associated with a dynamic spike threshold. Our model is derived from a more detailed model of electrosensory afferent dynamics developed recently by other investigators (Chacron, Longtin, St.-Hilaire, & Maler, 2000; Chacron, Longtin, & Maler, 2001). The core of this model is a dynamic spike threshold that is transiently elevated following a spike and subsequently decays until the next spike is generated. Here, we present a simplified version, the linear adaptive threshold model, which contains a single state variable and three free parameters that control the mean and coefficient of variation of the spontaneous ISI distribution and the frequency characteristics of the driven response. We show that refractory effects associated with the dynamic threshold lead to regularization of the spike train on long timescales. Furthermore, we show that this regularization enhances the detectability of weak signals encoded by the linear adaptive threshold model. Although inspired by properties of electrosensory afferent nerve fibers, such regularizing effects may play an important role in other neural systems where weak signals must be reliably detected in noisy spike trains. When modeling a neuronal system that exhibits this type of ISI correlation structure, the linear adaptive threshold model may provide a more appropriate starting point than conventional renewal process models that lack long-term regularizing effects.
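A minimal sketch of this kind of adaptive-threshold spike generator (illustrative parameters, not the authors' implementation) shows how a threshold that jumps after each spike and decays back toward baseline produces negative serial ISI correlations:

```python
import numpy as np

# Minimal sketch of an adaptive-threshold spike generator (illustrative parameters,
# not the authors' implementation). The threshold jumps by delta_theta after each
# spike and decays back toward its baseline theta0 with time constant tau_theta.
def simulate_adaptive_threshold(n_steps=200_000, dt=0.1, theta0=1.0,
                                delta_theta=0.5, tau_theta=20.0,
                                mean_drive=1.0, noise_sd=0.3, seed=0):
    rng = np.random.default_rng(seed)
    theta, spike_times = theta0, []
    for i in range(n_steps):
        v = mean_drive + noise_sd * rng.standard_normal()  # noisy input sample
        if v >= theta:                                      # spike when drive crosses threshold
            spike_times.append(i * dt)
            theta += delta_theta                            # transient threshold elevation
        theta += dt * (theta0 - theta) / tau_theta          # linear decay toward baseline
    return np.asarray(spike_times)

isi = np.diff(simulate_adaptive_threshold())
rho1 = np.corrcoef(isi[:-1], isi[1:])[0, 1]   # lag-1 serial ISI correlation
print(f"mean ISI = {isi.mean():.2f}, CV = {isi.std() / isi.mean():.2f}, lag-1 corr = {rho1:.2f}")
```

A negative lag-1 coefficient is the fingerprint of the long-term regularization described above: short intervals tend to be followed by long ones, so spike counts over long windows fluctuate less than they would for a renewal process.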

2.
As multi-electrode and imaging technology begin to provide us with simultaneous recordings of large neuronal populations, new methods for modelling such data must also be developed. We present a model of responses to repeated trials of a sensory stimulus based on thresholded Gaussian processes that allows for analysis and modelling of variability and covariability of population spike trains across multiple time scales. The model framework can be used to specify the values of many different variability measures including spike timing precision across trials, coefficient of variation of the interspike interval distribution, and Fano factor of spike counts for individual neurons, as well as signal and noise correlations and correlations of spike counts across multiple neurons. Using both simulated data and data from different stages of the mammalian auditory pathway, we demonstrate the range of possible independent manipulations of different variability measures, and explore how this range depends on the sensory stimulus. The model provides a powerful framework for the study of experimental and surrogate data and for analyzing dependencies between different statistical properties of neuronal populations.
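A toy version of the thresholded-Gaussian-process idea (illustrative kernel, stimulus, and threshold; not the fitted model from the paper) generates repeated-trial spike trains and reads off one of the variability measures, the Fano factor of the spike counts:

```python
import numpy as np

# Toy thresholded-Gaussian-process model of repeated trials (illustrative kernel,
# stimulus, and threshold; not the model fit in the paper).
rng = np.random.default_rng(1)
n_trials, n_bins, dt = 200, 400, 0.001                 # 200 trials, 400 ms at 1 ms resolution
t = np.arange(n_bins) * dt
tau = 0.01                                             # latent-process correlation time (s)
K = np.exp(-np.abs(t[:, None] - t[None, :]) / tau)     # exponential covariance kernel
L = np.linalg.cholesky(K + 1e-9 * np.eye(n_bins))
signal = 1.5 * np.sin(2 * np.pi * 5 * t)               # shared stimulus drive across trials
threshold = 2.0

spikes = np.empty((n_trials, n_bins), dtype=int)
for r in range(n_trials):
    latent = signal + L @ rng.standard_normal(n_bins)  # trial-specific Gaussian-process draw
    spikes[r] = latent > threshold                     # spike wherever the latent crosses threshold

counts = spikes.sum(axis=1)
print("mean count =", counts.mean(), " Fano factor =", counts.var() / counts.mean())
```

Changing the kernel, the threshold, or the amount of shared versus private variance moves the different variability measures in the way the abstract describes.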

3.
Masuda N, Aihara K. Neural Computation, 2002, 14(7): 1599-1628.
Interspike intervals of spikes emitted from an integrator neuron model of sensory neurons can encode input information represented as a continuous signal from a deterministic system. If a real brain uses spike timing as a means of information processing, other neurons receiving spatiotemporal spikes from such sensory neurons must also be capable of treating information included in deterministic interspike intervals. In this article, we examine functions of neurons modeling cortical neurons receiving spatiotemporal spikes from many sensory neurons. We show that such neuron models can encode stimulus information passed from the sensory model neurons in the form of interspike intervals. Each sensory neuron connected to the cortical neuron contributes equally to the information collection by the cortical neuron. Although the incident spike train to the cortical neuron is a superimposition of spike trains from many sensory neurons, it need not be decomposed into spike trains according to the input neurons. These results are also preserved for generalizations of sensory neurons such as a small amount of leak, noise, inhomogeneity in firing rates, or biases introduced in the phase distributions.
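The integrator encoding described here can be illustrated with a simple integrate-and-fire time encoder (hypothetical parameters): each ISI approximately satisfies the condition that the integral of the input over the interval equals the threshold, so the ISI sequence carries the continuous signal:

```python
import numpy as np

# Minimal integrate-and-fire time encoder (hypothetical parameters): a spike is
# emitted each time the running integral of the input reaches the threshold theta,
# so each ISI approximately satisfies  integral of s(t) over the ISI = theta.
def iaf_encode(signal, dt, theta):
    spikes, acc = [], 0.0
    for i, s in enumerate(signal):
        acc += s * dt
        if acc >= theta:
            spikes.append(i * dt)
            acc -= theta                      # reset by subtraction keeps the residual charge
    return np.asarray(spikes)

dt = 1e-4
t = np.arange(0, 2.0, dt)
stimulus = 1.0 + 0.5 * np.sin(2 * np.pi * 3 * t)      # positive, slowly varying input
isi = np.diff(iaf_encode(stimulus, dt, theta=0.02))
# Short ISIs mark epochs of large input, so the ISI sequence carries the signal.
print(f"{isi.size + 1} spikes; ISI range {isi.min():.4f}-{isi.max():.4f} s")
```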

4.
Neurons in sensory systems convey information about physical stimuli in their spike trains. In vitro, single neurons respond precisely and reliably to the repeated injection of the same fluctuating current, producing regions of elevated firing rate, termed events. Analysis of these spike trains reveals that multiple distinct spike patterns can be identified as trial-to-trial correlations between spike times (Fellous, Tiesinga, Thomas, & Sejnowski, 2004). Finding events in data with realistic spiking statistics is challenging because events belonging to different spike patterns may overlap. We propose a method for finding spiking events that uses contextual information to disambiguate which pattern a trial belongs to. The procedure can be applied to spike trains of the same neuron across multiple trials to detect and separate responses obtained during different brain states. The procedure can also be applied to spike trains from multiple simultaneously recorded neurons in order to identify volleys of near-synchronous activity or to distinguish between excitatory and inhibitory neurons. The procedure was tested using artificial data as well as recordings in vitro in response to fluctuating current waveforms.

5.
Correlated neural activity has been observed at various signal levels (e.g., spike count, membrane potential, local field potential, EEG, fMRI BOLD). Most of these signals can be considered as superpositions of spike trains filtered by components of the neural system (synapses, membranes) and the measurement process. It is largely unknown how the spike train correlation structure is altered by this filtering and what the consequences for the dynamics of the system and for the interpretation of measured correlations are. In this study, we focus on linearly filtered spike trains and particularly consider correlations caused by overlapping presynaptic neuron populations. We demonstrate that correlation functions and statistical second-order measures like the variance, the covariance, and the correlation coefficient generally exhibit a complex dependence on the filter properties and the statistics of the presynaptic spike trains. We point out that both contributions can play a significant role in modulating the interaction strength between neurons or neuron populations. In many applications, the coherence allows a filter-independent quantification of correlated activity. In different network models, we discuss the estimation of network connectivity from the high-frequency coherence of simultaneous intracellular recordings of pairs of neurons.

6.
Synchronous firing limits the amount of information that can be extracted by averaging the firing rates of similarly tuned neurons. Here, we show that the loss of such rate-coded information due to synchronous oscillations between retinal ganglion cells can be overcome by exploiting the information encoded by the correlations themselves. Two very different models, one based on axon-mediated inhibitory feedback and the other on oscillatory common input, were used to generate artificial spike trains whose synchronous oscillations were similar to those measured experimentally. Pooled spike trains were summed into a threshold detector whose output was classified using Bayesian discrimination. For a threshold detector with short summation times, realistic oscillatory input yielded superior discrimination of stimulus intensity compared to rate-matched Poisson controls. Even for summation times too long to resolve synchronous inputs, gamma band oscillations still contributed to improved discrimination by reducing the total spike count variability, or Fano factor. In separate experiments in which neurons were synchronized in a stimulus-dependent manner without attendant oscillations, the Fano factor increased markedly with stimulus intensity, implying that stimulus-dependent oscillations can offset the increased variability due to synchrony alone.

7.
It remains unclear whether the variability of neuronal spike trains in vivo arises due to biological noise sources or represents highly precise encoding of temporally varying synaptic input signals. Determining the variability of spike timing can provide fundamental insights into the nature of strategies used in the brain to represent and transmit information in the form of discrete spike trains. In this study, we employ a signal estimation paradigm to determine how variability in spike timing affects encoding of random time-varying signals. We assess this for two types of spiking models: an integrate-and-fire model with random threshold and a more biophysically realistic stochastic ion channel model. Using the coding fraction and mutual information as information-theoretic measures, we quantify the efficacy of optimal linear decoding of random inputs from the model outputs and study the relationship between efficacy and variability in the output spike train. Our findings suggest that variability does not necessarily hinder signal decoding for the biophysically plausible encoders examined and that the functional role of spiking variability depends intimately on the nature of the encoder and the signal processing task; variability can either enhance or impede decoding performance.
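A sketch of the signal-estimation paradigm under simple assumptions (a leaky integrate-and-fire encoder with a noisy random threshold, decoded by a linear filter fit with least squares and evaluated on held-out data; parameters are illustrative, and the stochastic ion-channel encoder of the paper is not reproduced):

```python
import numpy as np

# Sketch of the signal-estimation paradigm: a leaky integrate-and-fire encoder with a
# noisy (random) threshold, decoded by a linear filter fit with least squares on the
# first half of the data and evaluated on the second half. Parameters illustrative.
rng = np.random.default_rng(2)
dt, n = 1e-3, 40_000
f = np.fft.rfftfreq(n, dt)
stim = np.fft.irfft(np.fft.rfft(rng.standard_normal(n)) * (f < 20), n)  # <20 Hz random signal
stim /= stim.std()

v, spikes = 0.0, np.zeros(n)
for i in range(n):
    v += dt * (-v / 0.02 + 60.0 * (1.0 + stim[i]))     # leaky integration of the input
    if v >= 1.0 + 0.2 * rng.standard_normal():         # threshold jitters from check to check
        spikes[i], v = 1.0, 0.0

lags = np.arange(-100, 101)                            # +/-100 ms acausal decoding filter
X = np.stack([np.roll(spikes - spikes.mean(), -k) for k in lags], axis=1)
half = n // 2
w = np.linalg.lstsq(X[:half], stim[:half], rcond=None)[0]   # fit on the first half
est = X[half:] @ w                                          # decode the held-out half
coding_fraction = 1.0 - np.sqrt(np.mean((stim[half:] - est) ** 2)) / stim[half:].std()
print(f"rate = {spikes.mean() / dt:.1f} Hz, coding fraction = {coding_fraction:.2f}")
```

Varying the threshold noise while recomputing the coding fraction is one way to probe the efficacy-versus-variability relationship the abstract describes.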

8.
Spike trains from cortical neurons show a high degree of irregularity, with coefficients of variation (CV) of their interspike interval (ISI) distribution close to or higher than one. It has been suggested that this irregularity might be a reflection of a particular dynamical state of the local cortical circuit in which excitation and inhibition balance each other. In this "balanced" state, the mean current to the neurons is below threshold, and firing is driven by current fluctuations, resulting in irregular Poisson-like spike trains. Recent data show that the degree of irregularity in neuronal spike trains recorded during the delay period of working memory experiments is the same for both low-activity states of a few Hz and for elevated, persistent activity states of a few tens of Hz. Since the difference between these persistent activity states cannot be due to external factors coming from sensory inputs, this suggests that the underlying network dynamics might support coexisting balanced states at different firing rates. We use mean field techniques to study the possible existence of multiple balanced steady states in recurrent networks of current-based leaky integrate-and-fire (LIF) neurons. To assess the degree of balance of a steady state, we extend existing mean-field theories so that not only the firing rate, but also the coefficient of variation of the interspike interval distribution of the neurons, are determined self-consistently. Depending on the connectivity parameters of the network, we find bistable solutions of different types. If the local recurrent connectivity is mainly excitatory, the two stable steady states differ mainly in the mean current to the neurons. In this case, the mean drive in the elevated persistent activity state is suprathreshold and typically characterized by low spiking irregularity. If the local recurrent excitatory and inhibitory drives are both large and nearly balanced, or even dominated by inhibition, two stable states coexist, both with subthreshold current drive. In this case, the spiking variability in both the resting state and the mnemonic persistent state is large, but the balance condition implies parameter fine-tuning. Since the degree of required fine-tuning increases with network size and, on the other hand, the size of the fluctuations in the afferent current to the cells increases for small networks, overall we find that fluctuation-driven persistent activity in the very simplified type of models we analyze is not a robust phenomenon. Possible implications of considering more realistic models are discussed.

9.
Jackson BS. Neural Computation, 2004, 16(10): 2125-2195.
Many different types of integrate-and-fire models have been designed in order to explain how it is possible for a cortical neuron to integrate over many independent inputs while still producing highly variable spike trains. Within this context, the variability of spike trains has been almost exclusively measured using the coefficient of variation of interspike intervals. However, another important statistical property that has been found in cortical spike trains and is closely associated with their high firing variability is long-range dependence. We investigate the conditions, if any, under which such models produce output spike trains with both interspike-interval variability and long-range dependence similar to those that have previously been measured from actual cortical neurons. We first show analytically that a large class of high-variability integrate-and-fire models is incapable of producing such outputs based on the fact that their output spike trains are always mathematically equivalent to renewal processes. This class of models subsumes a majority of previously published models, including those that use excitation-inhibition balance, correlated inputs, partial reset, or nonlinear leakage to produce outputs with high variability. Next, we study integrate-and-fire models that have (non-Poissonian) renewal point process inputs instead of the Poisson point process inputs used in the preceding class of models. The confluence of our analytical and simulation results implies that the renewal-input model is capable of producing high variability and long-range dependence comparable to that seen in spike trains recorded from cortical neurons, but only if the interspike intervals of the inputs have infinite variance, a physiologically unrealistic condition. Finally, we suggest a new integrate-and-fire model that does not suffer any of the previously mentioned shortcomings. By analyzing simulation results for this model, we show that it is capable of producing output spike trains with interspike-interval variability and long-range dependence that match empirical data from cortical spike trains. This model is similar to the other models in this study, except that its inputs are fractional-Gaussian-noise-driven Poisson processes rather than renewal point processes. In addition to this model's success in producing realistic output spike trains, its inputs have long-range dependence similar to that found in most subcortical neurons in sensory pathways, including the inputs to cortex. Analysis of output spike trains from simulations of this model also shows that a tight balance between the amounts of excitation and inhibition at the inputs to cortical neurons is not necessary for high interspike-interval variability at their outputs. Furthermore, in our analysis of this model, we show that the superposition of many fractional-Gaussian-noise-driven Poisson processes does not approximate a Poisson process, which challenges the common assumption that the total effect of a large number of inputs on a neuron is well represented by a Poisson process.
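The Fano-factor curve F(T) gives a quick, if crude, index of the long-range dependence discussed here: for a renewal Poisson train it stays near 1 at all counting times, while a train whose rate carries 1/f-type fluctuations shows F(T) growing with T. The sketch below uses a simple spectral synthesis of the slow rate as a stand-in for fractional Gaussian noise; all numbers are illustrative.

```python
import numpy as np

# Fano-factor curve F(T): ~1 at all counting times for a homogeneous Poisson (renewal)
# train, but growing with T when the rate carries 1/f-type fluctuations. The spectral
# synthesis below is a crude stand-in for fractional-Gaussian-noise input.
rng = np.random.default_rng(3)
dt, n = 1e-3, 2 ** 20

def fano_curve(spike_train, windows):
    out = []
    for w in windows:                                   # counting-window length in bins
        m = (spike_train.size // w) * w
        counts = spike_train[:m].reshape(-1, w).sum(axis=1)
        out.append(counts.var() / counts.mean())
    return np.array(out)

poisson = (rng.random(n) < 20 * dt).astype(float)       # renewal control, 20 Hz

f = np.fft.rfftfreq(n, dt)
amp = np.zeros_like(f)
amp[1:] = f[1:] ** -0.5                                 # 1/f amplitude spectrum
slow = np.fft.irfft(amp * np.fft.rfft(rng.standard_normal(n)), n)
rate = 20 * np.exp(0.5 * slow / slow.std())             # positive, slowly fluctuating rate
lrd = (rng.random(n) < rate * dt).astype(float)

windows = 2 ** np.arange(4, 15)                         # 16 ms ... ~16 s
print("F(T), Poisson :", np.round(fano_curve(poisson, windows), 2))
print("F(T), 1/f rate:", np.round(fano_curve(lrd, windows), 2))
```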

10.
Spiking neurons are very flexible computational modules, which can implement with different values of their adjustable synaptic parameters an enormous variety of different transformations F from input spike trains to output spike trains. We examine in this letter the question to what extent a spiking neuron with biologically realistic models for dynamic synapses can be taught via spike-timing-dependent plasticity (STDP) to implement a given transformation F. We consider a supervised learning paradigm where during training, the output of the neuron is clamped to the target signal (teacher forcing). The well-known perceptron convergence theorem asserts the convergence of a simple supervised learning algorithm for drastically simplified neuron models (McCulloch-Pitts neurons). We show that in contrast to the perceptron convergence theorem, no theoretical guarantee can be given for the convergence of STDP with teacher forcing that holds for arbitrary input spike patterns. On the other hand, we prove that average case versions of the perceptron convergence theorem hold for STDP in the case of uncorrelated and correlated Poisson input spike trains and simple models for spiking neurons. For a wide class of cross-correlation functions of the input spike trains, the resulting necessary and sufficient condition can be formulated in terms of linear separability, analogously as the well-known condition of learnability by perceptrons. However, the linear separability criterion has to be applied here to the columns of the correlation matrix of the Poisson input. We demonstrate through extensive computer simulations that the theoretically predicted convergence of STDP with teacher forcing also holds for more realistic models for neurons, dynamic synapses, and more general input distributions. In addition, we show through computer simulations that these positive learning results hold not only for the common interpretation of STDP, where STDP changes the weights of synapses, but also for a more realistic interpretation suggested by experimental data where STDP modulates the initial release probability of dynamic synapses.
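A minimal sketch of pair-based STDP with teacher forcing (the output spike train is clamped to a target; the rates and learning-rule constants are illustrative and much simpler than the dynamic-synapse models analyzed in the paper):

```python
import numpy as np

# Minimal pair-based STDP with teacher forcing: the postsynaptic train is clamped to
# a target, and exponential traces implement the potentiation/depression windows.
# Rates and learning-rule constants are illustrative.
rng = np.random.default_rng(4)
T, dt, n_in = 100.0, 1e-3, 50                     # 100 s of 1 ms bins, 50 Poisson inputs
n_bins = int(T / dt)
A_plus, A_minus, tau = 0.005, 0.00525, 0.020      # asymmetric STDP window (s)

pre = rng.random((n_in, n_bins)) < 10 * dt        # 10 Hz input spike trains
target = rng.random(n_bins) < 20 * dt             # clamped (teacher-forced) output spikes
w = np.full(n_in, 0.5)
x_pre, x_post = np.zeros(n_in), 0.0               # eligibility traces

decay = np.exp(-dt / tau)
for i in range(n_bins):
    x_pre *= decay
    x_post *= decay
    x_pre += pre[:, i]
    if target[i]:                                 # post spike: potentiate by the pre traces
        w += A_plus * x_pre
        x_post += 1.0
    w -= A_minus * x_post * pre[:, i]             # pre spike after post: depression
    np.clip(w, 0.0, 1.0, out=w)

print(f"final weights: mean {w.mean():.3f}, sd {w.std():.3f}")
```

Replacing the independent target train with one that is correlated with a subset of the inputs is the natural next step for reproducing, in miniature, the convergence behavior the abstract discusses.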

11.
We consider a formal model of stimulus encoding with a circuit consisting of a bank of filters and an ensemble of integrate-and-fire neurons. Such models arise in olfactory systems, vision, and hearing. We demonstrate that bandlimited stimuli can be faithfully represented with spike trains generated by the ensemble of neurons. We provide a stimulus reconstruction scheme based on the spike times of the ensemble of neurons and derive conditions for perfect recovery. The key result calls for the spike density of the neural population to be above the Nyquist rate. We also show that recovery is perfect if the number of neurons in the population is larger than a threshold value. Increasing the number of neurons to achieve a faithful representation of the sensory world is consistent with basic neurobiological thought. Finally, we demonstrate that, in general, the problem of faithful recovery of stimuli from the spike train of single neurons is ill posed. The stimulus can be recovered, however, from the information contained in the spike train of a population of neurons.

12.
Some sensory tasks in the nervous system require highly precise spike trains to be generated in the presence of intrinsic neuronal noise. Collective enhancement of precision (CEP) can occur when spike trains of many neurons are pooled together into a more precise population discharge. We study CEP in a network of N model neurons connected by recurrent excitation. Each neuron is driven by a periodic inhibitory spike train with independent jitter in the spike arrival time. The network discharge is characterized by σ_W, the dispersion in the spike times within one cycle, and σ_B, the jitter in the network-averaged spike time between cycles. In an uncoupled network σ_B ≈ 1/√N and σ_W is independent of N. In a strongly coupled network σ_B ≈ 1/√(log N) and σ_W is close to zero. At intermediate coupling strengths, σ_W is reduced, while σ_B remains close to its uncoupled value. The population discharge then has optimal biophysical properties compared with the uncoupled network.
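The two precision measures can be computed directly from a spike-time matrix; the sketch below does this for the uncoupled case, where σ_B should shrink as 1/√N while σ_W stays at the single-neuron jitter (illustrative numbers):

```python
import numpy as np

# Uncoupled-network baseline for the two precision measures (illustrative numbers):
# sigma_W is the spike-time dispersion within a cycle, sigma_B the cycle-to-cycle
# jitter of the network-averaged spike time. With independent jitter, sigma_B should
# scale as sigma/sqrt(N) while sigma_W stays at the single-neuron jitter sigma.
rng = np.random.default_rng(5)
N, n_cycles, sigma = 100, 2000, 1.0                  # neurons, cycles, single-neuron jitter (ms)
times = sigma * rng.standard_normal((n_cycles, N))   # spike time of each neuron in each cycle
sigma_W = times.std(axis=1).mean()                   # dispersion within a cycle
sigma_B = times.mean(axis=1).std()                   # jitter of the cycle-averaged spike time
print(f"sigma_W = {sigma_W:.2f} (expect ~{sigma}), sigma_B = {sigma_B:.3f} (expect ~{sigma/np.sqrt(N):.3f})")
```

Reproducing the coupled regimes requires simulating the recurrently excitatory network itself; the point of the sketch is only how σ_W and σ_B are read off once the spike times are in hand.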

13.
Miller P. Neural Computation, 2006, 18(6): 1268-1317.
Attractor networks are likely to underlie working memory and integrator circuits in the brain. It is unknown whether continuous quantities are stored in an analog manner or discretized and stored in a set of discrete attractors. In order to investigate the important issue of how to differentiate the two systems, here we compare the neuronal spiking activity that arises from a continuous (line) attractor with that from a series of discrete attractors. Stochastic fluctuations cause the position of the system along its continuous attractor to vary as a random walk, whereas in a discrete attractor, noise causes spontaneous transitions to occur between discrete states at random intervals. We calculate the statistics of spike trains of neurons firing as a Poisson process with rates that vary according to the underlying attractor network. Since individual neurons fire spikes probabilistically and since the state of the network as a whole drifts randomly, the spike trains of individual neurons follow a doubly stochastic (Poisson) point process. We compare the series of spike trains from the two systems using the autocorrelation function, Fano factor, and interspike interval (ISI) distribution. Although the variation in rate can be dramatically different, especially for short time intervals, surprisingly both the autocorrelation functions and Fano factors are identical, given appropriate scaling of the noise terms. Since the range of firing rates is limited in neurons, we also investigate systems for which the variation in rate is bounded by either rigid limits or because of leak to a single attractor state, such as the Ornstein-Uhlenbeck process. In these cases, the time dependence of the variance in rate can be different between discrete and continuous systems, so that in principle, these processes can be distinguished using second-order spike statistics.
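A sketch of the comparison for doubly stochastic Poisson trains, with a reflected random walk standing in for the continuous attractor and a two-state telegraph rate standing in for the discrete attractors (noise scalings are illustrative, not the matched scaling derived in the paper):

```python
import numpy as np

# Doubly stochastic Poisson sketch: the rate either drifts as a reflected random walk
# (continuous attractor) or jumps between two levels at random times (discrete
# attractors), and spike-count Fano factors are compared across counting windows.
rng = np.random.default_rng(6)
dt, n = 1e-3, 500_000

steps = 0.5 * np.sqrt(dt) * rng.standard_normal(n)
walk = 25 + np.cumsum(steps)                           # unbounded random walk around 25 Hz
rate_cont = 5 + (40 - np.abs(((walk - 5) % 80) - 40))  # reflected into the 5-45 Hz band

switch = rng.random(n) < 0.2 * dt                      # ~0.2 state switches per second
rate_disc = np.where(np.cumsum(switch) % 2 == 0, 10.0, 40.0)

def fano(rate, window_bins):
    spikes = (rng.random(n) < rate * dt).astype(float)  # Poisson spikes given the rate
    m = (n // window_bins) * window_bins
    counts = spikes[:m].reshape(-1, window_bins).sum(axis=1)
    return counts.var() / counts.mean()

for T in (0.1, 1.0, 10.0):
    wb = int(T / dt)
    print(f"T = {T:5.1f} s   Fano(random walk) = {fano(rate_cont, wb):5.2f}"
          f"   Fano(discrete) = {fano(rate_disc, wb):5.2f}")
```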

14.
Recently, a great deal of attention has been paid to stochastic resonance as a new framework to understand sensory mechanisms of biological systems. Stochastic resonance explains important properties of sensory neurons that accurately detect weak input stimuli by using a small amount of internal noise. In particular, Collins et al. reported that a network of stochastic resonance neurons gives rise to a robust sensory function for detecting a variety of complex input signals. In this study, we investigate the effectiveness of such stochastic resonance neural networks for chaotic input signals. Using the Rössler equations, we analyze the network's capability to detect chaotic dynamics. We also apply the stochastic resonance network systems to speech signals, and examine the plausibility of the stochastic resonance neural network as a possible model for the human auditory system.
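A toy summing network in the spirit of the stochastic-resonance networks mentioned above, with simple threshold units standing in for the FitzHugh-Nagumo neurons of the original work and a weak sine wave standing in for the chaotic or speech input; the correlation between the pooled output and the signal is largest at an intermediate noise level:

```python
import numpy as np

# Toy summing network of noisy threshold units (stand-ins for the FitzHugh-Nagumo
# neurons of the original stochastic-resonance network); a weak subthreshold sine
# stands in for the chaotic or speech input. Detection quality is read off as the
# correlation between the pooled output and the signal, scanned over noise levels.
rng = np.random.default_rng(7)
n_units, n_samples = 100, 20_000
t = np.arange(n_samples)
signal = 0.3 * np.sin(2 * np.pi * t / 500)            # weak signal, well below threshold
threshold = 1.0

for noise_sd in (0.2, 0.5, 1.0, 3.0):
    noise = noise_sd * rng.standard_normal((n_units, n_samples))
    fired = (signal[None, :] + noise > threshold)      # each unit's threshold crossings
    pooled = fired.mean(axis=0)                        # network-averaged output
    corr = np.corrcoef(pooled, signal)[0, 1]
    print(f"noise sd = {noise_sd:3}: corr(pooled output, signal) = {corr:.2f}")
```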

15.
Analyzing the dependencies between spike trains is an important step in understanding how neurons work in concert to represent biological signals. Usually this is done for pairs of neurons at a time using correlation-based techniques. Chornoboy, Schramm, and Karr (1988) proposed maximum likelihood methods for the simultaneous analysis of multiple pair-wise interactions among an ensemble of neurons. One of these methods is an iterative, continuous-time estimation algorithm for a network likelihood model formulated in terms of multiplicative conditional intensity functions. We devised a discrete-time version of this algorithm that includes a new, efficient computational strategy, a principled method to compute starting values, and a principled stopping criterion. In an analysis of simulated neural spike trains from ensembles of interacting neurons, the algorithm recovered the correct connectivity matrices and interaction parameters. In the analysis of spike trains from an ensemble of rat hippocampal place cells, the algorithm identified a connectivity matrix and interaction parameters consistent with the pattern of conjoined firing predicted by the overlap of the neurons' spatial receptive fields. These results suggest that the network likelihood model can be an efficient tool for the analysis of ensemble spiking activity.
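A two-neuron sketch of a discrete-time network-likelihood fit, with the multiplicative conditional intensity written log-linearly and the parameters recovered by generic numerical optimization rather than the paper's iterative algorithm (the 5 ms coupling lag and the strengths are invented for illustration):

```python
import numpy as np
from scipy.optimize import minimize

# Two-neuron sketch of a discrete-time network-likelihood fit. The conditional
# intensity of neuron 0 is multiplicative in the recent spiking of neuron 1
# (equivalently, its logarithm is linear in the parameters).
rng = np.random.default_rng(8)
dt, n = 1e-3, 200_000

s1 = (rng.random(n) < 10 * dt).astype(float)           # neuron 1: 10 Hz Poisson
boost = np.roll(s1, 5); boost[:5] = 0                   # neuron 1 spikes, delayed by 5 ms
lam0 = 5.0 * np.exp(3.0 * boost)                        # neuron 0: 5 Hz, boosted after neuron 1
s0 = (rng.random(n) < lam0 * dt).astype(float)

def negloglik(params):
    b, w = params                                       # log baseline rate, coupling strength
    log_lam = b + w * boost
    # discrete-time Poisson log-likelihood (constant terms dropped)
    return -(np.sum(s0 * log_lam) - np.sum(np.exp(log_lam)) * dt)

fit = minimize(negloglik, x0=[0.0, 0.0], method="BFGS")
print(f"baseline {np.exp(fit.x[0]):.1f} Hz (true 5.0), coupling {fit.x[1]:.2f} (true 3.0)")
```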

16.
We present a solution for the steady-state output rate of an ideal coincidence detector receiving an arbitrary number of excitatory and inhibitory input spike trains. All excitatory spike trains have identical binomial count distributions (which include Poisson statistics as a special case) and arbitrary pairwise cross correlations between them. The same applies to the inhibitory inputs, and the rates and correlation functions of excitatory and inhibitory populations may be the same or different from each other. Thus, for each population independently, the correlation may range from complete independence to perfect correlation (identical processes). We find that inhibition, if made sufficiently strong, will result in an inverted U-shaped curve for the output rate of a coincidence detector as a function of input rates for the case of identical inhibitory and excitatory input rates. This leads to the prediction that higher presynaptic (input) rates may lead to lower postsynaptic (output) rates where the output rate may fall faster than the inverse of the input rate, and shows some qualitative similarities to the case of purely excitatory inputs with synaptic depression. In general, we find that including inhibition invariably and significantly increases the behavioral repertoire of the coincidence detector over the case of pure excitatory input.
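A simulation sketch of such a coincidence detector with correlated excitatory and inhibitory Bernoulli inputs (a shared "common event" construction stands in for the general correlation structure treated analytically; the inhibitory weight, threshold, and correlation are illustrative), scanning the input rate to trace out the output-rate curve:

```python
import numpy as np

# Coincidence-detector sketch with correlated excitatory and inhibitory Bernoulli
# inputs. A shared 'common event' induces correlation within each population.
rng = np.random.default_rng(9)
n_bins, nE, nI, corr = 200_000, 20, 20, 0.1
g_inh, theta = 1.5, 6                                   # inhibitory weight and coincidence threshold

def correlated_inputs(n_inputs, p, c):
    common = rng.random(n_bins) < c * p                 # event shared by all inputs
    private = rng.random((n_inputs, n_bins)) < (1 - c) * p
    return (common[None, :] | private).sum(axis=0)      # active inputs per coincidence window

print("p(input spike/bin)   output rate/bin")
for p in (0.05, 0.1, 0.2, 0.3, 0.4):
    E = correlated_inputs(nE, p, corr)
    I = correlated_inputs(nI, p, corr)
    out_rate = np.mean(E - g_inh * I >= theta)          # detector fires on enough net coincidences
    print(f"       {p:4.2f}            {out_rate:.4f}")
```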

17.
Coincident firing of neurons projecting to a common target cell is likely to raise the probability of firing of this postsynaptic cell. Therefore, synchronized firing constitutes a significant event for postsynaptic neurons and is likely to play a role in neuronal information processing. Physiological data on synchronized firing in cortical networks are based primarily on paired recordings and cross-correlation analysis. However, pair-wise correlations among all inputs onto a postsynaptic neuron do not uniquely determine the distribution of simultaneous postsynaptic events. We develop a framework in order to calculate the amount of synchronous firing that, based on maximum entropy, should exist in a homogeneous neural network in which the neurons have known pair-wise correlations and higher-order structure is absent. According to the distribution of maximal entropy, synchronous events in which a large proportion of the neurons participates should exist even in the case of weak pair-wise correlations. Network simulations also exhibit these highly synchronous events in the case of weak pair-wise correlations. If such a group of neurons provides input to a common postsynaptic target, these network bursts may enhance the impact of this input, especially in the case of a high postsynaptic threshold. The proportion of neurons participating in synchronous bursts can be approximated by our method under restricted conditions. When these conditions are not fulfilled, the spike trains have less than maximal entropy, which is indicative of the presence of higher-order structure. In this situation, the degree of synchronicity cannot be derived from the pair-wise correlations.

18.
Rolling element bearings are widely used to support rotating components of a machine. Because components are closely spaced within the machine, the vibration signal caused by localized bearing defects is easily overwhelmed by other strong vibration signals. Extracting the bearing fault signal from a multi-component signal mixture is therefore important for detecting early bearing fault features and preventing machine breakdown. In this paper, a bearing fault diagnosis method, named the cyclic spike detection method, is proposed to extract the weak bearing fault features from a multi-component signal mixture. Firstly, the optimal center frequency and bandwidth of a complex Morlet wavelet filter are determined by a simplex-simulated annealing algorithm along with a maximum sparsity objective function. The filtered signal is then obtained by applying the optimal wavelet filter to the multi-component signal mixture. After that, a new adaptive local maximum selection method is proposed to make the filtered signal succinct. Only a few spikes are retained to reveal potential cyclic intervals caused by bearing localized defects. Two multi-component signal mixtures, including a simulated signal and a real vibration signal collected from an industrial machine, are used to validate the effectiveness of the proposed cyclic spike detection method. The results demonstrate that the proposed method can extract the weak bearing fault features from other strong masking vibration signals and noise.
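A sketch of the filtering stage only, on simulated data: a weak periodic impact train exciting a resonance is buried under a strong low-frequency component and noise; a complex Morlet wavelet bandpasses the resonance, and envelope peaks reveal the cyclic interval. The center frequency and bandwidth are fixed by hand here, whereas the paper selects them with a simplex-simulated annealing search of a sparsity objective:

```python
import numpy as np
from scipy.signal import find_peaks

# Morlet-wavelet filtering sketch on a simulated mixture: weak 90 Hz fault impacts
# exciting a 3 kHz resonance, masked by a strong 50 Hz component and noise. All
# signal and filter parameters are illustrative.
rng = np.random.default_rng(11)
fs = 20_000
t = np.arange(fs) / fs                                   # 1 s record
fault_period = 1 / 90.0                                  # assumed 90 Hz fault-impact rate
impacts = np.zeros_like(t)
impacts[(np.arange(0, 1.0, fault_period) * fs).astype(int)] = 1.0
ring = np.exp(-t[:200] / 0.001) * np.sin(2 * np.pi * 3000 * t[:200])   # decaying resonance
mixture = (0.5 * np.convolve(impacts, ring)[: t.size]
           + 2.0 * np.sin(2 * np.pi * 50 * t)
           + 0.5 * rng.standard_normal(t.size))

fc, bw = 3000.0, 800.0                                   # wavelet center frequency / bandwidth (Hz)
tw = np.arange(-0.005, 0.005, 1 / fs)
wavelet = np.exp(2j * np.pi * fc * tw) * np.exp(-(tw ** 2) * bw ** 2)
envelope = np.abs(np.convolve(mixture, wavelet, mode="same"))

peaks, _ = find_peaks(envelope, height=0.5 * envelope.max(),
                      distance=int(0.5 * fault_period * fs))
print(f"median interval between detected spikes: {np.median(np.diff(peaks)) / fs:.4f} s "
      f"(fault period {fault_period:.4f} s)")
```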

19.
Statistical models of neural activity are integral to modern neuroscience. Recently, interest has grown in modeling the spiking activity of populations of simultaneously recorded neurons to study the effects of correlations and functional connectivity on neural information processing. However, any statistical model must be validated by an appropriate goodness-of-fit test. Kolmogorov-Smirnov tests based on the time-rescaling theorem have proven to be useful for evaluating point-process-based statistical models of single-neuron spike trains. Here we discuss the extension of the time-rescaling theorem to the multivariate (neural population) case. We show that even in the presence of strong correlations between spike trains, models that neglect couplings between neurons can be erroneously passed by the univariate time-rescaling test. We present the multivariate version of the time-rescaling theorem and provide a practical step-by-step procedure for applying it to testing the sufficiency of neural population models. Using several simple analytically tractable models and more complex simulated and real data sets, we demonstrate that important features of the population activity can be detected only using the multivariate extension of the test.
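For reference, the univariate building block of the test looks like this: rescale each ISI by the integrated conditional intensity of the candidate model, map to the uniform distribution, and apply a KS test. The intensities below are invented for illustration; the multivariate extension discussed above adds the cross-neuron terms this univariate version ignores.

```python
import numpy as np
from scipy.stats import kstest

# Univariate time-rescaling goodness-of-fit sketch: under a correct model the
# rescaled interspike intervals are unit-exponential, so their CDF transform is
# uniform and a KS test applies.
rng = np.random.default_rng(12)
dt, n = 1e-3, 300_000
t = np.arange(n) * dt
lam_true = 20 * (1 + 0.8 * np.sin(2 * np.pi * 0.5 * t))   # slowly modulated rate (Hz)
spike_bins = np.flatnonzero(rng.random(n) < lam_true * dt)

def rescaling_pvalue(lam_model):
    Lambda = np.cumsum(lam_model) * dt            # integrated intensity of the candidate model
    z = np.diff(Lambda[spike_bins])               # time-rescaled interspike intervals
    u = 1.0 - np.exp(-z)                          # uniform(0,1) if the model is correct
    return kstest(u, "uniform").pvalue

print(f"KS p-value, true model:          {rescaling_pvalue(lam_true):.3f}")
print(f"KS p-value, constant-rate model: {rescaling_pvalue(np.full(n, 20.0)):.2e}")
```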

20.
Correlations between neuronal spike trains affect network dynamics and population coding. Overlapping afferent populations and correlations between presynaptic spike trains introduce correlations between the inputs to downstream cells. To understand network activity and population coding, it is therefore important to understand how these input correlations are transferred to output correlations. Recent studies have addressed this question in the limit of many inputs with infinitesimal postsynaptic response amplitudes, where the total input can be approximated by Gaussian noise. In contrast, we address the problem of correlation transfer by representing input spike trains as point processes, with each input spike eliciting a finite postsynaptic response. This approach allows us to naturally model synaptic noise and recurrent coupling and to treat excitatory and inhibitory inputs separately. We derive several new results that provide intuitive insights into the fundamental mechanisms that modulate the transfer of spiking correlations.
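A point-process sketch of correlation transfer (illustrative cell parameters, not the analytical framework of the paper): two leaky integrate-and-fire cells share a fraction c of their excitatory Poisson input, each input spike delivering a finite PSP jump, and the output spike-count correlation is compared with the input correlation:

```python
import numpy as np

# Point-process sketch of correlation transfer: two leaky integrate-and-fire cells
# share a fraction c of their excitatory Poisson input; each input spike delivers a
# finite PSP jump. Output spike-count correlation is compared with c.
rng = np.random.default_rng(13)
dt, n = 1e-4, 300_000                               # 0.1 ms bins, 30 s
tau, jump, theta = 0.02, 0.03, 1.0                  # membrane time constant, PSP size, threshold
rate_total = 2000.0                                 # total excitatory input rate per cell (Hz)

def output_count_correlation(c):
    shared = (rng.random(n) < c * rate_total * dt).astype(float)
    priv1 = (rng.random(n) < (1 - c) * rate_total * dt).astype(float)
    priv2 = (rng.random(n) < (1 - c) * rate_total * dt).astype(float)
    v1 = v2 = 0.0
    out1, out2 = np.zeros(n), np.zeros(n)
    for i in range(n):
        v1 += -v1 * dt / tau + jump * (shared[i] + priv1[i])
        v2 += -v2 * dt / tau + jump * (shared[i] + priv2[i])
        if v1 >= theta: out1[i], v1 = 1.0, 0.0
        if v2 >= theta: out2[i], v2 = 1.0, 0.0
    w = 1000                                        # 100 ms counting windows
    c1 = out1[: n // w * w].reshape(-1, w).sum(axis=1)
    c2 = out2[: n // w * w].reshape(-1, w).sum(axis=1)
    return np.corrcoef(c1, c2)[0, 1]

for c in (0.0, 0.2, 0.5):
    print(f"input correlation {c:.1f} -> output count correlation {output_count_correlation(c):.2f}")
```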
