Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
Theoretical and experimental studies of distributed neuronal representations of sensory and behavioral variables usually assume that the tuning of the mean firing rates is the main source of information. However, recent theoretical studies have investigated the effect of cross-correlations in the trial-to-trial fluctuations of the neuronal responses on the accuracy of the representation. Assuming that only the first-order statistics of the neuronal responses are tuned to the stimulus, these studies have shown that in the presence of correlations, similar to those observed experimentally in cortical ensembles of neurons, the amount of information in the population is limited, yielding nonzero error levels even in the limit of infinitely large populations of neurons. In this letter, we study correlated neuronal populations whose higher-order statistics, and in particular response variances, are also modulated by the stimulus. We ask two questions: Does the correlated noise limit the accuracy of the neuronal representation of the stimulus? And how can a biological mechanism extract most of the information embedded in the higher-order statistics of the neuronal responses? Specifically, we address these questions in the context of a population of neurons coding an angular variable. We show that the information embedded in the variances grows linearly with the population size despite the presence of strong correlated noise. This information cannot be extracted by linear readout schemes, including the linear population vector. Instead, we propose a bilinear readout scheme that involves spatial decorrelation, quadratic nonlinearity, and population vector summation. We show that this nonlinear population vector scheme yields accurate estimates of stimulus parameters, with an efficiency that grows linearly with the population size. This code can be implemented using biologically plausible neurons.
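As a point of reference for the readout schemes discussed in this abstract, here is a minimal sketch of the standard (linear) population vector applied to cosine-tuned responses. For simplicity it assumes independent noise; the letter's setting adds correlated noise and stimulus-tuned variances, and its bilinear scheme would decorrelate and square the responses before the summation step. Population size, rates, and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 360                                              # population size (illustrative)
phi = np.linspace(-np.pi, np.pi, n, endpoint=False)  # preferred angles
theta = 0.7                                          # true stimulus angle

# Cosine-tuned mean rates plus independent noise (a simplification of
# the correlated, variance-tuned setting studied in the letter).
rates = 10.0 * (1.0 + np.cos(phi - theta)) + rng.normal(0.0, 1.0, n)

# Linear population vector readout: sum unit vectors at each neuron's
# preferred angle, weighted by its response, and take the resultant angle.
estimate = np.arctan2(rates @ np.sin(phi), rates @ np.cos(phi))
```

With independent noise the estimate lands very close to the true angle; the letter's point is that this linear readout cannot recover the additional information carried in the stimulus-modulated variances.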

2.
Response variability is often positively correlated in pairs of similarly tuned neurons in the visual cortex. Many authors have considered correlated variability to prevent postsynaptic neurons from averaging across large groups of inputs to obtain reliable stimulus estimates. However, a simple average of variability ignores nonlinearities in cortical signal integration. This study shows that feedforward divisive normalization of a neuron's inputs effectively decorrelates their variability. Furthermore, we show that optimal linear estimates of a stimulus parameter that are based on normalized inputs are more accurate than those based on nonnormalized inputs, due partly to reduced correlations, and that these estimates improve with increasing population size up to several thousand neurons. This suggests that neurons may possess a simple mechanism for substantially decorrelating noise in their inputs. Further work is needed to reconcile this conclusion with past evidence that correlated noise impairs visual perception.
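The decorrelating effect of divisive normalization can be illustrated with a toy model, assuming a shared multiplicative gain as the source of positive correlations and division by summed pool activity as the normalization (a simplified stand-in for the paper's normalization model):

```python
import numpy as np

rng = np.random.default_rng(1)
trials, n = 5000, 50
mean_rate = 20.0

# A shared multiplicative gain fluctuating across trials induces
# positive pairwise correlations between all neurons in the pool.
gain = rng.lognormal(0.0, 0.2, size=(trials, 1))
resp = mean_rate * gain + rng.standard_normal((trials, n))

# Feedforward divisive normalization: divide each neuron's response by
# the summed activity of the pool, cancelling the shared gain.
normed = resp / resp.sum(axis=1, keepdims=True)

r_raw = np.corrcoef(resp[:, 0], resp[:, 1])[0, 1]
r_norm = np.corrcoef(normed[:, 0], normed[:, 1])[0, 1]
```

The raw pairwise correlation is high because the gain fluctuation is shared, while the normalized responses are nearly uncorrelated, since the common factor divides out.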

3.
Shamir M. Neural Computation, 2006, 18(11): 2719-2729
Empirical studies seem to support conflicting hypotheses with regard to the nature of the neural code. While some studies highlight the role of a distributed population code, others emphasize the possibility of a "single-best-cell" readout. One particularly interesting example of single-best-cell readout is provided by the winner-takes-all (WTA) approach. According to the WTA, every cell is characterized by one particular preferred stimulus, to which it responds maximally. The WTA estimate for the stimulus is defined as the preferred stimulus of the cell with the strongest response. From a theoretical point of view, not much is known about the efficiency of single-best-cell readout mechanisms, in contrast to the considerable existing theoretical knowledge on the efficiency of distributed population codes. In this work, we provide a basic theoretical framework for investigating single-best-cell readout mechanisms. We study the accuracy of the WTA readout. In particular, we are interested in how the WTA accuracy scales with the number of cells in the population. Using this framework, we show that for large neuronal populations, the WTA accuracy is dominated by the tail of the single-cell-response distribution. Furthermore, we find that although the WTA accuracy does improve when larger populations are considered, this improvement is extremely weak compared to other types of population codes. More precisely, we show that while the accuracy of a linear readout scales linearly with the population size, the accuracy of the WTA readout scales logarithmically with the number of cells in the population.
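A quick simulation makes the WTA-versus-population-readout contrast concrete. Under illustrative assumptions (cosine tuning, independent gaussian noise), the WTA estimate — the preferred angle of the loudest cell — is far noisier than the linear population vector on the same responses:

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 180, 400
phi = np.linspace(-np.pi, np.pi, n, endpoint=False)  # preferred stimuli
theta = 0.0                                          # true stimulus

err_wta, err_pv = [], []
for _ in range(trials):
    r = 10.0 * (1.0 + np.cos(phi - theta)) + rng.normal(0.0, 2.0, n)
    # WTA: preferred stimulus of the cell with the strongest response.
    err_wta.append(phi[np.argmax(r)] - theta)
    # Linear population vector readout for comparison.
    err_pv.append(np.arctan2(r @ np.sin(phi), r @ np.cos(phi)) - theta)

rms_wta = float(np.sqrt(np.mean(np.square(err_wta))))
rms_pv = float(np.sqrt(np.mean(np.square(err_pv))))
```

The WTA error is dominated by whichever cell's noise happens to push it above the rest — the tail-of-the-distribution effect the paper analyzes — so its accuracy improves only logarithmically with n, while the linear readout improves linearly.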

4.
Doi E, Lewicki MS. Neural Computation, 2011, 23(10): 2498-2510
Robust coding has been proposed as a solution to the problem of minimizing decoding error in the presence of neural noise. Many real-world problems, however, have degradation in the input signal, not just in neural representations. This generalized problem is more relevant to biological sensory coding where internal noise arises from limited neural precision and external noise from distortion of sensory signal such as blurring and phototransduction noise. In this note, we show that the optimal linear encoder for this problem can be decomposed exactly into two serial processes that can be optimized separately. One is Wiener filtering, which optimally compensates for input degradation. The other is robust coding, which best uses the available representational capacity for signal transmission with a noisy population of linear neurons. We also present spectral analysis of the decomposition that characterizes how the reconstruction error is minimized under different input signal spectra, types and amounts of degradation, degrees of neural precision, and neural population sizes.

5.
We study the relationship between the accuracy of a large neuronal population in encoding periodic sensory stimuli and the width of the tuning curves of individual neurons in the population. By using general simple models of population activity, we show that when considering one or two periodic stimulus features, a narrow tuning width provides better population encoding accuracy. When encoding more than two periodic stimulus features, the information conveyed by the population is instead maximal for finite values of the tuning width. These optimal values are only weakly dependent on model parameters and are similar to the width of tuning to orientation or motion direction of real visual cortical neurons. A very large tuning width leads to poor encoding accuracy, whatever the number of stimulus features encoded. Thus, optimal coding of periodic stimuli is different from that of nonperiodic stimuli, which, as shown in previous studies, would require infinitely large tuning widths when coding more than two stimulus features.
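The one-feature case of this result can be checked directly with Fisher information. The sketch below assumes independent Poisson neurons with von Mises tuning (a common simple model, not necessarily the paper's exact one) and shows that narrower tuning yields more information about a single periodic feature:

```python
import numpy as np

def fisher_info(width, n=100, peak_rate=10.0, theta=0.0):
    """Fisher information about one periodic feature for n independent
    Poisson neurons with von Mises tuning; smaller width = sharper tuning."""
    phi = np.linspace(-np.pi, np.pi, n, endpoint=False)
    kappa = 1.0 / width**2                                  # concentration
    f = peak_rate * np.exp(kappa * (np.cos(theta - phi) - 1.0))  # mean rates
    fprime = kappa * np.sin(phi - theta) * f                # df/dtheta
    # For independent Poisson neurons, J = sum over cells of f'^2 / f.
    return float(np.sum(fprime**2 / f))
```

For a single feature the information decreases monotonically with tuning width; the paper's point is that this reverses once more than two periodic features must be encoded jointly, where the optimum becomes finite.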

6.
In short-term memory networks, transient stimuli are represented by patterns of neural activity that persist long after stimulus offset. Here, we compare the performance of two prominent classes of memory networks, feedback-based attractor networks and feedforward networks, in conveying information about the amplitude of a briefly presented stimulus in the presence of gaussian noise. Using Fisher information as a metric of memory performance, we find that the optimal form of network architecture depends strongly on assumptions about the forms of nonlinearities in the network. For purely linear networks, we find that feedforward networks outperform attractor networks because noise is continually removed from feedforward networks when signals exit the network; as a result, feedforward networks can amplify signals they receive faster than noise accumulates over time. By contrast, attractor networks must operate in a signal-attenuating regime to avoid the buildup of noise. However, if the amplification of signals is limited by a finite dynamic range of neuronal responses or if noise is reset at the time of signal arrival, as suggested by recent experiments, we find that attractor networks can outperform feedforward ones. Under a simple model in which neurons have a finite dynamic range, we find that the optimal attractor networks are forgetful if there is no mechanism for noise reduction with signal arrival but nonforgetful (perfect integrators) in the presence of a strong reset mechanism. Furthermore, we find that the maximal Fisher information for the feedforward and attractor networks exhibits power law decay as a function of time and scales linearly with the number of neurons. These results highlight prominent factors that lead to trade-offs in the memory performance of networks with different architectures and constraints, and suggest conditions under which attractor or feedforward networks may be best suited to storing information about previous stimuli.
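The noise-buildup trade-off and the power-law decay can be seen in a scalar caricature of an attractor network (a one-dimensional linear system, far simpler than the paper's models). Fisher information about the stored amplitude after T steps is the squared signal gain over the accumulated noise variance:

```python
def attractor_fisher(lam, T, sigma=1.0):
    """Fisher information about an amplitude s stored as x_0 = s in a scalar
    linear memory x_{t+1} = lam * x_t + noise, read out after T steps.
    lam = 1 is a perfect (nonforgetful) integrator; lam < 1 is forgetful."""
    signal_gain = lam ** T
    noise_var = sigma**2 * sum(lam ** (2 * k) for k in range(T))
    return signal_gain**2 / noise_var
```

For the perfect integrator (lam = 1) the information decays as the power law 1/T because noise variance grows linearly while the signal gain stays fixed; a forgetful network (lam < 1) attenuates the signal exponentially and retains far less information at long delays.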

7.
There is strong anatomical and physiological evidence that neurons with large receptive fields located in higher visual areas are recurrently connected to neurons with smaller receptive fields in lower areas. We have previously described a minimal neuronal network architecture in which top-down attentional signals to large receptive field neurons can bias and selectively read out the bottom-up sensory information to small receptive field neurons (Hahnloser, Douglas, Mahowald, & Hepp, 1999). Here we study an enhanced model, where the role of attention is to recruit specific inter-areal feedback loops (e.g., drive neurons above firing threshold). We first illustrate the operation of recruitment on a simple example of visual stimulus selection. In the subsequent analysis, we find that attentional recruitment operates by dynamical modulation of signal amplification and response multistability. In particular, we find that attentional stimulus selection necessitates increased recruitment when the stimulus to be selected is of small contrast and of small distance away from distractor stimuli. The selectability of a low-contrast stimulus is dependent on the gain of attentional effects; for example, low-contrast stimuli can be selected only when attention enhances neural responses. However, the dependence of attentional selection on stimulus-distractor distance is not contingent on whether attention enhances or suppresses responses. The computational implications of attentional recruitment are that cortical circuits can behave as winner-take-all mechanisms of variable strength and can achieve close to optimal signal discrimination in the presence of external noise.

8.
Shu-Li Sun. Automatica, 2004, 40(8): 1447-1453
A unified multi-sensor optimal information fusion criterion weighted by scalars is presented in the linear minimum variance sense. The criterion considers the correlation among local estimation errors, only requires the computation of scalar weights, and avoids the computation of matrix weights so that the computational burden can obviously be reduced. Based on this fusion criterion and Kalman predictor, an optimal information fusion filter for the input white noise, which can be applied to seismic data processing in oil exploration, is given for discrete time-varying linear stochastic control systems measured by multiple sensors with correlated noises. It has a two-layer fusion structure. The first fusion layer has a netted parallel structure to determine the first-step prediction error cross-covariance for the state and the filtering error cross-covariance for the input white noise between any two sensors at each time step. The second fusion layer is the fusion center to determine the optimal scalar weights and obtain the optimal fusion filter for the input white noise. Two simulation examples for a Bernoulli-Gaussian white noise filter show its effectiveness.
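The scalar-weighted fusion step can be sketched for the scalar-estimate case. This is a generic minimum-variance weighting given the matrix of local error (cross-)variances, not the paper's full two-layer filter; for vector states the paper's criterion works with traces of cross-covariance matrices instead:

```python
import numpy as np

def scalar_fusion_weights(P):
    """Minimum-variance scalar fusion weights for local estimates whose
    error variances and cross-correlations form the matrix P:
    minimize w' P w subject to sum(w) = 1."""
    e = np.ones(len(P))
    q = np.linalg.solve(P, e)
    return q / (e @ q)

# Two uncorrelated sensors with error variances 1 and 4: the better
# sensor gets the larger weight, and the fused variance (0.8) beats both.
w = scalar_fusion_weights(np.diag([1.0, 4.0]))
```

Note that only one scalar weight per sensor is computed, which is the computational saving the criterion is designed for compared with matrix weighting.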

9.
A simple expression for a lower bound of Fisher information is derived for a network of recurrently connected spiking neurons that have been driven to a noise-perturbed steady state. We call this lower bound linear Fisher information, as it corresponds to the Fisher information that can be recovered by a locally optimal linear estimator. Unlike recent similar calculations, the approach used here includes the effects of nonlinear gain functions and correlated input noise and yields a surprisingly simple and intuitive expression that offers substantial insight into the sources of information degradation across successive layers of a neural network. Here, this expression is used to (1) compute the optimal (i.e., information-maximizing) firing rate of a neuron, (2) demonstrate why sharpening tuning curves by either thresholding or the action of recurrent connectivity is generally a bad idea, (3) show how a single cortical expansion is sufficient to instantiate a redundant population code that can propagate across multiple cortical layers with minimal information loss, and (4) show that optimal recurrent connectivity strongly depends on the covariance structure of the inputs to the network.
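The standard form of linear Fisher information — the quadratic form of the tuning-curve derivatives with the inverse noise covariance — is easy to state in code (the paper derives this quantity for recurrent spiking networks; the formula itself is the generic one):

```python
import numpy as np

def linear_fisher(fprime, cov):
    """Linear Fisher information J = f'(s)^T Sigma^{-1} f'(s): the
    information recoverable by a locally optimal linear estimator, given
    the tuning-curve derivatives f'(s) and the noise covariance Sigma."""
    return float(fprime @ np.linalg.solve(cov, fprime))
```

With independent unit-variance noise the contributions of the two neurons simply add; positive correlations between similarly tuned neurons (noise aligned with f') reduce the recoverable information.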

10.
Temporal coding of time-varying stimuli
Shamir M, Sen K, Colburn HS. Neural Computation, 2007, 19(12): 3239-3261
Temporal structure is an inherent property of various sensory inputs and motor outputs of the brain. For example, auditory stimuli are defined by the sound waveform. Temporal structure is also an important feature of certain visual stimuli, for example, the image on the retina of a fly during flight. In many cases, this temporal structure of the stimulus is represented by a time-dependent neuronal activity that is locked to certain features of the stimulus. Here, we study the information capacity of the temporal code. In particular, we are interested in the following questions. First, how does the information content of the code depend on the observation time of the cell's response, and what is the effect of temporal noise correlations on this information capacity? Second, what is the effect on the information content of reading the code with a finite temporal resolution for the neural response? We address these questions in the framework of a statistical model for the neuronal temporal response to a time-varying stimulus in a two-alternative forced-choice paradigm. We show that information content of the temporal response scales linearly with the overall time of the response, even in the presence of temporal noise correlations. More precisely, we find that positive temporal noise correlations have a scaling effect that decreases the information content. Nevertheless, the information content of the response continues to scale linearly with the observation time. We further show that finite temporal resolution is sufficient for obtaining most of the information from the cell's response. This finite timescale is related to the response properties of the cell.
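The linear-scaling claim can be checked in a toy version of the two-alternative setting: a T-bin response whose mean separates the alternatives by a fixed amount per bin, with AR(1) noise standing in for temporal noise correlations (an illustrative choice, not the paper's model):

```python
import numpy as np

def temporal_info(T, rho=0.0, sigma=1.0):
    """Linear Fisher information of a T-bin temporal response whose mean
    differs by 1 per bin between the two alternatives, with AR(1) noise
    of lag-1 correlation rho."""
    lags = np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
    cov = sigma**2 * rho ** lags          # AR(1) temporal noise covariance
    dmu = np.ones(T)                      # per-bin mean difference
    return float(dmu @ np.linalg.solve(cov, dmu))
```

With uncorrelated bins the information is exactly T; positive temporal correlations shrink the per-bin contribution (for this AR(1) toy, to (T(1 - rho) + 2 rho) / (1 + rho)) but the growth with observation time remains linear, as in the paper's result.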

11.
Recent experiments suggest that inhibitory networks of interneurons can synchronize the neuronal discharge in in vitro hippocampal slices. Subsequent theoretical work has shown that strong synchronization by mutual inhibition is only moderately robust against neuronal heterogeneities in the current drive, provided by activation of metabotropic glutamate receptors. In vivo, neurons display greater variability in the interspike intervals due to the presence of synaptic noise. Noise and heterogeneity affect synchronization properties differently. In this paper we study, using model simulations, how robust synchronization can be in the presence of synaptic noise and neuronal heterogeneity. We find that stochastic weak synchronization (SWS) (i.e., when neurons spike within a short interval from each other, but not necessarily at each period) is produced with at least a minimum amount of noise and that it is much more robust than strong synchronization (i.e., when neurons spike at each period). The statistics produced by the SWS population discharge are consistent with previous experimental data. We also find robust SWS in the gamma-frequency range (20-80 Hz) for a stronger synaptic coupling compared with previous models and for networks with 10-1000 neurons.

12.
13.
Many experimental studies concerning the neuronal code are based on graded responses of neurons, given by the emitted number of spikes measured in a certain time window. Correspondingly, a large body of neural network theory deals with analogue neuron models and discusses their potential use for computation or function approximation. All physical signals, however, are of limited precision, and neuronal firing rates in cortex are relatively low. Here, we investigate the relevance of analogue signal processing with spikes in terms of optimal stimulus reconstruction and information theory. In particular, we derive optimal tuning functions taking the biological constraint of limited firing rates into account. It turns out that depending on the available decoding time T, optimal encoding undergoes a phase transition from discrete binary coding for small T towards analogue or quasi-analogue encoding for large T. The corresponding firing rate distributions are bimodal for all relevant T, in particular in the case of population coding.

14.
As neural activity is transmitted through the nervous system, neuronal noise degrades the encoded information and limits performance. It is therefore important to know how information loss can be prevented. We study this question in the context of neural population codes. Using Fisher information, we show how information loss in a layered network depends on the connectivity between the layers. We introduce an algorithm, reminiscent of the water filling algorithm for Shannon information, that minimizes the loss. The optimal connection profile has a center-surround structure with a spatial extent closely matching the neurons' tuning curves. In addition, we show how the optimal connectivity depends on the correlation structure of the trial-to-trial variability in the neuronal responses. Our results explain how optimal communication of population codes requires the center-surround architectures found in the nervous system and provide explicit predictions on the connectivity parameters.
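The basic phenomenon being minimized here — Fisher information lost when a population response passes through a connectivity matrix and picks up fresh noise — can be demonstrated in a few lines (illustrative tuning derivatives and a random weight matrix, not the paper's optimized center-surround profile):

```python
import numpy as np

def linear_fisher(fprime, cov):
    # J = f'(s)^T Sigma^{-1} f'(s)
    return float(fprime @ np.linalg.solve(cov, fprime))

rng = np.random.default_rng(5)
n = 40
fp = rng.standard_normal(n)                    # tuning-curve derivatives f'(s)
cov1 = np.eye(n)                               # first-layer noise covariance

W = rng.standard_normal((n, n)) / np.sqrt(n)   # feedforward connectivity
noise2 = 0.1 * np.eye(n)                       # fresh noise added in layer 2

J1 = linear_fisher(fp, cov1)
# After the layer, the signal derivative becomes W f' and the noise
# covariance becomes W Sigma W^T plus the newly injected noise.
J2 = linear_fisher(W @ fp, W @ cov1 @ W.T + noise2)
```

J2 is strictly below J1 whenever new noise is injected (a data-processing-style inequality); choosing W to minimize that gap is exactly the optimization the paper's water-filling-like algorithm performs.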

15.
The unified multisensor optimal information fusion criterion weighted by matrices is rederived in the linear minimum variance sense, where the assumption of normal distribution is avoided. Based on this fusion criterion, the optimal information fusion input white noise deconvolution estimators are presented for discrete time-varying linear stochastic control systems with multiple sensors and correlated noises, which can be applied to seismic data processing in oil exploration. A three-layer fusion structure with fault tolerant property and reliability is given. The first fusion layer and the second fusion layer both have netted parallel structures to determine the first-step prediction error cross-covariance for the state and the estimation error cross-covariance for the input white noise between any two sensors at each time step, respectively. The third fusion layer is the fusion center to determine the optimal matrix weights and obtain the optimal fusion input white noise estimators. The simulation results for Bernoulli-Gaussian input white noise deconvolution estimators show their effectiveness.

16.
We study how neuronal connections in a population of spiking neurons affect the accuracy of stimulus estimation. Neurons in our model code for a one-dimensional orientation variable phi. Connectivity between two neurons depends on the absolute difference |phi - phi'| between their preferred orientations. We derive an analytical expression of the activity profile for a population of neurons described by the spike response model with noisy threshold. We estimate the stimulus orientation and the trial-to-trial fluctuations using the population vector method. For stationary stimuli, uniform inhibitory connections produce a more reliable estimation of the stimulus than short-range excitatory connections with long-range inhibitions, although the latter interaction type produces a sharper tuning curve. These results are consistent with previous analytical studies of the Fisher information.

17.
We present an approach to obtain nonlinear information about neuronal response by computing multiple linear approximations. By calculating local linear approximations centered around particular stimuli, one can obtain insight into stimulus features that drive the response of highly nonlinear neurons, such as neurons highly selective to a small set of stimuli. We implement this approach based on stimulus-spike correlation (i.e., reverse correlation or spike-triggered average) methods. We illustrate the benefits of these linear approximations with a simplified two-dimensional model and a model of an auditory neuron that is highly selective to particular features of a song.
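The spike-triggered average that underlies this approach can be sketched with a toy linear-nonlinear neuron (hypothetical filter and sigmoid nonlinearity chosen for illustration). With a white gaussian stimulus, the STA recovers the direction of the neuron's filter, i.e., a local linear approximation to its nonlinear response:

```python
import numpy as np

rng = np.random.default_rng(3)
T, d = 20000, 8
stim = rng.standard_normal((T, d))        # white gaussian stimulus ensemble

w_true = np.zeros(d)                      # hypothetical true stimulus filter
w_true[0] = 1.0

# Toy LN neuron: spike probability is a sigmoid of the filtered stimulus.
p_spike = 1.0 / (1.0 + np.exp(-(stim @ w_true - 1.0)))
spikes = rng.random(T) < p_spike

# Spike-triggered average: mean stimulus preceding a spike — a local
# linear approximation to the neuron's stimulus-response function.
sta = stim[spikes].mean(axis=0)
```

The STA is large along the filter dimension and near zero elsewhere. The paper's point is that computing several such local approximations, each centered on a different reference stimulus, maps out the response of neurons too nonlinear for a single linear description.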

18.
Guigon E. Neural Computation, 2003, 15(9): 2115-2127
The parametric variation in neuronal discharge according to the values of sensory or motor variables strongly influences the collective behavior of neuronal populations. A multitude of studies on the populations of broadly tuned neurons (e.g., cosine tuning) have led to such well-known computational principles as population coding, noise suppression, and line attractors. Much less is known about the properties of populations of monotonically tuned neurons. In this letter, we show that there exists an efficient weakly biased linear estimator for monotonic populations and that neural processing based on linear collective computation and least-square error learning in populations of intensity-coded neurons has specific generalization capacities.
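A minimal sketch of the setting, under illustrative assumptions (linear monotonic tuning with random gains, gaussian noise): a least-square-error linear readout trained on a monotonically tuned population generalizes to held-out stimulus values:

```python
import numpy as np

rng = np.random.default_rng(4)
n, trials = 30, 2000
slopes = rng.uniform(0.5, 2.0, n)         # monotonic (intensity-coded) gains

# Training responses: each neuron's rate grows linearly with the stimulus.
s_train = rng.uniform(0.0, 1.0, trials)
R = np.outer(s_train, slopes) + 0.1 * rng.standard_normal((trials, n))

# Least-square-error linear readout with a bias term.
X = np.column_stack([R, np.ones(trials)])
w, *_ = np.linalg.lstsq(X, s_train, rcond=None)

# Generalization to a stimulus value not used for training.
s_test = 0.3
r_test = s_test * slopes + 0.1 * rng.standard_normal(n)
estimate = float(np.append(r_test, 1.0) @ w)
```

Averaging across the intensity-coded population suppresses the single-neuron noise, so the linear collective estimate stays close to the true value.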

19.
The capacity defines the ultimate fidelity limits of information transmission by any system. We derive the capacity of parallel Poisson process channels to judge the relative effectiveness of neural population structures. Because the Poisson process is equivalent to a Bernoulli process having small event probabilities, we infer the capacity of multi-channel Poisson models from their Bernoulli surrogates. For neural populations wherein each neuron has individual innervation, inter-neuron dependencies increase capacity, the opposite behavior of populations that share a single input. We use Shannon's rate-distortion theory to show that for Gaussian stimuli, the mean-squared error of the decoded stimulus decreases exponentially in both the population size and the maximal discharge rate. Detailed analysis shows that population coding is essential for accurate stimulus reconstruction. By modeling multi-neuron recordings as a sum over a neural population, we show that the resulting capacity is much less than the population's, reducing it to a level that can be less than that provided by two separately recorded neural responses. This result suggests that attempting neural control without spike sorting greatly reduces the achievable fidelity. In contrast, single-electrode neural stimulation does not incur any capacity deficit in comparison to stimulating individual neurons.

20.
Do simple cells in primary visual cortex form a tight frame?
Sets of neuronal tuning curves, which describe the responses of neurons as functions of a stimulus, can serve as a basis for approximating other functions of stimulus parameters. In a function-approximating network, synaptic weights determined by a correlation-based Hebbian rule are closely related to the coefficients that result when a function is expanded in an orthogonal basis. Although neuronal tuning curves typically are not orthogonal functions, the relationship between function approximation and correlation-based synaptic weights can be retained if the tuning curves satisfy the conditions of a tight frame. We examine whether the spatial receptive fields of simple cells in cat and monkey primary visual cortex (V1) form a tight frame, allowing them to serve as a basis for constructing more complicated extrastriate receptive fields using correlation-based synaptic weights. Our calculations show that the set of V1 simple cell receptive fields is not tight enough to account for the acuity observed psychophysically.
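The tight-frame condition can be checked numerically for small vector sets: a set of vectors is a tight frame exactly when its lower and upper frame bounds — the extreme eigenvalues of the frame operator — coincide. A toy finite-dimensional illustration (the paper applies the condition to continuous receptive-field functions):

```python
import numpy as np

def frame_bounds(vectors):
    """Lower and upper frame bounds of a set of row vectors: the extreme
    eigenvalues of V^T V. The set is a tight frame iff the bounds coincide."""
    V = np.asarray(vectors, dtype=float)
    eig = np.linalg.eigvalsh(V.T @ V)
    return float(eig.min()), float(eig.max())

# Four unit vectors at equally spaced orientations in the plane
# form a tight frame: V^T V = 2 I.
angles = np.linspace(0.0, np.pi, 4, endpoint=False)
tight = np.column_stack([np.cos(angles), np.sin(angles)])

# An unevenly sampled set of directions is not tight.
loose = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
```

For a tight frame, expanding a function against the set and resumming (as a correlation-based Hebbian rule effectively does) recovers the function up to the constant frame bound; the paper's finding is that measured V1 receptive fields fall short of this condition.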


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号