Similar articles
 20 similar articles retrieved (search time: 531 ms)
1.
A network of neurons with dendritic dynamics is analyzed in this paper. Two stable regimes of the complete network can coexist under continuous weak stimulation: the oscillatory synchronized regime and the quiet regime, where all neurons stop firing completely. It is shown that a single control pulse can calm a single neuron as well as the whole network, and the network stays in the quiet regime as long as the weak stimulation is turned on. It is also demonstrated that the same control technique can be effectively used to calm a random Erdős–Rényi network of dendritic neurons. Moreover, it appears that the random network of dendritic neurons can evolve into the quiet regime without applying any external pulse-based control techniques.
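The Erdős–Rényi topology used for the random network above is straightforward to reproduce. Here is a minimal sketch in Python; the function name and parameters are illustrative, not taken from the paper:

```python
import random

def erdos_renyi(n, p, seed=None):
    """Generate an undirected Erdős–Rényi G(n, p) graph as an adjacency map.

    Each of the n*(n-1)/2 possible edges is included independently
    with probability p.
    """
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

# With p = 1 every pair is connected; with p = 0 the graph has no edges.
full = erdos_renyi(5, 1.0)
empty = erdos_renyi(5, 0.0)
```

In a network simulation, each key would index a neuron and its set would list the neurons it projects to.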

2.
Neurons that sustain elevated firing in the absence of stimuli have been found in many neural systems. In graded persistent activity, neurons can sustain firing at many levels, suggesting a widely found type of network dynamics in which networks can relax to any one of a continuum of stationary states. The reproduction of these findings in model networks of nonlinear neurons has turned out to be nontrivial. A particularly insightful model has been the "bump attractor," in which a continuous attractor emerges through an underlying symmetry in the network connectivity matrix. This model, however, cannot account for data in which the persistent firing of neurons is a monotonic -- rather than a bell-shaped -- function of a stored variable. Here, we show that the symmetry used in the bump attractor network can be employed to create a whole family of continuous attractor networks, including those with monotonic tuning. Our design is based on tuning the external inputs to networks that have a connectivity matrix with Toeplitz symmetry. In particular, we provide a complete analytical solution of a line attractor network with monotonic tuning and show that for many other networks, the numerical tuning of synaptic weights reduces to the computation of a single parameter.
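The Toeplitz symmetry this design relies on means the connectivity matrix is constant along each diagonal: W[i][j] depends only on i - j. A minimal construction with illustrative values, not the paper's actual weights:

```python
def toeplitz(first_col, first_row):
    """Build a Toeplitz matrix: W[i][j] = first_col[i-j] when i >= j,
    first_row[j-i] otherwise, so every diagonal is constant."""
    n, m = len(first_col), len(first_row)
    return [[first_col[i - j] if i >= j else first_row[j - i]
             for j in range(m)] for i in range(n)]

# Illustrative 3x3 connectivity: self-weight 2, asymmetric neighbors.
W = toeplitz([2, 1, 0], [2, -1, 0])
# W == [[2, -1, 0],
#       [1,  2, -1],
#       [0,  1,  2]]
```

Because W[i][j] is a function of i - j alone, shifting a network state along the line of neurons leaves the recurrent input pattern unchanged, which is the symmetry the attractor construction exploits.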

3.
We present a dynamical theory of integrate-and-fire neurons with strong synaptic coupling. We show how phase-locked states that are stable in the weak coupling regime can destabilize as the coupling is increased, leading to states characterized by spatiotemporal variations in the interspike intervals (ISIs). The dynamics is compared with that of a corresponding network of analog neurons in which the outputs of the neurons are taken to be mean firing rates. A fundamental result is that for slow interactions, there is good agreement between the two models (on an appropriately defined timescale). Various examples of desynchronization in the strong coupling regime are presented. First, a globally coupled network of identical neurons with strong inhibitory coupling is shown to exhibit oscillator death in which some of the neurons suppress the activity of others. However, the stability of the synchronous state persists for very large networks and fast synapses. Second, an asymmetric network with a mixture of excitation and inhibition is shown to exhibit periodic bursting patterns. Finally, a one-dimensional network of neurons with long-range interactions is shown to desynchronize to a state with a spatially periodic pattern of mean firing rates across the network. This is modulated by deterministic fluctuations of the instantaneous firing rate whose size is an increasing function of the speed of synaptic response.
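For readers unfamiliar with the model class, the basic leaky integrate-and-fire dynamics underlying this analysis can be sketched with simple Euler integration. All parameter values here are illustrative; the paper analyzes full networks with synaptic coupling:

```python
def lif_spike_times(I, T=100.0, dt=0.1, tau=10.0, v_th=1.0, v_reset=0.0):
    """Euler integration of a leaky integrate-and-fire neuron:
    tau * dV/dt = -V + I, with a spike and reset when V crosses v_th."""
    v, t, spikes = 0.0, 0.0, []
    while t < T:
        v += dt / tau * (-v + I)
        if v >= v_th:
            spikes.append(t)
            v = v_reset
        t += dt
    return spikes

# A constant suprathreshold drive (I > v_th) yields perfectly regular
# firing: every interspike interval (ISI) is the same.
spikes = lif_spike_times(I=1.5)
isis = [b - a for a, b in zip(spikes, spikes[1:])]
```

The spatiotemporal ISI variations the paper studies arise only once neurons are coupled; an isolated deterministic LIF neuron, as here, fires periodically.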

4.
Brunel N  Hansel D 《Neural computation》2006,18(5):1066-1110
GABAergic interneurons play a major role in the emergence of various types of synchronous oscillatory patterns of activity in the central nervous system. Motivated by these experimental facts, modeling studies have investigated mechanisms for the emergence of coherent activity in networks of inhibitory neurons. However, most of these studies have focused either on the case where the noise in the network is absent or weak, or on the opposite situation, where it is strong. Hence, a full picture of how noise affects the dynamics of such systems is still lacking. The aim of this letter is to provide a more comprehensive understanding of the mechanisms by which the asynchronous states in large, fully connected networks of inhibitory neurons are destabilized as a function of the noise level. Three types of single neuron models are considered: the leaky integrate-and-fire (LIF) model, the exponential integrate-and-fire (EIF) model, and conductance-based models involving sodium and potassium Hodgkin-Huxley (HH) currents. We show that in all models, the instabilities of the asynchronous state can be classified in two classes. The first one consists of clustering instabilities, which exist in a restricted range of noise. These instabilities lead to synchronous patterns in which the population of neurons is broken into clusters of synchronously firing neurons. The irregularity of the firing patterns of the neurons is weak. The second class of instabilities, termed oscillatory firing rate instabilities, exists at any value of noise. They lead to cluster states at low noise. As the noise is increased, the instability occurs at larger coupling, and the pattern of firing that emerges becomes more irregular. In the regime of high noise and strong coupling, these instabilities lead to stochastic oscillations in which neurons fire in an approximately Poisson way with a common instantaneous probability of firing that oscillates in time.

5.
Coincident firing of neurons projecting to a common target cell is likely to raise the probability of firing of this postsynaptic cell. Therefore, synchronized firing constitutes a significant event for postsynaptic neurons and is likely to play a role in neuronal information processing. Physiological data on synchronized firing in cortical networks are based primarily on paired recordings and cross-correlation analysis. However, pair-wise correlations among all inputs onto a postsynaptic neuron do not uniquely determine the distribution of simultaneous postsynaptic events. We develop a framework in order to calculate the amount of synchronous firing that, based on maximum entropy, should exist in a homogeneous neural network in which the neurons have known pair-wise correlations and higher-order structure is absent. According to the distribution of maximal entropy, synchronous events in which a large proportion of the neurons participates should exist even in the case of weak pair-wise correlations. Network simulations also exhibit these highly synchronous events in the case of weak pair-wise correlations. If such a group of neurons provides input to a common postsynaptic target, these network bursts may enhance the impact of this input, especially in the case of a high postsynaptic threshold. The proportion of neurons participating in synchronous bursts can be approximated by our method under restricted conditions. When these conditions are not fulfilled, the spike trains have less than maximal entropy, which is indicative of the presence of higher-order structure. In this situation, the degree of synchronicity cannot be derived from the pair-wise correlations.

6.
With spatially organized neural networks, we examined how bias and noise inputs with spatial structure result in different network states such as bumps, localized oscillations, global oscillations, and localized synchronous firing that may be relevant to, for example, orientation selectivity. To this end, we used networks of McCulloch-Pitts neurons, which allow theoretical predictions, and verified the obtained results with numerical simulations. Spatial inputs, no matter whether they are bias inputs or shared noise inputs, affect only firing activities with resonant spatial frequency. The component of noise that is independent for different neurons increases the linearity of the neural system and gives rise to less spatial mode mixing and less bistability of population activities.

7.
Cortical neurons are predominantly excitatory and highly interconnected. In spite of this, the cortex is remarkably stable: normal brains do not exhibit the kind of runaway excitation one might expect of such a system. How does the cortex maintain stability in the face of this massive excitatory feedback? More importantly, how does it do so during computations, which necessarily involve elevated firing rates? Here we address these questions in the context of attractor networks: networks that exhibit multiple stable states, or memories. We find that such networks can be stabilized at the relatively low firing rates observed in vivo if two conditions are met: (1) the background state, where all neurons are firing at low rates, is inhibition dominated, and (2) the fraction of neurons involved in a memory is above some threshold, so that there is sufficient coupling between the memory neurons and the background. This allows "dynamical stabilization" of the attractors, meaning feedback from the pool of background neurons stabilizes what would otherwise be an unstable state. We suggest that dynamical stabilization may be a strategy used for a broad range of computations, not just those involving attractors.
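The attractor networks discussed above exhibit multiple stable states that act as memories. As a minimal generic illustration of that idea (a tiny Hebbian Hopfield network, not the authors' inhibition-dominated rate model), a stored pattern acts as a fixed point that pulls a corrupted input back to the memory:

```python
def hopfield_recall(patterns, probe, steps=10):
    """Hebbian-trained Hopfield network with +/-1 units: stored patterns
    are fixed points (attractors) of the synchronous update dynamics."""
    n = len(probe)
    # Hebbian outer-product weights, with no self-connections.
    w = [[0 if i == j else sum(p[i] * p[j] for p in patterns)
          for j in range(n)] for i in range(n)]
    s = list(probe)
    for _ in range(steps):
        s = [1 if sum(w[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

stored = [1, 1, 1, -1, -1, -1]
noisy = [1, -1, 1, -1, -1, -1]   # one bit flipped
recalled = hopfield_recall([stored], noisy)
```

The paper's question is what keeps such attractor states at low, biologically plausible firing rates; this sketch only shows the basic attractor property itself.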

8.
Spike trains from cortical neurons show a high degree of irregularity, with coefficients of variation (CV) of their interspike interval (ISI) distribution close to or higher than one. It has been suggested that this irregularity might be a reflection of a particular dynamical state of the local cortical circuit in which excitation and inhibition balance each other. In this "balanced" state, the mean current to the neurons is below threshold, and firing is driven by current fluctuations, resulting in irregular Poisson-like spike trains. Recent data show that the degree of irregularity in neuronal spike trains recorded during the delay period of working memory experiments is the same for both low-activity states of a few Hz and for elevated, persistent activity states of a few tens of Hz. Since the difference between these persistent activity states cannot be due to external factors coming from sensory inputs, this suggests that the underlying network dynamics might support coexisting balanced states at different firing rates. We use mean field techniques to study the possible existence of multiple balanced steady states in recurrent networks of current-based leaky integrate-and-fire (LIF) neurons. To assess the degree of balance of a steady state, we extend existing mean-field theories so that not only the firing rate, but also the coefficient of variation of the interspike interval distribution of the neurons, are determined self-consistently. Depending on the connectivity parameters of the network, we find bistable solutions of different types. If the local recurrent connectivity is mainly excitatory, the two stable steady states differ mainly in the mean current to the neurons. In this case, the mean drive in the elevated persistent activity state is suprathreshold and typically characterized by low spiking irregularity. 
If the local recurrent excitatory and inhibitory drives are both large and nearly balanced, or even dominated by inhibition, two stable states coexist, both with subthreshold current drive. In this case, the spiking variability in both the resting state and the mnemonic persistent state is large, but the balance condition implies parameter fine-tuning. Since the degree of required fine-tuning increases with network size and, on the other hand, the size of the fluctuations in the afferent current to the cells increases for small networks, overall we find that fluctuation-driven persistent activity in the very simplified type of models we analyze is not a robust phenomenon. Possible implications of considering more realistic models are discussed.

9.
We studied a simple random recurrent inhibitory network. Despite its simplicity, the dynamics were rich: owing to the random recurrent connections among neurons, the activity patterns of the neurons evolved with time without repeating. The sequence of activity patterns was generated by the trigger of an external signal, and the generation was stable against noise. Moreover, the same sequence was reproducible using a strong transient signal; that is, the sequence generation could be reset. Therefore, the time elapsed since the trigger of an external signal could be represented by the sequence of activity patterns, suggesting that this model could work as an internal clock. The model could generate different sequences of activity patterns by providing different external signals; thus, spatiotemporal information could be represented by this model. Moreover, it was possible to speed up and slow down the sequence generation.

10.
We study the emergence of synchronized burst activity in networks of neurons with spike adaptation. We show that networks of tonically firing adapting excitatory neurons can evolve to a state where the neurons burst in a synchronized manner. The mechanism leading to this burst activity is analyzed in a network of integrate-and-fire neurons with spike adaptation. The dependence of this state on the different network parameters is investigated, and it is shown that this mechanism is robust against inhomogeneities, sparseness of the connectivity, and noise. In networks of two populations, one excitatory and one inhibitory, we show that decreasing the inhibitory feedback can cause the network to switch from a tonically active, asynchronous state to the synchronized bursting state. Finally, we show that the same mechanism also causes synchronized burst activity in networks of more realistic conductance-based model neurons.
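The spike adaptation driving this bursting mechanism can be sketched with a single integrate-and-fire neuron carrying an adaptation current: each spike increments the current, which then decays slowly, lengthening subsequent interspike intervals. All parameter values below are illustrative, not taken from the paper:

```python
def adapting_lif_isis(I=2.0, T=200.0, dt=0.1, tau=10.0, tau_a=100.0,
                      g_inc=0.05, v_th=1.0):
    """LIF neuron with an adaptation current a:
    tau * dV/dt = -V + I - a;  tau_a * da/dt = -a;
    at each spike, V resets to 0 and a jumps by g_inc."""
    v = a = t = 0.0
    spikes = []
    while t < T:
        v += dt / tau * (-v + I - a)
        a += dt / tau_a * (-a)
        if v >= v_th:
            spikes.append(t)
            v = 0.0
            a += g_inc
        t += dt
    # Return the sequence of interspike intervals.
    return [b - c for c, b in zip(spikes, spikes[1:])]

# Adaptation slows firing: later ISIs are longer than the first one.
isis = adapting_lif_isis()
```

In the network setting of the paper, this activity-dependent slowing, combined with recurrent excitation, is what terminates each population burst and paces the next.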

11.
R.T.  P.A.   《Neurocomputing》2008,71(7-9):1373-1387
The impact of stability and synchronization of electrical activity on the structure of random brain networks with a distribution of connection strengths is investigated using a physiological model of brain activity. Connection strength is measured by the gain of the connection, which describes the effect of changes in the firing rate of neurons in one component on the neurons of another component. The stability of a network is calculated from the eigenvalue spectrum of the network's matrix of gains. Using random matrix theory, we predict and numerically verify the eigenvalue spectrum of randomly connected networks with gain values determined by a probability distribution. From the eigenvalue spectrum, the probability that a network is stable is calculated and shown to constrain the structural and physiological parameters of the network. In particular, stability constrains the variance of the gains. The complex vector of component amplitudes, or mode, corresponding to each dispersion root is an eigenvector of the network's gain matrix and is used to calculate the synchronization of each component's electrical activity. Synchronization is shown to decrease as the variance of the connection gain increases and inhibitory connections become more likely. Brain networks with large gain variance are shown to have multiple eigenvalues close to the stability boundary and to be partially synchronized. Such a network would have multiple partially synchronized modes strongly excited by a stimulus.

12.
The high-conductance state of cortical networks
We studied the dynamics of large networks of spiking neurons with conductance-based (nonlinear) synapses and compared them to networks with current-based (linear) synapses. For systems with sparse and inhibition-dominated recurrent connectivity, weak external inputs induced asynchronous irregular firing at low rates. Membrane potentials fluctuated a few millivolts below threshold, and membrane conductances were increased by a factor 2 to 5 with respect to the resting state. This combination of parameters characterizes the ongoing spiking activity typically recorded in the cortex in vivo. Many aspects of the asynchronous irregular state in conductance-based networks could be sufficiently well characterized with a simple numerical mean field approach. In particular, it correctly predicted an intriguing property of conductance-based networks that does not appear to be shared by current-based models: they exhibit states of low-rate asynchronous irregular activity that persist for some period of time even in the absence of external inputs and without cortical pacemakers. Simulations of larger networks (up to 350,000 neurons) demonstrated that the survival time of self-sustained activity increases exponentially with network size.

13.
Xie X  Seung HS 《Neural computation》2003,15(2):441-454
Backpropagation and contrastive Hebbian learning are two methods of training networks with hidden neurons. Backpropagation computes an error signal for the output neurons and spreads it over the hidden neurons. Contrastive Hebbian learning involves clamping the output neurons at desired values and letting the effect spread through feedback connections over the entire network. To investigate the relationship between these two forms of learning, we consider a special case in which they are identical: a multilayer perceptron with linear output units, to which weak feedback connections have been added. In this case, the change in network state caused by clamping the output neurons turns out to be the same as the error signal spread by backpropagation, except for a scalar prefactor. This suggests that the functionality of backpropagation can be realized alternatively by a Hebbian-type learning algorithm, which is suitable for implementation in biological networks.

14.
Karsten  Andreas  Bernd  Ana D.  Thomas 《Neurocomputing》2008,71(7-9):1694-1704
Biologically plausible excitatory neural networks develop a persistent synchronized pattern of activity depending on spontaneous activity and synaptic refractoriness (short-term depression). With fixed synaptic weights, synchronous bursts of oscillatory activity are stable and involve the whole network. In our modeling study we investigate the effect of a dynamic Hebbian-like learning mechanism, spike-timing-dependent plasticity (STDP), on the changes of synaptic weights depending on synchronous activity and network connection strategies (small-world topology). We show that STDP modifies the weights of synaptic connections in such a way that synchronization of neuronal activity is considerably weakened. Networks with a higher proportion of long connections can sustain a higher level of synchronization in spite of STDP influence. The resulting distribution of the synaptic weights in single neurons depends both on the global statistics of firing dynamics and on the number of incoming and outgoing connections.
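The pair-based STDP rule used in such studies is commonly summarized by an asymmetric exponential window over the spike-time difference. The amplitudes and time constants below are conventional illustrative values, not those of this paper:

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for dt = t_post - t_pre (in ms).

    Pre-before-post pairings (dt > 0) potentiate the synapse;
    post-before-pre pairings (dt < 0) depress it, with the effect
    decaying exponentially as |dt| grows.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)
    if dt < 0:
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0
```

With a_minus slightly larger than a_plus, as here, uncorrelated pre/post firing depresses synapses on average, which is one route by which STDP can erode network-wide synchrony.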

15.
Synchronization of biological neural networks is believed to play an important role in the brain's processing of neural information. In this paper, dendritic integration is incorporated into the Hodgkin-Huxley (HH) neuronal network model, yielding a modified dendritic-integration-rule-based HH (DHH) network model whose firing and synchronization properties are studied. First, taking a coupled system of three inhibitory neurons as an example, we find that adding dendritic integration raises the firing threshold of the neurons. We then build globally coupled networks of inhibitory and of excitatory neurons, and find that strong coupling can drive both types of network to nearly complete synchronization and has a considerable effect on the firing amplitude of the neurons. More interestingly, at a certain value of the dendritic integration coefficient, the synchronization of the inhibitory network reaches its maximum while that of the excitatory network reaches its minimum.

16.
Stochastic dynamics of a finite-size spiking neural network
Soula H  Chow CC 《Neural computation》2007,19(12):3262-3292
We present a simple Markov model of spiking neural dynamics that can be analytically solved to characterize the stochastic dynamics of a finite-size spiking neural network. We give closed-form estimates for the equilibrium distribution, mean rate, variance, and autocorrelation function of the network activity. The model is applicable to any network where the probability of firing of a neuron in the network depends on only the number of neurons that fired in a previous temporal epoch. Networks with statistically homogeneous connectivity and membrane and synaptic time constants that are not excessively long could satisfy these conditions. Our model completely accounts for the size of the network and correlations in the firing activity. It also allows us to examine how the network dynamics can deviate from mean field theory. We show that the model and solutions are applicable to spiking neural networks in biophysically plausible parameter regimes.
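The key modeling assumption, that each neuron's firing probability depends only on the number of neurons that fired in the previous temporal epoch, makes the population count itself a Markov chain. A toy simulation under an illustrative choice of gain function (not the authors' specific model):

```python
import random

def simulate_markov_network(n, gain, steps, seed=0):
    """Each of n neurons fires in epoch t+1 with probability
    p = min(1, gain * k/n + 0.05), where k is the number of neurons
    that fired in epoch t. Since p depends only on k, the sequence
    of population counts (k_t) is a Markov chain."""
    rng = random.Random(seed)
    k = n // 2
    history = [k]
    for _ in range(steps):
        p = min(1.0, gain * k / n + 0.05)  # common firing probability
        k = sum(rng.random() < p for _ in range(n))  # binomial draw
        history.append(k)
    return history

hist = simulate_markov_network(n=100, gain=0.5, steps=200)
# The chain relaxes toward the fixed point of k -> n*(gain*k/n + 0.05),
# i.e., roughly k = 10 for these illustrative parameters, with
# finite-size binomial fluctuations around it.
```

Finite-size fluctuations around the mean-field fixed point, visible here as the spread of `hist`, are exactly the deviations from mean field theory that the paper's closed-form estimates characterize.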

17.
Attractor networks have been one of the most successful paradigms in neural computation, and have been used as models of computation in the nervous system. Recently, we proposed a paradigm called 'latent attractors' where attractors embedded in a recurrent network via Hebbian learning are used to channel network response to external input rather than becoming manifest themselves. This allows the network to generate context-sensitive internal codes in complex situations. Latent attractors are particularly helpful in explaining computations within the hippocampus--a brain region of fundamental significance for memory and spatial learning. Latent attractor networks are a special case of associative memory networks. The model studied here consists of a two-layer recurrent network with attractors stored in the recurrent connections using a clipped Hebbian learning rule. The firing in both layers is competitive--K winners take all firing. The number of neurons allowed to fire, K, is smaller than the size of the active set of the stored attractors. The performance of latent attractor networks depends on the number of such attractors that a network can sustain. In this paper, we use signal-to-noise methods developed for standard associative memory networks to do a theoretical and computational analysis of the capacity and dynamics of latent attractor networks. This is an important first step in making latent attractors a viable tool in the repertoire of neural computation. The method developed here leads to numerical estimates of capacity limits and dynamics of latent attractor networks. The technique represents a general approach to analyse standard associative memory networks with competitive firing. The theoretical analysis is based on estimates of the dendritic sum distributions using Gaussian approximation. Because of the competitive firing property, the capacity results are estimated only numerically by iteratively computing the probability of erroneous firings. 
The analysis contains two cases: the simple case analysis, which accounts for the correlations between weights due to shared patterns, and the detailed case analysis, which also includes the temporal correlations between the network's present and previous state. The latter case better predicts the dynamics of the network state for nonzero initial spurious firing. The theoretical analysis also shows the influence of the main parameters of the model on the storage capacity.

18.
Fast oscillations and in particular gamma-band oscillation (20-80 Hz) are commonly observed during brain function and are at the center of several neural processing theories. In many cases, mathematical analysis of fast oscillations in neural networks has been focused on the transition between irregular and oscillatory firing viewed as an instability of the asynchronous activity. But in fact, brain slice experiments as well as detailed simulations of biological neural networks have produced a large corpus of results concerning the properties of fully developed oscillations that are far from this transition point. We propose here a mathematical approach to deal with nonlinear oscillations in a network of heterogeneous or noisy integrate-and-fire neurons connected by strong inhibition. This approach involves limited mathematical complexity and gives a good sense of the oscillation mechanism, making it an interesting tool to understand fast rhythmic activity in simulated or biological neural networks. A surprising result of our approach is that under some conditions, a change of the strength of inhibition only weakly influences the period of the oscillation. This is in contrast to standard theoretical and experimental models of interneuron network gamma oscillations (ING), where frequency tightly depends on inhibition strength, but it is similar to observations made in some in vitro preparations in the hippocampus and the olfactory bulb and in some detailed network models. This result is explained by the phenomenon of suppression that is known to occur in strongly coupled oscillating inhibitory networks but had not yet been related to the behavior of oscillation frequency.

19.
The synchrony of neurons in extrastriate visual cortex is modulated by selective attention even when there are only small changes in firing rate (Fries, Reynolds, Rorie, & Desimone, 2001). We used Hodgkin-Huxley type models of cortical neurons to investigate the mechanism by which the degree of synchrony can be modulated independently of changes in firing rates. The synchrony of local networks of model cortical interneurons interacting through GABA(A) synapses was modulated on a fast timescale by selectively activating a fraction of the interneurons. The activated interneurons became rapidly synchronized and suppressed the activity of the other neurons in the network but only if the network was in a restricted range of balanced synaptic background activity. During stronger background activity, the network did not synchronize, and for weaker background activity, the network synchronized but did not return to an asynchronous state after synchronizing. The inhibitory output of the network blocked the activity of pyramidal neurons during asynchronous network activity, and during synchronous network activity, it enhanced the impact of the stimulus-related activity of pyramidal cells on receiving cortical areas (Salinas & Sejnowski, 2001). Synchrony by competition provides a mechanism for controlling synchrony with minor alterations in rate, which could be useful for information processing. Because traditional methods such as cross-correlation and the spike field coherence require several hundred milliseconds of recordings and cannot measure rapid changes in the degree of synchrony, we introduced a new method to detect rapid changes in the degree of coincidence and precision of spike timing.

20.
This letter studies the properties of the random neural networks (RNNs) with state-dependent firing neurons. It is assumed that the times between successive signal emissions of a neuron are dependent on the neuron potential. Under certain conditions, the networks keep the simple product form of stationary solutions and exhibit enhanced capacity of adjusting the probability distribution of the neuron states. It is demonstrated that desired associative memory states can be stored in the networks.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号