Similar Literature
20 similar documents retrieved.
1.
We introduce and test a system for simulating networks of conductance-based neuron models using analog circuits. At the single-cell level, we use custom-designed analog circuits (ASICs) that simulate two types of spiking neurons based on Hodgkin-Huxley-like dynamics: "regular spiking" excitatory neurons with spike-frequency adaptation, and "fast spiking" inhibitory neurons. Synaptic interactions are mediated by conductance-based synaptic currents described by kinetic models. Connectivity and plasticity rules are implemented digitally through a real-time interface between a computer and a PCI board containing the ASICs. We show a prototype system of a few neurons interconnected with synapses undergoing spike-timing dependent plasticity (STDP), and compare this system with numerical simulations. We use this system to evaluate the effect of parameter dispersion on the behavior of small circuits of neurons. It is shown that, although the exact spike timings are not precisely emulated by the ASIC neurons, the behavior of small networks with STDP matches that of numerical simulations. Thus, this mixed analog-digital architecture provides a valuable tool for real-time simulations of networks of neurons with STDP. It should be useful for any real-time application, such as hybrid systems interfacing network models with biological neurons.

2.
We present a mixed-mode analog/digital VLSI device comprising an array of leaky integrate-and-fire (I&F) neurons, adaptive synapses with spike-timing dependent plasticity, and an asynchronous event-based communication infrastructure that allows the user to (re)configure networks of spiking neurons with arbitrary topologies. The asynchronous communication protocol used by the silicon neurons to transmit spikes (events) off-chip and the silicon synapses to receive spikes from the outside is based on the "address-event representation" (AER). We describe the analog circuits designed to implement the silicon neurons and synapses and present experimental data showing the neuron's response properties and the synapses' characteristics in response to AER input spike trains. Our results indicate that these circuits can be used in massively parallel VLSI networks of I&F neurons to simulate real-time complex spike-based learning algorithms.
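As a rough illustration of the address-event representation idea mentioned above, the sketch below encodes spikes as (timestamp, address) events and merges them into a single time-ordered stream. The `AEREvent` type, the microsecond timestamps, and both function names are illustrative choices for this sketch, not the chip's actual protocol or API.

```python
from collections import namedtuple

# One AER event: the address identifies which neuron spiked; the timestamp
# records when. All names here are illustrative, not the device's actual API.
AEREvent = namedtuple("AEREvent", ["timestamp_us", "address"])

def encode_spikes(spike_times_by_neuron):
    """Flatten per-neuron spike times (seconds) into a time-ordered AER stream."""
    events = [
        AEREvent(int(t * 1e6), addr)
        for addr, times in spike_times_by_neuron.items()
        for t in times
    ]
    return sorted(events, key=lambda e: e.timestamp_us)

def decode_stream(events, n_neurons):
    """Recover per-neuron spike times (in seconds) from an AER stream."""
    spikes = {addr: [] for addr in range(n_neurons)}
    for e in events:
        spikes[e.address].append(e.timestamp_us * 1e-6)
    return spikes

if __name__ == "__main__":
    stream = encode_spikes({0: [0.010, 0.032], 1: [0.021]})
    print(stream)                    # time-ordered (timestamp, address) pairs
    print(decode_stream(stream, 2))  # spike times recovered per neuron
```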

3.
The emergence of synchrony in the activity of large, heterogeneous networks of spiking neurons is investigated. We define the robustness of synchrony by the critical disorder at which the asynchronous state becomes linearly unstable. We show that at low firing rates, synchrony is more robust in excitatory networks than in inhibitory networks, but excitatory networks cannot display any synchrony when the average firing rate becomes too high. We introduce a new regime where all inputs, external and internal, are strong and have opposite effects that cancel each other when averaged. In this regime, the robustness of synchrony is strongly enhanced, and robust synchrony can be achieved at a high firing rate in inhibitory networks. On the other hand, in excitatory networks, synchrony remains limited in frequency due to the intrinsic instability of strong recurrent excitation.

4.
We study the emergence of synchronized burst activity in networks of neurons with spike adaptation. We show that networks of tonically firing adapting excitatory neurons can evolve to a state where the neurons burst in a synchronized manner. The mechanism leading to this burst activity is analyzed in a network of integrate-and-fire neurons with spike adaptation. The dependence of this state on the different network parameters is investigated, and it is shown that this mechanism is robust against inhomogeneities, sparseness of the connectivity, and noise. In networks of two populations, one excitatory and one inhibitory, we show that decreasing the inhibitory feedback can cause the network to switch from a tonically active, asynchronous state to the synchronized bursting state. Finally, we show that the same mechanism also causes synchronized burst activity in networks of more realistic conductance-based model neurons.
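The bursting mechanism above rests on a spike-triggered adaptation current that slowly builds up and suppresses firing. The following single-neuron sketch, with illustrative parameter values rather than those of the paper, shows that ingredient in isolation: a leaky integrate-and-fire neuron whose interspike intervals lengthen as the adaptation variable accumulates.

```python
import numpy as np

def simulate_adaptive_lif(T=2.0, dt=1e-4, I_ext=2.0,
                          tau_m=0.02, v_th=1.0, v_reset=0.0,
                          tau_a=0.3, delta_a=0.15):
    """Leaky integrate-and-fire neuron with a spike-triggered adaptation
    variable a(t). Parameters are illustrative, not taken from the paper.
    dv/dt = (-v - a + I_ext) / tau_m; a decays with time constant tau_a and
    jumps by delta_a at each spike, lowering the effective drive afterwards."""
    n_steps = int(T / dt)
    v, a = 0.0, 0.0
    spike_times = []
    for i in range(n_steps):
        v += dt * (-v - a + I_ext) / tau_m
        a += dt * (-a / tau_a)
        if v >= v_th:
            v = v_reset
            a += delta_a
            spike_times.append(i * dt)
    return np.array(spike_times)

if __name__ == "__main__":
    spikes = simulate_adaptive_lif()
    isis = np.diff(spikes)
    print(f"{len(spikes)} spikes; first ISIs (s): {np.round(isis[:5], 4)}")
```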

5.
Brunel N, Hansel D. Neural Computation, 2006, 18(5): 1066-1110.
GABAergic interneurons play a major role in the emergence of various types of synchronous oscillatory patterns of activity in the central nervous system. Motivated by these experimental facts, modeling studies have investigated mechanisms for the emergence of coherent activity in networks of inhibitory neurons. However, most of these studies have focused either on the case in which noise in the network is absent or weak, or on the opposite situation in which it is strong. Hence, a full picture of how noise affects the dynamics of such systems is still lacking. The aim of this letter is to provide a more comprehensive understanding of the mechanisms by which the asynchronous states in large, fully connected networks of inhibitory neurons are destabilized as a function of the noise level. Three types of single-neuron models are considered: the leaky integrate-and-fire (LIF) model, the exponential integrate-and-fire (EIF) model, and conductance-based models involving sodium and potassium Hodgkin-Huxley (HH) currents. We show that in all models, the instabilities of the asynchronous state can be classified into two classes. The first consists of clustering instabilities, which exist in a restricted range of noise. These instabilities lead to synchronous patterns in which the population of neurons is broken into clusters of synchronously firing neurons. The irregularity of the firing patterns of the neurons is weak. The second class of instabilities, termed oscillatory firing rate instabilities, exists at any value of noise. They lead to cluster states at low noise. As the noise is increased, the instability occurs at larger coupling, and the pattern of firing that emerges becomes more irregular. In the regime of high noise and strong coupling, these instabilities lead to stochastic oscillations in which neurons fire in an approximately Poisson way with a common instantaneous probability of firing that oscillates in time.
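For readers unfamiliar with the exponential integrate-and-fire model mentioned above, a minimal noisy EIF simulation is sketched below (Euler-Maruyama integration, illustrative parameter values). It shows only the single-neuron dynamics, not the network analysis of the letter.

```python
import numpy as np

def simulate_eif(T=1.0, dt=1e-5, I=1.2, tau_m=0.02,
                 E_L=0.0, v_T=1.0, delta_T=0.1,
                 v_spike=3.0, v_reset=0.0, sigma=0.5, seed=0):
    """Exponential integrate-and-fire neuron driven by a constant current plus
    white noise. The exponential term makes the spike upswing smooth instead of
    a hard threshold; a spike is registered when v crosses v_spike and the
    membrane is reset. Parameter values are illustrative only."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    v = E_L
    spike_times = []
    for i in range(n_steps):
        drift = (-(v - E_L) + delta_T * np.exp((v - v_T) / delta_T) + I) / tau_m
        v += dt * drift + sigma * np.sqrt(dt / tau_m) * rng.standard_normal()
        if v >= v_spike:
            spike_times.append(i * dt)
            v = v_reset
    return np.array(spike_times)

if __name__ == "__main__":
    spikes = simulate_eif()
    print(f"{len(spikes)} spikes over 1 s of simulated time")
```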

6.
Event-driven simulation strategies were proposed recently to simulate integrate-and-fire (IF) type neuronal models. These strategies can lead to computationally efficient algorithms for simulating large-scale networks of neurons; most important, such approaches are more precise than traditional clock-driven numerical integration approaches because the timing of spikes is treated exactly. The drawback of such event-driven methods is that in order to be efficient, the membrane equations must be solvable analytically, or at least provide simple analytic approximations for the state variables describing the system. This requirement prevents, in general, the use of conductance-based synaptic interactions within the framework of event-driven simulations and, thus, the investigation of network paradigms where synaptic conductances are important. We propose here a number of extensions of the classical leaky IF neuron model involving approximations of the membrane equation with conductance-based synaptic current, which lead to simple analytic expressions for the membrane state, and therefore can be used in the event-driven framework. These conductance-based IF (gIF) models are compared to commonly used models, such as the leaky IF model or biophysical models in which conductances are explicitly integrated. All models are compared with respect to various spiking response properties in the presence of synaptic activity, such as the spontaneous discharge statistics, the temporal precision in resolving synaptic inputs, and gain modulation under in vivo-like synaptic bombardment. Being based on the passive membrane equation with fixed-threshold spike generation, the proposed gIF models are situated in between leaky IF and biophysical models but are much closer to the latter with respect to their dynamic behavior and response characteristics, while still being nearly as computationally efficient as simple IF neuron models. gIF models should therefore provide a useful tool for efficient and precise simulation of large-scale neuronal networks with realistic, conductance-based synaptic interactions.
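A minimal sketch of the kind of analytic update that makes conductance-based integrate-and-fire models usable in an event-driven framework is given below. It assumes, as a simplification, that the total synaptic conductance is constant between incoming events; all parameter values and names are illustrative rather than taken from the paper.

```python
import numpy as np

# Minimal event-driven update for a leaky membrane with conductance-based
# synapses, under the simplifying assumption (one of several made by gIF-type
# models) that the total synaptic conductance is piecewise constant between
# incoming spikes. All parameter values are illustrative.

E_L, E_E, E_I = -70.0, 0.0, -80.0   # leak / excitatory / inhibitory reversal (mV)
g_L, C = 10.0, 200.0                # leak conductance (nS), capacitance (pF)

def advance(v, g_e, g_i, dt_ms):
    """Advance the membrane analytically over an inter-event interval dt_ms,
    treating g_e and g_i as constant on that interval."""
    g_tot = g_L + g_e + g_i
    v_inf = (g_L * E_L + g_e * E_E + g_i * E_I) / g_tot  # effective reversal
    tau_eff = C / g_tot                                  # effective time constant (ms)
    return v_inf + (v - v_inf) * np.exp(-dt_ms / tau_eff)

if __name__ == "__main__":
    # Membrane at rest, then an excitatory event raises g_e for 5 ms.
    v = E_L
    v = advance(v, g_e=5.0, g_i=0.0, dt_ms=5.0)
    print(f"membrane after 5 ms of excitatory conductance: {v:.2f} mV")
```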

7.
Spike trains from cortical neurons show a high degree of irregularity, with coefficients of variation (CV) of their interspike interval (ISI) distribution close to or higher than one. It has been suggested that this irregularity might be a reflection of a particular dynamical state of the local cortical circuit in which excitation and inhibition balance each other. In this "balanced" state, the mean current to the neurons is below threshold, and firing is driven by current fluctuations, resulting in irregular Poisson-like spike trains. Recent data show that the degree of irregularity in neuronal spike trains recorded during the delay period of working memory experiments is the same for both low-activity states of a few Hz and for elevated, persistent activity states of a few tens of Hz. Since the difference between these persistent activity states cannot be due to external factors coming from sensory inputs, this suggests that the underlying network dynamics might support coexisting balanced states at different firing rates. We use mean field techniques to study the possible existence of multiple balanced steady states in recurrent networks of current-based leaky integrate-and-fire (LIF) neurons. To assess the degree of balance of a steady state, we extend existing mean-field theories so that not only the firing rate, but also the coefficient of variation of the interspike interval distribution of the neurons, are determined self-consistently. Depending on the connectivity parameters of the network, we find bistable solutions of different types. If the local recurrent connectivity is mainly excitatory, the two stable steady states differ mainly in the mean current to the neurons. In this case, the mean drive in the elevated persistent activity state is suprathreshold and typically characterized by low spiking irregularity. If the local recurrent excitatory and inhibitory drives are both large and nearly balanced, or even dominated by inhibition, two stable states coexist, both with subthreshold current drive. In this case, the spiking variability in both the resting state and the mnemonic persistent state is large, but the balance condition implies parameter fine-tuning. Since the degree of required fine-tuning increases with network size and, on the other hand, the size of the fluctuations in the afferent current to the cells increases for small networks, overall we find that fluctuation-driven persistent activity in the very simplified type of models we analyze is not a robust phenomenon. Possible implications of considering more realistic models are discussed.
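The two statistics that the extended mean-field theory determines self-consistently, the firing rate and the CV of the interspike intervals, can be estimated directly from a simulated current-based LIF neuron in the diffusion approximation. The sketch below does exactly that with illustrative parameters; it is not the mean-field calculation itself.

```python
import numpy as np

def rate_and_cv(mu=0.8, sigma=0.4, tau_m=0.02, v_th=1.0, v_reset=0.0,
                T=20.0, dt=1e-4, seed=0):
    """Simulate a current-based LIF neuron in the diffusion approximation
    (mean drive mu, noise amplitude sigma) and return the firing rate and the
    coefficient of variation of its interspike intervals. With mu below
    threshold, firing is fluctuation-driven and the spike train is irregular.
    Parameter values are illustrative, not the paper's."""
    rng = np.random.default_rng(seed)
    v, last_spike, isis = v_reset, None, []
    for i in range(int(T / dt)):
        v += dt * (mu - v) / tau_m + sigma * np.sqrt(dt / tau_m) * rng.standard_normal()
        if v >= v_th:
            t = i * dt
            if last_spike is not None:
                isis.append(t - last_spike)
            last_spike, v = t, v_reset
    isis = np.array(isis)
    return 1.0 / isis.mean(), isis.std() / isis.mean()

if __name__ == "__main__":
    rate, cv = rate_and_cv()
    print(f"rate = {rate:.1f} Hz, CV = {cv:.2f}")  # firing statistics under subthreshold mean drive
```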

8.
We investigate through theoretical analysis and computer simulations the consequences of unreliable synapses for fast analog computations in networks of spiking neurons, with analog variables encoded by the current firing activities of pools of spiking neurons. Our results suggest a possible functional role for the well-established unreliability of synaptic transmission on the network level. We also investigate computations on time series and Hebbian learning in this context of space-rate coding in networks of spiking neurons with unreliable synapses.

9.
Golomb D, Hansel D. Neural Computation, 2000, 12(5): 1095-1139.
The prevalence of coherent oscillations in various frequency ranges in the central nervous system raises the question of the mechanisms that synchronize large populations of neurons. We study synchronization in models of large networks of spiking neurons with random sparse connectivity. Synchrony occurs only when the average number of synapses, M, that a cell receives is larger than a critical value, Mc. Below Mc, the system is in an asynchronous state. In the limit of weak coupling, assuming identical neurons, we reduce the model to a system of phase oscillators that are coupled via an effective interaction, gamma. In this framework, we develop an approximate theory for sparse networks of identical neurons to estimate Mc analytically from the Fourier coefficients of gamma. Our approach relies on the assumption that the dynamics of a neuron depend mainly on the number of cells that are presynaptic to it. We apply this theory to compute Mc for a model of inhibitory networks of integrate-and-fire (I&F) neurons as a function of the intrinsic neuronal properties (e.g., the refractory period Tr), the synaptic time constants, and the strength of the external stimulus, Iext. The number Mc is found to vary nonmonotonically with the strength of Iext. For Tr = 0, we estimate the minimum value of Mc over all the parameters of the model to be 363.8. Above Mc, the neurons tend to fire in smeared one-cluster states at high firing rates and smeared two-or-more-cluster states at low firing rates. Refractoriness decreases Mc at intermediate and high firing rates. These results are compared to numerical simulations. We show numerically that systems with different sizes, N, behave in the same way provided the connectivity, M, is such that 1/Meff = 1/M - 1/N remains constant when N varies. This allows extrapolating the large-N behavior of a network from numerical simulations of networks of relatively small sizes (N = 800 in our case). We find that our theory predicts with remarkable accuracy the value of Mc and the patterns of synchrony above Mc, provided the synaptic coupling is not too large. We also study the strong coupling regime of inhibitory sparse networks. All of our simulations demonstrate that increasing the coupling strength reduces the level of synchrony of the neuronal activity. Above a critical coupling strength, the network activity is asynchronous. We point out a fundamental limitation for the mechanisms of synchrony relying on inhibition alone, if heterogeneities in the intrinsic properties of the neurons and spatial fluctuations in the external input are also taken into account.
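The size-scaling prescription quoted above (keeping 1/Meff = 1/M - 1/N fixed across sizes) and the fixed in-degree connectivity it presupposes are easy to make concrete. The sketch below does so with illustrative values (Meff = 400, sizes from 800 up) and function names of our own choosing.

```python
import numpy as np

def m_for_size(m_eff, n):
    """Number of inputs per cell, M, to use at network size n so that
    1/M_eff = 1/M - 1/n stays fixed across sizes (the scaling the abstract
    uses to extrapolate large-N behavior from small simulations)."""
    return 1.0 / (1.0 / m_eff + 1.0 / n)

def sparse_connectivity(n, m, seed=0):
    """Random connectivity matrix in which every cell receives exactly m
    presynaptic partners (no self-connections). Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    conn = np.zeros((n, n), dtype=bool)
    for post in range(n):
        candidates = np.delete(np.arange(n), post)
        pre = rng.choice(candidates, size=m, replace=False)
        conn[post, pre] = True
    return conn

if __name__ == "__main__":
    for n in (800, 3200, 12800):
        print(f"N = {n:6d} -> M = {m_for_size(400.0, n):.1f}")
    c = sparse_connectivity(800, int(round(m_for_size(400.0, 800))))
    print("in-degree check:", c.sum(axis=1)[:5])
```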

10.
The balanced random network model attracts considerable interest because it explains the irregular spiking activity at low rates and large membrane potential fluctuations exhibited by cortical neurons in vivo. In this article, we investigate to what extent this model is also compatible with the experimentally observed phenomenon of spike-timing-dependent plasticity (STDP). Confronted with the plethora of theoretical models for STDP available, we reexamine the experimental data. On this basis, we propose a novel STDP update rule, with a multiplicative dependence on the synaptic weight for depression, and a power law dependence for potentiation. We show that this rule, when implemented in large, balanced networks of realistic connectivity and sparseness, is compatible with the asynchronous irregular activity regime. The resultant equilibrium weight distribution is unimodal with fluctuating individual weight trajectories and does not exhibit development of structure. We investigate the robustness of our results with respect to the relative strength of depression. We introduce synchronous stimulation to a group of neurons and demonstrate that the decoupling of this group from the rest of the network is so severe that it cannot effectively control the spiking of other neurons, even those with the highest convergence from this group.
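A pair-based sketch of the weight dependence described above, potentiation scaling as a power of the current weight and depression proportional to it, is given below. The constants, time windows, and normalization to a reference weight of 1 are illustrative, not the paper's fitted values.

```python
import numpy as np

def stdp_update(w, dt, lam=0.01, alpha=1.1, mu=0.4, tau_plus=0.02, tau_minus=0.02):
    """Pair-based STDP update with the weight dependence described in the
    abstract: potentiation scales as a power of the current weight, depression
    is multiplicative (proportional to it). dt = t_post - t_pre in seconds.
    The constants lam, alpha, mu and the time constants are illustrative, and
    weights are taken to be normalized to a reference value of 1."""
    if dt > 0:    # pre before post -> potentiation, power law in w
        return w + lam * (w ** mu) * np.exp(-dt / tau_plus)
    elif dt < 0:  # post before pre -> depression, multiplicative in w
        return max(0.0, w - lam * alpha * w * np.exp(dt / tau_minus))
    return w

if __name__ == "__main__":
    w = 0.5
    w = stdp_update(w, dt=+0.010)   # causal pairing, weight grows
    w = stdp_update(w, dt=-0.010)   # anti-causal pairing, weight shrinks
    print(f"weight after one causal and one anti-causal pairing: {w:.4f}")
```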

11.
Azhar F, Anderson WS. Neural Computation, 2012, 24(10): 2655-2677.
The characterization of coordinated activity in neuronal populations has received renewed interest in the light of advancing experimental techniques that allow recordings from multiple units simultaneously. Across both in vitro and in vivo preparations, nearby neurons show coordinated responses when spontaneously active and when subject to external stimuli. Recent work (Truccolo, Hochberg, & Donoghue, 2010) has connected these coordinated responses to behavior, showing that small ensembles of neurons in arm-related areas of sensorimotor cortex can reliably predict single-neuron spikes in behaving monkeys and humans. We investigate this phenomenon using an analogous point process model, showing that in the case of a computational model of cortex responding to random background inputs, one is similarly able to predict the future state of a single neuron by considering its own spiking history, together with the spiking histories of randomly sampled ensembles of nearby neurons. This model exhibits realistic cortical architecture and displays bursting episodes in the two distinct connectivity schemes studied. We conjecture that the baseline predictability we find in these instances is characteristic of locally connected networks more broadly considered.

12.
13.
Event-driven strategies have been used to simulate spiking neural networks exactly. Previous work is limited to linear integrate-and-fire neurons. In this note, we extend event-driven schemes to a class of nonlinear integrate-and-fire models. Results are presented for the quadratic integrate-and-fire model with instantaneous or exponential synaptic currents. Extensions to conductance-based currents and exponential integrate-and-fire neurons are discussed.
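For the quadratic integrate-and-fire model with constant input between events, the membrane trajectory and the next spike time both have closed forms, which is what an exact event-driven scheme needs. The sketch below shows those two building blocks for the normalised QIF with I > 0; it is a simplified illustration, not the full algorithm of the note.

```python
import numpy as np

# Analytic building blocks for event-driven simulation of the (normalised)
# quadratic integrate-and-fire neuron, dV/dt = V**2 + I with constant I > 0
# between events: the membrane can be advanced exactly to any time, and the
# time at which V diverges (the spike) has a closed form. Instantaneous
# synapses then simply add a jump to V when an input event arrives.

def advance_qif(v0, I, dt):
    """Exact state of the QIF membrane a time dt after it was at v0
    (I > 0, assuming no spike occurs within dt)."""
    s = np.sqrt(I)
    return s * np.tan(s * dt + np.arctan(v0 / s))

def time_to_spike(v0, I):
    """Exact time until V -> +infinity starting from v0, for constant I > 0."""
    s = np.sqrt(I)
    return (np.pi / 2 - np.arctan(v0 / s)) / s

if __name__ == "__main__":
    I, v = 1.0, -1.0          # illustrative values
    t_spk = time_to_spike(v, I)
    print(f"next spike in {t_spk:.4f} time units")
    # Advance halfway to the spike and check against the closed-form spike time.
    v_half = advance_qif(v, I, 0.5 * t_spk)
    print(f"remaining time from the halfway state: {time_to_spike(v_half, I):.4f}")
```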

14.
A population formulation of neuronal activity is employed to study an excitatory network of (spiking) neurons receiving external input as well as recurrent feedback. At relatively low levels of feedback, the network exhibits time-stationary asynchronous behavior. A stability analysis of this time-stationary state leads to an analytical criterion for the critical gain at which asynchronous behavior becomes unstable. At instability, the dynamics can undergo a supercritical Hopf bifurcation, and the population passes to a synchronous state. Under different conditions it can pass to synchrony through a subcritical Hopf bifurcation. At high gain, a network can reach a runaway state in finite time, after which the network no longer supports bounded solutions. The introduction of time-delayed feedback leads to a rich range of phenomena. For example, for a given external input, increasing gain produces a transition from asynchrony, to synchrony, to asynchrony, and finally can lead to divergence. Time delay is also shown to strongly mollify the amplitude of synchronous oscillations. Perhaps of general importance is the result that synchronous behavior can exist only for a narrow range of time delays, a range that is an order of magnitude smaller than the period of oscillation.

15.
In model networks of E-cells and I-cells (excitatory and inhibitory neurons, respectively), synchronous rhythmic spiking often comes about from the interplay between the two cell groups: the E-cells synchronize the I-cells and vice versa. Under ideal conditions (homogeneity in relevant network parameters and all-to-all connectivity, for instance), this mechanism can yield perfect synchronization. We find that approximate, imperfect synchronization is possible even with very sparse, random connectivity. The crucial quantity is the expected number of inputs per cell. As long as it is large enough (more precisely, as long as the variance of the total number of synaptic inputs per cell is small enough), tight synchronization is possible. The desynchronizing effect of random connectivity can be reduced by strengthening the E -> I synapses. More surprisingly, it cannot be reduced by strengthening the I -> E synapses. However, the decay time constant of inhibition plays an important role. Faster decay yields tighter synchrony. In particular, in models in which the inhibitory synapses are assumed to be instantaneous, the effects of sparse, random connectivity cannot be seen.

16.
This paper describes Simulator, a program for modeling individual neurons, synapses, and complete neural systems. It is used to model Pleurobranchaea, a well-known carnivorous, soft-bodied marine invertebrate resembling a sea cucumber. Its various behaviors, such as feeding and withdrawal from predators, were programmed in Fortran 77 and simulated on a Sun 3/50 workstation, with satisfactory results.

17.
In most neural network models, synapses are treated as static weights that change only with the slow time scales of learning. It is well known, however, that synapses are highly dynamic and show use-dependent plasticity over a wide range of time scales. Moreover, synaptic transmission is an inherently stochastic process: a spike arriving at a presynaptic terminal triggers the release of a vesicle of neurotransmitter from a release site with a probability that can be much less than one. We consider a simple model for dynamic stochastic synapses that can easily be integrated into common models for networks of integrate-and-fire neurons (spiking neurons). The parameters of this model have direct interpretations in terms of synaptic physiology. We investigate the consequences of the model for computing with individual spikes and demonstrate through rigorous theoretical results that the computational power of the network is increased through the use of dynamic synapses.
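A simplified sketch in the spirit of the dynamic stochastic synapse described above is given below: each presynaptic spike facilitates release, each successful release depletes resources, and both variables relax exponentially. The equations and parameter values are illustrative and are not the exact model of the letter.

```python
import numpy as np

class StochasticSynapse:
    """Simplified dynamic stochastic synapse: each presynaptic spike raises a
    facilitation variable, each successful vesicle release consumes resources
    (depression), and the release probability combines the two. This is an
    illustrative sketch, not the exact equations of the model above."""

    def __init__(self, p0=0.3, f_inc=0.2, d_dec=0.5,
                 tau_f=0.1, tau_d=0.5, seed=0):
        self.p0, self.f_inc, self.d_dec = p0, f_inc, d_dec
        self.tau_f, self.tau_d = tau_f, tau_d
        self.F, self.D = 0.0, 1.0          # facilitation, available resources
        self.t_last = 0.0
        self.rng = np.random.default_rng(seed)

    def presynaptic_spike(self, t):
        """Process a presynaptic spike at time t; return True on release."""
        dt = t - self.t_last
        self.F *= np.exp(-dt / self.tau_f)                         # facilitation decays
        self.D = 1.0 - (1.0 - self.D) * np.exp(-dt / self.tau_d)   # resources recover
        self.t_last = t
        p_release = min(1.0, (self.p0 + self.F) * self.D)
        self.F += self.f_inc                                       # every spike facilitates
        if self.rng.random() < p_release:
            self.D *= (1.0 - self.d_dec)                           # release depletes resources
            return True
        return False

if __name__ == "__main__":
    syn = StochasticSynapse()
    spike_train = np.arange(0.0, 1.0, 0.02)          # 50 Hz presynaptic train
    releases = [syn.presynaptic_spike(t) for t in spike_train]
    print(f"{sum(releases)} releases out of {len(releases)} presynaptic spikes")
```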

18.
In this letter, we study the effect of a unique initial stimulation on random recurrent networks of leaky integrate-and-fire neurons. Indeed, given a stochastic connectivity, this so-called spontaneous mode exhibits various nontrivial dynamics. This study is based on a mathematical formalism that allows us to examine the variability of the subsequent dynamics according to the parameters of the weight distribution. Under the independence hypothesis (e.g., in the case of very large networks), we are able to compute the average number of neurons that fire at a given time (the spiking activity). In accordance with numerical simulations, we prove that this spiking activity reaches a steady state. We characterize this steady state and explore the transients.

19.
We investigate theoretically the conditions for the emergence of synchronous activity in large networks, consisting of two populations of extensively connected neurons, one excitatory and one inhibitory. The neurons are modeled with quadratic integrate-and-fire dynamics, which provide a very good approximation for the subthreshold behavior of a large class of neurons. In addition to their synaptic recurrent inputs, the neurons receive a tonic external input that varies from neuron to neuron. Because of its relative simplicity, this model can be studied analytically. We investigate the stability of the asynchronous state (AS) of the network with given average firing rates of the two populations. First, we show that the AS can remain stable even if the synaptic couplings are strong. Then we investigate the conditions under which this state can be destabilized. We show that this can happen in four generic ways. The first is a saddle-node bifurcation, which leads to another state with different average firing rates. This bifurcation, which occurs for strong enough recurrent excitation, does not correspond to the emergence of synchrony. In contrast, in the three other instability mechanisms, Hopf bifurcations, which correspond to the emergence of oscillatory synchronous activity, occur. We show that these mechanisms can be differentiated by the firing patterns they generate and their dependence on the mutual interactions of the inhibitory neurons and cross talk between the two populations. We also show that besides these codimension 1 bifurcations, the system can display several codimension 2 bifurcations: Takens-Bogdanov, Gavrilov-Guckenheimer, and double Hopf bifurcations.

20.
Orientation tuning in a ring of pulse-coupled integrate-and-fire (IF) neurons is analyzed in terms of spontaneous pattern formation. It is shown how the ring bifurcates from a synchronous state to a non-phase-locked state whose spike trains are characterized by clustered but irregular fluctuations of the interspike intervals (ISIs). The separation of these clusters in phase space results in a localized peak of activity as measured by the time-averaged firing rate of the neurons. This generates a sharp orientation tuning curve that can lock to a slowly rotating, weakly tuned external stimulus. Under certain conditions, the peak can slowly rotate even to a fixed external stimulus. The ring also exhibits hysteresis due to the subcritical nature of the bifurcation to sharp orientation tuning. Such behavior is shown to be consistent with a corresponding analog version of the IF model in the limit of slow synaptic interactions. For fast synapses, the deterministic fluctuations of the ISIs associated with the tuning curve can support a coefficient of variation of order unity.
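The "analog version" of the ring model mentioned at the end of the abstract can be sketched as a rate network with cosine-modulated recurrent coupling and a weakly tuned external input. The parameters below are illustrative and chosen so that the weak input modulation is amplified into a much sharper activity profile; they do not reproduce the specific regime analyzed in the paper.

```python
import numpy as np

def ring_rate_model(n=128, J0=-1.0, J2=1.8, c=0.1, eps=0.2,
                    theta0=0.0, tau=0.01, T=1.0, dt=1e-3):
    """Analog (rate) version of a ring network: n units with preferred angles
    spanning 180 degrees, recurrent coupling (J0 + J2*cos(2*(phi_i - phi_j)))/n,
    and a weakly tuned external input c*(1 + eps*cos(2*(phi - theta0))).
    Firing rates are rectified-linear. All parameter values are illustrative."""
    phi = np.linspace(0, np.pi, n, endpoint=False)
    W = (J0 + J2 * np.cos(2.0 * (phi[:, None] - phi[None, :]))) / n
    I_ext = c * (1.0 + eps * np.cos(2.0 * (phi - theta0)))
    r = np.zeros(n)
    for _ in range(int(T / dt)):
        drive = W @ r + I_ext
        r += dt / tau * (-r + np.maximum(drive, 0.0))
    return phi, r

if __name__ == "__main__":
    phi, r = ring_rate_model()
    peak = phi[np.argmax(r)]
    active_deg = (r > 0).mean() * 180.0
    print(f"activity peaks at {np.degrees(peak):.1f} deg; "
          f"~{active_deg:.0f} deg of the ring is active")
```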
