Similar Literature
20 similar documents found (search time: 62 ms)
1.
Event-driven simulation strategies were proposed recently to simulate integrate-and-fire (IF) type neuronal models. These strategies can lead to computationally efficient algorithms for simulating large-scale networks of neurons; most important, such approaches are more precise than traditional clock-driven numerical integration approaches because the timing of spikes is treated exactly. The drawback of such event-driven methods is that in order to be efficient, the membrane equations must be solvable analytically, or at least provide simple analytic approximations for the state variables describing the system. This requirement prevents, in general, the use of conductance-based synaptic interactions within the framework of event-driven simulations and, thus, the investigation of network paradigms where synaptic conductances are important. We propose here a number of extensions of the classical leaky IF neuron model involving approximations of the membrane equation with conductance-based synaptic current, which lead to simple analytic expressions for the membrane state, and therefore can be used in the event-driven framework. These conductance-based IF (gIF) models are compared to commonly used models, such as the leaky IF model or biophysical models in which conductances are explicitly integrated. All models are compared with respect to various spiking response properties in the presence of synaptic activity, such as the spontaneous discharge statistics, the temporal precision in resolving synaptic inputs, and gain modulation under in vivo-like synaptic bombardment. Being based on the passive membrane equation with fixed-threshold spike generation, the proposed gIF models are situated in between leaky IF and biophysical models but are much closer to the latter with respect to their dynamic behavior and response characteristics, while still being nearly as computationally efficient as simple IF neuron models. gIF models should therefore provide a useful tool for efficient and precise simulation of large-scale neuronal networks with realistic, conductance-based synaptic interactions.
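
The kind of approximation involved can be sketched as follows (a minimal sketch, not the authors' exact gIF equations; a single excitatory conductance and illustrative parameter values): replacing the exponentially decaying conductance by its time average over the inter-event interval makes the membrane equation linear on that interval, so the state between events has a closed form.

```python
import math

# Illustrative parameters (not the paper's values)
V_REST, E_EXC = -70.0, 0.0      # mV: resting and excitatory reversal potentials
TAU_M, TAU_G = 20.0, 5.0        # ms: membrane and conductance time constants

def advance(v, g, dt):
    """Advance (V, g) analytically over an inter-event interval dt.

    g is the synaptic conductance relative to the leak conductance. Over the
    interval it is approximated by its time average g_bar, which turns the
    membrane equation into a linear ODE with an effective time constant and
    an effective steady state.
    """
    g_bar = g * (TAU_G / dt) * (1.0 - math.exp(-dt / TAU_G))   # time-averaged conductance
    tau_eff = TAU_M / (1.0 + g_bar)                            # effective time constant
    v_eff = (V_REST + g_bar * E_EXC) / (1.0 + g_bar)           # effective steady state
    v_new = v_eff + (v - v_eff) * math.exp(-dt / tau_eff)      # closed-form membrane update
    g_new = g * math.exp(-dt / TAU_G)                          # exact conductance decay
    return v_new, g_new

# On each incoming spike: advance to the spike time, jump the conductance
# (g += synaptic weight), then test v against the firing threshold.
```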

2.
A simulation procedure is described for making feasible large-scale simulations of recurrent neural networks of spiking neurons and plastic synapses. The procedure is applicable if the dynamic variables of both neurons and synapses evolve deterministically between any two successive spikes. Spikes introduce jumps in these variables, and since spike trains are typically noisy, spikes introduce stochasticity into both dynamics. Since all events in the simulation are guided by the arrival of spikes, at neurons or synapses, we name this procedure event-driven. The procedure is described in detail, and its logic and performance are compared with conventional (synchronous) simulations. The main impact of the new approach is a drastic reduction of the computational load incurred upon introduction of dynamic synaptic efficacies, which vary organically as a function of the activities of the pre- and postsynaptic neurons. In fact, the computational load per neuron in the presence of the synaptic dynamics grows linearly with the number of neurons and is only about 6% more than the load with fixed synapses. Even the latter is handled quite efficiently by the algorithm. We illustrate the operation of the algorithm in a specific case with integrate-and-fire neurons and specific spike-driven synaptic dynamics. Both dynamical elements have been found to be naturally implementable in VLSI. This network is simulated to show the effects on the synaptic structure of the presentation of stimuli, as well as the stability of the generated matrix to the neural activity it induces.
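
The control flow of such an event-driven simulation can be sketched with a priority queue of spike events (a minimal hypothetical LIF network, not the authors' code; all parameters illustrative):

```python
import heapq
import math

TAU_M, V_TH, V_RESET, DELAY, W = 20.0, 1.0, 0.0, 1.0, 1.2   # illustrative units

class Neuron:
    def __init__(self):
        self.v, self.t_last = 0.0, 0.0
    def advance(self, t):
        """Deterministic evolution between events: pure exponential decay."""
        self.v *= math.exp(-(t - self.t_last) / TAU_M)
        self.t_last = t

def run(targets, initial_spikes, t_end=100.0):
    """targets[i] = list of neuron ids driven by neuron i; events are (time, source)."""
    neurons = {i: Neuron() for i in targets}
    events = list(initial_spikes)              # e.g. [(0.0, 0)]
    heapq.heapify(events)
    while events:
        t, src = heapq.heappop(events)         # jump straight to the next event
        if t > t_end:
            break
        for j in targets[src]:
            nrn = neurons[j]
            nrn.advance(t)                     # deterministic inter-spike evolution
            nrn.v += W                         # spike-induced jump
            if nrn.v >= V_TH:                  # threshold crossing: emit a spike
                nrn.v = V_RESET
                heapq.heappush(events, (t + DELAY, j))

run({0: [1], 1: [0]}, [(0.0, 0)])
```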

3.
4.
We introduce and test a system for simulating networks of conductance-based neuron models using analog circuits. At the single-cell level, we use custom-designed analog circuits (ASICs) that simulate two types of spiking neurons based on Hodgkin-Huxley like dynamics: "regular spiking" excitatory neurons with spike-frequency adaptation, and "fast spiking" inhibitory neurons. Synaptic interactions are mediated by conductance-based synaptic currents described by kinetic models. Connectivity and plasticity rules are implemented digitally through a real time interface between a computer and a PCI board containing the ASICs. We show a prototype system of a few neurons interconnected with synapses undergoing spike-timing dependent plasticity (STDP), and compare this system with numerical simulations. We use this system to evaluate the effect of parameter dispersion on the behavior of small circuits of neurons. It is shown that, although the exact spike timings are not precisely emulated by the ASIC neurons, the behavior of small networks with STDP matches that of numerical simulations. Thus, this mixed analog-digital architecture provides a valuable tool for real-time simulations of networks of neurons with STDP. It should be useful for any real-time application, such as hybrid systems interfacing network models with biological neurons.

5.
We present a new technique, based on a proposed event-based strategy (Mattia & Del Giudice, 2000), for efficiently simulating large networks of simple model neurons. The strategy was based on the fact that interactions among neurons occur by means of events that are well localized in time (the action potentials) and relatively rare. In the interval between two of these events, the state variables associated with a model neuron or a synapse evolved deterministically and in a predictable way. Here, we extend the event-driven simulation strategy to the case in which the dynamics of the state variables in the inter-event intervals are stochastic. This extension captures both the situation in which the simulated neurons are inherently noisy and the case in which they are embedded in a very large network and receive a huge number of random synaptic inputs. We show how to effectively include the impact of large background populations into neuronal dynamics by means of the numerical evaluation of the statistical properties of single-model neurons under random current injection. The new simulation strategy allows the study of networks of interacting neurons with an arbitrary number of external afferents and inherent stochastic dynamics.
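
For stochastic inter-event dynamics, the state after an interval can be drawn directly from its transition distribution rather than integrated step by step. A minimal sketch for an Ornstein-Uhlenbeck membrane potential (an illustrative stand-in for the single-neuron statistics the paper evaluates numerically; parameters are not the paper's):

```python
import math
import random

TAU, MU, SIGMA = 20.0, -60.0, 2.0    # ms, mV, mV: illustrative OU parameters

def sample_after(v0, dt):
    """Exact sample of an OU membrane potential after an interval dt.

    For dV = -(V - MU)/TAU dt + SIGMA dW, the transition density is Gaussian
    with an exactly known mean and variance, so no small-step integration
    between events is needed.
    """
    decay = math.exp(-dt / TAU)
    mean = MU + (v0 - MU) * decay
    std = SIGMA * math.sqrt(0.5 * TAU * (1.0 - decay * decay))
    return random.gauss(mean, std)
```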

6.
In a previous paper (Rudolph & Destexhe, 2006), we proposed various models, the gIF neuron models, of analytical integrate-and-fire (IF) neurons with conductance-based (COBA) dynamics for use in event-driven simulations. These models are based on an analytical approximation of the differential equation describing the IF neuron with exponential synaptic conductances and were successfully tested with respect to their response to random and oscillating inputs. Because they are analytical and mathematically simple, the gIF models are best suited for fast event-driven simulation strategies. However, the drawback of such models is that they rely on a nonrealistic postsynaptic potential (PSP) time course, consisting of a discontinuous jump followed by a decay governed by the membrane time constant. Here, we address this limitation by conceiving an analytical approximation of the COBA IF neuron model with the full PSP time course. The subthreshold and suprathreshold response of this gIF4 model reproduces remarkably well the postsynaptic responses of the numerically solved passive membrane equation subject to conductance noise, while gaining at least two orders of magnitude in computational performance. Although the analytical structure of the gIF4 model is more complex than that of its predecessors due to the necessity of calculating future spike times, a simple and fast algorithmic implementation for use in large-scale neural network simulations is proposed.

7.
Event-driven strategies have been used to simulate spiking neural networks exactly. Previous work is limited to linear integrate-and-fire neurons. In this note, we extend event-driven schemes to a class of nonlinear integrate-and-fire models. Results are presented for the quadratic integrate-and-fire model with instantaneous or exponential synaptic currents. Extensions to conductance-based currents and exponential integrate-and-fire neurons are discussed.
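
For the quadratic model with instantaneous synapses, the next spike time is available in closed form, which is what makes an exact event-driven treatment possible. A sketch in normalized units (dV/dt = V² + I, spike when V diverges; illustrative, not the note's exact parameterization):

```python
import math

def qif_spike_time(v0, i_ext):
    """Next spike time of dV/dt = V**2 + i_ext from V(0) = v0 (normalized units,
    spike when V diverges to +infinity). Returns None if the neuron never fires."""
    if i_ext > 0.0:                               # no fixed points: always fires
        s = math.sqrt(i_ext)
        return (math.pi / 2.0 - math.atan(v0 / s)) / s
    if i_ext == 0.0:
        return 1.0 / v0 if v0 > 0.0 else None     # V(t) = 1 / (1/v0 - t)
    s = math.sqrt(-i_ext)                         # fixed points at +/- s
    if v0 <= s:
        return None                               # at or below the unstable fixed point
    return math.atanh(s / v0) / s                 # finite-time divergence above it

print(qif_spike_time(-1.0, 1.0))   # ~2.356 = 3*pi/4
```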

8.
In traditional event-driven strategies, spike timings are analytically given or calculated with arbitrary precision (up to machine precision). Exact computation is possible only for simplified neuron models, mainly the leaky integrate-and-fire model. In a recent paper, Zheng, Tonnelier, and Martinez (2009) introduced an approximate event-driven strategy, named voltage stepping, that allows the generic simulation of nonlinear spiking neurons. Promising results were achieved in the simulation of single quadratic integrate-and-fire neurons. Here, we assess the performance of voltage stepping in network simulations by considering more complex neurons (quadratic integrate-and-fire neurons with adaptation) coupled with multiple synapses. To handle the discrete nature of synaptic interactions, we recast voltage stepping in a general framework, the discrete event system specification. The efficiency of the method is assessed through simulations and comparisons with a modified time-stepping scheme of the Runge-Kutta type. We demonstrate numerically that the original order of voltage stepping is preserved when simulating connected spiking neurons, independent of the network activity and connectivity.
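
The voltage-stepping idea itself can be sketched in a few lines (an illustrative explicit rule, not the authors' implementation): the discretization variable is the membrane potential, advanced in fixed increments, and time becomes the quantity that is solved for.

```python
def voltage_step(f, v0, v_end, dv=0.01):
    """Step the membrane potential in fixed increments dv, accumulating the
    time each increment takes (explicit rule: dt = dv / f(v)). Returns (t, v)."""
    t, v = 0.0, v0
    while v < v_end:
        slope = f(v)
        if slope <= 0.0:           # trajectory stalls: v_end is never reached
            return None, v
        t += dv / slope            # time to climb one voltage step
        v += dv
    return t, v

# Example: quadratic integrate-and-fire drift f(V) = V^2 + 1, "spike" at V = 30
t_spike, _ = voltage_step(lambda v: v * v + 1.0, v0=-1.0, v_end=30.0)
```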

9.
The simulation of spiking neural networks (SNNs) is known to be a very time-consuming task. This limits the size of SNN that can be simulated in reasonable time or forces users to overly limit the complexity of the neuron models. This is one of the driving forces behind much of the recent research on event-driven simulation strategies. Although event-driven simulation allows precise and efficient simulation of certain spiking neuron models, it is not straightforward to generalize the technique to more complex neuron models, mostly because the firing time of these neuron models is computationally expensive to evaluate. Most solutions proposed in the literature concentrate on algorithms that can solve this problem efficiently. However, these solutions do not scale well when more state variables are involved in the neuron model, which is, for example, the case when multiple synaptic time constants for each neuron are used. In this letter, we show that an exact prediction of the firing time is not required in order to guarantee exact simulation results. Several techniques are presented that try to do the least possible amount of work to predict the firing times. We propose an elegant algorithm for the simulation of leaky integrate-and-fire (LIF) neurons with an arbitrary number of (unconstrained) synaptic time constants, which is able to combine these algorithmic techniques efficiently, resulting in very high simulation speed. Moreover, our algorithm is highly independent of the complexity (i.e., number of synaptic time constants) of the underlying neuron model.
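
The "least possible amount of work" idea can be sketched as follows (a hypothetical minimal version, not the letter's algorithm): for a membrane potential written as a sum of decaying exponentials, a cheap upper bound decides whether the neuron can fire at all, and the expensive root-finding runs only when that bound crosses threshold.

```python
import math

THETA = 1.0  # firing threshold (illustrative units)

def v(t, terms):
    """Membrane potential as a sum of decaying exponentials a_k * exp(-t/tau_k)."""
    return sum(a * math.exp(-t / tau) for a, tau in terms)

def next_crossing(terms, t_max=100.0, tol=1e-9):
    """First threshold-crossing time, or None. Cheap test before root-finding."""
    # Positive terms are maximal at t = 0 and negative terms tend to 0, so V(t)
    # can never exceed the sum of the positive coefficients.
    if sum(max(a, 0.0) for a, _ in terms) < THETA:
        return None                       # cannot fire: skip all root-finding
    if v(0.0, terms) >= THETA:
        return 0.0
    # Expensive part, run only when needed: coarse scan, then bisection.
    step, t = 0.1, 0.0
    while t < t_max and v(t, terms) < THETA:
        t += step
    if t >= t_max:
        return None
    lo, hi = t - step, t
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if v(mid, terms) < THETA else (lo, mid)
    return hi

# Example: EPSP-like difference of exponentials (a_1 > 0, a_2 < 0)
print(next_crossing([(2.0, 5.0), (-2.0, 1.0)]))
```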

10.
Population density methods provide promising time-saving alternatives to direct Monte Carlo simulations of neuronal network activity, in which one tracks the state of thousands of individual neurons and synapses. A population density method has been found to be roughly a hundred times faster than direct simulation for various test networks of integrate-and-fire model neurons with instantaneous excitatory and inhibitory post-synaptic conductances. In this method, neurons are grouped into large populations of similar neurons. For each population, one calculates the evolution of a probability density function (PDF) which describes the distribution of neurons over state space. The population firing rate is then given by the total flux of probability across the threshold voltage for firing an action potential. Extending the method beyond instantaneous synapses is necessary for obtaining accurate results, because synaptic kinetics play an important role in network dynamics. Embellishments incorporating more realistic synaptic kinetics for the underlying neuron model increase the dimension of the PDF, which was one-dimensional in the instantaneous synapse case. This increase in dimension causes a substantial increase in computation time to find the exact PDF, decreasing the computational speed advantage of the population density method over direct Monte Carlo simulation. We report here on a one-dimensional model of the PDF for neurons with arbitrary synaptic kinetics. The method is more accurate than the mean-field method in the steady state, where the mean-field approximation works best, and also under dynamic-stimulus conditions. The method is much faster than direct simulations. Limitations of the method are demonstrated, and possible improvements are discussed.
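
The core mechanics of a one-dimensional population density update can be sketched as follows (a minimal upwind scheme for the drift-only case, not the paper's method; parameters illustrative): the density over voltage is advected by the drift, the probability flux through threshold gives the population firing rate, and that flux is reinjected at the reset potential.

```python
import numpy as np

N, V_RESET, V_TH = 200, 0.0, 1.0
v = np.linspace(V_RESET, V_TH, N)
dv = v[1] - v[0]
rho = np.exp(-((v - 0.3) ** 2) / 0.01)
rho /= rho.sum() * dv                    # normalize: integral of rho over v is 1

def step(rho, mu=1.5, dt=1e-4):
    """One first-order upwind step; returns (rho, firing_rate).
    Drift F(v) = mu - v (leak plus constant input), assumed positive on [0, 1]."""
    F = mu - v
    flux = F * rho                       # probability flux (prob / time)
    out = flux[-1]                       # flux through threshold = firing rate
    drho = np.empty_like(rho)
    drho[0] = -(flux[0] - 0.0) / dv      # no inflow at the lower boundary
    drho[1:] = -(flux[1:] - flux[:-1]) / dv
    rho = rho + dt * drho
    rho[0] += dt * out / dv              # reinject fired probability at reset
    return rho, out

for _ in range(1000):
    rho, rate = step(rho)
```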

11.
12.
We introduce a model of generalized Hebbian learning and retrieval in oscillatory neural networks modeling cortical areas such as hippocampus and olfactory cortex. Recent experiments have shown that synaptic plasticity depends on spike timing, especially on synapses from excitatory pyramidal cells, in hippocampus, and in sensory and cerebellar cortex. Here we study how such plasticity can be used to form memories and input representations when the neural dynamics are oscillatory, as is common in the brain (particularly in the hippocampus and olfactory cortex). Learning is assumed to occur in a phase of neural plasticity, in which the network is clamped to external teaching signals. By suitable manipulation of the nonlinearity of the neurons or the oscillation frequencies during learning, the model can be made, in a retrieval phase, either to categorize new inputs or to map them, in a continuous fashion, onto the space spanned by the imprinted patterns. We identify the first of these possibilities with the function of olfactory cortex and the second with the observed response characteristics of place cells in hippocampus. We investigate both kinds of networks analytically and by computer simulations, and we link the models with experimental findings, exploring, in particular, how the spike timing dependence of the synaptic plasticity constrains the computational function of the network and vice versa.

13.
Very large networks of spiking neurons can be simulated efficiently in parallel under the constraint that spike times are bound to an equidistant time grid. Within this scheme, the subthreshold dynamics of a wide class of integrate-and-fire-type neuron models can be integrated exactly from one grid point to the next. However, the loss in accuracy caused by restricting spike times to the grid can have undesirable consequences, which has led to interest in interpolating spike times between the grid points to retrieve an adequate representation of network dynamics. We demonstrate that the exact integration scheme can be combined naturally with off-grid spike events found by interpolation. We show that by exploiting the existence of a minimal synaptic propagation delay, the need for a central event queue is removed, so that the precision of event-driven simulation on the level of single neurons is combined with the efficiency of time-driven global scheduling. Further, for neuron models with linear subthreshold dynamics, even local event queuing can be avoided, resulting in much greater efficiency on the single-neuron level. These ideas are exemplified by two implementations of a widely used neuron model. We present a measure for the efficiency of network simulations in terms of their integration error and show that for a wide range of input spike rates, the novel techniques we present are both more accurate and faster than standard techniques.
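
Exact integration of linear subthreshold dynamics on a grid can be sketched as follows (a minimal example assuming SciPy is available; state layout and parameters are illustrative, not the paper's): for dy/dt = A·y the exact propagator over step h is expm(A·h), computed once and reused at every grid point.

```python
import numpy as np
from scipy.linalg import expm

TAU_M, TAU_S, H = 10.0, 2.0, 0.1   # ms: membrane / synaptic time constants, grid step
# Linear subthreshold state y = (I_syn, V):
#   dI_syn/dt = -I_syn / TAU_S
#   dV/dt     = (-V + I_syn) / TAU_M
A = np.array([[-1.0 / TAU_S, 0.0],
              [1.0 / TAU_M, -1.0 / TAU_M]])
P = expm(A * H)                    # exact one-step propagator, computed once

y = np.array([0.0, 0.0])
trace = []
for k in range(500):
    if k == 10:
        y[0] += 1.0                # grid-constrained incoming spike: jump in I_syn
    y = P @ y                      # exact integration from one grid point to the next
    trace.append(y[1])
```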

14.
15.
Brader JM, Senn W, Fusi S. Neural Computation, 2007, 19(11): 2881-2912.
We present a model of spike-driven synaptic plasticity inspired by experimental observations and motivated by the desire to build an electronic hardware device that can learn to classify complex stimuli in a semisupervised fashion. During training, patterns of activity are sequentially imposed on the input neurons, and an additional instructor signal drives the output neurons toward the desired activity. The network is made of integrate-and-fire neurons with constant leak and a floor. The synapses are bistable, and they are modified by the arrival of presynaptic spikes. The sign of the change is determined by both the depolarization and the state of a variable that integrates the postsynaptic action potentials. Following the training phase, the instructor signal is removed, and the output neurons are driven purely by the activity of the input neurons weighted by the plastic synapses. In the absence of stimulation, the synapses preserve their internal state indefinitely. Memories are also very robust to the disruptive action of spontaneous activity. A network of 2000 input neurons is shown to be able to classify correctly a large number (thousands) of highly overlapping patterns (300 classes of preprocessed LaTeX characters, 30 patterns per class, and a subset of the NIST characters data set) and to generalize with performances that are better than or comparable to those of artificial neural networks. Finally, we show that the synaptic dynamics is compatible with many of the experimental observations on the induction of long-term modifications (spike-timing-dependent plasticity and its dependence on both the postsynaptic depolarization and the frequency of pre- and postsynaptic neurons).
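
A schematic sketch of this class of spike-driven bistable synapse (structure only; all thresholds and constants are illustrative, not the paper's values): an internal variable X is pushed up or down on presynaptic spikes depending on the postsynaptic depolarization V and a calcium-like trace C that integrates postsynaptic spikes, and between spikes X drifts toward one of two stable states.

```python
THETA_V, THETA_X = -55.0, 0.5      # depolarization threshold, bistability threshold
A_UP, B_DOWN, DRIFT = 0.1, 0.1, 0.02   # illustrative step sizes and drift rate

def on_pre_spike(x, v_post, c_post, c_low=1.0, c_up=3.0, c_down=2.0):
    """Update the internal synaptic variable x on arrival of a presynaptic spike."""
    if v_post > THETA_V and c_low < c_post < c_up:
        x += A_UP                      # potentiation candidate
    elif v_post <= THETA_V and c_low < c_post < c_down:
        x -= B_DOWN                    # depression candidate
    return min(1.0, max(0.0, x))

def drift(x, dt):
    """Between spikes, x relaxes toward the nearest stable state (0 or 1),
    which makes the stored state robust to spontaneous activity."""
    x += DRIFT * dt if x > THETA_X else -DRIFT * dt
    return min(1.0, max(0.0, x))

# Synaptic efficacy is read out as binary: potentiated if x > THETA_X.
```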

16.
In computational neuroscience, parallel simulation of large-scale neuronal networks plays an important role in exploring and revealing the mechanisms of information transmission in biological brains. To accelerate large-scale neuronal network simulation, a parallel algorithm with strong module independence and low coupling is proposed for neuronal networks based on the dynamics of synaptic neurotransmitters and receptor ion channels. By analyzing the mechanism of chemical synaptic transmission and the dynamic characteristics of transmitter molecules and receptor ion channels, the idea of separating transmitter and receptor computations is introduced: the calculation of transmitter concentration driven by the presynaptic neuron is made independent of the calculation of the bound-receptor concentration on the postsynaptic side, reducing the coupling between presynaptic and postsynaptic neuron states in the synaptic current computation. Based on this idea, a parallel algorithm for biological neural networks was designed and implemented. Simulation results demonstrate the efficiency of the algorithm.
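
The separation can be sketched as two objects that communicate only through the current transmitter concentration, so each side can be computed independently (hypothetical class names; first-order kinetics with illustrative constants, not the paper's model):

```python
class PreSynapse:
    """Presynaptic side: a transmitter pulse of duration T_DUR after each spike,
    computed from the presynaptic spike train alone."""
    T_MAX, T_DUR = 1.0, 1.0            # mM, ms (illustrative)
    def __init__(self):
        self.last_spike = -1e9
    def transmitter(self, t):
        return self.T_MAX if t - self.last_spike < self.T_DUR else 0.0

class PostSynapse:
    """Postsynaptic side: first-order receptor binding
    dr/dt = ALPHA * T * (1 - r) - BETA * r, driven only by the published T."""
    ALPHA, BETA = 1.1, 0.19            # 1/(mM*ms), 1/ms (illustrative)
    def __init__(self):
        self.r = 0.0
    def update(self, T, dt):
        self.r += dt * (self.ALPHA * T * (1.0 - self.r) - self.BETA * self.r)
        return self.r

# The two halves couple through a single value, so they can run on
# different processes: post.update(pre.transmitter(t), dt).
pre, post, dt = PreSynapse(), PostSynapse(), 0.025
pre.last_spike = 0.0
for k in range(400):                   # 10 ms of simulated time
    r = post.update(pre.transmitter(k * dt), dt)
# Synaptic current on the postsynaptic side: I = g_max * r * (V - E_rev)
```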

17.
Reinforcement learning, spike-time-dependent plasticity, and the BCM rule
Baras D, Meir R. Neural Computation, 2007, 19(8): 2245-2279.
Learning agents, whether natural or artificial, must update their internal parameters in order to improve their behavior over time. In reinforcement learning, this plasticity is influenced by an environmental signal, termed a reward, that directs the changes in appropriate directions. We apply a recently introduced policy learning algorithm from machine learning to networks of spiking neurons and derive a spike-time-dependent plasticity rule that ensures convergence to a local optimum of the expected average reward. The approach is applicable to a broad class of neuronal models, including the Hodgkin-Huxley model. We demonstrate the effectiveness of the derived rule in several toy problems. Finally, through statistical analysis, we show that the synaptic plasticity rule established is closely related to the widely used BCM rule, for which good biological evidence exists.
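
A generic reward-modulated plasticity update of the kind discussed here can be sketched as follows (not the paper's exact derivation; parameter values illustrative): pre/post coincidences build an eligibility trace per synapse, and a global reward signal converts that trace into an actual weight change.

```python
import numpy as np

TAU_PRE, TAU_E, LR = 20.0, 500.0, 1e-3   # ms, ms, learning rate (illustrative)

def plasticity_step(w, elig, pre_trace, pre_spikes, post_spikes, reward, dt):
    """One dt update of reward-gated plasticity.

    pre_spikes (n_pre,) and post_spikes (n_post,) are 0/1 vectors for this bin;
    w and elig are (n_post, n_pre). Coincidences accumulate in the eligibility
    trace, and the global reward gates the weight change.
    """
    pre_trace += dt * (-pre_trace / TAU_PRE) + pre_spikes
    elig += dt * (-elig / TAU_E) + np.outer(post_spikes, pre_trace)
    w += LR * reward * elig * dt
    return w, elig, pre_trace

# Usage with 3 presynaptic and 2 postsynaptic neurons:
w, elig, tr = np.zeros((2, 3)), np.zeros((2, 3)), np.zeros(3)
w, elig, tr = plasticity_step(w, elig, tr, np.array([1.0, 0.0, 0.0]),
                              np.array([0.0, 1.0]), reward=1.0, dt=0.1)
```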

18.
Karsten, Andreas, Bernd, Ana D., Thomas. Neurocomputing, 2008, 71(7-9): 1694-1704.
Biologically plausible excitatory neural networks develop a persistent synchronized pattern of activity depending on spontaneous activity and synaptic refractoriness (short-term depression). With fixed synaptic weights, synchronous bursts of oscillatory activity are stable and involve the whole network. In our modeling study we investigate the effect of a dynamic Hebbian-like learning mechanism, spike-timing-dependent plasticity (STDP), on the changes of synaptic weights depending on synchronous activity and network connection strategies (small-world topology). We show that STDP modifies the weights of synaptic connections in such a way that synchronization of neuronal activity is considerably weakened. Networks with a higher proportion of long connections can sustain a higher level of synchronization in spite of the influence of STDP. The resulting distribution of the synaptic weights in single neurons depends both on the global statistics of firing dynamics and on the number of incoming and outgoing connections.
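
The pair-based STDP rule assumed in studies of this kind can be sketched with generic exponential windows (parameters illustrative, not the paper's): pre-before-post pairs potentiate, post-before-pre pairs depress.

```python
import math

A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes (illustrative)
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # ms: window time constants

def stdp_dw(t_pre, t_post):
    """Weight change for a single pre/post spike pair."""
    dt = t_post - t_pre
    if dt >= 0:
        return A_PLUS * math.exp(-dt / TAU_PLUS)    # causal pair: potentiation
    return -A_MINUS * math.exp(dt / TAU_MINUS)      # anti-causal pair: depression
```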

19.
Small networks of cultured hippocampal neurons respond to transient stimulation with rhythmic network activity (reverberation) that persists for several seconds, constituting an in vitro model of synchrony, working memory, and seizure. This mode of activity has been shown theoretically and experimentally to depend on asynchronous neurotransmitter release (an essential feature of the developing hippocampus) and is supported by a variety of developing neuronal networks despite variability in the size of populations (10-200 neurons) and in patterns of synaptic connectivity. It has previously been reported in computational models that "small-world" connection topology is ideal for the propagation of similar modes of network activity, although this has been shown only for neurons utilizing synchronous (phasic) synaptic transmission. We investigated how topological constraints on synaptic connectivity could shape the stability of reverberations in small networks that also use asynchronous synaptic transmission. We found that reverberation duration in such networks was resistant to changes in topology and scaled poorly with network size. However, normalization of synaptic drive, by reducing the variance of synaptic input across neurons, stabilized reverberation in such networks. Our results thus suggest that the stability of both normal and pathological states in developing networks might be shaped by variance-normalizing constraints on synaptic drive. We offer an experimental prediction for the consequences of such regulation on the behavior of small networks.

20.
Large‐scale simulations of parts of the brain using detailed neuronal models to improve our understanding of brain functions are becoming a reality with the usage of supercomputers and large clusters. However, the high acquisition and maintenance cost of these computers, including the physical space, air conditioning, and electrical power, limits the number of simulations of this kind that scientists can perform. Modern commodity graphical cards, based on the CUDA platform, contain graphical processing units (GPUs) composed of hundreds of processors that can simultaneously execute thousands of threads and thus constitute a low‐cost solution for many high‐performance computing applications. In this work, we present a CUDA algorithm that enables the execution, on multiple GPUs, of simulations of large‐scale networks composed of biologically realistic Hodgkin–Huxley neurons. The algorithm represents each neuron as a CUDA thread, which solves the set of coupled differential equations that model each neuron. Communication among neurons located in different GPUs is coordinated by the CPU. We obtained speedups of 40 for the simulation of 200k neurons that received random external input and speedups of 9 for a network with 200k neurons and 20M neuronal connections, in a single computer with two graphic boards with two GPUs each, when compared with a modern quad‐core CPU.
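
The per-neuron update that each thread performs can be sketched in vectorized NumPy, with one array slot standing in for one GPU thread (standard Hodgkin-Huxley rate functions and conductances; forward Euler for brevity, not the paper's solver):

```python
import numpy as np

def hh_step(V, m, h, n, I_ext, dt=0.01):
    """One forward-Euler step of the standard Hodgkin-Huxley equations for all
    neurons at once. Note: the alpha rate functions have removable
    singularities at V = -40 mV and V = -55 mV."""
    a_m = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    b_m = 4.0 * np.exp(-(V + 65.0) / 18.0)
    a_h = 0.07 * np.exp(-(V + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    a_n = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    b_n = 0.125 * np.exp(-(V + 65.0) / 80.0)
    m += dt * (a_m * (1.0 - m) - b_m * m)        # gating variable updates
    h += dt * (a_h * (1.0 - h) - b_h * h)
    n += dt * (a_n * (1.0 - n) - b_n * n)
    I_Na = 120.0 * m ** 3 * h * (V - 50.0)       # E_Na = 50 mV
    I_K = 36.0 * n ** 4 * (V + 77.0)             # E_K = -77 mV
    I_L = 0.3 * (V + 54.4)                       # E_L = -54.4 mV
    V += dt * (I_ext - I_Na - I_K - I_L)         # C_m = 1 uF/cm^2
    return V, m, h, n

# 200k neurons, vectorized: V = np.full(200_000, -65.0), and so on for m, h, n.
```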
