Similar literature (20 results)
1.
Exact simulation of integrate-and-fire models with exponential currents
Brette R 《Neural computation》2007,19(10):2604-2609
Neural networks can be simulated exactly using event-driven strategies, in which the algorithm advances directly from one spike to the next. This approach applies to neuron models for which we have (1) an explicit expression for the evolution of the state variables between spikes and (2) an explicit test on the state variables that predicts whether and when a spike will be emitted. In previous work, we proposed a method that allows exact simulation of an integrate-and-fire model with exponential conductances, with the constraint of a single synaptic time constant. In this note, we propose a method, based on polynomial root finding, that applies to integrate-and-fire models with exponential currents, with possibly many different synaptic time constants. Models can include biexponential synaptic currents and spike-triggered adaptation currents.
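A minimal sketch of the root-finding idea for a current-based neuron whose membrane and synaptic time constants are integer divisors of a common tau_0; the model, names, and parameters below are our own illustration, not the paper's code.

```python
import numpy as np

# Hedged sketch (not Brette's actual algorithm): event-driven threshold-crossing
# test for a current-based LIF neuron.  Between incoming spikes,
#   V(t) = A*exp(-t/tau_m) + sum_k B_k*exp(-t/tau_k) + V_rest,
# and if every time constant is an integer divisor of a common tau_0, the
# substitution x = exp(-t/tau_0) turns "V(t) = theta" into a polynomial in x,
# whose roots give the next spike time (if any).

def next_spike_time(coeffs, powers, v_rest, theta, tau0):
    """coeffs[i] multiplies x**powers[i]; returns earliest crossing time or None."""
    deg = max(powers)
    poly = np.zeros(deg + 1)                 # np.roots expects highest degree first
    for c, p in zip(coeffs, powers):
        poly[deg - p] += c
    poly[deg] += v_rest - theta              # constant term of V(t) - theta
    roots = np.roots(poly)
    real = roots[np.abs(roots.imag) < 1e-12].real
    valid = real[(real > 0) & (real <= 1)]   # x = exp(-t/tau0) with t >= 0
    if valid.size == 0:
        return None                          # membrane never reaches threshold
    return float((-tau0 * np.log(valid)).min())   # earliest future crossing

# Example: V(t) = 2*exp(-t/10) + 0.5*exp(-t/5) - 1.2, threshold 0, tau0 = 10 ms
print(next_spike_time([2.0, 0.5], [1, 2], v_rest=-1.2, theta=0.0, tau0=10.0))
```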

2.
Very large networks of spiking neurons can be simulated efficiently in parallel under the constraint that spike times are bound to an equidistant time grid. Within this scheme, the subthreshold dynamics of a wide class of integrate-and-fire-type neuron models can be integrated exactly from one grid point to the next. However, the loss in accuracy caused by restricting spike times to the grid can have undesirable consequences, which has led to interest in interpolating spike times between the grid points to retrieve an adequate representation of network dynamics. We demonstrate that the exact integration scheme can be combined naturally with off-grid spike events found by interpolation. We show that by exploiting the existence of a minimal synaptic propagation delay, the need for a central event queue is removed, so that the precision of event-driven simulation on the level of single neurons is combined with the efficiency of time-driven global scheduling. Further, for neuron models with linear subthreshold dynamics, even local event queuing can be avoided, resulting in much greater efficiency on the single-neuron level. These ideas are exemplified by two implementations of a widely used neuron model. We present a measure for the efficiency of network simulations in terms of their integration error and show that for a wide range of input spike rates, the novel techniques we present are both more accurate and faster than standard techniques.
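A hedged sketch of grid-based exact integration combined with interpolated (off-grid) spike times, for a current-based LIF neuron with linear subthreshold dynamics; the model, parameters, and linear interpolation rule are our own illustration, not the paper's implementation.

```python
import numpy as np
from scipy.linalg import expm

# Subthreshold dynamics are linear,
#   dI/dt = -I/tau_s,   dV/dt = -V/tau_m + I/C,
# so stepping from one grid point to the next is exact via the propagator expm(A*h).

tau_m, tau_s, C = 10.0, 2.0, 250.0        # ms, ms, pF
theta, h = 15.0, 0.1                      # threshold (mV), grid step (ms)
A = np.array([[-1.0 / tau_s, 0.0],
              [1.0 / C,      -1.0 / tau_m]])
P = expm(A * h)                           # exact one-step propagator

def simulate(input_spikes, weight, T):
    """input_spikes: presynaptic spike times already bound to the grid (ms)."""
    y = np.zeros(2)                       # y = [I, V]
    spikes = []
    incoming = {round(t / h) for t in input_spikes}
    for k in range(int(T / h)):
        v_prev = y[1]
        y = P @ y                         # exact subthreshold update
        if k in incoming:
            y[0] += weight                # synaptic current jump on the grid
        if y[1] >= theta:
            # linear interpolation of the threshold crossing between grid points
            frac = (theta - v_prev) / (y[1] - v_prev)
            spikes.append(k * h + frac * h)
            y[1] = 0.0                    # reset after the (off-grid) spike
    return spikes

print(simulate(input_spikes=[1.0, 1.5, 2.0], weight=1000.0, T=20.0))
```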

3.
Stochastic dynamics of a finite-size spiking neural network
Soula H  Chow CC 《Neural computation》2007,19(12):3262-3292
We present a simple Markov model of spiking neural dynamics that can be analytically solved to characterize the stochastic dynamics of a finite-size spiking neural network. We give closed-form estimates for the equilibrium distribution, mean rate, variance, and autocorrelation function of the network activity. The model is applicable to any network where the probability of firing of a neuron in the network depends on only the number of neurons that fired in a previous temporal epoch. Networks with statistically homogeneous connectivity and membrane and synaptic time constants that are not excessively long could satisfy these conditions. Our model completely accounts for the size of the network and correlations in the firing activity. It also allows us to examine how the network dynamics can deviate from mean field theory. We show that the model and solutions are applicable to spiking neural networks in biophysically plausible parameter regimes.
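A toy Monte Carlo version of this class of one-step Markov models, using our own sigmoidal gain and parameters rather than the authors'; it shows how the finite-size network count can be simulated and compared against the mean-field fixed point.

```python
import numpy as np

# With N neurons, the number firing in epoch t+1 is Binomial(N, f(k_t)), where
# k_t is the count that fired in epoch t and f is a sigmoidal gain of the
# recurrent input (illustrative parameterization).

rng = np.random.default_rng(0)
N = 100

def gain(k, w=6.0, bias=-2.0):
    """Per-neuron firing probability given k active neurons in the last epoch."""
    return 1.0 / (1.0 + np.exp(-(w * k / N + bias)))

def simulate(epochs=20000):
    k = np.empty(epochs, dtype=int)
    k[0] = rng.binomial(N, gain(0))
    for t in range(1, epochs):
        k[t] = rng.binomial(N, gain(k[t - 1]))   # one-step Markov update
    return k

k = simulate()
# Mean-field prediction: fixed point of nu = f(N * nu), iterated to convergence
nu = 0.5
for _ in range(200):
    nu = gain(N * nu)
print(f"simulated rate {k.mean() / N:.3f}  variance {k.var():.1f}  "
      f"mean-field rate {nu:.3f}")
```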

4.
Brette R 《Neural computation》2006,18(8):2004-2027
Computational neuroscience relies heavily on the simulation of large networks of neuron models. There are essentially two simulation strategies: (1) using an approximation method (e.g., Runge-Kutta) with spike times binned to the time step and (2) calculating spike times exactly in an event-driven fashion. In large networks, the computation time of the best algorithm for either strategy scales linearly with the number of synapses, but each strategy has its own assets and constraints: approximation methods can be applied to any model but are inexact; exact simulation avoids numerical artifacts but is limited to simple models. Previous work has focused on improving the accuracy of approximation methods. In this article, we extend the range of models that can be simulated exactly to a more realistic model: an integrate-and-fire model with exponential synaptic conductances.

5.
In traditional event-driven strategies, spike timings are analytically given or calculated with arbitrary precision (up to machine precision). Exact computation is possible only for simplified neuron models, mainly the leaky integrate-and-fire model. In a recent paper, Zheng, Tonnelier, and Martinez (2009) introduced an approximate event-driven strategy, named voltage stepping, that allows the generic simulation of nonlinear spiking neurons. Promising results were achieved in the simulation of single quadratic integrate-and-fire neurons. Here, we assess the performance of voltage stepping in network simulations by considering more complex neurons (quadratic integrate-and-fire neurons with adaptation) coupled with multiple synapses. To handle the discrete nature of synaptic interactions, we recast voltage stepping in a general framework, the discrete event system specification. The efficiency of the method is assessed through simulations and comparisons with a modified time-stepping scheme of the Runge-Kutta type. We demonstrate numerically that the original order of voltage stepping is preserved when simulating connected spiking neurons, independent of the network activity and connectivity.

6.
A simulation procedure is described for making feasible large-scale simulations of recurrent neural networks of spiking neurons and plastic synapses. The procedure is applicable if the dynamic variables of both neurons and synapses evolve deterministically between any two successive spikes. Spikes introduce jumps in these variables, and since spike trains are typically noisy, spikes introduce stochasticity into both dynamics. Since all events in the simulation are guided by the arrival of spikes, at neurons or synapses, we name this procedure event-driven. The procedure is described in detail, and its logic and performance are compared with conventional (synchronous) simulations. The main impact of the new approach is a drastic reduction of the computational load incurred upon introduction of dynamic synaptic efficacies, which vary organically as a function of the activities of the pre- and postsynaptic neurons. In fact, the computational load per neuron in the presence of the synaptic dynamics grows linearly with the number of neurons and is only about 6% more than the load with fixed synapses. Even the latter is handled quite efficiently by the algorithm. We illustrate the operation of the algorithm in a specific case with integrate-and-fire neurons and specific spike-driven synaptic dynamics. Both dynamical elements have been found to be naturally implementable in VLSI. This network is simulated to show the effects on the synaptic structure of the presentation of stimuli, as well as the stability of the generated matrix to the neural activity it induces.
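The key bookkeeping trick, updating each synapse only when a spike actually reaches it by integrating its deterministic inter-spike dynamics in closed form, can be sketched as follows; the decay dynamics and the plasticity rule here are purely illustrative placeholders, not the authors' model.

```python
import math

# Hedged illustration of event-driven synaptic bookkeeping: each synapse stores
# its efficacy and the time it was last touched, and is brought up to date only
# when a presynaptic spike arrives.

class Synapse:
    def __init__(self, w=0.5, tau=200.0):
        self.w = w                  # efficacy
        self.x = 0.0                # internal calcium-like variable
        self.tau = tau              # decay time constant of x (ms)
        self.t_last = 0.0           # time of last update (ms)

    def on_presynaptic_spike(self, t, jump=0.2, learn=0.01):
        # 1. advance the deterministic dynamics from t_last to t in one step
        self.x *= math.exp(-(t - self.t_last) / self.tau)
        # 2. apply the spike-triggered jump and a toy efficacy update
        self.x += jump
        self.w += learn * (self.x - 0.5)   # purely illustrative plasticity rule
        self.w = min(max(self.w, 0.0), 1.0)
        self.t_last = t
        return self.w                      # weight seen by the postsynaptic neuron

syn = Synapse()
for t in [5.0, 7.0, 40.0, 41.0, 300.0]:   # presynaptic spike times (ms)
    print(f"t = {t:6.1f} ms   w = {syn.on_presynaptic_spike(t):.3f}")
```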

7.
Event-driven simulation strategies were proposed recently to simulate integrate-and-fire (IF) type neuronal models. These strategies can lead to computationally efficient algorithms for simulating large-scale networks of neurons; most important, such approaches are more precise than traditional clock-driven numerical integration approaches because the timing of spikes is treated exactly. The drawback of such event-driven methods is that in order to be efficient, the membrane equations must be solvable analytically, or at least provide simple analytic approximations for the state variables describing the system. This requirement prevents, in general, the use of conductance-based synaptic interactions within the framework of event-driven simulations and, thus, the investigation of network paradigms where synaptic conductances are important. We propose here a number of extensions of the classical leaky IF neuron model involving approximations of the membrane equation with conductance-based synaptic current, which lead to simple analytic expressions for the membrane state, and therefore can be used in the event-driven framework. These conductance-based IF (gIF) models are compared to commonly used models, such as the leaky IF model or biophysical models in which conductances are explicitly integrated. All models are compared with respect to various spiking response properties in the presence of synaptic activity, such as the spontaneous discharge statistics, the temporal precision in resolving synaptic inputs, and gain modulation under in vivo-like synaptic bombardment. Being based on the passive membrane equation with fixed-threshold spike generation, the proposed gIF models are situated in between leaky IF and biophysical models but are much closer to the latter with respect to their dynamic behavior and response characteristics, while still being nearly as computationally efficient as simple IF neuron models. gIF models should therefore provide a useful tool for efficient and precise simulation of large-scale neuronal networks with realistic, conductance-based synaptic interactions.

8.
Nearly all neuronal information processing and interneuronal communication in the brain involves action potentials, or spikes, which drive the short-term synaptic dynamics of neurons, but also their long-term dynamics, via synaptic plasticity. In many brain structures, action potential activity is considered to be sparse. This sparseness of activity has been exploited to reduce the computational cost of large-scale network simulations, through the development of event-driven simulation schemes. However, existing event-driven simulation schemes use extremely simplified neuronal models. Here, we implement and evaluate critically an event-driven algorithm (ED-LUT) that uses precalculated look-up tables to characterize synaptic and neuronal dynamics. This approach enables the use of more complex (and realistic) neuronal models or data in representing the neurons, while retaining the advantage of high-speed simulation. We demonstrate the method's application for neurons containing exponential synaptic conductances, thereby implementing shunting inhibition, a phenomenon that is critical to cellular computation. We also introduce an improved two-stage event-queue algorithm, which allows the simulations to scale efficiently to highly connected networks with arbitrary propagation delays. Finally, the scheme readily accommodates implementation of synaptic plasticity mechanisms that depend on spike timing, enabling future simulations to explore issues of long-term learning and adaptation in large-scale networks.
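A much-simplified sketch of the look-up-table principle, with our own table layout and toy conductance-based model rather than ED-LUT itself: the membrane trajectory is tabulated off-line over a grid of initial states and elapsed times, and event-driven updates then reduce to table interpolation.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import RegularGridInterpolator

# Toy membrane equation with an exponential synaptic conductance (unit leak):
#   dv/dt = (-(v - E_L) - g*(v - E_e)) / tau_m,   dg/dt = -g/tau_s
E_L, E_e, tau_m, tau_s = 0.0, 70.0, 20.0, 5.0

def membrane(t, y):
    v, g = y
    return [(-(v - E_L) - g * (v - E_e)) / tau_m, -g / tau_s]

# Off-line: tabulate v(dt) over a grid of initial states (v0, g0) and elapsed times
v0s = np.linspace(0.0, 20.0, 21)
g0s = np.linspace(0.0, 1.0, 21)
dts = np.linspace(0.0, 50.0, 101)
table = np.empty((v0s.size, g0s.size, dts.size))
for i, v0 in enumerate(v0s):
    for j, g0 in enumerate(g0s):
        sol = solve_ivp(membrane, (0.0, dts[-1]), [v0, g0], t_eval=dts, rtol=1e-8)
        table[i, j, :] = sol.y[0]
lut = RegularGridInterpolator((v0s, g0s, dts), table)

# On-line: advance the neuron from one synaptic event to the next by look-up
v, g, t_last = 0.0, 0.0, 0.0
for t_spike, w in [(2.0, 0.4), (6.0, 0.4), (7.5, 0.4)]:
    dt = t_spike - t_last
    v = lut([v, g, dt]).item()            # membrane potential just before the spike
    g = g * np.exp(-dt / tau_s) + w       # conductance decays, then jumps
    t_last = t_spike
    print(f"t = {t_spike:4.1f} ms   v = {v:6.2f} mV   g = {g:.3f}")
```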

9.
Mikula S  Niebur E 《Neural computation》2008,20(11):2637-2661
We provide analytical solutions for mean firing rates and cross-correlations of coincidence detector neurons in recurrent networks with excitatory or inhibitory connectivity, with rate-modulated steady-state spiking inputs. We use discrete-time finite-state Markov chains to represent network state transition probabilities, which are subsequently used to derive exact analytical solutions for mean firing rates and cross-correlations. As illustrated in several examples, the method can be used for modeling cortical microcircuits and clarifying single-neuron and population coding mechanisms. We also demonstrate that increasing firing rates do not necessarily translate into increasing cross-correlations, though our results do support the contention that firing rates and cross-correlations are likely to be coupled. Our analytical solutions underscore the complexity of the relationship between firing rates and cross-correlations.

10.
Real-time computing platform for spiking neurons (RT-spike)
A computing platform is described for simulating arbitrary networks of spiking neurons in real time. A hybrid computing scheme is adopted that uses both software and hardware components to manage the tradeoff between flexibility and computational power; the neuron model is implemented in hardware and the network model and the learning are implemented in software. The incremental transition of the software components into hardware is supported. We focus on a spike response model (SRM) for a neuron where the synapses are modeled as input-driven conductances. The temporal dynamics of the synaptic integration process are modeled with a synaptic time constant that results in a gradual injection of charge. This type of model is computationally expensive and is not easily amenable to existing software-based event-driven approaches. As an alternative we have designed an efficient time-based computing architecture in hardware, where the different stages of the neuron model are processed in parallel. Further improvements occur by computing multiple neurons in parallel using multiple processing units. This design is tested using reconfigurable hardware and its scalability and performance evaluated. Our overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops, and so we evaluate the performance of the system on simulating a model of the cerebellum where the emulation of the temporal dynamics of the synaptic integration process is important.

11.
Síma J  Sgall J 《Neural computation》2005,17(12):2635-2647
We study the computational complexity of training a single spiking neuron N with binary coded inputs and output that, in addition to adaptive weights and a threshold, has adjustable synaptic delays. A synchronization technique is introduced so that the results concerning the nonlearnability of spiking neurons with binary delays are generalized to arbitrary real-valued delays. In particular, the consistency problem for N with programmable weights, a threshold, and delays, and its approximation version are proven to be NP-complete. It follows that the spiking neurons with arbitrary synaptic delays are not properly PAC learnable and do not allow robust learning unless RP = NP. In addition, the representation problem for N, a question whether an n-variable Boolean function given in DNF (or as a disjunction of O(n) threshold gates) can be computed by a spiking neuron, is shown to be coNP-hard.

12.
Kelly RC  Kass RE 《Neural computation》2012,24(8):2007-2032
Several authors have previously discussed the use of log-linear models, often called maximum entropy models, for analyzing spike train data to detect synchrony. The usual log-linear modeling techniques, however, do not allow time-varying firing rates that typically appear in stimulus-driven (or action-driven) neurons, nor do they incorporate non-Poisson history effects or covariate effects. We generalize the usual approach, combining point-process regression models of individual neuron activity with log-linear models of multiway synchronous interaction. The methods are illustrated with results found in spike trains recorded simultaneously from primary visual cortex. We then assess the amount of data needed to reliably detect multiway spiking.

13.
We explore homeostasis in a silicon integrate-and-fire neuron. The neuron adapts its firing rate over time periods on the order of seconds or minutes so that it returns to its spontaneous firing rate after a sustained perturbation. Homeostasis is implemented via two schemes. One scheme looks at the presynaptic activity and adapts the synaptic weight depending on the presynaptic spiking rate. The second scheme adapts the synaptic "threshold" depending on the neuron's activity. The threshold is lowered if the neuron's activity decreases over a long time and is increased for prolonged increase in postsynaptic activity. The presynaptic adaptation mechanism models the contrast adaptation responses observed in simple cortical cells. To obtain the long adaptation timescales we require, we used floating-gates. Otherwise, the capacitors we would have to use would be of such a size that we could not integrate them and so we could not incorporate such long-time adaptation mechanisms into a very large-scale integration (VLSI) network of neurons. The circuits for the adaptation mechanisms have been implemented in a 2-μm double-poly CMOS process with a bipolar option. The results shown here are measured from a chip fabricated in this process.

14.
In a previous paper (Rudolph & Destexhe, 2006), we proposed various models, the gIF neuron models, of analytical integrate-and-fire (IF) neurons with conductance-based (COBA) dynamics for use in event-driven simulations. These models are based on an analytical approximation of the differential equation describing the IF neuron with exponential synaptic conductances and were successfully tested with respect to their response to random and oscillating inputs. Because they are analytical and mathematically simple, the gIF models are best suited for fast event-driven simulation strategies. However, the drawback of such models is that they rely on a nonrealistic postsynaptic potential (PSP) time course, consisting of a discontinuous jump followed by a decay governed by the membrane time constant. Here, we address this limitation by conceiving an analytical approximation of the COBA IF neuron model with the full PSP time course. The subthreshold and suprathreshold response of this gIF4 model reproduces remarkably well the postsynaptic responses of the numerically solved passive membrane equation subject to conductance noise, while gaining at least two orders of magnitude in computational performance. Although the analytical structure of the gIF4 model is more complex than that of its predecessors due to the necessity of calculating future spike times, a simple and fast algorithmic implementation for use in large-scale neural network simulations is proposed.

15.
Masuda N  Aihara K 《Neural computation》2003,15(6):1341-1372
Neuronal information processing is often studied on the basis of spiking patterns. The relevant statistics such as firing rates calculated with the peri-stimulus time histogram are obtained by averaging spiking patterns over many experimental runs. However, in real situations animals must respond to a single experimental stimulation, and what is available to the brain is not the trial statistics but the population statistics. Consequently, physiological ergodicity, namely, the consistency between trial averaging and population averaging, is implicitly assumed in the data analyses, although it does not trivially hold true. In this letter, we investigate how characteristics of noisy neural network models, such as single neuron properties, external stimuli, and synaptic inputs, affect the statistics of firing patterns. In particular, we show how high membrane potential sensitivity to input fluctuations, inability of neurons to remember past inputs, external stimuli with large variability and temporally separated peaks, and relatively few contributions of synaptic inputs result in spike trains that are reproducible over many trials. The reproducibility of spike trains and synchronous firing are contrasted and related to the ergodicity issue. Several numerical calculations with neural network examples are carried out to support the theoretical results.

16.
To emulate the function of the nervous system with artificial neural networks and apply them to practical problems, constructing suitable spiking neuron models is essential. To acquaint researchers with the progress on this problem, current single-compartment spiking neuron modeling approaches are reviewed. According to their complexity, these models are divided into three classes: physiological models with biological interpretability, nonlinear models with an intrinsic spike-generation mechanism, and linear models with a fixed threshold. Each class of modeling approach is described and analyzed, and its respective advantages and disadvantages are discussed.
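To make the three-way classification concrete, here is a minimal sketch with one textbook representative per class; the choice of representatives and all parameters are ours, not the review's.

```python
# 1. physiological -> Hodgkin-Huxley-type gated conductances (gating ODEs omitted)
# 2. nonlinear with a spike-generation mechanism -> quadratic integrate-and-fire
# 3. linear with a fixed threshold -> leaky integrate-and-fire

def lif_dvdt(v, I, tau=10.0, R=1.0):
    # linear subthreshold dynamics; the spike is an external rule "v >= theta"
    return (-v + R * I) / tau

def qif_dvdt(v, I, tau=10.0, v_rest=0.0, v_crit=10.0):
    # quadratic nonlinearity gives an intrinsic spike-generation (blow-up) mechanism
    return ((v - v_rest) * (v - v_crit) + I) / tau

def hh_like_dvdt(v, m, h, n, I, C=1.0, g_na=120.0, g_k=36.0, g_l=0.3,
                 E_na=50.0, E_k=-77.0, E_l=-54.4):
    # physiological: currents through voltage-gated sodium, potassium, leak channels
    i_na = g_na * m**3 * h * (v - E_na)
    i_k = g_k * n**4 * (v - E_k)
    i_l = g_l * (v - E_l)
    return (I - i_na - i_k - i_l) / C

print(lif_dvdt(v=5.0, I=12.0), qif_dvdt(v=5.0, I=12.0),
      hh_like_dvdt(v=-65.0, m=0.05, h=0.6, n=0.32, I=10.0))
```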

17.
A mixed-signal very large scale integration (VLSI) chip for large scale emulation of spiking neural networks is presented. The chip contains 2400 silicon neurons with fully programmable and reconfigurable synaptic connectivity. Each neuron implements a discrete-time model of a single-compartment cell. The model allows for analog membrane dynamics and an arbitrary number of synaptic connections, each with tunable conductance and reversal potential. The array of silicon neurons functions as an address-event (AE) transceiver, with incoming and outgoing spikes communicated over an asynchronous event-driven digital bus. Address encoding and conflict resolution of spiking events are implemented via a randomized arbitration scheme that ensures balanced servicing of event requests across the array. Routing of events is implemented externally using dynamically programmable random-access memory that stores a postsynaptic address, the conductance, and the reversal potential of each synaptic connection. Here, we describe the silicon neuron circuits, present experimental data characterizing the 3 mm × 3 mm chip fabricated in 0.5-μm complementary metal-oxide-semiconductor (CMOS) technology, and demonstrate its utility by configuring the hardware to emulate a model of attractor dynamics and waves of neural activity during sleep in rat hippocampus.

18.
Barak O  Tsodyks M 《Neural computation》2006,18(10):2343-2358
Recognizing specific spatiotemporal patterns of activity, which take place at timescales much larger than the synaptic transmission and membrane time constants, is a demand from the nervous system exemplified, for instance, by auditory processing. We consider the total synaptic input that a single readout neuron receives on presentation of spatiotemporal spiking input patterns. Relying on the monotonic relation between the mean and the variance of a neuron's input current and its spiking output, we derive learning rules that increase the variance of the input current evoked by learned patterns relative to that obtained from random background patterns. We demonstrate that the model can successfully recognize a large number of patterns and exhibits a slow deterioration in performance with increasing number of learned patterns. In addition, robustness to time warping of the input patterns is revealed to be an emergent property of the model. Using a leaky integrate-and-fire realization of the readout neuron, we demonstrate that the above results also apply when considering spiking output.

19.
A previously developed method for efficiently simulating complex networks of integrate-and-fire neurons was specialized to the case in which the neurons have fast unitary postsynaptic conductances. However, inhibitory synaptic conductances are often slower than excitatory ones for cortical neurons, and this difference can have a profound effect on network dynamics that cannot be captured with neurons that have only fast synapses. We thus extend the model to include slow inhibitory synapses. In this model, neurons are grouped into large populations of similar neurons. For each population, we calculate the evolution of a probability density function (PDF), which describes the distribution of neurons over state-space. The population firing rate is given by the flux of probability across the threshold voltage for firing an action potential. In the case of fast synaptic conductances, the PDF was one-dimensional, as the state of a neuron was completely determined by its transmembrane voltage. An exact extension to slow inhibitory synapses increases the dimension of the PDF to two or three, as the state of a neuron now includes the state of its inhibitory synaptic conductance. However, by assuming that the expected value of a neuron's inhibitory conductance is independent of its voltage, we derive a reduction to a one-dimensional PDF and avoid increasing the computational complexity of the problem. We demonstrate that although this assumption is not strictly valid, the results of the reduced model are surprisingly accurate.
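A deliberately stripped-down illustration of the probability-density viewpoint, using a toy random-walk neuron of our own rather than the authors' conductance-based model: the population is represented by a density over voltage bins, and the population firing rate is the probability flux across the threshold bin per time step.

```python
import numpy as np

# Each neuron occupies one of M voltage bins; per step it is kicked up one bin
# (excitation) with probability p_exc, down one bin (inhibition) with p_inh, or
# stays put.  Probability pushed past bin M-1 is the firing flux and re-enters
# at the reset bin.

M = 100                       # number of voltage bins; the top bin is threshold
p_exc, p_inh = 0.55, 0.35     # per-step kick probabilities
p_stay = 1.0 - p_exc - p_inh

p = np.zeros(M)
p[0] = 1.0                    # all neurons start at reset

rates = []
for step in range(2000):
    up = np.zeros(M)
    up[1:] = p_exc * p[:-1]          # probability kicked up one bin
    flux = p_exc * p[-1]             # probability pushed across threshold
    down = np.zeros(M)
    down[:-1] = p_inh * p[1:]        # probability kicked down one bin
    down[0] += p_inh * p[0]          # reflecting boundary at the bottom
    p = p_stay * p + up + down
    p[0] += flux                     # fired neurons re-enter at reset
    rates.append(flux)               # population rate = threshold flux per step

print(f"steady-state rate per step: {rates[-1]:.4f}   total probability: {p.sum():.6f}")
```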

20.
We study how the location of synaptic input influences the stable firing states in coupled model neurons bursting rhythmically at the gamma frequencies (20-70 Hz). The model neuron consists of two compartments and generates one, two, three or four spikes in each burst depending on the intensity of input current and the maximum conductance of M-type potassium current. If the somata are connected by reciprocal excitatory synapses, we find strong correlations between the changes in the bursting mode and those in the stable phase-locked states of the coupled neurons. The stability of the in-phase phase-locked state (synchronously firing state) tends to change when the individual neurons change their bursting patterns. If, however, the synaptic connections are terminated on the dendritic compartments, no such correlated changes occur. In this case, the coupled bursting neurons do not show the in-phase phase-locked state in any bursting mode. These results indicate that synchronization behaviour of bursting neurons significantly depends on the synaptic location, unlike a coupled system of regular spiking neurons.
