Similar Articles
20 similar articles found.
1.
Event-driven strategies have been used to simulate spiking neural networks exactly. Previous work is limited to linear integrate-and-fire neurons. In this note, we extend event-driven schemes to a class of nonlinear integrate-and-fire models. Results are presented for the quadratic integrate-and-fire model with instantaneous or exponential synaptic currents. Extensions to conductance-based currents and exponential integrate-and-fire neurons are discussed.
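For the quadratic model, the exactness of the event-driven scheme rests on a closed-form expression for the next firing time when the input is constant between events. Below is a minimal sketch for the normalized model dv/dt = v^2 + I (the parameter choices and the truncated handling of I <= 0 are assumptions, not the paper's implementation):

```python
import math

def qif_next_spike(v0, I, v_th=math.inf):
    """Exact time for the normalized QIF neuron dv/dt = v^2 + I to reach v_th,
    starting from v0, with the input I held constant between events.
    Only the I > 0 branch is worked out here; for I <= 0 the neuron fires only
    if v0 lies above the unstable fixed point, a case omitted for brevity."""
    if I <= 0:
        return None
    s = math.sqrt(I)
    # closed-form trajectory: v(t) = s * tan(s*t + atan(v0/s))
    phi0 = math.atan(v0 / s)
    phi1 = math.pi / 2 if math.isinf(v_th) else math.atan(v_th / s)
    return (phi1 - phi0) / s

# e.g. time to blow up (spike) from v = 0 with constant drive I = 0.5
print(qif_next_spike(v0=0.0, I=0.5))
```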

2.
For simulations of neural networks, there is a trade-off between the size of the network that can be simulated and the complexity of the model used for individual neurons. In this study, we describe a generalization of the leaky integrate-and-fire model that produces a wide variety of spiking behaviors while still being analytically solvable between firings. For different parameter values, the model produces spiking or bursting, tonic, phasic or adapting responses, depolarizing or hyperpolarizing afterpotentials, and so forth. The model consists of a diagonalizable set of linear differential equations describing the time evolution of membrane potential, a variable threshold, and an arbitrary number of firing-induced currents. Each of these variables is modified by an update rule when the potential reaches threshold. The variables used are intuitive and have biological significance. The model's rich behavior does not come from the differential equations, which are linear, but rather from complex update rules. This single-neuron model can be implemented using algorithms similar to the standard integrate-and-fire model. It is a natural match with event-driven algorithms for which the firing times are obtained as a solution of a polynomial equation.
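A minimal sketch of this kind of model (illustrative parameters and update rules, not the authors' exact formulation): the membrane potential, an adaptive threshold, and several firing-induced currents all decay exponentially between spikes, and simple update rules are applied when the potential reaches the threshold:

```python
import numpy as np

class GeneralizedLIF:
    """Toy generalized LIF: linear decay between spikes, update rules at spikes."""
    def __init__(self, tau_m=20.0, tau_th=50.0, tau_I=(10.0, 200.0),
                 V_rest=-70.0, Th_inf=-50.0, R=10.0, V_reset=-70.0,
                 dTh=2.0, dI=(-1.0, -0.2)):
        self.tau_m, self.tau_th = tau_m, tau_th
        self.tau_I = np.asarray(tau_I, float)
        self.V_rest, self.Th_inf, self.R = V_rest, Th_inf, R
        self.V_reset, self.dTh = V_reset, dTh
        self.dI = np.asarray(dI, float)
        self.V, self.Th = V_rest, Th_inf
        self.I = np.zeros(len(self.tau_I))       # firing-induced currents

    def advance(self, dt, I_ext=0.0):
        """Advance all variables by dt using their closed-form decay; the total
        current is treated as constant over dt for simplicity (the exact
        scheme would integrate it analytically)."""
        I_tot = I_ext + self.I.sum()
        V_inf = self.V_rest + self.R * I_tot
        self.V = V_inf + (self.V - V_inf) * np.exp(-dt / self.tau_m)
        self.Th = self.Th_inf + (self.Th - self.Th_inf) * np.exp(-dt / self.tau_th)
        self.I *= np.exp(-dt / self.tau_I)
        if self.V >= self.Th:                    # update rules: the source of
            self.V = self.V_reset                # the model's rich behavior
            self.Th += self.dTh
            self.I += self.dI
            return True
        return False

# e.g. adapting (phasic) response to a constant input step
nrn = GeneralizedLIF()
print([t for t in range(500) if nrn.advance(1.0, I_ext=2.5)])
```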

3.
A simulation procedure is described for making feasible large-scale simulations of recurrent neural networks of spiking neurons and plastic synapses. The procedure is applicable if the dynamic variables of both neurons and synapses evolve deterministically between any two successive spikes. Spikes introduce jumps in these variables, and since spike trains are typically noisy, spikes introduce stochasticity into both dynamics. Since all events in the simulation are guided by the arrival of spikes, at neurons or synapses, we name this procedure event-driven. The procedure is described in detail, and its logic and performance are compared with conventional (synchronous) simulations. The main impact of the new approach is a drastic reduction of the computational load incurred upon introduction of dynamic synaptic efficacies, which vary organically as a function of the activities of the pre- and postsynaptic neurons. In fact, the computational load per neuron in the presence of the synaptic dynamics grows linearly with the number of neurons and is only about 6% more than the load with fixed synapses. Even the latter is handled quite efficiently by the algorithm. We illustrate the operation of the algorithm in a specific case with integrate-and-fire neurons and specific spike-driven synaptic dynamics. Both dynamical elements have been found to be naturally implementable in VLSI. This network is simulated to show the effects of stimulus presentation on the synaptic structure, as well as the stability of the resulting synaptic matrix under the neural activity it induces.
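The event-driven control flow can be sketched as follows (a self-contained toy, not the paper's algorithm or data structures): a priority queue orders spike events, and each neuron's state is brought up to date lazily, only when a spike reaches it:

```python
import heapq
import math

class LIFNeuron:
    """Minimal LIF neuron whose state is updated only at spike arrivals."""
    def __init__(self, tau=20.0, v_th=1.0):
        self.v, self.tau, self.v_th, self.t_last = 0.0, tau, v_th, 0.0
        self.targets = []                       # postsynaptic indices

    def advance_to(self, t):                    # closed-form decay since last update
        self.v *= math.exp(-(t - self.t_last) / self.tau)
        self.t_last = t

def run_event_driven(neurons, weights, initial_events, t_end, delay=1.0):
    """initial_events: list of (time, neuron_index) spikes imposed externally."""
    queue = list(initial_events)
    heapq.heapify(queue)
    spikes = []
    while queue:
        t, src = heapq.heappop(queue)           # next spike event in time order
        if t > t_end:
            break
        for j in neurons[src].targets:          # deliver the spike to targets
            nrn = neurons[j]
            nrn.advance_to(t)
            nrn.v += weights[(src, j)]          # instantaneous synapse
            if nrn.v >= nrn.v_th:               # postsynaptic spike triggered
                nrn.v = 0.0
                spikes.append((t, j))
                heapq.heappush(queue, (t + delay, j))
    return spikes

# usage sketch: two neurons, 0 -> 1, driven by external spikes on neuron 0
nrns = [LIFNeuron(), LIFNeuron()]
nrns[0].targets = [1]
print(run_event_driven(nrns, {(0, 1): 0.6}, [(1.0, 0), (2.0, 0)], t_end=50.0))
```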

4.
The simulation of spiking neural networks (SNNs) is known to be a very time-consuming task. This limits the size of the SNNs that can be simulated in reasonable time or forces users to overly limit the complexity of the neuron models. This is one of the driving forces behind much of the recent research on event-driven simulation strategies. Although event-driven simulation allows precise and efficient simulation of certain spiking neuron models, it is not straightforward to generalize the technique to more complex neuron models, mostly because the firing time of these neuron models is computationally expensive to evaluate. Most solutions proposed in the literature concentrate on algorithms that can solve this problem efficiently. However, these solutions do not scale well when more state variables are involved in the neuron model, which is, for example, the case when multiple synaptic time constants for each neuron are used. In this letter, we show that an exact prediction of the firing time is not required in order to guarantee exact simulation results. Several techniques are presented that try to do the least possible amount of work to predict the firing times. We propose an elegant algorithm for the simulation of leaky integrate-and-fire (LIF) neurons with an arbitrary number of (unconstrained) synaptic time constants, which is able to combine these algorithmic techniques efficiently, resulting in very high simulation speed. Moreover, our algorithm is highly independent of the complexity (i.e., number of synaptic time constants) of the underlying neuron model.
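The "do only as much work as needed" idea can be sketched as follows (not the authors' algorithm; the bound and the bracketing grid are illustrative choices): the membrane potential is a sum of exponentials, and an accurate root search is paid for only when a cheap upper bound says a threshold crossing is possible before the next known event:

```python
import numpy as np
from scipy.optimize import brentq

def maybe_next_crossing(a, tau, v_th, t_max, n_grid=16):
    """V(t) = sum_k a[k]*exp(-t/tau[k]); return the first threshold crossing
    in [0, t_max], or None. A coarse bound avoids the root search entirely
    when no crossing is possible (a sketch; the grid may miss narrow peaks)."""
    a, tau = np.asarray(a, float), np.asarray(tau, float)
    V = lambda t: float(np.sum(a * np.exp(-t / tau)))
    # cheap test: positive terms bounded at t = 0, negative terms at t = t_max
    upper = np.sum(np.where(a > 0, a, a * np.exp(-t_max / tau)))
    if upper < v_th:
        return None                      # cannot fire before t_max: no root search
    if V(0.0) >= v_th:
        return 0.0
    # only now do the expensive part: bracket the crossing on a coarse grid
    ts = np.linspace(0.0, t_max, n_grid)
    vs = np.array([V(t) for t in ts])
    for k in range(n_grid - 1):
        if vs[k] < v_th <= vs[k + 1]:
            return brentq(lambda t: V(t) - v_th, ts[k], ts[k + 1])
    return None

# e.g. a rising-then-decaying PSP with two time constants
print(maybe_next_crossing(a=[1.2, -1.2], tau=[20.0, 4.0], v_th=0.5, t_max=30.0))
```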

5.
Event-driven simulation strategies were proposed recently to simulate integrate-and-fire (IF) type neuronal models. These strategies can lead to computationally efficient algorithms for simulating large-scale networks of neurons; most importantly, such approaches are more precise than traditional clock-driven numerical integration approaches because the timing of spikes is treated exactly. The drawback of such event-driven methods is that in order to be efficient, the membrane equations must be solvable analytically, or at least provide simple analytic approximations for the state variables describing the system. This requirement prevents, in general, the use of conductance-based synaptic interactions within the framework of event-driven simulations and, thus, the investigation of network paradigms where synaptic conductances are important. We propose here a number of extensions of the classical leaky IF neuron model involving approximations of the membrane equation with conductance-based synaptic current, which lead to simple analytic expressions for the membrane state, and therefore can be used in the event-driven framework. These conductance-based IF (gIF) models are compared to commonly used models, such as the leaky IF model or biophysical models in which conductances are explicitly integrated. All models are compared with respect to various spiking response properties in the presence of synaptic activity, such as the spontaneous discharge statistics, the temporal precision in resolving synaptic inputs, and gain modulation under in vivo-like synaptic bombardment. Being based on the passive membrane equation with fixed-threshold spike generation, the proposed gIF models are situated in between leaky IF and biophysical models but are much closer to the latter with respect to their dynamic behavior and response characteristics, while still being nearly as computationally efficient as simple IF neuron models. gIF models should therefore provide a useful tool for efficient and precise simulation of large-scale neuronal networks with realistic, conductance-based synaptic interactions.
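The key step in such models is replacing the full conductance-based membrane equation with something that has a closed-form solution between events. One simple approximation in this spirit (not necessarily the paper's gIF formulation; all parameter values below are illustrative) is to freeze the synaptic conductances between two events, which yields an effective time constant and an effective reversal potential:

```python
import math

def advance_membrane(V, dt, g_e, g_i, C=1.0, g_L=0.05,
                     E_L=-70.0, E_e=0.0, E_i=-80.0):
    """Closed-form membrane update under frozen conductances g_e, g_i:
    C dV/dt = -g_L (V - E_L) - g_e (V - E_e) - g_i (V - E_i) becomes linear,
    with an effective steady state and time constant."""
    g_tot = g_L + g_e + g_i
    V_inf = (g_L * E_L + g_e * E_e + g_i * E_i) / g_tot   # effective reversal
    tau_eff = C / g_tot                                   # effective time constant
    return V_inf + (V - V_inf) * math.exp(-dt / tau_eff)

# e.g. 5 ms of evolution under strong inhibitory (shunting) conductance
print(advance_membrane(V=-65.0, dt=5.0, g_e=0.02, g_i=0.2))
```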

6.
In a previous paper (Rudolph & Destexhe, 2006), we proposed various models, the gIF neuron models, of analytical integrate-and-fire (IF) neurons with conductance-based (COBA) dynamics for use in event-driven simulations. These models are based on an analytical approximation of the differential equation describing the IF neuron with exponential synaptic conductances and were successfully tested with respect to their response to random and oscillating inputs. Because they are analytical and mathematically simple, the gIF models are best suited for fast event-driven simulation strategies. However, the drawback of such models is that they rely on an unrealistic postsynaptic potential (PSP) time course, consisting of a discontinuous jump followed by a decay governed by the membrane time constant. Here, we address this limitation by constructing an analytical approximation of the COBA IF neuron model with the full PSP time course. The subthreshold and suprathreshold response of this gIF4 model reproduces remarkably well the postsynaptic responses of the numerically solved passive membrane equation subject to conductance noise, while gaining at least two orders of magnitude in computational performance. Although the analytical structure of the gIF4 model is more complex than that of its predecessors due to the necessity of calculating future spike times, a simple and fast algorithmic implementation for use in large-scale neural network simulations is proposed.

7.
In this paper, we describe a new Synaptic Plasticity Activity Rule (SAPR) developed for use in networks of spiking neurons. Such networks can be used for simulations of physiological experiments as well as for other computations like image analysis. Most synaptic plasticity rules use artificially defined functions to modify synaptic connection strengths. In contrast, our rule makes use of the existing postsynaptic potential values to compute the value of adjustment. The network of spiking neurons we consider consists of excitatory and inhibitory neurons. Each neuron is implemented as an integrate-and-fire model that accurately mimics the behavior of biological neurons. To test the performance of our new plasticity rule, we designed a model of a biologically inspired signal processing system and used it for object detection in eye images of diabetic retinopathy patients and in lung images of cystic fibrosis patients. The results show that the network detects the edges of objects within an image, essentially segmenting it. Our ultimate goal, however, is not the development of an image segmentation tool that would be more efficient than nonbiological algorithms, but developing a physiologically correct neural network model that could be applied to a wide range of neurological experiments. We decided to validate the SAPR by using it in a network of spiking neurons for image segmentation because it is easy to visually assess the results. Importantly, the image segmentation is performed in an entirely unsupervised way.

8.
9.
Spiking neurons can be used to process biological stimuli and can account for the brain's complex intelligent behavior. Spiking neural networks use neuron models that closely approximate biological neurons as their processing units, can directly simulate the neural computation models discovered in brain science, and produce spike outputs that can interface with biological nervous systems. The wavelet transform, in turn, is a powerful time-frequency analysis tool that can efficiently compress images and extract image features. This paper proposes a spiking neural network combined with the ON/OFF neuron arrays of the human visual system to implement a fast wavelet transform of visual images. Simulation results show that this spiking neural network preserves the key features of visual images well.

10.
In this letter, we study the effect of a unique initial stimulation on random recurrent networks of leaky integrate-and-fire neurons. Indeed, given a stochastic connectivity, this so-called spontaneous mode exhibits various nontrivial dynamics. This study is based on a mathematical formalism that allows us to examine the variability of the subsequent dynamics according to the parameters of the weight distribution. Under the independence hypothesis (e.g., in the case of very large networks), we are able to compute the average number of neurons that fire at a given time, that is, the spiking activity. In accordance with numerical simulations, we prove that this spiking activity reaches a steady state. We characterize this steady state and explore the transients.

11.
Nearly all neuronal information processing and interneuronal communication in the brain involves action potentials, or spikes, which drive the short-term synaptic dynamics of neurons, but also their long-term dynamics, via synaptic plasticity. In many brain structures, action potential activity is considered to be sparse. This sparseness of activity has been exploited to reduce the computational cost of large-scale network simulations, through the development of event-driven simulation schemes. However, existing event-driven simulation schemes use extremely simplified neuronal models. Here, we implement and critically evaluate an event-driven algorithm (ED-LUT) that uses precalculated look-up tables to characterize synaptic and neuronal dynamics. This approach enables the use of more complex (and realistic) neuronal models or data in representing the neurons, while retaining the advantage of high-speed simulation. We demonstrate the method's application for neurons containing exponential synaptic conductances, thereby implementing shunting inhibition, a phenomenon that is critical to cellular computation. We also introduce an improved two-stage event-queue algorithm, which allows the simulations to scale efficiently to highly connected networks with arbitrary propagation delays. Finally, the scheme readily accommodates implementation of synaptic plasticity mechanisms that depend on spike timing, enabling future simulations to explore issues of long-term learning and adaptation in large-scale networks.
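The look-up-table idea can be illustrated with a minimal sketch (not the ED-LUT code itself; the neuron model, grid ranges, and parameter values below are assumptions): the subthreshold update of a neuron with one exponential synaptic conductance is precomputed offline by numerical integration and then only interpolated at run time:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import RegularGridInterpolator

TAU_M, TAU_S, E_L, E_SYN = 20.0, 5.0, -70.0, 0.0   # ms, mV (illustrative values)

def _rhs(t, y):
    v, g = y                                   # g expressed in units of the leak
    return [(-(v - E_L) - g * (v - E_SYN)) / TAU_M, -g / TAU_S]

def build_table(v_grid, g_grid, dt_grid):
    """Precompute V(V0, g0, dt) on a grid by numerical integration."""
    table = np.empty((len(v_grid), len(g_grid), len(dt_grid)))
    for i, v0 in enumerate(v_grid):
        for j, g0 in enumerate(g_grid):
            sol = solve_ivp(_rhs, (0.0, dt_grid[-1]), [v0, g0],
                            t_eval=dt_grid, rtol=1e-8)
            table[i, j, :] = sol.y[0]          # membrane potential after dt
    return RegularGridInterpolator((v_grid, g_grid, dt_grid), table)

# offline precomputation, then constant-time table lookups during simulation
V_of = build_table(np.linspace(-80.0, -50.0, 31),
                   np.linspace(0.0, 2.0, 21),
                   np.linspace(0.0, 50.0, 51))
print(V_of([[-65.0, 0.5, 10.0]]))              # V after 10 ms from V0=-65, g0=0.5
```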

12.
Very large networks of spiking neurons can be simulated efficiently in parallel under the constraint that spike times are bound to an equidistant time grid. Within this scheme, the subthreshold dynamics of a wide class of integrate-and-fire-type neuron models can be integrated exactly from one grid point to the next. However, the loss in accuracy caused by restricting spike times to the grid can have undesirable consequences, which has led to interest in interpolating spike times between the grid points to retrieve an adequate representation of network dynamics. We demonstrate that the exact integration scheme can be combined naturally with off-grid spike events found by interpolation. We show that by exploiting the existence of a minimal synaptic propagation delay, the need for a central event queue is removed, so that the precision of event-driven simulation on the level of single neurons is combined with the efficiency of time-driven global scheduling. Further, for neuron models with linear subthreshold dynamics, even local event queuing can be avoided, resulting in much greater efficiency on the single-neuron level. These ideas are exemplified by two implementations of a widely used neuron model. We present a measure for the efficiency of network simulations in terms of their integration error and show that for a wide range of input spike rates, the novel techniques we present are both more accurate and faster than standard techniques.
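The exact integration step for linear subthreshold dynamics amounts to applying a precomputed propagator matrix from one grid point to the next. A minimal sketch, assuming a LIF neuron with one exponential synaptic current and illustrative constants (not the reference implementation):

```python
import numpy as np
from scipy.linalg import expm

TAU_M, TAU_S, R, H = 20.0, 2.0, 1.0, 0.1        # ms, arbitrary units, grid step h

# state y = (i_syn, V - V_rest); linear dynamics dy/dt = A y
A = np.array([[-1.0 / TAU_S, 0.0],
              [R / TAU_M,   -1.0 / TAU_M]])
P = expm(A * H)                                 # exact propagator for one step

def step(y, spike_weight=0.0):
    y = P @ y                                   # exact subthreshold update
    y[0] += spike_weight                        # grid-constrained spike arrival
    return y

y = np.zeros(2)
trace = []
for k in range(1000):                           # 100 ms, one input spike at 10 ms
    y = step(y, spike_weight=1.0 if k == 100 else 0.0)
    trace.append(y[1])
print(max(trace))                               # peak of the exactly integrated PSP
```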

13.
Dayhoff JE. Neural Computation, 2007, 19(9): 2433-2467.
We demonstrate a model in which synchronously firing ensembles of neurons are networked to produce computational results. Each ensemble is a group of biological integrate-and-fire spiking neurons, with probabilistic interconnections between groups. An analogy is drawn in which each individual processing unit of an artificial neural network corresponds to a neuronal group in a biological model. The activation value of a unit in the artificial neural network corresponds to the fraction of active neurons, synchronously firing, in a biological neuronal group. Weights of the artificial neural network correspond to the product of the interconnection density between groups, the size of the presynaptic group, and the postsynaptic potential heights in the synchronous group model. All three of these parameters can modulate connection strengths between neuronal groups in the synchronous group models. We give an example of nonlinear classification (XOR) and a function approximation example in which the capability of the artificial neural network can be captured by a neural network model with biological integrate-and-fire neurons configured as a network of synchronously firing ensembles of such neurons. We point out that the general function approximation capability proven for feedforward artificial neural networks appears to be approximated by networks of neuronal groups that fire in synchrony, where the groups comprise integrate-and-fire neurons. We discuss the advantages of this type of model for biological systems, its possible learning mechanisms, and the associated timing relationships.

14.
Population density methods provide promising time-saving alternatives to direct Monte Carlo simulations of neuronal network activity, in which one tracks the state of thousands of individual neurons and synapses. A population density method has been found to be roughly a hundred times faster than direct simulation for various test networks of integrate-and-fire model neurons with instantaneous excitatory and inhibitory postsynaptic conductances. In this method, neurons are grouped into large populations of similar neurons. For each population, one calculates the evolution of a probability density function (PDF) that describes the distribution of neurons over state space. The population firing rate is then given by the total flux of probability across the threshold voltage for firing an action potential. Extending the method beyond instantaneous synapses is necessary for obtaining accurate results, because synaptic kinetics play an important role in network dynamics. Embellishments incorporating more realistic synaptic kinetics for the underlying neuron model increase the dimension of the PDF, which was one-dimensional in the instantaneous synapse case. This increase in dimension causes a substantial increase in computation time to find the exact PDF, decreasing the computational speed advantage of the population density method over direct Monte Carlo simulation. We report here on a one-dimensional model of the PDF for neurons with arbitrary synaptic kinetics. The method is more accurate than the mean-field method in the steady state, where the mean-field approximation works best, and also under dynamic-stimulus conditions. The method is much faster than direct simulations. Limitations of the method are demonstrated, and possible improvements are discussed.
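A heavily simplified population density calculation in this spirit (not the authors' method; leak-only LIF neurons receiving instantaneous excitatory kicks, with illustrative parameters) can be sketched as follows: the density over voltage is evolved on a grid, and the population firing rate is read off as the probability flux carried across threshold by the kicks:

```python
import numpy as np

TAU, V_TH, N = 20.0, 1.0, 200          # leak time constant (ms), threshold, bins
DV = V_TH / N
V = (np.arange(N) + 0.5) * DV          # bin centres in [0, V_TH)
KICK = 0.05                            # excitatory PSP size (voltage units)
J = int(round(KICK / DV))              # PSP size in bins
NU = 2.0                               # input rate per neuron (spikes/ms)
DT = 0.05                              # time step (ms); satisfies the CFL condition

def step(rho):
    """One forward-Euler step of the discretized density equation.
    Returns the new density and the instantaneous population firing rate."""
    # leak drift dv/dt = -v/TAU: upwind fluxes at the interior bin faces (<= 0)
    F = (-(np.arange(1, N) * DV) / TAU) * rho[1:]
    drift = np.zeros(N)
    drift[:-1] -= F / DV                # mass arriving through each bin's upper face
    drift[1:] += F / DV                 # mass leaving through each bin's lower face
    # instantaneous kicks of J bins at Poisson rate NU
    jump = -NU * rho
    jump[J:] += NU * rho[:-J]           # neurons landing J bins higher
    rate = NU * rho[N - J:].sum() * DV  # kicked across threshold: they fire
    rho_new = rho + DT * (drift + jump)
    rho_new[0] += DT * rate / DV        # fired neurons re-enter at the reset
    return rho_new, rate

rho = np.full(N, 1.0 / V_TH)            # start from a uniform density
for _ in range(4000):                   # 200 ms of simulated time
    rho, r = step(rho)
print("steady-state rate ~", r, "spikes/ms per neuron")
```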

15.
Synchronization plays an important role in the generation of brain activity patterns. Experimental data show that neurons exhibit more reproducible activity for noise-like input than for constant current injection, an effect that cannot be reproduced by standard, oversimplified firing-rate (FR) models. This paper proposes a modification of the FR model that reproduces these kinds of activity. The FR model approximates the firing rate of an infinite number of leaky integrate-and-fire neurons, considered as a population, and in contrast to conventional models it accounts not only for the steady-state firing regime but also for fast-rising excitation. Comparison of our simulations with the experimental data shows that the synchronous firing of the neuronal population depends strongly on the synchrony of the neuronal states just before spiking. This effect is reproduced by the proposed FR model, in contrast to conventional FR models, and is in agreement with direct Monte Carlo simulation of individual neurons.

16.
Simple model of spiking neurons
A model is presented that reproduces spiking and bursting behavior of known types of cortical neurons. The model combines the biological plausibility of Hodgkin-Huxley-type dynamics and the computational efficiency of integrate-and-fire neurons. Using this model, one can simulate tens of thousands of spiking cortical neurons in real time (1 ms resolution) using a desktop PC.
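The model summarized here is Izhikevich's well-known two-variable model, v' = 0.04 v^2 + 5v + 140 - u + I and u' = a(bv - u), with the reset v <- c, u <- u + d whenever v reaches 30 mV. A minimal Euler sketch for a single regular-spiking neuron (the step sizes and input protocol are illustrative choices, not taken from the paper):

```python
def simulate_izhikevich(T_ms=300, I_amp=10.0,
                        a=0.02, b=0.2, c=-65.0, d=8.0):
    """Regular-spiking parameters; returns the spike times in ms."""
    v, u = -65.0, b * (-65.0)
    spike_times = []
    for t in range(T_ms):                  # 1 ms resolution
        I = I_amp if t >= 20 else 0.0      # constant current step at 20 ms
        for _ in range(2):                 # two 0.5 ms substeps for v (stability)
            v += 0.5 * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += a * (b * v - u)
        if v >= 30.0:                      # spike: reset membrane, bump recovery
            spike_times.append(t)
            v, u = c, u + d
    return spike_times

print(simulate_izhikevich())               # tonic spiking for the input step
```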

17.
Touboul J. Neural Computation, 2011, 23(7): 1704-1742.
Bidimensional spiking models are attracting considerable attention for their simplicity and their ability to reproduce various spiking patterns of cortical neurons, and they are used particularly for large network simulations. These models describe the dynamics of the membrane potential by a nonlinear differential equation that blows up in finite time, coupled to a second equation for adaptation. Spikes are emitted when the membrane potential blows up or reaches a cutoff θ. Precise simulation of the spike times and of the adaptation variable is critical, for it governs the spike pattern produced, yet it is hard to achieve because of the exploding nature of the system at the spike times. We thoroughly study the precision of fixed time-step integration schemes for this type of model and demonstrate that these methods produce systematic errors that are unbounded, as the cutoff value is increased, in the evaluation of the two crucial quantities: the spike time and the value of the adaptation variable at this time. Precise evaluation of these quantities therefore involves very small time steps and long simulation times. In order to achieve a fixed absolute precision in a reasonable computational time, we propose here a new algorithm to simulate these systems based on a variable integration step method that either integrates the original ordinary differential equation or the equation of the orbits in the phase plane, and compare this algorithm with the fixed time-step Euler scheme and other, more accurate simulation algorithms.
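The need for precise spike-time and adaptation-variable evaluation can be illustrated with a variable-step integrator plus event detection for the cutoff crossing. The sketch below is not the paper's algorithm; it uses SciPy's adaptive solver with a terminal event, a quadratic nonlinearity F(v) = v^2, and illustrative parameters:

```python
import numpy as np
from scipy.integrate import solve_ivp

A, B, I_EXT, THETA = 0.1, 1.0, 0.5, 200.0      # model parameters and cutoff
V_RESET, D_W = -1.0, 0.2                       # reset potential and adaptation jump

def rhs(t, y):
    v, w = y                                   # dv/dt = v^2 - w + I, dw/dt = a(bv - w)
    return [v * v - w + I_EXT, A * (B * v - w)]

def cutoff(t, y):                              # zero when v crosses the cutoff theta
    return y[0] - THETA
cutoff.terminal, cutoff.direction = True, 1.0

def simulate(t_end=50.0, y0=(-1.0, 0.0)):
    t, y, spikes = 0.0, np.asarray(y0, float), []
    while t < t_end:
        sol = solve_ivp(rhs, (t, t_end), y, events=cutoff,
                        rtol=1e-8, atol=1e-10, max_step=1.0)
        if sol.t_events[0].size == 0:          # no further spikes before t_end
            break
        t = sol.t_events[0][0]                 # precise spike time
        w_spike = sol.y_events[0][0][1]        # adaptation value at the spike
        spikes.append((t, w_spike))
        y = np.array([V_RESET, w_spike + D_W]) # reset map after the blow-up
    return spikes

print(simulate())
```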

18.
Exact simulation of integrate-and-fire models with exponential currents
Brette R. Neural Computation, 2007, 19(10): 2604-2609.
Neural networks can be simulated exactly using event-driven strategies, in which the algorithm advances directly from one spike to the next. This approach applies to neuron models for which we have (1) an explicit expression for the evolution of the state variables between spikes and (2) an explicit test on the state variables that predicts whether and when a spike will be emitted. In a previous work, we proposed a method that allows exact simulation of an integrate-and-fire model with exponential conductances, with the constraint of a single synaptic time constant. In this note, we propose a method, based on polynomial root finding, that applies to integrate-and-fire models with exponential currents, with possibly many different synaptic time constants. Models can include biexponential synaptic currents and spike-triggered adaptation currents.
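The polynomial-root idea can be sketched as follows (an illustration of the principle, not Brette's exact algorithm): if every decay rate in the membrane state is an integer multiple of a common base rate 1/tau0, the substitution x = exp(-t/tau0) turns the threshold condition into a polynomial equation in x:

```python
import numpy as np

def next_threshold_time(a, n, tau0, theta):
    """V(t) = sum_k a[k]*exp(-n[k]*t/tau0) with integer rate multiples n[k];
    return the earliest t >= 0 with V(t) = theta, or None."""
    deg = max(n)
    coeffs = np.zeros(deg + 1)                 # descending powers, as numpy.roots expects
    for ak, nk in zip(a, n):
        coeffs[deg - nk] += ak                 # term a_k * x^{n_k}
    coeffs[deg] -= theta                       # constant term: -theta
    roots = np.roots(coeffs)
    x = roots[np.isclose(roots.imag, 0.0)].real
    x = x[(x > 0.0) & (x <= 1.0)]              # x in (0, 1] corresponds to t >= 0
    if x.size == 0:
        return None                            # the trajectory never reaches theta
    return float(-tau0 * np.log(x.max()))      # largest x <-> earliest crossing time

# e.g. a biexponential PSP (rates 1/tau0 and 5/tau0) crossing threshold 0.5
print(next_threshold_time(a=[1.2, -1.2], n=[1, 5], tau0=20.0, theta=0.5))
```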

19.
We present a spiking neuron model that allows for an analytic calculation of the correlations between pre- and postsynaptic spikes. The neuron model is a generalization of the integrate-and-fire model and is equipped with a probabilistic spike-triggering mechanism. We show that under certain biologically plausible conditions, pre- and postsynaptic spike trains can be described simultaneously as an inhomogeneous Poisson process. Inspired by experimental findings, we develop a model for synaptic long-term plasticity that relies on the relative timing of pre- and postsynaptic action potentials. Given the input statistics, we compute the stationary synaptic weights that result from the temporal correlations between the pre- and postsynaptic spikes. By means of both analytic calculations and computer simulations, we show that such a mechanism of synaptic plasticity is able to strengthen those input synapses that convey precisely timed spikes at the expense of synapses that deliver spikes with a broad temporal distribution. This may be of vital importance for any kind of information processing based on spiking neurons and temporal coding.
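A generic pair-based spike-timing rule in the spirit described (not the paper's analytically derived model; the window amplitudes and time constants below are illustrative) already shows the qualitative effect, rewarding precisely timed presynaptic input:

```python
import numpy as np

def stdp_dw(pre_times, post_times, a_plus=0.01, a_minus=0.012,
            tau_plus=20.0, tau_minus=20.0):
    """Total weight change from all pre/post spike pairs, with exponential
    potentiation and depression windows (times in ms)."""
    dw = 0.0
    for t_post in post_times:
        for t_pre in pre_times:
            dt = t_post - t_pre
            if dt > 0:                      # pre before post: potentiation
                dw += a_plus * np.exp(-dt / tau_plus)
            elif dt < 0:                    # post before pre: depression
                dw -= a_minus * np.exp(dt / tau_minus)
    return dw

# a synapse delivering tightly clustered spikes just before the postsynaptic
# spike gains more weight than one delivering temporally scattered spikes
post = [50.0]
print(stdp_dw([45.0, 46.0, 47.0], post), stdp_dw([10.0, 45.0, 90.0], post))
```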

20.
In recent years, both multilayer perceptrons and networks of spiking neurons have been used in applications ranging from detailed models of specific cortical areas to image processing. A more challenging application is to find solutions to functional equations in order to gain insight into underlying phenomena. Finding the roots of real-valued, monotonically increasing function mappings is the solution to a particular class of functional equations. Furthermore, spiking neural network approaches to solving problems described by functional equations may be a useful tool for providing important insights into how different regions of the brain may coordinate signaling within and between modalities, thus providing a possible basis for constructing a theory of brain function. In this letter, we present for the first time a spiking neural network architecture, based on integrate-and-fire units and delays, that is capable of calculating the functional or iterative root of nonlinear functions by solving a particular class of functional equation.
