Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
A simulation procedure is described for making feasible large-scale simulations of recurrent neural networks of spiking neurons and plastic synapses. The procedure is applicable if the dynamic variables of both neurons and synapses evolve deterministically between any two successive spikes. Spikes introduce jumps in these variables, and since spike trains are typically noisy, spikes introduce stochasticity into both dynamics. Since all events in the simulation are guided by the arrival of spikes, at neurons or synapses, we name this procedure event-driven. The procedure is described in detail, and its logic and performance are compared with conventional (synchronous) simulations. The main impact of the new approach is a drastic reduction of the computational load incurred upon introduction of dynamic synaptic efficacies, which vary organically as a function of the activities of the pre- and postsynaptic neurons. In fact, the computational load per neuron in the presence of the synaptic dynamics grows linearly with the number of neurons and is only about 6% more than the load with fixed synapses. Even the latter is handled quite efficiently by the algorithm. We illustrate the operation of the algorithm in a specific case with integrate-and-fire neurons and specific spike-driven synaptic dynamics. Both dynamical elements have been found to be naturally implementable in VLSI. This network is simulated to show the effects on the synaptic structure of the presentation of stimuli, as well as the stability of the generated matrix to the neural activity it induces.
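The between-spike determinism that makes the procedure feasible can be illustrated with a minimal, hypothetical sketch (not the authors' implementation): leaky integrate-and-fire membrane potentials are advanced analytically only when a spike event arrives, and pending spikes are kept in a priority queue. The neuron model, fixed synaptic delay, and all parameter values are illustrative assumptions.

```python
import heapq
import math

# Hypothetical sketch of an event-driven loop for leaky integrate-and-fire
# neurons (not the authors' implementation).  Between spikes the membrane
# potential decays analytically, so a neuron's state is touched only when
# a spike event reaches it.  All parameter values are illustrative.
TAU_M = 20.0     # membrane time constant (ms)
V_THRESH = 1.0   # firing threshold
V_RESET = 0.0    # reset potential

def run(n_neurons, connections, weights, initial_spikes, t_max, delay=1.0):
    """connections[i] / weights[i]: targets and weights of neuron i's synapses.
    initial_spikes: list of (time, neuron) events seeding the simulation."""
    v = [0.0] * n_neurons
    last_update = [0.0] * n_neurons
    events = list(initial_spikes)
    heapq.heapify(events)
    emitted = []
    while events:
        t, src = heapq.heappop(events)
        if t > t_max:
            break
        emitted.append((t, src))
        for j, w in zip(connections[src], weights[src]):
            # analytic decay of neuron j's potential since its last event
            v[j] = v[j] * math.exp(-(t - last_update[j]) / TAU_M) + w
            last_update[j] = t
            if v[j] >= V_THRESH:
                v[j] = V_RESET
                heapq.heappush(events, (t + delay, j))
    return emitted
```

The point of the design is visible in the inner loop: cost is incurred per spike delivered, not per time step, which is why dynamic synapses add so little overhead in the event-driven setting.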

2.
Many biological neural network models face the problem of scalability because of the limited computational power of today's computers. Thus, it is difficult to assess the efficiency of these models to solve complex problems such as image processing. Here, we describe how this problem can be tackled using event-driven computation. Only the neurons that emit a discharge are processed and, as long as the average spike discharge rate is low, millions of neurons and billions of connections can be modelled. We describe the underlying computation and implementation of such a mechanism in SpikeNET, our neural network simulation package. The type of model one can build is not only biologically compliant, it is also computationally efficient as 400 000 synaptic weights can be propagated per second on a standard desktop computer. In addition, for large networks, we can set very small time steps (< 0.01 ms) without significantly increasing the computation time. As an example, this method is applied to solve complex cognitive tasks such as face recognition in natural images.
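The "only process neurons that discharge" idea can be sketched as follows; this is a hypothetical toy illustration in the spirit of the abstract, not SpikeNET's actual code, and the threshold and reset rule are assumptions.

```python
import numpy as np

# Hypothetical sketch of list-based event-driven propagation (not SpikeNET's
# actual code): at each step, only neurons that fired have their outgoing
# weight rows added to their targets' potentials, so the cost per step
# scales with the number of active neurons, not with network size.
def step(potentials, weights, fired, threshold=1.0):
    """fired: indices of neurons that spiked on the previous step;
    weights[i]: dense row of outgoing weights of neuron i."""
    for i in fired:
        potentials += weights[i]          # propagate active neurons only
    new_fired = np.flatnonzero(potentials >= threshold)
    potentials[new_fired] = 0.0           # reset neurons that just fired
    return potentials, new_fired
```

When average firing rates are low, `fired` is a short list even for very large networks, which is the source of the scalability claimed above.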

3.
We present a new technique, based on a proposed event-based strategy (Mattia & Del Giudice, 2000), for efficiently simulating large networks of simple model neurons. The strategy was based on the fact that interactions among neurons occur by means of events that are well localized in time (the action potentials) and relatively rare. In the interval between two of these events, the state variables associated with a model neuron or a synapse evolved deterministically and in a predictable way. Here, we extend the event-driven simulation strategy to the case in which the dynamics of the state variables in the inter-event intervals are stochastic. This extension captures both the situation in which the simulated neurons are inherently noisy and the case in which they are embedded in a very large network and receive a huge number of random synaptic inputs. We show how to effectively include the impact of large background populations into neuronal dynamics by means of the numerical evaluation of the statistical properties of single-model neurons under random current injection. The new simulation strategy allows the study of networks of interacting neurons with an arbitrary number of external afferents and inherent stochastic dynamics.

4.
In this paper, we describe a new Synaptic Plasticity Activity Rule (SAPR) developed for use in networks of spiking neurons. Such networks can be used for simulations of physiological experiments as well as for other computations like image analysis. Most synaptic plasticity rules use artificially defined functions to modify synaptic connection strengths. In contrast, our rule makes use of the existing postsynaptic potential values to compute the value of adjustment. The network of spiking neurons we consider consists of excitatory and inhibitory neurons. Each neuron is implemented as an integrate-and-fire model that accurately mimics the behavior of biological neurons. To test the performance of our new plasticity rule we designed a model of a biologically inspired signal processing system, and used it for object detection in eye images of diabetic retinopathy patients, and lung images of cystic fibrosis patients. The results show that the network detects the edges of objects within an image, essentially segmenting it. Our ultimate goal, however, is not the development of an image segmentation tool that would be more efficient than nonbiological algorithms, but developing a physiologically correct neural network model that could be applied to a wide range of neurological experiments. We decided to validate the SAPR by using it in a network of spiking neurons for image segmentation because it is easy to visually assess the results. Importantly, image segmentation is done in an entirely unsupervised way.

5.
We study networks of spiking neurons that use the timing of pulses to encode information. Nonlinear interactions model the spatial groupings of synapses on the neural dendrites and describe the computations performed at local branches. Within a theoretical framework of learning we analyze the question of how many training examples these networks must receive to be able to generalize well. Bounds for this sample complexity of learning can be obtained in terms of a combinatorial parameter known as the pseudodimension. This dimension characterizes the computational richness of a neural network and is given in terms of the number of network parameters. Two types of feedforward architectures are considered: constant-depth networks and networks of unconstrained depth. We derive asymptotically tight bounds for each of these network types. Constant-depth networks are shown to have an almost linear pseudodimension, whereas the pseudodimension of general networks is quadratic. Networks of spiking neurons that use temporal coding are becoming increasingly more important in practical tasks such as computer vision, speech recognition, and motor control. The question of how well these networks generalize from a given set of training examples is a central issue for their successful application as adaptive systems. The results show that, although coding and computation in these networks are quite different and in many cases more powerful, their generalization capabilities are at least as good as those of traditional neural network models.

6.
A supervised learning rule for Spiking Neural Networks (SNNs) is presented that can cope with neurons that spike multiple times. The rule is developed by extending the existing SpikeProp algorithm which could only be used for one spike per neuron. The problem caused by the discontinuity in the spike process is counteracted with a simple but effective rule, which makes the learning process more efficient. Our learning rule is successfully tested on a classification task of Poisson spike trains. We also applied the algorithm on a temporal version of the XOR problem and show that it is possible to learn this classical problem using only one spiking neuron making use of a hair-trigger situation.

7.
Notable advances in the understanding of neural processing were made when sensory systems were investigated from the viewpoint of adaptation to the statistical structure of their input space. For this purpose, mathematical methods for data representation were used. Here, we point out that emphasis on the input structure has come at the cost of the biological plausibility of the corresponding neuron models which process the natural stimuli. The signal transformation of the data representation methods does not correspond well to the signal transformations happening at the single-cell level in neural systems. Hence, we now propose data representation by means of spiking neuron models. We formulate the data representation problem as an optimization problem and derive the fundamental quantities for an iterative learning scheme. This work was presented in part at the 12th International Symposium on Artificial Life and Robotics, Oita, Japan, January 25–27, 2007.

8.
Joshi P, Maass W. Neural Computation, 2005, 17(8): 1715–1738
How can complex movements that take hundreds of milliseconds be generated by stereotypical neural microcircuits consisting of spiking neurons with much faster dynamics? We show that linear readouts from generic neural microcircuit models can be trained to generate basic arm movements. Such movement generation is independent of the arm model used and the type of feedback that the circuit receives. We demonstrate this by considering two different models of a two-jointed arm, a standard model from robotics and a standard model from biology, that each generates different kinds of feedback. Feedback that arrives with biologically realistic delays of 50 to 280 ms turns out to give rise to the best performance. If feedback with such a desirable delay is not available, the neural microcircuit model also achieves good performance if it uses internally generated estimates of such feedback. Existing methods for movement generation in robotics that take the particular dynamics of sensors and actuators into account (embodiment of motor systems) are taken one step further with this approach, which provides methods for also using the embodiment of motion generation circuitry, that is, the inherent dynamics and spatial structure of neural circuits, for the generation of movement.

9.
Dynamics of spiking neurons with electrical coupling
Chow CC, Kopell N. Neural Computation, 2000, 12(7): 1643–1678
We analyze the existence and stability of phase-locked states of neurons coupled electrically with gap junctions. We show that spike shape and size, along with driving current (which affects network frequency), play a large role in which phase-locked modes exist and are stable. Our theory makes predictions about biophysical models using spikes of different shapes, and we present simulations to confirm the predictions. We also analyze a large system of all-to-all coupled neurons and show that the splay-phase state can exist only for a certain range of frequencies.

10.
Solving graph algorithms with networks of spiking neurons
Spatio-temporal coding that combines spatial constraints with temporal sequencing is of great interest to brain-like circuit modelers. In this paper we present some new ideas of how these types of circuits can self-organize. We introduce a temporal correlation rule based on the time difference between the firing of neurons. With the aid of this rule we show an analogy between a graph and a network of spiking neurons. The shortest path, clustering based on the nearest neighbor, and the minimal spanning tree algorithms are solved using the proposed approach.
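The graph/spiking-network analogy for shortest paths can be sketched in a few lines (a hedged illustration of the general idea, not the paper's specific rule): if the synaptic delay on each connection equals the corresponding edge weight, and every neuron fires exactly once, on its first incoming spike, then each neuron's first-spike time equals its shortest-path distance from the stimulated source neuron.

```python
import heapq

# Sketch of the shortest-path analogy: synaptic delays play the role of
# edge weights, and first-spike times realize shortest-path distances.
# Structurally this is Dijkstra's algorithm, with the refractory period
# ("fire only once") playing the role of the visited set.
def shortest_paths_by_spikes(delays, source):
    """delays[i]: list of (j, d) pairs; a spike from neuron i reaches j
    after delay d.  Returns first-spike times, i.e. distances from source."""
    first_spike = [None] * len(delays)
    events = [(0.0, source)]          # the source neuron is stimulated at t = 0
    while events:
        t, i = heapq.heappop(events)
        if first_spike[i] is not None:
            continue                  # refractory: only the first spike counts
        first_spike[i] = t
        for j, d in delays[i]:
            heapq.heappush(events, (t + d, j))
    return first_spike
```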

11.
We present a dynamical theory of integrate-and-fire neurons with strong synaptic coupling. We show how phase-locked states that are stable in the weak coupling regime can destabilize as the coupling is increased, leading to states characterized by spatiotemporal variations in the interspike intervals (ISIs). The dynamics is compared with that of a corresponding network of analog neurons in which the outputs of the neurons are taken to be mean firing rates. A fundamental result is that for slow interactions, there is good agreement between the two models (on an appropriately defined timescale). Various examples of desynchronization in the strong coupling regime are presented. First, a globally coupled network of identical neurons with strong inhibitory coupling is shown to exhibit oscillator death in which some of the neurons suppress the activity of others. However, the stability of the synchronous state persists for very large networks and fast synapses. Second, an asymmetric network with a mixture of excitation and inhibition is shown to exhibit periodic bursting patterns. Finally, a one-dimensional network of neurons with long-range interactions is shown to desynchronize to a state with a spatially periodic pattern of mean firing rates across the network. This is modulated by deterministic fluctuations of the instantaneous firing rate whose size is an increasing function of the speed of synaptic response.

12.
Simple model of spiking neurons
A model is presented that reproduces spiking and bursting behavior of known types of cortical neurons. The model combines the biological plausibility of Hodgkin-Huxley-type dynamics and the computational efficiency of integrate-and-fire neurons. Using this model, one can simulate tens of thousands of spiking cortical neurons in real time (1 ms resolution) using a desktop PC.
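The model in question is the two-variable quadratic system v' = 0.04v² + 5v + 140 − u + I, u' = a(bv − u), with the reset v ← c, u ← u + d whenever v reaches the 30 mV cutoff. A minimal sketch with the published regular-spiking parameter set (a = 0.02, b = 0.2, c = −65, d = 8); the Euler step size and input current here are our own illustrative choices:

```python
# Euler integration of the two-variable model; v is in mV.
# Parameters a=0.02, b=0.2, c=-65, d=8 are the published values for a
# regular-spiking cortical neuron; dt and I are illustrative choices.
def simulate(I=10.0, t_max=1000.0, dt=0.5):
    a, b, c, d = 0.02, 0.2, -65.0, 8.0
    v, u = -65.0, b * -65.0           # start at rest
    spike_times = []
    t = 0.0
    while t < t_max:
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                 # spike cutoff reached
            spike_times.append(t)
            v, u = c, u + d           # reset membrane, bump recovery variable
        t += dt
    return spike_times
```

With a sustained input current the neuron fires repetitively; the cheap per-step arithmetic (one quadratic, one linear update) is what makes real-time simulation of tens of thousands of neurons possible.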

13.
In a previous paper (Rudolph & Destexhe, 2006), we proposed various models, the gIF neuron models, of analytical integrate-and-fire (IF) neurons with conductance-based (COBA) dynamics for use in event-driven simulations. These models are based on an analytical approximation of the differential equation describing the IF neuron with exponential synaptic conductances and were successfully tested with respect to their response to random and oscillating inputs. Because they are analytical and mathematically simple, the gIF models are best suited for fast event-driven simulation strategies. However, the drawback of such models is that they rely on a nonrealistic postsynaptic potential (PSP) time course, consisting of a discontinuous jump followed by a decay governed by the membrane time constant. Here, we address this limitation by conceiving an analytical approximation of the COBA IF neuron model with the full PSP time course. The subthreshold and suprathreshold response of this gIF4 model reproduces remarkably well the postsynaptic responses of the numerically solved passive membrane equation subject to conductance noise, while gaining at least two orders of magnitude in computational performance. Although the analytical structure of the gIF4 model is more complex than that of its predecessors due to the necessity of calculating future spike times, a simple and fast algorithmic implementation for use in large-scale neural network simulations is proposed.

14.
Neural responses in sensory systems are typically triggered by a multitude of stimulus features. Using information theory, we study the encoding accuracy of a population of stochastically spiking neurons characterized by different tuning widths for the different features. The optimal encoding strategy for representing one feature most accurately consists of narrow tuning in the dimension to be encoded, to increase the single-neuron Fisher information, and broad tuning in all other dimensions, to increase the number of active neurons. Extremely narrow tuning without sufficient receptive field overlap will severely worsen the coding. This implies the existence of an optimal tuning width for the feature to be encoded. Empirically, only a subset of all stimulus features will normally be accessible. In this case, relative encoding errors can be calculated that yield a criterion for the function of a neural population based on the measured tuning curves.

15.
The emergence of synchrony in the activity of large, heterogeneous networks of spiking neurons is investigated. We define the robustness of synchrony by the critical disorder at which the asynchronous state becomes linearly unstable. We show that at low firing rates, synchrony is more robust in excitatory networks than in inhibitory networks, but excitatory networks cannot display any synchrony when the average firing rate becomes too high. We introduce a new regime where all inputs, external and internal, are strong and have opposite effects that cancel each other when averaged. In this regime, the robustness of synchrony is strongly enhanced, and robust synchrony can be achieved at a high firing rate in inhibitory networks. On the other hand, in excitatory networks, synchrony remains limited in frequency due to the intrinsic instability of strong recurrent excitation.

16.
A network of leaky integrate-and-fire (IAF) neurons is proposed to segment gray-scale images. The network architecture with local competition between neurons that encode segment assignments of image blocks is motivated by a histogram clustering approach to image segmentation. Lateral excitatory connections between neighboring image sites yield a local smoothing of segments. The mean firing rate of class membership neurons encodes the image segmentation. A weight modification scheme is proposed that estimates segment-specific prototypical histograms. The robustness properties of the network implementation make it amenable to an analog VLSI realization. Results on synthetic and real-world images demonstrate the effectiveness of the architecture.

17.
Event-driven simulation strategies were proposed recently to simulate integrate-and-fire (IF) type neuronal models. These strategies can lead to computationally efficient algorithms for simulating large-scale networks of neurons; most important, such approaches are more precise than traditional clock-driven numerical integration approaches because the timing of spikes is treated exactly. The drawback of such event-driven methods is that in order to be efficient, the membrane equations must be solvable analytically, or at least provide simple analytic approximations for the state variables describing the system. This requirement prevents, in general, the use of conductance-based synaptic interactions within the framework of event-driven simulations and, thus, the investigation of network paradigms where synaptic conductances are important. We propose here a number of extensions of the classical leaky IF neuron model involving approximations of the membrane equation with conductance-based synaptic current, which lead to simple analytic expressions for the membrane state, and therefore can be used in the event-driven framework. These conductance-based IF (gIF) models are compared to commonly used models, such as the leaky IF model or biophysical models in which conductances are explicitly integrated. All models are compared with respect to various spiking response properties in the presence of synaptic activity, such as the spontaneous discharge statistics, the temporal precision in resolving synaptic inputs, and gain modulation under in vivo-like synaptic bombardment. Being based on the passive membrane equation with fixed-threshold spike generation, the proposed gIF models are situated in between leaky IF and biophysical models but are much closer to the latter with respect to their dynamic behavior and response characteristics, while still being nearly as computationally efficient as simple IF neuron models. gIF models should therefore provide a useful tool for efficient and precise simulation of large-scale neuronal networks with realistic, conductance-based synaptic interactions.

18.
Dissipative particle dynamics (DPD) simulation is implemented on multiple GPUs using NVIDIA's Compute Unified Device Architecture (CUDA) in this paper. Data communication between GPUs is carried out using POSIX threads. Compared with the single-GPU implementation, this implementation provides faster computation and more storage space, allowing simulations of significantly larger systems. In benchmarks, the performance of the GPUs is compared with that of Material Studio running on a single CPU core; we achieve more than a 90x speedup by using three C2050 GPUs to perform simulations on an 80×80×80 system. This implementation is applied to the study of the dispersancy of lubricant succinimide dispersants. A series of simulations is performed on lubricant–soot–dispersant systems to study the factors affecting dispersancy, including concentration and interaction with the lubricant, and the simulation results are consistent with our present study.

19.
Firing rates and synchronous firing are often simultaneously relevant signals, and they independently or cooperatively represent external sensory inputs, cognitive events, and environmental situations such as body position. However, how rates and synchrony comodulate and which aspects of inputs are effectively encoded, particularly in the presence of dynamical inputs, are unanswered questions. We examine theoretically how mixed information in dynamic mean input and noise input is represented by dynamic population firing rates and synchrony. In a subthreshold regime, amplitudes of spatially uncorrelated noise are encoded up to a fairly high input frequency, but this requires both rate and synchrony output channels. In a suprathreshold regime, means and common noise amplitudes can be simultaneously and separately encoded by rates and synchrony, respectively, but the input frequency for which this is possible has a lower limit.

20.