Similar articles
 Found 20 similar articles (search time: 125 ms)
1.
We present a multichip, mixed-signal VLSI system for spike-based vision processing. The system consists of an 80 x 60 pixel neuromorphic retina and a 4800 neuron silicon cortex with 4,194,304 synapses. Its functionality is illustrated with experimental data on multiple components of an attention-based hierarchical model of cortical object recognition, including feature coding, salience detection, and foveation. This model exploits arbitrary and reconfigurable connectivity between cells in the multichip architecture, achieved by asynchronously routing neural spike events within and between chips according to a memory-based look-up table. Synaptic parameters, including conductance and reversal potential, are also stored in memory and are used to dynamically configure synapse circuits within the silicon neurons.  相似文献   
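Conceptually, the memory-based look-up table routes each address-event to a list of target synapses whose parameters (conductance, reversal potential) are also read from memory. The sketch below illustrates that routing step in software; the table layout, the field names, and the numbers are assumptions for illustration, not the chip's actual memory format.

```python
from dataclasses import dataclass

@dataclass
class Synapse:
    # Illustrative per-synapse parameters stored in memory (assumed fields).
    target_neuron: int
    conductance: float         # configures the synapse circuit on delivery
    reversal_potential: float  # mV

# Memory-based look-up table: source neuron address -> its synaptic targets.
routing_table = {
    0: [Synapse(100, 0.5, 0.0), Synapse(101, 0.2, -70.0)],
    1: [Synapse(100, 0.3, 0.0)],
}

def route_spike(source_address, deliver):
    """Fan one address-event out to every synapse listed for that source."""
    for syn in routing_table.get(source_address, []):
        deliver(syn)

# Example: print each delivered event instead of driving silicon neurons.
route_spike(0, lambda s: print(f"deliver to neuron {s.target_neuron}: "
                               f"g={s.conductance}, E_rev={s.reversal_potential} mV"))
```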

2.
Neuron-synapse IC chip-set for large-scale chaotic neural networks.   Total citations: 1 (self-citations: 0, citations by others: 1)
We propose a neuron-synapse integrated circuit (IC) chip-set for large-scale chaotic neural networks. We use switched-capacitor (SC) circuit techniques to implement a three-internal-state transiently chaotic neural network model. The SC chaotic neuron chip faithfully reproduces complex chaotic dynamics in real numbers through the continuous state variables of the analog circuitry. Most of the model parameters can be controlled digitally by means of programmable capacitive arrays embedded in the SC chaotic neuron chip. Since the output of the neuron is transferred into a digital pulse according to the all-or-nothing property of an axon, we design a synapse chip with digital circuits. We propose a memory-based synapse circuit architecture to achieve rapid calculation of a vast number of weighted summations. Both the SC neuron and the digital synapse circuits have been fabricated as ICs. We have tested these IC chips extensively and confirmed the functions and performance of the chip-set. The proposed neuron-synapse IC chip-set makes it possible to construct a scalable and reconfigurable large-scale chaotic neural network with 10,000 neurons and 10,000^2 synaptic connections.

3.
Stochastic dynamics of a finite-size spiking neural network   Total citations: 4 (self-citations: 0, citations by others: 4)
Soula H, Chow CC. Neural Computation, 2007, 19(12): 3262-3292
We present a simple Markov model of spiking neural dynamics that can be analytically solved to characterize the stochastic dynamics of a finite-size spiking neural network. We give closed-form estimates for the equilibrium distribution, mean rate, variance, and autocorrelation function of the network activity. The model is applicable to any network where the probability of firing of a neuron in the network depends on only the number of neurons that fired in a previous temporal epoch. Networks with statistically homogeneous connectivity and membrane and synaptic time constants that are not excessively long could satisfy these conditions. Our model completely accounts for the size of the network and correlations in the firing activity. It also allows us to examine how the network dynamics can deviate from mean field theory. We show that the model and solutions are applicable to spiking neural networks in biophysically plausible parameter regimes.  相似文献   
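A small numerical sketch of this kind of Markov dynamics, in which the only state is the number of neurons that fired in the previous epoch, is shown below; the sigmoidal gain function and all parameter values are assumptions for illustration, not the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200          # network size
w = 0.01         # effective coupling: drive per previously active neuron
bias = -1.0      # background drive
steps = 2000

def firing_prob(n_prev):
    # Probability that a given neuron fires, depending only on how many
    # neurons fired in the previous temporal epoch (sigmoidal gain, assumed).
    return 1.0 / (1.0 + np.exp(-(bias + w * n_prev)))

activity = np.zeros(steps, dtype=int)
n = 0
for t in range(steps):
    # Each of the N neurons fires independently with the same probability,
    # so the next count is binomially distributed: a one-step Markov chain.
    n = rng.binomial(N, firing_prob(n))
    activity[t] = n

print("mean rate per neuron:", activity.mean() / N)
print("variance of the count:", activity.var())
```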

4.
Compared with the first and second generations of neural networks, the third-generation spiking neural network (SNN) is a model much closer to biological neural networks and therefore offers better biological interpretability and lower power consumption. Built on spiking neuron models, an SNN simulates the propagation of biological signals through a network in the form of spikes: spikes are emitted as the membrane potential of a spiking neuron evolves, and the resulting spike trains convey both spatial and temporal information through a joint spatio-temporal representation. The performance of current SNN models on pattern-recognition tasks still lags behind deep learning, and one important reason is that learning methods for SNNs remain immature. The artificial neurons used in deep learning produce real-valued outputs, which allows deep networks to be trained with the global backpropagation algorithm, whereas spike trains are binary, discrete outputs, which makes training SNNs inherently difficult; how to train SNNs efficiently is therefore a challenging research problem. This paper first surveys the learning algorithms in the SNN field, then analyzes and introduces the main approaches (direct supervised learning, unsupervised learning, and ANN2SNN conversion algorithms) and compares representative works for each, and finally, based on this summary of current mainstream methods, discusses the outlook for more efficient and more biologically faithful parameter-learning methods for spiking neural networks.
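To make the contrast with real-valued artificial neurons concrete, the sketch below runs a minimal discrete-time leaky integrate-and-fire (LIF) layer whose output is a binary spike train, which is precisely what makes direct backpropagation awkward; the parameter values are arbitrary and the code is not tied to any particular method from the surveyed literature.

```python
import numpy as np

def lif_layer(inputs, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """Run a layer of LIF neurons over a [T, n] input-current array and
    return the binary spike train of shape [T, n]."""
    T, n = inputs.shape
    v = np.zeros(n)
    spikes = np.zeros((T, n), dtype=np.int8)
    decay = np.exp(-dt / tau)
    for t in range(T):
        v = decay * v + inputs[t]          # leaky integration of input current
        fired = v >= v_th                  # all-or-nothing threshold crossing
        spikes[t] = fired
        v = np.where(fired, v_reset, v)    # hard reset after a spike
    return spikes

rng = np.random.default_rng(1)
s = lif_layer(rng.uniform(0.0, 0.3, size=(100, 5)))
print("spike counts per neuron:", s.sum(axis=0))
```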

5.
Real-time computing platform for spiking neurons (RT-spike)   Total citations: 1 (self-citations: 0, citations by others: 1)
A computing platform is described for simulating arbitrary networks of spiking neurons in real time. A hybrid computing scheme is adopted that uses both software and hardware components to manage the tradeoff between flexibility and computational power; the neuron model is implemented in hardware and the network model and the learning are implemented in software. The incremental transition of the software components into hardware is supported. We focus on a spike response model (SRM) for a neuron where the synapses are modeled as input-driven conductances. The temporal dynamics of the synaptic integration process are modeled with a synaptic time constant that results in a gradual injection of charge. This type of model is computationally expensive and is not easily amenable to existing software-based event-driven approaches. As an alternative we have designed an efficient time-based computing architecture in hardware, where the different stages of the neuron model are processed in parallel. Further improvements occur by computing multiple neurons in parallel using multiple processing units. This design is tested using reconfigurable hardware and its scalability and performance evaluated. Our overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops, and so we evaluate the performance of the system on simulating a model of the cerebellum where the emulation of the temporal dynamics of the synaptic integration process is important.  相似文献   
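The cost discussed here stems from modelling each synapse as a conductance with its own time constant, so charge is injected gradually rather than as an instantaneous jump. A minimal time-driven sketch of that synaptic integration is shown below; the exponential conductance kernel and all parameter values are assumptions for illustration, not the RT-spike hardware model.

```python
import numpy as np

dt, tau_syn, tau_m = 0.1, 5.0, 20.0      # ms
E_syn, E_leak = 0.0, -70.0               # reversal and leak potentials (mV)
g_peak, v_th, v_reset = 0.05, -54.0, -70.0

T = 2000
pre_spikes = np.zeros(T, dtype=bool)
pre_spikes[::50] = True                  # a regular presynaptic spike train

g, v = 0.0, E_leak
for t in range(T):
    if pre_spikes[t]:
        g += g_peak                      # a spike increments the conductance...
    g *= np.exp(-dt / tau_syn)           # ...which decays with tau_syn, so the
                                         # charge is injected gradually over time
    v += (-(v - E_leak) + g * (E_syn - v)) * dt / tau_m
    if v >= v_th:                        # threshold crossing and reset
        v = v_reset

print("membrane potential after", T * dt, "ms:", round(v, 2), "mV")
```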

6.
In this paper, we describe a new Synaptic Plasticity Activity Rule (SAPR) developed for use in networks of spiking neurons. Such networks can be used for simulations of physiological experiments as well as for other computations like image analysis. Most synaptic plasticity rules use artificially defined functions to modify synaptic connection strengths. In contrast, our rule makes use of the existing postsynaptic potential values to compute the value of adjustment. The network of spiking neurons we consider consists of excitatory and inhibitory neurons. Each neuron is implemented as an integrate-and-fire model that accurately mimics the behavior of biological neurons. To test performance of our new plasticity rule we designed a model of a biologically-inspired signal processing system, and used it for object detection in eye images of diabetic retinopathy patients, and lung images of cystic fibrosis patients. The results show that the network detects the edges of objects within an image, essentially segmenting it. Our ultimate goal, however, is not the development of an image segmentation tool that would be more efficient than nonbiological algorithms, but developing a physiologically correct neural network model that could be applied to a wide range of neurological experiments. We decided to validate the SAPR by using it in a network of spiking neurons for image segmentation because it is easy to visually assess the results. An important thing is that image segmentation is done in an entirely unsupervised way.  相似文献   

7.
We explore homeostasis in a silicon integrate-and-fire neuron. The neuron adapts its firing rate over time periods on the order of seconds or minutes so that it returns to its spontaneous firing rate after a sustained perturbation. Homeostasis is implemented via two schemes. One scheme looks at the presynaptic activity and adapts the synaptic weight depending on the presynaptic spiking rate. The second scheme adapts the synaptic "threshold" depending on the neuron's activity. The threshold is lowered if the neuron's activity decreases over a long time and is increased for a prolonged increase in postsynaptic activity. The presynaptic adaptation mechanism models the contrast adaptation responses observed in simple cortical cells. To obtain the long adaptation timescales we require, we used floating gates; otherwise, the capacitors we would have to use would be of such a size that we could not integrate them, and so we could not incorporate such long-time adaptation mechanisms into a very large-scale integration (VLSI) network of neurons. The circuits for the adaptation mechanisms have been implemented in a 2-μm double-poly CMOS process with a bipolar option. The results shown here are measured from a chip fabricated in this process.
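The second scheme, slowly raising or lowering the threshold to pull activity back toward the spontaneous rate, can be summarized behaviorally in a few lines. This is a software caricature of the adaptation loop, not the floating-gate circuit; the rate estimator and all constants are assumed.

```python
import numpy as np

rng = np.random.default_rng(2)

target_rate = 0.05      # desired spontaneous firing probability per step
eta = 1e-3              # slow adaptation rate (long timescale)
tau_est = 200.0         # steps over which activity is averaged

theta = 1.0             # adjustable "threshold"
rate_est = target_rate
drive = 0.8             # baseline input; raised later as a sustained perturbation

for t in range(20000):
    if t == 5000:
        drive = 1.3                       # sustained perturbation
    fired = drive + 0.3 * rng.standard_normal() > theta
    rate_est += (float(fired) - rate_est) / tau_est
    # Raise the threshold when activity stays high, lower it when activity
    # stays low, pulling the neuron back toward its spontaneous rate.
    theta += eta * (rate_est - target_rate)

print("adapted threshold:", round(theta, 3), " estimated rate:", round(rate_est, 3))
```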

8.
Adaptive WTA With an Analog VLSI Neuromorphic Learning Chip   Total citations: 1 (self-citations: 0, citations by others: 1)
In this paper, we demonstrate how a particular spike-based learning rule (where exact temporal relations between input and output spikes of a spiking model neuron determine the changes of the synaptic weights) can be tuned to express rate-based classical Hebbian learning behavior (where the average input and output spike rates are sufficient to describe the synaptic changes). This shift in behavior is controlled by the input statistic and by a single time constant. The learning rule has been implemented in a neuromorphic very large scale integration (VLSI) chip as part of a neurally inspired spike signal image processing system. The latter is the result of the European Union research project Convolution AER Vision Architecture for Real-Time (CAVIAR). Since it is implemented as a spike-based learning rule (which is most convenient in the overall spike-based system), even if it is tuned to show rate behavior, no explicit long term average signals are computed on the chip. We show the rule's rate-based Hebbian learning ability in a classification task in both simulation and chip experiment, first with artificial stimuli and then with sensor input from the CAVIAR system  相似文献   
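As an illustration of how a spike-timing rule can average out to a rate-based Hebbian update, the toy simulation below applies a generic pair-based STDP rule to independent Poisson spike trains; the accumulated weight change then scales with the product of the input and output rates. The exponential STDP window and every parameter are textbook defaults, not the CAVIAR chip's learning rule.

```python
import numpy as np

rng = np.random.default_rng(3)

dt, T = 1.0, 20000            # ms
tau = 20.0                    # STDP time constant (the single time constant
                              # that controls the shift toward rate behavior)
a_plus, a_minus = 0.0105, 0.01

def run(pre_rate, post_rate):
    """Accumulate pair-based STDP weight change for independent Poisson trains."""
    pre = rng.random(T) < pre_rate * dt / 1000.0
    post = rng.random(T) < post_rate * dt / 1000.0
    x_pre = x_post = 0.0      # exponentially decaying traces of recent spikes
    dw = 0.0
    for t in range(T):
        x_pre *= np.exp(-dt / tau)
        x_post *= np.exp(-dt / tau)
        if pre[t]:
            x_pre += 1.0
            dw -= a_minus * x_post   # pre after post -> depression
        if post[t]:
            x_post += 1.0
            dw += a_plus * x_pre     # post after pre -> potentiation
    return dw

# With uncorrelated Poisson trains the expected change scales with the product
# of the two rates, i.e. it behaves like a rate-based Hebbian term.
for rates in [(10, 10), (40, 40), (80, 80)]:
    print(rates, "Hz -> dw =", round(run(*rates), 4))
```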

9.
Síma J, Sgall J. Neural Computation, 2005, 17(12): 2635-2647
We study the computational complexity of training a single spiking neuron N with binary coded inputs and output that, in addition to adaptive weights and a threshold, has adjustable synaptic delays. A synchronization technique is introduced so that the results concerning the nonlearnability of spiking neurons with binary delays are generalized to arbitrary real-valued delays. In particular, the consistency problem for N with programmable weights, a threshold, and delays, and its approximation version are proven to be NP-complete. It follows that the spiking neurons with arbitrary synaptic delays are not properly PAC learnable and do not allow robust learning unless RP = NP. In addition, the representation problem for N, a question whether an n-variable Boolean function given in DNF (or as a disjunction of O(n) threshold gates) can be computed by a spiking neuron, is shown to be coNP-hard.  相似文献   

10.
The cerebellum's regulation of movement and its adaptation to the environment are key to the fast, precise movements performed by the human body, so simulating and studying the operating mechanisms of the cerebellum can provide more effective methods for controlling complex, variable robot models. To this end, a large-scale cerebellar spiking neural network model is constructed that follows the true biological ratios of neuron numbers and mimics the real structure, information-transmission pathways, and learning mechanisms of the cerebellum, and it is used for error-correcting control of a robotic arm. Based on the control results obtained under different control tasks, the influence of different forms of synaptic plasticity on the control performance of the cerebellar network is characterized. To further increase the biological realism of the cerebellar control system, the model is implemented on a field programmable gate array (FPGA) platform, whose parallel computation is closer to the way the brain operates, and the corresponding resources are optimized to enlarge the achievable network size. The FPGA results show that the system can successfully perform adaptive, brain-inspired robotic-arm control based on the cerebellum's error-correction function, verifying the realistic cellular dynamics of the cerebellum and the high fault tolerance provided by the large-scale granular layer, and providing a platform that supports both applications of cerebellar function and theoretical research.
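As a toy illustration of the error-correction principle only (a climbing-fiber-like error signal adjusting parallel-fiber weights onto a Purkinje-like output), a minimal rate-based sketch is given below; it is not the authors' spiking cerebellar model, and the granule-layer basis, the target command, and the learning rate are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(4)

n_granule = 200                 # granule-like basis expansion of the arm state
w = np.zeros(n_granule)         # parallel-fiber -> Purkinje weights
centers = np.linspace(-1, 1, n_granule)
lr = 0.05

def granule_activity(x):
    # Radial-basis activity standing in for the granular-layer expansion.
    return np.exp(-((x - centers) ** 2) / 0.01)

def desired_torque(x):
    return np.sin(np.pi * x)    # target motor command (assumed toy plant)

for trial in range(3000):
    x = rng.uniform(-1, 1)               # current arm state (toy)
    g = granule_activity(x)
    command = w @ g                      # Purkinje-like output
    error = desired_torque(x) - command  # climbing-fiber-like teaching signal
    w += lr * error * g                  # error-driven synaptic update

test = np.linspace(-1, 1, 5)
print("residual errors:", [round(w @ granule_activity(x) - desired_torque(x), 3)
                           for x in test])
```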

11.
A simulation procedure is described for making feasible large-scale simulations of recurrent neural networks of spiking neurons and plastic synapses. The procedure is applicable if the dynamic variables of both neurons and synapses evolve deterministically between any two successive spikes. Spikes introduce jumps in these variables, and since spike trains are typically noisy, spikes introduce stochasticity into both dynamics. Since all events in the simulation are guided by the arrival of spikes, at neurons or synapses, we name this procedure event-driven. The procedure is described in detail, and its logic and performance are compared with conventional (synchronous) simulations. The main impact of the new approach is a drastic reduction of the computational load incurred upon introduction of dynamic synaptic efficacies, which vary organically as a function of the activities of the pre- and postsynaptic neurons. In fact, the computational load per neuron in the presence of the synaptic dynamics grows linearly with the number of neurons and is only about 6% more than the load with fixed synapses. Even the latter is handled quite efficiently by the algorithm. We illustrate the operation of the algorithm in a specific case with integrate-and-fire neurons and specific spike-driven synaptic dynamics. Both dynamical elements have been found to be naturally implementable in VLSI. This network is simulated to show the effects on the synaptic structure of the presentation of stimuli, as well as the stability of the generated matrix to the neural activity it induces.  相似文献   
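The bookkeeping behind an event-driven simulation (advance a neuron's state only when a spike reaches it, using the closed-form decay since its previous event, and schedule the spikes it emits as future events) can be sketched with a priority queue. This is a schematic illustration, not the authors' algorithm; the LIF decay, the weights, the delays, and the seed events are assumptions.

```python
import heapq
import math

tau_m, v_th, v_reset, delay = 20.0, 1.0, 0.0, 1.5    # ms

# A tiny recurrent ring network: neuron -> list of (target, synaptic weight).
connections = {0: [(1, 1.1)], 1: [(2, 1.1)], 2: [(0, 1.1)]}
v = {i: 0.0 for i in connections}
last_event = {i: 0.0 for i in connections}

# The event queue holds (delivery_time, target, weight); all simulation work
# is triggered by popping these spike-delivery events in time order.
events = [(0.0, 0, 1.2), (0.2, 1, 1.2)]               # external seed spikes
heapq.heapify(events)
spikes = []

while events and len(spikes) < 12:
    t, n, w = heapq.heappop(events)
    # Closed-form exponential decay of the membrane since the last event:
    v[n] = v[n] * math.exp(-(t - last_event[n]) / tau_m) + w
    last_event[n] = t
    if v[n] >= v_th:
        spikes.append((round(t, 2), n))
        v[n] = v_reset
        for target, weight in connections[n]:
            heapq.heappush(events, (t + delay, target, weight))

print("spike times and neurons:", spikes)
```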

12.
13.
A previously developed method for efficiently simulating complex networks of integrate-and-fire neurons was specialized to the case in which the neurons have fast unitary postsynaptic conductances. However, inhibitory synaptic conductances are often slower than excitatory ones for cortical neurons, and this difference can have a profound effect on network dynamics that cannot be captured with neurons that have only fast synapses. We thus extend the model to include slow inhibitory synapses. In this model, neurons are grouped into large populations of similar neurons. For each population, we calculate the evolution of a probability density function (PDF), which describes the distribution of neurons over state-space. The population firing rate is given by the flux of probability across the threshold voltage for firing an action potential. In the case of fast synaptic conductances, the PDF was one-dimensional, as the state of a neuron was completely determined by its transmembrane voltage. An exact extension to slow inhibitory synapses increases the dimension of the PDF to two or three, as the state of a neuron now includes the state of its inhibitory synaptic conductance. However, by assuming that the expected value of a neuron's inhibitory conductance is independent of its voltage, we derive a reduction to a one-dimensional PDF and avoid increasing the computational complexity of the problem. We demonstrate that although this assumption is not strictly valid, the results of the reduced model are surprisingly accurate.  相似文献   
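In generic form (not the paper's exact equations), the population-density approach evolves a density of neurons over voltage and reads out the population firing rate as the probability flux through the firing threshold:

```latex
\frac{\partial \rho(v,t)}{\partial t} = -\frac{\partial J(v,t)}{\partial v},
\qquad
r(t) = J(V_{\mathrm{th}}, t).
```

The extension described above enlarges the state to (v, g) when slow inhibitory conductances are included; the proposed reduction keeps a one-dimensional density in v by tracking only the expected inhibitory conductance, assumed independent of v.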

14.
An analog CMOS chip set for implementations of artificial neural networks (ANNs) has been fabricated and tested. The chip set consists of two cascadable chips: a neuron chip and a synapse chip. Neurons on the neuron chips can be interconnected at random via synapses on the synapse chips, thus implementing an ANN with arbitrary topology. The neuron test chip contains an array of 4 neurons with well-defined hyperbolic tangent activation functions, which are implemented using parasitic lateral bipolar transistors. The synapse test chip is a cascadable 4x4 matrix-vector multiplier with variable, 10-b resolution matrix elements. The propagation delay of the test chips was measured to be 2.6 μs per layer.
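Functionally, the cascadable pair computes a standard layer operation: a matrix-vector product on the synapse chip followed by a hyperbolic-tangent activation on the neuron chip. A short software sketch of that computation, with weights quantized to mirror the chip's 10-bit resolution (the scaling convention is an assumption):

```python
import numpy as np

rng = np.random.default_rng(5)

def quantize(w, bits=10, w_max=1.0):
    # Emulate the 10-bit programmable weight resolution of the synapse chip.
    levels = 2 ** (bits - 1) - 1
    return np.round(np.clip(w, -w_max, w_max) * levels) / levels

W = quantize(rng.uniform(-1, 1, size=(4, 4)))   # 4x4 synapse matrix
x = rng.uniform(-1, 1, size=4)                  # input vector

y = np.tanh(W @ x)    # synapse chip: W @ x; neuron chip: tanh activation
print(np.round(y, 3))
```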

15.
16.
The speed and reliability of mammalian perception indicate that cortical computations can rely on very few action potentials per involved neuron. Together with the stochasticity of single-spike events in cortex, this appears to imply that large populations of redundant neurons are needed for rapid computations with action potentials. Here we demonstrate that very fast and precise computations can be realized also in small networks of stochastically spiking neurons. We present a generative network model for which we derive biologically plausible algorithms that perform spike-by-spike updates of the neuron's internal states and adaptation of its synaptic weights from maximizing the likelihood of the observed spike patterns. Paradigmatic computational tasks demonstrate the online performance and learning efficiency of our framework. The potential relevance of our approach as a model for cortical computation is discussed.  相似文献   

17.
The simulation of spiking neural networks (SNNs) is known to be a very time-consuming task. This limits the size of SNN that can be simulated in reasonable time or forces users to overly limit the complexity of the neuron models. This is one of the driving forces behind much of the recent research on event-driven simulation strategies. Although event-driven simulation allows precise and efficient simulation of certain spiking neuron models, it is not straightforward to generalize the technique to more complex neuron models, mostly because the firing time of these neuron models is computationally expensive to evaluate. Most solutions proposed in literature concentrate on algorithms that can solve this problem efficiently. However, these solutions do not scale well when more state variables are involved in the neuron model, which is, for example, the case when multiple synaptic time constants for each neuron are used. In this letter, we show that an exact prediction of the firing time is not required in order to guarantee exact simulation results. Several techniques are presented that try to do the least possible amount of work to predict the firing times. We propose an elegant algorithm for the simulation of leaky integrate-and-fire (LIF) neurons with an arbitrary number of (unconstrained) synaptic time constants, which is able to combine these algorithmic techniques efficiently, resulting in very high simulation speed. Moreover, our algorithm is highly independent of the complexity (i.e., number of synaptic time constants) of the underlying neuron model.  相似文献   
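Between input events, a LIF membrane driven through several synaptic time constants is just a sum of decaying exponentials, which is why its threshold-crossing time has no simple closed form; it can nevertheless be bracketed cheaply and refined only as far as needed. The sketch below illustrates that bracket-and-bisect idea; it is not the letter's algorithm, and the coefficients and constants are assumed.

```python
import numpy as np

# Between events the potential is a sum of decaying exponentials: one term
# per synaptic time constant plus the membrane term (coefficients assumed).
taus = np.array([2.0, 10.0, 30.0, 80.0])   # ms
coef = np.array([-1.5, 0.5, 0.4, 0.3])
v_th = 0.5

def v(t):
    return float(np.sum(coef * np.exp(-t / taus)))

def next_firing_time(t_max=200.0, tol=1e-6):
    """Bracket the first upward threshold crossing on a coarse grid, then
    refine by bisection; return None if no crossing occurs before t_max."""
    grid = np.linspace(0.0, t_max, 400)
    vals = np.array([v(t) for t in grid])
    idx = np.where((vals[:-1] < v_th) & (vals[1:] >= v_th))[0]
    if len(idx) == 0:
        return None
    lo, hi = grid[idx[0]], grid[idx[0] + 1]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if v(mid) < v_th else (lo, mid)
    return 0.5 * (lo + hi)

print("predicted threshold crossing at t =", round(next_firing_time(), 4), "ms")
```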

18.
We study how the location of synaptic input influences the stable firing states in coupled model neurons bursting rhythmically at gamma frequencies (20-70 Hz). The model neuron consists of two compartments and generates one, two, three, or four spikes in each burst depending on the intensity of the input current and the maximum conductance of the M-type potassium current. If the somata are connected by reciprocal excitatory synapses, we find strong correlations between the changes in the bursting mode and those in the stable phase-locked states of the coupled neurons. The stability of the in-phase phase-locked state (synchronously firing state) tends to change when the individual neurons change their bursting patterns. If, however, the synaptic connections terminate on the dendritic compartments, no such correlated changes occur. In this case, the coupled bursting neurons do not show the in-phase phase-locked state in any bursting mode. These results indicate that the synchronization behaviour of bursting neurons depends significantly on the synaptic location, unlike a coupled system of regular spiking neurons.

19.
The design of a chaotic neuron model is proposed and implemented in a CMOS very large scale integration (VLSI) chip. The transfer function of the neuron is defined as a piecewise-linear (PWL) N-shaped function. In this paper, the new concept of the baseline function is introduced: it is the mapping of the neuron state to the neuron output and is used to control the chaotic behavior of collective neurons. The chaotic behavior is analyzed and verified by Lyapunov exponents. An analog CMOS chip was designed to implement the theory and was fabricated through the MOSIS program. Measurement results from the fabricated chip are presented.
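To make the role of the Lyapunov exponent concrete, the toy code below iterates a piecewise-linear N-shaped map and estimates the exponent as the average log slope along the trajectory; the breakpoints and slopes are invented for illustration and are not the chip's measured transfer function.

```python
import numpy as np

def n_map(x):
    # An illustrative continuous PWL N-shaped map on [-1, 1]:
    # rising branch, falling middle branch, rising branch.
    if x < -0.25:
        return 1.5 * x + 1.125
    elif x <= 0.25:
        return -3.0 * x
    else:
        return 1.5 * x - 1.125

def slope(x):
    return -3.0 if -0.25 <= x <= 0.25 else 1.5

x, acc, n_iter = 0.11, 0.0, 10000
for _ in range(n_iter):
    acc += np.log(abs(slope(x)))   # log of the local stretching factor
    x = n_map(x)

# A positive average indicates exponential divergence of nearby states (chaos).
print("Lyapunov exponent estimate:", round(acc / n_iter, 3))
```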

20.
Neurological disorders affect millions of people, impairing their cognitive and/or motor capabilities. The realization of a prosthesis must take into account the biological activity of the cells and the connection between the machine and biological cells. Biomimetic neural networks are one possible answer to neurological diseases. Replacing a neuron requires reproducing the timing and the shape of its spikes. Several mathematical models of neural activity exist; the most biologically plausible one is the Hodgkin-Huxley (HH) model. Connecting electrical devices to living cells requires a tunable real-time system, and the field programmable gate array (FPGA) is an attractive component offering flexibility, speed, and stability. Here, we propose an implementation of HH neurons on an FPGA as a precursor to a modular network, opening up a large range of possibilities such as replacing damaged cells and studying the effect of diseased cells on the neural network.
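For reference, the Hodgkin-Huxley model mentioned here consists of a membrane equation plus three gating variables; a compact fixed-step Euler integration (the kind of update an FPGA pipeline would evaluate step by step) is sketched below with the classic squid-axon parameters. This is a generic HH simulation, not the authors' FPGA design.

```python
import numpy as np

# Classic Hodgkin-Huxley parameters (mV, ms, mS/cm^2, uA/cm^2).
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.387

def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))

dt, T, I_ext = 0.01, 50.0, 10.0          # step (ms), duration (ms), step current
steps = int(T / dt)
V, m, h, n = -65.0, 0.05, 0.6, 0.32
spike_count, above = 0, False

for _ in range(steps):
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K = g_K * n**4 * (V - E_K)
    I_L = g_L * (V - E_L)
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m        # membrane equation
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)  # gating kinetics
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    if V > 0.0 and not above:                         # count upward 0-mV crossings
        spike_count += 1
    above = V > 0.0

print("spikes in", T, "ms:", spike_count)
```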
