Similar Documents
20 similar documents found.
1.
Polychronization: computation with spikes
We present a minimal spiking network that can polychronize, that is, exhibit reproducible time-locked but not synchronous firing patterns with millisecond precision, as in synfire braids. The network consists of cortical spiking neurons with axonal conduction delays and spike-timing-dependent plasticity (STDP); ready-to-use MATLAB code is included. It exhibits sleeplike oscillations, gamma (40 Hz) rhythms, conversion of firing rates to spike timings, and other interesting regimes. Due to the interplay between the delays and STDP, the spiking neurons spontaneously self-organize into groups and generate patterns of stereotypical polychronous activity. To our surprise, the number of coexisting polychronous groups far exceeds the number of neurons in the network, resulting in an unprecedented memory capacity of the system. We speculate on the significance of polychrony to the theory of neuronal group selection (TNGS, neural Darwinism), cognitive neural computations, binding and gamma rhythm, mechanisms of attention, and consciousness as "attention to memories."
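The core mechanism behind polychronization — heterogeneous axonal conduction delays turning time-locked but *asynchronous* firing into *synchronous* arrival at a target neuron — can be illustrated in a few lines. A minimal sketch (the function name and delay values are illustrative, not from the paper):

```python
def arrival_times(spike_times, delays):
    """Time at which each source neuron's spike reaches a common target,
    given per-axon conduction delays (all times in ms)."""
    return [t + d for t, d in zip(spike_times, delays)]

# Three neurons project to the same target with delays of 1, 5 and 9 ms.
# Firing synchronously, their spikes arrive spread out in time:
print(arrival_times([0, 0, 0], [1, 5, 9]))   # -> [1, 5, 9]

# Firing in a time-locked but asynchronous (polychronous) pattern,
# the spikes arrive simultaneously and can drive the target to fire:
print(arrival_times([9, 5, 1], [1, 5, 9]))   # -> [10, 10, 10]
```

A polychronous group is just a chain of such delay-matched firing patterns, which is why the number of groups can greatly exceed the number of neurons.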

2.
Spike timing-dependent plasticity (STDP) is a learning rule that modifies the strength of a neuron's synapses as a function of the precise temporal relations between input and output spikes. In many brain areas, temporal aspects of spike trains have been found to be highly reproducible. How will STDP affect a neuron's behavior when it is repeatedly presented with the same input spike pattern? We show in this theoretical study that repeated inputs systematically lead to a shaping of the neuron's selectivity, emphasizing its very first input spikes while steadily decreasing the postsynaptic response latency. This was obtained under various conditions of background noise, and even under conditions where spiking latencies and firing rates, or synchrony, provided conflicting information. The key role of first spikes demonstrated here provides further support for models using a single wave of spikes to implement rapid neural processing.
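The STDP rule discussed above is commonly modeled with exponential learning windows: pre-before-post pairings potentiate, post-before-pre pairings depress. A minimal sketch (the parameter values are typical textbook choices, not taken from this study):

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a pre/post spike pair with dt = t_post - t_pre (ms).
    Pre-before-post (dt >= 0) potentiates; post-before-pre (dt < 0) depresses."""
    if dt >= 0:
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)
```

Under repeated presentation of the same pattern, the synapses whose spikes consistently arrive just before the output spike receive the largest potentiation — which is one way to see why, as the abstract argues, selectivity concentrates on the earliest input spikes and the response latency shrinks.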

3.
程龙  刘洋 《控制与决策》2018,33(5):923-937
Spiking neural networks are currently the most biologically interpretable artificial neural networks and a core component of brain-inspired intelligence. This survey first introduces the commonly used spiking neuron models and the feedforward and recurrent spiking network architectures; it then describes the temporal coding schemes of spiking neural networks and, on that basis, systematically reviews their learning algorithms, both unsupervised and supervised, with the supervised algorithms presented and summarized in three categories: gradient-descent-based methods, methods incorporating the STDP rule, and methods based on spike-train convolution kernels. It then surveys applications of spiking neural networks in control, pattern recognition, and brain-inspired intelligence research, and on that basis presents cases from national brain projects in which spiking neural networks are combined with neuromorphic processors. Finally, it analyzes the difficulties and challenges currently facing spiking neural networks.

4.
Florian RV 《Neural computation》2007,19(6):1468-1502
The persistent modification of synaptic efficacy as a function of the relative timing of pre- and postsynaptic spikes is a phenomenon known as spike-timing-dependent plasticity (STDP). Here we show that the modulation of STDP by a global reward signal leads to reinforcement learning. We first derive analytically learning rules involving reward-modulated spike-timing-dependent synaptic and intrinsic plasticity, by applying a reinforcement learning algorithm to the stochastic spike response model of spiking neurons. These rules have several features common to plasticity mechanisms experimentally found in the brain. We then demonstrate in simulations of networks of integrate-and-fire neurons the efficacy of two simple learning rules involving modulated STDP. One rule is a direct extension of the standard STDP model (modulated STDP), and the other involves an eligibility trace stored at each synapse that keeps a decaying memory of the relationships between recent pairs of pre- and postsynaptic spikes (modulated STDP with eligibility trace). This latter rule permits learning even if the reward signal is delayed. The proposed rules are able to solve the XOR problem with both rate-coded and temporally coded input and to learn a target output firing-rate pattern. These learning rules are biologically plausible, may be used for training generic artificial spiking neural networks regardless of the neural model used, and suggest the experimental investigation in animals of the existence of reward-modulated STDP.
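The second rule described above — STDP accumulated into a decaying per-synapse eligibility trace and gated by a global reward — can be sketched as two tiny update functions (function names and constants are illustrative, not the paper's):

```python
import math

def update_eligibility(elig, stdp_increment, dt_ms=1.0, tau_e=500.0):
    """Decay the per-synapse eligibility trace and fold in the latest
    STDP increment computed from recent pre/post spike pairs."""
    return elig * math.exp(-dt_ms / tau_e) + stdp_increment

def apply_reward(w, elig, reward, lr=0.1):
    """Weight update gated by a (possibly delayed) global reward signal:
    no reward, no change; the trace remembers which synapses were 'responsible'."""
    return w + lr * reward * elig
```

Because the trace decays slowly (here over hundreds of milliseconds), a reward arriving well after the causal spike pairing can still reinforce the right synapses — which is how this rule handles delayed reward.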

5.
The spiking neural network is a biologically based network model whose inputs and outputs are spike trains with temporal structure; its operating mechanism is closer to that of biological neural networks than other traditional artificial neural networks. Neurons transmit information to one another through spike trains, and encoding this information in spike firing times allows the network's learning capability to be exploited more effectively. The temporal nature of spiking neurons makes their working mechanism comparatively complex. The sensitivity of a spiking neuron reflects how its output spikes change when the input is perturbed, and can serve as a tool for studying the neuron's internal working mechanism. Unlike in traditional neural networks, the sensitivity of a spiking neuron is defined as the ratio of the number of changed output spike times to the length of the run, which directly reflects how strongly input perturbations affect the output. Analysis of the sensitivity to different forms of input perturbation shows that the sensitivity of spiking neurons is rather complex: when all synapses are perturbed, the neuron's sensitivity is a constant, whereas when only some synapses are perturbed, perturbations of different synapses lead to sensitivities of different magnitudes.
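One plausible reading of the sensitivity definition above — the number of changed output spike times divided by the run length — is the following sketch (entirely an illustrative reconstruction, not the authors' code; in particular, counting via a symmetric set difference is an assumption):

```python
def spike_sensitivity(ref_spikes, perturbed_spikes, t_run):
    """Ratio of output spike times that differ between the unperturbed and
    perturbed runs to the total run length (same time units as t_run)."""
    changed = set(ref_spikes).symmetric_difference(perturbed_spikes)
    return len(changed) / t_run
```

Identical output trains give sensitivity 0; the more spike times shift, appear, or vanish under perturbation, the larger the value.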

6.
The objective of this work is to use a multi-core embedded platform as a computing architecture for neural applications relevant to neuromorphic engineering, e.g., robotics and artificial and spiking neural networks. Recently, it has been shown that spike-timing-dependent plasticity (STDP) can play a key role in pattern recognition: multiple repeating arbitrary spatio-temporal spike patterns hidden in spike trains can be robustly detected and learned by multiple neurons equipped with STDP listening to the incoming spike trains. This paper presents an implementation, on a biological time scale, of an STDP algorithm that localizes repeating spatio-temporal spike patterns on a multi-core embedded platform.

7.
As the shortcomings of deep learning in training cost, generalization, interpretability, and reliability become increasingly prominent, brain-inspired computing has become a research hotspot for next-generation artificial intelligence. Spiking neural networks better mimic the way biological neurons transmit information, offer strong computational power at low energy consumption, and hold significant potential for modeling the complex information processing of the human brain, such as learning, memory, reasoning, judgment, and decision making. This paper surveys spiking neural networks as follows. It first describes their basic structure and working principles. On structural optimization, it summarizes five aspects: coding schemes, improvements to spiking neurons, topologies, training algorithms, and combinations with other algorithms. On training algorithms, it summarizes four aspects: backpropagation-based methods, methods based on the spike-timing-dependent plasticity rule, ANN-to-SNN conversion, and other learning algorithms. It analyzes the shortcomings and development of spiking neural networks from the perspectives of supervised and unsupervised learning. Finally, it discusses the application of spiking neural networks to brain-inspired computing and biomimetic tasks. By systematically summarizing the basic principles, coding schemes, network structures, and training algorithms of spiking neural networks, this paper aims to contribute positively to their further development.

8.
《Advanced Robotics》2013,27(10):1177-1199
A novel integrative learning architecture based on a reinforcement learning schemata model (RLSM) with a spike timing-dependent plasticity (STDP) network is described. This architecture models operant conditioning with discriminative stimuli in an autonomous agent engaged in multiple reinforcement learning tasks. The architecture consists of two constituent learning architectures: RLSM and STDP. RLSM is an incremental modular reinforcement learning architecture, and it makes an autonomous agent acquire several behavioral concepts incrementally through continuous interactions with its environment and/or caregivers. STDP is a learning rule of neuronal plasticity found in cerebral cortices and the hippocampus of the human brain. STDP is a temporally asymmetric learning rule that contrasts with the Hebbian learning rule. We found that STDP enabled an autonomous robot to associate auditory input with its acquired behaviors and to select reinforcement learning modules more effectively. Auditory signals interpreted based on the acquired behaviors were revealed to correspond to 'signs' of required behaviors and incoming situations. This integrative learning architecture was evaluated in the context of on-line modular learning.

9.
Spiking neurons are very flexible computational modules, which can implement with different values of their adjustable synaptic parameters an enormous variety of different transformations F from input spike trains to output spike trains. We examine in this letter the question to what extent a spiking neuron with biologically realistic models for dynamic synapses can be taught via spike-timing-dependent plasticity (STDP) to implement a given transformation F. We consider a supervised learning paradigm where during training, the output of the neuron is clamped to the target signal (teacher forcing). The well-known perceptron convergence theorem asserts the convergence of a simple supervised learning algorithm for drastically simplified neuron models (McCulloch-Pitts neurons). We show that in contrast to the perceptron convergence theorem, no theoretical guarantee can be given for the convergence of STDP with teacher forcing that holds for arbitrary input spike patterns. On the other hand, we prove that average-case versions of the perceptron convergence theorem hold for STDP in the case of uncorrelated and correlated Poisson input spike trains and simple models for spiking neurons. For a wide class of cross-correlation functions of the input spike trains, the resulting necessary and sufficient condition can be formulated in terms of linear separability, analogous to the well-known condition of learnability by perceptrons. However, the linear separability criterion has to be applied here to the columns of the correlation matrix of the Poisson input. We demonstrate through extensive computer simulations that the theoretically predicted convergence of STDP with teacher forcing also holds for more realistic models for neurons, dynamic synapses, and more general input distributions.
In addition, we show through computer simulations that these positive learning results hold not only for the common interpretation of STDP, where STDP changes the weights of synapses, but also for a more realistic interpretation suggested by experimental data where STDP modulates the initial release probability of dynamic synapses.

10.
O Araki  K Aihara 《Neural computation》2001,13(12):2799-2822
Although various means of information representation in the cortex have been considered, the fundamental mechanism for such representation is not well understood. The relation between neural network dynamics and properties of information representation needs to be examined. We examined spatial pattern properties of mean firing rates and spatiotemporal spikes in an interconnected spiking neural network model. We found that whereas the spatiotemporal spike patterns are chaotic and unstable, the spatial patterns of mean firing rates (SPMFR) are steady and stable, reflecting the internal structure of synaptic weights. Interestingly, the chaotic instability contributes to fast stabilization of the SPMFR. Findings suggest that there are two types of network dynamics behind neuronal spiking: internally driven dynamics and externally driven dynamics. When the internally driven dynamics dominate, spikes are relatively more chaotic and independent of external inputs; the SPMFR are steady and stable. When the externally driven dynamics dominate, the spiking patterns are relatively more dependent on the spatiotemporal structure of external inputs. These emergent properties of information representation imply that the brain may adopt a dual coding system. Recent experimental data suggest that internally driven and externally driven dynamics coexist and work together in the cortex.

11.
A spiking neural network that learns temporal sequences is described. A sparse code in which individual neurons represent sequences and subsequences enables multiple sequences to be stored without interference. The network is founded on a model of sequence compression in the hippocampus that is robust to variation in sequence element duration and well suited to learn sequences through spike-timing dependent plasticity (STDP). Three additions to the sequence compression model underlie the sparse representation: synapses connecting the neurons of the network that are subject to STDP, a competitive plasticity rule so that neurons specialize to individual sequences, and neural depolarization after spiking so that neurons have a memory. The response to new sequence elements is determined by the neurons that have responded to the previous subsequence, according to the competitively learned synaptic connections. Numerical simulations show that the model can learn sets of intersecting sequences, presented with widely differing frequencies, with elements of varying duration.

12.
This paper presents new findings in the design and application of biologically plausible neural networks based on spiking neuron models, which represent a more plausible model of real biological neurons where time is considered as an important feature for information encoding and processing in the brain. The design approach consists of an evolutionary-strategy-based supervised training algorithm, newly developed by the authors, and the use of different biologically plausible neuronal models. A dynamic synapse (DS) based neuron model, a biologically more detailed model, and the spike response model (SRM) are investigated in order to demonstrate the efficacy of the proposed approach and to further our understanding of the computing capabilities of the nervous system. Unlike the conventional synapse, represented as a static entity with a fixed weight, employed in conventional and SRM-based neural networks, a DS is weightless and its strength changes upon the arrival of incoming input spikes. Therefore, its efficacy depends on the temporal structure of the impinging spike trains. In the proposed approach, the training of the network's free parameters is achieved using an evolutionary strategy where, instead of binary encoding, real values are used to encode the static and DS parameters which underlie the learning process. The results show that spiking neural networks based on both types of synapse are capable of learning non-linearly separable data by means of spatio-temporal encoding. Furthermore, a comparison of the obtained performance with classical neural networks (multi-layer perceptrons) is presented.

13.
Repetitions of precise spike patterns observed both in vivo and in vitro have been reported for more than a decade. Studies on the spike volley (a pulse packet) propagating through a homogeneous feedforward network have demonstrated its capability of generating spike patterns with millisecond fidelity. This model is called the synfire chain and suggests a possible mechanism for generating repeated spike patterns (RSPs). The propagation speed of the pulse packet determines the temporal property of RSPs. However, the relationship between propagation speed and network structure is not well understood. We studied a feedforward network with Mexican-hat connectivity by using the leaky integrate-and-fire neuron model and analyzed the network dynamics with the Fokker-Planck equation. We examined the effect of the spatial pattern of pulse packets on RSPs in the network with multistability. Pulse packets can take spatially uniform or localized shapes in a multistable regime, and they propagate with different speeds. These distinct pulse packets generate RSPs with different timescales, but the order of spikes and the ratios between interspike intervals are preserved. This result indicates that the RSPs can be transformed into the same template pattern through the expanding or contracting operation of the timescale.
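The leaky integrate-and-fire model used in the study above can be simulated with a simple Euler scheme. A minimal sketch (parameter values are conventional textbook choices, not the paper's):

```python
def lif_spike_times(current, dt=0.1, tau_m=20.0, v_rest=-65.0,
                    v_th=-50.0, v_reset=-65.0, r_m=1.0):
    """Euler integration of dv/dt = (-(v - v_rest) + r_m * I) / tau_m.
    `current` is a list of input currents, one per time step of dt ms;
    returns the spike times (ms): whenever v crosses v_th, it resets."""
    v, spikes = v_rest, []
    for i, I in enumerate(current):
        v += dt * (-(v - v_rest) + r_m * I) / tau_m
        if v >= v_th:
            spikes.append(i * dt)
            v = v_reset
    return spikes
```

With a constant drive whose steady-state voltage exceeds the threshold, the neuron fires regularly; a subthreshold drive produces no spikes, which is the basic nonlinearity that shapes pulse-packet propagation in feedforward networks.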

14.
In this paper, we describe the design of an artificial neural network for spatiotemporal pattern recognition and recall. This network has a five-layered architecture and operates in two modes: pattern learning and recognition mode, and pattern recall mode. In pattern learning and recognition mode, the network extracts a set of topologically and temporally correlated features from each spatiotemporal input pattern based on a variation of Kohonen's self-organizing maps. These features are then used to classify the input into categories based on the fuzzy ART network. In the pattern recall mode, the network can reconstruct any of the learned categories when the appropriate category node is excited or probed. The network performance was evaluated via computer simulations of time-varying, two-dimensional and three-dimensional data. The results show that the network is capable of both recognition and recall of spatiotemporal data in an online and self-organized fashion. The network can also classify repeated events in the spatiotemporal input and is robust to noise in the input such as distortions in the spatial and temporal content.

15.
Compared with the first and second generations of neural networks, the third-generation spiking neural network is a model closer to biological neural networks, and is therefore more biologically interpretable and more energy efficient. Built on spiking neuron models, spiking neural networks can simulate the propagation of biological signals through the network in the form of spikes: changes in a spiking neuron's membrane potential trigger the emission of spike trains, whose joint spatiotemporal representation conveys temporal as well as spatial information. The performance of current spiking neural network models on pattern recognition tasks still lags behind deep learning, an important reason being that learning methods for spiking neural networks remain immature. The artificial neurons of deep learning produce real-valued outputs, which allows deep networks to be trained with the global backpropagation algorithm; spike trains are binary, discrete outputs, which makes training spiking neural networks inherently difficult, and how to train them efficiently is a challenging research problem. This paper first summarizes the learning algorithms in the field of spiking neural networks, then analyzes and introduces the main approaches — direct supervised learning, unsupervised learning, and ANN2SNN conversion algorithms — and compares representative works among them. Finally, based on this summary of current mainstream methods, it looks ahead to more efficient and more biologically faithful parameter-learning methods for spiking neural networks.
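The ANN2SNN conversion route mentioned above rests on a simple observation: an integrate-and-fire neuron driven by a constant input fires, over a long enough window, at a rate proportional to that input, approximating a clipped ReLU activation. A minimal sketch (illustrative, not taken from the survey):

```python
def if_firing_rate(x, t_steps=100, v_th=1.0):
    """Firing rate of an integrate-and-fire neuron with soft reset (v -= v_th)
    driven by constant input x over t_steps; approximates min(max(x, 0), 1)."""
    v, spikes = 0.0, 0
    for _ in range(t_steps):
        v += x
        if v >= v_th:
            spikes += 1
            v -= v_th
    return spikes / t_steps
```

Because the rate tracks the input almost linearly on [0, 1], a trained ANN's ReLU activations can be mapped onto spike counts — the basis of conversion-based training, at the cost of long simulation windows to reduce the quantization error.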

16.
Several recent models have proposed the use of precise timing of spikes for cortical computation. Such models rely on growing experimental evidence that neurons in the thalamus as well as many primary sensory cortical areas respond to stimuli with remarkable temporal precision. Models of computation based on spike timing, where the output of the network is a function not only of the input but also of an independently initializable internal state of the network, must, however, satisfy a critical constraint: the dynamics of the network should not be sensitive to initial conditions. We have previously developed an abstract dynamical system for networks of spiking neurons that has allowed us to identify the criterion for the stationary dynamics of a network to be sensitive to initial conditions. Guided by this criterion, we analyzed the dynamics of several recurrent cortical architectures, including one from the orientation selectivity literature. Based on the results, we conclude that under conditions of sustained, Poisson-like, weakly correlated, low to moderate levels of internal activity as found in the cortex, it is unlikely that recurrent cortical networks can robustly generate precise spike trajectories, that is, spatiotemporal patterns of spikes precise to the millisecond timescale.

17.
Few algorithms for supervised training of spiking neural networks exist that can deal with patterns of multiple spikes, and their computational properties are largely unexplored. We demonstrate in a set of simulations that the ReSuMe learning algorithm can successfully be applied to layered neural networks. Input and output patterns are encoded as spike trains of multiple precisely timed spikes, and the network learns to transform the input trains into target output trains. This is done by combining the ReSuMe learning algorithm with multiplicative scaling of the connections of downstream neurons. We show in particular that layered networks with one hidden layer can learn the basic logical operations, including Exclusive-Or, while networks without a hidden layer cannot, mirroring an analogous result for layered networks of rate neurons. While supervised learning in spiking neural networks is not yet fit for technical purposes, exploring computational properties of spiking neural networks advances our understanding of how computations can be done with spike trains.

18.
We introduce and test a system for simulating networks of conductance-based neuron models using analog circuits. At the single-cell level, we use custom-designed analog circuits (ASICs) that simulate two types of spiking neurons based on Hodgkin-Huxley like dynamics: "regular spiking" excitatory neurons with spike-frequency adaptation, and "fast spiking" inhibitory neurons. Synaptic interactions are mediated by conductance-based synaptic currents described by kinetic models. Connectivity and plasticity rules are implemented digitally through a real time interface between a computer and a PCI board containing the ASICs. We show a prototype system of a few neurons interconnected with synapses undergoing spike-timing dependent plasticity (STDP), and compare this system with numerical simulations. We use this system to evaluate the effect of parameter dispersion on the behavior of small circuits of neurons. It is shown that, although the exact spike timings are not precisely emulated by the ASIC neurons, the behavior of small networks with STDP matches that of numerical simulations. Thus, this mixed analog-digital architecture provides a valuable tool for real-time simulations of networks of neurons with STDP. They should be useful for any real-time application, such as hybrid systems interfacing network models with biological neurons.

19.
Hélène  Régis  Samy 《Neurocomputing》2008,71(7-9):1143-1158
We propose a multi-timescale learning rule for spiking neuron networks, in line with the recently emerging field of reservoir computing. The reservoir is a network model of spiking neurons with random topology, driven by STDP (spike-time-dependent plasticity), a biologically observed form of temporal Hebbian unsupervised learning. The model is further driven by a supervised learning algorithm, based on a margin criterion, that adjusts the synaptic delays linking the network to the readout neurons, with classification as the goal task. The network processing and the resulting performance can be explained by the concept of polychronization, proposed by Izhikevich [Polychronization: computation with spikes, Neural Comput. 18(2) (2006) 245–282], on physiological grounds. The model emphasizes that polychronization can be used as a tool for exploiting the computational power of synaptic delays and for monitoring the topology and activity of a spiking neuron network.

20.
Aoki T  Aoyagi T 《Neural computation》2007,19(10):2720-2738
Although context-dependent spike synchronization among populations of neurons has been experimentally observed, its functional role remains controversial. In this modeling study, we demonstrate that in a network of spiking neurons organized according to spike-timing-dependent plasticity, an increase in the degree of synchrony of a uniform input can cause transitions between memorized activity patterns in the order presented during learning. Furthermore, context-dependent transitions from a single pattern to multiple patterns can be induced under appropriate learning conditions. These findings suggest one possible functional role of neuronal synchrony in controlling the flow of information by altering the dynamics of the network.
