Similar References
 20 similar references retrieved (search time: 31 ms)
1.
Spike-timing-dependent synaptic plasticity (STDP), which depends on the temporal difference between pre- and postsynaptic action potentials, is observed in the cortices and hippocampus. Although several theoretical and experimental studies have revealed its fundamental aspects, its functional role remains unclear. To examine how an input spatiotemporal spike pattern is altered by STDP, we observed the output spike patterns of a spiking neural network model with an asymmetrical STDP rule when the input spatiotemporal pattern is repeatedly applied. The spiking neural network comprises excitatory and inhibitory neurons that exhibit local interactions. Numerical experiments show that the spiking neural network generates a single global synchronous event whose relative timing depends on the input spatiotemporal pattern and the neural network structure. This result implies that the spiking neural network learns the transformation from spatiotemporal to temporal information. The origin of synfire chains has received little attention in the literature; our results indicate that spiking neural networks with STDP can ignite synfire chains in the cortices.
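For reference, a minimal Python sketch of the kind of asymmetrical STDP window such a model uses: potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise. The amplitudes and time constants below are illustrative defaults, not values from the paper.

```python
import numpy as np

def asymmetric_stdp(delta_t, a_plus=0.01, a_minus=0.012,
                    tau_plus=20.0, tau_minus=20.0):
    """Weight change for a pre/post spike pair separated by
    delta_t = t_post - t_pre (milliseconds).

    Positive delta_t (pre before post) -> potentiation,
    negative delta_t (post before pre) -> depression.
    """
    delta_t = np.asarray(delta_t, dtype=float)
    return np.where(delta_t >= 0,
                    a_plus * np.exp(-delta_t / tau_plus),
                    -a_minus * np.exp(delta_t / tau_minus))

# A pre spike 5 ms before the post spike is potentiated,
# while the reversed order is depressed.
print(asymmetric_stdp(5.0))    # > 0
print(asymmetric_stdp(-5.0))   # < 0
```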

2.
In timing-based neural codes, neurons have to emit action potentials at precise moments in time. We use a supervised learning paradigm to derive a synaptic update rule that optimizes by gradient ascent the likelihood of postsynaptic firing at one or several desired firing times. We find that the optimal strategy of up- and downregulating synaptic efficacies depends on the relative timing between presynaptic spike arrival and desired postsynaptic firing. If the presynaptic spike arrives before the desired postsynaptic spike timing, our optimal learning rule predicts that the synapse should become potentiated. The dependence of the potentiation on spike timing directly reflects the time course of an excitatory postsynaptic potential. However, our approach gives no unique reason for synaptic depression under reversed spike timing. In fact, the presence and amplitude of depression of synaptic efficacies for reversed spike timing depend on how constraints are implemented in the optimization problem. Two different constraints, control of postsynaptic rates and control of temporal locality, are studied. The relation of our results to spike-timing-dependent plasticity and reinforcement learning is discussed.  相似文献   
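A compact sketch of the gradient-ascent idea described here, assuming an exponential escape-rate (soft-threshold) neuron and an exponential EPSP kernel; the kernel, constants, and discretization are illustrative choices, not the paper's exact formulation. The first term of the gradient reflects the EPSP time course for presynaptic spikes arriving before the desired firing time, while synapses active only afterwards receive just a small negative rate-penalty term.

```python
import numpy as np

def likelihood_gradient(w, pre_times, desired_times, T=200.0, dt=0.1,
                        tau_eps=10.0, rho0=0.01, theta=1.0, delta_u=0.5):
    """Gradient of the log-likelihood of firing at the desired times for a
    simple escape-noise neuron:
        u(t)   = sum_i w_i * eps_i(t),  eps(s) = exp(-s / tau_eps) for s > 0
        rho(t) = rho0 * exp((u(t) - theta) / delta_u)
        log L  = sum_d log rho(t_d) - integral_0^T rho(t) dt
    Times in ms; pre_times is a list of spike-time lists, one per synapse.
    """
    t = np.arange(0.0, T, dt)
    eps = np.array([
        np.where(t[None, :] - np.asarray(pt)[:, None] > 0,
                 np.exp(-(t[None, :] - np.asarray(pt)[:, None]) / tau_eps),
                 0.0).sum(axis=0)
        for pt in pre_times
    ])                                            # shape: (n_synapses, n_time_bins)
    u = w @ eps                                   # membrane potential
    rho = rho0 * np.exp((u - theta) / delta_u)    # instantaneous firing intensity
    d_idx = np.round(np.asarray(desired_times) / dt).astype(int)
    # d logL / d w_i = (1/delta_u) * [ sum_d eps_i(t_d) - integral rho(t) eps_i(t) dt ]
    return (eps[:, d_idx].sum(axis=1) - (eps * rho).sum(axis=1) * dt) / delta_u

# Two synapses: one spikes 5 ms before the desired output time, one 5 ms after.
w = np.array([0.5, 0.5])
grad = likelihood_gradient(w, pre_times=[[95.0], [105.0]], desired_times=[100.0])
print(grad)   # large positive gradient only for the "pre before desired post" synapse
```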

3.
Chechik G. Neural Computation, 2003, 15(7): 1481-1510.
Synaptic plasticity was recently shown to depend on the relative timing of the pre- and postsynaptic spikes. This article analytically derives a spike-dependent learning rule based on the principle of information maximization for a single neuron with spiking inputs. This rule is then transformed into a biologically feasible rule, which is compared to the experimentally observed plasticity. This comparison reveals that the biological rule increases information to a near-optimal level and provides insights into the structure of biological plasticity. It shows that the time dependency of synaptic potentiation should be determined by the synaptic transfer function and membrane leak. Potentiation consists of weight-dependent and weight-independent components whose weights are of the same order of magnitude. It further suggests that synaptic depression should be triggered by rare and relevant inputs but at the same time serves to unlearn the baseline statistics of the network's inputs. The optimal depression curve is uniformly extended in time, but biological constraints that cause the cell to forget past events may lead to a different shape, which is not specified by our current model. The structure of the optimal rule thus suggests a computational account for several temporal characteristics of the biological spike-timing-dependent rules.  相似文献   

4.
The objective of this work is to use a multi-core embedded platform as a computing architecture for neural applications relevant to neuromorphic engineering, e.g., robotics and artificial and spiking neural networks. Recently, it has been shown how spike-timing-dependent plasticity (STDP) can play a key role in pattern recognition. In particular, multiple repeating arbitrary spatio-temporal spike patterns hidden in spike trains can be robustly detected and learned by multiple neurons equipped with spike-timing-dependent plasticity listening to the incoming spike trains. This paper presents an implementation, on a biological time scale, of the STDP algorithm to localize a repeating spatio-temporal spike pattern on a multi-core embedded platform.

5.
Synapses in various neural preparations exhibit spike-timing-dependent plasticity (STDP) with a variety of learning window functions. The window functions determine the magnitude and the polarity of synaptic change according to the time difference of pre- and postsynaptic spikes. Numerical experiments revealed that STDP learning with a single-exponential window function resulted in a bimodal distribution of synaptic conductances as a consequence of competition between synapses. A slightly modified window function, however, resulted in a unimodal distribution rather than a bimodal distribution. Since various window functions have been observed in neural preparations, we develop a rigorous mathematical method to calculate the conductance distribution for any given window function. Our method is based on the Fokker-Planck equation to determine the conductance distribution and on the Ornstein-Uhlenbeck process to characterize the membrane potential fluctuations. Demonstrating that our method reproduces the known quantitative results of STDP learning, we apply the method to the type of STDP learning found recently in the CA1 region of the rat hippocampus. We find that this learning can result in nearly optimized competition between synapses. Meanwhile, we find that the type of STDP learning found in the cerebellum-like structure of electric fish can result in all-or-none synapses: either all the synaptic conductances are maximized, or none of them becomes significantly large. Our method also determines the window function that optimizes synaptic competition.  相似文献   
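The membrane-potential fluctuations enter this analysis through an Ornstein-Uhlenbeck process; a minimal Euler-Maruyama simulation of such a process is sketched below. The mean, time constant, and noise amplitude are illustrative values, not parameters from the article.

```python
import numpy as np

def simulate_ou(mu=-60.0, tau=20.0, sigma=2.0, dt=0.1, steps=10000, seed=0):
    """Euler-Maruyama simulation of an Ornstein-Uhlenbeck process,
        dV = -(V - mu)/tau * dt + sigma * sqrt(2/tau) * dW,
    a common surrogate for subthreshold membrane-potential fluctuations
    driven by balanced synaptic input (times in ms, V in mV).
    """
    rng = np.random.default_rng(seed)
    v = np.empty(steps)
    v[0] = mu
    for t in range(1, steps):
        noise = rng.normal(0.0, np.sqrt(dt))
        v[t] = v[t - 1] - (v[t - 1] - mu) / tau * dt + sigma * np.sqrt(2.0 / tau) * noise
    return v

v = simulate_ou()
# The stationary statistics should approach mean mu and standard deviation sigma.
print(v.mean(), v.std())
```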

6.
Spiking neurons are very flexible computational modules, which can implement with different values of their adjustable synaptic parameters an enormous variety of different transformations F from input spike trains to output spike trains. We examine in this letter the question to what extent a spiking neuron with biologically realistic models for dynamic synapses can be taught via spike-timing-dependent plasticity (STDP) to implement a given transformation F. We consider a supervised learning paradigm where during training, the output of the neuron is clamped to the target signal (teacher forcing). The well-known perceptron convergence theorem asserts the convergence of a simple supervised learning algorithm for drastically simplified neuron models (McCulloch-Pitts neurons). We show that in contrast to the perceptron convergence theorem, no theoretical guarantee can be given for the convergence of STDP with teacher forcing that holds for arbitrary input spike patterns. On the other hand, we prove that average case versions of the perceptron convergence theorem hold for STDP in the case of uncorrelated and correlated Poisson input spike trains and simple models for spiking neurons. For a wide class of cross-correlation functions of the input spike trains, the resulting necessary and sufficient condition can be formulated in terms of linear separability, analogously as the well-known condition of learnability by perceptrons. However, the linear separability criterion has to be applied here to the columns of the correlation matrix of the Poisson input. We demonstrate through extensive computer simulations that the theoretically predicted convergence of STDP with teacher forcing also holds for more realistic models for neurons, dynamic synapses, and more general input distributions. In addition, we show through computer simulations that these positive learning results hold not only for the common interpretation of STDP, where STDP changes the weights of synapses, but also for a more realistic interpretation suggested by experimental data where STDP modulates the initial release probability of dynamic synapses.  相似文献   

7.
Reinforcement learning, spike-time-dependent plasticity, and the BCM rule
Baras D, Meir R. Neural Computation, 2007, 19(8): 2245-2279.
Learning agents, whether natural or artificial, must update their internal parameters in order to improve their behavior over time. In reinforcement learning, this plasticity is influenced by an environmental signal, termed a reward, that directs the changes in appropriate directions. We apply a recently introduced policy learning algorithm from machine learning to networks of spiking neurons and derive a spike-time-dependent plasticity rule that ensures convergence to a local optimum of the expected average reward. The approach is applicable to a broad class of neuronal models, including the Hodgkin-Huxley model. We demonstrate the effectiveness of the derived rule in several toy problems. Finally, through statistical analysis, we show that the synaptic plasticity rule established is closely related to the widely used BCM rule, for which good biological evidence exists.  相似文献   
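As a loose illustration of reward-gated plasticity (not the specific rule derived in the article), the sketch below combines a pair-based STDP eligibility trace with a delayed scalar reward; all constants and the trace dynamics are assumptions.

```python
import numpy as np

def rstdp_update(w, elig, delta_t, reward, eta=0.01, a_plus=1.0, a_minus=1.0,
                 tau_plus=20.0, tau_minus=20.0, tau_e=500.0, dt=1.0):
    """One step of a generic reward-modulated STDP scheme: spike pairs feed
    an eligibility trace, and the actual weight change is the trace gated by
    a (possibly delayed) scalar reward. delta_t = t_post - t_pre in ms, or
    None if no spike pair occurred during this step.
    """
    elig *= np.exp(-dt / tau_e)                      # decay the eligibility trace
    if delta_t is not None:
        if delta_t >= 0:
            elig += a_plus * np.exp(-delta_t / tau_plus)
        else:
            elig -= a_minus * np.exp(delta_t / tau_minus)
    w += eta * reward * elig                         # reward gates the trace
    return w, elig

w, elig = 0.5, 0.0
w, elig = rstdp_update(w, elig, delta_t=10.0, reward=0.0)   # spike pair, no reward yet
w, elig = rstdp_update(w, elig, delta_t=None, reward=1.0)   # delayed reward arrives
print(w, elig)
```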

8.
The balanced random network model attracts considerable interest because it explains the irregular spiking activity at low rates and large membrane potential fluctuations exhibited by cortical neurons in vivo. In this article, we investigate to what extent this model is also compatible with the experimentally observed phenomenon of spike-timing-dependent plasticity (STDP). Confronted with the plethora of theoretical models for STDP available, we reexamine the experimental data. On this basis, we propose a novel STDP update rule, with a multiplicative dependence on the synaptic weight for depression, and a power law dependence for potentiation. We show that this rule, when implemented in large, balanced networks of realistic connectivity and sparseness, is compatible with the asynchronous irregular activity regime. The resultant equilibrium weight distribution is unimodal with fluctuating individual weight trajectories and does not exhibit development of structure. We investigate the robustness of our results with respect to the relative strength of depression. We introduce synchronous stimulation to a group of neurons and demonstrate that the decoupling of this group from the rest of the network is so severe that it cannot effectively control the spiking of other neurons, even those with the highest convergence from this group.  相似文献   
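A sketch of a weight-dependent update of the kind described, with power-law scaling of potentiation and multiplicative scaling of depression in the current weight; the exponent, amplitudes, and time constants are illustrative, not the fitted values from the article.

```python
import numpy as np

def weight_update(w, delta_t, lam=0.01, alpha=1.1, mu=0.4, w0=1.0, tau=20.0):
    """Weight-dependent STDP update in the spirit of the rule the abstract
    describes: potentiation scales as a power law of the current weight,
    depression scales multiplicatively with it.
    delta_t = t_post - t_pre in ms; all constants are illustrative.
    """
    if delta_t >= 0:   # pre before post -> potentiation
        dw = lam * w0 ** (1.0 - mu) * w ** mu * np.exp(-delta_t / tau)
    else:              # post before pre -> depression
        dw = -lam * alpha * w * np.exp(delta_t / tau)
    return max(w + dw, 0.0)   # keep the weight non-negative

w = 0.5
print(weight_update(w, +10.0))  # slightly potentiated
print(weight_update(w, -10.0))  # slightly depressed
```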

9.
Advanced Robotics, 2013, 27(10): 1177-1199.
A novel integrative learning architecture based on a reinforcement learning schemata model (RLSM) with a spike timing-dependent plasticity (STDP) network is described. This architecture models operant conditioning with discriminative stimuli in an autonomous agent engaged in multiple reinforcement learning tasks. The architecture consists of two constitutional learning architectures: RLSM and STDP. RLSM is an incremental modular reinforcement learning architecture, and it makes an autonomous agent acquire several behavioral concepts incrementally through continuous interactions with its environment and/or caregivers. STDP is a learning rule of neuronal plasticity found in cerebral cortices and the hippocampus of the human brain. STDP is a temporally asymmetric learning rule that contrasts with the Hebbian learning rule. We found that STDP enabled an autonomous robot to associate auditory input with its acquired behaviors and to select reinforcement learning modules more effectively. Auditory signals interpreted based on the acquired behaviors were revealed to correspond to 'signs' of required behaviors and incoming situations. This integrative learning architecture was evaluated in the context of on-line modular learning.  相似文献   

10.
We introduce and test a system for simulating networks of conductance-based neuron models using analog circuits. At the single-cell level, we use custom-designed analog circuits (ASICs) that simulate two types of spiking neurons based on Hodgkin-Huxley like dynamics: "regular spiking" excitatory neurons with spike-frequency adaptation, and "fast spiking" inhibitory neurons. Synaptic interactions are mediated by conductance-based synaptic currents described by kinetic models. Connectivity and plasticity rules are implemented digitally through a real time interface between a computer and a PCI board containing the ASICs. We show a prototype system of a few neurons interconnected with synapses undergoing spike-timing dependent plasticity (STDP), and compare this system with numerical simulations. We use this system to evaluate the effect of parameter dispersion on the behavior of small circuits of neurons. It is shown that, although the exact spike timings are not precisely emulated by the ASIC neurons, the behavior of small networks with STDP matches that of numerical simulations. Thus, this mixed analog-digital architecture provides a valuable tool for real-time simulations of networks of neurons with STDP. They should be useful for any real-time application, such as hybrid systems interfacing network models with biological neurons.  相似文献   

11.
We demonstrate that the BCM learning rule follows directly from STDP when pre- and postsynaptic neurons fire uncorrelated or weakly correlated Poisson spike trains, and only nearest-neighbor spike interactions are taken into account.  相似文献   
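The claim can be explored numerically. The sketch below estimates, by Monte Carlo, the mean weight drift under one common nearest-neighbour pairing scheme (each presynaptic spike interacts only with the closest preceding and closest following postsynaptic spike) for independent Poisson trains; with the illustrative constants chosen here (A−τ− > A+τ+ and A+ > A−), the analytic crossover from depression to potentiation lies near 25 Hz, giving a BCM-like dependence on the postsynaptic rate.

```python
import numpy as np

def nn_stdp_drift(rate_pre, rate_post, T=200.0, a_plus=0.01, tau_plus=20.0,
                  a_minus=0.006, tau_minus=50.0, seed=0):
    """Monte Carlo estimate of the mean weight change per second for
    independent Poisson pre/post spike trains under a nearest-neighbour
    STDP scheme: each presynaptic spike interacts only with the closest
    following postsynaptic spike (LTP) and the closest preceding one (LTD).
    Rates in Hz, T in seconds, STDP time constants in ms."""
    rng = np.random.default_rng(seed)
    pre = np.cumsum(rng.exponential(1.0 / rate_pre, int(2 * rate_pre * T) + 10))
    post = np.cumsum(rng.exponential(1.0 / rate_post, int(2 * rate_post * T) + 10))
    pre, post = pre[pre < T], post[post < T]
    idx = np.searchsorted(post, pre)      # first post spike at or after each pre spike
    dw = 0.0
    for t, i in zip(pre, idx):
        if i < len(post):                 # nearest post spike after -> potentiation
            dw += a_plus * np.exp(-(post[i] - t) * 1e3 / tau_plus)
        if i > 0:                         # nearest post spike before -> depression
            dw -= a_minus * np.exp(-(t - post[i - 1]) * 1e3 / tau_minus)
    return dw / T                         # drift per second

# Sweeping the postsynaptic rate: net depression below ~25 Hz,
# net potentiation above, i.e. a BCM-like rate dependence.
for rho in (5, 15, 25, 50, 100):
    print(rho, "Hz:", round(nn_stdp_drift(rate_pre=10.0, rate_post=float(rho)), 5))
```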

12.
Leen TK, Friel R. Neural Computation, 2012, 24(5): 1109-1146.
Online machine learning rules and many biological spike-timing-dependent plasticity (STDP) learning rules generate jump process Markov chains for the synaptic weights. We give a perturbation expansion for the dynamics that, unlike the usual approximation by a Fokker-Planck equation (FPE), is well justified. Our approach extends the related system size expansion by giving an expansion for the probability density as well as its moments. We apply the approach to two observed STDP learning rules and show that in regimes where the FPE breaks down, the new perturbation expansion agrees well with Monte Carlo simulations. The methods are also applicable to the dynamics of stochastic neural activity. Like previous ensemble analyses of STDP, we focus on equilibrium solutions, although the methods can in principle be applied to transients as well.  相似文献   

13.
Spike timing-dependent plasticity (STDP) is a learning rule that modifies the strength of a neuron's synapses as a function of the precise temporal relations between input and output spikes. In many brain areas, temporal aspects of spike trains have been found to be highly reproducible. How will STDP affect a neuron's behavior when it is repeatedly presented with the same input spike pattern? We show in this theoretical study that repeated inputs systematically lead to a shaping of the neuron's selectivity, emphasizing its very first input spikes, while steadily decreasing the postsynaptic response latency. This was obtained under various conditions of background noise, and even under conditions where spiking latencies and firing rates, or synchrony, provided conflicting information. The key role of first spikes demonstrated here provides further support for models using a single wave of spikes to implement rapid neural processing.
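A self-contained sketch of the effect described: a current-based integrate-and-fire neuron is repeatedly presented with one fixed input spike pattern, pair-based STDP is applied between the inputs and the neuron's first output spike, and the output latency typically shrinks towards the earliest input spikes over presentations. The network size, threshold, and STDP constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

n_in, t_max, dt = 200, 50.0, 0.1              # inputs, pattern duration (ms), time step (ms)
tau_m, v_thresh = 10.0, 15.0                  # membrane time constant (ms), firing threshold
a_plus, a_minus = 0.03, 0.02                  # STDP amplitudes
tau_plus, tau_minus = 20.0, 20.0              # STDP time constants (ms)
w_max = 1.0
w = np.full(n_in, 0.5)                        # initial synaptic weights
pattern = rng.uniform(0.0, t_max, n_in)       # one fixed spike time per input
spike_bins = np.floor(pattern / dt).astype(int)

def first_spike_latency(weights):
    """Present the fixed pattern once to a leaky integrate-and-fire neuron
    (delta-current synapses) and return the latency of its first output spike."""
    v = 0.0
    for step in range(int(t_max / dt)):
        v += dt * (-v / tau_m)                       # passive leak
        v += weights[spike_bins == step].sum()       # incoming spikes at this step
        if v >= v_thresh:
            return step * dt
    return None

latencies = []
for presentation in range(60):
    t_out = first_spike_latency(w)
    latencies.append(t_out)
    if t_out is None:
        continue
    lag = t_out - pattern                            # > 0: input spike preceded the output spike
    ltp = lag > 0
    w[ltp] += a_plus * np.exp(-lag[ltp] / tau_plus)      # potentiate causal inputs
    w[~ltp] -= a_minus * np.exp(lag[~ltp] / tau_minus)   # depress acausal inputs
    np.clip(w, 0.0, w_max, out=w)

print("first presentation latency:", latencies[0], "ms")
print("last presentation latency:", latencies[-1], "ms")
```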

14.
Karsten, Andreas, Bernd, Ana D., Thomas. Neurocomputing, 2008, 71(7-9): 1694-1704.
Biologically plausible excitatory neural networks develop a persistent synchronized pattern of activity depending on spontaneous activity and synaptic refractoriness (short-term depression). With fixed synaptic weights, synchronous bursts of oscillatory activity are stable and involve the whole network. In our modeling study we investigate the effect of a dynamic Hebbian-like learning mechanism, spike-timing-dependent plasticity (STDP), on the changes of synaptic weights depending on synchronous activity and network connection strategies (small-world topology). We show that STDP modifies the weights of synaptic connections in such a way that synchronization of neuronal activity is considerably weakened. Networks with a higher proportion of long connections can sustain a higher level of synchronization in spite of the STDP influence. The resulting distribution of the synaptic weights in single neurons depends both on the global statistics of firing dynamics and on the number of incoming and outgoing connections.

15.
Cheng L, Liu Y. Control and Decision, 2018, 33(5): 923-937.
Spiking neural networks are currently the artificial neural networks with the strongest biological interpretability and a core component of brain-inspired intelligence research. This survey first introduces the commonly used spiking neuron models and the feedforward and recurrent spiking neural network architectures; it then describes the temporal coding schemes of spiking neural networks and, on this basis, systematically reviews their learning algorithms, covering both unsupervised and supervised learning, with the supervised algorithms presented and summarized in detail under three categories: gradient-descent-based algorithms, algorithms combined with the STDP rule, and algorithms based on spike-train convolution kernels. It then surveys applications of spiking neural networks in control, pattern recognition, and brain-inspired intelligence research, and on that basis describes cases from national brain projects in which spiking neural networks are combined with neuromorphic processors. Finally, it analyzes the difficulties and challenges that spiking neural networks currently face.

16.
Combining artificial neural networks (ANNs) with reinforcement learning algorithms has significantly improved the learning ability and efficiency of intelligent agents. However, these algorithms consume substantial computational resources and are difficult to implement in hardware. Spiking neural networks (SNNs), which transmit information via spike signals, offer high energy efficiency and strong biological plausibility, and they facilitate hardware acceleration of reinforcement learning, enhancing the autonomous learning capability of embedded agents. At present, however, the learning and training of spiking neural networks remain complex, and their design and implementation pose considerable challenges. By introducing the memristor, an ideal device for realizing artificial synapses, this paper proposes a hardware-friendly reinforcement learning algorithm based on a multilayer memristive spiking neural network. Specifically, spiking neurons are designed for data-to-spike conversion; an improved spike-timing-dependent plasticity (STDP) rule organically combines the spiking neural network with the reinforcement learning algorithm, and corresponding memristive synapses are designed; and a dynamically adjustable network structure is constructed to improve learning efficiency. Finally, using CartPole-v0 (inverted pendulum) and MountainCar-v0 (mountain car) from OpenAI Gym as examples, simulations and comparative analysis verify the effectiveness of the scheme and its advantages over traditional reinforcement learning methods.

17.
Hélène, Régis, Samy. Neurocomputing, 2008, 71(7-9): 1143-1158.
We propose a multi-timescale learning rule for spiking neuron networks, in line with the recently emerging field of reservoir computing. The reservoir is a network model of spiking neurons with random topology, driven by STDP (spike-time-dependent plasticity), a biologically observed form of temporal Hebbian unsupervised learning. The model is further driven by a supervised learning algorithm, based on a margin criterion, that adjusts the synaptic delays linking the network to the readout neurons, with classification as the goal task. The network processing and the resulting performance can be explained by the concept of polychronization, proposed by Izhikevich [Polychronization: computation with spikes, Neural Comput. 18(2) (2006) 245–282] on physiological grounds. The model emphasizes that polychronization can be used as a tool for exploiting the computational power of synaptic delays and for monitoring the topology and activity of a spiking neuron network.

18.
We present a spiking neuron model that allows for an analytic calculation of the correlations between pre- and postsynaptic spikes. The neuron model is a generalization of the integrate-and-fire model and equipped with a probabilistic spike-triggering mechanism. We show that under certain biologically plausible conditions, pre- and postsynaptic spike trains can be described simultaneously as an inhomogeneous Poisson process. Inspired by experimental findings, we develop a model for synaptic long-term plasticity that relies on the relative timing of pre- and post-synaptic action potentials. Being given an input statistics, we compute the stationary synaptic weights that result from the temporal correlations between the pre- and postsynaptic spikes. By means of both analytic calculations and computer simulations, we show that such a mechanism of synaptic plasticity is able to strengthen those input synapses that convey precisely timed spikes at the expense of synapses that deliver spikes with a broad temporal distribution. This may be of vital importance for any kind of information processing based on spiking neurons and temporal coding.  相似文献   
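For readers who want to experiment with the setting described, a sketch of sampling spikes from an inhomogeneous Poisson process by thinning (Lewis-Shedler); the rate profile below is an arbitrary example, not one used in the article.

```python
import numpy as np

def inhomogeneous_poisson(rate_fn, rate_max, T, seed=0):
    """Sample spike times in [0, T) from an inhomogeneous Poisson process
    with intensity rate_fn(t) <= rate_max, using thinning: draw candidates
    from a homogeneous process at rate_max and keep each candidate with
    probability rate_fn(t) / rate_max.
    """
    rng = np.random.default_rng(seed)
    t, spikes = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t >= T:
            return np.array(spikes)
        if rng.uniform() < rate_fn(t) / rate_max:
            spikes.append(t)

# A firing rate modulated sinusoidally around 20 Hz (time in seconds).
rate = lambda t: 20.0 + 15.0 * np.sin(2.0 * np.pi * 2.0 * t)
spikes = inhomogeneous_poisson(rate, rate_max=35.0, T=5.0)
print(len(spikes), "spikes; mean rate ≈", len(spikes) / 5.0, "Hz")
```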

19.
Spike-timing-dependent plasticity (STDP) is described by long-term potentiation (LTP), when a presynaptic event precedes a postsynaptic event, and by long-term depression (LTD), when the temporal order is reversed. In this article, we present a biophysical model of STDP based on a differential Hebbian learning rule (ISO learning). This rule correlates presynaptically the NMDA channel conductance with the derivative of the membrane potential at the synapse as the postsynaptic signal. The model is able to reproduce the generic STDP weight change characteristic. We find that (1) The actual shape of the weight change curve strongly depends on the NMDA channel characteristics and on the shape of the membrane potential at the synapse. (2) The typical antisymmetrical STDP curve (LTD and LTP) can become similar to a standard Hebbian characteristic (LTP only) without having to change the learning rule. This occurs if the membrane depolarization has a shallow onset and is long lasting. (3) It is known that the membrane potential varies along the dendrite as a result of the active or passive backpropagation of somatic spikes or because of local dendritic processes. As a consequence, our model predicts that learning properties will be different at different locations on the dendritic tree. In conclusion, such site-specific synaptic plasticity would provide a neuron with powerful learning capabilities.  相似文献   
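A toy sketch of the differential Hebbian (ISO-like) rule at the heart of this model: the weight change integrates the product of a presynaptic (NMDA-like) conductance trace with the temporal derivative of the postsynaptic membrane potential. The traces and the learning rate are artificial examples, not the biophysical model's quantities.

```python
import numpy as np

def differential_hebbian_dw(g_nmda, v_post, dt=0.1, mu=1e-4):
    """Weight change from a differential Hebbian (ISO-like) rule:
        dw/dt = mu * g_pre(t) * dv_post(t)/dt,
    integrated over the traces g_nmda (presynaptic conductance) and
    v_post (postsynaptic membrane potential), both sampled at dt (ms).
    """
    dv = np.gradient(v_post, dt)          # temporal derivative of the membrane potential
    return mu * np.sum(g_nmda * dv) * dt  # discrete integral of the product

# Toy traces: a slow NMDA-like conductance that precedes a depolarizing
# postsynaptic event overlaps mostly with the rising phase of v and yields
# potentiation (dw > 0); a conductance arriving after the depolarization
# has peaked overlaps the falling phase and yields dw < 0.
t = np.arange(0.0, 100.0, 0.1)                     # ms
g = np.exp(-(t - 20.0) / 50.0) * (t >= 20.0)       # conductance switching on at 20 ms
v = np.exp(-((t - 40.0) ** 2) / (2 * 5.0 ** 2))    # depolarization peaking at 40 ms
print(differential_hebbian_dw(g, v))               # pre leads post -> positive
```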

20.
Polychronization: computation with spikes
We present a minimal spiking network that can polychronize, that is, exhibit reproducible time-locked but not synchronous firing patterns with millisecond precision, as in synfire braids. The network consists of cortical spiking neurons with axonal conduction delays and spike-timing-dependent plasticity (STDP); a ready-to-use MATLAB code is included. It exhibits sleeplike oscillations, gamma (40 Hz) rhythms, conversion of firing rates to spike timings, and other interesting regimes. Due to the interplay between the delays and STDP, the spiking neurons spontaneously self-organize into groups and generate patterns of stereotypical polychronous activity. To our surprise, the number of coexisting polychronous groups far exceeds the number of neurons in the network, resulting in an unprecedented memory capacity of the system. We speculate on the significance of polychrony to the theory of neuronal group selection (TNGS, neural Darwinism), cognitive neural computations, binding and gamma rhythm, mechanisms of attention, and consciousness as "attention to memories."  相似文献   
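The article ships MATLAB code for the full network; for orientation, a minimal Python sketch of the single-neuron update it builds on — the Izhikevich model with the standard regular-spiking parameters — is given below. The constant input current is arbitrary, and the axonal delays and STDP machinery of the network are not included.

```python
import numpy as np

def izhikevich_step(v, u, I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5):
    """One integration step of the Izhikevich neuron model,
        v' = 0.04 v^2 + 5 v + 140 - u + I,   u' = a (b v - u),
    with reset v <- c, u <- u + d when v reaches 30 mV.
    The (a, b, c, d) values are the standard regular-spiking set.
    Returns the updated state and a boolean spike flag.
    """
    v = v + dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u = u + dt * a * (b * v - u)
    fired = v >= 30.0
    if fired:
        v, u = c, u + d
    return v, u, fired

# Drive the neuron with a constant current and collect its spike times.
v, u, spikes = -65.0, -65.0 * 0.2, []
for step in range(2000):                 # 1000 ms at dt = 0.5 ms
    v, u, fired = izhikevich_step(v, u, I=10.0)
    if fired:
        spikes.append(step * 0.5)
print("spike times (ms):", spikes[:5], "...")
```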
