Similar Documents (20 results)
1.
Long-term synaptic plasticity, which strengthens synaptic efficacy (long-term potentiation, LTP) or weakens it (long-term depression, LTD), is a candidate substrate for learning and memory in the nervous system. At excitatory synapses, LTP and LTD are induced by the precise timing of pre- and postsynaptic activity. The changes in transmission that underlie long-term synaptic plasticity involve many complex processes; for example, the two forms of plasticity are induced by Ca2+ entry into the postsynaptic cell with different time courses. We present the processes by which LTP and LTD are generated, a kinetic model of synaptic plasticity, and a predictive model of changes in synaptic strength; training with predicted spikes at different frequencies and with Poisson-distributed spike trains produces different mean pre- and postsynaptic voltages.

2.
Spike-timing-dependent synaptic plasticity (STDP), which depends on the temporal difference between pre- and postsynaptic action potentials, is observed in the cortices and hippocampus. Although several theoretical and experimental studies have revealed its fundamental aspects, its functional role remains unclear. To examine how an input spatiotemporal spike pattern is altered by STDP, we observed the output spike patterns of a spiking neural network model with an asymmetrical STDP rule when the input spatiotemporal pattern is repeatedly applied. The spiking neural network comprises excitatory and inhibitory neurons that exhibit local interactions. Numerical experiments show that the spiking neural network generates a single global synchrony whose relative timing depends on the input spatiotemporal pattern and the neural network structure. This result implies that the spiking neural network learns the transformation from spatiotemporal to temporal information. The origin of the synfire chain has received little attention in the literature; our results indicate that spiking neural networks with STDP can ignite synfire chains in the cortices.
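The asymmetrical STDP rule that this abstract relies on is commonly modeled as a pair of exponential windows. A minimal sketch of that standard form follows; the amplitudes and time constants are illustrative defaults, not values taken from the paper:

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a spike-time difference dt = t_post - t_pre (ms).

    Positive dt (pre before post) gives potentiation (LTP); negative dt
    (post before pre) gives depression (LTD). The exponential decay makes
    nearly coincident spike pairs change the weight the most.
    """
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_plus)
    elif dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_minus)
    return 0.0
```

Repeated application of a fixed input pattern then potentiates the causal pre-to-post pairings and depresses the anti-causal ones, which is the mechanism behind the pattern-dependent synchrony described above.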

3.
Spike-timing-dependent plasticity (STDP) is described by long-term potentiation (LTP), when a presynaptic event precedes a postsynaptic event, and by long-term depression (LTD), when the temporal order is reversed. In this article, we present a biophysical model of STDP based on a differential Hebbian learning rule (ISO learning). This rule correlates the presynaptic NMDA channel conductance with the derivative of the membrane potential at the synapse, which serves as the postsynaptic signal. The model is able to reproduce the generic STDP weight change characteristic. We find that (1) the actual shape of the weight change curve strongly depends on the NMDA channel characteristics and on the shape of the membrane potential at the synapse; (2) the typical antisymmetrical STDP curve (LTD and LTP) can become similar to a standard Hebbian characteristic (LTP only) without any change to the learning rule, which occurs if the membrane depolarization has a shallow onset and is long lasting; and (3) because the membrane potential is known to vary along the dendrite as a result of the active or passive backpropagation of somatic spikes or because of local dendritic processes, our model predicts that learning properties will differ at different locations on the dendritic tree. In conclusion, such site-specific synaptic plasticity would provide a neuron with powerful learning capabilities.
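The core of a differential Hebbian rule of this kind is a correlation between the presynaptic signal and the *derivative* of the postsynaptic potential. A minimal discretized sketch (the function and parameter names, and the simple forward difference used for dv/dt, are illustrative, not the paper's equations):

```python
def diff_hebb_dw(pre_trace, post_v, dt=1.0, eta=0.001):
    """Differential Hebbian update: accumulate eta * pre(t) * dv_post/dt.

    pre_trace and post_v are equal-length sequences sampled every dt.
    A rising postsynaptic potential while the presynaptic signal is
    active yields potentiation; a falling one yields depression.
    """
    dw = 0.0
    for i in range(len(post_v) - 1):
        dv = (post_v[i + 1] - post_v[i]) / dt       # forward difference
        dw += eta * pre_trace[i] * dv * dt
    return dw
```

Because the sign of dw tracks the sign of dv/dt around the presynaptic event, the same rule naturally produces LTP for pre-before-post and LTD for post-before-pre, which is how it reproduces the STDP characteristic.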

4.
The synaptic phenomena of long-term potentiation (LTP) and long-term depression (LTD) have been intensively studied for over twenty-five years. Although many diverse aspects of these forms of plasticity have been observed, no single theory has offered a unifying explanation for them. Here, a statistical "bin" model is proposed to account for a variety of features observed in LTP and LTD experiments performed with field potentials in mammalian cortical slices. It is hypothesized that long-term synaptic changes will be induced when statistically unlikely conjunctions of pre- and postsynaptic activity occur. This hypothesis implies that finite changes in synaptic strength will be proportional to information transmitted by conjunctions and that excitatory synapses will obey a Hebbian rule (Hebb, 1949). Using only one set of constants, the bin model offers an explanation as to why synaptic strength decreases in a decelerating manner during LTD induction (Mulkey & Malenka, 1992); why the induction protocols for LTP and LTD are asymmetric (Dudek & Bear, 1992; Mulkey & Malenka, 1992); why stimulation over a range of frequencies produces a frequency-response curve similar to that proposed by the BCM theory (Bienenstock, Cooper, & Munro, 1982; Dudek & Bear, 1992); and why this curve would shift as postsynaptic activity is changed (Kirkwood, Rioult, & Bear, 1996). In addition, the bin model offers an alternative to the BCM theory by predicting that changes in postsynaptic activity will produce vertical shifts in the curve rather than merely horizontal shifts.
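For context on the frequency-response curve this abstract compares against, the classic BCM modification function (not the bin model itself) can be sketched as follows; the quadratic form phi(c) = c * (c - theta) is the standard illustrative choice:

```python
def bcm_dw(c, theta, eta=0.01):
    """BCM-style weight change as a function of postsynaptic activity c.

    Activity below the modification threshold theta yields depression,
    activity above it yields potentiation, and zero activity leaves the
    synapse unchanged. Shifting theta moves the LTD/LTP crossover point
    horizontally, which is the shift the bin model reinterprets.
    """
    return eta * c * (c - theta)
```

The bin model's prediction of *vertical* rather than horizontal shifts under changed postsynaptic activity is precisely a departure from this curve's behavior.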

5.
It has been shown in studies of biological synaptic plasticity that synaptic efficacy can change in a very short time window, compared to the time scale associated with typical neural events. This time scale is small enough to possibly have an effect on pattern recall processes in neural networks. We study properties of a neural network which uses a cyclic Hebb rule. Then we add the short term potentiation of synapses in the recall phase. We show that this approach preserves the ability of the network to recognize the patterns stored by the network and that the network does not respond to other patterns at the same time. We show that this approach dramatically increases the capacity of the network at the cost of a longer pattern recall process. We argue that the network possesses two types of recall. The fast recall does not need synaptic plasticity to recognize a pattern, while the slower recall utilizes synaptic plasticity. This is something that we all experience in our daily lives: some memories can be recalled promptly whereas recollection of other memories requires much more time.
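The cyclic Hebb rule and the short-term potentiation during recall are specific to this paper; as a baseline, the "fast recall" mode (no plasticity during recall) corresponds to standard Hopfield-style storage and retrieval, which might be sketched as:

```python
import numpy as np

def store(patterns):
    """Hebbian storage of +/-1 patterns in a symmetric weight matrix."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0.0)    # no self-connections
    return w / n

def recall(w, probe, steps=10):
    """Synchronous recall dynamics without any synaptic plasticity."""
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(w @ s)
        s[s == 0] = 1           # break ties deterministically
    return s
```

The paper's contribution is what happens when synapses are additionally allowed to potentiate transiently *during* this retrieval loop, trading recall time for capacity.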

6.
TP Lee, DV Buonomano. Neural Computation, 2012, 24(10): 2579-2603
The discrimination of complex auditory stimuli relies on the spatiotemporal structure of spike patterns arriving in the cortex. While recordings from auditory areas reveal that many neurons are highly selective to specific spatiotemporal stimuli, the mechanisms underlying this selectivity are unknown. Using computer simulations, we show that selectivity can emerge in neurons in an entirely unsupervised manner. The model is based on recurrently connected spiking neurons and synapses that exhibit short-term synaptic plasticity. During a developmental stage, spoken digits were presented to the network; the only type of long-term plasticity present was a form of homeostatic synaptic plasticity. From an initially unresponsive state, training generated a high percentage of neurons that responded selectively to individual digits. Furthermore, units within the network exhibited a cardinal feature of vocalization-sensitive neurons in vivo: differential responses between forward and reverse stimulus presentations. Direction selectivity deteriorated significantly, however, if short-term synaptic plasticity was removed. These results establish that a simple form of homeostatic plasticity is capable of guiding recurrent networks into regimes in which complex stimuli can be discriminated. In addition, one computational function of short-term synaptic plasticity may be to provide an inherent temporal asymmetry, thus contributing to the characteristic forward-reverse selectivity.

7.
A spiking neural network that learns temporal sequences is described. A sparse code in which individual neurons represent sequences and subsequences enables multiple sequences to be stored without interference. The network is founded on a model of sequence compression in the hippocampus that is robust to variation in sequence element duration and well suited to learn sequences through spike-timing dependent plasticity (STDP). Three additions to the sequence compression model underlie the sparse representation: synapses connecting the neurons of the network that are subject to STDP, a competitive plasticity rule so that neurons specialize to individual sequences, and neural depolarization after spiking so that neurons have a memory. The response to new sequence elements is determined by the neurons that have responded to the previous subsequence, according to the competitively learned synaptic connections. Numerical simulations show that the model can learn sets of intersecting sequences, presented with widely differing frequencies, with elements of varying duration.

8.
Neural associative memories are perceptron-like single-layer networks with fast synaptic learning typically storing discrete associations between pairs of neural activity patterns. Previous work optimized the memory capacity for various models of synaptic learning: linear Hopfield-type rules, the Willshaw model employing binary synapses, or the BCPNN rule of Lansner and Ekeberg, for example. Here I show that all of these previous models are limit cases of a general optimal model where synaptic learning is determined by probabilistic Bayesian considerations. Asymptotically, for large networks and very sparse neuron activity, the Bayesian model becomes identical to an inhibitory implementation of the Willshaw and BCPNN-type models. For less sparse patterns, the Bayesian model becomes identical to Hopfield-type networks employing the covariance rule. For intermediate sparseness or finite networks, the optimal Bayesian learning rule differs from the previous models and can significantly improve memory performance. I also provide a unified analytical framework to determine memory capacity at a given output noise level that links approaches based on mutual information, Hamming distance, and signal-to-noise ratio.
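Of the limit cases named above, the Willshaw model with binary synapses is the simplest to sketch: a synapse is switched on whenever its pre- and postsynaptic units are jointly active in any stored pair, and retrieval thresholds at the number of active input units. This is a generic sketch of the classical model, not the paper's Bayesian generalization:

```python
import numpy as np

def willshaw_store(pairs, n_in, n_out):
    """Binary (0/1) synapses: w[i, j] = 1 iff input j and output i were
    ever simultaneously active in a stored (x, y) pair."""
    w = np.zeros((n_out, n_in), dtype=int)
    for x, y in pairs:
        w |= np.outer(y, x)
    return w

def willshaw_recall(w, x):
    """Retrieve by thresholding the dendritic sum at the number of
    active input units (perfect-cue threshold)."""
    return (w @ x >= x.sum()).astype(int)
```

The paper's result is that this rule, the covariance rule, and the BCPNN rule all drop out of one Bayesian-optimal learning rule in different sparseness regimes.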

9.
Liu JK. Neural Computation, 2011, 23(12): 3145-3161
It has been established that homeostatic synaptic scaling plasticity can maintain neural network activity in a stable regime. However, the underlying learning rule for this mechanism is still unclear. Whether it is dependent on the presynaptic site remains a topic of debate. Here we focus on two forms of learning rules: traditional synaptic scaling (SS) without presynaptic effect and presynaptic-dependent synaptic scaling (PSD). Analysis of the synaptic matrices reveals that transition matrices between consecutive synaptic matrices are distinct: they are diagonal and linear to neural activity under SS, but become nondiagonal and nonlinear under PSD. These differences produce different dynamics in recurrent neural networks. Numerical simulations show that network dynamics are stable under PSD but not SS, which suggests that PSD is a better form to describe homeostatic synaptic scaling plasticity. Matrix analysis used in the study may provide a novel way to examine the stability of learning dynamics.

10.
The cerebellar cortical circuitry may support a distinct second form of associative learning, complementary to the well-known synaptic plasticity (long term depression, LTD) that has been previously shown. As the granule cell axons ascend to the molecular layer, they make multiple synapses on the overlying Purkinje cells (PC). This ascending branch (AB) input, which has been ignored in models of cerebellar learning, is likely to be functionally distinct from the parallel fiber (PF) synaptic input. We predict that AB-PF correlations lead to Hebbian-type learning at the PF-PC synapse, including long term potentiation (LTP), and allowing the cortical circuit to combine AB-PF LTP for feedforward state prediction with climbing fiber LTD for feedback error correction. The new learning mechanism could therefore add computational capacity to cerebellar models and may explain more of the experimental data.

11.
Markram and Tsodyks, by showing that the elevated synaptic efficacy observed with single-pulse long-term potentiation (LTP) measurements disappears with higher-frequency test pulses, have critically challenged the conventional assumption that LTP reflects a general gain increase. This observed change in frequency dependence during synaptic potentiation is called redistribution of synaptic efficacy (RSE). RSE is here seen as the local realization of a global design principle in a neural network for pattern coding. The underlying computational model posits an adaptive threshold rather than a multiplicative weight as the elementary unit of long-term memory. A distributed instar learning law allows thresholds to increase only monotonically, but adaptation has a bidirectional effect on the model postsynaptic potential. At each synapse, threshold increases implement pattern selectivity via a frequency-dependent signal component, while a complementary frequency-independent component nonspecifically strengthens the path. This synaptic balance produces changes in frequency dependence that are robustly similar to those observed by Markram and Tsodyks. The network design therefore suggests a functional purpose for RSE, which, by helping to bound total memory change, supports a distributed coding scheme that is stable with fast as well as slow learning. Multiplicative weights have served as a cornerstone for models of physiological data and neural systems for decades. Although the model discussed here does not implement detailed physiology of synaptic transmission, its new learning laws operate in a network architecture that suggests how recently discovered synaptic computations such as RSE may help produce new network capabilities such as learning that is fast, stable, and distributed.

12.
Different models of attractor networks have been proposed to form cell assemblies. Among them, networks with a fixed synaptic matrix can be distinguished from those including learning dynamics, since the latter adapt the attractor landscape of the lateral connections according to the statistics of the presented stimuli, yielding a more complex behavior. We propose a new learning rule that builds internal representations of input stimuli as attractors of neurons in a recurrent network. The dynamics of activation and synaptic adaptation are analyzed in experiments where representations for different input patterns are formed, focusing on the properties of the model as a memory system. The experimental results are presented along with a survey of different Hebbian rules proposed in the literature for attractor formation. These rules are compared with the help of a new tool, the learning map, where LTP and LTD, as well as homo- and heterosynaptic competition, can be graphically interpreted.

13.
We present a model for spike-driven dynamics of a plastic synapse, suited for aVLSI implementation. The synaptic device behaves as a capacitor on short timescales and preserves the memory of two stable states (efficacies) on long timescales. The transitions (LTP/LTD) are stochastic because both the number and the distribution of neural spikes in any finite (stimulation) interval fluctuate, even at fixed pre- and postsynaptic spike rates. The dynamics of the single synapse is studied analytically by extending the solution to a classic problem in queuing theory (Takacs process). The model of the synapse is implemented in aVLSI and consists of only 18 transistors. It is also directly simulated. The simulations indicate that LTP/LTD probabilities versus rates are robust to fluctuations of the electronic parameters in a wide range of rates. The solutions for these probabilities are in very good agreement with both the simulations and measurements. Moreover, the probabilities are readily manipulable by variations of the chip's parameters, even in ranges where they are very small. The tests of the electronic device cover the range from spontaneous activity (3-4 Hz) to stimulus-driven rates (50 Hz). Low transition probabilities can be maintained in all ranges, even though the intrinsic time constants of the device are short (approximately 100 ms). Synaptic transitions are triggered by elevated presynaptic rates: for low presynaptic rates, there are essentially no transitions. The synaptic device can preserve its memory for years in the absence of stimulation. Stochasticity of learning is a result of the variability of interspike intervals; noise is a feature of the distributed dynamics of the network. The fact that the synapse is binary on long timescales solves the stability problem of synaptic efficacies in the absence of stimulation. Yet stochastic learning theory ensures that it does not affect the collective behavior of the network, if the transition probabilities are low and LTP is balanced against LTD.
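The long-timescale behavior of such a bistable synapse reduces to a two-state Markov chain with stochastic LTP/LTD transitions. A minimal sketch of that abstraction (the transition probabilities would be computed from the spike statistics the paper analyzes; here they are free parameters):

```python
import random

def bistable_synapse_step(state, p_ltp, p_ltd, rng=random):
    """One candidate transition of a bistable synapse.

    A synapse in the low state (0) jumps to the high state (1) with
    probability p_ltp; a high synapse drops to low with probability
    p_ltd. Otherwise the efficacy is preserved, mirroring the device's
    indefinite memory retention between transitions.
    """
    r = rng.random()
    if state == 0 and r < p_ltp:
        return 1
    if state == 1 and r < p_ltd:
        return 0
    return state
```

Keeping p_ltp and p_ltd small, and balanced against each other, is exactly the regime the abstract identifies as leaving the network's collective behavior unaffected.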

14.
It has been a matter of debate how firing rates or spatiotemporal spike patterns carry information in the brain. Recent experimental and theoretical work in part showed that these codes, especially a population rate code and a synchronous code, can be dually used in a single architecture. However, we are not yet able to relate the role of firing rates and synchrony to the spatiotemporal structure of inputs and the architecture of neural networks. In this article, we examine how feedforward neural networks encode multiple input sources in the firing patterns. We apply spike-time-dependent plasticity as a fundamental mechanism to yield synaptic competition and the associated input filtering. We use the Fokker-Planck formalism to analyze the mechanism for synaptic competition in the case of multiple inputs, which underlies the formation of functional clusters in downstream layers in a self-organizing manner. Depending on the types of feedback coupling and shared connectivity, clusters are independently engaged in population rate coding or synchronous coding, or they interact to serve as input filters. Classes of dual codings and functional roles of spike-time-dependent plasticity are also discussed.

15.
Electrical stimulation in the hippocampus leads to an increase in synaptic efficacy that lasts for many hours. This long-term potentiation (LTP) of synaptic transmission is presumed to play a crucial role in learning and memory in the brain. Our experimental data on the hippocampus show that the homosynaptic LTP and the associative LTP are highly sensitive to temporal pattern stimuli given by different correlations between successive interstimulus events, even when the mean rate of the stimuli is held constant; negatively correlated stimuli have relatively little LTP, whereas positively correlated stimuli have greater LTP. This suggests that the detailed temporal properties of the stimulus are an important factor in inducing LTP and supports the possibility that temporal codes are used as indexes in associating/dissociating memory events in the hippocampus. Based on the physiological evidence, we propose a hypothesis on how association and dissociation of event memories are done in the hippocampal-cortical memory system. It is postulated that the association/dissociation of memories is carried out by indexing the representations of events (memory contents) with temporal codes. The memory contents are supplied from the sensory association cortices, while the temporal codes are supplied from the decision-making/motivation area. The two inputs are mixed (indexing) in the ento-perirhinal area. Indexed signals are fed to the hippocampus, where connection or disconnection of memory contents occurs, depending on the kind of index. Finally, association/dissociation of event memories is done in the association cortex according to a covariance rule: two event memories are associated when direct cortico-cortical inputs and indirect inputs from the hippocampus are positively correlated through the consolidation in the hippocampus, and they are dissociated when the two inputs are negatively correlated as a consequence of the disconnection in the hippocampus.

16.
Senn W, Fusi S. Neural Computation, 2005, 17(10): 2106-2138
Learning in a neuronal network is often thought of as a linear superposition of synaptic modifications induced by individual stimuli. However, since biological synapses are naturally bounded, a linear superposition would cause fast forgetting of previously acquired memories. Here we show that this forgetting can be avoided by introducing additional constraints on the synaptic and neural dynamics. We consider Hebbian plasticity of excitatory synapses. A synapse is modified only if the postsynaptic response does not match the desired output. With this learning rule, the original memory performances with unbounded weights are regained, provided that (1) there is some global inhibition, (2) the learning rate is small, and (3) the neurons can discriminate small differences in the total synaptic input (e.g., by making the neuronal threshold small compared to the total postsynaptic input). We prove in the form of a generalized perceptron convergence theorem that under these constraints, a neuron learns to classify any linearly separable set of patterns, including a wide class of highly correlated random patterns. During the learning process, excitation becomes roughly balanced by inhibition, and the neuron classifies the patterns on the basis of small differences around this balance. The fact that synapses saturate has the additional benefit that nonlinearly separable patterns, such as similar patterns with contradicting outputs, eventually generate a subthreshold response, and therefore silence neurons that cannot provide any information.
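The learning scheme above — update only on mismatch, keep excitatory weights bounded, rely on global inhibition — can be sketched as a small bounded perceptron. The clipping to [0, w_max], the fixed mean-subtraction standing in for global inhibition, and all constants are illustrative simplifications of the paper's constraints:

```python
import numpy as np

def train_bounded_perceptron(x_pats, labels, w_max=1.0, eta=0.05,
                             theta=0.0, epochs=100):
    """Perceptron learning with bounded excitatory weights.

    A synapse is modified only when the binary output does not match the
    desired label, and weights are clipped to [0, w_max]. Global
    inhibition is crudely approximated by subtracting the mean weight
    times the total input activity, so classification rides on small
    differences around the excitation/inhibition balance.
    """
    n = x_pats.shape[1]
    w = np.full(n, 0.5 * w_max)
    for _ in range(epochs):
        errors = 0
        for x, y in zip(x_pats, labels):
            drive = w @ x - w.mean() * x.sum()   # excitation - inhibition
            out = 1 if drive > theta else 0
            if out != y:                          # modify only on mismatch
                errors += 1
                w = np.clip(w + eta * (y - out) * x, 0.0, w_max)
        if errors == 0:
            break
    return w
```

The paper's theorem says that with small eta and fine input discrimination, this kind of rule recovers the convergence guarantees of the unbounded perceptron.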

17.
Neural systems as nonlinear filters
Maass W, Sontag ED. Neural Computation, 2000, 12(8): 1743-1772
Experimental data show that biological synapses behave quite differently from the symbolic synapses in all common artificial neural network models. Biological synapses are dynamic; their "weight" changes on a short timescale by several hundred percent depending on the past input to the synapse. In this article we address the question of how this inherent synaptic dynamics (which should not be confused with long-term learning) affects the computational power of a neural network. In particular, we analyze computations on temporal and spatiotemporal patterns, and we give a complete mathematical characterization of all filters that can be approximated by feedforward neural networks with dynamic synapses. It turns out that even with just a single hidden layer, such networks can approximate a very rich class of nonlinear filters: all filters that can be characterized by Volterra series. This result is robust with regard to various changes in the model for synaptic dynamics. Our characterization result provides for all nonlinear filters that are approximable by Volterra series a new complexity hierarchy related to the cost of implementing such filters in neural systems.
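The short-timescale weight changes this abstract refers to are commonly modeled with Tsodyks-Markram-style short-term depression: each presynaptic spike consumes a fraction of a recovering resource variable. A sketch under that standard assumption (parameter values illustrative, not from the paper):

```python
import math

def dynamic_synapse(spike_times_ms, tau_rec=800.0, use=0.5, a=1.0):
    """Efficacy of each spike at a depressing dynamic synapse.

    Each spike consumes a fraction `use` of the available resources x,
    which recover toward 1 with time constant tau_rec (ms). Closely
    spaced spikes therefore see progressively weaker synapses, while a
    long pause restores efficacy.
    """
    x, last_t, eff = 1.0, None, []
    for t in spike_times_ms:
        if last_t is not None:
            x = 1.0 - (1.0 - x) * math.exp(-(t - last_t) / tau_rec)
        eff.append(a * use * x)   # response to this spike
        x -= use * x              # resource consumed by the spike
        last_t = t
    return eff
```

This history dependence is exactly the "inherent synaptic dynamics" that, per the paper, lets shallow feedforward networks approximate any Volterra-series filter.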

18.
Brader JM, Senn W, Fusi S. Neural Computation, 2007, 19(11): 2881-2912
We present a model of spike-driven synaptic plasticity inspired by experimental observations and motivated by the desire to build an electronic hardware device that can learn to classify complex stimuli in a semisupervised fashion. During training, patterns of activity are sequentially imposed on the input neurons, and an additional instructor signal drives the output neurons toward the desired activity. The network is made of integrate-and-fire neurons with constant leak and a floor. The synapses are bistable, and they are modified by the arrival of presynaptic spikes. The sign of the change is determined by both the depolarization and the state of a variable that integrates the postsynaptic action potentials. Following the training phase, the instructor signal is removed, and the output neurons are driven purely by the activity of the input neurons weighted by the plastic synapses. In the absence of stimulation, the synapses preserve their internal state indefinitely. Memories are also very robust to the disruptive action of spontaneous activity. A network of 2000 input neurons is shown to be able to classify correctly a large number (thousands) of highly overlapping patterns (300 classes of preprocessed Latex characters, 30 patterns per class, and a subset of the NIST characters data set) and to generalize with performances that are better than or comparable to those of artificial neural networks. Finally we show that the synaptic dynamics is compatible with many of the experimental observations on the induction of long-term modifications (spike-timing-dependent plasticity and its dependence on both the postsynaptic depolarization and the frequency of pre- and postsynaptic neurons).

19.
The olfactory system is a vital component of the sensory nervous system. When olfactory receptors receive an odor stimulus, they transduce the chemical signal into an electrical one and relay it to the olfactory bulb, which integrates and encodes the information before passing it on to the olfactory cortex, where the odor percept ultimately arises. Modeling olfactory neural networks and studying olfactory information processing help explain how the olfactory system effectively discriminates odors of different types and concentrations. Building on the classical olfactory bulb model composed of mitral cells, granule cells, and periglomerular cells, this paper introduces the olfactory cortex to construct a complete olfactory network model and considers the learning effect of inhibitory synaptic plasticity while the network receives stimuli. Simulation results show that inhibitory synaptic plasticity balances the excitatory and inhibitory synaptic currents in the olfactory cortex, so that the cortex exhibits specific firing patterns in response to odor stimuli: different odor types evoke different firing patterns, whereas different concentrations of the same odor evoke similar firing patterns with different firing intensities. Kernel-based hierarchical clustering and fuzzy clustering algorithms are also proposed to identify pure odors of different types and to recognize the individual odor components in mixtures.

20.
This paper proposes a neural network that stores and retrieves sparse patterns categorically, the patterns being random realizations of a sequence of biased (0,1) Bernoulli trials. The neural network, denoted as categorizing associative memory, consists of two modules: 1) an adaptive classifier (AC) module that categorizes input data; and 2) an associative memory (AM) module that stores input patterns in each category according to a Hebbian learning rule, after the AC module has stabilized its learning of that category. We show that during training of the AC module, the weights in the AC module belonging to a category converge to the probability of a "1" occurring in a pattern from that category. This fact is used to set the thresholds of the AM module optimally without requiring any a priori knowledge about the stored patterns.
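The convergence property claimed above — category weights tracking the probability of a "1" in that category's patterns — can be illustrated with a simple running-average rule. The specific update w += eta * (bit - w) is an illustrative stand-in for the paper's AC learning rule:

```python
import random

def train_ac_weights(patterns, eta=0.05):
    """Running-average estimate of per-position bit probabilities.

    Each weight is nudged toward the current input bit, so over many
    Bernoulli-generated patterns it settles near the empirical
    probability that its input position is 1.
    """
    n = len(patterns[0])
    w = [0.5] * n
    for p in patterns:
        for i, bit in enumerate(p):
            w[i] += eta * (bit - w[i])
    return w
```

Once the weights approximate those probabilities, they can be used to set AM-module thresholds without prior knowledge of the stored patterns, which is the mechanism the abstract describes.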

