Similar Articles
20 similar articles found (search time: 31 ms)
1.
We studied the hypothesis that synaptic dynamics is controlled by three basic principles: (1) synapses adapt their weights so that neurons can effectively transmit information, (2) homeostatic processes stabilize the mean firing rate of the postsynaptic neuron, and (3) weak synapses adapt more slowly than strong ones, while maintenance of strong synapses is costly. Our results show that a synaptic update rule derived from these principles shares features with spike-timing-dependent plasticity, is sensitive to correlations in the input, and is useful for synaptic memory. Moreover, input selectivity (sharply tuned receptive fields) of postsynaptic neurons develops only if stimuli with strong features are presented. Sharply tuned neurons can coexist with unselective ones, and the distribution of synaptic weights can be unimodal or bimodal. The formulation of synaptic dynamics through an optimality criterion provides a simple graphical argument for the stability of synapses, necessary for synaptic memory.

2.
Short-term synaptic plasticity and network behavior.
We develop a minimal time-continuous model for use-dependent synaptic short-term plasticity that can account for both short-term depression and short-term facilitation. It is analyzed in the context of the spike response neuron model. Explicit expressions are derived for the synaptic strength as a function of previous spike arrival times. These results are then used to investigate the behavior of large networks of highly interconnected neurons in the presence of short-term synaptic plasticity. We extend previous results so as to elucidate the existence and stability of limit cycles with coherently firing neurons. After the onset of an external stimulus, we find complex transient network behavior that manifests itself as a sequence of different modes of coherent firing until a stable limit cycle is reached.
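A minimal sketch of such use-dependent dynamics, assuming the standard Tsodyks-Markram-style discrete update (a common formulation, not necessarily this paper's exact model): a resource variable x depletes with each release and recovers with time constant tau_rec, while the release fraction u facilitates on each spike and decays with tau_fac.

```python
import numpy as np

def stp_efficacy(spike_times, U=0.2, tau_rec=500.0, tau_fac=50.0):
    """Relative synaptic efficacy at each spike, combining short-term
    depression (resource depletion, tau_rec) and facilitation (tau_fac).
    Times in ms. Standard Tsodyks-Markram-style form, used here only to
    illustrate use-dependent short-term plasticity."""
    x, u = 1.0, U          # available resources, release fraction
    last_t = None
    efficacies = []
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            x = 1.0 - (1.0 - x) * np.exp(-dt / tau_rec)   # resources recover
            u = U + (u - U) * np.exp(-dt / tau_fac)       # facilitation decays
        efficacies.append(u * x)   # synaptic strength for this spike
        x -= u * x                 # release depletes resources
        u += U * (1.0 - u)         # each spike facilitates release
        last_t = t
    return efficacies

print(stp_efficacy([0.0, 20.0, 40.0, 60.0, 500.0]))
```

With these illustrative parameters, facilitation dominates for the first few closely spaced spikes and depression takes over later, while the long pause before the final spike lets both variables relax toward rest.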

3.
Liu JK, Neural Computation, 2011, 23(12): 3145-3161
It has been established that homeostatic synaptic scaling plasticity can maintain neural network activity in a stable regime. However, the underlying learning rule for this mechanism is still unclear, and whether it depends on the presynaptic site remains a topic of debate. Here we focus on two forms of learning rules: traditional synaptic scaling (SS) without presynaptic effect and presynaptic-dependent synaptic scaling (PSD). Analysis of the synaptic matrices reveals that the transition matrices between consecutive synaptic matrices are distinct: they are diagonal and linear in neural activity under SS, but become nondiagonal and nonlinear under PSD. These differences produce different dynamics in recurrent neural networks. Numerical simulations show that network dynamics are stable under PSD but not SS, which suggests that PSD is a better form with which to describe homeostatic synaptic scaling plasticity. The matrix analysis used in the study may provide a novel way to examine the stability of learning dynamics.
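An illustrative sketch of the two rule families (the multiplicative forms below are common choices, not the paper's exact equations): under SS the update depends only on the postsynaptic rate error, so the equivalent transition matrix is diagonal; under PSD each synapse is additionally weighted by its presynaptic rate, making the transition nondiagonal and nonlinear in activity.

```python
import numpy as np

def scale_ss(W, rates, target, eta=0.01):
    """Traditional synaptic scaling (SS): each row of W is scaled
    multiplicatively by the postsynaptic rate error alone."""
    err = target - rates                      # postsynaptic rate error
    return W * (1.0 + eta * err[:, None])

def scale_psd(W, rates, target, eta=0.01):
    """Presynaptic-dependent scaling (PSD): the update also weights
    each synapse by its presynaptic rate."""
    err = target - rates
    return W * (1.0 + eta * err[:, None] * rates[None, :])

rng = np.random.default_rng(0)
W = rng.uniform(0, 1, (4, 4))                 # toy recurrent weight matrix
rates = rng.uniform(0, 10, 4)                 # current firing rates
print(scale_ss(W, rates, target=5.0))
print(scale_psd(W, rates, target=5.0))
```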

4.
We present a real-time model of learning in the auditory cortex that is trained using real-world stimuli. The system consists of a peripheral and a central cortical network of spiking neurons. The synapses formed by peripheral neurons on the central ones are subject to synaptic plasticity. We implemented a biophysically realistic learning rule that depends on the precise temporal relation of pre- and postsynaptic action potentials. We demonstrate that this biologically realistic real-time neuronal system forms stable receptive fields that accurately reflect the spectral content of the input signals and that the size of these representations can be biased by global signals acting on the local learning mechanism. In addition, we show that this learning mechanism supports fast acquisition and is robust to noise and to large imbalances in the probability of occurrence of individual stimuli.

5.
Brader JM, Senn W, Fusi S, Neural Computation, 2007, 19(11): 2881-2912
We present a model of spike-driven synaptic plasticity inspired by experimental observations and motivated by the desire to build an electronic hardware device that can learn to classify complex stimuli in a semisupervised fashion. During training, patterns of activity are sequentially imposed on the input neurons, and an additional instructor signal drives the output neurons toward the desired activity. The network is made of integrate-and-fire neurons with constant leak and a floor. The synapses are bistable, and they are modified by the arrival of presynaptic spikes. The sign of the change is determined by both the depolarization and the state of a variable that integrates the postsynaptic action potentials. Following the training phase, the instructor signal is removed, and the output neurons are driven purely by the activity of the input neurons weighted by the plastic synapses. In the absence of stimulation, the synapses preserve their internal state indefinitely. Memories are also very robust to the disruptive action of spontaneous activity. A network of 2000 input neurons is shown to be able to classify correctly a large number (thousands) of highly overlapping patterns (300 classes of preprocessed LaTeX characters, 30 patterns per class, and a subset of the NIST characters data set) and to generalize with performances that are better than or comparable to those of artificial neural networks. Finally, we show that the synaptic dynamics is compatible with many of the experimental observations on the induction of long-term modifications (spike-timing-dependent plasticity and its dependence on both the postsynaptic depolarization and the frequency of pre- and postsynaptic neurons).
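A schematic sketch of this kind of spike-driven rule (thresholds and magnitudes below are illustrative assumptions, not the published values): on each presynaptic spike the internal variable X moves up or down depending on the postsynaptic depolarization and a calcium-like variable, and between spikes X drifts toward one of two stable states, which is what preserves memories without stimulation.

```python
def presyn_spike_update(X, V_post, Ca, a=0.1, b=0.1, theta_V=0.8,
                        up_lo=3.0, up_hi=13.0, dn_lo=3.0, dn_hi=4.0):
    """Update of the internal synaptic variable X on presynaptic spike
    arrival. Ca integrates postsynaptic spikes; V_post is the
    postsynaptic depolarization. Parameter values are illustrative."""
    if V_post > theta_V and up_lo < Ca < up_hi:
        X = min(1.0, X + a)      # candidate potentiation
    elif V_post <= theta_V and dn_lo < Ca < dn_hi:
        X = max(0.0, X - b)      # candidate depression
    return X

def bistable_drift(X, dt=1.0, alpha=0.001, theta_X=0.5):
    """Between spikes X drifts toward one of two stable states,
    making the synapse bistable and its memory persistent."""
    X += (alpha if X > theta_X else -alpha) * dt
    return min(1.0, max(0.0, X))

X = 0.2
X = presyn_spike_update(X, V_post=0.9, Ca=5.0)   # potentiation candidate
X = bistable_drift(X)
print(X)
```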

6.
Nearly all neuronal information processing and interneuronal communication in the brain involves action potentials, or spikes, which drive the short-term synaptic dynamics of neurons, but also their long-term dynamics, via synaptic plasticity. In many brain structures, action potential activity is considered to be sparse. This sparseness of activity has been exploited to reduce the computational cost of large-scale network simulations, through the development of event-driven simulation schemes. However, existing event-driven simulation schemes use extremely simplified neuronal models. Here, we implement and critically evaluate an event-driven algorithm (ED-LUT) that uses precalculated look-up tables to characterize synaptic and neuronal dynamics. This approach enables the use of more complex (and realistic) neuronal models or data in representing the neurons, while retaining the advantage of high-speed simulation. We demonstrate the method's application for neurons containing exponential synaptic conductances, thereby implementing shunting inhibition, a phenomenon that is critical to cellular computation. We also introduce an improved two-stage event-queue algorithm, which allows the simulations to scale efficiently to highly connected networks with arbitrary propagation delays. Finally, the scheme readily accommodates implementation of synaptic plasticity mechanisms that depend on spike timing, enabling future simulations to explore issues of long-term learning and adaptation in large-scale networks.
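A toy illustration of the look-up-table idea (a single event queue only; the paper's improved two-stage queue and its neuron models are not reproduced here): the neuron's state is advanced only when a spike-arrival event occurs, and the exponential membrane decay is read from a precomputed table instead of being evaluated at run time.

```python
import heapq
import math

TAU_M, DT, N_BINS = 20.0, 0.1, 4096   # membrane tau (ms), table step, size
DECAY_LUT = [math.exp(-i * DT / TAU_M) for i in range(N_BINS)]

def decay(v, elapsed):
    """Look up exp(-elapsed/tau) instead of calling exp at run time."""
    i = min(int(elapsed / DT), N_BINS - 1)
    return v * DECAY_LUT[i]

events = []                            # priority queue of (time, target, weight)
heapq.heappush(events, (1.0, 0, 0.5))
heapq.heappush(events, (3.0, 0, 0.7))

v, last_t, threshold = 0.0, 0.0, 1.0
while events:
    t, target, w = heapq.heappop(events)
    v = decay(v, t - last_t) + w       # advance state to event time, add input
    last_t = t
    if v >= threshold:
        v = 0.0                        # fire and reset; a full simulator would
        print(f"neuron {target} spikes at t={t}")  # schedule spike deliveries
```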

7.
Learning with two sites of synaptic integration
Since the classical work of D. O. Hebb (1949, The Organization of Behaviour; New York: Wiley), it has been assumed that synaptic plasticity solely depends on the activity of the pre- and the postsynaptic cells. Synapses influence the plasticity of other synapses exclusively via the postsynaptic activity. This confounds effects on synaptic plasticity and neuronal activation and thus makes it difficult to implement networks that optimize global measures of performance. Exploring solutions to this problem, inspired by recent research on the properties of apical dendrites, we examine a network of neurons with two sites of synaptic integration. These communicate in such a way that one set of synapses mainly influences the neurons' activity; the other set gates synaptic plasticity. Analysing the system with a constant set of parameters reveals: (1) the afferents that gate plasticity act as supervisors, individual to every cell. (2) While the neurons acquire specific receptive fields, the net activity remains constant for different stimuli. This ensures that all stimuli are represented and thus contributes to information maximization. (3) Mechanisms for maximization of coherent information can easily be implemented. Neurons with non-overlapping receptive fields learn to fire in a correlated fashion and preferentially transmit information that is correlated over space. (4) We demonstrate how a new measure of performance can be implemented: cells learn to represent only the part of the input that is relevant to the processing at higher stages. This criterion is termed 'relevant infomax'.
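A minimal sketch of the two-site idea (the multiplicative gating form is an illustrative assumption, not the paper's exact rule): one pathway sets the neuron's activity, while a second pathway only scales how much the first pathway's synapses change.

```python
import numpy as np

def two_site_update(W_basal, W_apical, x, ctx, eta=0.01):
    """Plasticity with two sites of synaptic integration: the basal
    pathway W_basal determines the neuron's activity, while the apical
    pathway W_apical gates plasticity of the basal synapses, acting as
    a per-cell supervisor. Gating form is an assumption."""
    post = np.tanh(W_basal @ x)          # activity: basal site only
    gate = np.tanh(W_apical @ ctx)       # plasticity gate: apical site
    W_basal += eta * gate[:, None] * np.outer(post, x)  # gated Hebbian step
    return W_basal

rng = np.random.default_rng(2)
W_b = rng.normal(0, 0.1, (5, 8))         # basal weights (5 cells, 8 inputs)
W_a = rng.normal(0, 0.1, (5, 3))         # apical weights (3 gating afferents)
W_b = two_site_update(W_b, W_a, rng.normal(size=8), rng.normal(size=3))
print(W_b.shape)
```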

8.
9.
Spike-timing-dependent synaptic plasticity (STDP), which depends on the temporal difference between pre- and postsynaptic action potentials, is observed in the cortices and hippocampus. Although several theoretical and experimental studies have revealed its fundamental aspects, its functional role remains unclear. To examine how an input spatiotemporal spike pattern is altered by STDP, we observed the output spike patterns of a spiking neural network model with an asymmetrical STDP rule when the input spatiotemporal pattern is repeatedly applied. The spiking neural network comprises excitatory and inhibitory neurons that exhibit local interactions. Numerical experiments show that the spiking neural network generates a single global synchrony whose relative timing depends on the input spatiotemporal pattern and the neural network structure. This result implies that the spiking neural network learns the transformation from spatiotemporal to temporal information. The origin of the synfire chain has received little attention in the literature; our results indicate that spiking neural networks with STDP can ignite synfire chains in the cortices.
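For reference, a sketch of the standard asymmetric STDP window such rules are built on (parameter values are illustrative, not taken from this paper): pre-before-post pairings potentiate, post-before-pre pairings depress, both with exponentially decaying magnitude.

```python
import math

def stdp_dw(dt, A_plus=0.01, A_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Asymmetric STDP window; dt = t_post - t_pre in ms.
    dt > 0 (pre before post) gives LTP, dt < 0 gives LTD."""
    if dt > 0:
        return A_plus * math.exp(-dt / tau_plus)    # LTP branch
    elif dt < 0:
        return -A_minus * math.exp(dt / tau_minus)  # LTD branch
    return 0.0

for dt in (-40, -10, 10, 40):
    print(dt, round(stdp_dw(dt), 5))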

10.
Electronic neuromorphic devices with on-chip, on-line learning should be able to modify quickly the synaptic couplings to acquire information about new patterns to be stored (synaptic plasticity) and, at the same time, preserve this information on very long time scales (synaptic stability). Here, we illustrate the electronic implementation of a simple solution to this stability-plasticity problem, recently proposed and studied in various contexts. It is based on the observation that reducing the analog depth of the synapses to the extreme (bistable synapses) does not necessarily disrupt the performance of the device as an associative memory, provided that 1) the number of neurons is large enough; 2) the transitions between stable synaptic states are stochastic; and 3) learning is slow. The drastic reduction of the analog depth of the synaptic variable also makes this solution appealing from the point of view of electronic implementation and offers a simple methodological alternative to the technological solution based on floating gates. We describe the full-custom analog very large-scale integration (VLSI) realization of a small network of integrate-and-fire neurons connected by bistable deterministic plastic synapses which can implement the idea of stochastic learning. In the absence of stimuli, the memory is preserved indefinitely. During the stimulation the synapse undergoes quick temporary changes through the activities of the pre- and postsynaptic neurons; those changes stochastically result in a long-term modification of the synaptic efficacy. The intentionally disordered pattern of connectivity allows the system to generate a randomness suited to drive the stochastic selection mechanism. We check by a suitable stimulation protocol that the stochastic synaptic plasticity produces the expected pattern of potentiation and depression in the electronic network.
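A minimal software sketch of the stochastic-selection idea (transition probabilities are illustrative; on the chip the randomness comes from the disordered connectivity pattern rather than an explicit random number generator): candidate transitions between the two stable states are taken only with small probability, so learning is slow and old memories are overwritten gradually.

```python
import random

def stochastic_update(state, pre_active, post_active, p_up=0.05, p_down=0.05):
    """One candidate update of a binary synapse (state in {0, 1}).
    Coincident pre/post activity proposes potentiation; presynaptic
    activity without postsynaptic activity proposes depression.
    Each proposal succeeds only with small probability."""
    if pre_active and post_active and state == 0 and random.random() < p_up:
        return 1    # stochastic potentiation
    if pre_active and not post_active and state == 1 and random.random() < p_down:
        return 0    # stochastic depression
    return state

random.seed(0)
s = 0
for _ in range(100):                       # repeated coincident stimulation
    s = stochastic_update(s, True, True)
print(s)                                   # eventually flips to 1
```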

11.
12.
Hoshino O, Neural Computation, 2007, 19(12): 3310-3334
Accumulating evidence suggests that auditory cortical neurons exhibit widespread-onset responses and restricted sustained responses to sound stimuli. When a sound stimulus is presented to a subject, the auditory cortex first responds with transient discharges across a relatively large population of neurons, showing widespread-onset responses. As time passes, the activation becomes restricted to a small population of neurons that are preferentially driven by the stimulus, showing restricted sustained responses. The sustained responses are considered to have a role in expressing information about the stimulus, but it remains to be seen what roles the widespread-onset responses have in auditory information processing. We carried out numerical simulations of a neural network model for a lateral belt area of auditory cortex. In the network, dynamic cell assemblies expressed information about auditory sounds. Lateral excitatory and inhibitory connections were made between cell assemblies, respectively, by direct and indirect projections via interneurons. Widespread-onset neuronal responses to sound stimuli (bandpassed noises) took place over the network if lateral excitation preceded lateral inhibition, making a time window for the onset responses. The widespread-onset responses contributed to accelerating the reaction time of neurons to sensory stimulation. Lateral interaction among dynamic cell assemblies was essential for maintaining ongoing membrane potentials near thresholds for action potential generation, thereby accelerating reaction time to subsequent sensory input as well. We suggest that the widespread-onset neuronal responses and the ongoing subthreshold cortical state, for which the coordination of lateral synaptic interaction among dissimilar cell assemblies is essential, may work together in order for the auditory cortex to quickly detect the sudden occurrence of sounds from the external environment.

13.
Matsumoto N, Okada M, Neural Computation, 2002, 14(12): 2883-2902
Recent biological experimental findings have shown that synaptic plasticity depends on the relative timing of the pre- and postsynaptic spikes. This determines whether long-term potentiation (LTP) or long-term depression (LTD) is induced. This synaptic plasticity has been called temporally asymmetric Hebbian plasticity (TAH). Many authors have numerically demonstrated that neural networks are capable of storing spatiotemporal patterns. However, the mathematical mechanism underlying the storage of spatiotemporal patterns is still unknown, and the effect of LTD in particular remains unclear. In this article, we employ a simple neural network model and show that interference between LTP and LTD disappears in a sparse coding scheme. On the other hand, the covariance learning rule is known to be indispensable for the storage of sparse patterns. We also show that TAH has the same qualitative effect as the covariance rule when spatiotemporal patterns are embedded in the network.
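For concreteness, a sketch of the covariance rule the abstract calls indispensable for sparse patterns (the normalization is a common convention, not necessarily this paper's): weights store the summed covariance of activity deviations from the mean coding level a.

```python
import numpy as np

def covariance_weights(patterns, a):
    """Covariance rule for sparse binary patterns with coding level a:
    J_ij is proportional to the sum over patterns of
    (xi_i - a)(xi_j - a), with no self-connections."""
    P, N = patterns.shape
    X = patterns - a                          # deviations from mean activity
    J = X.T @ X / (N * a * (1.0 - a))         # common normalization choice
    np.fill_diagonal(J, 0.0)
    return J

rng = np.random.default_rng(1)
a = 0.1                                        # sparse coding level
patterns = (rng.random((5, 100)) < a).astype(float)
J = covariance_weights(patterns, a)
print(J.shape, J.mean())
```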

14.
Animal learning is associated with changes in the efficacy of connections between neurons. The rules that govern this plasticity can be tested in neural networks. Rules that train neural networks to map stimuli onto outputs are given by supervised learning and reinforcement learning theories. Supervised learning is efficient but biologically implausible. In contrast, reinforcement learning is biologically plausible but comparatively inefficient. It lacks a mechanism that can identify units at early processing levels that play a decisive role in the stimulus-response mapping. Here we show that this so-called credit assignment problem can be solved by a new role for attention in learning. There are two factors in our new learning scheme that determine synaptic plasticity: (1) a reinforcement signal that is homogeneous across the network and depends on the amount of reward obtained after a trial, and (2) an attentional feedback signal from the output layer that limits plasticity to those units at earlier processing levels that are crucial for the stimulus-response mapping. The new scheme is called attention-gated reinforcement learning (AGREL). We show that it is as efficient as supervised learning in classification tasks. AGREL is biologically realistic and integrates the role of feedback connections, attention effects, synaptic plasticity, and reinforcement learning signals into a coherent framework.
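A schematic sketch of the two-factor idea, assuming a softmax output layer and tanh hidden units (these choices and the exact gating form are illustrative assumptions, not the paper's equations): every weight change is the product of a global reward prediction error and, at the earlier layer, an attentional feedback factor from the chosen output unit.

```python
import numpy as np

def agrel_step(W_in, W_out, x, target_class, eta=0.05, rng=None):
    """One AGREL-style trial: feedforward pass, stochastic action
    selection, a global reinforcement signal, and feedback from the
    winning output unit that gates plasticity at the earlier layer."""
    if rng is None:
        rng = np.random.default_rng()
    hid = np.tanh(W_in @ x)                   # hidden activity
    out = W_out @ hid                         # output activity
    probs = np.exp(out) / np.exp(out).sum()   # softmax action selection
    action = rng.choice(len(out), p=probs)
    reward = 1.0 if action == target_class else 0.0
    delta = reward - probs[action]            # global reward prediction error
    W_out[action] += eta * delta * hid        # plasticity limited to chosen unit
    fb = W_out[action] * (1.0 - hid**2)       # attentional feedback factor
    W_in += eta * delta * np.outer(fb, x)     # gated update at earlier layer
    return reward

rng = np.random.default_rng(0)
W_in = rng.normal(0, 0.1, (8, 4))
W_out = rng.normal(0, 0.1, (3, 8))
print(agrel_step(W_in, W_out, rng.normal(size=4), target_class=1, rng=rng))
```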

15.
Lüdtke N, Nelson ME, Neural Computation, 2006, 18(12): 2879-2916
We study the encoding of weak signals in spike trains with interspike interval (ISI) correlations and the signals' subsequent detection in sensory neurons. Motivated by the observation of negative ISI correlations in auditory and electrosensory afferents, we assess the theoretical performance limits of an individual detector neuron receiving a weak signal distributed across multiple afferent inputs. We assess the functional role of ISI correlations in the detection process using statistical detection theory and derive two sequential likelihood ratio detector models: one for afferents with renewal statistics; the other for afferents with negatively correlated ISIs. We suggest a mechanism that might enable sensory neurons to implicitly compute conditional probabilities of presynaptic spikes by means of short-term synaptic plasticity. We demonstrate how this mechanism can enhance a postsynaptic neuron's sensitivity to weak signals by exploiting the correlation structure of the input spike trains. Our model not only captures fundamental aspects of early electrosensory signal processing in weakly electric fish, but may also bear relevance to the mammalian auditory system and other sensory modalities.
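A minimal sketch of a sequential likelihood ratio detector over interspike intervals (the exponential ISI densities and thresholds below are assumptions for illustration; the paper's models additionally distinguish renewal from negatively correlated ISI statistics): log-likelihood ratios are accumulated interval by interval until one of two decision thresholds is crossed.

```python
import math

def sprt(isis, pdf_signal, pdf_noise, ln_A=2.2, ln_B=-2.2):
    """Sequential probability ratio test on a stream of ISIs.
    Returns the decision and the number of intervals consumed."""
    llr = 0.0
    for n, isi in enumerate(isis, 1):
        llr += math.log(pdf_signal(isi) / pdf_noise(isi))
        if llr >= ln_A:
            return "signal", n
        if llr <= ln_B:
            return "noise", n
    return "undecided", len(isis)

# Example: exponential ISI densities with slightly different rates,
# standing in for signal-present vs. signal-absent afferent statistics.
rate_s, rate_n = 1.2, 1.0
p_s = lambda t: rate_s * math.exp(-rate_s * t)
p_n = lambda t: rate_n * math.exp(-rate_n * t)
print(sprt([0.5, 0.4, 0.6, 0.3, 0.5, 0.4] * 5, p_s, p_n))
```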

16.
Different models of attractor networks have been proposed to form cell assemblies. Among them, networks with a fixed synaptic matrix can be distinguished from those including learning dynamics, since the latter adapt the attractor landscape of the lateral connections according to the statistics of the presented stimuli, yielding a more complex behavior. We propose a new learning rule that builds internal representations of input stimuli as attractors of neurons in a recurrent network. The dynamics of activation and synaptic adaptation are analyzed in experiments where representations for different input patterns are formed, focusing on the properties of the model as a memory system. The experimental results are presented along with a survey of different Hebbian rules proposed in the literature for attractor formation. These rules are compared with the help of a new tool, the learning map, where LTP and LTD, as well as homo- and heterosynaptic competition, can be graphically interpreted.

17.
18.
Senn W, Fusi S, Neural Computation, 2005, 17(10): 2106-2138
Learning in a neuronal network is often thought of as a linear superposition of synaptic modifications induced by individual stimuli. However, since biological synapses are naturally bounded, a linear superposition would cause fast forgetting of previously acquired memories. Here we show that this forgetting can be avoided by introducing additional constraints on the synaptic and neural dynamics. We consider Hebbian plasticity of excitatory synapses. A synapse is modified only if the postsynaptic response does not match the desired output. With this learning rule, the original memory performances with unbounded weights are regained, provided that (1) there is some global inhibition, (2) the learning rate is small, and (3) the neurons can discriminate small differences in the total synaptic input (e.g., by making the neuronal threshold small compared to the total postsynaptic input). We prove in the form of a generalized perceptron convergence theorem that under these constraints, a neuron learns to classify any linearly separable set of patterns, including a wide class of highly correlated random patterns. During the learning process, excitation becomes roughly balanced by inhibition, and the neuron classifies the patterns on the basis of small differences around this balance. The fact that synapses saturate has the additional benefit that nonlinearly separable patterns, such as similar patterns with contradicting outputs, eventually generate a subthreshold response, and therefore silence neurons that cannot provide any information.
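A minimal sketch of learning under these constraints (the inhibition level, bounds, and learning rate below are illustrative assumptions): excitatory weights are clipped to a bounded range, a fixed global inhibition sets the classification threshold, and a synapse changes only when the response mismatches the desired output.

```python
import numpy as np

def train_bounded_perceptron(X, y, w_max=1.0, eta=0.01, inh=0.5, epochs=200):
    """Perceptron-like learning with bounded excitatory weights.
    Updates occur only on errors; weights stay in [0, w_max]; a global
    inhibitory threshold tracks the mean excitatory drive, so the neuron
    classifies around an excitation-inhibition balance."""
    rng = np.random.default_rng(0)
    w = rng.uniform(0, w_max, X.shape[1])
    theta = inh * X.shape[1] * X.mean()       # fixed global inhibition
    for _ in range(epochs):
        errors = 0
        for x, target in zip(X, y):
            out = 1 if w @ x > theta else 0
            if out != target:                 # learn only on mismatch
                w = np.clip(w + eta * (target - out) * x, 0.0, w_max)
                errors += 1
        if errors == 0:
            break
    return w

rng = np.random.default_rng(1)
X = (rng.random((20, 50)) < 0.3).astype(float)   # binary input patterns
y = rng.integers(0, 2, 20)                        # desired outputs
print(train_bounded_perceptron(X, y)[:5])
```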

19.
We present a dynamical theory of integrate-and-fire neurons with strong synaptic coupling. We show how phase-locked states that are stable in the weak coupling regime can destabilize as the coupling is increased, leading to states characterized by spatiotemporal variations in the interspike intervals (ISIs). The dynamics is compared with that of a corresponding network of analog neurons in which the outputs of the neurons are taken to be mean firing rates. A fundamental result is that for slow interactions, there is good agreement between the two models (on an appropriately defined timescale). Various examples of desynchronization in the strong coupling regime are presented. First, a globally coupled network of identical neurons with strong inhibitory coupling is shown to exhibit oscillator death in which some of the neurons suppress the activity of others. However, the stability of the synchronous state persists for very large networks and fast synapses. Second, an asymmetric network with a mixture of excitation and inhibition is shown to exhibit periodic bursting patterns. Finally, a one-dimensional network of neurons with long-range interactions is shown to desynchronize to a state with a spatially periodic pattern of mean firing rates across the network. This is modulated by deterministic fluctuations of the instantaneous firing rate whose size is an increasing function of the speed of synaptic response.

20.
Neurons in inferior temporal (IT) cortex exhibit selectivity for complex visual stimuli and can maintain activity during the delay following the presentation of a stimulus in delayed match to sample tasks. Experimental work in awake monkeys has shown that the responses of IT neurons decline during presentation of stimuli which have been seen recently (within the past few seconds). In addition, experiments have found that the responses of IT neurons to visual stimuli also decline as the stimuli become familiar, independent of recency. Here a biologically based neural network simulation is used to model these effects primarily through two processes. The recency effects are caused by adaptation due to a calcium-dependent potassium current, and the familiarity effects are caused by competitive self-organization of modifiable feedforward synapses terminating on IT cortex neurons.
