Similar Articles (20 matching records found)
1.
Synapses play a central role in neural computation: the strengths of synaptic connections determine the function of a neural circuit. In conventional models of computation, synaptic strength is assumed to be a static quantity that changes only on the slow timescale of learning. In biological systems, however, synaptic strength undergoes dynamic modulation on rapid timescales through mechanisms such as short-term facilitation and depression. Here we describe a general model of computation that exploits dynamic synapses and use a backpropagation-like algorithm to adjust the synaptic parameters. We show that such gradient descent suffices to approximate a given quadratic filter by a rather small neural system with dynamic synapses. We also compare our network model to artificial neural networks designed for time series processing. Our numerical results are complemented by theoretical analyses showing that even with just a single hidden layer, such networks can approximate a surprisingly large class of nonlinear filters: all filters that can be characterized by Volterra series. This result is robust with regard to various changes in the model for synaptic dynamics.
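The abstract does not spell out the synapse model; below is a minimal Python sketch of one standard phenomenological model of short-term facilitation and depression (Tsodyks-Markram style) that is commonly used as the "dynamic synapse" in such networks. All parameter values are illustrative assumptions, not values from the paper.

```python
import numpy as np

def dynamic_synapse_efficacies(spike_times, U=0.2, tau_f=0.5, tau_d=0.2, A=1.0):
    """Short-term facilitation/depression, Tsodyks-Markram style.

    Returns the effective efficacy of each presynaptic spike. U is the
    baseline utilization (release probability), tau_f and tau_d are the
    facilitation and depression recovery time constants in seconds, and
    A is the absolute synaptic weight. Values are illustrative.
    """
    u, R = 0.0, 1.0          # utilization and available resources
    last_t, out = None, []
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            u *= np.exp(-dt / tau_f)                   # facilitation decays
            R = 1.0 - (1.0 - R) * np.exp(-dt / tau_d)  # resources recover
        u += U * (1.0 - u)   # each spike increments utilization (facilitation)
        out.append(A * u * R)
        R *= (1.0 - u)       # each spike consumes resources (depression)
        last_t = t
    return np.array(out)

# A regular 20 Hz train first facilitates, then depresses:
print(dynamic_synapse_efficacies(np.arange(0.0, 0.5, 0.05)))
```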

2.
Kube K, Herzog A, Michaelis B, de Lima AD, Voigt T. Neurocomputing, 2008, 71(7-9): 1694-1704
Biologically plausible excitatory neural networks develop a persistent synchronized pattern of activity that depends on spontaneous activity and synaptic refractoriness (short-term depression). With fixed synaptic weights, synchronous bursts of oscillatory activity are stable and involve the whole network. In our modeling study we investigate how a dynamic Hebbian-like learning mechanism, spike-timing-dependent plasticity (STDP), changes the synaptic weights depending on synchronous activity and network connection strategies (small-world topology). We show that STDP modifies the weights of synaptic connections in such a way that synchronization of neuronal activity is considerably weakened. Networks with a higher proportion of long connections can sustain a higher level of synchronization in spite of the influence of STDP. The resulting distribution of synaptic weights in single neurons depends both on the global statistics of firing dynamics and on the number of incoming and outgoing connections.
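For reference, a sketch of the pair-based STDP window such studies build on. The amplitudes and time constants below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def stdp_dw(dt, A_plus=0.01, A_minus=0.012, tau_plus=0.02, tau_minus=0.02):
    """Pair-based STDP window: weight change for one pre/post spike pair.

    dt = t_post - t_pre in seconds. Pre-before-post (dt > 0) gives LTP,
    post-before-pre (dt <= 0) gives LTD. Parameters are illustrative.
    """
    dt = np.asarray(dt, dtype=float)
    return np.where(dt > 0,
                    A_plus * np.exp(-dt / tau_plus),
                    -A_minus * np.exp(dt / tau_minus))

print(stdp_dw([-0.01, 0.01]))   # depression, then potentiation
```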

3.
Neural systems as nonlinear filters
Maass W, Sontag ED. Neural Computation, 2000, 12(8): 1743-1772
Experimental data show that biological synapses behave quite differently from the symbolic synapses in all common artificial neural network models. Biological synapses are dynamic; their "weight" changes on a short timescale by several hundred percent, depending on the past input to the synapse. In this article we address the question of how this inherent synaptic dynamics (which should not be confused with long-term learning) affects the computational power of a neural network. In particular, we analyze computations on temporal and spatiotemporal patterns, and we give a complete mathematical characterization of all filters that can be approximated by feedforward neural networks with dynamic synapses. It turns out that even with just a single hidden layer, such networks can approximate a very rich class of nonlinear filters: all filters that can be characterized by Volterra series. This result is robust with regard to various changes in the model for synaptic dynamics. Our characterization provides, for all nonlinear filters approximable by Volterra series, a new complexity hierarchy related to the cost of implementing such filters in neural systems.
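To make the filter class concrete, here is a sketch of a discrete-time Volterra filter truncated at second order; the kernels h1 and h2 are free parameters of the illustration, not quantities from the paper.

```python
import numpy as np

def volterra_quadratic_filter(x, h1, h2):
    """Discrete-time Volterra filter of order two with memory M:

        y[n] = sum_i h1[i] x[n-i] + sum_{i,j} h2[i,j] x[n-i] x[n-j]

    This is the filter class the paper characterizes, truncated here
    to second order for illustration.
    """
    M = len(h1)
    xp = np.concatenate([np.zeros(M - 1), np.asarray(x, dtype=float)])
    y = np.empty(len(x))
    for n in range(len(x)):
        w = xp[n:n + M][::-1]        # w[i] = x[n-i], with zero-padded past
        y[n] = h1 @ w + w @ h2 @ w   # linear term + quadratic term
    return y

rng = np.random.default_rng(0)
y = volterra_quadratic_filter(rng.normal(size=100),
                              h1=np.array([0.5, 0.3, 0.1]),
                              h2=0.05 * np.ones((3, 3)))
```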

4.
In this work we study, analytically and with Monte Carlo simulations, the influence of competition between several activity-dependent synaptic processes, such as short-term synaptic facilitation and depression, on the maximum memory storage capacity of a neural network. In contrast to synaptic depression, which drastically reduces the capacity of the network to store and retrieve "static" activity patterns, synaptic facilitation enhances the storage capacity in different contexts. In particular, we found optimal values of the relevant synaptic parameters (such as the neurotransmitter release probability or the characteristic facilitation time constant) for which the storage capacity can be maximal and similar to that obtained with static synapses, that is, without activity-dependent processes. We conclude that depressing synapses with a certain level of facilitation recover the good retrieval properties of networks with static synapses while maintaining the nonlinear characteristics of dynamic synapses, which are convenient for information processing and coding.
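A hedged sketch of how such parameter dependencies can be probed: the closed-form steady-state efficacy of a periodically driven facilitating/depressing synapse under one standard phenomenological model. The paper's exact model, capacity measure, and parameter values may differ; everything below is illustrative.

```python
import numpy as np

def steady_state_efficacy(rate, U, tau_f=0.5, tau_d=0.2):
    """Steady-state efficacy u*R per spike of a facilitating/depressing
    synapse driven periodically at 'rate' Hz (closed form for a standard
    phenomenological model; parameters are illustrative assumptions)."""
    ef = np.exp(-1.0 / (rate * tau_f))        # facilitation decay per interval
    ed = np.exp(-1.0 / (rate * tau_d))        # depression recovery per interval
    u = U / (1.0 - (1.0 - U) * ef)            # utilization just after a spike
    R = (1.0 - ed) / (1.0 - (1.0 - u) * ed)   # resources just before a spike
    return u * R

# Sweeping the release probability U at 20 Hz shows how facilitation
# lifts the effective utilization well above the baseline U when the
# release probability is low:
for U in (0.05, 0.1, 0.2, 0.4, 0.8):
    print(U, steady_state_efficacy(20.0, U))
```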

5.
Synapses are crucial elements for computation and information transfer in both real and artificial neural systems. Recent experimental findings and theoretical models of pulse-based neural networks suggest that synaptic dynamics can play a crucial role for learning neural codes and encoding spatiotemporal spike patterns. Within the context of hardware implementations of pulse-based neural networks, several analog VLSI circuits modeling synaptic functionality have been proposed. We present an overview of previously proposed circuits and describe a novel analog VLSI synaptic circuit suitable for integration in large VLSI spike-based neural systems. The proposed circuit is based on a computational model that fits real postsynaptic currents with exponentials. We present experimental data showing that the circuit exhibits realistic dynamics and show how it can be connected to additional modules to implement a wide range of synaptic properties.
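The abstract names the model only as "fitting real postsynaptic currents with exponentials"; a minimal sketch of one such fit, a difference of exponentials, is below. The time constants (1 ms rise, 5 ms decay) are assumptions, not values from the paper.

```python
import numpy as np

def epsc_kernel(t, I_peak=1.0, tau_rise=0.001, tau_decay=0.005):
    """Difference-of-exponentials model of a postsynaptic current: a fast
    rise governed by tau_rise and a slower decay governed by tau_decay.
    I_peak scales the amplitude; all values are illustrative."""
    t = np.asarray(t, dtype=float)
    return I_peak * np.where(t >= 0.0,
                             np.exp(-t / tau_decay) - np.exp(-t / tau_rise),
                             0.0)

print(epsc_kernel(np.linspace(0.0, 0.03, 7)))   # current over 30 ms
```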

6.
Matsumoto N, Okada M. Neural Computation, 2002, 14(12): 2883-2902
Recent biological experiments have shown that synaptic plasticity depends on the relative timing of pre- and postsynaptic spikes, which determines whether long-term potentiation (LTP) or long-term depression (LTD) is induced. This form of synaptic plasticity has been called temporally asymmetric Hebbian plasticity (TAH). Many authors have numerically demonstrated that neural networks are capable of storing spatiotemporal patterns. However, the mathematical mechanism behind the storage of spatiotemporal patterns is still unknown, and the effect of LTD in particular is unclear. In this article, we employ a simple neural network model and show that interference between LTP and LTD disappears in a sparse coding scheme. The covariance learning rule, on the other hand, is known to be indispensable for the storage of sparse patterns. We also show that TAH has the same qualitative effect as the covariance rule when spatiotemporal patterns are embedded in the network.
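For concreteness, a sketch of the covariance rule for sequence (spatiotemporal) storage that the abstract compares TAH against. The sparseness level f and the patterns are illustrative assumptions.

```python
import numpy as np

def covariance_sequence_weights(patterns, f):
    """Covariance rule embedding a pattern sequence xi^1 -> xi^2 -> ...

    patterns: (P, N) array of 0/1 patterns with mean activity f.
    W associates each pattern with its successor, the standard way of
    embedding spatiotemporal (sequence) memories in a network.
    """
    pre = patterns[:-1] - f    # presynaptic pattern at step mu
    post = patterns[1:] - f    # postsynaptic pattern at step mu + 1
    N = patterns.shape[1]
    return post.T @ pre / (N * f * (1.0 - f))

rng = np.random.default_rng(0)
f = 0.1                                      # sparse coding level
xi = (rng.random((5, 200)) < f).astype(float)
W = covariance_sequence_weights(xi, f)
```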

7.
The problem of spurious patterns in neural associative memory models is discussed. Several suggestions from the literature for solving this problem are reviewed and their inadequacies pointed out. For the Hebbian learning rule, a solution based on neural self-interaction with a suitably chosen magnitude is presented. For an optimal learning rule based on linear programming, asymmetric dilution of synaptic connections is presented as another solution. It is demonstrated numerically that, with varying percentages of asymmetric dilution, this optimal learning rule leads to near-total suppression of spurious patterns. For practical use of neural associative memory networks, a combination of the two solutions with the optimal learning rule is recommended as the best option.
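A minimal sketch of the Hebbian matrix with a tunable self-interaction on the diagonal. The suitable magnitude of d is derived in the paper; here it is left as a free parameter.

```python
import numpy as np

def hebbian_weights_with_self_interaction(patterns, d):
    """Hebbian (outer-product) weight matrix for +/-1 patterns, with the
    diagonal set to a chosen self-interaction d instead of the usual
    zero. A suitably chosen d suppresses spurious attractors; its
    optimal value is not derived here."""
    P, N = patterns.shape
    W = patterns.T @ patterns / N
    np.fill_diagonal(W, d)
    return W

rng = np.random.default_rng(0)
W = hebbian_weights_with_self_interaction(rng.choice([-1.0, 1.0], (10, 100)), d=0.5)
```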

8.
Different models of attractor networks have been proposed to form cell assemblies. Among them, networks with a fixed synaptic matrix can be distinguished from those that include learning dynamics, since the latter adapt the attractor landscape of the lateral connections to the statistics of the presented stimuli, yielding more complex behavior. We propose a new learning rule that builds internal representations of input stimuli as attractors of neurons in a recurrent network. The dynamics of activation and synaptic adaptation are analyzed in experiments where representations of different input patterns are formed, focusing on the properties of the model as a memory system. The experimental results are presented along with a survey of different Hebbian rules proposed in the literature for attractor formation. These rules are compared with the help of a new tool, the learning map, on which LTP and LTD, as well as homo- and heterosynaptic competition, can be interpreted graphically.
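A toy version of the "learning map" idea: tabulate a rule's weight change over the (pre, post) activity plane and read off the LTP and LTD regions. The gated rule used in the example is an illustrative assumption, not one of the paper's rules.

```python
import numpy as np

def learning_map(rule, n=5):
    """Tabulate a learning rule's weight change dw over a grid of
    (pre, post) activities -- a minimal version of the paper's
    'learning map', on which LTP (dw > 0) and LTD (dw < 0) regions
    can be identified."""
    pre = post = np.linspace(0.0, 1.0, n)
    return np.array([[rule(x, y) for x in pre] for y in post])

# Example rule: postsynaptically gated Hebbian learning with an LTD
# threshold theta (illustrative only).
print(learning_map(lambda pre, post, theta=0.5: pre * (post - theta)))
```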

9.
Temporal information processing, for instance temporal association, plays an important role in many brain functions. Among the various dynamics of neural networks, dynamically depressing synapses and chaotic behavior have been regarded as intriguing characteristics of biological neurons. In this paper, temporal association based on dynamic synapses and chaotic neurons is proposed. Interestingly, by introducing dynamic synapses into temporal association, we found that the sequence storage capacity can be enlarged, the transition time between patterns in the sequence can be shortened, and the stability of the sequence can be enhanced. Of particular interest, owing to the chaotic neurons, the steady-state period of the temporal association becomes shorter and can be adjusted by changing the parameter values of the chaotic neurons. Simulation results demonstrating the performance of the temporal association are presented.
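The abstract does not restate the neuron model; below is a sketch of an Aihara-style chaotic neuron update, a common choice in this literature. The parameter values are assumptions, and tuning them is what the abstract's adjustable steady-state period refers to.

```python
import numpy as np

def chaotic_neuron_step(y, k=0.7, alpha=1.0, a=0.5, eps=0.02):
    """One iteration of an Aihara-style chaotic neuron: the internal
    state decays with factor k and is reduced by a refractory term
    proportional to the current output x = f(y). Parameters are
    illustrative; changing them alters the dynamics."""
    x = 1.0 / (1.0 + np.exp(-y / eps))   # sigmoid output
    return k * y - alpha * x + a

y = 0.1
for _ in range(5):
    y = chaotic_neuron_step(y)
    print(y)
```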

10.
Hopfield networks are a class of neural network models in which nonlinear graded-response neurons, organized into networks with effectively symmetric synaptic connections, can implement interesting algorithms, thereby introducing the concept of information storage in the stable states of dynamical systems. This opens up the possibility of using system dynamics to gain potentially useful insights into the behaviour of such networks, especially in the field of nonelectrical engineering. We study the dynamics of the state-space trajectory as well as the time-domain evolution of the sensitivities of the states with respect to circuit parameters.
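For readers unfamiliar with the model class, a sketch of the graded-response Hopfield dynamics referred to here. The gain, time constant, and step size are assumptions for illustration.

```python
import numpy as np

def hopfield_graded_step(u, W, I, dt=0.01, tau=1.0, gain=5.0):
    """One Euler step of the graded-response Hopfield dynamics

        tau du/dt = -u + W g(u) + I,   g(u) = tanh(gain * u).

    With an effectively symmetric W, trajectories descend an energy
    function, which is the 'information storage in stable states' idea."""
    return u + (dt / tau) * (-u + W @ np.tanh(gain * u) + I)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
W = (W + W.T) / 2.0                 # effectively symmetric connections
np.fill_diagonal(W, 0.0)
u = rng.normal(size=4)
for _ in range(1000):               # relax toward a stable state
    u = hopfield_graded_step(u, W, np.zeros(4))
print(np.tanh(5.0 * u))
```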

11.
Liu JK. Neural Computation, 2011, 23(12): 3145-3161
It has been established that homeostatic synaptic scaling plasticity can maintain neural network activity in a stable regime. However, the underlying learning rule for this mechanism is still unclear; in particular, whether it depends on the presynaptic site remains a topic of debate. Here we focus on two forms of learning rule: traditional synaptic scaling (SS), without a presynaptic effect, and presynaptic-dependent synaptic scaling (PSD). Analysis of the synaptic matrices reveals that the transition matrices between consecutive synaptic matrices are distinct: they are diagonal and linear in neural activity under SS, but nondiagonal and nonlinear under PSD. These differences produce different dynamics in recurrent neural networks. Numerical simulations show that network dynamics are stable under PSD but not SS, which suggests that PSD is a better description of homeostatic synaptic scaling plasticity. The matrix analysis used in this study may provide a novel way to examine the stability of learning dynamics.
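A hedged sketch contrasting the two rule families. The PSD form below is an assumed illustration of how presynaptic dependence breaks the common per-row scaling factor; it is not the paper's exact rule.

```python
import numpy as np

def scale_ss(W, a_post, a_goal=1.0, eta=0.01):
    """Traditional synaptic scaling (SS): all incoming weights of neuron
    i are multiplied by one common factor set by i's own activity error,
    a diagonal, activity-linear transition between synaptic matrices.
    Row i of W holds the inputs to neuron i."""
    return W * (1.0 + eta * (a_goal - a_post))[:, None]

def scale_psd(W, a_pre, a_post, a_goal=1.0, eta=0.01):
    """Presynaptic-dependent scaling (PSD), sketched as the same
    postsynaptic factor additionally weighted by presynaptic activity,
    so the update is no longer a common factor per row (assumed form,
    for illustration only)."""
    return W * (1.0 + eta * np.outer(a_goal - a_post, a_pre))
```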

12.
Small networks of cultured hippocampal neurons respond to transient stimulation with rhythmic network activity (reverberation) that persists for several seconds, constituting an in vitro model of synchrony, working memory, and seizure. This mode of activity has been shown theoretically and experimentally to depend on asynchronous neurotransmitter release (an essential feature of the developing hippocampus) and is supported by a variety of developing neuronal networks despite variability in the size of populations (10-200 neurons) and in patterns of synaptic connectivity. It has previously been reported in computational models that "small-world" connection topology is ideal for the propagation of similar modes of network activity, although this has been shown only for neurons utilizing synchronous (phasic) synaptic transmission. We investigated how topological constraints on synaptic connectivity could shape the stability of reverberations in small networks that also use asynchronous synaptic transmission. We found that reverberation duration in such networks was resistant to changes in topology and scaled poorly with network size. However, normalization of synaptic drive, by reducing the variance of synaptic input across neurons, stabilized reverberation in such networks. Our results thus suggest that the stability of both normal and pathological states in developing networks might be shaped by variance-normalizing constraints on synaptic drive. We offer an experimental prediction for the consequences of such regulation on the behavior of small networks.
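One concrete reading of the variance-normalizing constraint, sketched as a per-neuron normalization of total incoming weight; the paper may implement the normalization differently.

```python
import numpy as np

def normalize_synaptic_drive(W, target=1.0):
    """Scale each neuron's incoming weights so its total synaptic drive
    equals 'target', reducing the across-neuron variance of synaptic
    input. Row i of W holds the inputs to neuron i."""
    drive = W.sum(axis=1, keepdims=True)
    drive[drive == 0.0] = 1.0          # leave unconnected neurons alone
    return W * (target / drive)

rng = np.random.default_rng(0)
W = normalize_synaptic_drive(rng.random((50, 50)))
print(W.sum(axis=1)[:5])               # every row now sums to the target
```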

13.
Continuous attractors of a class of recurrent neural networks
Recurrent neural networks (RNNs) may possess continuous attractors, a property that many brain theories have implicated in learning and memory. There is good evidence that continuous stimuli, such as orientation, direction of motion, and the spatial location of objects, can be encoded as continuous attractors in neural networks. The dynamical behaviors of continuous attractors are interesting properties of RNNs. This paper studies the continuous attractors of a class of RNNs in which inhibition among neurons is realized through a subtractive mechanism. It shows that if the synaptic connections have a Gaussian shape and the other parameters are appropriately selected, the network can exactly realize continuous attractor dynamics. Conditions are derived that guarantee the validity of the selected parameters. Simulations are provided for illustration.
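A sketch of the Gaussian-shaped connectivity with subtractive inhibition described in the abstract. The ring parameterization and all values are assumptions for illustration.

```python
import numpy as np

def gaussian_ring_weights(N=64, J=1.0, sigma=0.5, w_inh=0.2):
    """Recurrent weights with a Gaussian excitatory profile over a ring
    of preferred angles plus uniform subtractive inhibition -- the kind
    of connectivity for which continuous (line) attractor dynamics can
    arise when parameters are chosen appropriately."""
    theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
    d = np.abs(theta[:, None] - theta[None, :])
    d = np.minimum(d, 2.0 * np.pi - d)            # shortest ring distance
    return J * np.exp(-d ** 2 / (2.0 * sigma ** 2)) - w_inh

W = gaussian_ring_weights()
```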

14.
Cortical sensory neurons are known to be highly variable, in the sense that responses evoked by identical stimuli often change dramatically from trial to trial. The origin of this variability is uncertain, but it is usually interpreted as detrimental noise that reduces the computational accuracy of neural circuits. Here we investigate the possibility that such response variability might in fact be beneficial, because it may partially compensate for a decrease in accuracy due to stochastic changes in the synaptic strengths of a network. We study the interplay between two kinds of noise, response (or neuronal) noise and synaptic noise, by analyzing their joint influence on the accuracy of neural networks trained to perform various tasks. We find an interesting, generic interaction: when fluctuations in the synaptic connections are proportional to their strengths (multiplicative noise), a certain amount of response noise in the input neurons can significantly improve network performance, compared to the same network without response noise. Performance is enhanced because response noise and multiplicative synaptic noise are in some ways equivalent. So if the algorithm used to find the optimal synaptic weights can take into account the variability of the model neurons, it can also take into account the variability of the synapses. Thus, the connection patterns generated with response noise are typically more resistant to synaptic degradation than those obtained without response noise. As a consequence of this interplay, if multiplicative synaptic noise is present, it is better to have response noise in the network than not to have it. These results are demonstrated analytically for the most basic network consisting of two input neurons and one output neuron performing a simple classification task, but computer simulations show that the phenomenon persists in a wide range of architectures, including recurrent (attractor) networks and sensorimotor networks that perform coordinate transformations. The results suggest that response variability could play an important dynamic role in networks that continuously learn.
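A few lines illustrating the claimed equivalence for a simple linear readout; the noise level s and the network size are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=5)      # synaptic weights
x = rng.normal(size=5)      # input-neuron responses
s = 0.1                     # relative noise level

# Multiplicative synaptic noise: fluctuations proportional to each weight.
y_syn = (w * (1.0 + s * rng.normal(size=5))) @ x
# Response noise of matching relative size on the input neurons.
y_resp = w @ (x * (1.0 + s * rng.normal(size=5)))

# Both outputs equal w @ x plus a perturbation built from the same
# products w_i * x_i, which is the sense in which the two noise sources
# act equivalently on a linear readout.
print(w @ x, y_syn, y_resp)
```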

15.
Much evidence indicates that the perirhinal cortex is involved in the familiarity discrimination aspect of recognition memory. It has previously been shown, under selective conditions, that neural networks performing familiarity discrimination can achieve very high storage capacity, dealing with many times more stimuli than associative memory networks can in associative recall. The recall capacity of associative memories has been shown to depend strongly on the sparseness of coding. However, previous work on the networks of Bogacz et al., Norman and O'Reilly, and Sohal and Hasselmo, which model familiarity discrimination in the perirhinal cortex, has not investigated the effect of sparseness of encoding on capacity. This paper explores how sparseness of coding influences the capacity of each of these published models and establishes that it influences the capacity of the different models in different ways. The capacity of the Bogacz et al. model can be made independent of the sparseness of coding. Capacity increases as coding becomes sparser for a simplified version of the neocortical part of the Norman and O'Reilly model, whereas it decreases as coding becomes sparser for a simplified version of the Sohal and Hasselmo model. Thus, in general, and in contrast to associative memory networks, sparse encoding results in little or no advantage for the capacity of familiarity discrimination networks. Hence it may be less important for coding to be sparse in the perirhinal cortex than in the hippocampus. Additionally, it is established that the capacities of the networks depend strongly on the precise form of the learning rules (synaptic plasticity) used. This finding indicates that the precise characteristics of synaptic plastic changes in the real brain are likely to have a major influence on storage capacity.
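A minimal sketch of the energy-based familiarity signal used in models of this family (e.g. Bogacz et al.); the pattern statistics and the Hebbian rule below are illustrative, and the published models differ in their exact learning rules, which is the abstract's point.

```python
import numpy as np

def familiarity_energy(W, x):
    """Energy-based familiarity signal: under a Hebbian W, stored
    (familiar) patterns yield markedly lower energy than novel ones, and
    a threshold on this signal discriminates familiar from novel."""
    return -0.5 * x @ W @ x

rng = np.random.default_rng(0)
pats = rng.choice([-1.0, 1.0], size=(20, 200))   # stored patterns
W = pats.T @ pats / 200.0                        # Hebbian learning
print(familiarity_energy(W, pats[0]),                       # familiar: low
      familiarity_energy(W, rng.choice([-1.0, 1.0], 200)))  # novel: higher
```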

16.
Studies of biological synaptic plasticity have shown that synaptic efficacy can change within a very short time window compared to the time scale of typical neural events. This time scale is small enough to possibly affect pattern recall in neural networks. We study the properties of a neural network that uses a cyclic Hebb rule, and then add short-term potentiation of synapses in the recall phase. We show that this approach preserves the network's ability to recognize the stored patterns while not responding to other patterns, and that it dramatically increases the capacity of the network at the cost of a longer recall process. We discuss how the network possesses two types of recall: fast recall, which does not need synaptic plasticity to recognize a pattern, and slower recall, which utilizes synaptic plasticity. This mirrors everyday experience: some memories can be recalled promptly, whereas recollection of others requires much more time.
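A sketch of the cyclic Hebb rule named in the abstract. The short-term potentiation added during recall is a separate mechanism whose exact form the abstract does not give, so it is only indicated in a comment.

```python
import numpy as np

def cyclic_hebb_weights(patterns):
    """Cyclic Hebb rule: each pattern is associated with the next one,
    and the last wraps around to the first, so the stored patterns form
    a repeating cycle that recall can traverse."""
    nxt = np.roll(patterns, -1, axis=0)        # successor of each pattern
    return nxt.T @ patterns / patterns.shape[1]

rng = np.random.default_rng(0)
W = cyclic_hebb_weights(rng.choice([-1.0, 1.0], size=(8, 100)))
# The short-term potentiation used in the recall phase (a transient
# boost of recently active synapses) would be layered on top of this
# fixed matrix; its form is not specified here.
```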

17.
This paper describes an adaptive neural control system for governing the movements of a robotic wheelchair. It presents a new recurrent neural network model based on an RBF architecture that combines local recurrence with synaptic connections implemented as FIR filters. The model is used in two different control architectures to command the movements of a robotic wheelchair. The training equations and the stability conditions of the control system are derived. Practical tests show that the results achieved with the proposed method are better than those obtained with PID controllers or other recurrent neural network models.
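The FIR-filter synapses mentioned in the abstract can be sketched as tapped delay lines over the presynaptic activation history; the function below is an assumed minimal form, not the paper's full architecture.

```python
import numpy as np

def fir_synapse(act_history, taps):
    """FIR-filter synaptic connection: the synapse's output is a tapped
    delay line over the presynaptic activation history (most recent
    value first), giving the network short-term temporal memory."""
    return float(np.dot(act_history[:len(taps)], taps))

# A decaying activation history weighted by three taps:
print(fir_synapse(np.array([1.0, 0.5, 0.2, 0.0]), np.array([0.6, 0.3, 0.1])))
```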

18.
Animal learning is associated with changes in the efficacy of connections between neurons. The rules that govern this plasticity can be tested in neural networks. Rules that train neural networks to map stimuli onto outputs are given by supervised learning and reinforcement learning theories. Supervised learning is efficient but biologically implausible. In contrast, reinforcement learning is biologically plausible but comparatively inefficient. It lacks a mechanism that can identify units at early processing levels that play a decisive role in the stimulus-response mapping. Here we show that this so-called credit assignment problem can be solved by a new role for attention in learning. There are two factors in our new learning scheme that determine synaptic plasticity: (1) a reinforcement signal that is homogeneous across the network and depends on the amount of reward obtained after a trial, and (2) an attentional feedback signal from the output layer that limits plasticity to those units at earlier processing levels that are crucial for the stimulus-response mapping. The new scheme is called attention-gated reinforcement learning (AGREL). We show that it is as efficient as supervised learning in classification tasks. AGREL is biologically realistic and integrates the role of feedback connections, attention effects, synaptic plasticity, and reinforcement learning signals into a coherent framework.
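A hedged sketch of an AGREL-style weight update combining the two factors the abstract lists; the exact functional form in the paper may differ from this illustration.

```python
import numpy as np

def agrel_hidden_update(W_in, x, h, fb, delta, beta=0.1):
    """Sketch of an AGREL-style update for input-to-hidden weights.

    delta: global reward-prediction error after the trial (factor 1);
    fb: attentional feedback each hidden unit receives from the chosen
    output (factor 2), gating plasticity to units that drove the
    response and thereby addressing credit assignment. The h*(1-h)
    term assumes sigmoidal hidden units (an assumption of this sketch).
    """
    gate = delta * fb * h * (1.0 - h)        # per-hidden-unit learning gate
    return W_in + beta * np.outer(gate, x)   # Hebbian term with input x
```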

19.
Types of, mechanisms for, and the stability of synchrony are discussed in the context of two-compartment CA3 pyramidal cell and interneuron model networks. We show how the strength and timing of inhibitory and excitatory synaptic inputs work together to produce either perfectly synchronized or nearly synchronized oscillations across different burst or spiking modes of firing. The analysis shows how excitatory inputs tend to desynchronize cells, and how common, slowly decaying inhibition can be used to synchronize them. We also introduce the concept of 'equivalent networks', in which networks with different architectures and synaptic connections display identical firing patterns.

20.
Functional abilities of a stochastic logic neural network
The authors study the information processing ability of stochastic logic neural networks, one of the pulse-coded artificial neural network families. These networks achieve pseudoanalog performance with local learning rules using digital circuits, and therefore suit silicon technology. The synaptic weights and the outputs of neurons in stochastic logic are represented by stochastic pulse sequences. The limited range of the synaptic weights reduces the coding noise and suppresses the degradation of memory storage capacity. To study the effect of coding noise on an optimization problem, the authors simulate a probabilistic Hopfield model (Gaussian machine), which has a continuous neuron output function and probabilistic behavior. A proper choice of the coding-noise amplitude and its scheduling improves the network's solutions to the traveling salesman problem (TSP). These results suggest that stochastic logic may be useful for implementing probabilistic as well as deterministic dynamics.
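A few lines showing the core stochastic-logic trick the abstract relies on: values are encoded as random pulse streams, a single AND gate multiplies them, and the finite stream length is the source of the coding noise discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)

def to_pulse_stream(p, n=100_000):
    """Encode a value in [0, 1] as a stochastic pulse (bit) sequence
    whose per-slot firing probability equals the value."""
    return rng.random(n) < p

a, b = to_pulse_stream(0.3), to_pulse_stream(0.6)
# An AND gate multiplies the encoded values, which is how synaptic
# weighting can be implemented with a single digital gate; shorter
# streams give noisier estimates (the coding noise).
print((a & b).mean())   # approximately 0.18 = 0.3 * 0.6
```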
