Similar Articles
20 similar articles found.
1.
Neural systems as nonlinear filters (total citations: 1; self-citations: 0; cited by others: 1)
Maass W, Sontag ED. Neural Computation, 2000, 12(8): 1743-1772
Experimental data show that biological synapses behave quite differently from the symbolic synapses in all common artificial neural network models. Biological synapses are dynamic; their "weight" changes on a short timescale by several hundred percent, depending on the past input to the synapse. In this article we address the question of how this inherent synaptic dynamics (which should not be confused with long-term learning) affects the computational power of a neural network. In particular, we analyze computations on temporal and spatiotemporal patterns, and we give a complete mathematical characterization of all filters that can be approximated by feedforward neural networks with dynamic synapses. It turns out that even with just a single hidden layer, such networks can approximate a very rich class of nonlinear filters: all filters that can be characterized by Volterra series. This result is robust with regard to various changes in the model for synaptic dynamics. Our characterization result provides, for all nonlinear filters that are approximable by Volterra series, a new complexity hierarchy related to the cost of implementing such filters in neural systems.
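For orientation, the filter class named here can be written as a truncated, discrete-time Volterra series; the notation below is a generic illustration and is not taken from the paper.

```latex
% Generic truncated discrete-time Volterra series (illustrative notation):
% the output y(t) is a polynomial functional of the past input u.
y(t) = h_0
     + \sum_{\tau_1 \ge 0} h_1(\tau_1)\, u(t-\tau_1)
     + \sum_{\tau_1,\tau_2 \ge 0} h_2(\tau_1,\tau_2)\, u(t-\tau_1)\, u(t-\tau_2)
     + \cdots
```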

2.
In most neural network models, synapses are treated as static weights that change only with the slow time scales of learning. It is well known, however, that synapses are highly dynamic and show use-dependent plasticity over a wide range of time scales. Moreover, synaptic transmission is an inherently stochastic process: a spike arriving at a presynaptic terminal triggers the release of a vesicle of neurotransmitter from a release site with a probability that can be much less than one. We consider a simple model for dynamic stochastic synapses that can easily be integrated into common models for networks of integrate-and-fire neurons (spiking neurons). The parameters of this model have direct interpretations in terms of synaptic physiology. We investigate the consequences of the model for computing with individual spikes and demonstrate through rigorous theoretical results that the computational power of the network is increased through the use of dynamic synapses.
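As a rough illustration of the kind of model the abstract describes, the sketch below implements a toy dynamic stochastic synapse with probabilistic vesicle release, per-spike facilitation, and resource depletion. The specific update equations, parameter names, and values are assumptions made for this sketch, not the paper's exact model.

```python
import math
import random

class DynamicStochasticSynapse:
    """Toy dynamic stochastic synapse (illustrative equations and parameters,
    not the paper's exact model): release probability facilitates with each
    presynaptic spike, and the resource pool depresses after each release."""

    def __init__(self, p0=0.3, df=0.1, tau_f=50.0, tau_d=200.0, use=0.5):
        self.p0 = p0          # baseline release probability
        self.df = df          # facilitation increment per presynaptic spike
        self.tau_f = tau_f    # facilitation decay time constant (ms)
        self.tau_d = tau_d    # resource recovery time constant (ms)
        self.use = use        # fraction of resources consumed by a release
        self.f = 0.0          # current facilitation level
        self.r = 1.0          # available resources in [0, 1]
        self.t_last = None    # time of the previous presynaptic spike

    def on_presynaptic_spike(self, t):
        """Return True if the spike arriving at time t releases a vesicle."""
        if self.t_last is not None:
            dt = t - self.t_last
            self.f *= math.exp(-dt / self.tau_f)                         # facilitation decays
            self.r = 1.0 - (1.0 - self.r) * math.exp(-dt / self.tau_d)   # resources recover
        self.t_last = t
        self.f += self.df
        p_release = min(1.0, (self.p0 + self.f) * self.r)
        if random.random() < p_release:
            self.r *= (1.0 - self.use)    # a successful release depletes the pool
            return True
        return False

# Example: a regular 20 ms presynaptic spike train
syn = DynamicStochasticSynapse()
print([syn.on_presynaptic_spike(t) for t in range(0, 200, 20)])
```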

3.
A memristor is a resistor with dynamic characteristics: its resistance changes in response to an applied external field and is retained after the field is removed, a behavior analogous to the connection strength of a biological synapse, so it can be used to store synaptic weights. On this basis, in order to recognize and learn the IRIS data set with a Temporal rule, we built a SPICE simulation circuit of a neural network whose synapses are memristor bridge circuits. A single-pulse encoding scheme is used, in which the timing of each pulse carries the data. The network circuit consists of 48 pulse input ports, 144 synapses, and 3 output ports. Synaptic weights are updated according to the Temporal rule; in simulation, the network classifies the IRIS data set with an accuracy of up to 93.33%, demonstrating that this neural-system design is usable in brain-inspired spiking neural networks.
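To make the single-pulse encoding concrete, here is a minimal sketch of latency coding in the spirit described: each feature value is mapped to one spike time within a fixed window, with larger values firing earlier. The window length, feature ranges, and the example sample are assumptions, not values from the paper.

```python
def latency_encode(value, v_min, v_max, t_max=100.0):
    """Illustrative single-pulse temporal encoding (assumed scheme, not the
    paper's exact circuit): map a feature value to one spike time so that
    larger values fire earlier within a window of t_max milliseconds."""
    x = (value - v_min) / (v_max - v_min)      # normalize to [0, 1]
    x = min(max(x, 0.0), 1.0)
    return (1.0 - x) * t_max                   # earlier spike = larger value

# Example: encode one hypothetical IRIS sample of four features
sample = [5.1, 3.5, 1.4, 0.2]
ranges = [(4.3, 7.9), (2.0, 4.4), (1.0, 6.9), (0.1, 2.5)]  # assumed feature ranges
spike_times = [latency_encode(v, lo, hi) for v, (lo, hi) in zip(sample, ranges)]
print(spike_times)
```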

4.
In this work, we study, analytically and employing Monte Carlo simulations, the influence of the competition between several activity-dependent synaptic processes, such as short-term synaptic facilitation and depression, on the maximum memory storage capacity in a neural network. In contrast to the case of synaptic depression, which drastically reduces the capacity of the network to store and retrieve "static" activity patterns, synaptic facilitation enhances the storage capacity in different contexts. In particular, we found optimal values of the relevant synaptic parameters (such as the neurotransmitter release probability or the characteristic facilitation time constant) for which the storage capacity can be maximal and similar to the one obtained with static synapses, that is, without activity-dependent processes. We conclude that depressing synapses with a certain level of facilitation allow recovering the good retrieval properties of networks with static synapses while maintaining the nonlinear characteristics of dynamic synapses, convenient for information processing and coding.
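The competition between facilitation and depression that the abstract analyzes is often captured by a phenomenological short-term plasticity model of the following kind; the sketch below uses generic, assumed parameter values and is not the authors' exact formulation.

```python
import math

def short_term_plasticity(spike_times, U=0.2, tau_rec=200.0, tau_fac=600.0):
    """Sketch of a standard facilitation/depression model of the kind the
    abstract discusses; parameters here are illustrative. Returns the
    effective synaptic efficacy (u * x) at each presynaptic spike."""
    u, x = U, 1.0            # utilization (facilitation) and available resources
    last_t = None
    efficacies = []
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            u = U + (u - U) * math.exp(-dt / tau_fac)        # facilitation decays toward U
            x = 1.0 + (x - 1.0) * math.exp(-dt / tau_rec)    # resources recover toward 1
        u = u + U * (1.0 - u)    # each spike increases utilization (facilitation)
        efficacies.append(u * x) # transmitted fraction of resources
        x = x * (1.0 - u)        # and depletes resources (depression)
        last_t = t
    return efficacies

# Example: efficacy along a regular 20 ms spike train
print(short_term_plasticity([0, 20, 40, 60, 80, 100]))
```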

5.
Cortical sensory neurons are known to be highly variable, in the sense that responses evoked by identical stimuli often change dramatically from trial to trial. The origin of this variability is uncertain, but it is usually interpreted as detrimental noise that reduces the computational accuracy of neural circuits. Here we investigate the possibility that such response variability might in fact be beneficial, because it may partially compensate for a decrease in accuracy due to stochastic changes in the synaptic strengths of a network. We study the interplay between two kinds of noise, response (or neuronal) noise and synaptic noise, by analyzing their joint influence on the accuracy of neural networks trained to perform various tasks. We find an interesting, generic interaction: when fluctuations in the synaptic connections are proportional to their strengths (multiplicative noise), a certain amount of response noise in the input neurons can significantly improve network performance, compared to the same network without response noise. Performance is enhanced because response noise and multiplicative synaptic noise are in some ways equivalent. So if the algorithm used to find the optimal synaptic weights can take into account the variability of the model neurons, it can also take into account the variability of the synapses. Thus, the connection patterns generated with response noise are typically more resistant to synaptic degradation than those obtained without response noise. As a consequence of this interplay, if multiplicative synaptic noise is present, it is better to have response noise in the network than not to have it. These results are demonstrated analytically for the most basic network consisting of two input neurons and one output neuron performing a simple classification task, but computer simulations show that the phenomenon persists in a wide range of architectures, including recurrent (attractor) networks and sensorimotor networks that perform coordinate transformations. The results suggest that response variability could play an important dynamic role in networks that continuously learn.
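The kind of comparison described (training with and without response noise, then testing under multiplicative synaptic noise) could be set up roughly as in the toy script below. The data set, learning rule, and noise levels are all assumptions; this only sketches the experimental protocol, not the paper's analysis or results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy version of a two-input, one-output classification task;
# data, noise levels, and training procedure are all assumptions.
X = rng.normal(size=(500, 2))
y = (X @ np.array([1.5, -1.0]) > 0).astype(float)

def train(X, y, response_noise=0.0, epochs=200, lr=0.1):
    """Train a single linear output unit, optionally adding 'response noise'
    to the inputs on every presentation."""
    w = np.zeros(2)
    for _ in range(epochs):
        Xn = X + response_noise * rng.normal(size=X.shape)
        pred = (Xn @ w > 0).astype(float)
        w += lr * (y - pred) @ Xn / len(X)
    return w

def accuracy_under_synaptic_noise(w, noise=0.5, trials=200):
    """Evaluate the learned weights under multiplicative synaptic noise."""
    accs = []
    for _ in range(trials):
        w_noisy = w * (1.0 + noise * rng.normal(size=w.shape))
        accs.append(np.mean((X @ w_noisy > 0) == y))
    return float(np.mean(accs))

for rn in (0.0, 0.5):
    w = train(X, y, response_noise=rn)
    print("response noise", rn, "-> accuracy under synaptic noise",
          accuracy_under_synaptic_noise(w))
```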

6.
Attractor neural networks (ANNs) based on the Ising model are naturally fully connected and are homogeneous in structure. These features permit a deep understanding of the underlying mechanism, but limit the applicability of these models to the brain. A more biologically realistic model can be derived from an equally simple physical model by utilizing recurrent self-trapping inputs to supplement very sparse intranetwork interactions. This paper reports the analysis of a one-dimensional (1-D) ANN coupled to a second system that computes overlaps with a single stored memory. Results show that: 1) the 1-D self-trapping model is equivalent to an isolated ANN with both full connectivity of one strength and nearest neighbor synapses of an independent strength; 2) the dynamics of ANN and self-trapping updates are independent; 3) there is a critical synaptic noise level below which memory retrieval occurs; 4) the 1-D self-trapping model converges to a fully connected Hopfield model for zero strength nearest neighbor synapses, and has a greater magnitude memory overlap for nonzero strength nearest neighbor synapses; and 5) the mechanism of self-trapping is an iterative map on the mean overlap as a function of the reentrant input.

7.
This paper discusses issues related to the approximation capability of neural networks in modeling and control. We show that neural networks are universal models and universal controllers for a class of nonlinear dynamic systems. That is, for a given dynamic system, there exists a neural network which can model the system to any degree of accuracy over time. Moreover, if the system to be controlled is stabilized by a continuous controller, then there exists a neural network which can approximate the controller such that the system controlled by the neural network is also stabilized with a given bound of output error.

8.
We postulate that a simple, three-state synaptic switch governs changes in synaptic strength at individual synapses. Under this switch rule, we show that a variety of experimental results on timing-dependent plasticity can emerge from temporal and spatial averaging over multiple synapses and multiple spike pairings. In particular, we show that a critical window for the interaction of pre- and postsynaptic spikes emerges as an ensemble property of the collective system, with individual synapses exhibiting only a minimal form of spike coincidence detection. In addition, we show that a Bienenstock-Cooper-Munro-like, rate-based plasticity rule emerges directly from such a model. This demonstrates that two apparently separate forms of neuronal plasticity can emerge from a much simpler rule governing the plasticity of individual synapses.
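A minimal way to picture a three-state synaptic switch with only coarse spike-coincidence detection is the toy state machine below; the specific states, transition rule, coincidence window, and strengths are assumptions made for illustration and are not the rule proposed in the paper.

```python
# Illustrative three-state synaptic switch (states and transition rule are
# assumptions for this sketch, not the paper's specification).
STATES = ("depressed", "neutral", "potentiated")
STRENGTH = {"depressed": 0.2, "neutral": 0.5, "potentiated": 1.0}
WINDOW = 20.0   # ms: coarse coincidence-detection window

def update_state(state, t_pre, t_post):
    """Move the switch one state up if the presynaptic spike precedes the
    postsynaptic spike within the window, and one state down if it follows."""
    i = STATES.index(state)
    dt = t_post - t_pre
    if 0.0 < dt <= WINDOW:
        i = min(i + 1, len(STATES) - 1)
    elif -WINDOW <= dt < 0.0:
        i = max(i - 1, 0)
    return STATES[i]

state = "neutral"
for t_pre, t_post in [(0.0, 5.0), (30.0, 36.0), (70.0, 65.0)]:
    state = update_state(state, t_pre, t_post)
    print(state, STRENGTH[state])
```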

9.
Synapses are crucial elements for computation and information transfer in both real and artificial neural systems. Recent experimental findings and theoretical models of pulse-based neural networks suggest that synaptic dynamics can play a crucial role for learning neural codes and encoding spatiotemporal spike patterns. Within the context of hardware implementations of pulse-based neural networks, several analog VLSI circuits modeling synaptic functionality have been proposed. We present an overview of previously proposed circuits and describe a novel analog VLSI synaptic circuit suitable for integration in large VLSI spike-based neural systems. The circuit proposed is based on a computational model that fits the real postsynaptic currents with exponentials. We present experimental data showing how the circuit exhibits realistic dynamics and show how it can be connected to additional modules for implementing a wide range of synaptic properties.
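The "fit postsynaptic currents with exponentials" idea corresponds to kernels like the difference-of-exponentials sketch below; the time constants and normalization are generic assumptions, not the circuit's measured values.

```python
import math

def psc(t, t_spike, tau_rise=1.0, tau_decay=10.0, weight=1.0):
    """Difference-of-exponentials postsynaptic current kernel, the kind of
    exponential fit the abstract refers to; parameters here are assumed."""
    if t < t_spike:
        return 0.0
    dt = t - t_spike
    norm = 1.0 / (tau_decay - tau_rise)
    return weight * norm * (math.exp(-dt / tau_decay) - math.exp(-dt / tau_rise))

# Superpose the currents of several input spikes and evaluate at t = 40 ms
spikes = [5.0, 12.0, 30.0]
print(sum(psc(40.0, s) for s in spikes))
```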

10.
Human and animal studies show that mammalian brains undergo massive synaptic pruning during childhood, losing about half of the synapses by puberty. We have previously shown that maintaining the network performance while synapses are deleted requires that synapses be properly modified and pruned, with the weaker synapses removed. We now show that neuronal regulation, a mechanism recently observed to maintain the average neuronal input field of a postsynaptic neuron, results in a weight-dependent synaptic modification. Under the correct range of the degradation dimension and synaptic upper bound, neuronal regulation removes the weaker synapses and judiciously modifies the remaining synapses. By deriving optimal synaptic modification functions in an excitatory-inhibitory network, we prove that neuronal regulation implements near-optimal synaptic modification and maintains the performance of a network undergoing massive synaptic pruning. These findings support the possibility that neuronal regulation complements the action of Hebbian synaptic changes in the self-organization of the developing brain.

11.
A simulation procedure is described for making feasible large-scale simulations of recurrent neural networks of spiking neurons and plastic synapses. The procedure is applicable if the dynamic variables of both neurons and synapses evolve deterministically between any two successive spikes. Spikes introduce jumps in these variables, and since spike trains are typically noisy, spikes introduce stochasticity into both dynamics. Since all events in the simulation are guided by the arrival of spikes, at neurons or synapses, we name this procedure event-driven. The procedure is described in detail, and its logic and performance are compared with conventional (synchronous) simulations. The main impact of the new approach is a drastic reduction of the computational load incurred upon introduction of dynamic synaptic efficacies, which vary organically as a function of the activities of the pre- and postsynaptic neurons. In fact, the computational load per neuron in the presence of the synaptic dynamics grows linearly with the number of neurons and is only about 6% more than the load with fixed synapses. Even the latter is handled quite efficiently by the algorithm. We illustrate the operation of the algorithm in a specific case with integrate-and-fire neurons and specific spike-driven synaptic dynamics. Both dynamical elements have been found to be naturally implementable in VLSI. This network is simulated to show the effects on the synaptic structure of the presentation of stimuli, as well as the stability of the generated matrix to the neural activity it induces.
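The core of an event-driven scheme of this kind can be sketched with a priority queue of spike events: state evolves in closed form between events and jumps only when a spike arrives. The toy neuron model, parameters, and network below are assumptions used purely to show the control flow.

```python
import heapq

def run_event_driven(input_spikes, weights, threshold=1.5, delay=1.0,
                     decay=0.9, t_end=100.0):
    """Minimal sketch of the event-driven idea described in the abstract:
    state is updated only when a spike arrives, so the cost per neuron scales
    with the number of spikes rather than with a fixed time step. The toy
    neuron model and parameters here are assumptions, not the paper's."""
    n = len(weights)
    potential = [0.0] * n
    last_update = [0.0] * n
    events = [(t, j, 1.0) for (t, j) in input_spikes]  # (time, target, weight)
    heapq.heapify(events)
    fired = []
    while events:
        t, j, w = heapq.heappop(events)
        if t > t_end:
            break
        # evolve the neuron deterministically over the silent interval, then jump
        potential[j] *= decay ** (t - last_update[j])
        last_update[j] = t
        potential[j] += w
        if potential[j] >= threshold:
            potential[j] = 0.0
            fired.append((t, j))
            for k, w_jk in enumerate(weights[j]):      # schedule downstream events
                if w_jk != 0.0:
                    heapq.heappush(events, (t + delay, k, w_jk))
    return fired

# Toy chain 0 -> 1 -> 2; two input spikes drive neuron 0 over threshold.
weights = [[0.0, 2.0, 0.0], [0.0, 0.0, 2.0], [0.0, 0.0, 0.0]]
print(run_event_driven([(0.0, 0), (0.5, 0)], weights))
```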

12.
It has been shown in studies of biological synaptic plasticity that synaptic efficacy can change in a very short time window, compared to the time scale associated with typical neural events. This time scale is small enough to possibly have an effect on pattern recall processes in neural networks. We study properties of a neural network which uses a cyclic Hebb rule. Then we add the short-term potentiation of synapses in the recall phase. We show that this approach preserves the ability of the network to recognize the patterns stored by the network and that the network does not respond to other patterns at the same time. We show that this approach dramatically increases the capacity of the network at the cost of a longer pattern recall process. We discuss that the network possesses two types of recall. The fast recall does not need synaptic plasticity to recognize a pattern, while the slower recall utilizes synaptic plasticity. This is something that we all experience in our daily lives: some memories can be recalled promptly, whereas recollection of other memories requires much more time.

13.
14.
Neural associative memories are perceptron-like single-layer networks with fast synaptic learning typically storing discrete associations between pairs of neural activity patterns. Previous work optimized the memory capacity for various models of synaptic learning: linear Hopfield-type rules, the Willshaw model employing binary synapses, or the BCPNN rule of Lansner and Ekeberg, for example. Here I show that all of these previous models are limit cases of a general optimal model where synaptic learning is determined by probabilistic Bayesian considerations. Asymptotically, for large networks and very sparse neuron activity, the Bayesian model becomes identical to an inhibitory implementation of the Willshaw and BCPNN-type models. For less sparse patterns, the Bayesian model becomes identical to Hopfield-type networks employing the covariance rule. For intermediate sparseness or finite networks, the optimal Bayesian learning rule differs from the previous models and can significantly improve memory performance. I also provide a unified analytical framework to determine memory capacity at a given output noise level that links approaches based on mutual information, Hamming distance, and signal-to-noise ratio.
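One of the limit cases mentioned, the Willshaw model with binary (clipped Hebbian) synapses, can be sketched as below; the pattern sizes, sparseness, and retrieval threshold are illustrative choices, not the paper's analysis.

```python
import numpy as np

def willshaw_store(patterns_in, patterns_out):
    """Willshaw-style binary (clipped Hebbian) storage: a synapse is set to 1
    if its pre- and postsynaptic units are ever co-active. Sizes and the
    simple retrieval threshold below are illustrative assumptions."""
    W = np.zeros((patterns_out.shape[1], patterns_in.shape[1]), dtype=np.uint8)
    for x, y in zip(patterns_in, patterns_out):
        W |= np.outer(y, x).astype(np.uint8)
    return W

def willshaw_retrieve(W, x):
    """Retrieve by thresholding dendritic sums at the number of active inputs."""
    s = W @ x
    return (s >= x.sum()).astype(np.uint8)

rng = np.random.default_rng(1)
n, m, k, pairs = 64, 64, 4, 20
X = np.zeros((pairs, n), dtype=np.uint8)
Y = np.zeros((pairs, m), dtype=np.uint8)
for i in range(pairs):                      # sparse random address/content patterns
    X[i, rng.choice(n, k, replace=False)] = 1
    Y[i, rng.choice(m, k, replace=False)] = 1
W = willshaw_store(X, Y)
print(np.array_equal(willshaw_retrieve(W, X[0]), Y[0]))
```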

15.
Neural networks, artificial or biophysical, consist of large numbers of neurons that communicate by sending impulses via synapses. One interesting observation is that due to the presence of massive parallelism and redundancy, neural networks are often inherently fault tolerant. If a small number of neurons or synapses are disconnected, the network still performs most, if not all, of its functions correctly. It is exactly this property that we are interested in when studying neural networks. There is, however, a problem if one tries to analyze neural networks within a formal framework such as the McCulloch and Pitts (1943) calculus: neural nets are too large to analyze without proper tools. Alternatively, one could simulate neural networks on a computer system. We propose to model neural networks by means of networks of finite automata in order to simulate large neural assemblies of unreliable components.

16.
Image preprocessing with dynamic synapses (total citations: 1; self-citations: 0; cited by others: 1)
In image processing, different algorithms are typically developed for specific classes of pictures. Here we demonstrate the filtering capability of a spiking neural network based on dynamic synapses. For this purpose we chose an x-ray image of the human coronary tree and another noisy image. In other words, the task at hand is to show how accurately such a network can store various aspects of the stimulus (object versus background) in the variables that describe the dynamics of the synaptic response. The behavior of these synapses shapes the effective connectivity of the network on a short time scale. Such a network exhibits low activity and balanced behavior. Dynamic synapses adjust their behavior in response to rapidly changing stimuli and retain stimulus information in variables such as potential and time.

17.
We introduce a model of generalized Hebbian learning and retrieval in oscillatory neural networks modeling cortical areas such as hippocampus and olfactory cortex. Recent experiments have shown that synaptic plasticity depends on spike timing, especially on synapses from excitatory pyramidal cells, in hippocampus, and in sensory and cerebellar cortex. Here we study how such plasticity can be used to form memories and input representations when the neural dynamics are oscillatory, as is common in the brain (particularly in the hippocampus and olfactory cortex). Learning is assumed to occur in a phase of neural plasticity, in which the network is clamped to external teaching signals. By suitable manipulation of the nonlinearity of the neurons or the oscillation frequencies during learning, the model can be made, in a retrieval phase, either to categorize new inputs or to map them, in a continuous fashion, onto the space spanned by the imprinted patterns. We identify the first of these possibilities with the function of olfactory cortex and the second with the observed response characteristics of place cells in hippocampus. We investigate both kinds of networks analytically and by computer simulations, and we link the models with experimental findings, exploring, in particular, how the spike timing dependence of the synaptic plasticity constrains the computational function of the network and vice versa.

18.
The brain is not a huge fixed neural network, but a dynamic, changing neural network that continuously adapts to meet the demands of communication and computational needs. In classical neural network approaches, particularly associative memory models, synapses are only adjusted during the training phase. After this phase, synapses are no longer adjusted. In this paper we describe a new dynamical model where synapses of the associative memory can be adjusted even after the training phase as a response to an input stimulus. We provide some propositions that guarantee perfect and robust recall of the fundamental set of associations. In addition, we describe the behavior of the proposed associative model under noisy versions of the patterns. Finally, we present some experiments aimed at showing the accuracy of the proposed model.

19.
We analyzed different approaches to developing ensembles of neural networks with respect to their forecasting accuracy. We describe a two-level model of ensembles of neural networks for forecasting telemetry time series from spacecraft subsystems. The possibility of additional training of these ensembles of neural networks is examined. Our results show that the use of ensembles of neural networks with dynamic weighting allows us to reduce the forecasting error.
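A simple version of dynamic weighting for an ensemble forecast is to weight each member inversely to its recent error, as in the sketch below; this particular weighting formula is an assumption and not necessarily the authors' scheme.

```python
import numpy as np

def dynamic_weighted_forecast(member_forecasts, recent_errors, eps=1e-8):
    """Sketch of dynamic weighting: combine the forecasts of several networks
    with weights inversely proportional to each member's recent error.
    The formula is an illustrative assumption."""
    inv = 1.0 / (np.asarray(recent_errors) + eps)
    weights = inv / inv.sum()
    return float(np.dot(weights, member_forecasts)), weights

# Three hypothetical member forecasts and their recent mean absolute errors
forecasts = np.array([10.2, 9.8, 11.0])
errors = np.array([0.5, 0.2, 1.0])
combined, w = dynamic_weighted_forecast(forecasts, errors)
print(combined, w)
```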

20.
We study networks of spiking neurons that use the timing of pulses to encode information. Nonlinear interactions model the spatial groupings of synapses on the neural dendrites and describe the computations performed at local branches. Within a theoretical framework of learning we analyze the question of how many training examples these networks must receive to be able to generalize well. Bounds for this sample complexity of learning can be obtained in terms of a combinatorial parameter known as the pseudodimension. This dimension characterizes the computational richness of a neural network and is given in terms of the number of network parameters. Two types of feedforward architectures are considered: constant-depth networks and networks of unconstrained depth. We derive asymptotically tight bounds for each of these network types. Constant-depth networks are shown to have an almost linear pseudodimension, whereas the pseudodimension of general networks is quadratic. Networks of spiking neurons that use temporal coding are becoming increasingly more important in practical tasks such as computer vision, speech recognition, and motor control. The question of how well these networks generalize from a given set of training examples is a central issue for their successful application as adaptive systems. The results show that, although coding and computation in these networks are quite different and in many cases more powerful, their generalization capabilities are at least as good as those of traditional neural network models.

